l ← m ← n ← 0
S[l] ← URL of current DAPS
S[⌈(|R| + 2)/2⌉] ← URL of current NS
while (n < |R|)
    l ← ((++m) × ((n + 1) % 2 == 1)) + ((⌈(|R| + 2)/2⌉ + m) × ((n + 1) % 2 == 0))
    S[l] ← R[n]
    n ← n + 1
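As a concrete illustration, the following Python sketch implements the same interleaving. The function name build_backup_list and the parameters daps_url, ns_url, and R are illustrative rather than part of CPF, and ++m is read as incrementing only on odd steps, since that reading yields a gap-free interleaving of the two sections.

    import math

    def build_backup_list(daps_url, ns_url, R):
        # S has |R| + 2 slots: the current DAPS and NS plus every candidate in R.
        ns_start = math.ceil((len(R) + 2) / 2)  # first index of the NS section
        S = [None] * (len(R) + 2)
        S[0] = daps_url                         # current DAPS heads the list
        S[ns_start] = ns_url                    # current NS heads its section
        m = 0
        for n in range(len(R)):
            if (n + 1) % 2 == 1:                # odd step: next DAPS backup slot
                m += 1
                l = m
            else:                               # even step: next NS backup slot
                l = ns_start + m
            S[l] = R[n]
        return S

    # Five candidates yield S = ['D', 'r1', 'r3', 'r5', 'N', 'r2', 'r4'].
    print(build_backup_list("D", "N", ["r1", "r2", "r3", "r4", "r5"]))

Odd-numbered candidates therefore fill the DAPS section directly behind the current DAPS, while even-numbered candidates fill the NS section behind the current NS.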
The candidate list R excludes the current DAPS and NS, whose entries are inserted separately at the head of their respective sections.
These thresholds specify the maximum number of backup entries in the DAPS and NS sections of S. The upper portion of S, of size p = ⌈|S|/2⌉, holds the DAPS backups, while the remaining lower portion, of size q = ⌊|S|/2⌋, holds the NS backups, such that p ≥ q.
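For example, with five candidate entries in R, |S| = 7, so the DAPS section receives p = ⌈7/2⌉ = 4 slots (the current DAPS plus three backups) and the NS section receives q = ⌊7/2⌋ = 3 slots (the current NS plus two backups), satisfying p ≥ q; this matches the output of the sketch above.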
Doing so keeps service interruptions to CPF-enabled applications minimal in the event of a DAPS failure, at the expense of frequent monitoring traffic. This growth rate also applies to the DAPS allocations.
Under the current proxy allocation policy in CPF, the first voluntary client is selected as the DAPS and regarded as the NC. The DAPS and NS are executed on separate hosts to avoid overloading the DAPS host with the workload incurred by the NS. If the DAPS is found to be overloaded, the second client takes over the DAPS role, but only if it is more capable than the current DAPS host.
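This takeover rule can be sketched as follows; the Host fields and the scalar capability score are assumptions for illustration, since CPF's actual load and capability metrics are not specified here.

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        capability: float      # assumed composite score (CPU, memory, bandwidth)
        overloaded: bool = False

    def select_daps(current: Host, second_client: Host) -> Host:
        # The second client takes over only when the current DAPS host is
        # overloaded AND the second client is strictly more capable.
        if current.overloaded and second_client.capability > current.capability:
            return second_client
        return current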
If the DAPS in the ISP network fails or becomes overloaded, the parent NS can activate a backup DAPS (if one exists). However, if DAPS services are discovered in the network neighbourhood (the ISP or other subnetworks) within the performance and security boundary, client requests are redirected accordingly.
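One plausible reading of this failover order, preferring a locally activated backup before redirecting to the neighbourhood, is sketched below; backup_daps and neighbour_daps are assumed to be pre-filtered lists, the latter containing only neighbourhood DAPS services already verified against the performance and security boundary.

    def handle_daps_failure(backup_daps, neighbour_daps):
        # Prefer a backup DAPS activated by the parent NS; otherwise redirect
        # client requests to a DAPS discovered in the network neighbourhood
        # (the ISP or other subnetworks); otherwise no DAPS is reachable.
        if backup_daps:
            return ("activate_backup", backup_daps[0])
        if neighbour_daps:
            return ("redirect", neighbour_daps[0])
        return ("no_daps_available", None)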
These mechanisms address DAPS and NS failures, which essentially occur in the client network.