cache


cache

 [kash]
a memory mechanism used by a computer to accelerate access to information.

cache

[kash]
(in computer technology) a fast storage buffer in the central processing unit or on a hard drive, used to increase the amount and speed of data processing. Also called cache memory.

cache

A storage area on a PC’s hard drive where the browser temporarily stores web pages and/or graphic elements.
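The definitions above share one idea: a small, fast store that answers repeated requests without going back to a slower source. A minimal sketch in Python (the class and names here are illustrative, not from any particular library) of a fixed-capacity LRU cache that counts hits and misses:

```python
from collections import OrderedDict

class SimpleCache:
    """A tiny LRU cache: recently used keys are served from the fast
    buffer; a miss falls back to the slow backing store."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # slow source of truth
        self.data = OrderedDict()      # fast storage buffer
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)      # mark as most recently used
            return self.data[key]
        self.misses += 1                    # cache miss: fetch from slow store
        value = self.backing[key]
        self.data[key] = value
        if len(self.data) > self.capacity:  # evict least recently used entry
            self.data.popitem(last=False)
        return value

store = {i: i * i for i in range(10)}       # stands in for slow storage
cache = SimpleCache(capacity=3, backing_store=store)
for k in [1, 2, 1, 3, 4, 1]:
    cache.get(k)
print(cache.hits, cache.misses)  # → 2 4
```

Repeated accesses to key 1 hit the cache as long as it has not been evicted, which is exactly the "data reuse" that the excerpts below analyze for matrix factorization.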
References in periodicals archive
j]) does not provide a tight upper bound, then processing the supernode requires multiplying large matrices, which will cause many cache misses due to the memory size M.
j]), and then the number of cache misses performed during the factorization of the diagonal block can no longer provide a useful upper bound on data reuse.
The first factor is the same as the one that made the analysis under the infinite-cache assumption complex: the fact that the number of cache misses depends on the interaction between the updating and the updated supernodes.
Therefore, the total number of cache misses during the update, even with an infinite cache, must be in the range between [[lambda].
In this section we show that the left-looking algorithm can sometimes perform asymptotically more cache misses than the multifrontal algorithm.
For any large enough cache size M, there is a matrix on which the left-looking algorithm incurs at least a factor of [square root of (M)]/32 more cache misses than the multifrontal algorithm.
Furthermore, in both algorithms we only counted capacity cache misses that occur in the context of supernode-supernode updates.
We now show that there are also matrices on which the multifrontal algorithm incurs asymptotically more cache misses.
Since the total number of cache misses, both compulsory and capacity, in the left-looking algorithm is only [THETA] (l(n - l)), it achieves a data-reuse level of about [THETA] (l) = [THETA] ([square root of (M)]).
For any large enough cache size M, there is a matrix on which the multifrontal algorithm incurs at least a factor of [OMEGA]([square root of (M)]) more cache misses than the left-looking algorithm.
When L fits in cache (or in main memory), the left-looking algorithm works solely with data structures in the cache (main memory), but the multifrontal algorithm may experience cache misses (virtual-memory page faults).
In practice, the cache is not flushed after every supernode, so the allocation and extend-add schedule does have an influence on the actual number of cache misses.
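The excerpts distinguish compulsory misses (the first touch of a block can never hit) from capacity misses (a block was cached earlier but evicted for lack of space). That distinction can be sketched with a hypothetical fully associative LRU cache simulator that classifies each miss in a reference trace:

```python
from collections import OrderedDict

def classify_misses(trace, cache_size):
    """Simulate a fully associative LRU cache holding `cache_size` blocks,
    classifying each miss as compulsory (first reference) or capacity
    (previously cached, evicted for lack of space)."""
    cache = OrderedDict()
    seen = set()
    hits = compulsory = capacity = 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)    # refresh LRU position
            continue
        if block in seen:
            capacity += 1               # was cached before, then evicted
        else:
            compulsory += 1             # first touch can never hit
            seen.add(block)
        cache[block] = True
        if len(cache) > cache_size:
            cache.popitem(last=False)   # evict least recently used block
    return hits, compulsory, capacity

# A working set of 3 blocks cycled through a 2-block cache:
# every reuse arrives after eviction, so all repeat misses are capacity misses.
print(classify_misses([0, 1, 2, 0, 1, 2], cache_size=2))  # → (0, 3, 3)
```

With a cache of 3 or more blocks the same trace incurs only the 3 compulsory misses, which mirrors the paper's point: once the working set fits in cache, the remaining miss counts are determined by first touches alone.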