What is cache hit and cache miss?

Updated: 10/3/2023
Wiki User

12y ago

Best Answer

In cache memory, when the CPU refers to memory and finds the requested word in the cache, it is said to be a cache hit.

If the word is not found in the cache and has to be fetched from main memory, it counts as a cache miss.
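The steps above can be sketched in a few lines of Python; the addresses and values are made up for illustration, with a dict standing in for each memory:

```python
# Illustrative sketch: the cache and main memory modeled as dicts.
cache = {0x10: "A"}                   # word at address 0x10 is already cached
main_memory = {0x10: "A", 0x20: "B"}

def read(address):
    if address in cache:              # word found in cache: a cache hit
        return "hit", cache[address]
    value = main_memory[address]      # not in cache: a cache miss,
    cache[address] = value            # so fetch from main memory and cache it
    return "miss", value

print(read(0x10))  # ('hit', 'A')
print(read(0x20))  # ('miss', 'B')
print(read(0x20))  # ('hit', 'B') -- cached by the earlier miss
```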

More answers
Wiki User

7y ago

In memory terms, a cache hit means the memory location you tried to access was in the cache. Hitting the cache is very fast, but due to hardware limitations the cache is very small. When a program needs to fetch memory, it looks in the cache first. Cache management is designed differently depending on the operating system, but in most cases the memory locations that are used most often are kept in the cache. The same idea applies to a disk cache: instead of moving the disk drive head to find a file, it is faster to serve the file from the cache. When the file is needed, the cache is checked first, and if the file is there, it is a cache hit.
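The "locations used a lot stay in the cache" idea can be demonstrated with Python's built-in functools.lru_cache; the file names and the 2-entry limit here are arbitrary illustrations, with the function standing in for a slow disk read:

```python
from functools import lru_cache

@lru_cache(maxsize=2)                 # a tiny cache: only 2 entries fit
def read_file(name):
    return f"contents of {name}"      # stands in for a slow disk access

read_file("a.txt")                    # miss (cold cache)
read_file("b.txt")                    # miss
read_file("a.txt")                    # hit: still in the cache
read_file("c.txt")                    # miss: evicts the least recently used entry
info = read_file.cache_info()
print(info.hits, info.misses)         # 1 3
```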


Wiki User

12y ago

Well, cache is a high-speed memory which is basically used to reduce the speed mismatch between the CPU and the main memory; it acts as a buffer.

Cache hit: whenever the CPU requests any data, it first checks the cache to see whether the data is present. If it is present, the data is taken from the cache memory itself, and this is referred to as a cache hit.

Cache miss: when the data is not found in the cache memory, the data is taken from the main memory, and a copy of it is also kept in the cache memory for any further use. This is known as a cache miss.

Anand bhat(mca@kiit-870024)


Wiki User

9y ago

To refer to something as a hit or a miss means it is either right on or not. It's either one or the other, and there is no in between.

Related questions

What is the difference between cache vs cold cache vs hot cache vs warm cache vs cache hit vs cache miss?

Firstly, it sounds like you are asking for general definitions rather than differential definitions, which is problematic when the definitions are differential and context-specific.

Cache miss: not in cache; must be loaded from the original source.

Cache hit: was loaded from cache (no implication of what "type" of cache was hit).

Cold cache: the slowest cache hit possible. The actual loading mechanism depends on the type of cache (for a CPU cache it could refer to an L2 or L3 hit, for a disk cache a RAM hit on the drive, for a web cache a drive cache hit).

Hot cache: the fastest cache hit possible. Again it depends on the mechanism (for a CPU it could be an L1 hit, for a disk an OS cache hit, for a web cache a RAM hit in the cache device).

Warm cache: anything in between, like L2 when L1 is hot and L3 is cold. It is a less precise term and is often used to imply "hot" when the performance is closer to "cold."


What is the objective of cache only memory architecture?

Cache memory is a high-speed memory holding the data repeatedly requested by the cache client (the CPU). Whenever data requested by the CPU is present in the cache, the cache supplies it directly; this is known as a cache hit (fast). When the data is not available in the cache, the cache fetches the containing block from main memory and feeds it to the CPU; this is termed a cache miss (slow).


What is miss latency?

Miss latency is the time (in cycles) the CPU waits when a miss happens in the cache, i.e. the time needed to bring the data from the main memory into the cache.
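Miss latency feeds into the standard average memory access time (AMAT) formula; the numbers below (1-cycle hit, 3% miss rate, 40-cycle penalty) are assumed figures for illustration:

```python
# AMAT = hit time + miss rate * miss penalty (standard formula,
# illustrative numbers).
hit_time = 1           # cycles for a cache hit
miss_rate = 0.03       # 3 misses per 100 accesses
miss_penalty = 40      # cycles the CPU waits on a miss (the miss latency)

amat = hit_time + miss_rate * miss_penalty
print(amat)            # about 2.2 cycles on average per access
```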


What is the computer definition for hits?

There are several computer definitions for the word hits. If someone accesses your website, that is called a hit. Results on a search engine are also called hits. This word is also related to caching. When data that is already in the cache is reused, that is called a hit. When data cannot be found in the cache, that is called a miss. The idea of the caching scheme is to be good at predicting hits and thus improve performance. If everything is a miss, then the cache is useless and may actually be reducing performance.


Is cache memory a removable memory?

No. A cache memory is often used to store data that has been needed recently, on the grounds that it will be faster to access when/if it is needed again. When the requested data is contained in the cache you have a cache hit, and when you have to retrieve it from the hard drive (or wherever it was originally stored) again, it is called a cache miss. Retrieving data from the hard drive is slower than retrieving it from the cache.


Why it is possible to achieve a high hit rate with a relatively small amount of cache?

Well, cache hit rate isn't determined only by the size of the cache. If the cache is inefficient, or if the processor is clocked too far out of stability, the hit rate can decrease. The inverse also holds: if the cache is functioning near perfection and the processor is clocked properly, the hit rate will be reasonably high. High miss rates are most often caused by having too little cache, but the two things mentioned above have an impact too. A small cache isn't bad, if it is enough. Processor designs aim for a simple-instruction hit rate of at least 97%; misses beyond this have an increasingly heavy impact on performance. 3 misses out of 100 is a bit rough when you consider how many billions of cycles per second a processor goes through.
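The rough arithmetic behind "3 misses out of 100 is a bit rough" can be shown directly; the 1-cycle hit cost and 100-cycle miss penalty below are assumed figures:

```python
# With a 97% hit rate, how much of the memory time goes to the 3% of misses?
accesses = 100
hits, misses = 97, 3
hit_cost, miss_penalty = 1, 100          # assumed cycle counts

cycles = hits * hit_cost + misses * (hit_cost + miss_penalty)
print(cycles)                            # 400 cycles for 100 accesses
print(misses * miss_penalty / cycles)    # 0.75 -- 75% of the time spent on 3% of accesses
```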


What is cache miss penalty?

The miss penalty is the additional time required because of a miss; for main memory it is generally around 30-40 cycles.


What is the impact of cache miss on system performance?

A cache miss is where the processor requests a memory transfer and that data is not in cache. This requires the bus interface unit to perform a slow access to memory, as opposed to a fast access to cache, or it requires the cache manager to make disk accesses, which can be millions of times slower than main memory.

Depending on the cache level, a consistently high percentage of cache misses can impact performance significantly. This is most often seen in machines with little physical memory, where the swap file hit-miss ratio is poor.

The working set is the memory that has been most recently used. Ideally, you want the short-term working set to always be smaller than physical memory. Since the working set is hard to measure, you can use commit charge instead, though that is not as accurate: you want the commit charge for currently active applications plus kernel memory to be less than physical memory.


What is the size of L1 and L2 cache?

Usually the L2 cache is larger than the L1 cache. If a lookup misses in the L1 cache, the data is searched for in the L2 cache. If the data is in neither cache, it is fetched from main memory.


What is cache and what is its purpose?

Cache is a high-speed memory which is basically used for the following reason: the main memory is not as fast as the CPU, so to compensate for the speed mismatch between the CPU and main memory, a cache is placed between the two. Whenever the CPU asks for data, the cache is checked first; if the data is present, a "cache hit" occurs, otherwise a "cache miss" occurs and the CPU takes the data from main memory, with a copy sent to the cache for any further operation in which the CPU requests the same data. Anand bhat(mca@kiit-870024)


F Explain the concept of 2 way set associative Cache memory with the help of an example?

The replacement policy decides where in the cache a copy of a particular entry of main memory will go. If the replacement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative. At the other extreme, if each entry in main memory can go in just one place in the cache, the cache is direct mapped. Many caches implement a compromise in which each entry in main memory can go to any one of N places in the cache, and are described as N-way set associative. For example, the level-1 data cache in an AMD Athlon is 2-way set associative, which means that any particular location in main memory can be cached in either of 2 locations in the level-1 data cache.

Associativity is a trade-off. If there are ten places the replacement policy can put a new cache entry, then when the cache is checked for a hit, all ten places must be searched. Checking more places takes more power, chip area, and potentially time. On the other hand, caches with more associativity suffer fewer conflict misses, so the CPU spends less time servicing those misses. The rule of thumb is that doubling the associativity, from direct mapped to 2-way, or from 2-way to 4-way, has about the same effect on hit rate as doubling the cache size. Associativity increases beyond 4-way have much less effect on the hit rate and are generally done for other reasons, such as virtual aliasing.

In order of increasing (worse) hit times and decreasing (better) miss rates:

* direct mapped cache: the best (fastest) hit times, and so the best trade-off for "large" caches

* 2-way set associative cache

* 2-way skewed associative cache: "the best tradeoff for .... caches whose sizes are in the range 4K-8K bytes" (André Seznec)

* 4-way set associative cache

* fully associative cache: the best (lowest) miss rates, and so the best trade-off when the miss penalty is very high
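How an address maps to a set can be sketched with a little integer arithmetic; the 64-byte line and 128-set parameters below are assumptions for illustration (roughly a 16 KiB, 2-way cache):

```python
# Set-associative address mapping: each address belongs to exactly one set,
# identified within that set by its tag. Parameters are illustrative.
LINE_SIZE = 64     # bytes per cache line (assumed)
NUM_SETS = 128     # number of sets (assumed)

def map_address(addr):
    line = addr // LINE_SIZE       # which memory line the byte falls in
    set_index = line % NUM_SETS    # the one set this line can live in
    tag = line // NUM_SETS         # distinguishes lines sharing that set
    return set_index, tag

# Two addresses 8 KiB apart land in the same set and so, in a 2-way cache,
# compete for that set's 2 ways:
print(map_address(0x0000))  # (0, 0)
print(map_address(0x2000))  # (0, 1)
```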


What will happen next if a cache miss occurs in the level 1 cache in a system with a level 1 and level 2 cache where will the required data be requested from next?

Cache misses move up the chain (or down the chain, if you want to think of it that way). If the information required is not in your L1 cache, then it checks for it in the L2 cache. If it isn't there either, then you need to go out and grab it from main memory.
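The lookup chain described above can be sketched with dicts standing in for each level; the addresses and contents are invented for illustration:

```python
# L1 -> L2 -> main memory lookup chain, with each level filled on a miss.
l1 = {}
l2 = {0xA: "data"}
main_memory = {0xA: "data", 0xB: "other"}

def lookup(addr):
    if addr in l1:
        return "L1 hit"
    if addr in l2:                 # L1 miss: check L2
        l1[addr] = l2[addr]        # fill L1 on the way back
        return "L2 hit"
    value = main_memory[addr]      # miss in both: go to main memory
    l2[addr] = value               # fill both cache levels
    l1[addr] = value
    return "fetched from main memory"

print(lookup(0xA))  # L2 hit
print(lookup(0xA))  # L1 hit -- filled by the previous lookup
print(lookup(0xB))  # fetched from main memory
```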