
It's simply done, by connecting many lines :p

Wiki User

14y ago

Related Questions

What is read or write hit?

I think this is probably in the context of IT and data cache technology. Memory works at different speeds: the registers in the processor are the fastest, but there are only a few of those. Cache memory is a little slower, but there is more of that; system memory is slower still, but there are gigabytes of that; and disk storage is much slower again, with even more capacity. A read hit means that the information the processor wants is in the cache, so it does not need to be read from main system memory. A write hit means that data sitting in the cache, waiting to be written back to main system memory, is being changed again. Because the transfer to main memory has not yet been done, the update does not increase the total work to be done.
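To make the distinction concrete, here is a minimal Python sketch (a toy model, not any real hardware) of a tiny write-back cache; the dictionaries, addresses, and dirty flag are all illustrative assumptions:

    cache = {}                        # address -> (value, dirty flag)
    main_memory = {0x10: 42, 0x20: 7}

    def read(addr):
        if addr in cache:             # read hit: no trip to main memory needed
            return cache[addr][0]
        value = main_memory[addr]     # read miss: fetch from main memory
        cache[addr] = (value, False)
        return value

    def write(addr, value):
        hit = addr in cache           # write hit: the block is already cached
        cache[addr] = (value, True)   # update the cached copy, mark it dirty
        # On a hit, the pending write-back simply absorbs the new value,
        # so no extra transfer to main memory is added.

    read(0x10)        # miss: fetched from main memory, then cached
    read(0x10)        # hit: served straight from the cache
    write(0x10, 99)   # write hit: only the cached copy changes for now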


Why is it that the instructions being executed must be in physical memory of the computer?

When a computer gets ready to execute the next instruction, it pulls it out of memory of some sort or another. First it tries its local high-speed cache RAM, usually part of the CPU chip. If the instruction is not there, it looks in the slower system RAM. If it finds it there, the memory controller pulls a block of memory from RAM into the cache and executes it there. If it is not in RAM either, the computer looks in virtual memory, which is actually part of the hard disk drive. When it finds it there, it pulls a block into RAM, then into cache memory, where it is executed. In practice, the move from virtual memory to RAM is done well ahead of time, as the controllers anticipate that the computer may need that block of memory in the near future. So you can see, all the instructions end up being executed from the small, high-speed cache RAM. This is done for speed: if all instructions were executed from RAM, as computers once did, they would be roughly ten times slower, and if the computer executed out of hard disk space, it would be thousands of times slower. A lot of computer design goes into optimizing the memory controllers so that almost all instructions are executed out of high-speed cache and the processor rarely has to wait for the cache to fill.
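As a rough illustration of that lookup order, here is a toy Python sketch; the dictionaries standing in for cache, RAM, and disk, and the key names, are made up for the example:

    CACHE, RAM, DISK = {}, {}, {"instr_0": "ADD R1, R2"}

    def fetch(key):
        if key in CACHE:
            return CACHE[key]        # fastest case: already in cache
        if key in RAM:
            CACHE[key] = RAM[key]    # copy up into cache, execute from there
            return CACHE[key]
        RAM[key] = DISK[key]         # slowest case: page in from disk to RAM,
        CACHE[key] = RAM[key]        # then into cache, before executing
        return CACHE[key]

    fetch("instr_0")   # first fetch walks the whole hierarchy
    fetch("instr_0")   # second fetch is a cache hit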


What is reducing cache misses?

Cache memory is fast memory that resides immediately next to the CPU; it can be roughly 15 times faster than the next-fastest memory, system RAM. System RAM is organised from location 0 to location n, where n is the number of bytes of RAM your computer has. The cache, by contrast, is like yellow sticky notes on your desk with the location written on top: it is accessed according to content, not according to where the data sits. Cache memory is very limited, usually a few megabytes, compared to system RAM, which may be many gigabytes.

The CPU tries to predict which system memory it will need next and instructs the address control unit to prefetch this data and place it into the cache. If the CPU was correct in "guessing" the memory blocks, it can blaze along with its processing at the 2 ns speed typically found in cache memory; this is a cache hit. However, if the CPU was incorrect in its educated guess, it must request the memory content from RAM and wait up to 15 times longer, somewhere in the neighborhood of 60 ns; this is referred to as a cache miss. The copy is kept in the cache, and should the CPU modify it, it will modify the cache content only and leave it to the cache to update the RAM later (a write-back policy). Cache misses are undesirable, while cache hits are highly desirable: a system that missed 100% of the time would literally run more than 15 times slower than one that hit 100% of the time. A miss actually costs more than having no cache at all, because of the extra cache lookup before the memory request.

Reducing cache misses is done partly in hardware, by improving the logic the CPU uses to "guess" which memory it will need next; these prediction algorithms are sophisticated, but the principles have stayed the same for decades. Guessing the next instruction is simple: it is usually the next one in sequence, and for a conditional branch you can prefetch both possible targets. Data prediction is based on the idea that what you have used lately will be used again. You also have to consider the complexity of the cache memory itself: it is searched by content (the address) to return the matching data, and the more entries there are to search, the slower the cache becomes.

The one who writes the code can make a huge difference too: simply by not using data that is scattered all over memory, but working on a small set and reusing it, and by minimising "context switches", i.e. executing as much as possible without the cache having to be refilled. Code written in an interpreted language such as VBA is encoded as it executes, which is a huge overhead compared to compiled code that defines simple sequences that get things done. The basics are the same today as those that P.J. Denning and E.G. Coffman described in "Operating Systems Theory" (1973), which introduced the notion of a "working set"; they describe why "more" does not always mean "better performance".
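To illustrate the "small, local working set" advice, here is a short Python sketch contrasting two traversal orders of the same grid. Python lists of lists only approximate contiguous arrays, so treat it as a sketch; in a language with real arrays, such as C, the difference in miss rate is dramatic:

    N = 1000
    grid = [[1] * N for _ in range(N)]

    def sum_row_major(g):
        total = 0
        for row in g:             # consecutive elements: good spatial
            for x in row:         # locality, so mostly cache hits
                total += x
        return total

    def sum_col_major(g):
        total = 0
        for j in range(N):        # strides across rows: poor spatial
            for i in range(N):    # locality, so many more cache misses
                total += g[i][j]
        return total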


Definition of RAM, ROM and cache memory?

Random Access Memory (RAM) is the memory in a computer that is used to store programs while they are running, along with any information the programs need to do their job. Information in RAM can be read and written quickly, in any order. Usually the RAM is cleared every time the computer is turned off; it is known as volatile memory.

Cache memory is a more expensive memory, much faster than regular memory. It is used as a buffer between main memory and the CPU, so that repeated access to the same memory address will actually reference a copy of the information if it is still stored in cache memory. This speeds up the way applications work. This is different from a disk or browser cache, such as when Internet Explorer stores recently visited web page data so that a subsequent visit retrieves the data from the cache instead of fetching it over the Internet; that makes browsing much faster because it doesn't have to fetch every single file every time.

ROM is memory that is not cleared when the power is turned off. The BIOS ROM permanently stores essential system instructions (the BIOS). The data held in ROM can be read but not changed; it is written during manufacturing. ROM is non-volatile, meaning that the data stored on it will not be lost when the computer is switched off.


How is most mapping done today?

GPS


How do you purge the cache in your web browser?

You can easily purge the cache in a web browser. It is done from the browser's own settings rather than on the web page itself: look for an option such as "Clear browsing data" or "Delete browsing history" in the privacy or history section.


Storage mapping is done by compiler or loader?

Compiler


How does cache memory improve system performance?

Caches are meant to improve the memory-access performance of a computer system. There are hardware caches, and software caching is also done in the operating system to improve performance.
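As one concrete example of software caching, Python's standard library offers functools.lru_cache, which keeps recent results so that repeated calls become cheap "hits" (the function below is just a stand-in for expensive work):

    from functools import lru_cache

    @lru_cache(maxsize=128)
    def slow_lookup(key):
        return sum(range(key))       # stand-in for an expensive computation

    slow_lookup(1_000_000)           # miss: computed and stored
    slow_lookup(1_000_000)           # hit: returned from the cache
    print(slow_lookup.cache_info())  # shows hits=1, misses=1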


What is the subset mapping in DBMS?

Subset mapping in DBMS refers to the process of mapping one subset of data from one database to another subset of data in another database. This is typically done to synchronize or transfer data between databases while ensuring that only relevant subsets are affected. It helps in maintaining data consistency and integrity between databases.


How do you get rid of cache?

You can get rid of the cache, although the browser will start filling it again as you browse: most browsers have an option in their settings or history menu to clear it. The cache is simply a store of recently used files kept on the PC to speed things up; it is not a tracking feature, although cached files and history can show what a machine has been used for.


What is a cache on farmville?

Cache is not on FarmVille itself. The cache is your browser's temporary Internet files and history. In some cases, Zynga might request you clear your cache to see new features. In IE7/IE8 this can be done by going to Safety --> Delete Browsing History.


Explain the concept of 2-way set associative cache memory with the help of an example?

Which memory locations can be cached by which cache locations? The replacement policy decides where in the cache a copy of a particular entry of main memory will go. If the replacement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative. At the other extreme, if each entry in main memory can go in just one place in the cache, the cache is direct mapped. Many caches implement a compromise in which each entry in main memory can go to any one of N places in the cache; these are described as N-way set associative. For example, the level-1 data cache in an AMD Athlon is 2-way set associative, which means that any particular location in main memory can be cached in either of 2 locations in the level-1 data cache.

Associativity is a trade-off. If there are ten places the replacement policy can put a new cache entry, then when the cache is checked for a hit, all ten places must be searched. Checking more places takes more power, chip area, and potentially time. On the other hand, caches with more associativity suffer fewer conflict misses, so the CPU spends less time servicing those misses. The rule of thumb is that doubling the associativity, from direct mapped to 2-way, or from 2-way to 4-way, has about the same effect on hit rate as doubling the cache size. Associativity increases beyond 4-way have much less effect on the hit rate and are generally done for other reasons, such as virtual aliasing.

In order of increasing (worse) hit times and decreasing (better) miss rates:

* direct mapped cache -- the best (fastest) hit times, and so the best trade-off for "large" caches
* 2-way set associative cache
* 2-way skewed associative cache -- "the best tradeoff for ... caches whose sizes are in the range 4K-8K bytes" -- André Seznec
* 4-way set associative cache
* fully associative cache -- the best (lowest) miss rates, and so the best trade-off when the miss penalty is very high
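As a worked example of the mapping, here is a small Python sketch that computes which set a given address falls into for a 2-way set associative cache; the line size and set count are made-up parameters, not those of any particular CPU:

    LINE_SIZE = 64                   # bytes per cache line (assumed)
    NUM_SETS = 128                   # each set holds 2 lines ("ways")

    def set_index_and_tag(addr):
        block = addr // LINE_SIZE    # which memory block the address is in
        index = block % NUM_SETS     # the single set this block may occupy
        tag = block // NUM_SETS      # distinguishes blocks sharing that set
        return index, tag

    # These three addresses all map to set 0, but the set has only 2 ways,
    # so caching the third one evicts an earlier block (a conflict miss).
    for addr in (0x0000, 0x2000, 0x4000):
        print(hex(addr), set_index_and_tag(addr))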