Introduction to Cache Memory
Cache memory is a small, high-speed memory used for speeding up and synchronising with the high-speed CPU. It is costlier than main memory or disk storage, but economical compared to CPU registers. Because it is extremely fast, cache memory acts as a buffer between the Central Processing Unit and Random Access Memory. It holds frequently used data and instructions so that they are quickly available to the CPU when requested, which reduces the average time taken to access data from main memory.
Advantages of Cache Memory
The advantages of cache memory are given below:
- Cache memory is a smaller, faster memory used to store copies of frequently used data from main memory locations. Within a central processing unit, there are several independent caches for storing instructions and data.
- Cache memory is faster than main memory for two major reasons:
- Cache memory uses static Random Access Memory (SRAM), whereas main memory uses dynamic Random Access Memory (DRAM).
- Cache memory also stores the instructions that the processor may need next, which helps retrieve that information faster than fetching it from random access memory (RAM).
- Using static random access memory means the access time is low, so retrieving data from cache memory is fast compared to retrieving it from the computer's random access memory. Static random access memory also does not need to be refreshed, as dynamic random access memory does; the refresh process makes it take longer to retrieve data from main memory.
- Cache memory also copies some of the data held in random access memory. To simplify the process, it relies on the fact that most programs access data in sequential order. For example, if the processor is processing data from locations 0-32, the cache will copy the contents of locations 33-64, expecting that these locations will be required next.
- When the processor initiates a memory read, the first thing it does is check the cache memory. The check results in either a cache hit or a cache miss. For example, if the processor needs the contents of random access memory location 37 and finds them in the cache, that is a cache hit; a simple simulation of this hit/miss check is sketched after this list.
- Generally, the cache memory can hold a reasonable number of blocks at any particular time, but this number is small compared to the total number of memory blocks in main memory.
- A mapping function is used to correspond the main memory blocks to the blocks (lines) in cache memory.
- A primary cache is small, with an access time comparable to that of processor registers, and is always placed on the processor chip.
- The secondary cache, referred to as the level 2 (L2) cache, is also placed on the processor chip, logically between the primary cache and the rest of the memory.
- In direct mapping, each block of main memory is mapped to exactly one possible cache line. If a new block must be loaded into a line that is already occupied, the old block is trashed (overwritten). The memory address is divided into two parts, an index field and a tag field: the index selects the cache line, and the tag is stored in the cache alongside the data to identify which block currently occupies that line. The performance of direct mapping is directly proportional to the hit ratio.
- For cache accesses, each main memory address can be viewed as three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most machines, addresses are at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s - r bits (the most significant portion) and a line field of r bits, where the line field identifies one of the m = 2^r lines of the cache (a sketch of this address split appears after this list).
- In associative mapping, associative memory is used to store both the content and the addresses of memory words. Here, any block can go into any line of the cache: the word-id bits identify which word of the block is required, while the remaining bits form the tag. This allows any word or byte to be placed anywhere in cache memory. Associative mapping is considered the fastest and most flexible form of mapping.
- Set associative mapping is an improved form of direct mapping that eliminates its drawbacks, in particular the possible thrashing problem. Instead of having exactly one line to which a block can map, a small group of lines forms a set, and a block in memory can map to any line of its set. Set associative mapping allows each word present in the cache to have two or more words in main memory with the same index address. It combines the best features of direct and associative mapping (see the lookup sketch below).
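
To make the cache hit/miss check and the sequential prefetch idea above concrete, here is a minimal sketch in C. It is an illustrative simulation under assumed parameters, not a model of real hardware: the 8-line cache, the 32-word blocks, and the access_word/load_block helpers are all hypothetical names and sizes chosen for the example.

```c
#include <stdbool.h>
#include <stdio.h>

/* A minimal sketch, not a real hardware model: a tiny direct-mapped
 * cache of 8 blocks, each holding 32 words, with a naive "next block"
 * prefetch to mimic the sequential-access idea described above. */

#define NUM_LINES  8
#define BLOCK_SIZE 32   /* words per block (hypothetical size) */

typedef struct {
    bool valid;
    int  block;  /* which main-memory block is cached in this line */
} CacheLine;

static CacheLine cache[NUM_LINES];

/* Load a memory block into its direct-mapped line, evicting whatever
 * was there before (the "trashing" of the old block). */
static void load_block(int block)
{
    CacheLine *line = &cache[block % NUM_LINES];
    line->valid = true;
    line->block = block;
}

/* Returns true on a cache hit, false on a miss; the miss path loads
 * the requested block and prefetches its successor. */
static bool access_word(int address)
{
    int block = address / BLOCK_SIZE;
    CacheLine *line = &cache[block % NUM_LINES];

    if (line->valid && line->block == block)
        return true;            /* cache hit */

    load_block(block);          /* cache miss: fetch the block ... */
    load_block(block + 1);      /* ... and prefetch the next one   */
    return false;
}

int main(void)
{
    /* The first touch of location 37 misses; its block (and the next
     * block) are then cached, so location 38 hits. */
    printf("addr 37: %s\n", access_word(37) ? "hit" : "miss");
    printf("addr 38: %s\n", access_word(38) ? "hit" : "miss");
    return 0;
}
```

Running this prints a miss for address 37 followed by a hit for address 38, since the first miss pulls the surrounding block (and its successor) into the cache.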
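The three-field address split described above can also be shown in a few lines of C. The field widths here (w = 4 offset bits and r = 7 line bits, so m = 2^7 = 128 lines) are assumed purely for illustration; a real machine fixes these values by its block size and cache size.

```c
#include <stdio.h>

/* A minimal sketch of the tag / line / word address split, assuming a
 * byte-addressed machine with hypothetical field widths. */

#define W_BITS 4   /* w: word/byte offset within a 16-byte block */
#define R_BITS 7   /* r: selects one of the m = 2^7 = 128 cache lines */

int main(void)
{
    unsigned int address = 0x12345;

    /* Least significant w bits: word within the block. */
    unsigned int word = address & ((1u << W_BITS) - 1);
    /* Next r bits: which cache line the block maps to. */
    unsigned int line = (address >> W_BITS) & ((1u << R_BITS) - 1);
    /* Remaining s - r bits: the tag stored alongside the line. */
    unsigned int tag  = address >> (W_BITS + R_BITS);

    printf("address 0x%x -> tag 0x%x, line %u, word %u\n",
           address, tag, line, word);
    return 0;
}
```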
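Finally, here is a sketch of set associative lookup, assuming a hypothetical 2-way cache with 4 sets and a deliberately crude replacement policy (a real cache would normally use LRU or a similar scheme). Blocks 0 and 4 both map to set 0 yet can coexist, which is exactly the thrashing case that defeats direct mapping.

```c
#include <stdbool.h>
#include <stdio.h>

/* A minimal sketch of 2-way set-associative lookup: each memory block
 * maps to one set, but may occupy either of that set's two lines. */

#define NUM_SETS 4
#define WAYS     2

typedef struct {
    bool valid;
    int  tag;
} Line;

static Line cache[NUM_SETS][WAYS];

/* Returns true on a hit; on a miss, places the block in the first
 * invalid way of its set (a real cache would use LRU or similar). */
static bool access_block(int block)
{
    int set = block % NUM_SETS;
    int tag = block / NUM_SETS;

    for (int way = 0; way < WAYS; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return true;                  /* hit in either way */

    for (int way = 0; way < WAYS; way++)
        if (!cache[set][way].valid) {
            cache[set][way].valid = true; /* fill an empty way */
            cache[set][way].tag = tag;
            return false;
        }

    cache[set][0].valid = true;           /* crude eviction */
    cache[set][0].tag = tag;
    return false;
}

int main(void)
{
    /* Blocks 0 and 4 both map to set 0 but can coexist in a 2-way
     * set, so alternating between them hits after the first touches. */
    printf("block 0: %s\n", access_block(0) ? "hit" : "miss");
    printf("block 4: %s\n", access_block(4) ? "hit" : "miss");
    printf("block 0: %s\n", access_block(0) ? "hit" : "miss");
    printf("block 4: %s\n", access_block(4) ? "hit" : "miss");
    return 0;
}
```

With strict direct mapping, blocks 0 and 4 would evict each other on every alternation; the two ways of the set absorb exactly that conflict.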
Conclusion
In this article on the advantages of cache memory, we covered the basics of cache memory and its advantages. We saw how cache memory improves the performance of a CPU and increases execution speed in a machine. This article should help students understand the concept of cache memory in detail.