Updated April 13, 2023
Introduction to Cache Memory Types
Cache memory is a high-speed memory that holds frequently accessed instructions and data so that the processor can reach them in the shortest possible time, reducing the overall processing cycle time. It costs more per byte than the computer's main memory or external hard disks but less than processor registers, and it acts as an intermediary between main memory and the processor.
In earlier computers, cache memory was a separate chip; in modern microprocessors it is integrated with the processor, runs in step with the CPU, and speeds up storing and retrieving data. Let's study the various types of cache memory in this article.
Types of Cache Memory
Cache memory within a computer is classified into various types depending on its physical location, that is, whether it is:
1. Part of the processor chip (Primary Cache L1)
2. Located between the processor and main memory (Secondary Cache L2)
3. Shared by all cores, or in older designs external to the processor (L3 Cache)
Apart from being used in the main memory arena Cache concept is put into use in the following areas to enhance the performance of Web applications. Some of these are under the control of the users and the rest of them are under the system administrator’s control.
1. Caching in the browser, proxy, or gateway during web browsing
2. Caching in the database server during data access
3. Application/output caching
4. Distributed caching
Level-1 – Primary Cache L1
Primary cache is part of the processor and is located very close to the CPU core, making it an integral part of the CPU. Its size is very small, typically ranging from 2 KB to 64 KB; the Intel 486 had an 8 KB built-in cache, and early Pentium chips had 16 KB. It is a high-speed memory that stores and retrieves instructions and data at a speed matching the processor. The CPU looks for data or instructions in the L1 cache first before searching anywhere else. Because L1 is integrated into the CPU in modern microprocessors, accessing it is quicker than accessing regular memory, and the overall processing speed increases. In multicore CPUs, each core has its own L1 cache.
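The L1-first lookup order can be sketched in a few lines of Python. This is only an illustrative model, not real hardware; the dictionaries standing in for each level and the lookup function are invented for the example:

def lookup(address, l1, l2, l3, main_memory):
    # Check each cache level in order; the first hit wins.
    for name, level in (("L1", l1), ("L2", l2), ("L3", l3)):
        if address in level:
            print(name, "hit for", hex(address))
            return level[address]
    # Miss at every level: fetch from main memory and fill L1 for next time.
    print("cache miss, fetching", hex(address), "from main memory")
    value = main_memory[address]
    l1[address] = value
    return value

l1, l2, l3 = {}, {}, {}
main_memory = {0x20: "instruction"}
lookup(0x20, l1, l2, l3, main_memory)   # miss everywhere, read from memory
lookup(0x20, l1, l2, l3, main_memory)   # L1 hit on the repeat access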
Level-2 – Secondary Cache L2
The secondary cache is larger than the L1 cache, typically ranging from 256 KB to 512 KB. If a cache miss occurs during the L1 search, the instructions and data are looked for in the L2 cache. In older designs this memory was external to the processor and connected to it through a high-speed bus, so storing and retrieval were still reasonably fast; in modern processors the L2 cache usually sits on the chip as well.
Up to the 486 family, there was no internal cache; all cache was external and was called primary. The 80486 was the first of these processors to have an internal cache, of 8 KB, and the Pentium family had 256 KB to 512 KB of external secondary (L2) cache.
Level-3 – L3 Cache
The size of this cache is typically between 1 MB and 8 MB, the largest among L1, L2, and L3. Multicore CPUs have a separate L1 and L2 cache for every core, but the L3 cache is common to all cores and is shared by them. The L3 cache is slower than L1 and L2 but still much faster than main memory.
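The benefit of this hierarchy can be made concrete with the standard average-memory-access-time calculation. The latencies and hit rates below are assumed round numbers chosen for illustration, not figures for any particular CPU:

# Assumed, illustrative figures: access times in cycles and hit rates per level.
l1_time, l2_time, l3_time, mem_time = 1, 4, 20, 100
l1_hit, l2_hit, l3_hit = 0.90, 0.80, 0.75

# Average access time: each level's miss fraction pays the cost of the next level.
amat = l1_time + (1 - l1_hit) * (
    l2_time + (1 - l2_hit) * (l3_time + (1 - l3_hit) * mem_time))
print(round(amat, 1), "cycles on average")   # about 2.3 cycles with these numbers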
Web Caching
The browser keeps copies of previously browsed data as a cache on the local computer. When a new request for any data is raised, the local cache is searched first. If the data is available in the cache and has not been updated since the last access, it is used from the cache, and network traffic and latency are reduced considerably. This caching is controlled at the user level, and the user can clear the stored cache data at any time.
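A minimal Python sketch of this behaviour follows. The browser_cache dictionary, the fetch function, and the simulated server are all invented for the example; a real browser relies on HTTP validators such as Last-Modified and ETag headers:

browser_cache = {}                                  # url -> (last_modified, body)

def fetch(url, server):
    cached = browser_cache.get(url)
    if cached and cached[0] == server[url][0]:      # not updated since last access
        print("served", url, "from the local cache")
        return cached[1]
    print("downloading", url, "over the network")
    browser_cache[url] = server[url]                # store a copy for next time
    return server[url][1]

server = {"/index.html": ("2023-04-01", "<html>home</html>")}
fetch("/index.html", server)    # first request goes over the network
fetch("/index.html", server)    # repeat request is served locally
browser_cache.clear()           # the user can clear the cache at any time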
Caching in proxy and gateway servers is different from browser caching: it involves large user groups and is therefore administered centrally. Domain name server data and mail server records are stored in the cache on these servers; since this data changes infrequently, it is better to allow it to remain in the cache for a longer time. This cache helps avoid unnecessary network access and reduces browsing time.
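A proxy-style record cache with an expiry time might look roughly like the sketch below; the TTLCache class and the one-hour TTL are assumptions made for illustration:

import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}                          # key -> (expiry_time, value)

    def get(self, key):
        entry = self.entries.get(key)
        if entry and entry[0] > time.time():       # still fresh
            return entry[1]
        return None                                # missing or expired

    def put(self, key, value):
        self.entries[key] = (time.time() + self.ttl, value)

dns_cache = TTLCache(ttl_seconds=3600)             # rarely changing records get a long TTL
dns_cache.put("example.com", "93.184.216.34")
print(dns_cache.get("example.com"))                # answered without a network lookup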
Data Caching
Fetching data from a database takes more time because of input/output processing and hardware constraints. If the same set of data is accessed frequently and remains unchanged for some time, it is better to store that data in a data cache maintained in the application server. By doing so, the number of database accesses comes down, and the application can retrieve the information faster from the data cache.
There should be a mechanism to clear the cache whenever the original data in the database is altered, so that a fresh version is fetched from the database on the next request. These caching and de-caching operations are handled at the application and database server level.
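One common way to implement this is the cache-aside pattern sketched below; the keys, the in-memory dictionaries, and the read/write helpers are invented for the example:

data_cache = {}
database = {"user:1": {"name": "Alice"}}           # stand-in for the real database

def read(key):
    if key in data_cache:
        return data_cache[key]                     # no database round trip
    value = database[key]                          # slower, I/O-bound access
    data_cache[key] = value
    return value

def write(key, value):
    database[key] = value
    data_cache.pop(key, None)                      # de-cache so the next read is fresh

read("user:1")                                     # fetched from the database, then cached
read("user:1")                                     # served from the cache
write("user:1", {"name": "Alicia"})                # update invalidates the cached copy
read("user:1")                                     # fresh version fetched again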
Application/Output Caching
This is a built-in mechanism in content management applications; it enables faster download of web pages and results in a drastic reduction in server overhead. It applies caching at the server level, caching the rendered HTML rather than the raw data sets cached under the data caching option. Caching may occur at the page level, for parts of a page, or at the module level, but normally the cached unit is HTML. A reduction of about 50% in page loading time is achievable with this method.
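The idea of caching rendered HTML rather than raw data can be sketched as follows; render_page and the page_cache dictionary are illustrative stand-ins for a content management system's real rendering pipeline:

page_cache = {}

def render_page(page_id):
    # Stand-in for an expensive step: querying data, filling templates, and so on.
    return "<html><body>Page " + page_id + "</body></html>"

def get_page(page_id):
    if page_id not in page_cache:
        page_cache[page_id] = render_page(page_id)   # rendered only once
    return page_cache[page_id]                        # later hits return ready-made HTML

get_page("home")    # rendered and cached
get_page("home")    # served without any rendering overhead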
Distributed Caching
In big, high-volume systems, data is stored across distributed database servers, and data from these database servers is cached in a pool of cache servers that supply it to the applications. The cache does not run out of space because new servers can easily be added to the pool without disturbing users. This technique is adopted by many big players such as YouTube, Google, and Amazon to serve content to users faster.
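A toy version of spreading cached data across a server pool is sketched below; the list of dictionaries stands in for separate cache servers, and the hashing scheme is a deliberate simplification:

import hashlib

servers = [{}, {}, {}]                              # each dict stands in for one cache server

def pick_server(key):
    digest = hashlib.md5(key.encode()).hexdigest()  # stable hash of the key
    return servers[int(digest, 16) % len(servers)]

def cache_put(key, value):
    pick_server(key)[key] = value

def cache_get(key):
    return pick_server(key).get(key)

cache_put("video:42", "cached metadata")
print(cache_get("video:42"))                        # served by whichever server owns the key

Note that simple modulo hashing remaps most keys when a server is added, which is why production systems typically use consistent hashing so the pool can grow without disturbing existing entries.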
Conclusion
The cache concept started with computer memory and has gained wide traction in other areas as well to optimize the performance of applications.
Recommended Articles
This is a guide to Cache Memory Types. Here we discuss an introduction to cache memory types and the various types with detailed explanations. You can also go through our other related articles to learn more –