A cache is a hardware or software storage mechanism used to hold data temporarily on a device. The data it stores is small in size and frequently used by the cache client. Cache storage is separate from primary storage: it sits on a local drive or in faster memory, close to the cache client, so it is readily available when needed.
A cache is organised as a pool of data entries. Each entry consists of a piece of data along with a tag: the data is a copy of data held in the backing store, while the tag identifies which piece of backing-store data the copy was taken from.
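A minimal sketch of this structure in Python, assuming a dictionary as the backing store; the store contents and the tag are illustrative:

```python
# Primary storage (the backing store) holds the original data.
backing_store = {"user:42": {"name": "Ada"}}

# The cache is a pool of entries: each tag identifies which piece of
# backing-store data the cached copy was taken from.
cache = {}
tag = "user:42"
cache[tag] = dict(backing_store[tag])  # store a copy, not the original
```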
Importance of cache
Cache is essential for the following reasons.
- Cache reduces the time taken to access data by lowering data latency. Data latency is the time a data packet takes to travel from one node to another.
- Using a cache is cost-effective, since serving repeated requests from the cache is cheaper than fetching the same data from primary storage every time.
- Throughput increases with the use of a cache. Throughput is the rate at which data is processed.
- I/O (input/output) traffic is diverted to the cache, reducing the I/O operations against primary storage.
Working of cache
Caches are used by a variety of cache clients, such as CPU hardware, Random Access Memory (RAM), web browsers and web servers.
When the requested data is found in the cache, it is known as a cache hit. In other words, if the tag of a cache entry matches the tag of the required data, the data in that entry is used; web browsers, for instance, first check the cache for a local copy of the data. If the required data is not found in the cache, it is termed a cache miss, and the data is fetched from primary storage and copied into the cache.
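A minimal sketch of that lookup in Python; the names `lookup`, `cache` and `backing_store` are illustrative, and a real cache would add eviction and expiry on top of this:

```python
def lookup(tag, cache, backing_store):
    """Return the data for `tag`, preferring the cache."""
    if tag in cache:
        # Cache hit: the tag matches an entry, so use the cached copy.
        return cache[tag]
    # Cache miss: locate the data in primary storage and copy it in.
    data = backing_store[tag]
    cache[tag] = data
    return data

backing_store = {"logo.png": b"image bytes"}
cache = {}
lookup("logo.png", cache, backing_store)  # miss: fetched and cached
lookup("logo.png", cache, backing_store)  # hit: served from the cache
```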
When a cache miss occurs on a full cache, an existing entry must be evicted to make room for the new one. A common replacement policy is Least Recently Used (LRU), which removes the entry that has gone unused the longest; other policies include Least Frequently Used (LFU) and Adaptive Replacement Cache (ARC), among others.
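A compact LRU sketch using Python's `collections.OrderedDict`, which can move an entry to the end of its ordering in constant time; the class name and capacity handling are illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # tag -> data, oldest first

    def get(self, tag):
        if tag not in self.entries:
            return None                # cache miss
        self.entries.move_to_end(tag)  # mark as most recently used
        return self.entries[tag]

    def put(self, tag, data):
        if tag in self.entries:
            self.entries.move_to_end(tag)
        self.entries[tag] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry
```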
Write policies
A write policy controls when data is written to the cache and when it is written to the backing store. The common approaches are as follows.
Write-through
When data is written to the cache and to the storage simultaneously, the policy is termed write-through. Reads stay fast because the data is always cached, but write latency increases: a write is not complete until the data has been written to both the cache and primary storage.
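A write-through sketch in the same dictionary style; the point is that the write only completes once both copies are updated:

```python
def write_through(tag, data, cache, backing_store):
    """Write to the cache and to primary storage in one operation."""
    cache[tag] = data          # reads will always find fresh data here
    backing_store[tag] = data  # write is not complete until this succeeds
```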
Write-around
Under this approach, writes go only to primary storage, and the cache is skipped altogether. Reading recently written data becomes slower, since it must first be fetched from main memory, but this policy prevents write-heavy I/O traffic from flooding the cache with data that may never be read again.
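A write-around sketch; invalidating any stale cached copy is an assumption added here to keep later reads correct, not something the article prescribes:

```python
def write_around(tag, data, cache, backing_store):
    """Write only to primary storage, skipping the cache."""
    backing_store[tag] = data
    cache.pop(tag, None)  # assumption: drop a stale copy if one exists
```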
Write-back
Here, writing is done only to the cache, and the write is considered complete as soon as the cache is updated; the data is copied to main memory later. The advantage is that both read and write operations have low latency. The disadvantage is that until the data is transferred to storage, it exists only in the cache and is vulnerable to loss.
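A write-back sketch with an illustrative `dirty` set recording entries not yet copied to storage; until `flush` runs, those writes live only in the cache and can be lost:

```python
def write_back(tag, data, cache, dirty):
    """The write completes as soon as the cache is updated."""
    cache[tag] = data
    dirty.add(tag)  # primary storage is now out of date for this tag

def flush(cache, dirty, backing_store):
    """Later, copy every dirty entry back to primary storage."""
    for tag in dirty:
        backing_store[tag] = cache[tag]
    dirty.clear()
```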