Monday, 2 December 2013

Direct Mapped Cache

Let's assume, as we did for fully associative caches, that we have 128 slots and 32 bytes per slot.
In a direct-mapped cache, we treat the 128 slots as if they were an array of slots. We can index into the array using binary numbers. For 128 slots, you need 7 bits, so the array indices run from 0000000₂ up to 1111111₂.
Since we have 128 slots, we need to specify which one the cache line goes in, and this requires lg 128 = 7 bits. We can get those bits directly from the address itself.
Bits A4-A0 are still the offset. The slot number is the next 7 bits, A11-A5. The remaining bits, A31-A12, are the tag.

Finding the slot
Suppose you have an address B31-B0 and you want to find it in the cache.
Use bits B11-B5 to find the slot.
See if bits B31-B12 match the tag of the slot.
If so, get the bytes at offset B4-B0.
If not, fetch the 32 bytes from memory and place them in the slot, updating the valid bit, dirty bit, and tag as needed.

Advantages
If there's an advantage to this scheme, it's that it's very simple. You don't have to simultaneously match tags against all slots; you have just one slot to check. And if the data isn't in that slot, it's obvious which cache line gets evicted.

Summary
A direct-mapped cache scheme makes picking the slot easy. It treats the slots as a large array, and the array index is picked from bits of the address, which is why we need the number of slots to be a power of 2: otherwise we couldn't select the slot using bits of the address.
The scheme can suffer from many addresses "colliding" in the same slot, causing that cache line to be repeatedly evicted, even though there may be empty slots that aren't being used, or slots that are used less frequently.
