Computer Organization and Architecture
Thursday, 19 December 2013
4. Base Addressing
- Base addressing is also known as displacement or register-indirect addressing.
- The address of the operand is the sum of the 16-bit immediate (the offset) and the value in a register (rs). The offset is limited to 16 bits because each MIPS instruction must fit into a single 32-bit word.
- It is used in the lw and sw (load word, store word) instructions; a small sketch of the address calculation follows below.
- The offset is a signed number represented in two's complement format.
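A minimal Python sketch of this address calculation, assuming a 32-bit address space (the register values in the comments are made up):

```python
# Sketch: effective-address calculation for base addressing (lw/sw).
def sign_extend_16(imm):
    """Interpret a 16-bit field as a two's-complement signed value."""
    imm &= 0xFFFF
    return imm - 0x10000 if imm & 0x8000 else imm

def effective_address(rs_value, offset16):
    """Base addressing: address = register value + sign-extended 16-bit offset."""
    return (rs_value + sign_extend_16(offset16)) & 0xFFFFFFFF

# lw $t0, 8($s1) with $s1 = 0x10000000 -> address 0x10000008
print(hex(effective_address(0x10000000, 8)))
# lw $t0, -4($sp) with $sp = 0x7FFFEFFC -> address 0x7FFFEFF8
print(hex(effective_address(0x7FFFEFFC, 0xFFFC)))
```

With an offset of zero, base addressing behaves like plain register-indirect addressing.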
3. PC-Relative Addressing
- A data or instruction memory location is specified as an offset relative to the incremented program counter (PC + 4).
- PC-relative addressing is usually used in the beq and bne (branch if equal, branch if not equal) instructions; a sketch of the target calculation follows below.
- It supports position-independent code. A small offset is adequate because branches usually target nearby instructions, such as the tops of short loops.
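A minimal Python sketch of the target calculation, assuming the usual MIPS convention that the 16-bit offset counts words and is added to the incremented PC (the addresses used are made up):

```python
# Sketch: branch-target calculation for PC-relative addressing (beq/bne).
def sign_extend_16(imm):
    imm &= 0xFFFF
    return imm - 0x10000 if imm & 0x8000 else imm

def branch_target(pc, offset16):
    """Target = (PC + 4) + (sign-extended word offset << 2)."""
    return (pc + 4 + (sign_extend_16(offset16) << 2)) & 0xFFFFFFFF

# Branch at 0x00400010 with offset +5 words -> target 0x00400028
print(hex(branch_target(0x00400010, 5)))
# Offset -3 words branches back to 0x00400008, typical of a short loop
print(hex(branch_target(0x00400010, 0xFFFD)))
```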
2. Immediate Addressing
- In immediate addressing, the operand is a constant held within the instruction itself (sketched below).
- Like register addressing, immediate addressing executes quickly because it avoids the delays associated with memory access.
- Since the destination of the jump (j) instruction is held within the instruction itself, the jump format can also be considered an example of immediate addressing.
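A minimal Python sketch of where the constant lives, assuming the standard MIPS I-type layout (opcode, rs, rt, 16-bit immediate):

```python
# Sketch: the immediate operand sits in the low 16 bits of the instruction word.
def immediate_field(instruction_word):
    """Extract the 16-bit immediate field and interpret it as signed."""
    imm = instruction_word & 0xFFFF
    return imm - 0x10000 if imm & 0x8000 else imm

# addi $t0, $zero, 5 encodes as 0x20080005; the constant 5 is inside the word.
print(immediate_field(0x20080005))   # -> 5
```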
1. Register Modes
- Register addressing is the simplest addressing mode.
- Both operands are in registers. Instructions execute quickly because they avoid the delays associated with memory access.
- The number of registers is limited since only a few bits are reserved to select a register.
- It takes n bits to address 2^n registers; since MIPS has 32 registers, a register specifier fits in five bits.
Addressing Modes
MIPS has only a small number of ways in which it computes addresses in memory. The address can be the address of an instruction (for branch and jump instructions) or the address of data (for load and store instructions).
For any given operation, such as load, add, or branch, there are often many different ways to specify the address of the operand(s).
The different ways of determining the address are called addressing modes.
We will look at the four ways addresses are computed:
- Register Addressing
- Immediate Addressing
- PC-Relative Addressing
- Base Addressing
Thursday, 12 December 2013
Associative Caches
Spectrum of associativity
Types of associativity
1. Fully Associative
--In a fully associative scheme, any slot can store the cache line. The hardware for finding whether the desired data is in the cache requires comparing the tag bits of the address to the tag bits of every slot (in parallel), and making sure the valid bit is set.
2. Set Associative
--A set-associative cache scheme is a combination of fully associative and direct mapped schemes. You group slots into sets. You find the appropriate set for a given address (which is like the direct mapped scheme), and within the set you find the appropriate slot (which is like the fully associative scheme).
Example
--Compare 4-block caches with the block access sequence 0, 8, 0, 6, 8; a simulation sketch follows below.
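A minimal Python sketch of this comparison, assuming LRU replacement and one-word blocks, for a direct-mapped, a 2-way set-associative, and a fully associative cache of 4 blocks each:

```python
# Sketch: count misses for the access sequence 0, 8, 0, 6, 8 on 4-block caches
# of different associativity, using LRU replacement.
def count_misses(accesses, num_blocks, ways):
    num_sets = num_blocks // ways
    # Each set lists the blocks it holds, ordered least- to most-recently used.
    sets = [[] for _ in range(num_sets)]
    misses = 0
    for block in accesses:
        s = sets[block % num_sets]
        if block in s:
            s.remove(block)      # hit: refresh its LRU position
        else:
            misses += 1          # miss: evict the LRU block if the set is full
            if len(s) == ways:
                s.pop(0)
        s.append(block)          # most recently used goes last
    return misses

accesses = [0, 8, 0, 6, 8]
for ways in (1, 2, 4):           # direct mapped, 2-way set associative, fully associative
    print(f"{ways}-way: {count_misses(accesses, 4, ways)} misses")
# Prints 5, 4, and 3 misses respectively.
```

The drop from 5 misses (direct mapped) to 3 misses (fully associative) shows how higher associativity reduces conflict misses for this access pattern.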
Cache Performance
- There are three formulas to memorize (a worked example follows below):
1. Memory stall cycles
= Memory accesses x Miss rate x Miss penalty
2. CPU time
= (CPU execution cycles + Memory stall cycles) x Cycle time
3. Average memory access time (AMAT)
= Hit time + (Miss rate x Miss penalty)
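A minimal Python sketch plugging numbers into the three formulas; the instruction count, miss rate, and miss penalty below are assumed values, not figures from the lecture:

```python
# Worked example of the three cache performance formulas (all inputs assumed).
instructions      = 1_000_000   # instruction count
cpi_base          = 1.0         # CPI ignoring memory stalls
mem_refs_per_inst = 1.3         # instruction fetch + data references per instruction
miss_rate         = 0.05
miss_penalty      = 100         # cycles
hit_time          = 1           # cycles
cycle_time        = 1e-9        # seconds (1 GHz clock)

# 1. Memory stall cycles = Memory accesses x Miss rate x Miss penalty
memory_stall_cycles = instructions * mem_refs_per_inst * miss_rate * miss_penalty

# 2. CPU time = (CPU execution cycles + Memory stall cycles) x Cycle time
cpu_time = (instructions * cpi_base + memory_stall_cycles) * cycle_time

# 3. AMAT = Hit time + (Miss rate x Miss penalty)
amat = hit_time + miss_rate * miss_penalty

print(memory_stall_cycles)   # 6,500,000 stall cycles
print(cpu_time)              # 0.0075 seconds
print(amat)                  # 6.0 cycles
```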
More Cache Performance Formulas
- Access time can be split into instruction and data accesses:
Average memory access time = (% instruction accesses) × (instruction memory access time) + (% data accesses) × (data memory access time)
- Another simple formula:
CPU time = (CPU execution clock cycles + Memory stall clock cycles) × Cycle time
- Stalls can be broken into reads and writes:
Memory stall cycles = (Reads × read miss rate × read miss penalty) + (Writes × write miss rate × write miss penalty)
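A minimal Python sketch of the split formulas, again with assumed access counts, miss rates, and penalties:

```python
# Worked example of the split cache performance formulas (all inputs assumed).

# AMAT split into instruction and data accesses.
pct_inst_accesses = 0.75
pct_data_accesses = 0.25
inst_access_time  = 2.0      # cycles
data_access_time  = 4.0      # cycles
amat = (pct_inst_accesses * inst_access_time
        + pct_data_accesses * data_access_time)        # 2.5 cycles

# Memory stall cycles split into reads and writes.
reads,  read_miss_rate,  read_miss_penalty  = 800_000, 0.04, 100
writes, write_miss_rate, write_miss_penalty = 200_000, 0.08, 100
memory_stall_cycles = (reads * read_miss_rate * read_miss_penalty
                       + writes * write_miss_rate * write_miss_penalty)  # 4,800,000 cycles

print(amat, memory_stall_cycles)
```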
Performance Summary
1. Improving Cache Performance
2. The organization of a memory system affects its performance.
--The cache size, block size, and associativity affect the miss rate.
--We can organize the main memory to help reduce miss penalties. For example, interleaved memory supports pipelined data accesses.
3. Cache behavior can't be neglected when evaluating system performance.