Computer Architecture and Organization : CT 2

Created by
BBorhan

Syllabus: Chapters 2 and 4, and Chapter 7 up to micro-operations.

Resources:

Ch.2 : Performance Measuring

Ch. 4 : Cache Memory

CPU → L1 → L2 → L3 → Main Memory

Memory Access method

| Memory Access Method | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Direct Access | Directly accesses a memory location using an index or address | Fast retrieval of data | Limited flexibility; requires contiguous memory allocation |
| Sequential Access | Accesses memory sequentially, reading data in order | Simple implementation; suitable for linear data structures | Slow for random access; inefficient for large data sets |
| Random Access | Accesses memory locations in any order, without a sequence | Fast access to any element; suitable for non-linear data structures | Requires additional data structures for indexing; overhead of maintaining the index |
| Associative Access | Accesses memory based on content rather than address | Flexible; efficient for search operations | Complex implementation; higher hardware requirements |

Memory Hierarchy

Access time and size both increase from top to bottom of the hierarchy.

Cost per bit and frequency of access both increase from bottom to top.

Working of Cache

Data transfer between cache and main memory → in blocks

Data transfer between CPU and cache memory → in words

Cache Hit: If the required data is found in cache

Cache Miss: If the required data is not found in cache

Cache access time (Cache Hit time): Time required to access word from the cache

Miss Penalty (Cache Miss Time Penalty) : The time required to fetch the required block from main memory

$\text{Miss Penalty} = \text{Cache Access Time} + \text{Main Memory Access Time}$

Write-through procedure: a change in cache memory is immediately made in main memory as well. Information is written both to the block in the cache and to the corresponding block in main memory (MM).

Write-back procedure: a change in cache memory is not immediately made in main memory; it is made only when the block is removed from the cache. Information is written only to the block in the cache, and the modified cache block is written to MM only when it is replaced. This requires an additional piece of information per block (maintained in hardware or software), called a dirty bit.
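As a sketch, the two write policies can be contrasted with a toy cache model. The class and method names (`TinyCache`, `write`, `evict`) are illustrative, not from the notes:

```python
class TinyCache:
    """Toy model contrasting write-through and write-back policies."""

    def __init__(self, policy):
        self.policy = policy      # "through" or "back"
        self.cache = {}           # address -> value
        self.dirty = {}           # address -> dirty bit (write-back only)
        self.main_memory = {}     # simulated MM

    def write(self, addr, value):
        self.cache[addr] = value
        if self.policy == "through":
            self.main_memory[addr] = value   # MM updated immediately
        else:
            self.dirty[addr] = True          # MM update deferred

    def evict(self, addr):
        # On replacement, write-back flushes only dirty blocks to MM
        if self.policy == "back" and self.dirty.get(addr):
            self.main_memory[addr] = self.cache[addr]
            self.dirty[addr] = False
        self.cache.pop(addr, None)

wt = TinyCache("through"); wt.write(0x10, 7)
wb = TinyCache("back");    wb.write(0x10, 7)
print(wt.main_memory.get(0x10))  # 7    (updated immediately)
print(wb.main_memory.get(0x10))  # None (not yet written to MM)
wb.evict(0x10)
print(wb.main_memory.get(0x10))  # 7    (flushed on replacement)
```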

Cache Performance

$\text{Miss Penalty} = \text{Cache Access Time} + \text{Main Memory Access Time}$

Example:

The access time of a cache memory is 100 ns and that of main memory is 1000 ns. It is estimated that 80 percent of the memory requests are for read and the remaining 20 percent for write. The hit ratio for read accesses is 0.9. A write-through procedure is used.

(a) What is the average access time of the system considering only memory read cycle and write cycle separately ?

Ans:

Average read access time $= h \times T_c + (1-h) \times (T_c + T_m) = 0.9 \times 100 + 0.1 \times (100 + 1000) = 200\ \text{ns}$

Write access time $= \max(\text{cache access time}, \text{main memory access time}) = \max(100, 1000) = 1000\ \text{ns}$ (with write-through, every write goes to both the cache and main memory, so the slower main-memory access dominates)

(b) What is the average access time for both read and write cycle?

Ans:

Average access time for both $= 0.8 \times 200 + 0.2 \times 1000 = 360\ \text{ns}$

(c) What is hit ratio taking into consideration the write cycle?

Ans:

Hit ratio $= 0.8 \times 0.9 + 0.2 \times 0 = 0.72$

[Hit Ratio for only write is assumed as 0]
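The arithmetic of this worked example can be checked with a short script (variable names are illustrative):

```python
# Worked example: write-through cache, 80% reads / 20% writes
Tc, Tm = 100, 1000   # cache and main-memory access times (ns)
h = 0.9              # read hit ratio

# Read: hit costs Tc; miss pays the miss penalty Tc + Tm
read_avg = h * Tc + (1 - h) * (Tc + Tm)

# Write-through writes cache and MM together; the slower one dominates
write_avg = max(Tc, Tm)

overall = 0.8 * read_avg + 0.2 * write_avg
hit_ratio = 0.8 * 0.9 + 0.2 * 0.0   # writes assumed to never hit

print(read_avg, write_avg, overall, hit_ratio)
```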

Cache Organization

Locality of Reference

Also known as Principle of Locality.

Two types of locality of reference: temporal locality (a recently accessed item is likely to be accessed again soon) and spatial locality (items near a recently accessed item are likely to be accessed soon).

Level of Cache Memory

Level 1 (L1) Cache: the smallest and fastest cache, built into the processor core itself.

Level 2 (L2) Cache: larger and slower than L1; may be per-core or shared.

Level 3 (L3) Cache: the largest and slowest cache level, typically shared among all cores.

Cache Mapping

Connections:

CPU → Cache → Main Memory → Secondary Memory (Hard Disk)

3 types: direct mapping, associative (fully associative) mapping, and set-associative mapping.

  • Direct Mapping drawbacks
    • Two words with the same index in their address with different tag values cannot reside in cache memory at the same time

    Questions:

    • A digital computer has a memory unit of $64K \times 16$ and a cache memory of 1K words. The cache uses direct mapping with a block size of four words.

      a) How many bits are there in the tag, index, block (line), and word (block offset) fields of the address format?

      Ans:

      Main memory is $64K \times 16$: 64K words of 16 bits each, so the word (data) size is 16 bits.

      Main memory size = 64K $= 2^6 \times 2^{10} = 2^{16}$ words

      Address bits = 16

      Cache memory size = 1K words $= 2^{10}$ words

      Index = 10 bits, Tag $= 16 - 10 = 6$ bits

      Number of blocks $= \frac{2^{10}}{4} = 2^8 = 256$ blocks, so the block (line) field = 8 bits

      Number of words in 1 block = 4 $= 2^2$ words, so the word (block offset) field = 2 bits

      b) How many bits are there in each word of the cache, and how are they divided into fields? Include a valid bit.

      [Valid bit is one bit only (0 or 1)]

      Ans: data (16 bits) + tag (6 bits) + valid bit (1 bit) = 23 bits

      c) How many blocks can the cache accommodate?

      Ans: $\frac{1K}{4} = 256$ blocks
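The address split derived above (6-bit tag, 8-bit block, 2-bit word offset) can be sketched in code; `split_address` is a hypothetical helper, not part of the notes:

```python
def split_address(addr):
    """Split a 16-bit address for the direct-mapped cache above:
    tag (6 bits) | block (8 bits) | word offset (2 bits)."""
    word  = addr & 0b11            # lowest 2 bits: word offset within a block
    block = (addr >> 2) & 0xFF     # next 8 bits: block (line) number
    tag   = addr >> 10             # remaining 6 bits: tag
    return tag, block, word

print(split_address(0xABCD))  # (42, 243, 1)
```

Note that the 10-bit index of part (a) is simply the block and word fields taken together.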

Virtual Memory

How the VM is implemented

In virtual memory there are addresses for both main memory and secondary memory. When a secondary-memory address is referenced, the corresponding data is transferred from the HDD to main memory (known as swap-in). In exchange, paused main-memory tasks are transferred to the HDD, which is called swap-out.

Virtual Address/Logical address: Virtual address is the address used by the programmers. The set of such addresses is called address space or virtual memory.

Physical Address: an address of main memory is called a physical address. The set of such locations in main memory is called the Memory Space or Physical Memory. The physical address is stored in the Memory Address Register (MAR).

Cache replacement policies

Cache replacement policies (also known as cache replacement algorithms or cache algorithms) are optimizing instructions or algorithms that a computer program or hardware-maintained structure can use to manage a cache of information.

Placement Policy:

Identification Policy

Ch.7 : Register Transfer and Microoperations

Microoperations

The operations on the data in registers are called microoperations.

A microoperation is an elementary operation performed on information stored in one or more registers. The result of the operation may replace the previous binary information of a register or may be transferred to another register.

Example: a++ (result replaces the previous register contents); a = b + c; (result is transferred to another register)

Register Transfer Language

The symbolic notation used to describe the microoperation transfers among registers is called Register Transfer Language.

Information transfer from one register to another is designated in symbolic form by means of a replacement operator,

$R1 \leftarrow R2$: information is transferred from $R2$ to $R1$

Under a predefined condition/Control function

If $P = 1$ then $R1 \leftarrow R2$

Symbol: $P : R1 \leftarrow R2$
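The conditional transfer P : R1 ← R2 can be sketched in code (register names follow the notation above):

```python
# Control function P gates the register transfer R1 <- R2
R1, R2 = 0, 5
P = 1            # control signal

if P == 1:       # transfer occurs only while P is asserted
    R1 = R2      # contents of R2 replace the contents of R1

print(R1)  # 5
```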


Bus & Memory Transfer

A digital computer has many registers, and rather than connecting wires between all registers to transfer information between them, a common bus is used.

Bus is a path over which info is transferred from any of several sources to any of several destination.

From a register to the bus: $Bus \leftarrow R$

From the bus to a register: $R \leftarrow Bus$

Types of Bus:

Bus Arbitration

Bus master: the device that is allowed to initiate data transfers on the bus at any given time is called the bus master.

Bus arbitration is the process by which the next device to become the bus master is selected and bus mastership is transferred to it. Selection of the bus master is usually done on a priority basis.

There are two approaches: centralized arbitration and distributed arbitration.

Memory (RAM)

Memory is usually accessed in a computer by putting the desired address in a special register, the Memory Address Register (MAR or AR).

When memory is accessed, the contents of the MAR are sent to the memory unit's address lines.

Memory read: $R1 \leftarrow M[MAR]$

  • The contents of the MAR are sent to the memory address lines
  • Read = 1
  • The contents of the addressed location are put on the memory's output data lines
  • These are sent over the bus and loaded into register R1

Memory write: $M[MAR] \leftarrow R1$

  • The contents of the MAR are sent to the memory address lines
  • Write = 1
  • The contents of R1 are sent over the bus to the data input lines of the memory
  • The values are loaded into the specified address in the memory
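The read and write sequences above can be sketched with main memory modeled as a simple list (illustrative only):

```python
# Main memory modeled as a list of words; MAR selects the location
memory = [0] * 16
MAR = 3

# Memory write: M[MAR] <- R1
R1 = 42
memory[MAR] = R1   # R1's value goes to the addressed location

# Memory read: R1 <- M[MAR]
R1 = 0             # clobber R1 to show the read restores it
R1 = memory[MAR]   # addressed location drives the data lines into R1

print(R1)  # 42
```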

  • cil for a circular shift left
  • cir for a circular shift right
  • ashl for an arithmetic shift left
  • ashr for an arithmetic shift right
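A software sketch of these four shift microoperations on an 8-bit register (function names mirror the mnemonics above; values are treated as unsigned 8-bit patterns, with ashr replicating the sign bit):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1   # 0xFF for an 8-bit register

def cil(x):   # circular shift left: MSB wraps around to LSB
    return ((x << 1) | (x >> (WIDTH - 1))) & MASK

def cir(x):   # circular shift right: LSB wraps around to MSB
    return ((x >> 1) | ((x & 1) << (WIDTH - 1))) & MASK

def ashl(x):  # arithmetic shift left: 0 shifted into LSB
    return (x << 1) & MASK

def ashr(x):  # arithmetic shift right: sign bit is replicated
    sign = x & (1 << (WIDTH - 1))
    return (x >> 1) | sign

x = 0b10010110
print(format(cil(x),  "08b"))  # 00101101
print(format(cir(x),  "08b"))  # 01001011
print(format(ashl(x), "08b"))  # 00101100
print(format(ashr(x), "08b"))  # 11001011
```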

Hardware implementation of shift microoperations

Out of Syllabus

Page Replacement Algorithm

A page replacement algorithm decides which memory page to page out (swap out, write to disk) when a new page of memory needs to be allocated.

When a program starts execution

  • one or more pages transferred into main memory and
  • the page-table is set to indicate their position

The program is executed from main memory until it attempts to reference a page that is still in auxiliary or secondary memory. This is called Page Fault.

Page Fault: If the required page is not present in main memory the page fault occurs, which requires I/O operation.

The goal of a replacement policy is to remove the page that is least likely to be referenced in the immediate future.

Some replacement policies: FIFO (First-In First-Out), LRU (Least Recently Used), LFU (Least Frequently Used), and Optimal page replacement.
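As a sketch, the LRU policy can be simulated with an ordered dictionary; `count_page_faults` is a hypothetical helper, not from the notes:

```python
from collections import OrderedDict

def count_page_faults(refs, frames):
    """Count page faults for an LRU policy over a reference string."""
    mem = OrderedDict()                # page -> None, least recently used first
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)      # hit: mark as most recently used
        else:
            faults += 1                # page fault: page not in main memory
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[page] = None
    return faults

print(count_page_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6
```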