Operating System

Created by: BBorhan
Tag: Year 3 Term 1

Resources


Introduction & Basics of OS 


Computer System Architecture

Computer System Architecture refers to the design and organization of a computer system's components, covering the hardware and the software that allow the system to function effectively.

Components in Computer Architectures

Computer System Structure/Components: Four Components

  • Hardware
  • Operating System
  • Application programs
  • Users

Operating System

An operating system is a program that controls the execution of application programs and acts as an interface between the user of a computer and the computer hardware.

More definitions

Components of OS

Functions of Operating Systems

Goals of Operating System

The diagram below outlines the main goals of an operating system and their sub-components.

%%{init: {'theme': 'base', 'themeVariables': { 'fontSize': '16px' }}}%%
graph LR
    OS["Operating System Goals"]
    OS --> Convenience["Convenience (User-friendly)"]
    OS --> Efficiency["Efficiency (Best use of all hardware)"]
    OS --> Portability["Portability"]
    OS --> Reliability["Reliability"]
    OS --> Scalability["Scalability (Updating)"]
    OS --> Robustness["Robustness (Handling errors)"]

    Convenience --> UI["User Interface"]
    Convenience --> EasyUse["Ease of Use"]

    Efficiency --> ResourceUtil["Resource Utilization"]
    Efficiency --> PerformanceOpt["Performance Optimization"]

    Portability --> CrossPlatform["Cross-platform Compatibility"]
    Portability --> HardwareIndep["Hardware Independence"]

    Reliability --> Stability["System Stability"]
    Reliability --> Consistency["Consistent Performance"]

    Scalability --> SysUpgrades["System Upgrades"]
    Scalability --> ExpandCapacity["Expandable Capacity"]

    Robustness --> ErrorDetection["Error Detection"]
    Robustness --> ErrorRecovery["Error Recovery"]

Operating System Services:

Types of OS

System Call

Types of system call

Process Control

  • end, abort
  • load, execute
  • create process, terminate process
  • get process attributes, set process attributes
  • wait for time
  • wait event, signal event
  • allocate and free memory

Device Management

  • request device, release device
  • read, write, reposition
  • get/set device attributes
  • logically attach or detach devices

File management

  • create, delete file
  • open, close
  • read, write, reposition
  • get file attributes, set file attributes
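
A minimal sketch of the file-management calls, assuming a POSIX system (the file name `notes.txt` is an arbitrary example): it creates a file, writes to it, repositions, reads the data back, and closes it.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* create/open */
    int fd = open("notes.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char msg[] = "hello, file system\n";
    write(fd, msg, strlen(msg));      /* write */

    lseek(fd, 0, SEEK_SET);           /* reposition to the beginning */

    char buf[64] = {0};
    ssize_t n = read(fd, buf, sizeof buf - 1);   /* read */
    if (n > 0) printf("read back: %s", buf);

    close(fd);                        /* close */
    return 0;
}
```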

Information maintenance

  • get/set time or date
  • get/set system data
  • get/set process attributes, file attributes, device attributes

Communications

  • create, delete communication connection
  • send, receive message
  • transfer status information
  • attach or detach remote device
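
A minimal sketch of interprocess communication, assuming a POSIX system: a pipe is created as the communication connection, the child process sends a message, and the parent receives it.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }   /* create communication connection */

    pid_t pid = fork();
    if (pid == 0) {
        /* child: send a message through the write end */
        const char msg[] = "hello from child";
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);                                  /* close (delete) the connection */
        _exit(0);
    }

    /* parent: receive the message from the read end */
    char buf[64];
    close(fd[1]);
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```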

Dual Modes of Operation

The dual modes of operation (user mode and kernel mode) are used to implement protection.

Questions

Process


A process is a sequential program in execution. A process is the fundamental unit of computation for the computer. The components of a process are:

  1. Object Program
  1. Data
  1. Resources
  1. Status of the process execution

The object program is the code to be executed. The data is used while executing the program. During execution, the program may also require some resources. The last component records the status of the process's execution. A process can run to completion only when all requested resources have been allocated to it.

Process = Program (Code) + Running environment (operands and other information)

| Aspect | Program | Process |
| --- | --- | --- |
| Nature | Static object | Dynamic object |
| Storage Location | Resides in secondary storage (e.g., hard drive) | Resides in main memory (RAM) |
| Execution | Inactive until executed | Active, executing at any given time |
| Lifetime | Span time is unlimited | Span time is limited (ends when execution completes) |
| Entity Type | Passive entity (just a set of instructions) | Active entity (involves execution) |
| Representation | Expressed in a programming language | Expressed in assembly or machine language (for execution) |

Process as a Data Structure

PCB

A Process Control Block (PCB) is a data structure used by the operating system to store information about a specific process.

Components of PCB

  • Pointer: points to another PCB to maintain the scheduling list.
  • Process state
  • Program Counter: the address of the next instruction to be executed.
  • CPU Registers: registers such as AC, GPR, IR, etc.
  • CPU Scheduling information: process priority, pointers to scheduling queues, and any other scheduling parameters.
  • Memory-management information: base and limit registers, page or segment tables, etc.
  • Accounting information: amount of CPU and real time used, time limits, account numbers, job or process numbers.
  • I/O Status information: list of I/O devices allocated to the process, a list of open files, etc.


Context

The contents of a process's PCB are collectively known as the “context” of that process.
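
To make the PCB concrete, here is a simplified sketch of how such a structure might be declared in C; the field names, types, and sizes are illustrative assumptions, not the layout of any real kernel.

```c
#include <stdint.h>

/* Possible process states (simplified) */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A toy Process Control Block; real kernels store far more. */
struct pcb {
    struct pcb     *next;            /* pointer to the next PCB in a scheduling list */
    enum proc_state state;           /* current process state */
    uint64_t        program_counter; /* address of the next instruction */
    uint64_t        registers[16];   /* saved general-purpose registers */
    int             priority;        /* CPU-scheduling information */
    uint64_t        base, limit;     /* memory-management information */
    uint64_t        cpu_time_used;   /* accounting information */
    int             open_files[16];  /* I/O status: open file descriptors */
    int             pid;             /* process identifier */
};
```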

Process States

As a process executes, it changes state. The process state is defined as the current activity of the process.

New: A process that has just been created.

Ready: A process is said to be ready if it only needs the CPU to execute. A ready process is runnable but has temporarily stopped running to let another process run.

Running: A process that is currently being executed, i.e., the process that holds the CPU. A running process possesses all the resources needed for its execution, including the processor.

Terminated: The process has finished execution.

Blocked/Waiting: A process that is waiting for some event (such as an I/O completion) to happen before it can proceed.

Transition

Process states : Non-preemptive

CPU vs IO Bound Process

Process Scheduling

The scheduling mechanism is the part of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process Queues

Schedulers

A scheduler is a decision maker that moves processes from one scheduling queue to another or allocates the CPU for execution.

Types of schedulers

| Aspect | Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler |
| --- | --- | --- | --- |
| Function | Job scheduler | CPU scheduler | Swapping scheduler |
| Speed | Slower than short-term scheduler | Very fast | Moderate speed, between long and short term |
| Control over Multiprogramming | Controls the degree of multiprogramming | Minimal control over multiprogramming | Reduces the degree of multiprogramming |
| Presence in Time-Sharing Systems | Usually absent or minimal | Minimal role | Commonly used |
| Selection | Chooses processes from a pool and loads them into memory for execution | Selects processes ready for CPU execution | Reintroduces a process into memory for resumed execution |
| Process State Transition | Transitions processes from New to Ready state | Transitions processes from Ready to Running state | Does not directly involve state transitions |
| Process Mix | Selects a balanced mix of I/O-bound and CPU-bound processes | Frequently selects a process for the CPU | - |

Operations

  A parent process may terminate the execution of one of its child processes for reasons such as:

  1. The task given to the child is no longer required.
  1. The child has exceeded its usage of some of the resources it has been allocated.
  1. The operating system does not allow a child to continue if its parent terminates (cascading termination).

CPU Scheduling

CPU scheduling is the process of deciding which process gets the CPU and switching between the processes being executed. It forms the basis of multiprogrammed systems.

Scheduling Criteria

[In parentheses: optimization criteria]

CPU Scheduling Types

| Aspect | Preemptive Scheduling | Non-Preemptive Scheduling |
| --- | --- | --- |
| Definition | The CPU can be taken away from a running process before it finishes its execution. | Once a process starts executing, it runs to completion or until it voluntarily releases the CPU. |
| Control | The operating system has control and can interrupt processes. | The process controls when it gives up the CPU. |
| Interruptions | A process can be interrupted by the scheduler to give CPU time to another process. | A process cannot be interrupted; it must finish its execution. |
| Context Switching | Frequent context switching due to preemption. | Less frequent context switching as processes are not preempted. |
| Examples | Round Robin, Shortest Remaining Time First (SRTF). | First-Come, First-Served (FCFS), non-preemptive Priority Scheduling. |
| Advantages | Better responsiveness and fairness in handling processes. | Simpler to implement, no need for frequent context switching. |
| Disadvantages | Increased overhead due to context switching. | Can lead to poor response times, especially with long-running processes. |

[Note: Assume that no process performs any I/O operation.]

Algorithms

Non preemptive
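
As one non-preemptive example, the sketch below computes waiting and turnaround times under First-Come, First-Served (FCFS). The burst times are made-up sample values, all processes are assumed to arrive at time 0, and (per the note above) none performs I/O.

```c
#include <stdio.h>

int main(void) {
    /* Sample (assumed) CPU burst times; all processes arrive at time 0. */
    int burst[] = {5, 3, 8, 2};
    int n = sizeof burst / sizeof burst[0];

    int time = 0;
    double total_wait = 0, total_turnaround = 0;

    /* FCFS: serve processes strictly in arrival order, never preempting. */
    for (int i = 0; i < n; i++) {
        int waiting = time;                 /* time spent waiting in the ready queue */
        int completion = time + burst[i];   /* finishes after its full burst */
        int turnaround = completion;        /* arrival at 0, so turnaround = completion */

        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turnaround += turnaround;
        time = completion;
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           total_wait / n, total_turnaround / n);
    return 0;
}
```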

Preemptive
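
As a preemptive example, the following sketch simulates Round Robin on the same kind of made-up burst times with an assumed time quantum of 2, preempting a process whenever its quantum expires.

```c
#include <stdio.h>

int main(void) {
    /* Sample (assumed) burst times; all processes arrive at time 0. */
    int burst[] = {5, 3, 8, 2};
    int n = sizeof burst / sizeof burst[0];
    int remaining[4], waiting[4] = {0};
    const int quantum = 2;                  /* assumed time quantum */

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    int time = 0, done = 0;
    /* Cycle through the ready processes, giving each at most one quantum per turn. */
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                /* waiting time = completion time - burst time (arrival at 0) */
                waiting[i] = time - burst[i];
            }
        }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d: waiting=%d\n", i + 1, waiting[i]);
        total += waiting[i];
    }
    printf("average waiting = %.2f\n", total / n);
    return 0;
}
```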

Memory Management

Memory Management is the process of efficiently handling the computer’s memory, which includes the allocation, deallocation and organization of memory during execution.

Binding of Instruction Data to Memory

  1. Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated.
  1. Load time: The time taken to link all related program files and load them into main memory. Relocatable code must be generated if the memory location is not known at compile time.
  1. Execution time: The time taken by the processor to execute the program. If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

Technique to optimize use and increase system efficiency

  1. Dynamic Loading
    1. A program's code and data are loaded into memory only when they are needed during execution.
    1. Improves memory utilization and saves memory; no OS support is required.
  1. Dynamic Linking
    1. Postpones the linking of libraries to a program until run time, instead of linking them at compile time.
    1. A small piece of code, the stub, is used to locate the library in memory.
    1. Saves memory by sharing common libraries; needs support from the OS. (See the sketch after this list.)
  1. Overlays
    1. When a process is larger than the available memory, only the necessary parts (overlays) are loaded into memory while other parts are swapped in/out.
    1. Designed and managed by the programmer; no OS support is needed.
  1. Swapping
    1. Temporarily moves a process (or a part of it) out of memory (RAM) to secondary storage to free up memory for other processes.
    1. When the process is needed again, it is swapped back into memory.
    1. Process: Program execution → memory full or process waiting → swap out (roll out) a selected process from main memory to secondary storage (swap space) → swap in the process when it needs the CPU or is ready for execution, from secondary storage back to main memory → process resumption, where the process resumes its execution from where it left off.
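
To make dynamic loading/linking concrete, here is a small sketch assuming a Linux system where the math library is available as `libm.so.6`: the library is loaded at run time with the POSIX `dlopen`/`dlsym` interface and a symbol is looked up in it (link with `-ldl` on older glibc).

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the shared library into the process at run time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the address of the "cos" function inside the library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);   /* unload the library when it is no longer needed */
    return 0;
}
```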

Memory Management Technique

  1. Contiguous
    • each process in the system is allocated a single contiguous block of memory during its execution
    • Types
      • Single Partition Scheme
      • Multiple Partitions Scheme: main memory is divided into a number of partitions, where each partition should contain only one process (partitions may be of fixed or variable size, as compared below)
      | Feature | Multiple Fixed Partitions | Multiple Variable Partitions |
      | --- | --- | --- |
      | Partition Size | Fixed partition sizes defined during system generation. | Variable partition sizes based on process requirements. |
      | Memory Allocation | Any process can fit into an available partition if its size ≤ partition size. | Processes are allocated exactly the memory they require. |
      | Efficiency of Memory Usage | Inefficient due to internal fragmentation. | More efficient, with no internal fragmentation. |
      | Max Active Processes | Fixed, limited by the number of partitions. | Flexible, limited by the available memory. |
      | Swapping | Processes can be swapped in and out of partitions. | Compaction is needed to manage memory, especially due to external fragmentation. |
      | Operating System Overhead | Low overhead. | Higher overhead due to memory compaction. |
      | External Fragmentation | Does not occur, as memory is statically partitioned. | Can occur, requiring compaction to manage free memory. |
      | Implementation Complexity | Simple and easy to implement. | More complex due to dynamic memory allocation and compaction. |
    • Fragmentation: a situation in memory management when memory space is used inefficiently, leading to wasted or unusable memory

      Types

      | Feature | Internal Fragmentation | External Fragmentation |
      | --- | --- | --- |
      | Definition | Wasted memory within a partition that is not utilized by a process. | Wasted memory outside the allocated memory regions, leaving gaps between blocks. |
      | Cause | Occurs when a process is allocated more memory than it needs, causing unused space within the partition. | Occurs when free memory is scattered in small blocks, but no contiguous space is large enough for a new process. |
      | Location of Wasted Space | Inside the allocated partition. | Between allocated memory blocks. |
      | Memory Allocation | Memory is allocated in fixed-size blocks, leading to unused space if the process is smaller than the block size. | Memory is allocated dynamically, and free space is fragmented. |
      | Impact on Performance | Leads to inefficient use of memory, but doesn't require extra work to manage. | Causes difficulty in allocating memory, potentially leading to delays or inability to allocate memory even if total free space is sufficient. |
      | Management Complexity | Simple to manage; internal fragmentation is a result of fixed-size partitions. | More complex to manage; requires techniques like compaction to reduce fragmentation. |
      | Solution | Cannot be fully avoided, but can be minimized by using smaller partition sizes. | Can be resolved by techniques such as memory compaction or more efficient allocation strategies. |
      | Example | A partition of 1 KB allocated to a process requiring 900 bytes results in 100 bytes of internal fragmentation. | Free memory blocks of 100 KB, 50 KB, and 25 KB scattered across memory might prevent allocation of a 120 KB process. |
      | Advantages | Simple to manage; low overhead in allocation; memory is allocated quickly and predictably. | Flexible memory usage, as memory can be allocated dynamically; no need to allocate more space than required by the process. |
      | Disadvantages | Inefficient use of memory due to unused space within partitions; can lead to significant wastage if partitions are large and processes are smaller. | Inefficient memory use due to scattered free space; may cause memory allocation failures even when there is sufficient total free memory; requires memory compaction, which adds overhead. |

    • Partition Selection Policy: when a process requests memory → the OS must decide which free memory partition to allocate → guided by the partition selection policy
      • First Fit: the memory manager scans the list of free blocks from the beginning → allocates the first block that is big enough
      • Next Fit: starts from the last allocated block → allocates the process to the next block that is big enough
      • Best Fit: searches the entire list of blocks → finds the smallest block that is big enough for the process
      • Worst Fit: searches the entire list of blocks → finds the largest block among those big enough for the process
  1. Non-contiguous: a process may be stored in non-contiguous memory locations.

    Types

    1. Paging
      • Frames: main memory is divided into a number of equal-sized blocks called frames
      • Pages: each process is divided into a number of equal-sized blocks of the same length as frames
      • From logical to physical addresses
        • CPU generated/logical address : page number and page offset
        • Page table contains the base address of each page in physical memory
        • If the logical address space contains 2^m addressing units and the page size is 2^n addressing units, then the high-order (m - n) bits give the page number and the low-order n bits give the page offset.

        Example

        Logical memory = 4 B = 2^2, so m = 2

        Page size = 2^1 = 2 B, so n = 1

        Logical address = 2 bits

        Page number bits = m - n = 2 - 1 = 1

        Offset bits = n = 1

        Primary memory = 8 B = 2^3, so m = 3

        Frame size = 2^1 = 2 B, so n = 1

        Physical address = 3 bits

        Frame number bits = m - n = 3 - 1 = 2

        Offset bits = n = 1

        • Find physical address using decimal value

          Physical address = (frame number * frame size) + offset

    1. Segmentation: Segmentation is a memory management technique that divides a process's memory into variable-sized blocks called segments, based on the logical division of the program. Segments are logical units such as functions, arrays, methods, etc.
      • base: contains the starting physical address of the segment
      • limit: specifies the length of the segment
      • STBR: segment table base register, points to the segment table's location in memory
      • STLR: segment table length register, indicates the number of segments used by a program
      • A logical-address space is a collection of segments. A logical address consists of a two-tuple: <segment-number, offset>
      • The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit.
| Feature | Paging | Segmentation |
| --- | --- | --- |
| Definition | A memory management technique that divides the process into fixed-sized blocks, called pages, which are mapped to physical memory frames. | A memory management technique that divides the process into variable-sized segments, each representing logical units of the program. |
| Division of Program | Divides the program into fixed-sized pages. | Divides the program into variable-sized segments. |
| Responsibility | Managed by the operating system. | Managed by the compiler. |
| Size Determination | Page size is fixed and determined by hardware. | Segment size varies and is determined by the user. |
| Speed | Generally faster than segmentation. | Generally slower than paging. |
| Fragmentation Type | Internal fragmentation. | External fragmentation. |
| Mapping Table | Uses a page table to map logical pages to physical memory frames. | Uses a segment table with base and limit addresses for each segment. |
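
Tying the paging example above together, here is a minimal sketch of the logical-to-physical translation described earlier (physical address = frame number × frame size + offset); the page-table contents and sizes are made-up values for illustration.

```c
#include <stdio.h>

#define PAGE_SIZE 2          /* 2^1 bytes per page/frame, as in the example above */

/* Assumed page table: page_table[page number] = frame number */
static const int page_table[] = {3, 1};   /* page 0 -> frame 3, page 1 -> frame 1 */

/* Translate a logical address into a physical address. */
int translate(int logical) {
    int page   = logical / PAGE_SIZE;      /* high-order bits: page number */
    int offset = logical % PAGE_SIZE;      /* low-order bits: page offset  */
    int frame  = page_table[page];         /* look up the frame in the page table */
    return frame * PAGE_SIZE + offset;     /* physical = frame * frame size + offset */
}

int main(void) {
    for (int addr = 0; addr < 4; addr++)   /* 2-bit logical address space (4 bytes) */
        printf("logical %d -> physical %d\n", addr, translate(addr));
    return 0;
}
```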

File System

A file system is the method and data structures that an operating system uses to organize and manage files on storage devices.

File system consists of two parts

| Aspect | Sequential Access | Direct Access |
| --- | --- | --- |
| Access Pattern | Data is accessed in a fixed, linear sequence. | Data can be accessed in any order. |
| Navigation | Step-by-step; must go through preceding data. | Can jump directly to any data block. |
| Efficiency | Efficient for linear access (e.g., reading from start to end). | Efficient for quick access to specific data points. |
| Complexity | Simple to implement. | More complex to implement. |
| Best Use Case | Text files, logs, sequential data processing. | Databases, index files, applications needing random access. |
| Advantage | Minimal overhead, straightforward access. | Fast access to specific data. |
| Disadvantage | Inefficient for random access. | Less efficient for reading large files sequentially. |
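
A small sketch of the two access patterns, assuming a POSIX system (the file name `data.bin` and the fixed record size are illustrative assumptions): sequential access reads forward record by record, while direct access seeks straight to the record it wants.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define RECORD_SIZE 32   /* assumed fixed record size in bytes */

int main(void) {
    char buf[RECORD_SIZE];
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Sequential access: read records one after another from the start. */
    while (read(fd, buf, RECORD_SIZE) == RECORD_SIZE) {
        /* process each record in order */
    }

    /* Direct access: jump straight to record number 7, then read it. */
    off_t record = 7;
    lseek(fd, record * RECORD_SIZE, SEEK_SET);
    if (read(fd, buf, RECORD_SIZE) == RECORD_SIZE)
        printf("read record %ld directly\n", (long)record);

    close(fd);
    return 0;
}
```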

File Allocation: the strategies employed by an operating system to distribute storage space efficiently on disks or other storage devices.

| Aspect | Contiguous File Allocation | Linked File Allocation | Indexed File Allocation |
| --- | --- | --- | --- |
| Storage Pattern | Files stored in a single, continuous block | Files stored in non-contiguous blocks with pointers | Files stored in non-contiguous blocks with an index block |
| Access Speed | Fast access (sequential) | Slower access (following pointers) | Fast access (uses index block) |
| Fragmentation | Prone to fragmentation | Low fragmentation | Low fragmentation |
| File Size Flexibility | Limited by contiguous free space | Flexible, any size | Flexible, any size |
| Disk Space Efficiency | Efficient for continuous storage | Less efficient due to pointer storage | Slightly less efficient due to index block |
| Risk of Data Loss | Low, as the entire file is in one block | Higher, as broken pointers can lose access to data | Low, as the index block can be duplicated |
| Best for | Large files with sequential access needs (e.g., video files) | Files of varying sizes in a fragmented disk space | Files needing random access or large files with redundancy needs |

Thread

A thread is a lightweight unit of execution within a process; threads belonging to the same process share its address space and resources.

| Aspect | User-Level Threads | Kernel-Level Threads |
| --- | --- | --- |
| Definition | Threads managed at the user level without kernel intervention | Threads managed directly by the operating system kernel |
| Creation & Management Speed | Faster to create and manage | Slower to create and manage |
| Implementation | Implemented by a thread library at the user level | Directly supported by the operating system |
| Operating System Dependency | Can run on any operating system | Specific to the operating system |
| Support Level | Support is provided at the user level, known as user-level threads | Support is provided by the kernel, known as kernel-level threads |
| Multiprocessing Capability | Limited multiprocessing due to lack of direct kernel support | Kernel routines can be multithreaded, enabling better multiprocessing |

| Aspect | Process | Thread |
| --- | --- | --- |
| Definition | Called a "heavyweight process" that operates independently | Called a "lightweight process" that operates within a process |
| Switching Requirement | Process switching requires interaction with the operating system | Thread switching doesn't need an OS call or kernel interrupt |
| Memory & Resource Allocation | Each process has its own memory and file resources | Threads share the same memory and file resources |
| Blocking Behavior | If one process is blocked, no other process can execute until it's unblocked | If one thread is blocked, other threads in the same process can still execute |
| Resource Usage | Multiple processes use more resources compared to threads | Multiple threads use fewer resources than processes |
| Interaction & Independence | Processes operate independently from each other | Threads can access and modify each other's stacks within the same process |
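
As a small sketch of how threads share a process's memory, assuming a POSIX system with pthreads (compile with `-pthread`): two threads increment the same global counter, something separate processes could not do without explicit shared-memory setup.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared by all threads in the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* protect the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* both threads run in the same address space */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %ld\n", counter); /* 200000: the threads updated shared memory */
    return 0;
}
```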