CPU Scheduling in Operating Systems

CPU scheduling is the process by which the operating system selects the next process to be executed on the CPU. The objective is to achieve high throughput, low latency, and fair allocation of resources among competing processes. This involves managing various scheduling algorithms and policies tailored to meet specific system requirements and workload characteristics.

The PDF below discusses CPU scheduling in detail and in simple language; we hope it helps you build a better understanding.

Topics Covered in the PDF:

  • What is CPU?
  • What is CPU Scheduling?
  • CPU Scheduling Criteria
  • Types of CPU Scheduling
  • CPU Scheduling Algorithms

Types of CPU Scheduling

1.  Preemptive Scheduling:

  • In preemptive scheduling, the operating system can interrupt a currently running process and allocate the CPU to another process with higher priority or a more urgent task.
  • The CPU can be preempted (taken away) from a process even if it has not completed its execution.
  • Preemptive scheduling is commonly used in environments where responsiveness and fairness are critical, such as interactive systems or real-time operating systems.
  • Examples of preemptive scheduling algorithms include Shortest Remaining Time First (SRTF), Round Robin, and Priority Scheduling with preemption.
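To make preemption concrete, here is a minimal sketch of Shortest Remaining Time First, one of the preemptive algorithms listed above. The process names, arrival times, and burst times are illustrative values, not taken from the text; the simulation advances one time unit at a time so that a newly arrived shorter job can preempt the running one.

```python
# Minimal SRTF sketch: at every time unit, run the process with the
# least remaining burst time; a new arrival can preempt the running one.
# All process values below are illustrative examples.

def srtf(processes):
    """processes: list of (name, arrival, burst). Returns completion times."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: a for name, a, _ in processes}
    done, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                      # CPU idles until the next arrival
            time = min(arrival[n] for n in remaining)
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1            # run for one time unit
        time += 1
        if remaining[current] == 0:
            done[current] = time
            del remaining[current]
    return done

# P3 arrives at t=4 with a 1-unit burst and preempts the longer jobs.
print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
# → {'P3': 5, 'P2': 7, 'P1': 12}
```

Note how P1, despite arriving first, finishes last: each shorter arrival takes the CPU away from it, which is exactly the behavior the bullets above describe.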


2. Non-Preemptive Scheduling:

  • In non-preemptive scheduling, a process continues to run until it voluntarily relinquishes the CPU, typically by completing its execution or entering a waiting state.
  • Once a process starts executing, it retains control of the CPU until it finishes or explicitly gives up control.
  • Non-preemptive scheduling is simpler to implement and may be suitable for systems where fairness is less critical, such as batch processing environments.
  • Examples of non-preemptive scheduling algorithms include First-Come, First-Served (FCFS), Shortest Job First (SJF), and Priority Scheduling without preemption.
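The simplest of the non-preemptive algorithms listed above, FCFS, can be sketched in a few lines. The job values here are illustrative, chosen to show the convoy effect: once a process starts, it runs to completion, so short jobs queue up behind a long one.

```python
# Minimal FCFS sketch: processes run to completion in arrival order.
# The job values below are illustrative examples, not from the text.

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns (name, start, completion) tuples in execution order."""
    time = 0
    schedule = []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)      # CPU may idle until the process arrives
        time = start + burst            # runs to completion (no preemption)
        schedule.append((name, start, time))
    return schedule

# A long job ahead of two short ones: the short jobs wait 10 and 12 units.
jobs = [("P1", 0, 10), ("P2", 1, 2), ("P3", 2, 2)]
print(fcfs(jobs))  # → [('P1', 0, 10), ('P2', 10, 12), ('P3', 12, 14)]
```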

CPU Scheduling Criteria:

CPU scheduling criteria are the metrics or objectives that a scheduling algorithm aims to optimize or satisfy when deciding which process to execute next on the CPU. The primary criteria used to evaluate CPU scheduling algorithms include:

  • CPU Utilization: Maximizing CPU utilization ensures that the CPU remains busy and does not remain idle, thereby making efficient use of computing resources.
  • Throughput: Throughput measures the number of processes completed per unit of time. A good scheduling algorithm should aim to maximize throughput to ensure efficient processing of tasks.
  • Turnaround Time: Turnaround time is the total time taken to execute a process from submission to completion, including waiting time and execution time. Minimizing turnaround time ensures that processes are executed quickly, improving overall system responsiveness.
  • Waiting Time: Waiting time is the total time a process spends waiting in the ready queue before getting CPU time. Minimizing waiting time helps in reducing process wait times and improving user satisfaction.
  • Response Time: Response time is the time taken from when a request is submitted until the first response is produced. Minimizing response time is crucial for interactive systems to provide quick feedback to users.
  • Fairness: Fairness ensures that all processes receive a fair share of CPU time, preventing any process from being starved or monopolizing system resources.
  • Predictability: Predictability refers to the ability of a scheduling algorithm to provide consistent and predictable performance, which is essential for real-time systems and time-sensitive applications.
  • Priority: Priority scheduling aims to execute higher-priority processes before lower-priority ones, allowing critical tasks to receive preferential treatment.
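Several of the criteria above follow directly from a process's arrival, burst, and completion times: turnaround time is completion minus arrival, and waiting time is turnaround minus burst. The following sketch computes both for a completed schedule; the process rows are illustrative values, not from the text.

```python
# Sketch of computing the criteria above for a completed schedule.
# turnaround = completion - arrival; waiting = turnaround - burst.
# The rows below are illustrative values.

def metrics(rows):
    """rows: list of (name, arrival, burst, completion)."""
    out = {}
    for name, arrival, burst, completion in rows:
        turnaround = completion - arrival
        out[name] = {"turnaround": turnaround,
                     "waiting": turnaround - burst}
    return out

# Example: the result of running three processes back to back (FCFS-style).
rows = [("P1", 0, 5, 5), ("P2", 1, 3, 8), ("P3", 2, 4, 12)]
m = metrics(rows)
avg_waiting = sum(v["waiting"] for v in m.values()) / len(m)
print(m)           # P1 waits 0, P2 waits 4, P3 waits 6
print(avg_waiting) # → 3.33...
```

Average waiting time computed this way is the usual figure of merit when comparing scheduling algorithms on the same workload.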

CPU Scheduling Algorithms:

There exist numerous scheduling algorithms, each with its own strengths and weaknesses. Here are a few prominent ones:

  1. First-Come, First-Served (FCFS): As the name suggests, this algorithm schedules processes based on their arrival time. The CPU is allocated to the process that arrives first and is released only when that process completes its execution. While simple, FCFS can suffer from the convoy effect, where short processes are delayed behind long ones.
  2. Shortest Job Next (SJN) or Shortest Job First (SJF): SJN selects the process with the smallest burst time (execution time) next. It provably minimizes average waiting time, and hence average turnaround time, for a given set of processes. However, it requires advance knowledge of burst times, which is rarely available in real-world scenarios.
  3. Round Robin (RR): In RR scheduling, each process is assigned a fixed time quantum during which it can execute on the CPU. Once a time quantum elapses, the process is preempted, and the CPU is allocated to the next process in the queue. RR ensures fairness and prevents starvation, albeit at the cost of increased context switching overhead.
  4. Priority Scheduling: Priority scheduling assigns priorities to processes, with higher priority processes being scheduled first. This can be either preemptive or non-preemptive. While effective in ensuring timely execution of critical tasks, priority scheduling can lead to starvation of lower priority processes if not implemented carefully.
  5. Multi-Level Queue (MLQ): MLQ divides processes into separate queues based on priority or other criteria, each with its own scheduling algorithm. This approach is suitable for systems with diverse workloads and allows for fine-tuning of scheduling policies for different classes of processes.
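Of the algorithms above, Round Robin is the easiest to simulate with a queue. The sketch below uses a time quantum of 2 and assumes, for simplicity, that all processes arrive at time 0; the process names and burst times are illustrative.

```python
from collections import deque

# Round Robin sketch: each process runs for at most one quantum, then
# goes to the back of the ready queue if work remains. All processes
# are assumed to arrive at t=0; the values below are illustrative.

def round_robin(processes, quantum):
    """processes: list of (name, burst_time). Returns completion times."""
    queue = deque(processes)
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, requeued
        else:
            done[name] = time                      # finished within quantum
    return done

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

The short job P3 finishes quickly even though it entered the queue last, which is the fairness property the description above highlights; the repeated requeueing is where the context-switching overhead comes from.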

Conclusion:

CPU scheduling lies at the heart of operating system design, balancing the conflicting goals of resource utilization, responsiveness, and fairness. By employing efficient scheduling algorithms and policies, operating systems can optimize system performance and enhance user experience across a wide range of applications and workloads. Understanding the intricacies of CPU scheduling is essential for system administrators, developers, and anyone seeking to delve deeper into the workings of modern computing systems.

Related Questions

What is CPU scheduling?
CPU scheduling is the process of determining which process should be allocated the CPU next when multiple processes are competing for the CPU's attention.

Why is CPU scheduling necessary?
CPU scheduling is necessary to maximize CPU utilization, ensure fairness among processes, and minimize response time and waiting time for processes in a multiprogramming environment.

What are the objectives of CPU scheduling?
The objectives of CPU scheduling include maximizing CPU utilization, minimizing turnaround time, minimizing waiting time, ensuring fairness, and maximizing throughput.

What are some common CPU scheduling algorithms?
Some common CPU scheduling algorithms include First-Come, First-Served (FCFS), Shortest Job Next (SJN) or Shortest Job First (SJF), Priority Scheduling, Round Robin (RR), Multilevel Queue Scheduling, and Multilevel Feedback Queue Scheduling.

What is FCFS?
FCFS is the simplest CPU scheduling algorithm, in which the process that arrives first is executed first. It operates on a FIFO (First In, First Out) basis.
