
What does "blocked" mean in an OS?

Published in OS Process States · 5 min read

In an operating system (OS), a program or process is "blocked" when the OS has temporarily halted its execution because it must wait for a specific operation or event to complete before it can proceed.

Understanding the 'Blocked' State

The "blocked" state is one of the fundamental states a process or thread can be in within an operating system's lifecycle. When a process enters this state, it signifies that it cannot make any further progress until an external event occurs or a required resource becomes available. This is distinct from a "ready" state (waiting for CPU time) or a "running" state (actively executing instructions).

  • What Gets Blocked? Typically, a process or a thread is what enters the blocked state. These are the units of execution managed by the OS.
  • How it Works: When a running process requests an operation that cannot be completed immediately (like reading data from a slow device), the OS intervenes. Instead of letting the process waste CPU cycles by constantly checking if the operation is done, the OS moves the process from the "running" state to the "blocked" (or "waiting") state. This frees up the CPU for other processes to run, improving overall system utilization. Once the requested operation finishes, the OS moves the blocked process back to the "ready" state, making it eligible to be scheduled for CPU execution again. You can learn more about process states on GeeksforGeeks.
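The running → blocked → ready cycle described above can be sketched as a toy state machine in Python (the state and transition names mirror this article, not any real kernel API):

```python
# Toy model of the process-state transitions described above.
# Real kernels use richer state sets, but the legal-transition idea is the same.

VALID_TRANSITIONS = {
    ("running", "blocked"),   # process requests I/O that cannot finish now
    ("blocked", "ready"),     # the awaited event completes
    ("ready", "running"),     # the scheduler dispatches the process
    ("running", "ready"),     # preemption: time slice expires
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "ready"

    def transition(self, new_state):
        if (self.state, new_state) not in VALID_TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
p.transition("running")   # scheduler picks the process
p.transition("blocked")   # it issues a slow read and must wait
p.transition("ready")     # the data arrives; eligible to run again
```

Note that a blocked process cannot jump straight back to running: it must first become ready and be scheduled again.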

Why Does a Process Get Blocked?

Processes are primarily blocked due to the need to wait for external resources or events. Common scenarios include:

  • Input/Output (I/O) Operations: This is the most frequent reason. When a process needs to read from a disk, write to a printer, send data over a network, or wait for user input from the keyboard, these operations often take significantly longer than CPU operations.
    • Example: A program trying to load a large file from a hard drive will block until the file data is read into memory.
  • Resource Contention: A process might require access to a shared resource (like a specific memory region, a file lock, or a peripheral device) that is currently being used by another process. It will block until the resource becomes available.
    • Example: Two processes trying to write to the same file simultaneously might lead to one blocking until the other releases its lock.
  • Inter-Process Communication (IPC): When processes communicate, one might block while waiting for a message, signal, or data from another process.
    • Example: A client process might block waiting for a response from a server process.
  • Timer Events: A process might explicitly put itself to sleep for a certain duration, effectively blocking until a timer expires.
    • Example: A program that updates a display every second will block for one second after each update.
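The timer case is the easiest to observe directly. In Python (used here purely as an illustration), time.sleep() blocks the calling thread until the interval elapses:

```python
import time

start = time.monotonic()
time.sleep(0.2)           # the calling thread blocks here; it uses no CPU while waiting
elapsed = time.monotonic() - start

# sleep() does not return until the timer expires, so at least 0.2 s passed.
print(f"blocked for {elapsed:.2f}s")
```

While the thread sleeps, the OS is free to run other processes on that core.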

Blocking vs. Non-Blocking Operations

The concept of a "blocked" state is closely tied to the design of I/O operations:

  • Execution Flow:
    • Blocking: The calling process pauses and waits until the operation completes before continuing execution.
    • Non-blocking: The calling process does not wait. The OS initiates the operation and immediately returns control, often with an indication that the operation is pending or partially complete.
  • CPU Usage:
    • Blocking: While blocked, the process uses no CPU cycles; the OS can schedule other processes.
    • Non-blocking: The process can continue executing other tasks while the I/O operation proceeds in the background.
  • Complexity:
    • Blocking: Simpler to program, as code executes sequentially.
    • Non-blocking: More complex to program, often requiring polling, callbacks, or event loops to manage pending operations.
  • Use Cases:
    • Blocking: Suitable for applications where waiting is acceptable or for simpler, less concurrent tasks.
    • Non-blocking: Essential for high-performance servers, real-time systems, and responsive user interfaces that must handle multiple tasks concurrently without freezing.

Most traditional I/O functions (like read(), write(), connect()) are blocking by default. However, modern OSes and programming models offer mechanisms for non-blocking I/O to improve concurrency and responsiveness.
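A minimal Python sketch of the difference, using a pipe and os.set_blocking() (the pipe stands in for any slow I/O source):

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)   # switch the read end to non-blocking mode

# With no data available, a blocking read would suspend the process.
# In non-blocking mode the kernel returns an error immediately instead.
try:
    os.read(r, 1024)
    got_error = False
except BlockingIOError:
    got_error = True        # EAGAIN/EWOULDBLOCK: the operation would block

os.write(w, b"hello")
data = os.read(r, 1024)     # data is ready now, so the read succeeds

print(got_error, data)      # True b'hello'
```

The non-blocking call returns instantly either way; it is the caller's job to retry or to wait for readiness via a mechanism like select() or epoll.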

Impact and Management of Blocked Processes

The existence of blocked processes is a normal and necessary part of OS operation. It allows for efficient resource management and multitasking.

  • System Responsiveness: By blocking processes waiting for I/O, the OS can ensure that the CPU is always busy with runnable tasks, making the system feel more responsive.
  • Concurrency: It enables multiple processes to share resources and time, even if some are waiting for slow operations.
  • Management: The OS kernel's scheduler is responsible for managing these states. It maintains various queues, such as a "wait queue" for blocked processes, and moves processes between states as events occur.
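The queue bookkeeping can be sketched in a few lines of Python (a deliberately simplified model; real schedulers keep one wait queue per event source, plus priorities and far more state):

```python
from collections import deque

ready = deque(["A", "B"])       # processes eligible for the CPU
waiting = {}                    # event -> processes blocked on that event

def block(proc, event):
    waiting.setdefault(event, []).append(proc)

def event_completed(event):
    # Move every process blocked on this event back to the ready queue.
    for proc in waiting.pop(event, []):
        ready.append(proc)

running = ready.popleft()       # dispatch "A"
block(running, "disk-read")     # "A" issues slow I/O and blocks
running = ready.popleft()       # the CPU is free, so "B" runs instead
event_completed("disk-read")    # interrupt: the read finished
# "A" is back in the ready queue, eligible to be scheduled again
```

The key point the model captures: blocking "A" never idles the CPU; the scheduler immediately dispatches another ready process.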

Practical Insights and Solutions

  • Asynchronous I/O (AIO): Many programming languages and OS APIs provide ways to perform I/O operations asynchronously, which is a form of non-blocking I/O. This allows a program to initiate an I/O operation and then continue with other tasks, receiving a notification when the I/O is complete.
  • Multi-threading/Multi-processing: In applications where blocking operations are unavoidable, developers can use multiple threads or processes. If one thread blocks, others can continue to execute, preventing the entire application from freezing.
  • Event-Driven Architectures: These architectures are designed around handling events (like I/O completion) rather than sequential blocking calls. They are common in web servers and graphical user interfaces.
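As a concrete illustration of the asynchronous-I/O approach, Python's asyncio lets a single thread start several slow operations and overlap their waits (asyncio.sleep stands in for real I/O here):

```python
import asyncio
import time

async def slow_io(label, seconds):
    # await yields control to the event loop instead of blocking the thread
    await asyncio.sleep(seconds)
    return label

async def main():
    start = time.monotonic()
    # Both waits overlap on a single thread, so total time is roughly
    # max(0.2, 0.2) seconds, not the 0.4 s a sequential blocking version needs.
    results = await asyncio.gather(slow_io("a", 0.2), slow_io("b", 0.2))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")
```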
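The multi-threading approach can be sketched in Python as well: one thread blocks on a slow call while the main thread keeps making progress:

```python
import threading
import time

results = []

def slow_task():
    time.sleep(0.2)          # this worker thread blocks here...
    results.append("io done")

t = threading.Thread(target=slow_task)
t.start()

# ...but the main thread is not blocked and continues immediately.
results.append("main kept running")
t.join()                     # wait for the worker before exiting

print(results)               # ['main kept running', 'io done']
```

Only the thread that issued the blocking call is suspended; the rest of the application stays responsive.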
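The event-driven pattern can be sketched with Python's selectors module (a pipe stands in for a network socket, and the dispatch loop is deliberately minimal):

```python
import os
import selectors

sel = selectors.DefaultSelector()
r, w = os.pipe()
os.set_blocking(r, False)

events_seen = []

def on_readable(fd):
    # Callback invoked when the descriptor has data ready.
    events_seen.append(os.read(fd, 1024))

sel.register(r, selectors.EVENT_READ, on_readable)

os.write(w, b"event!")           # simulate I/O completion

# One iteration of the loop: wait for any registered event,
# then dispatch its callback -- no thread ever blocks on a single fd.
for key, _mask in sel.select(timeout=1):
    key.data(key.fileobj)

sel.close()
print(events_seen)
```

A real server registers many sockets and loops forever, dispatching a callback each time one of them becomes readable or writable.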