Process Synchronization
Process synchronization is a crucial concept in an Operating System (OS) that ensures cooperating processes execute in a coordinated manner to avoid conflicts while accessing shared resources. Without proper synchronization, processes can interfere with each other, leading to race conditions, data inconsistency, and deadlocks.
This blog covers:
- Critical Section Problem
- Peterson’s Solution
- Synchronization Hardware
- Mutex Locks
- Semaphores
- Monitors
- Classic Problems of Synchronization
Critical Section Problem
What is the Critical Section?
- A critical section is a part of a program where shared resources (e.g., files, memory, variables) are accessed by multiple processes.
- To ensure data consistency, only one process should execute in the critical section at a time.
Race Condition
A race condition occurs when multiple processes access and modify shared data concurrently, and the final result depends on the order in which their operations happen to interleave.
Example:
- Two processes withdrawing money from the same bank account at the same time, leading to incorrect balance calculations.
Requirements for a Synchronization Solution
A proper synchronization mechanism must satisfy these three conditions (also called the Critical Section Problem Solution Requirements):
- Mutual Exclusion – Only one process can enter the critical section at a time.
- Progress – If no process is in the critical section, the choice of which waiting process enters next cannot be postponed indefinitely, and only processes that want to enter may take part in that decision.
- Bounded Waiting – There must be a bound on how many times other processes can enter the critical section before a waiting process gets its turn.
Peterson’s Solution
Peterson’s Solution is a software-based synchronization algorithm that allows two processes to share a critical section without conflicts.
How Peterson’s Solution Works
- Uses two variables:
- flag[i] – Indicates if process i wants to enter the critical section.
- turn – Indicates whose turn it is to enter the critical section.
Algorithm:
// Process i
flag[i] = true;
turn = j;
while (flag[j] && turn == j); // Busy-wait while process j also wants in and it is j's turn
// Critical Section
<execute critical code>
// Exit Section
flag[i] = false;
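For reference, here is a compilable C sketch of the same algorithm. It uses C11 atomics (stdatomic.h) so the compiler and CPU cannot reorder the loads and stores; the function and variable names (enter_region, leave_region) are only illustrative.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];   // flag[i] is true when process i wants to enter
atomic_int  turn;      // whose turn it is to defer

void enter_region(int i) {          // i is 0 or 1
    int j = 1 - i;                  // the other process
    atomic_store(&flag[i], true);   // announce interest
    atomic_store(&turn, j);         // give the other process priority
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                           // busy-wait
}

void leave_region(int i) {
    atomic_store(&flag[i], false);  // no longer interested
}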
Why is Peterson’s Solution Effective?
Ensures Mutual Exclusion – Only one process enters at a time.
Provides Progress and Bounded Waiting – No indefinite waiting.
Limitation: Works only for two processes, and on modern multi-core processors compiler and hardware instruction reordering can break mutual exclusion unless memory barriers or atomic operations are used.
Synchronization Hardware
Modern CPUs provide hardware support for synchronization through special instructions.
Test and Set Lock (TSL) Instruction
- Atomically tests and modifies a variable to implement mutual exclusion.
Example:
while (test_and_set(lock) == 1); // Wait if lock is taken
// Critical Section
lock = 0; // Release lock
Eliminates race conditions by ensuring atomic updates.
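In portable C, this pattern can be sketched with the C11 atomic_flag type, whose atomic_flag_test_and_set operation maps to the hardware test-and-set instruction. A minimal spinlock sketch; the names acquire/release are illustrative.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   // shared lock, initially clear (free)

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              // spin (busy-wait) until the lock becomes free
}

void release(void) {
    atomic_flag_clear(&lock);          // release the lock
}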
Compare and Swap (CAS) Instruction
- Atomically compares a variable with an expected value and, only if they match, replaces it with a new value.
Efficient for implementing locks in multi-core processors.
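A spinlock built on compare-and-swap can be sketched the same way with C11’s atomic_compare_exchange_strong; the names cas_lock and cas_unlock are illustrative.

#include <stdatomic.h>

atomic_int lock = 0;   // 0 = free, 1 = taken

void cas_lock(void) {
    int expected = 0;
    // Atomically: if lock == 0, set it to 1 and succeed; otherwise retry.
    while (!atomic_compare_exchange_strong(&lock, &expected, 1))
        expected = 0;  // on failure, CAS wrote the current value into expected; reset it
}

void cas_unlock(void) {
    atomic_store(&lock, 0);
}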
Limitation: Hardware-based solutions require CPU support and may cause busy waiting.
Mutex Locks
A Mutex (Mutual Exclusion) Lock is a software-based mechanism to protect critical sections.
How a Mutex Works
- A lock variable is used:
- Lock (1) → Critical section is occupied.
- Unlock (0) → Critical section is free.
Example Code (C)
#include <pthread.h>
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // statically initialized mutex
pthread_mutex_lock(&lock);   // Acquire lock
// Critical Section
pthread_mutex_unlock(&lock); // Release lock
Ensures mutual exclusion and prevents race conditions.
Limitation: Can cause busy waiting (if implemented using spinlocks).
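A slightly fuller sketch of mutex usage with POSIX threads: two threads increment a shared counter, and the lock guarantees the final value is exactly 200000 (compile with -pthread; the names are illustrative).

#include <pthread.h>
#include <stdio.h>

long counter = 0;                                  // shared data
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  // protects counter

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // enter critical section
        counter++;
        pthread_mutex_unlock(&lock);  // leave critical section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  // always 200000 with the lock held
    return 0;
}

Without the lock, the two increments interleave and the final count is usually less than 200000 — exactly the race condition described earlier.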
Semaphores
A semaphore is a synchronization tool that uses a counter to control access to resources.
Types of Semaphores
| Type | Description | Example |
|------|-------------|---------|
| Binary Semaphore | Takes only the values 0 and 1; works like a mutex. | Protecting a single critical section. |
| Counting Semaphore | Allows up to N processes to use a pool of identical resources simultaneously. | Managing multiple database connections. |
Semaphore Operations
Two atomic operations:
- Wait (P) – Decreases the semaphore value if it’s greater than 0; otherwise, waits.
- Signal (V) – Increases the semaphore value, allowing a waiting process to proceed.
Example Code (C)
sem_t sem;
sem_init(&sem, 0, 1); // Binary semaphore, initial value 1
sem_wait(&sem);       // Wait (P) operation
// Critical Section
sem_post(&sem);       // Signal (V) operation
Prevents race conditions and avoids busy waiting.
Monitors
A monitor is a higher-level synchronization construct that encapsulates shared resources and provides automatic mutual exclusion.
How Monitors Work
- A monitor is a special class that contains:
- Shared variables.
- Methods for accessing shared resources.
- Condition variables for waiting and signaling.
Example (Java):
class SharedResource {
synchronized void access() {
// Critical Section
}
}
Ensures safety by design (no need for explicit locks).
Used in: Java (the synchronized keyword with wait/notify), C++ (std::mutex combined with std::condition_variable).
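C has no built-in monitors, but the same structure can be approximated by pairing a mutex (for the implicit mutual exclusion) with a condition variable (for waiting and signaling). A minimal sketch; the names deposit and withdraw are illustrative.

#include <pthread.h>

int items = 0;                                         // shared state the "monitor" guards
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;         // provides the mutual exclusion
pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;  // condition variable for waiting/signaling

void deposit(void) {                 // monitor "method"
    pthread_mutex_lock(&m);
    items++;
    pthread_cond_signal(&not_empty); // wake one waiting withdraw()
    pthread_mutex_unlock(&m);
}

void withdraw(void) {                // monitor "method"
    pthread_mutex_lock(&m);
    while (items == 0)               // wait until the condition holds
        pthread_cond_wait(&not_empty, &m);
    items--;
    pthread_mutex_unlock(&m);
}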
Classic Problems of Synchronization
The following problems illustrate real-world scenarios where synchronization is necessary.
The Producer-Consumer Problem
- Scenario:
- A producer generates data and places it in a buffer.
- A consumer retrieves and processes data.
- Challenge: Prevent overflows (buffer full) and underflows (buffer empty).
- Solution: Use semaphores for mutual exclusion and synchronization.
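A minimal sketch of the standard bounded-buffer solution using POSIX semaphores: empty_slots counts free slots, full_slots counts filled slots, and a binary semaphore protects the buffer indices (the names and buffer size are illustrative).

#include <semaphore.h>

#define N 10                 // buffer capacity

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;           // initialized to N (free slots)
sem_t full_slots;            // initialized to 0 (filled slots)
sem_t buf_mutex;             // initialized to 1 (protects buffer, in, out)

void producer_put(int item) {
    sem_wait(&empty_slots);  // block if the buffer is full
    sem_wait(&buf_mutex);
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&buf_mutex);
    sem_post(&full_slots);   // announce a new item
}

int consumer_get(void) {
    sem_wait(&full_slots);   // block if the buffer is empty
    sem_wait(&buf_mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&buf_mutex);
    sem_post(&empty_slots);  // announce a free slot
    return item;
}

// Initialization (e.g., in main):
// sem_init(&empty_slots, 0, N); sem_init(&full_slots, 0, 0); sem_init(&buf_mutex, 0, 1);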
The Readers-Writers Problem
- Scenario: Multiple readers can read a shared resource simultaneously, but only one writer can write at a time.
- Challenge: Prevent writers from modifying data while it is being read.
- Solution: Use semaphores or read-write locks to manage access.
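A sketch of the classic reader-preference solution with POSIX semaphores: the first reader locks writers out and the last reader lets them back in (the names are illustrative; note that this variant can starve writers).

#include <semaphore.h>

sem_t rw_mutex;          // initialized to 1; held by a writer or by the group of readers
sem_t count_mutex;       // initialized to 1; protects read_count
int   read_count = 0;    // number of readers currently reading

void reader(void) {
    sem_wait(&count_mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex);      // first reader locks out writers
    sem_post(&count_mutex);

    // ... read the shared data ...

    sem_wait(&count_mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex);      // last reader lets writers in
    sem_post(&count_mutex);
}

void writer(void) {
    sem_wait(&rw_mutex);          // exclusive access
    // ... write the shared data ...
    sem_post(&rw_mutex);
}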
The Dining Philosophers Problem
- Scenario: Five philosophers sit at a table with five chopsticks (shared resources).
- Challenge: Avoid deadlock (each philosopher picks up one chopstick and waits forever for the second one).
- Solution: Use mutex locks, semaphores, or resource hierarchy.
Example Solution Using Semaphores:
- Allow only four philosophers to pick up chopsticks at the same time.
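A sketch of that idea with POSIX semaphores: a counting semaphore seats, initialized to 4, ensures at most four philosophers reach for chopsticks at once, so at least one of them can always obtain both (the names are illustrative).

#include <semaphore.h>

#define N 5

sem_t chopstick[N];   // one binary semaphore per chopstick, each initialized to 1
sem_t seats;          // counting semaphore initialized to N - 1 = 4

void philosopher(int i) {
    sem_wait(&seats);                   // at most N - 1 philosophers compete for chopsticks
    sem_wait(&chopstick[i]);            // pick up left chopstick
    sem_wait(&chopstick[(i + 1) % N]);  // pick up right chopstick

    // ... eat ...

    sem_post(&chopstick[(i + 1) % N]);  // put down right chopstick
    sem_post(&chopstick[i]);            // put down left chopstick
    sem_post(&seats);                   // leave the table
}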
Comparison of Synchronization Techniques
| Technique | Advantages | Disadvantages |
|-----------|------------|---------------|
| Peterson’s Solution | Simple software-based solution | Works only for two processes |
| Mutex Locks | Fast, ensures mutual exclusion | Requires manual locking/unlocking |
| Semaphores | Avoids busy waiting | Can cause deadlocks |
| Monitors | Encapsulates synchronization logic | Less flexible than semaphores |
Conclusion
Process synchronization ensures data consistency and system stability by preventing race conditions and deadlocks. The OS provides multiple synchronization mechanisms such as Peterson’s Solution, Mutex Locks, Semaphores, and Monitors to achieve safe and efficient concurrency.
Key Takeaways
Critical Section Problem arises when multiple processes share resources.
Peterson’s Solution ensures mutual exclusion for two processes.
Semaphores & Mutex Locks prevent race conditions.
Monitors provide a structured way to manage shared resources.
Classic Problems (Producer-Consumer, Readers-Writers, Dining Philosophers) highlight real-world synchronization challenges.
Deadlock in Operating Systems
Deadlock is a situation where a set of processes become stuck indefinitely because each process is waiting for a resource that another process is holding. This leads to a circular wait, preventing the system from making further progress.
In this blog, we will cover:
- System Model
- Deadlock Characterization
- Deadlock Prevention
- Deadlock Avoidance
- Deadlock Detection
- Recovery from Deadlock
System Model
Resource Types
A computer system consists of multiple processes and resources. The OS manages how resources (CPU, memory, printers, files, etc.) are allocated to processes.
Resources in a system are categorized as:
- Preemptible Resources – Can be taken from one process and given to another (e.g., CPU, RAM).
- Non-Preemptible Resources – Cannot be forcibly taken away (e.g., printers, files).
Resource Allocation Model
- Processes request resources, use them, and then release them.
- If a process holds a resource and requests another that is not available, it must wait.
- If such waiting leads to a circular dependency, deadlock occurs.
Example:
- Process A holds Resource X and requests Resource Y.
- Process B holds Resource Y and requests Resource X.
- Both are waiting for each other → Deadlock occurs.
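This situation is easy to reproduce in code. A minimal C sketch using two pthread mutexes as the two resources (the names lock_x, lock_y, process_a, and process_b are illustrative): if both threads run concurrently, each ends up holding one lock and waiting forever for the other.

#include <pthread.h>

pthread_mutex_t lock_x = PTHREAD_MUTEX_INITIALIZER;  // Resource X
pthread_mutex_t lock_y = PTHREAD_MUTEX_INITIALIZER;  // Resource Y

void *process_a(void *arg) {
    pthread_mutex_lock(&lock_x);   // A holds X
    // ... work ...
    pthread_mutex_lock(&lock_y);   // A waits for Y (held by B)
    pthread_mutex_unlock(&lock_y);
    pthread_mutex_unlock(&lock_x);
    return NULL;
}

void *process_b(void *arg) {
    pthread_mutex_lock(&lock_y);   // B holds Y
    // ... work ...
    pthread_mutex_lock(&lock_x);   // B waits for X (held by A) -> circular wait
    pthread_mutex_unlock(&lock_x);
    pthread_mutex_unlock(&lock_y);
    return NULL;
}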
Deadlock Characterization
A system is in deadlock if the following four conditions hold simultaneously (Coffman’s Conditions):
| Condition | Description | Example |
|-----------|-------------|---------|
| Mutual Exclusion | Only one process can use a resource at a time. | A printer is being used by one process; others must wait. |
| Hold and Wait | A process holds a resource and waits for another. | Process A holds a scanner and waits for a printer. |
| No Preemption | A resource cannot be forcibly taken from a process. | Process B holds a file and does not release it. |
| Circular Wait | A cycle of processes exists, each waiting for a resource held by the next. | Process A → B → C → A (circular dependency). |
Deadlock occurs only if all four conditions hold.
Deadlock Prevention
Deadlock prevention ensures that at least one of Coffman’s conditions is violated, so deadlock never occurs.
Breaking Coffman’s Conditions
| Condition | Prevention Strategy | Example |
|-----------|---------------------|---------|
| Mutual Exclusion | Allow resource sharing where possible. | Spooling for printers (multiple users can queue jobs). |
| Hold and Wait | A process must request all resources at once. | A process requests CPU + memory at the start. |
| No Preemption | Preempt resources from a waiting process. | If a process cannot obtain a requested resource, the resources it already holds are taken back and it retries later. |
| Circular Wait | Impose a global ordering on resources and request them only in that order. | A process holding R2 may not then request R1; it must acquire R1 before R2. |
Example: A banking system requests all required resources upfront, avoiding hold and wait.
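The circular-wait strategy is the one most often applied in practice. A small sketch: if every thread acquires the two locks in the same global order, the cycle from the earlier deadlock example can never form (the lock and function names are illustrative).

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  // lower-numbered resource
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  // higher-numbered resource

// Every thread takes r1 before r2, so no circular wait is possible.
void use_both_resources(void) {
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    // ... use both resources ...
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}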
Deadlock Avoidance
Deadlock avoidance requires the OS to carefully allocate resources so that the system never enters an unsafe state.
Safe vs. Unsafe States
- Safe State → The system can allocate resources without leading to deadlock.
- Unsafe State → Deadlock is possible if resources are granted.
Example:
- If there are 5 printers and five processes that may each need 2, granting one printer to each process leaves none free; no process can then be guaranteed to finish, so the state is unsafe. An avoidance algorithm would refuse the request that creates this situation.
Banker’s Algorithm (Edsger Dijkstra)
The Banker’s Algorithm is used for deadlock avoidance in multi-resource systems.
Steps:
- Check that the request does not exceed the process’s declared maximum need.
- Check if granting the request leaves the system in a safe state.
- If safe, allocate resources; otherwise, make the process wait.
Example: A bank won’t approve a loan if doing so means it cannot handle further transactions safely.
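The core of the Banker’s Algorithm is the safety check: a request is granted only if, after a tentative allocation, the system can still find an order in which every process finishes. A compact C sketch of that check; the matrix dimensions P and R and the function name is_safe are illustrative.

#include <stdbool.h>
#include <string.h>

#define P 5   // number of processes
#define R 3   // number of resource types

// Returns true if the state described by available, allocation, and need is safe.
bool is_safe(int available[R], int allocation[P][R], int need[P][R]) {
    int  work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof(work));

    int count = 0;
    while (count < P) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_finish = true;                 // can p finish with what is free now?
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_finish = false; break; }
            if (can_finish) {
                for (int r = 0; r < R; r++)
                    work[r] += allocation[p][r];    // p runs to completion and releases everything
                finished[p] = true;
                progress = true;
                count++;
            }
        }
        if (!progress) return false;                // no process can finish -> unsafe
    }
    return true;                                    // a safe sequence exists
}

To handle a request, the OS tentatively subtracts it from available and need, adds it to allocation, runs this check, and rolls the tentative allocation back if the resulting state is not safe.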
Deadlock Detection
If deadlock prevention or avoidance isn’t used, the OS must detect deadlocks when they occur.
Deadlock Detection using Resource Allocation Graph
- Vertices represent processes and resources.
- Edges represent resource requests and allocations.
- If each resource has only one instance, a cycle in the graph means deadlock has occurred; with multiple instances per resource, a cycle indicates only that deadlock is possible.
Example:
P1 → R1 → P2 → R2 → P1 (Deadlock Detected)
Wait-For Graph (For Single-Instance Resources)
- Similar to a Resource Allocation Graph, but without resource nodes.
- A cycle in the wait-for graph confirms deadlock.
Example:
If P1 → P2 → P3 → P1, then deadlock is present.
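Detecting such a cycle is a standard depth-first search over the wait-for graph. A small C sketch, using an adjacency matrix where waits_for[i][j] means process i waits for a resource held by process j (all names and the size N are illustrative).

#include <stdbool.h>

#define N 3                         // number of processes (illustrative)

bool waits_for[N][N];               // waits_for[i][j]: process i waits on process j

static bool dfs(int p, bool visited[N], bool on_stack[N]) {
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < N; q++) {
        if (!waits_for[p][q]) continue;
        if (on_stack[q]) return true;                  // back edge -> cycle -> deadlock
        if (!visited[q] && dfs(q, visited, on_stack)) return true;
    }
    on_stack[p] = false;
    return false;
}

// Returns true if the wait-for graph contains a cycle, i.e., a deadlock.
bool deadlock_detected(void) {
    bool visited[N] = { false }, on_stack[N] = { false };
    for (int p = 0; p < N; p++)
        if (!visited[p] && dfs(p, visited, on_stack)) return true;
    return false;
}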
Deadlock Detection Algorithm for Multiple Instances
- Check if resources are available to satisfy a process’s request.
- If a process can finish execution, release its resources.
- Repeat until all processes finish or a deadlock is detected.
Example: In a printer queue, if all print jobs are waiting for resources held by each other, a deadlock is detected.
Recovery from Deadlock
Once a deadlock is detected, the system must recover by breaking the cycle.
Process Termination
- Kill all deadlocked processes (brute-force method).
- Terminate one process at a time until deadlock is resolved.
- Example: Restarting a frozen application forcefully.
Resource Preemption
- Take a resource away from one process and give it to another.
- Challenge: Must rollback processes to prevent inconsistency.
- Example: If Process A holds a scanner and B holds a printer, take back the scanner and allocate it to B.
Rollback Mechanism
- Checkpoint processes before they request resources.
- If deadlock occurs, rollback to the last checkpoint.
- Example: Databases use transaction rollbacks to undo changes.
Comparison of Deadlock Handling Techniques
| Technique | Strategy | Pros | Cons |
|-----------|----------|------|------|
| Prevention | Ensure at least one Coffman condition is violated. | Guarantees no deadlock. | Can limit resource utilization. |
| Avoidance | Use safe-state checking (Banker’s Algorithm). | No deadlock occurs. | Requires prior knowledge of resource requests. |
| Detection & Recovery | Detect deadlocks and resolve them dynamically. | Doesn’t require prevention mechanisms. | Can cause process termination & delays. |
Example: Real-time systems often rely on deadlock prevention, since they cannot tolerate the unpredictable delays of detection and recovery.
Conclusion
Key Takeaways
Deadlock occurs when processes wait for each other in a circular loop.
Coffman’s conditions must hold for a deadlock to occur.
Deadlock prevention breaks at least one Coffman condition.
Banker’s Algorithm ensures safe resource allocation to avoid deadlocks.
Deadlock detection uses Resource Allocation Graphs and Wait-for Graphs.
Recovery methods include process termination, resource preemption, and rollbacks.
Deadlock is a serious issue in multitasking operating systems, but it can be prevented, avoided, detected, and recovered from using various techniques.