1. What is PCB (Process Control Block)?
Definition
A Process Control Block (PCB) is a data structure maintained by the operating system that stores all information required to manage and execute a process.
Concept
A process is not just code—it is a dynamic entity. The OS needs to:
Track its execution
Pause and resume it
Switch between processes
The PCB is what makes all of this possible. It acts as the process’s identity + execution snapshot.
Role in Context Switching
During context switching:
OS saves current process state into its PCB
Loads another process’s state from its PCB
CPU resumes execution
Without PCB → multitasking is impossible.
Structure of PCB
struct PCB {
    int pid;                   /* process identifier */
    int process_state;         /* New / Ready / Running / Waiting / Terminated */
    int program_counter;       /* address of the next instruction */
    int cpu_registers[16];     /* saved register values (count is machine-dependent) */
    int priority;              /* scheduling priority */
    struct memory_info mem;    /* page table, base/limit registers */
    struct file_info files;    /* open files, allocated devices */
};
Components Explained
1. Process Identification
PID uniquely identifies process
Parent PID helps maintain hierarchy
2. Process State
New / Ready / Running / Waiting / Terminated
3. CPU Context
Program Counter → next instruction
Registers → current execution values
4. Scheduling Information
Priority
Queue pointers
5. Memory Management Info
Page table
Base/limit registers
6. I/O Information
Open files
Allocated devices
Real-Life Analogy
Think of the PCB as a résumé of a process:
What it is doing
Where it stopped
What resources it owns
Summary
PCB is the backbone of process management, enabling scheduling, execution tracking, and context switching.
2. What information is stored in PCB?
Definition
The PCB stores all the information required by the OS to manage a process throughout its lifecycle.
Detailed Breakdown
1. Process Identification
PID
PPID
2. Process State
Helps scheduler decide execution
3. CPU Registers
PC (Program Counter)
Accumulator
Stack Pointer
General Registers
4. Scheduling Information
Priority
Scheduling queue position
5. Memory Management Information
Page table pointer
Segment table
Memory limits
6. I/O Status Information
Open files
Devices assigned
7. Accounting Information
CPU usage
Execution time
Why This Matters
During context switch:
Entire execution context is restored from PCB
Summary
PCB is a complete snapshot of a process’s execution environment.
3. What is the difference between logical and physical address?
Definition
Logical Address: Address generated by CPU
Physical Address: Actual address in RAM
Concept
Programs operate in a virtual address space, not real memory.
This abstraction allows:
Memory protection
Relocation
Efficient memory use
Address Translation
Logical → Physical using MMU (Memory Management Unit)
Diagram Concept
CPU → Logical Address → MMU → Physical Address → Memory
Example
Program uses address:
Logical: 1000
Base Register: 5000
Physical: 6000
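The base-register relocation above can be sketched in Python (the limit value 4096 is an assumed protection bound, not part of the example):

```python
def translate(logical, base, limit):
    """Relocate a logical address using a base register, with a limit check."""
    if logical >= limit:
        raise MemoryError("address beyond process limit: trap to OS")
    return base + logical

# Logical 1000 with base register 5000 -> physical 6000
print(translate(1000, base=5000, limit=4096))  # -> 6000
```

Any access at or beyond the limit raises an error, mirroring the protection trap the MMU would generate.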
Key Differences
Logical address: generated by the CPU; what the program sees; forms the virtual address space
Physical address: seen by the memory unit; an actual location in RAM
The program never sees physical addresses; the MMU translates every access
Summary
Logical addresses provide abstraction, physical addresses represent actual memory location.
4. What is address binding?
Definition
Address binding is the process of mapping logical addresses to physical addresses.
Concept
Programs do not know where they will be loaded in memory. Address binding resolves this uncertainty.
Why Needed
Memory relocation
Efficient memory usage
Process isolation
Summary
Address binding ensures that programs execute correctly regardless of memory location.
5. What are the types of address binding?
Types
1. Compile-Time Binding
Address fixed during compilation
No flexibility
Example:
int x; // fixed address assigned
2. Load-Time Binding
Address assigned when program is loaded
Can relocate before execution
3. Execution-Time Binding
Address resolved during execution
Requires MMU
Comparison
Compile-time: address fixed in the binary; no relocation; logical = physical
Load-time: address set once at load; relocatable before execution only
Execution-time: address resolved during execution; relocatable anytime; requires MMU
Summary
Execution-time binding is most powerful and used in modern OS with virtual memory.
6. What is dynamic loading?
Definition
Dynamic loading loads program modules into memory only when they are needed.
Concept
Instead of loading entire program:
Load only required parts
Save memory
Working
Program starts execution
Calls a function
Function loaded from disk into memory
Execution continues
Example
if (condition) {
load_module("math_library");
}
Advantages
Reduced memory usage
Faster startup
Summary
Dynamic loading improves memory efficiency by loading code on demand.
7. What is dynamic linking?
Definition
Dynamic linking links libraries at runtime instead of compile time.
Concept
Instead of embedding library code:
Reference shared library
Load only when needed
Working
Program contains stub
At runtime, stub loads library
Execution continues
Example
Shared library:
libc.so
Advantages
Smaller executable size
Easy updates
Shared memory usage
Summary
Dynamic linking improves modularity and reduces redundancy.
8. What are overlays in OS?
Definition
Overlays allow execution of programs larger than memory by loading only required parts.
Concept
Divide program into:
Root module
Overlay modules
Working
Root always in memory
Overlay loaded when needed
Previous overlay replaced
Example
Main program → Load Module A → Replace with Module B
Summary
Overlays are an early technique for memory optimization before virtual memory existed.
9. What is demand paging?
Definition
Demand paging loads pages into memory only when they are required.
Concept
Avoids loading entire program → reduces memory usage.
Key Idea
Lazy loading:
Load only when accessed
Summary
Demand paging enables efficient use of memory using on-demand loading.
10. How does demand paging work?
Step-by-Step Working
CPU generates logical address
Page table checked
If page not present → Page Fault
Page Fault Handling Algorithm
if (page not in memory) {
trap to OS;
find free frame;
load page from disk;
update page table;
restart instruction;
}
Page Table Entry
Valid bit (present/absent)
Frame number
Example
Access page 5:
Not in memory → fault
Loaded into frame
Execution resumes
Performance
Effective Access Time:
EAT = (1 - p) * memory_time + p * page_fault_time
Where:
p = page fault probability
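Plugging illustrative numbers into the formula (the 200 ns memory access time and 8 ms fault-service time below are assumptions, not from the text):

```python
def eat(p, memory_time, fault_time):
    """Effective access time under demand paging: EAT = (1-p)*m + p*f."""
    return (1 - p) * memory_time + p * fault_time

# 200 ns memory access, 8 ms fault service, 1-in-1000 fault rate (illustrative)
print(eat(0.001, 200, 8_000_000))  # ~8199.8 ns: even rare faults dominate
```

The takeaway is that the fault-service time is so large that the fault probability p must be tiny for performance to stay acceptable.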
Summary
Demand paging relies on page faults + OS intervention to load pages dynamically.
11. What is page replacement?
Definition
Page replacement is the process of selecting a page to remove from memory when a new page needs to be loaded and all memory frames are occupied.
Concept
In demand paging:
Pages are loaded only when needed
Memory is limited
So when memory is full:
👉 OS must choose a victim page to replace
Key Goal
Minimize:
Page faults
Disk I/O
General Page Replacement Algorithm
if (page fault occurs) {
if (free frame available)
load page
else {
select victim page
remove victim page
load new page
}
}
Important Considerations
Which page to replace?
How frequently pages are used
Future usage (if predictable)
Summary
Page replacement is critical for efficient memory utilization in virtual memory systems.
12. What are page replacement algorithms?
Definition
Page replacement algorithms determine which page should be replaced when memory is full.
Common Algorithms
FIFO (First-In First-Out)
LRU (Least Recently Used)
Optimal
LFU (Least Frequently Used)
Clock (Second Chance)
Objective
Reduce page faults
Improve performance
Comparison Idea
FIFO: evict the oldest-loaded page (simple, suffers Belady's anomaly)
LRU: evict the least recently used page (good, costlier bookkeeping)
Optimal: evict the page used farthest in the future (best possible, needs future knowledge)
LFU: evict the least frequently used page
Clock: FIFO with a second-chance reference bit
Summary
Each algorithm uses a different strategy to approximate future usage.
13. What is FIFO page replacement algorithm?
Definition
FIFO replaces the page that has been in memory for the longest time.
Concept
Uses a queue:
Oldest page at front
New pages added at rear
Algorithm
queue = empty
for each page request:
if page not in memory:
if memory full:
remove front page
insert new page at rear
Example
Reference string:
1 2 3 4 1 2 5 1 2 3 4 5
Frames = 3
Step-by-step (F = fault, H = hit):
1F 2F 3F 4F(evict 1) 1F(evict 2) 2F(evict 3) 5F(evict 4) 1H 2H 3F(evict 1) 4F(evict 2) 5H
Total = 9 page faults
Problem: Belady’s Anomaly
Increasing frames can increase page faults (counterintuitive).
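The 3-frame trace and Belady's anomaly can both be checked with a short FIFO simulator (a sketch, not an OS implementation):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()          # front = oldest page
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # evict the oldest page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: Belady's anomaly
```

With this reference string, adding a fourth frame raises the fault count from 9 to 10.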
Summary
FIFO is simple but often inefficient due to lack of usage awareness.
14. What is LRU page replacement algorithm?
Definition
LRU replaces the page that has not been used for the longest time.
Concept
Assumes:
👉 “Past behavior predicts future behavior”
Implementation Approaches
1. Counter Method
Store last used time
2. Stack Method
Maintain order of usage
Algorithm (Conceptual)
for each page:
if page present:
update usage
else:
if full:
remove least recently used
insert new page
Example
Reference string:
1 2 3 1 4 5
Frames = 3
Trace: 1F 2F 3F 1H 4F(evict 2) 5F(evict 3)
Total = 5 page faults
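The same example can be checked with a small LRU simulator (a sketch using a usage-ordered list, in the spirit of the stack method above):

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement (list ordered LRU -> MRU)."""
    memory = []
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)       # refresh: move to the MRU end
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict the least recently used page
        memory.append(page)
    return faults

print(lru_faults([1, 2, 3, 1, 4, 5], 3))  # 5 faults
```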
Advantages
Good performance
Close to optimal
Disadvantages
Expensive to implement
Summary
LRU is widely used because it balances performance and practicality.
15. What is Optimal page replacement algorithm?
Definition
Optimal replaces the page that will not be used for the longest time in the future.
Concept
Uses future knowledge → theoretical best.
Algorithm
for each page fault:
look ahead in reference string
find page used farthest in future
replace that page
Example
Reference:
7 0 1 2 0 3 0 4
Frames = 3
At replacement:
Choose the page whose next use is farthest in the future (or never comes)
Trace: 7F 0F 1F 2F(evict 7) 0H 3F(evict 1 or 2) 0H 4F
Total = 6 page faults
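A sketch of the Optimal policy, checked against the reference string above:

```python
def optimal_faults(refs, frames):
    """Count page faults under the Optimal (farthest-future-use) policy."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            future = refs[i + 1:]
            # victim = resident page whose next use is farthest away (or never)
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future else len(future) + 1)
            memory[memory.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6 faults
```

Because it needs the full future reference string, this is only usable offline, as a benchmark.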
Key Insight
Gives minimum possible page faults
Used as benchmark
Limitation
Not implementable in real systems
Summary
Optimal is the best theoretical algorithm used for comparison.
16. What is Belady’s anomaly?
Definition
Belady’s anomaly occurs when increasing the number of memory frames results in more page faults.
Example (FIFO)
Frames = 3 → 9 faults
Frames = 4 → 10 faults
Reason
FIFO does not consider:
Frequency
Recency
Key Insight
Happens in FIFO
Does NOT occur in LRU/Optimal
Summary
Belady’s anomaly shows that more memory ≠ better performance in all cases.
17. What is a Translation Lookaside Buffer (TLB)?
Definition
TLB is a small, fast cache that stores recent address translations (logical → physical).
Concept
Without TLB:
Every memory access → 2 accesses (page table + data)
With TLB:
Often only 1 access
Working
if (page in TLB)
use frame directly
else
check page table
update TLB
Effective Access Time
EAT = (Hit Ratio × Memory Time) + (Miss Ratio × 2 × Memory Time), treating TLB lookup time as negligible
Example
Hit ratio = 0.9
Memory time = 100 ns
EAT:
= 0.9×100 + 0.1×200 = 110 ns
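The same calculation as a small helper (TLB lookup time treated as negligible, as in the formula above):

```python
def tlb_eat(hit_ratio, memory_time):
    """Simplified effective access time with a TLB."""
    return hit_ratio * memory_time + (1 - hit_ratio) * 2 * memory_time

print(tlb_eat(0.9, 100))  # ~110 ns, matching the worked example
```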
Summary
TLB significantly improves memory access speed.
18. What is segmentation with paging?
Definition
Segmentation with paging combines:
Logical view (segmentation)
Physical efficiency (paging)
Concept
Program divided into segments
Each segment divided into pages
Address Format
<segment number, page number, offset>
Working
Segment table → base address
Page table → frame number
Final physical address computed
Advantages
Logical structure + efficient allocation
Reduces fragmentation
Summary
Provides both user-friendly memory view and efficient allocation.
19. What is compaction?
Definition
Compaction is the process of moving processes in memory to combine scattered free spaces into one large block.
Concept
Used to solve:
👉 External fragmentation
Working
Move processes
Combine free memory
Create contiguous block
Limitation
Expensive (requires relocation)
Summary
Compaction improves memory usability but adds overhead.
20. What is memory allocation?
Definition
Memory allocation is the process of assigning memory blocks to processes.
Concept
OS must:
Allocate efficiently
Minimize fragmentation
Types
Contiguous allocation
Non-contiguous (paging, segmentation)
Goals
Maximize utilization
Reduce waste
Summary
Memory allocation is fundamental for efficient process execution.
21. What are memory allocation strategies?
Definition
Memory allocation strategies are algorithms used by the operating system to decide which free memory block (hole) should be assigned to a process in contiguous memory allocation.
Concept
In contiguous allocation, memory contains:
Allocated blocks (occupied by processes)
Free blocks (holes)
When a process requests memory, the OS must choose one of the available holes.
The choice directly impacts:
Fragmentation
Memory utilization
Allocation speed
Problem Statement
Given:
A list of free memory blocks
A process requesting size S
Select a block such that:
Block size ≥ S
Strategies
1. First Fit
Select the first block that satisfies the request.
2. Best Fit
Select the smallest block that satisfies the request.
3. Worst Fit
Select the largest available block.
Trade-offs
Faster allocation vs better memory utilization
Fragmentation patterns differ across strategies
Summary
Memory allocation strategies are essential for balancing speed and memory efficiency, and each strategy leads to different fragmentation behavior.
22. What is First Fit memory allocation?
Definition
First Fit is a memory allocation strategy that assigns the first free block that is large enough to satisfy the process request.
Algorithm
for each block in free_list:
if block.size >= process.size:
allocate block
reduce block size
break
Example
Free blocks:
[100 KB, 500 KB, 200 KB, 300 KB]
Process size = 212 KB
Step-by-step:
100 → too small
500 → fits → allocate here
Remaining block:
[100 KB, 288 KB, 200 KB, 300 KB]
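A sketch of First Fit over the example's free list (sizes in KB; the chosen hole is shrunk in place):

```python
def first_fit(blocks, size):
    """Allocate from the first block large enough; returns its index or None."""
    for i, block in enumerate(blocks):
        if block >= size:
            blocks[i] -= size   # shrink the chosen hole in place
            return i
    return None                 # no hole fits

blocks = [100, 500, 200, 300]
idx = first_fit(blocks, 212)
print(idx, blocks)  # 1 [100, 288, 200, 300]
```

The scan stops at the 500 KB block, so the 200 KB and 300 KB holes are never examined.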
Characteristics
Stops at first suitable block
Does not search entire list
Advantages
Fast allocation (O(n) but stops early)
Simple to implement
Disadvantages
Causes fragmentation at the beginning of memory
Over time, small unusable holes accumulate
Summary
First Fit is efficient in terms of speed but can lead to uneven memory distribution and fragmentation.
23. What is Best Fit memory allocation?
Definition
Best Fit allocates the smallest free block that is large enough to satisfy the process request.
Algorithm
best_block = NULL
for each block in free_list:
if block.size >= process.size:
if best_block == NULL or block.size < best_block.size:
best_block = block
allocate best_block
Example
Free blocks:
[100, 500, 200, 300]
Process = 212
Eligible blocks:
500
300
Best fit = 300
After allocation:
[100, 500, 200, 88]
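Best Fit as a sketch, reproducing the example:

```python
def best_fit(blocks, size):
    """Allocate from the smallest block that still fits; returns index or None."""
    candidates = [i for i, b in enumerate(blocks) if b >= size]
    if not candidates:
        return None
    i = min(candidates, key=lambda i: blocks[i])  # smallest eligible hole
    blocks[i] -= size
    return i

blocks = [100, 500, 200, 300]
print(best_fit(blocks, 212), blocks)  # 3 [100, 500, 200, 88]
```

Unlike First Fit, the whole list must be scanned before the 300 KB block can be chosen.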
Characteristics
Searches entire list
Minimizes immediate leftover space
Advantages
Minimizes leftover space in each allocation
Disadvantages
Slow (must scan entire list)
Produces many small unusable fragments (external fragmentation)
Summary
Best Fit minimizes immediate waste but leads to long-term fragmentation and higher search cost.
24. What is Worst Fit memory allocation?
Definition
Worst Fit allocates the largest available free block to the process.
Algorithm
worst_block = NULL
for each block:
if block.size >= process.size:
if worst_block == NULL or block.size > worst_block.size:
worst_block = block
allocate worst_block
Example
Free blocks:
[100, 500, 200, 300]
Process = 212
Worst block = 500
After allocation:
[100, 288, 200, 300]
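Worst Fit as a sketch, reproducing the example (on this particular free list it happens to pick the same 500 KB block First Fit would):

```python
def worst_fit(blocks, size):
    """Allocate from the largest available block; returns index or None."""
    candidates = [i for i, b in enumerate(blocks) if b >= size]
    if not candidates:
        return None
    i = max(candidates, key=lambda i: blocks[i])  # largest eligible hole
    blocks[i] -= size
    return i

blocks = [100, 500, 200, 300]
print(worst_fit(blocks, 212), blocks)  # 1 [100, 288, 200, 300]
```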
Concept
By using the largest block:
Leaves relatively large remaining blocks
Avoids creating many tiny fragments
Advantages
Attempts to reduce external fragmentation
Disadvantages
Can waste large memory chunks
Not optimal in practice
Summary
Worst Fit tries to preserve large blocks but is generally less efficient than First Fit and Best Fit.
25. What is the Banker’s algorithm?
Definition
Banker’s Algorithm is a deadlock avoidance algorithm that ensures the system remains in a safe state before allocating resources.
Core Idea
Before granting a resource request, the OS checks:
“Will this allocation leave the system in a safe state?”
If yes → allocate
If no → deny request
Data Structures
Let:
n = number of processes
m = number of resource types
Matrices:
Available[m] → available resources
Max[n][m] → maximum demand
Allocation[n][m] → allocated resources
Need[n][m] = Max - Allocation
Safety Algorithm
Work = Available
Finish[i] = false for all i
while (exists i such that Finish[i] == false and Need[i] <= Work):
Work = Work + Allocation[i]
Finish[i] = true
if all Finish[i] == true:
system is safe
else:
system is unsafe
Example
Processes: P0, P1, P2
Resources: 1 type
Available = 3
Step-by-step (Allocation: P0 = 5, P1 = 1, P2 = 1; Need: P0 = 2, P1 = 2, P2 = 1; Work grows by each finishing process's Allocation):
Work = 3
P2: Need 1 ≤ 3 → execute → Work = 3+1 = 4
P1: Need 2 ≤ 4 → execute → Work = 4+1 = 5
P0: Need 2 ≤ 5 → execute → Work = 5+5 = 10
Safe sequence:
P2 → P1 → P0
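A sketch of the safety check with the example's single-resource numbers (Allocation and Need values are inferred from the arithmetic above; the scan order here happens to find a different, equally valid safe sequence):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm for a single resource type."""
    work = available
    finish = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and need[i] <= work:
                work += allocation[i]   # process i finishes and releases resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence

# Available = 3; Allocation P0..P2 = 5, 1, 1; Need P0..P2 = 2, 2, 1
safe, seq = is_safe(3, [5, 1, 1], [2, 2, 1])
print(safe, seq)
```

Any sequence the algorithm finds is a proof of safety; a safe state may admit several valid orderings.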
Summary
Banker’s Algorithm avoids deadlocks by simulating safe execution before allocation.
26. What is a safe state in deadlock?
Definition
A system is in a safe state if there exists a sequence of processes such that all processes can complete without deadlock.
Concept
Safe state means:
There is at least one safe sequence
Unsafe state:
No guaranteed sequence
May lead to deadlock
Example
Available = 2
Sequence:
P0 runs → releases resources
Then P1 runs
Safe sequence exists → safe state
Key Insight
Safe state → no deadlock
Unsafe state ≠ deadlock, but risky
Summary
Safe state ensures future execution without deadlock is possible.
27. What is resource allocation graph?
Definition
A Resource Allocation Graph (RAG) is a directed graph used to represent resource allocation and requests.
Components
Processes → circles
Resources → rectangles
Edges
Request edge: Process → Resource
Allocation edge: Resource → Process
Example
P1 → R1
R1 → P2
P2 → R2
R2 → P1
This forms a cycle.
Deadlock Detection
Single instance resources:
Cycle ⇒ deadlock
Multiple instances:
Cycle ⇒ possible deadlock
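Cycle detection over the example RAG, as a sketch (DFS with white/gray/black coloring; a gray node reached again means a back edge, i.e. a cycle):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph via DFS with node coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True                  # back edge: cycle found
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# The RAG from the example: P1 -> R1 -> P2 -> R2 -> P1
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(rag))  # True: with single-instance resources, this is a deadlock
```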
Summary
RAG provides a visual and analytical tool to detect deadlocks.
28. What is deadlock prevention?
Definition
Deadlock prevention ensures that at least one of the four necessary conditions of deadlock never holds.
Four Conditions
Mutual exclusion
Hold and wait
No preemption
Circular wait
Prevention Techniques
1. Eliminate Hold and Wait
Request all resources at once
2. Allow Preemption
Force release of resources
3. Impose Ordering
Assign resource ordering and enforce acquisition order
Example
Resources ordered: R1 < R2 < R3
Processes must request in this order → prevents circular wait
Drawback
Low resource utilization
Reduced system performance
Summary
Deadlock prevention avoids deadlock by restricting how processes request resources.
29. What is deadlock avoidance?
Definition
Deadlock avoidance ensures that the system never enters an unsafe state by carefully granting resource requests.
Concept
Before allocating resources:
Check if system remains safe
Use Banker’s Algorithm
Key Difference from Prevention
Prevention → restricts system behavior
Avoidance → allows flexibility but checks safety
Example
If granting request leads to unsafe state:
Deny request
Summary
Deadlock avoidance dynamically ensures safe execution paths using runtime checks.
30. What is deadlock detection?
Definition
Deadlock detection identifies deadlocks after they have occurred.
Concept
Instead of preventing or avoiding:
Allow system to run freely
Detect deadlock periodically
Detection Algorithm (Multiple Instances)
Work = Available
Finish[i] = false if Allocation[i] != 0, else true
while (exists i such that Finish[i] == false and Request[i] <= Work):
    Work += Allocation[i]
    Finish[i] = true
if any Finish[i] == false:
    deadlock exists
(Detection compares against each process's current Request, not Need, since Max is not assumed known.)
Example
If some processes cannot proceed:
They are deadlocked
What Happens Next?
Requires recovery methods:
Kill process
Preempt resources
Summary
Deadlock detection allows flexibility but requires extra overhead and recovery mechanisms.
31. What is deadlock recovery?
Definition
Deadlock recovery refers to techniques used to resolve a deadlock after it has been detected, allowing the system to return to a normal state.
Concept
Once a deadlock is detected:
Processes are stuck waiting indefinitely
OS must intervene and break the cycle
Recovery Methods
1. Process Termination
(a) Abort all deadlocked processes
Simple but drastic
All progress is lost
(b) Abort one process at a time
Select a victim
Repeat until deadlock resolves
Victim Selection Criteria
Process priority
Amount of work done
Resources held
Number of processes affected
2. Resource Preemption
Temporarily take resources from a process
Allocate them to another
Issues in Preemption
Selecting victim
Rollback (restore previous state)
Starvation risk
Summary
Deadlock recovery resolves deadlock by terminating processes or preempting resources, but may lead to performance loss.
32. What is starvation in OS?
Definition
Starvation is a condition where a process waits indefinitely because resources are continuously allocated to other processes.
Concept
Occurs in systems where:
Scheduling is biased
High-priority processes dominate
Example
In priority scheduling:
Low-priority process never gets CPU
High-priority processes keep arriving
Characteristics
Process is ready but never executes
System is not deadlocked
Summary
Starvation leads to unfair resource allocation and indefinite waiting.
33. What is aging in OS?
Definition
Aging is a technique used to prevent starvation by gradually increasing the priority of waiting processes.
Concept
The longer a process waits:
The higher its priority becomes
Example
Initial priority:
P1 = 1 (low)
After waiting:
P1 → 2 → 3 → 4 → ...
Eventually:
It becomes high priority
Gets CPU
Implementation
for each waiting process:
increase priority over time
Summary
Aging ensures fair scheduling by preventing starvation.
34. What is concurrency?
Definition
Concurrency is the ability of a system to execute multiple processes in overlapping time periods.
Concept
Concurrency does not mean true parallelism:
Single CPU → interleaved execution
Multiple CPU → parallel execution
Key Challenges
Race conditions
Synchronization
Data inconsistency
Example
Two processes updating same variable:
x = x + 1;
Without synchronization:
Incorrect result
Summary
Concurrency improves performance but introduces complex coordination issues.
35. What are classic synchronization problems?
Definition
Classic synchronization problems are standard problems used to demonstrate issues in process coordination and mutual exclusion.
Major Problems
Producer–Consumer
Readers–Writers
Dining Philosophers
Purpose
Demonstrate race conditions
Test synchronization techniques
Summary
These problems help understand critical section handling and synchronization mechanisms.
36. What is the producer-consumer problem?
Definition
The Producer-Consumer problem involves two processes:
Producer → generates data
Consumer → consumes data
Both share a bounded buffer.
Problem
Producer should not add when buffer is full
Consumer should not remove when buffer is empty
Avoid race conditions
Solution Using Semaphores
Variables
semaphore mutex = 1;
semaphore empty = N;
semaphore full = 0;
Producer Code
while (true) {
wait(empty);
wait(mutex);
insert_item();
signal(mutex);
signal(full);
}
Consumer Code
while (true) {
wait(full);
wait(mutex);
remove_item();
signal(mutex);
signal(empty);
}
Explanation
empty → tracks empty slots
full → tracks filled slots
mutex → ensures mutual exclusion
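The semaphore solution translates almost line-for-line to Python threads (buffer size N = 4 and the 10 items are arbitrary choices for this demo):

```python
import threading

N = 4
buffer = []
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
consumed = []

def producer():
    for item in range(10):
        empty.acquire()                     # wait(empty)
        with mutex:                         # wait(mutex) ... signal(mutex)
            buffer.append(item)             # insert_item()
        full.release()                      # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                      # wait(full)
        with mutex:
            consumed.append(buffer.pop(0))  # remove_item()
        empty.release()                     # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]: every item produced is consumed in order
```

The producer blocks once it is N items ahead, and the consumer blocks on an empty buffer, so neither overruns the other.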
Summary
Producer-consumer ensures synchronized access to shared buffer using semaphores.
37. What is the readers-writers problem?
Definition
The Readers-Writers problem involves:
Multiple readers (can read simultaneously)
Writers (require exclusive access)
Problem
Readers should not be blocked unnecessarily
Writers must have exclusive access
Solution Using Semaphores
semaphore mutex = 1;
semaphore wrt = 1;
int readcount = 0;
Reader
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt);
signal(mutex);
read_data();
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt);
signal(mutex);
Writer
wait(wrt);
write_data();
signal(wrt);
Issues
Reader preference → writer starvation
Writer preference → reader blocking
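A runnable sketch of the reader-preference solution with Python semaphores (3 writers and 5 readers are arbitrary demo numbers):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writers' exclusive access
readcount = 0
data = 0
reads = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks writers out
    mutex.release()
    reads.append(data)           # read_data()
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    global data
    wrt.acquire()
    data += 1                    # write_data()
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)  # 3: each writer ran with exclusive access
```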
Summary
Ensures multiple readers or one writer at a time with proper synchronization.
38. What is the dining philosophers problem?
Definition
A classic synchronization problem where philosophers share forks and must pick two forks to eat.
Problem
Each philosopher needs 2 forks
Limited forks → deadlock possible
Deadlock Scenario
All philosophers pick left fork:
No one can pick right fork
System stuck
Solution (Semaphore Approach)
semaphore fork[5] = {1,1,1,1,1};
Philosopher
wait(fork[i]);
wait(fork[(i+1)%5]);
eat();
signal(fork[i]);
signal(fork[(i+1)%5]);
Deadlock Avoidance
Pick forks in order
Limit number of philosophers
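The fork-ordering fix can be sketched with Python threads: each philosopher always picks the lower-numbered fork first, which breaks the circular wait (3 meals per philosopher is an arbitrary demo count):

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]
ate = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global fork ordering
    for _ in range(3):
        forks[first].acquire()
        forks[second].acquire()
        ate[i] += 1                      # eat()
        forks[second].release()
        forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(ate)  # [3, 3, 3, 3, 3]: everyone ate, no deadlock
```

With naive left-then-right acquisition, all five threads could grab their left fork and block forever; the ordering makes that cycle impossible.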
Summary
Illustrates deadlock and resource allocation issues.
39. What is Peterson’s algorithm?
Definition
Peterson’s Algorithm is a software-based solution for mutual exclusion between two processes.
Concept
Uses:
Shared variables
Turn mechanism
Variables
bool flag[2];
int turn;
Algorithm
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical_section();
flag[i] = false;
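The algorithm in runnable form with Python threads (a demo sketch: CPython's interpreter keeps this well-behaved, but on real hardware Peterson's algorithm additionally needs memory barriers):

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # shorten GIL slices so the busy-wait stays cheap

flag = [False, False]
turn = 0
counter = 0
ITERS = 1000

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(ITERS):
        flag[i] = True              # announce intent to enter
        turn = j                    # give the tie-break to the other process
        while flag[j] and turn == j:
            pass                    # busy-wait until it is safe to enter
        counter += 1                # critical_section()
        flag[i] = False             # leave the critical section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2000: no lost updates on the shared counter
```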
Properties
Mutual exclusion
Progress
Bounded waiting
Limitation
Works only for 2 processes
Summary
Peterson’s algorithm ensures safe entry into critical section without hardware support.
40. What is bounded waiting?
Definition
Bounded waiting ensures that a process will not wait indefinitely to enter the critical section.
Concept
There is a limit on:
The number of times other processes can enter the critical section before a waiting process is granted entry
Importance
Prevents:
Starvation
Unfair scheduling
Example
If process P1 is waiting:
Other processes can enter only limited times before P1 gets access
Summary
Bounded waiting guarantees fair access to shared resources.
41. What are user-level threads?
Definition
User-level threads (ULTs) are threads that are created and managed entirely in user space by a thread library, without direct involvement of the operating system kernel.
Concept
The OS is unaware of user-level threads. It only sees a single process, while multiple threads exist within that process at the user level.
Thread management is handled by:
Thread libraries (e.g., POSIX threads in user mode)
Runtime systems
Working
Thread creation, scheduling, and switching are handled in user space
No system calls are required for thread operations
Advantages
Fast thread creation and switching (no kernel involvement)
Portable across systems
Low overhead
Disadvantages
Blocking system call blocks entire process
Cannot take advantage of multiple CPUs
No true parallelism
Example
If one thread performs I/O:
Entire process blocks
All threads stop
Summary
User-level threads are efficient but limited due to lack of kernel awareness and scheduling control.
42. What are kernel-level threads?
Definition
Kernel-level threads (KLTs) are threads that are managed directly by the operating system kernel.
Concept
Each thread is known to the OS, and the kernel schedules them individually.
Working
Thread operations require system calls
OS handles scheduling and context switching
Advantages
True parallelism (multi-core support)
Blocking of one thread does not affect others
Better scheduling control
Disadvantages
Higher overhead (system calls required)
Slower context switching
Summary
Kernel-level threads provide better concurrency and system control at the cost of performance overhead.
43. What is the difference between user thread and kernel thread?
Definition
User threads and kernel threads differ in how they are managed and scheduled.
Key Differences
Management: user threads handled by a library in user space; kernel threads managed by the OS
Switching: user-thread switch needs no system call (fast); kernel-thread switch enters the kernel (slower)
Blocking: one blocking call stops all user threads in a process; only the blocking kernel thread stops
Parallelism: user threads cannot run on multiple cores at once; kernel threads can
Mapping Models
Many-to-One (ULT → single KLT)
One-to-One (ULT → KLT)
Many-to-Many
Summary
User threads are faster but limited; kernel threads are powerful but heavier.
44. What is copy-on-write?
Definition
Copy-on-write (COW) is a memory management technique where copying of data is delayed until a process actually modifies it.
Concept
When a process is duplicated (e.g., using fork()):
Parent and child share memory
No immediate copying
Working
Parent process calls fork()
Child shares same memory pages
Pages marked as read-only
When write occurs:
Copy is created
Only modified page is duplicated
Example
int *x = malloc(sizeof(int));
*x = 10;
fork(); // parent & child share x
Only when one modifies:
Separate copy created
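The behavior can be observed with fork() on a POSIX system (a sketch; the pipe is only used to report the child's value back to the parent):

```python
import os

x = 10
r, w = os.pipe()
pid = os.fork()
if pid == 0:                       # child: initially shares pages with parent (COW)
    x = 20                         # this write triggers a private copy of the page
    os.write(w, str(x).encode())
    os._exit(0)                    # child exits without running anything further
os.waitpid(pid, 0)                 # wait so the child's write is in the pipe
child_x = int(os.read(r, 16))
print(x, child_x)                  # 10 20: the parent's copy is untouched
```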
Advantages
Saves memory
Faster process creation
Summary
Copy-on-write optimizes memory by delaying duplication until modification occurs.
45. What is priority inversion?
Definition
Priority inversion occurs when a lower-priority process holds a resource needed by a higher-priority process, causing the higher-priority process to wait.
Concept
Scenario:
Low priority process holds lock
High priority process waits
Medium priority process runs
Result:
High priority process is indirectly blocked
Example
Low → holds resource
High → waiting
Medium → keeps running
Solution: Priority Inheritance
Low-priority process temporarily gets higher priority
Releases resource faster
Summary
Priority inversion disrupts scheduling and must be handled using priority inheritance protocols.
46. What is spurious wakeup?
Definition
A spurious wakeup occurs when a thread wakes up from a waiting state without being explicitly signaled.
Concept
Occurs in condition variables due to:
Implementation details
OS scheduling
Problem
Thread assumes condition is true → may cause errors
Correct Practice
Always check condition in loop:
while (!condition) {
wait();
}
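A minimal condition-variable sketch showing the wait-in-a-loop pattern:

```python
import threading

cond = threading.Condition()
ready = False
result = []

def waiter():
    with cond:
        while not ready:        # re-check: guards against spurious wakeups
            cond.wait()
        result.append("proceed")

def signaler():
    global ready
    with cond:
        ready = True            # change the condition before notifying
        cond.notify()

t = threading.Thread(target=waiter)
t.start()
threading.Thread(target=signaler).start()
t.join()
print(result)  # ['proceed']
```

If the `while` were an `if`, a spurious wakeup could let the waiter proceed while `ready` is still false.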
Summary
Spurious wakeups require rechecking conditions to ensure correctness.
47. What is pipe vs message queue?
Definition
Both are IPC mechanisms used for communication between processes.
Pipe
Unidirectional
Simple byte stream
Used between related processes
Message Queue
Bidirectional
Structured messages
Works between unrelated processes
Comparison
Direction: pipe is unidirectional; message queue is bidirectional
Data: pipe carries an unstructured byte stream; message queue carries discrete, typed messages
Scope: unnamed pipes require related processes; message queues work between unrelated processes
Summary
Message queues are more flexible, while pipes are simpler and faster for basic communication.
48. What is shared memory in IPC?
Definition
Shared memory is an IPC mechanism where multiple processes access a common memory region.
Concept
Processes communicate by:
Reading/writing shared memory
Working
OS creates shared segment
Processes attach to it
Read/write directly
Code Concept
shmget();  // create or locate a shared segment
shmat();   // attach the segment to the process address space
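Python's multiprocessing.shared_memory module offers operations analogous to shmget/shmat (a single-process sketch; a second process would attach using the segment's name):

```python
from multiprocessing import shared_memory

# One process creates the segment (like shmget with IPC_CREAT)...
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# ...another process would attach by name (like shmat) and read it directly
other = shared_memory.SharedMemory(name=shm.name)
msg = bytes(other.buf[:5])
print(msg)  # b'hello'

other.close()
shm.close()
shm.unlink()   # free the segment
```

Note that nothing here synchronizes the two sides; real uses pair the segment with a semaphore or lock.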
Advantages
Fastest IPC method
No data copying
Disadvantages
Requires synchronization
Risk of race conditions
Summary
Shared memory provides high-speed communication but requires careful synchronization.
49. What is the difference between multitasking and multithreading?
Definition
Both involve concurrent execution but differ in unit of execution.
Differences
Unit: multitasking switches between processes; multithreading switches between threads of one process
Memory: processes have separate address spaces; threads share one
Overhead: process context switches are heavier than thread switches
Concept
Multitasking → multiple programs
Multithreading → multiple tasks within program
Example
Multitasking → Browser + Music player
Multithreading → Multiple tabs in browser
Summary
Multithreading is lighter and faster, while multitasking provides process-level isolation.
50. What is the goal of CPU scheduling?
Definition
The goal of CPU scheduling is to allocate CPU to processes in a way that maximizes system performance and fairness.
Objectives
1. Maximize CPU Utilization
Keep CPU busy
2. Maximize Throughput
Complete more processes
3. Minimize Turnaround Time
Faster completion
4. Minimize Waiting Time
Less time in queue
5. Minimize Response Time
Quick system response
Trade-offs
No single algorithm satisfies all goals
Balance required
Summary
CPU scheduling aims to optimize performance metrics while ensuring fairness among processes.