1. What is an Operating System?
Definition
An Operating System (OS) is system software that acts as an intermediary between users and computer hardware, managing system resources and providing services for program execution.
Concept
An OS abstracts the complexity of hardware and provides a controlled environment where applications can run efficiently. It ensures that multiple programs and users can share system resources without conflict.
Without an OS:
Programs cannot access hardware safely
No memory isolation exists
No scheduling or file management
Working Principle
User runs an application
Application requests resources via system calls
OS kernel processes the request
Hardware executes the operation
Results are returned to the application
Key Responsibilities
Resource management (CPU, memory, I/O)
Process scheduling
Security and protection
Providing abstractions (files, processes, virtual memory)
Example
When you open a browser:
OS allocates memory
Schedules CPU time
Handles disk and network access
Summary
An OS is the core manager of a computer system, enabling safe and efficient execution of programs.
2. What are the main functions of an Operating System?
Definition
The OS performs essential functions to manage hardware resources and provide services for applications.
Detailed Functions
1. Process Management
Creates and terminates processes
Schedules processes using algorithms
Handles synchronization and communication
2. Memory Management
Allocates memory dynamically
Keeps track of memory usage
Implements virtual memory
3. File System Management
Organizes data into files/directories
Provides file access methods
Maintains permissions
4. Device Management
Uses device drivers
Handles interrupts
Manages I/O operations
5. Security and Protection
Authentication (login systems)
Authorization (access control)
Data protection
6. User Interface
CLI (Terminal)
GUI (Windows, macOS)
7. Resource Allocation
The OS uses scheduling and allocation strategies to:
Avoid deadlocks
Maximize efficiency
Summary
The OS acts as a resource manager + control system + service provider.
3. What are the different types of Operating Systems?
Definition
Operating systems are categorized based on how they manage tasks, users, and resources.
Types with Concept
1. Batch Operating System
Jobs are grouped into batches
No user interaction during execution
Working:
Jobs submitted
Stored in queue
Executed sequentially
2. Time-Sharing Operating System
CPU time divided into slices
Multiple users interact simultaneously
Key Idea: Context switching
3. Distributed Operating System
Multiple systems work together
Appears as a single system
4. Network Operating System
Provides services over network
Centralized control
5. Real-Time Operating System (RTOS)
Strict time constraints
Deterministic response
6. Multiprogramming OS
Multiple processes in memory
CPU switches during I/O
7. Multiprocessing OS
Multiple CPUs execute tasks in parallel
Summary
Each OS type is optimized for specific use cases like performance, responsiveness, or reliability.
4. What is a process in an Operating System?
Definition
A process is a program in execution along with its execution context.
Concept
A program becomes a process when it is loaded into memory and starts execution.
Components of a Process
Program code
Program counter
CPU registers
Stack and heap
Open files
Process Control Block (PCB)
PCB stores:
Process ID
State
CPU registers
Memory info
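The PCB can be pictured as a small record per process. A minimal sketch in Python (field names are illustrative, not any real kernel's layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal Process Control Block sketch (illustrative fields only)."""
    pid: int
    state: str = "NEW"             # NEW, READY, RUNNING, WAITING, TERMINATED
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    memory_limits: tuple = (0, 0)  # (base, limit) of the address space

pcb = PCB(pid=42)
pcb.state = "READY"    # the OS updates the state as the process moves through its lifecycle
```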
Summary
A process is the execution instance of a program managed by the OS.
5. What are the different states of a process?
Definition
Process states represent the lifecycle stages of a process.
Process State Diagram (Concept)
States:
New → Ready → Running → Terminated is the typical path; a Running process moves to Waiting on an I/O request and returns to Ready when the I/O completes
Explanation
New: Process creation
Ready: Waiting for CPU
Running: Executing
Waiting: Waiting for I/O
Terminated: Finished
Transition Logic
Scheduler moves Ready → Running
I/O request causes Running → Waiting
I/O completion causes Waiting → Ready
Exit causes Running → Terminated
Summary
States help the OS track and manage process execution efficiently.
6. What is a thread?
Definition
A thread is the smallest unit of execution within a process.
Concept
Threads share:
Memory
Files
Resources
But have:
Separate program counters
Separate stacks
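The shared-memory-but-independent-execution idea can be seen directly with Python's threading module. A minimal sketch; the lock is needed precisely because all threads touch the same variable:

```python
import threading

counter = 0                      # shared by all threads (one address space)
lock = threading.Lock()          # required because the memory is shared

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # serialize updates to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()                    # each thread runs on its own stack
for t in threads:
    t.join()
# counter is now 4000: four threads, one shared memory location
```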
Types
User-level threads
Kernel-level threads
Summary
Threads enable concurrent execution within a process.
7. What is the difference between a process and a thread?
Concept
Processes are independent execution units, while threads are lightweight units within a process.
Key Differences
Processes have separate address spaces; threads share their process's address space
Creating and switching processes is expensive; thread operations are lightweight
One process crashing does not affect others; one thread crashing can take down its whole process
Summary
Threads improve performance, but processes provide isolation.
8. What is multitasking?
Definition
Multitasking is the ability of a system to execute multiple processes by rapidly switching between them.
Concept
Uses context switching to simulate parallel execution.
Types
Preemptive
Non-preemptive
Summary
Multitasking improves user experience and system utilization.
9. What is multiprocessing?
Definition
Multiprocessing involves using multiple processors to execute processes simultaneously.
Types
Symmetric multiprocessing
Asymmetric multiprocessing
Summary
Enables true parallel execution.
10. What is multiprogramming?
Definition
Multiprogramming allows multiple programs to reside in memory and share CPU.
Working
Multiple jobs loaded
CPU executes one
Switches when I/O occurs
Summary
Maximizes CPU utilization.
11. What is a time-sharing system?
Definition
A time-sharing system allows multiple users to interact with a system simultaneously.
Working
CPU divided into time slices
Rapid switching between users
Summary
Provides interactive computing.
12. What is a batch operating system?
Definition
A batch OS processes jobs in groups without interaction.
Working
Jobs queued
Executed sequentially
Summary
Efficient but lacks flexibility.
13. What is a real-time operating system (RTOS)?
Definition
RTOS ensures responses within strict deadlines.
Types
Hard RTOS
Soft RTOS
Summary
Used in time-critical systems.
14. What is the kernel?
Definition
The kernel is the core of the OS that manages hardware.
Functions
Memory management
Process scheduling
System calls
Summary
Kernel = control center of OS.
15. What is the difference between OS and Kernel?
Concept
OS is the full system; kernel is its core.
Difference
OS = kernel + system programs (shells, utilities, services)
Kernel = the privileged core handling scheduling, memory, and I/O
Summary
Kernel is a subset of OS.
16. What is OS structure?
Definition
OS structure refers to the internal design of an operating system and how its components are organized and interact.
Concept
Different OS structures are designed to balance:
Performance
Maintainability
Security
Types of OS Structures
1. Monolithic Structure
All OS services run in a single large kernel.
Working:
All components (file system, memory management, drivers) share the same space
Advantages:
Fast execution (no overhead of communication)
Disadvantages:
Hard to debug
Poor modularity
2. Layered Structure
OS is divided into layers, each built on top of another.
Working:
Each layer interacts only with adjacent layers
Advantages:
Easy to design and maintain
Disadvantages:
Performance overhead
3. Microkernel Structure
Only essential services are in kernel; others run in user space.
Core services:
Scheduling
Memory management
IPC
Advantages:
High security
Modular
Disadvantages:
Communication overhead
4. Hybrid Structure
Combination of monolithic and microkernel.
Used in: Windows, modern Linux variants
Summary
OS structure determines performance, flexibility, and reliability of the system.
17. What is Kernel Mode and User Mode?
Definition
Kernel Mode and User Mode are two execution modes that control access to system resources.
Concept
This separation ensures system security and stability.
Kernel Mode
Full access to hardware
Executes privileged instructions
Used by OS kernel
User Mode
Limited access
Cannot directly access hardware
Used by applications
Mode Switching
Application requests service
System call triggers switch to kernel mode
OS executes request
Returns to user mode
Summary
Mode separation prevents user programs from crashing the system or accessing restricted resources.
18. What are System Calls?
Definition
System calls are the interface through which user programs request services from the OS.
Concept
Applications cannot directly access hardware → must go through OS.
Working Mechanism
User program invokes system call
Trap instruction switches to kernel mode
OS performs requested operation
Control returns to user mode
Types of System Calls
Process Control: fork(), exit()
File Management: open(), read(), write()
Device Management
Information Maintenance
Communication (IPC)
Example
read(fd, buffer, size);
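The same sequence can be tried from Python, whose os module exposes thin wrappers over these POSIX calls. A minimal sketch (the temporary file exists only so there is something to read):

```python
import os, tempfile

# Create a demo file via the open/write/close system calls
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")             # system call: write
os.close(fd)

fd = os.open(path, os.O_RDONLY)    # trap into the kernel, get a file descriptor
data = os.read(fd, 5)              # equivalent of read(fd, buffer, size)
os.close(fd)
os.unlink(path)                    # clean up the demo file
```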
Summary
System calls provide a controlled gateway between applications and hardware.
19. What is Process Scheduling?
Definition
Process scheduling is the method by which the OS selects a process from the ready queue to execute.
Concept
Since CPU is limited, scheduling ensures:
Fairness
Efficiency
Maximum utilization
Types of Schedulers
1. Long-Term Scheduler
Selects processes to load into memory
2. Short-Term Scheduler
Chooses process for CPU execution
3. Medium-Term Scheduler
Suspends/resumes processes
Scheduling Queue
Ready queue
Waiting queue
Summary
Process scheduling ensures optimal CPU usage and smooth multitasking.
20. What is Context Switching?
Definition
Context switching is the process of saving the state of one process and loading another process’s state.
Concept
CPU switches between processes to simulate parallelism.
Steps
Save current process state (PCB)
Load next process state
Resume execution
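The steps above can be sketched as a toy model, with dicts standing in for the CPU's register file and the PCBs. This is a simulation of the idea, not real kernel code:

```python
import copy

def context_switch(cpu, current_pcb, next_pcb):
    """Save the running process's registers into its PCB, then load the next."""
    current_pcb["registers"] = copy.deepcopy(cpu)   # step 1: save state to PCB
    cpu.clear()
    cpu.update(next_pcb["registers"])               # step 2: load next state
    return cpu                                      # step 3: resume execution

cpu = {"pc": 100, "acc": 5}                  # registers of the running process
p1 = {"pid": 1, "registers": {}}             # currently running
p2 = {"pid": 2, "registers": {"pc": 200, "acc": 9}}  # previously saved
context_switch(cpu, p1, p2)
# cpu now holds p2's state; p1's state is preserved in its PCB
```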
Overhead
No useful work done
Frequent switching reduces performance
Summary
Context switching enables multitasking but introduces performance cost.
21. What is Interprocess Communication (IPC)?
Definition
IPC is a mechanism that allows processes to communicate and synchronize.
Concept
Processes are isolated → need IPC to cooperate.
Types
1. Shared Memory
Common memory space
Fast communication
2. Message Passing
Send/receive messages
Safer but slower
Summary
IPC is essential for process coordination and data exchange.
22. What is Shared Memory?
Definition
Shared memory is an IPC mechanism where multiple processes access a common memory region.
Working
OS creates shared segment
Processes attach to it
Read/write data
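These three steps can be sketched with an anonymous shared mapping (Unix-only, since it relies on fork(); here waitpid stands in for real synchronization such as a semaphore):

```python
import mmap, os, struct

# Anonymous mapping; with the default MAP_SHARED semantics it survives fork()
shared = mmap.mmap(-1, 4)

pid = os.fork()
if pid == 0:                       # child: attach is inherited, write data
    shared.seek(0)
    shared.write(struct.pack("i", 123))
    os._exit(0)

os.waitpid(pid, 0)                 # crude synchronization: wait for the child
shared.seek(0)
value = struct.unpack("i", shared.read(4))[0]   # parent sees the child's write
```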
Key Issue
Requires synchronization (mutex, semaphores)
Advantages
Very fast
Disadvantages
Risk of race conditions
Summary
Shared memory is efficient but unsafe without synchronization.
23. What is Message Passing?
Definition
Message passing allows processes to communicate via messages.
Types
Direct communication
Indirect (mailbox/queue)
Operations
send(message)
receive(message)
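The send/receive pair can be sketched over a kernel pipe (Unix-only because of fork()); the kernel, not shared memory, carries the message:

```python
import os

r, w = os.pipe()                   # kernel-managed message channel

pid = os.fork()
if pid == 0:                       # child: send(message)
    os.close(r)
    os.write(w, b"ping")
    os.close(w)
    os._exit(0)

os.close(w)
msg = os.read(r, 4)                # parent: receive(message)
os.close(r)
os.waitpid(pid, 0)                 # reap the child
```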
Advantages
No shared memory issues
Disadvantages
Slower
Summary
Message passing is safe but less efficient than shared memory.
24. What is CPU Scheduling?
Definition
CPU scheduling is the process of selecting a process from the ready queue for execution.
Concept
CPU scheduling improves:
Throughput
Response time
Types
Preemptive
Non-preemptive
Summary
CPU scheduling is critical for system performance optimization.
25. What are Scheduling Criteria?
Definition
Scheduling criteria are metrics used to evaluate scheduling algorithms.
Key Metrics
CPU Utilization → keep the CPU as busy as possible
Throughput → processes completed per unit time
Turnaround Time → time from submission to completion
Waiting Time → total time spent in the ready queue
Response Time → time from submission until the first response
Summary
These criteria help choose the best scheduling algorithm.
26. What is First Come First Serve (FCFS)?
Definition
FCFS executes processes in order of arrival.
Algorithm
Add processes to queue
Execute first process
Continue sequentially
Example
P1 → P2 → P3
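The convoy effect is easy to show numerically. A minimal sketch with illustrative burst times (24, 3, 3), all processes arriving at time 0:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process when all arrive at time 0."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # each process waits for everyone before it
        elapsed += burst
    return waits

# Convoy effect: the long first job (24) delays the two short ones
waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3
avg = sum(waits) / len(waits)            # (0 + 24 + 27) / 3 = 17.0
```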
Problem
Convoy effect (long job delays others)
Summary
FCFS is simple but inefficient.
27. What is Shortest Job First (SJF)?
Definition
SJF selects the process with the smallest burst time.
Algorithm
Select shortest job
Execute it
Repeat
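A sketch of non-preemptive SJF for processes that all arrive at time 0, with illustrative burst times 24, 3, 3; running the short jobs first cuts the average wait sharply:

```python
def sjf_waiting_times(burst_times):
    """Waiting time per process when the shortest job always runs first
    (non-preemptive, all processes arrive at time 0)."""
    waits, elapsed = {}, 0
    # Visit processes in order of increasing burst time
    for i in sorted(range(len(burst_times)), key=lambda i: burst_times[i]):
        waits[i] = elapsed
        elapsed += burst_times[i]
    return [waits[i] for i in range(len(burst_times))]

waits = sjf_waiting_times([24, 3, 3])    # long job now runs last
avg_sjf = sum(waits) / len(waits)        # (6 + 0 + 3) / 3 = 3.0
```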
Advantage
Minimizes waiting time
Problem
Starvation
Burst time prediction needed
Summary
SJF is optimal for average waiting time but impractical, since burst times must be predicted in advance.
28. What is Shortest Remaining Time First (SRTF)?
Definition
SRTF is preemptive SJF.
Algorithm
Select process with shortest remaining time
Preempt if shorter job arrives
Advantage
Better response time
Problem
High overhead
Summary
SRTF improves SJF but adds complexity.
29. What is Priority Scheduling?
Definition
Processes are executed based on priority.
Algorithm
Assign priority
Select highest priority
Problem
Starvation
Solution
Aging
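One possible aging scheme can be sketched as follows (the exact policy is illustrative; real systems use many variants). Lower numbers mean higher priority, and every pass the waiting processes creep upward so none starves forever:

```python
def pick_with_aging(processes, age_boost=1):
    """Choose the highest-priority process (lower number = higher priority),
    then age the rest so long-waiting processes eventually run."""
    chosen = min(processes, key=lambda p: p["priority"])
    for p in processes:
        if p is not chosen and p["priority"] > 0:
            p["priority"] -= age_boost   # waiting raises effective priority
    return chosen

procs = [{"pid": 1, "priority": 5}, {"pid": 2, "priority": 2}]
first = pick_with_aging(procs)           # pid 2 runs; pid 1 ages from 5 to 4
```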
Summary
Priority scheduling needs fairness mechanisms.
30. What is Round Robin (RR)?
Definition
Round Robin allocates CPU time in fixed time slices.
Algorithm
Assign time quantum
Execute process for quantum
Move to next process
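These three steps can be simulated with a simple queue. A sketch with illustrative burst times (5, 3, 1) and a quantum of 2, all processes arriving at time 0:

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Return (completion_order, completion_times) for processes that all
    arrive at time 0, scheduled with a fixed time quantum."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    time, order, done = 0, [], {}
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((i, remaining - run))  # unfinished: back of the queue
        else:
            order.append(i)                     # finished within its slice
            done[i] = time
    return order, done

order, done = round_robin_completion([5, 3, 1], quantum=2)
# The short job (P3) finishes first at t=5 even though it arrived last in the queue
```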
Key Factor
Time quantum size
Advantages
Fair
Good response time
Disadvantages
Context switching overhead
Summary
Round Robin is widely used in time-sharing systems.
31. What is spooling?
Definition
Spooling (Simultaneous Peripheral Operations On-Line) is a technique where data is temporarily stored in a buffer (usually disk) so that a device can access it at its own speed.
Concept
Spooling is used to handle speed mismatch between devices. For example, a CPU is much faster than a printer, so data is first stored on disk and then sent to the printer sequentially.
Working
Multiple jobs send output to a spool (disk buffer)
Jobs are queued
Device (e.g., printer) processes them one by one
Example
Print queue in a printer
Advantages
Efficient device utilization
Allows CPU and I/O to overlap (the CPU continues while the device drains the spool)
Summary
Spooling enables asynchronous processing of slow I/O devices using disk as an intermediate buffer.
32. What is caching?
Definition
Caching is a technique of storing frequently accessed data in a small, fast memory to reduce access time.
Concept
Instead of repeatedly accessing slower storage (such as disk or main memory), frequently used data is kept in a small, fast cache for quick retrieval.
Working
Request data
Check cache
If found → cache hit
Else → cache miss → fetch from main memory
Store in cache for future use
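The hit/miss cycle above can be sketched with a dictionary standing in for the cache and a trivial function standing in for the slow backing store:

```python
cache, hits, misses = {}, 0, 0

def slow_square(n):
    return n * n          # stand-in for an expensive memory/disk access

def cached_square(n):
    global hits, misses
    if n in cache:        # cache hit: answer without the slow path
        hits += 1
        return cache[n]
    misses += 1           # cache miss: fetch, then store for future use
    cache[n] = slow_square(n)
    return cache[n]

for n in (3, 4, 3, 3, 4):
    cached_square(n)
# 2 misses (first 3, first 4), then 3 hits on the repeats
```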
Types
CPU cache (L1, L2, L3)
Disk cache
Web cache
Advantages
Faster data access
Improves performance
Summary
Caching improves system speed by reducing access time to frequently used data.
33. What is buffering?
Definition
Buffering is the process of storing data temporarily in memory while it is being transferred between two devices.
Concept
Used to handle differences in data transfer speeds between devices.
Working
Data is written to buffer
Buffer holds data temporarily
Data is transferred to destination
Types
Single buffering
Double buffering
Circular buffering
Example
Streaming video uses buffering
Summary
Buffering smooths data flow between devices with different speeds.
34. What is a system call?
Definition
A system call is a mechanism through which a user program requests services from the operating system.
Concept
User programs cannot directly access hardware → must use system calls.
Working
User program invokes system call
Switch to kernel mode
OS performs operation
Return to user mode
Examples
fork() → create process
read() → read file
write() → write file
Summary
System calls act as a bridge between user programs and OS kernel.
35. What is an interrupt?
Definition
An interrupt is a signal that temporarily stops the current execution of the CPU and transfers control to an interrupt handler.
Concept
Interrupts allow the CPU to respond to events asynchronously.
Types
Hardware interrupt (e.g., keyboard input)
Software interrupt
Working
Interrupt signal occurs
CPU pauses current process
Executes interrupt handler
Resumes execution
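On POSIX systems, signals give user programs a rough analogue of this cycle: normal flow pauses, a handler runs, and execution resumes where it left off. A minimal sketch (Unix-only, since SIGUSR1 is a POSIX signal):

```python
import signal

events = []

def handler(signum, frame):
    events.append(signum)             # the "interrupt handler" runs

signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)   # the "interrupt" arrives
# normal flow resumes here after the handler returns
```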
Summary
Interrupts enable the system to handle events efficiently without polling.
36. What is the difference between interrupt and trap?
Definition
Interrupts and traps are mechanisms that transfer control to the OS, but differ in origin and purpose.
Key Differences
Interrupt → triggered by hardware, asynchronous to the running program (e.g., keyboard, timer)
Trap → triggered by the running program itself, synchronous (e.g., system call, divide by zero)
Summary
Interrupt = external event
Trap = internal event
37. What is a bootstrap program?
Definition
A bootstrap program is a small program that initializes the system and loads the operating system into memory during startup.
Concept
When a computer starts, hardware has no OS loaded. Bootstrap program loads the OS from disk.
Working
Stored in ROM
Executes on startup
Loads OS kernel into memory
Transfers control to OS
Example
BIOS / UEFI
Summary
Bootstrap program is responsible for starting the operating system.
38. What is context switching?
Definition
Context switching is the process of saving the state of one process and loading another process’s state.
Concept
Enables multitasking by allowing CPU to switch between processes.
Steps
Save current process state (PCB)
Load next process state
Resume execution
Overhead
Time consumed without useful work
Summary
Context switching allows multitasking but introduces performance overhead.
39. What is CPU scheduling?
Definition
CPU scheduling is the process of selecting a process from the ready queue for execution.
Concept
Ensures efficient CPU usage among multiple processes.
Types
Preemptive
Non-preemptive
Summary
CPU scheduling improves performance and fairness.
40. What is FCFS scheduling algorithm?
Definition
FCFS executes processes in the order they arrive.
Algorithm
Insert processes into queue
Execute first process
Continue sequentially
Problem
Convoy effect
Summary
Simple but inefficient.
41. What is Round Robin scheduling?
Definition
Round Robin assigns a fixed time slice to each process.
Algorithm
Assign time quantum
Execute process for that time
Move to next process
Key Factor
Time quantum size
Summary
Ensures fairness but has switching overhead.
42. What is Priority Scheduling?
Definition
Processes are scheduled based on priority levels.
Algorithm
Assign priority
Execute highest priority process
Problem
Starvation
Solution
Aging
Summary
Priority scheduling requires fairness control.
43. What is a dispatcher?
Definition
The dispatcher is a component of the OS that gives control of the CPU to the process selected by the scheduler.
Functions
Context switching
Switching to user mode
Starting process execution
Summary
Dispatcher executes the decision made by the scheduler.
44. What is dispatch latency?
Definition
Dispatch latency is the time taken by the dispatcher to stop one process and start another.
Concept
Includes:
Context switching time
Mode switching
Impact
High latency reduces performance
Summary
Dispatch latency should be minimized for better system efficiency.
45. What is a zombie process?
Definition
A zombie process is a process that has completed execution but still has an entry in the process table.
Concept
Occurs when parent has not read child’s exit status.
Characteristics
No execution
Still occupies PID
Solution
Parent calls wait()
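The wait() fix can be sketched with Python's POSIX wrappers (Unix-only; the child's exit code 7 is arbitrary). Between the child's exit and the parent's waitpid, the child is a zombie:

```python
import os

pid = os.fork()
if pid == 0:
    os._exit(7)                    # child terminates; it stays a zombie
                                   # until the parent collects its status

# Parent reaps the child, removing its process-table entry
reaped, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)   # recovers the child's exit code
```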
Summary
Zombie processes are dead processes waiting to be cleaned up.
46. What is an orphan process?
Definition
An orphan process is a process whose parent has terminated.
Concept
Adopted by init process (PID 1).
Behavior
Continues execution
Managed by OS
Summary
Orphan processes are reassigned to system processes.
47. What is fragmentation?
Definition
Fragmentation is the inefficient use of memory where free space is divided into small unusable pieces.
Types
Internal fragmentation
External fragmentation
Summary
Fragmentation reduces memory utilization efficiency.
48. What is internal fragmentation?
Definition
Internal fragmentation occurs when allocated memory is larger than required, leaving unused space inside allocated blocks.
Example
Allocated 10 KB, used 7 KB → 3 KB wasted
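The waste is simple arithmetic once the block size is fixed. A sketch (block sizes are illustrative):

```python
def internal_fragmentation(request_kb, block_kb):
    """Wasted space when memory is handed out in fixed-size blocks."""
    blocks = -(-request_kb // block_kb)        # ceiling division
    return blocks * block_kb - request_kb      # allocated minus used

waste = internal_fragmentation(7, 10)   # one 10 KB block, 7 KB used -> 3 KB lost
```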
Summary
Wasted space inside allocated memory.
49. What is external fragmentation?
Definition
External fragmentation occurs when free memory is scattered into small non-contiguous blocks.
Problem
Cannot allocate large contiguous memory
Solution
Compaction
Paging
Summary
Wasted space outside allocated memory blocks.
50. What is thrashing?
Definition
Thrashing is a condition where the system spends more time swapping pages in and out of memory than executing processes.
Concept
Occurs when:
Too many processes
Insufficient memory
Symptoms
Low CPU utilization (the CPU sits idle waiting for pages)
High paging/disk activity
Low throughput
Solution
Reduce multiprogramming
Increase memory
Summary
Thrashing severely degrades system performance due to excessive paging.