1. What is the difference between Stack and Queue?
Definition
Stack and Queue are both linear data structures, but they differ in how elements are inserted and removed.
Stack → Follows LIFO (Last In First Out)
Queue → Follows FIFO (First In First Out)
Key Differences
Stack → Insertion and deletion happen at the same end (the top)
Queue → Insertion happens at the rear, deletion at the front
Stack → Needs one pointer (top); Queue → Needs two pointers (front and rear)
Applications
Stack → Undo operations, recursion, expression evaluation
Queue → Scheduling, BFS, request handling
2. How can you implement a Stack using Arrays?
Definition
A stack can be implemented using an array by maintaining a variable called top to track the last inserted element.
Implementation
Use an array arr[]
Initialize top = -1
Operations
Push
Increment top
Insert element at arr[top]
Pop
Return arr[top]
Decrement top
Peek
Return arr[top]
Example (Pseudo Code)
push(x):
    top = top + 1
    arr[top] = x

pop():
    x = arr[top]
    top = top - 1
    return x
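The pseudocode above can be turned into a small runnable sketch. The class name `ArrayStack` and the fixed `capacity` are illustrative choices, not part of the original text:

```python
# Minimal array-based stack following the pseudocode above.
class ArrayStack:
    def __init__(self, capacity=10):
        self.arr = [None] * capacity
        self.top = -1                      # -1 means the stack is empty

    def push(self, x):
        if self.top == len(self.arr) - 1:
            raise OverflowError("stack is full")
        self.top += 1
        self.arr[self.top] = x

    def pop(self):
        if self.top == -1:
            raise IndexError("stack is empty")
        x = self.arr[self.top]
        self.top -= 1
        return x

    def peek(self):
        return self.arr[self.top]

s = ArrayStack()
s.push(1)
s.push(2)
print(s.pop())   # 2 — last in, first out
```

The overflow/underflow checks are additions the pseudocode omits; a production version would typically grow the array instead of raising.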
3. How can you implement a Stack using Linked Lists?
Definition
A stack can be implemented using a linked list, where insertion and deletion happen at the head node.
Implementation
Use a linked list
Head node acts as the top
Operations
Push
Create a new node
Point it to current head
Update head
Pop
Store head value
Move head to next node
Example (Pseudo Code)
push(x):
    newNode = create node
    newNode.next = head
    head = newNode

pop():
    x = head.data
    head = head.next
    return x
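A minimal Python sketch of the linked-list stack, with the head node acting as the top (the `Node` and `LinkedStack` names are illustrative):

```python
# Linked-list stack: push and pop both work at the head.
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class LinkedStack:
    def __init__(self):
        self.head = None

    def push(self, x):
        self.head = Node(x, self.head)   # new node points to the old head

    def pop(self):
        if self.head is None:
            raise IndexError("stack is empty")
        x = self.head.data
        self.head = self.head.next
        return x

s = LinkedStack()
s.push(10)
s.push(20)
print(s.pop())   # 20
```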
4. How can you implement a Queue using Arrays?
Definition
A queue can be implemented using an array by maintaining two pointers: front and rear.
Implementation
Use array arr[]
Initialize front = 0, rear = -1
Operations
Enqueue
Increment rear
Insert element at arr[rear]
Dequeue
Return arr[front]
Increment front
Example (Pseudo Code)
enqueue(x):
    rear = rear + 1
    arr[rear] = x

dequeue():
    x = arr[front]
    front = front + 1
    return x
5. How can you implement a Queue using Stacks?
Definition
A queue can be implemented using two stacks to maintain FIFO order.
Implementation
Use two stacks: stack1 and stack2
Operations
Enqueue
Push element into stack1
Dequeue
If stack2 is empty:
Move all elements from stack1 to stack2
Pop from stack2
Example (Pseudo Code)
enqueue(x):
    stack1.push(x)

dequeue():
    if stack2 is empty:
        while stack1 not empty:
            stack2.push(stack1.pop())
    return stack2.pop()
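A sketch of the two-stack queue, using Python lists as stacks (`append` pushes, `pop` pops). Moving elements into `stack2` only when it is empty is what makes dequeue O(1) amortized:

```python
# FIFO queue built from two LIFO stacks.
class QueueViaStacks:
    def __init__(self):
        self.stack1 = []   # receives new elements
        self.stack2 = []   # serves dequeues in FIFO order

    def enqueue(self, x):
        self.stack1.append(x)

    def dequeue(self):
        if not self.stack2:              # refill only when the out-stack is empty
            while self.stack1:
                self.stack2.append(self.stack1.pop())
        return self.stack2.pop()

q = QueueViaStacks()
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)
print(q.dequeue())   # 1 — first in, first out
```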
6. How do you implement Stack using Queues?
Definition
A stack can be implemented using queues by manipulating queue operations in such a way that the LIFO (Last In First Out) behavior of a stack is achieved.
Implementation (Using One Queue)
Use a single queue q
Ensure that the most recently inserted element comes to the front
Working Principle
In a queue, elements are processed in FIFO order. To simulate LIFO:
After inserting a new element, move all previous elements behind it
This makes the newly inserted element appear at the front of the queue
Operations
Push
Enqueue the new element
Rotate the queue by dequeuing and enqueuing all previous elements
Pop
Simply dequeue the front element (which is the last inserted element)
Top
Return the front element of the queue
Example
Push sequence: 10, 20, 30
Push(10) → [10]
Push(20) → [20, 10]
Push(30) → [30, 20, 10]
Pop → removes 30
Example (Pseudo Code)
push(x):
    q.enqueue(x)
    repeat (size - 1) times:      (size = number of elements in q after the enqueue)
        q.enqueue(q.dequeue())

pop():
    return q.dequeue()
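The same idea in runnable form, using `collections.deque` as the single queue (only `append` and `popleft` are used, i.e. enqueue and dequeue):

```python
from collections import deque

# Stack on one queue: after each push, rotate the older elements behind it.
class StackViaQueue:
    def __init__(self):
        self.q = deque()

    def push(self, x):
        self.q.append(x)
        for _ in range(len(self.q) - 1):     # rotate all previous elements
            self.q.append(self.q.popleft())

    def pop(self):
        return self.q.popleft()              # front is the last element pushed

    def top(self):
        return self.q[0]

s = StackViaQueue()
s.push(10)
s.push(20)
s.push(30)
print(list(s.q))   # [30, 20, 10], matching the example above
```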
7. What is collision in Hashing?
Definition
A collision occurs when two or more different keys are mapped to the same index in a hash table by a hash function.
Explanation
A hash function converts a key into an index
Ideally, each key should map to a unique index
However, due to limited table size, multiple keys may map to the same index
Why Collisions Occur
Finite size of hash table
Large number of possible keys
Poorly designed hash function
Example
If h(key) = key % 5:
h(10) = 0
h(15) = 0
Both keys map to index 0 → collision
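The collision in the example above can be checked directly:

```python
# h(key) = key % 5: keys 10 and 15 both land at index 0.
def h(key, table_size=5):
    return key % table_size

print(h(10), h(15))   # 0 0 → two different keys, same index: a collision
```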
8. What are collision resolution techniques?
Definition
Collision resolution techniques are methods used to handle collisions in a hash table, ensuring that all elements can still be stored and accessed correctly.
Techniques
1. Chaining
Each index stores a linked list of elements
Multiple elements can exist at the same index
Example
Index 0 → 10 → 15 → 20
2. Open Addressing
When a collision occurs, find another empty slot in the table.
a) Linear Probing
Check the next available slot sequentially
Formula: (index + i) % size
b) Quadratic Probing
Check positions using quadratic increments
Formula: (index + i²) % size
c) Double Hashing
Use a second hash function to find the next position
9. What is a good Hash Function?
Definition
A good hash function is one that distributes keys uniformly across the hash table, minimizing collisions and ensuring efficient performance.
Characteristics
Uniform Distribution – Keys spread evenly across indices
Low Collision Rate – Different keys map to different indices
Deterministic – Same key always gives same index
Fast Computation – Should be efficient to compute
Uses Entire Key – Considers all parts of the key
Example
A simple hash function:
h(key) = key % table_size
A better function ensures more uniform spread for different inputs.
10. What is the load factor in Hashing?
Definition
The load factor (α) represents how full a hash table is, and is used to measure its efficiency.
Formula
Load Factor (α) = Number of elements / Size of hash table
Explanation
Indicates how many elements are stored relative to table size
Helps decide when to resize or rehash the table
Impact of Load Factor
Low load factor → fewer collisions, more unused space
High load factor → more collisions, slower performance
Example
If 7 elements are stored in a table of size 10:
α = 7 / 10 = 0.7
This means the table is 70% full
11. What is separate chaining?
Definition
Separate chaining is a collision resolution technique in hashing where each index of the hash table maintains a collection (usually a linked list) of all elements that hash to the same index.
Working Principle
A hash function maps keys to indices in a hash table
When multiple keys map to the same index, instead of overwriting, they are stored in a list at that index
Each index acts as a bucket that can hold multiple elements
Example
If h(key) = key % 5:
h(10) = 0
h(15) = 0
h(20) = 0
All elements are stored at index 0 as:
Index 0 → 10 → 15 → 20
Operations
Insertion → Add element to the linked list at computed index
Search → Traverse the list at that index
Deletion → Remove element from the list
Time Complexity
Average Case: O(1) (if chains are short)
Worst Case: O(n) (if all elements fall into one chain)
Advantages
Simple and easy to implement
Still works when the load factor exceeds 1 (performance degrades gradually)
No need to search for empty slots
Disadvantages
Requires extra memory for linked lists
Performance degrades if chains become long
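A minimal separate-chaining table, sketched with a Python list per bucket instead of a linked list (the `ChainedHashTable` name and key/value interface are illustrative):

```python
# Separate chaining: each bucket holds every (key, value) pair that hashes there.
class ChainedHashTable:
    def __init__(self, size=5):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def insert(self, key, value):
        bucket = self.buckets[key % self.size]
        for pair in bucket:
            if pair[0] == key:          # key already present → update
                pair[1] = value
                return
        bucket.append([key, value])     # collision is fine: just extend the chain

    def search(self, key):
        for k, v in self.buckets[key % self.size]:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.insert(10, "a")
t.insert(15, "b")   # 10 and 15 both hash to index 0 and coexist there
```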
12. What is open addressing?
Definition
Open addressing is a collision resolution technique in which all elements are stored within the hash table itself, and collisions are resolved by finding another empty slot using a probing sequence.
Working Principle
When a collision occurs, the algorithm searches for another free position in the table
The sequence of positions checked is determined by a probing method
Types of Open Addressing
Linear Probing
Quadratic Probing
Double Hashing
Example
If index 2 is occupied:
Check index 3 → if free, insert
Otherwise, continue probing
Time Complexity
Average Case: O(1)
Worst Case: O(n)
Advantages
No extra memory required
Better cache performance due to contiguous storage
Disadvantages
Clustering problem (especially in linear probing)
Performance decreases as table becomes full
Deletion is more complex
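Linear probing, the simplest open-addressing scheme, can be sketched as follows (insert-only, to keep the deletion complications out of scope):

```python
# Open addressing with linear probing: try (index + i) % size for i = 0, 1, 2, ...
class LinearProbingTable:
    def __init__(self, size=7):
        self.size = size
        self.slots = [None] * size      # None marks an empty slot

    def insert(self, key):
        index = key % self.size
        for i in range(self.size):
            probe = (index + i) % self.size
            if self.slots[probe] is None:
                self.slots[probe] = key
                return probe
        raise OverflowError("table is full")

t = LinearProbingTable()
t.insert(2)   # index 2 is free → stored at 2
t.insert(9)   # 9 % 7 = 2 is occupied → probes to index 3
```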
13. What is probing in Hashing?
Definition
Probing is the process of searching for an alternative index in a hash table when a collision occurs.
Working Principle
When the computed index is occupied, probing determines the next position to check
Continues until an empty slot is found
Types of Probing
Linear Probing – Sequential checking
Quadratic Probing – Quadratic jumps
Double Hashing – Uses second hash function
Importance
Ensures that all elements can be stored even when collisions occur
Plays a key role in open addressing techniques
14. What is quadratic probing?
Definition
Quadratic probing is a collision resolution technique where the next index is calculated using a quadratic function of the probe number.
Formula
New index = (h(key) + i²) % table_size
Where i = 1, 2, 3, …
Working Principle
Instead of checking consecutive slots, it checks positions at increasing distances
Reduces clustering compared to linear probing
Example
If collision occurs at index 2:
i = 1 → (2 + 1²) = 3
i = 2 → (2 + 4) = 6
i = 3 → (2 + 9) = 11 (then reduced modulo table_size)
Advantages
Reduces primary clustering
Better distribution of elements
Disadvantages
Still suffers from secondary clustering
May not check all slots if not properly designed
15. What is double hashing?
Definition
Double hashing is a collision resolution technique that uses two hash functions to calculate the next index, improving distribution and reducing clustering.
Formula
New index = (h1(key) + i × h2(key)) % table_size
Working Principle
First hash function determines the initial index
Second hash function determines the step size
Helps avoid patterns in probing
Example
If:
h1(key) = key % 7
h2(key) = 1 + (key % 5)
Then next positions are computed using both functions
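Using the two functions from the example, the probe sequence for a key can be generated directly:

```python
# Double hashing: probe positions (h1 + i*h2) % table_size for i = 0, 1, 2, ...
def probe_sequence(key, table_size=7, probes=4):
    h1 = key % 7
    h2 = 1 + (key % 5)          # step size; never 0, so probing always advances
    return [(h1 + i * h2) % table_size for i in range(probes)]

print(probe_sequence(10))   # h1 = 3, h2 = 1 → [3, 4, 5, 6]
```

Because the step size depends on the key, two keys that collide at the same initial index usually follow different probe paths, which is what breaks up clustering.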
Advantages
Minimizes clustering
Better performance than linear and quadratic probing
Disadvantages
More complex implementation
Requires careful design of second hash function
16. What is the height of a tree?
Definition
The height of a tree is the number of edges in the longest path from the root node to any leaf node.
Explanation
It represents the maximum depth of the tree
Height is measured in terms of edges, not nodes
Example
    A
   / \
  B   C
 /
D
Longest path: A → B → D
Number of edges = 2 → Height = 2
Special Cases
Empty tree → height = -1
Single node tree → height = 0
Importance
Determines efficiency of tree operations
Affects time complexity of search, insert, delete
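The edge-counting definition, including the height −1 convention for an empty tree, translates into a short recursive function:

```python
# Height in edges: empty tree = -1, single node = 0.
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def height(node):
    if node is None:
        return -1                                        # empty tree
    return 1 + max(height(node.left), height(node.right))

# The example tree: A with children B and C; B has child D.
root = TreeNode("A", TreeNode("B", TreeNode("D")), TreeNode("C"))
print(height(root))   # 2 — the path A → B → D has two edges
```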
17. What is a balanced binary tree?
Definition
A balanced binary tree is a binary tree in which the difference in height between the left and right subtrees of any node is at most 1.
Working Principle
Ensures that the tree remains approximately balanced
Prevents one side from becoming significantly deeper than the other
Example
Balanced Tree:
  10
 /  \
5    15
Unbalanced Tree:
    10
   /
  5
 /
2
Importance
Ensures operations like search, insert, delete run in O(log n)
Prevents worst-case degradation to O(n)
Examples of Balanced Trees
AVL Tree
Red-Black Tree
18. What is a full binary tree?
Definition
A full binary tree is a binary tree in which every node has either 0 or 2 children.
Explanation
Nodes cannot have exactly one child
Each node is either:
A leaf node (0 children)
An internal node (2 children)
Example
    A
   / \
  B   C
 / \
D   E
All nodes have either 0 or 2 children
Properties
Total nodes = 2n + 1 (where n = number of internal nodes)
Number of leaf nodes = internal nodes + 1
Importance
Used in designing efficient tree structures
Ensures proper structure for recursive algorithms
19. What is a complete binary tree?
Definition
A complete binary tree is a binary tree in which all levels are completely filled except possibly the last level, and the last level is filled from left to right without gaps.
Explanation
Every level except the last must be fully filled
In the last level, nodes must be as far left as possible
No empty spaces are allowed between nodes
Example
      1
    /   \
   2     3
  / \   /
 4   5 6
This is complete because the last level is filled from left to right.
Properties
Efficiently represented using arrays
Used in heap implementations
Height is ⌊log₂ n⌋
Importance
Ensures compact structure
Minimizes height → improves efficiency of operations
20. What is a perfect binary tree?
Definition
A perfect binary tree is a binary tree in which all internal nodes have exactly two children and all leaf nodes are at the same level.
Explanation
All levels are completely filled
Every node has either 2 or 0 children
Tree is perfectly symmetrical
Example
      1
    /   \
   2     3
  / \   / \
 4   5 6   7
Properties
Total nodes = 2^(h+1) − 1
Number of leaf nodes = 2^h
Height = log₂(n + 1) − 1
Importance
Ideal structure with maximum efficiency
Used in theoretical analysis
21. What is a heap data structure?
Definition
A heap is a complete binary tree that satisfies the heap property, where the value of each node is ordered with respect to its children.
Types of Heap
Min Heap – Parent node is smaller than children
Max Heap – Parent node is greater than children
Key Features
Always a complete binary tree
Efficiently implemented using arrays
Root node contains minimum or maximum element
Operations
Insertion
Deletion (extract min/max)
Heapify (maintaining heap property)
Time Complexity
Insert → O(log n)
Delete → O(log n)
Access root → O(1)
Applications
Priority queues
Heap sort
Scheduling algorithms
22. Difference between Min Heap and Max Heap
Definition
Both are types of heaps that differ in how elements are ordered within the tree.
Key Differences
Min Heap → Root holds the smallest element; every parent ≤ its children
Max Heap → Root holds the largest element; every parent ≥ its children
Min Heap → Extracts the minimum first; Max Heap → Extracts the maximum first
Example
Min Heap:
  10
 /  \
20   30
Max Heap:
  50
 /  \
30   20
23. What is a priority queue?
Definition
A priority queue is a data structure where each element has a priority, and elements are processed based on priority rather than insertion order.
Explanation
Higher priority elements are removed first
If priorities are equal, FIFO order may be followed
Implementation
Using heap (most common)
Using arrays or linked lists
Operations
Insert element with priority
Remove element with highest/lowest priority
Time Complexity (Heap)
Insert → O(log n)
Delete → O(log n)
24. What are the applications of priority queues?
Definition
Priority queues are used in applications where elements must be processed based on priority rather than arrival time.
Applications
CPU scheduling
Dijkstra’s shortest path algorithm
Prim’s minimum spanning tree
Huffman coding
Event-driven simulations
Explanation
Tasks with higher priority are executed first
Ensures efficient management of resources
25. What is tree traversal and why is it important?
Definition
Tree traversal is the process of visiting all nodes of a tree in a specific order.
Types of Traversal
Depth First Traversal (DFS)
Inorder
Preorder
Postorder
Breadth First Traversal (BFS)
Level Order
Importance
Used to access and process all nodes
Helps in searching, updating, and deleting nodes
Used in expression evaluation and tree operations
26. What is level order traversal?
Definition
Level order traversal is a tree traversal method in which nodes are visited level by level from left to right, starting from the root.
Working Principle
Uses a queue
First visit root
Then visit all nodes at the next level
Continue level by level
Example
    A
   / \
  B   C
 / \
D   E
Traversal:
A → B → C → D → E
Algorithm Steps
Enqueue root
Dequeue node and process it
Enqueue its children
Repeat until queue is empty
Time Complexity
O(n) (visits all nodes once)
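The algorithm steps above can be written directly with a queue (`collections.deque` here), reproducing the example traversal:

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    result = []
    q = deque([root] if root else [])
    while q:
        node = q.popleft()          # dequeue and process
        result.append(node.val)
        if node.left:               # enqueue children left to right
            q.append(node.left)
        if node.right:
            q.append(node.right)
    return result

# The example tree: A with children B and C; B has children D and E.
root = TreeNode("A", TreeNode("B", TreeNode("D"), TreeNode("E")), TreeNode("C"))
print(level_order(root))   # ['A', 'B', 'C', 'D', 'E']
```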
27. What is a binary heap?
Definition
A binary heap is a complete binary tree that satisfies the heap property, where each node is ordered with respect to its children.
Types of Binary Heap
Min Heap – Parent node is smaller than or equal to its children
Max Heap – Parent node is greater than or equal to its children
Structure
Always a complete binary tree
Commonly implemented using an array
For index i:
Left child → 2i + 1
Right child → 2i + 2
Parent → (i - 1) / 2 (integer division)
Operations
Insert → Add element and maintain heap property (heapify up)
Delete (extract min/max) → Remove root and reheapify (heapify down)
Time Complexity
Insert → O(log n)
Delete → O(log n)
Access root → O(1)
Applications
Priority queues
Heap sort
Scheduling problems
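Python's standard-library `heapq` module implements exactly this structure, a min heap stored in a plain list with the index arithmetic above:

```python
import heapq

h = []
for x in [30, 10, 20]:
    heapq.heappush(h, x)        # O(log n) insert with heapify-up

smallest = heapq.heappop(h)     # O(log n) extract-min: removes the root
print(smallest, h[0])           # 10 20 — the new root is the next minimum
```

For a max heap, a common trick is to push negated values and negate again on extraction, since `heapq` only provides a min heap.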
28. What is dynamic programming?
Definition
Dynamic programming (DP) is an optimization technique used to solve problems by breaking them into overlapping subproblems and storing their results to avoid recomputation.
Key Concepts
Overlapping Subproblems – Same subproblems are solved multiple times
Optimal Substructure – Optimal solution can be built from subproblems
Approaches
Memoization (Top-Down) – Store results of recursive calls
Tabulation (Bottom-Up) – Build solution iteratively
Example
Fibonacci:
Without DP → exponential time
With DP → linear time
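The Fibonacci case in code, using top-down memoization via the standard-library `functools.lru_cache`, so each subproblem is computed once:

```python
from functools import lru_cache

# Memoized Fibonacci: overlapping subproblems are cached → linear time.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
```

The tabulation (bottom-up) variant would fill an array from `fib(0)` upward instead of recursing.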
Applications
Fibonacci series
Knapsack problem
Longest Common Subsequence
Shortest path problems
29. Difference between Greedy and Dynamic Programming
Definition
Both are algorithmic techniques used for optimization problems but differ in approach.
Key Differences
Greedy → Makes one locally optimal choice per step and never revisits it
DP → Solves and stores all overlapping subproblems, then combines them
Greedy → Correct only when the greedy-choice property holds; DP → Correct whenever the problem has optimal substructure
Greedy → Usually faster and simpler; DP → Usually needs extra memory for stored results
Explanation
Greedy → Chooses best option at each step
DP → Explores all possibilities and stores results
30. What is Divide and Conquer strategy?
Definition
Divide and Conquer is an algorithm design strategy that divides a problem into smaller subproblems, solves them recursively, and combines their results.
Steps
Divide – Break problem into subproblems
Conquer – Solve subproblems recursively
Combine – Merge solutions
Examples
Merge Sort
Quick Sort
Binary Search
Advantages
Simplifies complex problems
Efficient for large datasets
31. What are edge cases in algorithm design?
Definition
Edge cases are special or extreme input conditions that may cause an algorithm to behave incorrectly if not handled properly.
Examples
Empty input
Single element
Maximum or minimum values
Duplicate values
Importance
Ensures correctness and robustness of algorithm
Prevents runtime errors and bugs
32. What is amortized time complexity?
Definition
Amortized time complexity is the average time taken per operation over a sequence of operations, even if some individual operations are costly.
Explanation
Some operations may take longer
But overall average cost remains low
Example
Dynamic array resizing:
Most insertions → O(1)
Occasional resizing → O(n)
Amortized cost → O(1)
33. What is the difference between recursion and backtracking?
Definition
Both involve function calls, but they differ in purpose and behavior.
Key Differences
Recursion → A function calls itself to solve smaller instances of the same problem
Backtracking → Uses recursion to explore choices and undoes a choice when it leads to a dead end
Recursion → Follows one solution path; Backtracking → Systematically explores many candidate paths
Explanation
Recursion → Breaks problem into smaller subproblems
Backtracking → Tries all possibilities and backtracks on failure
34. What is the difference between BFS and DFS?
Definition
BFS and DFS are graph traversal algorithms that differ in how they explore nodes.
Key Differences
BFS → Uses a queue; explores level by level
DFS → Uses a stack or recursion; explores one branch fully before backtracking
BFS → Finds shortest paths in unweighted graphs; DFS → Suited to cycle detection and topological sorting
Explanation
BFS → Explores neighbors first
DFS → Goes deep before exploring siblings
35. What is a directed graph?
Definition
A directed graph (digraph) is a graph in which edges have a direction, indicating a one-way relationship between vertices.
Explanation
Each edge is represented as an ordered pair (u → v)
It means there is a connection from u to v, but not necessarily from v to u
Direction defines how traversal can occur
Example
If there is an edge A → B, it means:
You can go from A to B
But not from B to A (unless explicitly defined)
Applications
Web page linking
Social media following systems
Task scheduling (dependencies)
36. What is an undirected graph?
Definition
An undirected graph is a graph in which edges do not have any direction, representing a two-way relationship between vertices.
Explanation
Each edge is represented as (u — v)
Connection is mutual
Traversal can happen in both directions
Example
If there is an edge A — B:
You can go from A to B and B to A
Applications
Road networks
Social networks (mutual friendships)
Network connections
37. What is a cycle in a graph?
Definition
A cycle in a graph is a path that starts and ends at the same vertex, with no other vertex or edge repeated.
Explanation
A cycle must contain at least three vertices (in undirected graphs)
Forms a closed loop
Important in detecting infinite loops or dependencies
Example
A → B → C → A forms a cycle
Importance
Used in cycle detection algorithms
Important in topological sorting and scheduling problems
38. What is topological sorting?
Definition
Topological sorting is a linear ordering of vertices in a Directed Acyclic Graph (DAG) such that for every directed edge u → v, vertex u appears before v in the ordering.
Explanation
Applicable only to DAGs (no cycles)
Ensures dependencies are respected
Multiple valid orderings may exist
Example
If A → B and A → C:
Valid orders: A, B, C or A, C, B
Applications
Task scheduling
Course prerequisite ordering
Dependency resolution
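One standard way to compute a topological order is Kahn's algorithm: repeatedly remove a vertex with in-degree 0. A sketch, with the graph given as an adjacency dict:

```python
from collections import deque

# Kahn's algorithm for a DAG given as {vertex: [successors]}.
def topological_sort(graph):
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1

    q = deque(v for v in graph if indegree[v] == 0)
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:        # all prerequisites of w are placed
                q.append(w)
    return order    # shorter than len(graph) if the graph contains a cycle

# The example: A → B and A → C.
order = topological_sort({"A": ["B", "C"], "B": [], "C": []})
print(order)   # A comes first; B and C may appear in either order
```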
39. What is a spanning tree?
Definition
A spanning tree of a graph is a subgraph that includes all vertices of the graph and forms a tree (no cycles).
Explanation
Connects all vertices
Contains exactly V - 1 edges (where V = number of vertices)
No cycles
Properties
Multiple spanning trees can exist for a graph
Must be connected and acyclic
Applications
Network design
Circuit design
Routing
40. What is a minimum spanning tree?
Definition
A Minimum Spanning Tree (MST) is a spanning tree of a weighted graph such that the sum of the edge weights is minimum among all possible spanning trees.
Explanation
Includes all vertices
No cycles
Minimizes total edge cost
Applications
Network design (minimum cost wiring)
Road construction
Telecommunication networks
41. What is Kruskal’s Algorithm?
Definition
Kruskal’s Algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) by selecting edges in increasing order of weight.
Working Principle
Sort all edges by weight
Pick the smallest edge
Add it to MST if it does not form a cycle
Repeat until MST has V − 1 edges
Key Concepts
Uses Disjoint Set (Union-Find) for cycle detection
Greedy approach
Time Complexity
O(E log E) (due to sorting edges)
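A compact sketch of Kruskal's algorithm with a simple Union-Find (path compression only; union by rank is omitted for brevity):

```python
# Kruskal's MST: take edges in weight order, skip any that would form a cycle.
def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):            # edges given as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:                         # different components → no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Triangle 0-1-2 with weights 1, 2, 3: the weight-3 edge is skipped.
mst = kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)])
print(mst)   # two edges of total weight 3
```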
42. What is Prim’s Algorithm?
Definition
Prim’s Algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) by growing the tree one vertex at a time.
Working Principle
Start from any vertex
Select the minimum weight edge connecting the tree to a new vertex
Add the vertex to the MST
Repeat until all vertices are included
Key Concepts
Uses priority queue (min heap)
Builds MST incrementally
Time Complexity
O(E log V) (using priority queue)
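The steps above can be sketched with a min heap of edges crossing from the growing tree to the rest of the graph (the adjacency-dict representation is an illustrative choice):

```python
import heapq

# Prim's MST: grow the tree one vertex at a time from an arbitrary start.
def prim(graph):                     # graph: {vertex: [(neighbor, weight)]}
    start = next(iter(graph))
    visited = {start}
    pq = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(pq)
    mst = []
    while pq and len(visited) < len(graph):
        w, u, v = heapq.heappop(pq)  # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(pq, (wx, v, x))
    return mst

g = {0: [(1, 1), (2, 3)], 1: [(0, 1), (2, 2)], 2: [(0, 3), (1, 2)]}
mst = prim(g)
print(mst)   # two edges of total weight 3
```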
43. What is Dijkstra’s Algorithm?
Definition
Dijkstra’s Algorithm is a greedy algorithm used to find the shortest path from a source vertex to all other vertices in a weighted graph with non-negative edge weights.
Working Principle
Maintains a distance array to store shortest distances from the source
Uses a priority queue (min heap) to select the vertex with the smallest distance
Relaxes edges by updating shorter paths
Algorithm Steps
Initialize distance of source = 0, others = ∞
Pick the vertex with the smallest distance
Update distances of its adjacent vertices
Repeat until all vertices are processed
Example
If A → B (weight 2), A → C (weight 5):
Shortest path from A to B = 2
Time Complexity
Using priority queue: O(E log V)
Limitation
Does not work with negative edge weights
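A sketch of Dijkstra's algorithm with a `heapq`-based priority queue, reproducing the example above (stale heap entries are skipped rather than decreased in place, the usual Python idiom):

```python
import heapq

# Dijkstra's shortest paths; graph: {vertex: [(neighbor, weight)]}.
def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)     # vertex with the smallest known distance
        if d > dist[u]:
            continue                 # stale entry, already improved
        for v, w in graph[u]:
            if d + w < dist[v]:      # relax the edge u → v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
dist = dijkstra(g, "A")
print(dist)   # C is reached via B for a total of 3, not directly for 5
```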
44. What is Bellman-Ford Algorithm?
Definition
The Bellman-Ford Algorithm is used to find the shortest path from a source vertex to all other vertices, even in graphs with negative edge weights.
Working Principle
Repeatedly relax all edges V - 1 times
Detects negative weight cycles
Algorithm Steps
Initialize distances (source = 0, others = ∞)
Relax all edges V − 1 times
Check for negative cycles
Time Complexity
O(V × E)
Advantage
Works with negative weights
Disadvantage
Slower than Dijkstra’s Algorithm
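The V − 1 relaxation rounds plus the extra cycle-detection pass can be sketched as follows (vertices numbered 0..V−1, edges as (u, v, weight) triples):

```python
# Bellman-Ford: relax every edge V-1 times; a further improvement
# on one more pass means a negative weight cycle is reachable.
def bellman_ford(num_vertices, edges, source):
    dist = [float("inf")] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                    # detection pass
        if dist[u] + w < dist[v]:
            raise ValueError("negative weight cycle")
    return dist

# Negative edge 1 → 2 makes the path 0 → 1 → 2 (cost 2) beat 0 → 2 (cost 5).
dist = bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -2)], 0)
print(dist)   # [0, 4, 2]
```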
45. What is interpolation search?
Definition
Interpolation search is a searching algorithm used on sorted arrays, which estimates the position of the target value based on its value.
Working Principle
Uses a formula to estimate the position instead of checking the middle element
Works best when data is uniformly distributed
Formula
Position =
low + [(key − arr[low]) × (high − low) / (arr[high] − arr[low])]
Time Complexity
Best Case: O(log log n)
Worst Case: O(n)
Limitation
Requires uniformly distributed sorted data
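The position formula above, wrapped in a full search loop (the equal-endpoints guard avoids division by zero and is an addition the formula alone does not cover):

```python
# Interpolation search on a sorted, roughly uniformly distributed array.
def interpolation_search(arr, key):
    low, high = 0, len(arr) - 1
    while low <= high and arr[low] <= key <= arr[high]:
        if arr[high] == arr[low]:
            pos = low                # all values equal in range
        else:
            pos = low + (key - arr[low]) * (high - low) // (arr[high] - arr[low])
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            low = pos + 1
        else:
            high = pos - 1
    return -1

print(interpolation_search([10, 20, 30, 40, 50], 30))   # 2, found in one probe
```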
46. What is exponential search?
Definition
Exponential search is a searching algorithm used to find an element in a sorted array by first finding a range and then applying binary search.
Working Principle
Start from index 1
Double the index (1, 2, 4, 8, …) until arr[index] exceeds the key or the array ends
Perform binary search in that range
Time Complexity
O(log n)
Advantage
Efficient for unbounded or large arrays
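A sketch of the doubling-then-binary-search idea, using the standard-library `bisect_left` for the binary search step:

```python
from bisect import bisect_left

# Exponential search: double the index to bracket the key, then binary search.
def exponential_search(arr, key):
    if not arr:
        return -1
    i = 1
    while i < len(arr) and arr[i] < key:
        i *= 2                               # probe indices 1, 2, 4, 8, ...
    lo, hi = i // 2, min(i, len(arr) - 1)    # key must lie in [lo, hi]
    pos = bisect_left(arr, key, lo, hi + 1)
    return pos if pos <= hi and arr[pos] == key else -1

print(exponential_search([1, 3, 5, 7, 9, 11], 9))   # 4
```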
47. What is jump search?
Definition
Jump search is a searching algorithm for sorted arrays that works by jumping ahead by fixed steps and then performing a linear search.
Working Principle
Jump in steps of size √n
Once the range is found, perform linear search
Time Complexity
O(√n)
Advantage
Faster than linear search
Simpler than binary search
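The √n jumping followed by a linear scan within the located block can be sketched as:

```python
import math

# Jump search on a sorted array: step ahead by sqrt(n), then scan the block.
def jump_search(arr, key):
    n = len(arr)
    step = max(1, math.isqrt(n))
    prev = 0
    # Jump while the last element of the current block is still too small.
    while prev < n and arr[min(prev + step, n) - 1] < key:
        prev += step
    for i in range(prev, min(prev + step, n)):   # linear scan in the block
        if arr[i] == key:
            return i
    return -1

print(jump_search(list(range(10)), 7))   # 7
```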
48. What is the difference between merge sort and quick sort?
Definition
Both are divide and conquer sorting algorithms, but they differ in approach and performance.
Key Differences
Merge Sort → O(n log n) in all cases; stable; needs O(n) extra space
Quick Sort → O(n log n) on average, O(n²) in the worst case; unstable; sorts in place
Merge Sort → Preferred for linked lists and external sorting; Quick Sort → Usually faster in practice on arrays
Explanation
Merge Sort → Divides and merges sorted arrays
Quick Sort → Partitions around pivot
49. What are stable and unstable sorting algorithms?
Definition
Sorting algorithms are classified based on whether they preserve the relative order of equal elements.
Stable Sorting
Maintains the order of equal elements
Examples
Merge Sort
Insertion Sort
Bubble Sort
Unstable Sorting
May change the order of equal elements
Examples
Quick Sort
Selection Sort
Heap Sort
Importance
Important when sorting multiple fields
Used in applications where order matters
50. What is the time complexity of HashMap operations?
Definition
A HashMap provides efficient operations using hashing.
Time Complexity
Insert → Average O(1), Worst O(n)
Search → Average O(1), Worst O(n)
Delete → Average O(1), Worst O(n)
Explanation
Average case assumes good hash function and low collisions
Worst case occurs when all elements fall into the same bucket
Note
Modern implementations (like Java) use balanced trees in case of many collisions, improving worst-case performance.