Event Loop & Concurrency (Core JS Depth)
1. You have heavy synchronous computation blocking the UI. How would you redesign it to keep the app responsive?
Heavy synchronous computations block the JavaScript main thread, which is responsible for rendering the UI and handling user interactions. When the main thread is busy, the UI becomes unresponsive or frozen.
To keep the application responsive, the heavy computation should be moved off the main thread or broken into smaller tasks.
1. Use Web Workers (Best Solution for Heavy Tasks)
Web Workers allow running JavaScript in a separate background thread, preventing the main UI thread from being blocked.
Main thread:
const worker = new Worker("worker.js");
worker.postMessage(data);
worker.onmessage = function (event) {
console.log("Result:", event.data);
};
Worker file (worker.js):
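A minimal sketch of the worker side, assuming the main thread sends a number and the worker sums up to it (the workload is illustrative). In a real worker file you would write self.onmessage; globalThis is used here so the snippet also runs outside a worker context:

```javascript
// worker.js — hypothetical heavy computation running off the main thread
globalThis.onmessage = function (event) {
  let total = 0;
  // CPU-heavy loop: safe here because it does not block the UI thread
  for (let i = 0; i < event.data; i++) {
    total += i;
  }
  postMessage(total); // send the result back to the main thread
};
```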
Benefits:
- Runs computation in a separate thread
- UI remains responsive
- Suitable for CPU-heavy operations
2. Break Work into Smaller Chunks
If Web Workers are not needed, the computation can be split into smaller tasks using setTimeout() or requestIdleCallback().
Example idea:
This allows the event loop to handle UI updates between chunks.
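A minimal sketch of the chunking approach, with an illustrative processInChunks helper (the name and callback shape are assumptions, not a standard API):

```javascript
// Process a large array in fixed-size chunks, yielding to the event loop
// between chunks so the UI can render and handle input.
function processInChunks(items, chunkSize, processItem, done) {
  let index = 0;
  function nextChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      setTimeout(nextChunk, 0); // yield, then continue with the next chunk
    } else {
      done();
    }
  }
  nextChunk();
}

let total = 0;
processInChunks(
  Array.from({ length: 1000 }, (_, i) => i),
  100,
  (n) => { total += n; },
  () => console.log("total:", total) // 499500
);
```

In a browser, requestIdleCallback(nextChunk) can replace setTimeout to run chunks only when the main thread is idle.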
3. Use Asynchronous APIs
If the heavy work involves I/O operations such as network requests or file reading, use asynchronous APIs like:
- fetch()
- Promise
- async/await
This prevents blocking the main thread.
2. A promise chain sometimes executes out of order in production. What could cause this?
A promise chain itself resolves in a deterministic order, so results that appear "out of order" in production usually point to one of the following causes:
1. Missing return Inside .then()
If a .then() callback starts an asynchronous operation but does not return its promise, the chain does not wait for it, and later steps run before the earlier work completes.
2. Mixing Microtasks and Macrotasks
Promise callbacks run as microtasks, while setTimeout, setInterval, and I/O callbacks run as macrotasks. Code that mixes both can interleave in ways that look out of order, especially under production load.
3. Missing await in async Functions
Calling an async function without await starts it in parallel instead of sequencing it, so completion order depends on timing.
4. Parallel Requests Updating Shared State
If several chains update the same state, differences in network latency decide which response wins (a race condition; see question 4).
5. Environment Differences
Production differs from development in network latency, caching, bundling/transpilation of async code, and third-party scripts, which can expose timing assumptions that held locally.
Fixes: always return promises from .then() callbacks, await every asynchronous call that must finish first, and serialize or deduplicate dependent operations.
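A minimal sketch of the most common cause, a missing return inside .then() (the step helper and delays are illustrative):

```javascript
const order = [];

// Each step resolves after a simulated delay and records its name.
function step(name, delay) {
  return new Promise((resolve) =>
    setTimeout(() => { order.push(name); resolve(); }, delay)
  );
}

// Buggy chain: the inner promise is not returned, so the chain does NOT
// wait for "second" before starting "third".
step("first", 10)
  .then(() => { step("second", 30); }) // missing return!
  .then(() => step("third", 5));

// Final order: ["first", "third", "second"] — "out of order" in production,
// but fully explained by the missing return.
```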
3. How would you implement your own task scheduler (like microtask vs macrotask control)?
JavaScript’s event loop manages two main types of task queues:
- Microtasks – high-priority tasks (Promises, queueMicrotask)
- Macrotasks – lower-priority tasks (setTimeout, setInterval, setImmediate, I/O)
A custom task scheduler can control when tasks run and their priority, allowing you to manage execution order similar to how browsers handle microtasks and macrotasks.
1. Understand the Execution Order
The event loop processes tasks in this order:
Call Stack
↓
Microtask Queue
↓
Macrotask Queue
Example behavior:
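The behavior can be demonstrated with this sketch (the order array is only there to make the result observable):

```javascript
const order = [];
const log = (label) => { console.log(label); order.push(label); };

log("Start");
setTimeout(() => log("Macrotask"), 0);          // macrotask queue
Promise.resolve().then(() => log("Microtask")); // microtask queue
log("End");
// Prints: Start, End, Microtask, Macrotask
```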
Output:
Start
End
Microtask
Macrotask
Microtasks always run before the next macrotask cycle.
2. Creating a Simple Task Scheduler
We can implement a scheduler that manages two queues:
- Microtask queue
- Macrotask queue
Example design:
Usage:
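A minimal sketch of such a scheduler, using queueMicrotask and setTimeout as the underlying primitives (the object shape and method names are illustrative):

```javascript
const scheduler = {
  micro: [],
  macro: [],
  addMicrotask(task) {
    this.micro.push(task);
    queueMicrotask(() => this.drainMicro()); // runs before the next macrotask
  },
  addMacrotask(task) {
    this.macro.push(task);
    setTimeout(() => this.drainMacro(), 0);  // runs on a later event-loop turn
  },
  drainMicro() {
    while (this.micro.length) this.micro.shift()();
  },
  drainMacro() {
    if (this.macro.length) this.macro.shift()();
  },
};

const order = [];
scheduler.addMacrotask(() => order.push("Macrotask 1"));
scheduler.addMicrotask(() => order.push("Microtask 1"));
scheduler.addMicrotask(() => order.push("Microtask 2"));
// Both microtasks run before the macrotask, mirroring the event loop.
```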
Expected order:
Microtask 1
Microtask 2
Macrotask 1
4. How would you detect and fix a race condition between two async API calls?
A race condition occurs when multiple asynchronous operations run at the same time and update shared state, and the final result depends on which one finishes first. In UI applications this often causes incorrect or stale data to appear.
1. How to Detect the Race Condition
Common symptoms:
- UI shows older data after a newer request
- Results change depending on network speed
- Logs show responses arriving in a different order than requests
Example problem:
If the user quickly triggers:
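A sketch of the problem, using a simulated fetchUser whose delay stands in for network latency (names and delays are illustrative):

```javascript
// Simulated request: resolves after `delay` ms, like a network call.
function fetchUser(id, delay) {
  return new Promise((resolve) => setTimeout(() => resolve({ id }), delay));
}

let displayedUser = null;

function showUser(id, delay) {
  fetchUser(id, delay).then((user) => {
    displayedUser = user; // whichever response arrives LAST wins
  });
}

showUser(1, 100); // request 1: slow
showUser(2, 10);  // request 2: fast, finishes first
// After ~100ms the UI shows user 1, even though user 2 was requested last.
```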
Possible execution:
Request 1 sent
Request 2 sent
Request 2 finishes → UI shows user 2
Request 1 finishes → UI overwritten with user 1 ❌
This creates incorrect UI state.
2. Fix Using Request ID Tracking
Track the latest request and ignore older responses.
Now only the most recent request updates the UI.
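A sketch of the request-ID fix, again with a simulated fetchUser standing in for the network:

```javascript
function fetchUser(id, delay) {
  return new Promise((resolve) => setTimeout(() => resolve({ id }), delay));
}

let displayedUser = null;
let latestRequestId = 0;

function showUser(id, delay) {
  const requestId = ++latestRequestId; // tag this request
  return fetchUser(id, delay).then((user) => {
    if (requestId !== latestRequestId) return; // stale response → ignore
    displayedUser = user;
  });
}

showUser(1, 100); // resolves last, but is stale by then and gets ignored
showUser(2, 10);  // latest request → its response updates the UI
```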
3. Cancel Previous Requests (Best Practice)
Use AbortController to cancel outdated requests.
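A sketch of the cancellation pattern. With real fetch you would pass controller.signal in the options; here fakeRequest simulates a request that honors an AbortSignal so the snippet is self-contained:

```javascript
// Stand-in for fetch: rejects with an AbortError when the signal fires.
function fakeRequest(id, delay, signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve({ id }), delay);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(Object.assign(new Error("aborted"), { name: "AbortError" }));
    });
  });
}

let controller = null;
let displayedUser = null;

function loadUser(id, delay) {
  if (controller) controller.abort(); // cancel the previous in-flight request
  controller = new AbortController();
  return fakeRequest(id, delay, controller.signal)
    .then((user) => { displayedUser = user; })
    .catch((err) => {
      if (err.name !== "AbortError") throw err; // aborts are expected
    });
}

loadUser(1, 100); // aborted by the next call
loadUser(2, 10);  // only this one can update the UI
```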
Benefits:
- Prevents unnecessary API calls
- Avoids outdated responses
- Saves network resources
4. Serialize Requests When Order Matters
If operations must execute in sequence, ensure one finishes before the next.
Example:
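A minimal sketch of serializing async work (the delay helper simulates requests of varying latency):

```javascript
// Run async tasks strictly one after another: each waits for the previous.
async function runInSequence(tasks) {
  const results = [];
  for (const task of tasks) {
    results.push(await task());
  }
  return results;
}

const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

runInSequence([
  () => delay(30, "first"),
  () => delay(10, "second"),
  () => delay(20, "third"),
]).then((results) => console.log(results)); // order matches the task list
```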
This ensures requests run in order.
🚀 Performance & Optimization
5. How would you efficiently render 50,000 DOM elements?
Rendering 50,000 DOM elements at once is expensive because DOM operations, layout calculations, and memory usage increase significantly, causing slow performance and UI lag. The goal is to reduce the number of elements rendered and optimize rendering operations.
1. Use Virtualization (Most Effective Solution)
Instead of rendering all rows, render only the rows visible in the viewport.
Concept:
50,000 rows
↓
Only render ~30–50 rows visible on screen
↓
Update rows while scrolling
Libraries commonly used:
- react-window
- react-virtualized
- TanStack Virtual
Example idea:
<List
height={600}
itemCount={50000}
itemSize={35}
width={800}
/>
This drastically reduces DOM nodes.
2. Implement Pagination
Load and render data in smaller chunks instead of all at once.
Example:
Page 1 → 100 rows
Page 2 → 100 rows
Page 3 → 100 rows
Benefits:
- Smaller DOM size
- Faster initial load
- Reduced memory usage
3. Use Lazy Loading / Infinite Scrolling
Load rows only when the user scrolls near the bottom.
Concept:
Initial load → 100 rows
Scroll → load next 100 rows
Scroll → load next 100 rows
This prevents loading the entire dataset immediately.
6. How would you implement virtual scrolling from scratch?
Virtual scrolling (or windowing) improves performance when displaying large lists (e.g., 50k+ items) by rendering only the items visible in the viewport instead of the entire dataset.
The key idea is to simulate the full scroll height while only rendering a small subset of items.
1. Core Concept
Instead of rendering all items:
50,000 rows in DOM ❌
Render only visible items:
Viewport shows ~20 rows
Render only 20–30 rows ✔
But maintain the full scroll height so the scrollbar behaves normally.
2. Layout Structure
Virtual scrolling usually uses three elements:
Scrollable Container
│
▼
Full Height Spacer
│
▼
Visible Items (absolute positioned)
Example HTML:
<div id="container">
<div id="spacer"></div>
<div id="list"></div>
</div>
- container → scrollable viewport
- spacer → simulates full list height
- list → contains visible rows only
3. Basic Logic
Important variables:
Total items
Item height
Viewport height
Scroll position
Formula:
startIndex = scrollTop / itemHeight
endIndex = startIndex + visibleItemCount
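The window calculation above can be sketched as a pure function; the overscan parameter (extra rows rendered above and below the viewport to avoid flicker) is an assumption commonly added in practice:

```javascript
// Compute which items to render, assuming a fixed itemHeight.
function getVisibleRange(scrollTop, itemHeight, viewportHeight, totalItems, overscan = 5) {
  const startIndex = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const visibleCount = Math.ceil(viewportHeight / itemHeight);
  const endIndex = Math.min(totalItems - 1, startIndex + visibleCount + overscan * 2);
  return { startIndex, endIndex };
}

// Each visible row is absolutely positioned at index * itemHeight inside the
// spacer, whose height is totalItems * itemHeight.
console.log(getVisibleRange(0, 35, 600, 50000));    // { startIndex: 0, endIndex: 28 }
console.log(getVisibleRange(3500, 35, 600, 50000)); // { startIndex: 95, endIndex: 123 }
```

On every scroll event, recompute the range, render only those rows, and offset them with transform: translateY(startIndex * itemHeight).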
7. A memory leak is reported after users keep the tab open for hours. How would you debug it?
When users keep a tab open for hours, memory leaks usually occur because objects, DOM nodes, or listeners are unintentionally retained in memory. The goal is to identify what objects keep growing and why they are not being garbage collected.
1. Reproduce the Issue
First, reproduce the problem locally.
Steps:
- Open the app in Chrome DevTools
- Simulate user activity for a long period
- Monitor memory usage over time
Tools:
Chrome DevTools → Performance / Memory Tab
Look for steady memory growth without dropping after garbage collection.
2. Take Heap Snapshots
Use Heap Snapshots to inspect memory allocations.
Steps:
- Open Chrome DevTools → Memory
- Take a baseline snapshot
- Interact with the app
- Take another snapshot
- Compare the snapshots
Look for:
- Increasing object counts
- Detached DOM nodes
- Large arrays growing over time
Example pattern:
Snapshot 1 → 10,000 objects
Snapshot 2 → 50,000 objects
3. Check for Detached DOM Nodes
A common leak occurs when DOM elements are removed from the page but still referenced in JavaScript.
Example problem:
If references remain, the browser cannot garbage collect the node.
DevTools can show Detached DOM Trees.
4. Look for Unremoved Event Listeners
Event listeners can keep objects alive.
Example issue:
If the component is destroyed but the listener isn't removed:
The handler may keep references alive.
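A sketch of the pattern and its fix. EventTarget stands in for window or a DOM element here so the snippet is self-contained; the key point is returning a cleanup function:

```javascript
// The handler closure retains bigData for as long as the listener exists.
function setup(target) {
  const bigData = new Array(1_000_000).fill("x"); // kept alive by the closure
  const handler = () => bigData.length;
  target.addEventListener("update", handler);
  // Returning a cleanup function lets callers release the reference.
  return () => target.removeEventListener("update", handler);
}

const target = new EventTarget();
const cleanup = setup(target);
cleanup(); // without this call, bigData can never be garbage collected
```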
Common cases:
- Scroll listeners
- Window events
- WebSocket listeners
8. How would you measure JS performance in production?
Measuring JS performance in production helps understand real user experience (RUM – Real User Monitoring) rather than just lab benchmarks. The goal is to track page load times, interaction latency, memory usage, and runtime performance across real users and devices.
1. Use the Performance API
The Performance API provides detailed timing metrics directly from the browser.
Example:
Important metrics include:
DNS lookup time
Connection time
TTFB (Time To First Byte)
DOM Content Loaded
Page Load Time
Example measurement:
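A sketch using the User Timing part of the Performance API (performance.mark/measure), which is available in browsers and, assuming Node.js 16+, via the global performance object; the task name and workload are illustrative:

```javascript
// Mark two points and measure the span between them.
performance.mark("task-start");

let total = 0;
for (let i = 0; i < 1e6; i++) total += i; // work being measured

performance.mark("task-end");
performance.measure("heavy-task", "task-start", "task-end");

const [entry] = performance.getEntriesByName("heavy-task");
console.log(`heavy-task took ${entry.duration.toFixed(2)} ms`);
```

In production, such measurements are typically sent to an analytics endpoint rather than logged.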
2. Track Web Vitals
Google’s Core Web Vitals measure user experience.
Key metrics:
| Metric | Meaning |
|---|---|
| LCP | Largest Contentful Paint (load performance) |
| FID | First Input Delay (interactivity delay) |
| CLS | Cumulative Layout Shift (visual stability) |
| INP | Interaction to Next Paint (responsiveness) |
Example:
These metrics reflect actual user experience.
3. Real User Monitoring (RUM)
Send performance data from users’ browsers to a monitoring service.
Flow:
User Browser
│
Collect performance metrics
│
Send to monitoring backend
│
Analytics dashboards
Popular tools:
- Sentry
- Datadog
- New Relic
- LogRocket
- Elastic APM
These tools track performance across thousands of real sessions.
9. Explain how V8 optimizes JavaScript and how your code can accidentally de-optimize it.
V8 (the JavaScript engine used in Chrome and Node.js) improves performance by analyzing runtime behavior and compiling frequently executed code into optimized machine code. However, certain coding patterns can cause V8 to de-optimize, forcing it to fall back to slower execution.
1. V8 Execution Pipeline
V8 uses multiple stages to execute JavaScript efficiently.
JavaScript Code
↓
Parser → AST
↓
Ignition (Interpreter)
↓
TurboFan (Optimizing Compiler)
↓
Optimized Machine Code
Explanation:
- Ignition Interpreter – executes code first and collects runtime information.
- TurboFan Optimizer – if a function runs frequently (a “hot function”), V8 compiles it into optimized machine code.
This optimization relies on predictable code behavior.
2. Hidden Classes (Object Shape Optimization)
V8 creates hidden classes to optimize object property access.
Example:
If objects are created with the same property order, V8 can optimize property access.
Bad example:
Adding properties dynamically changes the object’s hidden class, which may reduce optimization efficiency.
Best practice:
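A sketch contrasting the two patterns (makePoint is an illustrative name; the performance effect is internal to V8 and not observable from the code itself):

```javascript
// Good: every object is created with the same property order,
// so they all share one hidden class.
function makePoint(x, y) {
  return { x, y };
}
const a = makePoint(1, 2);
const b = makePoint(3, 4);

// Risky: adding properties after creation transitions the object through
// a chain of hidden classes, which can reduce optimization.
const c = {};
c.x = 5;
c.y = 6; // c's shape history differs from a and b
```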
3. Inline Caching
V8 uses inline caching to speed up repeated operations.
Example:
If user objects always have the same structure, V8 can cache property access and make it very fast.
However, if different object shapes are used:
V8 may invalidate the cache and de-optimize the function.
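The inline-caching behavior can be sketched like this (getName is illustrative; the monomorphic/polymorphic effect is internal to V8):

```javascript
function getName(user) {
  return user.name; // this property access gets an inline cache
}

// Monomorphic: every call sees the same object shape → fastest path.
getName({ name: "Ada", age: 36 });
getName({ name: "Lin", age: 28 });

// Polymorphic/megamorphic: mixed shapes at the same call site → the cache
// must handle several shapes and the access becomes slower.
getName({ name: "Bo" });
getName({ id: 1, name: "Cy", role: "admin" });
```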
🧩 Closures & Functional Patterns
10. How would you implement memoization with cache invalidation?
Memoization is a technique where the result of a function is cached based on its inputs, so repeated calls with the same arguments return the cached result instead of recomputing it.
However, in real applications we also need cache invalidation so stale data can be refreshed.
1. Basic Memoization Idea
Example problem:
Memoization stores:
input → result
Example:
5 → 25
2. Memoization Utility with Cache
We can implement memoization using a Map.
Usage:
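A sketch of a Map-based memoizer with its usage; JSON.stringify is used as a simple cache key, which assumes the arguments are JSON-serializable:

```javascript
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args); // simple key for serializable args
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

let computations = 0;
const square = memoize((n) => {
  computations++;
  return n * n;
});

square(5); // computed → 25
square(5); // served from cache → 25
console.log(computations); // 1
```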
3. Adding Cache Invalidation
Sometimes cached results become stale, so we allow manual cache clearing.
Usage:
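A sketch extending the memoizer with invalidation methods (the invalidate/clear names are illustrative design choices):

```javascript
function memoizeWithInvalidation(fn) {
  const cache = new Map();

  function memoized(...args) {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  }

  // Invalidate one entry, or clear everything when data may be stale.
  memoized.invalidate = (...args) => cache.delete(JSON.stringify(args));
  memoized.clear = () => cache.clear();

  return memoized;
}

const getPrice = memoizeWithInvalidation((id) => ({ id, fetchedAt: Date.now() }));
const first = getPrice(1);   // computed and cached
getPrice.invalidate(1);      // next call recomputes
const second = getPrice(1);  // fresh result, not the cached object
```

A time-to-live (TTL) variant works the same way, deleting entries whose timestamp is older than the allowed age.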
11. What is a closure, and how can closures cause memory leaks?
A closure occurs when a function retains access to variables from its outer scope even after the outer function has finished executing.
Example:
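The classic counter sketch:

```javascript
function createCounter() {
  let count = 0; // captured by the closure below
  return function () {
    count += 1;
    return count;
  };
}

const counter = createCounter();
counter(); // 1
counter(); // 2 — count survived between calls
```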
The inner function remembers count even after createCounter() finishes.
How This Can Cause Memory Leaks
Closures can cause memory leaks when they retain references to large objects or DOM elements that are no longer needed. Because the closure still references them, the JavaScript garbage collector cannot free that memory.
Example Problem
function setupHandler() {
const largeData = new Array(1000000).fill("data");
document.getElementById("btn").onclick = function () {
console.log(largeData.length);
};
}
Problem:
- largeData is captured by the closure.
- As long as the event handler exists, largeData stays in memory.
- Even if the data is no longer needed, it cannot be garbage collected.
Common Situations Where This Happens
1. Event Listeners
Closures in event handlers may keep large objects alive.
Example:
If the listener isn't removed, the closure holds the reference.
2. Timers or Intervals
Closures inside timers can hold references indefinitely.
Example:
If the interval isn't cleared, bigData stays in memory.
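A sketch of the interval leak and its fix (startPolling is an illustrative name):

```javascript
function startPolling() {
  const bigData = new Array(1_000_000).fill("x"); // retained by the closure
  const id = setInterval(() => {
    void bigData.length; // bigData stays reachable while the interval runs
  }, 1000);
  return () => clearInterval(id); // clearing the interval frees the closure
}

const stopPolling = startPolling();
stopPolling(); // without this, bigData is never garbage collected
```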
3. Long-lived Callbacks
Closures used in asynchronous callbacks may keep unnecessary data alive longer than expected.
12. Write a curry function and explain real use cases.
Currying is a technique where a function that takes multiple arguments is transformed into a sequence of functions each taking a single argument.
Example transformation:
add(a, b, c)
→ add(a)(b)(c)
Implementation
function curry(fn) {
return function curried(...args) {
if (args.length >= fn.length) {
return fn(...args);
}
return (...next) => curried(...args, ...next);
};
}
Example usage:
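Example usage, with the curry implementation repeated so the snippet is self-contained; it also shows partial application creating a specialized function:

```javascript
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) return fn(...args);
    return (...next) => curried(...args, ...next);
  };
}

const add = curry((a, b, c) => a + b + c);
add(1)(2)(3);  // 6
add(1, 2)(3);  // 6
add(1)(2, 3);  // 6

// Partial application: fix the first argument to build a reusable helper.
const multiply = curry((a, b) => a * b);
const double = multiply(2);
double(21); // 42
```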
Real Use Cases
1. Function Reusability
You can create specialized functions by partially applying arguments.
This improves code reuse.
2. Event Handlers / UI Logic
Currying helps pass configuration into event handlers.
The handler is pre-configured with "login".
13. Implement a once() function that runs a function only once.
The once() utility ensures that a function executes only the first time it is called. Any subsequent calls return the previously computed result without re-running the function.
Implementation
function once(fn) {
let called = false;
let result;
return function (...args) {
if (!called) {
called = true;
result = fn(...args);
}
return result;
};
}
Example
const init = once(() => {
console.log("Initialized");
return 42;
});
init(); // "Initialized"
init(); // nothing printed
init(); // nothing printed
How It Works
- A closure keeps two variables:
  - called → tracks whether the function has already run.
  - result → stores the result of the first execution.
- On the first call, the function runs and the result is saved.
- On later calls, the stored result is returned without executing the function again.
14. Implement a robust debounce and throttle from scratch.
Debounce ensures a function runs only after a delay since the last call. Useful for events like search input, resize, scroll.
Implementation
function debounce(fn, delay) {
let timer;
return function (...args) {
clearTimeout(timer);
timer = setTimeout(() => fn.apply(this, args), delay);
};
}
Example
const handleSearch = debounce((query) => {
console.log("Searching:", query);
}, 300);
Every new call resets the timer, so the function runs only once after the user stops triggering events.
Throttle
Throttle ensures a function runs at most once every given interval, no matter how many times it’s triggered.
Useful for scroll events, mouse movement, window resize.
Implementation
function throttle(fn, limit) {
let lastCall = 0;
return function (...args) {
const now = Date.now();
if (now - lastCall >= limit) {
lastCall = now;
fn.apply(this, args);
}
};
}
Example
const handleScroll = throttle(() => {
console.log("Scroll event handled");
}, 200);
The function executes once every 200ms, regardless of how frequently the event fires.
Key Difference
| Feature | Debounce | Throttle |
|---|---|---|
| Execution | After user stops triggering | At regular intervals |
| Best for | Search input, autocomplete | Scroll, resize, drag events |
| Behavior | Cancels previous calls | Limits call frequency |
⚙️ System Design in Frontend
For an application serving hundreds of millions or billions of users, the frontend architecture must focus on performance, scalability, maintainability, and fast global delivery. The design must ensure users across different devices and regions experience low latency and responsive UI.
1. Layered Frontend Architecture
A scalable frontend is usually structured into clear layers.
UI Components
│
State Management
│
API / Data Layer
│
Backend Services
Layers:
- UI Layer – reusable components and design system
- State Layer – manages app state
- Data Layer – API communication and caching
- Infrastructure Layer – CDN, edge caching, monitoring
This separation makes the codebase maintainable and scalable.
2. Component-Based Design System
Use a component-based framework such as:
- React
- Vue
- Angular
Structure components into:
Atoms → Buttons, Inputs
Molecules → Form Fields
Organisms → Complex UI Blocks
Pages → Full screens
Benefits:
- Reusability
- Consistent UI
- Easier team collaboration
3. Micro-Frontend Architecture
Large applications often split the frontend into independent micro frontends owned by different teams.
Example:
App Shell
│
├── Auth Module
├── Dashboard Module
├── Payment Module
└── Notification Module
Tools commonly used:
- Module Federation
- Single-SPA
Benefits:
- Independent deployments
- Smaller teams working in parallel
- Reduced coupling
4. Global Content Delivery (CDN)
To serve billions of users efficiently:
User
│
CDN Edge Server
│
Origin Server
Use CDNs such as:
- Cloudflare
- Akamai
- AWS CloudFront
Benefits:
- Reduced latency
- Faster static asset delivery
- Edge caching
5. Code Splitting and Lazy Loading
Avoid loading the entire application at once.
Example:
Initial Bundle
│
Load page-specific bundles on demand
Techniques:
- Dynamic imports
- Route-based code splitting
This significantly reduces initial page load time.
16. How would you implement a global state manager from scratch?
A global state manager allows different parts of an application to share and react to the same state without passing props through many layers.
The core idea is simple:
State
│
Store
│
Subscribers (components)
Whenever the state changes, all subscribers are notified and updated.
Basic Implementation
function createStore(initialState) {
let state = initialState;
const listeners = new Set();
function getState() {
return state;
}
function setState(updater) {
state =
typeof updater === "function"
? updater(state)
: { ...state, ...updater };
listeners.forEach((listener) => listener(state));
}
function subscribe(listener) {
listeners.add(listener);
return () => listeners.delete(listener);
}
return { getState, setState, subscribe };
}
Usage Example
const store = createStore({ count: 0 });
store.subscribe((state) => {
console.log("State updated:", state);
});
store.setState({ count: 1 });
store.setState((prev) => ({ count: prev.count + 1 }));
Output:
State updated: { count: 1 }
State updated: { count: 2 }
How It Works
- State Storage: let state = initialState; the store holds the current global state.
- Subscribers: const listeners = new Set(); components or modules subscribe to state changes.
- State Updates: setState(...) computes the new state and notifies all listeners.
- Subscription System: subscribe(listener) allows components to react when the state updates.
17. Design an offline-first web application.
An offline-first application is designed so that it works without internet connectivity, storing data locally and synchronizing with the server when the connection becomes available. The goal is to provide a seamless user experience regardless of network conditions.
1. Core Architecture
An offline-first app usually has three layers:
UI Layer
│
Local Data Layer (IndexedDB / Local Storage)
│
Sync Layer
│
Remote Server
Flow:
User Action
│
Write to Local DB
│
Update UI immediately
│
Background Sync → Server
This ensures the app remains responsive even when offline.
2. Service Workers for Offline Support
Service workers act as a network proxy between the app and the network.
Responsibilities:
- Cache static assets
- Intercept network requests
- Serve cached content when offline
Example concept:
User Request
│
Service Worker
│
├─ Cached response (offline)
└─ Network request (online)
Common caching strategies:
- Cache First
- Network First
- Stale-While-Revalidate
3. Local Data Storage
Offline apps need a persistent local database.
Common options:
- IndexedDB (best for large data)
- LocalStorage (small data)
- Libraries like Dexie.js or PouchDB
Example flow:
User creates note
│
Save to IndexedDB
│
UI updates instantly
The app works even without internet.
4. Background Synchronization
When connectivity returns, the app should sync local changes with the server.
Example process:
Offline Action
│
Store in Sync Queue
│
Connection Restored
│
Send Updates to Server
Tools:
- Background Sync API
- Retry queues
- Conflict resolution logic
18. How would you structure a large-scale codebase with multiple teams?
When multiple teams work on the same codebase, the architecture must ensure clear ownership, modularity, and minimal coupling. The goal is to allow teams to work independently without breaking each other’s code.
1. Domain-Based Modular Structure
Organize the codebase around business domains, not technical layers.
Example:
src/
auth/
payments/
orders/
notifications/
shared/
Each domain contains its own:
auth/
components/
services/
hooks/
api/
tests/
Benefits:
- Clear team ownership
- Easier maintenance
- Reduced cross-team conflicts
2. Shared Core Layer
Common utilities should live in a shared module.
Example:
shared/
ui/
utils/
hooks/
config/
Examples of shared resources:
- UI components
- Helper functions
- API clients
- Design tokens
Important rule:
Domains can depend on shared modules
Shared modules must NOT depend on domains
3. Enforce Clear Module Boundaries
Use tooling to prevent accidental coupling between modules.
Techniques:
- TypeScript path aliases
- ESLint dependency rules
- Architecture linting tools
Example dependency rule:
auth → shared ✔ allowed
auth → payments ✘ forbidden (cross-domain dependency)
19. How would you implement feature flags in frontend?
Feature flags (feature toggles) allow you to enable or disable features dynamically without redeploying the application. They are widely used for A/B testing, gradual rollouts, canary releases, and safe deployments.
1. Basic Concept
Instead of directly enabling a feature in code:
showNewDashboard();
Use a feature flag:
if (flags.newDashboard) {
showNewDashboard();
}
Now the feature can be turned on/off remotely.
2. Central Feature Flag Store
Maintain a centralized configuration for feature flags.
Example:
Usage:
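A sketch of a centralized flag store with a small helper (the flag names are illustrative):

```javascript
// Central feature flag configuration (could also be loaded from a server).
const flags = {
  newDashboard: true,
  betaSearch: false,
};

function isEnabled(name) {
  return Boolean(flags[name]); // unknown flags default to off
}

if (isEnabled("newDashboard")) {
  // render the new dashboard
}
```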
This keeps feature control centralized.
3. Fetch Flags from Backend
In production systems, feature flags are usually fetched from a remote config service.
Flow:
Frontend App
│
Fetch Feature Flags
│
Feature Flag Service
Example:
The app loads flags during startup.
4. Create a Feature Flag Utility
A simple helper function can make feature checks easier.
Usage:
🌐 Networking & APIs
20. Design a resilient API layer with retries and exponential backoff.
A resilient API layer ensures that temporary network failures or server issues do not immediately break the application. Instead, failed requests are retried with increasing delays to avoid overwhelming the server.
1. Core Idea
When an API request fails:
Request
│
Failure
│
Retry with delay
│
Retry again with longer delay
│
Success / Give up
Instead of retrying immediately, we use exponential backoff so the delay increases after each failure.
Example delay pattern:
1st retry → 1s
2nd retry → 2s
3rd retry → 4s
4th retry → 8s
This prevents retry storms during outages.
2. Retry Utility with Exponential Backoff
async function requestWithRetry(fn, retries = 3, delay = 500) {
try {
return await fn();
} catch (err) {
if (retries === 0) throw err;
await new Promise(res => setTimeout(res, delay));
return requestWithRetry(fn, retries - 1, delay * 2);
}
}
Usage:
requestWithRetry(() => fetch("/api/data"));
This automatically retries failed requests.
3. API Layer Wrapper
Create a centralized API helper so all requests benefit from retry logic.
async function apiRequest(url, options) {
return requestWithRetry(() => fetch(url, options));
}
Example:
apiRequest("/api/users");
apiRequest("/api/orders");
This ensures consistent resilience across the app.
4. Retry Only for Safe Errors
Not all failures should be retried.
Safe to retry:
- Network errors
- Timeouts
- HTTP 5xx errors
Do NOT retry:
- HTTP 400
- HTTP 401
- HTTP 403
Retrying these wastes resources.
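The rule can be sketched as a small predicate to plug into the retry helper (treating an undefined status as a network error or timeout is an assumption of this sketch):

```javascript
// Decide whether a failed request is worth retrying.
function isRetryable(status) {
  if (status === undefined) return true;  // network error or timeout
  return status >= 500 && status <= 599;  // server-side errors only
}

isRetryable(503); // true — transient server error
isRetryable(401); // false — retrying an auth failure cannot succeed
```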
21. How would you cancel in-flight API calls when a component unmounts?
When a component unmounts while an API request is still running, the request may resolve later and try to update state on an unmounted component, causing memory leaks or warnings.
To prevent this, the request should be cancelled when the component unmounts.
1. Use AbortController (Standard Solution)
Modern browsers support AbortController, which allows canceling fetch requests.
Example
useEffect(() => {
  const controller = new AbortController();
  fetch("/api/data", { signal: controller.signal })
    .then(res => res.json())
    .then(data => setData(data))
    .catch(err => {
      if (err.name !== "AbortError") {
        console.error(err);
      }
    });
  return () => controller.abort();
}, []);
How it works
- Create an AbortController.
- Pass its signal to fetch.
- When the component unmounts, call:
controller.abort();
This cancels the request immediately.
2. Why This Is Important
Without cancellation:
Component unmounts
│
API response arrives
│
setState runs on unmounted component
With cancellation:
Component unmounts
│
Abort request
│
No state update ✔
22. How would you prevent duplicate API calls?
Duplicate API calls can happen due to rapid user actions, repeated component renders, or multiple parts of the app requesting the same data simultaneously. Preventing them improves performance, reduces server load, and avoids inconsistent state.
1. Request Deduplication (Most Common Solution)
Store in-flight requests and reuse them if the same request is triggered again.
Example
const pendingRequests = new Map();

function fetchData(url) {
  if (pendingRequests.has(url)) {
    return pendingRequests.get(url);
  }
  const request = fetch(url).then(res => res.json());
  pendingRequests.set(url, request);
  request.finally(() => pendingRequests.delete(url));
  return request;
}

How it works
First request → API call
Second request → reuse same promise
Only one network request is sent.
2. Debounce User Actions
If API calls are triggered by user input (like search), use debouncing.
Example concept:
User typing → delay request
Send API call only after user stops typing
This prevents multiple rapid requests.
3. Disable UI During Requests
Prevent users from triggering the same request multiple times.
Example:
User clicks button
│
Disable button
│
Enable again after response
This is common in form submissions and payments.
23. Implement Promise.allSettled() polyfill.
Promise.allSettled() waits for all promises to settle (either fulfilled or rejected) and returns their results without failing early.
Each result has the format:
{ status: "fulfilled", value: result }
{ status: "rejected", reason: error }
Implementation
function allSettled(promises) {
  return Promise.all(
    promises.map(p =>
      Promise.resolve(p)
        .then(value => ({ status: "fulfilled", value }))
        .catch(reason => ({ status: "rejected", reason }))
    )
  );
}

Example
const p1 = Promise.resolve(10);
const p2 = Promise.reject("error");
const p3 = Promise.resolve(30);

allSettled([p1, p2, p3]).then(console.log);
Output:
[
{ status: "fulfilled", value: 10 },
{ status: "rejected", reason: "error" },
{ status: "fulfilled", value: 30 }
]
How It Works
- Convert each input into a promise using Promise.resolve.
- Attach .then() and .catch() handlers to capture both outcomes.
- Wrap all results with Promise.all so the function resolves after every promise settles.
Key Difference from Promise.all
| Method | Behavior |
|---|---|
| Promise.all | Rejects immediately if any promise fails |
| Promise.allSettled | Waits for all promises and returns all results |
24. Implement request batching for performance optimization.
Request batching groups multiple API calls into a single network request. Instead of sending requests individually, they are collected for a short period and sent together. This reduces network overhead, server load, and latency.
Example problem:
5 API calls → 5 network requests
With batching:
5 API calls → 1 network request
Basic Implementation
function createBatcher(batchFn, delay = 50) {
  let queue = [];
  let timer = null;
  return function request(data) {
    return new Promise((resolve, reject) => {
      queue.push({ data, resolve, reject });
      if (!timer) {
        timer = setTimeout(async () => {
          const current = queue;
          queue = [];
          timer = null;
          try {
            const results = await batchFn(current.map(item => item.data));
            results.forEach((res, i) => current[i].resolve(res));
          } catch (err) {
            current.forEach(item => item.reject(err));
          }
        }, delay);
      }
    });
  };
}
How It Works
- Requests are added to a queue.
- A short batch window (e.g., 50ms) collects requests.
- After the window expires, all queued requests are sent together.
- Responses are mapped back to individual callers.
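A usage sketch with the batcher repeated so the snippet is self-contained; fetchUsersBatch stands in for a hypothetical batch endpoint that accepts an array of ids and returns one result per id:

```javascript
function createBatcher(batchFn, delay = 50) {
  let queue = [];
  let timer = null;
  return function request(data) {
    return new Promise((resolve, reject) => {
      queue.push({ data, resolve, reject });
      if (!timer) {
        timer = setTimeout(async () => {
          const current = queue;
          queue = [];
          timer = null;
          try {
            const results = await batchFn(current.map((item) => item.data));
            results.forEach((res, i) => current[i].resolve(res));
          } catch (err) {
            current.forEach((item) => item.reject(err));
          }
        }, delay);
      }
    });
  };
}

// Fake batch endpoint: counts how many network round-trips actually happen.
let networkCalls = 0;
const fetchUsersBatch = async (ids) => {
  networkCalls++;
  return ids.map((id) => ({ id }));
};

const getUser = createBatcher(fetchUsersBatch, 50);

// Three logical calls, one batched request.
Promise.all([getUser(1), getUser(2), getUser(3)]).then((users) => {
  console.log(users.map((u) => u.id)); // [1, 2, 3]
  console.log(networkCalls);           // 1
});
```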
🔐 Security & Edge Cases
25. How would you prevent XSS in a frontend app?
XSS occurs when malicious scripts are injected into a webpage and executed in users’ browsers. This can lead to data theft, session hijacking, or unauthorized actions. Preventing XSS requires proper input handling, safe rendering, and strong browser security policies.
1. Escape or Sanitize User Input
Never render raw user input directly into the DOM.
Example problem:
element.innerHTML = userInput;
If userInput contains:
<script>alert("XSS")</script>
the script will execute.
Safer approach:
element.textContent = userInput;
This renders the text without executing scripts.
2. Use Framework Auto-Escaping
Modern frameworks automatically escape HTML when rendering data.
Example:
<div>{userInput}</div>
Frameworks like:
- React
- Vue
- Angular
automatically prevent script injection unless explicitly overridden.
Avoid dangerous APIs like:
dangerouslySetInnerHTML
v-html
innerHTML
unless the content is sanitized.
26. How would you securely store tokens in browser?
Authentication tokens (like JWTs or session tokens) must be stored carefully in the browser to prevent XSS attacks, token theft, and session hijacking. The storage strategy should minimize exposure to JavaScript and follow strong security practices.
1. Use HttpOnly Cookies (Most Secure Approach)
The safest way to store tokens is using HttpOnly cookies.
Example concept:
Server → sets cookie
Browser → stores cookie automatically
JS → cannot access it
Cookie example:
Set-Cookie: authToken=abc123; HttpOnly; Secure; SameSite=Strict
Benefits:
- Not accessible via JavaScript
- Protects against XSS token theft
- Automatically sent with requests
Security flags:
| Flag | Purpose |
|---|---|
| HttpOnly | Prevent JS access |
| Secure | Only sent over HTTPS |
| SameSite | Protect against CSRF |
2. Avoid Storing Tokens in LocalStorage
Example:
localStorage.setItem("token", token);
Problem:
XSS attack → JS can read token → token stolen
Since localStorage is fully accessible to JavaScript, it is vulnerable to XSS.
3. Avoid Session Storage for Sensitive Tokens
sessionStorage has the same problem as localStorage.
sessionStorage.setItem("token", token);
It can still be accessed by malicious scripts.
4. In-Memory Storage (Safer Alternative)
Another approach is storing tokens only in memory.
Example:
let authToken = null;
Advantages:
- Token disappears on page refresh
- Harder for attackers to persist access
However:
- Users must re-authenticate after refresh unless a refresh token exists.
27. What is prototype pollution? How would you prevent it?
Prototype pollution is a JavaScript vulnerability where an attacker modifies the prototype of an object, which then affects all objects that inherit from that prototype.
Because JavaScript uses prototype-based inheritance, modifying Object.prototype can change behavior globally.
Example of Prototype Pollution
const obj = {};
obj.__proto__.isAdmin = true;

const user = {};
console.log(user.isAdmin); // true
Even though user never had isAdmin, it inherited it from the polluted prototype.
How Prototype Pollution Happens
It often occurs when merging user input into objects without validation.
Example payload merged from user input:
{ "__proto__": { "admin": true } }
After an unsafe deep merge, this effectively executes:
Object.prototype.admin = true
Now every object may have admin = true.
Security Impact
Prototype pollution can lead to:
- Privilege escalation
- Authentication bypass
- Unexpected behavior in applications
- XSS vulnerabilities
- Denial of service
Because the polluted prototype affects all objects globally.
How to Prevent Prototype Pollution
1. Validate Input Keys
Reject dangerous keys such as:
- __proto__
- prototype
- constructor
Example concept:
If key matches dangerous property → ignore
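A sketch of this idea, assuming a recursive merge helper named safeMerge (illustrative, not a production library):

```javascript
// Keys that allow prototype pollution are skipped entirely.
const DANGEROUS = new Set(["__proto__", "prototype", "constructor"]);

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (DANGEROUS.has(key)) continue; // ignore dangerous keys

    const value = source[key];
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      // Recurse into nested objects, creating a plain container if needed.
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      safeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

With this guard, a payload like `{"__proto__": {"isAdmin": true}}` merges its safe keys but never touches Object.prototype.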
2. Use Safe Object Creation
Create objects without a prototype.
Example:
const obj = Object.create(null);
This prevents inheritance from Object.prototype.
3. Avoid Unsafe Object Merging
Many vulnerabilities occur in deep merge utilities.
Instead of naive merging:
Use secure libraries
Examples:
- lodash (updated versions)
- structuredClone
- secure merge utilities
4. Use hasOwnProperty
Ensure you only access properties belonging to the object.
Example:
if (Object.hasOwn(obj, key)) {
// safe access
}
5. Keep Dependencies Updated
Prototype pollution often appears in vulnerable packages.
Example:
Old lodash versions
deep merge libraries
Regularly audit dependencies.
28. How would you sandbox untrusted JavaScript code?
When executing untrusted JavaScript (e.g., plugins, user scripts, or dynamic code), the goal is to isolate it from the main application so it cannot access sensitive data, modify the DOM, or perform malicious actions.
1. Use Web Workers (Common Approach)
Web Workers run JavaScript in a separate thread with no direct access to the DOM, which naturally limits what the code can do.
Example concept:
Main App
│
Web Worker
│
Untrusted Code
Example:
Benefits
- No DOM access
- Isolated execution environment
- Communication only through postMessage
2. Use an iframe Sandbox
You can run untrusted code inside a sandboxed iframe.
Example:
<iframe sandbox="allow-scripts"></iframe>
The sandbox attribute restricts capabilities such as:
- DOM access
- Cookies
- Local storage
- Top-level navigation
The main application communicates with the iframe using:
window.postMessage()
3. Use Realms / VM-like Execution (Controlled Context)
Untrusted code can be executed in a restricted environment where only specific APIs are exposed.
Example idea:
Here only safeAPI is available to the script.
However, this approach must be used carefully to avoid escape vulnerabilities.
29. What security concerns arise from using third-party scripts?
Third-party scripts (analytics, ads, chat widgets, SDKs, etc.) run inside your webpage with the same privileges as your own code. Because of this, they introduce several security risks.
1. Supply Chain Attacks
If a third-party provider is compromised, malicious code can be injected into their script and executed on your website.
Example scenario:
Your site → loads analytics.js
analytics.js compromised → injects malicious code
Users affected on every page load
Real-world incidents have occurred where attackers modified scripts hosted on CDNs or npm packages.
2. Data Leakage
Third-party scripts may access sensitive information available on the page.
Examples of accessible data:
- User IDs
- Session information
- Form inputs
- Browsing behavior
If the script sends this data to external servers, it may cause privacy violations or data leaks.
3. XSS Injection Risk
A malicious or compromised script can inject additional scripts.
Example:
Third-party script
│
Injects malicious JS
│
Steals cookies or tokens
Since the script runs with full page privileges, it can manipulate the DOM or steal credentials.
4. Access to Sensitive Browser APIs
Third-party scripts may access:
- Cookies
- LocalStorage
- SessionStorage
- DOM content
- Network requests
This can expose authentication tokens or user data if the script behaves maliciously.
🏗️ Core JavaScript Deep Internals
30. Implement Promise.all() from scratch.
Promise.all() takes an array of promises and returns a single promise that:
- Resolves when all promises resolve.
- Rejects immediately if any promise rejects.
The resolved value is an array of results in the same order as the input promises.
Implementation
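A minimal sketch of such a promiseAll, following the steps listed under "How It Works" below:

```javascript
function promiseAll(promises) {
  return new Promise((resolve, reject) => {
    const results = [];
    let completed = 0;

    if (promises.length === 0) {
      resolve([]);
      return;
    }

    promises.forEach((item, index) => {
      // Promise.resolve() handles plain values as well as promises.
      Promise.resolve(item).then(value => {
        results[index] = value; // store by original index to keep order
        completed++;
        if (completed === promises.length) resolve(results);
      }, reject); // the first rejection rejects the whole promise
    });
  });
}
```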
const p1 = Promise.resolve(1);
const p2 = Promise.resolve(2);
const p3 = Promise.resolve(3);

promiseAll([p1, p2, p3]).then(console.log);
Output:
[1, 2, 3]
How It Works
- Create a new promise.
- Track results using an array.
- Track completion count.
- Wrap each item with Promise.resolve() to handle non-promises.
- Store results using the original index to maintain order.
- Resolve when all promises complete.
- Reject immediately if any promise fails.
31. Implement a custom event emitter.
An Event Emitter allows different parts of an application to communicate through events.
It follows the publish–subscribe pattern:
Emitter
│
Event Triggered
│
Subscribed Listeners Execute
This pattern is widely used in Node.js, frontend frameworks, and messaging systems.
Implementation
class EventEmitter {
  constructor() {
    this.events = {};
  }

  on(event, listener) {
    if (!this.events[event]) this.events[event] = [];
    this.events[event].push(listener);
  }

  emit(event, ...args) {
    if (!this.events[event]) return;
    this.events[event].forEach(listener => listener(...args));
  }

  off(event, listener) {
    if (!this.events[event]) return;
    this.events[event] = this.events[event].filter(l => l !== listener);
  }

  once(event, listener) {
    const wrapper = (...args) => {
      listener(...args);
      this.off(event, wrapper);
    };
    this.on(event, wrapper);
  }
}
Example Usage
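A usage sketch (the on/emit parts of the EventEmitter class from above are repeated so the snippet is self-contained; the event name and listener are illustrative):

```javascript
// Trimmed-down EventEmitter from the implementation above.
class EventEmitter {
  constructor() {
    this.events = {};
  }
  on(event, listener) {
    if (!this.events[event]) this.events[event] = [];
    this.events[event].push(listener);
  }
  emit(event, ...args) {
    if (!this.events[event]) return;
    this.events[event].forEach(listener => listener(...args));
  }
}

const emitter = new EventEmitter();
emitter.on("greet", name => console.log("Hello " + name));
emitter.emit("greet", "Lingesh"); // prints: Hello Lingesh
```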
Output:
Hello Lingesh
32. Explain how garbage collection works in V8.
Garbage Collection (GC) in V8 automatically frees memory that is no longer reachable by the program. This prevents memory leaks and ensures efficient memory usage in JavaScript applications.
The core idea is simple:
Objects that are reachable → kept in memory
Objects that are unreachable → removed
1. Memory Regions in V8
V8 divides memory into two main areas:
Heap
├─ New Space (young objects)
└─ Old Space (long-lived objects)
New Space
- Stores short-lived objects
- Most objects are created here
- Small and optimized for fast allocation
Old Space
- Stores long-lived objects
- Objects promoted from New Space
- Larger and collected less frequently
2. Generational Garbage Collection
V8 uses the Generational Hypothesis:
Most objects die young
Few objects live long
Because of this, V8 uses different algorithms for different generations.
3. Minor Garbage Collection (Scavenge)
Used for New Space.
Process:
New Space
├─ From Space
└─ To Space
Steps:
- Objects are allocated in From Space.
- During GC:
  - Live objects are copied to To Space.
  - Dead objects are discarded.
- The spaces are swapped.
Benefits:
- Very fast
- Efficient for short-lived objects
4. Promotion to Old Space
If objects survive multiple minor GCs:
New Space → Old Space
This means the object is likely long-lived.
Example:
const config = { theme: "dark" };
Objects used for the entire app lifetime move to Old Space.
33. Implement deep clone handling circular references.
A deep clone creates a completely independent copy of an object, including all nested objects.
When objects contain circular references, naive recursion causes an infinite loop.
Example circular structure:
const obj = {};
obj.self = obj;
To handle this, we track already-cloned objects using a WeakMap.
Implementation
function deepClone(obj, map = new WeakMap()) {
  if (obj === null || typeof obj !== "object") return obj;
  if (map.has(obj)) return map.get(obj);

  const clone = Array.isArray(obj) ? [] : {};
  map.set(obj, clone);

  for (const key in obj) {
    clone[key] = deepClone(obj[key], map);
  }

  return clone;
}
Example
const a = { value: 1 };
a.self = a;

const copy = deepClone(a);
console.log(copy.value); // 1
console.log(copy.self === copy); // true

How It Works

- Primitive values are returned directly.
- A WeakMap stores already cloned objects.
- If the object is encountered again, return the stored clone.
- This prevents infinite recursion from circular references.
34. Write a polyfill for bind().
bind() creates a new function with a fixed this context and optionally preset arguments.
Example:

Function.prototype.myBind = function (context, ...args) {
  const fn = this;
  return function (...newArgs) {
    return fn.apply(context, [...args, ...newArgs]);
  };
};

Example Usage

function greet(greeting) {
  console.log(greeting + " " + this.name);
}

const user = { name: "Lingesh" };
const sayHello = greet.myBind(user, "Hello");
sayHello(); // Hello Lingesh

How It Works

- Inside myBind, this refers to the original function.
- The original function is stored in fn.
- A new function is returned.
- When the returned function runs, it calls the original function using apply, sets context as this, and combines preset and runtime arguments.
📊 Real-World Architecture Problems
35. How would you design a real-time chat system?
A real-time chat system must support instant message delivery, scalability, and reliability while handling large numbers of concurrent users.
1. High-Level Architecture
A typical chat architecture looks like:
Client (Web / Mobile)
│
WebSocket Gateway
│
Chat Service
│
Message Queue
│
Database / Storage
Flow:
User A sends message
│
WebSocket Server
│
Chat Service
│
Message Queue
│
User B receives message
2. Real-Time Communication
For real-time messaging, use persistent connections instead of HTTP polling.
Common protocols:
- WebSockets (most common)
- Server-Sent Events
- Long polling (fallback)
Example:
Client ↔ WebSocket Connection ↔ Server
This allows instant bidirectional communication.
3. Message Flow
When a user sends a message:
User sends message
│
WebSocket Server receives it
│
Store message in database
│
Publish event to message queue
│
Deliver to recipient(s)
This ensures messages are both delivered and persisted.
36. Design infinite scroll with minimal re-renders.
Infinite scroll loads additional data as the user scrolls, instead of loading everything at once. The key challenge is to avoid unnecessary re-renders and heavy DOM updates, especially when large datasets are involved.
1. Core Architecture
A typical infinite scroll flow:
User Scrolls
│
Intersection Observer detects bottom
│
Fetch next page
│
Append items to list
This avoids continuous scroll event listeners and improves performance.
2. Detect Scroll Position Efficiently
Instead of using scroll events (which fire frequently), use IntersectionObserver to detect when the user reaches the bottom.
Example idea:
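A sketch of the observer setup, assuming a sentinel element placed after the last rendered list item and a loadNextPage callback supplied by the app:

```javascript
// Observe a sentinel element at the end of the list; when it scrolls
// into view, fetch the next page. No scroll event listeners needed.
function watchSentinel(sentinel, loadNextPage) {
  const observer = new IntersectionObserver(entries => {
    if (entries[0].isIntersecting) {
      loadNextPage();
    }
  });
  observer.observe(sentinel);
  return observer;
}

// In the page, something like:
// watchSentinel(document.querySelector("#sentinel"), fetchNextPage);
```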
3. Maintain Append-Only State
To avoid unnecessary re-renders:
Existing items stay unchanged
Only new items are appended
Example concept:
setItems(prev => [...prev, ...newItems]);
Appending ensures previously rendered items are not recreated.
4. Use Stable Keys
Each item must have a stable unique key.
Example:
items.map(item => (
<Item key={item.id} data={item} />
));
Stable keys allow the rendering engine to reuse existing DOM nodes.
37. How would you design a collaborative editor (like Google Docs)?
A collaborative editor allows multiple users to edit the same document simultaneously, while keeping the document consistent across all clients in real time.
The system must solve challenges such as concurrent edits, conflict resolution, low latency updates, and document synchronization.
1. High-Level Architecture
A typical architecture:
Client Editors
│
WebSocket Server
│
Collaboration Service
│
Document Storage
Flow:
User edits document
│
Send operation to server
│
Broadcast update to other users
│
Update their editors
WebSockets are commonly used for real-time bidirectional communication.
2. Document Representation
The document is usually represented as a sequence of operations rather than full text updates.
Example:
Insert "Hello"
Delete character at position 5
Insert "!"
Sending operations instead of entire documents reduces network overhead.
3. Conflict Resolution
When multiple users edit the document simultaneously, conflicts must be resolved.
Two major techniques are used:
Operational Transformation (OT)
Used in systems like Google Docs.
Concept:
User A inserts text
User B deletes text
System transforms operations to maintain consistency
Operations are transformed based on concurrent edits.
CRDT (Conflict-Free Replicated Data Types)
Alternative approach used by many modern collaborative systems.
Concept:
Each change has a unique ID
Changes can merge automatically
All replicas converge to the same state
CRDT allows edits to be applied without centralized ordering.
38. How would you implement a rate limiter in JavaScript?
A rate limiter controls how frequently a function can execute within a specific time window. It is useful for protecting APIs, preventing spam requests, and limiting user actions.
Example goal:
Allow only 5 requests per second
1. Fixed Window Rate Limiter
Track how many calls occur within a time window.
Implementation
function createRateLimiter(limit, interval) {
  let count = 0;
  let start = Date.now();

  return function () {
    const now = Date.now();

    if (now - start > interval) {
      count = 0;
      start = now;
    }

    if (count < limit) {
      count++;
      return true;
    }

    return false;
  };
}

Usage
const limiter = createRateLimiter(3, 1000);

console.log(limiter()); // true
console.log(limiter()); // true
console.log(limiter()); // true
console.log(limiter()); // false
Only 3 calls per second are allowed.
2. Token Bucket Rate Limiter (More Flexible)
The token bucket algorithm allows bursts while maintaining a maximum rate.
Concept:
- Bucket holds tokens
- Each request consumes 1 token
- Tokens refill over time
Implementation
function tokenBucket(limit, refillRate) {
  let tokens = limit;
  let lastRefill = Date.now();

  return function () {
    const now = Date.now();
    const elapsed = (now - lastRefill) / 1000;

    tokens = Math.min(limit, tokens + elapsed * refillRate);
    lastRefill = now;

    if (tokens >= 1) {
      tokens -= 1;
      return true;
    }

    return false;
  };
}

39. How would you handle errors globally in a large frontend app?
In large frontend applications, errors can occur in UI components, API calls, asynchronous operations, or runtime exceptions. A global error-handling strategy ensures that errors are captured, logged, and handled consistently without crashing the application.
1. Global Error Boundary for UI Errors
UI frameworks often provide error boundaries to catch rendering errors in components.
Concept:
Component error
│
Error Boundary
│
Fallback UI
Example idea:
Try rendering component
If error occurs → show fallback UI
Benefits:
- Prevents the entire application from crashing
- Allows graceful fallback screens
2. Global Error Listeners
Capture runtime errors that occur outside component rendering.
Example concept:
window.onerror
window.onunhandledrejection
These listeners capture:
- JavaScript runtime errors
- Unhandled promise rejections
Example:
window.addEventListener("unhandledrejection", (event) => {
console.error("Unhandled promise:", event.reason);
});
3. Centralized API Error Handling
API requests should be handled through a centralized API client.
Architecture:
UI Components
│
API Client
│
Error Interceptor
│
Server
Common logic handled centrally:
- Authentication errors
- Retry logic
- Network failures
- Error logging
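A sketch of such a client, assuming a global fetch (available in modern browsers and Node 18+); apiFetch and onAuthError are illustrative names, not a specific library's API:

```javascript
// Every request goes through this one function, so auth handling,
// error mapping, and logging live in a single place.
async function apiFetch(url, { onAuthError = () => {}, ...options } = {}) {
  const res = await fetch(url, options);

  if (res.status === 401) {
    onAuthError(); // e.g. redirect to login or refresh the token
  }
  if (!res.ok) {
    throw new Error("API error " + res.status);
  }
  return res.json();
}
```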
4. Logging and Monitoring
Errors should be logged to monitoring systems so developers can diagnose issues.
Popular tools:
- Sentry
- Datadog
- New Relic
- LogRocket
Captured information typically includes:
- Error message
- Stack trace
- User session
- Browser details
5. User-Friendly Error Handling
Users should not see raw error messages.
Example:
Internal error → "Something went wrong"
Network error → "Check your internet connection"
This improves user experience while still logging the real error internally.
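This mapping can be centralized in one small helper (a sketch; the error classification by name is an assumption about how the app labels failures):

```javascript
// Map internal errors to safe, user-facing messages; the raw error
// still goes to the logging pipeline separately.
function toUserMessage(error) {
  if (error && error.name === "NetworkError") {
    return "Check your internet connection";
  }
  return "Something went wrong";
}
```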