JavaScript interviews for experienced engineers demand more than just a basic understanding of syntax and concepts. They require a deep dive into advanced topics that demonstrate your ability to solve complex problems and architect robust solutions. Whether you're aiming to advance your career or secure a new role, mastering these 20 advanced JavaScript interview questions will not only enhance your technical prowess but also set you apart from others.
What needs to be changed to properly make `function foo(){ }();` an IIFE?

IIFE stands for Immediately Invoked Function Expression. The JavaScript parser reads `function foo(){ }();` as `function foo(){ }` followed by `();`, where the former is a function declaration and the latter is an attempt to call a function without a name. This results in a `SyntaxError`.

To fix this, wrap the function in parentheses: `(function foo(){ })()`. This turns it into a function expression, allowing it to be executed immediately.
In JavaScript, iterators and generators are powerful tools for managing sequences of data and controlling the flow of execution in a more flexible way.
Iterators are objects that define a sequence and provide a next()
method to access the next value in the sequence. They are used to iterate over data structures like arrays, strings, and custom objects.
Creating a custom iterator for a range of numbers
In JavaScript, we can provide a default iterator for any custom object by implementing the `[Symbol.iterator]()` method.
```js
class Range {
  constructor(start, end) {
    this.start = start;
    this.end = end;
  }

  [Symbol.iterator]() {
    let current = this.start;
    const end = this.end;
    return {
      next() {
        if (current <= end) {
          return { value: current++, done: false };
        } else {
          return { value: undefined, done: true };
        }
      },
    };
  }
}

const range = new Range(1, 3);
for (const number of range) {
  console.log(number); // 1, 2, 3
}
```
Generators are a special kind of function that can pause and resume their execution, allowing them to generate a sequence of values on-the-fly. They are commonly used to create iterators but have other applications as well.
Creating an iterator using a generator function
We can rewrite our `Range` example to use a generator function:
```js
class Range {
  constructor(start, end) {
    this.start = start;
    this.end = end;
  }

  *[Symbol.iterator]() {
    let current = this.start;
    while (current <= this.end) {
      yield current++;
    }
  }
}

const range = new Range(1, 3);
for (const number of range) {
  console.log(number); // 1, 2, 3
}
```
Iterating over data streams
Generators are well-suited for iterating over data streams, such as fetching data from an API or reading files.
```js
// Note: `await` inside a generator requires an async generator (async function*),
// and `for await...of` must run inside an async function or an ES module.
async function* fetchDataInBatches(url, batchSize = 10) {
  let startIndex = 0;
  while (true) {
    const response = await fetch(`${url}?start=${startIndex}&limit=${batchSize}`);
    const data = await response.json();
    if (data.length === 0) break;
    yield data;
    startIndex += batchSize;
  }
}

const dataGenerator = fetchDataInBatches('https://api.example.com/data');
for await (const batch of dataGenerator) {
  console.log(batch);
}
```
Property flags and descriptors in JavaScript manage how object properties behave, allowing control over property access, modification, and inheritance.
Property flags are defined using `Object.defineProperty()`. Key flags include:

- `writable`: Can the property be written to? Defaults to `true` for normally created properties (but `false` when a property is defined via `Object.defineProperty()`).
- `enumerable`: Is the property enumerable? Same defaults as above.
- `configurable`: Can the property be deleted or reconfigured? Same defaults as above.

Property descriptors provide detailed information about a property, including its value and flags. Use `Object.getOwnPropertyDescriptor()` to retrieve a descriptor and `Object.defineProperty()` to set one.
Example:
```js
let user = { name: 'John Doe' };
let descriptor = Object.getOwnPropertyDescriptor(user, 'name');
console.log(descriptor);
// { value: "John Doe", writable: true, enumerable: true, configurable: true }
```
`writable`: Controls whether a property can be written to. If `false`, writing fails silently in non-strict mode and throws a `TypeError` in strict mode.
```js
const obj = {};
Object.defineProperty(obj, 'name', { writable: false, value: 'John Doe' });

console.log(obj.name); // John Doe
obj.name = 'Jane Doe'; // TypeError in strict mode
```
`enumerable`: Controls whether a property is visible in `for...in` loops.
```js
const obj = {};
Object.defineProperty(obj, 'name', {
  enumerable: false,
  value: 'John Doe',
});

for (const prop in obj) console.log(prop); // No output
```
`configurable`: Controls whether a property can be deleted or reconfigured. If `false`, deleting or altering it fails silently in non-strict mode and throws a `TypeError` in strict mode.
```js
const obj = {};
Object.defineProperty(obj, 'name', {
  configurable: false,
  value: 'John Doe',
});

delete obj.name; // TypeError in strict mode
```
Polyfills are scripts that enable modern JavaScript features in older browsers that lack support, allowing developers to use the latest language features while maintaining compatibility.
Polyfills detect missing features and provide custom implementations using existing JavaScript. For example, `Array.prototype.includes()` is not supported in older browsers like Internet Explorer 11:
```js
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement) {
    for (var i = 0; i < this.length; i++) {
      if (this[i] === searchElement) return true;
    }
    return false;
  };
}
```
A polyfill typically checks whether the feature already exists, using checks like `typeof`, `in`, or a lookup on `window`, before defining it. Common resources include:

- `core-js`: Provides polyfills for many ECMAScript features.
```js
import 'core-js/actual/array/flat-map';

[1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]
```
- Polyfill.io: Serves polyfills based on requested features and user agents.
<script src="https://polyfill.io/v3/polyfill.min.js"></script>
Polyfills ensure modern JavaScript features work across all browsers, enhancing compatibility and functionality.
Server-Sent Events (SSE) is a standard that allows servers to push updates to web clients over a single, long-lived HTTP connection. This enables real-time updates without the client constantly polling the server for new data.
- The client creates an `EventSource` object, providing the URL of the server-side script that generates the event stream.
- The server sends messages as plain text, with fields such as `event`, `data`, and `id`.
- The `EventSource` object receives events and dispatches them as browser events, which can be handled using event listeners.
- `EventSource` automatically handles reconnection if the connection is lost, resuming the stream from the last received event ID: the client sends the `Last-Event-Id` header when reconnecting, allowing the server to resume the stream.

Client:
```js
const eventSource = new EventSource('/sse');
eventSource.onmessage = (event) => console.log('New message:', event.data);
```
Server (Node.js):
```js
const http = require('http');

http
  .createServer((req, res) => {
    if (req.url === '/sse') {
      // Set headers for SSE
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        Connection: 'keep-alive',
      });

      // Function to send a message
      const sendMessage = (message) => {
        res.write(`data: ${message}\n\n`); // Messages are delimited with double line breaks
      };

      // Send a message every 5 seconds
      const intervalId = setInterval(() => {
        sendMessage(`Current time: ${new Date().toLocaleTimeString()}`);
      }, 5000);

      // Handle client disconnect
      req.on('close', () => {
        clearInterval(intervalId);
        res.end();
      });
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8080, () => {
    console.log('SSE server running on port 8080');
  });
```
SSE provides an efficient and straightforward way to push updates from a server to a client in real-time. It is well-suited for applications requiring continuous data streams but not full bidirectional communication.
JavaScript workers run scripts in background threads, offloading intensive tasks to keep the user interface responsive. There are three main types of workers in JavaScript: Web Workers / Dedicated Workers, Service Workers and Shared Workers.
Workers communicate with the main script via `postMessage()` and the `onmessage` event handler.

`main.js`:
```js
// Check if the browser supports workers
if (window.Worker) {
  // Create a new Worker
  const myWorker = new Worker('worker.js');

  // Post a message to the worker
  myWorker.postMessage('Hello, Worker!');

  // Listen for messages from the worker
  myWorker.onmessage = function (event) {
    console.log('Message from Worker:', event.data);
  };

  // Error handling
  myWorker.onerror = function (error) {
    console.error('Error from Worker:', error);
  };
}
```
`worker.js`:
```js
// Listen for messages from the main script
onmessage = function (event) {
  console.log('Message from Main Script:', event.data);

  // Perform a task (e.g., some computation)
  const result = event.data + ' - Processed by Worker';

  // Post the result back to the main script
  postMessage(result);
};
```
Service Worker registration in `main.js`:
```js
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/service-worker.js')
    .then((registration) => {
      console.log('Service Worker registered:', registration);
    })
    .catch((err) => {
      console.log('Service Worker registration failed:', err);
    });
}
```
`service-worker.js`:
```js
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((response) => {
      return response || fetch(event.request);
    }),
  );
});
```
"use strict";
?** What are the advantages and disadvantages to using it?"use strict";
is a directive from ECMAScript 5 (ES5) that enforces stricter parsing and error handling in JavaScript, making code more secure and less error-prone.
Global Scope: Add at the beginning of a JavaScript file.
```js
'use strict';

function add(a, b) {
  return a + b;
}
```
Local Scope: Add at the beginning of a function.
```js
function myFunction() {
  'use strict';
  // Strict mode only within this function
}
```
Strict mode also disallows access to `arguments.caller` and restricts `eval()` so it cannot create variable declarations in the calling scope.

```js
// Without strict mode
function defineNumber() {
  count = 123;
}
defineNumber();
console.log(count); // Logs: 123

// With strict mode
'use strict';
function strictFunc() {
  strictVar = 123; // ReferenceError
}
strictFunc();
console.log(strictVar); // ReferenceError
```
While "use strict";
is not mandatory in these contexts, it is still recommended for older code and broader compatibility.
To secure authentication and authorization in JavaScript applications, use HTTPS to encrypt data in transit and handle sensitive data like tokens carefully when storing them in `localStorage` or `sessionStorage`. Employ token-based authentication using JWTs, validating tokens server-side. Utilize protocols like OAuth for third-party authentication and enforce role-based access control (RBAC) for proper authorization.
Minimize direct DOM access by batching changes, using `DocumentFragment`, and leveraging virtual DOM libraries like React. Use `requestAnimationFrame` for animations and avoid layout thrashing by separating DOM reads and writes.
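As an illustration (a minimal sketch; the element IDs are hypothetical), DOM writes can be batched with a `DocumentFragment` and visual updates scheduled with `requestAnimationFrame`:

```js
// Batch many inserts into one DOM write using a DocumentFragment
const list = document.getElementById('list'); // hypothetical element
const fragment = document.createDocumentFragment();

for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `Item ${i}`;
  fragment.appendChild(li);
}

list.appendChild(fragment); // one reflow instead of 1000

// Schedule visual updates on the next frame to avoid layout thrashing
requestAnimationFrame(() => {
  list.style.transform = 'translateY(10px)';
});
```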
Minimize the number of requests, use caching, compress data, and leverage HTTP/2 and service workers. Combine CSS files, use `Cache-Control` headers for static assets, and enable Gzip compression to reduce data size.
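As a rough server-side sketch (assuming a Node.js setup with the `express` and `compression` packages, which this article doesn't prescribe), static assets can be served with a long-lived `Cache-Control` header and Gzip compression:

```js
const express = require('express'); // assumed dependency
const compression = require('compression'); // assumed dependency

const app = express();

// Gzip-compress responses to reduce transfer size
app.use(compression());

// Serve static assets with a long-lived Cache-Control header
app.use(express.static('public', { maxAge: '1y' }));

app.listen(3000);
```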
Prevent clickjacking by using the X-Frame-Options HTTP header set to DENY or SAMEORIGIN to control iframe embedding. Additionally, use the Content-Security-Policy header with the frame-ancestors directive to specify allowed origins.
```
X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'self'
```
Use the Constraint Validation API with properties like `validity` and `validationMessage`, and methods like `checkValidity()` and `setCustomValidity()`. For example:
```js
const input = document.querySelector('input');

if (input.checkValidity()) {
  console.log('Input is valid');
} else {
  console.log(input.validationMessage);
}
```
In JavaScript, hoisting moves function declarations to the top of their scope, making them callable before their definition. Function expressions are not hoisted similarly; the variable is hoisted, but its assignment is not.
```js
// Function declaration
console.log(foo()); // Works fine
function foo() {
  return 'Hello';
}

// Function expression
console.log(bar()); // Throws TypeError: bar is not a function
var bar = function () {
  return 'Hello';
};
```
JavaScript uses automatic garbage collection to reclaim memory from objects and variables no longer in use. The two main algorithms are mark-and-sweep and generational garbage collection.
- Mark-and-sweep: The collector starts from root references (such as the global object and the current call stack), marks every object reachable from them, and then sweeps away unmarked, unreachable objects, reclaiming their memory.
- Generational garbage collection: Modern engines divide the heap into generations by object age. New objects are allocated in a frequently collected young generation, and objects that survive several collections are promoted to an older generation that is collected less often, which is efficient because most objects are short-lived.
Different JavaScript engines may use different garbage collection strategies.
Mocks and stubs simulate real objects in testing. Stubs provide predefined responses to function calls, isolating the code being tested from external dependencies. Mocks are more complex, verifying interactions like whether a function was called and with what arguments. Stubs focus on isolating functionality, while mocks ensure correct interaction with dependencies.
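As a brief illustration (assuming Jest, one of the test runners mentioned later in this article; `getUser` and `sendEmail` are hypothetical dependencies), a stub supplies a canned value while a mock is asserted on:

```js
test('welcome() greets the fetched user', () => {
  // Stub: replace a dependency with a canned response to isolate the code under test
  const getUser = jest.fn().mockReturnValue({ id: 1, name: 'Ada' });

  // Mock: verify how the dependency was called
  const sendEmail = jest.fn();

  function welcome(userId, deps) {
    const user = deps.getUser(userId);
    deps.sendEmail(user.name, 'Welcome!');
  }

  welcome(1, { getUser, sendEmail });

  expect(getUser).toHaveBeenCalledWith(1);
  expect(sendEmail).toHaveBeenCalledWith('Ada', 'Welcome!');
});
```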
A proxy in JavaScript is an intermediary object that intercepts and customizes operations on another object, such as property access, assignment, and function invocation.
Example:
```js
const myObject = {
  name: 'John',
  age: 42,
};

const handler = {
  get: function (target, prop) {
    console.log(`Accessed property "${prop}"`);
    return target[prop];
  },
};

const proxiedObject = new Proxy(myObject, handler);

console.log(proxiedObject.name);
// Accessed property "name"
// Logs: 'John'

console.log(proxiedObject.age);
// Accessed property "age"
// Logs: 42
```
Use cases include validating property assignments, logging and tracing property access (as in the example above), and supplying default values for missing properties.
Using languages like TypeScript or CoffeeScript, which compile to JavaScript, has several pros and cons.
Advantages: improved syntax, static type checking (in TypeScript's case), better tooling and IDE support, and more robust error detection before runtime.

Disadvantages: an extra build/compile step, added workflow complexity, possible performance overhead, and a learning curve for the new syntax.
- Use `requestAnimationFrame` for synchronized animations.
- Apply `will-change` to elements that change frequently.

Tools such as Chrome DevTools, Lighthouse, WebPageTest, and JSPerf are commonly used for this purpose. Chrome DevTools includes a Performance panel for profiling, Lighthouse provides performance audits, WebPageTest offers detailed performance testing, and JSPerf aids in comparing JavaScript snippet performance.
Web Workers enable running JavaScript in the background, independent of the main execution thread of a web application. This is beneficial for handling intensive computations without blocking the user interface. Web Workers are created using the `Worker` constructor, and communication with them happens through the `postMessage()` and `onmessage` methods.
Preparing yourself to answer these questions in an interview setting will certainly help you stand out from the crowd. It's not just about knowing the answers; it's about understanding the underlying concepts and applying them effectively in real-world scenarios. Mastering these advanced JavaScript topics will not only boost your confidence during technical interviews but also equip you to build scalable and efficient web applications.
For seasoned frontend engineers with over a decade of experience, interviews delve into sophisticated topics that test problem-solving skills and architectural expertise. To help you excel in these interviews, we've curated a definitive list of 20 advanced JavaScript questions. These questions cover intricate concepts like microtask queues, closures, async/await, and more, designed to showcase your deep understanding and ability to navigate complex challenges.
The microtask queue in JavaScript is where tasks like promise callbacks (`then` and `catch`), `async` function continuations, and certain APIs like `MutationObserver` are queued for execution. It's separate from the regular task queue and has higher priority, ensuring microtasks are processed immediately after the current execution context is clear. This queue follows FIFO (First In, First Out) order, ensuring predictable handling of asynchronous operations in JavaScript applications.
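For example, a microtask queued by a resolved promise runs before a macrotask queued with `setTimeout`:

```js
console.log('script start');

setTimeout(() => console.log('macrotask: setTimeout'), 0);

Promise.resolve().then(() => console.log('microtask: promise.then'));

console.log('script end');

// Output order:
// script start
// script end
// microtask: promise.then
// macrotask: setTimeout
```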
Potential pitfalls of using closures in JavaScript include unintentionally keeping large objects alive (the closed-over scope cannot be garbage collected while the closure exists) and capturing a `var` loop variable so that every callback sees its final value.
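The loop-variable pitfall, and its usual fix, looks like this:

```js
// Pitfall: all three callbacks close over the same `var i`
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0); // logs 3, 3, 3
}

// Fix: `let` creates a new binding per iteration
for (let j = 0; j < 3; j++) {
  setTimeout(() => console.log(j), 0); // logs 0, 1, 2
}
```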
Anonymous functions offer a concise way to define functions, especially for simple operations or callbacks. They are commonly used in Immediately Invoked Function Expressions (IIFEs) to encapsulate code within a local scope, preventing variables from leaking into the global scope:
```js
(function () {
  var x = 10;
  console.log(x); // 10
})();

// x is not accessible here
console.log(typeof x); // undefined
```
Anonymous functions are also effective as callbacks, enhancing code readability by defining handlers inline:
```js
setTimeout(() => {
  console.log('Hello world!');
}, 1000);
```
Moreover, they are utilized with higher-order functions like `map()`, `filter()`, and `reduce()` in functional programming:
```js
const arr = [1, 2, 3];
const double = arr.map((el) => el * 2);
console.log(double); // [2, 4, 6]
```
In event handling, anonymous functions are widely employed in frameworks like React to define inline callback functions:
```jsx
function App() {
  return <button onClick={() => console.log('Clicked!')}>Click Me</button>;
}
```
These uses showcase how anonymous functions streamline code by keeping logic concise and scoped appropriately.
Languages that compile to JavaScript, like TypeScript or CoffeeScript, offer advantages such as improved syntax, type safety, and better tooling. These languages enhance code readability, provide robust error checking, and support advanced IDE features.
However, using such languages also introduces challenges. Developers may face additional build steps and increased complexity in their workflow. There could be potential performance overhead compared to writing directly in JavaScript. Moreover, adapting to new syntax and learning the intricacies of these languages can pose a learning curve initially.
The event loop in JavaScript manages asynchronous operations to prevent blocking the single-threaded execution:

- Synchronous code runs on the call stack.
- Asynchronous work (timers, network requests, DOM events) is delegated to browser or Node.js APIs.
- When that work completes, its callback is queued: promise callbacks go to the microtask queue, timers and I/O callbacks to the task queue.
- Whenever the call stack is empty, the event loop first drains the microtask queue, then picks the next task from the task queue.
This cycle ensures JavaScript remains responsive by handling both synchronous and asynchronous tasks efficiently.
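A small example of that ordering:

```js
setTimeout(() => console.log('timeout 1'), 0);

Promise.resolve()
  .then(() => console.log('microtask 1'))
  .then(() => console.log('microtask 2'));

setTimeout(() => console.log('timeout 2'), 0);

// Output: microtask 1, microtask 2, timeout 1, timeout 2
// All queued microtasks are drained before the event loop runs the next macrotask.
```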
Data binding in JavaScript automates the synchronization of data between the model (data source) and the view (UI). It ensures changes in one are immediately reflected in the other, enhancing application interactivity and reducing manual updates. There are two types: one-way binding, where data flows only from the model to the view, and two-way binding, where changes on either side are propagated to the other.
Hoisting in JavaScript can cause unexpected outcomes because variable and function declarations are lifted to the top of their scope during compilation. This behavior can lead to variables being accessed before their declaration, resulting in `undefined` values. It can also create confusion between function declarations and expressions. For instance:
```js
console.log(a); // undefined
var a = 5;

console.log(b); // ReferenceError: Cannot access 'b' before initialization
let b = 10;
```
In the example above, `a` is hoisted and initialized as `undefined` before it's assigned `5`. However, accessing `b` throws a `ReferenceError` because `let` declarations, unlike `var`, are hoisted but left uninitialized until the declaration is reached (the temporal dead zone).
`async`/`await` is a contemporary feature in JavaScript designed to streamline the handling of promises. When you declare a function with the `async` keyword, you can use the `await` keyword within that function to pause execution until a promise settles. This approach aligns asynchronous code structure more closely with synchronous code, enhancing readability and maintainability.
Example usage:
```js
async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error);
  }
}
```
In this example, `fetchData()` pauses at each `await` until the corresponding promise resolves, and any network or parsing error is handled in the `catch` block.
In JavaScript, iterators and generators offer flexible ways to manage data sequences and control execution flow.
Iterators define a sequence and terminate with a potential return value. They require a `next()` method that returns an object with `value` (the next value in the sequence) and `done` (a boolean indicating completion) properties.
Example of an iterator:
```js
const iterator = {
  current: 0,
  last: 5,
  next() {
    if (this.current <= this.last) {
      return { value: this.current++, done: false };
    } else {
      return { value: undefined, done: true };
    }
  },
};

let result = iterator.next();
while (!result.done) {
  console.log(result.value); // Logs 0, 1, 2, 3, 4, 5
  result = iterator.next();
}
```
Generators are special functions that use the `function*` syntax and the `yield` keyword to control execution flow. They return an iterator object, allowing execution to be paused and resumed.
Example of a generator:
```js
function* numberGenerator() {
  let num = 0;
  while (num <= 5) {
    yield num++;
  }
}

const gen = numberGenerator();
console.log(gen.next()); // { value: 0, done: false }
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: 4, done: false }
console.log(gen.next()); // { value: 5, done: false }
console.log(gen.next()); // { value: undefined, done: true }
```
Generators are efficient for creating iterators on-demand, useful for lazy evaluation, custom data structures, and asynchronous data handling.
Web Workers enable JavaScript code to run in the background, separate from the main execution thread of a web application. They handle intensive computations without freezing the user interface. Here's a concise example:
main.js:
```js
const worker = new Worker('worker.js');
worker.postMessage('Hello, worker!');
worker.onmessage = (event) => console.log('Message from worker:', event.data);
```
worker.js:
```js
onmessage = (event) => {
  console.log('Message from main script:', event.data);
  postMessage('Hello, main script!');
};
```
Web Workers boost performance by offloading heavy tasks, ensuring smoother user interaction in web applications.
Memoization in JavaScript is a technique used to optimize functions by caching the results of expensive function calls and returning the cached result when the same inputs occur again. This can significantly improve performance by avoiding redundant calculations.
It is particularly useful for functions that are computationally expensive but deterministic—meaning they always produce the same output for the same input.
Here's a concise implementation example using a Fibonacci function:
```js
function memoize(fn) {
  const cache = {};
  return function (...args) {
    const key = JSON.stringify(args);
    return cache[key] || (cache[key] = fn.apply(this, args));
  };
}

function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

const memoizedFibonacci = memoize(fibonacci);

console.log(memoizedFibonacci(6)); // Output: 8
console.log(memoizedFibonacci(7)); // Output: 13
console.log(memoizedFibonacci(6)); // Output: 8 (retrieved from cache)
```
To optimize performance and reduce reflows and repaints, follow these strategies:
- Batch DOM updates: Use `DocumentFragment` or `innerHTML` to insert multiple DOM nodes at once.
- `requestAnimationFrame`: Schedule animations and layout changes using `requestAnimationFrame` for smoother rendering.
- `will-change`: Mark elements that will undergo frequent changes with the `will-change` CSS property to optimize rendering.

Implementing these practices helps ensure that your web application performs efficiently, maintaining smooth user interactions and responsive UI updates.
JavaScript polyfills are code snippets designed to replicate the behavior of modern JavaScript features on browsers that do not natively support them. They detect the absence of a specific feature and provide an alternative implementation using existing JavaScript capabilities.
For instance, consider the `Array.prototype.includes()` method, which checks whether an array contains a particular element. This method isn't supported in older browsers such as Internet Explorer 11. To bridge this gap, a polyfill for `Array.prototype.includes()` can be implemented as follows:
```js
// Polyfill for Array.prototype.includes()
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement) {
    for (var i = 0; i < this.length; i++) {
      if (this[i] === searchElement) {
        return true;
      }
    }
    return false;
  };
}
```
Such polyfills first check whether the feature already exists, using checks like `typeof`, `in`, or a lookup on `window`. Libraries like `core-js` bundle polyfills for many features:

```js
import 'core-js/actual/array/flat-map'; // Example: polyfill for Array.prototype.flatMap

[1, 2].flatMap((it) => [it, it]); // Output: [1, 1, 2, 2]
```

Services such as Polyfill.io serve polyfills tailored to the requested features and user agent:
<script src="https://polyfill.io/v3/polyfill.min.js"></script>
JavaScript polyfills play a crucial role in ensuring cross-browser compatibility and enabling the adoption of modern JavaScript features in environments with varying levels of browser support.
Module bundlers like Webpack, Parcel, and Rollup offer key benefits for web development: they combine many modules into optimized bundles, apply optimizations such as minification, code splitting, and tree shaking, transpile and bundle code for older browsers, and integrate with development tooling like dev servers and hot reloading.
Module bundlers streamline code organization, enhance performance, ensure compatibility, and integrate seamlessly with development tools, essential for modern web development.
Tree shaking is a module bundling technique that removes dead code (code that's never used or executed) from the final bundle. This optimization reduces bundle size and enhances application performance. Tools like Webpack and Rollup support tree shaking primarily with ES6 module syntax (`import`/`export`), analyzing the code's dependency graph to eliminate unused exports efficiently.
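A minimal illustration (the module and function names are hypothetical): because ES module imports are static, a bundler can see that only `add` is used and drop `subtract` from the output:

```js
// math.js
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b; // never imported anywhere — eligible for tree shaking
}

// main.js
import { add } from './math.js';

console.log(add(2, 3)); // only `add` ends up in the production bundle
```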
Common performance bottlenecks in JavaScript applications often stem from inefficient DOM manipulation, excessive global variables, blocking the main thread with heavy computations, memory leaks, and improper use of asynchronous operations.
To address these challenges, employing techniques such as debouncing and throttling for event handling, optimizing DOM updates with batch processing, and utilizing web workers for offloading heavy computations can significantly enhance application responsiveness and efficiency. These approaches help mitigate the impact of these bottlenecks on user experience and overall application performance.
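As a sketch of one such technique (not tied to any particular library; the `#search` element is hypothetical), a debounce helper delays a handler until input pauses:

```js
function debounce(fn, delay = 300) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId);
    timerId = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Usage: run the search only after the user stops typing for 300 ms
const onSearchInput = debounce((event) => {
  console.log('Searching for:', event.target.value);
}, 300);

document.querySelector('#search')?.addEventListener('input', onSearchInput);
```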
Unit tests verify individual functions or modules in isolation, integration tests check that modules work correctly together, and end-to-end tests exercise complete user flows in a real browser. Each type of testing plays a crucial role in ensuring software quality across different levels of application functionality and integration.
Using security-focused tools and techniques (such as input validation and sanitization, dependency audits, and linters) helps ensure JavaScript applications are secure against common vulnerabilities.
Content Security Policy (CSP) is a critical security feature designed to mitigate vulnerabilities like Cross-Site Scripting (XSS) and data injection attacks. By defining a whitelist of trusted sources for content such as scripts, stylesheets, and images, CSP restricts which resources a browser can load and execute on a webpage. It is typically set using HTTP headers or `<meta>` tags in HTML. For instance, the `Content-Security-Policy` header can specify that only scripts from the same origin (`'self'`) are allowed to execute:
content-security-policy: script-src 'self';
This approach ensures that only trusted scripts from specified sources can run, enhancing the security of web applications by preventing unauthorized script execution and protecting against malicious code injection attempts.
Why is `document.write()` rarely used in modern web development?

`document.write()` is rarely used because, if called after the page has loaded, it can overwrite the entire document. It's typically reserved for simple tasks during initial page load, such as educational examples or quick debugging. Instead, it's generally recommended to use safer methods like `innerHTML`, `appendChild()`, or modern frameworks/libraries for more controlled and secure DOM manipulation.
Well done, you've reached the end! These questions serve as a comprehensive guide to showcasing your breadth and depth of knowledge in JavaScript. If you're already familiar with all of them, that's fantastic! If not, don't be disheartened; view this as an opportunity to dive deeper into these intricate topics. Mastering these concepts will not only prepare you for advanced JavaScript interviews but also strengthen your overall technical expertise.
As a seasoned JavaScript developer with 5+ years of experience, you're likely no stranger to the intricacies of JavaScript. However, even the most experienced developers can benefit from a refresher on the most critical concepts and nuances of the language. In this article, we'll cover the top 20 JavaScript interview questions that can help you prepare effectively for your next interview.
Anonymous functions provide a concise way to define functions, especially useful for simple operations or callbacks. They are commonly used in:
- Immediately Invoked Function Expressions (IIFEs) to encapsulate code in a local scope.
- Callbacks, such as timer and event handlers.
- Functional programming constructs with higher-order functions like `map()`, `filter()`, and `reduce()`.

Check out the code example below:
```js
// Encapsulating code using an IIFE
(function () {
  // Some code here.
})();

// Callbacks
setTimeout(function () {
  console.log('Hello world!');
}, 1000);

// Functional programming constructs
const arr = [1, 2, 3];
const double = arr.map(function (el) {
  return el * 2;
});
console.log(double); // [2, 4, 6]
```
A closure is a function that retains access to the variables of its outer (enclosing) scope even after the outer function has finished executing. It is as if the function keeps a memory of its original environment.
```js
function outerFunction() {
  const outerVar = 'I am outside of innerFunction';

  function innerFunction() {
    console.log(outerVar); // `innerFunction` can still access `outerVar`.
  }

  return innerFunction;
}

const inner = outerFunction(); // `inner` now holds a reference to `innerFunction`.
inner(); // "I am outside of innerFunction"

// Even though `outerFunction` has completed execution, `inner` still has
// access to variables defined inside `outerFunction`.
```
Closures are useful for data privacy and encapsulation, for creating function factories with partially applied configuration, and for callbacks or event handlers that need to remember state between calls.
Pros:
Avoid callback hell: Promises simplify nested callbacks.
```js
// Callback hell
getData1((data) => {
  getData2(data, (data) => {
    getData3(data, (result) => {
      console.log(result);
    });
  });
});
```
Sequential code: Easier to write and read using `.then()` chaining.
Parallel code: Simplifies managing multiple promises with `Promise.all()`.
```js
Promise.all([getData1(), getData2(), getData3()])
  .then((results) => {
    console.log(results);
  })
  .catch((error) => {
    console.error('Error:', error);
  });
```
Cons: promise chains can still grow complex, a forgotten `.catch()` can silently swallow errors, and plain promises cannot be cancelled once started.
How do you use `AbortController` in JavaScript?

`AbortController` allows you to cancel ongoing asynchronous operations like fetch requests. To use it:

1. Create a controller:

```js
const controller = new AbortController();
```

2. Pass the signal: Add the controller's `signal` to the fetch request options.
3. Abort: Call `controller.abort()` to cancel the request.

Here is an example of how to use `AbortController` with the `fetch()` API:
```js
const controller = new AbortController();
const signal = controller.signal;

fetch('YOUR API', { signal })
  .then((response) => {
    // Handle response
  })
  .catch((error) => {
    if (error.name === 'AbortError') {
      console.log('Request aborted');
    } else {
      console.error('Error:', error);
    }
  });

// Call abort() to abort the request
controller.abort();
```
A common use case is cancelling an in-flight `fetch()` request in response to a user action.

In JavaScript it's very easy to extend a built-in/native object: you can simply add properties and functions to its `prototype`.
```js
String.prototype.reverseString = function () {
  return this.split('').reverse().join('');
};
console.log('hello world'.reverseString()); // Outputs 'dlrow olleh'

// Instead of extending the built-in object, write a pure utility function to do it.
function reverseString(str) {
  return str.split('').reverse().join('');
}
console.log(reverseString('hello world')); // Outputs 'dlrow olleh'
```
While this may seem like a good idea at first, it is dangerous in practice. Imagine your code uses a few libraries that both extend `Array.prototype` by adding the same `contains` method; the implementations will overwrite each other, and your code will behave unpredictably if the two methods do not work the same way.
Extending built-in objects can lead to issues such as conflicts between libraries, collisions with methods added to the language later, and confusion for other developers reading the code.
In the browser, the global scope refers to the top-level context where variables, functions, and objects are accessible throughout the code. This scope is represented by the window object. Variables and functions declared outside of any function or block (excluding modules) are added to the window object, making them globally accessible.
For example:
```js
// This runs in the global scope, not within a module.
// Note: only `var` and function declarations become properties of `window`;
// `let`/`const` declarations do not.
var globalVar = 'Hello, world!';

function greet() {
  console.log('Greetings from the global scope!');
}

console.log(window.globalVar); // 'Hello, world!'
window.greet(); // 'Greetings from the global scope!'
```
In this example, globalVar and greet are attached to the window object and can be accessed from anywhere in the global scope.
Generally, it's advisable to avoid polluting the global namespace unless necessary, because globals risk naming collisions between scripts, can be accidentally overwritten by third-party code, and make the codebase harder to reason about and test.
In JavaScript, modules are reusable pieces of code that encapsulate functionality, making it easier to manage, maintain, and structure your applications. Modules allow you to break down your code into smaller, manageable parts, each with its own scope.
CommonJS is an older module system that was initially designed for server-side JavaScript development with Node.js. It uses the `require()` function to load modules and the `module.exports` or `exports` object to define a module's exports.
```js
// my-module.js
const value = 42;
module.exports = { value };

// main.js
const myModule = require('./my-module.js');
console.log(myModule.value); // 42
```
ES Modules (ECMAScript Modules) are the standardized module system introduced in ES6 (ECMAScript 2015). They use the `import` and `export` statements to handle module dependencies.
```js
// my-module.js
export const value = 42;

// main.js
import { value } from './my-module.js';
console.log(value); // 42
```
Immutability is a core principle in functional programming but it has lots to offer to object-oriented programs as well.
Mutable objects in JavaScript allow for modifications to their properties and values after creation. This behavior is default for most objects.
```js
let mutableObject = {
  name: 'John',
  age: 30,
};

// Modify the object
mutableObject.name = 'Jane';

console.log(mutableObject); // Output: { name: 'Jane', age: 30 }
```
Mutable objects like `mutableObject` above can have their properties changed directly, making them flexible for dynamic updates.
In contrast, immutable objects cannot be modified once created. Any attempt to change their content results in the creation of a new object with the updated values.
```js
const immutableObject = Object.freeze({
  name: 'John',
  age: 30,
});

// Attempting to modify the object
immutableObject.name = 'Jane'; // This change won't affect the object

console.log(immutableObject); // Output: { name: 'John', age: 30 }
```
Here, `immutableObject` remains unchanged after creation due to `Object.freeze()`, which prevents modifications to its properties.
The primary difference lies in modifiability. Mutable objects allow changes to their properties directly, while immutable objects ensure the integrity of their initial state by disallowing direct modifications.
`const` vs immutable objects

A common confusion is that declaring a variable using `const` makes the value immutable, which is not true at all. Using `const` prevents reassignment of the variable but doesn't make non-primitive values immutable.
```js
// Using const
const person = { name: 'John' };
person = { name: 'Jane' }; // Error: Assignment to constant variable
person.name = 'Jane'; // Allowed, person.name is now 'Jane'

// Using Object.freeze() to create an immutable object
const frozenPerson = Object.freeze({ name: 'John' });
frozenPerson.name = 'Jane'; // Fails silently (no error, but no change)
frozenPerson = { name: 'Jane' }; // Error: Assignment to constant variable
```
In the first example with `const`, reassigning a new object to `person` is not allowed, but modifying the `name` property is permitted. In the second example, `Object.freeze()` makes the `frozenPerson` object immutable, preventing any changes to its properties.
Static class members in JavaScript, denoted by the `static` keyword, are accessed directly on the class itself, not on instances. They serve multiple purposes, such as shared configuration, utility functions, and singletons:
```js
class Config {
  static API_KEY = 'your-api-key';
  static FEATURE_FLAG = true;
}

console.log(Config.API_KEY); // Output: 'your-api-key'
console.log(Config.FEATURE_FLAG); // Output: true
```
```js
class Arithmetic {
  static add(a, b) {
    return a + b;
  }
  static subtract(a, b) {
    return a - b;
  }
}

console.log(Arithmetic.add(2, 3)); // Output: 5
console.log(Arithmetic.subtract(5, 2)); // Output: 3
```
```js
class Singleton {
  static instance;

  static getInstance() {
    if (!this.instance) {
      this.instance = new Singleton();
    }
    return this.instance;
  }
}

const singleton1 = Singleton.getInstance();
const singleton2 = Singleton.getInstance();
console.log(singleton1 === singleton2); // Output: true
```
What are `Symbol`s used for in JavaScript?

Symbols in JavaScript, introduced in ES6, are unique and immutable identifiers primarily used as object property keys to avoid name collisions. They can be created using the `Symbol()` function, and each Symbol value is unique even if descriptions are identical. Symbol properties are non-enumerable, making them suitable for private object state.
```js
const sym1 = Symbol();
const sym2 = Symbol('uniqueKey');

console.log(typeof sym1); // "symbol"
console.log(sym1 === sym2); // false, each symbol is unique

const obj = {};
const sym = Symbol('uniqueKey');
obj[sym] = 'value';
console.log(obj[sym]); // "value"
```
Key characteristics include uniqueness and non-enumerability: symbol-keyed properties do not show up in `for...in` loops or `Object.keys()`.

Global symbols can be created using `Symbol.for('key')`, allowing reuse across different parts of a codebase:
```js
const globalSym1 = Symbol.for('globalKey');
const globalSym2 = Symbol.for('globalKey');
console.log(globalSym1 === globalSym2); // true

const key = Symbol.keyFor(globalSym1);
console.log(key); // "globalKey"
```
There are some well-known symbols in JavaScript, like:

- `Symbol.iterator`: Defines the default iterator for an object.
- `Symbol.toStringTag`: Used to create a string description for an object.
- `Symbol.hasInstance`: Used to determine if an object is an instance of a constructor.

JavaScript object getters and setters are essential for controlling access to object properties, offering customization when getting or setting values.
```js
const user = {
  _firstName: 'John',
  _lastName: 'Doe',
  get fullName() {
    return `${this._firstName} ${this._lastName}`;
  },
  set fullName(value) {
    const parts = value.split(' ');
    this._firstName = parts[0];
    this._lastName = parts[1];
  },
};

console.log(user.fullName); // Output: 'John Doe'
user.fullName = 'Jane Smith';
console.log(user.fullName); // Output: 'Jane Smith'
```
The getter (`fullName`) computes a value from internal properties (`_firstName` and `_lastName`), while the setter (`fullName`) updates those properties based on the assigned value (`'Jane Smith'`). These mechanisms enhance data encapsulation and allow for custom data handling in JavaScript objects.
Tools and techniques for debugging JavaScript code vary depending on the context:
- `debugger` statement: Inserting `debugger;` in code triggers a breakpoint when DevTools are open, pausing execution for inspection.
- `console.log()` debugging: Using `console.log()` statements to output variable values and debug messages.

Currying in JavaScript is a functional programming technique where a function with multiple arguments is transformed into a sequence of nested functions, each taking a single argument. This allows for partial application of the function's arguments, meaning you can fix some arguments ahead of time and then apply the remaining arguments later.
Here's a simple example of a curry function and why this syntax offers an advantage:
```js
// Example of a curry function
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn(...args);
    } else {
      return function (...moreArgs) {
        return curried(...args, ...moreArgs);
      };
    }
  };
}

// Example function to be curried
function multiply(a, b, c) {
  return a * b * c;
}

// Currying the multiply function
const curriedMultiply = curry(multiply);

// Applying curried functions
const step1 = curriedMultiply(2); // partially apply 2
const step2 = step1(3); // partially apply 3
const result = step2(4); // apply the final argument

console.log(result); // Output: 24
```
Advantages of curried syntax include partial application (pre-configuring a function with some of its arguments), easier function composition, and more reusable, modular code.
Currying enhances the functional programming paradigm in JavaScript by enabling concise, composable, and reusable functions, promoting cleaner and more modular code.
What is the difference between the window `load` event and the document `DOMContentLoaded` event?

The `DOMContentLoaded` event is triggered once the initial HTML document has been fully loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading.
In contrast, the window's `load` event is fired only after the DOM and all dependent resources, such as stylesheets, images, and subframes, have completely loaded.
JSONP (JSON with Padding) is a technique used to circumvent cross-domain restrictions in web browsers, as standard Ajax requests to different domains are generally blocked.
Instead of using Ajax, JSONP makes a request to a cross-origin domain by dynamically creating a `<script>` tag with a callback query parameter, such as `https://example.com?callback=handleResponse`. The server wraps the data in a call to a function named `handleResponse` and returns it.
```html
<script>
  function handleResponse(data) {
    console.log(`User: ${data.username}`);
  }
</script>
<script src="https://example.com?callback=handleResponse"></script>
```
For this to work, the client must define the `handleResponse` function in the global scope, which will be invoked when the response is received.
JSONP poses security risks because it executes JavaScript from external sources. Therefore, it's crucial to trust the JSONP provider.
Nowadays, CORS is the preferred method, making JSONP largely obsolete.
The same-origin policy restricts JavaScript from making requests to different domains. An origin is specified by the combination of the URI scheme, hostname, and port number. This policy is crucial for security, as it prevents a malicious script on one page from accessing sensitive data on another page's Document Object Model (DOM). This ensures that data remains secure within its designated origin, blocking unauthorized cross-origin interactions.
Single Page Apps (SPAs) are highly interactive web applications that load a single HTML page and dynamically update content as the user interacts with the app. Unlike traditional server-side rendering, SPAs use client-side rendering, fetching new data via AJAX without full-page refreshes. This approach makes the app more responsive and reduces the number of HTTP requests.
Pros: a fluid, responsive user experience without full-page reloads, less data transferred after the initial load, and a clear separation between the client and the API backend.

Cons: a heavier initial payload, more complex client-side state management, and weaker out-of-the-box SEO because content is rendered by JavaScript.

To enhance SEO for SPAs, consider server-side rendering or pre-rendering, clean URLs via the History API, and proper meta tags and sitemaps.
In the past, developers often used Backbone for models, promoting an OOP approach by creating Backbone models and attaching methods to them.
While the module pattern remains useful, modern development often favors React/Redux, which employs a single-directional data flow based on the Flux architecture. Here, app data models are typically represented using plain objects, with utility pure functions to manipulate these objects. State changes are handled using actions and reducers, following Redux principles.
Avoid classical inheritance when possible. If you must use it, adhere to best practices and guidelines.
Possess working knowledge of promises: a promise can be pending, fulfilled, or rejected, and consumers attach callbacks to handle the eventual outcome.
Common promise libraries/polyfills include `$.deferred`, Q, and Bluebird. However, with ES2015 providing native support, polyfills are typically unnecessary.
Attributes: Defined in HTML tags, they provide initial information for the browser (like "Hello" in `<input type="text" value="Hello">`).
Properties: Belong to the DOM (JavaScript's view of the page), allowing you to access and change element info after the page loads (like updating the text field value).
You made it till the end! I hope you found these JavaScript questions helpful in preparing for your next interview. As an experienced developer, mastering both foundational and advanced concepts is key to showcasing your expertise and acing your next interview. But the most important thing to remember is that it's okay to not know everything - it's all about being willing to learn and improve.
As a JavaScript developer with 2 years of experience, you've already demonstrated your skills in building robust and scalable applications. However, the interview process can still be daunting, especially when faced with tricky technical questions. To help you prepare and showcase your expertise, we've curated a list of 30 JavaScript interview questions that are tailored to your level of experience. These questions cover advanced topics such as performance optimization, design patterns, and more, and are designed to help you demonstrate your skills and confidence in your next interviews.
Caching involves storing copies of files or data temporarily to speed up access times. It enhances performance by minimizing the frequency of fetching data from its original source. In web development, caching techniques include utilizing browser caches, service workers, and HTTP headers such as `Cache-Control` to effectively implement this optimization.
Lazy loading is a design approach that defers the loading of resources until they are required. This can notably enhance performance by decreasing initial load times and conserving bandwidth. For instance, in web development, images can be lazily loaded, ensuring they are fetched only when they enter the viewport. This is facilitated using techniques like the HTML `loading="lazy"` attribute or through JavaScript libraries designed for this purpose.
<img src="image.jpg" loading="lazy" alt="Lazy loaded image" />
Design patterns offer reusable solutions to typical software design challenges, serving as a blueprint for solving problems across various contexts. They are beneficial as they guide developers in sidestepping common issues, enhancing code clarity, and simplifying the maintenance and scalability of applications.
The Prototype pattern is a creational pattern used to create new objects by copying an existing object, known as the prototype. This pattern is advantageous when creating a new object is more resource-intensive than cloning an existing one. In JavaScript, you can implement this pattern using methods like `Object.create()` or by utilizing the `prototype` property of a constructor function.
```js
const prototypeObject = {
  greet() {
    console.log('Hello, world!');
  },
};

const newObject = Object.create(prototypeObject);
newObject.greet(); // Outputs: Hello, world!
```
This pattern allows objects to inherit properties and methods from a prototype, promoting code reuse and maintaining a clear structure in object-oriented programming.
The Singleton pattern ensures that a class has only one instance and provides a global access point to that instance. It is beneficial when you need precisely one object to manage tasks or resources system-wide. In JavaScript, you can implement the Singleton pattern using closures or ES6 classes to ensure there is only one instance of a class.
```js
class Singleton {
  constructor() {
    if (!Singleton.instance) {
      Singleton.instance = this;
    }
    return Singleton.instance;
  }
}

const instance1 = new Singleton();
const instance2 = new Singleton();
console.log(instance1 === instance2); // true
```
This pattern is useful in scenarios like managing configurations, logging, and resource sharing across an application, ensuring consistency and preventing multiple instances from being created unnecessarily.
The Factory pattern in software design enables object creation without specifying their exact class upfront. It encapsulates complex instantiation logic and is ideal for situations where object types are determined dynamically at runtime. In JavaScript, this pattern can be implemented using a factory function to create various objects based on conditions:
```js
function createAnimal(type) {
  if (type === 'dog') {
    return { sound: 'woof' };
  } else if (type === 'cat') {
    return { sound: 'meow' };
  }
}

const dog = createAnimal('dog');
const cat = createAnimal('cat');
```
This approach promotes code flexibility and modularity by centralizing object creation logic.
The Observer pattern is a design pattern where an object, called the subject, maintains a list of its dependents, known as observers, and notifies them of any state changes. This pattern facilitates loose coupling between objects, making it useful for implementing event-driven architectures, real-time updates in user interfaces, and data synchronization across different parts of an application. It enables components to react dynamically to changes without explicitly knowing each other, promoting flexibility and maintainability in software design.
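A minimal sketch of the pattern (the class and method names are illustrative, not from any specific library):

```js
class Subject {
  constructor() {
    this.observers = [];
  }
  subscribe(observer) {
    this.observers.push(observer);
  }
  unsubscribe(observer) {
    this.observers = this.observers.filter((o) => o !== observer);
  }
  notify(data) {
    // Push the state change to every registered observer
    this.observers.forEach((observer) => observer.update(data));
  }
}

const logger = {
  update(data) {
    console.log('Received update:', data);
  },
};

const subject = new Subject();
subject.subscribe(logger);
subject.notify({ price: 42 }); // Received update: { price: 42 }
```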
The Decorator pattern is a structural design pattern that allows behavior to be added to objects without affecting other instances of the same class. It wraps objects with additional functionality, extending their capabilities. For example:
```js
class Car {
  drive() {
    return 'Driving';
  }
}

class CarDecorator {
  constructor(car) {
    this.car = car;
  }
  drive() {
    return this.car.drive();
  }
}

class GPSDecorator extends CarDecorator {
  drive() {
    return `${super.drive()} with GPS`;
  }
}

const myCar = new Car();
const myCarWithGPS = new GPSDecorator(myCar);
console.log(myCarWithGPS.drive()); // Outputs: "Driving with GPS"
```
Here, `CarDecorator` and `GPSDecorator` dynamically add features like GPS to a basic `Car` object, demonstrating how decorators can extend object functionality.
The Strategy pattern is a behavioral design pattern that allows you to encapsulate different algorithms into separate classes that are interchangeable. It enables the selection of algorithms at runtime without modifying client code. Here’s a concise example:
```js
class Context {
  constructor(strategy) {
    this.strategy = strategy;
  }
  execute(data) {
    return this.strategy.algorithm(data);
  }
}

class ConcreteStrategyA {
  algorithm(data) {
    // Implementation of algorithm A
    return data.sort(); // Example: sorting algorithm
  }
}

class ConcreteStrategyB {
  algorithm(data) {
    // Implementation of algorithm B
    return data.reverse(); // Example: reverse algorithm
  }
}

// Usage
const context = new Context(new ConcreteStrategyA());
const data = [3, 1, 2];
console.log(context.execute(data)); // Outputs: [1, 2, 3]

context.strategy = new ConcreteStrategyB();
console.log(context.execute(data)); // Outputs: [3, 2, 1]
```
In this pattern, `Context` holds the selected `strategy` object, which performs its specific `algorithm` on the `data`. This approach allows flexible algorithm switching and enhances code maintainability by separating algorithms from client code.
The Command pattern is a behavioral design pattern that turns a request into a stand-alone object containing all information about the request. This transformation allows for parameterization of methods with different requests, queuing of requests, and logging of requests. It also supports undoable operations. In JavaScript, it can be implemented by creating command objects with `execute` and `undo` methods.
```js
class Command {
  execute() {}
  undo() {}
}

class LightOnCommand extends Command {
  constructor(light) {
    super();
    this.light = light;
  }
  execute() {
    this.light.on();
  }
  undo() {
    this.light.off();
  }
}

class Light {
  on() {
    console.log('Light is on');
  }
  off() {
    console.log('Light is off');
  }
}

const light = new Light();
const lightOnCommand = new LightOnCommand(light);

lightOnCommand.execute(); // Light is on
lightOnCommand.undo(); // Light is off
```
The Module pattern in JavaScript is a design pattern used to create self-contained modules of code. It helps with encapsulation by allowing you to define private and public members within a module. Private members are not accessible from outside the module, while public members are exposed through a returned object. This pattern helps in organizing code, avoiding global namespace pollution, and maintaining a clean separation of concerns.
```js
var myModule = (function () {
  var privateVar = 'I am private';

  function privateMethod() {
    console.log(privateVar);
  }

  return {
    publicMethod: function () {
      privateMethod();
    },
  };
})();

myModule.publicMethod(); // Logs: I am private
```
To avoid issues related to hoisting in JavaScript, use `let` or `const` to declare variables instead of `var`. Unlike `var`, `let` and `const` are block-scoped, meaning they are only accessible within the block they are defined in, and they are not initialized before their declaration is reached. Additionally, ensure functions are declared before they are called to prevent any unexpected behavior due to function hoisting.
```js
// Use let or const
let x = 10;
const y = 20;

// Declare functions before calling them
function myFunction() {
  console.log('Hello, world!');
}
myFunction();
```
To share code between JavaScript files, you can use modules. In modern JavaScript, ES6 modules with `export` and `import` statements are commonly used. Here's how you can export a function from one file and import it into another:
Using ES6 Modules:
```js
// file1.js
export function greet() {
  console.log('Hello, world!');
}

// file2.js
import { greet } from './file1.js';
greet();
```
Alternatively, in Node.js, you can use `module.exports` and `require()`:
Using CommonJS Modules (Node.js):
```js
// file1.js
module.exports = function greet() {
  console.log('Hello, world!');
};

// file2.js
const greet = require('./file1.js');
greet();
```
To retrieve query string values from the current page's URL in JavaScript using `URLSearchParams`, you can follow these steps:
```js
// Assuming the URL is: http://example.com/page?key=value&foo=bar

// Create a URLSearchParams object from the current page's query string
const params = new URLSearchParams(window.location.search);

// Retrieve specific query parameter values
const keyValue = params.get('key'); // 'value'
const fooValue = params.get('foo'); // 'bar'

// Example usage
console.log(keyValue); // Outputs: 'value'
console.log(fooValue); // Outputs: 'bar'
```
This approach allows you to easily access and manipulate query string parameters directly from the browser's URL.
Handling errors in asynchronous operations can be done effectively with both `async/await` and Promises:
Using `async/await` with `try...catch`:

```js
async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    if (!response.ok) throw new Error('Failed to fetch data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error.message);
  }
}
```
Using Promises with the `.catch()` method:

```js
fetch('https://api.example.com/data')
  .then((response) => {
    if (!response.ok) throw new Error('Failed to fetch data');
    return response.json();
  })
  .then((data) => console.log(data))
  .catch((error) => console.error('Error fetching data:', error.message));
```
These methods ensure that errors, such as network failures or failed requests, are caught and handled appropriately, maintaining robust error management in your JavaScript applications.
You can manipulate CSS styles in JavaScript by directly accessing an element's `style` property for specific changes like background color or font size:
```js
// Changing background color
document.getElementById('myDiv').style.backgroundColor = 'blue';
```
You can also add, remove, or toggle CSS classes using the `classList` property:
```js
document.getElementById('myDiv').classList.add('newClass');
document.getElementById('myDiv').classList.remove('oldClass');
document.getElementById('myDiv').classList.toggle('toggleClass');
```
What are common pitfalls of working with the `this` keyword?

Using the `this` keyword can be tricky because its value depends on the function's invocation context. Common pitfalls include losing the `this` context when passing methods as callbacks, using `this` inside nested functions, and misunderstanding `this` in arrow functions. To address these issues, developers often use `.bind()`, arrow functions, or store the `this` context in a variable.
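For instance, a small sketch of the lost-`this` pitfall and two common fixes:

```js
const counter = {
  count: 0,
  increment() {
    this.count++;
  },
};

// Pitfall: `this` is lost when the method is passed around as a plain function
const inc = counter.increment;
// inc(); // In strict mode this throws: `this` is undefined

// Fix 1: bind the method to its object
const boundInc = counter.increment.bind(counter);
boundInc();

// Fix 2: wrap it in an arrow function so the call keeps the receiver
setTimeout(() => counter.increment(), 0);

console.log(counter.count); // 1 (2 after the timeout runs)
```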
The DOM, or Document Object Model, is a programming interface for web documents. It represents the page so that programs can change the document structure, style, and content. The DOM is structured as a tree of objects, where each node represents part of the document, such as elements, attributes, and text nodes.
AMD (Asynchronous Module Definition) and CommonJS are JavaScript module systems. AMD focuses on asynchronous loading, ideal for browsers, using `define()` and `require()`. CommonJS, geared towards server-side environments like Node.js, employs `module.exports` and `require()` for synchronous module loading.
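A brief AMD sketch (this assumes an AMD loader such as RequireJS is available; the module names are hypothetical):

```js
// math.js — an AMD module
define([], function () {
  return {
    add: function (a, b) {
      return a + b;
    },
  };
});

// app.js — loads the module asynchronously in the browser
require(['math'], function (math) {
  console.log(math.add(2, 3)); // 5
});
```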
In JavaScript, there are several ways to make API calls. The traditional way is `XMLHttpRequest`, which is more verbose. `fetch` is a modern approach that returns promises, making it easier to handle responses. Alternatively, Axios is a widely used third-party library that simplifies API calls and offers additional features.
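For comparison, a minimal sketch of the same GET request with `XMLHttpRequest` and with `fetch` (the URL is a placeholder):

```js
// XMLHttpRequest (verbose, callback-based)
const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/data');
xhr.onload = () => console.log(JSON.parse(xhr.responseText));
xhr.onerror = () => console.error('Request failed');
xhr.send();

// fetch (promise-based)
fetch('https://api.example.com/data')
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error('Request failed', error));
```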
For JavaScript testing, tools like Jest, Mocha, Jasmine, and Cypress are commonly used. Jest is praised for its simplicity and built-in functionalities. Mocha offers flexibility and can be integrated with various libraries. Jasmine is known for its straightforward setup and behavior-driven development (BDD) approach. Cypress excels in end-to-end testing, emphasizing real browser interactions.
What is the difference between `event.preventDefault()` and `event.stopPropagation()`?

`event.preventDefault()` prevents the default action of an event, like stopping a form submission, whereas `event.stopPropagation()` prevents the event from bubbling up to parent elements.
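For example (the element selectors are illustrative):

```js
const form = document.querySelector('form');
form.addEventListener('submit', (event) => {
  event.preventDefault(); // stop the browser from submitting and reloading the page
});

const child = document.querySelector('.child');
child.addEventListener('click', (event) => {
  event.stopPropagation(); // click handlers on parent elements will not receive this event
});
```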
What is the difference between `innerHTML` and `textContent`?

`innerHTML` returns or sets the HTML markup inside an element, allowing HTML tags to be parsed and rendered, whereas `textContent` retrieves or sets the text content inside an element, rendering HTML tags as plain text.
```js
// Example of innerHTML
element.innerHTML = '<strong>Bold Text</strong>'; // Renders as bold text

// Example of textContent
element.textContent = '<strong>Bold Text</strong>'; // Renders as plain text: <strong>Bold Text</strong>
```
What is the difference between the `window` object and the `document` object?

The `window` object represents the browser window, offering methods to control it (e.g., opening new windows or accessing browser history). The `document` object represents the web page's content within the window, providing methods to manipulate the DOM (e.g., selecting elements and modifying content).
What is the difference between `setTimeout()`, `setImmediate()`, and `process.nextTick()`?

`setTimeout()` schedules a callback to run after a minimum delay. `setImmediate()` schedules a callback to run after the current event loop cycle completes. `process.nextTick()` schedules a callback to run before the next event loop iteration begins.
setTimeout(() => console.log('setTimeout'), 0);
setImmediate(() => console.log('setImmediate'));
process.nextTick(() => console.log('nextTick'));
In this example, process.nextTick()
executes first, followed by either setTimeout()
or setImmediate()
depending on the environment.
window.history
API?The window.history
API allows you to manipulate the browser's session history. You can use history.pushState()
to add a new entry to the history stack, history.replaceState()
to modify the current entry, and history.back()
, history.forward()
, and history.go()
to navigate through the history.
// Add a new entry to the history
history.pushState({ page: 1 }, 'title 1', '?page=1');

// Replace the current history entry
history.replaceState({ page: 2 }, 'title 2', '?page=2');

// Navigate back, forward, or to a specific point in history
history.back(); // Go back one step
history.forward(); // Go forward one step
history.go(-2); // Go back two steps
Pros of Promises over Callbacks:

- Chaining with .then() avoids deeply nested callbacks, improving readability and maintainability.
- Promise.all() runs parallel asynchronous operations, handling multiple promises concisely.

Cons:

- Slightly more verbose for very simple, one-off callbacks.
- Not supported in very old browsers without a polyfill, and native promises cannot be cancelled once started.
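To illustrate the chaining and Promise.all() points, here is a small sketch using hypothetical URLs:

// Sequential steps read top-to-bottom instead of nesting callbacks
fetch('https://api.example.com/user')
  .then((response) => response.json())
  .then((user) => fetch(`https://api.example.com/orders/${user.id}`))
  .then((response) => response.json())
  .then((orders) => console.log(orders))
  .catch((error) => console.error(error)); // one handler covers any failed step

// Running independent operations in parallel
Promise.all([
  fetch('https://api.example.com/a').then((r) => r.json()),
  fetch('https://api.example.com/b').then((r) => r.json()),
]).then(([a, b]) => console.log(a, b));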
Metadata fields of a module often include the module's name, version, description, author, license, and dependencies. These fields are commonly found in a package.json file in JavaScript projects.
Example:
{
  "name": "my-module",
  "version": "1.0.0",
  "description": "A sample module",
  "author": "John Doe",
  "license": "MIT",
  "dependencies": {
    "express": "^4.17.1"
  }
}
These fields provide essential information about the module and its requirements.
In JavaScript, there are three main types of errors:

- Syntax errors: the code violates the language's grammar and cannot be parsed.
- Runtime errors (exceptions): thrown while the code executes, for example a ReferenceError when reading a variable that is undefined.
- Logical errors: the code runs without throwing but produces the wrong result.
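A quick illustrative sketch of each category:

// Syntax error: caught when the code is parsed, before it runs
// console.log('missing quote); // SyntaxError

// Runtime error: thrown while the code executes
try {
  undefinedFunction(); // ReferenceError: undefinedFunction is not defined
} catch (e) {
  console.error(e.name, e.message);
}

// Logical error: runs fine but produces the wrong result
function average(a, b) {
  return a + b / 2; // missing parentheses around (a + b)
}
console.log(average(2, 4)); // 4, expected 3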
.Error propagation in JavaScript refers to the process of passing errors up the call stack. When an error occurs in a function, it can be caught and handled with try...catch blocks. If not caught, the error moves up the call stack until it is either caught or causes the program to terminate. For example:
function a() {
  throw new Error('An error occurred');
}

function b() {
  a();
}

try {
  b();
} catch (e) {
  console.error(e.message); // Outputs: An error occurred
}
In this example, the error thrown in function a
propagates to function b
and is caught in the try...catch
block.
You've reached the end of our list of 30 JavaScript interview questions! We hope these questions have helped you identify areas for improvement and solidify your understanding of advanced JavaScript concepts. Remember, the key to acing an interview is not just about knowing the answers, but also about demonstrating your thought process, problem-solving skills, and ability to communicate complex ideas simply.
JavaScript is a fundamental skill for any aspiring web developer, and landing a job in this field can be a challenging task, especially for freshers. One of the most crucial steps in the interview process is the technical interview, where your JavaScript skills are put to the test. To help you prepare and boost your confidence, we've compiled a list of the top 50 basic JavaScript interview questions and answers that are commonly asked in interviews.
Hoisting describes the behavior of variable declarations in JavaScript. Declarations using var
are "moved" to the top of their scope during compilation. Only the declaration is hoisted, not the initialization.
Example with var
:
console.log(foo); // undefined
var foo = 1;
console.log(foo); // 1
Visualized as:
var foo;
console.log(foo); // undefined
foo = 1;
console.log(foo); // 1
Variables with let
, const
, and class
:
These are also hoisted but not initialized. Accessing them before declaration results in a ReferenceError
.
console.log(y); // ReferenceError: Cannot access 'y' before initialization
let y = 'local';

console.log(z); // ReferenceError: Cannot access 'z' before initialization
const z = 'local';

console.log(Foo); // ReferenceError: Cannot access 'Foo' before initialization
class Foo {
  constructor() {}
}
Function Expressions:
Only the declaration is hoisted.
console.log(bar); // undefined
bar(); // Uncaught TypeError: bar is not a function
var bar = function () {
  console.log('BARRRR');
};
Function Declarations:
Both declaration and definition are hoisted.
console.log(foo); // [Function: foo]
foo(); // 'FOOOOO'
function foo() {
  console.log('FOOOOO');
}
Import Statements:
Imports are hoisted, making them available throughout the module, with side effects occurring before other code runs.
foo.doSomething(); // Works normally.
import foo from './modules/foo';
let
, var
or const
?var
: Function-scoped or globally scoped.let
and const
: Block-scoped (only accessible within the nearest set of curly braces).Example:
function foo() {
  var bar = 1;
  let baz = 2;
  const qux = 3;

  console.log(bar); // 1
  console.log(baz); // 2
  console.log(qux); // 3
}

console.log(bar); // ReferenceError
console.log(baz); // ReferenceError
console.log(qux); // ReferenceError
if (true) {
  var bar = 1;
  let baz = 2;
  const qux = 3;
}

console.log(bar); // 1
console.log(baz); // ReferenceError
console.log(qux); // ReferenceError
var
and let
: Can be declared without an initial value.const
: Must be initialized at the time of declaration.Example:
var foo; // Ok
let bar; // Ok
const baz; // SyntaxError
var
: Allows redeclaration.let
and const
: Do not allow redeclaration.Example:
var foo = 1;
var foo = 2; // Ok

let baz = 3;
let baz = 4; // SyntaxError
var
and let
: Allow reassignment.const
: Does not allow reassignment.Example:
var foo = 1;
foo = 2; // Ok

let bar = 3;
bar = 4; // Ok

const baz = 5;
baz = 6; // TypeError
var
: Variables are hoisted and initialized to undefined
.let
and const
: Variables are hoisted but not initialized, causing a ReferenceError
if accessed before declaration.Example:
console.log(foo); // undefined
var foo = 'foo';

console.log(baz); // ReferenceError
let baz = 'baz';

console.log(bar); // ReferenceError
const bar = 'bar';
==
and ===
in JavaScript?

Loose Equality Operator (==)

Compares values after converting the operands to a common type (type coercion), which can produce surprising results.

Examples:

42 == '42'; // true
0 == false; // true
null == undefined; // true
[] == false; // true
'' == false; // true

Strict Equality Operator (===)

Compares both value and type, returning true
only when both match; no type coercion is performed.

Examples:

42 === '42'; // false
0 === false; // false
null === undefined; // false
[] === false; // false
'' === false; // false
Use ==
only when comparing against null
or undefined
for convenience.
var a = null;
console.log(a == null); // true
console.log(a == undefined); // true
Prefer ===
for all other comparisons to avoid pitfalls of type coercion and ensure both value and type are the same.
The event loop is crucial for handling asynchronous operations in JavaScript, allowing single-threaded execution without blocking.
1. Call Stack: Executes synchronous code one frame at a time; queued callbacks can only run once the stack is empty.

2. Web APIs/Node.js APIs: Handle asynchronous work (e.g. setTimeout(), HTTP requests) on separate threads provided by the environment.

3. Task Queue (Macrotask Queue): Holds callbacks from setTimeout(), setInterval(), and UI events.

4. Microtask Queue: Holds microtasks (e.g. Promise callbacks), which are processed before the next macrotask.

console.log('Start');

setTimeout(() => {
  console.log('Timeout 1');
}, 0);

Promise.resolve().then(() => {
  console.log('Promise 1');
});

setTimeout(() => {
  console.log('Timeout 2');
}, 0);

console.log('End');
Console Output:
Start
End
Promise 1
Timeout 1
Timeout 2
Explanation:
- Start and End are logged first (synchronous).
- Promise 1 is logged next (microtask).
- Timeout 1 and Timeout 2 are logged last (macrotasks).

Understanding the event loop helps write efficient, non-blocking JavaScript code.
Event delegation is an efficient way to handle events on multiple child elements by attaching a single event listener to a common parent element. This is useful for managing events on many similar elements, like list items.
Attach one listener to a common parent element and use event.target
to identify the actual element that triggered the event.

// HTML:
// <ul id="item-list">
//   <li>Item 1</li>
//   <li>Item 2</li>
//   <li>Item 3</li>
// </ul>

const itemList = document.getElementById('item-list');

itemList.addEventListener('click', (event) => {
  if (event.target.tagName === 'LI') {
    console.log(`Clicked on ${event.target.textContent}`);
  }
});
A single click listener on <ul>
handles clicks on any <li>
due to event bubbling.
Dynamic Content:
const buttonContainer = document.getElementById('button-container');const addButton = document.getElementById('add-button');buttonContainer.addEventListener('click', (event) => {if (event.target.tagName === 'BUTTON') {console.log(`Clicked on ${event.target.textContent}`);}});addButton.addEventListener('click', () => {const newButton = document.createElement('button');newButton.textContent = `Button ${buttonContainer.children.length + 1}`;buttonContainer.appendChild(newButton);});
Simplifying Code:
const userForm = document.getElementById('user-form');userForm.addEventListener('input', (event) => {const { name, value } = event.target;console.log(`Changed ${name}: ${value}`);});
this
works in JavaScriptThe this
keyword in JavaScript can be quite confusing as its value depends on how a function is called. Here are the main rules that determine the value of this
:
new
KeywordCreates a new object and sets this
to that object.
function Person(name) {
  this.name = name;
}

const person = new Person('Alice');
console.log(person.name); // 'Alice'
apply
, call
, or bind
Explicitly sets this
to a specified object.
function greet() {
  console.log(this.name);
}

const person = { name: 'Alice' };
greet.call(person); // 'Alice'
this
is bound to the object the method is called on.
const obj = {name: 'Alice',greet: function () {console.log(this.name);},};obj.greet(); // 'Alice'
In non-strict mode, defaults to the global object (window
in browsers); in strict mode, defaults to undefined
.
function greet() {console.log(this); // global object or undefined}greet();
Inherit this
from their lexical enclosing scope.
const obj = {name: 'Alice',greet: () => {console.log(this.name); // `this` refers to the enclosing scope},};obj.greet(); // undefined
this
ES2015 introduced arrow functions which capture this
from their lexical scope. This can simplify code but requires caution when integrating with libraries expecting traditional function context.
Example:
function Timer() {this.seconds = 0;setInterval(() => {this.seconds++; // `this` refers to the Timer instanceconsole.log(this.seconds);}, 1000);}const timer = new Timer();
sessionStorage
and localStorage
.Cookies, localStorage, and sessionStorage are key client-side storage mechanisms in web applications, each serving distinct purposes:
Purpose: Stores small data pieces sent to the server with HTTP requests.
Capacity: Limited to around 4KB per domain.
Lifespan: Can have expiration dates; session cookies are cleared when the browser closes.
Access: Domain-specific; accessible across pages and subdomains.
Security: Supports HttpOnly and Secure flags to restrict JavaScript access and ensure HTTPS transmission.
Example Usage:
// Set a cookie with an expirydocument.cookie ='auth_token=abc123def; expires=Fri, 31 Dec 2024 23:59:59 GMT; path=/';// Read all cookies (no direct method for specific cookies)console.log(document.cookie);// Delete a cookiedocument.cookie ='auth_token=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
localStorage
Purpose: Stores data persistently on the client-side.
Capacity: Around 5MB per origin.
Lifespan: Data remains until explicitly cleared.
Access: Available across all tabs and windows within the same origin.
Security: All JavaScript on the page can access localStorage values.
Example Usage:
// Set an item in localStoragelocalStorage.setItem('key', 'value');// Get an item from localStorageconsole.log(localStorage.getItem('key'));// Remove an item from localStoragelocalStorage.removeItem('key');// Clear all data in localStoragelocalStorage.clear();
sessionStorage
Purpose: Stores session-specific data that persists until the browser or tab is closed.
Capacity: Similar to localStorage, around 5MB per origin.
Lifespan: Cleared when the tab or browser closes; reloading the page retains data.
Access: Limited to the current tab or window.
Security: All JavaScript on the page can access sessionStorage values.
Example Usage:
// Set an item in sessionStoragesessionStorage.setItem('key', 'value');// Get an item from sessionStorageconsole.log(sessionStorage.getItem('key'));// Remove an item from sessionStoragesessionStorage.removeItem('key');// Clear all data in sessionStoragesessionStorage.clear();
<script>
, <script async>
and <script defer>
<script>
Tag

The <script>
tag is used to include JavaScript in a web page. When used without async
or defer
attributes:

- HTML parsing pauses while the script is downloaded and executed, then resumes afterwards.
- Scripts run in the order they appear in the document.

Example:
<!doctype html><html><head><title>Regular Script</title></head><body><h1>Regular Script Example</h1><p>This content appears before the script executes.</p><script src="regular.js"></script><p>This content appears after the script executes.</p></body></html>
<script async>
Tag

- The script is downloaded in parallel with HTML parsing but executed as soon as it finishes downloading, which can interrupt parsing.
- Execution order is not guaranteed when there are multiple async scripts.

Example:
<!doctype html><html><head><title>Async Script</title></head><body><h1>Async Script Example</h1><p>This content appears before the async script executes.</p><script async src="async.js"></script><p>This content may appear before or after the async script executes.</p></body></html>
<script defer>
Tag:

- Downloaded in parallel with HTML parsing, but executed only after parsing completes, in document order, before DOMContentLoaded.

Example:
<!doctype html><html><head><title>Deferred Script</title></head><body><h1>Deferred Script Example</h1><p>This content appears before the deferred script executes.</p><script defer src="deferred.js"></script><p>This content appears before the deferred script executes.</p></body></html>
null
, undefined
or undeclared?

Undeclared: Assigning a value to a variable that has never been declared with var, let, or const implicitly creates it as a global (in non-strict mode), which can lead to bugs. Reading an undeclared variable throws a ReferenceError, so a try/catch
block can be used to detect one.
undefined
: A declared variable without an assigned value is undefined
. Use ===
or typeof
to check for undefined
. Note that ==
will also return true
for null
.
null
: A variable explicitly assigned null
represents no value. Use ===
to check for null
. Don't use ==
as it will also return true
for undefined
.
Best Practices:
null
to variables if you don't intend to use them yet..call
and .apply
in JavaScript?.call
and .apply
are used to invoke functions, setting this
within the function. The difference lies in how they handle arguments:

- .call() accepts arguments individually, as a comma-separated list.
- .apply() accepts arguments as a single array.

Memory Aid: "C" for call and comma-separated arguments, "A" for apply and an array of arguments.
Example:
function add(a, b) {
  return a + b;
}

console.log(add.call(null, 1, 2)); // 3
console.log(add.apply(null, [1, 2])); // 3

// ES6 with spread operator
console.log(add.call(null, ...[1, 2])); // 3
Function.prototype.bind
Function.prototype.bind
creates a new function with a specific this
context and optionally preset arguments. It's useful for maintaining the correct this
value in methods passed to other functions.
Example:
const john = {age: 42,getAge: function () {return this.age;},};console.log(john.getAge()); // 42const unboundGetAge = john.getAge;console.log(unboundGetAge()); // undefinedconst boundGetAge = john.getAge.bind(john);console.log(boundGetAge()); // 42const mary = { age: 21 };const boundGetAgeMary = john.getAge.bind(mary);console.log(boundGetAgeMary()); // 21
Its main purposes are:
this
to preserve context: The primary function of bind
is to attach the this
value of a function to a specific object. When you use func.bind(thisArg)
, it generates a new function with the same code as func
, but with this
permanently set to thisArg
.bind
also enables you to pre-set arguments for the new function. Any arguments provided to bind
after thisArg
will be prepended to the argument list when the new function is invoked.bind
allows you to borrow methods from one object and use them on another object, even if the methods were not initially designed for that object.

The advantage of using the arrow syntax for a method in a constructor is that it automatically binds the this
value to the constructor's this
context. This means that when the method is called, it will always refer to the constructor's this
context, rather than the global scope or some other unexpected context.
In traditional function expressions, the this
value is determined by how the function is called, which can lead to unexpected behavior if not properly bound. By using an arrow function, you can ensure that the this
value is always bound to the constructor's this
context, making your code more predictable and easier to maintain.
For example, in the code snippet:
const Person = function (name) {this.name = name;this.sayName1 = function () {console.log(this.name);};this.sayName2 = () => {console.log(this.name);};};const john = new Person('John');const dave = new Person('Dave');john.sayName1(); // Johnjohn.sayName2(); // John// `this` can change for regular functions but not for arrow functionsjohn.sayName1.call(dave); // Davejohn.sayName2.call(dave); // John
The sayName1
method uses a traditional function expression, which means its this
value is determined by how it's called. If you call john.sayName1.call(dave)
, the this
value will be dave
, and the method will log Dave
to the console.
On the other hand, the sayName2
method uses an arrow function, which means its this
value is automatically bound to the constructor's this
context. If you call john.sayName2.call(dave)
, the this
value will still be john
, and the method will log John
to the console.
This can be particularly helpful in React class components, where you often need to pass methods as props to child components. By using arrow functions, you can ensure that the methods always refer to the correct this
context, without having to manually bind this
in the constructor.
Prototypical inheritance allows objects to inherit properties and methods from other objects using a prototype-based model.
Object.getPrototypeOf()
and Object.setPrototypeOf()
.function Person(name, age) {this.name = name;this.age = age;}Person.prototype.sayHello = function () {console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`,);};let john = new Person('John', 30);john.sayHello(); // "Hello, my name is John and I am 30 years old."
JavaScript looks for properties/methods on the object, then its prototype, and so on up the chain until null
.
Functions used with new
to create objects, setting their prototype to the constructor's prototype
.
function Animal(name) {this.name = name;}Animal.prototype.sayName = function () {console.log(`My name is ${this.name}`);};function Dog(name, breed) {Animal.call(this, name);this.breed = breed;}Dog.prototype = Object.create(Animal.prototype);Dog.prototype.bark = function () {console.log('Woof!');};let fido = new Dog('Fido', 'Labrador');fido.bark(); // "Woof!"fido.sayName(); // "My name is Fido"
Object.create()
Creates a new object with a specified prototype.
let proto = {greet: function () {console.log(`Hello, my name is ${this.name}`);},};let person = Object.create(proto);person.name = 'John';person.greet(); // "Hello, my name is John"
function Person(){}
, const person = Person()
, and const person = new Person()
?function Person(){}
is a standard function declaration in JavaScript. When written in PascalCase, it follows the convention for functions intended to be used as constructors.
const person = Person()
simply calls the function and executes its code. If no return value is specified, person
will be undefined
. This is not a constructor call and does not create a new object.
const person = new Person()
creates a new object using the Person
constructor function. The new
keyword creates a new object and sets its prototype to Person.prototype
. The this
keyword inside the constructor function refers to the new object being created.
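A brief sketch of the difference in behavior:

function Person(name) {
  this.name = name;
}

const withoutNew = Person('Alice'); // called as a plain function
console.log(withoutNew); // undefined (the function has no explicit return value)
// In sloppy mode `this` was the global object here, so `name` leaked onto it;
// in strict mode this call would throw instead.

const withNew = new Person('Bob'); // called as a constructor
console.log(withNew.name); // 'Bob'
console.log(withNew instanceof Person); // true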
foo
between function foo() {}
and var foo = function() {}
Syntax: function foo() {}
Description: Defines a named function that can be called throughout the enclosing scope.
Example:
function foo() {console.log('FOOOOO');}
Syntax: var foo = function() {}
Description: Defines a function and assigns it to a variable, often used in specific contexts.
Example:
var foo = function () {console.log('FOOOOO');};
Hoisting:
Function Declarations: The entire function is hoisted; can be called before its definition.
foo(); // 'FOOOOO'function foo() {console.log('FOOOOO');}
Function Expressions: Only the variable is hoisted, not the function body; calling it before definition results in an error.
foo(); // Uncaught TypeError: foo is not a functionvar foo = function () {console.log('FOOOOO');};
Name Scope:
Function Expressions: These can be named internally, but the name is only accessible within the function.
const myFunc = function namedFunc() {
  console.log(namedFunc); // Works inside the function
};

console.log(namedFunc); // ReferenceError: namedFunc is not defined
Function Declarations: Best when you want the function hoisted and available throughout its scope.

Function Expressions: Useful for callbacks, IIFEs, and defining functions conditionally or inline.
Here are the various ways to create objects in JavaScript:
Object Literals ({}
): Simplest way to create objects using key-value pairs within curly braces.
const person = {firstName: 'John',lastName: 'Doe',};
Object() Constructor: Using the new
keyword with the built-in Object
constructor to create an object.
const person = new Object();person.firstName = 'John';person.lastName = 'Doe';
Object.create() Method: Creating a new object using an existing object as a prototype.
const personPrototype = {greet() {console.log(`Hello, my name is ${this.name} and I'm ${this.age} years old.`,);},};const person = Object.create(personPrototype);person.name = 'John';person.age = 30;person.greet(); // Output: Hello, my name is John and I'm 30 years old.
ES2015 Classes: Defining a blueprint for objects using classes, similar to other programming languages.
class Person {constructor(name, age) {this.name = name;this.age = age;}greet = function () {console.log(`Hello, my name is ${this.name} and I'm ${this.age} years old.`,);};}const person = new Person('John', 30);person.greet(); // Output: Hello, my name is John and I'm 30 years old.
Constructor Functions: Reusable blueprints for objects, using the new
keyword to create instances.
// Constructor functionfunction Person(name, age) {this.name = name;this.age = age;this.greet = function () {console.log(`Hello, my name is ${this.name} and I'm ${this.age} years old.`,);};}const person = new Person('John', 30);person.greet(); // Output: Hello, my name is John and I'm 30 years old.
Note: Constructor functions are less commonly used now that ES2015 classes are widely supported.
A higher-order function is a function that:
Takes another function as an argument: A function that accepts another function as a parameter.
function greet(name) {return `Hello, ${name}!`;}function greetName(greeter, name) {console.log(greeter(name));}greetName(greet, 'Alice'); // Output: Hello, Alice!
Returns a function as its result: A function that returns another function as its output.
function multiplier(factor) {return function (num) {return num * factor;};}const double = multiplier(2);console.log(double(5)); // Output: 10
In other words, a higher-order function is a function that operates on other functions, either by taking them as input or by producing them as output.
Uses function constructors and prototypes.
Example:
function Person(name, age) {this.name = name;this.age = age;}Person.prototype.greet = function () {console.log('Hello, my name is ' +this.name +' and I am ' +this.age +' years old.',);};var person1 = new Person('John', 30);person1.greet(); // Hello, my name is John and I am 30 years old.
Uses the class
syntax, making code more readable and adding features.
Example:
class Person {constructor(name, age) {this.name = name;this.age = age;}greet() {console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`,);}}const person1 = new Person('John', 30);person1.greet(); // Hello, my name is John and I am 30 years old.
- Syntax: ES2015 uses the class
keyword, making code more concise and easier to understand.
- Static methods: defined with the static
keyword in ES2015.
- Inheritance: ES5 relies on Object.create()
and manual prototype chain setting, whereas ES2015 uses the extends
keyword, which is simpler and more intuitive.
- Parent access: ES2015 provides the super
keyword to call the parent class's constructor and methods.

Event bubbling is a mechanism in the DOM (Document Object Model) where an event, such as a click, is first triggered on the target element and then propagates upward through the DOM tree to the root of the document.
Bubbling Phase:
Description: During the bubbling phase, the event starts at the target element and bubbles up through its ancestors in the DOM hierarchy. Event handlers attached to the target element and its ancestors can all potentially receive and respond to the event.
Example:
// HTML:// <div id="parent">// <button id="child">Click me!</button>// </div>const parent = document.getElementById('parent');const child = document.getElementById('child');parent.addEventListener('click', () => {console.log('Parent element clicked');});child.addEventListener('click', () => {console.log('Child element clicked');});
When you click the Click me!
button, both the child and parent event handlers will be triggered due to event bubbling.
Stopping Event Bubbling:
Method: Use stopPropagation()
to stop the event from bubbling up the DOM tree.
Example:
child.addEventListener('click', (event) => {console.log('Child element clicked');event.stopPropagation();});
Event capturing is a propagation mechanism in the DOM where an event, such as a click, is first triggered at the root of the document and then flows down through the DOM tree to the target element.
Event Propagation Phases:
Enabling Event Capturing:
{ capture: true }
as the third argument to addEventListener()
.Example:
// HTML:// <div id="parent">// <button id="child">Click me!</button>// </div>const parent = document.getElementById('parent');const child = document.getElementById('child');parent.addEventListener('click',() => {console.log('Parent element clicked (capturing)');},true, // Enable capturing phase);child.addEventListener('click', () => {console.log('Child element clicked');});
When you click the Click me!
button, the parent element's capturing handler will be triggered before the child element's handler.
Stopping Propagation:
Use stopPropagation()
to prevent the event from traveling further down the DOM tree during the capturing phase.
Example:
parent.addEventListener('click',(event) => {console.log('Parent element clicked (capturing)');event.stopPropagation(); // Stop event propagation},true,);child.addEventListener('click', () => {console.log('Child element clicked');});
In this example, only the parent event listener will be called when you click the "Click me!" button, as the event propagation is stopped at the parent element.
mouseenter
and mouseover
event in JavaScript and browsers?mouseenter
mouseover
Example:
const fs = require('fs');

const data = fs.readFileSync('large-file.txt', 'utf8');
console.log(data); // Blocks until file is read
console.log('End of the program');
Non-blocking (asynchronous) example:
console.log('Start of the program');

fetch('https://api.example.com/data')
  .then((response) => response.json())
  .then((data) => console.log(data)) // Non-blocking
  .catch((error) => console.error(error));

console.log('End of program');
AJAX is a set of web development techniques using various web technologies on the client side to create asynchronous web applications. Unlike traditional web applications where each user interaction triggers a full page reload, AJAX allows web applications to send data to and retrieve data from a server asynchronously without interfering with the display and behavior of the existing page. This enables dynamic updates to the web page without the need to reload it.
Key Points:
- Traditionally implemented using XMLHttpRequest
, but fetch()
is now preferred for modern web applications.

XMLHttpRequest API

Example:
let xhr = new XMLHttpRequest();xhr.onreadystatechange = function () {if (xhr.readyState === XMLHttpRequest.DONE) {if (xhr.status === 200) {console.log(xhr.responseText);} else {console.error('Request failed: ' + xhr.status);}}};xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);xhr.send();
This example creates an XMLHttpRequest
, sets up a callback function to handle state changes, opens a request to a URL, and sends the request.

fetch() API

Example:
fetch('https://jsonplaceholder.typicode.com/todos/1').then((response) => {if (!response.ok) {throw new Error('Network response was not ok');}return response.json();}).then((data) => console.log(data)).catch((error) => console.error('Fetch error:', error));
This example makes a request with fetch(), uses .then()
to parse JSON data, and manages errors with .catch()
.

fetch
fetch()
initiates an asynchronous request to fetch a resource from a URL.
Example:
fetch('https://api.example.com/data', {method: 'GET', // or 'POST', 'PUT', 'DELETE', etc.headers: {'Content-Type': 'application/json',},});
fetch()
returns a Promise that resolves to a Response
object representing the server's response.The Response
object offers methods to handle the body content, such as .json()
, .text()
, .blob()
.
Example:
fetch('https://api.example.com/data').then((response) => response.json()).then((data) => console.log(data)).catch((error) => console.error('Error:', error));
- fetch()
is asynchronous, allowing the browser to continue executing other tasks while waiting for the server response.
- Its promise callbacks (.then()
, .catch()
) are handled in the microtask queue as part of the event loop.
- The options object passed to fetch()
configures various request aspects, such as HTTP method, headers, body, credentials, and caching.
- Errors are handled with .catch()
, or try/catch
when using async/await
.

AJAX (Asynchronous JavaScript and XML) enables web pages to send and retrieve data asynchronously, allowing for dynamic updates without full page reloads.
XMLHttpRequest
and fetch()
?Both XMLHttpRequest (XHR)
and fetch()
enable asynchronous HTTP requests in JavaScript, but differ in syntax, handling, and features.
- Request headers: XMLHttpRequest sets headers with the setRequestHeader
method; fetch() takes a headers object in its options.
- Request body: XMLHttpRequest passes the body to the send
method; fetch() uses the body
property in the options parameter.
- Response handling: XMLHttpRequest uses responseType
to handle different formats; fetch() returns a promise and exposes .then
for accessing data.
- Error handling: XMLHttpRequest relies on the onerror
event; fetch() uses the .catch
method.
- Cancellation: XMLHttpRequest has an abort()
method; fetch() uses an AbortController
for request cancellation.
- Progress tracking: XMLHttpRequest supports the onprogress
event; fetch() has no comparable built-in progress event.

Choosing Between Them: fetch()
is generally preferred due to its cleaner syntax and promise-based handling, but XMLHttpRequest
may still be useful for specific cases like progress tracking.
JavaScript has various data types categorized into two groups: primitive and non-primitive (reference) types.
- Primitive types: String, Number, Boolean (true
or false
), undefined, null, Symbol, and BigInt.
- Non-primitive (reference) types: Object, including Array, Function, and Date.

Determining Data Types: JavaScript is dynamically typed, meaning variables can hold different data types over time. Use the typeof
operator to determine a variable's type.
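For example:

console.log(typeof 'hello'); // "string"
console.log(typeof 42); // "number"
console.log(typeof true); // "boolean"
console.log(typeof undefined); // "undefined"
console.log(typeof Symbol()); // "symbol"
console.log(typeof 10n); // "bigint"
console.log(typeof {}); // "object"
console.log(typeof []); // "object" (use Array.isArray([]) to detect arrays)
console.log(typeof null); // "object" (a long-standing quirk of the language)
console.log(typeof function () {}); // "function"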
Iterating over object properties and arrays is very common in JavaScript and we have various ways to achieve this. Here are some of the ways to do it:
for...in
StatementLoops over all enumerable properties of an object, including inherited ones.
for (const property in obj) {if (Object.hasOwn(obj, property)) {console.log(property);}}
Object.keys()
Returns an array of an object's own enumerable property names.
Object.keys(obj).forEach((property) => console.log(property));
Object.entries()
Returns an array of a given object's own enumerable string-keyed property [key, value] pairs.
Object.entries(obj).forEach(([key, value]) =>console.log(`${key}: ${value}`),);
Object.getOwnPropertyNames()
Returns an array of all properties (including non-enumerable ones) found directly upon a given object.
Object.getOwnPropertyNames(obj).forEach((property) =>console.log(property),);
for
LoopTraditional loop over array elements.
for (let i = 0; i < arr.length; i++) {console.log(arr[i]);}
Array.prototype.forEach()
Executes a provided function once for each array element.
arr.forEach((element, index) => console.log(element, index));
for...of
StatementLoops over iterable objects like arrays.
for (let element of arr) {console.log(element);}
Array.prototype.entries()
Provides both the index and value of each array element in a for...of
loop.
for (let [index, elem] of arr.entries()) {console.log(index, ': ', elem);}
Introduced in ES2015, the spread syntax (...
) is useful for copying and merging arrays and objects without modifying the originals. It's commonly used in functional programming, Redux, and RxJS.
Copying Arrays/Objects: Creates shallow copies.
const array = [1, 2, 3];const newArray = [...array]; // [1, 2, 3]const obj = { name: 'John', age: 30 };const newObj = { ...obj, city: 'New York' }; // { name: 'John', age: 30, city: 'New York' }
Merging Arrays/Objects: Merges them into a new one.
const arr1 = [1, 2, 3];const arr2 = [4, 5, 6];const mergedArray = [...arr1, ...arr2]; // [1, 2, 3, 4, 5, 6]const obj1 = { foo: 'bar' };const obj2 = { qux: 'baz' };const mergedObj = { ...obj1, ...obj2 }; // { foo: 'bar', qux: 'baz' }
Function Arguments: Passes array elements as individual arguments.
const numbers = [1, 2, 3];Math.max(...numbers); // Same as Math.max(1, 2, 3)
Array vs. Object Spreads: Only iterables can be spread into arrays; arrays can be spread into objects.
const array = [1, 2, 3];const obj = { ...array }; // { 0: 1, 1: 2, 2: 3 }
The rest syntax (...
) gathers multiple elements into an array or object, the inverse of the spread syntax.
Function Parameters: Collects remaining arguments into an array.
function addFiveToNumbers(...numbers) {return numbers.map((x) => x + 5);}const result = addFiveToNumbers(4, 5, 6, 7); // [9, 10, 11, 12]
Array Destructuring: Collects remaining elements into a new array.
const [first, second, ...remaining] = [1, 2, 3, 4, 5];// first: 1, second: 2, remaining: [3, 4, 5]
Object Destructuring: Collects remaining properties into a new object.
const { e, f, ...others } = { e: 1, f: 2, g: 3, h: 4 };// e: 1, f: 2, others: { g: 3, h: 4 }
Rest Parameter Rules: Must be the last parameter.
function addFiveToNumbers(arg1, ...numbers, arg2) {// Error: Rest element must be last element.}
Map
object and a plain object in JavaScript?

- Keys: a Map accepts keys of any type (objects, functions, primitives), while a plain object's keys are strings or symbols.
- Size: a Map has a size
property to get the number of key-value pairs; for an object you must count Object.keys(obj).length yourself.
- Iteration: a Map is directly iterable with forEach
, keys()
, values()
, and entries()
; an object requires Object.keys()
, Object.values()
, or Object.entries()
for iteration.

// Map
const map = new Map();
map.set('key1', 'value1');
map.set({ key: 'key2' }, 'value2');
console.log(map.size); // 2

// Plain Object
const obj = { key1: 'value1' };
obj[{ key: 'key2' }] = 'value2'; // the object key is coerced to the string '[object Object]'
console.log(Object.keys(obj).length); // 2 (keys are strings)
Map
/Set
vs WeakMap
/WeakSet
?The main distinctions between Map
/Set
and WeakMap
/WeakSet
in JavaScript are as follows:
Map
and Set
accept keys of any type (objects, primitive values), whereas WeakMap
and WeakSet
exclusively use objects as keys, excluding primitive values like strings or numbers.Map
and Set
retain strong references to their keys and values, preventing their disposal by garbage collection. In contrast, WeakMap
and WeakSet
employ weak references for keys (objects), allowing these objects to be collected by garbage collection if no other strong references persist.Map
and Set
are enumerable and can be iterated over, while those in WeakMap
and WeakSet
are non-enumerable, precluding retrieval of key or value lists directly from them.Map
and Set
possess a size
property that indicates the number of elements they contain. In contrast, WeakMap
and WeakSet
lack a size
property because their size may vary as a result of garbage collection.Map
and Set
serve well as general-purpose data structures and for caching purposes. Conversely, WeakMap
and WeakSet
are primarily suited for storing metadata or additional object-related data without impeding the object's potential garbage collection when no longer needed.

One practical use case for the arrow function syntax in JavaScript is simplifying callback functions, particularly in scenarios where you need concise, inline function definitions. Here's an example:
Use Case: Mapping an Array
Suppose you have an array of numbers and you want to double each number using the map
function.
// Traditional function syntaxconst numbers = [1, 2, 3, 4, 5];const doubledNumbers = numbers.map(function (number) {return number * 2;});console.log(doubledNumbers); // Output: [2, 4, 6, 8, 10]
Using arrow function syntax, you can achieve the same result more succinctly:
// Arrow function syntaxconst numbers = [1, 2, 3, 4, 5];const doubledNumbers = numbers.map((number) => number * 2);console.log(doubledNumbers); // Output: [2, 4, 6, 8, 10]
In asynchronous programming, a callback function is passed as an argument to another function and invoked when a task completes, such as fetching data or handling I/O operations. Here's a concise explanation:
function fetchData(callback) {setTimeout(() => {const data = { name: 'John', age: 30 };callback(data);}, 1000);}fetchData((data) => {console.log(data); // { name: 'John', age: 30 }});
Debouncing delays function execution until a specified time has passed since its last call, useful for tasks like search input handling.
function debounce(func, delay) {let timeoutId;return (...args) => {clearTimeout(timeoutId);timeoutId = setTimeout(() => func.apply(this, args), delay);};}
Throttling limits function execution to at most once within a specified interval, beneficial for tasks like handling frequent events such as window resizing or scrolling.
function throttle(func, limit) {let inThrottle;return (...args) => {if (!inThrottle) {func.apply(this, args);inThrottle = true;setTimeout(() => (inThrottle = false), limit);}};}
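As a usage sketch for the two helpers above (the element id and the intervals are arbitrary):

const searchBox = document.getElementById('search');

// Run the search only after the user has stopped typing for 300 ms
searchBox.addEventListener(
  'input',
  debounce((event) => {
    console.log('Searching for:', event.target.value);
  }, 300),
);

// React to scrolling at most once every 200 ms
window.addEventListener(
  'scroll',
  throttle(() => {
    console.log('Scroll position:', window.scrollY);
  }, 200),
);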
These techniques optimize performance and manage event-driven behaviors effectively in JavaScript applications.
Destructuring assignment simplifies extracting values from arrays or properties from objects into separate variables:
// Array destructuring
const [a, b] = [1, 2];

// Object destructuring
const { name, age } = { name: 'John', age: 30 };
This syntax uses square brackets for arrays and curly braces for objects, enabling concise variable assignment directly from data structures.
Hoisting moves function declarations to the top of their scope during compilation, allowing them to be called before their actual placement in the code. Function expressions and arrow functions, however, must be defined before they are called to avoid errors.
// Function declarationhoistedFunction(); // Works finefunction hoistedFunction() {console.log('This function is hoisted');}// Function expressionnonHoistedFunction(); // Throws an errorvar nonHoistedFunction = function () {console.log('This function is not hoisted');};
In ES2015, classes use extends
to enable one class to inherit properties and methods from another. The super
keyword accesses the parent class's constructor and methods.
class Animal {constructor(name) {this.name = name;}speak() {console.log(`${this.name} makes a noise.`);}}class Dog extends Animal {constructor(name, breed) {super(name);this.breed = breed;}speak() {console.log(`${this.name} barks.`);}}const dog = new Dog('Rex', 'German Shepherd');dog.speak(); // Output: Rex barks.
Here, Dog
inherits from Animal
, showcasing how classes streamline inheritance and method overriding in JavaScript.
Lexical scoping in JavaScript determines variable access based on its position in the source code. Nested functions can access variables from their outer scope.
function outerFunction() {let outerVariable = 'I am outside!';function innerFunction() {console.log(outerVariable); // 'I am outside!'}innerFunction();}outerFunction();
Here, innerFunction
can access outerVariable
due to lexical scoping rules.
Scope in JavaScript determines the visibility of variables and functions within different parts of the code. There are three main types: global scope, function scope, and block scope.
// Global scopevar globalVar = 'I am global';function myFunction() {// Function scopevar functionVar = 'I am in a function';if (true) {// Block scopelet blockVar = 'I am in a block';console.log(blockVar); // Accessible here}// console.log(blockVar); // Throws an error}console.log(globalVar); // Accessible here// console.log(functionVar); // Throws an error
Global scope variables are accessible throughout the code, while function scope variables are limited to the function they are declared in. Block scope, introduced in ES6, confines variables to the block they are declared within (e.g., within curly braces {}).
The spread operator (...) in JavaScript expands elements of an iterable (like arrays or objects) into individual elements. It's used for copying arrays or objects, merging them, and passing array elements as function arguments.
// Copying an arrayconst arr1 = [1, 2, 3];const arr2 = [...arr1];// Merging arraysconst arr3 = [4, 5, 6];const mergedArray = [...arr1, ...arr3];// Copying an objectconst obj1 = { a: 1, b: 2 };const obj2 = { ...obj1 };// Merging objectsconst obj3 = { c: 3, d: 4 };const mergedObject = { ...obj1, ...obj3 };// Passing array elements as function argumentsconst sum = (x, y, z) => x + y + z;const numbers = [1, 2, 3];console.log(sum(...numbers)); // Output: 6
The spread operator simplifies tasks like copying, merging, and function argument handling by expanding iterable elements into individual components.
this
binding in event handlersIn JavaScript, the this
keyword refers to the object executing the current code. In event handlers, this
usually points to the element that triggered the event. However, its value can vary based on how the handler is defined and invoked. To ensure this
refers correctly, methods like bind()
, arrow functions, or explicit context assignment are used.
These approaches help maintain the intended context for this
within event handling functions, ensuring predictable behavior across different event-triggering scenarios in JavaScript applications.
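A short sketch of the two common approaches, assuming a button with id save-button:

class SaveButton {
  constructor(element) {
    this.element = element;
    this.saved = false;

    // Option 1: bind the handler so `this` stays the SaveButton instance
    this.element.addEventListener('click', this.handleClick.bind(this));

    // Option 2: an arrow function inherits `this` from the constructor
    this.element.addEventListener('click', () => this.handleClick());
  }

  handleClick() {
    this.saved = true;
    console.log('Saved?', this.saved);
  }
}

new SaveButton(document.getElementById('save-button'));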
Classical Inheritance: In languages like Java and C++, classes inherit from other classes through a hierarchical structure. Instances are created from classes using constructors.
Prototypal Inheritance: In JavaScript, objects inherit directly from other objects. Objects serve as prototypes, and new objects are created based on existing ones.
Classical inheritance uses classes for instantiation, while prototypal inheritance leverages object linkage for property and behavior inheritance, highlighting JavaScript's unique approach to object-oriented programming.
document.querySelector()
and document.getElementById()
document.querySelector()
selects elements using CSS selectors and returns the first match.
const element = document.querySelector('.my-class');
document.getElementById()
selects an element by its ID attribute and returns the element with that specific ID.
const elementById = document.getElementById('my-id');
While document.querySelector()
offers flexibility with CSS selectors, document.getElementById()
is straightforward for selecting elements by their unique IDs in the DOM.
Dot Notation: Concise and straightforward, it accesses object properties using valid identifiers.
const obj = { name: 'Alice', age: 30 };console.log(obj.name); // Alice
Bracket Notation: Flexible, it accesses properties using strings, suitable for names with special characters or dynamic properties.
const obj = { name: 'Alice', 'favorite color': 'blue' };console.log(obj['favorite color']); // blue
Dot notation is clear for standard properties, while bracket notation handles special cases like dynamic or non-standard property names effectively.
Accessible from anywhere in the code.
var globalVar = "I'm global"; // Global scope
Limited to the function where it's declared.
function myFunction() {var functionVar = "I'm in a function"; // Function scope}
Restricted to the block where let
or const
is used.
function myFunction() {if (true) {let blockVar = "I'm in a block"; // Block scopeconsole.log(blockVar); // Accessible here}// console.log(blockVar); // ReferenceError: blockVar is not defined}
These scopes define where variables can be accessed, from global access throughout the code to specific function or block-level access for better control and encapsulation of variables.
Duplicates top-level properties; nested objects remain referenced.
let obj1 = { a: 1, b: { c: 2 } };
let shallowCopy = Object.assign({}, obj1);

shallowCopy.b.c = 3;
console.log(obj1.b.c); // 3
Duplicates all levels, creating independent nested objects.
obj1 = { a: 1, b: { c: 2 } }; // reset the original object
let deepCopy = JSON.parse(JSON.stringify(obj1));

deepCopy.b.c = 4;
console.log(obj1.b.c); // 2
Shallow copies share references to nested objects, while deep copies create entirely new instances, ensuring independent modifications.
var
, let
, and const
var
:
console.log(myVar); // undefined
var myVar = 'Hello';
let
and const
:
console.log(myLet); // ReferenceError: Cannot access 'myLet' before initialization
let myLet = 'World';

console.log(myConst); // ReferenceError: Cannot access 'myConst' before initialization
const myConst = '!';
const
:
const PI = 3.14;
PI = 3.14159; // TypeError: Assignment to constant variable.
Closures in JavaScript provide a mechanism to create private variables by encapsulating them within a function scope. Here's how closures can be used to achieve this:
function createCounter() {let count = 0;return {increment: () => ++count,decrement: () => --count,getCount: () => count,};}const counter = createCounter();console.log(counter.increment()); // 1console.log(counter.getCount()); // 1console.log(counter.count); // undefined
Set
s and Map
s handle equality checks for objects?Set
s and Map
s in JavaScript determine the equality of objects based on reference equality, not by comparing their contents. This means objects are considered equal only if they point to the same memory location. For instance:
const set = new Set();const obj1 = { a: 1 };const obj2 = { a: 1 };set.add(obj1);set.add(obj2);console.log(set.size); // Output: 2
In this example, obj1
and obj2
are treated as separate entries in the Set because they are distinct objects, despite having identical properties. Therefore, Sets and Maps rely on object references to determine equality, not their internal values.
To access the index of an element in an array during iteration, you can utilize methods like forEach, map, for...of with entries, or a traditional for loop. Here's an example using forEach:
const array = ['a', 'b', 'c'];array.forEach((element, index) => {console.log(index, element);});
To determine the type of a variable in JavaScript, you use typeof
followed by the variable name. It returns a string indicating the variable's type: "string", "number", "boolean", "object", "function", "undefined", or "symbol". For arrays, use Array.isArray(variableName)
, and for null, check variableName === null
.
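For instance:

const items = [1, 2, 3];
const nothing = null;

console.log(typeof items); // "object", so typeof alone cannot identify arrays
console.log(Array.isArray(items)); // true
console.log(typeof nothing); // "object", so check null explicitly
console.log(nothing === null); // true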
You've made it to the end of our extensive list of JavaScript interview questions and answers! We hope this guide has helped you gain the confidence and skills you need to ace your next JavaScript interview. Remember, practice is key, so keep coding and reviewing the concepts until they become second nature.
As a JavaScript developer, it's essential to be prepared for common interview questions that test your skills and knowledge. Here are 10 must-know questions, along with detailed answers and code examples, to help you ace your next interview.
Debouncing is a crucial technique used to manage repetitive or frequent events, particularly in the context of user input, such as keyboard typing or resizing a browser window. The primary goal of debouncing is to improve performance and efficiency by reducing the number of times a particular function or event handler is triggered; the handler is only triggered when the input has stopped changing.
Example Usage:
The debounce function from Lodash can be used to create a debounced version of a function, as shown below:
import { debounce } from 'lodash';const searchInput = document.getElementById('search-input');const debouncedSearch = debounce(() => {// Perform the search operation hereconsole.log('Searching for:', searchInput.value);}, 300);searchInput.addEventListener('input', debouncedSearch);
Debouncing and Throttling are related techniques, but they serve different purposes. Throttling is a technique that limits the frequency of a function's execution, while debouncing delays the execution of a function until a certain amount of time has passed since the last input event.
Practice implementing a Debounce function on GreatFrontEnd
Promise.all
Promise.all()
is a key feature in JavaScript that simplifies handling multiple asynchronous operations concurrently, particularly when there are dependencies among them. It accepts an array of promises and returns a new promise that resolves to an array of results once all input promises have resolved, or rejects if any input promise rejects.
Being proficient with Promise.all() demonstrates a front-end engineer's capability to manage complex asynchronous workflows efficiently and handle errors effectively, which is crucial for their daily tasks.
const promise1 = fetch('https://api.example.com/data/1');const promise2 = fetch('https://api.example.com/data/2');const promise3 = fetch('https://api.example.com/data/3');Promise.all([promise1, promise2, promise3]).then((responses) => {// This callback runs only when all promises in the array have resolved.console.log('All responses:', responses);}).catch((error) => {// Handle any errors from any promise.console.error('Error:', error);});
In this example, Promise.all()
is used to fetch data from three different URLs concurrently. The .then()
block executes only when all three promises resolve. If any promise rejects, the .catch()
block handles the error.
This is a valuable topic for front-end interviews since candidates are often tested on their knowledge of asynchronous programming and their ability to implement polyfills. Promise.all()
has related functions like Promise.race()
and Promise.any()
, which can also be covered in interviews, making it a versatile topic to master.
Practice implementing Promise.all()
on GreatFrontEnd
Deep Equal is an essential concept in JavaScript for comparing two objects or arrays to determine if they are structurally identical. Unlike shallow equality, which checks if the references of the objects are the same, deep equality checks if the values within the objects or arrays are equal, including nested structures.
Here's a basic implementation of a deep equal function in JavaScript:
function deepEqual(obj1, obj2) {if (obj1 === obj2) return true;if (obj1 == null ||typeof obj1 !== 'object' ||obj2 == null ||typeof obj2 !== 'object')return false;let keys1 = Object.keys(obj1);let keys2 = Object.keys(obj2);if (keys1.length !== keys2.length) return false;for (let key of keys1) {if (!keys2.includes(key) || !deepEqual(obj1[key], obj2[key])) return false;}return true;}// Example usageconst object1 = {name: 'John',age: 30,address: {city: 'New York',zip: '10001',},};const object2 = {name: 'John',age: 30,address: {city: 'New York',zip: '10001',},};console.log(deepEqual(object1, object2)); // true
In this example, the deepEqual
function recursively checks if two objects (or arrays) are equal. It first checks if the two objects are the same reference. If not, it verifies that both are objects and not null. Then, it compares the keys and values recursively to ensure all nested structures are equal.
This topic is valuable for front-end interviews as it tests a candidate's understanding of deep vs. shallow comparisons, recursion, and handling complex data structures.
Practice implementing Deep Equal on GreatFrontEnd
An EventEmitter class in JavaScript is a mechanism that allows objects to subscribe to, listen for, and emit events when specific actions or conditions are met. This class supports the observer pattern, where an object (the event emitter) keeps a list of dependents (observers) and notifies them of any changes or events. The EventEmitter is also part of the Node.js API.
// Example usageconst eventEmitter = new EventEmitter();// Subscribe to an eventeventEmitter.on('customEvent', (data) => {console.log('Event emitted with data:', data);});// Emit the eventeventEmitter.emit('customEvent', { message: 'Hello, world!' });
Creating an EventEmitter class requires an understanding of object-oriented programming, closures, the this keyword, and basic data structures and algorithms. Follow-up questions in interviews might include implementing an API for unsubscribing from events.
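A minimal sketch of such a class (not the Node.js implementation, just an illustration of the pattern, including an off method for unsubscribing):

class EventEmitter {
  constructor() {
    this.listeners = {}; // event name -> array of callbacks
  }

  on(eventName, callback) {
    if (!this.listeners[eventName]) {
      this.listeners[eventName] = [];
    }
    this.listeners[eventName].push(callback);
    return this; // allow chaining
  }

  off(eventName, callback) {
    const callbacks = this.listeners[eventName] || [];
    this.listeners[eventName] = callbacks.filter((cb) => cb !== callback);
    return this;
  }

  emit(eventName, ...args) {
    (this.listeners[eventName] || []).forEach((callback) => {
      callback(...args);
    });
    return this;
  }
}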
Practice implementing an Event Emitter on GreatFrontEnd
Array.prototype.reduce()
Array.prototype.reduce()
is a built-in method in JavaScript that allows you to apply a function against an accumulator and each element in the array (from left to right) to reduce it to a single value. This method is highly versatile and can be used for a variety of tasks such as summing numbers, flattening arrays, or grouping objects.
// Example: Summing numbers in an arrayconst numbers = [1, 2, 3, 4, 5];const sum = numbers.reduce(function (accumulator, currentValue) {return accumulator + currentValue;}, 0);console.log(sum); // Output: 15
Array.prototype.reduce()
is a frequently asked topic in front-end interviews, especially by major tech companies, alongside its sister methods, Array.prototype.map()
, Array.prototype.filter()
, and Array.prototype.concat()
. Modern front-end development often utilizes functional programming style APIs like Array.prototype.reduce()
, making it an excellent opportunity for candidates to demonstrate their knowledge of prototypes and polyfills. Although it seems straightforward, there are several deeper aspects to consider, such as how the optional initial value affects the first iteration and how empty or sparse arrays are handled.
Practice implementing the Array.prototype.reduce()
function on GreatFrontEnd
In JavaScript, "flattening" refers to the process of converting a nested array into a single-level array. This is useful for simplifying data structures and making them easier to work with. JavaScript provides several ways to flatten arrays, with the most modern and convenient method being the Array.prototype.flat()
method introduced in ES2019.
// Example: Flattening a nested arrayconst nestedArray = [1, [2, [3, [4, [5]]]]];const flatArray = nestedArray.flat(Infinity);console.log(flatArray); // Output: [1, 2, 3, 4, 5]
In this example, the flat()
method is used with a depth of Infinity to completely flatten the deeply nested array into a single-level array. The flat() method
can take a depth argument to specify the level of flattening if the array is not deeply nested.
Before ES2019, flattening arrays required custom implementations or the use of libraries like Lodash. Here’s a basic custom implementation using recursion:
// Custom implementation of flattening an arrayfunction flattenArray(arr) {return arr.reduce((acc, val) => {return Array.isArray(val) ? acc.concat(flattenArray(val)) : acc.concat(val);}, []);}const nestedArray = [1, [2, [3, [4, [5]]]]];const flatArray = flattenArray(nestedArray);console.log(flatArray); // Output: [1, 2, 3, 4, 5]
This custom flattenArray
function uses the reduce()
method to concatenate values into a single array, recursively flattening any nested arrays encountered.
Practice implementing Flatten function on GreatFrontEnd
Data merging in JavaScript involves combining multiple objects or arrays into a single cohesive structure. This is often necessary when dealing with complex data sets or integrating data from different sources. JavaScript provides several methods to merge data, including the spread operator, Object.assign(), and various array methods.
The spread operator (...) is a concise way to merge objects. It creates a new object by copying the properties from the source objects.
const obj1 = { a: 1, b: 2 };const obj2 = { b: 3, c: 4 };const mergedObj = { ...obj1, ...obj2 };console.log(mergedObj); // Output: { a: 1, b: 3, c: 4 }
In this example, obj2's b property overwrites obj1's b property in the merged object.
Object.assign()
Object.assign()
is another method to merge objects. It copies all enumerable properties from one or more source objects to a target object.
const obj1 = { a: 1, b: 2 };const obj2 = { b: 3, c: 4 };const mergedObj = Object.assign({}, obj1, obj2);console.log(mergedObj); // Output: { a: 1, b: 3, c: 4 }
The spread operator can also merge arrays by concatenating them.
const array1 = [1, 2, 3];const array2 = [4, 5, 6];const mergedArray = [...array1, ...array2];console.log(mergedArray); // Output: [1, 2, 3, 4, 5, 6]
The concat() method merges two or more arrays into a new array.
const array1 = [1, 2, 3];const array2 = [4, 5, 6];const mergedArray = array1.concat(array2);console.log(mergedArray); // Output: [1, 2, 3, 4, 5, 6]
For deep merging, where nested objects and arrays need to be merged, a custom function or a library like Lodash can be used. Here's a simple custom implementation:
function deepMerge(target, source) {for (const key in source) {if (source[key] instanceof Object && key in target) {Object.assign(source[key], deepMerge(target[key], source[key]));}}Object.assign(target || {}, source);return target;}const obj1 = { a: 1, b: { x: 10, y: 20 } };const obj2 = { b: { y: 30, z: 40 }, c: 3 };const mergedObj = deepMerge(obj1, obj2);console.log(mergedObj); // Output: { a: 1, b: { x: 10, y: 30, z: 40 }, c: 3 }
merge
Lodash is a popular utility library in JavaScript that provides many helpful functions, including merge
. The _.merge
function in Lodash recursively merges properties of the source objects into the destination object, which is particularly useful for deep merging of nested objects.
const _ = require('lodash');const obj1 = { a: 1, b: { x: 10, y: 20 } };const obj2 = { b: { y: 30, z: 40 }, c: 3 };const mergedObj = _.merge({}, obj1, obj2);console.log(mergedObj); // Output: { a: 1, b: { x: 10, y: 30, z: 40 }, c: 3 }
In this example, _.merge
deep merges obj1
and obj2
, ensuring that nested properties are combined correctly.
Practice implementing Data Merging function on GreatFrontEnd
getElementsByClassName
In JavaScript, getElementsByClassName
is a method used to select elements from the DOM (Document Object Model) based on their CSS class names. It returns a live HTMLCollection
of elements that match the specified class name(s).
You can use getElementsByClassName
by calling it on the document object and passing one or more class names as arguments:
// Select all elements with the class name "example"
const elements = document.getElementsByClassName('example');

// Loop through the selected elements
for (let i = 0; i < elements.length; i++) {
  console.log(elements[i].textContent);
}
You can specify multiple class names separated by spaces:
const elements = document.getElementsByClassName('class1 class2');
This will select elements that have both class1 and class2.
HTMLCollection
The HTMLCollection
returned by getElementsByClassName
is live, meaning it updates automatically when the DOM changes. If elements with the specified class name are added or removed, the collection is updated accordingly.
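A minimal sketch of this live behavior, assuming a page that starts with two elements carrying the class item:

// Assume the page initially contains two <div class="item"> elements
const items = document.getElementsByClassName('item');
console.log(items.length); // 2

// Appending a matching element updates the same collection automatically
const newItem = document.createElement('div');
newItem.className = 'item';
document.body.appendChild(newItem);

console.log(items.length); // 3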
querySelectorAll
For more complex selections based on CSS selectors, including class names, IDs, attributes, etc., querySelectorAll
provides more flexibility:
const elements = document.querySelectorAll('.example');
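Unlike getElementsByClassName, querySelectorAll returns a static NodeList, which does not update when the DOM changes but supports forEach directly:

// Static NodeList: a snapshot taken at query time
document.querySelectorAll('.example').forEach((element) => {
  console.log(element.textContent);
});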
Practice implementing getElementsByClassName
on GreatFrontEnd
Memoization is a technique used in programming to optimize expensive function calls by caching their results. In JavaScript, memoization involves storing the results of expensive function calls and returning the cached result when the same inputs occur again.
The basic idea behind memoization is to improve performance by avoiding redundant calculations. Here’s a simple example of memoization in JavaScript:
function expensiveOperation(n) {
  console.log('Calculating for', n);
  return n * 2;
}

// Memoization function
function memoize(func) {
  const cache = {};
  return function (n) {
    if (cache[n] !== undefined) {
      console.log('From cache for', n);
      return cache[n];
    } else {
      const result = func(n);
      cache[n] = result;
      return result;
    }
  };
}

const memoizedExpensiveOperation = memoize(expensiveOperation);
console.log(memoizedExpensiveOperation(5)); // Output: Calculating for 5, 10
console.log(memoizedExpensiveOperation(5)); // Output: From cache for 5, 10
console.log(memoizedExpensiveOperation(10)); // Output: Calculating for 10, 20
console.log(memoizedExpensiveOperation(10)); // Output: From cache for 10, 20
Caching Results: The memoize function wraps around expensiveOperation and maintains a cache object.
Cache Check: Before executing expensiveOperation, memoize checks if the result for a given input (n) is already stored in the cache.
Returning Cached Result: If the result is found in the cache, memoize returns it directly without re-executing expensiveOperation.
Storing Result: If the result is not in the cache, memoize computes it by calling expensiveOperation(n), stores the result in the cache, and then returns it.
In modern JavaScript, libraries like Lodash provide utilities for memoization, making it easier to apply this optimization technique across different functions and use cases.
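For instance, Lodash's _.memoize wraps a function with a cache keyed by its first argument (an optional resolver can be supplied to derive custom cache keys):

const _ = require('lodash');

function slowDouble(n) {
  console.log('Calculating for', n);
  return n * 2;
}

// By default, the first argument is used as the cache key
const fastDouble = _.memoize(slowDouble);

console.log(fastDouble(5)); // Calculating for 5, then 10
console.log(fastDouble(5)); // 10 (served from the cache)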
Practice implementing Memoize function on GreatFrontEnd
Before the introduction of optional chaining (?.
) in JavaScript, accessing nested properties in an object could lead to errors if any part of the path did not exist.
For example:
const user = {
  name: 'John',
};

const city = user.address.city; // throws a TypeError because user.address is undefined
get
from LodashTo avoid this, developers used workarounds like the get
function from Lodash to access nested properties within objects with ease:
const _ = require('lodash');

const user = {
  name: 'John',
  address: {
    city: 'New York',
  },
};

console.log(_.get(user, 'address.city')); // 'New York'
console.log(_.get(user, 'address.street')); // undefined
Here, _.get
retrieves the value located at user.address.city
, handling potential undefined values gracefully.
However, with the introduction of optional chaining (?.
) in JavaScript, we can now access nested properties in a safer way:
const user = {
  name: 'John',
};

const city = user.address?.city; // returns undefined instead of throwing an error
The ?.
operator allows us to access properties in a way that stops evaluating the expression if any part of the path is null or undefined, preventing errors and returning undefined instead.
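Optional chaining also pairs well with nullish coalescing for fallback values, and it works for method calls too:

const user = { name: 'John' };

// Provide a fallback when the chain short-circuits to undefined
console.log(user.address?.city ?? 'Unknown'); // 'Unknown'

// Safely call a method that may not exist
console.log(user.getName?.()); // undefined, no TypeError is thrown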
Practice implementing get
function on GreatFrontEnd
These questions cover essential concepts in JavaScript, and understanding them will help you tackle more complex problems in your interviews. Remember to practice and be ready to explain your thought process and code examples!
If you have used Next.js, you have probably heard of Server Actions as a new way to handle form submissions and data mutations in Next.js applications. Server Actions have both server-side and client-side aspects, and Actions – the client-side APIs – are landing in React 19! React Actions are not specific to Next.js or data fetching – they can be used with other server-side frameworks and for any asynchronous operations.
In this post, we will elaborate on what React Actions are and how to use the new hooks like useActionState
and useFormStatus
to build form submission experiences the modern way.
Note: As of writing, React 19 has not been published and the API can be prone to updates, so you should always refer to the latest version of the documentation.
Before we dive deeper into React Actions, we should first understand the action
attribute in native HTML forms. Before JavaScript was introduced, the common way to send data to the server was via the action
attribute on <form>
s.
When we define a <form>
element, we can also set an action
attribute to a URI which will be used as the endpoint to send the data to the server. The action
attribute is often combined with method
attribute which can be set to HTTP methods like GET
or POST
.
<form action="/user" method="POST"><input name="name" id="name" value="" /><div><button type="submit">Save</button></div></form>
When a user clicks on the "Save" button, the browser will make a HTTP request to the /user
endpoint using the specified HTTP method. This is a very powerful pattern that does not rely on JavaScript, however there are downsides of this approach:
Submitting forms in React is straightforward. It can be done by utilizing the onSubmit
prop and fetch
API. We can show loading and error states through usage of the useState
hook and onSubmit
prop.
import { useState } from 'react';

export default function UserForm() {
  const [isPending, setIsPending] = useState(false);
  const [error, setError] = useState(null);

  const handleSubmit = async (event) => {
    event.preventDefault();
    const data = new FormData(event.target);
    try {
      setError(null);
      setIsPending(true);
      await fetch('/user', {
        method: 'POST',
        // FormData cannot be JSON.stringify-ed directly; convert it to a plain object first
        body: JSON.stringify(Object.fromEntries(data)),
      });
      event.target.reset();
    } catch (err) {
      setError(err.message);
    } finally {
      setIsPending(false);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <input id="name" name="name" />
      {error && <p>{error}</p>}
      <button type="submit">{isPending ? 'Saving...' : 'Save'}</button>
    </form>
  );
}
Client-side form submissions and updates offer several advantages, particularly in terms of user experience and performance:
While client-side form submissions and updates provide numerous benefits, there are also potential problems and pitfalls that we should be aware of:
We have to remember to call event.preventDefault()
otherwise the browser will do a full page refresh on submission of the form.

Enter the era of new React APIs! With the introduction of Actions in React 19, we can harness the power of form actions on the client side as well.
Typically, HTML <form>
s support URI strings for values to the action
attribute. However, <form>
s in React 19 accept functions as valid values for the action
prop! We can even pass in async
functions to the action. When a string is passed to the action
, the <form>
will behave like native HTML forms, however if a function is passed, the form will be enhanced by React.
<form action={actionFunction}>
Let's convert the form above to use React 19's Actions:
import { useState } from 'react';

export default function UserForm() {
  const [isPending, setIsPending] = useState(false);
  const [error, setError] = useState(null);

  async function createUserAction(formData) {
    setError(null);
    setIsPending(true);
    try {
      await fetch('/user', {
        method: 'POST',
        body: JSON.stringify({ name: formData.get('name') }),
      });
      alert('User has been created successfully');
    } catch (err) {
      setError(err.message);
    } finally {
      setIsPending(false);
    }
  }

  return (
    <form action={createUserAction}>
      <input id="name" name="name" />
      {error && <p>{error}</p>}
      <button type="submit">{isPending ? 'Saving...' : 'Save'}</button>
    </form>
  );
}
React does a few special things under the hood when we pass a function to action
:
event.preventDefault()
, React does this automatically if a function is passed to action
FormData
as a parameter, so we do not need to construct form data ourselves via new FormData(event.target)
<form>
will be reset upon action success

That's useful, but do we still have to maintain the pending and error state variables ourselves? Not at all, React also has a solution for them. These issues are addressed by the introduction of new hooks – useActionState
and useFormStatus
. Let's look at these two new hooks.
useActionState
hook

The useActionState
hook helps make the common cases easier for Actions. useActionState
hook accepts multiple parameters:
actionFn
: A function which will be used as action for the form. actionFn
accepts two parameters: previousState
and formData
initialState
: Value to be used as the initial state. It is ignored after the action is first invoked
permalink
(optional): A string containing the unique page URI that this form modifies

The useActionState
hook returns an array containing two values:
formState
: A value which will be derived from return value of action function. Defaults to initialState
formAction
: Reference to action function which was passed to the <form>
's action
const [state, formAction] = useActionState(actionFn, initialState);
Let's rewrite our earlier form example using the useActionState
hook:
import { useActionState } from 'react';

async function createUserAction(prevState, formData) {
  try {
    await fetch('/user', {
      method: 'POST',
      body: JSON.stringify({ name: formData.get('name') }),
    });
  } catch (err) {
    return {
      success: false,
      message: err.message,
    };
  }
  return {
    success: true,
    message: 'User created successfully!',
  };
}

export default function UserForm() {
  const [formState, formAction] = useActionState(createUserAction, null);

  return (
    <form action={formAction}>
      <input id="name" name="name" />
      {formState?.success === true && (
        <p className="success">{formState?.message}</p>
      )}
      {formState?.success === false && (
        <p className="error">{formState?.message}</p>
      )}
      <button type="submit">Save</button>
    </form>
  );
}
A little better! We no longer need a state just for the error message, it is now part of the action state. Astute readers will notice that this new example does not handle the pending/loading states. That's where the useFormStatus
hook comes in.
useFormStatus
hook

The useFormStatus
hook provides status information of the last form submission, which can be used by components to render pending states (e.g. loading indicators, disabling buttons and inputs).
It does not accept any parameters and returns a status
object with the following properties:
pending
: A boolean
value that indicates whether the parent <form>
is pending submissiondata
: A FormData
object containing data of the parent <form>
. It is null
if there is no submission or no parent <form>
method
: A string value of either get
or post
. This tells us whether the form is getting submitted using GET
or POST
action
: A reference to the action
prop on the parent <form>
. It is null
if there is no parent <form>
or if a string URI value is provided to the action
prop

const { pending, data, method, action } = useFormStatus();
Using the useFormStatus
hook comes with a caveat – useFormStatus()
will only return status information for a parent <form>
. It will not return status information for any <form>
rendered in that same component or children components. Hence the useFormStatus
hook must be called from a component that is rendered inside a <form>
.
Let's rewrite our example to use useFormStatus
for handling pending states:
import { useActionState } from 'react';
import { useFormStatus } from 'react-dom';

async function createUserAction(prevState, formData) {
  try {
    await fetch('/user', {
      method: 'POST',
      body: JSON.stringify({ name: formData.get('name') }),
    });
  } catch (err) {
    return {
      success: false,
      message: err.message,
    };
  }
  return {
    success: true,
    message: 'User created successfully!',
  };
}

// In order for `useFormStatus` to work we have to extract the button
// into a separate component so the <form> is now a parent component.
function SubmitButton() {
  const { pending } = useFormStatus();
  return <button type="submit">{pending ? 'Saving...' : 'Save'}</button>;
}

export default function UserForm() {
  const [formState, formAction] = useActionState(createUserAction, null);

  return (
    <form action={formAction}>
      <div>
        <input id="name" name="name" />
      </div>
      {formState?.success === true && (
        <p className="success">{formState?.message}</p>
      )}
      {formState?.success === false && (
        <p className="error">{formState?.message}</p>
      )}
      <SubmitButton />
    </form>
  );
}
In order for useFormStatus
hook to work, we have to extract the <button>
into a separate component so the <form>
is now a parent component.
By using the new useActionState
and useFormStatus
hooks:
pending
status. We can utilize the pending
field from the return value of useFormStatus
. There is no need to do prop drilling or use context for passing the pending state.
success
and error
fields from the action
function and be used to display success and error messages.
<form>
action succeeds, React will automatically reset the form for uncontrolled components. If you need to reset the <form>
manually, a new requestFormReset
React DOM API is available.
action
or formAction
props of <form>
, <input>
, and <button>
elements, the HTTP method will be POST
regardless of the value of the method
prop.
action
prop can be overridden by a formAction
prop on a <button>
or <input>
component as these support the formAction
prop.

The new React Actions, along with the useActionState
and useFormStatus
hooks provide apps with a new way to write form submissions in React efficiently and can easily manage the form's pending, success, and error states.
Say goodbye to form boilerplate code!
As Front End Engineers, we aim to deliver the best user experience and one of the ways to achieve that is by optimizing the applications' performance.
Users expect fast, responsive experiences, and will quickly abandon sites that are slow to load. Studies show that if a web page takes more than 3 seconds to load, over 40% of users will leave. With the prevalent usage of mobile devices which can be on slower network speeds, optimizing performance is critical.
Code splitting and lazy loading are effective strategies to achieve great performance on the web. In this post, we’ll explore these techniques, their benefits, and how they can be implemented in React.
Code splitting breaks down your application into smaller chunks, loading only the necessary parts to reduce the bundle size. Lazy loading defers loading non-essential resources until they’re needed, further enhancing performance.
For example, consider a React app with a Login, Dashboard, and Listing page. Traditionally, the code for all these pages is bundled into a single JS file. This is suboptimal because when the user visits the Login page, it is unnecessary to load pages such as the Dashboard and Listing page. But by implementing code splitting and lazy loading, we can dynamically load specific components/pages only when needed, significantly improving performance.
In React, code splitting can be introduced via dynamic import()
. Dynamic import is a built-in way to do this in JavaScript. The syntax looks like this:
import('./math').then((math) => {
  console.log(math.add(1, 2));
});
For React apps, code splitting using dynamic imports is supported out of the box via React.lazy
if a boilerplate like create-react-app
is used. The React.lazy()
function lets you render a dynamic import as a regular component. This feature was introduced in React 16.6 which allows lazy loading of components via splitting a big JS bundle into multiple smaller JS chunks for each component that is lazily loaded.
However, if a custom Webpack setup is used, you must check the Webpack guide for setting up code splitting.
To implement lazy loading in React, we can leverage React.lazy
function and the Suspense
component to handle loading states. Here's an example demonstrating lazy loading in React:
const LazyComponent = React.lazy(() => import('./LazyComponent'));

function App() {
  return (
    <React.Suspense fallback={<div>Loading...</div>}>
      <LazyComponent />
    </React.Suspense>
  );
}
By wrapping a lazy-loaded component with Suspense
, we can provide a fallback/placeholder UI while the component is being loaded asynchronously, such as a spinner.
However, there can be a case where LazyComponent
fails to load, for example due to a network failure. In that case, the error needs to be handled gracefully for a better user experience, which is what Error Boundaries are for.
import MyErrorBoundary from './MyErrorBoundary';

const LazyComponent = React.lazy(() => import('./LazyComponent'));

function App() {
  return (
    <MyErrorBoundary>
      <React.Suspense fallback={<div>Loading...</div>}>
        <LazyComponent />
      </React.Suspense>
    </MyErrorBoundary>
  );
}
So, when the LazyComponent
is lazily loaded, it signifies that the code for LazyComponent
is segmented into a distinct JS chunk, separate from the main JS bundle. This JS chunk is exclusively loaded when the LazyComponent
is required to be displayed on the user interface, optimizing the loading process and enhancing the application's performance.
Note: React.lazy
and Suspense
only work on the client side and are not available for server-side rendering. For server-side code splitting, the @loadable/component library can be used.
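As a rough sketch, the basic client-side usage of @loadable/component mirrors React.lazy (the full server-side setup additionally involves @loadable/server and a Babel plugin, which are omitted here):

import loadable from '@loadable/component';

// Works like React.lazy, but is also compatible with server-side rendering setups
const LazyComponent = loadable(() => import('./LazyComponent'), {
  fallback: <div>Loading...</div>,
});

function App() {
  return <LazyComponent />;
}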
From the above, we have seen how we use React.lazy
to code split and lazy load components. But the question is where to lazy load and code split. There are approaches like Route-based code splitting and Component-based code splitting.
Route-based code splitting is almost always the best place to start, and it is also where we can achieve the largest reduction in our JS bundle size. It works best when the routes are very distinct and share little code, because any code shared between routes ends up duplicated in the JS chunks of the lazily-loaded routes. Hence we need to watch out for duplication in the output bundles when we do code splitting at the route level.
Here is an example of route-based code splitting:
import { Suspense, lazy } from 'react';
import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';

const Login = lazy(() => import('./Login'));
const Dashboard = lazy(() => import('./Dashboard'));

const App = () => (
  <Router>
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/" element={<Login />} />
        <Route path="/dashboard" element={<Dashboard />} />
      </Routes>
    </Suspense>
  </Router>
);
Component-based code splitting provides granular control over loading specific components, allowing for more precise optimization. This is where the real power of code splitting shows, because we have control over individual components.

When deciding which components to lazy load, consider the importance and impact of each component on the initial rendering and user experience. Ideal candidates for lazy loading are large components with significant code or resources, conditional components that are not always needed, and secondary or non-essential features. These can be segmented into separate chunks and loaded on demand, optimizing performance. However, critical components like headers, main content, and dependencies should be loaded upfront to ensure a seamless user experience.

We need to be careful in selecting which components to lazy load to strike a balance between initial load times and providing essential functionality. Here is an example of component-based code splitting:
import { useState, lazy, Suspense } from 'react';

const Modal = lazy(() => import('./Modal'));

function App() {
  const [showModal, setShowModal] = useState(false);

  const openModal = () => {
    setShowModal(true);
  };

  const closeModal = () => {
    setShowModal(false);
  };

  return (
    <div>
      <button onClick={openModal}>Open Modal</button>
      {showModal && (
        <Suspense fallback={<div>Loading Modal...</div>}>
          <Modal onClose={closeModal} />
        </Suspense>
      )}
    </div>
  );
}

export default App;
In this example, the Modal
component is lazily loaded using React.lazy()
and dynamically imported. The modal is conditionally rendered based on the showModal
state, which is toggled by the openModal
and closeModal
functions. The Suspense
component displays a loading indicator while the modal component is being loaded asynchronously. This implementation optimizes performance by loading the modal component only when the user interacts with the Open Modal
button, preventing unnecessary loading of heavy components like a text editor until they are actually needed.
If you’re using Webpack to bundle your application, then you can use Webpack's magic comments to further improve the user experience with lazy loading.
We can use webpackPrefetch and webpackPreload for dynamic imports. In the above example of the lazy loading Modal, the Modal is loaded only when the user clicks the Open Modal
button and the user has to wait for a fraction of a second to load the Modal.
We can improve the user experience by not making users wait for the Modal to load. So, in that scenario, we can prefetch or preload the Modal component. In the above example of the Lazy loading modal, the only difference will be in how we import the Modal
component.
Before:
const Modal = lazy(() => import('./Modal'));
After:
const Modal = lazy(() => import(/* webpackPrefetch: true */ './Modal'));
What webpackPrefetch: true
does is tell the browser to download this chunk during idle time and keep it in the browser cache, so it's ready ahead of time and the user won't have to wait for the Modal component to load when they click the Open Modal
button.
We can use webpackPrefetch
and webpackPreload
for a particular component when we think that there is a high possibility for the user to use that component when a user visits the app.
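A sketch contrasting the two magic comments (Chart is a hypothetical component used purely for illustration):

// webpackPrefetch: fetched during browser idle time, for chunks likely needed later
const Modal = lazy(() => import(/* webpackPrefetch: true */ './Modal'));

// webpackPreload: fetched in parallel with the parent chunk, for chunks needed very soon
const Chart = lazy(() => import(/* webpackPreload: true */ './Chart'));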
Be sure to assess your application's requirements, tech stack and challenges when deciding the code splitting and lazy loading approach. By strategically dividing code and loading resources on demand, you can create fast, efficient, and engaging web applications.
Besides general front end performance techniques, image performance is also crucial when developing a web application, especially if your website contains a lot of images. This article covers multiple techniques to make image loading more efficient and reduce the time needed to display images.
Content delivery network (CDN) saves a copy of our web content on many servers around the world. This is to allow content to be delivered to users from their nearest server.
Having optimal image formats, such as WebP or AVIF, can reduce sizes while maintaining quality. These formats use advanced compression techniques, making web pages load faster.
<picture>
to provide multiple sources in different formats, letting the browser choose the most compatible and efficient one.

To have responsive images, where images adjust their own size to the current display size, we can provide multiple sources for the image, allowing the browser to select the most appropriate version.
<img>
to list these image sources of different sizes to match various display sizes.

Adaptive images involve detecting network speed and serving different image qualities accordingly. Users with slower connections will receive lower-quality images as compared to those with faster connections.
Images that are offscreen, or not in the viewport, are not loaded until they are needed. This means that the images are only loaded when they are about to become visible / nearing the viewport.
With lazy loading, similar to offscreen images, images and iframes are loaded only as they approach the user's viewport. This can significantly reduce initial page load times and save bandwidth.
loading="lazy"
to <img>
and <iframe>.

Progressive JPEGs load in layers, improving in clarity and detail with each layer, so that users see a low-quality version of the image almost immediately, which progressively improves until the image fully loads. This enhances the user experience by providing visual content faster.
Preloading allows us to specify which images should be loaded early, even before the browser encounters the image tags. This is particularly useful for images that are crucial to the user experience but might be discovered late by the browser, such as later sections of the page or carousels.
<link rel="preload">
in <head>
to specify these images, setting as="image".

Compression reduces file sizes by getting rid of unnecessary data, speeding up loading times. Effective compression balances size and quality, allowing images to load quickly without an obvious loss in quality.
Enhancing image performance requires combining various techniques. By applying the techniques above, we can significantly improve the loading speed and efficiency of our web applications. Ultimately, staying updated on the latest trends in image optimization and web standards is essential for remaining competitive in web development.
After months of working hard on this, we are thrilled to finally announce the beta launch of GreatFrontEnd Projects! 🥳️
At GreatFrontEnd, we strive to create a platform for front end engineers to learn, grow and connect with one another.
Through scouring countless forums frequented by front end engineers and speaking to our users, we've discovered a common need for a platform where you can develop well-crafted real-world projects – be it to learn something new through hands-on practice, or to build up your portfolio of projects.
We wanted to build the very best version of such a platform while listening to your needs. We believe we are close to achieving this, and we're excited for you to experience it firsthand.
GreatFrontEnd Projects is a platform for front end engineers to build real-world projects.
In essence, we provide you with tons of real-world project challenges that you can build.
For each challenge, you'll have access to everything you need to start coding right away – including professional designs, API specs, starter code and image files.
Just click "Start project", open up your IDE and start coding!
After you're done, host your project on any service and submit your GitHub repo and site URL. This will be published for other users to give you code reviews and feedback.
We will pull your code from GitHub directly onto the platform for instant code reviews by the community, allowing you to receive feedback for your work and gain experience points ("reputation") to track your progress.
By using the platform, you can:
Budget-savvy users will also be happy to know that we are predominantly free, with 80% of our challenges accessible for no cost. We believe in providing the basics for free, including multi-page apps and breakpoint management. You won't be charged for essential features like taking screenshots.
Our premium charges only apply to advanced features that go beyond the basics, offering an extra boost to your learning and development. Find out more →
Whether you're a complete beginner hoping to learn front end from scratch, or a senior engineer hoping to learn more modern stacks, our platform was designed to assist you in your learning goals.
We offer a range of starter level challenges that are specifically designed for beginners. You can begin with these and gradually progress to more advanced challenges as you gain confidence and experience.
As you embark on each challenge, you'll have access to a wealth of resources to support your learning journey. These include detailed guides, solutions, references from fellow users, and community forums. These resources are carefully curated to assist you in understanding and mastering the concepts behind each challenge.
While other platforms might leave you relying solely on community feedback, our premium plan provides you with practical development guides and solutions written by experienced senior engineers from top tech companies. You'll learn best practices and supercharge your learning by referencing professionally-written code derived from years of experience.
Complete beginners will also be happy to find guides to help you get started with the very basics, such as starting up your IDE and code repository, or building UI with Figma.
Every challenge on our platform is detailed with the skills that you would be able to learn after building them. Starting from the simplest challenges, you may find yourself learning basic HTML and CSS. As you move along, you start to learn more advanced skills, such as using UI frameworks like React or Svelte.
We also offer a skills roadmap on our platform, which serves as a step-by-step guide to acquire all the fundamental skills required for front-end development, from the very basics to advanced topics.
For each skill, it provides you with a list of curated resources, as well as a recommended order of projects to build.
After completing a project, you'll have the opportunity to receive feedback and code reviews from the community. This feedback is invaluable for your growth as it helps you identify areas for improvement and refine your coding skills.
We make code reviews easy by displaying your code directly on our platform, eliminating the need for community members to go elsewhere to review your work. This convenience encourages more feedback and collaboration, which means you can expect to receive more feedback for your work.
Our platform includes an advanced gamification system that encourages you to track your progress and take on more challenges. Every productive action you undertake towards building projects or learning new skills will be rewarded with reputation points, continuously motivating you to stay engaged and make steady progress in your learning journey.
Our projects were designed with real world project specs meant for professional software engineers. This includes project specs and user stories written by professional product managers, and fully specified UI/UX designs by high-end designers.
This means that whatever you build from day 1 will resemble the kind of work you would be expected to do in a full-time front end engineer job. Moreover, you can be sure that every project you spend time building will form a professional application that can be reused for future projects, or used as part of your portfolio. Learn while building something useful!
If you're looking to build your portfolio – we've built our platform to ensure that you are well taken care of.
For developers seeking to build their portfolios or embark on side projects, our platform allows you to build stunning portfolio projects that were professionally designed by high-end designers. Design is hard – and you'd be able to focus solely on the technical execution.
Furthermore, unlike other challenge platforms, you'd be able to easily construct personalized portfolio projects instead of building the same thing as everyone else.
Each project within our platform is made up of reusable components which adhere to the same design system, making them inherently modular and compatible with one another. This means you can seamlessly combine components from various projects to construct unique and customized applications for your portfolio. These components cover a diverse range of applications, including Marketing, E-Commerce, Web Apps, Games, and even Portfolios, which means you'd be able to compose a wide variety of apps from them.
Additionally, we offer Component Tracks, which are collections of projects that form component libraries or design systems. This can leave a strong impression on potential employers and recruiters, showcasing your expertise and versatility in building a variety of components for common use cases, which is much more impressive than building individual projects.
If you enjoy dedicating your spare time to creating side projects, our platform will be beneficial for you as well. Each challenge you complete contributes to a growing collection of professionally designed, reusable components, allowing you to seamlessly integrate them into any of your personal side projects.
We provide all of the essentials for free. You will be able to complete 80% of our challenges and even some advanced features that other platforms make premium, such as multi-page apps, breakpoint management and screenshot taking.
Here are the advanced features you can enjoy as a Premium member:
Each guide and solution was written by big tech senior engineers with best practices derived from years of experience, allowing you to learn techniques and patterns early on in your learning, setting up for a strong foundation.
Learning how to use design tools like Figma is an important skill for any professional front end developer. Moreover, using the design file helps you in building a more precise solution using design details like font sizes, spacing and colors, eliminating the time that would otherwise be spent on guesswork.
Without knowing the domain well, it's hard to know which projects you should build in order to train different aspects of a skill. Our skills roadmap solves that problem by providing a structured roadmap of projects to build to train all the core skills required for front end engineers, all the way from beginner to advanced. While the free plan lets you access only the foundational skills in the skills roadmap, you will get full access to all nodes in the skills roadmap once you purchase any premium plan. This helps you learn skills efficiently without the guesswork.
Our component tracks are a unique feature where each track is a collection of projects that form a component library or even design system. By building entire component tracks, you showcase your abilities and versatility in building a variety of components for common use cases, which is much more impressive than building individual projects. Moreover, one of our component tracks is a design system, which means you get to build the underlying design system behind all of the projects on our platform, serving as a good foundation for your toolkit of reusable components.
Some of our most impressive projects were designed to teach you (and allow you to showcase) complex and / or modern techniques like full stack or artificial intelligence skills. These are the projects you'd want to reference when building your portfolio to stand out from the crowd of applications.
With these premium features, you'll save considerable time and effort towards building accurate designs and becoming a highly skilled front-end developer.
Refer to our pricing plan here for a free vs premium comparison table →
With our Beta launch underway, we're eager to collect insights from our beta testers to enhance our platform further. Should you wish to report a bug or propose new features, feel free to reach out via email at feedback@greatfrontend.com or share your thoughts through our feedback widget ("Chat with us!" on the side of the page).
This was a significant milestone for such a huge project spanning several months. We would like to express our gratitude to the team that helped make this happen, including:
A good website is not just about an aesthetic user interface – optimizing its front end performance is equally important, and in certain domains like e-commerce, checkout conversion is highly dependent on website performance.
This article presents a collection of underutilized yet effective strategies that you can use to improve your website's speed and user experience. These are useful concepts to know for front end system interviews as well as for your day-to-day work!
List virtualization is an optimization technique where only the currently visible items in a long list or large dataset are rendered. This method dynamically loads and unloads elements based on the scroll position.
react-virtualized.
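As a rough sketch, here is the same idea using react-window, a lighter library by the author of react-virtualized:

import { FixedSizeList } from 'react-window';

// Only the rows visible within the 400px-tall viewport (plus a small overscan)
// are mounted in the DOM, no matter how many items the list contains.
function VirtualizedList({ items }) {
  return (
    <FixedSizeList height={400} width={300} itemCount={items.length} itemSize={35}>
      {({ index, style }) => <div style={style}>{items[index]}</div>}
    </FixedSizeList>
  );
}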
Both bundle and code splitting are optimization techniques in web development that involve dividing a large codebase into smaller chunks, loading only the necessary chunks at any point in time. This can significantly enhance the performance and efficiency of applications, especially those with extensive codebases or complex dependencies.
import()
for dynamic imports at these points.

By implementing dynamic imports, we import code only on interaction, visibility, or navigation. Similarly, lazy loading is a popular design pattern that delays the initialization of an object until it is actually needed. These can help to improve efficiency, especially when costly resources are not always utilized.
import()
calls.

Optimal loading sequence prioritizes the loading of essential resources, like CSS, fonts, and above-the-fold content. This method carefully orders the loading process, so critical elements are rendered first, enhancing perceived performance. By doing this, non-essential items are deferred, largely boosting efficiency and user satisfaction, especially during page initialization.
<link rel="preload">
for these resources in the HTML head.

Through the prefetching technique, resources are loaded in the background before they are requested by users. This strategy aims to reduce perceived latency and improve responsiveness by fetching resources ahead of time, based on user behavior patterns or predictive algorithms.
<link rel="prefetch">
to instruct the browser to load these in idle time.

Preloading is where specific resources are identified and loaded early in the page's life cycle, even before the browser requires them. This ensures that critical assets such as scripts, stylesheets, and images are readily available by the time they're needed.
Unlike prefetching, which depends on future navigation, preloading focuses on the current page, strategically accelerating the availability of high-priority resources that are crucial for the immediate next steps. This is particularly useful for resources that are essential for the initial view or interactive features of a page, ensuring a smoother and faster user experience.
<link rel="preload">
for these resources, specifying the type with as.

Compression is a method that reduces file sizes for faster user delivery by eliminating redundant data. This not only quickens load times for web pages and apps but also cuts down on bandwidth and associated costs, similar to how tree shaking, which will be explained below, removes unused code to streamline bundles.
Tree shaking is an optimization technique used to eliminate unused code from the final bundle before deployment. By analyzing the import and export statements in a module structure, static analysis tools can determine which modules and lines of code are not being utilized and remove them.
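A minimal sketch of what tree shaking relies on, assuming an ES module bundler such as Webpack or Rollup in production mode (math.js is a hypothetical module):

// math.js
export function add(a, b) {
  return a + b;
}

export function multiply(a, b) {
  return a * b;
}

// app.js — only `add` is imported, so static analysis can prove that
// `multiply` is unused and drop it from the final bundle
import { add } from './math';

console.log(add(1, 2)); // 3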
Optimizing front end performance is important in providing a fast, efficient, and enjoyable user experience. The techniques mentioned above are powerful yet often underutilized. By carefully implementing these strategies, we can ensure that our applications perform optimally, keeping users engaged and satisfied.
This is a guest post by Jordan Cutler, Senior Frontend Engineer at Pinterest and author of the High Growth Engineer Newsletter.
I've seen a lot of CSS.
Unfortunately, there aren't great resources on doing it right.
In my experience as a Senior Frontend Engineer, I see 5 common mistakes that I'll go over in this article and how to avoid them.
Recognizing and avoiding these mistakes will help you write CSS that:
Let's dive in!
width
and height
properties incorrectly

One of the most common mistakes comes at the cost of responsiveness.
It's the overuse of width
and height
.
Luckily, there are easy fixes to this.
In general:
max-width
alongside width
height
for min-height
Using width
and height
can be ok in certain scenarios. If you are using them, you should know what you're doing and be cautious.
Some examples where width
and height
make more sense:
overflow: auto
on their parent.

HTML is responsive by default. Our CSS is what often breaks the responsiveness. When we add styles, we need to keep our page responsive.
Using width
and height
is dangerous because they are restrictive.
It says: "You must be this size no matter what."
For example: If we use width: 900px
on an element, what happens if we are on mobile?
It would overflow off the screen.
If you do need to use a fixed width value, make it flexible. My preferred way for doing that is adding max-width: 100%
.
A common example I see in the real world is defining a fixed width
value on <input>
elements.
Here's what it looks like when an <input>
is 400px
inside a container that was shrunk to 250px
.
Once we apply max-width: 100%
the issue goes away.
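A minimal sketch of the fix described above, using the same 400px input and 250px container from the example (class names are illustrative):

<style>
  .container {
    width: 250px;
  }
  .container input {
    width: 400px; /* on its own, this overflows the 250px container */
    max-width: 100%; /* caps the input at its container's width */
  }
</style>

<div class="container">
  <input placeholder="Search" />
</div>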
Similarly, we run into the same issue with height
.
If we define a fixed height: 250px
and our content size is greater than 250px, we see this happen:
However, the fix is easy. We can use min-height: 250px
instead.
Now, our container will always be at least 250px
, but can become larger if it needs to fit the content inside.
CSS is often considered difficult to maintain.
A large reason for this is mixing responsibilities in CSS.
In the same way that you don't want a function to be doing 10 different things, we can apply that principle to our CSS.
How? By dividing each element we apply CSS to into either a "layout element" or a "content element."
Here are examples of content elements:
Essentially, they are isolated items that hold content.
In comparison, "layout elements" define the layout those "content elements" should be placed in.
They define the structure via flexbox, grid, gap, and margin.
Layout elements are all the elements you don't see.
Here's an example:
Let's look at the Tailwind landing page and see how it is structured by adding a border: 1px solid red
to everything.
You can already start to see where the "content" elements are vs the "layout elements." The content elements are the buttons, text inputs, paragraph text, icon buttons in the navbar, etc.
Let's focus on the bottom row with the "Get started" button:
The "Get started" button and "Quick search" input are wrapped in a <div>
which is the layout element.
It defines display: flex
, margin-top
, and justify-content: center
.
With that, it created the skeleton where content elements can just be plopped into the correct location.
In your CSS, you should aim to use this pattern too.
A common mistake would be applying margin-top
on one or both of these buttons, rather than letting the layout element do it.
Question: In the below image, would you use padding
, margin
, or gap
to add space between these tags?
Hopefully, your answer is gap
.
Often, I see something like:
<style>
  .tag {
    margin-left: 8px;
  }
  .tag:first-child {
    margin-left: 0;
  }
</style>

<div class="tag">All</div>
<div class="tag">Music</div>
<div class="tag">JavaScript</div>
...
<div class="tag">Sales</div>
While this "works", it breaks separation of responsibilities.
The .tag
should just be concerned with rendering itself and its content.
Instead, we can do:
<style>
  .tagRow {
    display: flex;
    gap: 8px;
  }
  .tag {
    /* The CSS that controls the tag appearance */
  }
</style>

<div class="tagRow">
  <div class="tag">All</div>
  <div class="tag">Music</div>
  <div class="tag">JavaScript</div>
  ...
  <div class="tag">Sales</div>
</div>
This defines a wrapper around the .tag
s. Any element inside that tag wrapper will receive a space around them that is 8px
wide. The .tag
doesn't know anything about and should not bother with the context it's in. It's just a tag being placed in a pre-defined layout.
Think of padding
as bubble wrap inside a package. It's part of the package, but not the contents.
Padding is just whitespace that is part of the element itself and prevents the content from being too tightly packed.
Padding gets applied to "content elements."
Think of margin as a restraining order. You have to be a minimum distance away.
I don't like to apply margin
to "content elements."
Instead, I create a wrapping element that applies the margin.
For example, let's say we have a button at the bottom of a form like this:
<form>
  <input />
  <input />
  ...
  <button type="submit">Submit</button>
</form>
If we want to add a minimum amount of space above the button and any other content, I'd create a wrapper div
around the button to apply that margin.
<form>
  <input />
  <input />
  ...
  <div class="submitButtonWrapper">
    <button type="submit">Submit</button>
  </div>
</form>

<style>
  .submitButtonWrapper {
    margin-top: 16px;
  }
</style>
Technically, you could add the margin to the submit <button>
directly. I don't prefer this though.
In practice, your buttons will usually be abstracted away in a component library that usually prevents style overrides—for good reason.
By creating this wrapper element, the separation of responsibilities is clearer.
One more note for a good use of margin: Typography elements.
Headings and paragraphs are good uses of margin, since there will usually be varying amounts of spacing between these and they usually aren't wrapped in a container where gap
makes sense.
Let's say we have 2 <div>
elements and some CSS like this:
<style>
  div {
    width: 100px;
    height: 100px;
  }
  .blue {
    z-index: 5000;
    background-color: blue;
  }
  .red {
    z-index: 4000;
    background-color: red;
    /* Just to get both divs on top of each other */
    margin-top: -100px;
  }
</style>

<div class="blue"></div>
<div class="red"></div>
Do you know which one will be displayed on top?
…
The answer is… red.
Why?
If you answered blue, you might have thought it was because the z-index
of the .blue
div was higher.
While that is true, z-index
isn't doing anything here.
It's not doing anything because z-index
only works in certain "layout modes."
Because it isn't doing anything, the element that appears in the DOM later is rendered at the top. In this case, since the .red
div element comes later, it will appear above the other.
The default layout mode for most elements is called "flow".
z-index
isn't implemented in the "flow" layout mode.
The MDN docs call this out in the position: static
area which is the default for elements.
But it is implemented in "positioned" layout mode.
To enter positioned layout for an element, we can apply position: relative
, position: absolute
, position: fixed
, or position: sticky
In this case, the easiest way to get z-index
to work is to use position: relative
.
<style>
  div {
    width: 100px;
    height: 100px;
  }
  .blue {
    position: relative;
    z-index: 5000;
    background-color: blue;
  }
  .red {
    position: relative;
    z-index: 4000;
    background-color: red;
    /* Just to get both divs on top of each other */
    margin-top: -100px;
  }
</style>

<div class="blue"></div>
<div class="red"></div>
Since both elements are in the "positioned" layout mode, z-index
will work and the .blue
div will be displayed on top.
If you've instinctively used gap
only when you add display: flex
or display: grid
, you probably knew this without realizing it.
Why don't you just add gap
without adding display: flex
?
Because gap
isn't implemented in the default layout mode—flow.
But it is in the "flex" layout mode! Which you get when you do display: flex
.
top
, left
, right
, bottom
only works in "positioned" layout mode.grid-template-areas
only works in "grid" layout mode.The final common mistake is overusing only display: grid
or only display: flex
.
Although many display: flex
cases can use display: grid
and vice-versa, each of these excel in their own areas.
CSS Grid is great for 2-dimensional layouts. I often see good CSS grid use cases when building a page-level layout structure.
For example, Una Kravetz shares how to build the classic holy grail layout using CSS Grid.
.parent {
  display: grid;
  grid-template: auto 1fr auto / auto 1fr auto;
}

header {
  grid-column: 1 / 4;
}

.left-side {
  grid-column: 1 / 2;
}

main {
  grid-column: 2 / 3;
}

.right-side {
  grid-column: 3 / 4;
}

footer {
  grid-column: 1 / 4;
}
Flexbox is great for stacking items side by side or on top of each other in 1 dimension.
A great example of this is something like nav items that should be equally spaced apart.
Here is an example on Tailwind's home page.
Here's one more example on Airbnb.
In general, I use Flexbox more than CSS Grid, but they both have their use cases and it's important to use the correct one for your use case.
To learn more about Flexbox and CSS Grid on a fundamental level, I recommend Josh Comeau's Interactive Flexbox Guide and his Interactive CSS Grid Guide.
These are the top 5 CSS mistakes I've seen in my experience, but I'm sure there are others we can all learn from.
If you have other mistakes you see the most, feel free to drop me a DM on LinkedIn.
You can also check out my newsletter, High Growth Engineer, where I share how to grow faster as a software engineer to 50k+ engineers weekly.
See you there!
Headless UI libraries are a modern trend in web development that focus on providing the underlying logic and functionality for UI components without prescribing a specific visual style. This approach allows developers to build fully accessible UI components according to their design systems without being tied to any predefined styles or aesthetics.
Radix UI, an open source component library, prioritizes fast development, easy maintenance, and accessibility. With 32 components available, it streamlines development by removing the need for configuration, allowing developers to quickly incorporate its components into their projects.
By the numbers (accurate as of 26th Feb 2024):
Next, Headless UI provides completely unstyled and fully accessible UI components.

The diverse array of elements in Headless UI is meticulously crafted to function effectively within the system, making it an excellent option for individuals who prioritize developing custom, inclusive interface designs with a distinct visual aesthetic.
By the numbers (accurate as of 26th Feb 2024):
React Aria is a library of React Hooks that provides accessible UI primitives for your design system. There are over 40 components with built-in behavior, adaptive interactions, accessibility, and internationalization, ready for your custom styles. It provides an excellent user experience, notably with functions such as screen reader assistance, keyboard controls, focus handling, language translations, and additional features.
By the numbers (accurate as of 26th Feb 2024):
In addition, Ariakit is an open-source library that provides unstyled, primitive components for building accessible web apps, design systems, and component libraries with React. It contains a collection of components that handle accessibility, state management, and keyboard interactions, while leaving the styling and composition to the user.
By the numbers (accurate as of 26th Feb 2024):
Ark UI is a headless library known for building reusable, scalable design systems that works for a wide range of JS frameworks. The components are unstyled, declarative, accessible, and reliable, allowing for a delightful development experience. It supports multiple platforms and applications, helps create a consistent design system across them and uses state machines to ensure predictable and bug-free behavior in every component.
By the numbers (accurate as of 26th Feb 2024):
Furthermore, Reach UI is a collection of accessible, React-based UI components for building design systems. This prioritization of accessibility results in user-friendly components that enhance inclusivity in web applications. Reach UI's components adhere to accessibility best practices, simplifying the process for developers to build applications suitable for all users, including those who depend on assistive technologies.
By the numbers (accurate as of 26th Feb 2024):
The headless UI library ecosystem for React showcases a vibrant collection of tools designed to meet a wide range of web development needs. From accessibility and ease of use to customization and scalability, each library offers distinct advantages, reflecting the industry's shift towards more adaptable and inclusive web design practices. As web development continues to advance, the flexibility and user-focused design of headless UI libraries will remain essential for shaping the future of digital interfaces.
Open source design systems by tech companies provide a wealth of knowledge and insights into best practices, innovative solutions, and collaborative processes between designers and developers. Here are some of our favorites from popular tech companies:
Gestalt by Pinterest is a design system designed to enhance UI consistency and quality. Its set of components and rules are tailored to fit Pinterest's specific look and how it works. It's also great at creating easy-to-use interfaces, paying special attention to making sure everyone can use them easily, no matter their abilities.
By the numbers (accurate as of 26th Feb 2024):
Material Design by Google is a system for design that helps in developing consistent, visually pleasing, and effective UI for various platforms and devices. It includes grid-based layouts, responsive animations, transitions, and many more. All these are made by using a minimalist design approach.
By the numbers (accurate as of 26th Feb 2024):
Next, Blueprint by Palantir is a collection of reusable components, interactive documentation, and accessibility features for the web, built on React. Its emphasis on growing easily and being interactive makes it especially good for big company apps in fields that need to handle a lot of complex information.
By the numbers (accurate as of 26th Feb 2024):
Polaris, created by Shopify, is a design system that assures a consistent user experience on its platform. It has detailed guidelines on various components that follow Shopify's design standards. This allows for it to have easy-to-use and readily available shopping experiences for both sellers and buyers.
By the numbers (accurate as of 26th Feb 2024):
Lightning by Salesforce provides ready-to-use HTML and CSS UI elements that can be used to develop Salesforce experiences. It explains the visual design values and attributes that ensure branding and UI consistency at scale.
By the numbers (accurate as of 26th Feb 2024):
Primer by GitHub contains guidelines, principles, and patterns for designing UI at GitHub, with the aim of ensuring consistency and usability across GitHub's platforms. It includes design guidelines, components, and tools that reflect GitHub's specific needs for collaborative coding environments, which is particularly important to ensure clarity and efficiency in complex interfaces.
By the numbers (accurate as of 26th Feb 2024):
Spectrum, Adobe's design system, is engineered to enhance user experience across its diverse platforms. By focusing on flexibility and creativity, it delivers interfaces that are easy to use and visually polished, setting a high bar for how user interfaces should look.
By the numbers (accurate as of 26th Feb 2024):
Carbon by IBM is an open source design system for products and digital experiences, based on the IBM Design Language. With its comprehensive set of design guidelines and development components, Carbon is designed to facilitate the creation of intuitive and efficient user interfaces, particularly for complex enterprise solutions requiring scalability and integration.
By the numbers (accurate as of 26th Feb 2024):
Ring UI by JetBrains is a collection of UI components for web-based products built inside JetBrains or third-party plugins for JetBrains’ products. It's designed to help developers build consistent, responsive, and attractive interfaces quickly. Ring UI emphasizes productivity and user experience, reflecting JetBrains' expertise in development tools.
By the numbers (accurate as of 26th Feb 2024):
In addition, Base Web by Uber is a React component library that offers a robust suite of components out of the box. It offers an extreme level of customization through the Overrides API and configurable Themes. Its scalability and wide-ranging component library make it appropriate for various web applications, such as large-scale enterprise systems or basic web projects.
By the numbers (accurate as of 26th Feb 2024):
Atlassian Design System offers direction, elements, and materials for creating interfaces that match Atlassian's products like Jira and Confluence. It highlights teamwork, variation, and a unique visual design to support Atlassian's dedication to team productivity software.
By the numbers (accurate as of 26th Feb 2024):
Backpack by Skyscanner consists of design tools, components, and guidelines focused on delivering a positive user experience on the Skyscanner website. Its emphasis on modularity, scalability, and accessibility helps designers and developers quickly build cohesive and user-friendly interfaces.
By the numbers (accurate as of 26th Feb 2024):
Fluent by Microsoft is a collection of UX frameworks for building web and mobile applications that look and behave like Microsoft products. It emphasizes motion and material-inspired interfaces for smooth, intuitive user experiences across Microsoft platforms and devices.
By the numbers (accurate as of 26th Feb 2024):
Finally, Protocol by Mozilla is a design system for Mozilla and Firefox websites that provides a common design language, reusable code components, and high level guidelines for content and accessibility. It prioritizes web standards and open-source principles, making it a versatile choice for web projects aiming for clarity and user-friendliness.
By the numbers (accurate as of 26th Feb 2024):
Publicly available design systems from top tech companies are a huge help to both designers and developers. They promote consistency, usability, and innovation in digital interfaces, and each system's distinctive features and philosophy reflect how varied and vibrant the tech industry is today.
One of the best things about React is the rich ecosystem of libraries and tools that help developers build apps quickly. Here, we list some enduring and widely favored React ecosystem libraries.
Next.js, a full-stack React framework, is utilized by some of the biggest companies globally to build top-notch web applications. It offers automatic optimizations, data fetching, routing, server functions, API endpoints, and more, solidifying its position as a robust full-stack React framework.
By the numbers (accurate as of 26th Feb 2024):
Remix is a full stack React framework with a focus on web fundamentals and modern web app UX, enabling developers to build better websites with faster and smoother user experiences. It allows data loading, code splitting, and UI transitions to be handled by the URL segments, resulting in snappy page loads and instant transitions. Furthermore, it supports HTML forms, handles both the server and client side, has built-in error handling, and supports route error boundaries. Its abundance of features allows it to stand out as one of the most impactful React libraries.
By the numbers (accurate as of 26th Feb 2024):
Additionally, React Query is a library that provides declarative, always-up-to-date, auto-managed queries and mutations for data fetching in React applications. It handles caching, background updates, stale data, and more with zero configuration. It simplifies data fetching logic, improves developer and user experiences, supports complex workflows, and works with any backend.
By the numbers (accurate as of 26th Feb 2024):
Docusaurus is a React-based static website generator. It includes customization, localization, versioning, search, and dark mode features that improve the reading experience and simplify documentation creation.
By the numbers (accurate as of 26th Feb 2024):
React Hook Form is a library for building forms in React with less code and better performance, enabling better form state management and validation. It offers a feature-complete API, a constraint-based validation system, a small package size, minimal re-renders, easy adoption, and consistent validation strategies.
By the numbers (accurate as of 26th Feb 2024):
React Router is instrumental in implementing declarative routing within React applications. Its approach simplifies the creation of dynamic, single-page applications, allowing developers to manage navigation and view transitions with ease. The library's popularity and utility in the React ecosystem are evident from its GitHub stars, underscoring its role in shaping modern web application development.
By the numbers (accurate as of 26th Feb 2024):
Redux / React Redux is a predictable state management library that helps JavaScript apps manage state in a consistent, predictable, and testable way. It enables powerful features like undo/redo, state persistence, and more. Its companion tooling, Redux DevTools, can debug and trace state changes, with time-travel and error reporting capabilities. Most importantly, Redux works with any UI layer and has a large ecosystem of add-ons to fit different needs.
By the numbers (accurate as of 26th Feb 2024):
Meanwhile, Framer Motion is an open source, production-ready animation and gesture library for React on the web. Its straightforward syntax, robust capabilities, and optimized performance make it one of the most valuable libraries in the React ecosystem.
By the numbers (accurate as of 26th Feb 2024):
React Testing Library is one of the best choices for testing React components, with an emphasis on testing the way users actually interact with them. It is part of the broader Testing Library family, which provides easy-to-use APIs for frameworks such as Angular, Vue, and React, along with extensions for Cypress and React Native, further demonstrating its versatility and effectiveness.
By the numbers (accurate as of 26th Feb 2024):
React Email consists of high-quality, unstyled components for creating beautiful emails using React. The components are responsive, accessible, customizable, and compatible with most email clients. They also support server-side rendering and CSS inlining, making React Email a valuable tool for building impactful email campaigns.
By the numbers (accurate as of 26th Feb 2024):
Last but not least, React Use is a large library of React hooks, ported from libreact, that provide a wide range of functionality for React components. As reflected in its GitHub stars, it is a valuable resource for developers looking to enhance their applications with ready-made hooks.
By the numbers (accurate as of 26th Feb 2024):
The React ecosystem consists of countless libraries that streamline web development and tackle many different challenges. Each of the libraries mentioned plays a crucial role in improving the quality and efficiency of web app development. With strong community support, the React ecosystem is a dynamic and invaluable resource for developers.
UI component libraries improve developer efficiency by enabling consistent design and code reuse. As one of the top JavaScript frameworks, React offers a wide range of UI component libraries. This article delves into some of the most popular and robust options available.
MUI, previously known as Material UI, provides React developers with a range of free UI tools including a diverse component library, customizable themes, and production-ready components, all in line with Google's Material Design principles. This makes it an excellent option for creating visually appealing and powerful web applications.
By the numbers (accurate as of 22nd Feb 2024):
Ant Design uses CSS-in-JS technology to provide dynamic and mixed theming, and improves application performance through a component-level CSS-in-JS solution. It is commonly used to build complex, large-scale enterprise applications.
By the numbers (accurate as of 22nd Feb 2024):
Shadcn UI consists of beautifully designed components built using Radix UI and Tailwind CSS, which are accessible, customizable, and open source. They cover a wide variety of use cases such as dashboard, tasks, forms, music, and authentication.
By the numbers (accurate as of 22nd Feb 2024):
Next, Chakra UI is a simple, modular, and accessible component library for building React applications. It offers accessible, themeable, composable UI components with light/dark mode support, backed by a strong developer experience and community, making it an excellent choice for developers who prioritize inclusivity and flexibility in their web applications. Its emphasis on composition and ease of styling allows for rapid development of attractive and accessible web interfaces.
By the numbers (accurate as of 22nd Feb 2024):
Mantine is a React components library that offers more than 100 customizable components and 50 hooks for building accessible web applications faster. It supports visual customizations with props, styles overriding, and flexible theming with colors, fonts, shadows, and more – providing developers with a versatile toolkit for building responsive and visually appealing web applications.
By the numbers (accurate as of 22nd Feb 2024):
Additionally, React Bootstrap is a library that replaces the Bootstrap JavaScript with React components, without unneeded dependencies like jQuery. As one of the original React libraries, React-Bootstrap has grown alongside React, making it a great option.
By the numbers (accurate as of 22nd Feb 2024):
Next UI is a beautiful, fast and modern React UI library that provides a plugin to customize default themes, a fully-typed API, and accessibility support. Its focus on design and user experience, along with a developer-friendly attitude, makes it a top pick for web developers.
By the numbers (accurate as of 22nd Feb 2024):
Semantic UI React is the official React integration for Semantic UI, a development framework that helps create beautiful, responsive layouts using human-friendly HTML. Semantic UI React is jQuery-free, declarative, and has a rich set of components and subcomponents. It also supports augmentation, shorthand props, and auto-controlled state.
By the numbers (accurate as of 22nd Feb 2024):
Last but not least, PrimeReact includes advanced components like Data Tables, Trees, and more – offering a vast collection of widgets to build rich user interfaces. Its emphasis on theme customization and a wide array of components makes it suitable for applications demanding a high level of visual customization and functionality.
By the numbers (accurate as of 22nd Feb 2024):
React UI component libraries in 2024 offer a variety of choices to meet different development needs and design preferences, catering to visual appeal, performance, and inclusivity. Their popularity, reflected in GitHub stars and npm downloads, highlights their importance in the React ecosystem. As web development trends evolve rapidly, these libraries will likely continue to evolve alongside them and remain an important part of building web applications.
One of the most effective ways to grow as a developer is by studying real-world projects. In this article, we've curated a list of extensive Next.js projects for you to dive into and dissect. By exploring the architecture and codebase of these large-scale web applications, you'll gain invaluable insights into best practices, project structure, and advanced techniques.
Whether you're a beginner looking to understand the fundamentals or an experienced developer aiming to refine your skills, these projects serve as valuable resources to elevate your proficiency and tackle complex challenges with confidence.
Supabase is an open-source alternative to Firebase that offers a full PostgreSQL database, real-time functionality, simplified authentication, seamless storage integration, and additional features. It empowers developers to build scalable and secure web applications while maintaining compatibility with existing tools and extensions.
Beyond the basics, Supabase offers additional features like vector embeddings (useful for AI and semantic search), real-time subscriptions, edge functions (serverless compute), migration guides, project management tools, a command-line interface (CLI), and integrations with other services.
By the numbers (accurate as of 15th Feb 2024):
Cal.com is an open-source scheduling tool that allows users to control their own data, workflow, and appearance. It is a successor of Calendly that is self-hosted or hosted by Cal.com, Inc. and can be deployed on the user's own domain.
What's great about Cal.com is that it integrates with various services such as Google Calendar, Zoom, Daily.co, HubSpot, and more. It also supports customization, white-labeling, and API access. It has a built-in app store where users can add or remove integrations as they wish.
By the numbers (accurate as of 15th Feb 2024):
Infisical is an open-source secret management platform that teams use to centralize their secrets like API keys, database credentials, and configurations. It offers a user-friendly dashboard, client SDKs, CLI, API, native integrations, Kubernetes operator, agent, self-hosting, secret versioning, role-based access controls, secret scanning, and more. The wide range of features offers ample opportunities for learning and experimentation within the Next.js ecosystem.
By the numbers (accurate as of 15th Feb 2024):
Dub.co is an open-source link management tool for modern marketing teams to create, share, and track short links. Notably, you can host Dub.co on your own server for more control over your data and design, which also makes it a great codebase to experiment with Next.js.
By the numbers (accurate as of 15th Feb 2024):
Compared to alternatives, Twenty stands out by offering full control and freedom: you can contribute, self-host, and fork it, breaking away from vendor lock-in and helping shape the open future of CRM. It also prioritizes data accessibility and visualization from various sources without retrofitting, and provides an effortlessly intuitive interface inspired by Notion.
By the numbers (accurate as of 15th Feb 2024):
Inbox Zero is an open-source email app that helps users reach inbox zero fast with AI assistance. Its unique feature is that it involves AI to help users manage their email subscriptions, automate their replies, block cold emails, and analyze their inbox.
What's more, the AI lets users instruct the app in plain English to reply, forward, or archive emails based on certain rules. Users can also use planning mode to review the AI's suggestions before applying them.
By the numbers (accurate as of 15th Feb 2024):
Rallly is a web application that allows users to schedule group meetings with friends, colleagues, and teams by creating meeting polls based on participants' availability. Participants don't even need to sign up or log in; they simply enter their name and email to join a poll. This gives Rallly several advantages over alternatives, including its simplicity, privacy, and customization.
By the numbers (accurate as of 15th Feb 2024):
Formbricks is a free and open source survey platform that allows users to gather feedback from various channels and integrate with other tools. Users can create surveys with a no-code editor, launch and target surveys to specific user groups, invite team members to collaborate, and leverage Formbricks Insight Platform or build their own data analysis capabilities.
By the numbers (accurate as of 15th Feb 2024):
Civitai is an open-source platform where people can share, collaborate on, and learn from each other's Stable Diffusion models, which are used to customize AI image generation. Users can also leave comments and feedback on each other's models to facilitate collaboration and knowledge sharing.
By the numbers (accurate as of 15th Feb 2024):
Plane is an open source project management tool that helps you track your issues, epics, and product roadmaps in the simplest way possible. As a platform used by over a thousand companies across countries, it offers a minimalist and intuitive interface, a powerful query language, a flexible workflow system, and integrations with popular tools like GitHub, Slack, and Figma.
By the numbers (accurate as of 15th Feb 2024):
Last but not least, Daily.dev is a professional network for developers to learn, collaborate, and grow together. Its features include customizing your feed, bookmarking articles, syncing across devices, and joining a community of developers.
Its web app utilizes Next.js' incremental static regeneration feature to deliver pages fast, which makes it interesting to explore. This is used to deliver the latest programming news from top tech publications on any topic you want, so you can stay updated on the latest trends, learn new skills, and discover new opportunities in the tech industry.
By the numbers (accurate as of 15th Feb 2024):
This collection of large-scale Next.js projects offers a rich learning environment for those seeking to deepen their understanding and hone their skills. By dissecting and exploring these real-world applications, we can gain valuable insights into advanced techniques, project structures, and best practices. Whether you're looking to master the fundamentals or refine your expertise, immersing yourself in these projects provides a hands-on opportunity to elevate your proficiency and tackle complex challenges with confidence. So, dive in, explore, and embark on a journey of continuous learning and growth within the Next.js ecosystem!
As developers, we often wonder what makes a web page deliver a good user experience. The problem arises when we try to quantify it. Enter 'Core Web Vitals', a set of key metrics from Google that provide unified guidance on quality signals. They measure real-world user experience across loading performance, interactivity, and visual stability.
In this short article, we will summarize the most important things you should know about these metrics and provide you with links to learn more should you desire.
The first metric to consider is the Largest Contentful Paint (LCP). LCP is a metric that measures the loading performance of a page. Specifically, it marks the point in the page load timeline from a user's perspective when the main/largest content (such as images or videos) has likely loaded.
LCP is measured in seconds, and anything under 2.5 seconds is considered good. It is crucial because it represents how quickly users can access the main content of a web page; the faster it is, the better the user engagement, satisfaction, and retention. To monitor it, tools such as Google's PageSpeed Insights or Lighthouse can be used.
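In the field, LCP can also be observed directly with the standard PerformanceObserver API; this is a rough sketch rather than a full analytics integration:

```js
// A minimal sketch: observing LCP entries in the browser.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  // The most recent entry is the current LCP candidate.
  const lastEntry = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lastEntry.startTime);
});

lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```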
LCP is affected by several factors: Server response times, network latency, render-blocking resources, and the complexity of the web page's layout. Optimizing these factors can help improve LCP and overall loading performance.
Examples of such optimization techniques include:
First Input Delay (FID) is another important metric, assessing the responsiveness of a web page. Unlike LCP, FID evaluates interactivity by quantifying the time between a user's first interaction (such as clicking a button or tapping a link) and the moment the browser is able to begin processing that interaction.
A low FID (under 100 milliseconds) indicates that the page is highly interactive and responsive to user input, contributing to a positive user experience. Similar to LCP, FID is measured using performance monitoring tools like Google's PageSpeed Insights, Lighthouse, or web analytics platforms.
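In the field, FID can be approximated with a PerformanceObserver on first-input entries; this is a simplified sketch, not a full analytics integration:

```js
// A minimal sketch: measuring the delay of the first user input.
const fidObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Time between the user's input and when the browser could
    // begin processing the event handler.
    const delay = entry.processingStart - entry.startTime;
    console.log('FID (ms):', delay);
  }
});

fidObserver.observe({ type: 'first-input', buffered: true });
```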
As FID is affected by code execution time, main thread activity and the presence of long tasks that block the browser's responsiveness, the following methods will help in improving it:
Lastly, Cumulative Layout Shift (CLS) measures visual stability. It is the sum of the individual layout shift scores (each ranging from 0 to 1) for every unexpected shift that happens while the page remains active. Layout shifts occur when visible elements on a page unexpectedly move to a different position, causing content to reorganize and disrupting the user experience. A low CLS (under 0.1) means that the page's layout remains stable and consistent, with minimal disruption for users.
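A simplified way to observe layout shifts is shown below; note that the official CLS definition groups shifts into session windows, which this sketch omits:

```js
// A minimal sketch: accumulating layout-shift scores.
let cumulativeShift = 0;

const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Shifts that occur right after user input are excluded.
    if (!entry.hadRecentInput) {
      cumulativeShift += entry.value;
    }
  }
  console.log('Layout shift total so far:', cumulativeShift);
});

clsObserver.observe({ type: 'layout-shift', buffered: true });
```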
CLS is affected by the following factors:
Hence, to reduce CLS, developers can try the following techniques:
Understanding and optimizing Core Web Vitals is essential for developers looking to improve user experience and website performance. Multiple studies show that only about 40% of web pages pass these metrics. By focusing on them, developers can quantify how their pages perform and give users a great experience while visiting the website.
Tailwind CSS' "write only atomic classes in your HTML approach" has long been regarded as a controversial and radical approach to styling since its inception. Over the years, the industry has started to see its benefits and Tailwind has since gained immense popularity.
In this article, we explore the diverse range of free UI components and libraries that leverage Tailwind CSS, empowering front end developers to build modern and aesthetically pleasing user interfaces without compromising on efficiency or flexibility.
Whether you're a seasoned developer or just venturing into the realm of Tailwind-powered design, this article is your compass to discover the most valuable and freely available UI component libraries that can elevate your websites to new heights.
Sailboat UI is a modern UI component library specifically designed for Tailwind CSS. It offers over 150 open source components, and uses Alpine.js for some interactive components. However, the important part of the examples is the Tailwind classes used, so you can easily integrate it with your favorite headless UI library in a framework of your choice.
What is great about Sailboat is that it offers a few variants of each component. For example, Accordions are offered in multiple styles like simple, bordered, with background, etc, so you can choose a variant that suits your theme and brand.
By the numbers (accurate as of 18th Jan 2024):
HyperUI is a collection of free copy-pastable Tailwind CSS components that has been around since 2021. It is very similar to Tailwind UI in that both contain components and sections for application, marketing, and e-commerce websites. Since it's pure copy pasting of HTML/CSS, there's nothing to install and you can integrate it with any headless UI library.
By the numbers (accurate as of 18th Jan 2024):
Preline stands out for its vast library of components and sections. At over 60 components and 170 sections, it is the largest free set of Tailwind component examples out there. The examples are all very well built and look very polished. Last but not least, all the examples fully support dark mode.
By the numbers (accurate as of 18th Jan 2024):
daisyUI is one of the first Tailwind-based UI component libraries in existence. The library takes a unique approach where it provides a Tailwind plugin that injects its own higher-level classes that are composed of Tailwind utility classes. This somewhat defeats the purpose of Tailwind's atomic classes approach as developers are using pre-built CSS classes, similar to Bootstrap. Nevertheless, with 28.5k stars, it's a wildly popular library that boasts a vast library of over 50 components with multiple variants and classes.
By the numbers (accurate as of 18th Jan 2024):
While many libraries are aimed at more general marketing and e-commerce usage, Tremor is focused on dashboard and data visualization components. Beyond standard UI components such as accordions, buttons, and forms, Tremor includes multiple visualization components – area chart, bar chart, donut chart, line chart, scatter chart, you name it, they have it.
By the numbers (accurate as of 18th Jan 2024):
Despite the name, NextUI has nothing to do with Next.js, the popular React metaframework. Since v2.0, Next UI is built on top of Tailwind CSS and React Aria and is one of the fastest growing UI libraries based on Tailwind. Like Tremor, to use Next UI, import React components from the npm package @nextui-org/react and use the components within your application.
By the numbers (accurate as of 18th Jan 2024):
If there was one library that took the world by storm in 2023, it'd be shadcn/ui, which is built on top of Tailwind CSS and Radix UI. While many other Tailwind UI libraries make developers copy the HTML/CSS or install the library as an npm dependency, shadcn takes an approach where you copy and paste any required component code into your projects; you have full ownership and control over the code. This approach has both merits and drawbacks. While you have full control over the components, the obvious downside is that you won't receive updates automatically and have to remember to sync any changes. This approach is best used by teams who have strong front end developers who have a need for customizing component appearance as well as the capability to maintain the components in the long term.
By the numbers (accurate as of 18th Jan 2024):
Park UI is a relatively new Tailwind library but is a strong contender to shadcn/ui. Park UI's usage approach is similar to shadcn/ui's.
Where shadcn/ui is coupled with Tailwind and Radix UI, Park UI is way more versatile. It has first class support for both Tailwind CSS and Panda CSS (another atomic CSS library that uses a JavaScript-first approach) and also integrates with React, Vue, and Solid via Ark UI, a headless UI library that has first party implementations with React, Vue, and Solid.
By the numbers (accurate as of 18th Jan 2024):
Pines UI is a UI library built on top of Tailwind CSS and Alpine.js that offers multiple variants for each component. With black as its primary color, the default appearance of the components closely resembles shadcn/ui and Park UI.
By the numbers (accurate as of 18th Jan 2024):
Last but not least, we have Aceternity UI. Built by the amazing Manu Arora, Aceternity UI is unlike the typical common UI component libraries introduced above. It is a collection of bespoke and carefully crafted UI sections built on top of Tailwind CSS and Framer Motion.
Some of our favorite examples include the Hero Parallax, 3D Card Effect, and the Wavy Background. It's an extremely fast way to elevate your website's design and add a "wow" factor! We recommend using it alongside another UI component library of your choice.
By the numbers (accurate as of 18th Jan 2024):
Now that we know how Meta writes JavaScript and CSS, let's see how they're used to build UI components.
Naturally, the company that created React builds all of its applications using React; you won't find Angular, Vue, or Svelte in Meta's codebase, and Meta is all-in on React. By standardizing on a single UI framework, tooling, knowledge, and expertise can be shared and transferred more easily. Moreover, any issues faced by Meta engineers can be solved in-house by consulting the React team, and longer-term, larger issues can be addressed in React's roadmap.
To easily build and test React components, Meta engineers can write example files for these components to demonstrate how the component looks and behaves when using a certain combination of props, similar to stories in Storybook.
All React components in the codebase that have example files are easily searchable within an internal tool and can be interacted with right within the browser. It is super handy for discovering common components and seeing how they are being used.
Unit testing of component interaction and behavior can be done using Jest and React Testing Library. Snapshot tests and screenshot tests can also be conveniently generated from the component examples.
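For illustration, a typical test of this kind might look like the sketch below; the Counter component and its labels are hypothetical, not something from Meta's codebase:

```jsx
// A hedged sketch of a unit test using Jest and React Testing Library.
import { render, screen, fireEvent } from '@testing-library/react';
import Counter from './Counter'; // hypothetical component

test('increments the count when the button is clicked', () => {
  render(<Counter />);

  // Interact with the component the way a user would.
  fireEvent.click(screen.getByRole('button', { name: /increment/i }));

  // getByText throws if the element is not found.
  expect(screen.getByText('Count: 1')).toBeTruthy();
});
```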
Each product Meta builds has its own design system with a React implementation of the UI components. Many UI components (e.g. buttons, dropdowns, modals, etc) have the same underlying logic, functionality, and accessibility features built in and that layer can be shared between UI components belonging to different design systems. Each design system only has to care about customizing the appearance. This shared logic and accessibility layer is known as headless UI components. Naturally, Meta has built its own headless components to be reused across product design systems.
Companies looking to accelerate their UI components development should pick up one of the popular headless UI component libraries. For React, there's Radix UI, React Aria, and Aria Kit.
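For example, Radix UI ships unstyled primitives that handle behavior and accessibility while leaving appearance entirely to your design system. This is a rough sketch of the pattern, with class names left as placeholders:

```jsx
// A hedged sketch using Radix UI's headless Dialog primitives.
import * as Dialog from '@radix-ui/react-dialog';

export default function ConfirmDialog() {
  return (
    <Dialog.Root>
      <Dialog.Trigger>Delete item</Dialog.Trigger>
      <Dialog.Portal>
        {/* Styling is entirely up to the consuming design system. */}
        <Dialog.Overlay className="overlay" />
        <Dialog.Content className="content">
          <Dialog.Title>Are you sure?</Dialog.Title>
          <Dialog.Description>This action cannot be undone.</Dialog.Description>
          <Dialog.Close>Cancel</Dialog.Close>
        </Dialog.Content>
      </Dialog.Portal>
    </Dialog.Root>
  );
}
```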
Meta also has internal tools to gain insights on how big a component's file size is (bytes) and analytics, e.g. which modules are depending on the component (importing the component) and how many times they are being used in the codebase. This is useful for tracking progress when deprecating components.
In a monorepo, it becomes all too easy to import components from anywhere in the codebase. However, this is undesirable when a component is still under development and not ready for wide use, or when a component is meant to be deprecated. To prevent unwarranted usage, lint rules can flag imports of these components from directories where they are not allowed.
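In the open source world, ESLint's built-in no-restricted-imports rule can approximate this; the paths and message below are purely illustrative:

```js
// .eslintrc.js — a sketch of flagging imports of components that are
// not ready for general use. Scope it with ESLint overrides as needed.
module.exports = {
  rules: {
    'no-restricted-imports': [
      'error',
      {
        patterns: [
          {
            group: ['**/experimental/*'],
            message: 'Experimental components are not ready for general use.',
          },
        ],
      },
    ],
  },
};
```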
An emerging technology trend / workflow is “Design to code”, meaning you turn designs from your designer into code that you can use directly within your app. Since Figma is the industry standard for designing digital products these days, a forward-looking styling approach will be one that's integrated with Figma and can support exporting of UI markup and styles from Figma designs. Companies like Builder.io and Locofy offer such solutions.
Over the years, new UI libraries offering different reactive approaches have emerged (e.g. Qwik and Solid). React may no longer be the most performant library, but in terms of the overall ecosystem, learning resources, financial backing, availability of developers proficient in it, React is still one of the safest choices. Moreover, React is still constantly innovating and improving, with projects like React Server Components and an optimizing compiler. However, it is more important to decide on a company-wide blessed framework that the team is comfortable with, than to choose the most trendy UI framework.
For component examples and visual testing, Chromatic (by the maintainers of Storybook) and Percy are the leading choices available in the market.
Writing CSS at scale, particularly in large and complex projects, can present several challenges. Here are some common problems associated with scaling CSS:
CSS uses a global namespace, which causes many problems at scale when many developers are building into the same web application. Since the 2010s, Meta has been writing CSS using an approach called CSS modules, which solves some of the problems with CSS at scale – global namespace, including styles only on pages that use them, dead code elimination (only including the selectors that are still being referenced), and some others. Meta's CSS modules implementation is not open sourced but bundlers like webpack add support for something similar.
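For readers unfamiliar with the approach, this is roughly what CSS modules look like with a bundler such as webpack; the file names and class names are illustrative:

```jsx
// Button.jsx — a minimal sketch of the CSS modules pattern.
// Button.module.css might contain: .primary { background: blue; color: white; }
import styles from './Button.module.css';

export default function Button({ children }) {
  // styles.primary resolves to a generated, collision-free class name,
  // so two teams can both define a ".primary" class without clashing.
  return <button className={styles.primary}>{children}</button>;
}
```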
In 2014, Christopher Chedeau gave an insightful talk about writing CSS within your JS. In 2018, Meta started their rewrite of facebook.com and announced a new way of writing CSS in JS, which is called StyleX. StyleX's API resembles CSS-in-JS libraries like Aphrodite and Glamor, and has these key features:
The team responsible for rebuilding facebook.com gave a talk about StyleX during React Conf 2019 and also at React Finland. The process of open sourcing StyleX has been a long one and as of Nov 2023, StyleX is finally open sourced. A caveat about StyleX is that it needs extra configuration to work in UI frameworks that use custom file formats like Vue and Svelte.
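Based on the open source release, authoring styles with StyleX looks roughly like the sketch below; treat the exact values as illustrative:

```jsx
// A hedged sketch of the @stylexjs/stylex API: styles are declared
// statically and compiled to atomic CSS at build time.
import * as stylex from '@stylexjs/stylex';

const styles = stylex.create({
  card: {
    padding: '16px',
    borderRadius: '8px',
    backgroundColor: 'white',
  },
});

export default function Card({ children }) {
  // stylex.props resolves the declared styles to class names.
  return <div {...stylex.props(styles.card)}>{children}</div>;
}
```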
In the open source ecosystem, Tailwind CSS is all the rage. Like StyleX, Tailwind uses an atomic CSS approach, so both share the benefit of a CSS stylesheet whose size plateaus. However, since Tailwind CSS is just a set of atomic classes, you can use it within both JavaScript and HTML and even server-side Ruby templates. Out of the box, the default Tailwind config provides a set of predefined classes (similar to design tokens) whereas StyleX does not.
Panda CSS is a new atomic CSS-in-JS library by the creators of Chakra UI and it is the middleground of StyleX (library calls are compiled away, type-safe) and Tailwind (inline style declarations, provides some default styling tokens).
The advent of React Server Components means that a whole category of styling libraries that relied on context for theming and runtime injection without any static stylesheet extraction will cease to work. Thankfully Tailwind and Panda CSS generate styles at build time and are fully-compatible with React Server Components.
If you don't mind seeing multiple classes in your HTML or building only with HTML and CSS (or non-JavaScript templating approach), Tailwind CSS will be my recommendation. If you prefer a JavaScript-first approach for styling and type-safety is important to you, use Panda CSS. Panda CSS is similar to StyleX but has a larger community maintaining it.
Just like how developers can use the latest language features of JavaScript by using Babel, developers can also use the latest CSS features by using a CSS preprocessor like PostCSS to automate routine CSS operations, utilize future CSS syntax, and enhance CSS's functionality.
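A typical setup uses postcss-preset-env to compile future CSS syntax for today's browsers; the stage and features below are just one possible configuration:

```js
// postcss.config.js — a hedged sketch of enabling future CSS syntax.
module.exports = {
  plugins: {
    'postcss-preset-env': {
      // Stage 2 proposals, with an explicit opt-in to nesting rules.
      stage: 2,
      features: {
        'nesting-rules': true,
      },
    },
  },
};
```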
Using Tailwind and Panda removes the need for writing CSS directly, so you might not actually need to use PostCSS in your applications at all. By the way, Tailwind and Panda use PostCSS under-the-hood so you will still be using PostCSS, just not directly.
Because Meta uses StyleX for authoring styles, linting is done in JavaScript, which means ESLint and Flow. StyleX's type-safety features allow components to restrict styling props to be of only certain allowed properties (e.g. margin) and only allowed values (e.g. multiples of 4).
Teams using Tailwind CSS should install the official Tailwind CSS Intellisense plugin which offers autocomplete, linting and previews for a class' underlying styles.
If you're writing plain CSS or CSS modules, stylelint is the recommended linter.
Design tokens are essentially variables used to store design decisions such as colors, fonts, border radii, spacings, animations, and more. These tokens can be simple values like a number, color hex code, or more complex information like a typography scale (font size, line height, letter spacing).
By using design tokens, teams can ensure consistency across their product. As the design system evolves, changes made to the tokens automatically propagate throughout the product, ensuring uniformity and reducing the risk of inconsistencies.
On the web, design tokens can be implemented using CSS variables or an object in JavaScript. Meta uses CSS variables for color tokens so that dark mode is essentially just another set of color tokens. For spacing, border radii, and other kinds of numerical values, tokens are not exposed to developers because most design choices have been abstracted behind UI components built by the product's design system team.
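A minimal sketch of that idea, with made-up token names and values, could look like this:

```css
/* Color design tokens as CSS variables; dark mode is just a second
   set of values for the same tokens. */
:root {
  --color-background: #ffffff;
  --color-text-primary: #1c1e21;
}

[data-theme='dark'] {
  --color-background: #18191a;
  --color-text-primary: #e4e6eb;
}

.card {
  background: var(--color-background);
  color: var(--color-text-primary);
}
```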
In my opinion, UI components don't capture every design requirement and it's useful to make design tokens available for product engineers to use. Your company's product design team would probably already have selected design tokens and used them in their design system.
If you are using Tailwind or Panda CSS, the default configuration includes tokens for spacing, colors, typography, etc but they can be configured for custom design tokens. If you are not using Tailwind or Panda CSS and don't have a design system but want a similar default set of tokens, Open Props offers a good set of design tokens in the form of CSS variables.
An Atomic stylesheet is the recommended approach for styling on the web because the stylesheet size does not grow proportionately to the number of features/developers/products. Tailwind CSS and Panda CSS are modern libraries to achieve atomic stylesheet generation in a futureproof fashion that's compatible with React Server Components and a server-first component era.
As web development continued to evolve, the demand for more advanced and modern features led to the development of ECMAScript 6 (ES6), also known as ECMAScript 2015. Released in June 2015, ES6 introduced a range of new features, including arrow functions, classes, template literals, and destructuring assignments, among others.
Today, JavaScript is a ubiquitous language used not only for client-side web development but also for server-side development (Node.js) and mobile app development (React Native). The language continues to evolve, with ongoing efforts to enhance performance, introduce new features, and address the needs of modern web development.
After the release of ES2015 / ES6, the ECMAScript specification transitioned to a yearly release cycle, allowing for a more agile development process and quicker incorporation of new features. Subsequent releases, such as ECMAScript 2016 (ES7), ECMAScript 2017 (ES8), and so on, brought additional improvements and features to the language.
Using the latest JavaScript language features in web development is crucial for several reasons. First and foremost, it ensures compatibility with modern browsers and leverages performance improvements introduced in newer language versions. This not only enhances the user experience but also aligns development efforts with the evolving standards of the JavaScript ecosystem. Furthermore, modern features contribute to enhanced productivity, readability, and maintainability of code. Syntax improvements, abstractions, and functionalities like arrow functions, async/await, and class syntax make code more concise, organized, and easier to understand. By staying current, developers gain access to a broader ecosystem of APIs, libraries, and community support, fostering a better development experience and future-proofing their code. Keep your developers happy by enabling the latest technologies and language features.
A combination of JavaScript compilers and polyfills allow developers to use the latest ECMAScript features. JavaScript compilers like Babel transpile newer language syntax into an older version while polyfills like core-js provide implementations for newer JavaScript APIs and they work together so that older browsers that do not support the features can still run the code. Google uses their homegrown Google Closure Compiler while Meta uses a combination of Babel and Flow.
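A common way to wire these together in the open source world is via @babel/preset-env with core-js; the browser targets below are illustrative:

```js
// babel.config.js — a hedged sketch of transpiling new syntax and
// injecting core-js polyfills only for the features actually used.
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        useBuiltIns: 'usage',
        corejs: 3,
        targets: '> 0.5%, not dead',
      },
    ],
  ],
};
```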
TypeScript, which offers type checking on top of enabling modern language features, is both a compiler and type checker, so you might not need Babel at all. Read on to find out more.
Type-safety is extremely important in large companies and large codebases. It helps to eliminate an entire class of bugs during development so that they never make it into production.
Meta places such a high importance in type-safety that they have developed type-safe versions of / type checkers for every dynamic-typed language they use:
Other big tech companies like Microsoft have developed TypeScript and Google has developed Dart, which further highlights the importance of using type-safe languages when developing at scale. These languages / type checkers are open sourced and available for all to use.
Beyond catching and preventing type errors, type-safe languages are also easier to read and maintain. In VS Code, you can hover over a symbol to know its type. After all, code is read much more than it is written! The benefits of increased readability and clarity outweigh the learning curve in the long term. IDEs can also improve the developer experience by providing better autocompletion suggestions and showing type errors inline.
Although Flow is open sourced, the Flow team has publicly stated that open source support is not a priority. In the open source ecosystem, there are other alternatives for writing type-safe web applications like ReScript and Reason, which also have their roots in Meta, but they have lost traction in recent years.
There are multiple ways to write and format code, and in a large codebase built by a large team, inconsistent coding styles across different files and developers make the code difficult to read and maintain. This lack of standardization can also hinder collaboration and slow down development, as more time is spent understanding and debugging the code instead of adding new features or improvements.
Linting is the automated process of analyzing source code to detect errors, enforce coding conventions, and identify potential issues. Linters, or linting tools, play a crucial role in improving code quality by catching syntax errors, promoting consistent coding styles, and enforcing best practices. They contribute to error prevention, enhance code readability, and establish a uniform coding style across a project.
Meta uses ESLint with eslint-plugin-flowtype for linting JavaScript. If you're using TypeScript, the recommendation would be ESLint with typescript-eslint. Configuring ESLint rules individually can be a chore, so it is recommended to use popular open source ESLint configs like eslint-config-airbnb as a starting point. Meta's internal ESLint config can be found at eslint-config-fbjs.
It is recommended to add ESLint rules around sorting imports, sorting object keys alphabetically, sorting React component props alphabetically, etc. This reduces the chances of merge conflicts when developers edit the same file. Christoph Nakazawa, ex-manager on Meta's JavaScript infrastructure and React Native team published @nkzw/eslint-config which has a high degree of overlap with Meta's ESLint configuration.
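A sketch of such a configuration is shown below; rule availability depends on the plugins you install (eslint-plugin-simple-import-sort and eslint-plugin-react here), and the exact choices are a matter of team preference:

```js
// .eslintrc.js — sorting-related rules that reduce merge conflicts.
module.exports = {
  plugins: ['react', 'simple-import-sort'],
  rules: {
    'simple-import-sort/imports': 'error',
    'sort-keys': ['warn', 'asc', { natural: true }],
    'react/jsx-sort-props': 'warn',
  },
};
```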
Coding style is another aspect that can be automated and benefits from consistency. Prettier is the de facto choice for formatting. In 2018, Christopher Chedeau ran a format-athon at Meta where he rallied around 20 engineers across the company to format the codebase with Prettier.
Linting should ideally be executed during development so that errors and warnings are reflected within the IDE instantly. The fast feedback loop increases productivity as the developer can fix the issues on the spot while the context is still fresh. Autofix lint issues and format the code on save by adding the following to VS Code's settings.json:
{"editor.formatOnSave": true,"editor.codeActionsOnSave": {"source.fixAll.eslint": true}}
A JavaScript runtime is an environment where JavaScript code is executed. JavaScript was originally designed to be run in the browser, but these days JavaScript can also be run on the server using Node.js, which is the most popular server-side JavaScript runtime built on Google Chrome's V8 engine.
Meta uses Hermes for server-side rendering of React, although Hermes was originally designed to run React Native apps on mobile platforms.
There isn't much of a choice to debate about here. In the wild, most companies use Node.js and it has gotten really good in recent years. Deno and Bun are rising stars in the JavaScript runtime space and both offer TypeScript support out-of-the-box. Deno has a high focus on security by restricting access to sensitive runtime APIs by default, while Bun is extremely fast and includes a package manager, bundler, and test runner.
Deno and Bun are still considered new so the risk is yours to bear if you decide to go with them.
While the JavaScript language is rapidly evolving, over the last decade there have been huge gaps in what the language provides for common product needs like functional utilities, datetime formatting and manipulation, and so on. As a result, a rich ecosystem of utilities and libraries catering to various development needs has emerged:
Engineering leadership or a front end infrastructure team should decide which libraries to use for various purposes. This prevents different libraries serving the same purpose from being shipped in the same app, where users end up paying the cost of downloading the same functionality twice. Resources should also be committed to supporting internal developers facing issues with these libraries and to performing periodic codebase-wide upgrades of the library versions being used.
Finally, the last piece of JavaScript development to discuss is package management. Package management in JavaScript refers to the process of managing, distributing, and installing JavaScript libraries and tools within a project. A package manager is a tool that simplifies these tasks by automating the installation, versioning, and dependency resolution of external code packages.
Back in 2016, Meta created Yarn as an improvement over npm; it introduced the concepts of lockfiles, fast installs, global offline caches, workspaces, and more. Key features of a scalable package manager are locked-down version dependencies and an offline mirror: you don't want to be blocked from deployment if the npm registry goes down. At Meta, node_modules are installed via Yarn v1, scanned for vulnerabilities, and checked into the monorepo.
These days, npm has mostly caught up with Yarn in terms of feature parity and Yarn has also undergone many changes since its initial launch. As of writing, Yarn is at v4, introduces Yarn Plug'n'Play, and is no longer maintained by Meta. Personally, my go-to package manager these days is pnpm because it has the most features, an amazing developer experience, and frequent updates.
TypeScript by Microsoft is now the de facto way to write type-safe web applications. Community support (learning resources, library type declarations) and developer tooling support (linters, IDE integration) for TypeScript is also extremely strong; new JavaScript runtimes like Deno and Bun also support TypeScript out-of-the-box. Even Google uses TypeScript as the primary language for Angular development, a UI framework created by them.
You will benefit from choosing TypeScript, even if the size of your codebase is small.
Meta (previously Facebook) is known for its social media platforms, which are used by a huge number of users and services. Meta's websites include facebook.com, instagram.com, messenger.com, whatsapp.com, threads.net, meta.com, and more, and are used by billions of people globally every month.
Developing these websites, consisting of thousands of pages built by thousands of engineers, is no simple feat. In this article, we provide some insights into how big companies like Meta and Google handle front end development at such a large scale, the tools, methods, and strategies they use, and how growing companies facing the same problems can adopt some of these approaches using alternatives from the open source ecosystem.
Aside from Meta's huge web user base, Meta is also an interesting company to look at because between 2015 to 2020, Meta was the forerunner of modern front end development with the creation of popular open source front end technologies like React, React Native, Flux, Jest, GraphQL, Docusaurus, Relay, Draft.js, Yarn, and more. It's fair to say that Meta had a significant impact on the modern front end ecosystem.
Disclaimer: I am no longer affiliated with Meta and this article does not represent Meta. Opinions and experiences are solely my own. I worked at Meta as a Front End Engineer from 2017 to early 2023 and things might have changed since then.
I am an ex-Meta Staff Engineer who led engineering teams to build meta.com and oculus.com. I was actively involved in the development of Meta's open source projects by creating Docusaurus 2, maintaining (and deprecating) Flux, and making small contributions to Lexical and Relay. Outside of front end development, I have also written multiple technical interview resources like Blind 75, Tech Interview Handbook, and Front End Interview Handbook, which have amassed over 100,000 GitHub stars.
As a company grows, the number of features and number of engineers developing those features increase. Look at your company's front end code base and see:
If there are too many to count, congratulations, you have just identified tech debt. These problems start to occur because:
As a result of the above, teams solve the problems for themselves, because it's much faster to solve their own problem than to solve it for everyone. What can also happen is that someone tries to solve it for everyone, but the solution isn't sufficient and the next person comes along to “improve” the solution for their own use case without understanding the use cases of others and ends up making things worse for everyone, often in subtle unnoticeable ways like performance degradations.
Duplicated code, different approaches to achieving a similar outcome, and unusable existing solutions all contribute to tech debt. At scale and over a long period of time, seemingly small tech debt can compound and lead to disastrous business consequences like bugs, downtime, and developer burnout. Thankfully, even small improvements lead to large benefits at scale when lots of products and people benefit from them.
Effective front end teams at scale score well on the following metrics:
It is important to know the nature of the organization you are in. Different types of organizations have different focuses and you should adapt your approach to the nature of the company and product. Tech debt and engineering scalability is not as important in certain companies for good reasons.
Also be aware of premature optimization. Sometimes what you think is a problem might just be a minor inconvenience that is not actually critical to solve right now. Without a thorough understanding of the problem, solving it prematurely and having to revert the solution later costs more time overall.
As a Front End Engineer who mostly worked in product teams, I'm not all that familiar with deployment infrastructure and the amazing product infrastructure work, so some parts will be glossed over.
A common frustration when preparing for front end interviews is the sheer volume of questions to practice. There are over 100 possible front end interview questions available on GreatFrontEnd. How do you choose the best questions to practice to get the highest return on investment on your time?
Front end interview questions often revolve around a combination of fundamental JavaScript concepts, problem solving skills, layout building skills, and data structures and algorithms. To maximize your efficiency, we recommend practicing the most commonly-asked questions, along with those that touch multiple skills, so that you get all-rounded practice.
In this list we will cover the most important User Interface questions to practice. Let's get started!
Building a Todo list is a practical exercise that evaluates a front end engineer's ability to work with user interfaces, handle state, and perform basic CRUD (Create, Read, Update, Delete) operations. In the most basic form, candidates are asked to create a simple todo list where users can:
It is a common interview question because it reflects real-world scenarios in front end development, where developers often create user interfaces to add new entries to a list, manipulate the list, and remove entries from the list.
Building a well-implemented Todo list demonstrates proficiency in DOM manipulation, event handling, building accessible forms, event delegation, and state management. This is a question that can also have multiple potential follow-up questions:
The ease of successfully implementing the follow-up questions is highly dependent on the quality of the basic implementation, so it's important to get the basic implementation right.
Practice implementing a Todo List on GreatFrontEnd
Tabs are a common UI pattern in web development, making this question practical and applicable to a front end engineer's daily tasks. Typically, candidates are asked to create a tabs UI component with the following requirements:
For such questions, candidates might be asked to implement without using any external libraries or frameworks. Building a well-implemented Tabs component demonstrates proficiency in DOM manipulation, event handling, accessibility, event delegation, state management, and designing component abstractions. This is a question that can also have multiple potential follow-up questions:
Practice implementing a Tabs component on GreatFrontEnd
A "Tic-tac-toe" front-end interview question typically involves creating a web-based version of the classic Tic-tac-toe game using HTML, CSS, and JavaScript. Typically, candidates are asked to create a tabs UI component with the following requirements:
Implementing Tic-tac-toe assesses a candidate's understanding of DOM manipulation, event handling, building a grid-based layout, implementing non-trivial game logic, data structures and algorithms, and state management. Potential follow-up questions include:
Practice implementing a Tic-tac-toe component on GreatFrontEnd
An "Image Carousel" front end interview question typically involves creating a web-based image carousel or slider. The goal is to implement a user interface that allows users to view a sequence of images, navigate between them. More specifically:
For such questions, candidates might be asked to implement without using any external libraries or frameworks. Implementing an image carousel assesses a candidate's understanding of DOM manipulation, event handling, building a slider layout, CSS transitions, and more. There are many ways to implement an image carousel, each with its own pros and cons. Unsurprisingly, this question has tons of possible follow-up questions:
The image carousel interview question, along with its follow-ups, can even evolve into a front end system design question, as there are multiple performance optimizations to discuss, with image optimization being key. It's a great front end interview question to practice.
Read about how to design an Image Carousel on GreatFrontEnd
An "Autocomplete" front end interview question typically involves creating an interactive autocomplete feature for a text input field. The goal is to implement a user interface that suggests and displays possible matches or completions as the user types into the input field.
A simplified Autocomplete front end interview question might have the following requirements:
Implementing an autocomplete assesses a candidate's understanding of DOM manipulation, event handling, asynchronous programming (fetching suggestions from a server), familiarity with CSS positioning, form handling, accessibility, performance, and the list goes on. It is the single most important question you can practice as it literally touches on every possible front end interview topic. Like the image carousel question, there are also many possible follow-up questions:
With the amount of things to talk about, this can also evolve into a front end system design question to have further discussions about what other APIs an autocomplete component can include, performance improvements, etc.
Read about how to design an Autocomplete on GreatFrontEnd
It goes without saying that there are more possible interview questions to practice, but many other interview questions build on top of the skills evaluated by the questions above. If you are short on time, practicing the questions above first will give you the highest return on investment and is highly recommended.
All the best for your interviews!
A common frustration when preparing for front end interviews is the sheer volume of questions to practice. There are over 100 possible front end interview questions available on GreatFrontEnd. How do you choose the best questions to practice to get the highest return on investment on your time?
Front end interview questions often revolve around a combination of fundamental JavaScript concepts, problem solving skills, layout building skills, and data structures and algorithms. To maximize your efficiency, we recommend practicing the most commonly-asked questions, along with those that touch multiple skills, so that you get all-rounded practice.
In this list we will cover the most important JavaScript questions to practice. Let's go!
Debouncing is a crucial technique used to manage repetitive or frequent events, particularly in the context of user input, such as keyboard typing or resizing a browser window. The primary goal of debouncing is to improve performance and efficiency by reducing the number of times a particular function or event handler is triggered; the handler is only triggered when the input has stopped changing.
```js
import { debounce } from 'lodash';

// Example usage for a search input field
const searchInput = document.getElementById('search-input');

const debouncedSearch = debounce(() => {
  // Perform the search operation here
  console.log('Searching for:', searchInput.value);
}, 300);

searchInput.addEventListener('input', debouncedSearch);
```
In this example, the debounce function creates a wrapper function that delays the execution of the provided callback until a certain amount of time has passed since the last input event. This helps optimize performance by preventing unnecessary computations or network requests during rapid user input. Adjust the delay parameter based on the specific requirements of your application.
Debounce is a frequently asked question in front end interviews, especially at big tech companies, because it assesses the candidate's understanding of asynchronous programming, closures, and the this keyword. The basic version of debounce isn't too hard to implement, so you might be asked to implement additional functionality that's available on Lodash's _.debounce:
Or you could also be asked to implement the Throttle function, which is the sister function of Debounce; it's important to understand the differences between them. Throttle is a common follow-up question to Debounce and vice versa.
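For contrast, a bare-bones throttle might look like the sketch below. It invokes the callback immediately and then ignores further calls until the interval has elapsed; production-grade implementations usually also support trailing invocations:

function throttle(func, interval) {
  let isWaiting = false;

  return function (...args) {
    // Drop calls that arrive while we are still inside the interval.
    if (isWaiting) return;

    isWaiting = true;
    func.apply(this, args);
    setTimeout(() => {
      isWaiting = false;
    }, interval);
  };
}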
Practice implementing a Debounce function on GreatFrontEnd
Promise.all()
`Promise.all()` is an important feature in JavaScript for simplifying the code needed to handle multiple asynchronous operations concurrently, especially if there are dependencies between them. It takes an array of promises and returns a new promise that resolves to an array of the results when all of the input promises have resolved, or rejects if any of the input promises reject.
Proficiency with `Promise.all()` is an indicator of a front end engineer's ability to handle complex asynchronous workflows efficiently and manage errors effectively, making it highly relevant to their daily tasks.
const promise1 = fetch('https://api.example.com/data/1');
const promise2 = fetch('https://api.example.com/data/2');
const promise3 = fetch('https://api.example.com/data/3');

Promise.all([promise1, promise2, promise3])
  .then((responses) => {
    // This callback is only run when all promises in the input array are resolved.
    console.log('All responses:', responses);
  })
  .catch((error) => {
    // Handle any errors from any promise
    console.error('Error:', error);
  });
In this example, `Promise.all()` is used to fetch data from three different URLs concurrently, and the `.then()` block is executed only when all three promises have resolved. If any of the promises rejects, the `.catch()` block is triggered.
It is a good question to practice for front end interviews because candidates are often evaluated on their mastery of asynchronous programming and whether they know how to implement polyfills. `Promise.all()` has many sister functions like `Promise.race()` and `Promise.any()`, which can also be asked in interviews, so you kill many birds with one stone by practicing `Promise.all()`.
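As a rough idea of what such a polyfill involves, here is a simplified sketch (named `promiseAll` here so it does not override the native method) that resolves with results in input order and rejects on the first rejection:

function promiseAll(iterable) {
  return new Promise((resolve, reject) => {
    const items = Array.from(iterable);
    const results = new Array(items.length);
    let pending = items.length;

    if (pending === 0) {
      resolve(results);
      return;
    }

    items.forEach((item, index) => {
      // Promise.resolve() lets the input contain plain (non-promise) values too.
      Promise.resolve(item).then((value) => {
        results[index] = value;
        pending -= 1;
        if (pending === 0) {
          resolve(results);
        }
      }, reject);
    });
  });
}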
Practice implementing `Promise.all()` on GreatFrontEnd
In JavaScript, a deep clone function is used to create a new object or array that is entirely independent of the original object/array. This means that not only the top-level structure is cloned, but all nested objects and arrays within the original structure are also duplicated. In other words, changes made to the cloned object do not affect the original, and vice versa.
// Original user.
const user = {
  name: 'John',
  age: 42,
};

// Create a deep clone of the user.
const clonedUser = deepClone(user);

// Modify the cloned user without affecting the original.
clonedUser.name = 'Jane';

// Output the original and cloned user. The original is not affected.
console.log('Original User Data:', user); // { name: 'John', age: 42 }
console.log('Cloned User Data:', clonedUser); // { name: 'Jane', age: 42 }
While not as frequently asked in interviews as some other questions, deep cloning showcases one's understanding of recursion and the various data types in JavaScript – how to recursively traverse an arbitrary object and process each encountered type.
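A simplified `deepClone` that handles plain objects and arrays might look like the sketch below. Note that it does not cover types like `Date`, `Map`, `Set`, or cyclic references, which a thorough interview answer should at least discuss:

function deepClone(value) {
  // Primitives and functions are returned as-is.
  if (value === null || typeof value !== 'object') {
    return value;
  }

  if (Array.isArray(value)) {
    return value.map((item) => deepClone(item));
  }

  // Plain objects: clone each own enumerable property recursively.
  const result = {};
  for (const key of Object.keys(value)) {
    result[key] = deepClone(value[key]);
  }
  return result;
}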
Practice implementing the Deep Clone function on GreatFrontEnd
An Event Emitter class in JavaScript is a mechanism that allows objects to subscribe to and listen for events, and to emit events when certain actions or conditions occur. It facilitates the implementation of the observer pattern, where an object (the event emitter) maintains a list of dependents (observers) that are notified of changes or events. In fact, EventEmitter is part of Node.js' API.
// Example usage
const eventEmitter = new EventEmitter();

// Subscribe to an event
eventEmitter.on('customEvent', (data) => {
  console.log('Event emitted with data:', data);
});

// Emit the event
eventEmitter.emit('customEvent', { message: 'Hello, world!' });
Implementing an `EventEmitter` class involves knowledge of object-oriented programming, closures, the `this` keyword, and data structures and algorithms fundamentals. Possible follow-up questions include an alternative unsubscribing API.
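A minimal sketch of such a class could look like the following. The `off()` method here is just one possible unsubscribing API; a common follow-up is to return a subscription object with an `unsubscribe()` method from `on()` instead:

class EventEmitter {
  constructor() {
    this.listeners = new Map();
  }

  on(eventName, callback) {
    if (!this.listeners.has(eventName)) {
      this.listeners.set(eventName, []);
    }
    this.listeners.get(eventName).push(callback);
    return this;
  }

  off(eventName, callback) {
    const callbacks = this.listeners.get(eventName) ?? [];
    this.listeners.set(
      eventName,
      callbacks.filter((cb) => cb !== callback),
    );
    return this;
  }

  emit(eventName, ...args) {
    const callbacks = this.listeners.get(eventName) ?? [];
    callbacks.forEach((callback) => callback(...args));
    return this;
  }
}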
Practice implementing an Event Emitter on GreatFrontEnd
`Array.prototype.filter()` is a built-in method in JavaScript that allows you to create a new array containing elements that satisfy a certain condition. It iterates over each element in the array and applies a callback function to determine whether the element should be included in the filtered array.
// Example: Filtering out even numbers from an array
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

const oddNumbers = numbers.filter(function (number) {
  return number % 2 !== 0;
});

console.log(oddNumbers); // Output: [1, 3, 5, 7, 9]
`Array.prototype.filter()` is a commonly asked question in front end interviews by big tech companies, along with its sister functions `Array.prototype.map()`, `Array.prototype.reduce()`, and `Array.prototype.concat()`. Modern front end development favors functional programming style APIs like `Array.prototype.filter()`, and it is also an opportunity for candidates to demonstrate their knowledge of prototypes and polyfills. It seems easy on the surface, but there's much more to it:
How should the callback's `this` value be handled?

Nailing these edge cases is paramount for acing front end interviews.
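As an illustration, a simplified polyfill (named `myFilter` here to avoid clobbering the built-in) that accounts for sparse arrays and the optional `thisArg` parameter might look like this:

Array.prototype.myFilter = function (callbackFn, thisArg) {
  const result = [];

  for (let i = 0; i < this.length; i++) {
    // Skip holes in sparse arrays, like the native implementation does,
    // and forward thisArg as the callback's `this` value.
    if (Object.hasOwn(this, i) && callbackFn.call(thisArg, this[i], i, this)) {
      result.push(this[i]);
    }
  }

  return result;
};

console.log([1, 2, 3, 4].myFilter((n) => n % 2 !== 0)); // [1, 3]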
Practice implementing the `Array.prototype.filter()` function on GreatFrontEnd
If you're short on time, we recommend practicing these 5 questions, which takes around 3 hours.
What do you think of these questions? Do you think there are other important questions to cover?
Front End Engineering is a nuanced field within Software Engineering, and not all companies place equal emphasis on it or even have a need for it. Choosing the right company is a pivotal decision that can shape the trajectory of a Front End Engineer's career.
It calls for a careful evaluation of potential employers to ensure alignment with one's skills, aspirations, and work preferences. From the nature of the business to the technology stack to growth opportunities, the considerations are diverse and significant. In this post, we highlight key factors and essential questions that Front End Engineers should explore when evaluating companies. By delving into these factors, Front End Engineers can make informed decisions that not only resonate with their expertise but also contribute to a fulfilling and successful career.
Here are some of the factors you should be considering when joining a company as a Front End Engineer:
Let's dive into each factor.
How important is the front end web product to the success of the company? Is web the primary medium in which users consume the product or is web just used for internal dashboards?
If front end web is important to the company, the company will likely want to hire the best front end engineering talent and compensate them well. As a Front End Engineer, it's also important to join such companies because if/when recession strikes, the less important departments will be the first to be let go. Ideally, you'd want to be in the important departments, be it recession or not.
Web engineering is important to companies like Figma where the core product is within the browser or a native desktop app. On the other hand, web is not as important to a ridehailing company where users are using native mobile apps to book rides.
What are the primary revenue streams for the business? What is the breakdown of revenue per platform?
It goes without saying that companies will allocate more resources to revenue-generating platforms.
Other questions to ask:
What technology stack is the company on? Does the company bother to keep up with modern front end technologies or are the company's products still on jQuery and Bootstrap 3 with no intention to migrate?
You don't want to be stuck working on legacy technologies and stop improving your skills at your new job. A stack that includes modern front end technologies can offer more relevant experience and contribute to your professional growth.
Good companies should strive to keep their technologies updated as it improves the product quality and helps their employees stay relevant with industry needs.
What are scalability and performance challenges the company has faced? Describe the technical challenges faced by the front end teams.
If the technical challenges/bottlenecks are mainly on the back end, then the back end is likely more important to the company than the front end in the long run. Consequently, the seniority composition of engineers will skew towards the back end, and there will be limited growth for front end engineers due to the lack of complex front end work available.
What is your approach to front end testing and quality assurance?
If front end products are not tested thoroughly, the company is either small, or the front end products are not as important.
Other questions to ask:
How are engineering teams organized? Are there both front end engineers and back end engineers in the same team or are they in separate teams? How much back end engineering does a front end engineer have to do?
Both ways of team organization can work, but you'd want to avoid a situation where you are the only front end engineer on a large team primarily made up of back end engineers, where there are few seniors to learn front end from. You might also be a victim of "bait and switch", where after you join a team, there isn't that much front end work to do and you end up spending most of your time doing back end work. That's not what you were hired to do in the first place; some might like it, but most don't.
Is there a front end infrastructure team that takes care of common front end engineering needs across the company such as design system components, linting, build, performance, package upgrades, etc?
If a medium/large-sized company does not have a dedicated front end infrastructure team, there is a good chance that the company does not have too many front end products to warrant the need for a front end infrastructure team. Another reason is that front end engineering quality is not important to the company.
Other questions to ask:
How much does leadership value front end engineering? How important is brand identity and design to the company?
The former can be hard to find out as an outsider but you can roughly tell from the quality and aesthetics of the products. This is an important question to find out because leadership decides where to allocate resources to and the more important departments have a higher priority.
What is the background of the leaders (especially the technical ones)?
Leaders that were more product and design focused will tend to prioritize front end engineering quality.
Other questions to ask:
Is there a dedicated front end interview process or is the process the same as for general software engineers (back end heavy)?
A dedicated front end interview process allows companies to assess candidates more thoroughly and accurately for front end engineering roles. It ensures that the evaluation is aligned with the specific skills and challenges associated with front end development, contributing to effective hiring decisions and the overall success of the engineering team.
If there isn't a dedicated front end interview process, it could be a sign that leadership does not understand front end engineering well and that there are no front end engineers in leadership roles. A front end interview process that does not test the right skills can result in non-optimal hires, which brings down the company's quality of front end engineering.
Other questions to ask:
Does the company pay front end engineers the same as back end / full stack / generalist software engineers?
If a company pays different specializations of software engineering differently, it's an indication that some specializations are not as important as the rest, a.k.a. second-class citizens. Many companies like Meta and Google pay Front End Engineers as much as Software Engineers. Don't settle for less.
Even if a company pays Front End Engineers the same salary, the offer salary shouldn't be the only factor you consider. You should also consider longer-term aspects such as career progression and how feasible it is to grow to senior, staff levels and beyond.
How does the career ladder for a front end engineer differ from a back end engineer? How high is the career ceiling for a front end engineer at the company? What level can you attain by focusing on front end development? What is the composition of backgrounds of the high-level engineers at the company?
The career ladder for a front end engineer and a back end engineer can vary between companies, and the specific titles and levels may differ. The career ceiling for a front end engineer can be quite high in companies that have complex client products (e.g. Figma, Google Docs, VS Code, etc) or value the importance of user experience and interface design.
In many other companies, the career ceiling for front end engineers is much lower. For such companies, front end engineers have to look beyond front end work and contribute strategically to the organization's goals in order to rise up the ranks.
Companies which have high career ceilings for Front End Engineers (without them having to go into management) include Google, Meta, Microsoft, Airbnb, and Figma.
As mentioned earlier, look out for where the technical challenges lie as the company grows. Complex front end technical challenges will warrant the need for senior and beyond front end engineers which is good for a career in front end engineering.
How many other front end engineers are there at the company? Who are the other front end engineers at the company? Are they experienced and people I can learn from to level up my front end engineering skills? Are the other front end engineers also passionate about modern front end technologies and frequently share knowledge with each other?
Understanding the experience level of fellow front end engineers is crucial for evaluating potential learning and mentorship opportunities. Working alongside experienced peers facilitates skill development, as you can glean insights from their expertise and real-world projects.
It's also important to know if there's a culture of sharing about modern front end technologies; a team that actively shares knowledge and embraces modern technologies suggests a dynamic and forward-thinking environment.
Other questions to ask:
Does the company have any open source projects? Does the company give back to open source by contributing upstream? Does the company have an engineering or design blog, and are their posts of high quality?
Having a community presence contributes to the positive brand image of the company and demonstrates a commitment to transparency, collaboration, and giving back to the community. An engineering blog fosters a culture of knowledge sharing within the company and contributes to the professional development of team members.
A company actively involved in open source contributions is likely to attract developers who are passionate about open source and community collaboration. It can be a deciding factor for top talent when choosing where to work.
Being able to contribute to open source (be it the company's open source projects or projects used by the company) as part of your work helps to build up your personal brand and credibility, which is useful to show future employers.
Other questions to ask:
Are the company's products visually appealing? Is there a cohesive brand identity and product interface? Do the products look like they are part of a design system?
While Front End Engineers don't necessarily have to design, they have to work closely with designers and implement what the designers come up with. A design system provides a structured approach to design and development, promoting consistency, efficiency, and collaboration across teams. It is an essential tool for creating and maintaining a unified and user-friendly interface in software products. Building products without a proper design system will lead to inefficiencies and inconsistencies down the road. A lack of a proper design system or design department could also be an indication that design and front end engineering is not valued by the company.
Other questions to ask:
Evaluating a company holistically, from its tech stack to its commitment to innovation and the quality of its team, empowers you to make informed choices that align with your professional aspirations.
By considering the factors outlined in this guide, you pave the way for a rewarding and fulfilling career, where your skills thrive, and your contributions leave a lasting impact on the digital experiences you help create.
You are not just choosing a workplace; you are selecting a path that leads to your own advancement, innovation, and success as a Front End Engineer.
This is part 2 of our top companies series where we gathered the top tech companies for a fulfilling career as a Front End Engineer. Companies are selected based on the following criteria:
This post covers the list of the top medium-sized tech companies for front end engineers. For each company, we give a rating out of 5 on the following areas:
Airbnb revolutionized the travel and hospitality industry with its online marketplace for lodging and experiences. The Airbnb platform is a prime example of good design meets good engineering with its user-friendly and aesthetically pleasing interface that is used globally and supports nearly 100 languages.
Airbnb has made significant contributions to the open source community. Some of their notable open source projects include:
Between 2015 and 2020, Airbnb was one of the most innovative companies in the front end space. Airbnb was one of the earliest companies to implement isomorphic/universal rendering on the web with the invention of Rendr, a library to render Backbone.js apps on the server. They also use server-driven UIs to dynamically build pages for web and mobile. Airbnb's front end team also spoke at React Conf 2019 about the process behind their design system.
Noteworthy front end engineers from Airbnb include Harrison Shoff, known for his work on Airbnb's style guide and visx. Leland Richardson, Jordan Harband, Spike Brehm, Josh Perez, were previously from Airbnb and were instrumental in shaping Airbnb's strong front end engineering culture and establishing the company as a leader in the front end ecosystem.
On the design front, Airbnb's Design department has a beautifully-designed blog and frequently shares their knowledge and resources with the designer community.
Airbnb's front end interview process is mostly focused on front end coding with just one round of algorithmic coding. However, Airbnb places a high emphasis on cultural fit, as the full loop consists of two rounds of behavioral interviews where most companies only have one.
As of writing, Airbnb has scaled back on their community contributions, possibly due to the global movement towards cost cutting. We no longer see as much front end innovation coming from Airbnb as compared to before. Some of Airbnb's famous libraries like Enzyme and React Dates have been moved out of the Airbnb GitHub organization and don't look maintained. Airbnb's design blog has been paused since 2022 and Airbnb's engineering and design social media accounts have been closed.
It's important to note that at its core, the Airbnb platform is a CRUD application and the challenges lie in scaling the platform for new verticals; the products themselves are not technically sophisticated client applications like a collaborative editor or design tools. However, if you are a product engineer who enjoys translating beautiful designs into reality and bringing them into the hands of people globally, Airbnb is still a cool and great place to work.
Rating: Projects (3/5), Talent (4/5), Design (5/5), Compensation (4/5), Outlook (3.5/5)
A Canadian powerhouse, Shopify empowers businesses with its e-commerce platform, providing business owners with tools to create online stores. As these websites allow a high degree of customization by users while optimizing for performance and SEO, the Shopify platform requires complex, top quality front end engineering.
Shopify is no stranger to open source and although most of their projects are targeting the Ruby ecosystem, there are a few notable front end projects from them:
In recent years, Shopify hired many well-known developers from Google like Ilya Grigorik, Jason Miller (creator of Preact), Jake Archibald, and Surma. Ryan Florence and Michael Jackson of React Router fame ended up at Shopify thanks to the Remix acquisition.
Shopify is an attractive workplace for front end engineers due to its innovative and varied product range, which offers opportunities for diverse and challenging work. The company is pushing the boundaries of e-commerce web development which demands cutting-edge development to enhance user experiences, providing developers a platform to work on impactful and widely-used applications. Shopify's embrace of the latest technologies and commitment to continuous improvement in the field also contribute to a dynamic and growth-oriented work environment. This, combined with a supportive culture and opportunities for professional development, makes Shopify a rewarding place for front end engineers.
Rating: Projects (4/5), Talent (4/5), Design (4.5/5), Compensation (3.5/5), Outlook (4/5)
Figma has become a game-changer in the design software industry, offering a cloud-based interface design tool that enables collaborative work. Figma's apps (Figma and Figjam) are possibly some of the most complex applications to exist as it is extremely challenging to build a performant design tool in the browser that scales for large design files and also offers real-time collaboration.
While Figma's direct contributions to open source are limited, its impact on the design and development community is substantial. Figma has won over the design market and has begun to focus their efforts on improving the design-to-code workflow with the introduction of Dev Mode and Storybook integration, making the product easier to use for developers.
Figma’s focus on user interface and experience design aligns well with the core competencies of front end engineering. Furthermore, the company's innovative approach and commitment to cutting-edge technology provide a dynamic environment for professional growth and learning in the field of front end development. There's a good chance that engineers who work at Figma are passionate about design and creating great user experiences; it's always great to be surrounded by like-minded people.
Rating: Projects (4/5), Talent (4.5/5), Design (5/5), Compensation (4/5), Outlook (4/5)
Stripe provides payment processing solutions for e-commerce, characterized by their user-friendly and secure interfaces. Their front end work mainly involves creating straightforward yet secure checkouts and payment SDKs, dashboards for businesses to manage their accounts, prebuilt UI elements for developers who want to create their own checkout flows, and a developer platform that is a world-class example of great developer documentation.
Stripe does not have too many contributions to open source but front end engineers at Stripe have been instrumental in advancing UI/UX in financial technology. Some of their projects include Markdoc, a powerful, flexible, Markdown-based authoring framework and Sorbet, a type checker for Ruby.
Stripe is a great company for front end engineers and designers mainly due to its reputation for building elegant, user-friendly products. The company is at the forefront of financial technology, offering developers the chance to work on innovative, high-impact projects that shape the way businesses handle online transactions. Stripe's emphasis on clean, efficient code and cutting-edge technology aligns with the interests of developers who are keen on pushing the boundaries of web development. Additionally, Stripe's collaborative work culture and focus on professional growth make it a great environment for a front end engineering career.
Rating: Projects (3.5/5), Talent (5/5), Design (5/5), Compensation (4.5/5), Outlook (4/5)
Vercel is a cloud platform focused on enhancing the development and deployment experience for front end teams. Vercel's platform is designed to make building and deploying as straightforward as possible, emphasizing ease of use, universality, and accessibility. There's seamless integration with major JavaScript frameworks, like Next.js, Remix, Nuxt, etc so that deploying sites built using these frameworks becomes a breeze. On top of hosting, Vercel also provides logs, A/B testing, storage, analytics, performance tracking, and more.
It's particularly known for creating Next.js, an open source web framework based on React. Next.js is currently the most popular meta framework for building React applications and innovating at breakneck speed especially after members of the React core team left Meta to join Vercel. Besides Next.js, Vercel is also behind projects like SWR, Turborepo, Turbopack, and SWC.
The Vercel team can be said to be the Avengers of the web development ecosystem, a congregation of top front end engineering talents. Guillermo Rauch, CEO and founder of Vercel, is a well-known figure in the development community for his work on Next.js and Socket.io. Jared Palmer, a key figure in the JavaScript community joined Vercel after they acquired his startup Turborepo. Tobias Koppers, the creator of webpack, joined Vercel to work on the evolution of webpack, Turbopack. As mentioned above, some React core team members like Sebastian Markbåge, Andrew Clark, and Dominic Gannaway, joined Vercel to continue working on React and other projects. The list goes on.
Vercel has taken Meta's place of being the industry leader in pushing the boundaries of front end innovation. Besides front end hosting, Vercel is also investing in AI with the creation of the Vercel AI SDK and v0, a tool for generative UI design.
Rating: Projects (4.5/5), Talent (5/5), Design (4.5/5), Compensation (4.5/5), Outlook (4/5)
When you're on the hunt for your next front end engineering role, it's not just about picking any company that'll have you. Companies differ in terms of domains, size, customers, etc. The importance of front end development to a company depends on the company's core offering. Naturally, a company that has a core product that is primarily used on the web will prioritize their front end engineering department, while for companies whose flagship products are mobile apps or hardware, front end development might take a backseat.
Whether it's working on cutting-edge technologies, being surrounded by inspiring colleagues, or contributing to projects you're passionate about, the right company can elevate your career from ordinary to extraordinary. Choosing the right company can make all the difference between leading a fulfilling career and hitting arbitrary career ceilings.
In this post, we have gathered the top tech companies for a fulfilling career as a Front End Engineer. Companies are selected based on the following criteria:
This post covers the list of the top big tech companies for front end engineers. For each company, we give a rating out of 5 on the following areas:
Let's get the obvious company out of the way. Meta (previously Facebook), is a major player in social networking, digital marketing, and has recently ventured into the realm of augmented and virtual reality. The complexity of their front end work is showcased in the Facebook and Instagram platforms, which are known for their visually rich interface and smooth user experience. Unbeknownst to most, the most complex (and possibly most important) web application at Meta is their Ads Manager application as it has to display and manage thousands of rows of ad campaigns and also allow advertisers to create ad creatives in various formats for different platforms. The ads manager and business suite surfaces are worked on by thousands of engineers across the company.
When the Facebook.com redesign was released in 2020, it was considered the gold standard for web applications as it popularized the concepts of SSR, lazy loading, prefetching on navigation, and data-driven dependencies. Today, most new applications at Meta are built using this JavaScript-centric tech stack.
If I were to name a company that has the most impact on the open source front end ecosystem, it has to be Meta. Meta revolutionized how UI was built with the invention of React and React Native, introduced new data fetching approaches with GraphQL, built Jest, a feature-rich test runner that displaced then-frontrunner Mocha, made building documentation websites easy with Docusaurus, and Yarn, a package manager that was a huge improvement over npm at the time of its release. Meta has open sourced technology in almost every aspect of front end development most of which achieved widespread adoption at one point in time, which speaks volumes about the standard of Meta's front end engineers and importance of front end development to the company. Because much of Meta's front end technologies are open source and widely-adopted, new developers who had prior experience with these technologies will have an easier time when onboarding.
Meta is continuing to lead innovation in the front end space with the development of React Server Components, Server Actions, React Forget, and cross-platform React components so that developers can share components between web, mobile, and VR environments.
Notable figures in front end development from Meta include Jordan Walke, the creator of React and Reason; Joe Savona, who is currently working on the React Forget compiler; Dan Abramov, Sebastian Markbåge, Lauren Tan, Andrew Clark, Dominic Gannaway, Brian Vaughn, Sophie Alpert, and Rick Hanlon, who have worked / are still working on React; Christopher Chedeau of Prettier and CSS-in-JS fame; and Christoph Nakazawa, who worked on React Native, Jest, and Metro.
Meta's front end interview process is one that heavily favors Front End Engineers as it is focused on practical front end domain knowledge without excessive emphasis on algorithmic knowledge.
While Meta is a great place for Front End Engineers, it can be stressful working there. The company is known for its fast-paced and results-driven work environment that rewards high performers, which can contribute to tight deadlines and stress. Employees may also face public scrutiny due to the company's high-profile nature and history with privacy concerns and misinformation. The demanding nature of the work and high expectations, coupled with potential challenges to work-life balance, are factors individuals should consider. On the engineering side of things, Meta is stuck using Flow for writing type-safe JavaScript, which is rarely used outside of Meta. However, Flow shares many similarities with TypeScript, so the learning curve should not be too steep.
Rating: Projects (4/5), Talent (5/5), Design (4/5), Compensation (4.5/5), Outlook (3/5)
Google, a global technology leader, is best known for its search engine. However, the company's reach extends far into areas such as video streaming, cloud computing, AI, consumer hardware, and software solutions. Some of their products that showcase intricate front end development include YouTube, the world's top video streaming website; Google Maps, with its interactive maps and street views; Gmail, a widely-used email service featuring a sophisticated interface; and Google Docs, an online word processor offering real-time collaboration.
Google's commitment to the front end open source ecosystem is evident in projects like:
Google is home to many prominent figures in the front end community, who regularly create great guides and content for web development, such as web.dev and YouTube channels like Chrome for Developers. Paul Irish is known for his work on Chrome DevTools and other web development resources; Addy Osmani is a Software Engineer on Google Chrome and the author of numerous technical books; Minko Gechev is the tech lead and manager for Angular.
Google's front end interview process has a good balance of algorithmic knowledge and domain knowledge. There are pure algorithmic rounds and front end coding questions will also require a good grasp of software design and good data structure choice.
Working at Google, while offering numerous benefits, comes with potential downsides. The large organizational structure of the company may pose challenges in communication and decision-making, and frequent reorganizations can create uncertainty for employees. Projects are often canceled and I have lost count of the number of instant messaging applications that were built, killed, and rebuilt. Despite these downsides, Google's cutting-edge projects, great salary, career growth opportunities, and focus on employee well-being continue to attract top talent.
Rating: Projects (4.5/5), Talent (4.5/5), Design (4/5), Compensation (4/5), Outlook (4/5)
Microsoft, a household name in software, offers a range of products from operating systems to cloud services and business solutions. Besides Bing and Edge, the Microsoft 365 suite brings traditional workplace offerings like Word, Excel, and PowerPoint to the web; these are complex web applications that require a high level of front end expertise to build.
For a long time, Microsoft was focused on desktop applications, but in recent years it has increased its investment in the web ecosystem and established itself as one of the most important companies to the future of front end development with the following projects:
Moreover, Microsoft is heavily investing in AI. They're the largest investors in OpenAI and OpenAI utilizes Azure as its primary cloud provider to train and run its large-scale AI models. Microsoft has been integrating OpenAI's technologies into its products, like GitHub Copilot, Microsoft 365 products and Bing search engine, among others. The partnership with OpenAI gives it a competitive edge in the AI race, especially against other tech giants like Google and Amazon. By closely working with and funding one of the most advanced AI research labs, Microsoft positions itself at the forefront of AI innovation.
In a company as large as Microsoft, bureaucratic processes are often in place. This can mean slower decision making, more layers of management, and a need to navigate through a complex corporate structure to get things done. For some, this can feel restrictive and stifle creativity.
Rating: Projects (4/5), Talent (4/5), Design (3.5/5), Compensation (3.5/5), Outlook (4/5)
The only Chinese technology company on the list, ByteDance is most famous for its social media app TikTok, which is a highly addictive platform that prioritizes mobile-first short-form videos. While TikTok content is primarily consumed on mobile platforms, ByteDance has a large suite of enterprise offerings for the web, like Lark and BytePlus.
ByteDance is actually pretty active in the open source community, though their projects are less known compared to those from Western big tech companies. One of ByteDance's most famous open source projects is rspack, a Rust port of webpack. Other notable projects include IconPark, a high quality, highly customizable icon set and Xigua player, a modern video player for the web.
ByteDance is home to many talented individuals like Zack Jackson (Creator of module federation) and Dexter Yang (Author of Spellbook of Modern Webdev). Anthony Fu, one of the most prolific open source leaders in recent years, used to work at ByteDance.
In terms of the hiring process, ByteDance's front end interview process is one of the toughest to crack due to its wide pool of questions that test candidates' breadth and depth in the domain. Candidates can expect to answer tough trivia questions about their favorite JavaScript framework and the front end domain. Algorithms, JavaScript utilities, UI coding, and system design questions can all be asked.
ByteDance is very similar to Meta in many ways and I often tell people they're the Meta of China – their focus on social products, performance-driven culture that disproportionately rewards high performers, mindset which encourages moving fast and taking ownership, open sourcing innovative technologies, etc.
However, being a Chinese company, ByteDance might have a corporate culture that differs significantly from Western norms. Engineers from different cultural backgrounds might find it challenging to adapt to these differences in work style, communication, and management practices. TikTok is also constantly under scrutiny in the United States due to national security concerns related to its Chinese ownership, and there's no knowing if or when the app will be banned in the US, which would hurt the business greatly.
Rating: Projects (4/5), Talent (4/5), Design (4/5), Compensation (4/5), Outlook (4/5)
Netflix, the leading entertainment streaming service, is renowned for its personalized user interfaces and seamless streaming experience across devices. Netflix is one of the best places for front end engineers not because of Netflix.com; it is their suite of complex web-based studio technologies that makes working there exciting.
Netflix's primary contribution to the open source community is Falcor, a JavaScript library for efficient data fetching. Notable front end developers from Netflix include Jafar Husain, known for his work on reactive programming and JavaScript frameworks. Jafar has also worked at Meta on GraphQL.
Netflix's company culture, often compared to a professional sports team, is centered around performance, collaboration, and a high degree of freedom and responsibility. This culture is detailed in their "Netflix Culture: Seeking Excellence" document and is widely recognized in the business world for its unique approach. Netflix is known for paying top of the market salaries for Software Engineers and mostly in base salary. Employees at Netflix have the flexibility to decide how much of their compensation is in cash or stock options. On the other hand, the dream team culture can lead to workplace competitiveness and increased stress as employees may feel they must constantly prove their worth to remain with the company.
Rating: Projects (3.5/5), Talent (4/5), Design (4/5), Compensation (5/5), Outlook (4/5)
Originally posted on https://niteshseram.in.
Handling large datasets is a common challenge in frontend applications. As the amount of data grows, it can lead to performance issues, such as slow loading times and unresponsive user interfaces. In this blog, we will explore different methods to effectively handle large datasets in React applications. We will discuss techniques like pagination, infinite scroll, and windowing. By implementing these strategies, we can ensure that our frontend application remains fast and efficient, even when dealing with large amounts of data.
Before we dive into the different methods of handling large datasets, let's first understand the performance problems associated with them. When an application tries to render or manipulate a large amount of data in a list, it can cause significant performance issues. This is because rendering a large number of DOM elements can be time-consuming and resource-intensive.
To illustrate this, let's create a sample React application that renders a list of 10,000 records. By examining the performance of this sample application, we can better understand the challenges of handling large datasets.
To get started, create a new React application using the create-react-app command in your terminal:
npx create-react-app large-dataset-app
Once installed, open the `App.js` file in the `src` directory and replace the existing code with the following:
const data = new Array(10000).fill().map((_, index) => ({
  id: index,
  name: `Temp Name ${index}`,
  email: `Temp Email ${index}`,
}));

function App() {
  return (
    <div>
      {data.map((item) => (
        <div key={item.id}>
          <h3>{item.name}</h3>
          <p>{item.email}</p>
        </div>
      ))}
    </div>
  );
}

export default App;
In this code, we generate an array of 10,000 objects, where each object represents a record in our dataset. We then use the `map` function to render each item in the array as a `<div>` element. Each `<div>` contains the name and email of the corresponding item.
Now, start the React application by running the following command in your terminal:
npm start
Open your browser and navigate to http://localhost:3000. You will notice that it takes some time for the page to load, and scrolling through the list may also be slow. This is because rendering 10,000 DOM elements at once can cause performance issues.
One way to handle large datasets is by implementing pagination. Pagination allows you to render data in pages, rather than all at once. By controlling the amount of data shown on the page, you can reduce the stress on the DOM tree and improve performance.
There are several UI libraries in React that provide pagination components, such as react-paginate. However, if you prefer not to use a UI library, you can implement pagination manually.
To illustrate this, let's modify our sample application to include pagination. First, install the `react-paginate` library by running the following command:
npm i react-paginate
Next, open the `App.js` file and replace the existing code with the following:
import { useState } from 'react';
import ReactPaginate from 'react-paginate';

const data = new Array(10000).fill().map((_, index) => ({
  id: index,
  name: `Temp Name ${index}`,
  email: `Temp Email ${index}`,
}));

function App() {
  const [currentPage, setCurrentPage] = useState(0);
  const itemsPerPage = 10;

  const pageCount = Math.ceil(data.length / itemsPerPage);
  const offset = currentPage * itemsPerPage;
  const currentData = data.slice(offset, offset + itemsPerPage);

  const handlePageChange = (selectedPage) => {
    setCurrentPage(selectedPage.selected);
  };

  return (
    <div>
      {currentData.map((item) => (
        <div key={item.id}>
          <h3>{item.name}</h3>
          <p>{item.email}</p>
        </div>
      ))}
      <ReactPaginate
        previousLabel={'Previous'}
        nextLabel={'Next'}
        breakLabel={'...'}
        pageCount={pageCount}
        marginPagesDisplayed={2}
        pageRangeDisplayed={5}
        onPageChange={handlePageChange}
        containerClassName={'pagination'}
        activeClassName={'active'}
      />
    </div>
  );
}

export default App;
In this code, we use the `useState` hook to manage the current page state. We calculate the number of pages based on the total number of records and the desired number of items per page. We then use the `slice` method to get the current data to be displayed on the page.
The `ReactPaginate` component renders a pagination UI with previous and next buttons, as well as page numbers. The `onPageChange` event handler updates the current page state when the user clicks on a page number.
Now, when you run the application, you will see that the data is rendered in pages, with only a subset of records shown at a time. This helps to improve the performance of the application by reducing the number of rendered DOM elements.
Another approach to handling large datasets is infinite scroll. Infinite scroll involves loading data incrementally as the user scrolls down the page. Initially, only a subset of data is loaded, and more data is appended as the user reaches the end of the list.
There are various ways to implement infinite scroll in React, and one popular library for this purpose is react-infinite-scroll-component. To use this library, install it by running the following command:
npm i react-infinite-scroll-component
Next, open the `App.js` file and replace the existing code with the following:
import { useState } from 'react';
import InfiniteScroll from 'react-infinite-scroll-component';

const data = new Array(10000).fill().map((_, index) => ({
  id: index,
  name: `Temp Name ${index}`,
  email: `Temp Email ${index}`,
}));

function App() {
  const [items, setItems] = useState(data.slice(0, 20));

  const fetchMoreData = () => {
    setTimeout(() => {
      setItems((prevItems) => [
        ...prevItems,
        ...data.slice(prevItems.length, prevItems.length + 20),
      ]);
    }, 1500);
  };

  return (
    <InfiniteScroll
      dataLength={items.length}
      next={fetchMoreData}
      hasMore={items.length < data.length}
      loader={<h4>Loading...</h4>}>
      {items.map((item) => (
        <div key={item.id}>
          <h3>{item.name}</h3>
          <p>{item.email}</p>
        </div>
      ))}
    </InfiniteScroll>
  );
}

export default App;
In this code, we use the `useState` hook to manage the items state. Initially, we load the first 20 items from our dataset. The `fetchMoreData` function is called when the user scrolls to the end of the list. It appends the next 20 items to the existing items using the spread operator.
The `InfiniteScroll` component from `react-infinite-scroll-component` wraps the list of items. It takes the current length of the items as the `dataLength` prop, the `fetchMoreData` function as the `next` prop, and a boolean value to indicate whether there is more data to be loaded.
When you run the application, you will notice that the data is loaded incrementally as you scroll down the page. This approach improves the user experience by providing a seamless scrolling experience while efficiently loading and rendering the data.
Another technique for handling large datasets is windowing. Windowing involves rendering only the visible portion of a list to the screen, rather than rendering all the items at once. This helps to reduce the number of DOM elements and improves performance.
One popular library for windowing in React is react-window. It provides a set of components for efficiently rendering large lists. To use `react-window`, install it by running the following command:
npm i react-window
Next, open the `App.js` file and replace the existing code with the following:
import { FixedSizeList as List } from 'react-window';

const data = new Array(10000).fill().map((_, index) => ({
  id: index,
  name: `Temp Name ${index}`,
  email: `Temp Email ${index}`,
}));

const Row = ({ index, style }) => (
  <div style={style}>
    <h3>{data[index].name}</h3>
    <p>{data[index].email}</p>
  </div>
);

function App() {
  return (
    <List height={400} itemCount={data.length} itemSize={80} width={300}>
      {Row}
    </List>
  );
}

export default App;
In this code, we define a `Row` component that renders each item in the list. The `FixedSizeList` component from `react-window` is used to render the list. It takes the height and width of the list, the total number of items, and the size of each item as props.
When you run the application, you will see that only a portion of the list is rendered at a time, based on the height of the list. As you scroll through the list, the windowing technique efficiently renders only the visible items, resulting in improved performance.
You might be wondering what the difference is between what `react-infinite-scroll-component` and `react-window` do. The difference is that `react-infinite-scroll-component` loads data incrementally as the user scrolls. It dynamically adds more items to the list as needed, creating an illusion of infinite content. `react-window`, on the other hand, renders only the subset of list items that are currently visible in the viewport, reusing DOM elements as the user scrolls.
Due to its simpler API and automatic handling of scrolling, `react-infinite-scroll-component` may be easier to set up and use for basic infinite scrolling needs. However, it may not perform as well with extremely large data sets or complex list items since it keeps all rendered elements in the DOM. In contrast, `react-window`'s windowing technique ensures that only the visible items are rendered, resulting in improved performance and reduced memory footprint for large lists.
Handling large datasets in frontend applications can be challenging, but there are various techniques available to address this issue. By implementing pagination, infinite scroll, windowing, or using specialized libraries like `react-virtualized` or `react-window`, you can effectively manage large amounts of data while maintaining optimal performance.
In this blog, we explored different methods of handling large datasets in React applications. We discussed pagination as a way to render data in pages, infinite scroll for loading data on demand, windowing for efficiently rendering large lists, and libraries like `react-window` that provide additional features for handling large datasets.
Remember to consider the specific requirements and constraints of your application when choosing a method for handling large datasets. Each approach has its advantages and trade-offs, so it's important to evaluate which technique best suits your use case.
By implementing these strategies, you can ensure that your frontend applications remain fast, responsive, and user-friendly, even when dealing with large amounts of data.