Should You Really Use useMemo in React? Let's Find Out.

Some of our developers recently came to us with a question: when should I use useMemo in React? That's an excellent question. In this article, we'll take a scientific approach: define a hypothesis, then test it with real-life benchmarks in React.

Read on to find out what the performance impact of useMemo is.

What is useMemo?

useMemo is one of the hooks offered by React. This hook allows developers to cache the value of a variable along with a dependency list. If any variable in this dependency list changes, React re-runs the computation and re-caches the result. If the values in the dependency list are unchanged since the last render, React returns the value from the cache instead.

This mostly matters on re-renders of a component. When the component re-renders, it fetches the value from the cache instead of having to loop through an array or re-process the data again and again.
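To illustrate the mechanism, here is a simplified, hypothetical sketch in plain JavaScript of the caching logic described above: the previous dependency list is compared entry by entry (React uses Object.is for this), and the compute function only re-runs when a dependency changed. This is illustrative only; React's real implementation is tied to its internal fiber state.

```javascript
// Hypothetical sketch of a single useMemo "cell" — not React's actual code.
function createMemoCell() {
    let cached = null; // holds { deps, value } after the first computation
    return function memo(compute, deps) {
        const hit =
            cached !== null &&
            cached.deps.length === deps.length &&
            cached.deps.every((d, i) => Object.is(d, deps[i]));
        if (hit) {
            return cached.value; // dependency list unchanged: reuse cached value
        }
        const value = compute(); // a dependency changed: recompute and re-cache
        cached = { deps, value };
        return value;
    };
}

// Usage: the compute function only runs when `level` changes.
const memo = createMemoCell();
let calls = 0;
const build = (level) => memo(() => { calls++; return { level }; }, [level]);

build(1);
build(1); // served from cache
build(2); // dependency changed, recomputed
console.log(calls); // 2
```

Note that this also shows the overhead we will be benchmarking: every call pays for the dependency comparison, whether or not the cache hits.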

What does React say about useMemo?

If we look at the React documentation for useMemo, it doesn't say when useMemo should be used. It simply describes what it does and how it can be used:

You may rely on useMemo as a performance optimization

The question here is, from what point on is useMemo interesting? How complex or big should the data be before we see performance advantages in using useMemo? When should developers actually use useMemo?


Before we start our experiment, let’s define a hypothesis.

Let’s first define the complexity of the object and the processing we want to perform as n. If n = 100, we need to loop through an array of 100 items to get the final value of the memoized variable.

We also need to separate two actions. The first action is the initial render of the component: whether a variable uses useMemo or not, the initial value has to be calculated either way. The second action to measure is subsequent re-renders, where the useMemo version can retrieve the value from its cache; this is where the performance benefit should become visible compared to the non-memoized version.

In all cases, I would expect an overhead of about 5–10% during the initial render to set up the memo cache and store the value. I expect a performance loss for useMemo when n < 1000. For n > 1000, I would expect similar or better performance on re-renders with useMemo, while the initial render should still be slightly slower due to the extra caching work. What is your hypothesis?

Benchmarking Setup

We set up a small React component as follows. It generates an object with complexity n as described; the complexity is passed in as the level prop.

import React from 'react';

const BenchmarkNormal = ({ level }) => {
    const complexObject = {
        values: []
    };
    for (let i = 0; i <= level; i++) {
        complexObject.values.push({ value: 'mytest' });
    }
    return <div>Benchmark level: {level}</div>;
};

export default BenchmarkNormal;

This is our normal benchmark component. We'll also make a benchmark component that uses useMemo, BenchmarkMemo:

import React, { useMemo } from 'react';

const BenchmarkMemo = ({ level }) => {
    const complexObject = useMemo(() => {
        const result = {
            values: []
        };
        for (let i = 0; i <= level; i++) {
            result.values.push({ value: 'mytest' });
        }
        return result;
    }, [level]);
    return <div>Benchmark with memo level: {level}</div>;
};

export default BenchmarkMemo;

We then set up these components to be displayed when pressing a button in our App.js. We also wrap them in React's <Profiler> to measure the render times.

import React, { useState, Profiler } from 'react';
import BenchmarkNormal from './BenchmarkNormal';

const renderTimes = { normal: [], memo: [] };

function App() {
    const [showBenchmarkNormal, setShowBenchmarkNormal] = useState(false);
    // Choose how many times this component needs to be rendered.
    // We will then calculate the average render time over all of these renders.
    const timesToRender = 10000;
    // Callback for our profiler: store each actual render duration
    // so we can calculate the average later on
    const renderProfiler = (type) => (id, phase, actualDuration) => {
        renderTimes[type].push(actualDuration);
    };
    // Render our component timesToRender times, each wrapped in a <Profiler>
    return (
        <p>
            {showBenchmarkNormal && [...Array(timesToRender)].map((_, index) => (
                <Profiler key={index} id={`normal-${index}`} onRender={renderProfiler('normal')}>
                    <BenchmarkNormal level={1} />
                </Profiler>
            ))}
        </p>
    );
}

As you can see, we render the component 10,000 times and take the average render time across those renders. We also need a mechanism to trigger a re-render of our components on demand without recalculating the useMemo value, so we must not modify any of the values in its dependency list.

// Add a simple counter in state
// which can be used to trigger re-renders
const [count, setCount] = useState(0);

const triggerReRender = () => {
    setCount(count + 1);
};

// Update our Benchmark component to receive this extra prop,
// which will force a re-render whenever it changes
<BenchmarkNormal level={1} count={count} />
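With the durations collected by the Profiler callback, computing the average render time for a run is straightforward. A minimal helper (a simplified, hypothetical version of the one we used) might look like:

```javascript
// Average the render durations collected by the Profiler callback.
const average = (times) =>
    times.reduce((sum, t) => sum + t, 0) / times.length;

// Example with three fake durations (in milliseconds):
console.log(average([0.2, 0.4, 0.6]).toFixed(2)); // "0.40"
```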

To keep the results clean, we always start from a fresh browser page before each test (except for the re-render tests), to clear out any cache that might still be affecting our results.


Results with complexity n = 1

Benchmark results for complexity 1

The complexity is shown in the left column: the first test is the initial render, the second test the first re-render, and the final test the second re-render. The second column shows the results for the normal benchmark, without useMemo, and the final column the results with useMemo. The values are average render times over 10,000 renders of our benchmark component.

The initial render is 19% slower with useMemo, much more than the expected 5–10%. Subsequent renders are also slower: the overhead of checking the useMemo cache costs more than recalculating the value itself.

In conclusion, for complexity n = 1 it is always faster not to use useMemo, as the overhead always outweighs the performance gain.

Results with complexity n = 100

UseMemo vs no useMemo benchmark results for complexity n = 100

With a complexity of 100, the initial render with useMemo becomes 62% slower, which is significant. Subsequent re-renders are on average similar or only slightly faster.

In conclusion, with a complexity of 100, the initial render is significantly slower, while subsequent re-renders are quite similar and at best slightly faster. At this point, useMemo does not seem worthwhile yet.

Results with complexity n = 1000

UseMemo vs no useMemo benchmark results for complexity n = 1000

With a complexity of 1000, the initial render with useMemo becomes 183% slower, presumably because setting up the cache and storing the larger value takes more work. Subsequent renders, however, are about 37% faster!

At this point, we see a real performance gain on re-renders, but it does not come without cost: the initial render takes a substantial hit.

In conclusion, with a complexity of 1000, we can see a bigger performance loss during the initial render (183%), however, subsequent renders are about 37% faster.

Whether this is already worthwhile will highly depend on your use case. A 183% performance loss during the initial render is a tough sell, but it might be justifiable for a component that re-renders a lot.

Results with complexity n = 5000

With a complexity of 5000, the initial render is 545% slower with useMemo. It seems that the more complex the data and processing, the slower the initial render with useMemo compared to without it.

The interesting part comes when looking at the subsequent renders. Here, we notice a 437% to 609% performance increase with useMemo on every subsequent render.

In conclusion, the initial render is a lot more expensive with useMemo, but subsequent re-renders see an even bigger performance increase. If your application has data/processing of complexity above 5000 and the component re-renders a few times, you can see the benefits of using useMemo.

Notes on Results

The friendly reader community has pointed out some possible reasons why the initial render could be so much slower, such as React running in development rather than production mode. We re-tested all our experiments and found similar results: the ratios stay the same, although the absolute values can be lower. All in all, the same conclusions apply.


These are our results for components with values of complexity n, where the component loops and adds values to an array n times. Results will vary depending on how exactly you process your data and on the amount of data, but this should give you an idea of the performance differences at different dataset sizes.

Whether or not you should use useMemo will highly depend on your use case, but with a complexity of < 100, useMemo hardly seems interesting.

It is worth noting that initial renders with useMemo take quite a setback in terms of performance. We expected a consistent initial loss of around 5–10%, but found that it highly depends on the complexity of the data/processing and can reach 500%, roughly 100x more than expected.

We re-ran the tests several times after collecting the results and found the subsequent runs to be very consistent with the initial results we noted down.

When do you use useMemo? Will these findings change your mind on when to use useMemo? Let us know in the comments!

Kevin Van Ryckegem
