From Manual Grind to Automated Bliss: Introducing preact-perf-metrics

Ivan Slobodiuk
loveholidays tech
Mar 14, 2024

At loveholidays, we strive to deliver a holiday booking experience that offers unmatched ease and unmissable value by, among other things, being fast and responsive. Our Preact-powered website seamlessly adapts to user interactions, dynamically updating various elements like images, filters, prices, and text.

Maintaining peak performance while adding new features is an ongoing challenge. Each interaction involves a complex interplay of components, hooks, data fetching, state management, and contexts, triggering cascading updates (rerenders) across the site. And while essential for interactivity, excessive or unnecessary rerenders can lead to sluggishness and a frustrating user experience.

Therefore we authored and open-sourced @loveholidays/preact-perf-metrics: an automated testing tool designed to safeguard our application’s responsiveness and prevent unexpected performance regressions as our codebase evolves.

How it works

Taking inspiration from established testing practices like snapshot testing and visual regression testing, @loveholidays/preact-perf-metrics allows you to write performance tests using Playwright. These tests verify framework activity, such as the number of rerenders, rendering phases, and unmountings that occur during an interaction, essentially creating a performance snapshot. Any change in these metrics triggers a test failure, prompting you to either fix the code or update the performance snapshot accordingly.

The tool comprises two parts: a setup() function to instrument the Preact application, and a set of custom Playwright matchers for writing assertions in the tests.
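
On the instrumentation side, wiring the tool into the app's entry point could look roughly like the sketch below. It is a minimal, assumed usage: the import path and the argument-less setup() call are illustrative, and the README documents the exact setup.

// Sketch of instrumenting a Preact app (assumed import path and signature;
// see the package README for the exact usage).
import { render } from 'preact';
import { setup } from '@loveholidays/preact-perf-metrics';
import { App } from './App';

// Hook into Preact so rerenders, rendering phases and unmounts are tracked.
setup();

render(<App />, document.body);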

Consider the following performance test example, guaranteeing a single rerender when clicking the “Increase Counter” button in a classic counter app:

import { expect, test } from '@playwright/test';
import { reset } from '@loveholidays/preact-perf-metrics';

test('increases the counter with a single rerender', async ({ page }) => {
  await reset(page); // reset the performance counters
  await page.getByText('Increase Counter').click();
  await expect(page).toPerform({ nodesRendered: 1 }); // assert the snapshot
});

If a code update leads to additional elements rerendering, the test will fail and highlight the difference:

Error: 
- Expected
+ Received
Object {
- "nodesRendered": 1,
+ "nodesRendered": 3,
}

As with any snapshot, the specific numbers themselves (e.g., 1, 3, or 1023) matter less than how they change as the code is modified¹.
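
When a change in the numbers is intentional, updating the performance snapshot simply means updating the expected values in the assertion. Continuing the counter example above (the new value of 3 is illustrative):

// The extra rerenders are expected after the new feature,
// so we accept the new baseline by updating the asserted value.
await expect(page).toPerform({ nodesRendered: 3 });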

A simple fix (but hard to spot)

Many times we’ve encountered performance issues that surfaced weeks or even months after the initial code changes.

One example involved a flickering airline logo in the flights filter section. The blip went undetected despite all unit, integration, and visual regression tests passing, and it wasn't visually apparent on high-end developer machines.

The logos flicker on every interaction. The animation is slowed down for dramatic effect.

Deep investigation aided by Preact Devtools (and a generous amount of console logging 😉) revealed the root cause. Every checkbox interaction was triggering not only unnecessary rerenders, but also unmounts and remounts of images, leading to suboptimal performance and user experience.

Everything gets updated with each interaction

Identifying the culprit proved challenging within the context of a larger pull request, but in the end it was a simple one (and relatively easy to fix), demonstrated in the simplified diff below².

ITEMS.map((item, index) => (
  <ListItem
    key={index}
    selected={index === selected}
-   as={ItemComponent}
+   as={(props) => <label><ItemComponent {...props} /></label>}
  />
))
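
The inline arrow function passed to as defines a brand-new component type on every render, so Preact discards the previous subtree, unmounting and remounting the image each time. One way to fix it, sketched below with a hypothetical LabelledItem name rather than our actual code, is to hoist the wrapper into a component that is defined once and therefore keeps a stable identity:

// Hypothetical fix sketch: define the wrapper once, outside the render path,
// so its identity stays stable across rerenders and nothing gets remounted.
const LabelledItem = (props) => (
  <label>
    <ItemComponent {...props} />
  </label>
);

ITEMS.map((item, index) => (
  <ListItem
    key={index}
    selected={index === selected}
    as={LabelledItem}
  />
))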

Had the tool and these tests been in place back then, we would've prevented merging and shipping the issue to our customers: the test would've failed, indicating that nodesUnmounted went from 0 to N.
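
Concretely, a performance test along the lines of the sketch below would have flagged the regression. The interaction, the filter name, and the idea that toPerform accepts the nodesUnmounted counter on its own are illustrative; the README lists the supported metrics.

// Illustrative test: toggling an airline filter should rerender the list,
// but it should never unmount and remount the logo images.
test('toggles an airline filter without remounting logos', async ({ page }) => {
  await reset(page);
  await page.getByRole('checkbox', { name: 'Lufthansa' }).click(); // hypothetical filter
  await expect(page).toPerform({ nodesUnmounted: 0 });
});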

Beyond Code Reviews: An Automated Tool

While the flickering logo was fixed, the situation highlighted a larger problem. Similar inefficiencies became increasingly common during profiling sessions and, alarmingly, they were always introduced silently and went unnoticed. One might argue these issues should be caught during code review, or simply that "it should not happen", but manual inspection becomes increasingly impractical as the development team grows.

This prompted us to explore alternatives to relying solely on good intentions: a mechanism³. We wanted something that provides the same information Preact Devtools gives, but in a way that could be automated as a build check.

Thankfully, the Preact community was very helpful. After sharing our idea on their Slack channel, we received valuable guidance for the development of the initial version.

Having used @loveholidays/preact-perf-metrics at loveholidays for some time now, we've successfully prevented a handful of unintended regressions. We're pleased to announce the open-sourcing of the tool today, hoping it will benefit other projects and contribute to the Preact development community.

[1] You can read the "how to use" section of the README on GitHub.
[2] You can find this case in the @loveholidays/preact-perf-metrics example code, together with its corresponding performance test.
[3] “Good intentions never work, you need good mechanisms to make anything happen” ~ Jeff Bezos
