Thoughts about web performance and frameworks

There are many measures of web performance, some more important than others depending on what you are building. Benchmarks frequently emphasize one measure at the expense of another, making it seem like one library is objectively the better choice over another.

Broadly speaking, there are three categories of things we build for the web. The first is strictly static content: stuff like portfolios and blogs. The second is content-heavy sites, a category dominated by news sites and aggregators. The last is web apps.

For strictly static content, simple optimizations go a long way. Enabling gzip, compressing images, and hosting the content on a Content Delivery Network (CDN) are all easy to implement, and plenty of plugins exist to make the process code-free and painless.
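
For sites served from your own small origin rather than a plugin-driven host, the same wins can be wired up by hand. Here is a minimal sketch, assuming a Node setup with the `express` and `compression` npm packages (the directory and port are placeholders):

```ts
// Minimal static server sketch: gzip responses and set long cache headers.
// In production a CDN or reverse proxy would usually handle compression,
// but this shows where each optimization plugs in.
import express from "express";
import compression from "compression";

const app = express();

app.use(compression()); // gzip/deflate response bodies on the fly
app.use(
  express.static("public", {
    maxAge: "1y",    // long cache lifetime, safe for fingerprinted assets
    immutable: true, // tells browsers to skip revalidation entirely
  })
);

app.listen(3000, () => console.log("listening on :3000"));
```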

Advanced techniques like applying Brotli compression, heavily caching assets, and using service workers can make your static site offline-ready and possibly the fastest website in the world.
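
The service worker half of that can be surprisingly small. A minimal sketch, where the cache name and asset list are placeholders: precache the shell at install time, then answer requests cache-first with a network fallback.

```ts
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

// Placeholder cache name and asset list for illustration.
const CACHE = "static-v1";
const ASSETS = ["/", "/styles.css", "/app.js"];

// Precache the static shell when the service worker installs.
self.addEventListener("install", (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

// Serve cache-first, falling back to the network; this is what makes
// the site work offline after the first visit.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```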

The next category, content-heavy sites, is the hardest to cater for. Static pages will not cut it because things like animations, search, and dynamic content are required. Libraries like jQuery are just unwieldy enough to be painful, but libraries like React are too heavyweight for what you want. Furthermore, the virtual DOM's benefits do not shine through as much when you are replacing large portions of the DOM and users are not interacting a whole lot on each page.

The same old rules for static content apply, but libraries like Svelte and Preact occupy the sweet spot for this kind of content. Server-side rendering followed by hydration delivers the most important content to your users first, followed by the interactive elements.
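
With Preact, for example, the client half of that hand-off is a single call. A minimal sketch (no JSX, to stay self-contained), assuming the server has already rendered a hypothetical `App` component into `#root` with `preact-render-to-string`:

```ts
import { h, hydrate } from "preact";
import { App } from "./app"; // hypothetical root component

// hydrate() attaches event listeners to the server-rendered markup
// instead of re-creating it, so the visible content never flashes.
hydrate(h(App, null), document.getElementById("root")!);
```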

Advanced techniques like bundling your site with both `type="module"` and `nomodule` scripts can help reduce the amount of JavaScript sent and parsed, resulting in a faster Time To Interactive (TTI).
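
The pattern itself lives in the markup: modern browsers load only the smaller, mostly-untranspiled bundle, while legacy browsers ignore `type="module"` and fall back to the transpiled one. A sketch with placeholder file names:

```html
<!-- Modern browsers load only this smaller, mostly-untranspiled bundle -->
<script type="module" src="/app.modern.js"></script>
<!-- Legacy browsers ignore type="module" and load this one instead -->
<script nomodule src="/app.legacy.js"></script>
```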

Bleeding-edge techniques involve using Cloudflare Workers to stream the HTML instead of serving it from your own servers. Since Cloudflare Workers run on the same edge servers as the Cloudflare CDN, content travels a much shorter distance and Time To First Byte (TTFB) can be reduced.
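
A minimal sketch of the idea, assuming the Cloudflare Workers runtime (with its `FetchEvent` types) and a placeholder origin URL: flush a static shell immediately so the browser can start parsing, then pipe the rest of the page from the origin as it arrives.

```ts
// Service-worker-syntax Cloudflare Worker; types come from
// @cloudflare/workers-types. URLs and markup are placeholders.
addEventListener("fetch", (event: FetchEvent) => {
  const { readable, writable } = new TransformStream();
  // Keep the Worker alive until the whole body has been streamed.
  event.waitUntil(streamBody(writable, event.request));
  event.respondWith(
    new Response(readable, {
      headers: { "content-type": "text/html;charset=UTF-8" },
    })
  );
});

async function streamBody(writable: WritableStream, request: Request) {
  const writer = writable.getWriter();
  // Flush the static shell right away; this is what improves TTFB.
  await writer.write(
    new TextEncoder().encode(
      "<!doctype html><head><!-- critical CSS --></head><body>"
    )
  );
  writer.releaseLock();
  // Pipe the origin's body through without buffering; the stream
  // closes (ending the response) when the origin finishes.
  const origin = await fetch(
    "https://origin.example.com" + new URL(request.url).pathname
  );
  await origin.body!.pipeTo(writable);
}
```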

Another bleeding-edge technique involves using service workers to stream the HTML from cached JavaScript components, fetching only the latest JSON data. The benefit of this technique over the previous one depends largely on the CPU speed of the device relative to the time it takes a Cloudflare Worker to send the resource.
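
One common form of this stitches a cached shell around a freshly fetched fragment. A sketch, with placeholder URLs, assuming a hypothetical `/api/articles-fragment` endpoint that returns HTML rendered from the latest JSON:

```ts
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("fetch", (event) => {
  if (new URL(event.request.url).pathname !== "/articles") return;

  const { readable, writable } = new TransformStream();
  event.respondWith(
    new Response(readable, { headers: { "content-type": "text/html" } })
  );
  event.waitUntil(streamShell(writable));
});

async function streamShell(writable: WritableStream) {
  // Header and footer come straight from the cache; only the body
  // fragment touches the network, so rendering starts immediately.
  const parts = [
    caches.match("/shell-start.html"),
    fetch("/api/articles-fragment"),
    caches.match("/shell-end.html"),
  ];
  for (const partPromise of parts) {
    const part = await partPromise;
    if (part?.body) {
      await part.body.pipeTo(writable, { preventClose: true });
    }
  }
  await writable.close();
}
```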

Lastly, instead of sending one large JSON payload, sending newline-delimited JSON (NDJSON) can improve the TTFB of large lists of items, since each line can be parsed and rendered as soon as it arrives.
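
On the client, consuming NDJSON is a small loop over the response stream. A sketch, where the endpoint and `renderItem` callback are placeholders:

```ts
// Parse and render each newline-delimited record as it arrives,
// instead of waiting for the entire list to download.
async function streamItems(
  url: string,
  renderItem: (item: unknown) => void
): Promise<void> {
  const res = await fetch(url);
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop()!; // keep any trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) renderItem(JSON.parse(line));
    }
  }
  if (buffer.trim()) renderItem(JSON.parse(buffer)); // flush the last record
}
```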

The last category applies to sites like AdSense, analytics dashboards, and perhaps future web versions of multimedia editors like Photoshop. These sites require large amounts of JavaScript and interactivity. Significant jank or delay in responding to user actions would hurt the user's experience and their ability to do work.

One of Facebook's key products, the advertising web app, is their profit generator and also an app with hundreds of interactive components. Only a library like React, with pooled events and a scheduler that yields to user interactions (in development), is sufficient to handle this kind of workload.

As an engineer, one must weigh the pros and cons of the various solutions and pick the one that best suits the problem space. Cargo-culting and using React everywhere is detrimental to the ecosystem, because better solutions to other problems get drowned out. Thus, I feel that most applications on the web do not fit the use case for React. E-commerce sites, blogs, and simple apps could be using libraries like Svelte, Preact, and Web Components instead.

In the future, consider what your needs are, and pick the right framework!