Most JavaScript developers love building modern applications; new language features and the latest libraries make our lives easier. But has anyone ever actually wondered whether this impacts our users?
We are dealing with abstractions, and abstractions try to cover a general use case that may not conform to yours. Some are over-engineered in ways one can't possibly comprehend, for use cases that will never be reached by medium-sized applications. Some of these over-engineered libraries can be tree-shaken, but the majority can't, since they most likely incorporate those features somewhere in their main code path.
As developers we usually don't feel network or performance problems, with our fancy laptops and no testing on low-end mobile devices, but there's actually an untapped market on the web. Why would one resort to a mobile app if the website is JUST AS GOOD?
Let's talk about something that IS important. I'm not denying that we have to go through a lot to build scalable applications, and if our job is fun we most often build better things.
We all love our dynamic scripting language, which has evolved from a non-modular thing into an actual language. The coming of Babel and evergreen browsers has done so much for the ecosystem.
Babel does a great job at leveraging experimental and new language features, but why would we want to introduce polyfills to applications that don't even need them? Or why inject a helper for a new language feature that you only use in one place? Most of the time we don't actually reason about these things, and this makes us ship more and more JavaScript to our users, which in turn makes pages load longer, drains battery life, ...
Aside from this, we could have libraries with simple helpers like for instance pipe, which basically executes the functions it's given in order. This could easily be compiled away with a simple Babel plugin.

pipe(x, y)() --> y(x())
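As a minimal sketch of the helper itself (the implementation is illustrative, not any particular library's), pipe just threads a value through its functions left to right, which is exactly the call chain a Babel plugin could inline at build time:

```javascript
// A minimal pipe: runs each function in order, feeding the previous
// result into the next one. A Babel plugin could inline this call
// away entirely at build time, so nothing ships to the user.
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

const double = (n) => n * 2;
const increment = (n) => n + 1;

pipe(double, increment)(5); // double first, then increment → 11
```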
Your DX remains untouched, but you help your users on low-end connections and devices. It also enables you to learn something more low-level: the art of the AST seems daunting at first but can be really fun. A good way to get started is experimenting in ASTExplorer, where you can copy-paste some code and try out transformations.
What if we could choose a language that does strict type-checking and actually compiles to superbly compact and fast JavaScript? Well, we can: we can go for Reason, which has all these fancy modern features and a really awesome type system.
What I personally do: on Tuesdays I throttle my CPU 4x (Throttle Tuesday), and on Thursdays I slow my network down to Fast 3G (3G Thursday).
The 4x throttle isn't really close to a low-end mobile device, but it helps a developer understand why certain code is slow, what exactly is wrong with it, and why requestAnimationFrame can be important.
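To illustrate why requestAnimationFrame matters on a throttled CPU, here's a hedged sketch (the helper name and chunk size are made up) that splits long-running work into frame-sized chunks, yielding back to the browser between chunks so paints and input handling aren't blocked:

```javascript
// Fallback scheduler so the sketch also runs outside the browser.
const schedule =
  typeof requestAnimationFrame === 'function'
    ? requestAnimationFrame
    : (cb) => setTimeout(cb, 16);

// Process items in small chunks, handing control back between chunks
// so the main thread can keep painting and handling input.
function processInChunks(items, fn, chunkSize = 100) {
  return new Promise((resolve) => {
    let i = 0;
    function step() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) fn(items[i]);
      if (i < items.length) schedule(step);
      else resolve();
    }
    step();
  });
}
```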
Fast 3G makes a developer understand the need for a CDN and an efficient caching strategy for their code.
Another untouched subject is serving modern content to modern browsers. People often disregard how simple this is and how much it can actually save in terms of bundle size (10-20%). Our current ecosystem does not YET allow our beloved npm libraries to publish in different transpiled states, but I think with the sunset of IE11 and the new Edge this should become a priority.
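The simplest version of this is the module/nomodule pattern: evergreen browsers load the modern bundle via `type="module"` and skip the `nomodule` script, while legacy browsers do the opposite (the paths here are illustrative):

```html
<!-- Modern browsers load this and ignore the nomodule script below -->
<script type="module" src="/bundle.modern.js"></script>
<!-- Legacy browsers don't understand type="module" and fall back -->
<script nomodule src="/bundle.legacy.js"></script>
```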
Now to touch on a subject people often don't think about: many run their SSR environment on an old Node version, but this can actually be a performance bottleneck. Writing your Node code in modern ESM and using the latest Node version can actually speed up your static HTML render. Note that your user does not directly feel a thing here.
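A minimal setup for this, assuming a recent Node version: flag the package as ESM so Node runs your modern source directly, without a transpile step:

```json
{
  "type": "module",
  "engines": { "node": ">=14" }
}
```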
Library choices often undergo the same story: we choose the most popular library, which, don't get me wrong, is popular for a reason, but these inject a ton of features we aren't using, and we go on with our easier lives. Why would we choose a state-management solution for an application that hardly has any state? We could just leverage our native useState/useReducer and put the result into context.
Don't overcomplicate simple things: why use a routing library of 15kB when there's one out there of 3kB doing the same thing? Most of the time there are a lot of needless abstractions that exist, for example, to conform to react-native; why would you need those? They won't be tree-shaken, because the core logic is altered for them. Why would you need complicated SSR logic when you are only using CSR? Questions like these should be present when choosing a library.
Something I'm getting very fond of is extensible libraries. It could be as dead simple as a mutable options object being exposed and some conventions being agreed upon. This way you can keep a lot of core logic out of your library code and allow your users to make their own additions.
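A hypothetical sketch of that pattern (all names here are made up): the library exposes its options object, and consumers mutate it, following an agreed convention, to add behaviour the library never has to ship:

```javascript
// library code (hypothetical): the core stays tiny; behaviour is
// extended through a shared, mutable options object.
const options = {
  transforms: [], // convention: consumers push (value) => value functions
};

function process(value) {
  // the core only applies whatever the consumer registered
  return options.transforms.reduce((acc, fn) => fn(acc), value);
}

// consumer code: extend the library without the library shipping it
options.transforms.push((v) => v.trim());
options.transforms.push((v) => v.toUpperCase());

process('  hello  '); // runs both registered transforms → 'HELLO'
```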
Choose what you need; if it's actively maintained, then you can be fairly sure it's a good choice.
Let's not be naïve: to optimise for our users we need hard numbers about the people using the website. Why would we provide IE9 support when we see all our users are on evergreen browsers? Why should we consider a perf-related issue fixed when it works on our machine?
So in the DX section I brought up module/nomodule; this is no silver bullet. If you go full evergreen and you see that half of your users are on really bad connections, you could go as far as offering a separate bundle for people on bad connections vs. good connections.
Disclaimer: the POC shows what's possible, not what's ideal.
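As a sketch of what that decision could look like (the bundle names and thresholds are assumptions, not an established convention), the Network Information API exposes an `effectiveType` you could branch on:

```javascript
// Decide which bundle to serve based on the Network Information API
// (navigator.connection; Chromium-only at the time of writing).
// The lite/full split below is purely illustrative.
function pickBundle(connection) {
  if (!connection) return 'full.js'; // API unavailable: default bundle
  const slow = ['slow-2g', '2g', '3g'].includes(connection.effectiveType);
  return slow || connection.saveData ? 'lite.js' : 'full.js';
}

// In the browser you would call it with the real connection object:
// const bundle = pickBundle(navigator.connection);
pickBundle({ effectiveType: '3g', saveData: false }); // → 'lite.js'
```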
So this is a bit anecdotal: I once faced an issue where we had complaints about the performance of a certain (HUGE) form. I dug right into it (with my throttle on), saw it improve noticeably, made a PR and got it into development. Someone else with another high-quality laptop reviewed it and said yes, it's faster. We moved to production and, surprise surprise, a week later someone still had the performance issues. We did not know what device or browser this was on, so we were a bit in the dark.
That's why, if you do things like browser testing and you face issues like these, I highly encourage you to limit your testing servers to your lowest-end device and write some Puppeteer tests to assert everything feels good. It's quite easy to collect details about browsers and device capabilities with some simple, non-intrusive tracking. This does not pose any problem under GDPR as long as you don't couple accounts to the tracked data.
This will mostly be about raw performance: our main thread is a precious single being, and all the work we pile onto it can cause issues in the visual aspects of our application.
When we talk about frontend there are always some easy wins to be made, depending on the use case. A few popular ones reside under the worker concept: we have both web workers and service workers to help us out with different things.
We can use web workers to offload expensive calculations; this prevents the main thread from being blocked for extended periods of time. Think of expensive calculations for an animation, for example. There are several experiments that do the first diffing phase for React in a web worker, the problem being that the communicated data is quite hard to get right. This might sound hard, but we have champions in our midst like developit, who built a tool to make all of this a lot easier: it's called workerize.
Note that web workers are session-dependent: their state is gone when the page goes away.
Service workers, on the other hand, can be used to efficiently cache resources and even network requests. It's hard to do this well, but Google has our back with workbox.
I have personally done some experiments with service workers to build a GraphQL cache; this would move things like @urql/exchange-graphcache into a service worker to offload calculations and to persist your cache across sessions without having to go through lengths with localStorage or indexedDB.
Service workers can sound like a lot of trouble: imagine you have a versioned API and your SW cached your JS bundle on v3; now v4 of the API fixes a certain bug, but yes, our user is stuck with the old bundle... We can build a wrapper around our fetch that forces the service worker to reload whenever the version endpoint responds that the current version of the bundle is too far behind.
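A hedged sketch of that wrapper (the `x-api-version` header, the version values, and the callback are all assumptions, not a real convention): the fetch implementation is injected so the idea is testable outside the browser:

```javascript
// Wrap fetch so that when the server reports a newer version than the
// one this bundle shipped with, we trigger a refresh. In real code
// `onStale` would call registration.update() and location.reload().
function createVersionedFetch(fetchImpl, buildVersion, onStale) {
  return async (url, options) => {
    const response = await fetchImpl(url, options);
    const serverVersion = response.headers.get('x-api-version');
    if (serverVersion && serverVersion !== buildVersion) {
      onStale(serverVersion);
    }
    return response;
  };
}
```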
Painting can be cheap in real life, but it can be pretty expensive in your browser if it has to be done by the CPU rather than the GPU.
Why would we render 1000 products at once when we can only see 15? I have seen countless projects where full endless lists are rendered, with a lot of jank on the main thread as a result.
We have nifty tools for modern browsers like IntersectionObserver, and libraries like react-window that can handle this for you.

As a plus, with IntersectionObserver you can even delay those parts from being loaded entirely: you can predownload their bundle and make its execution lazy. All these little things add up to a smooth application. Just imagine how much unused code sits in that initial bundle when all these invisible things only come on screen later.
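A minimal sketch of that deferral (the helper name is made up; the observer constructor is injected only so the idea can be exercised outside the browser, where you'd simply use the default):

```javascript
// Defer work until an element actually scrolls into view, then stop
// observing it. In the browser the default Observer parameter is the
// real IntersectionObserver.
function whenVisible(element, callback, Observer = IntersectionObserver) {
  const observer = new Observer((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        observer.unobserve(entry.target);
        callback(entry.target); // e.g. lazily execute a predownloaded chunk
      }
    }
  });
  observer.observe(element);
  return observer;
}

// Browser usage (illustrative):
// whenVisible(document.querySelector('#reviews'), mountReviews);
```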
Context can be seen as a broker where every value change is a postMessage: every subscriber receives every message, since there's no official way to subscribe to only certain parts of a context. This makes using context as a truly global state-management solution a no-go (for reasonably large applications).
You will have to build, in some way or another, a subscription system to handle redundant rerenders, be it that your provider is some client with internal subscriptions like urql, or just a layer on top of context like use-context-selector. Measures like these prevent you from over-using shouldComponentUpdate and memo, which both have their own cost.
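A framework-free sketch of that subscription idea (names are illustrative): listeners subscribe to a slice of the state via a selector and are only notified when their slice actually changes, which is the trick use-context-selector-style layers rely on:

```javascript
// A tiny store: subscribers pick a slice with a selector and are only
// called when that slice changes, not on every state update.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((l) => l(state));
    },
    subscribe(selector, listener) {
      let prev = selector(state);
      const wrapped = (next) => {
        const selected = selector(next);
        if (selected !== prev) {
          prev = selected;
          listener(selected);
        }
      };
      listeners.add(wrapped);
      return () => listeners.delete(wrapped); // unsubscribe
    },
  };
}

const store = createStore({ count: 0, user: 'anon' });
store.subscribe((s) => s.count, (count) => console.log('count is', count));
store.setState({ user: 'jovi' }); // count unchanged: listener stays quiet
store.setState({ count: 1 }); // logs: count is 1
```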
Let's stop over-using the non-WebGL DOM for things it's not meant for. An interactive seat-picker, for instance: why would you not use a canvas for this? Maybe your reasoning is "oh, we can't use React", but even that isn't true anymore: React offers custom reconcilers, and you can be really smart with this. Just look at react-three-fiber (yes, also for react-native), which implements a custom reconciler for three.js.
There's more than just the front-end facing the user: if your requests send back tons of data, you are ramping up downloads as well. Pagination can be an important concept for this reason alone.
GZIP/Brotli isn't limited to the bundle; we can also apply it to our response payloads. You can reason about it like this: we can gzip JS, so we can gzip JS Object Notation. Note that if you are dealing with a lot of low-end devices and your payloads are already small, this is not an optimization you want to go for, since decompression isn't free either. This is meant for big payloads, or medium payloads on bad connections.
Putting every single thing in perspective of your audience can make hand-tailored applications very appealing and ergonomic.
All of this can seem really elaborate: "oh well, I've been reading what this guy is saying, but where do I start? How do I find these things when a problem actually occurs?"
We have the Chrome developer tools to show us how long certain paints take, and the same goes for functions. We can look at downloads in the Network tab, which shows us the size of network requests: assets, the bundle, ... The Coverage panel can also show us how much code of a certain bundle goes unused at initial load. Last but not least, we can record memory heaps to detect leaks; this happens in the Memory tab. It is a little harder to navigate, but once you get used to it, it's an awesome tool for finding where exactly you are going wrong.
In React land we have an awesome profiler; it can actually show us what rerendered, how long it took, and so on. It's an amazing tool for assessing performance.
I'm in no way vouching for some hard performance/size culture; I'm advocating tailoring your application to the needs of your audience. Nothing more and nothing less: most of the size wins are easy to get if you just make some good agreements with your team before you start.
Effort between brackets
- serve GZIP and Brotli (low)
- add <link rel="preconnect" /> for your backend (low)
- offload expensive work to web workers (low)
- efficient caching strategies can be achieved with service workers (medium)
- track devices/connections and test for that (medium)
- for network aligned requests check if Suspense/Concurrent can traverse deeper to aggregate requests (I guess not) --> add explanation of request-aggregation.