For starters, let's cut load time by using the browser's caching mechanism. The browser can cache responses and serve the cached data back to the user on later visits. On top of that you have localStorage and sessionStorage: they let you keep part of your data on the client, which speeds up subsequent page loads in an SPA and reduces the number of server requests.
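As a minimal sketch of the idea, here is a small cache-with-TTL helper. In a real SPA the `storage` argument would be `window.localStorage` or `sessionStorage`; the in-memory stand-in, the key name, and the `load` callback are illustrative assumptions so the logic can run outside a browser too.

```javascript
// Cache the result of an expensive load in a Web Storage-like object with a
// time-to-live, so repeat lookups within the TTL skip the server entirely.
function getWithCache(storage, key, ttlMs, load) {
  const raw = storage.getItem(key);
  if (raw !== null) {
    const entry = JSON.parse(raw);
    if (Date.now() - entry.savedAt < ttlMs) return entry.value; // fresh hit
  }
  const value = load(); // miss or stale entry: go to the server (or recompute)
  storage.setItem(key, JSON.stringify({ value, savedAt: Date.now() }));
  return value;
}

// In-memory stand-in for localStorage (same getItem/setItem contract):
const memoryStorage = (() => {
  const m = new Map();
  return { getItem: k => m.get(k) ?? null, setItem: (k, v) => m.set(k, v) };
})();

let calls = 0;
const load = () => { calls += 1; return { user: 'alice' }; };
getWithCache(memoryStorage, 'profile', 60000, load);
getWithCache(memoryStorage, 'profile', 60000, load);
console.log(calls); // the loader ran only once; the second call was a cache hit
```

Swapping `memoryStorage` for `localStorage` makes the cached value survive page reloads, which is exactly what saves the extra server queries.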
It is often said that you should optimize code for every environment it runs in. Experience shows, however, that this consumes a lot of time and effort without a tangible payoff, so I suggest treating it as a recommendation only.
Naturally, it is a good idea to eliminate all memory leaks. I will leave this outside the scope of this article, since everyone knows how to do it; if not, just Google it.
Another helper of ours is the Web Worker. Web Workers are browser-managed threads that can execute JS code without blocking the event loop. They let you perform CPU-intensive, time-consuming tasks without freezing the UI thread. When you use them, calculations genuinely run in parallel: this is true multithreading. There are three types of Web Workers:
- Dedicated Workers — Instances of Dedicated Workers are created by the main script, and only that script may exchange data with them.
- Shared Workers — A Shared Worker may be accessed by any context with the same origin as the Worker (e.g., different browser tabs, iframes, and other workers).
- Service Workers — Event-driven Workers registered against an origin and path. They can control the web page they are associated with by intercepting and modifying navigation and resource requests, and by caching data in a very fine-grained way. All this gives us precise control over the application's behavior in situations such as the network being unavailable.
You can easily find information on how to use them all over the Internet.
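To make the first kind concrete, here is a hedged sketch of a Dedicated Worker built inline from a Blob. `Worker`, `Blob`, and `URL.createObjectURL` are real browser APIs; the guard and the fallback branch are there only so the snippet also runs in environments where `Worker` does not exist, and the naive Fibonacci is just a stand-in for any CPU-heavy task.

```javascript
// Deliberately CPU-heavy naive Fibonacci — the work we want off the UI thread.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

if (typeof Worker !== 'undefined') {
  // Build the worker's script from the same function plus a message handler.
  const source = `${fib.toString()}
    onmessage = (e) => postMessage(fib(e.data));`;
  const worker = new Worker(URL.createObjectURL(new Blob([source])));
  worker.onmessage = (e) => console.log('fib =', e.data); // result arrives asynchronously
  worker.postMessage(35); // the heavy call runs in parallel; the event loop stays free
} else {
  console.log('fib =', fib(20)); // fallback for non-browser environments
}
```

The key point is that `postMessage` returns immediately: the main thread keeps handling user input while the worker grinds through the computation.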
So, now that we have an understanding of the approaches, bells and whistles, let's talk about the code itself.
First, try not to query the DOM tree over and over, since it is a CPU-intensive operation. Imagine that your code repeatedly manipulates a certain element. Instead of holding a reference to that element, you keep yanking the DOM tree to search for it every time. Look it up once, keep the reference, and work through it — that is how the caching pattern applies in code.
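A hedged sketch of the difference. `document.querySelector` is the real browser API; here a tiny stand-in object counts lookups so the contrast is visible even outside a browser, and the `#counter` selector is purely illustrative.

```javascript
// Stand-in for `document` that counts how many times the tree is searched.
let lookups = 0;
const doc = {
  querySelector: () => { lookups += 1; return { textContent: '' }; },
};

// Anti-pattern: a fresh DOM query on every single update.
for (let i = 0; i < 100; i++) {
  doc.querySelector('#counter').textContent = String(i);
}
const afterBad = lookups; // 100 tree searches for 100 updates

// Pattern: query once, cache the reference, reuse it.
const counter = doc.querySelector('#counter');
for (let i = 0; i < 100; i++) {
  counter.textContent = String(i);
}
console.log(afterBad, lookups - afterBad); // 100 lookups vs just 1
```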
Step two: get rid of global variables. ES6 gave us a great discovery of humanity called block-scoped variables (in simpler terms, variables declared with let and const instead of var).
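A small sketch of the difference: `let` and `const` are scoped to the block, while `var` is function-scoped, so var-declared temporaries leak out of the block they were created in.

```javascript
function scopeDemo() {
  if (true) {
    var leaky = 'declared with var';     // hoisted to the whole function
    let contained = 'declared with let'; // exists only inside this block
  }
  // `typeof` is safe to apply to identifiers that are not in scope:
  return [typeof leaky, typeof contained]; // ["string", "undefined"]
}
console.log(scopeDemo()); // the var escaped the block, the let did not
```

The same discipline, applied at file level, keeps temporaries from becoming accidental globals.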
And last, but not least. Unfortunately, not everyone has enough experience to appreciate this subtle point. I am not against recursive functions as such. Yes, they shrink code size, but there is a catch: these recursive functions often end up with no exit condition — it is simply forgotten. As they say, you can smash a finger with a hammer, but that is not the hammer's problem, it is the finger owner's. Or, as in that cat meme: recursive functions are not bad, you just have to cook them properly.
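Here is a minimal sketch of a "properly cooked" recursive function: an explicit base case plus an input guard. Without the `n <= 1` check this would recurse until the call stack overflows.

```javascript
function factorial(n) {
  if (!Number.isInteger(n) || n < 0) {
    throw new RangeError('n must be a non-negative integer'); // guard bad input
  }
  if (n <= 1) return 1;        // exit condition: stops the recursion
  return n * factorial(n - 1); // each call moves strictly closer to the base case
}

console.log(factorial(5)); // 120
```

The two invariants to check in any recursive function: there is a reachable base case, and every recursive call makes measurable progress toward it.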
Despite all the power of today's front-end applications, don't forget about the basics. A vivid example of wastefulness is adding new elements at the start of an array. Those who know, know; for everyone else, here is the explanation. Every array element has its own index, so when we insert a new element at the start of an array, the sequence of actions is as follows:
- Identification of array length
- Enumeration of each element
- Shifting each array element
- Insertion of a new element into the array
- Re-indexation of array elements.
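The steps above are why building an array front-first with `unshift()` is quadratic: every insertion shifts all existing elements. A common workaround, sketched below, is to append with `push()` (amortized constant time) and reverse once at the end.

```javascript
// Costly: each unshift() re-shifts and re-indexes everything already stored.
const viaUnshift = [];
for (let i = 0; i < 5; i++) viaUnshift.unshift(i);

// Cheap: push() appends without shifting; one reverse() restores the order.
const viaPush = [];
for (let i = 0; i < 5; i++) viaPush.push(i);
viaPush.reverse(); // a single O(n) pass instead of n shifts

console.log(viaUnshift); // [4, 3, 2, 1, 0]
console.log(viaPush);    // [4, 3, 2, 1, 0] — same result, far less work at scale
```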