Handling network waterfalls
Category: Technical, Frontend, Performance
Created: 2020-01-26 · Updated: 2024-02-12
tl;dr: waiting on and cancelling network requests
In single-page applications, the client fetches the data it needs from an API service instead of the browser receiving a pre-rendered page. As the application and the relations between its data grow more complex, the first load has to wait for the rendering logic, then the view logic, and then the data the view needs. This creates a waterfall. There are a couple of ways to handle it.
Backend for frontend
If you own the product, the easiest solution is to create a new service dedicated entirely to the front end. It costs some time and effort to maintain, but it's worth it. All of its APIs exist only to serve the front end, so all the data fetching for the first load goes in here. GraphQL is a great tool for this.
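As a sketch of the idea, a BFF endpoint can fan out to the internal services and hand the client one payload shaped for the first render. The service clients below are stubs standing in for real ones:

```javascript
// Hypothetical BFF aggregation: fan out to internal services,
// return a single payload for the first render.
// fetchUser and fetchWorkspace are stubs for real service clients.
const fetchUser = async (id) => ({ id, name: "ada" });
const fetchWorkspace = async (id) => ({ id, title: "acme" });

async function initialLoad(userId, workspaceId) {
  // The client makes one round trip instead of two.
  const [user, workspace] = await Promise.all([
    fetchUser(userId),
    fetchWorkspace(workspaceId),
  ]);
  return { user, workspace };
}
```

The client then calls one endpoint and renders, instead of chaining requests itself.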
HTTP/2 Multiplexing
HTTP/2 can multiplex many requests in parallel over a single connection and receive the responses back in any order (under HTTP/1.1, browsers were limited to around 6 connections per host). This reduces the time spent loading one asset at a time, but the tradeoff is that we now have to handle the order in which the data arrives, and the case where some of the requests fail.
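To sketch the ordering tradeoff: if we key each response by name rather than by position, the order in which responses arrive stops mattering. The requests below are simulated with timers:

```javascript
// Simulated requests with different latencies: responses resolve
// out of order, but keying results by name makes arrival order irrelevant.
const request = (name, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(name), ms));

async function loadAll() {
  const results = {};
  await Promise.all(
    [request("user", 30), request("teams", 10), request("workspace", 20)].map(
      (p) => p.then((name) => { results[name] = true; }),
    ),
  );
  return results; // all three keys present, whatever order they arrived in
}
```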
An Example workspace
Let's say we have an app that requires user data, workspace data, team data, team resources, and access control for each resource. We need to load these resources in order.
Workspace data flow
Getting the data for these assets one after another takes a lot of time, so let's group these requests and send them in parallel. We'll do this incrementally, learning about parallel requests, aborting requests, retries, callbacks, streams, and observables.
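For contrast, here is the naive sequential version of the workspace load — each await starts only after the previous one resolves, which is exactly the waterfall we want to avoid. The fetchers are stubs:

```javascript
// Stub fetchers standing in for real API calls.
const getUser = async () => ({ id: 1 });
const getWorkspace = async (userId) => ({ owner: userId });
const getTeams = async () => ["core"];

// Sequential load: total time is roughly the SUM of all three latencies.
async function loadSequential() {
  const user = await getUser();
  const workspace = await getWorkspace(user.id); // genuinely depends on user
  const teams = await getTeams(); // does NOT depend on the others — wasted wait
  return { user, workspace, teams };
}
```

Only the second call truly depends on the first; the third waits for no reason. That independent work is what we'll parallelize next.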
.all(), .allSettled()
JavaScript's Promise exposes the static methods .all() and .allSettled(). .all() takes an iterable of promises and returns a single Promise that fulfills when all of the inputs fulfill, resolving to an array of their results. .all() rejects as soon as any input rejects; it rejects with that first rejection reason alone, showing neither which promise failed nor the values of the ones that resolved.
// when all requests succeed
try {
  // getWorkspaceDetails and getWorkspaceUser are in-flight promises
  const workspace = await Promise.all([getWorkspaceDetails, getWorkspaceUser]);
  const [workspaceDetails, workspaceUser] = workspace;
} catch (err) {
  // reached only if one of the requests rejects
}
// when one request fails
try {
  const getWorkspaceId = Promise.resolve(511);
  const getUser = new Promise((resolve, reject) => {
    setTimeout(() => {
      reject("unstable!");
    }, 100);
  });
  const values = await Promise.all([getWorkspaceId, getUser]);
  console.log(values);
} catch (err) {
  console.log("reason", err); // reason unstable!
}
whereas .allSettled() works similarly to .all() but always fulfills, resolving to an array of objects that each describe the outcome of one input promise.
(async () => {
  try {
    const getWorkspaceId = Promise.resolve(511);
    const getUser = new Promise((resolve, reject) => {
      setTimeout(() => {
        reject("unstable!");
      }, 100);
    });
    const values = await Promise.allSettled([getWorkspaceId, getUser]);
    console.log(values);
    /*
    [
      {
        "status": "fulfilled",
        "value": 511
      },
      {
        "status": "rejected",
        "reason": "unstable!"
      }
    ]
    */
  } catch (err) {
    console.log("reason", err); // No error thrown
  }
})();
Now that we know which API failed, we can decide what to proceed with and what to cancel!
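One way to act on that, sketched with plain array methods: partition the settled results into successes and failures, then keep the successes and retry or cancel based on the failures:

```javascript
// Partition Promise.allSettled results so we can keep the successes
// and decide what to retry or cancel for the failures.
function partitionSettled(results) {
  const fulfilled = results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
  const rejected = results
    .filter((r) => r.status === "rejected")
    .map((r) => r.reason);
  return { fulfilled, rejected };
}
```

Fed the output from the example above, this yields `{ fulfilled: [511], rejected: ["unstable!"] }`.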
Abort signals and retries
We can cancel in-flight requests from the front end once they are no longer required. Say a user is on a page with a network request in progress and navigates to a different page; the old page's request is no longer needed, and we can cancel it with an abort signal.
let getNewsController;
const onNavigate = () => {
  if (getNewsController) {
    getNewsController.abort();
  }
};
const getNews = async () => {
  try {
    getNewsController = new AbortController();
    const { signal } = getNewsController;
    return await fetch(url, { signal });
  } catch (err) {
    // an aborted fetch rejects with an AbortError
  }
};
Another example: two interdependent requests need to be made. If one of them fails, we no longer need the other, so we can cancel it and retry the pair.
let getCurrentNewsController;
let getViewsController;
let retries = 0;
const onRetry = () => {
  if (getCurrentNewsController) {
    getCurrentNewsController.abort();
  }
  if (getViewsController) {
    getViewsController.abort();
  }
  if (retries < 3) {
    retries += 1;
    // each of these recreates its own AbortController before fetching
    getCurrentNews();
    getViews();
  }
};
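The fixed retry counter above can be generalized. Here is a minimal sketch of a retry helper with exponential backoff — the delay values and attempt cap are arbitrary choices, not from the original code:

```javascript
// Retry a promise-returning function up to `max` times,
// doubling the delay between attempts (exponential backoff).
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function retryWithBackoff(fn, max = 3, baseMs = 100) {
  let lastError;
  for (let attempt = 0; attempt < max; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await wait(baseMs * 2 ** attempt); // 100ms, 200ms, 400ms, ...
    }
  }
  throw lastError; // give up after `max` attempts
}
```

Backoff matters for flaky networks: hammering a struggling server with instant retries usually makes things worse.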
Observables and streams
Now imagine scaling the code above to a larger code base. Pretty hard to maintain! That's where the Observable pattern comes in: we can leverage it for streams, retries, and chained callbacks.
RxJS is what we will be using today.
import { of, Subject } from "rxjs";
import { fromFetch } from "rxjs/fetch";
import { switchMap, retry, catchError, takeUntil } from "rxjs/operators";

const workspaceUser$ = fromFetch(url).pipe(
  switchMap((response) => {
    if (response.ok) {
      return response.json();
    }
    // throw so that retry/catchError downstream can react to HTTP errors
    throw new Error(`Error ${response.status}`);
  }),
);
// -- at the view level --
const workspaceUserAbort = new Subject();
workspaceUser$
  .pipe(
    retry(3), // resubscribe up to 3 times on error
    catchError((err) => of({ error: true, message: err.message })), // after the retries, surface the error as a value
    takeUntil(workspaceUserAbort), // abort by calling workspaceUserAbort.next()
  )
  .subscribe({
    next: (result) => {
      // follow with other apis
    },
    complete: () => console.log("done"),
  });
// ... similarly for other apis
To achieve parallel requests in RxJS we use forkJoin. It returns an observable, and we can apply the same standard operators to it.
const workspace$ = forkJoin({
  user: workspaceUser$,
  teams: workspaceTeams$,
});
workspace$
  .pipe(
    retry(3), // retry up to 3 times on error
    takeUntil(workspaceAbort),
  )
  .subscribe({...});
Much better organized, and the code is easily extensible for other use cases when required too...
There is a new player in town, Effect; it's incredibly cool, do check it out!