Batch Database Requests with a GraphQL API

Jacob Paris
Published 2 years ago
Updated 2 years ago

Querying a thousand posts doesn't have to mean a thousand individual database requests.

We're going to build a batching layer that adds a millisecond pause to each resolver while it gathers all the required IDs, then fetches them in a single request.

Instructor: [0:00] Batching is the second layer of abstraction, where instead of resolving requests immediately, we pause for a bit. Then we take every ID that was requested over that pause and do one single query to fetch all of them. Let's call that activeQuery, and we'll make it null by default.

[0:16] To pause, we need to make a new Promise with a short timeout. One millisecond is OK here. A longer pause will allow bigger batches but will also make the request slower. Our batch will be every ID in the promise cache, which gives us everything we've requested so far.
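The batching window described above can be sketched like this. This is a minimal illustration, assuming a module-level `cache` of ID-to-promise entries and an `activeQuery` slot as described in the lesson; the names `pause` and `startBatch` are illustrative, not from the lesson itself.

```javascript
// Sketch of the batching window, assuming a module-level `cache` mapping
// each requested ID to its pending promise, and an `activeQuery` that is
// null until a batch has been dispatched.
let activeQuery = null;
const cache = new Map();

// Resolves after a short timeout (~1 ms), long enough for every resolver
// in the current request to register its ID in the cache.
function pause(ms = 1) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function startBatch(fetchByIds) {
  await pause(); // wait while resolvers fill the cache
  const ids = [...cache.keys()]; // every ID requested during the pause
  return fetchByIds(ids); // one query for the whole batch
}
```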

[0:32] Bring the database query code here, and we'll tweak it to accept a list of IDs instead of just a single one. That's going to be SELECT * FROM users WHERE id IN (ids). .where() becomes .whereIn(), and pass it all of the IDs from the promise cache. We don't just want the first item anymore, so we can remove .first(), and this becomes activeQuery instead.
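The batched query can be simulated against an in-memory table, which is what the sketch below does; with knex, the equivalent would be along the lines of `db.select('*').from('users').whereIn('id', ids)`, with no `.first()` since we now want every matching row. The table contents and the name `fetchUsersByIds` are invented for illustration.

```javascript
// Stand-in for the users table so the example runs without a database.
const usersTable = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
  { id: 3, name: 'Carol' },
];

// Equivalent of: SELECT * FROM users WHERE id IN (ids)
// Returns every matching row, not just the first one.
async function fetchUsersByIds(ids) {
  return usersTable.filter((user) => ids.includes(user.id));
}
```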

[0:56] If we're fetching the list of IDs from the cache, then we're always going to be missing the current one if it's not in the cache yet. There are lots of ways to fix that, but we can just add it to the cache and be done with it.

[1:08] The promise for each item key is going to start with the common activeQuery, which returns every item in this batch, and then branch off with .then() to find and return the specific item for this ID.

[1:23] We resolve the promise for this key, which is being returned from the load method for this key. Last thing, once this batch has started, we don't need to dispatch it again, so only do that if we haven't set activeQuery yet. Let's test this out.
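Putting the pieces together, a minimal self-contained version of the loader might look like the following. This is a sketch, not the lesson's exact code: the in-memory `users` table, the `fetchCount` counter, and the names `loadUser` and `fetchUsersByIds` are all illustrative stand-ins for the real database layer.

```javascript
// Stand-in data and a counter so we can verify only one batched
// "database" query fires per window.
const users = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
];
let fetchCount = 0;

async function fetchUsersByIds(ids) {
  fetchCount++; // one increment per batched query
  return users.filter((u) => ids.includes(u.id));
}

const cache = new Map(); // id -> Promise<user>
let activeQuery = null;  // the in-flight batch, or null

function loadUser(id) {
  if (cache.has(id)) return cache.get(id);

  // Only dispatch a batch if one isn't already pending; later calls
  // within the 1 ms window reuse the same activeQuery.
  if (!activeQuery) {
    activeQuery = new Promise((resolve) => {
      setTimeout(() => {
        activeQuery = null; // the next call starts a fresh batch
        resolve(fetchUsersByIds([...cache.keys()]));
      }, 1);
    });
  }

  // Each key's promise branches off the shared batch result.
  const promise = activeQuery.then((rows) => rows.find((u) => u.id === id));
  cache.set(id, promise); // register this ID before the batch fires
  return promise;
}
```

Note that `cache.set` runs synchronously, before the 1 ms timeout fires, so the current ID is always included when the batch reads `cache.keys()`.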

[1:40] We have one request for posts and one request for all of the users at once. Before, a thousand posts would do a thousand individual author requests. Now it will be one request no matter how many posts we have.

[1:51] Running this a second time shows our cache still works across multiple network requests. Since there's no guarantee those requests come from the same person, that means a logged-in user could cache some sensitive data that guest users could then access by mistake. We're going to have to fix that.
