Replacing FaunaDB with DynamoDB as the Backend of a GraphQL API with Zero Client Changes

Chris Biscardi


We take a set of DynamoDB queries that insert, update, and query for TODOs and replace the entire backend of our GraphQL API, which currently uses FaunaDB. We cover the ReturnValues option for DynamoDB's put and update operations, implement transformer functions to coordinate the stored representation in DynamoDB with our business objects, and then get into Netlify. Netlify doesn't allow AWS_ACCESS_KEY_ID as an environment variable name. Once we fix that, we set up a pull request on GitHub, which results in a Deploy Preview that includes our new GraphQL function deployment!

Instructor: [0:00] Now that we've successfully created, updated and queried our todos from DynamoDB, we'll take our logic and replace the GraphQL backend with DynamoDB. We'll completely get rid of all of the Fauna logic and instantiate the AWS client instead using the same environment variables, the same region and the same table name.

[0:20] Note that we'll also have to add aws-sdk and uuid as dependencies to our GraphQL function. The first query we will replace is the todos query. Noting that its return value is a list of objects with an ID, text, and done, we'll get rid of all of the Fauna logic and drop in our DynamoDB DocumentClient query.

[0:40] Note that we have to swap the placeholder value that we used before with the actual user ID. If successful, the result will contain result.Items. If we go back to our test, run the query, and look at the return value, we can see that each item has PK, SK, and data.
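The replacement query might look like the following sketch, assuming a single-table layout where the partition key is built from the user ID and todo sort keys start with a "todo" prefix; the key names, prefixes, and "todos" table name here are assumptions, so match them to your own table:

```javascript
// Build the params for the todos query described above. The "user#" and
// "todo" key formats are assumptions -- adjust them to match your table.
const todosQueryParams = (tableName, userId) => ({
  TableName: tableName,
  KeyConditionExpression: "#pk = :pk AND begins_with(#sk, :todo)",
  ExpressionAttributeNames: { "#pk": "PK", "#sk": "SK" },
  ExpressionAttributeValues: { ":pk": `user#${userId}`, ":todo": "todo" },
});

// usage (inside the resolver):
//   const result = await docClient.query(todosQueryParams("todos", user.id)).promise();
//   result.Items is the list of { PK, SK, data } objects shown in the test run.
```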

[0:57] This is different from our GraphQL schema. Since we've swapped the backend but haven't updated the client, we want to massage this data into something that fits our GraphQL schema.

[1:07] Our todo comes in as PK, SK, and data. Data can be spread into the resulting object because it has all the values that we would expect. We can also allowlist values into this object by picking out specific keys, such as text, from the Dynamo item.

[1:20] The ID of our todo is the sort key. We can strip the todo prefix and get the UUID with a .replace call. Our todo type doesn't return an owner, so we don't need the PK. Note that the DocumentClient could fail here; we would want to capture that error.
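The transformer described above might be sketched like this, assuming items come back as { PK, SK, data } with sort keys of the form "todo" plus a UUID (the exact prefix is an assumption; match it to what you write on insert):

```javascript
// Map a stored DynamoDB item to the shape our GraphQL schema expects.
const toTodo = ({ PK, SK, data }) => ({
  // spread the stored data (text, done, etc.) into the result
  ...data,
  // the todo's id is the sort key with the "todo" prefix stripped
  id: SK.replace("todo", ""),
});
```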

[1:35] The next resolver we will replace is the addTodo mutation. It takes the text value, sets the owner, sets done to false, and returns everything. Again, we'll get rid of the entire Fauna query and replace it with the put call we wrote earlier.

[1:51] This creates a new todo for the user ID that comes in from context, with a new UUID, a created_at, an updated_at, done, and the text. Interestingly, we don't get anything back from DocumentClient.put.

[2:06] If we look at the ReturnValues option in the parameters of PutItem, which is the underlying implementation of put, we can see that the default value is NONE, which matches the experience we had last time. The other option, ALL_OLD, means that if PutItem overwrote an attribute name-value pair, the content of the old item is returned.

[2:25] Note that there are a number of other return values that PutItem doesn't support, such as UPDATED_OLD, ALL_NEW, or UPDATED_NEW. However, because we're creating this todo for the first time, we can return that value ourselves, since our GraphQL API expects the new todo.

[2:42] Note that created_at and updated_at currently aren't in our GraphQL API, so we can remove them. Done is always set to false, and text will come through.

[2:50] Note that we have a little bit of a bug here: we have to get the ID, but we can't call uuid() again because that would generate a new UUID. We'll pull the uuid() call out into a variable and use that value in the return value for our todo.

[3:05] Because we've created a brand-new todo, we know that the object that we're inserting is equal to the object that we're returning, so we don't have to ask the database to return it to us to know what the values are. Finally, we'll update.

[3:17] Unlike PutItem, UpdateItem allows us to specify ALL_NEW as the return value. If we look at this in our test file and run the update on the same todo we ran last time, we can see that we get an Attributes key back containing all of the item's attributes.

[3:32] Again, we'll get rid of the entire Fauna query, we'll replace the user, we'll replace the todo with the UUID, we'll set data.done = true, and we'll return all the new values. That means result, if successful, is going to have a result.Attributes.

[3:47] We can destructure the PK, the SK, and data off of result.Attributes and return it in the same way that we did above. Note that this mapping from DynamoDB representation to business object type can be split into its own function and reused across all of these queries if we need to.
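The update params might be sketched as follows, assuming the same PK/SK layout as before; the key formats and attribute names are assumptions, so adjust them to your table:

```javascript
// Mark a todo done. ReturnValues: "ALL_NEW" makes UpdateItem hand back
// every attribute of the item as it exists after the update, under
// result.Attributes.
const updateTodoDoneParams = (tableName, userId, todoId) => ({
  TableName: tableName,
  Key: { PK: `user#${userId}`, SK: `todo${todoId}` },
  UpdateExpression: "SET #data.#done = :done",
  ExpressionAttributeNames: { "#data": "data", "#done": "done" },
  ExpressionAttributeValues: { ":done": true },
  ReturnValues: "ALL_NEW",
});

// usage (inside the resolver):
//   const result = await docClient.update(updateTodoDoneParams("todos", user.id, id)).promise();
//   const { PK, SK, data } = result.Attributes;
```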

[4:04] As our app gets larger and larger, this can be a better and better idea. We have the representation in Dynamo, a function to transform that representation into our business objects that matches the GraphQL API, and the representation that fits the GraphQL API.

[4:21] Now that we've made all of our changes, we can push this up to Netlify. One quick reminder, when we push this up to Netlify, we're going to have to go set our environment variables in the Netlify console again.

[4:31] Because we chose to not create an IAM user with restricted permissions, we're still working with our root key and our root access token. If this bothers you, you can watch me do it in this video, because in the next video, what we're going to do is have Serverless Framework handle all of that for us.

[4:49] Note that if we try to set AWS_ACCESS_KEY_ID in the Netlify environment, Netlify tells us that AWS_ACCESS_KEY_ID is a reserved environment variable name. We'll have to name it something else, and we mustn't forget to update our code so that we're reading the new environment variable.
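One way to handle the rename is to read the credentials from prefixed names; the MY_AWS_* names below are hypothetical, and any non-reserved names work:

```javascript
// Netlify reserves AWS_ACCESS_KEY_ID (and friends) as environment variable
// names, so we read our credentials from prefixed names instead.
// The MY_AWS_* names are hypothetical -- pick any non-reserved names.
const awsConfig = {
  accessKeyId: process.env.MY_AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.MY_AWS_SECRET_ACCESS_KEY,
  region: process.env.MY_AWS_REGION,
};

// then pass awsConfig when instantiating the client, e.g.
//   new DynamoDB.DocumentClient(awsConfig)
```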

[5:03] Since our Netlify settings are set to deploy only pull requests, we'll create a pull request using hub. We can see that our Deploy Preview is building. Now we have a new PR. We can go to the Deploy Preview, log in, and check the dashboard. Interestingly, our API request is still going to production and not our Deploy Preview. This is because we hardcoded the URL in gatsby-browser.js.

[5:24] If we want this to work, we're going to have to take the hostname off. If we log in on the new preview with the proper URL, you can see that we don't have any user todos. Also note that because we didn't specify any sorting in our DynamoDB query, the results come back unsorted.

[5:40] When we add todos to our DynamoDB table and then requery to fetch and render them, the order can change. Other than that, everything works as expected. We didn't have to change any client code except for the Deploy Preview URL, which was completely unrelated to our backend changes.

[6:00] If we take a moment to check our console in the DynamoDB table section, we can see that we have four todos for this user. Everything's working, and we've completely swapped our backend for a more performant, less costly solution that can enable more features in the future without having to change any of the client.