We finish off our GraphQL API by creating a database, collections, and indexes in FaunaDB. We explore calling out to Fauna with the JavaScript driver before integrating those queries into our GraphQL resolvers.
Instructor: [0:00] Our GraphQL API is still using in-memory data structures to store our to-dos. We'll move that into a database so that it persists when our serverless functions get torn down and new ones get started.
[0:10] The database we're going to use is FaunaDB. After creating an account, we'll create a new database. Our serverless Netlify to-dos database needs a collection. This collection will be the to-dos collection.
[0:28] We don't need to worry about our document history or the TTL, so we'll leave those alone. The collection index shows us all the documents in the collection. We'll leave it on right now since we're in development, but we might want to turn this off in production since we don't really have a use case for it.
[0:43] Now that we have our to-dos collection, let's install the FaunaDB driver. While we're working, we'll make a little test directory. Our test directory and file make it a little easier for us to play with FaunaDB's query language while we're still figuring out what we're doing.
[1:00] The client needs a FaunaDB secret to access the database, so I'll create a .envrc file. If we go to the Security tab inside of our database, we can create a new key. There are a number of different key roles to choose from. We're going to choose Server.
[1:15] Note that we can revoke our key later, as I'll do after you've seen it on this screencast. Note that I'm getting this direnv error: the .envrc is blocked. Run direnv allow to approve its content.
[1:26] The tool I'm using to load environment variables without spamming them into my console and displaying them in my history is called direnv. You don't have to use it if you have a different way of doing this. It is quite nice, as it will export variables for you as you go into certain directories and unexport them as you go out.
[1:44] By default, if any changes are made to a .envrc file, direnv doesn't run it until you explicitly allow it to. Now we have the Fauna environment variable. This enables our index.js to access it on process.env.FAUNA.
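For reference, here is a minimal sketch of what that .envrc could look like. The variable name FAUNA is an assumption based on how index.js reads it, and the value shown is only a placeholder for the server key from the Security tab:

```sh
# .envrc: direnv loads this when we cd into the directory and unloads it when we leave
# FAUNA is the name we read from process.env (assumed name); never commit the real key
export FAUNA="<your-fauna-server-secret>"
```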
[1:56] You may be more familiar with tools like dotenv, which are language-specific. Here we've used the FaunaDB driver to run a Create command on the to-dos collection and insert a new to-do.
[2:09] We've included an owner attribute, which for now is hard-coded but will later be the sub from our user context that we dealt with when we worked with Netlify Identity in our GraphQL server.
[2:19] Otherwise, this structure should look pretty familiar, as we've been working with to-dos with text and done for a while now. If we run node index.js, we can see that client.query returns a promise, so we need to await it.
[2:33] If we await it, we've successfully inserted a to-do. If we look at our collections, because we ran the script twice, we have two to-dos, even though they look exactly the same. Keep that in mind when you're working with FaunaDB: two documents with exactly the same contents are not the same document.
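Here is a sketch of what the scratch index.js might look like at this point. The collection name todos, the FAUNA environment variable name, and the sample text are assumptions; the shape follows what we described above:

```js
// index.js: scratch script for exploring FQL with the JavaScript driver
const faunadb = require("faunadb");
const q = faunadb.query;

// The secret comes from the .envrc we loaded with direnv (variable name assumed)
const client = new faunadb.Client({ secret: process.env.FAUNA });

const main = async () => {
  // client.query returns a promise, so we await it
  const result = await client.query(
    q.Create(q.Collection("todos"), {
      data: {
        text: "a todo created from the driver",
        done: false,
        owner: "userTest", // hard-coded for now; later this is the Identity sub
      },
    })
  );
  console.log(result); // { ref, ts, data } for the newly created document
};

main().catch(console.error);
```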
[2:48] If we wanted to update one of the fields, we would have to use the Ref, or delete one of these from the console and insert a second one. I want to save this structure for later, when we're dealing with our GraphQL API and what we have to return. In the meantime, we can play around a little bit and check out what the Ref value is.
[3:04] Note that we can get the ID of the to-do if we do results.ref.id; the Ref itself stringifies as the expression that we would write in the console. Now that we've inserted a couple of results and we know what the structure of the response looks like, we'll try to query those results.
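A quick sketch of those two pieces, continuing inside the same async main as above (the Ref ID is only a placeholder):

```js
// The Ref stringifies like the console expression; .id is the raw document ID
console.log(result.ref.id);

// Read a single document back by its Ref (placeholder ID copied from the console)
const todo = await client.query(
  q.Get(q.Ref(q.Collection("todos"), "<ref-id-from-console>"))
);
console.log(todo.data); // { text, done, owner }
```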
[3:21] Next, let's update the done attribute on one of the to-dos that we have stored. In the console, we can copy the Ref to the clipboard for the third to-do we inserted and run our script. We get back a to-do data structure where done looks like it has been set to true. If we refresh the console, we can see that it has.
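The update step, as a sketch, still in the same scratch script; the Ref ID is the one copied from the console:

```js
// Update the done field on an existing document by its Ref
const updated = await client.query(
  q.Update(q.Ref(q.Collection("todos"), "<ref-id-from-console>"), {
    data: { done: true },
  })
);
console.log(updated); // the to-do document, now with done: true
```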
[3:39] Finally, we'll want to query to-dos by user. If we look at our indexes, we can see that we have the collection index that we set up when we specified the collection. This is called all_todos, and it returns all of the to-dos, no matter the user.
[3:52] Let's take a second to insert a new to-do for userTest2. Now we have two to-dos for userTest and one to-do for userTest2. We'll create a new index called todos_by_user. In the terms, we want this to be owner, which has to be data.owner because of the way Fauna stores data. These terms are the values that we're going to filter on.
[4:17] In values, we'll want the Ref to come back, the text to come back, and also the done state to come back. An index in a database like Fauna is basically a copy of our data. What we've done here is created a new set of results, that will have these three values on it and no more.
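We built the index in the dashboard, but the equivalent FQL looks roughly like this sketch, which makes the terms/values split explicit:

```js
// Equivalent FQL for the index created in the Fauna dashboard (sketch)
await client.query(
  q.CreateIndex({
    name: "todos_by_user",
    source: q.Collection("todos"),
    // terms: the values we filter (match) on
    terms: [{ field: ["data", "owner"] }],
    // values: what each index entry returns (the Ref, the text, and the done state)
    values: [
      { field: ["ref"] },
      { field: ["data", "text"] },
      { field: ["data", "done"] },
    ],
  })
);
```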
[4:39] If we save our index and search for userTest, we should get two to-dos back. If we search for userTest2, we should get one to-do back. Since this all works, we'll go back to the console. We're printing a query here, and we'll paginate over the match on the index that we just created, todos_by_user, using the userTest user.
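The query we print in the scratch script might look roughly like this:

```js
// Page over everything todos_by_user returns for this owner
const results = await client.query(
  q.Paginate(q.Match(q.Index("todos_by_user"), "userTest"))
);
console.log(results);
// => { data: [ [ref, text, done], ... ] }, one tuple per matching to-do
```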
[4:58] Logging these results out, we get a JSON object with data and an array of our results, which is all we need to work with in our GraphQL API. At the top of our GraphQL API, we'll import FaunaDB. We'll set q = faunadb.query and we'll set up our client again. This time we have to make our resolver async. We use the FaunaDB query we just used to query all the to-dos by the user.
[5:23] Remember, user is our user ID. Then, we map over the results and return them in the way that our query expects them to be returned. Note that we aren't handling errors here, and we could have errors, so keep that in mind.
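A sketch of how the todos resolver might end up, assuming the { user } context and resolver names from the earlier lessons; as noted, error handling is left out:

```js
// graphql.js: top of the serverless function
const faunadb = require("faunadb");
const q = faunadb.query;
const client = new faunadb.Client({ secret: process.env.FAUNA });

const resolvers = {
  Query: {
    todos: async (parent, args, { user }) => {
      // user is the Netlify Identity sub we stored as the owner
      const results = await client.query(
        q.Paginate(q.Match(q.Index("todos_by_user"), user))
      );
      // Each entry is [ref, text, done], in the order the index values declare
      return results.data.map(([ref, text, done]) => ({
        id: ref.id,
        text,
        done,
      }));
    },
  },
  Mutation: {
    // addTodo and updateTodoDone are sketched below
  },
};
```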
[5:36] In our addTodo mutation, we'll do the same thing. Note that our text comes from the mutation arguments, done is set to false (which we have to include here, or it won't be in the Fauna document), and the owner gets set to user. In this case, if we don't have a user, we'll throw an error.
[6:00] We'll get rid of our logic from before, and also get rid of the index that we were using. The return value here on a successful request will be the spread of results.data, which will include the text, done, and the owner, as well as the ID that comes from the Ref.
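The addTodo mutation, sketched as it would sit inside that same Mutation block; the argument and error-message wording are illustrative:

```js
// Inside the Mutation block of the resolvers object above
addTodo: async (_, { text }, { user }) => {
  if (!user) {
    throw new Error("Must be authenticated to insert todos");
  }
  const results = await client.query(
    q.Create(q.Collection("todos"), {
      // done has to be set here or it won't exist on the Fauna document
      data: { text, done: false, owner: user },
    })
  );
  // Spread the stored fields and add the ID that comes from the Ref
  return {
    ...results.data,
    id: results.ref.id,
  };
},
```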
[6:16] We'll make the same changes to updateTodoDone and take advantage of the update query we wrote earlier. The update query happens to have the same response type, so we'll use that. Remember, if we don't have a user, we want to throw the error here as well. Now we can get rid of our todos object; we no longer need it because we're using Fauna for everything.
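And updateTodoDone follows the same shape; the id argument name is an assumption about how the mutation was defined in the earlier lessons:

```js
// Also inside the Mutation block; reuses the Update query from the scratch script
updateTodoDone: async (_, { id }, { user }) => {
  if (!user) {
    throw new Error("Must be authenticated to update todos");
  }
  const results = await client.query(
    q.Update(q.Ref(q.Collection("todos"), id), {
      data: { done: true },
    })
  );
  return {
    ...results.data,
    id: results.ref.id,
  };
},
```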
[6:36] Before we push this, we need to take our token and set it as a value in our Netlify console, in the Build & Deploy/Environment variables settings. We'll have the Fauna key, which will be the same key that we had earlier. Now when we build our function, we'll get that value.
[6:54] It's important that we set up that environment variable before we push, because functions are immutable. If we change the environment variable after the function has been deployed, we'll have to redeploy our function for the environment variable to take hold. Make sure we don't commit any env var secrets.
[7:10] After running the application, we can see that we're getting a 502 response. If we look at the network request, we can see that the reason is that it can't find the module, faunadb. If we check where we installed it, we can see that we actually installed faunadb into www and not alongside our GraphQL function.
[7:30] Now that we've successfully deployed, we can see that we can add new to-dos and modify them, as well as fetch all of them. If we take a look at the all_todos index in our FaunaDB collection, we can see that there's an owner, which is the user we just logged in with, and that we can't see any of the other users' to-dos. We've now successfully built our application.
[7:51] What we've ended up with is a Gatsby site that allows us to put a static marketing or landing page on any page that we want. We've allocated the /app sub-route to be an authentication-restricted client-side app. We've used Netlify Identity to allow users to sign in and operate the application.
[8:10] We've built a GraphQL serverless API that allows us to interact with our database, insert to-dos, update documents, and fetch all of the ones that are relevant for our specific user, implemented the backend using FaunaDB, and once again, used Netlify Identity to restrict access to certain users. This is a full JAMstack serverless application. Congratulations.