Man 1: Over the course of this series, we've been using FaunaDB as our database. Over the next few videos, we're going to be switching to DynamoDB. Here's a brief comparison of the two options. First, both databases are very good at being the data storage back-end for a serverless application.
Some people refer to the databases themselves as serverless as well, because they're both hosted and managed for you. This means that with either solution, we won't have to do anything like set up our own replicas, deploy a server, or really worry about scaling at all if we don't want to.
Secondly, both databases support transactions. Fauna's support for transactions is mandatory: you can't make a request to FaunaDB without it being wrapped in a transaction. With DynamoDB, on the other hand, you have the option of how consistent you want your queries to be.
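As a sketch of that per-request choice in DynamoDB, here's how a GetItem request can opt into a strongly consistent read. The table name and key below are made up for illustration; the resulting dict is what you'd pass to a boto3 `get_item` call.

```python
def build_get_item_params(table_name, key, strongly_consistent=False):
    """Build the parameter dict for a DynamoDB GetItem call.

    With boto3 you would pass this dict to client.get_item(**params).
    ConsistentRead=True requests a strongly consistent read; the default
    (False) allows a cheaper, eventually consistent read.
    """
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": strongly_consistent,
    }

# Eventually consistent (default) vs. strongly consistent read requests
# against a hypothetical "Users" table:
eventual = build_get_item_params("Users", {"pk": {"S": "user#123"}})
strong = build_get_item_params(
    "Users", {"pk": {"S": "user#123"}}, strongly_consistent=True
)
print(eventual["ConsistentRead"], strong["ConsistentRead"])  # False True
```

The consistency choice is made call by call, which is the flexibility Fauna's always-transactional model doesn't expose.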
FaunaDB and Dynamo both advertise being low latency. Fauna requires that your entire application be distributed across a set of global regions. This by nature means your requests will be a little bit slower, but they do advertise low-latency reads and writes. They don't, however, specify any numbers.
DynamoDB is built specifically for performance and calls out single-digit millisecond as well as microsecond latencies. The first feature on the list that Fauna has and DynamoDB doesn't is the native GraphQL API. We weren't using the native GraphQL API in our application, so this won't matter to us.
Learning to use Fauna's native GraphQL API also requires that you learn the underlying FQL query language. If you don't, then you won't be able to debug or understand what the GraphQL is transforming into. This means the learning curve for Fauna and DynamoDB turns out to be similar at the query-language level, where DynamoDB has a fairly simple query language, and Fauna has FQL, which is more Lisp-like.
The other feature that Fauna has that DynamoDB won't have is attribute-based access control. With Dynamo, we can still implement role-based access control, which is what the majority of applications today use. We can also take advantage of IAM roles, which means that a specific serverless function can be allowed to access a specific item or attribute in our DynamoDB database.
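To make the IAM point concrete, here's a hypothetical policy sketch for a serverless function that may only touch its caller's items. The table ARN and the Cognito substitution variable are illustrative; `dynamodb:LeadingKeys` is the IAM condition key that restricts access to items whose partition key matches a given value.

```python
def user_scoped_policy(table_arn):
    """Build an IAM policy document (as a dict) that limits a function to
    reading only the items whose partition key equals the calling user's
    Cognito identity id. The ARN and identifiers here are made up."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],
                "Resource": table_arn,
                "Condition": {
                    "ForAllValues:StringEquals": {
                        # Only rows whose partition key is the caller's user id
                        "dynamodb:LeadingKeys": [
                            "${cognito-identity.amazonaws.com:sub}"
                        ]
                    }
                },
            }
        ],
    }

policy = user_scoped_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/Users"
)
```

Attached to a function's execution role, a policy shaped like this gives us item-level access control without any application code.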
Next, we can take a look at pricing. We can see here that overall, the pricing is pretty comparable. The numbers we've used here for operations come from specifically one of Fauna's tiers.
If you look at the total cost, you'll see that DynamoDB is noticeably cheaper, even though these are both on-demand and they're both applying each respective platform's free tier. We can't take Fauna's free tier out of the calculation, but we can take out AWS's, which bumps the DynamoDB cost up by six dollars.
Note that for specific workloads, price can vary. Read operations from Fauna are about double that of DynamoDB, while write operations at almost $90 for Fauna stand at around $55 for DynamoDB. In both cases, it's cheaper to do operations in DynamoDB than it is to do in Fauna.
The one space where Fauna beats DynamoDB in pricing is how much data you'll have in the table. At 200 gigs of data, it costs us $35 to store that in Fauna each month, while in Dynamo, it costs us about $50.
Overall, if you're just starting a project, these pricing differences don't really matter that much, as you'll probably stay within the free tier. If you have an application which is doing operation-heavy workloads, it may be worth it to move to DynamoDB.
Note also that FaunaDB has a free tier, an on-demand (metered) tier, and a Pro tier in addition to their enterprise tier, while DynamoDB has on-demand, provisioned capacity, and reserved capacity.
The pricing tiers for Fauna are pretty artificial and don't correspond to a usage level, while the tiers for DynamoDB can be adjusted for any usage level. This gives DynamoDB a far more flexible pricing scheme and pricing model to build more varied application types on.
Finally, we get into the features that DynamoDB has that Fauna struggles to keep up with. The first and most important one is change data capture. This is what's known as DynamoDB Streams. DynamoDB Streams allows us, every time we insert something into the database or modify something, to take those changes and put them into a queue that we can deal with later.
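A minimal sketch of what consuming those changes looks like, assuming a Lambda subscribed to the table's stream. The record shape follows the DynamoDB Streams event format (`eventName`, `dynamodb.NewImage`); the "processing" here is just collecting the new images, as a stand-in for whatever async work you'd actually do.

```python
def handler(event, context=None):
    """Lambda-style handler: gather the new images of inserted or
    modified items from a DynamoDB Streams event for later processing."""
    changes = []
    for record in event.get("Records", []):
        if record["eventName"] in ("INSERT", "MODIFY"):
            changes.append(record["dynamodb"].get("NewImage"))
    return changes

# A tiny fabricated Streams event with one INSERT record:
sample_event = {
    "Records": [
        {
            "eventName": "INSERT",
            "dynamodb": {"NewImage": {"pk": {"S": "order#42"}}},
        }
    ]
}
print(handler(sample_event))  # [{'pk': {'S': 'order#42'}}]
```

Each change arrives already ordered per item, so the handler only has to decide what to do with it, not how to capture it.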
FaunaDB doesn't have any answer for this at all. If we ever want to asynchronously do processing based on the changes that are coming into our database, we have to go with DynamoDB over Fauna.
Next, we have the integrations. DynamoDB integrates much more strongly with other AWS features, like Lambda. This allows us to use Lambdas to process the change data capture that we talked about. We also get to control which regions we deploy into, how much that costs, and other trade-offs.
Finally, we have two features that we aren't actually going to take advantage of in this series of videos. The first is global tables, which allows us to replicate our DynamoDB table across many different AWS regions. The second is DynamoDB Accelerator, or DAX, which allows us to achieve those microsecond latencies by using caching.