Proxy Supabase requests with Cloudflare Workers and Itty Router

Instructor: Jon Meyers
Published 2 years ago
Updated 2 years ago

Itty Router is a tiny router package with an API similar to Express. In this video, we install Itty Router and the itty-router-extras package.

Additionally, we refactor our existing worker to use the router, and declare a dynamic route to fetch a single article.

To handle paths and articles that don't exist, we add a catch all route that returns a 404 status.

Lastly, we use the json helper to automatically stringify JSON objects and set the Content-Type header to application/json.

Code Snippets

Install Itty Router and extras

npm i itty-router itty-router-extras

Run wrangler development server

npx wrangler dev

GET route for all articles

// createClient is imported from @supabase/supabase-js,
// and json from itty-router-extras
router.get(
  "/articles",
  async (request, { SUPABASE_URL, SUPABASE_ANON_KEY }) => {
    const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);

    const { data } = await supabase.from("articles").select("*");
    return json(data);
  }
);

GET route for one article

// status, like json, is imported from itty-router-extras
router.get(
  "/articles/:id",
  async (request, { SUPABASE_URL, SUPABASE_ANON_KEY }) => {
    const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
    const { id } = request.params;

    const { data } = await supabase
      .from("articles")
      .select("*")
      .match({ id })
      .single();

    if (!data) {
      return status(404, "Not found");
    }

    return json(data);
  }
);

Catch all route for 404

router.all("*", () => status(404, "Not found"));

Transcript

Instructor: [0:00] Our Worker currently gives us back all of the articles in our Supabase database, but we also want to be able to navigate to a specific article by its ID. As you can see, anything going to our Cloudflare Worker is just going to load that default index.js route.

[0:15] Let's install the itty-router package to be able to navigate to our individual articles. Type npm i itty-router, and we'll also install itty-router-extras. We can run our wrangler dev server again. At the top of our Worker, we want to import { Router } from 'itty-router'. We then create a new instance of a router by calling Router() and declare any routes that we want to listen for.

[0:43] To get all of our articles, we're going to create a GET route for /articles. We then give it a handler function, which is just the function we already have here for fetch, minus the fetch keyword. We just need to remove this trailing comma at the end, and also make this a proper arrow function.

[1:04] If we scroll down to the bottom where we have our export, we just need to say that our fetch will now be handled by our router. If we head back over to the browser and get rid of this ID at the end here, and instead replace it with /articles, we'll see that we're displaying all of those articles from Supabase again.

[1:22] If we were to copy this ID and we wanted to create a route for this specific ID, we head back over to our Worker and declare a new GET route for /articles/:id. This declares a dynamic route, meaning whatever value comes after /articles/ in the URL will be available to us as this id parameter.

[1:43] We then declare our handler function for this route where we can destructure this ID from our request.params, and now we can return a new response with our ID. If we navigate to our new route, we'll see its ID printed out here.

[2:03] Let's get that specific article from Supabase. Again, we need to create a new Supabase client by calling the createClient function and passing it our Supabase URL and our Supabase anon key.

[2:16] Now, we want to get some data from Supabase. By awaiting a call to Supabase, we want to get some records from the Articles table. We want to select all of the columns, but only where there's a match for the ID column to the ID that came in from our request.params.

[2:37] We're only expecting to get a single row back from this, so we can chain on .single so that we get that actual row, rather than an array with that single row in it.

[2:46] This data object is what we want to send back as our response. Rather than stringifying our data and setting the headers ourselves, we can use a helper function called json, which comes in from itty-router-extras.

[2:59] We can then replace all of this response code with just a call to json, passing the data object that we get back from Supabase.
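For intuition, a helper with the behavior described here can be sketched in a few lines. This is a hedged illustration, not itty-router-extras' actual source; the real helper may differ in details:

```javascript
// Illustrative sketch of a json helper: stringify the payload and set
// the Content-Type header in one step, so route handlers can just
// `return json(data)`.
const json = (payload, options = {}) =>
  new Response(JSON.stringify(payload), {
    ...options,
    headers: { "Content-Type": "application/json", ...(options.headers || {}) },
  });
```

So `return json(data)` replaces manually constructing a new Response with a stringified body and hand-set headers.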

[3:08] We can do the same thing in our dynamic route here. Rather than creating a new response, we can just call that json helper and pass it our data. If we save this, navigate back to the browser and refresh, we'll see just the data for that specific article.

[3:22] If the user was to try to navigate to a URL that didn't exist or an article that didn't exist, they get back null here, but we should probably handle this with a 404.

[3:31] After we make that request to Supabase to get our data, we can check whether we got any data back; if not, we return a status of 404 with the text "Not Found." This status helper also needs to be imported from itty-router-extras. If we refresh the page, we'll see a 404.
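Like json, the status helper is a small convenience. As a hedged sketch of what it does (the real itty-router-extras helper may differ, e.g. in how it handles object bodies):

```javascript
// Illustrative sketch of a status helper (not the library's actual
// source): return a Response with the given status code and message body.
const status = (code, message) => new Response(message, { status: code });
```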

[3:51] This handles IDs that don't exist, but if a user was to navigate to some nonsense route that didn't exist, they would see this big scary exception from our Worker. We can declare a catch-all route at the bottom of our file by saying router.all. This will run for any method of request, so GET, or POST, or anything else.

[4:10] The route that we want to listen to is *, so basically anything that hasn't yet been handled by another route, and then we declare our handler function, which is again just going to return a status of 404 and the text "Not Found." If we refresh, we'll see that proper 404 response, no matter what the user navigates to.
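As a closing aside, the behavior we relied on throughout — method matching, dynamic :id parameters, and the * catch-all — fits in a remarkably small amount of code. Here is a hedged sketch of how a tiny router in the spirit of itty-router can work; this is illustrative only, not the library's actual implementation:

```javascript
// Minimal illustration of a tiny router: method matching, dynamic
// ":param" segments, and a "*" catch-all (NOT itty-router's real source).
function Router() {
  const routes = [];

  const register = (method) => (path, handler) => {
    // Turn "/articles/:id" into ^/articles/(?<id>[^/]+)$ and "*" into ^.*$
    const pattern = new RegExp(
      "^" + path.replace(/\*/g, ".*").replace(/:(\w+)/g, "(?<$1>[^/]+)") + "$"
    );
    routes.push({ method: method.toUpperCase(), pattern, handler });
    return router; // allow chaining route declarations
  };

  const router = {
    get: register("get"),
    post: register("post"),
    all: register("all"),
    // Extra args (like env in a Worker) are forwarded to every handler.
    handle: async (request, ...args) => {
      const { pathname } = new URL(request.url);
      for (const route of routes) {
        if (route.method !== "ALL" && route.method !== request.method) continue;
        const match = route.pattern.exec(pathname);
        if (!match) continue;
        request.params = match.groups || {};
        const response = await route.handler(request, ...args);
        if (response !== undefined) return response; // first response wins
      }
    },
  };

  return router;
}
```

With an instance like this, delegating the Worker's fetch export to the router's handle method is all the entry point needs, which is exactly the refactor at [1:04].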
