Serverless GraphQL API with Hasura and AWS stack

Reynaldo Rodríguez
May 5, 2020
Serverless
API
GraphQL
AWS

As we saw in our previous post, we can code and deploy a fully scripted REST API on AWS using the Serverless Framework. Today we will dive deeper by creating a GraphQL API boilerplate based on Hasura and the AWS stack. Hasura is a trending open-source engine that auto-generates a GraphQL API, with subscriptions support, by reading a PostgreSQL database model. Previous knowledge of Serverless, Hasura and GraphQL is required.

We will also use Lambda microservices to enable some Hasura capabilities, such as authentication, a remote schema and event triggers, within the same stack. Other AWS resources will be needed as well: simple ones like Cognito, Aurora RDS and ECS, and a few more complex ones like a VPC, a NAT Gateway and an ELB. Together they give us full control over the stack and over how its components reach each other and the Internet.

The goal of this tutorial is to create a ready-to-use boilerplate for deploying a Hasura backend on AWS. Just keep in mind that the proposed stack includes some resources outside the AWS Free Tier, which will be billed accordingly. After deploying, I will show you how to estimate the cost of the stack.

Let's start as usual by initializing the serverless project, specifying the path where it will be created and the project name:
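A minimal sketch of that command (the template, path and service name are placeholders you can adjust):

```sh
serverless create --template aws-nodejs --path hasura-graphql-api --name hasura-graphql-api
```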


Now that we have the project initialized, we can fire up our favorite IDE to open it.

First, let's add a .env file which will hold the credentials for Hasura and the database. It will be picked up by Serverless and, subsequently, by the resources on the first deployment, setting these values:
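Something along these lines; HASURA_ADMIN_SECRET is referenced later in the post, while the database variable names are an assumption and just need to match how they are referenced in the resource templates:

```
HASURA_ADMIN_SECRET=change-me-hasura-secret
DB_USERNAME=hasurauser
DB_PASSWORD=change-me-db-password
DB_NAME=hasuradb
```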


The next thing to do is to organize the project structure according to the services it will hold.

Let’s create the following folder structure:

- functions
  - cognito-triggers
  - event-triggers
  - remote-schema
    - mutations
    - queries
    - types
- migrations
- resources

Inside the resources folder, we are going to add the AWS resources as YAML templates. Here are the main resources and why we are using each of them:

- VPC (To group all project resources into a separated section of the AWS Cloud and provide it with network addressing).

- ECS (To construct and deploy a container which will hold Hasura).

- ELB (To secure and distribute the traffic to the VPC).

- CloudFront (To serve our instance through a distributed network across the globe).

- Cognito (To enable user authentication).

- RDS (To create an Aurora Database which will be connected to Hasura).

Let’s add each one of them:

For the VPC let’s create vpc.yml. Here we define the template to create a Virtual Private Cloud with its own networking configuration, which includes 2 public subnets, 2 private subnets, a NAT gateway and proper routing between them:
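A condensed sketch of what that template can look like; only one public/private subnet pair is shown (the second pair follows the same pattern with different CIDR blocks and availability zones), and security groups are left out:

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: { Ref: VPC }
      InternetGatewayId: { Ref: InternetGateway }

  # Public subnet: reaches the Internet through the Internet Gateway
  PublicSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: { Ref: VPC }
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: { Fn::Select: [0, { Fn::GetAZs: "" }] }

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: { Ref: VPC }

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: VPCGatewayAttachment
    Properties:
      RouteTableId: { Ref: PublicRouteTable }
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: { Ref: InternetGateway }

  PublicSubnetARouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: { Ref: PublicSubnetA }
      RouteTableId: { Ref: PublicRouteTable }

  # NAT Gateway so resources in the private subnets can reach the Internet
  NatEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc

  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: { Fn::GetAtt: [NatEIP, AllocationId] }
      SubnetId: { Ref: PublicSubnetA }

  # Private subnet: routes outbound traffic through the NAT Gateway
  PrivateSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: { Ref: VPC }
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: { Fn::Select: [0, { Fn::GetAZs: "" }] }

  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: { Ref: VPC }

  PrivateRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: { Ref: PrivateRouteTable }
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: { Ref: NatGateway }

  PrivateSubnetARouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: { Ref: PrivateSubnetA }
      RouteTableId: { Ref: PrivateRouteTable }
```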


In a new file called elb.yml, let's create an Elastic Load Balancer to route public access to the VPC at a specific port on the public subnets. Note that the internal port is 8080, as this is the port the Hasura Docker image exposes when deployed:
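A condensed sketch (security groups are omitted, and the second public subnet is assumed to follow the same naming as in the VPC template):

```yaml
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      Subnets:
        - { Ref: PublicSubnetA }
        - { Ref: PublicSubnetB }

  # Hasura listens on 8080 inside the container, so the target group points there
  HasuraTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: { Ref: VPC }
      Port: 8080
      Protocol: HTTP
      TargetType: ip              # required for Fargate (awsvpc) tasks
      HealthCheckPath: /healthz   # Hasura's health check endpoint

  # Public entry point on port 80, forwarding to the Hasura target group
  HttpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: { Ref: LoadBalancer }
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: { Ref: HasuraTargetGroup }
```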


Next, on rds.yml, let's add the Aurora PostgreSQL-based database, within the private segment of the VPC, that Hasura will use to store the data. Note how we are referencing environment variables, as they are picked up and declared in the serverless.yml:
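A condensed sketch; the ${env:...} names are the ones assumed for the .env file above, and the instance class is just an example:

```yaml
Resources:
  DBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Private subnets for the Aurora cluster
      SubnetIds:
        - { Ref: PrivateSubnetA }
        - { Ref: PrivateSubnetB }

  DBCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      DatabaseName: ${env:DB_NAME}
      MasterUsername: ${env:DB_USERNAME}
      MasterUserPassword: ${env:DB_PASSWORD}
      DBSubnetGroupName: { Ref: DBSubnetGroup }

  DBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-postgresql
      DBInstanceClass: db.t3.medium
      DBClusterIdentifier: { Ref: DBCluster }
```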


In a new cognito.yml file, we will create the template to deploy a Cognito User Pool and a User Pool Client. This will be the identity provider, and it is what Hasura will use to validate user access through the Hasura claims attached to the JWT:
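A condensed sketch; the pool and client names are placeholders, and the password-based auth flow is an assumption that the signIn resolver further below relies on:

```yaml
Resources:
  UserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: hasura-user-pool
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: hasura-user-pool-client
      UserPoolId: { Ref: UserPool }
      GenerateSecret: false
      ExplicitAuthFlows:
        - ALLOW_USER_PASSWORD_AUTH   # so the signIn resolver can use USER_PASSWORD_AUTH
        - ALLOW_REFRESH_TOKEN_AUTH
```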


In a new ecs.yml file, we will create the template to deploy the Hasura container within the public segment of the VPC by pulling the latest stable version from Docker Hub. We will also add a few environment variables that Hasura requires, such as the database connection and the authentication configuration, plus two static container environment variables which refer to two functions in our stack: the remote schema and the event triggers:
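A condensed sketch; IAM roles, logging and security groups are omitted, the URLs of the two Lambda functions are shown as placeholders, and the REMOTE_SCHEMA_URL / EVENT_TRIGGERS_URL variable names are assumptions:

```yaml
Resources:
  ECSCluster:
    Type: AWS::ECS::Cluster

  HasuraTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "256"
      Memory: "512"
      ContainerDefinitions:
        - Name: hasura
          Image: hasura/graphql-engine:latest   # pin to a specific stable tag in practice
          PortMappings:
            - ContainerPort: 8080
          Environment:
            # Connection string pointing at the Aurora cluster
            - Name: HASURA_GRAPHQL_DATABASE_URL
              Value:
                Fn::Join:
                  - ""
                  - - "postgres://${env:DB_USERNAME}:${env:DB_PASSWORD}@"
                    - { Fn::GetAtt: [DBCluster, Endpoint.Address] }
                    - ":5432/${env:DB_NAME}"
            - Name: HASURA_GRAPHQL_ADMIN_SECRET
              Value: ${env:HASURA_ADMIN_SECRET}
            # JWT validation against the Cognito User Pool's JWKS
            - Name: HASURA_GRAPHQL_JWT_SECRET
              Value:
                Fn::Join:
                  - ""
                  - - '{"type":"RS256","jwk_url":"https://cognito-idp.'
                    - { Ref: AWS::Region }
                    - ".amazonaws.com/"
                    - { Ref: UserPool }
                    - '/.well-known/jwks.json"}'
            # Static variables pointing at our Lambda endpoints
            - Name: REMOTE_SCHEMA_URL
              Value: <API Gateway URL of the remote-schema function>
            - Name: EVENT_TRIGGERS_URL
              Value: <API Gateway URL of the event-triggers function>

  HasuraService:
    Type: AWS::ECS::Service
    DependsOn: HttpListener
    Properties:
      Cluster: { Ref: ECSCluster }
      LaunchType: FARGATE
      DesiredCount: 1
      TaskDefinition: { Ref: HasuraTaskDefinition }
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          Subnets:
            - { Ref: PublicSubnetA }
            - { Ref: PublicSubnetB }
      LoadBalancers:
        - ContainerName: hasura
          ContainerPort: 8080
          TargetGroupArn: { Ref: HasuraTargetGroup }
```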


Last but not least among the resources is the cloudfront.yml file. Here we will add the template to deploy a CloudFront distribution which will serve Hasura through the ELB to frontend clients across the globe with low latency:
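A condensed sketch; all headers, query strings and cookies are forwarded so authenticated GraphQL requests reach Hasura untouched:

```yaml
Resources:
  CloudFrontDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: HasuraELB
            DomainName: { Fn::GetAtt: [LoadBalancer, DNSName] }
            CustomOriginConfig:
              OriginProtocolPolicy: http-only   # the listener we created is plain HTTP
        DefaultCacheBehavior:
          TargetOriginId: HasuraELB
          ViewerProtocolPolicy: redirect-to-https
          AllowedMethods: [GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE]
          ForwardedValues:
            QueryString: true
            Headers: ["*"]
            Cookies:
              Forward: all
```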


Now that we are done with the resources, let's move on to the functions. Inside our cognito-triggers folder we need a Pre Token Generation Lambda trigger, which will be used by the declared User Pool to attach Hasura claims to the generated JWT on user login. At the moment there is no way to link these two with Serverless due to a known bug, but we will link them manually later. Add the following inside cognito-triggers/pre-token-generation.js:
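A sketch of that trigger; it attaches the https://hasura.io/jwt/claims namespace that Hasura expects, here with a single hard-coded user role:

```js
// cognito-triggers/pre-token-generation.js
exports.handler = async (event) => {
  // Attach the Hasura claims namespace to the ID token Cognito is about to issue
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: {
        'https://hasura.io/jwt/claims': JSON.stringify({
          'x-hasura-default-role': 'user',
          'x-hasura-allowed-roles': ['user'],
          'x-hasura-user-id': event.request.userAttributes.sub,
        }),
      },
    },
  };

  return event;
};
```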


Inside the event-triggers folder, we will add an index.js with a function which will be the entry point for the Hasura Event Triggers:
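A minimal sketch of that entry point; Hasura will POST the event payload here, and table-specific logic can be added later:

```js
// event-triggers/index.js
exports.handler = async (event) => {
  // Hasura sends the trigger payload as the request body
  const payload = JSON.parse(event.body);
  console.log('Received Hasura event:', JSON.stringify(payload));

  return {
    statusCode: 200,
    body: JSON.stringify({ received: true }),
  };
};
```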


In the remote-schema folder we will develop a Lambda GraphQL service based on apollo-server-lambda, which will serve as a remote schema for our Hasura instance and will allow us to develop new endpoints with custom logic, like third-party integrations. In this case we will add authentication endpoints.

Initialize the service and install dependencies by calling:
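For example (the exact dependency list is an assumption; apollo-server-lambda and graphql are what the code below relies on, and aws-sdk is used by the resolvers):

```sh
cd functions/remote-schema
npm init -y
npm install apollo-server-lambda graphql aws-sdk
```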


Let's start by adding a remote-schema/index.js with the following:
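A sketch of the entry point, wiring the types, queries and mutations folders into an Apollo server:

```js
// remote-schema/index.js
const { ApolloServer } = require('apollo-server-lambda');

const typeDefs = require('./types');       // array of gql documents
const Query = require('./queries');        // query resolvers by name
const Mutation = require('./mutations');   // mutation resolvers by name

const server = new ApolloServer({
  typeDefs,
  resolvers: { Query, Mutation },
});

// API Gateway invokes this handler for both GET and POST
exports.handler = server.createHandler();
```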


Inside remote-schema/mutations let's add an index.js, which will read every file in this folder and treat it as the resolver for the mutation with the same name:
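A sketch of that loader; every other .js file in the folder becomes a Mutation resolver named after the file (e.g. signUp.js becomes Mutation.signUp):

```js
// remote-schema/mutations/index.js
const fs = require('fs');
const path = require('path');

const mutations = {};

fs.readdirSync(__dirname)
  .filter((file) => file !== 'index.js' && file.endsWith('.js'))
  .forEach((file) => {
    const name = path.basename(file, '.js');
    mutations[name] = require(path.join(__dirname, file));
  });

module.exports = mutations;
```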



Also add a signUp.js with the logic to sign up a user on the User Pool. Note that we are taking most of the code from the previous serverless post:
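A sketch of that resolver, assuming COGNITO_CLIENT_ID is exposed as an environment variable by serverless.yml and that the mutation receives an email and a password:

```js
// remote-schema/mutations/signUp.js
const { CognitoIdentityServiceProvider } = require('aws-sdk');

const cognito = new CognitoIdentityServiceProvider();

module.exports = async (_, { email, password }) => {
  // Register the user in the Cognito User Pool
  const result = await cognito
    .signUp({
      ClientId: process.env.COGNITO_CLIENT_ID,
      Username: email,
      Password: password,
      UserAttributes: [{ Name: 'email', Value: email }],
    })
    .promise();

  return `User ${result.UserSub} signed up successfully`;
};
```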



Now inside remote-schema/queries let’s do the same but for queries, adding an index.js with the following:
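Same loader pattern as for mutations, just exporting Query resolvers instead:

```js
// remote-schema/queries/index.js
const fs = require('fs');
const path = require('path');

const queries = {};

fs.readdirSync(__dirname)
  .filter((file) => file !== 'index.js' && file.endsWith('.js'))
  .forEach((file) => {
    const name = path.basename(file, '.js');
    queries[name] = require(path.join(__dirname, file));
  });

module.exports = queries;
```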


Also add a signIn.js with the logic to sign in a user against the User Pool:
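A sketch using the USER_PASSWORD_AUTH flow, which is why that flow was enabled on the User Pool Client above:

```js
// remote-schema/queries/signIn.js
const { CognitoIdentityServiceProvider } = require('aws-sdk');

const cognito = new CognitoIdentityServiceProvider();

module.exports = async (_, { email, password }) => {
  // Authenticate against the User Pool and return the issued tokens
  const result = await cognito
    .initiateAuth({
      ClientId: process.env.COGNITO_CLIENT_ID,
      AuthFlow: 'USER_PASSWORD_AUTH',
      AuthParameters: { USERNAME: email, PASSWORD: password },
    })
    .promise();

  const { IdToken, AccessToken, RefreshToken, ExpiresIn } = result.AuthenticationResult;

  return {
    idToken: IdToken,
    accessToken: AccessToken,
    refreshToken: RefreshToken,
    expiresIn: ExpiresIn,
  };
};
```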


Regarding the remote-schema/types folder, let's add an index.js here too, which will pick up all the files in the folder as the schema types:
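A sketch of that loader; every other file exports a gql document and the whole array is handed to Apollo as typeDefs:

```js
// remote-schema/types/index.js
const fs = require('fs');
const path = require('path');

module.exports = fs
  .readdirSync(__dirname)
  .filter((file) => file !== 'index.js' && file.endsWith('.js'))
  .map((file) => require(path.join(__dirname, file)));
```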



Now define the query and mutation types in the query.js and mutation.js files respectively, so the GraphQL server can pick them up:
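Along these lines (the argument and return types are assumptions matching the resolvers sketched above):

```js
// remote-schema/types/query.js
const { gql } = require('apollo-server-lambda');

module.exports = gql`
  type Query {
    signIn(email: String!, password: String!): AuthResult
  }
`;
```

```js
// remote-schema/types/mutation.js
const { gql } = require('apollo-server-lambda');

module.exports = gql`
  type Mutation {
    signUp(email: String!, password: String!): String
  }
`;
```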



Also, we are adding a custom type in AuthResult.js, which will be the response of the authentication endpoints:
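A sketch mirroring the fields Cognito returns on authentication:

```js
// remote-schema/types/AuthResult.js
const { gql } = require('apollo-server-lambda');

module.exports = gql`
  type AuthResult {
    idToken: String
    accessToken: String
    refreshToken: String
    expiresIn: Int
  }
`;
```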


All done for the remote schema part. Now that we have all AWS resources and our functions in place, let's wire them all up in the serverless.yml. A few important things to note here:

- We use the serverless-dotenv-plugin to load environment variables from the .env file into our serverless.yml.
- We declare the COGNITO_CLIENT_ID env var, which takes its value from a reference to the deployed Cognito resource.
- We specify the VPC configuration for our service.
- We use individual package configuration for our Lambda functions.
- The remote schema function requires both GET and POST methods.
- The pre token generation function doesn't have an event due to the mentioned bug, so we deploy both the trigger and Cognito here, but manual linking between them over the AWS console is required.
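A condensed sketch of that file (region, IAM permissions for the resolvers and some details are omitted; paths and names are assumptions):

```yaml
service: hasura-graphql-api

plugins:
  - serverless-dotenv-plugin

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    # Resolved from the deployed Cognito resource
    COGNITO_CLIENT_ID: { Ref: UserPoolClient }
  vpc:
    securityGroupIds:
      - { Fn::GetAtt: [VPC, DefaultSecurityGroup] }
    subnetIds:
      - { Ref: PrivateSubnetA }
      - { Ref: PrivateSubnetB }

package:
  individually: true

functions:
  remote-schema:
    handler: functions/remote-schema/index.handler
    events:
      - http: { path: remote-schema, method: get }
      - http: { path: remote-schema, method: post }
  event-triggers:
    handler: functions/event-triggers/index.handler
    events:
      - http: { path: event-triggers, method: post }
  pre-token-generation:
    # No event: the Cognito trigger must be linked manually due to the mentioned bug
    handler: functions/cognito-triggers/pre-token-generation.handler

resources:
  - ${file(resources/vpc.yml)}
  - ${file(resources/elb.yml)}
  - ${file(resources/rds.yml)}
  - ${file(resources/cognito.yml)}
  - ${file(resources/ecs.yml)}
  - ${file(resources/cloudfront.yml)}
```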



Finally, we can deploy our stack by issuing the deploy command on the shell as follows:

serverless deploy


After a few minutes, the whole stack will be deployed on AWS. It may take a while, as the CloudFront distribution needs to propagate across all regions. Once that's done, we can navigate to the CloudFront section of the AWS Console to find the URL where Hasura is reachable. Going to that URL will look like this:

[Screenshot: Hasura console login page served from the CloudFront URL]


Type the password you specified in HASURA_ADMIN_SECRET in the .env file and you will be redirected to the dashboard:

[Screenshot: Hasura console dashboard]


To start adding migrations to the project, you can open the Hasura console locally by creating a file called config.yaml and adding the following:
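With placeholder values:

```yaml
# config.yaml
endpoint: https://xxxxxxxxxxxx.cloudfront.net
admin_secret: <value of HASURA_ADMIN_SECRET>
```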


Here admin_secret is the value of HASURA_ADMIN_SECRET and endpoint is the CloudFront URL. This way the Hasura CLI can communicate with the instance, open the Hasura console locally and persist any change into the migrations folder. Once this file is in place, you can issue the following on the shell:

hasura console


Let's test that migrations are persisted by going to the Remote Schemas tab and adding the service we created to act as the remote schema. As the URL for that service is already available as an environment variable on the container, we can load the value from there by specifying it as follows:

[Screenshot: Add Remote Schema form using the container environment variable for the URL]


Once created, it will look like this:

[Screenshot: the remote schema created in the Hasura console]


And the migrations folder will have that change persisted:

[Screenshot: the change persisted in the migrations folder]


We can follow this same procedure to add the needed event triggers on the tables once we have them.

After attaching the remote schema, we will be able to see the custom endpoints we created to handle user authentication:

[Screenshot: the custom authentication endpoints exposed through the remote schema]


In order for the Cognito authentication to return the proper JWT, we need to link the pre token generation trigger we created and deployed with the deployed User Pool. To do that, we can go to the Cognito User Pool management, specifically the Triggers section, and select the deployed Lambda:

[Screenshot: Cognito User Pool Triggers section with the Pre Token Generation Lambda selected]


Now, let’s test the user authentication. We will use the signUp mutation to sign up a new user:

[Screenshot: signUp mutation executed in the console]




And the signIn endpoint to sign in a user:

[Screenshot: signIn query executed in the console]




If we inspect the generated idToken on https://jwt.io/ it will have the attached claims:

[Screenshot: decoded idToken on jwt.io showing the Hasura claims]


This token is what the frontend needs to send in order to run Hasura queries and mutations once the database modeling is complete.

Regarding our deployed stack on AWS, if you go to the Cost Management section of the AWS Console you can find the current and estimated cost for each deployed service. The estimated monthly base cost will be approximately:

[Screenshot: AWS Cost Management monthly estimate]


Consider this just the base cost, which holds while doing the database modeling or developing the frontend. It will go higher once the stack is deployed to production and service usage increases with traffic.

The final stack architecture diagram is:

[Diagram: final stack architecture]


With this we conclude the creation of a serverless GraphQL API boilerplate based on Hasura on AWS. The full source code can be found at https://github.com/ReyRod/graphql-api

Thanks for reading this far. If you're interested in integrating the Facebook SDK into your React Native app, check out this article: https://www.vairix.com/tech-blog/react-native-facebook-sdk-integration
