In a previous story, the Payments team managed to deliver multi-tenant functionality based on GraphQL and DynamoDB. Now it is time for the Accounts team to shine! They will expose a RESTful Accounts API via API Gateway, the AWS managed service for creating and managing APIs. The actual business logic will be implemented in a Lambda function, which will retrieve data from an Aurora RDS cluster backed by PostgreSQL. This is how our Accounts module architecture will look at the end of this story.

Setting up the Accounts APIs

In order to restrict our APIs to authenticated users and prevent unauthorized access, we will import our existing AWS Cognito user pool into our accounts micro-frontend module:

cd mfe-accounts
amplify import auth

Once Cognito is imported, we will add our new REST API:

amplify add api

After choosing REST, we will define a friendly name for the new API (AccountsAPI), along with a list of associated endpoints. For simplicity we will expose CRUD APIs on /accounts. For the time being, let’s create a dummy Hello World NodeJS Lambda function, which we will modify later. Finally, we should restrict API access to authenticated users on all CRUD operations. The successful result can be seen in the execution below:

As usual, let’s see the result of the above, both in our code and in AWS. On the code side:

  • Our backend-config.json got updated with three new sections: api, auth and function.
  • An api folder. It contains the CloudFormation templates responsible for creating the API Gateway and the IAM roles required to access the Lambda function.
  • An auth folder, with the imported Cognito user pool.
  • A function folder. It manages the business logic for the Lambda function handler (under the src folder) by provisioning resources with a CloudFormation template, which we will modify later in order to access the Aurora cluster.
  • A simple, yet incomplete, REST API exposed at a public URL via API Gateway:
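The generated function starts life as a stub. A sketch of what the scaffolded handler boils down to (the real generated file is plain NodeJS; this TypeScript version mirrors the API Gateway proxy contract):

```typescript
// Minimal "Hello World" handler comparable to the Amplify scaffold.
export const handler = async (event: unknown) => {
  return {
    statusCode: 200,
    // CORS headers become relevant later, when the browser calls this API.
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: JSON.stringify('Hello from Lambda!'),
  };
};
```

This is the stub we will later replace with real database access.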

To provision those resources in AWS, just push:

amplify push

The screenshot below shows how API Gateway was set up.

We have just provisioned the first version of our serverless REST API; that was too easy! Now you may think that everything will work as expected, since Amplify is spoiling us.

Not exactly, let’s see why in the next section when we attempt to call this API from our Angular app.

Hammering (and understanding) generic CORS errors

Calling API Gateway from the client via the API module is very simple:

By using the API module from the aws-amplify package, we can send an HTTP request to the /accounts endpoint of the AccountsAPI we defined in API Gateway. At first try, this resulted in CORS errors, which were tricky to troubleshoot because they were, to say the least, very vague.

Long story short, after a few troubleshooting attempts, there were a few reasons for the above, along with suggestions:

  • After the above fix, CORS errors were still appearing, but for a different reason: the x-amzn-ErrorType response header pointed to an IncompleteSignatureException. According to the AWS documentation, that error occurs when a Credential parameter is expected. This made me think that the API Gateway was not set up correctly. Indeed, as shown below, API Gateway was using an IAM role for authorizing requests, rather than Cognito. Let’s then create a Cognito authorizer, which allows you to control access to your API. I had to set it to use the mfe-parent Cognito user pool, force it to use the Authorization token claim to check user identity, and associate it with the method execution of the API Gateway accounts resource. You can see these actions in the gif below.

Once the API’s stage was deployed, I tried to call the REST API from the client. This time, seeing the “Hello from Lambda” message in the 200 HTTP response was a relief.

Now that we had proved that our Angular client could successfully call our API Gateway + Lambda backend, it was time to push it further and persist our accounts data into a PostgreSQL database as part of an Aurora cluster.

Aurora cluster deep-dive with CloudFormation

In order to persist our accounts, we will use Amazon Aurora, a serverless Relational Database Service (RDS) database claimed to be highly performant, available, scalable and secure. For our PoC we want to create an Aurora DB cluster, which consists of multiple DB instances whose data is stored in a cluster volume, that is, a virtual database storage volume spanning multiple Availability Zones (AZs). This ensures high availability, as the database data is copied across AZs.


Rather than creating the Aurora cluster with a few clicks in the AWS Console, I wanted to have it as part of the Amplify configuration. Luckily for me, Amplify supports advanced workflows where you can add custom AWS resources that are not yet supported by Amplify out of the box. In simple terms, we can define our own CloudFormation template in which we provision our Aurora cluster along with a database. We can manage it with the following steps:

  • Update our backend-config.json to support our new rds custom resource, which we call accountsCluster.
  • Create an rds folder under the amplify folder, containing a Parameters.json and a template.json file.
  • Parameters.json provides the input parameters for the upcoming CloudFormation template and contains configurable information such as the username used to log in to the DB instance inside the Aurora cluster, and the cluster name.
  • Template.json contains the CloudFormation template, which creates an Aurora cluster with a DB instance named accountsDatabase. The template leverages AWS Secrets Manager to authenticate to the cluster. It is important to note that Aurora can be launched only in an AWS Virtual Private Cloud (VPC) whose subnets span at least 2 AZs and are defined as part of a DB Subnet Group associated with the DBSubnetGroupName identifier. Below you can see the Aurora master and replica DB instances available in a multi-AZ default configuration, and their connection with the cluster.
  • Our Lambda requires access to Aurora. We can achieve that by leveraging the AWS Data API and setting up a few IAM roles so that we can connect to the Aurora cluster. To do that, we update our backend-config.json to add a dependsOn section specifying that our Lambda function relies on the accountsCluster resource.
  • We will use AWS Secrets Manager to securely connect our Lambda to the Aurora cluster. To do that, we need to provide the resource and secret ARNs so that our NodeJS function has this information exposed and retrievable in code. The extract below (full version here) allows the Lambda function to retrieve this information and grants the right permissions to both execute SQL statements against the cluster and access the secret from Secrets Manager.
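Condensed heavily, the template.json from the steps above boils down to something like the following sketch. Resource names, subnet IDs and property values here are illustrative assumptions; the real template also wires in the parameters and outputs:

```json
{
  "Resources": {
    "AccountsClusterSecret": {
      "Type": "AWS::SecretsManager::Secret",
      "Properties": {
        "GenerateSecretString": {
          "SecretStringTemplate": "{\"username\": \"dbadmin\"}",
          "GenerateStringKey": "password",
          "ExcludeCharacters": "\"@/\\"
        }
      }
    },
    "AccountsDBSubnetGroup": {
      "Type": "AWS::RDS::DBSubnetGroup",
      "Properties": {
        "DBSubnetGroupDescription": "Subnets spanning at least 2 AZs",
        "SubnetIds": ["subnet-aaa", "subnet-bbb"]
      }
    },
    "AccountsCluster": {
      "Type": "AWS::RDS::DBCluster",
      "Properties": {
        "Engine": "aurora-postgresql",
        "DatabaseName": "accountsDatabase",
        "DBSubnetGroupName": { "Ref": "AccountsDBSubnetGroup" },
        "MasterUsername": { "Fn::Sub": "{{resolve:secretsmanager:${AccountsClusterSecret}:SecretString:username}}" },
        "MasterUserPassword": { "Fn::Sub": "{{resolve:secretsmanager:${AccountsClusterSecret}:SecretString:password}}" }
      }
    },
    "AccountsInstance": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "aurora-postgresql",
        "DBInstanceClass": "db.t3.medium",
        "DBClusterIdentifier": { "Ref": "AccountsCluster" }
      }
    }
  }
}
```

Note that for the Data API calls used later, the cluster’s HTTP endpoint must be enabled, and the Lambda’s IAM policy needs rds-data:ExecuteStatement on the cluster ARN plus secretsmanager:GetSecretValue on the secret ARN.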

Hook RDS with Lambda

Finally, given the infrastructure provisioned by the above CloudFormation template, we can update our Lambda code to use the AWS Data API, executing a SQL query that retrieves the list of accounts via the RDS.executeStatement function.

Before we can push our changes to AWS, since we used a custom CloudFormation template, we need to make sure our changes in backend-config.json are applied locally. For this, and to provision our changes, let’s run:

amplify env checkout dev
amplify push

Resources have now been provisioned. To test that the RDS database connection is up, access the accountsDatabase Query Editor with the Secret ARN details; you can then query the database via SQL.

To create our account table, you can use the SQL snippet below as an example:
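For instance, a minimal schema along these lines would do (the columns and seed rows are assumptions; adapt them to your own model):

```sql
CREATE TABLE IF NOT EXISTS account (
  id   SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL
);

INSERT INTO account (name) VALUES ('Checking'), ('Savings');
```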

Accounts tree user interface

To prove that all of the above backend infrastructure is available to our client, I have added a simple display of the list of accounts previously saved by the above SQL statements, using the Angular Material Tree.
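A hedged sketch of the component’s data shaping: the API response is grouped into the nested structure that Angular Material’s MatTreeNestedDataSource can render (the node names and single-root grouping are assumptions):

```typescript
// Minimal node shape consumed by Angular Material's nested tree.
export interface AccountNode {
  name: string;
  children?: AccountNode[];
}

// Groups a flat list of accounts under a single root node, which is the
// shape assigned to MatTreeNestedDataSource via `dataSource.data = [...]`.
export function toTree(accounts: { name: string }[]): AccountNode[] {
  return [
    {
      name: 'Accounts',
      children: accounts.map((a) => ({ name: a.name })),
    },
  ];
}
```

In the component, the array returned by the REST call feeds toTree, and a mat-tree template with MatTreeModule handles the rendering.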


The Accounts team is now set up to scale and add more accounts-related functionality. In less than 10 minutes we have exposed a REST API via API Gateway which, through Lambda, can access a highly available, scalable and secure RDS cluster. This is meant to be a starting point from which you can optimize and tweak API Gateway, Lambda and Aurora parameters to build resilient and secure architectures. Hope you enjoyed it!


Tech Lead with a passion for frontend, backend and cloud