Enterprise cloud patterns from the trenches: Hybrid cross-account APIs with Kong and AWS PrivateLink

Antonio Lagrotteria
6 min read · Aug 25, 2024


In today’s fast-paced digital landscape, Domain-Driven Design (DDD) principles offer a strategic framework for aligning business domains with technical implementations, ultimately driving faster time-to-market and simplifying governance, especially in enterprise setups.

However, ensuring service domain independence while maintaining robust governance, a comprehensive API catalogue, and stringent security compliance remains challenging.

This article explores an approach to supporting a cross-account DDD architecture by combining Kong Gateway with AWS PrivateLink in a hybrid environment.

The approach is deployed as a Terraform IaC script via GitHub Actions, leveraging best practices discussed in a previous article.

The Kong Gateway data planes will be managed via Kong Konnect, as explained in an earlier article.

By the end of this article, you will know how to architect a scalable, repeatable and secure DDD approach across multiple AWS accounts, leveraging AWS PrivateLink and Kong as an API layer.

What will we build?

The complete architecture is illustrated below, detailing the flow from left to right:

  • Consumer Workload (Service Domain 1 — SD1): A Lambda function representing a business service domain that consumes an API exposed on a Gateway Account via Private Link.
  • Shared Service / Gateway Account: This account hosts a Kong Gateway data plane that exposes a service route, functioning as part of a Shared Services account outlined in the AWS Security Reference Architecture (SRA). This account acts as a central hub, managing interactions among service domains.
  • Service Domain 2 (SD2): Another business domain, hosting a backend Lambda function behind the previously mentioned Kong API route. This function interacts with an on-premise target via an Egress account.
  • Egress Account: Responsible for managing cloud-to-on-premises traffic, ensuring seamless integration between cloud-native and on-premise resources.

The Endpoint Service and VPC Endpoint pattern

This architecture adheres to the consumer-provider model, establishing clear boundaries and contracts between entities.

A service consumer (on the left side) uses a VPC Interface endpoint to connect with a provider (on the right side).

A service provider (on the right side) exposes itself via an Endpoint Service backed by a layer-4 Network Load Balancer (NLB).

The contract is implemented by associating the consumer’s VPC Endpoint with the name of the Endpoint service so that only allowed principals can traverse the network.

A Terraform example

The above pattern can be illustrated with the following Terraform code.

In this scenario, a Lambda function in the SD1 account communicates with another Lambda function in the SD2 account via the Shared Service Kong account.

First, the shared Kong account must be discoverable by other accounts. This requires setting up an NLB and a VPC endpoint service, which can then be linked with the corresponding VPC interface endpoint in the consumer account.

resource "aws_lb" "shared-kong" {
  provider = aws.shared-api

  name               = "ec2-lb-kong"
  internal           = true
  load_balancer_type = "network"
  # Security group and subnets belong to the shared account's own VPC (module names assumed)
  security_groups    = [module.shared_kong_nlb_sg.security_group_id]
  subnets            = module.shared-vpc.private_subnets
}

resource "aws_vpc_endpoint_service" "shared_nlb_kong_endpoint_service" {
  provider = aws.shared-api

  acceptance_required        = false # Automatically accept endpoint connections
  network_load_balancer_arns = [aws_lb.shared-kong.arn]
  allowed_principals = [
    # Allowed ARNs from workloads using this service
  ]
}
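The empty allowed_principals list above is where the contract is enforced. Consumer accounts can also be whitelisted with a dedicated resource, which avoids editing the endpoint service each time a new domain onboards. A minimal sketch, with the SD1 account ID as a placeholder (note that the inline allowed_principals list and this standalone resource should not be mixed for the same service):

```hcl
# Hypothetical: permit SD1 workloads to connect to the shared Kong endpoint service
resource "aws_vpc_endpoint_service_allowed_principal" "sd1_principal" {
  provider = aws.shared-api

  vpc_endpoint_service_id = aws_vpc_endpoint_service.shared_nlb_kong_endpoint_service.id
  principal_arn           = "arn:aws:iam::<SD1_ACCOUNT_ID>:root"
}
```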

To ensure that SD1 can reach the shared service account, a corresponding VPC endpoint must be established in the SD1 account:

resource "aws_vpc_endpoint" "sd1_vpc_endpoint" {
  provider = aws.sd1-workload

  vpc_id            = module.sd1-vpc.vpc_id
  service_name      = aws_vpc_endpoint_service.shared_nlb_kong_endpoint_service.service_name
  vpc_endpoint_type = "Interface"

  subnet_ids = data.aws_subnets.sd1_private_subnets.ids

  security_group_ids = [
    module.sd1_vpc_endpoint_sg.security_group_id
  ]
}

With the VPC endpoint in place, a workload, such as a Lambda function, can easily consume the VPC Endpoint DNS and interact with the shared services API via RESTful HTTP calls:

const axios = require('axios');

exports.handler = async (event) => {
  try {
    // Define the service endpoint URL. Replace this with your actual service URL.
    const endpointUrl = 'http://your-vpc-endpoint-service-url/resource-path';

    // Make a GET request to the service
    const response = await axios.get(endpointUrl);

    // Log and return the response
    console.log('Response data:', response.data);
    return {
      statusCode: 200,
      body: JSON.stringify(response.data),
    };
  } catch (error) {
    console.error('Error making request to VPC endpoint:', error);

    // Return error details
    return {
      statusCode: error.response ? error.response.status : 500,
      body: JSON.stringify({
        message: 'Failed to connect to VPC endpoint',
        error: error.message,
      }),
    };
  }
};
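Rather than hard-coding the URL, the interface endpoint's DNS name can be wired into the function at deploy time. A minimal sketch, assuming a packaged artifact and an existing execution role (both placeholders):

```hcl
# Hypothetical consumer Lambda; the interface endpoint's DNS name is injected as an env var
resource "aws_lambda_function" "sd1_consumer" {
  provider = aws.sd1-workload

  function_name = "sd1-consumer"
  runtime       = "nodejs18.x"
  handler       = "index.handler"
  filename      = "lambda.zip"                  # assumed build artifact
  role          = "<LAMBDA_EXECUTION_ROLE_ARN>" # assumed existing IAM role

  environment {
    variables = {
      ENDPOINT_URL = "http://${aws_vpc_endpoint.sd1_vpc_endpoint.dns_entry[0]["dns_name"]}"
    }
  }

  # Attach the function to the SD1 VPC so it can resolve and reach the endpoint
  vpc_config {
    subnet_ids         = data.aws_subnets.sd1_private_subnets.ids
    security_group_ids = [module.sd1_vpc_endpoint_sg.security_group_id]
  }
}
```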

For the target Lambda function in SD2 to be “discoverable” by the shared service account, and subsequently by the SD1 account, it must expose an endpoint service.

However, since an NLB cannot target a Lambda function directly, we employ an Application Load Balancer (ALB) as an intermediate hop: the NLB forwards to the ALB, which in turn targets the Lambda function.

resource "aws_lb" "nalb_sd2_workload" {
  provider = aws.sd2-workload

  name               = "nlb-alb"
  internal           = true
  load_balancer_type = "network"
  security_groups    = [module.sd2_workload_nlb_alb_sg.security_group_id]

  subnets = module.sd2-workload-vpc.private_subnets
}

resource "aws_lb_listener" "nlb_listener_sd2_workload" {
  provider = aws.sd2-workload

  load_balancer_arn = aws_lb.nalb_sd2_workload.arn
  port              = 80
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.alb_targetgroup_sd2_workload.arn # Forward traffic to ALB target group
  }
}

resource "aws_lb_target_group" "alb_targetgroup_sd2_workload" {
  provider = aws.sd2-workload

  name        = "alb-target-group"
  port        = 80
  protocol    = "TCP"
  vpc_id      = module.sd2-workload-vpc.vpc_id
  target_type = "alb"

  health_check {
    enabled             = true
    port                = 80
    interval            = 30
    timeout             = 10
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }
}

resource "aws_lb_target_group_attachment" "alb_target_sd2_workload" {
  provider = aws.sd2-workload

  target_group_arn = aws_lb_target_group.alb_targetgroup_sd2_workload.arn
  target_id        = aws_lb.alb_sd2_workload.arn
  port             = 80 # Required when the target is an ALB
}

resource "aws_lb" "alb_sd2_workload" {
  provider = aws.sd2-workload

  name               = "alb"
  internal           = true
  load_balancer_type = "application"
  security_groups    = [module.sd2_workload_alb_sg.security_group_id]

  subnets = module.sd2-workload-vpc.private_subnets
}


resource "aws_lb_listener" "lambda_listener_sd2_workload" {
  provider = aws.sd2-workload

  load_balancer_arn = aws_lb.alb_sd2_workload.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.lambda_target_group_sd2_workload.arn
  }
}

resource "aws_lb_target_group" "lambda_target_group_sd2_workload" {
  provider = aws.sd2-workload

  name        = "lambda-target-group"
  target_type = "lambda" # protocol and vpc_id do not apply to lambda targets

  health_check {
    enabled             = true
    interval            = 30
    timeout             = 10
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }
}

resource "aws_lambda_permission" "with_lb_sd2_workload" {
  provider = aws.sd2-workload

  statement_id  = "AllowExecutionFromALB"
  action        = "lambda:InvokeFunction"
  function_name = "arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:<FUNCTION_NAME>"
  principal     = "elasticloadbalancing.amazonaws.com"
  source_arn    = aws_lb_target_group.lambda_target_group_sd2_workload.arn
}

resource "aws_lb_target_group_attachment" "lambda_target_sd2_workload" {
  provider = aws.sd2-workload

  target_group_arn = aws_lb_target_group.lambda_target_group_sd2_workload.arn
  target_id        = "arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:<FUNCTION_NAME>"
  depends_on       = [aws_lambda_permission.with_lb_sd2_workload]
}

Similarly, the shared Kong account must consume SD2's endpoint service through its own VPC interface endpoint, using that endpoint's DNS name as the upstream host of the Kong service behind the route.
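A sketch of that wiring, assuming SD2 publishes its NLB through an endpoint service named sd2_endpoint_service and the shared account has its own VPC module (both names are assumptions):

```hcl
# Hypothetical: interface endpoint in the shared Kong account towards SD2's endpoint service
resource "aws_vpc_endpoint" "kong_to_sd2" {
  provider = aws.shared-api

  vpc_id            = module.shared-vpc.vpc_id
  service_name      = aws_vpc_endpoint_service.sd2_endpoint_service.service_name
  vpc_endpoint_type = "Interface"
  subnet_ids        = module.shared-vpc.private_subnets
}

# The endpoint's DNS name is what gets configured as the Kong upstream host
output "kong_upstream_host" {
  value = aws_vpc_endpoint.kong_to_sd2.dns_entry[0]["dns_name"]
}
```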

Hybrid communication

The same pattern applies to hybrid environments, involving on-premise workloads. For instance, if the Lambda in SD2 must consume an API hosted on-premise, an NLB can point to the on-premise IP address and expose it to the SD2 account via an endpoint service, in an Egress account:

# Create a Network Load Balancer
resource "aws_lb" "nlb_for_on_prem" {
  provider = aws.cloud-to-on-prem

  name               = "nlb-for-on-prem" # LB names allow only alphanumerics and hyphens
  internal           = true
  load_balancer_type = "network"
  subnets            = [aws_subnet.nlb_for_on_prem_subnets.id]
}

# Create a Target Group
resource "aws_lb_target_group" "nlb_for_on_prem_target_group" {
  provider = aws.cloud-to-on-prem

  name        = "nlb-for-on-prem-tg"
  port        = 80
  protocol    = "TCP"
  vpc_id      = aws_vpc.nlb_for_on_prem_vpc.id
  target_type = "ip"

  health_check {
    enabled             = true
    interval            = 30
    port                = "traffic-port"
    protocol            = "TCP" # TCP health checks do not support a path
    healthy_threshold   = 3
    unhealthy_threshold = 3
    timeout             = 10
  }
}

# Register the on-premises IP address with the Target Group
resource "aws_lb_target_group_attachment" "example_attachment" {
  provider = aws.cloud-to-on-prem

  target_group_arn  = aws_lb_target_group.nlb_for_on_prem_target_group.arn
  target_id         = "<ON_PREMISES_IP_ADDRESS>"
  port              = 80
  availability_zone = "all" # Required for IP targets outside the VPC CIDR
}

# Create a Listener for the NLB
resource "aws_lb_listener" "example_listener" {
  provider = aws.cloud-to-on-prem

  load_balancer_arn = aws_lb.nlb_for_on_prem.arn
  port              = 80
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.nlb_for_on_prem_target_group.arn
  }
}
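What the snippet above leaves implicit is the endpoint service that actually publishes this NLB to SD2. A minimal sketch, with the SD2 account ID as a placeholder:

```hcl
# Hypothetical: expose the on-premises-facing NLB to the SD2 account via PrivateLink
resource "aws_vpc_endpoint_service" "on_prem_endpoint_service" {
  provider = aws.cloud-to-on-prem

  acceptance_required        = false
  network_load_balancer_arns = [aws_lb.nlb_for_on_prem.arn]
  allowed_principals         = ["arn:aws:iam::<SD2_ACCOUNT_ID>:root"]
}
```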

Conclusion

This article demonstrates a comprehensive approach to implementing cross-account, hybrid cloud APIs using Kong Gateway and AWS PrivateLink.

By enabling secure and scalable interactions between service domains in a DDD architecture, this solution ensures robust governance and seamless integration across AWS accounts and on-premise environments.

Leveraging Terraform for Infrastructure as Code guarantees that these deployments are repeatable, secure, and aligned with industry best practices.


Antonio Lagrotteria

Engineering Manager | Full-Stack Architect | Team/Tech Lead with a passion for frontend, backend and cloud | AWS Community Builder