Next-level mocks with Lambda extensions

2024-10-24 | #AWS #Serverless

Most services rely on external services to function. Organizations often try to maintain shared non-production environments to test these integrations. However, coordination issues and risks of getting blocked by other teams are common when the number of teams and services grows.

Some issues you might encounter:

- Getting blocked by another team's broken deployment in the shared environment.
- Unreliable or missing test data in the services you depend on.
- Constant coordination overhead between teams.

Your service will communicate with those external services in production, so you must build for it. What do you do? You build in isolation.

Building in isolation can significantly improve the agility of a team. It removes the constant need for coordination in a shared test environment. A team can develop at its own pace while minimizing the risk of another team blocking progress. It requires sound practices, though: APIs that are discoverable and well-documented. You do have OpenAPI schemas, right?

To build in isolation, you must replace calls to external services somehow. You want something that acts like the real deal while minimizing the work required to maintain it.

Let’s look at some options.

Mocking external APIs

So, how do we mock external APIs? Let’s start simple. In our code, we can check if we are in a non-production environment and branch out to separate logic. It could look like this:

if (ENVIRONMENT === "prod") {
  return callExternalService(...);
} else {
  // Return some hard-coded data during development.
  return { ... };
}

This “works,” but it is not pretty. Littering all your integration points with if-clauses like this makes the code much harder to understand and maintain. You also exercise separate code paths in different environments, meaning your “actual” integration code runs for the first time in production.

Let’s take another approach. Using environment variables, we’ll configure the URL(s) to any external service:

const response = await fetch(`${process.env.ANOTHER_SERVICE_URL}/some-path`);

We’ll set the environment variable in production to the actual URL. In development, we need something else. We need a mock server configured to respond to requests in a specific and deterministic way.
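For example (with hypothetical values), the same code works in every environment; only the variable changes:

# Production: the real service
ANOTHER_SERVICE_URL=https://another-service.example.com

# Development: a mock server on localhost
ANOTHER_SERVICE_URL=http://127.0.0.1:1234/another-service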

Enter WireMock. WireMock is a tool that lets you mock APIs. You can define request patterns and response templates in JSON files such as this:

{
  "request": {
    "method": "GET",
    "urlPath": "/hello"
  },
  "response": {
    "jsonBody": {
      "message": "there!"
    }
  }
}

When you start WireMock and send it a request for GET /hello, it will respond with the JSON payload {"message": "there!"}.
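To try it out locally, one option (a sketch assuming Docker and the official wiremock/wiremock image) is to save the stub above as mappings/hello.json and mount the current directory:

$ docker run --rm -p 8080:8080 -v $PWD:/home/wiremock wiremock/wiremock
$ curl http://localhost:8080/hello
{"message":"there!"}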

So, how do we deploy and operate this? Most guides I found pointed to deploying WireMock as a Docker container. While certainly doable, I wasn’t enthusiastic about deploying (and paying for) a long-running container process to mock dependencies of my Serverless (and practically free) Lambda functions.

My search continued, and I stumbled upon wiremock-rs and, subsequently, stubr. The former is a Rust adaptation of WireMock, and the latter builds on it to support the JSON files mentioned above.

This I can work with. Due to Rust’s minimal footprint, I figured I could chuck it into a Lambda Extension and avoid having to run any external containers.

Lambda Extensions

Lambda Extensions allow you to run separate processes alongside your Lambda function. One of the most common use cases is running observability agents that ship data to your observability platform. In this case, we will build an extension that starts a mock server locally. By adding this extension to a Lambda function, we can send requests to the mock server on localhost. Extensions are attached to a Lambda function as layers. Check out the official AWS docs for more information.

A solution with WireMock, Rust, and Lambda Extensions

I landed on a solution that starts stubr locally in a Lambda Extension. JSON stubs are added as a separate layer, and the URL to the fictional service is configured with an environment variable. This way, the only difference between environments is the external service URL and whether the layers are attached.

You can also find the code below on GitHub.

1. Build the extension

Before this, I had never written a Lambda Extension in Rust. Luckily, Cargo Lambda made it very easy.

Create a new extension with cargo lambda new --extension lambda-wiremock. Go into the directory and install stubr with cargo add stubr.
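In a terminal:

$ cargo lambda new --extension lambda-wiremock
$ cd lambda-wiremock
$ cargo add stubr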

Cargo Lambda generates a starter template in src/main.rs; replace it with:

use lambda_extension::*;

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Where to look for JSON stubs and which port to listen on,
    // with defaults matching the Lambda layer layout.
    let stubs_path = std::env::var("STUBS_PATH").unwrap_or_else(|_| "/opt/stubs".to_string());
    let stubs_port = std::env::var("STUBS_PORT").unwrap_or_else(|_| "1234".to_string());

    tracing::init_default_subscriber();

    // Start the stubr mock server with all stubs found under stubs_path.
    let stubr = stubr::Stubr::start_with(
        stubs_path,
        stubr::Config {
            port: Some(stubs_port.parse().unwrap()),
            ..Default::default()
        },
    )
    .await;

    tracing::info!(stubr.uri = %stubr.uri(), "stubr started");

    // Register with the Lambda Extensions API and keep running.
    Extension::new().run().await
}

When the extension starts up (during a cold start), it will start a stubr server on the port specified in the STUBS_PORT environment variable or 1234 by default. It will recursively find all JSON input files in /opt/stubs or whatever you specify in the STUBS_PATH environment variable.
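Both settings can be overridden per function, since extensions inherit the function’s environment variables. A sketch in SAM (values hypothetical):

Environment:
  Variables:
    STUBS_PORT: "4321"    # hypothetical; must match the URL you call
    STUBS_PATH: /opt/mocks

If you change the port, remember to update the service URL (shown later) to match.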

Build and publish the extension with Cargo Lambda:

$ cargo lambda build --release --extension --arm64
$ cargo lambda deploy --extension --compatible-runtimes nodejs20.x

It’s essential to specify the correct architecture and compatible runtimes. A Lambda running x86 will not work with an extension built for ARM and vice versa. The extension is runtime agnostic in this case. I included only nodejs20.x for brevity.

2. Deploy an example app

To showcase the solution, I wrote a simple Lambda function that calls the fictional service and proxies its response to the caller:

import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const ANOTHER_SERVICE_URL = process.env.ANOTHER_SERVICE_URL;

export const handler = async (
  _event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  const response = await fetch(`${ANOTHER_SERVICE_URL}/some-path`);

  return {
    statusCode: 200,
    body: JSON.stringify(await response.json()),
  };
};

I deployed this function with SAM, but any other framework should do. If you want to see the complete example code using SAM, refer to my GitHub repository.

In my SAM project, I have a directory named wiremock and a sub-directory named stubs. It contains the following JSON stub:

{
  "request": {
    "method": "GET",
    "urlPath": "/another-service/some-path"
  },
  "response": {
    "jsonBody": {
      "message": "Hello from Stubr/Wiremock!"
    }
  }
}

This adds a request pattern that matches GET /another-service/some-path. Adding a distinct base path for each service you mock can be handy if you must mock multiple services with similar paths. I could, for example, add another mock for GET /a-third-service/some-path, as sketched below.
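Such a second stub (entirely hypothetical) could look like this:

{
  "request": {
    "method": "GET",
    "urlPath": "/a-third-service/some-path"
  },
  "response": {
    "jsonBody": {
      "message": "Hello from a third service!"
    }
  }
}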

I create a layer pointing to the wiremock directory so I can attach the stubs to my Lambda function:

StubsLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: wiremock
    CompatibleRuntimes:
      - nodejs20.x

When unpacked, layers are extracted to the /opt directory inside a Lambda function. My JSON files will end up in /opt/stubs, where the extension is configured to look by default.
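In other words, the layout looks roughly like this (the stub file name is hypothetical):

wiremock/              <- ContentUri of the layer
└── stubs/
    └── hello.json     <- extracted to /opt/stubs/hello.json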

Then, I can simply add the layers to the function and set ANOTHER_SERVICE_URL:

ApiFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/functions/api.handler
    Runtime: nodejs20.x
    Architectures:
      - arm64
    Timeout: 30
    Layers:
      - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:lambda-wiremock:1
      - !Ref StubsLayer
    Environment:
      Variables:
        ANOTHER_SERVICE_URL: http://127.0.0.1:1234/another-service
    Events: ...
  Metadata: ...
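In production, per the design above, the only changes would be dropping both layers and pointing the variable at the real service. A sketch (hypothetical URL):

ApiFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ... identical, but without the Layers section
    Environment:
      Variables:
        ANOTHER_SERVICE_URL: https://another-service.example.com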

After deploying the sample application, I can reap my reward:

$ http https://5hnphv1fs5.execute-api.eu-west-1.amazonaws.com/hello
HTTP/1.1 200 OK
Apigw-Requestid: AHWaUhR...
Connection: keep-alive
Content-Length: 40
Content-Type: text/plain; charset=utf-8
Date: Wed, 23 Oct 2024 17:50:46 GMT

{
    "message": "Hello from Stubr/Wiremock!"
}

I have successfully mocked an external service with minimal overhead. Now, I can continue to build my service and verify its functionality without battling unreliable test data and deployments in a shared test environment.

Conclusion

If you want to mock HTTP APIs, WireMock is a powerful option. The Rust implementation makes it a perfect fit for mocking APIs in your Lambda functions since you can run it in an extension. You can build the extension once (or twice for ARM and x86) and reuse it across your AWS organization. This way, service teams only have to manage their JSON stubs, and the shared extension handles the rest.

Mocking inter-team dependencies reduces requirements in some areas while increasing them in others. It reduces the need for having shared non-production environments where all teams must maintain their services. It removes the need to manage test data in those environments. However, it requires teams to ensure their API documentation is always up-to-date. If you build and mock against an OpenAPI schema, the real deal must behave as documented.

You could also use mocks for initial load testing, for example, to verify that your database handles the expected load without needing external services to scale with you. You can even configure the stubs to inject an artificial delay that mimics the actual services’ expected latency.
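WireMock’s stub format supports a fixed response delay via fixedDelayMilliseconds, which stubr documents support for as well. A sketch (the 200 ms figure is arbitrary):

{
  "request": {
    "method": "GET",
    "urlPath": "/another-service/some-path"
  },
  "response": {
    "fixedDelayMilliseconds": 200,
    "jsonBody": {
      "message": "Hello from Stubr/Wiremock!"
    }
  }
}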

A word of warning when using stubr for load testing: the first request to each stub seems to incur a “cold start” in stubr. My guess is that stubs are lazily evaluated when first requested. In my limited testing, the first request to a stub in a Lambda with 1769 MB of memory took around 100 ms; with 128 MB, it took around 1500 ms. The percentage of cold starts should be low when running steady traffic, such as during load testing. According to AWS, an analysis of production workloads showed that cold starts occur in under 1% of invocations. And if you are happy with the results while running the extension and layer, you can expect even better results and shorter cold starts once you remove them in production.

That was it for this time. Happy mocking!


About the author

I'm Elias Brange, a Cloud Consultant and AWS Community Builder in the Serverless category. I'm on a mission to drive Serverless adoption and help others on their Serverless AWS journey.
