Next-level mocks with Lambda extensions
Most services rely on external services to function. Organizations often try to maintain shared non-production environments to test these integrations. However, as the number of teams and services grows, coordination issues and the risk of being blocked by other teams become common.
Some issues you might encounter:
- A team breaks a service you rely on during development, blocking your progress.
- A service you depend on has a complete API specification but is not yet fully implemented, blocking your progress.
- A team uses ephemeral environments instead of a long-lived test environment, making it hard to know which one to use.
Your service will communicate with those external services in production, so you must build for it. What do you do? You build in isolation.
Building in isolation can significantly improve the agility of a team. It removes the constant need for coordination in a shared test environment. A team can develop at its own pace while minimizing the risk of another team blocking progress. It requires sound practices, though. Sound practices in the form of APIs that are discoverable and well-documented. You do have OpenAPI schemas, right?
To build in isolation, you must replace calls to external services somehow. You want something that acts like the real deal while minimizing the work required to maintain it.
Let’s look at some options.
Mocking external APIs
So, how do we mock external APIs? Let’s start simple. In our code, we can check if we are in a non-production environment and branch out to separate logic. It could look like this:
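Something along these lines, sketched as a Node.js handler (the helper name and URL are illustrative):

```javascript
// Branching on the environment at the integration point (illustrative names).
async function getGreeting() {
  if (process.env.ENVIRONMENT !== "production") {
    // Short-circuit with a canned response outside production.
    return { message: "there!" };
  }
  const response = await fetch("https://another-service.example.com/hello");
  return response.json();
}
```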
This “works,” but it is not the prettiest. Littering all your integration points with if-clauses like this makes the code much harder to understand and maintain. You also exercise separate code paths in different environments, which means your “actual” code doesn’t run until it reaches production.
Let’s take another approach. Using environment variables, we’ll configure the URL(s) to any external service:
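Again a sketch with illustrative names, using the `ANOTHER_SERVICE_URL` variable that appears later in this post:

```javascript
// The integration code is identical in every environment; only the URL differs.
const baseUrl = process.env.ANOTHER_SERVICE_URL;

async function getGreeting() {
  const response = await fetch(`${baseUrl}/hello`);
  return response.json();
}
```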
We’ll set the environment variable in production to the actual URL. In development, we need something else. We need a mock server configured to respond to requests in a specific and deterministic way.
Enter WireMock, a tool that lets you mock APIs. You can define request patterns and response templates in JSON files such as this:
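A minimal stub (WireMock supports far richer request matching) could be:

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/hello"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "message": "there!" }
  }
}
```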
When you start WireMock and send it a request for `GET /hello`, it will respond with the JSON payload `{"message": "there!"}`.
So, how do we deploy and operate this? Most guides I found pointed to deploying WireMock as a Docker container. While certainly doable, I wasn’t enthusiastic about deploying (and paying for) a long-running container process to mock the dependencies of my serverless (and practically free) Lambda functions.
My search continued, and I stumbled upon wiremock-rs and, subsequently, stubr. The former is a Rust implementation of WireMock, and the latter builds on it to support the JSON stub files mentioned above.
This I can work with. Due to Rust’s minimal footprint, I figured I could chuck it into a Lambda Extension and avoid having to run any external containers.
Lambda Extensions
Lambda Extensions allow you to run separate processes alongside your Lambda function. One of the most common use cases is running observability agents that ship data to your observability platform. In this case, we will build an extension that starts a mock server locally. By adding this extension to a Lambda function, we can then send requests to the mock server on localhost. You add an extension to a Lambda function by attaching it as a layer. Check out the official AWS docs for more information.
A solution with WireMock, Rust, and Lambda Extensions
I landed on a solution that starts stubr locally in a Lambda Extension. JSON stubs are added as a separate layer, and the URL to the fictional service is configured with an environment variable. This way, the only difference between environments is the external service URL and whether the layers are attached.
You can also find the code below on GitHub.
1. Build the extension
Before this, I had never written a Lambda Extension in Rust. Luckily, Cargo Lambda made it very easy.
Create a new extension with `cargo lambda new --extension lambda-wiremock`. Go into the directory and install stubr with `cargo add stubr`.
Cargo Lambda generates a starter template in `src/main.rs`; replace it with:
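A sketch of what that can look like, based on the Cargo Lambda extension template and stubr’s `Stubr::start_with` API (double-check the exact `Config` fields against the stubr docs):

```rust
use lambda_extension::{service_fn, Error, LambdaEvent, NextEvent};
use stubr::{Config, Stubr};

// The Extensions API requires us to keep polling for events, even though
// this extension does nothing with them besides acknowledging shutdown.
async fn events_extension(event: LambdaEvent) -> Result<(), Error> {
    match event.next {
        NextEvent::Shutdown(_) => { /* the sandbox is going away */ }
        NextEvent::Invoke(_) => { /* nothing to do per invocation */ }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Port and stub location are configurable, with the defaults described below.
    let port = std::env::var("STUBS_PORT")
        .ok()
        .and_then(|p| p.parse().ok())
        .unwrap_or(1234);
    let path = std::env::var("STUBS_PATH").unwrap_or_else(|_| "/opt/stubs".to_string());

    // Start the stub server during the cold start; it lives as long as the sandbox.
    let _stubr = Stubr::start_with(
        path,
        Config {
            port: Some(port),
            ..Default::default()
        },
    )
    .await;

    lambda_extension::run(service_fn(events_extension)).await
}
```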
When the extension starts up (during a cold start), it will start a stubr server on the port specified in the `STUBS_PORT` environment variable, or `1234` by default. It will recursively find all JSON stub files in `/opt/stubs`, or in whatever path you specify in the `STUBS_PATH` environment variable.
Build and publish the extension with Cargo Lambda:
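From the extension’s directory, something along these lines (see the Cargo Lambda docs for the exact flags):

```bash
# Build for x86-64; pass --arm64 to target ARM instead.
cargo lambda build --extension --release

# Publish the extension as a layer, listing its compatible runtimes.
cargo lambda deploy --extension --compatible-runtimes nodejs20.x
```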
It’s essential to specify the correct architecture and compatible runtimes. A Lambda function running x86 will not work with an extension built for ARM, and vice versa. The extension is runtime-agnostic in this case; I included only `nodejs20.x` for brevity.
2. Deploy an example app
To showcase the solution, I wrote a simple Lambda function that calls the fictional service and proxies its response to the caller:
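A sketch of such a handler, assuming the Node.js 20 runtime (which provides a global `fetch`) and the `ANOTHER_SERVICE_URL` variable configured below:

```javascript
// index.mjs — call the fictional service and proxy its response to the caller.
export const handler = async () => {
  const baseUrl = process.env.ANOTHER_SERVICE_URL;
  const response = await fetch(`${baseUrl}/another-service/some-path`);

  return {
    statusCode: response.status,
    body: await response.text(),
  };
};
```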
I deployed this function with SAM, but any other framework should do. If you want to see the complete example code using SAM, refer to my GitHub repository.
In my SAM project, I have a directory named `wiremock` with a sub-directory named `stubs`. It contains the following JSON stub:
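The response body below is illustrative; the request matching is the important part:

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/another-service/some-path"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "message": "Hello from the mock!" }
  }
}
```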
This adds a request pattern matching all requests to `GET /another-service/some-path`. Adding a distinct base path for each service you mock can be handy if you must mock multiple services with similar paths. I could, for example, add another mock for `GET /a-third-service/some-path`.
I create a layer by pointing to the `wiremock` directory so I can add it to my Lambda function:
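A SAM resource along these lines (the logical names are illustrative):

```yaml
  StubsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: wiremock-stubs
      ContentUri: wiremock/
      CompatibleRuntimes:
        - nodejs20.x
```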
When unpacked, layers are extracted to the `/opt` directory inside a Lambda function. My JSON files will end up in `/opt/stubs`, where the extension is configured to look by default.
Then, I can simply add the layers to the function and set `ANOTHER_SERVICE_URL`:
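For example (the extension layer ARN and version are illustrative; note how the service URL points at `localhost` and the extension’s default port):

```yaml
  ProxyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs20.x
      Layers:
        - !Ref StubsLayer
        - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:lambda-wiremock:1
      Environment:
        Variables:
          ANOTHER_SERVICE_URL: http://localhost:1234
```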
After deploying the sample application, I can reap my reward:
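With the illustrative stub above, an invocation should return the mocked payload, along these lines (the function name is made up):

```bash
$ aws lambda invoke --function-name ProxyFunction response.json
$ cat response.json
{"statusCode":200,"body":"{\"message\":\"Hello from the mock!\"}"}
```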
I have successfully mocked an external service with minimal overhead. Now, I can continue to build my service and verify its functionality without battling unreliable test data and deployments in a shared test environment.
Conclusion
If you want to mock HTTP APIs, WireMock is a powerful option. The Rust implementation makes it a perfect fit for mocking APIs in your Lambda functions since you can run it in an extension. You can build the extension once (or twice for ARM and x86) and reuse it across your AWS organization. This way, service teams only have to manage their JSON stubs, and the shared extension handles the rest.
Mocking inter-team dependencies reduces requirements in some areas while increasing them in others. It reduces the need for shared non-production environments where all teams must maintain their services. It removes the need to manage test data in those environments. However, it requires teams to ensure their API documentation is always up-to-date. If you build and mock against an OpenAPI schema, the real deal must behave as documented.
You could also use mocks when running initial load testing, for example to verify that your database handles the expected load, without needing external services to scale with you. You can even configure the stubs to inject an artificial delay to mimic the actual services’ expected latency.
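WireMock’s stub format has a `fixedDelayMilliseconds` field for this; assuming stubr honors it, a delayed stub could look like:

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/another-service/some-path"
  },
  "response": {
    "status": 200,
    "fixedDelayMilliseconds": 200,
    "jsonBody": { "message": "Hello from the mock!" }
  }
}
```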
A word of warning when using stubr for load testing: the first request to each stub seems to require a “cold start” in stubr. My guess is that the stubs are lazily evaluated when first requested. During my limited testing, the first request to the stub in a Lambda function with `1769 MB` of memory took around `100 ms`; with `128 MB`, it took around `1500 ms`. The percentage of cold starts should be low when running steady traffic, such as during load testing. According to AWS, an analysis of production workloads showed that cold starts occur in under 1% of invocations. And if you are happy with the results while running the extension and layer, you can expect better results and shorter cold starts when you remove them in production.
This was it for this time. Happy mocking!