Hands Free Canary with ALB Advanced Routing Rules

Canary deployments may seem like an advanced technique that requires a team of engineers to implement.

But with the new Advanced Request Routing feature for Application Load Balancers (ALBs), safely releasing new versions of your application straight into production has never been easier.

First, either create a new ALB or use an existing ALB and copy its DNS name.
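
If you would rather grab the DNS name from the CLI than from the console, something along these lines works; the load balancer name canary-alb is a placeholder for your own:

# Print the ALB's DNS name; replace canary-alb with your load balancer's name
aws elbv2 describe-load-balancers --names canary-alb \
  --query 'LoadBalancers[0].DNSName' --output text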

Then, clone this repo: https://github.com/ysawa0/alb-canary
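
For example, over HTTPS:

git clone https://github.com/ysawa0/alb-canary.git
cd alb-canary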

And copy the DNS name into this section of serverless.yml:

environment:
  stage: ${self:custom.stage}
  region: ${self:custom.region}
  alb_dns_name: canary-alb-1183014609.us-east-1.elb.amazonaws.com

This repo will deploy a Lambda with an API Gateway endpoint that redirects users to the ALB with a twist: it appends a ?id=$val query parameter, where $val is an integer from 1 to 6.
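
Once the stack is deployed (next step), you can see the parameter being added by asking curl for the headers instead of following the redirect; the id value and scheme in the sample output line are illustrative and will differ from run to run:

# Show the redirect issued by the Lambda (-i includes the response headers)
curl -si https://dmkgpj2yxh.execute-api.us-east-1.amazonaws.com/qa/canary | grep -i '^location'
# e.g. location: http://canary-alb-1183014609.us-east-1.elb.amazonaws.com/?id=4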

It uses the Serverless Framework.

# Install Serverless if you don't have it
npm install serverless -g

Then run the following to deploy the Lambda and API Gateway:

sls deploy

Save the endpoint of the deployed API Gateway for later.
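
If you did not note it down, the Serverless Framework will print it again:

# Shows service info, including the API Gateway endpoints
sls info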

Now, we will set up routing rules that will mimic our “application”.

On the ALB's listener, click View/edit rules.

Add rules that match on the id query string parameter, sending id=6 to the canary response and the other values to the regular one.
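
If you would rather script this than click through the console, roughly equivalent rules can be created with the AWS CLI. The sketch below is an assumption about the setup, not a copy of the console rules: $LISTENER_ARN, the priorities, and the response bodies are placeholders, and fixed-response actions stand in for the "application" so the two paths return distinguishable text.

# Canary rule: requests carrying id=6 get the canary response
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 1 \
  --conditions '[{"Field":"query-string","QueryStringConfig":{"Values":[{"Key":"id","Value":"6"}]}}]' \
  --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Id is 6, you have been canaried!"}}]'

# Stable rule: any other id value (1 through 5) gets the regular response
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 2 \
  --conditions '[{"Field":"query-string","QueryStringConfig":{"Values":[{"Key":"id","Value":"*"}]}}]' \
  --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Id was 1 through 5"}}]'

Because rules are evaluated in priority order, the id=6 rule has to sit above the catch-all so canary requests never fall through to it.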

The listener should now list the canary rule for id=6 above the rule(s) covering ids 1 through 5.

Now, try querying the API Gateway endpoint we deployed earlier.

1 out of 6 times, it should be bucketed into the canary rule.

curl -L https://dmkgpj2yxh.execute-api.us-east-1.amazonaws.com/qa/canary
Id was 1 through 5
curl -L https://dmkgpj2yxh.execute-api.us-east-1.amazonaws.com/qa/canary
Id is 6, you've been canaried!

That’s it! You’ve set up a canary deployment where 1 in 6 users is canaried.

Using AWS ElastiCache Redis with Spinnaker

To make a Spinnaker installation production-ready and highly available, one of the recommendations is to use an external Redis store, such as AWS ElastiCache. This guide will go over how to migrate a Kubernetes installation of Spinnaker to an AWS ElastiCache Redis instance using Halyard.

All config files (with the proper directory structure) used in this guide can be found in this repo: ysawa0/spinnaker-elasticcache-redis

Create the ElastiCache instance

Keep Cluster Mode unchecked.

Node type will depend on your needs and budget; here we chose an m5.large.

For Engine Version choose 3.2.10.
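
The same instance can also be created from the CLI; the replication group id, description, subnet group, and security group below are placeholders, and cache.m5.large is the ElastiCache node type corresponding to the m5.large chosen above:

# Redis replication group with cluster mode disabled (one primary, one replica)
aws elasticache create-replication-group \
  --replication-group-id spinnaker-redis \
  --replication-group-description "External Redis for Spinnaker" \
  --engine redis \
  --engine-version 3.2.10 \
  --cache-node-type cache.m5.large \
  --num-cache-clusters 2 \
  --automatic-failover-enabled \
  --cache-subnet-group-name my-cache-subnets \
  --security-group-ids sg-0123456789abcdef0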

Configure Halyard and update Spinnaker

After the instance is created, copy the Primary Endpoint for the cluster.
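
If you created the instance from the CLI, the endpoint can be fetched without the console (spinnaker-redis is the placeholder replication group id from the sketch above):

# Print the primary endpoint of the replication group
aws elasticache describe-replication-groups \
  --replication-group-id spinnaker-redis \
  --query 'ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint.Address' \
  --output text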

If you want to update all Spinnaker services at once, place this snippet into ~/.hal/default/service-settings/redis.yml, and replace $REDIS_PRIMARY_ENDPOINT with your endpoint.

overrideBaseUrl: redis://$REDIS_PRIMARY_ENDPOINT
skipLifeCycleManagement: true

To update one Spinnaker service at a time instead, place the snippet below into ~/.hal/default/profiles/$SERVICE-local.yml

Where $SERVICE would be orca, clouddriver, gate, etc.

services.redis.baseUrl: redis://$REDIS_PRIMARY_ENDPOINT

Lastly, after updating the base URLs, place this into ~/.hal/default/profiles/gate-local.yml.

redis:
  configuration:
    secure: true

Now update Spinnaker by running hal deploy apply.
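
Once the apply finishes, one quick way to check that the services are really writing to the new instance is to watch its key count grow; this assumes redis-cli is available on a host with network access to the ElastiCache instance:

# The key count should climb as Spinnaker services write to the new Redis
redis-cli -h $REDIS_PRIMARY_ENDPOINT -p 6379 dbsize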

After you confirm that everything is working as expected, it’s time to disable the spin-redis service.

Update ~/.hal/default/service-settings/redis.yml by inserting enabled: false

overrideBaseUrl: redis://$REDIS_PRIMARY_ENDPOINT
skipLifeCycleManagement: true
enabled: false

And scale down the Redis Deployment to 0 replicas in Kubernetes.

kubectl scale deploy spin-redis -n spinnaker --replicas=0
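
To double-check that the in-cluster Redis is no longer running:

# READY should report 0/0 once the scale-down has taken effect
kubectl get deploy spin-redis -n spinnaker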

Now sit back, relax, and enjoy having one less data store to monitor.