Envoylint.com

Validate your Envoy configs from your browser

envoylint.com

What is this?

This site takes an Envoy config and validates it for you. It runs against actual Envoy binaries, with multiple versions supported (1.16.2, 1.16.0, 1.14, 1.12).

How does it work?

It sends the config to a Lambda that runs Envoy in validate mode or the config_load_check_tool, and prints the results.
There is a 30-second timeout on the linter due to API Gateway limitations; extremely large configs may hit it.
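
If you want to run the same validation locally, Envoy’s validate mode can be used directly. A minimal sketch, assuming you have an Envoy binary installed or use the official envoyproxy/envoy Docker image:

# Validate a config without starting the proxy; exits non-zero on errors
envoy --mode validate -c envoy.yaml

# Or via Docker, pinning one of the supported versions
docker run --rm -v "$PWD":/cfg envoyproxy/envoy:v1.16.2 --mode validate -c /cfg/envoy.yaml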

Do you save any data?

No. All sessions run on ephemeral Lambda containers, but it’s best to never send any sensitive data.

Can I see the code?

https://github.com/ysawa0/envoylint

Sniff Kubernetes Pod requests and headers using tcpdump

When your fancy observability tools have failed you, there’s still trusty tcpdump.

This was done on Ubuntu. YMMV on other distros.

Exec into a pod

kubectl exec -it my-pod-name -- sh

Run

cat /sys/class/net/eth0/iflink
> 588 # container eth id

It should return a number, the container eth id

Now run

kubectl describe po my-pod-name | grep Node

To find out the node it’s running on

SSH into the node, then run the below to find the ENI ID.

ip link | grep 588
> eni89aabc12345

Now use tcpdump to sniff the requests coming in

tcpdump -A -i eni89aabc12345

To capture a specific header

tcpdump -A -i eni89aabc12345 | grep -i X-Real-IP -C 5
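
If the pod serves plain HTTP on a known port (assuming port 80 here), a BPF filter narrows the capture to just that traffic:

tcpdump -A -i eni89aabc12345 'tcp port 80' | grep -i X-Real-IP -C 5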

AWS Double Charges Cross AZ Traffic and Two Solutions

Recently I saw an intriguing article by Corey Quinn of lastweekinaws.com. His finding was that although AWS lists cross-AZ traffic as costing $0.01 / GB in their documentation, cross-AZ traffic actually costs $0.02 / GB; they double charge, billing you for data going out of one AZ and again for going into the other AZ.

This pricing is the same as cross-region traffic: $0.02 / GB!

When I read this, it was hard to believe! For the last couple of years, I had always thought that cross-AZ traffic was cheaper than cross-region, and many of my co-workers believed the same.

I just had to recreate the cross AZ tests in the post to convince myself.

The test is simple, as Corey was kind enough to detail the steps. I’ve added some additional details below to make it even easier to recreate.

I chose a region where I had absolutely nothing running, to rule out any external factors.
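
Any method of pushing bytes between the two instances works for the test. As one possible sketch (not necessarily what Corey used), iperf3 can send a fixed amount of data to the other instance’s private IP; the IP below is a placeholder:

# On the receiving instance (us-west-2b)
iperf3 -s

# On the sending instance (us-west-2a)
iperf3 -c 10.0.2.15 -n 10G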

The Bill

Per the test, I sent 10GB of traffic from an EC2 in us-west-2a to another in us-west-2b. Once the bill came in, I was charged for 20GB of data. So every 1GB transferred counted as 2.

Cross AZ bills

Talking to AWS Support

After reaching out to AWS, they confirmed the results I saw. While the docs do not explicitly state that cross-AZ costs are “double charged”, they do state that data “is charged at $0.01/GB each direction.”
Ingress egress price

Reducing cross AZ data costs

As most organizations today deploy their services across multiple AZs for high availability, it’s difficult to reduce cross AZ data without the right architecture.

Here are two concepts being used today to combat rising cloud costs (note: implementing either is not easy!)

Service mesh (Envoy, Istio, etc.)

By adopting a service mesh architecture, it’s possible to force service-to-service communication to stay within the same AZ.

For example, if a service A container is running in us-east-1a, a service mesh sidecar container running alongside it can ensure all requests go to services also running in us-east-1a.

Implementing this with Envoy can be done through the zone aware routing feature or by using the Endpoint Discovery Service (EDS) to dynamically send only the service endpoints (IP addresses) that are in the same AZ. A rough sketch of the zone aware approach is shown below.
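
As a rough sketch (not a drop-in config), zone aware routing hinges on Envoy knowing its own locality and on the upstream cluster opting in via common_lb_config; the cluster names and zone below are placeholders:

node:
  locality:
    zone: us-east-1a              # the AZ this Envoy is running in
cluster_manager:
  local_cluster_name: service_a   # cluster describing the local service's own instances
static_resources:
  clusters:
  - name: service_b               # the upstream being called
    type: EDS                     # endpoints (and their AZs) are delivered over EDS
    lb_policy: ROUND_ROBIN
    common_lb_config:
      zone_aware_lb_config:
        min_cluster_size: 3       # fall back to cross-AZ routing below this size
    eds_cluster_config:
      eds_config:
        api_config_source:
          api_type: GRPC
          grpc_services:
          - envoy_grpc:
              cluster_name: xds_cluster

With the EDS approach instead, the control plane simply returns only the endpoints that are in the caller’s AZ.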

Cluster per AZ

This is a model being adopted by companies at scale like Tinder, Lyft and Reddit. They’re making it possible by using Kubernetes. The idea here is to have a production cluster per AZ, where each cluster has a complete replica of all your services – each cluster is an independent copy.

Cluster / AZ. Image taken from Reddit k8s talk

All communication within each cluster happens in the same AZ, as the traffic is pinned inside the cluster.

Of course, any shared data stores must be located outside the clusters. Managed data stores such as S3, DynamoDB and ElastiCache make a good fit.

Replay log files stored locally with ShadowReader load testing

ShadowReader can parse logs stored locally and push them to S3 so that they can be replayed by the load testing Lambdas.

The only requirements are that:

  • Logs must be in a consistent format.
  • You must supply a RegEx to instruct the script of the log format.
  • You must supply the time format for the timestamps in the logs.

Below is an example of how to parse logs stored in the default Nginx log format

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

How to

First, save the below to a logs.txt file.

10.168.166.132 - - [15/Mar/2019:04:12:24 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:12:31 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:12:39 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:12:46 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:12:54 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:13:01 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:13:09 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:13:16 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:13:24 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:13:31 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:13:39 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:13:46 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:13:54 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:14:01 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:14:09 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:14:16 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:14:24 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:14:31 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:14:39 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:14:46 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:14:54 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:15:01 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:15:09 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:15:16 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:15:24 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:15:31 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:15:39 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:15:46 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:15:54 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.168.78 - - [15/Mar/2019:04:16:01 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"
10.168.166.132 - - [15/Mar/2019:04:16:09 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"

Now run the local parser, parser.py, from the terminal.
The capturing group for the timestamp field must be named timestamp in the RegEx provided.
There must also be a capturing group named uri which captures the URI of the logged event.
The RegEx must be in Python format.

:param file: Name of log file to parse. Accepts wildcards.
:param app: Name of the application for the logs.
:param bucket: S3 bucket to store the parsed logs to, Ex: "my-bucket123"
:param timeformat: The format of the timestamp in the logs. Ex: 'DD/MMM/YYYY:HH:mm:ss ZZ'
Accepts the following tokens: https://pendulum.eustace.io/docs/#tokens
:param regex: Regex to use to parse the logs.
Ex: '(?P<remote_addr>[\S]+) - (?P<remote_user>[\S]+) \[(?P<timestamp>.+)\] "(?P<req_method>.+) (?P<uri>.+) (?P<httpver>.+)" (?P<status>[\S]+) (?P<body_bytes_sent>[\S]+) "(?P<referer>[\S]+)" "(?P<user_agent>[\S]+)" "(?P<x_forwarded_for>[\S]+)"'
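
To sanity-check the RegEx before uploading anything, you can try it against one of the sample lines in Python (this snippet is only an illustration and is not part of parser.py):

import re

# Same RegEx as passed to parser.py; note the required "timestamp" and "uri" named groups.
regex = (
    r'(?P<remote_addr>[\S]+) - (?P<remote_user>[\S]+) \[(?P<timestamp>.+)\] '
    r'"(?P<req_method>.+) (?P<uri>.+) (?P<httpver>.+)" (?P<status>[\S]+) '
    r'(?P<body_bytes_sent>[\S]+) "(?P<referer>[\S]+)" "(?P<user_agent>[\S]+)" '
    r'"(?P<x_forwarded_for>[\S]+)"'
)

line = '10.168.166.132 - - [15/Mar/2019:04:12:24 +0000] "GET / HTTP/1.1" 403 23 "-" "ELB-HealthChecker/2.0" "-"'

m = re.match(regex, line)
print(m.group("timestamp"))  # 15/Mar/2019:04:12:24 +0000
print(m.group("uri"))        # /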

Run the local parser

# inside the shadowreader directory
pip install -r requirements-local-parser.txt
python3 parser.py logs.txt --app app1 --bucket my-bucket \
--timeformat 'DD/MMM/YYYY:HH:mm:ss ZZ' \
--regex '(?P<remote_addr>[\S]+) - (?P<remote_user>[\S]+) \[(?P<timestamp>.+)\] "(?P<req_method>.+) (?P<uri>.+) (?P<httpver>.+)" (?P<status>[\S]+) (?P<body_bytes_sent>[\S]+) "(?P<referer>[\S]+)" "(?P<user_agent>[\S]+)" "(?P<x_forwarded_for>[\S]+)"'

# Wildcard example for parsing multiple files
python3 parser.py /tmp/logs-2019* --app app1 --bucket my-bucket \
--timeformat 'DD/MMM/YYYY:HH:mm:ss ZZ' \
--regex '(?P<remote_addr>[\S]+) - (?P<remote_user>[\S]+) \[(?P<timestamp>.+)\] "(?P<req_method>.+) (?P<uri>.+) (?P<httpver>.+)" (?P<status>[\S]+) (?P<body_bytes_sent>[\S]+) "(?P<referer>[\S]+)" "(?P<user_agent>[\S]+)" "(?P<x_forwarded_for>[\S]+)"'

NOTE: The S3 bucket set in --bucket must be the same as the name of the deployed parsed_data_bucket in serverless.yml

You should see an output like below.

5 minutes of traffic data was uploaded to S3.
Average requests/min: 6
Max requests/min: 8
Min requests/min: 2
Timezone found in logs: +00:00
To load test with these results, use the below parameters for the orchestrator in serverless.yml
==========================================
test_params: {
"base_url": "http://$your_base_url",
"rate": 100,
"replay_start_time": "2019-03-15T04:12",
"replay_end_time": "2019-03-15T04:16",
"identifier": "oss"
}
apps_to_test: ["app1"]
==========================================

Paste the test_params and apps_to_test into serverless.yml and follow the other guides to start the load test.

Hands Free Canary with ALB Advanced Routing Rules

Canary deployments may seem like an advanced technique that requires a team of engineers to implement.

But with the new Advanced Request Routing for ALBs (Application Load Balancer), safely releasing new versions of your application straight into production has never been easier.

First, either create a new ALB or use an existing ALB and copy its DNS name.

Then, clone this repo https://github.com/ysawa0/alb-canary

And copy the DNS name to this section of serverless.yml

environment:
  stage: ${self:custom.stage}
  region: ${self:custom.region}
  alb_dns_name: canary-alb-1183014609.us-east-1.elb.amazonaws.com

This repo will deploy a Lambda with an API Gateway endpoint that redirects users to the ALB with a twist: it adds an ?id=$val GET parameter, where $val is an integer from 1 to 6.
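
The handler itself is conceptually just a redirect with a random bucket appended; a minimal sketch (not the repo’s exact code) looks roughly like this:

import random

ALB_DNS_NAME = "canary-alb-1183014609.us-east-1.elb.amazonaws.com"  # the value from serverless.yml

def lambda_handler(event, context):
    # Pick a bucket from 1 to 6 and redirect the caller to the ALB with it as a query param.
    val = random.randint(1, 6)
    return {
        "statusCode": 302,
        "headers": {"Location": f"http://{ALB_DNS_NAME}/?id={val}"},
    }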

It uses the Serverless Framework

# Install Serverless if you don't have it
npm install serverless -g

Then run the below to deploy the Lambda and API Gateway.

sls deploy

Save the endpoint of the deployed API Gateway for later.

Now, we will set up routing rules that will mimic our “application”.

Click View/edit rules


Add the rules below
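
For reference, the canary rule can also be created with the AWS CLI. This is a sketch with a placeholder listener ARN; judging by the responses further down, the demo rules return fixed responses, whereas a real deployment would forward id=6 to a canary target group instead:

aws elbv2 create-rule \
  --listener-arn $LISTENER_ARN \
  --priority 1 \
  --conditions '[{"Field":"query-string","QueryStringConfig":{"Values":[{"Key":"id","Value":"6"}]}}]' \
  --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Id is 6, you have been canaried!"}}]'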


Now, try querying the API Gateway endpoint we deployed earlier.

1 out of 6 times, it should be bucketed into the canary rule.

curl -L https://dmkgpj2yxh.execute-api.us-east-1.amazonaws.com/qa/canary
> Id was 1 through 5
curl -L https://dmkgpj2yxh.execute-api.us-east-1.amazonaws.com/qa/canary
> Id is 6, you've been canaried!

That’s it! You’ve set up a canary deployment where 1/6 users are canaried.

AWSJar makes it easy to save data from AWS Lambda

🏺 AWSJar


🏺 AWSJar makes it easy to save data from AWS Lambda.

The data (either a dict, list, float, int, or string) can be saved within the Lambda itself as an environment variable or on S3.

Install

pip install awsjar

Examples

Increment a sum with every invocation

import awsjar

def lambda_handler(event, context):
    jar = awsjar.Jar(context.function_name)
    data = jar.get()  # Will return an empty dict if state does not already exist.

    s = data.get("sum", 0)
    data["sum"] = s + 1

    jar.put(data)

    return data

Make sure your website is up 24/7

import awsjar
import requests

# Set a CloudWatch Event to run this Lambda every minute.
def lambda_handler(event, context):
    jar = awsjar.Jar(context.function_name)
    data = jar.get()  # Will return an empty dict if state does not already exist.

    last_status_code = data.get("last_status_code", 200)

    result = requests.get('http://example.com')
    cur_status_code = result.status_code

    if last_status_code != 200 and cur_status_code != 200:
        print('Website might be down!')

    jar.put({'last_status_code': cur_status_code})

Save data to S3

import awsjar

# Save your data to an S3 object - s3://my-bucket/state.json
bkt = awsjar.Bucket('my-bucket', key='state.json')

data = {'num_acorns': 50, 'acorn_hideouts': ['tree', 'lake', 'backyard']}
bkt.put(data)

state = bkt.get()
>> {'num_acorns': 50, 'acorn_hideouts': ['tree', 'lake', 'backyard']}

How to

  1. Jar
    1. Initialization
    2. Save Data
    3. Serialize Data
    4. IAM Role for Lambda
  2. Bucket
    1. Initialization
    2. Save data
    3. Specifying Keys
    4. S3 Versioning
    5. Serialize Data

Jar

Save your data within the Lambda itself, as an environment variable.

This method has no associated costs but AWS only allows you to store up to 4KB of data in the environment variables.

Jar can compress the data before storing it, allowing up to about 8KB of uncompressed data.

This may not seem like much, but it can cover a lot of use cases. It’s also nice to not have to provision extra resources and to keep everything self-contained.
Here’s a 7KB list that will fit with Jar.

x = list(range(1400))
>> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 
732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070, 1071, 1072, 1073, 1074, 1075, 1076, 1077, 1078, 1079, 1080, 1081, 1082, 1083, 1084, 1085, 1086, 1087, 1088, 1089, 1090, 1091, 1092, 1093, 1094, 1095, 1096, 1097, 1098, 1099, 1100, 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, 1121, 1122, 1123, 1124, 1125, 1126, 1127, 1128, 1129, 1130, 1131, 1132, 1133, 1134, 1135, 1136, 1137, 1138, 1139, 1140, 1141, 1142, 1143, 1144, 1145, 1146, 1147, 1148, 1149, 1150, 1151, 1152, 1153, 1154, 1155, 1156, 1157, 1158, 1159, 1160, 1161, 1162, 1163, 1164, 1165, 1166, 1167, 1168, 1169, 1170, 1171, 1172, 1173, 1174, 1175, 1176, 1177, 1178, 1179, 1180, 1181, 1182, 1183, 1184, 1185, 1186, 1187, 1188, 1189, 1190, 1191, 1192, 1193, 1194, 1195, 1196, 1197, 1198, 1199, 1200, 1201, 1202, 1203, 1204, 1205, 1206, 1207, 1208, 1209, 1210, 1211, 1212, 1213, 1214, 1215, 1216, 1217, 1218, 1219, 1220, 1221, 1222, 1223, 1224, 1225, 1226, 1227, 1228, 1229, 1230, 1231, 1232, 1233, 1234, 1235, 1236, 1237, 1238, 1239, 1240, 1241, 1242, 1243, 1244, 1245, 1246, 1247, 1248, 1249, 1250, 1251, 1252, 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1260, 1261, 1262, 1263, 1264, 1265, 1266, 1267, 1268, 1269, 1270, 1271, 1272, 1273, 1274, 1275, 1276, 1277, 1278, 1279, 1280, 1281, 1282, 1283, 1284, 1285, 1286, 1287, 1288, 1289, 1290, 1291, 1292, 1293, 1294, 1295, 1296, 1297, 1298, 1299, 1300, 1301, 1302, 1303, 1304, 1305, 1306, 1307, 1308, 1309, 1310, 1311, 1312, 1313, 1314, 1315, 1316, 1317, 1318, 1319, 1320, 1321, 1322, 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1331, 1332, 1333, 1334, 1335, 1336, 1337, 1338, 1339, 1340, 1341, 1342, 1343, 1344, 1345, 1346, 1347, 1348, 1349, 1350, 1351, 1352, 1353, 1354, 1355, 1356, 1357, 1358, 1359, 1360, 1361, 1362, 1363, 1364, 1365, 1366, 1367, 1368, 
1369, 1370, 1371, 1372, 1373, 1374, 1375, 1376, 1377, 1378, 1379, 1380, 1381, 1382, 1383, 1384, 1385, 1386, 1387, 1388, 1389, 1390, 1391, 1392, 1393, 1394, 1395, 1396, 1397, 1398, 1399]
jar.put(x)

Initialization

import awsjar

# Can specify region if testing locally
jar = awsjar.Jar(lambda_name='sams-lambda', region='us-east-1')

# If running the code in Lambda, it will automatically know the proper region it's running in.
jar = awsjar.Jar(lambda_name='sams-lambda')

# Turn on data compression
jar = awsjar.Jar(lambda_name='sams-lambda', compression=True)

Save data

data = {'num_acorns': 50, 'acorn_hideouts': ['tree', 'lake', 'backyard']}
jar.put(data)

state = jar.get()
>> {'num_acorns': 50, 'acorn_hideouts': ['tree', 'lake', 'backyard']}

Serializing data

Jar comes with datetime encoders/decoders for you to use.

It uses the standard library json.dumps and json.loads to serialize data so it’s possible to write your own encoder/decoders to serialize your data.

Here’s an example:

from awsjar import Jar, datetime_decoder, datetime_encoder
from datetime import datetime

jar = Jar(
    lambda_name=lambda_name,
    region=region,
    decoder=datetime_decoder,
    encoder=datetime_encoder,
)
time = datetime.now()

data = {"list": [1, 2, 3], "dt1": time}

jar.put(data)
x = jar.get()
>> {"list": [1, 2, 3], 'dt1': datetime.datetime(2019, 1, 9, 18, 49, 44, 847202)}

IAM Role

Any Lambda using Jar to save to an env var will need these permissions specified in the Role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionConfiguration",
                "lambda:GetFunctionConfiguration"
            ],
            "Resource": "*"
        }
    ]
}

Bucket

Save your data to S3.

Initialization

import awsjar

bkt = awsjar.Bucket(bucket='my-bucket', key='state.json')

# Can specify region if you'd like.
bkt = awsjar.Bucket(bucket='my-bucket', key='state.json', region='us-east-1')

# This will pretty print any data saved to S3.
bkt = awsjar.Bucket(bucket='my-bucket', key='state.json', pretty=True)

Save data

data = {'num_acorns': 50, 'acorn_hideouts': ['tree', 'lake', 'backyard']}
bkt.put(data)

state = bkt.get()
>> {'num_acorns': 50, 'acorn_hideouts': ['tree', 'lake', 'backyard']}

bkt.delete() # Delete the object
bkt.delete(key="key123") # Delete the object at a specific key

Specifying keys

You can pass a key to override the one that was set at initialization.

bkt = awsjar.Bucket(bucket='my-bucket', key='state.json')
bkt.put(['test']) # Saved to s3://my-bucket/state.json

data = ['override']
bkt.put(data, key="override.json") # Saved to s3://my-bucket/override.json

state = bkt.get(key="override.json")
>> ['override']

Versioning

S3 has an eventually consistent data model.

For example, this means that getting an object immediately after overwriting it may not return the data you expect.

To overcome this, enable versioning.

If an S3 Bucket has versioning enabled, Bucket will detect it automatically and fetch the latest version of an object on any get() calls.

# Check versioning status
bkt.is_versioning_enabled()

# Enable versioning
bkt.enable_versioning()

# Disable versioning
bkt.disable_versioning()

Serializing data

Same as Jar

Contributing

Please see the contributing guide for more specifics.

Contact / Support

Please use the Issues page

I greatly appreciate any feedback / suggestions! Email me at: yukisawa@gmail.com

License

Distributed under the Apache License 2.0. See LICENSE for more information.

Using AWS ElastiCache Redis with Spinnaker

To productionize a Spinnaker installation for high availability, one of the recommendations is to use an external Redis store, such as AWS ElastiCache. This guide will go over how to migrate a Kubernetes installation of Spinnaker to an AWS ElastiCache Redis instance using Halyard.

All config files (with the proper directory structure) used in this guide can be found in this repo: ysawa0/spinnaker-elasticcache-redis


Create the ElastiCache instance

elasticcache-settings

Keep Cluster Mode unchecked.

Node type will depend on your needs and budget; here we chose an m5.large.

For Engine Version choose 3.2.10.

Configure Halyard and update Spinnaker

elasticcache-settings

After the instance is created, copy the Primary Endpoint for the cluster.

If you want to update all Spinnaker services at once, place this snippet into ~/.hal/default/service-settings/redis.yml, and replace $REDIS_PRIMARY_ENDPOINT with your endpoint.

overrideBaseUrl: redis://$REDIS_PRIMARY_ENDPOINT
skipLifeCycleManagement: true

To update one Spinnaker service at a time instead, place the below into ~/.hal/default/profiles/$SERVICE-local.yml

Where $SERVICE would be orca, clouddriver, gate, etc.

services.redis.baseUrl: redis://$REDIS_PRIMARY_ENDPOINT

Lastly, after updating the base URLs, place this into ~/.hal/default/profiles/gate-local.yml.

redis:
  configuration:
    secure: true

Now update Spinnaker by running hal deploy apply.

After you confirm that everything is working as expected, it’s time to disable the spin-redis service.

Update ~/.hal/default/service-settings/redis.yml by inserting enabled: false

overrideBaseUrl: redis://$REDIS_PRIMARY_ENDPOINT
skipLifeCycleManagement: true
enabled: false

And scale down the Redis Deployment to 0 replicas in Kubernetes.

kubectl scale deploy spin-redis -n spinnaker --replicas=0

Now sit back, relax, and enjoy having one less data store to monitor.

How we fixed a Node.js memory leak by using ShadowReader to replay production traffic into QA

A problem Edmunds faced recently was a memory leak in our Node.js application. It confounded the engineering team as it was only occurring in our production environment; we could not reproduce it in QA, until we introduced a new type of load testing tool developed here at Edmunds, which replays production traffic.


Read about it on opensource.com!