Last week I took the AWS Certified Security – Specialty exam — and I passed with a score of 930 (Woohoo!!)
In this post I cover why I took it, what I did to pass, my overall exam experience, and some tips I learnt along the way.
So let’s go.
Why would anybody pay good money and subject themselves to hours of studying, only to end up sitting in a cold exam room for hours answering multiple-choice questions?!
And the reward for all that work is an unsigned PDF file claiming you’re ‘certified’, and ‘privileged’ access to buy AWS-branded notebooks and water bottles!! Unless those water bottles come with a reserved instance for Microsoft SQL Server in Bahrain, I’m not interested.
But, jokes aside, I did this for fun and profit, and fortunately I really did enjoy preparing for this exam. It exposed me to AWS services that I barely knew, and forced me to level up my skills even on those that I did know.
The exam has a massive focus on VPC, KMS, IAM, S3, EC2, CloudTrail and CloudWatch, while lightly touching on GuardDuty, Macie, Config, Inspector, Lambda, CloudFront, WAF, Systems Manager and AWS Shield.
You need to catch your breath just reading through that list!
But for those diligently keeping count, you’ll notice that the majority of those services are serverless, meaning the exam combined my two technological love affairs … security and serverless!
I wasn’t lying when I said it was fun. So what about the profit?
I’m not sure how good this will be for my career (I literally got the cert last week), but at $300 it’s relatively cheap, with a tonne of practical value. So getting an ROI on this isn’t going to be hard.
For comparison, the CCSP certification costs nearly twice as much, is highly theoretical, and requires professional experience.
The results also help me validate my past years of working on serverless projects, proving I wasn’t just some rando posting useless hobby projects on GitHub. Instead, I’m now a certified AWS professional, posting useless hobby projects on GitHub (it’s all about how you market it!)
So now that we’ve covered the why, let’s move on to the how.
In my honest (and truly humble) opinion, VPCs don’t make much sense in a serverless architecture. It’s not that they don’t add value; it’s that the value they add isn’t worth the complexity you incur.
After all, you can’t log into a lambda function; no inward connections are allowed. And it isn’t a persistent environment; some functions may time out after just 2-3 seconds. Sure, network-level security is still a worthy pursuit, but for serverless, tightly managing IAM roles and auditing your software supply chain for vulnerabilities is better value for your money.
But if you’ve got a fleet of EC2s already deployed in a VPC, and your Lambda function needs to access them, then you have no choice but to deploy that function in a VPC as well. Or, if your org requires full network logging of all your workloads, then you’ll also need VPCs (and their flow logs) to comply with such requirements.
Don’t get me wrong, there is value in having your functions in a VPC, just probably not as much as you think.
Putting that aside though, let’s dive into the wonderful world of Lambda functions and VPCs.
First, imagine we deploy a simple VPC with 4 subnets.
- A Public Subnet with a NAT Gateway inside it
- A Private Subnet which routes all traffic through that NAT Gateway
- A Private Subnet without internet access (only local routing)
- A Private Subnet without internet access, but with an SSM VPC endpoint (VPCe) inside it
Let’s label these subnets (1), (2), (3) and (4) for simplicity.
Now we write some Lambda functions and deploy one into each subnet. Each function has an attached security group that allows all outgoing connections, and similarly each subnet has very liberal NACLs that allow all incoming and outgoing connections.
Then we create a gateway S3 VPCe and route subnet (4)’s traffic to it. Finally, we enable private DNS on the entire VPC, and outside the VPC we create an S3 bucket and a Systems Manager Parameter Store parameter (AWS really needs better terms for these things).
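For concreteness, here’s roughly what that endpoint plumbing looks like with the AWS CLI; the region, VPC, subnet and route-table IDs below are placeholders for your own resources:

```bash
# Gateway endpoint for S3, attached to subnet (4)'s route table.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc123 \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids rtb-0subnet4

# Interface endpoint for SSM inside subnet (4), with private DNS enabled.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc123 \
  --service-name com.amazonaws.us-east-1.ssm \
  --vpc-endpoint-type Interface \
  --subnet-ids subnet-0subnet4 \
  --private-dns-enabled

# Private DNS on interface endpoints needs DNS resolution enabled on the VPC itself.
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-hostnames "{\"Value\":true}"
```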
The final network looks like this:
Everyone knows that I’m a Lambda fanboy, and to be fair, Lambda deserves all the praise it gets; it is **the** gold standard for serverless functions. But yesterday I gave Google Cloud Run a spin, and boy(!) is Lambda going to get a run for its money.
Which is surprising given Google has traditionally lagged in this area — isn’t it quaint that we use words like ‘traditional’ in the serverless world!
But I digress.
The Lambda equivalent in the Google world is Google Cloud Functions, which is (generously speaking) what Lambda was 2 years ago: pretty boring. The only advantage I saw it having over Lambda was the ability to build python packages natively from the `requirements.txt` file. But that incurred a build during deploy, which in turn had its own limits.
In short, Google Cloud Functions lacked the simplicity of Lambda, with little benefit to show for all that additional complexity.
But Cloud Run is something else. It’s still more complex than Lambda, but here the trade-off seems worth it. So let’s take a peek at Google’s new serverless Golden Boy!
Containers vs. Functions
In Lambda the atomic unit of compute is the function, which for an interpreted language like Python is just plaintext code uploaded to AWS. But in Cloud Run the atomic unit is the container — and that can be a container for just the one function, or the container for the entire app itself — with all the routing logic embedded within it.
Now why would you need apps in the serverless world?! you ask indignantly. Aren’t these all supposed to be function-based?
Well, actually, lots of people have legacy code written at the application level, and re-writing an entire application takes a long time and very rarely succeeds on the first try.
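To make the container-as-the-unit idea concrete, here’s roughly what deploying to Cloud Run looks like with the `gcloud` CLI; the project and service names here are made up:

```bash
# Build the container image from the app's Dockerfile and push it to the registry.
gcloud builds submit --tag gcr.io/my-project/my-app

# Deploy the image as a fully managed Cloud Run service. Whatever is in the
# container (a single function or a whole app with its routing) serves traffic.
gcloud run deploy my-app \
  --image gcr.io/my-project/my-app \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```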
Lambda functions are awesome, but they only provide a single dimension to allocate resources: `memorySize`. The simplicity is refreshing, as lambda functions are complex enough, but AWS really shouldn’t have called it `memorySize` if it controls CPU as well.
Then again this is the company that gave us Systems Manager Session Manager, so the naming could have been worse (much worse!).
The `memorySize` of your lambda function allocates both memory and CPU in proportion, i.e. twice as much memory gives you twice as much CPU.
The smallest lambda starts at 128MB of memory, which you can increment in steps of 64MB, all the way to 3008MB (just shy of 3GB).
So far, nothing special.
But at 1792MB, something wonderful happens: you get one full vCPU. This is Gospel truth in lambda-land, because the AWS documentation says so. In short, a 1792MB lambda function gets 1 vCPU, while a 128MB lambda function gets ~7% of that (since 128MB is roughly 7% of 1792MB).
Using maths, we realize that at 3008MB, our lambda function is allocated 167% of a vCPU.
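A quick back-of-the-envelope check of those numbers (pure arithmetic, anchored on the documented 1792MB = 1 vCPU point):

```bash
# Percentage of a vCPU allocated at a few memorySize settings,
# assuming the documented anchor of 1792MB == 1 full vCPU.
for mem in 128 1024 1792 3008; do
  awk -v m="$mem" 'BEGIN { printf "%4d MB -> %d%% of a vCPU\n", m, int(m / 1792 * 100) }'
done
#  128 MB -> 7% of a vCPU
# 1024 MB -> 57% of a vCPU
# 1792 MB -> 100% of a vCPU
# 3008 MB -> 167% of a vCPU
```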
But what does that 167% vCPU mean?!
I can rationalize anything up to 100%. After all, getting 50% vCPU simply means you get the CPU for 50% of the time, and that logic holds all the way up to 100%, but after that things get a bit wonky.
After all, what does having 120% vCPU mean? Do you get 1 full core plus 20% of another? Or do you get 60% of two cores?
At the end of 2018, AWS introduced custom runtimes for Lambda functions, which gave customers a way to run applications written in languages outside the holy list of official AWS Lambda runtimes. That list already covers a plethora of languages: 3 versions of Python, 2 versions of Node.js, plus Ruby, Java, Go and .NET Core (that’s a lot of language support).
Security-wise, it’s better to use an official AWS Lambda runtime than to roll your own. After all, why take ownership of something AWS is already doing for you, and for free!
But as plentiful as the official runtime list is, there are always edge cases where you’d want to roll your own custom runtime to support applications written in languages AWS doesn’t provide.
Maybe you absolutely have to use a Haskell component, or you need to migrate a C++ implementation to lambda. In these cases, a custom runtime lets you leverage the power of serverless functions even when your language isn’t officially supported.
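Under the hood, a custom runtime is just an executable named `bootstrap` that polls Lambda’s runtime API in a loop. A minimal sketch, modeled on the example in AWS’s custom runtime tutorial:

```bash
#!/bin/sh
# bootstrap: poll the Lambda runtime API for events and answer them.
set -euo pipefail

while true; do
  HEADERS="$(mktemp)"
  # Fetch the next invocation event (long-polls until one arrives).
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" \
    | tr -d '[:space:]' | cut -d: -f2)

  # "Handle" the event -- a real runtime would dispatch to your handler here.
  RESPONSE="Echoing event: $EVENT_DATA"

  # Post the result back for this request id.
  curl -sS -X POST -d "$RESPONSE" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response"
done
```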
Bash Custom Runtime
Which brings us to the topic of today’s post, the bash custom runtime.
For Klayers, I needed a way to update a GitHub repo with a new JSON file every week. This can be done in python, but no python package came close to the familiarity of `git pull`, `git add` and `git push`.
So rather than monkey around with a python wrapper for git, I decided to use git directly, from a shell script, running in a lambda, on the bash runtime.
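The happy path is just a handful of git commands. A minimal sketch, assuming a deploy key is already set up, and with made-up repo and file names:

```bash
# Weekly update flow: clone into /tmp (the only writable path),
# drop in the new JSON, commit and push.
cd /tmp
git clone git@github.com:example/klayers-data.git repo
cd repo

git config user.name "klayers-bot"        # commit identity (illustrative)
git config user.email "bot@example.com"

cp /tmp/new-packages.json data/packages.json
git add data/packages.json
git commit -m "Weekly package update"
git push origin master
```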
So I pulled in the runtime from a GitHub repo I found, and used it to write a lambda function. Simple right? Well, not entirely. Running regular shell scripts is easy, but there are some quirks you’ll have to learn when you run them in a lambda function…
Not so fast there cowboy…
Firstly, the familiar home directory at `~/` is off-limits in a lambda function, and I mean off-limits. There is absolutely no way (that I know of) to add files into this directory. That wouldn’t be a big issue, except `~/` is where git (or rather, the ssh it shells out to) looks for ssh keys and the `known_hosts` file.
Next, because lambda functions are ephemeral, you’ll need a way to inject your SSH key into the function, so that it can communicate with GitHub on your behalf.
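Both quirks can be worked around without touching `~/`. A sketch, assuming the deploy key lives in an SSM SecureString parameter named `/klayers/github_key` (the parameter name is made up):

```bash
# Fetch the deploy key from SSM Parameter Store into /tmp (the writable path).
aws ssm get-parameter \
  --name /klayers/github_key \
  --with-decryption \
  --query Parameter.Value \
  --output text > /tmp/id_rsa
chmod 600 /tmp/id_rsa

# Pre-populate a known_hosts file so ssh doesn't prompt interactively.
ssh-keyscan github.com > /tmp/known_hosts 2>/dev/null

# Tell git to use our key and known_hosts instead of looking in ~/.ssh
export GIT_SSH_COMMAND="ssh -i /tmp/id_rsa -o UserKnownHostsFile=/tmp/known_hosts"
git push origin master
```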
Finally, because you’ve chosen to use the bash runtime, you’re limited to the awscli utility, which, while fully functional, doesn’t offer the same conveniences that boto3 does for python. It’s a lot easier to loop and parse JSON in python than it is in bash. Fortunately, `jq` makes that less painful, and `jq` is included in the custom runtime :)
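For example, pulling just the function names out of the awscli’s JSON response is a one-liner with `jq`:

```bash
# List the names of every lambda function in the region.
aws lambda list-functions | jq -r '.Functions[].FunctionName'

# Or loop over them -- painful in raw bash, trivial with jq.
for fn in $(aws lambda list-functions | jq -r '.Functions[].FunctionName'); do
  echo "found function: ${fn}"
done
```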
Enough talking, let’s build this.
One of the great things about Lambda functions is that you can’t SSH into them.
This sounds like a drawback, but it’s actually a great security benefit: you can’t hack what you can’t access. Although it’s rare to see SSH used as an entry path by attackers these days, it’s not uncommon for organizations to lose SSH keys every once in a while. So removing SSH access limits the attack surface of the lambda, and the fact that the lambda doesn’t live on a 24/7 server reduces that surface even further.
Your support engineers might still want to log onto a **server**, but in today’s serverless paradigm this is unnecessary. After all, logs no longer live in `/var/log`; they’re in CloudWatch. And there’s no need to change passwords or purge files, because the lambdas recycle themselves after a while anyway. Leave those lambda functions alone will ya!
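And when you do need those logs, they’re one CLI call away (the function name is illustrative; `aws logs tail` requires awscli v2):

```bash
# Stream a function's CloudWatch log group, like tail -f on a server.
aws logs tail /aws/lambda/my-function --follow
```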
As a developer, you might want to see what is **in** the lambda function itself: what binaries are available (and their versions), or what libraries and environment variables are set. For this, it’s far more effective to just log onto a lambci docker container. Amazon works very closely with lambci to ensure their containers match what’s available in a Lambda environment. Just run any of the following:
docker run -ti lambci/lambda:build-python3.7 bash
docker run -ti lambci/lambda:build-python3.6 bash
Lambci provides a corresponding docker container for every AWS runtime; they even provide a `build` image for each runtime that comes prepackaged with tools like `zip`. This is the best way to explore a lambda function in interactive mode, and to build lambda layers on.
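Once inside, you can poke around exactly as you would on a server; these are ordinary shell commands, nothing Lambda-specific:

```bash
# Inside the lambci build container: check what the runtime ships with.
python --version          # the runtime's exact Python build
pip --version             # bundled pip, handy when building layers
ldd --version             # glibc version, matters for compiled wheels
env | sort                # environment variables the image sets
ls /var/runtime           # the bundled runtime code lives around here
```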
But sometimes you’ll find yourself wanting to explore the actual lambda function you ran, like checking whether the binary in the lambda layer was packaged correctly, or just seeing whether a file was correctly downloaded into `/tmp`. Local deploys have their limits, and that’s what this post is for.
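One low-tech way to do that is to make the function run the exploration for you. With the bash runtime the handler is already a shell script, so (as a sketch, and noting that the exact handler contract depends on which bash runtime layer you pulled in) it can simply report on its own environment:

```bash
# Hypothetical bash-runtime handler: report on the live environment.
handler () {
  EVENT_DATA=$1
  ls -la /opt  1>&2   # did the layer's binaries land where expected? (to logs)
  ls -la /tmp  1>&2   # did that file actually download? (to logs)
  echo "$EVENT_DATA"  # echo the event back as the response
}
```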