All posts filed under “Cloud Computing”

Cloud computing, whether it’s IaaS, PaaS, or even SaaS.


My experience with AWS Certified Security – Specialty

Last week I took the AWS Certified Security – Specialty exam — and I passed with a score of 930 (Woohoo!!)

In this post I cover why I took it, what I did to pass, my overall exam experience, and some tips I learnt along the way.

So let’s go.

Why?

Why would anybody pay good money and subject themselves to hours of studying, only to end up sitting in a cold exam room for hours answering multiple-choice questions?

And the reward for that work is an unsigned PDF file claiming you’re ‘certified’, and ‘privileged’ access to buy AWS-branded notebooks and water bottles!! Unless those water bottles come with a reserved instance for Microsoft SQL Server in Bahrain, I’m not interested.

But, jokes aside, I did this for fun and profit, and fortunately I really did enjoy preparing for this exam. It exposed me to AWS services that I barely knew — and forced me to level up my skills even on the ones I did know.

The exam has a massive focus on VPC, KMS, IAM, S3, EC2, CloudTrail and CloudWatch, while lightly touching GuardDuty, Macie, Config, Inspector, Lambda, CloudFront, WAF, Systems Manager and AWS Shield.

You need to catch your breath just reading through that list!

But for those diligently keeping count — you’d notice that the majority of those services are serverless — meaning the exam combined my two technological love-affairs … security and serverless!

I wasn’t lying when I said it was fun. So what about the profit?

I’m not sure how good this will be for my career (I literally got the cert last week), but at $300 it’s relatively cheap, with a tonne of practical value, so getting an ROI on it isn’t going to be hard.

For comparison, the CCSP certification costs nearly twice as much, is highly theoretical, and requires professional experience.

The results also help validate my past years of working on serverless projects, proving I wasn’t just some rando posting useless hobby projects on GitHub. Instead, I’m now a certified AWS professional posting useless hobby projects on GitHub (it’s all about how you market it!).

So now that we’ve covered the why, let’s move on to how.


Lambda functions in a VPC

In my honest (and truly humble) opinion, VPCs don’t make much sense in a serverless architecture — it’s not that they don’t add value, it’s that the value they add isn’t worth the complexity you incur.

After all, you can’t log into a Lambda function; no inbound connections are allowed. And it isn’t a persistent environment; some functions may time out after just 2-3 seconds. Sure, network-level security is still a worthy pursuit, but for serverless, tightly managing IAM roles and watching your software supply chain for vulnerabilities is better value for your money.

But if you’ve got a fleet of EC2 instances already deployed in a VPC, and your Lambda function needs to access them, then you have no choice but to deploy that function in a VPC as well. Or, if your org requires full network logging of all your workloads, you’ll also need VPCs (and their flow logs) to comply with such requests.

Don’t get me wrong, there is value in having your functions in a VPC, just probably not as much as you think.

That aside though, let’s dive into the wonderful world of Lambda functions and VPCs.

Working Example

First, imagine we deploy a simple VPC with 4 subnets.

  1. A Public Subnet with a NAT Gateway inside it.
  2. A Private Subnet which routes all traffic through that NAT Gateway.
  3. A Private Subnet without internet access (only local routing).
  4. A Private Subnet without internet access but with an SSM VPC endpoint (VPCe) inside it.

Let’s label these subnets (1), (2), (3) and (4) for simplicity.

Now we write a simple Lambda function and deploy a copy of it into each subnet. Each function has an attached security group that allows all outgoing connections, and similarly each subnet has very liberal NACLs that allow all incoming and outgoing connections.

Then we create a gateway S3 VPC-endpoint (VPCe), and route subnet (4) to it.

Finally, we enable private DNS on the entire VPC. Then, outside the VPC, we create an S3 bucket and a Systems Manager Parameter Store parameter (AWS really needs better names for these things).
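To test connectivity, each function simply tries to read from both services. Here’s a minimal sketch of such a handler (the bucket name and parameter name are hypothetical), with short client timeouts so a blocked call fails in seconds rather than hanging until the function itself times out:

```python
import boto3
from botocore.config import Config

# Short timeouts: in a subnet with no route to the service, the call
# should fail fast instead of hanging until the Lambda timeout.
FAST = Config(connect_timeout=3, read_timeout=3, retries={"max_attempts": 1})

s3 = boto3.client("s3", config=FAST)
ssm = boto3.client("ssm", config=FAST)


def handler(event, context):
    results = {}
    try:
        # Hypothetical bucket; reachable via the internet or the S3 gateway VPCe
        s3.head_bucket(Bucket="my-vpc-test-bucket")
        results["s3"] = "reachable"
    except Exception as exc:
        results["s3"] = f"unreachable ({type(exc).__name__})"
    try:
        # Hypothetical parameter; reachable via the internet or the SSM interface VPCe
        ssm.get_parameter(Name="/vpc-test/dummy")
        results["ssm"] = "reachable"
    except Exception as exc:
        results["ssm"] = f"unreachable ({type(exc).__name__})"
    return results
```

Running this in each of the four subnets shows exactly which routes (NAT Gateway, gateway endpoint, interface endpoint, or none) each service call actually takes.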

The final network looks like this:


Using Terraform and Serverless Framework


The Serverless Framework (SF) is a fantastic tool for testing and deploying Lambda functions, but its reliance on CloudFormation makes it clumsy for infrastructure like DynamoDB tables, S3 buckets or SQS queues.

For example, if your serverless.yml file had 5 Lambdas, you could sls deploy all day long. But add just one S3 bucket, and you’d first have to sls remove before you could deploy again. This change in the framework’s behavior once you introduce ‘infra’ is clumsy; sometimes I want to deploy new functions without having to remove existing resources.

Terraform, though, keeps the state of your infrastructure and applies only the changes. It also has powerful commands like taint, which forces a single piece of infrastructure to be re-created on the next apply. For instance, to wipe a DynamoDB table clean, you could run terraform taint aws_dynamodb_table.my_table (a hypothetical resource address) followed by terraform apply.

In this post, I’ll show how I got Terraform and the Serverless Framework to work together in deploying an application, using each tool’s strengths to complement the other.

*From here on, I’ll refer to the Serverless Framework tool as SF, to avoid confusing it with the actual term serverless.

Terraform and Serverless sitting in a tree

First some principles:

  • Use SF for Lambda & API Gateway.
  • Use Terraform for everything else.
  • Use a tfvars file for Terraform variables.
  • Use JSON for the tfvars file.
  • Terraform deploys first, followed by SF.
  • Terraform will not depend on any output from SF.
  • SF may depend on outputs from Terraform.
  • Use SSM Parameter Store to capture Terraform outputs.
  • Import inputs into Serverless from SSM Parameter Store (see the sketch after this list).
  • Use workspaces in Terraform to manage different environments.
  • Use stages in Serverless to manage different environments.
  • stage.name == workspace.name
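To make that handoff concrete, here’s a minimal sketch of the consuming side (the /myapp/… parameter paths and output names are hypothetical). Terraform publishes each output to Parameter Store via aws_ssm_parameter resources, serverless.yml reads them back with the framework’s ${ssm:…} variable syntax, and a small pre-deploy script can verify everything exists for the current stage before sls deploy runs:

```python
import sys

import boto3

ssm = boto3.client("ssm")

# Hypothetical outputs Terraform is expected to publish for each workspace,
# e.g. one aws_ssm_parameter resource per output.
REQUIRED = ["table_name", "bucket_name", "queue_url"]


def check_outputs(stage):
    """Abort the deployment if any Terraform output is missing for this stage."""
    for name in REQUIRED:
        path = f"/myapp/{stage}/{name}"  # remember: stage.name == workspace.name
        try:
            value = ssm.get_parameter(Name=path)["Parameter"]["Value"]
            print(f"{path} = {value}")
        except ssm.exceptions.ParameterNotFound:
            sys.exit(f"Missing {path}. Did 'terraform apply' run for workspace '{stage}'?")


if __name__ == "__main__":
    check_outputs(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

Because Terraform never depends on SF’s output, this check only has to run in one direction, right between terraform apply and sls deploy.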

In the end the deployment will look like this:


Malaysia’s data center aspirations

A Bernama report a couple of days ago mentioned that Malaysia was ‘well-positioned’ to be a world-class preferred hub for data centers: KUALA LUMPUR, April 18 (Bernama) — Malaysia is well-positioned to be a world-class preferred hub as a…


Undersea Cables in Malaysia: The Need for Infrastructure

Undersea Cable Map of Malaysia

A good friend and regular reader (or so I hope) of this blog sent me this link last week. It’s a really nifty chart of all the undersea cables in the world. Now, for those who don’t know what undersea cables are, they’re basically the huge data cables that carry the data we use for the internet. While modern satellites orbit overhead, the unfortunate truth is that satellites can’t carry even a fraction of the bandwidth that undersea cables do, and chances are that if you’re reading this now, at least some of this data has travelled through an undersea cable before ending up on your screen.

If you look at the map from an abstract level, however, you begin to notice that these cables tend to ‘cluster’ around certain areas. We can see clear clusters in America, specifically in states like California, Florida, New Jersey and Oregon. Other clusters appear in Brazil, particularly São Paulo, and then there are huge clusters in the UK (zoom in and you’ll see there’s huge connectivity to Ireland), Portugal, and a large number of cables passing through the Suez Canal. In Asia, we see huge concentrations of these things in Japan, Korea, Shanghai and Taiwan, and finally, much closer to home, a huge cluster next door in Singapore and a tiny bit of clustering in Sydney, Australia.


Cracking Passwords with the Cloud

I remember my computer security professor telling me that encryption doesn’t make it impossible to decrypt, but rather infeasible to decrypt. Nobody is going to buy a supercomputer to crack your final-year thesis, simply because the data isn’t worth nearly as much as the cost to crack it, thereby making it infeasible.

With cloud computing, however, end users and regular Joes like us have access to very, very powerful machines for a fraction of their actual cost (since we’re only renting them). Couple that with the high scalability of the cloud, and what was previously infeasible becomes a very viable option. In fact, what used to be available only to big corporations and governments is now available to anyone with a credit card and an Amazon account.

I’m not talking about complex mathematical approaches to breaking encryption either; I’m talking about the standard brute-force method. Brute force basically involves trying every single possible password until you eventually find the one that works. In the past, brute force wasn’t considered a valid option, since trying all those passwords, which number in the hundreds of billions, would require a very powerful computer, and most people, not even criminals, had access to that sort of computing power. With the advent of cloud computing, however, powerful hardware has suddenly become available to the general public at low prices. What used to cost tens of thousands of dollars per server now costs just $2.60 an hour to ‘rent’.

What if we could use the power of the cloud to crack the average level of encryption we have on our zip or Excel files? Well, it turns out we can, and the results are ridiculous!
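To make ‘brute force’ concrete, here’s a minimal sketch in Python (the archive name and the tiny keyspace are purely illustrative): it walks every lowercase-alphanumeric password up to four characters and attempts an extraction with each one. Scale the keyspace up to eight characters and you’re looking at roughly 36^8 ≈ 2.8 trillion candidates, exactly the kind of embarrassingly parallel work that cheap rented cloud machines make affordable.

```python
import itertools
import string
import zipfile
import zlib


def crack_zip(path, max_len=4):
    """Try every short lowercase+digit password until one extracts cleanly."""
    archive = zipfile.ZipFile(path)
    charset = string.ascii_lowercase + string.digits  # 36 characters
    for length in range(1, max_len + 1):
        for candidate in itertools.product(charset, repeat=length):
            password = "".join(candidate).encode()
            try:
                archive.extractall(pwd=password)
                return password.decode()  # extraction succeeded
            except (RuntimeError, zipfile.BadZipFile, zlib.error):
                continue  # wrong password (or a false positive), try the next one


if __name__ == "__main__":
    print(crack_zip("secret.zip") or "not found in keyspace")
```

Serious tools use GPU crackers rather than Python loops, but the principle is the same: the search is trivially split across as many rented machines as your credit card allows.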


MSC Cloud Initiative: Why it’s a bridge too far

Why did Amazon, arguably the biggest cloud player in the world, choose to launch its Asia-Pacific offering in Singapore rather than Malaysia? One would think that the prohibitively high price of land in Singapore, coupled with its higher base costs and employee wages, would make Singapore a terrible place to put up a huge data center comprising thousands of servers and HVAC units.

Just to compare Malaysia and Singapore: you can build data centers in Malaysia for a fraction of the cost, coupled with cheaper labor and support costs. Our subsidized power also means that Amazon could benefit from lower electricity bills. Best of all, Malaysia and Singapore aren’t really that far apart, so why set up shop in Singapore for something that relies on high volume and low cost? The answer is quite simple: Singapore is where the Internet is, or rather, that’s where the data flows through. The internet is the information superhighway, and just like any other highway, the three most important criteria for setting up business on it are location, location, location.


When Lightning strikes the Cloud: Amazon Outage

Google recently announced their Amazon EC2 killer, the Google Compute Engine or GCE. Google wasn’t messing around, going straight for the Amazon jugular by releasing four instance types, all of which appear cheaper than their Amazon counterparts. That being said, the price comparison was done solely against on-demand Amazon instance types, Amazon’s most expensive pricing; compare against Reserved Instances and the prices become more competitive.

It’s exciting to finally see a juggernaut big enough to take on Amazon in terms of price and scale. This is all-around good news for everyone, especially since this report from Cisco estimates that revenues for IaaS providers are not only high right now, but will continue to grow over the next 5 years. There’s a lot of room in the IaaS space, and Google just wants to wet its beak here as well.

So it must have come as a pleasant surprise to Google when ‘hurricane-like’ thunderstorms ripped across the US east coast, taking down power for 3.5 million people, and the Amazon US East data center along with it. I was personally affected by this phenomenon when my access to Netflix was abruptly halted; as you can imagine, I wasn’t a happy camper.