The system, which was introduced yesterday on the first day of the 2020 school session, takes only two seconds to scan a pupil’s face before their personal information, such as full name, pupil number and class, is stored in the…
This year I resolve to support the media that I like, i.e. start paying for content I’ve been consuming for free all this time. I believe that if we want better media, we need to start paying for it, and…
Last week, I launched a new pipeline for Klayers to build Python3.8 Lambda layers in addition to Python3.7. I needed a separate pipeline for this because not only is it a new runtime, but under the hood this Lambda uses a new operating system (Amazon Linux 2 vs. Amazon Linux 1).
So I took the opportunity to make things right from an account hierarchy perspective. Klayers for Python3.7 lived in its own separate account from all my other hobby projects on AWS — but I kept all stages in it (default, dev and production). [note: default is an odd name, but it ties to the Terraform nomenclature]. This afforded some flexibility, but the account felt bloated from the weight of the different deployments — even though they existed in different regions.
It made no sense to have default and dev in the same account as production — especially since accounts were free. Having entirely separate accounts for prod and non-prod incurred no cost, and came with the benefit of additional free tiers and tidier accounts with fewer resources in them — but the benefits don’t stop there.
Android TV boxes are computers that stream content from the internet onto your TV. The difference between them and your smartphone is that they have an HDMI connector for your TV, and they usually come pre-loaded with software to illegally stream content.
While the boxes themselves are general-purpose computers running Android (the most popular OS today), the real focus of any regulation should be on the software on the device and the internet-based streaming services that support them.
Which seems to be the case…
Today, The Star reports that the MCMC will begin blocking these unauthorized streaming services, rendering the boxes that connect to them useless.
But if the MCMC uses its usual method of DNS filtering to implement the block, it’ll be trivial for most folks to circumvent — the boxes run Android, after all. The government will very quickly find itself in a cat-and-mouse game trying to block them.
I started the year building out govScan.info, a site that audits .gov.my websites for TLS implementation. Overall I curated a list of ~5000 Malaysian government domains through various OSINT and enumeration techniques and now use that list to scan them…
Sayakenahack was undoubtedly the highlight of my 2017. If you’ve come from sayakenahack.com, I’m sorry but I’ve shut down the site :(. I learnt so much from it, and it was even my ticket for presenting at Hack In the Box…
Over the past few weeks, I’ve been toying with lambda functions and thinking about using them for more than just APIs. I think people miss the most interesting aspect of serverless functions — namely their massively parallel capability, which can do a lot more than just run APIs or respond to events.
There’s 2-ways AWS let’s you run lambdas, either via triggering them from some event (e.g. a new file in the S3 bucket) or invoking them directly from code. Invoking is a game-changer, because you can write code, that basically offloads processing to a lambda function directly from the code. Lambda is a giant machine, with huge potential.
What could you do with a 1,000-core, 3TB machine, connected to an unlimited amount of bandwidth and a large number of IP addresses?
Here’s my answer. It’s called potassium-40, I’ll explain the name later
So what is potassium-40?
Potassium-40 is an application-level scanner that’s built for speed. It uses parallel Lambda functions to do HTTP scans on a specific domain.
Currently it does just one thing: grab the robots.txt from all domains in the Cisco Umbrella 1 Million, and store the data in a text file for download. (I only grab legitimate robots.txt files, and won’t store 404 HTML pages, etc.)
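That filtering step can be sketched like this — my own illustrative heuristic, not necessarily the project’s exact check: fetch the file, then discard anything that doesn’t look like a real robots.txt (e.g. a soft-404 HTML page served with a 200):

```python
import urllib.request


def looks_like_robots(body):
    # Crude legitimacy check: real robots.txt files contain these directives,
    # while soft-404 HTML pages usually don't
    lowered = body.lower()
    return any(d in lowered for d in ("user-agent:", "disallow:", "allow:", "sitemap:"))


def fetch_robots(domain):
    # Fetch robots.txt and keep only plausible robots files; anything else
    # (errors, 404s, HTML error pages) is dropped
    url = f"http://{domain}/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status != 200:
                return None
            body = resp.read().decode("utf-8", errors="replace")
    except Exception:
        return None
    return body if looks_like_robots(body) else None
```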
This isn’t a port-scanner like nmap or masscan, it’s not just scanning the status of a port, it’s actually creating a TCP connection to the domain, and performing all the required handshakes in order to get the
Scanning for the existence of a port requires just one SYN packet to be sent from your machine, and even a typical banner grab takes only 3-5 round trips; but an HTTP connection is far more expensive in terms of resources and requires state to be stored — and it’s more expensive still when TLS and redirects are involved!
Which is where lambda’s come in. They’re effectively parallel computers that can execute code for you — plus AWS give you a large amount of free resources per month! So not only run 1000 parallel processes, but do so for free!
A scan of 1,000,000 websites will typically take less than 5 minutes.
But how do we scan 1 million URLs in under 5 minutes? Well, here’s how.
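The core idea can be sketched as a fan-out: batch the URL list and invoke one Lambda per batch in parallel. The function name and payload shape below are hypothetical, and running it needs boto3 plus AWS credentials; the batching logic, though, is plain Python:

```python
import json
from concurrent.futures import ThreadPoolExecutor


def chunk(urls, size):
    # One batch per Lambda invocation
    return [urls[i:i + size] for i in range(0, len(urls), size)]


def fan_out(urls, batch_size=1000, max_workers=100):
    import boto3  # lazy import: the chunking logic is testable without AWS

    client = boto3.client("lambda")

    def invoke(batch):
        resp = client.invoke(
            FunctionName="potassium40-scan",  # hypothetical function name
            InvocationType="RequestResponse",
            Payload=json.dumps({"urls": batch}).encode(),
        )
        return json.loads(resp["Payload"].read())

    # Each worker thread just blocks on one synchronous invoke;
    # the actual scanning happens inside the Lambdas, in parallel
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(invoke, chunk(urls, batch_size)))
```

With 1,000 URLs per batch, a million-URL scan is only 1,000 invocations — the threads are cheap because they spend all their time waiting on Lambda, not working.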
Just because you have a webhook doesn’t mean you need a webserver.
With serverless AWS Lambdas you’ve got a free (as in beer) and always-on ability to receive webhook callbacks without the need for pesky servers. In this post, I’ll set up a serverless solution to accept incoming POST requests from a GitHub webhook.
DNS Queries on GovScan.Info
This post is a very quick brain-dump of stuff I did over the weekend, in the hopes that I don’t forget it :). Will post more in-depth material if time permits.
govScan.info, a site I created as a hobby side project to track TLS implementation across .gov.my websites — now tracks DNS records as well. For now, I’m only tracking MX, NS, SOA and TXT records (mostly to check for DMARC), but I may add more record types to query.
DNS records are queried daily at 9.05pm Malaysia time (it might be a minute or two later, depending on the domain name) and will be stored indefinitely. Historical records can be queried via the API, and the documentation has been updated.
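The DMARC part of that check boils down to looking for a TXT record on `_dmarc.<domain>` that begins with the `v=DMARC1` tag. A minimal sketch, assuming the TXT records have already been fetched (the function name is my own, not the site’s code):

```python
def find_dmarc(txt_records):
    # DMARC policies live in a TXT record on _dmarc.<domain>, and the record
    # must start with the version tag "v=DMARC1" to be valid
    for record in txt_records:
        if record.strip().strip('"').startswith("v=DMARC1"):
            return record
    return None
```

Anything else on that name — SPF records, verification tokens — is ignored, and `None` means the domain has no DMARC policy at all.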
The security community has been abuzz with an absolute shocker of a story from Bloomberg. The piece reports that the Chinese government subverted the hardware supply chain of companies like Apple and Amazon, installing a ‘tiny chip’ on motherboards manufactured by a company called Supermicro. What the chip did — or how it did ‘it’ — was left mostly to the reader’s imagination.
Supermicro’s stock price is down a whopping 50%, which goes to show just how credible Bloomberg is as a news organization. But besides the Bloomberg story and its sources (all of which are unnamed), no one else has come forward with any evidence to corroborate the piece. Instead, both Apple and Amazon have vehemently denied nearly every aspect of the story — leaving us all bewildered.
But Bloomberg are sticking to their guns, and they do have credibility — so let’s wait and see. For now, let’s put this in the bucket called definitely could happen, but probably didn’t.
I can only imagine how hard it must be to secure a modern hardware supply chain, but the reason for this post is to share my experience with some supply-chain conundrums that befell a recent project of mine.
I operate (for fun) a website called GovScan.info, a Python-based application that scans various gov.my websites for TLS implementation (or lack thereof). Every aspect of the architecture is written in Python 3.6, including a scanning script and multiple Lambda functions exposed via an API, with the entirety of the code available on GitHub.
And thank God for GitHub, because in early August I got a notification from GitHub alerting me to a vulnerability in my code. But it wasn’t a vulnerability in anything I wrote — instead it was in a third-party package my code depended on.