
The Malaysian Ministry of Education Data Breach

Ok, I’ve been pretty involved in the latest data breach, so here’s my side of the story.

At around 11pm last Friday, I got a query from Zurairi at The Malay Mail, asking for a second opinion on a strange email the newsdesk received from an ‘anonymous source’. The email was a regular vulnerability disclosure, but one that was full of details and came attached with an enormous amount of data.

This wasn’t a two-liner tweet; this was a detailed email with outlined sub-sections. It covered why they were sending the email, what the vulnerable system was, how to exploit the vulnerability, and finally (and most importantly!) a link to a Google Drive folder containing gigabytes of data.

The email pointed to a Ministry of Education site called SAPSNKRA, used by parents to check on their children’s exam results. A quick Google search reveals the site had security issues in the past, including one blog advising parents to proceed past the invalid certificate warning in Firefox. But let’s get back to the breach.

My first reaction was to test the vulnerability, and sure enough, the site was vulnerable to SQL injection, in exactly the manner specified by the email. So far the email looked legitimate.

Next, I verified the data in the Google Drive folder by downloading the gigabytes of text files and checking the IC numbers of children I knew.

I further cross-checked a few parents’ IC numbers against the electoral roll. Most children have some indicator of their father’s name embedded in their own, either through a surname or the full name of the father after the bin, binti, a/l or a/p. By keying in the father’s IC number, and cross-referencing the father’s name against what was in the breach, it was easy to see that the data was the real deal.

So I called back Zurairi and confirmed to him that the data was real, and that the site should be taken offline. I also contacted a buddy of mine over at MKN, to see if he could help, and Zurairi had independently raised a ticket with MyCert (a ticket??!!) and tried to contact the Education Minister via his aide.

Obviously neither Zurairi, nor I, nor any of the other journalists I kept in touch with, could report on the story. The site was still vulnerable, and we didn’t want someone else breaching it.

The next morning, I emailed the anonymous source and asked them to take down the Google Drive, explaining that the breach was confirmed, and people were working to take down the site. Hence there was no reason to continue exposing all of that personal information on the internet.

They agreed, and wiped the drive clean, and shortly after I got confirmation that the SAPSNKRA website had been taken down. So with the site down, and the Google Drive wiped clean, it seemed the worst was behind us.

Danger averted…at least for now.

But since data breaches last forever, and this was a breach, we should talk about what data was in the system. Zurairi did a good job here, but here’s my more detailed take on the issue.


3 times GovTLS helped fix government websites

A couple of months back I started GovTLSAudit, a simple service that scans .gov.my domains and reports on their implementation of TLS. But the service seems to have benefits above and beyond that, specifically around having a list of government sites that we can cross-check against other intel sources like Shodan (which we already do daily) and VirusTotal.

So here’s 3 times GovTLSAudit helped secure government websites.

That time Yayasan Islam Terengganu was used as a phishing website

I used VirusTotal’s search engine to see if they had extra .gov.my domains to scan, and found a few rather suspicious-looking URLs, including:

paypal-security-wmid0f4-110ll-pp16.yit.gov.my
appleid.corn-security2016wmid7780f4-110ll-16.yit.gov.my
paypal-security-wmid7110f4-110ll-pp16.yit.gov.my

This was an obvious phishing campaign being run out of a .gov.my domain. Digging further, I found that the IP address the malicious URLs resolved to was local and belonged to Exabytes. And while the root page was a bare Apache directory listing, buried deep within the site’s sub-directories was a redirect that pointed to a Russian IP.

I took to Twitter to report my findings (I kinda like Twitter for this), and the very next day Exabytes came back with a follow-up saying they were fixing it. That’s good, because having a phishing campaign run on .gov.my infrastructure isn’t exactly what you’d like.

There’s a lot more detail in the tweet thread about how I investigated this; click here to follow the thread. A warning though: I regularly delete my old tweets, so get it while it’s there :).


The Astro Data Breach

I previously wrote about how data breaches are like diamonds: they’re not as rare as you think, they’re worth far more to you than to a thief, and they last forever. And the recent debacle over the Astro data breach epitomizes…


First I deleted my most popular tweet — then I deleted 2000 more.

Two weeks ago, I rage-tweeted something regarding Malaysian politics that went a lot more viral than I liked (I’ve censored out the profanity for various reasons, most notably that there are teenagers who read this blog). It was a pointless collection of 200 characters that somehow resonated with people enough to be shared across social media. Obviously, since it was me, the tweet was filled with a small collection of profanities and laced with just the right amount of emotive content 🙂

But then things started getting bad.

Soon after I tweeted, I received messages from folks I hadn’t met in decades, showing me screenshots of their WhatsApp groups that had my tweet. My wife’s Chinese-speaking colleagues were showing it to her at work. I checked, and nearly 2,000 people had retweeted it, which isn’t typical for me, and frankly speaking, pretty scary.

As much as I’d like to have my content shared, the tweet in question is nothing but a couple of crude words pieced together in a ‘rage-tweet’. And I understand that it emotionally resonates with folks who are angry, but if this is the level of discourse we’re having on Malaysian social media, we should be alarmed. Completely pointless rants going viral is not how we ubah; it is the absolute opposite of how we ubah!

Research on the virality of articles from the New York Times showed that ‘angry’ content was more viral than any other, beating out awe, surprise and even practical value. The angrier the content, the more likely it would be shared. A rage-tweet is more likely to go viral than something like fuel-saving tips, even though the latter clearly is more valuable to readers.

At this point, I’d rant about how the media has a responsibility to look beyond clicks and ads, and to think about the impact of their content on society, but since I owned the tweet, I simply deleted it. Of course, I can’t stop the screenshots being shared across WhatsApp, but we do what we can.

Deleting your tweets

That got me thinking. Twitter is a cesspool of angry farts screaming at each other, and yet it has some value.

But while what I tweet today may be relevant and acceptable today, it may not be 2-3 years from now. Kinda like how Apu from The Simpsons was acceptable and non-offensive in the ’90s.

I’m ashamed to say it, but I once thought that Michael Learns to Rock was a great rock band. In context, that’s acceptable for a 12-year-old two decades ago, before even Napster or Limewire. Of course, as an adult in 2018, I’m thoroughly aware that AC/DC are the greatest rock band ever, and Michael Learns to Rock, well, they’re not exactly Denmark’s best export.

And that’s the problem: Twitter removes context. It’s very easy to take a 140-character tweet from 5 years ago out of context. Nobody cares about context on a platform that limits users to 140 characters (or 280 characters since the end of 2017). Maybe you quoted an article from TheMalaysianInsider, which, guess what, no longer exists. Context is rather fluid on Twitter, and it changes rapidly over weeks, let alone over the years since your first tweet.

For example, this tweet from Bersatu’s Wan Saiful:

Gee, I wonder who he was talking about. A simple internet search will give you the answer, but that’s not the point.

Wan Saiful changed his opinion, and he’s explained why; people should be allowed to change their minds. Freedom to change your opinion is not just perfectly fine, it’s a prerequisite for progress. If we allow our tweet history to be a ball-and-chain that ties us to our old ideology, how could we ever progress? Everybody changes their mind, and that’s OK.

The point is, Twitter should not be a historical archive; it should be current. A great place to have an informed discussion of current affairs, but not a place to keep old, outdated, and out-of-context material floating around.

Hence, I decided to delete all my tweets that were older than 90 days, and here’s how.
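The gist of it is a short script against the Twitter API. Here’s a minimal sketch of the idea using the tweepy library; the credential strings are placeholders, and this isn’t the exact script I ran:

```python
# Minimal sketch: delete every tweet older than 90 days with tweepy.
# The four credential strings are placeholders for your own API keys.
from datetime import datetime, timedelta

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

cutoff = datetime.utcnow() - timedelta(days=90)

# Walk the timeline and delete anything older than the cutoff.
for tweet in tweepy.Cursor(api.user_timeline).items():
    if tweet.created_at < cutoff:
        print(f"Deleting {tweet.id} from {tweet.created_at}")
        api.destroy_status(tweet.id)
```

Note that the standard API only goes back roughly 3,200 tweets, so anything older than that may need to be pulled from your downloaded Twitter archive instead.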


Gov TLS Audit: Architecture

Last month, I embarked on a new project called GovTLS Audit, a simple(ish) program that would scan 1,000+ government websites to check their TLS implementation. The code would go through a list of hostnames and scan each host for TLS implementation details like redirection properties, certificate details, and HTTP headers, even stitching together Shodan results into a single comprehensive data record. That record would be inserted into DynamoDB and exposed via a REST endpoint.
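To give a feel for what a single-host scan involves, here’s a minimal sketch in Python using the standard ssl module plus the requests library. It’s purely illustrative, not the actual GovTLS Audit code, and the hostname is a placeholder:

```python
# Illustrative single-host check: TLS certificate details plus
# redirect/header behaviour. A real scanner would also handle hosts
# with broken or invalid certificates instead of just failing.
import json
import socket
import ssl

import requests


def scan_host(hostname: str) -> dict:
    record = {"host": hostname}

    # Certificate details via a plain TLS handshake on port 443.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            record["tls_version"] = tls.version()
            record["cert_not_after"] = cert.get("notAfter")
            record["cert_issuer"] = dict(pair[0] for pair in cert.get("issuer", ()))

    # Does plain HTTP redirect to HTTPS, and what headers come back?
    resp = requests.get("http://" + hostname, timeout=10, allow_redirects=True)
    record["redirects_to_https"] = resp.url.startswith("https://")
    record["headers"] = dict(resp.headers)
    return record


if __name__ == "__main__":
    # Placeholder hostname for illustration only.
    print(json.dumps(scan_host("www.example.gov.my"), indent=2, default=str))
```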

Initially I ran the scans manually on Sunday nights, then uploaded the output files to S3 buckets and ran the scripts to insert them into the DB.

But 2 weeks ago, I decided to automate the process, and now the architecture of this simple project is complete(ish!). Nothing is ever complete, but this is a good checkpoint for me to begin documenting the architecture of GovTLS Audit (sometimes called siteaudit), and to share it.

What is GovTLS Audit

First let’s talk about what GovTLS Audit is: it’s a Python script that scans a list of sites on the internet and stores the results in 3 different files: a CSV file (for human consumption), a JSONL file (for insertion into DynamoDB), and a JSON file (for other programmatic access).
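To make the three output formats concrete, here’s a small sketch of how a couple of scan records might be written out. The field names are invented for illustration; the real records carry far more detail (certificate data, headers, Shodan results and so on):

```python
# Sketch of the three output formats. The field names are invented
# for illustration; real scan records carry far more detail.
import csv
import json

results = [
    {"site": "www.example.gov.my", "tls": True, "cert_expiry": "2018-12-01"},
    {"site": "www.another.gov.my", "tls": False, "cert_expiry": None},
]

# CSV, for humans to eyeball.
with open("scan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["site", "tls", "cert_expiry"])
    writer.writeheader()
    writer.writerows(results)

# JSONL, one record per line, ready for loading into DynamoDB.
with open("scan.jsonl", "w") as f:
    for row in results:
        f.write(json.dumps(row) + "\n")

# Plain JSON, for other programmatic access.
with open("scan.json", "w") as f:
    json.dump(results, f, indent=2)
```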

A different script then reads in the JSONL file and loads each row into the database (DynamoDB), and then uploads the 3 files as a single zip to an S3 bucket.
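A rough sketch of what that loader step could look like with boto3; the table and bucket names here are placeholders, not the actual ones:

```python
# Sketch of the loader: read the JSONL, batch-write each row into
# DynamoDB, then zip the three files and push them to S3.
# Table and bucket names are placeholders.
import json
import zipfile

import boto3

table = boto3.resource("dynamodb").Table("siteaudit-scans")

with open("scan.jsonl") as f, table.batch_writer() as batch:
    for line in f:
        batch.put_item(Item=json.loads(line))

# Bundle the three output files into a single zip and upload it.
with zipfile.ZipFile("scan.zip", "w", zipfile.ZIP_DEFLATED) as z:
    for name in ("scan.csv", "scan.jsonl", "scan.json"):
        z.write(name)

boto3.client("s3").upload_file("scan.zip", "siteaudit-scan-archive", "scan.zip")
```

The batch_writer context manager takes care of buffering and retrying unprocessed items, which keeps the loader itself short.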

On the ‘server side’ there are 3 Lambda functions, all connected to an API Gateway endpoint resource (a rough sketch of one handler follows the list below):

  • One that queries the latest details for a site [/siteDetails]
  • One that queries the historical summaries for a site [/siteHistory]
  • One that lists all scans (zip files) in the S3 bucket [/listScans]
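Here’s a rough sketch of what one of those handlers might look like, using the /siteDetails endpoint as the example. The table name, key schema and query-string parameter are assumptions for illustration, not the actual implementation:

```python
# Rough sketch of the /siteDetails handler: return the latest record
# for a site passed as a query-string parameter. The table name and
# key schema ("site" hash key, timestamp sort key) are assumptions.
import json

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("siteaudit-scans")


def lambda_handler(event, context):
    # With API Gateway proxy integration, query strings arrive in the event.
    params = event.get("queryStringParameters") or {}
    site = params.get("site")
    if not site:
        return {"statusCode": 400, "body": json.dumps({"error": "missing site"})}

    resp = table.query(
        KeyConditionExpression=Key("site").eq(site),
        ScanIndexForward=False,  # newest first, assuming a timestamp sort key
        Limit=1,
    )
    return {"statusCode": 200, "body": json.dumps(resp.get("Items", []), default=str)}
```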

Finally, there’s a separate S3 bucket to serve the ‘website’, but that’s just a simple HTML file with some JavaScript to list all the scan files available for download. In the end, it looks something like this:


Read this before GE14

Let’s start this post the same way I start my day — by looking at Facebook.

Facebook made $40 billion in revenue in 2017, solely from advertising to pure schmucks like you. The mantra among the more technically literate is that Facebook doesn’t have users; it has products that it sells to advertisers. It just so happens that all its products are homo-sapien, smartphone-toting urbanites (just like you!)

The platform’s meteoric rise from nobody to top dog is a dream story in Silicon Valley, but underneath the veneer of wholesome innovation lies a darker secret, one that could be responsible for the polarization of entire communities, including our own. And it’s all because of their most valuable employee.

No, not Mark Zuckerberg, but the real genius behind the blue and white site. The one responsible for the billions in ad revenue Facebook generates yearly, and unsurprisingly, she’s female.

Anna Lytica and Machine Learning

There are probably thousands of posts your Facebook friends make every day, but she decides which 3 to fit onto your smartphone screen first, and the next 3, and so forth. From the millions of videos shared every hour, she painstakingly picks the few you’d see in your timeline. She decides which ads to show you, and which advertisers to sell you to. Underneath the hood of the giant ad behemoth, she lies working all day, every day.

She isn’t a person, ‘she’ is an algorithm, a complex program that does billions of calculations a second, and for this post we’ll give her the name… Anna Lytica.

Facebook doesn’t talk about her much (she is, after all, a trade secret of sorts), but what she does and how she does it might be as much a mystery to us as it is to Mr. Zuckerberg. Machine learning algorithms are complex things: we know how to build them and train them, but how they actually work is sometimes beyond our understanding.

Google can train AlphaGo to play a game, but how it makes decisions is unknown to Google and even to itself; it just IS a Go player. And it is really sad when we watch these AI algorithms make amazing discoveries but are unable to explain their rationale to us mere humans. It’s the reason why Watson, IBM’s big AI algorithm, hasn’t taken off in healthcare: there’s no point recommending a treatment for cancer if the algorithm can’t explain why it chose the treatment in the first place.

This is hard to grasp, but AI isn’t just a ‘very powerful’ program; AI is something else entirely. We don’t even use traditional words like write or build to refer to the process of creating them (like we do for regular programs); instead we use the word train.

We train an algorithm to play Go, to drive, or to treat cancer. We do this the same way we breed dogs: we pick specimens with the traits we want, and breed them till we end up with something that matches our desires. How a dog works, and what a dog thinks, is irrelevant. If we want them big, we simply breed the biggest specimens; the process is focused entirely on outcome.

Similarly, how the algorithm behaves is driven by what it was trained to do. How it works is irrelevant; all that matters is the outcome. Can it play Go, can it drive, can it answer Jeopardy? If you want to understand an algorithm, you need to know what it was trained to do.

Anna Lytica was trained to keep you browsing Facebook; after all, the company’s other endeavors like internet.org and Instant Articles were built with the same intention. And while good ol’ Mark stated that he’s tweaking Anna to reduce the time people spend on Facebook, this is something new, an exception to the years Facebook tweaked her to keep you on their site.

After all, the average monthly user spends 27 minutes per day in the app, and if you go by daily users, they spend about 41 minutes per day on Facebook. If that’s the end result of tweaking Anna to ensure we spend less time on Facebook, God help us all!

And while it’s difficult to understand how Anna works, it’s very easy to guess how she’ll behave. If the end result of Anna’s training is to keep you browsing Facebook, then human psychology reveals a simple trait all humans share: confirmation bias.