Read this before GE14


Let’s start this post the same way I start my day — by looking at Facebook.

Facebook made $40 billion in revenue in 2017, solely from advertising to schmucks like you. The mantra among the more technically literate is that Facebook doesn't have users, it has products that it sells to advertisers; it just so happens that all its products are homo sapiens: smartphone-toting urbanites (just like you!)

The platform's meteoric rise from nobody to top dog is a dream story in Silicon Valley, but underneath the veneer of wholesome innovation lies a darker secret, one that could be responsible for the polarization of entire communities, including our own. And it's all because of their most valuable employee.

No, not Mark Zuckerberg, but the real genius behind the blue and white site. The one responsible for the billions in ad revenue Facebook generates yearly, and unsurprisingly, she's female.

Anna Lytica and Machine Learning

There are probably thousands of posts your Facebook friends make every day, but she decides which 3 fit onto your smartphone screen first, and the next 3, and so forth. From the millions of videos shared every hour, she painstakingly picks the few you'll see in your timeline. She decides which ads to show you, and which advertisers to sell you to. Underneath the hood of the giant ad behemoth, she lies working all day, every day.

She isn't a person; 'she' is an algorithm, a complex program that does billions of calculations a second, and for this post we'll give her the name… Anna Lytica.

Facebook doesn't talk about her much (she is, after all, a trade secret of sorts), but what she does and how she does it might be as much a mystery to us as it is to Mr. Zuckerberg. Machine learning algorithms are complex things; we know how to build them and train them, but how they actually work is sometimes beyond our understanding.

Google can train AlphaGo to play a game, but how it makes decisions is unknown to Google and even to itself; it just IS a Go player. And it is really sad when we watch these AI algorithms make amazing discoveries but are unable to explain their rationale to us mere humans. It's the reason Watson, IBM's big AI system, hasn't taken off in healthcare: there's no point recommending a treatment for cancer if the algorithm can't explain why it chose that treatment in the first place.

This is hard to grasp, but AI isn't just a 'very powerful' program; AI is something else entirely. We don't even use traditional words like write or build to refer to the process of creating them (like we do regular programs); instead we use the word train.

We train an algorithm to play Go, to drive, or to treat cancer. We do this the same way we breed dogs: we pick specimens with the traits we want, and breed them till we end up with something that matches our desires. How a dog works, and what a dog thinks, is irrelevant. If we want them big, we simply breed the biggest specimens; the process is focused entirely on outcome.

Similarly, how the algorithm behaves is driven by what it was trained to do. How it works is irrelevant; all that matters is the outcome. Can it play Go? Can it drive? Can it answer Jeopardy!? If you want to understand an algorithm, you need to know what it was trained to do.
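If the dog-breeding analogy feels abstract, here's a minimal sketch of what outcome-driven 'training' looks like: a selection loop in Python with a fitness measure I've made up purely for illustration. Real systems like AlphaGo use far more sophisticated methods, but the principle is the same: we select entirely on outcome, and never inspect how a candidate works inside.

```python
import random

# A toy fitness measure: how close do five numbers sum to a target?
# This stands in for "can it play Go / drive / keep you browsing" --
# we only ever score the OUTCOME, never the inner workings.
def fitness(candidate):
    return -abs(sum(candidate) - 42)

# Start with a random population of candidate "specimens".
population = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(50)]

for generation in range(200):
    # Keep the best performers...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and "breed" mutated copies of them to refill the population.
    population = [
        [gene + random.gauss(0, 0.1) for gene in random.choice(survivors)]
        for _ in range(50)
    ]

best = max(population, key=fitness)
print(f"best specimen sums to {sum(best):.3f} (target was 42)")
```

After a couple of hundred generations the population reliably hits the target, yet nothing in the loop ever asks how any specimen does it.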

Anna Lytica was trained to keep you browsing Facebook; after all, the company's other endeavors, like Internet.org and Instant Articles, were built with the same intention. And while good ol' Mark has stated that he's tweaking Anna to reduce the time people spend on Facebook, this is something new: an exception after years of Facebook tweaking her to keep you on the site.

After all, the average monthly user spends 27 minutes per day in the app, and if you go by daily users, they spend about 41 minutes per day on Facebook. If that's the end result of tweaking Anna to ensure we spend less time on Facebook, God help us all!

And while it's difficult to understand how Anna works, it's very easy to guess how she'll behave. If the end result of Anna's training is to keep you browsing Facebook, then human psychology hands her a simple lever that all humans share: confirmation bias.

It’s all about being right, and never wrong

Confirmation bias is our deeply rooted love for being right. We love reading articles confirming what we already believe, and can gladly sit through hours of video reinforcing what we 'know' to be true. It's a well-established tendency in all human beings to seek out points of view that match our own.

On the opposite end of the spectrum is cognitive dissonance, that disgusting feeling of being told you're wrong, or of being confronted with new facts that contradict your beliefs. The more emotionally charged the belief, the stronger the effects of confirmation bias and cognitive dissonance become.

When we combine our human tendency for confirmation bias with an algorithm like Anna, we get filter bubbles.

Anna is a quick learner. Soon she'll notice which articles keep you engaged and which turn you off, and she'll build an 'imaginary world' of content that delivers confirmation bias and zero cognitive dissonance. This is your Facebook timeline, and hidden underneath is Anna, promoting what keeps you engaged (confirmation bias) and deleting what would make you log off (cognitive dissonance). We call this a filter bubble.
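To see how little machinery a filter bubble needs, here's a minimal sketch of an engagement-ranked feed. The posts, the 'stance' axis, and the scoring rule are all hypothetical; Facebook's actual ranking model is vastly more complex and not public. The point is structural: rank by predicted engagement, and dissonant content never makes the cut.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    stance: float  # -1.0 (anti) .. +1.0 (pro), a toy political axis

def engagement_score(post: Post, user_stance: float) -> float:
    # Assumption: agreeable content gets clicks (confirmation bias),
    # contradicting content makes you log off (cognitive dissonance).
    return 1.0 - abs(post.stance - user_stance)

def build_timeline(posts: list[Post], user_stance: float, k: int = 3) -> list[Post]:
    # Rank every candidate post by predicted engagement, keep the top k.
    ranked = sorted(posts, key=lambda p: engagement_score(p, user_stance), reverse=True)
    return ranked[:k]

posts = [
    Post("Pakatan rally draws record crowd", +0.9),
    Post("BN unveils new manifesto", -0.8),
    Post("Analyst questions both coalitions", 0.0),
    Post("Why the opposition is right", +0.7),
]

for p in build_timeline(posts, user_stance=+0.8):
    print(p.title)  # only agreeable posts survive the ranking
```

Nobody coded "hide opposing views" anywhere; the bubble falls straight out of maximizing engagement.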

It's bad, because you never see an opposing viewpoint, and soon you begin to think the entire world agrees with you. If you support Pakatan, pro-Pakatan posts appear on your timeline, and only pro-Pakatan posts (no Lim Sian See here).

Filter bubbles are bad, but hold your horses, this gets worse.

You’re never hard-core enough for Lionel

Anna has a brother, Lionel Lytica, who does the same job but works for Google. Specifically, he works for YouTube and is in charge of recommending videos for your autoplay list. Google doesn't talk much about Lionel either, but there are reports that he may be one of the biggest radicalizers on earth. Forget Zakir Naik, Lionel's got him beat.

In 2016, Zeynep Tufekci, a contributing opinion writer at The New York Times, watched Trump videos on YouTube while researching a piece she was writing. Soon after, YouTube started recommending white supremacist rants, Holocaust denials and other disturbing videos to her account. Intrigued, she set up a brand-new account and experimented by viewing videos of Bernie Sanders and Hillary Clinton instead. She reports:

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This isn't a filter bubble, this is a filter rip-tide: it pulls you in, sucks you under and keeps you down. It not only creates an artificial reality, it amplifies the extremes of that reality and presents them to you, all in the hope of keeping you 'engaged' long enough for more ads.
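Here's a minimal sketch of that rip-tide dynamic, using a made-up catalogue and a made-up rule of thumb: among related videos, the more intense ones hold viewers longer, so a recommender maximizing watch time keeps stepping up the intensity. YouTube's actual system is not public; this only shows how escalation falls out of the incentive.

```python
# A toy catalogue: (intensity 0..1, expected watch-minutes). Both the
# videos and the "more extreme keeps you watching longer" assumption
# are invented for illustration.
videos = {
    "Jogging for beginners":       (0.1, 4.0),
    "Marathon training tips":      (0.4, 6.0),
    "Ultramarathon documentaries": (0.8, 9.0),
    "200-mile desert death races": (1.0, 11.0),
}

def recommend_next(current_intensity: float) -> str:
    # "Related" videos: at least as intense as the current one, but not
    # wildly beyond it. Among those, pick whatever maximizes expected
    # watch time -- which quietly means picking the most extreme.
    related = {
        title: (intensity, minutes)
        for title, (intensity, minutes) in videos.items()
        if current_intensity <= intensity <= current_intensity + 0.45
    }
    return max(related, key=lambda title: related[title][1])

watching = "Jogging for beginners"
for _ in range(3):
    watching = recommend_next(videos[watching][0])
    print("Up next:", watching)  # jogging -> marathons -> ultras -> death races
```

Each individual step looks harmless ("just a slightly more intense video"), but the loop never has a reason to step back down.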

Not only do you view pro-Pakatan videos, you view ever more extreme videos from them, and suddenly you go from regular Pakatan supporter to die-hard ultra fan. We see this in sports all the time: two sets of supporters who believe their team can do no wrong and want victory at all costs. But when it crosses over to politics, things get really ugly really fast.

Both Anna and Lionel are not real people (duh!), but their decisions impact us real folks all the time. If an editor at a daily newspaper decided to choose content that kept people engaged, regardless of its accuracy or 'radicalizing' impact, he'd be imprisoned in Malaysia; yet somehow Anna and Lionel, who work for multi-billion-dollar companies, remain unscathed by any form of regulation.

And sadly it gets even worse.

Sex may sell, but Anger is viral

In a study of what makes stories viral, two researchers from the University of Pennsylvania discovered that anger is the best emotional button to press if you want readers to share your content.

Analysis of more than three months of New York Times articles sheds light on what types of online content become viral and why… Sadness, anger, and anxiety are all negative emotions, but while sadder content is less viral, content that evokes more anxiety or anger is actually more viral. These findings are consistent with our hypothesis about how arousal shapes social transmission.

For example, a one-standard-deviation increase in the amount of anger an article evokes increases the odds that it will make the most e-mailed list by 34%.

In other words, the angrier the article, the more likely it is to be shared, which explains why social media platforms like Twitter and Reddit (which are built almost entirely around sharing viral content) become cesspools of angry assholes shouting at each other. The platform by nature hosts more angry content (or rather, by human nature).
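To make the 34% figure concrete, here's a quick back-of-the-envelope in Python. Note that the 20% baseline is an assumption of mine, invented purely for the arithmetic; the study reports odds ratios, not base rates.

```python
# What does "increases the odds by 34%" buy you in practice?
baseline_p = 0.20                              # assumed chance of making the list
baseline_odds = baseline_p / (1 - baseline_p)  # 0.25

angrier_odds = baseline_odds * 1.34            # one std-dev more anger
angrier_p = angrier_odds / (1 + angrier_odds)

print(f"odds:        {baseline_odds:.3f} -> {angrier_odds:.3f}")
print(f"probability: {baseline_p:.1%} -> {angrier_p:.1%}")  # 20.0% -> 25.1%
```

A five-percentage-point jump per standard deviation of anger is a serious edge in a feed where everything competes for the same three slots.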

Now that we've established how Anna and Lionel lock you in a filter bubble and radicalize you along the way, and how anger makes content go viral, we have a trifecta that guarantees a shitty outcome.

The polarization of Malaysia

This is where I get to the crux of my post, the polarization of our country.

Political discourse is extremely important for any democracy, but it cannot occur if nobody is allowed to speak. The government is of course the prime offender, but so too are the opposition 'fan-boys'. These are folks who would not entertain the idea that Pakatan could do any wrong, and who are convinced any attempt to rebuke a Pakatan leader is a ploy by 'dedak-eating BN imbeciles' that must be shut down.

Consider a letter penned by Nathaniel Tan in Malaysiakini, rebuking Lim Guan Eng for a cringe-worthy sing-along with young children at a state-run tuition program. I link to Nathaniel's Wikipedia page, which should be enough for anyone to conclude he's not a dedak-eating BN sympathizer, but that didn't stop commenters on Malaysiakini from remarking:

Nata , you are more stupid and idiot than I thought. Hope you are no more a PKR member and if you are ex , thanks god…

and this

…LGE striking back is called retribution and as far as I’m concerned, is not such a big deal to get you all worked up, Nat. Though I’m sure you got a quite good work out from that.

and my favorite comment on the site:

Those who sit on the fence will soon feel big pain in the arse, Nathaniel. Your nitpicking self-righteousness and namby-pamby moralizing at inappropriate moments have turned your opinion pieces into a right pain in the national butt. By all means air your educated views – but AFTER we remove BN and have space to be polite and politically correct.

In other words, the commenter accepts that it is an educated view, but today's political climate requires us to halt all such comments until we remove BN. Educated views will only be entertained once Pakatan is in Putrajaya, it seems. This polarization, the "with us or against us" philosophy, is damaging Malaysia.

The polarization is more apparent with Pakatan than with Barisan, but it exists nonetheless, and it correlates most strongly with emotional issues. We're more likely to share articles written in an angry tone, and Facebook and YouTube guarantee those shares reach their intended audience and keep that audience locked in a filter bubble.

The filter bubbles aren't just echo chambers, they're loud echo chambers of angry people shouting at each other about just how bad the 'other side' is. This is not good for any country, and it certainly isn't good for our elections, because it creates a false reality in which our side is perfect, the other side is the devil incarnate, and there's no in between.

Conclusion

And this is why I'm uncomfortable with Pakatan as it stands today (leave hateful comments below). Pakatan supporters form an echo chamber that cannot tolerate a dissenting view; even people like the #UndiRosak folks are being vilified and harassed.

But maybe the problem isn't Pakatan itself, but rather that Pakatan voters are more likely to be urban-dwelling, smartphone-carrying social-media users, and use social media far more than their rural, BN-supporting cousins. Maybe the reason Pakatan circles are full of echo-chamber assholes is that they're using social media more!

Maybe (just maybe) we’re already victims of Anna and Lionel, but just don’t know it yet.

My adopted home (Singapore) is looking to implement a fake news law, while my real home (Klang, Malaysia) has already done so. Might I suggest that fake news isn't the big problem; the real issue is the algorithms that power these platforms, and that is what should be regulated.

Useful Reading:

This video will make you angry

How Machines Learn

But what *is* a Neural Network?

YouTube, the Great Radicalizer
