
Technology saves lives, but it isn’t perfect

What do you do when the technology turns on you?

Or when the feature that’s built to save you is the one that might just kill you?

There’s a stark similarity between the Takata airbag fiasco, which has already taken two Malaysian lives, and the driver who died in a self-driving Tesla.

Both involve the auto industry and both are technology related, but together they represent a much deeper issue: despite our noblest expectations, technology isn’t perfect, but it’s still better than what we had before.

We’ve all been trained by Hollywood to expect perfect technology, working all the time and in every scenario, but in reality technology sometimes fails, and newer technology fails more often.

Technology endures its failures only by our good graces, and unless we grant it that grace, we will not progress.

What should our response to a technical failure be?

Do we insist on removing ALL traces of the offending technology, or do we accept the occasional failure as a tax we pay for better technology, a price of progress?

But are some taxes just too high?

Society might accept failing antennas on an iPhone, or even bad Google searches, but an airbag that might blow a hole in your chest, or a car that might crash you into a truck, might be too high a price.

So is the tax on airbags and self-driving cars simply not worth the potential safety we get in return?

The progress tax

To be sure, car accidents are a massive human catastrophe. Just massive!

Every year, 1.3 million people die in car accidents, compared to ‘just’ 400,000 people killed in the Syrian conflict over the last 5 years. So over the same period, car accidents would have killed more than 14 times as many people as Assad, ISIS and Al-Qaeda put together!
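A quick back-of-the-envelope check of that comparison, taking the article’s round figures (1.3 million road deaths per year, 400,000 conflict deaths over 5 years) at face value:

```python
# Back-of-the-envelope comparison using the article's round figures.
annual_road_deaths = 1_300_000   # roughly, per year worldwide
syria_deaths_5yr = 400_000       # estimated conflict deaths over 5 years

road_deaths_5yr = annual_road_deaths * 5
ratio = road_deaths_5yr / syria_deaths_5yr

print(f"{road_deaths_5yr:,} road deaths over the same 5 years")  # 6,500,000
print(f"{ratio:.1f}x the conflict's toll")                       # 16.2x
```

Whichever rounding you prefer, the scale of the comparison holds.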

Of course accidents don’t get the same press treatment as a civil war in the Middle East, but car safety has been moving steadily in the right direction. We’ve mandated seat belts and other safety features, and as any car fanatic will tell you, car safety standards have improved dramatically over the years. Cars that held 5-star safety ratings a few years ago would barely qualify for 3-star ratings today.

But while we can patch software, and swap out hardware, the weakest point of any computer system is the human-ware. (and make no mistake, your car is a computer system)

And if we want to eliminate these 1.3 million deaths, you’d look at eliminating the weakest point in a moving car, and that, dear readers, is undoubtedly the human being behind the wheel.

Humans panic, they fall asleep at the wheel, they drink and drive, they go over the speed limit, they text while driving, and they do a hundred other things that put their lives and everyone else’s at risk. Driving on the race track may be a fun hobby you do on weekends, but driving on public roads is best left to a completely rational machine, one that doesn’t panic, or drink, or require rest, or go above speed limits. It’s the reason planes have auto-pilot: machines make better pilots.

AI was built not to become Skynet, but rather to eliminate the need for humans to do menial tasks. Ultimately, driving is a menial task, and menial tasks are best left to machines.

I accept that having one driver die in an autonomous vehicle is certainly sad, but why isn’t it any sadder when two people die in a standard human-driven vehicle? Or, viewed on a much larger scale, why isn’t it devastating us as a species that 1.3 million of us die each year from what is essentially a preventable disease? Why doesn’t anyone run marathons and climb mountains for car-accident awareness week?

If somebody claimed to have the cure for AIDS or cancer, but said that in the process of perfecting the cure they’d need to test it and possibly kill some of the patients, would you agree to such a move? If not, aren’t you just settling for the status quo, doing nothing to stop these deadly diseases from killing more of us?

Tesla claims that the auto-pilot on their cars is roughly 1.5 times safer than the average human-driven car in the US, and twice as safe if you consider the global average.

It’s not perfect, but it’s better than us–much better!

And because AI sorta-kinda-maybe follows Moore’s Law, we can expect that one-fatality-per-130-million-miles statistic to improve exponentially over time, to the point where it’s roughly 1,000 times safer than human drivers in just 10 years. But AI can’t move along that trajectory unless we let it, and grant it our good graces to keep chugging along.
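For context on where those multiples come from: Tesla’s 2016 statement cited roughly one fatality per 130 million miles driven on auto-pilot, against about one per 94 million miles for US driving and one per 60 million miles worldwide. Taking those claims at face value, and reading Moore’s Law loosely as “doubles every year”:

```python
# Miles driven per fatality -- figures as claimed in Tesla's 2016 statement.
autopilot = 130e6   # miles per fatality, auto-pilot engaged
us_avg = 94e6       # miles per fatality, US average
world_avg = 60e6    # miles per fatality, worldwide average

print(f"vs US average: {autopilot / us_avg:.2f}x")        # 1.38x -- the 'roughly 1.5x'
print(f"vs world average: {autopilot / world_avg:.2f}x")  # 2.17x -- the 'twice as safe'

# If miles-per-fatality doubled every year (a Moore's-Law-style assumption):
print(f"after 10 years: {2 ** 10}x")                      # 1024x -- the 'roughly 1,000x'
```

The 10-year projection is only as good as the doubling assumption, but it shows how the arithmetic gets you from “a bit safer” to “a thousand times safer”.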

If we plan to progress ever nearer to perfect drivers on the road, and eliminate millions of unnecessary deaths, we should accept the tax rather than settle for the risky status quo.

But what about Takata?

Takata’s case is slightly different from Tesla’s: the deaths were caused by an unstable chemical in the airbag inflator, and aren’t really a learning-curve tax.

Now, to be fair, the chemistry necessary to inflate an airbag in a fraction of a second isn’t going to be stable and slow like molasses on top of whipped cream, so almost by definition it has to be ‘unstable’.

The question then becomes: is the airbag fiasco a tax we have to pay for better airbags in the future, or simply a miscalculation by Takata to cut costs and improve their bottom line? I think the jury is still out on that one. But even leaving aside who’s to blame (because I’m sure the US Congress will have something to say on that), what should you as the consumer do?

If you were driving a second-generation Honda City, should you turn off your airbags (even if you could)? Airbags are safety features designed to save you, and turning them off seems quite ridiculous. On the other hand, driving around with one of those things is like playing Russian Roulette, which sounds terrible.

It’s not an easy decision, and even car nuts are split on whether you should forgo airbag safety in exchange for safety from airbags. It’s a difficult call to make when the very technology that’s built to protect you might also, accidentally, kill you.

If you were Barack Obama, and you knew one of your well-armed Secret Service bodyguards might just pull out his gun and shoot you, would you then dismiss all Secret Service protection altogether and just ride solo?

Hard decisions, with uncertain outcomes, but a judgement call must be made, knowing the trade-offs.

Filed under: AI