On Sunday night, at around 10 pm, an autonomous Uber hit a 49-year-old woman in Tempe, Arizona, US, as she walked her bicycle outside the crosswalk. She later died in hospital, as reported. It blew the mind right from that very moment.
After the news of the crash broke, Uber tweeted, “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident.”
While the investigations are ongoing, reports have now come in suggesting that Uber was likely not at fault. Tempe Police Chief Sylvia Moir told the San Francisco Chronicle, “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident.”
Moir added, “I won’t rule out the potential to file charges against the [backup driver] in the Uber vehicle.”
What is clear is that the driverless Uber [there was a vehicle operator inside the car during the crash] failed to detect the pedestrian, whatever the reason; one possibility is that she was walking outside the crosswalk.
People walk outside crosswalks every day, and there will be countless driverless cars on the road in the near future. What does that point to? Viewed pessimistically, it points toward the likelihood of many more fatal accidents, which means many more deaths.
For that matter, I can’t bring myself to trust driverless cars at all, regardless of how this technology aspires to transform our lives, to bring us convenience, to create many more jobs and to generate far more revenue for companies in a shared economy.
On top of that, public anxiety will only rise as people realise that driverless cars are headed their way, and that one might hit them too.
The possible consequences, whether or not they ever materialise, are a separate thought altogether, but they are enough to make one anxious.
Scepticism about accepting such a state of affairs will only grow when people know their lives might be at stake even though they have nothing to do with those driverless cars.
Two years ago, the Institution of Mechanical Engineers claimed that road accidents could be cut by up to 95 percent by eliminating human error. When I learnt to drive, I was taught how and when to switch gears in relation to speed while accelerating, when to brake and when to release the clutch. Easy. I was also taught how to act when someone suddenly appears in front of the car and, more importantly, how to react with the gears, clutch, accelerator and, of course, the brakes.
Brakes are very important; everyone knows that. So I asked my instructor: what if the brakes fail? His expression went blank and I felt like a nincompoop. He had no answer, and I could only pray that such a situation never arrives.
And we are terrible at driving. Sometimes we drive rashly, sometimes we forget to use the brakes, or even the difference between high-beam and low-beam headlamps. Sometimes we forget the idea behind the invention of the car altogether, because when it comes to speed, we call it thrill. Thrill is one reason we see so many accidents every day; flouting the rules for the thrill is another. And that is what puts the case for self-driving cars at the front.
But this crash has again raised too many questions to answer before we can start to believe that a driverless future really is the future.
A study from MIT titled “The social dilemma of autonomous vehicles” [essentially about life-and-death decisions] left me torn and worried. After surveying participants, the researchers found that the majority approved of utilitarian autonomous vehicles that sacrifice their passengers for the greater good (which basically means saving pedestrians and everyone else outside the car), and would like others to buy them. Nobody wants to see a car hit anyone, you see.
But they would themselves prefer to ride in autonomous vehicles that protect their passengers at all costs. It strikes a chord, and it is entirely understandable. Nobody wants to get killed, you see, again.
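To make that tension concrete, here is a minimal sketch of the two competing decision rules the study contrasts. It is purely illustrative, built on a toy scenario; the names used here (Scenario, utilitarian_policy, self_protective_policy) are hypothetical and bear no relation to any real autonomous-vehicle software.

```python
# A toy model of the dilemma: two hypothetical policies for an
# unavoidable-collision scenario. Illustrative only.
from dataclasses import dataclass


@dataclass
class Scenario:
    passengers: int   # people inside the car
    pedestrians: int  # people in the car's current path


def utilitarian_policy(s: Scenario) -> str:
    """Minimise total harm, even at the passengers' expense."""
    if s.pedestrians > s.passengers:
        return "swerve (risk the passengers)"
    return "stay on course"


def self_protective_policy(s: Scenario) -> str:
    """Protect the passengers at all costs."""
    return "stay on course"


if __name__ == "__main__":
    s = Scenario(passengers=1, pedestrians=3)
    print("utilitarian:    ", utilitarian_policy(s))      # swerve (risk the passengers)
    print("self-protective:", self_protective_policy(s))  # stay on course
```

Trivial as it is, the sketch captures the survey’s finding in one line of logic: people want other cars to run the first function while their own runs the second.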
It not only left me perplexed but also put the driverless future under the scanner. The findings must have left carmakers and authorities with a dilemma, and of course everyone else, including you and me. Who will be responsible when a driverless car hits a human? With great technology surely comes great power, but who is to blame when the power lies not in human hands but in machines that run on code? Artificial intelligence and human intelligence seem to have no answer yet.
Joshua Greene, a professor of psychology at Harvard University, noted, “the critical feature of a social dilemma is a tension between self-interest and collective interest.”
The not-so-obvious issues have already started to look very obvious. That will only make consumers, authorities and everyone else, whether interested in a driverless future or not, more sceptical. The carmakers investing billions of dollars in developing this technology will only find their troubles growing manifold.
Some time ago, we ran a story that made this point: “if something goes wrong, someone has to take the blame. That is how the world works (or has worked until now). However, liability is going to be a major challenge with driverless cars. Someone will have to take the blame. It can’t be the car company unless it was a faulty car, it can’t be the driver (there isn’t one) and it certainly can’t be the coders at Google.”
So, who will be blamed when a driverless car hits a human? The worry keeps lingering in the mind.
Recently, a study by John Kingston mapped out the different cases ‘considering the question of legal liability for AI computer systems’. Ever since the driverless future started dawning on us, that question has remained unanswered, and it still is; what we have now are the cases. At least we have come this far, and Kingston’s detailed exploration raises many more concerns.
As per the study, three legal models could apply to AI systems: perpetrator-via-another, natural probable consequence, and direct liability.
In the first case, his point is that when someone instructs a machine to commit an offence, the AI could be held innocent but the instructor could not. “An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another,” says Kingston.
The second case arises when the ordinary actions of an AI system are used inappropriately to perform an illegal act. He cited an example: “The robot erroneously identified the employee as a threat to its mission, and calculated that the most efficient way to eliminate this threat was by pushing him into an adjacent operating machine.” But there can be a defence here too, as the question arises whether the programmer was aware that this consequence was a probable result of the system’s use.
The third, direct liability, requires both an action and an intent. “The intent is much harder to determine but is still relevant,” says Kingston. “Speeding is a strict liability offense,” he adds; in such a case, it would be the AI program driving the car that is liable, not the owner.
And then it all comes down to the issue of defense. Kingston asks, “could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?”
However, an executive order reportedly issued by Arizona Gov. Doug Ducey in March specifies that “a company is liable if one of its self-driving cars negligently kills a person.”
Then there is the issue of punishment. For instance, what form would punishment take if an AI were directly liable? And on what grounds would an AI be punished?
It becomes a vicious cycle in which, needless to say, humans end up the victims who have to suffer, and may always stand at the losing end. For instance, if the punishment announced were to ban an AI system, all the effort, time and money invested in developing it would go in vain; it would all start to look worthless. On one end, companies would be making losses, and on the other, pessimism among everyone else would seem to grow without end, even if carmakers pretend everything is fine and chill.
Jason Levine, executive director of the Center for Auto Safety, a Washington-based advocacy group, said, “it will set consumer confidence in the technology back years if not decades,” adding, “We need to slow down.”
Last year, we were bombarded with news of a self-driving Uber involved in a high-speed crash. It later came out that the other car was at fault; that is, the situation was created by human error. The scepticism that had started growing fell away suddenly, and we went back to wishing for faster developments in this space as the (bad) news faded quickly.
We hoped that someday we would sit in the back seat while the steering wheel took its right and left turns with no driver involved. Some might have already planned their work-life balance around it, sleeping in the car while commuting, for example. But after this recent shocking blow, it has all turned into a nightmare.
It has taken us back, once again, to the debate of ifs and if-nots.
This self-driving car crash isn’t a nightmare for Uber alone but for all the companies and visionaries who have been aiming to turn the technology into a mode of everyday transport.
Arguments will be made time and again, and the companies will surely try [I hope they try harder] to leave no stone unturned to make the driverless future a reality at the earliest; after all, testing began a long time ago.
But can companies assure us that there will never again be a need to pull driverless cars off the roads with immediate effect? This only adds to the challenges for carmakers, and adds fuel to the public’s anxiety.
This and that, and that and this, remain unclear. To sum up, a lot remains to be seen, and it is too early even to ask for that, as we are headed back to where we started.