
July 02, 2016

Are regulators ignoring the systemic risks that derive from many or all cars being on autopilot? Like in banking?

Sir, Brooke Masters, with respect to the recent Tesla accident that caused a death, writes: “the more a car’s autopilot does, the less experience drivers will have — and the less watchful they will become. It is madness to expect them to seize the wheel and work a miracle in a moment of crisis”, “Tesla tragedy raises safety issues that can’t be ignored”, July 2.

That sounds a lot like banking now becoming more and more automatically responsive to regulations, which could be faulty, and less and less responsive to bankers’ diverse senses.

And Masters holds that “US regulators, who are in the midst of writing new guidelines for autonomous vehicles, need to take this into account before they give blanket approval to partially self-driving cars”. 

That sounds a lot like our bank regulators being concerned with the risk of each individual bank and not with the risks to the whole banking system. If regulators just evaluate how autonomous vehicles respond to traffic in which humans drive all the other vehicles, they will not cover the real systemic dangers.

Masters reports: “Tesla noted that this was the first death to occur in 130m miles of driving on its autopilot system, versus an average of one death per 60m miles of ordinary driving.”

And to me that is quite useless, even dangerous, information, considering the possibility of the mega chain-reaction pile-up crash that could result when all or most cars are on autopilot, responding or trusting in similar ways… just as when the very small capital requirements against what was AAA rated caused the mega bank crisis.

I can hear many arguing that if all cars are controlled then no accidents could occur. Yes, that might be so, but for that to hold, as a minimum minimorum, we would need to control all hackers to absolute perfection.

August 02, 2014

Currently both bankers and regulators are driving the bank cars simultaneously, using the same instruments and data.

Sir, Tim Harford discusses the future of driverless cars in “Pity the robot drivers snarled in a human moral maze”, August 2. But he left out some angles that I would have liked him to explore.

For instance, when he talks of hiccups, would accidents in human-guided cars and in computer-guided cars be the same type of accident? Is it not foreseeable that an accident resulting from a computer glitch could cause horrors far beyond those of the worst pile-up crashes produced by bad weather? I mean something like the pile-up crashes in bank assets caused by having banks follow the opinions of only a few credit rating agencies… agencies that, on top of it all, are humanly fallible.

And how does Harford’s reference to a person “being so arrogant as to think he could drive without an autopilot” stand up against the constant badmouthing of bankers who did little but trust the autopilot installed by their regulators?

But Harford is indeed right on the spot when he ends by noting that “the question of what we fear and why we fear it remains profoundly, quirkily human”. Is it not a great example of that, that bank regulators, who in all logic should fear most what bankers do not fear, decided to base their fears on exactly the same ex ante perceptions of risk… and so concocted their risk-weighted capital requirements?

In fact, carrying the analogy of driving a car over to banking, what we now have is perhaps the worst of all worlds, namely bankers and regulators driving simultaneously, using the same instruments and the same data... Can somebody at least make up his mind about who is in charge, so that it is clear whom or what we should blame in case of an accident?