July 02, 2016
Sir, Brooke Masters, with respect to the recent Tesla accident that caused a death, writes: “the more a car’s autopilot does, the less experience drivers will have — and the less watchful they will become. It is madness to expect them to seize the wheel and work a miracle in a moment of crisis” (“Tesla tragedy raises safety issues that can’t be ignored”, July 2).
That sounds a lot like banking, which is becoming more and more automatically responsive to regulations, which could be faulty, and less and less responsive to bankers’ diverse senses.
And Masters holds that “US regulators, who are in the midst of writing new guidelines for autonomous vehicles, need to take this into account before they give blanket approval to partially self-driving cars”.
That sounds a lot like when our bank regulators concern themselves with the risks of individual banks and not with the risks to the whole banking system. If those regulators only evaluate how autonomous vehicles respond to traffic in which humans drive all the other vehicles, they will not cover the real systemic dangers.
Masters informs us: “Tesla noted that this was the first death to occur in 130m miles of driving on its autopilot system, versus an average of one death per 60m miles of ordinary driving.”
And to me that is quite useless and dangerous information, considering the possibility of the mega chain-reaction pile-up that could result when all or most cars are on autopilot, responding or trusting in similar ways… much like when the very small capital requirements against what was AAA rated caused the mega bank crisis.
I can hear many arguing that if all cars are controlled then no accidents could occur. Yes, that might be so, but for it to happen we would, as a minimum minimorum, need to control all hackers to absolute perfection.