August 02, 2014
Sir, Tim Harford discusses the future of driverless cars in “Pity the robot drivers snarled in a human moral maze”, August 2. But he left out some angles I would have liked him to explore.
For instance, when he speaks of hiccups, would accidents in human-guided cars and in computer-guided cars really be the same type of accident? Is it not foreseeable that an accident resulting from a computer glitch could cause horrors far beyond the worst pile-ups produced by bad weather? I mean something like the pile-up crash of bank assets caused by having banks follow the opinions of only a few credit rating agencies… agencies that, on top of it all, are humanly fallible.
And how does Harford’s reference to a person “being so arrogant as to think he could drive without an autopilot” square with the constant badmouthing of bankers who did little more than trust the autopilot installed by their regulators?
But Harford is indeed spot on when he ends by noting that “the question of what we fear and why we fear it remains profoundly, quirkily human”. Is it not a great example of that, that bank regulators, who in all logic should fear most what bankers do not fear, decided to base their fears on exactly the same ex ante perceptions of risk, and concocted their risk-weighted capital requirements accordingly?
In fact, carrying the driving analogy over to banking, what we now have is perhaps the worst of all worlds, namely bankers and regulators driving simultaneously, using the same instruments and the same data... Can somebody at least make up his mind about who is in charge, so that it is clear who or what we should blame in case of an accident?