January 15, 2017

When will an Artificial Intelligence Agent declare humans too dangerous as drivers and too dumb as emission measurers?

Sir, I refer to your “From diesel emissions to omitting the driver”, January 15.

It is clear, notwithstanding that only one side will pay for it, that in the case of the failed carbon emission controls, both the measured and the measurers are to blame. Any regulation that fails, in any shape or form, should bring on some consequences for the regulators… let us say a 50% salary reduction.

As it is, just look at the case of bank regulators, those who set risk weights of 20% for what is AAA rated and 150% for what is rated below BB-. That evidenced they had (and have) no clue about what they were doing; and so they caused the crisis of AAA-rated securities backed with mortgages to the subprime sector. But they still go to Davos, flying at least business class, to lecture the world on what to do.

It is also clear that one of the biggest challenges for the safety of driverless cars is that these might also encounter human drivers on the road. So either the driverless cars are equipped with software that handles human-driving whims, or, sooner or later, some Artificial Intelligence Agent will take us humans off the road. Is that good or bad?

My answer to that question goes somewhat along this line: if absolutely all of humanity is taken off the road, and so we all entirely lose the abilities needed to drive, so be it. But if some humans were still allowed to drive, why would I want those to be somebody else’s grandchildren and not mine?

PS. About driverless cars, the issue of how to tax these, so as not to lose out on the taxes we currently collect, for instance from PhDs driving taxis in New York, is also pending.