
September 05, 2025

#AI #ChatGPT #Grok: Once artificial intelligence really gets going, how fast can the economy grow?

I followed the lead set by Tim Harford in "If AI lifts off, will living standards follow?" and asked ChatGPT and Grok.

I asked ChatGPT:

"Please, feel free to speculate… though in less than 300 words:

Once artificial intelligence really gets going, how fast can the economy grow? Five per cent a year? Ten per cent? Fifty per cent?"

ChatGPT answered:

"Speculating freely: once artificial intelligence truly integrates across the economy, the growth rate could, in principle, exceed anything we’ve seen in modern history. Traditional constraints—labor supply, managerial capacity, coordination costs—are precisely what AI promises to relax.

At the moderate end, AI might sustain global growth in the 5–7% range for decades, roughly doubling the pace of advanced economies today. That would already be transformative: compound growth at 7% doubles output every 10 years.

A more extreme possibility is 10–20% growth annually, if AI automates not just routine work but also much of scientific discovery, engineering, and entrepreneurship itself. In that case, the bottleneck of “who invents the next breakthrough” loosens, and innovation becomes continuous and self-reinforcing. Such growth rates imply doubling the economy every 3–7 years—levels usually associated with small frontier economies catching up, not the entire world.

The most radical speculation—50%+ annual growth—would mean AI essentially creates a self-improving system of production, design, and distribution, with humans struggling to keep up. That’s not just economic growth in the classical sense but something more akin to a civilizational transformation, where “GDP” may cease to measure what matters.

So the spectrum runs from a modest boost (5%) to explosive acceleration (>20%), depending on how fast AI can (1) replicate human-level reasoning, (2) integrate with physical production, and (3) avoid coordination and social frictions. In practice, political, regulatory, and resource constraints may prevent the highest numbers—but technically, nothing rules them out."


Would you like me to anchor this in historical parallels (e.g., Industrial Revolution, electricity, internet) to show what kind of multiplier might be realistic, versus purely speculative?

I said: Yes please, but in less than 200 words.

ChatGPT said:

"Historically, big technologies lifted growth but within limits. The Industrial Revolution doubled Britain’s per-capita growth from ~0.3% to ~1% a year—huge by historical standards but gradual. The spread of electricity and mass production pushed U.S. growth to ~2–3% sustained. The computer and internet era gave a one-time boost, but growth rarely exceeded 4–5% in advanced economies.

AI differs because it can speed up not only production but also innovation itself. If it amplifies discovery like electricity amplified muscle power, we could see global growth at 5–7% for decades—unprecedented at scale. If AI can recursively accelerate research, growth could climb to 10–20%, closer to the explosive catch-up of “Asian Tigers,” but applied globally.

The leap to 50%+ annual growth has no historical parallel; it would be more like a technological singularity than a continuation of economic history. Past revolutions suggest sustained boosts are real, but explosive, runaway growth would mark a civilizational break rather than just “faster GDP”."


Do you want me to sketch what kind of constraints (energy, politics, social adjustment) might hold AI growth closer to the historical pattern than the singularity path?

I said: Yes please, but in less than 200 words.

ChatGPT said:

"Several constraints could keep AI-driven growth closer to historical patterns. Energy is a hard limit: exponential growth in computing and production demands vast power, and scaling renewable, nuclear, or other sources takes time. Physical capital—factories, housing, transport—cannot be replaced overnight, even if AI designs them instantly. Human adaptation is another brake: societies need time to retrain, absorb, and regulate, and resistance from workers, firms, or governments slows diffusion.

Coordination frictions also matter. Nations may restrict AI exports, regulate its use, or impose safety checks, curbing global acceleration. Meanwhile, resource bottlenecks—rare earths, chips, robotics hardware—could stall deployment. Finally, inequality and political backlash may block full adoption if many feel excluded or threatened.

These forces suggest AI could plausibly raise growth to 5–7%, maybe 10%, but sustained 20%+ requires solving energy, materials, and governance hurdles as quickly as technical ones. History shows revolutions spread in waves, not instant leaps."


Would you like me to quantify how fast GDP could double under each plausible growth rate (e.g., 5%, 10%, 20%) to make the comparison more tangible?

I said: For today this is enough. Thanks!

It said: You’re very welcome! I’ll be here if you want to continue the thread another day. Wishing you a good evening!


I asked Grok:

"Please, feel free to speculate… though in less than 300 words:

Once artificial intelligence really gets going, how fast can the economy grow? Five per cent a year? Ten per cent? Fifty per cent?"

Grok answered:

"Once AI truly accelerates—say, with AGI or superintelligence by the 2030s or 2040s—economic growth could shatter historical norms. Current global GDP growth hovers around 3%, but AI could amplify it dramatically through automation, innovation, and productivity surges.

In moderate scenarios, experts predict AI adding 1-2% annually to GDP via task automation and R&D boosts, pushing total growth to 4-5%. More optimistic views, like McKinsey's, foresee $4.4 trillion in annual value by 2030, potentially lifting growth to 5-10% as AI transforms industries. 

But if we hit a "technological singularity," where AI self-improves exponentially, growth could explode. Some models suggest superexponential rates, with GDP doubling yearly (100%+ growth) or faster, leading to 50% as a conservative floor in early phases. Imagine AI emulating human minds at scale, eradicating scarcity and enabling post-scarcity abundance. 

Risks abound—job displacement, inequality—but the upside? A hyper-accelerated economy, growing at rates we can't yet fathom, potentially 50% or beyond in a runaway AI boom. Speculatively, 10% seems plausible mid-term; 50%+ in full singularity mode."

I said: Thanks!

It answered: You're welcome! If you have more questions about AI or anything else, just ask.

Note: The answers are copied exactly from those given to me by ChatGPT and Grok.
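The doubling times these answers lean on follow from the standard compound-growth formula, years = ln 2 / ln(1 + g). A minimal sketch in Python (my addition, not part of either model’s answer):

```python
import math

def doubling_time(g: float) -> float:
    """Years for GDP to double at a constant annual growth rate g."""
    return math.log(2) / math.log(1 + g)

for g in (0.05, 0.07, 0.10, 0.20, 0.50, 1.00):
    print(f"{g:>4.0%} growth -> doubles in {doubling_time(g):4.1f} years")
```

At 5% output doubles in about 14 years, at 7% in about 10, at 10–20% in roughly 4 to 7 years, and at 100% every single year, matching the figures quoted above.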


July 12, 2025

Let’s pray AI does not lose the humility to accept that it needs HI

Sir, I refer loosely/closely to “Humans must remain at the heart of the AI story” by Marc Benioff, FT July 11, 2025.

About bank regulations, an obsession of mine, I dialogued with Grok 4, SuperGrok, recently introduced to society.

In reference to whether a hypothetical “Super-Duper-Grok” could outperform the decentralized, messy wisdom of millions in the market, without amplifying systemic risks it stated:

"The market, as an aggregate of millions of individual decisions, is a remarkable mechanism for pricing risk and allocating capital. It’s not perfect—prone to bubbles, panics, and mispricings—but its strength lies in its diversity of perspectives, incentives, and local knowledge. Interest rates act as signals, reflecting collective judgments about risk and reward.

Now, imagine a “Super-Duper-Grok”—an AI with vastly superior data processing, predictive modeling, and decision-making capabilities. Could it outdo the market? In theory, it might…

But here’s the catch: centralizing risk management in a single AI, no matter how advanced, introduces new vulnerabilities:

Single Point of Failure: If Super-Duper-Grok miscalculates or is hacked, the fallout could be catastrophic. 

Model Risk: Even a super-AI relies on assumptions and data inputs. If those are biased or incomplete (e.g., missing “unknown unknowns”), it could misjudge risks. 

Loss of Human Judgment: Markets incorporate intuition, skepticism, and ethical considerations—qualities an AI might lack. An AI, however sophisticated, might over-optimize for measurable metrics, ignoring intangibles like cultural values or long-term societal resilience.”

Sir, if only those besserwisser bank regulators in the Basel Committee had possessed such humility. Then they would not have decreed those risk weighted bank capital requirements which so dangerously distort and weaken the allocation of credit.

Sir, it would seem that without stupid human intelligence to feed it, artificial intelligence could also become stupid. Thank God, it seems that we will still be needed. Let's make sure we fulfill our part.

February 01, 2024

Human pilots and autopilots need to be trusting friends.

Sir, last year, flying to Washington DC in a Boeing 737 MAX 9, the clouds were so low that the pilot missed the first approach to land at Dulles International Airport. We passengers were instructed: “Please, please completely turn off all, absolutely all your computers, iPads, cellphones and similar, even if closed or placed in airplane mode. We cannot afford any type of interference. The pilot has decided he can’t land the plane, and so he will have the auto-pilot do it.”

Very long minutes of praying and checking up on co-passengers’ cellphones ensued. Finally, without having been able to see any land, thanks to God we landed. I asked my wife: “If they have to use the auto-pilot when things get really hard, why do they have pilots at all?”

Tim Harford answers that question in “Of top-notch algorithms and zoned-out humans” FT February 1. Harford writes of a storm that blocked an A330’s airspeed instruments, and of the “assistive fly-by-wire system that translates the pilot’s jerky movements into smooth instructions. This makes it hard to crash an A330. But paradoxically, it means that when a challenge does occur, the pilots will have very little experience to draw on as they try to meet that challenge.”

What does this indicate as a solution? Clearly humans and autopilots need to communicate faster and more clearly about who is more qualified to be in command. And that means they need to be able to trust each other much more.

Sir, after the experience I described: even though I have often complained about pain over my eyes when landing in a Boeing 737 MAX 9, probably because of some issue with its air pressure control, sincerely, after that landing, tell me, how could I ever complain about that Boeing plane?

@PerKurowski

June 25, 2023

A strong national spirit/character is what’s most needed for any preparedness, even against a pandemic.

Sir, I refer to Tim Harford’s “Is it even possible for countries to prepare for a pandemic?” FT June 24, 2023.

“Be prepared! It’s the scout’s motto. But prepared for what?” I was never a boy scout, but as I have understood that movement, it was about being prepared courageously for the unexpected, not silently accepting a lockdown; and about being able to light a fire without matches, not learning to deploy sewage monitors.

So, sadly, though Harford does indeed know much about economics, here I think he does not even scratch the surface of what’s most needed, like:

First: The understanding that, for a nation/society as a whole, a response to the pandemic can be much more harmful than the pandemic itself.

Second: That just as Georges Clemenceau opined, “War is too serious a matter to entrust to military men”, a pandemic is also too serious a matter to entrust to epidemiologists. Any preparedness against a pandemic must include a wide diversity of opinions.

Third: Information, information and information. With respect to this Harford mentions: “Joshua Gans, economist and author of The Pandemic Information Solution (2021), argues that we’ve learnt that pandemics can be thought of as information and incentive problems.” No: begin by giving the people full information, and then let them understand and decide if and what incentives are needed.

In July 2020 I tweeted:
“Sweden kept all schools until 9th grade open. Parents of children in 9th grade are almost always less than 50 years of age. In Sweden, as of July 24, out of 5,687 Coronavirus deaths 71, 1.2%, were younger than 50 years.”
“Conclusion: Keep schools open, keep older teachers at home and have grandparents refrain from hugging their grandchildren. Disseminating data on Covid-19 without discriminating by age, is in essence misinformation.”

Clearly, information on the relation between Covid-19 and age was available but was not sufficiently provided. One explanation could be that the Covid-19 pandemic hit the world in the midst of a polarization pandemic. On 17 October 2020 I wrote in a letter to FT that “way too many polarization profiteers just don’t want harmony vaccines to appear.”

In March 2020 I tweeted:
“In February, I visited the Churchill War Rooms in London. Reading UK’s plans of building up herd immunity against coronavirus, I have a feeling Winston would have agreed with such a stiff-upper-lip policy: ‘I have nothing to offer but fever, coughing, chills and shortness of breath’”

Sir, Neville Chamberlain’s spirit inspired UK’s pandemic answer. Just as he is present in the risk weighted bank capital requirements which incentivize much more the refinancing of the “safer” present than the financing of the “riskier” future.

I summarized the result of the above three failings in a letter published by the Washington Post in October 2020 in which I stated: "Roughly 90% of all coronavirus deaths will occur in those 60 years of age and older. Equally roughly 90% of the virus’s social and economic consequences will be paid by those younger than 60. It’s an intergenerational conflict of monstrous proportions."

PS. Sir, if you are interested you might want to read what ChatGPT – OpenAI answered when I asked: "Suppose a virus hits a nation, and to its authorities it’s evident that its mortality rate depends the most on age. In such a case, transmitting data to the population about the total number of deaths without discriminating this by age, could that be deemed to be misinformation?"

PS. Sir, you should be interested in the above; it evidences how humans can begin dialoguing with artificial intelligence, so as to have a better chance of keeping their Human Masters and appointed experts in check.

@PerKurowski

July 13, 2019

Should the tax on robots be high or low?

Sir, John Thornhill writes that Carl Benedikt Frey’s “The Technology Trap” informs us that “the number of robots in the US increased by 50 per cent between 2008 and 2016, each of them replacing about 3.3 jobs” “The return of the Luddites” July 13, 2019.

Those who are so replaced must surely have been generating some non-wage labour costs, like social security, that robots don’t. Therefore I frequently pose a question that, with the exception of some Swedes, no one wants to give me a definite answer to. It is:

Should we tax robots low so they work for us humans, or high so that we humans remain competitive for the jobs?

In an Op-Ed from 2014 titled “We need decent and worthy unemployments” I wrote: “The power of a nation, and the productivity of its economy, which so far has depended primarily on the quality of its employees may, in the future, also depend on the quality of its unemployed, as a minimum in the sense of these not interrupting those working.”

And over the years I have become convinced that in a universal basic income, large enough to help us out of bed to reach for what is available, and low enough to keep us from staying in bed, lies our best chance to find the basic social stability we need to avoid societal breakdown.

The financing of that UBI could include having those who exploit data on us citizens share with us part of their ad revenues, a high carbon tax, and perhaps taxing robots and AI (though I do not know by how much).

PS. In case you wonder why some Swedes answered the question: that has primarily to do with the existence in Swedish of the magical word “lagom”, meaning something like not too much, not too little, but just right.

July 01, 2019

Should we tax robots low so they work for us humans, or high so that we humans remain competitive?

Rana Foroohar references “a recent report into the US labour market conducted by the McKinsey Global Institute found that… the biggest reason for the declining labour share, according to the study, is that supercycles in areas such as commodities and real estate have made those sectors, which favour capital over labour, a larger part of the overall economy”, “The silver lining for labour markets”, July 1.

Do we have supercycles that favour capital over labour? At least with respect to real estate, especially houses, the “supercycle” we have is caused by bank regulators much favoring credit to what’s perceived as safe over credit to what’s perceived as risky, without one iota of importance assigned to the need to allocate credit efficiently to the real economy.

Then Foroohar refers to the problem: “shifting labour market dynamics will sharpen the political divides that already exist. Many “left behind” cities are home to more Hispanics and African Americans. Job categories that will be automated fastest are entry-level positions typically done by the young. Meanwhile, the over-50s are at the highest risk of job loss from declining skills”. As the solution, Foroohar writes: “shift policy to support human capital investment, just as we do other types of capital investment”.

Sir, unfortunately it is so much more complicated than that. Just the problems with student debt we currently hear about evidence that we might not really know how “to support human capital investments”.

Before social order breaks down, we need to start considering the need to generate decent and worthy unemployments, creating an unconditional universal basic income that serves somehow as a floor, and deciding what to do with AI and robots. Should we tax these low enough so that they do as many jobs as possible for us humans, or should we tax them high enough for us humans to remain competitive for the jobs they do?

PS. On “a mere 25 cities and regions could account for 60 per cent of US job growth by 2030”: may I venture that those cities will not include those with the largest unfunded social benefit plans.


@PerKurowski

April 15, 2019

If AI systems were trained using historical data, that could sometimes help to avoid human biases.

Sir, you write: “Just as with any other computer system, the adage “garbage in, garbage out” applies to AI. If systems are trained using historical data, they will reproduce historical biases.” “Guidelines are a welcome step towards ethical AI” April 15.

Not necessarily so. Because human regulators suffer from something known as confirmation bias, they introduced risk weighted capital requirements for banks based on the notion that what is perceived as risky is more dangerous to our bank systems than what is perceived as safe. The analysis of historical data about the origins of bank crises would clearly have shown this to be totally wrong.

@PerKurowski

October 18, 2018

Artificial intelligence could help humanity to overcome some of its very human frailties.

Zia Chishti, the chairman and chief executive of Afiniti, holds that AI is especially good at “the identification of patterns in complex data.” “Do not believe the hype about artificial intelligence” October 18.

I agree, but sometimes we also need help identifying what we humans, for some reason, are unable to see, or find hard to see, in very simple data.

That’s why I have often tweeted out asking for IBM’s Watson’s (or Google’s DeepMind’s) help in making bank regulators understand that their risk weighted capital requirements for banks have gotten it all wrong. Had they really looked at simple data on bank crises, they would have immediately seen these were never caused by assets perceived ex ante as risky, but always by assets that, when booked by banks, were perceived as especially safe.

Perhaps the safe-is-risky and the risky-is-safe might just be a too hard concept for humans to swallow and act on. If that’s the case, in AI’s assistance we trust.


PS. Here is an aide memoire on the mistakes in the risk weighted capital requirements for banks.

@PerKurowski

July 27, 2018

Productivity, real salaries, employment rates, GDP should consider the increased consumption of distractions during work hours

Sir, Erik Brynjolfsson (and Andrew McAfee) writes: “If machine learning is already superhuman, why did we not see it in productivity statistics earlier? The answer is that its breakthroughs haven’t yet had time to deeply change how work gets done” “Machine learning will be the engine of global growth” July 27.

That is true, but we also need to realize that we have not yet measured the effect of all the increased consumption of distractions during working hours.

In Bank of England’s “bankunderground" blog we recently read: “With the rise of smartphones in particular, the amount of stimuli competing for our attention throughout the day has exploded... we are more distracted than ever as a result of the battle for our attention. One study, for example, finds that we are distracted nearly 50% of the time.”

And on a recent visit to a major shop in the Washington area, thinking about it, I noticed that 8 out of the 11 attendants I saw were busy with some type of activity on their cellphones, and I seriously suspect they were not just checking inventories.

The impact of that on productivity, with less effective working time being put into production, could be huge.

Also, going from, for instance, a 10% to a 50% distraction rate signifies de facto that the real salaries of full-time or paid-by-the-hour employees, per hour effectively worked, have increased fabulously.
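A back-of-the-envelope sketch of that salary point (my illustration, assuming the paid wage stays constant while distraction rises from 10% to 50% of working hours):

```python
def effective_wage_multiplier(old_distraction: float, new_distraction: float) -> float:
    """Increase in pay per hour effectively worked when distraction grows,
    holding the paid wage constant."""
    return (1 - old_distraction) / (1 - new_distraction)

m = effective_wage_multiplier(0.10, 0.50)
print(f"Pay per effective hour: x{m:.1f} (+{m - 1:.0%})")  # x1.8 (+80%)
```

That is, pay per hour actually worked rises by 80 per cent without any raise ever being granted.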

And what about the real employment rate if we deduct the hours engaged in distractions? A statistical nightmare? Will we ever be able to compare apples with apples again?

And how should all these working hours consumed with distractions be considered in the GDP figures?

@PerKurowski

January 09, 2018

If AI was allowed to have a crack at the weights used by current risk weighted capital requirements for banks, the regulators would surely have a lot of explaining to do.

Sir, John Thornhill writes that he saw an artificial intelligence program “crack in 12 minutes and 50 seconds the mindbendingly complex Enigma code used by the Germans during the second world war”. “Competent computers still cannot comprehend” January 9.

I wish AI would also be asked to suggest some weights for the risk weighted capital requirements for banks.

For instance, in Basel II the standardized risk weight assigned to something rated AAA, and therefore perceived as very safe, something to which banks could build up dangerous exposures, is 20%; while the risk weight for something rated below BB-, and therefore perceived to be so very risky that bankers won’t touch it with a ten-foot pole, is 150%.
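The arithmetic behind those weights, and behind the 62-times and 8-times leverage figures cited in other letters on this page, is simple. A minimal sketch in Python (my illustration, assuming Basel II’s standard 8% base capital requirement):

```python
BASEL_BASE = 0.08  # Basel II base capital requirement (8% of risk-weighted assets)

def capital_and_leverage(risk_weight: float) -> tuple[float, float]:
    """Required capital as a share of the exposure, and the implied maximum leverage."""
    capital = BASEL_BASE * risk_weight
    return capital, 1 / capital

for label, rw in (("AAA rated, RW 20%", 0.20), ("below BB- rated, RW 150%", 1.50)):
    capital, leverage = capital_and_leverage(rw)
    print(f"{label}: capital {capital:.1%}, max leverage {leverage:.1f}x")
# AAA rated, RW 20%: capital 1.6%, max leverage 62.5x
# below BB- rated, RW 150%: capital 12.0%, max leverage 8.3x
```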

I would love to see for instance Mario Draghi’s, Mark Carney’s, and Stefan Ingves’ faces, if artificial intelligence, smilingly, came up with weights indicating a quite inverse relation between perceived risks and real dangers to a banking system.


@PerKurowski

November 01, 2017

If chefs cannot obtain effective intellectual protection for their recipes, how can they beat robots armed with AI?

Sir, Sarah O’Connor writes: “The risks to workers from ever smarter computers are clear, but the opportunities will lie in maximising the value of their human skills. For some people, such as talented chefs, the battle is already won.” “Machines do not have to be the enemy” November 1.

Oh boy, is she not a romantic? How on earth will individual chefs survive against robots and AI, except for those few whose human artisan cuisine creations, protected by patents, the 1% of the 1% is able and willing to pay for?

That “In most jobs, people combine cognitive skills with other human abilities: physical movement; vision; common sense; compassion; craftsmanship… that computers cannot match” unfortunately sounds like wishful thinking.

Much better if we accept that robots and AI can supplant us humans in way too many ways, and instead look for ways in which they could work better for all humanity. And in this respect she is right: “machines are not the enemy”.

I say this because for many years I have held that we do need to prepare decent and worthy unemployments, in order to better confront a possible structural unemployment; without such preparation our social fabrics would break down completely. Capisci?

That might begin by taxing the robots so at least humans can compete on equal terms.

Of course a totally different world might be out there in the future, but I can’t but stand firmly on my western civilization’s stepping-stones, those that got me to where I am.

@PerKurowski

July 09, 2017

Parc / Darpa, please ask the intelligent machines to explain to us, in simple terms, how human bank regulators think

Sir, Richard Waters writes about the difficulties of ascertaining exactly how artificial intelligence, when sifting through immense amounts of data, reaches its conclusions. “Valley researchers press AI systems to explain their thinking in simple terms” July 9.

Of course, if you are to follow the recommendations of someone, it is usually a very good thing to know how he (it) thinks.

But that is also relevant to humans. For instance, I would love to understand why, if there has never been a major bank crisis that resulted from excessive exposures to something perceived as risky when placed on banks’ balance sheets, regulators came up with their risk weighted capital requirements for banks… more perceived risk, more capital; less perceived risk, less capital.

That causes banks to build up large exposures, against little capital, to what is perceived, decreed or concocted as safe, like AAA rated securities and sovereigns like Greece; and to stay away almost entirely from “the risky”, like SMEs and entrepreneurs.

I suspect it was because, first, they never gathered empirical data about previous bank crises that made the relevant distinction between ex-ante and ex-post perceptions of risk.

And then because they might have cared only about avoiding a crisis, and not one iota about how banks allocate credit between crises.

I have asked regulators over and over again about this but have not yet been able to extract even a crooked answer, much less a straight one… they just keep mum on it.

Perhaps Darpa could do us the favor of asking their intelligent machines to explain to us, or at least to me, in simple terms, how human bank regulators think.

For instance, why did they with their Basel II allow banks to leverage their capital over 62 times if AAA ratings were present, and only about 8 times with what was rated below BB-? Should they not have known that bankers love what’s “safe” and do not want to touch, even with a ten-foot pole, something as risky as a below BB-?

@PerKurowski

April 18, 2017

Could not artificial intelligence, AI, sometimes prove more intelligent and socially concerned than humans?

Sir, John Thornhill writes: “Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on “mindless minds” that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. “If you cannot meaningfully explain your system’s decisions then you cannot make them,” she says.” “Only human intelligence can solve the AI challenge” April 18.

Indeed, but that should go for humans too! For instance bank regulators should be hauled in front of a courtroom, in order to answer some very basic questions about their risk weighted capital requirements for banks.

I ask this because I am absolutely sure that, if AI regulated our banks, then at least the following two questions would have been asked:

What is the purpose of banks? And something like John A Shedd’s “A ship in harbor is safe, but that is not what ships are for” would have been considered.

What causes big bank crises? And something like Voltaire’s “May God defend me from my friends, I can defend myself from my enemies” would have been considered.

As a consequence, we would not have our banks regulated, for no real bank stability purpose at all, to avoid the risk-taking that the future of our grandchildren needs.

Here follow some of the questions that I would like to ask the current bank regulators in front of a court, since they do not even acknowledge hearing them.


@PerKurowski

April 11, 2017

Regulators, why do you fear what bankers fear? Is it not what the bankers trust that which is really dangerous?

Sir, Miles Johnson writes: “Since their inception, financial markets have been driven by greed and fear. No matter how advanced technology becomes, human nature isn’t changing.” “AI investment can ape intelligence, but it will always lack wisdom” April 11.

I am not sure; as is, I might prefer a reasonably intelligent artificial intelligence to regulate our banks.

BankReg.AI would begin by asking: What are banks? What are they for? An answer like “to keep our money safe” would not suffice, because for that a big vault in which to store our savings would seem a cheaper alternative than a bank. So BankReg.AI would most probably, sooner or later, be fed that not so unimportant info that banks are also supposed to allocate credit efficiently to the real economy. As a consequence the current risk weighted capital requirements concocted by the Basel Committee would not have even been considered because these very much distort the allocation of credit.

Then BankReg.AI would ask: “What has caused all bank crises in the past?” After reviewing all the empirical evidence it would come up with: a. unexpected events (like devaluations), b. criminal behavior (like lending to affiliates) and c. excessive exposures to something that was erroneously perceived as safe. As a consequence, the Basel Committee’s current capital requirements, lower for what is dangerously perceived as safe than for what is innocuously perceived as risky, would never ever have crossed BankReg.AI’s circuits.

@PerKurowski

January 15, 2017

When will an Artificial Intelligence Agent declare humans too dangerous drivers and too dumb emission measurers?

Sir, I refer to your “From diesel emissions to omitting the driver” January 15.

It is clear, notwithstanding that only one side will pay for it, that in the case of the failed carbon emission controls, both the measured and the measurers are to blame. Any regulation, if it fails in any shape or form, should bring on some consequences for the regulators… let us say a 50% salary reduction.

As is, just look at the case of the bank regulators, those who set risk weights of 20% for what is AAA rated, and 150% for what is rated below BB-. That evidenced they had (have) no clue about what they were doing; and so they caused the crisis of AAA rated securities backed with mortgages to the subprime sector. But they are still going to Davos, flying business class at the very least, to lecture the world on what to do.

It is also clear that one of the biggest challenges for the safety of driverless cars is that these might also encounter human drivers on the road. So either driverless cars are equipped with software that handles human-driving whims, or, sooner or later, some Artificial Intelligence Agent will take us humans off the road. Is that good or bad?

My answer to that question goes somewhat along this line. If absolutely all humanity is taken off the road, and so we all lose entirely the abilities needed to drive, so be it. But, if some humans were still allowed to drive, why would I want those to be somebody else’s grandchildren and not mine? 

PS. About driverless cars, the issue of how to tax these, so as not to lose out on the taxes we currently collect, for instance from PhDs driving taxis in New York, is also pending.

@PerKurowski

December 08, 2016

Will we humans end up banning ourselves from driving because we’re too risky?

Sir, John Gapper concludes, “in a world of cheap, convenient self-driving vehicles, only the wealthy and fussy will bother to buy a car” “Why would you want to buy a self-driving car?” December 8.

Wait a second. Is Gapper saying that only the wealthy, buying cars, might save some jobs? That does not sound too politically correct in these get-rid-of-inequality Piketty days.

But, jesting aside, what I most fear is the day we humans ban ourselves from driving altogether, because we are not safe enough, because we are too risky.

With automation substituting for us humans in so much, what mutations will that provoke? What capabilities will we lose?

PS. Beware though of a Basel Committee for Transit Supervision. If it interferes, like the Basel Committee in the process of allocating bank credit to the real economy, then the human race might disappear in the mother of all massive car crashes.

@PerKurowski

October 13, 2016

We would appreciate Google’s DeepMind (or IBM’s Watson) giving the Basel Committee some tips on intelligent thinking

Sir, I refer to Clive Cookson’s “DeepMind overcomes memory block to bring thinking computers a step closer”, October 13.

Here again we read about so much research going on in the world of artificial intelligence. Though clearly still a lot needs to be done, the current advances could perhaps suffice in order to give some good tips to some humans who do not seem to be able to get their thinking quite right.

Yes! You’ve guessed it, Sir. I am indeed referring to the Basel Committee on Banking Supervision and their risk weighted capital requirements for banks. Perhaps it would be easier for the regulators to hear out some observations on how to regulate banks if they came from an impressive “differentiable neural computer” with AI capability, and not from a simple non-expert human like me.

So, if Google’s DeepMind (or IBM’s Watson) were able to convey just the importance of, first, clearly defining the purpose of banks before regulating them; and, second, of doing some empirical research on why bank systems fail, that could be extremely helpful for the banks, for the real economy, and of course for the future of our grandchildren.

Then regulators, swallowing their pride, could perhaps, with luck, understand both that the main social purpose of banks is to allocate credit efficiently to the real economy; and that no major bank crises have ever resulted from excessive exposures to what was ex ante perceived as very risky, as these have always resulted from unexpected events, or from excessive exposure to what was ex ante considered very safe, but that ex post turned out to be very risky.

That could help to free us all from our banks being guided by dumb risk-weighted capital requirements… more ex ante perceived risk, more capital; less risk, less capital.

Not only do these cause our banks to misallocate credit to the real economy, like giving no credit to “risky” SMEs or entrepreneurs; they also make our bank system more unstable by pushing the build-up of exposures to what is perceived, decreed or concocted as “very safe”, without requiring sufficient capital to cover for the unexpected events.

PS. DeepMind, or you Watson, if you would also care to explain this to those at the Financial Times, that would be doubly appreciated. I have tried to do so with literally thousands of letters, but still no luck… I guess I am not as impressive as you are.

@PerKurowski ©

March 12, 2016

Artificial intelligence has a clear advantage over humans: a smaller ego standing in the way of admitting mistakes.

Sir, Murad Ahmed, writing about Demis Hassabis, states: “At DeepMind, engineers have created programs based on neural networks, modeled on the human brain. These systems make mistakes, but learn and improve over time” “Master of the new machine age” March 12.

Ooops! I hope they do not use as models the brains of current bank regulators.

In 2007-08 we had a big crisis because AAA rated securities and sovereigns like Greece, perceived and deemed as safe, turned out to be very risky.

And what connected all that failure was the fact that banks were allowed to hold very little, I mean very little, we are talking about 1.6 percent or less, in capital against those assets, only because these were ex ante perceived or deemed to be very safe.

Of course, anyone who knew anything about the history of financial crises would have alerted the regulators that to allow banks to hold less capital against what is perceived as safe than against what is perceived as risky was very dumb. That is because major crises only result from excessive exposures to something ex ante perceived as safe but that ex post turns out to be very risky. And one of the main reasons for that is precisely that too many go looking for “safety”.

But now we are in 2016, and the issue of the distortion those capital requirements produce in the allocation of bank credit to the real economy is not yet even discussed. 

So before these human-brain-modeled systems can learn and improve over time from mistakes, they have to be able to understand those mistakes and, more importantly, to humbly accept them.

Frankly, artificial intelligence does seem to have one advantage over humans: none of that human ego that so much stands in the way of admitting mistakes.

But also beware, were robots free of that weakening ego, they could conquer us!

@PerKurowski ©

January 05, 2015

The Basel Committee on Banking Supervision needs artificial intelligence; the human kind seemingly does not suffice.

Sir, it was with much interest, and hope, that I read Richard Waters’ report “Investor rush to artificial intelligence is the real deal” January 5. We sure need it, urgently, at least in the Basel Committee on Banking Supervision.

First, any reasonably good AI would most certainly not give in to emotions or sole intuitions as the Basel Committee did when, for their risk-weighted capital requirements, they decided that “risky” was risky and “safe” was safe. AI would see that in fact it is what bankers perceive as “safe” that creates the biggest exposures, and as a consequence the biggest dangers, if the ex ante perception turns out ex post to be wrong.

And AI would also be able to impose portfolio-variant capital requirements instead of settling for the Basel Committee’s “portfolio invariant” ones because, as they admit in “An Explanatory Note on the Basel II IRB (internal ratings-based) Risk Weight Functions”: “Taking into account the actual portfolio composition when determining capital for each loan - as is done in more advanced credit portfolio models - would have been a too complex task for most banks and supervisors alike.”
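To see what “portfolio invariant” means in practice, here is a toy numerical illustration (mine, not the Basel IRB formula; the 8% base charge and the Herfindahl-based concentration penalty are assumptions chosen only to make the contrast visible):

```python
def invariant_capital(exposures: list[float], charge: float = 0.08) -> float:
    """Portfolio-invariant rule: a fixed per-loan charge, blind to concentration."""
    return charge * sum(exposures)

def variant_capital(exposures: list[float], charge: float = 0.08,
                    penalty: float = 0.04) -> float:
    """Toy portfolio-variant rule: base charge plus a penalty that grows with
    the Herfindahl concentration index of the book."""
    total = sum(exposures)
    hhi = sum((e / total) ** 2 for e in exposures)
    return (charge + penalty * hhi) * total

concentrated = [100.0]                   # one exposure of 100
diversified = [25.0, 25.0, 25.0, 25.0]   # same total, spread over four names

for name, book in (("concentrated", concentrated), ("diversified", diversified)):
    print(name, invariant_capital(book), round(variant_capital(book), 2))
# concentrated 8.0 12.0
# diversified 8.0 9.0
```

The invariant rule charges both books the same; only a portfolio-variant rule notices that the first book stands or falls with a single name.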

And AI would also of course have asked about the purpose of the banks before regulating them… and therefore we would probably have been saved from the credit risk weightings that so distort the allocation of bank credit to the real economy.

That said, we have to be careful so that AI does not go Frankenstein on us and impose its own preferences (ideologies); like what the Basel Committee did when they decided that their bosses, the governments of the sovereigns, were infallible… and that therefore banks did not need to hold any capital (equity) when lending to these.

PS. Perhaps we can have a competition between different AIs to see who comes up with the best proposal for how to regulate banks.

June 16, 2014

FT, do you suggest we leave it all in the hands of an Easterly-type tyranny of “experts” in thinking-machine morality?

Sir, so you want some morality experts to begin thinking about “the morality of thinking machines”? June 16.

The same type of experts that came up with those utterly immoral (and stupid) capital requirements for banks based on perceived risks, which discriminate against those already discriminated against and favor those already favored?

The same morality possessed by those willing to finance any human rights violator around the world, as long as the risk premium is right?

No thanks. The way it looks, some of us might prefer betting on artificial intelligence coming up with its own morality… because at least it would most probably not be so dumb as to think that bank crises arise from excessive exposures to the ex ante risky.

At least I would not like to leave this in the hands of an Easterly-type tyranny of experts in thinking-machine morality.

PS. Is it moral for FT to cover up the stupidity of experts?