Showing posts with label Watson.

October 18, 2018

Artificial intelligence could help humanity to overcome some of its very human frailties.

Zia Chishti, the chairman and chief executive of Afiniti, holds that AI is especially good at “the identification of patterns in complex data.” “Do not believe the hype about artificial intelligence” October 18.

I agree, but sometimes we also need help identifying what we humans, for some reason, are unable or find hard to see, even in very simple data.

That’s why I have often tweeted out asking for the help of IBM’s Watson (or Google's DeepMind) in making bank regulators understand that their risk weighted capital requirements for banks have gotten it all wrong. Had they really looked at simple data on bank crises, they would have immediately seen that these were never caused by assets perceived ex ante as risky, but always by assets that, when booked by banks, were perceived as especially safe.

Perhaps safe-is-risky and risky-is-safe is just too hard a concept for humans to swallow and act on. If that’s the case, in AI’s assistance we trust.


PS. Here is an aide memoire on the mistakes in the risk weighted capital requirements for banks.

@PerKurowski

September 14, 2017

New office habitats, promoting communal discussions, will force robot headhunters to consider social skills much more

Sir, John Gapper, when discussing new open space offices that are intended to intensify creative communications, writes that “companies should start by recognising what their employees fear losing”, and among those fears is, obviously, “privacy”. “Tech utopias drive workers to distraction” September 14, 2017

But the need for privacy is not only based on a wish to be alone; quite often it is much more about the wish to avoid certain others. In this respect it must be expected that social skills will become much more important when robots or artificial intelligence evaluate candidates in the future, because you cannot risk having absolute bores or pain-in-the-ass employees roaming around freely.

Evaluating human social skills? Now that’s a new challenge for artificial intelligence. I wonder what Watson has to say about it? Perhaps, a test-period in which all co-workers could use a point system to evaluate candidates? Would such discriminatory procedures be politically acceptable?

Sir, do your current headhunters discriminate among candidates based on their social skills?

PS. How will robot recruiters treat the human ex-colleagues they left without jobs?

@PerKurowski

August 26, 2017

Janet Yellen, Mario Draghi, ask IBM’s Watson what algorithms he would feed robobankers, to make these useful and safe

Sir, Sam Fleming, reporting from Jackson Hole, writes: “Janet Yellen, the Federal Reserve chair said regulatory reforms pushed through after the great financial crisis had made the system “substantially safer” and were not weighing on growth or lending. … If the lessons of the last crisis were remembered “we have reason to hope that the financial system and economy will experience fewer crises and recover from any future crisis more quickly”. “Yellen warns opponents of tighter financial rules to remember lessons of crisis” August 26.

As I see it, Yellen has not yet learned that past and future financial crises have never resulted, nor ever will, from excessive exposures to what was or is perceived as risky; they will always result from unexpected events, as when what was perceived, decreed or concocted as very safe turned out ex post to be very risky.

Since regulators do not want to listen to anything but their own mutual admiration network’s risk biases, I wish they would contract IBM’s neutral Watson and ask it the following:

Watson, while considering the purpose of banks as well as the real dangers to our financial systems, what algorithms would you suggest feeding robobankers with?

THEN Yellen, Draghi and colleagues should compare that algorithm with what they are feeding the human bankers: the portfolio invariant risk weighted capital requirements that assume bankers do not already see or clear for risks by means of the size of exposures and the risk premiums charged.

Then these regulators would understand that, with their across-the-board incentives for banks to invest in or lend to what is safe, such as AAA rated securities and sovereigns like Greece, they are in fact creating the very conditions that doom banks to suffer huge crises, sooner or later, over and over again.

Then these regulators would understand that their regulations induce banks to stay away far too much from lending to what is perceived as risky, like SMEs and entrepreneurs, something which must clearly weigh heavily against the prospects of our real economy to grow.

Janet Yellen, Mario Draghi, please ask Watson! Perhaps you could find him on LinkedIn 😆

@PerKurowski

March 25, 2017

How many qubits would we have needed to install in the Basel Committee in order to avoid its monstrous mistake?

Sir, Jeremy O’Brien writes: “The catch is the need to correct the inevitable errors experienced by qubits. Because quantum computers are prone to errors in ways that conventional computers are not, it is currently understood that to create a quantum computer with 100 logical qubits, we would need a system with about a million actual qubits.” “The future is quantum” March 25.

Our bank regulators created capital requirements based on the ex ante perceived risks of assets, instead of on the risks those assets might pose for banks ex post. That is why they came up with risk weights of only 20% for what is AAA rated, where dangerously excessive exposures could build up, and 150% for the innocuous below BB- rated.
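To see what those risk weights mean in practice, here is a minimal sketch of the arithmetic, assuming the standard Basel 8% base capital requirement that the risk weight scales (8% × 20% gives the roughly 1.6% capital mentioned in these letters):

```python
# Sketch of Basel-style risk-weighted capital arithmetic.
# Assumes the standard 8% base requirement; the risk weights are
# those cited in the letter (20% for AAA, 150% for below BB-).
BASE_REQUIREMENT = 0.08

def required_capital(exposure, risk_weight):
    """Capital a bank must hold against an exposure of a given risk weight."""
    return exposure * risk_weight * BASE_REQUIREMENT

# Against $100m of AAA rated assets: about $1.6m, i.e. roughly 1.6% of exposure.
print(required_capital(100e6, 0.20))
# Against $100m of below BB- rated assets: about $12m, i.e. roughly 12%.
print(required_capital(100e6, 1.50))
```

The point of the letters falls straight out of the numbers: a bank can leverage a “safe” asset many times more than a “risky” one, which is exactly where dangerously large exposures can build up.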

Sir, I wonder how many qubits a scientist like Jeremy O’Brien believes we would have to install in the Basel Committee, or in the quantum computer controlled bank regulations we might have in the future, in order to avoid costly errors like that.

I guess it should not take too many… I guess IBM’s Watson would already not make such a mistake, as it would clearly start by asking “what is the purpose of banks?” and what has caused major bank crises in the past.


@PerKurowski

February 02, 2017

When will the competitiveness of your robots be more important to your real economy than that of your human workers?

Sir, if worker productivity had remained at 1950 levels, how many human workers would have been needed in the world to deliver the 2016 manufacturing production, compared to those 43m who, according to Martin Wolf, were employed worldwide in 1950? I have no idea, but definitely many times more than the 145m Wolf indicates as employed in 2016. “Tough talk on trade will not bring jobs back” February 1.

Set in this perspective, “The main explanation for the longterm decline in the share of manufacturing employment in the US (and other high-income economies)” is not so much what Wolf states, “the rise in employment elsewhere”, but the increased productivity of which Wolf also writes.

So, what do we want: the higher productivity that generates “sustained improvement in living standards”, or some lower productivity that could generate more jobs for humans?

Wolf, like most of us, perhaps with the exception of those who might still dream of some 60s’ hippie solutions, rightly prefers the first; hence his concern that “US manufacturing employment rose a little… as a result… of productivity’s recent stagnation: in manufacturing, output per hour rose only 1 per cent between the first quarters of 2012 and 2016.”

The crudest implication of this is: do we need humans more capable of working, or better robots? That question naturally leads to: will the competitiveness of the robots they possess define the strength of future human groupings?

Sir, for me these are much more fundamental questions than: how can we save jobs for [our] manufacturing workers from being taken away by [their] workers?

In 2012 I wrote an Op-Ed titled “We need decent and worthy unemployments”. Perhaps within the application of a Universal Basic Income scheme lies part of the solution to the enormous societal challenges automation causes.

But, no matter what we do, let us get rid of those dangerous distortions of the allocation of bank credit, because, whatever the future needs, how on earth are we supposed to give it to the future if we are not willing for banks to take risks? Again, as a minimum minimorum, we must save ourselves from regulators so dumb they assign a 20% risk weight to something as dangerous to the banking system as the AAA rated, while hitting the innocuous below BB- rated with a 150% risk weight.

Sir, I want my grandchildren to be able to develop themselves the most they can, as humans. That will require them being surrounded by content and happily unemployed humans, as well as possessing savagely efficient (and obedient) robots.

PS. The absence of statistical information on how much robots and other automation have substituted for human salaries and jobs is baffling, especially in these data-rich days. No wonder we might be in sore need of some artificial intelligence assistance, for instance from IBM’s Watson.

@PerKurowski

October 13, 2016

We would appreciate Google’s DeepMind (or IBM’s Watson) giving the Basel Committee some tips on intelligent thinking

Sir, I refer to Clive Cookson’s “DeepMind overcomes memory block to bring thinking computers a step closer”, October 13.

Here again we read about how much research is going on in the world of artificial intelligence. Though clearly a lot still needs to be done, the current advances could perhaps suffice to give some good tips to some humans who do not seem able to get their thinking quite right.

Yes! You’ve guessed it, Sir. I am indeed referring to the Basel Committee on Banking Supervision and its risk weighted capital requirements for banks. Perhaps it would be easier for the regulators to hear out some observations on how to regulate banks if they came from an impressive “differentiable neural computer” with AI capability, and not from a simple non-expert human like me.

So, if Google’s DeepMind (or IBM’s Watson) were able to convey just the importance of, first, clearly defining the purpose of banks before regulating them and, second, doing some empirical research on why bank systems fail, that could be extremely helpful for the banks, for the real economy and, of course, for the future of our grandchildren.

Then regulators, swallowing their pride, could perhaps, with luck, understand both that the main social purpose of banks is to allocate credit efficiently to the real economy; and that no major bank crisis has ever resulted from excessive exposures to what was ex ante perceived as very risky, as these have always resulted from unexpected events, or from excessive exposures to what was ex ante considered very safe but ex post turned out to be very risky.

That could help free us all from our banks being guided by dumb risk-weighted capital requirements: more ex ante perceived risk, more capital; less risk, less capital.

Not only do these cause our banks to misallocate credit to the real economy, for instance no credit to “risky” SMEs or entrepreneurs; they also make our banking system more unstable by pushing the build-up of exposures to what is perceived, decreed or concocted as “very safe”, without requiring sufficient capital to cover for unexpected events.

PS. DeepMind, or you, Watson: if you would also care to explain this to those at the Financial Times, that would be doubly appreciated. I have tried to do so with literally thousands of letters, but still no luck… I guess I am not as impressive as you are.

@PerKurowski ©

March 12, 2016

Artificial intelligence has a clear advantage over humans; a smaller ego standing in the way of admitting mistakes.

Sir, Murad Ahmed, writing about Demis Hassabis, states: “At DeepMind, engineers have created programs based on neural networks, modeled on the human brain. These systems make mistakes, but learn and improve over time.” “Master of the new machine age” March 12.

Ooops! I hope they do not use as models the brains of current bank regulators.

In 2007-08 we had a big crisis because AAA rated securities, and sovereigns like Greece, perceived and deemed as safe, turned out to be very risky.

And what connected all that failure was the fact that banks were allowed to hold very little capital, I mean very little, we are talking about 1.6 percent or less, against those assets, only because these were ex ante perceived or deemed to be very safe.

Of course, anyone who knew anything about the history of financial crises would have alerted the regulators that allowing banks to hold less capital against what is perceived as safe than against what is perceived as risky was very dumb. That is because major crises only result from excessive exposures to something ex ante perceived as safe that ex post turns out to be very risky. And one of the main reasons for that is precisely that too many go looking for “safety”.

But now we are in 2016, and the issue of the distortion those capital requirements produce in the allocation of bank credit to the real economy is not yet even discussed. 

So before these human-brain-modeled systems can learn and improve over time from mistakes, they have to be able to understand those mistakes and, more importantly, to humbly accept them.

Frankly, artificial intelligence seems to have an advantage over humans: none of that human ego that so much stands in the way of admitting mistakes.

But beware too: were robots free of that weakening ego, they could conquer us!

@PerKurowski ©

January 06, 2016

IBM, Watson could have a role in regulations that accept the need of the real economy for banks to take credit risks

Sir, I refer to Richard Waters’ report on the difficulties IBM faces in expanding the application of its Jeopardy! champion Watson, “FT Big Read: Artificial Intelligence: Can Watson save IBM?” January 6.

In it he quotes Lynda Chin mentioning the challenge that “On Jeopardy! there’s a right answer to the question, but, in the medical world, [in the real world] there are often just well-informed opinions… [So how to know] how much trust to put in the answers the system produces. Its probabilistic approach makes it very human-like… [Watson] Having been trained by experts, it tends to make the kind of judgments that a human would, with the biases that implies.”

Indeed, how much trust is just another way of stating how much risk one is willing to take.

For instance, if one wants driverless cars to provide absolute security, then traffic will probably become very slow, or even come to a standstill. And one of the difficulties these cars will encounter will be defining the acceptable amount of risk taking.

Likewise, if one wants our banks to be absolutely secure, then one would be better off hiding money under mattresses in bank vaults… but the real economy would languish for lack of credit.

So there might be a big role for Watson in bank regulations. First of all, it could help me convince the Basel Committee that their credit risk weighted capital requirements are based on a very faulty human bias against risk; something which, at the end of the day, only endangers banks, since it causes excessive exposures to what is perceived as safe, precisely that which has caused all major bank crises.

And, if fed with continuous information on bank credit and the state of the real economy, Watson could also be used to automatically send out countercyclical adjustments: too much growth in credit, increase capital requirements somewhat; too little growth in credit, reduce capital requirements somewhat. The most important thing needed for that would be to make Watson immune to lobbying pressures of all sorts.
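That countercyclical rule can be sketched in a few lines. This is only an illustration of the idea in the letter, not any actual Basel mechanism: the growth band, the step size and the function name are all assumptions made for the example.

```python
# Hypothetical sketch of the letter's countercyclical idea: nudge one flat
# capital requirement up when credit grows too fast, down when it grows
# too slowly. The 2%-7% band and 0.5% step are illustrative assumptions.

def adjust_requirement(current_req, credit_growth,
                       band_low=0.02, band_high=0.07, step=0.005):
    """Return the capital requirement after one countercyclical adjustment."""
    if credit_growth > band_high:        # credit boom: demand more capital
        return current_req + step
    if credit_growth < band_low:         # credit drought: demand less capital
        return max(0.0, current_req - step)
    return current_req                   # within the band: leave unchanged

# Starting from an 8% requirement, react to three years of credit growth.
req = 0.08
for growth in [0.09, 0.05, 0.01]:        # boom, normal, drought
    req = adjust_requirement(req, growth)
    print(round(req, 3))
```

Note that the rule is deliberately one-dimensional: it leans against the credit cycle as a whole, without assigning different weights to different assets, which is exactly the distortion the following paragraph warns against.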

What I would not allow Watson to do, though, is to display that kind of human arrogance of thinking itself capable of setting different capital requirements for different assets, so as to distort the allocation of bank credit as it sees fit.

For that, I would still want a human to be behind that kind of risk taking… of course, a human who understands what he is doing and is willing to be held very much accountable, if he takes the next generations down the wrong path.

@PerKurowski ©