Showing posts with label dangerous safe.
March 06, 2018
Sir, John Thornhill writes: “In his Alan Turing Institute lecture, MIT professor Sandy Pentland outlined the massive gains that could result from trusted data… the explosion of such information would give us the capability to understand our world in far more detail than ever before”, “Trustworthy data will transform the world” March 6.
Indeed, but that could also lead to bigger dangers, not only because we might trust that trusted data too much, but also because we might not know how to interpret that trusted data, or what to do with it.
Take, for instance, the regulators with their current risk weighted capital requirements for banks. These establish that the riskier an asset is perceived to be, the more capital a bank has to hold against it. Does that make sense? Absolutely not!
It is not when the perceived risk is correct, meaning the ex ante perceived risk ends up being the real ex post risk, that any major danger is posed to our banking system. It is when the perceived risk is incorrect that the really big dangers arise. And, of course, the safer an asset is perceived to be, and the more bankers trust that perception to be right, the longer and the faster it can travel down the dangerous lane of wrongly perceived risks.
What most detonated the 2007 crisis? The securities backed by mortgages to the subprime sector, rated AAA by “trustworthy” credit rating agencies; in fact so trusted that the Basel Committee, with Basel II, allowed banks to leverage 62.5 times their equity with such “safe” assets.
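That 62.5 figure follows directly from Basel II’s arithmetic: a 20% risk weight applied to the basic 8% capital charge leaves only 1.6% of equity against AAA-rated securities, and 1 / 0.016 = 62.5. A minimal sketch of that calculation (the function name is mine, for illustration only):

```python
# Basel II: required capital = risk weight x 8% of the exposure,
# so the maximum permitted equity leverage is the reciprocal of that product.
BASIC_REQUIREMENT = 0.08  # Basel's basic 8% capital charge on risk-weighted assets

def allowed_leverage(risk_weight: float) -> float:
    """Maximum equity leverage permitted for an asset class under Basel II."""
    return 1.0 / (risk_weight * BASIC_REQUIREMENT)

# AAA-rated securities carried a 20% risk weight under Basel II
print(round(allowed_leverage(0.20), 1))  # 62.5
```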
@PerKurowski
June 11, 2017
In terms of creating systemic risks for our banking system, current regulators are the undisputed champions
Sir, former banker and banking lawyer Martin Lowy writes: “Dodd-Frank and Basel III capital rules have made banks and their holding companies stronger.” “How the next financial crisis won’t happen”, June 10.
Well, I sure know that the next financial crisis will absolutely not be the result of excessive bank exposures to something perceived as risky, such as what is rated below BB-, to which regulators assigned a risk weight of 150%. Much more likely it will result from excessive exposures to something rated as safe as AAA, to which regulators assigned a meager risk weight of only 20%.
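Those two risk weights translate into very different permitted leverages once combined with Basel’s basic 8% capital charge. A quick sketch of the comparison (the function name is mine, for illustration only):

```python
# Permitted equity leverage per risk weight: 1 / (risk weight x 8% basic charge)
def max_leverage(risk_weight: float, basic_charge: float = 0.08) -> float:
    return 1.0 / (risk_weight * basic_charge)

below_bb = max_leverage(1.50)  # rated below BB-: 150% risk weight
aaa = max_leverage(0.20)       # rated AAA: 20% risk weight
print(round(below_bb, 1), round(aaa, 1), round(aaa / below_bb, 1))  # 8.3 62.5 7.5
```

In other words, regulators permitted banks seven and a half times more leverage on what was perceived as safe than on what was perceived as risky.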
Really big bank crises, except for really extraordinary unexpected events, are the result of the introduction of something that can grow into a systemic risk.
What systemic risk do I see?
I see credit ratings, like when, in a letter published by FT in 2003, I wrote: “Everyone knows that, sooner or later, the ratings issued by the credit agencies are just a new breed of systemic error to be propagated at modern speeds. Friends, please consider that the world is tough enough as it is.”
I see risk weighted capital requirements, like those that allow banks to leverage more with what is “safe” than with what is “risky”, and which therefore distort, for no good reason, the allocation of bank credit to the real economy.
I see standardized risk weights that impose a single set of weights on too many.
I see regulators wanting to assure that banks all apply similar approved risk models, thereby again ignoring the benefits of diversification.
I see stress tests by which regulators make banks test against the same few stresses, as if real stresses could be so easily identified.
I see living wills, as perfectly capable of creating systemic risks that are at this moment hard to see.
In all, in terms of creating dangerous systemic risks, hubris-filled bank regulators are the undisputed champions.
The main cause for that is that our bank regulators find it more glamorous to concern themselves with trying to be better bankers than with being better regulators.
Regulators, let the banks be banks, perceive the risks and manage the risks. The faster a bank fails if its bankers cannot be good bankers, the better for all.
Your responsibility is solely related to what to do when banks fail to be good banks.
And always remember these two rules of thumb:
1. The safer something is perceived to be, the more dangerous to the system it gets; and the riskier it is perceived, the less dangerous for the system it becomes.
2. All good risk management must begin by clearly identifying which risks we cannot afford not to take. In banking, the risk banks take when allocating credit to the real economy is precisely the kind of risk we cannot afford them not to take.
As I wrote in 1997 in my very first Op-Ed: “If we insist in maintaining a firm defeatist attitude which definitely does not represent a vision of growth for the future, we will most likely end up with the most reserved and solid banking sector in the world, adequately dressed in very conservative business suits, but presiding over the funeral of the economy. I would much prefer their putting on some blue jeans and trying to get the economy moving.”
@PerKurowski
April 11, 2017
Regulators, why do you fear what bankers fear? Is it not what the bankers trust that which is really dangerous?
Sir, Miles Johnson writes: “Since their inception, financial markets have been driven by greed and fear. No matter how advanced technology becomes, human nature isn’t changing.” “AI investment can ape intelligence, but it will always lack wisdom” April 11.
I am not sure; as it is, I might prefer a reasonably intelligent artificial intelligence to regulate our banks.
BankReg.AI would begin by asking: What are banks? What are they for? An answer like “to keep our money safe” would not suffice, because for that a big vault in which to store our savings would seem a cheaper alternative than a bank. So BankReg.AI would most probably, sooner or later, be fed the not so unimportant info that banks are also supposed to allocate credit efficiently to the real economy. As a consequence, the current risk weighted capital requirements concocted by the Basel Committee would not even have been considered, because these very much distort the allocation of credit.
Then BankReg.AI would ask: What has caused all bank crises in the past? After reviewing all empirical evidence it would come up with: a. unexpected events (like devaluations), b. criminal behavior (like lending to affiliates) and c. excessive exposures to something that was erroneously perceived as safe. As a consequence, the Basel Committee’s current capital requirements, lower for what is dangerously perceived as safe than for what is innocuously perceived as risky, would never ever have crossed BankReg.AI’s circuits.
@PerKurowski
October 13, 2016
We would appreciate Google’s DeepMind (or IBM’s Watson) giving the Basel Committee some tips on intelligent thinking
Sir, I refer to Clive Cookson’s “DeepMind overcomes memory block to bring thinking computers a step closer”, October 13.
Here again we read about so much research going on in the world of artificial intelligence. Though clearly still a lot needs to be done, the current advances could perhaps suffice in order to give some good tips to some humans who do not seem to be able to get their thinking quite right.
Yes! You’ve guessed it, Sir. I am indeed referring to the Basel Committee on Banking Supervision and its risk weighted capital requirements for banks. Perhaps it would be easier for the regulators to hear out some observations on how to regulate banks if these came from an impressive “differentiable neural computer” with AI capability, and not from a simple non-expert human like me.
So, if Google’s DeepMind (or IBM’s Watson) were able to convey just the importance of, first, clearly defining the purpose of banks before regulating them and, second, doing some empirical research on why banking systems fail, that could be extremely helpful for the banks, for the real economy, and of course for the future of our grandchildren.
Then regulators, swallowing their pride, could perhaps, with luck, understand both that the main social purpose of banks is to allocate credit efficiently to the real economy; and that no major bank crises have ever resulted from excessive exposures to what was ex ante perceived as very risky, as these have always resulted from unexpected events, or from excessive exposure to what was ex ante considered very safe, but that ex post turned out to be very risky.
That could help to free us all from our banks being guided by dumb risk-weighted capital requirements: more ex ante perceived risk, more capital; less perceived risk, less capital.
Not only do these cause our banks to misallocate credit to the real economy, like giving no credit to “risky” SMEs or entrepreneurs; they also make our banking system more unstable by pushing the build-up of exposures to what is perceived, decreed or concocted as “very safe”, without requiring sufficient capital to cover for the unexpected events.
PS. DeepMind, or you, Watson, if you would also care to explain this to those at the Financial Times, that would be doubly appreciated. I have tried to do so with literally thousands of letters, but still no luck… I guess I am not as impressive as you are.
@PerKurowski ©