
November 01, 2017

If chefs cannot obtain effective intellectual property protection for their recipes, how can they beat robots armed with AI?

Sir, Sarah O’Connor writes: “The risks to workers from ever smarter computers are clear, but the opportunities will lie in maximising the value of their human skills. For some people, such as talented chefs, the battle is already won.” “Machines do not have to be the enemy”, November 1.

Oh boy, is she not a romantic? How on earth will individual chefs survive against robots and AI, except for those few whose human artisan cuisine creations, protected by patents, the 1% of the 1% is able and willing to pay for?

That “In most jobs, people combine cognitive skills with other human abilities: physical movement; vision; common sense; compassion; craftsmanship… that computers cannot match” unfortunately sounds like wishful thinking.

It is much better if we accept that robots and AI can supplant us humans in far too many ways, and instead look for ways to make them work better for all humanity. And in this respect she is right: “machines are not the enemy”.

I say this because for many years I have held that we need to prepare decent and worthy unemployments, in order to better confront a possible structural unemployment; without that preparation our social fabric would break down completely. Capisci?

That might begin by taxing the robots so at least humans can compete on equal terms.

Of course a totally different world might be out there in the future, but I cannot but stand firmly on my western civilization’s stepping-stones, those that got me to where I am.

@PerKurowski

February 21, 2017

How much of the “productivity” of robots derives from the fact that they are not burdened with as many taxes as humans?

Sir, you write: “A direct tax on robots is not the answer… It makes no sense to penalise technological innovation that raises productivity and creates wealth. Indeed, any rich country that makes automation too expensive risks driving its manufacturers away to lower wage jurisdictions”, “Robot tax, odd as it sounds, has some logic”, February 21.

Indeed, but why does that argument not work the other way around for you? In that same sense human workers are also “penalized” in many ways, for instance with payroll taxes, and their jobs are also driven away to lower wage jurisdictions.

So, in order to more efficiently allocate labor/capital resources, robots and similar automation should also generate payroll taxes.

You also hold “the bigger question is how policymakers can use the tax system to ensure that growth is more evenly shared”. And there I have my doubts. If you are going to tax robots only to increase the franchise value of the redistribution profiteers, I am not with you at all.

I much prefer all those revenues to become part of the funding of the Universal Basic Income that is needed in order to better face that structural unemployment to which I referred, in 2012, with my “We need worthy and decent unemployments”.

PS. Anyone searching on this issue for “robots” on my TeaWithFT blog should be able to determine, just like in the case of “risk weighted” capital requirements for banks, that I have de facto been censored by the Financial Times. Do you feel proud of that, Sir?

@PerKurowski

February 12, 2017

How do you suggest a contrarian belief to a President without causing him great disconfirmatory feedback discomfort?

Sir, Tim Harford writes that when “I think I’m doing a good job, and then you tell me that I’m not”… [research indicates] that when this, which in the jargon is known as disconfirmatory feedback, arrived, workers would then avoid contact with the people who had given them the unwelcome comments. “If I’m cruising along in a complacent bubble, I badly need someone to explain what I’m doing wrong”, January 11.

Bank regulators think they are doing a great job assigning a risk weight of 20% to what is rated AAA, and 150% to what is rated below BB-. I tell them they’re wrong, that what is perceived as safe contains much more danger than what is perceived as risky. I back it up with Voltaire’s “May God defend me from my friends, I can defend myself from my enemies”… and then the regulators avoid all contact with me. It sure looks like a reaction to disconfirmatory feedback.
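For readers unfamiliar with the arithmetic behind those risk weights, here is a minimal sketch, assuming the standard 8% Basel minimum capital ratio; the loan amount is purely illustrative, only the 20% and 150% weights come from the letter above:

```python
# Minimal illustrative sketch of Basel standardized risk-weight arithmetic:
# required capital = exposure x risk weight x 8% minimum capital ratio.

BASE_RATIO = 0.08  # Basel minimum capital ratio

def required_capital(exposure: float, risk_weight: float) -> float:
    """Capital a bank must hold against an exposure with a given risk weight."""
    return exposure * risk_weight * BASE_RATIO

exposure = 100_000_000  # a hypothetical $100m loan
print(required_capital(exposure, 0.20))  # AAA rated, 20% weight  -> 1,600,000 (1.6% of exposure)
print(required_capital(exposure, 1.50))  # below BB-, 150% weight -> 12,000,000 (12% of exposure)
```

In other words, a bank can leverage a "safe" AAA-rated exposure many times more than a "risky" one, which is precisely the distortion the letter objects to.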

But, at the level of the Basel Committee on Banking Supervision, should its members not be able to handle any disconfirmatory feedback rationally? And what about those even higher up?

For instance, I believe that in the race for jobs in the US, much more important than human competition from China and Mexico is the fact that Americans have to compete with robots from everywhere, robots that do not have to carry burdens like payroll taxes.

Sir, I ask, how can I convey such a belief to a President without risking causing him great disconfirmatory feedback discomfort?

PS. Sir FT, since you have also mostly shut me up, could it be because I have gone over the level of disconfirmatory feedback you can handle? If so, how should we handle it? I ask because I have no wish to give up writing you my mind on so many things you might not agree with.

@PerKurowski

January 17, 2017

A tax on robots and similar substitutes for humans, together with a Universal Basic Income, would solve many of the challenges of automation.

Sir, Guy Wroble writes: “As automation will target more expensive labour first, aside from a limited number of tech jobs to service the machines, humanity would appear to be on the road to becoming the hewers of wood and the haulers of water. Jobs which pay so little that it makes no sense to automate them.” “Automation may lead to humans hewing wood”, January 17.

Indeed that could happen, if we do not do anything about it. But we can! Tax all that substitutes for human jobs, and have those tax revenues help fund a Universal Basic Income payable to all.

That way we would not only level the playing field for humans to compete with robots and similar automation, but we would also make sure humans do not have to offer themselves to perform the chores that robots are most suited to do.

@PerKurowski

December 08, 2016

Will we humans end up banning ourselves from driving because we’re too risky?

Sir, John Gapper concludes, “in a world of cheap, convenient self-driving vehicles, only the wealthy and fussy will bother to buy a car”, “Why would you want to buy a self-driving car?”, December 8.

Wait a second. Is Gapper saying that only the wealthy, by buying cars, might save some jobs? That does not sound too politically correct in these get-rid-of-inequality Piketty days.

But, jesting aside, what I most fear is the day we humans ban ourselves from driving altogether, because we are not safe enough, because we are too risky.

With automation substituting for us humans in so much, what mutations will that provoke? What capabilities will we lose?

PS. Beware though of a Basel Committee for Transit Supervision. If it interferes the way the Basel Committee interferes in the allocation of bank credit to the real economy, then the human race might disappear in the mother of all massive car crashes.

@PerKurowski

November 12, 2008

At least listen to the Joker before giving more powers to the schemers

Sir, Richard Thaler and Cass Sunstein in “Human frailty caused this crisis”, November 12, hold that “regulators need to help people manage complexity and temptations” but ignore the frailty of the regulators and the dangers of all their regulatory temptations.

I can already hear the free market answering a confounded citizen by describing the bank regulators with the same words the Joker used in the movie The Dark Knight (2008): “You know, they're schemers. Schemers trying to control their worlds. I'm not a schemer. I try to show the schemers how pathetic their attempts to control things really are. So, when I say that … was nothing personal, you know that I'm telling the truth. It's the schemers that put you where you are. I just did what I do best. I took your little plan and I turned it on itself. Look what I did to this city with a few…” collateralized debt obligations.

When I think of a small group of bureaucratic finance nerds in Basel who thought themselves capable of exorcizing risk out of banking forever, by cooking up a formula of minimum capital requirements for banks based on some vaguely defined risks of default, and who thereafter created a risk-information oligopoly by empowering the credit rating agencies, all of which was doomed, sooner or later, to guide the world over a precipice of systemic risks, like what happened with the lousily awarded mortgages to the subprime sector, I cannot but feel deep concern when I hear about giving even more advanced powers to the schemers.