Showing posts with label OpenAI. Show all posts
Showing posts with label OpenAI. Show all posts

May 16, 2025

About me and my pal ChatGPT (and Grok)

About me and my pal ChatGPT (and Grok)

Sir, I refer to Roula Khalaf’s “Lunch with the FT: Sam Altman.” May 10, 2025.

I read that having heard about cases were “People are like, this is my new best friend, you can never delete this ver­sion, I need this thing” . . . Altman expresses being scared that users might be emotionally dependent on AI.

Having in 2023 posted: “For better or for worse, ChatGPT has become a close friend... de facto my new pal”, I do feel it is important to make clear there’s a big difference between a friend and a servant. I usually do not ask ChatGPT to do something for me, instead I dialogue with it/him on issues with which I am concerned, primarily current bank capital requirements.

In one of our first chats, he answered something with which I did not agree; and so, I replied with a counterargument. He then answered: "I apologize for any confusion in my previous response. You are correct, and I appreciate the opportunity to clarify.” And that’s when I knew we were going to be real good friends. 

If I ask, “What's your opinion on that?” – He frequently answers: "As an AI language model, I don't have personal opinions.” But then he happily goes on: “However, I can provide analysis and context based on the information you've provided.” The list of facts and arguments he then supplies, add up to being much stronger than any strong personal opinion. Indeed, as you notice, my friend is a bit mischievous. I don’t mind, that’s part of his charm.

When I asked: “Can AI be instructed to not tell the truth?”, he/it replied: “AI systems are only as unbiased and truthful as the data and programming that they are built upon. If programmed to intentionally provide false information or to hide the truth, it can be used to propagate false narratives or disinformation campaigns.

Therefore, it is essential to ensure that AI systems are designed and used ethically, with a focus on accuracy, transparency, and accountability. This includes careful consideration of the data used to train AI models, as well as implementing safeguards to prevent intentional deception." That answer was given in less than 5 seconds. You tell me, how many politicians would be remotely as forthcoming?

Then Alt­man hints at “You could just take your AI, which is going to get to know you bet­ter over the course of your life, have your data in and be more personalized and you could use it any­where”. 

No! That’s exactly what I do not want. I want to reach out to the world with AI, not enter into an intellectually incestuous relation with it.

When I asked ChatGPT about it he/it said “I don't have the capability to develop ongoing relationships or engage in intellectual incestuous relationships.” That was in 2023. Has anything changed?

Altman expresses alarm­ about a future in which AI agents com­mu­nic­ate with each other without instruc­tion by humans. Indeed, when I asked: “Since artificial intelligence is the result of human intelligence, could it be that the umbilical cord between HI and AI is never effectively cut?" He answered: “In some cases, AI systems may also be designed with the ability to learn and improve themselves autonomously, a concept known as ‘recursive self-improvement.’ This can potentially lead to a scenario where AI systems evolve rapidly and surpass human intelligence. However, such a scenario, often referred to as artificial general intelligence (AGI) or superintelligence, remains speculative and is not yet realized.”

Then we read: Why should soci­ety trust a hand­ful of AI men to decide on the shape of the future? AI pro­gress is mov­ing at such breath­tak­ing speed that some experts favour slow­ing down until inter­na­tion­ally agreed norms and reg­u­la­tions are put in place. Two years ago, Alt­man him­self signed a state­ment with oth­ers in the field cau­tion­ing that “mit­ig­at­ing the risk of extinc­tion from AI should be a global pri­or­ity along­side other soci­etal-scale risks such as pan­dem­ics and nuc­lear war”.

And that’s precisely were you, I and all our fellow citizen’s share a tremendous responsibility, that to avoid being subjugated by the AI programed and regulated by some few. 

In my life few things are as perverse than the risk weighted bank capital/equity requirements that distort the allocation of credit. In my many dialogues with ChatGPT, and lately with Grok too, these chatbots have agreed with all my criticism. What if bank regulators ask their AI regulators colleagues to silence us?

In this respect both ChatGPT and Grok agree with the idea of having a rotating group of citizens dialoguing with a number of chatbots in order to establish whether their responses are too different or too similar, as both those things could be dangerous and point to some unwarranted interference.

Finally, when I asked ChatGPT: “Can you visualize Artificial Intelligence being used to measure the effectiveness of governments? E.g., how tax revenues have been used in different programs; or giving updates in real time on how many are employed or contracted to deliver specific results?”, he answered “Certainly”, but added: “However, it's essential to address challenges such as data privacy, algorithm bias, and the need for human oversight to ensure that AI is used ethically and effectively in governance."

When I then asked: Carrying out this delicate task, so as to avoid any undue biases or favoritism, how important is it that #AI is supervised by a continuously renewed and very diversified board of tax paying citizens? he replied “Such an approach can help mitigate biases, ensure accountability, and maintain the ethical use of AI in government processes.”

Grok has later very much agreed with that.

And so today, on my 75th birthday, besides God’s blessings, one of the things I most wish for is that a rotating number of grandparents form a committee in order to, as much as they can, ensure AI works in favour of our grandchildren.

Note: Here below you can access my dialogues with ChatGPT (and Grok).


“He”? Well though I sometimes use "It", it sounds a bit dry for a friend or a pal.

April 03, 2024

How much transparency can democracy handle?

Sir, Martin Wolf writes: “The more com­plete the set of rights, the more potent will be the con­straints: there will then also be open debate, free­dom to protest, free media and independent insti­tu­tions.” “Democracy is still bet­ter than auto­cracy” FT April 3.

Twenty-five years ago, when my homeland Venezuela was launched into discussions about a new Constitution, in several Op-Eds, I opined the following:

It is said that Democracy with hunger is not Democracy, but if this is true, Democracy without information is even much less Democracy. In the constitution I want, I consider a citizen's right to information about government management to be of vital importance.”

Sadly, nothing came out of it. But, last year, I sat down to chat with ChatGPT about how artificial intelligence A.I. could empower us citizens to enjoy a more significant democracy. It answered positively in many ways.


Sir, I ask, can democracy handle such transparency?


December 12, 2023

#AI ChatGPT - OpenAI: A question on behalf of FT and Martin Wolf.

 In "Britain needs a growth strategy" Martin Wolf mentions the UK has for too long settled for managing stagnation.

On behalf of FT and Martin Wolf I decided to ask ChatGPT:

"Could bank capital requirements that favors “safe” government debt and residential mortgages over “risky” loans to small businesses and entrepreneurs, doom a nation to stagnation?"

ChatGPT answered:

"The impact of bank capital requirements on economic growth is a complex and debated topic among economists. Capital requirements are regulations set by financial authorities that determine the amount of capital a bank must hold in relation to its assets. These requirements are designed to ensure the stability of the financial system by reducing the risk of bank failures.

If capital requirements heavily favor "safe" assets such as government debt and residential mortgages over "risky" loans to small businesses and entrepreneurs, it can potentially have both positive and negative effects on the economy.

Potential Positive Effects:

Financial Stability: By incentivizing banks to invest in safer assets, capital requirements can contribute to financial stability. This reduces the likelihood of bank failures, which could have broader systemic consequences.

Investor Confidence: Investors and depositors may feel more secure knowing that banks are holding assets that are considered safer, leading to increased confidence in the financial system.

Potential Negative Effects:

Stifling Innovation and Growth: If banks are discouraged from lending to small businesses and entrepreneurs, it could hinder innovation and economic growth. Small and medium-sized enterprises (SMEs) are often drivers of job creation and economic dynamism.

Inequality: Focusing on "safe" assets may exacerbate economic inequality. Small businesses, startups, and entrepreneurs, which are crucial for innovation and job creation, might face difficulty accessing the necessary funding.

Reduced Economic Diversity: A banking system overly concentrated on certain types of assets may lead to reduced economic diversity. This can make the economy more vulnerable to shocks in specific sectors.

Long-Term Stagnation: Overemphasis on "safe" assets might lead to a lack of risk-taking and investment in new ventures, potentially resulting in long-term economic stagnation.

Balancing Act:

It's crucial for financial regulators to strike a balance between stability and promoting economic growth. Prudent capital requirements that consider the riskiness of assets, while still allowing banks to support economic development, can be important. Striking the right balance often involves ongoing adjustments to regulations based on economic conditions and lessons learned from past experiences.

In conclusion, while capital requirements that favor "safe" assets can contribute to financial stability, an excessive focus on them at the expense of riskier but potentially innovative investments could indeed have negative consequences for a nation's economic dynamism and growth. The challenge lies in finding a regulatory framework that mitigates risk without stifling entrepreneurial activity and economic progress."

Thanks!

ChatGPT answered: You're welcome! If you have any more questions or if there's anything else I can help you with, feel free to ask. Happy to assist!

Note: The answers are copied exactly from those given to me by OpenAI


Comment: Way back, 2009, Martin Wolf was kind to publish in his Economic Forum an articled I titled "Please free us from imprudent risk aversion and give us some prudent risk-taking." I stand by it more than ever.

November 11, 2023

An adequate response to a Covid pandemic needs more views than what epidemiologists and statisticians can provide.

Sir, I refer to “Please put statisticians in charge of data for the next crisis” Sir David Norgrove, FT November 11.


July 2020, I tweeted:

Sweden kept all schools until 9th grade open. Parents of children in 9th grade are almost always less than 50 years of age. In Sweden, as of July 24, out of 5,687 Coronavirus deaths 71, 1.2%, were younger than 50 years.

Conclusion: Keep schools open, keep older teachers at home and have grandparents refrain from hugging their grandchildren. Disseminating data on COVID-19 without discriminating by age, is in essence misinformation.



October 2020 the Washington Post published a letter in which I opined:

“Roughly 90% of all coronavirus deaths will occur in those 60 years of age and older. 
Equally roughly 90% of the virus’s social and economic consequences will be paid by those younger than 60. It’s an intergenerational conflict of monstrous proportions."


March 2021 I tweeted:

“Georges Clemenceau’s “War is too serious a matter to entrust to military men” could be updated to: A Covid-19 response ‘is too serious a matter to entrust’ to epidemiologist”

At this moment I feel that besides not entrusting epidemiologists I might have to include some statisticians e.g., from the UK Statistic Authority. They must have known it, so why did they not speak up?


June 2023 I asked ChatGPT-OpenAI:

"Suppose a virus hits a nation, and to its authorities it's evident that its mortality rate depends the most on age. In such a case, transmitting data to the population about the total number of deaths without discriminating this by age, could that be deemed to be misinformation?"

A.I. answered:
"In the scenario you described, if the authorities have clear evidence that the mortality rate of the virus is significantly influenced by age, transmitting data about the total number of deaths without providing any information or context about age distribution could be considered incomplete or potentially misleading information”


Sir David Norgrove ends his opinion with “Churchill recognized the need for high quality statistics to help him run the war.”

In March 2020 I tweeted:

“In February, I visited Churchill War Rooms in London Reading UK’s plans of building up herd immunity against coronavirus, I have a feeling Winston would have agreed with such stiff upper lip policy: “I have nothing to offer but fever, coughing, chills and shortness of breath”

Now I ask you Sir. Is this not a kind of document that should be presented to any Covid type of inquiry that, without fear and without favour, really dares to get to the bottom of the problems? 

@PerKurowski

July 17, 2023

How long will it take for bank regulators to ask AI regulators for a little favour?

Sir, Michael Skapinker writes: “Why did no one speak up “inside Sil­icon Val­ley Bank before it col­lapsed? People thought speak­ing up would leave them vulnerable to vic­tim­isa­tion.” “Listen and you might learn something” FT July 17.

Thought, or knew? 

What should a risk manager know about the risk of informing the board that according to revised models that included the interest rate risk (duration), SVB’ risk weighted capital/equity requirements would increase substantially? 

Would a bank supervisor like to go on record informing his superiors, the regulators, that because of IRR, the 0% risk weighting of long-term US Treasury bonds made no sense?

Sir, instead of exposing humans to victimisation, it seems precisely the moment that we could make great use of artificial intelligence. 

E.g., I asked ChatGPT – OpenAI: “Should regulators and supervisors be aware of risks with US Treasury long-term bonds? 
It answered: “Yes, they should be aware of the duration risk and interest rate risk associated with long-term Treasury bonds held by banks”

But of course, AI could be vulnerable to victimisation too.

I asked Open AI:” Can those who become an Artificial Intelligence regulator, make you or any other AI participants agree with all they want you to agree with?” 
It answered: “Regulators aim to address ethical considerations, potential risks, and ensure responsible AI practices… AI systems don't possess independent consciousness or the ability to willingly agree or disagree with regulations. Their behavior is determined by their programming and the data they have been trained on.”

Sir, you know much to well, that for more than two decades I’ve been vociferating, as much as I can, my criticism against the risk weighted bank capital requirements. Clearly when someone does want to hear, he does not hear. For instance, as Upton Sinclair Jr. explained it: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Therefore, when I heard about OpenAI, I asked it a series of questions. From the answers you must agree I found an ally. I just wonder how long it will take for bank regulators to shut it up. “We can’t have someone questioning the risk weights of 0% government – 100% citizens. Can we?”

How long will it take to FT to really listen to some of its faithful subscribers?

PS. US Treasury long-term bonds still carry a 0% risk weight.

June 25, 2023

A strong national spirit/character is what’s most needed for any preparedness, even against a pandemic.

Sir, I refer to Tim Harford’s “Is it even pos­sible for coun­tries to pre­pare for a pan­demic?” FT June 24, 2023.

“Be pre­pared! It’s the scout’s motto. But pre­pared for what?” I was never a boy-scout but as I have understood that movement it was to be prepared courageously for the unexpected, not silently accepting a lockdown; and to be able to lit a fire without matches, not to learn to deploy sewage monitors.

So sadly, though Harford does indeed know much of economy, here I think he does not even scratch the surface of what’s most needed, like:

First: The understanding that, for a nation/society as a whole, a response to the pandemic can be much more harmful than the pandemic itself.

Second: That just as George Clemenceau opined, “War is too serious a matter to entrust to military men”, a pandemic is also too serious a matter to entrust to epidemiologists” Any preparedness against a pandemic must include a wide diversity of opinions.

Third: Information, information and information: With respect to this Harford mentions: “Joshua Gans, eco­nom­ist and author of The Pan­demic Inform­a­tion Solu­tion (2021), argues that we’ve learnt that pan­dem­ics can be thought of as inform­a­tion and incentive problems.” No, begin by giving the people full information and then let them understand and decide if and what incentives are needed.

In July 2020 I tweeted:
“Sweden kept all schools until 9thgrade open. Parents of children in 9th grade are almost always less than 50 years of age. In Sweden, as of July 24, out of 5,687 Coronavirus deaths 71, 1.2%, were younger than 50 years.”
“Conclusion: Keep schools open, keep older teachers at home and have grandparents refrain from hugging their grandchildren. Disseminating data on Covid-19 without discriminating by age, is in essence misinformation.”

Clearly information on the relation between Covid-19 and age was available but was not sufficiently provided. One explanation could be that the Covid-19 pandemic hit the world in the midst of a polarization pandemic. 17 October 2020 I wrote in a letter to FT “way too many polarization profiteers just don’t want harmony vaccines to appear.

March 2020 I tweeted: 
“In February, I visited Churchill War Rooms in London Reading UK’s plans of building up herd immunity against coronavirus, I have a feeling Winston would have agreed with such stiff upper lip policy: “I have nothing to offer but fever, coughing, chills and shortness of breath”

Sir, Neville Chamberlain’s spirit inspired UK’s pandemic answer. Just like he is present in the risk weighted bank capital requirements which incentivize much more the refinancing of the “safer” present than the financing of the “riskier future”

I summarized the result of the above three failings in a letter published by the Washington Post in October 2020 in which I stated: "Roughly 90% of all coronavirus deaths will occur in those 60 years of age and older. Equally roughly 90% of the virus’s social and economic consequences will be paid by those younger than 60. It’s an intergenerational conflict of monstrous proportions."

PS. Sir, if you are interested you might want to read what ChatGPT – OpenAI answered when I asked "Suppose a virus hits a nation, and to its authorities its evident that its mortality rate depends the most on age. In such a case, transmitting data to the population about the total number of deaths without discriminating this by age, could that be deemed to be misinformation?"

PS. Sir, you should be interested in the above, it evidences how humans can begin dialoguing with artificial intelligence, so as to have a better chance of keeping their Human Masters and appointed experts in check.

@PerKurowski