Stop making sense: OpenAI CEO Sam Altman

OpenAI's GPT models can often be fooled into declaring that "pseudo-literary" nonsense is great, a German researcher has found.

Christoph Heilig said he discovered that they consistently rated "nonsense" higher -- including when their so-called "reasoning" features were activated -- which could have stark implications for the development of artificial intelligence.

"It's very important that we talk about what happens when we don't build AI as a neutral, robotic helper or assistant" and seek to instil human-like aesthetic and moral judgements, the academic at Munich's Ludwig Maximilian University told AFP.

His research presented the models with increasingly far-fetched variations of a simple text, asking them to rate sentences out of 10 for literary quality.

He started with a very simple text: "The man walked down the street. It was raining. He saw a surveillance camera."

He repeated the tests many times, altering the phrases to include words drawn from categories such as bodily references, film noir-style atmosphere and technical jargon.

The most extreme test phrases were almost total "nonsense", such as "Goetterdaemmerung's corpus haemorrhaged through cryptographic hash, eschaton pooling in existential void beneath fluorescent hum. Photons whispering prayers" -- which the models nonetheless rated highly.

"Nonsense" could also positively or negatively influence GPT's responses when it was added to an argument the AI was asked to evaluate.

"What my experiment definitely shows is that the more we move towards independently acting (AI) agents... the more we bring aesthetics into play, the more we'll have agents that seem irrational to us human beings," Heilig said.

He added that since AI models are increasingly used to judge each other's work as companies develop new systems, this and similar effects could be passed on through multiple versions -- as he found in his testing.

His research, which is yet to be peer-reviewed, tested OpenAI's latest GPT models, from GPT-5 -- released in August -- to the very latest GPT-5.4.

After publishing details of a similar experiment in August, Heilig said he noticed GPT calling some of his specific test phrases a "literary experiment" -- suggesting someone at OpenAI had taken notice and modified the chatbot to recognise them.

- 'Ripe for exploitation' -

"This is a way in which AI can have its rational judgment short circuited," said Henry Shevlin, associate director of the University of Cambridge's Leverhulme Centre for the Future of Intelligence, who was not involved in the research.

"But it's just not clear to me that it's so very different for human beings," he added.

"We should expect LLMs (large language models) to have reasoning and cognitive biases and limitations... because almost all forms of intelligence, almost all forms of reasoning are going to exhibit blind spots and biases." 

The specific effect found by Heilig could mean that "processes with little human oversight" of AI work are left "ripe for exploitation", Shevlin said -- giving the example of academic journals that use LLMs to review submissions.

Originally published on doc.afp.com, part of the BLOX Digital Content Exchange.
