We tested the latest AI – and here's why you should be worried

ChatGPT is the most recent revolution in artificial intelligence, with mind-boggling capabilities – but it raises ethical questions

By Ed Cumming

The software can write scripts, poems, even newspaper articles – and it has been suggested that it could soon replace Google

It will disappoint fans of The Terminator: the AI revolution is coming not in the form of killer robots or dystopian autocracies, but of chatbots. We were told it would mean the apocalypse. So far it looks a lot like customer service, albeit much better than usual.

The latest revolution in public-facing artificial intelligence is ChatGPT, a piece of software designed by OpenAI, a California-based research company. GPT is short for Generative Pre-trained Transformer. It was released last week.

In the simplest terms, it works by drawing on its training data, which covers much of what has been written on the internet, to find the answer that best fits a given prompt and render it in clear, if wooden, English. It’s a bit like the autocomplete function on your phone or email, except on a much grander scale.
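To make the autocomplete analogy concrete, here is a minimal sketch, in Python, of the underlying idea: tally which words tend to follow which in a body of text, then extend a prompt one likely word at a time. It is a toy illustration only, the tiny corpus is invented for the example, and OpenAI’s real system uses an enormous neural network rather than simple word counts.

# Toy next-word predictor: a crude stand-in for "autocomplete at scale".
# Illustrative only; it bears no resemblance to OpenAI's actual code.
from collections import Counter, defaultdict

# An invented miniature "dataset" of text to learn from.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow which (a simple bigram table).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def complete(prompt_word, length=8):
    """Extend a one-word prompt by repeatedly picking the likeliest next word."""
    words = [prompt_word]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # no known continuation for this word
        next_word, _count = options.most_common(1)[0]
        words.append(next_word)
    return " ".join(words)

print(complete("the"))  # prints a short, plausible-looking continuation

ChatGPT works on conceptually similar lines but at vast scale, with billions of learned parameters standing in for the word tallies, which is what lets it produce whole scripts and essays rather than a few plausible next words.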

ChatGPT in action

To those unfamiliar with developments in AI, the latest capabilities are mind-boggling. You type in a request and it generates a response. It can write scripts, poems, even newspaper articles. As a test, I asked it for a limerick about The Daily Telegraph.

There once was a newspaper called the Daily Telegraph
Whose reporters were known for their cheek
They’d write about royal affairs
With a very particular flair
But their puns were often quite weak

What it lacks in scansion, it makes up for in insolence.

It isn’t only journalists and copywriters at risk of being made redundant. Even more impressively, ChatGPT has been able to write credible computer code. It has been suggested that the software could soon replace Google. Earlier this month, Paul Buchheit, the engineer who created Google’s email service, Gmail, tweeted that Google might only be a year or two away from “total disruption”.

A cautionary tale

Where Google simply provides search results, GPT can turn out a comprehensive answer. “A lot of the creative industries have been caught flat-footed by this,” says Dr Daniel Susskind, an economist at Oxford and co-author of The Future of the Professions, a book about the effect AI will have on employment.

“They think there’s something special about a faculty like creativity, but it turns out these systems can solve problems that might require creativity from us, but do it in different ways.

“It’s a cautionary tale for many people who think what they do is too complex or subtle for these systems to do.” While ChatGPT is impressive to the layman, he says, it is not a surprise to those studying the field. Automated writing software has been used in certain kinds of journalism for years.

The software is not infallible. While its doggerel is impressive, the poetry lacks heart. It sometimes comes up with outright nonsense. As it is trained on data only going up to last year, it is not up to date on current affairs. Patterns quickly emerge. Asking it similar questions will produce formulaic responses. It has been likened to a confident 11-year-old winging its answers without real understanding.

But as Susskind observes, this is just the start. We are on the cusp of an era in which creativity, or rather acts that have been thought of as creative, will be free. For just one example, what does it mean for cinema when a great script is free? Will writers simply become prompters? Will art come with a “free range” label to denote that it was written by a human rather than a computer?

“However remarkable this seems, it’s still the worst it’s ever going to be,” Susskind adds. “People spend a lot of time finding the holes and shortcomings but these machines are going to get relentlessly better. The surprise and bewilderment from a lot of people this week suggests we’re not really prepared for that.”

Ethical concerns

The software raises ethical questions, too. A source that generates information can equally generate disinformation. The machines are programmed by people, and can end up reflecting their prejudices.

“It’s impressive, but it raises a lot of ethical concerns,” says Prof Carissa Veliz, who teaches ethics and philosophy at the University of Oxford. School essays, for example, may need a complete rethink when any student can generate a passable essay for free in seconds. Perhaps we will see a return to handwriting, or exam conditions.

“You can ask it to create a conspiracy theory about Covid, say, and it can do it quite well,” Veliz adds. “It makes it cheaper and easier to spread fake news. It’s inherently deceptive in its design. It’s designed to sound like a thinking being but it’s simply statistical inference. It doesn’t have any understanding. It just mimics discourse.”

It may even be that this technology shouldn’t be made available to the public, she says. “Some kinds of technology are too sensitive to make available to everyone. Things about radiation or nuclear technology. It has become trendy to think that anything open access is better than if it’s not. But in this case, where abuse is so easy, it’s unclear to me whether we should be allowing just anyone to access it.”

New developments

ChatGPT is only the most high-profile in a wave of impressive AI developments in recent weeks. For a small fee, you can spruce up your photos with Lensa. Cicero, a bot developed by Meta to play the classic board game Diplomacy, finished in the top 10 per cent of players in an online competition. Diplomacy is a step on from chess or Go, because the key gameplay dynamic is that you must make deals with your fellow players and then betray them.

In the chat boxes, the bots happily plotted with their human counterparts and then reneged on them. Nobody suspected they weren’t playing a human. It raised ethical alarms. If an AI can be trained to lie about whether it will support your invasion of France next turn, what will it be able to lie about in a hundred years?

Gary Marcus, an AI entrepreneur and writer, says it’s important not to get carried away. “I tend to view these developments with a note of scepticism,” he says.

“Cicero is genuinely interesting from a computer science perspective. I’m not sure there’s an immediate ethical worry. The notion of lying is very narrow. People built poker-playing bots that had to bluff, which is a form of lying. But the fact they can build a poker bot that bluffs doesn’t mean that Walter White [the murderous anti-hero of the TV series Breaking Bad] in robot form is around the corner.”

He is more worried by another Meta programme, Galactica, which writes scientific papers. “Galactica makes the publication of misinformation really easy,” he says. “It is indifferent to the truth. We should really worry about having bots that can generate really plausible information that is hard to distinguish from reality.”

The next generation of GPT will be even better. “I’m concerned that GPT-4 will make the cost of misinformation basically zero, and really difficult to detect. That’s a real problem. The genie is out of the bottle.”

As a final test, I asked the ChatGPT bot to write a poem about a journalist called Ed who is worried he will be replaced by a computer. In the third stanza, it sounded a hopeful note.

But Ed is a fighter, and he won’t give up
He’ll continue to write and to keep up
With the latest developments in technology
And prove that he’s more than just a commodity

Frankly, I’m not so sure, but it’s nice that someone believes in me.

Do you think AI could do your job? Tell us in the comments below
