Can an Artificial Intelligence Write Better Than a Human?

Can an AI write better than a human? OpenAI's ChatGPT is raising issues with educators. (Image credit: Soren Diedrich/Gamestar.De)

OpenAI describes itself as an artificial intelligence (AI) research and deployment company with a mission to benefit humanity through AI. It is the developer of a new program called ChatGPT, capable of writing and conversing very much like a human. Optimized for dialogue, it can answer questions, learn, translate between languages, admit when it makes a mistake, and even be integrated with a blog like this one to provide an online chat that mines the information in more than 3,550 postings on this site and answers questions from readers. It generates human-like text, can write essays, and can even debug computer code.

Recently, Ryan Reynolds, the actor and CEO of Mint Mobile, a budget cellular provider, tested ChatGPT to write a commercial for Mint Mobile using his voice. He instructed it to include a joke, a curse word, and a promo the company was working on. He told Fast Company that the results were “mildly terrifying.”


My First Test of ChatGPT

ChatGPT's latest API, built on GPT-3.5, produces text that is remarkably similar to what humans write. Knowing this, I had to try it out. So I asked ChatGPT to take on a subject from a posting I wrote yesterday: "explain the relationship between atmospheric warming and rising ocean temperatures." This is what it wrote within a minute of my query.

“Atmospheric warming, caused by an increase in greenhouse gases in the atmosphere, causes rising ocean temperatures. As the Earth’s atmosphere warms, the oceans absorb about 93% of the excess heat. This causes the temperature of the ocean to rise, which in turn leads to a number of consequences including sea level rise, ocean acidification, and changes in ocean circulation. Additionally, warmer oceans can lead to more intense and frequent storms, such as hurricanes and typhoons.” 

That’s a good summary of the issue described in the article I posted yesterday. If Ryan Reynolds found the promo ad ChatGPT produced “mildly terrifying,” I can only reinforce his comment with my own: “pretty creepy.”
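For readers curious what a query like mine looks like under the hood, here is a minimal sketch of the JSON request body that OpenAI's chat API accepts. The model name and message format follow OpenAI's public documentation; the endpoint requires an API key, so no request is actually sent here.

```python
import json

def build_chat_request(prompt: str) -> str:
    """Build the JSON body for OpenAI's chat completions endpoint.

    The "model" value and "messages" structure follow OpenAI's
    published API format; actually sending this body would require
    an HTTP POST with an API key, which is omitted from this sketch.
    """
    payload = {
        "model": "gpt-3.5-turbo",  # assumed model identifier
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

body = build_chat_request(
    "Explain the relationship between atmospheric warming "
    "and rising ocean temperatures."
)
print(body)
```

The plain-language question goes in verbatim as the "content" of a user message; the service replies with a completion in the same conversational format.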

Microsoft Invests in ChatGPT

Microsoft has put money into OpenAI to the tune of $1 billion US in recent years. But now, according to recent reports, it is adding $10 billion to that investment. Why? Because Microsoft owns the Bing search engine, its competitor to Alphabet’s Google Search. To date, the company has been spectacularly unsuccessful in taking market share away from Google. But incorporating ChatGPT into Bing may be the game changer that can beat Google’s search algorithm.

ChatGPT isn’t an isolated case of an AI capable of human-like conversation. Google has its own technology called LaMDA, a conversation algorithm that understands query requests to produce better results for users of Google Search. Google uses an AI neural network for LaMDA which is capable of delivering sensible responses in human conversations.

Impact of ChatGPT on Education

But ChatGPT has made a much bigger splash recently because of the implications this technology may have for education, and because it is freely available to the public. LaMDA remains proprietary.

For teachers, ChatGPT can facilitate and automate many tasks. Madeline Will, a reporter at Education Week, wrote a piece on ChatGPT yesterday describing teachers’ reactions to the algorithm’s ability to almost write anything. She decided to give it five common teaching tasks: generate a lesson plan, write a response to a concerned parent, compose a rubric, provide feedback on student work, and put together a letter of recommendation.

On the first task, the general impression from teachers was that at best it provided a framework but not the details for lesson planning. The same was said about the letter-writing capabilities of the tool, and the composing of a rubric. On providing feedback on student work, the response was unimpressive in terms of the AI’s comments and grading. And on writing a letter of recommendation, teachers stated that what ChatGPT came up with was “far too generic.” So all in all, the current version of ChatGPT as a teaching aid seems underwhelming.

On the student side, however, ChatGPT is far more problematic. Because it can compose the kind of content it produced for me in the example above, teachers are concerned that the tool will make it easy for students to submit work they didn’t write. Is this plagiarism? Can you plagiarize an AI?

After OpenAI made ChatGPT publicly available in November of last year, concerns led New York City’s education authorities to block access to it on school networks and computers. A Princeton University student took the initiative to create an app called GPTZero, aimed at the responsible use of AI in education and capable of detecting ChatGPT-written documents.

Some teachers are responding to the arrival of ChatGPT by reverting to having students submit handwritten work, although a student can easily copy by hand something the AI produces. And at some schools, students are being asked to sign an “authenticity pledge” with any work submitted.

Is there a positive response from educators to ChatGPT? Because teachers have generally praised ChatGPT’s ability to frame a piece of writing, some see it as a useful tool to help students improve their own writing.

But I think most teachers would see the AI as disruptive. Take, for example, a December 9, 2022 article in The Atlantic entitled “The End of High-School English.” Its author, Daniel Herman, a high school English teacher, tried ChatGPT and came away convinced that this technology will not make students better communicators of the written word.