Why Writers Know Using ChatGPT Is a Bad Idea

What Are ChatGPT and OpenAI?

Nearly everyone who writes or edits for a living knows instantly what’s wrong with ChatGPT, the free AI tool that takes natural-language prompts and produces text in response. No, it’s not that it’s going to take our jobs.
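If you haven’t tried it, the mechanics are simple: you send the model a prompt, and it sends back text. Here’s a minimal sketch of that loop using OpenAI’s official Python client; the model name and prompt are placeholders, and the call assumes an API key is set in your environment.

```python
# A minimal sketch of ChatGPT-style text generation, using OpenAI's
# official Python client (pip install openai). Assumes an API key in
# the OPENAI_API_KEY environment variable; model and prompt are
# placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a 200-word article explaining compound interest."},
    ],
)

# The model returns fluent text, with no sources, citations, or fact-checking.
print(response.choices[0].message.content)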

The problem with having ChatGPT or any other AI write articles is that it will get things wrong or do a poor job, and it will invite lawsuits.

Take the latest drama at CNET and Bankrate, two websites owned by Red Ventures that ran AI-generated content as informational articles without being transparent about it. The most painful part of the CNET debacle for me is that any writer could have seen it coming. The Verge’s reporting says that many staff members were never told that AI was being used to write content.

Not Everything Should Be Automated

Behind the doors of any publication are people who write and people who try to make money. One group looks at tools like ChatGPT and sees potential value: How can we use this tool to be more efficient, turn a higher profit, automate something that’s routine? The other group knows just how hard it is to write content that’s original, accurate, and based on reliable sources. What seems automatable to one group is so obviously not to the other.

Most businesspeople know better than to say a machine can replace a writer, full stop. But some would and do say a machine can replace a writer for some kinds of writing. That mentality devalues writers and reflects a shortsighted understanding of what they do. It’s also disrespectful to readers. In the case of CNET and Bankrate, choosing to auto-generate articles about personal finance shows a lack of care, if not outright disrespect, toward people who need help understanding their money.


Lawsuits on the Horizon

Lawsuits are another real concern. When you let ChatGPT cruise the open internet for information, it doesn’t provide a list of the sources it used. As Futurism found, AI bots know to reword a chunk of content instead of repeating it word for word, but they do it about as well as a seventh grader. Publishers who let ripped-off paragraphs go out into the world without attributing the source are opening themselves up to legal action.

I imagine that educators, especially those familiar with TurnItIn, can spot these awkwardly reworded texts most of the time. TurnItIn, which was founded in 1998, is a service that compares a student’s supposedly original writing with content published online and with every other paper submitted to TurnItIn. That way it can identify plagiarism from published works as well as from other students’ writing, no matter where in the world those students are. It analyzes for word-for-word plagiarism as well as for text that has been altered slightly but is clearly not original. It can do more, too, like advising students when they rely too heavily on quotes in their papers.
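TurnItIn’s actual matching technology is proprietary, but the core idea behind this kind of detection can be sketched in a few lines: break both texts into overlapping word sequences and measure how much they share. The texts and numbers below are made up purely for illustration.

```python
# A toy illustration of overlap-based plagiarism detection.
# Real services like TurnItIn use far more sophisticated, proprietary
# matching; this only shows the basic idea of comparing word n-grams.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Break a text into overlapping n-word sequences ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity between the n-gram sets of two texts."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = ("Compound interest is interest calculated on both the initial "
            "principal and the accumulated interest from previous periods.")
reworded = ("Compound interest means interest calculated on both the initial "
            "principal and the accumulated interest from earlier periods.")

# A lightly reworded passage still shares most of its 5-word sequences.
print(f"similarity: {similarity(original, reworded):.2f}")
```

Even swapping a word or two leaves long runs of identical five-word sequences intact, which is why awkward machine paraphrases are so easy to flag.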

A Blatantly Bad Idea

None of this is to say that AI can’t benefit writing somewhere, somehow. TurnItIn certainly has its issues, but it’s good at helping educators spot plagiarism and at guiding students who plagiarize unwittingly to do better.

Grammarly is another decent example: it won’t make a skilled writer better, but it’s extremely useful for catching simple errors and helping certain groups, such as writers who aren’t fluent in a language. AI writing bots are tools. They can be useful once we figure out what they’re good for. Writing informational, public-facing content isn’t it.

To outsiders, I can see how writers warning about the dangers of ChatGPT and other AI writing assistants might come off as job insecurity or alarmism. But writers are so opposed to it not because we’re afraid, but because it’s so obvious to us why using AI to write content for publication is a bad idea.