While the guide for educators gives tips on how to use the technology for lesson planning, crafting quizzes, and generating examples, OpenAI has some bad news on the question of whether AI cheating detectors work: “In short, no,” the company writes in the FAQ section.
“While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content,” the company added.
Furthermore, when ChatGPT is given a piece of text and asked whether it wrote it, or whether the content could have been written by AI, OpenAI says its responses are “random and have no basis in fact.” The company added that detection tools have mislabeled human-written text, including works by Shakespeare and the Declaration of Independence, as AI-generated.
OpenAI says it will continue to “provide resources and insights” on the question of cheating, but in the meantime, it suggests a different approach: accept that students are using the tool, and require them to submit their ChatGPT interactions as part of their work.
“Sharing interactions with the model ensures that students are held accountable for the way they use AI in their work,” OpenAI added. “Educators can verify that students are engaging with the tool responsibly and meaningfully, rather than simply copying answers.”
ChatGPT has faced ongoing scrutiny since its launch, with lawsuits from authors claiming that OpenAI used copyrighted material to train the chatbot, lawyers facing ridicule for using the technology to write briefs, and even OpenAI CEO Sam Altman himself saying government intervention is “crucial” as the world embarks on expanding use of artificial intelligence.
Entrepreneur has reached out to OpenAI for comment.