OpenAI, the creator of the viral artificial intelligence chatbot ChatGPT, has launched a new tool to help users detect AI-generated text. The decision comes after complaints from the academic community that ChatGPT could easily be used by students to cheat on assignments.
According to OpenAI, the new detection tool isn't totally reliable. The head of OpenAI's effort to improve system safety, Jan Leike, said that the detection system "is imperfect and it will be wrong sometimes," adding that "because of that, it shouldn't be solely relied upon when making decisions."
After plugging in a piece of text, users see a label rating the likelihood that the content was AI-generated, ranging from "very unlikely" to "likely." OpenAI says that the longer the submitted text, the more accurate the results will be.
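OpenAI has not published the classifier's internals, but the behavior described above can be illustrated with a short sketch. Everything in the code below is an assumption for illustration: the threshold values, the band names beyond the two quoted above, the minimum-length check, and the stubbed `model_score` function are not OpenAI's published implementation.

```python
# Illustrative sketch of mapping a classifier score to a verdict band.
# Thresholds, band names, and the stubbed model are assumptions, not
# OpenAI's disclosed implementation.

def model_score(text: str) -> float:
    """Stand-in for the real classifier, a fine-tuned language model
    whose internals OpenAI has not disclosed. Returns a dummy
    probability that the text is AI-generated."""
    return 0.5

def label_for(score: float) -> str:
    """Map a 0-1 'probability AI-generated' score to a verdict band."""
    if score < 0.10:
        return "very unlikely"
    if score < 0.45:
        return "unlikely"
    if score < 0.90:
        return "unclear"
    if score < 0.98:
        return "possibly"
    return "likely"

def classify(text: str) -> str:
    # Longer inputs give the model more signal, which is consistent
    # with OpenAI's note that results improve with more text. The
    # exact minimum length here is an assumed value.
    if len(text) < 1000:
        raise ValueError("text too short for a meaningful verdict")
    return label_for(model_score(text))

print(classify("..." * 400))  # prints "unclear" with the dummy score
```

The banded output, rather than a raw percentage, fits Leike's caveat: a coarse verdict discourages users from treating an imperfect score as a precise measurement.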
While OpenAI acknowledged the tool's potential weaknesses, it also pointed to additional uses: beyond catching cheating, the system could help detect automated disinformation campaigns online.
ChatGPT is a powerful tool for students and teachers alike, but concerns about cheating led several major school districts, including New York City and Los Angeles, to block the site on school devices. In contrast, some districts, such as Seattle Public Schools, are encouraging teachers to take advantage of the chatbot to help students learn. The debate over the right way to handle the AI tool is ongoing.
"The initial reaction was 'OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,'" Devin Page, a technology specialist with the Calvert County Public School District in Maryland, told reporters. Now, Page says that administrators are increasingly realizing that AI in schools "is the future".
"I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power," Page said.
A similar conversation is happening at institutions of higher learning around the world, with some prestigious universities also prohibiting the use of ChatGPT. Academics who do use the tool risk being barred from the scholarly community.
"Like many other technologies, it may be that one district decides that it's inappropriate for use in their classrooms. We don't really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them," said OpenAI's policy researcher Lama Ahmad.
France's digital economy minister and former MIT professor Jean-Noël Barrot says that the risk of cheating with AI-generated text varies by discipline, depending on the kinds of assignments used.
"So if you're in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive," Barrot said at the World Economic Forum. "If you are in the economics faculty, then you're fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty."
To Barrot, the most important part of preventing the misuse of AI tools is having a basic understanding of how they work.
When it comes to the details, however, even the software's creators aren't entirely sure how it works. Both ChatGPT and the new detection tool learn their pattern recognition from massive troves of text data, and it's not always clear why they make the decisions they do.
"We don't fundamentally know what kind of pattern it pays attention to, or how it works internally," said Leike. "There's really not much we could say at this point about how the classifier actually works."