Artificial intelligence essay generators, like ChatGPT, are designed to simulate human essay writing. The better the generator, the harder it is to distinguish its essays from those of a human author. The whole point of the generator is to be indistinguishable from human authorship.
At their core, AI generators are deep statistical engines. Each word (or word fragment) is assigned a numeric value, and the engine is trained on vast amounts of text to learn how often words occur and how they sit relative to one another in sentences. When a string of words is supplied (a question), the engine turns the words into numbers, uses those numbers as input, works out what a statistically likely reply sequence should look like, and sends that sequence back. The numbers are converted back into words just before they appear on your screen.
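That words-to-numbers-to-prediction pipeline can be illustrated with a toy bigram model. To be clear, this is a drastic simplification for illustration only: real LLMs use learned sub-word tokens and transformer networks, and the tiny corpus here is made up.

```python
from collections import Counter, defaultdict

# Toy illustration of the words -> numbers -> prediction -> words pipeline.
# A real LLM learns far richer statistics; this only counts which word
# most often follows which.

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Tokenize": map each distinct word to a numeric id.
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follows[a][b] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    counts = follows[vocab[word]]
    best_id = counts.most_common(1)[0][0]
    inv = {i: w for w, i in vocab.items()}
    return inv[best_id]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scale the corpus up to a large fraction of the internet and the statistics up to a neural network with billions of parameters, and you have the basic shape of a modern generator.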
All the AI generator does is mindlessly mirror back to us our own Q-A sequences. Ultimately, everything an AI generates was in some way human-generated, both by programmers and by the vast audience of essays the statistical engine was trained on.
So, how well do these detectors work?
If a non-native English speaker writes an essay, it is very likely to be flagged as AI-generated. A Stanford University study found that AI-detection tools routinely flagged essays written by international students.
The University of Pittsburgh is one school that has rejected the detectors. In a note to faculty at the end of June, the university’s teaching center said it didn’t support the use of any AI detectors, citing the fact that false positives “carry the risk of loss of student trust, confidence and motivation, bad publicity, and potential legal sanctions.”
Every AI generator has political biases, with ChatGPT reportedly the furthest left.
AI-generated content probably cannot be copyrighted, according to the US Copyright Office. There have been few court cases so far, so no one knows where this will end up, but the first court ruling on the subject agrees with the Copyright Office.
On the other hand, AI generators claim the right to use anything you input as their own, in perpetuity:
“perpetual, revocable, nonexclusive, royalty-free, worldwide, fully paid, transferable, sub-licensable license to use, reproduce, modify, adapt, translate, create derivative works”
Now, is this all that different from Raphael or Michelangelo claiming as their own work created by their assistants? Or Ph.D.s claiming as their own work done by their grad students or postdocs?
Film credits often list the names of dozens of talented people who contributed to the creation of a film, but typically just one copyright owner — namely, the film company that produced it.
AI pattern-matching tools can discriminate against the poor and disabled.
AI detection tools may be impossible to create:
"Considering those factors, it might well be impossible for humans to create tools to identify AI-generated text with 100 percent accuracy and reliability, something the paper alluded to: "Our findings strongly suggest that the 'easy solution' for detection of AI-generated text does not (and maybe even could not) exist. Therefore, rather than focusing on detection strategies, educators continue to need to focus on preventive measures and continue to rethink academic assessment strategies (see, for example, Bjelobaba 2020). Written assessment should focus on the process of development of student skills rather than the final product."
And AI-generated watermarks can be spoofed:
"For a sufficiently advanced language model seeking to imitate human text, even the best-possible detector may only perform marginally better than a random classifier. Our result is general enough to capture specific scenarios such as particular writing styles, clever prompt design, or text paraphrasing. We also extend the impossibility result to include the case where pseudorandom number generators are used for AI-text generation instead of true randomness. We show that the same result holds with a negligible correction term for all polynomial-time computable detectors. Finally, we show that even LLMs protected by watermarking schemes can be vulnerable against spoofing attacks where adversarial humans can infer hidden LLM text signatures and add them to human-generated text to be detected as text generated by the LLMs, potentially causing reputational damage to their developers."
You can fool the detectors by running AI-generated content through a paraphrasing engine such as undetectable.ai, Quillbot, or HideMyAI. Detector accuracy is so poor that OpenAI, the creator of ChatGPT, has shut down its own AI detector.
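Why does paraphrasing work? Detectors lean on the statistical fingerprints of the generator's word choices, and rewording disrupts those fingerprints. Here is a toy sketch of the idea using simple synonym substitution; real paraphrasing services rewrite whole sentences with an LLM, and the synonym table below is entirely made up for illustration.

```python
# Toy paraphraser: swap words for synonyms to perturb token statistics.
# Real services (undetectable.ai, Quillbot, etc.) do full LLM rewriting;
# this lookup table only illustrates why reworded text slips past
# detectors that rely on word-choice fingerprints.

synonyms = {
    "big": "large",
    "quick": "rapid",
    "show": "demonstrate",
    "use": "employ",
}

def paraphrase(text):
    return " ".join(synonyms.get(w, w) for w in text.split())

print(paraphrase("we use a quick method to show results"))
# -> "we employ a rapid method to demonstrate results"
```

Every substitution shifts the text away from the distribution the detector was tuned to recognize, which is why even crude rewording degrades detection accuracy.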
Instructors at the high school and college level are deeply concerned, and many struggle to accept that they cannot reliably determine what is AI-generated and what is not. But the fact is they cannot. If a student uses AI text generators from day one, the instructor will never see a sudden change in writing ability, and no tool will reliably detect AI-generated text. The business that eventually employs the graduate will almost certainly use AI to streamline writing tasks, so there isn't much point in penalizing a student for using AI, nor is there much of a way to do it even if the professor wanted to.
Between automated learning platforms like Khan Academy, streaming video from YouTube, AI text and image generators, and the fact that certifications are replacing college degrees, instructors are losing the thread that explains why they exist.
Addendum (is modern academia just an elaborate form of hazing?):
However, AI models may be able to learn how animals communicate.
Vanderbilt has shut down Turnitin's AI detector for being too unreliable; one test showed only 68% accuracy in AI detection.
The University of Kansas recommends against Turnitin's tool
July 2023: MIT Tech Review, AI Detection tools are easy to fool
Sept 2023: OpenAI admits AI Detection tool doesn't work.
October 2023: GoldPenguin, AI Detectors don't work
October 2023: ZDNet, AI Detectors, 80% accuracy at best
Report says AI can always beat detectors
HowToGeek: How AI Detection Works (hint: it also uses an LLM)
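The core trick behind LLM-based detection is scoring how predictable a text looks to a language model: machine-generated text tends to have low "perplexity" (the model is rarely surprised by it). Here is a minimal sketch of that scoring idea using a toy unigram model; this is illustrative only, not any vendor's actual method, and real detectors use a full LLM rather than word counts.

```python
import math
from collections import Counter

# Sketch of perplexity scoring: how "surprised" a model is by a text.
# Lower perplexity = more predictable = more "machine-like" under the
# model. Real detectors use a large language model; this unigram model
# only illustrates the scoring idea.

reference = "the cat sat on the mat and the dog sat on the rug".split()
freq = Counter(reference)
total = sum(freq.values())

def perplexity(text):
    words = text.split()
    # Laplace smoothing so unseen words do not get zero probability.
    log_prob = sum(math.log((freq[w] + 1) / (total + len(freq) + 1))
                   for w in words)
    return math.exp(-log_prob / len(words))

# Text matching the reference distribution scores lower (more
# predictable) than unusual text.
print(perplexity("the cat sat on the mat"))
print(perplexity("quantum zebras extrapolate marmalade"))
```

The catch, as the links above show, is that careful human writers can also produce low-perplexity text (and paraphrasers can raise the perplexity of machine text), which is exactly why these scores make unreliable evidence.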
AI is becoming ubiquitous; it is now built into all the major mail and search services (Bing, Google, Yahoo)
Students are using AI to create college application essays
Four colleges have turned off AI detection (Vanderbilt, Michigan State, Northwestern and the University of Texas at Austin):
“Turnitin’s technology is not meant to replace educators’ professional discretion. Reports indicating the presence of AI writing, like Turnitin’s AI writing detection feature, simply provide data points and resources to support a conversation with students, not determinations of misconduct.”
One recent study from Stanford found that seven AI detectors incorrectly flagged more than half of essays written by non-native English students as AI-generated, whereas the results were “near-perfect” for English speakers. Turnitin was not one of the services tested in the study.
OpenAI has initiated Copyright Shield to protect its users from copyright claims.
Applicants use AI essay generation to apply to colleges
Half of all students use ChatGPT to cheat.
Elon Musk's Neuralink has volunteers. Also, the Humane AI Pin
Experts cannot reliably distinguish what constitutes "evidence-based" statements.