To Best Serve Students, Schools Shouldn’t Try to Block Generative AI, or Use Faulty AI Detection Tools


Generative AI gained widespread attention earlier this year, but one group has had to reckon with it more quickly than most: educators. Teachers and school administrators have struggled with two big questions: should the use of generative AI be banned? And should a school implement new tools to detect when students have used generative AI? EFF believes the answer to both of these questions is no.

AI Detection Tools Harm Students

For decades, students have had to defend themselves from an increasing variety of invasive technology in schools—from disciplinary tech like student monitoring software, remote proctoring tools, and comprehensive learning management systems, to surveillance tech like cameras, face recognition, and other biometrics. “AI detection” software is a new generation of inaccurate and dangerous tech that’s being added to the mix.

AI detection tools such as GPTZero and TurnItIn claim they can determine (with varying levels of accuracy) whether a student’s writing was likely created by a generative AI tool. But these detection tools are so inaccurate as to be dangerous, and they have already led to false charges of plagiarism. As with remote proctoring, this software looks for signals that may not indicate cheating at all. For example, it is more likely to flag writing as AI-created when the word choice is fairly predictable and the sentences are less complex—and as a result, research has already shown that false positives are more frequent for some gr

[…]

This article has been indexed from Deeplinks
