If I were a student whose text was rejected because of this tool, would I have a case against my institution, the professor who threw it out, or OpenAI?
I keep coming back to defamation, but I don't know whether the accusation is actually defamatory in itself; that seems to hinge on whether the professor or school did due diligence on whether the tool is reliable enough for this use, and there were already reports that it was not.
This all seems like such recent technology that I cannot imagine this question being answerable except the long way: in a courtroom. I suspect it would take someone actually trying in order to set precedent.
Turnitin isn’t AI technology, but I assume it has similar legal ramifications, and a lot of schools require teachers to run everything through Turnitin (usually by having students submit online). It just spits out a percentage so that the prof can take a closer look. Real quotes count toward the percentage displayed. Maybe with AI you’d have a bit more of a case against the company, because you might claim you trusted it to be accurate or something?
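Turnitin's actual matching algorithm is proprietary, so purely as an illustration of what a "similarity percentage" could mean, here's a toy sketch based on word n-gram overlap. The function names and the choice of 5-word shingles are my own; note that verbatim quotes match the corpus just like copied text, which is why quoting inflates the score:

```python
# Toy illustration of a Turnitin-style similarity percentage (NOT the real
# algorithm): the fraction of a submission's 5-word sequences that also
# appear somewhere in a reference corpus.

def ngrams(text, n=5):
    """Return the set of n-word sequences (shingles) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_percent(submission, corpus_docs, n=5):
    """Percentage of the submission's shingles found in any corpus document."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    corpus = set()
    for doc in corpus_docs:
        corpus |= ngrams(doc, n)
    return 100.0 * len(sub & corpus) / len(sub)
```

A fully copied text scores 100%, an unrelated one 0%, and a quoted passage inside original writing lands somewhere in between — which is exactly why the number alone tells the prof to look closer rather than rendering a verdict.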
There’s a similar issue in chess with cheating detection. Sites use statistical analysis to see whether someone’s moves are too good: computers play at a much higher level than humans, so you can measure how “accurate” each move is against an engine’s choice.
That doesn’t mean much over a few moves or even one or two games, but with more data you get more confidence that someone is or isn’t cheating.
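The intuition above — a single game proves little, but thousands of moves do — can be sketched with a simple one-sided binomial z-test. This is not Chess.com's actual method (theirs is proprietary and far more sophisticated); the expected engine-match rate of 0.55 is an invented number standing in for "what a player of this strength normally achieves":

```python
# Hedged sketch: the same engine-match rate is weak evidence over one game
# but overwhelming over many, because the standard error shrinks with moves.
import math

def cheat_z_score(engine_matches, total_moves, expected_rate):
    """How many standard deviations the observed engine-match rate sits
    above the rate expected for this player (normal approximation to
    the binomial)."""
    observed = engine_matches / total_moves
    stderr = math.sqrt(expected_rate * (1 - expected_rate) / total_moves)
    return (observed - expected_rate) / stderr

# Same 70% match rate, very different confidence:
z_one_game = cheat_z_score(28, 40, 0.55)       # ~40 moves: roughly 2 sigma
z_many_games = cheat_z_score(1400, 2000, 0.55) # ~2000 moves: >10 sigma
```

Two sigma happens to honest players all the time; thirteen sigma essentially never does. That asymmetry is why these reports talk about triggered thresholds rather than proof from any single game.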
Chess.com released a rather infamous report last year about a high-profile chess player accused of cheating on their site. They never directly said “he is cheating”; they simply stated that “his games triggered our anti-cheating algorithms.”
One is debatable; the other is a simple fact, and truth is an absolute defense to defamation. Hans Niemann did sue Chess.com for defamation, and from what I understand the case was recently dismissed.
I’d imagine these AI detectors for schools use similar wording to avoid legal risk: “high probability of AI” instead of “AI-written.” In that case, you may have very little of a defamation case.
However, I’m not a lawyer. I’m just guessing that the companies offering this analysis to colleges have lawyers of their own and have spent time shielding themselves from legal liability.
> Real quotes count towards the percentage displayed.
TII can be configured to ignore properly quoted text.