Do we have any policy on AI-generated or plagiarized answers?

+6 / −0

An answer was recently posted, and its edit history strongly suggests that it was AI-generated in its entirety; that apparently attracted some flagging. The answer happens to be of poor quality, but the text is such that a human could, in my opinion, conceivably write and post a similar pseudo-answer too. Apart from the very low quality, the answer is on topic. It doesn't really answer the OP, or does so only shallowly and unhelpfully.

When processing the flag queue, I currently tend to delete any flagged answers that are spam (completely off topic), while I leave on-topic answers that are merely of poor quality to be downvoted, or perhaps even somehow salvaged through further interaction with the community (especially edits). I generally assume good intentions behind every post.

I am aware of two current issues around AI-generated answers:

  • Plagiarism. The poster is likely not the author of the post content, even if they claim to be.
  • Low quality. The answer might be of low quality while being costly to evaluate as such (due to grammatical correctness, idiomatic specialized terminology, and other superficial signals of authoritativeness).

I think that downvoting and comments could be sufficient to deal with the latter group of issues. And I prefer that over deletion by a moderator, because the shorter an answer is, the harder it is for a moderator to reliably detect its AI-generated nature.

So, for me, any policy on AI use mostly boils down to a plagiarism policy. Plagiarism is not the same as copyright violation; a particular AI interface might allow unlimited redistribution under an arbitrary license, but it would still be unethical for anyone to pass off AI-generated content as their own. Nor is all AI use unethical; I can imagine many great linguistics questions and answers that would include samples of LLM output presented as such.

However, I cannot reliably detect or verify all plagiarism either, especially if the original source (in this case, an AI production) isn't public enough.

Hence my questions:

  • Should an answer be flagged and deleted (rather than just downvoted or commented on) once incidental proof of plagiarism (including AI use) is discovered?
  • Should an answer be flagged and deleted once its author openly admits that the entire answer is an AI production?
  • Would a (good-faith) manually augmented AI production with an unclear production process be treated any differently from a merely pasted AI production?
  • Is the helpfulness of an answer ever a relevant factor in the decision whether to delete?
1 comment thread

There is a network-wide policy (4 comments)
Mithical‭ wrote 2 months ago

There is a default network-wide policy on AI-generated content; communities are free to determine their own specific policies, however, as long as attribution requirements are met.

Jirka Hanika‭ wrote 2 months ago

Big thanks for pointing me to the network-wide policy! Had I found it before asking on L&L meta, I might have just handled the currently pending AI-related flags based on it and left the rest until a more nuanced AI use case arrived. Given that "L&L" and "LLM" share one of their respective letters, the one standing for "language", I'll let my question stand until it attracts a little community feedback (such as a popular actionable answer).

I'm pretty happy to see that my own take on the issue is largely consistent with the network-wide policy, maybe not in the entirety of its premises (I have occasionally collaborated on AI-based research projects since the nineties, so I'm no intense AI lover or hater), but surely in the key practical implications.

Jirka Hanika‭ wrote about 2 months ago

I just applied the network-wide policy, deleting the particular referenced AI-generated post. (Its entire text was "German uses the third-person plural for the second-person polite form to show respect and formality in conversations. It reflects a linguistic tradition of addressing individuals with courtesy and distance.", in case anyone wonders.) This meta question is still considered open, but while no other answers have been proposed and voted on, the default answer is certainly "we are all comfortable with following the network-wide policy on AI-generated answers".

Monica Cellio‭ wrote about 2 months ago

Thanks for asking the question. The network policy is a default, but as it says, communities are free to modify it in either direction, and this question gives this community a place to propose those changes.