Meta

Do we have any policy on AI-generated or plagiarized answers?

+6
−0

An answer was recently posted whose edit history strongly suggests that it was entirely AI-generated; that apparently attracted some flagging. The answer happens to be of poor quality, but the text is such that a human could, in my opinion, conceivably have written and posted a similar pseudo-answer. Apart from its very low quality, the answer is on topic, although it addresses the OP's question only shallowly and unhelpfully.

When processing the flag queue, I currently tend to delete flagged answers that are spam (completely off topic), while leaving on-topic answers that are merely of poor quality to be downvoted, or perhaps even somehow salvaged through further community interaction (especially edits). I generally assume good intentions behind every post.

I am aware of two current issues around AI-generated answers:

  • Plagiarism. The poster is likely not the author of the post content, even if they claim to be.
  • Low quality. The answer might be of low quality while being costly to evaluate as such (its grammatical correctness, idiomatic specialized terminology, and other signals of authoritativeness make the flaws harder to spot).

I think that downvoting and comments could be sufficient to deal with the latter group of issues. I prefer that over deletion by a moderator, because the shorter an answer is, the harder it is for a moderator to reliably detect whether it is AI-generated.

So, for me, any policy on AI use mostly boils down to a plagiarism policy. Plagiarism is not the same as copyright violation; a particular AI interface might allow unlimited redistribution under an arbitrary license, yet it would still be unethical to pass off AI-generated content as one's own. Nor is all AI use unethical; I can imagine many great linguistic questions or answers that would include samples of LLM output, presented as such.

However, I cannot detect or verify all plagiarism reliably either, especially if the original source (in this case, an AI production) isn't public enough.

Hence my questions:

  • Should an answer be flagged and deleted (rather than just downvoted or commented on) once incidental proof of plagiarism (including AI use) is discovered?
  • Should an answer be flagged and deleted once its author openly admits that the entire answer is an AI production?
  • Would a (good-faith) manually augmented AI production with an unclear production process be treated any differently from a merely pasted AI production?
  • Is the helpfulness of an answer ever a relevant factor in the decision whether to delete it?
1 comment thread

There is a network-wide policy (4 comments)

0 answers
