Do we have any policy on AI-generated or plagiarized answers?
An answer was recently posted whose edit history strongly suggests it was AI-generated in its entirety; that apparently attracted some flagging. The answer happens to be of poor quality, but the text is such that, in my opinion, a human could conceivably write and post a similar pseudo-answer, too. Apart from the very low quality, the answer is on topic. It doesn't quite answer the OP, or does so only shallowly and unhelpfully.
When processing the flag queue, I currently tend to delete any flagged answers that are spam (completely off topic), while I leave on-topic answers that are merely of poor quality to be downvoted, or perhaps even salvaged through further community interaction (especially edits). I generally assume good intentions behind every post.
I am aware of two current issues around AI-generated answers:
- Plagiarism. The poster is likely not the author of the post's content, even if they claim to be.
- Low quality. The answer might be of low quality while being costly to evaluate as such (due to grammatical correctness, idiomatic specialized terminology, and other superficial signals of authoritativeness).
I think that downvoting and comments could be sufficient to deal with the latter issue. And I prefer that over deletion by a moderator, because the shorter an answer is, the harder it is for a moderator to reliably detect its AI-generated nature.
So, for me, any policy on AI use mostly boils down to a plagiarism policy. Plagiarism does not equal copyright violation; a particular AI interface might allow unlimited redistribution under an arbitrary license, but it would still be unethical to pass off AI-generated content as one's own. Nor is all AI use unethical; I can imagine many great linguistic questions and answers that include samples of LLM output presented as such.
However, I cannot reliably detect or verify all plagiarism either, especially when the original source (in this case, an AI production) is not publicly accessible.
Hence my questions:
- Should an answer be flagged and deleted (rather than just downvoted or commented on) once incidental evidence of plagiarism (including AI use) is discovered?
- Should an answer be flagged and deleted once its author openly admits that the entire answer is AI production?
- Would a good-faith, manually augmented AI production with an unclear production process be treated any differently from a merely pasted AI production?
- Is the helpfulness of an answer ever a relevant factor in the decision whether to delete?