Head-to-head results
In our experiments, we explored whether AI could tell compelling stories. We prompted ChatGPT with descriptions from published studies to generate three narratives, then asked more than 2,000 participants to read the stories and rate how engaged they were. Half of the stories were labeled as AI-written and half as human-written, independent of who actually wrote them.
Our results were mixed. Across three experiments, participants generally found human-written stories more “transporting” than AI-generated ones, regardless of how the source was labeled. Yet they were no more likely to question AI-generated stories; in several cases, they challenged them even less than human-written ones. The one clear finding was that labeling a story as AI-written made it less appealing to participants and prompted more skepticism, no matter who the actual author was.