
When AI knows exactly which buttons to push

Researchers at the University of Zurich wanted to know whether LLMs can persuade internet users to change their minds. They targeted r/ChangeMyView, a popular subreddit designed for exactly that purpose, and had AI craft arguments tailored to each Redditor’s posting history.

Did it work? Well, no paper from the unauthorized experiment will ever be published. But here's what the preliminary findings revealed, as reported in The Atlantic:

In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)

The subreddit's moderators weren't happy. They published a comprehensive post in response, demanding, among other things, an apology.

What makes this whole thing deeply troubling:

The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

This becomes even more troubling when paired with a recent NYT report that AI hallucinations are getting worse in the new reasoning systems.