
AI ethics

A collection of 2 issues

Mark Zuck wants you to have AI friends

He thinks AI friends will be "really compelling" once "the AI starts to get to know you better and better." All this from his interview about AI with Dwarkesh Patel.

The idea is hollow and unsettling, and it could rob people of the chance to build genuine, messy friendships.

Here's what's actually compelling. In his TechRadar piece, Eric Hal Schwartz argues that Zuck misses the point of AI and the point of friendship:

But compelling conversation doesn't mean real friendship. AI isn’t your friend. It can’t be. And the more we try to make it one, the more we end up misunderstanding both AI and actual friendship. AI is a tool. An amazing, occasionally dazzling, often frustrating tool, but a tool no different than your text message autocomplete or your handy Swiss Army knife. It's designed to assist you and make your life easier.

It’s not a being. It has no inner monologue. It’s all surface and syntax. A robotic parrot that reads the internet instead of mimicking your catchphrases. Mimicry and scripted empathy are not real connections. They're just performance without sentience.

When AI knows exactly which buttons to push

Researchers at the University of Zurich wanted to know whether LLMs can persuade internet users to change their minds. They targeted r/ChangeMyView, a popular subreddit designed specifically for that purpose, and had an AI craft arguments tailored to each Redditor’s posting history.

Did it work? We may never know for sure, since no paper from the unauthorized experiment will be published. But here's what the preliminary findings revealed, as reported in The Atlantic:

In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)

The subreddit's moderators weren't happy. They published a comprehensive post in response, requesting, among other things, an apology from the researchers.

What makes this whole thing deeply troubling:

The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

All of this becomes even more troubling when paired with a recent New York Times report that AI hallucinations are getting worse as new reasoning systems come into use.