The Latest

An editorial expert on the future of editing jobs in the age of AI

Samantha Enslen owns and leads Dragonfly Editorial, a content marketing agency.

In a Grammar Girl podcast episode, she said she candidly tells clients when to use AI instead of hiring her team. When asked how AI might shrink her business, she admitted not knowing.

Her uncertainty is so relatable.

Again, the honest answer is "I don't know." Yes, it could be that two years from now, five years from now, our agency is 20% smaller because that's how work has changed. And you know that's what the market will bear. What I'm gambling on slash hoping is that it's the opposite. We will be freed up from some of the more mundane tasks so our staff has more time to focus on higher value things—writing, substantive editing, project management, proposal management, strategizing, creating... not just sitting and writing a blog but what's the whole content strategy, marketing and communication strategy that the company has for the whole year?

I track discussions about AI's impact on jobs, but I've grown skeptical of absolute predictions. Maybe what we need are more narratives like those in Studs Terkel's Working, honest accounts of how people feel about what they do.

True fans are willing to pay to read newsletters

Yes, even in this economy.

This stands out in Logan Sachon’s NYT piece, How Much Do People Pay for Newsletters Like Substack? It Can Be Surprising.

It doesn't explicitly reference Kevin Kelly's 1,000 True Fans concept, but it perfectly illustrates it.

In the last few years, more people are spending a significant amount of money on email newsletters from their favorite writers. As a result, some have also fallen into a familiar budgeting trap: It can be difficult to keep track of how many newsletters they’ve signed up for and how much they’re paying for them.

Despite the surprise, Ms. Hermann-Johnson didn’t consider culling her list. As she read through her paid newsletters — among them newsletters from Nora McInerny, a grief writer; Laura McKowen, a sobriety writer; and Catherine Newman, a memoirist and novelist — there were no surprises. All were writers she read, loved and felt good about giving money to.

“I just want to support them and their work, and that’s how I feel like I can do it,” she said.

It’s not just about the budgeting trap. Many of the most successful web services know exactly where to put friction.

Subscription newsletter platforms like Substack don’t show users a running total of what they’re spending on paid newsletters. That's friction. But there's no friction when it comes to finding and subscribing to newsletters. It takes only a few clicks.
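To make the asymmetry concrete, here's a minimal sketch of the running total these platforms don't surface. The newsletter names and prices are hypothetical examples, not real Substack data:

```python
# Hypothetical paid-newsletter subscriptions and their monthly prices (USD).
subscriptions = {
    "grief-weekly": 7.00,
    "sobriety-notes": 5.00,
    "memoir-dispatch": 6.50,
}

# The number a subscriber rarely sees in one place:
monthly_total = sum(subscriptions.values())
yearly_total = monthly_total * 12

print(f"Monthly: ${monthly_total:.2f}")  # Monthly: $18.50
print(f"Yearly:  ${yearly_total:.2f}")   # Yearly:  $222.00
```

Three modest subscriptions quietly add up to more than $200 a year, which is exactly the budgeting trap the NYT piece describes.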

The model works because, again, true fans are more than willing to support authors who deliver consistently.

In The State of the Email Newsletters report by the What If Media Group, 20.4% of subscribers were willing to pay up to $120 per year for a newsletter with an ad-free reading experience.

Those who stay on budget have adopted a tactic familiar to video streaming subscribers: pausing or rotating subscriptions.

Here’s a revealing example from the NYT article:

Some subscribers know exactly whom they are paying for, and even develop systems to spread their dollars around.

Phyllis Unterschuetz, 76, is a retiree in Atlanta. “My husband and I are living on Social Security, which does not reach,” she said.

She answers paid online surveys to fund her newsletter subscriptions and can earn enough to afford three to five at a time. When she feels it’s time to rotate to another publication, she sends a note to explain that she is canceling not because of the content but because she needs to free up dollars to support other writers.

AI in the newsroom, some viewpoints

AI should be considered a hand, not a brain.

That's from Susie Cagle, a writer and artist whose work has appeared in ProPublica, The Guardian, Wired, and The Nation. It's part of CJR's collected viewpoints piece, How We're Using AI, with "we're" referring to reporters, editors, and media executives.

Sidenote: A hand, not a brain. I've been mulling over my use of GenAI. As a hand, it extends what I'm capable of. As a hand, I still get to decide and not have my vision dimmed.

There's an important bit about the modern newsroom, as depicted by Claire Leibowicz, Head of the AI and Media Integrity Program at the nonprofit Partnership on AI:

Newsrooms are simultaneously preparing for threats (e.g., by cryptographically certifying their media to assert its authenticity) and embracing AI as a way to reduce costs, tell stories, and even build trust with audiences and reimagine the news. Sometimes this embrace seems pragmatic, evidence-based, and even revolutionary. At others, it seems like an overeager corrective to a collective sense that newsrooms missed the social media moment—an impulsive fix for the industry’s business-model woes.

Those threats are real, with grave consequences. Among those cited are plagiarism, layoffs, dulling of skill sets, environmental costs, AI slop, and homogenization of global news. No wonder we'll continue to worry about GenAI even as newsrooms find LLMs useful for transcription, language translation, and even for finding "needles of corruption in the haystacks of data produced by political campaigns."

Mark Zuck wants you to have AI friends

He thinks it will be "really compelling" once "the AI starts to get to know you better and better." All this from an interview with Dwarkesh Patel on AI.

It's hollow and unsettling and could rob people of the chance to build genuine, messy friendships.

Here's what's actually compelling. Eric Hal Schwartz's TechRadar piece on Zuck missing the point of AI and the point of friendship:

But compelling conversation doesn't mean real friendship. AI isn’t your friend. It can’t be. And the more we try to make it one, the more we end up misunderstanding both AI and actual friendship. AI is a tool. An amazing, occasionally dazzling, often frustrating tool, but a tool no different than your text message autocomplete or your handy Swiss Army knife. It's designed to assist you and make your life easier.

It’s not a being. It has no inner monologue. It’s all surface and syntax. A robotic parrot that reads the internet instead of mimicking your catchphrases. Mimicry and scripted empathy are not real connections. They're just performance without sentience.

When AI knows exactly which buttons to push

Researchers at the University of Zurich wanted to know whether LLMs can persuade internet users to change their minds. They targeted r/ChangeMyView, a popular subreddit designed for exactly that purpose, and asked AI to craft arguments based on each Redditor’s posting history.

Did it work? Well, no paper from the unauthorized experiment will be published. But here's what preliminary findings revealed, as reported in The Atlantic:

In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed.⁠⁠ Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)

Moderators weren't happy. They published a comprehensive post in response, demanding an apology, among other things.

What makes this whole thing deeply troubling:

The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

This becomes more troubling when paired with a recent NYT finding that AI hallucinations are worsening with the use of new reasoning systems.

On peak newsletter and subscription fatigue

Vanity Fair declared we're at peak newsletter back in 2019. Yet an Axios co-founder made a counter-argument in 2022: "It’s not peak newsletters — it’s the end of weak newsletters."

Clearly, the humble email newsletter is not yet done. Substack, Ghost, and Beehiiv are fueling the independent creator and alternative media era. With more newsletters launching daily, I can't help but ask: Have we finally reached peak newsletter?

In a podcast episode's final few minutes, Ghost founder John O'Nolan says subscription fatigue is possible, but we're not there yet.

From Dot Social with Mike McCue, in an interview with John O'Nolan:

We call it subscription fatigue. This theory that once everything's a subscription, you don't want any more subscriptions. We have thought about it, but what we've seen play out so far is it doesn't really seem to happen, or at least not in the space we operate in.

The reason for that is that it's not really 50 people subscribing all to the same three or ten publications. It's ten publications and 50 people, and each subscribes to something different. You might have this handful of really popular creators that everyone subscribes to, but broadly the way we see people interacting with subscriptions is far more niche.

They're all subscribed to different stuff. There's not necessarily a giant overlap. It's not quite like having Apple TV, Netflix, Prime, and whatever else subscription. It's more like following three individuals who cater to my specific hobbies, where I can't get that content anywhere else. I can only get it from this person.

Goodbye, Skype, you were amazing

Here's a piece from Rest of World, with readers reminiscing about how life-changing Skype was.

At its peak, Skype had about 300 million users around the world. But it was a product of the desktop era, and as users went mobile, Skype lost its edge to upstarts like WhatsApp and FaceTime. Today, the app is forgotten on most phones and computers, particularly in the West.

Skype was crucial to Estonia becoming the world's first digital nation. It changed how we stayed connected abroad, allowing users to call friends and family for cheap. It created opportunities for Filipino teachers to offer one-on-one English lessons to students across Asia.

I remember Skype interviews and meetings in my early days as a freelancer back in the 2010s.

Thank you, Skype.

Some pointers on how to use AI at work

You’ll know it when you see it: a text or an entire email, fully generated or partially polished by AI. What gives it away? There are tells, according to this and this. It’s a shame that I now have to think twice before using the em dash.

Laziness and sloppy writing aside, we do need to talk about AI norms at work.

Kevin Delaney at Charter wrote a four-point summary on “How to use ChatGPT without being annoying”:

1. Don't be AI’s middleman. Any task you’re using genAI for should still involve some effort. If you’re brainstorming with colleagues, don’t send them the 20 ideas ChatGPT or Claude gives you. (Admitting you used the tool doesn’t make this much better.) Select the best ideas and then send those to your colleagues, along with a description of what you think about each.

2. Verify facts. It’s well established at this point that genAI tools occasionally make things up. If you would have been embarrassed to share a document with errors before genAI, you should feel the same way now.

3. Ask yourself, “Would I accept this level of quality from a colleague?” If the answer is no, don’t pass it along yet; edit it until you’re happy with the output, then send it to them. 

4. Provide context the AI tool doesn’t have. You know things about your company and the project you’re working on that genAI tools aren't privy to. Give them that context in your prompts; edit what they give you to make it work for your company.

Duolingo goes AI-first

Duolingo may have brought its owl mascot back to life, but there’s no bringing back the jobs lost to AI. The company’s CEO just announced they’re “going to become AI-first,” though according to Blood in the Machine author Brian Merchant, they already are. Translators and writers were let go months ago.

From Merchant’s piece on Substack, The AI jobs crisis is here, now:

Well, I have bad news. The AI jobs crisis has arrived. It’s here, right now. It just doesn’t look quite like many expected it to.

The AI jobs crisis does not, as I’ve written before, look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s DOGE firing tens of thousands of federal employees while waving the banner of “an AI-first strategy.”

Against journalism for journalism's sake

Patrick Boehler challenges journalism's self-importance and calls for a media reset: Stop pretending journalism matters on its own.

For media funders, this shift could be transformative. Rather than perpetuating trickle-down systems of generic content creator patronage and rent-seeking, they could support ventures that demonstrate real utility to communities. The strongest media organizations I've encountered understand that people don't seek out journalism for journalism's sake - they want solutions to problems, ways to improve their lives, recognition, community.

Wonderful short read. Subscribe to Patrick's newsletter here.