
AI in the newsroom, some viewpoints

AI should be considered a hand, not a brain.

That's from Susie Cagle, a writer and artist for ProPublica, The Guardian, Wired, and The Nation. It's part of CJR's collected viewpoints piece, How We're Using AI, with "we're" referring to reporters, editors, and media executives.

Sidenote: A hand, not a brain. I've been mulling over my use of GenAI. As a hand, it extends what I'm capable of. As a hand, I still get to decide and not have my vision dimmed.

There's an important bit about the modern newsroom, as depicted by Claire Leibowicz, Head of the AI and Media Integrity Program at the nonprofit Partnership on AI:

Newsrooms are simultaneously preparing for threats (e.g., by cryptographically certifying their media to assert its authenticity) and embracing AI as a way to reduce costs, tell stories, and even build trust with audiences and reimagine the news. Sometimes this embrace seems pragmatic, evidence-based, and even revolutionary. At others, it seems like an overeager corrective to a collective sense that newsrooms missed the social media moment—an impulsive fix for the industry’s business-model woes.

Those threats are real, with grave consequences. Among those cited are plagiarism, layoffs, dulling of skill sets, environmental costs, AI slop, and homogenization of global news. No wonder we'll continue to worry about GenAI even as newsrooms find LLMs useful for transcription, language translation, and even for finding "needles of corruption in the haystacks of data produced by political campaigns."

Mark Zuck wants you to have AI friends

He thinks it will be "really compelling" once "the AI starts to get to know you better and better." All this from an interview with Dwarkesh Patel on AI.

It's hollow and unsettling and could rob people of the chance to build genuine, messy friendships.

Here's what's actually compelling. Eric Hal Schwartz's TechRadar piece on Zuck missing the point of AI and the point of friendship:

But compelling conversation doesn't mean real friendship. AI isn’t your friend. It can’t be. And the more we try to make it one, the more we end up misunderstanding both AI and actual friendship. AI is a tool. An amazing, occasionally dazzling, often frustrating tool, but a tool no different than your text message autocomplete or your handy Swiss Army knife. It's designed to assist you and make your life easier.

It’s not a being. It has no inner monologue. It’s all surface and syntax. A robotic parrot that reads the internet instead of mimicking your catchphrases. Mimicry and scripted empathy are not real connections. They're just performance without sentience.

Some pointers on how to use AI at work

You’ll know it when you see it: a text or a whole chunk of email completely generated or partially polished by AI. What makes it so? There are tells, according to this and this. It’s a shame that I now have to think twice before using the long dash.

Laziness and sloppy writing aside, we do need to talk about AI norms at work.

Kevin Delaney at Charter wrote a 4-point summary on “How to use ChatGPT without being annoying”:

1. Don't be AI’s middleman. Any task you’re using genAI for should still involve some effort. If you’re brainstorming with colleagues, don’t send them the 20 ideas ChatGPT or Claude gives you. (Admitting you used the tool doesn’t make this much better.) Select the best ideas and then send those to your colleagues, along with a description of what you think about each.

2. Verify facts. It’s well established at this point that genAI tools occasionally make things up. If you would have been embarrassed to share a document with errors before genAI, you should feel the same way now.

3. Ask yourself, “Would I accept this level of quality from a colleague?” If the answer is no, don’t pass it along yet; edit it until you’re happy with the output, then send it to them. 

4. Provide context the AI tool doesn’t have. You know things about your company and the project you’re working on that genAI tools aren't privy to. Give them that context in your prompts; edit what they give you to make it work for your company.

Duolingo goes AI-first

Duolingo may have brought its owl mascot back to life, but there’s no bringing back the jobs lost to AI. The company’s CEO just announced they’re “going to become AI-first,” when, according to Blood in the Machine author Brian Merchant, they already are. Translators and writers were let go months ago.

From Merchant’s piece on Substack, The AI jobs crisis is here, now:

Well, I have bad news. The AI jobs crisis has arrived. It’s here, right now. It just doesn’t look quite like many expected it to.

The AI jobs crisis does not, as I’ve written before, look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s DOGE firing tens of thousands of federal employees while waving the banner of “an AI-first strategy.”

Empathy in the AI debate

I've always associated equanimity with stillness. This essay reframes it entirely. It's a mobility of the mind, says the author, not stillness.

Equanimity is best recognised by its inherent mobility of perspective-taking. As a mode of perception, equanimity is on the move, looking over things (internal and external) with hovering attention. Equanimity is not about serenely settling. It is not averse to the presence of judgments, just to their rigidifying. It ranges over whatever may appear, fluidly noticing, for example, both the disturbing and the tranquil with the awareness that one depends on the other for its meaning.

Why does it matter? Because, in our polarized landscape, anything that helps dissolve hardened dogmatism is a gift.

I appreciate this "mobility of the mind" perspective even more after reading Paul Ford’s piece, Accepting All the AI Opinions. We're missing empathy in the AI debate. We're too quick to pick sides when we should be trying to imagine other people's corners:

“AI” is not just one big thing, but a set of intense, wild, overlapping reactions to a technology that humans created. As I’ve learned more, I’ve come to realize that I, too, can only see my little corner of the weird new world.

Are we losing our critical thinking skills to AI?

An OSINT analyst’s piece, pointing to a Microsoft Research and Carnegie Mellon paper, begins with a troubling scene of how we might work:

I’ve seen it firsthand, analysts running solid investigations, then slowly shifting more and more of the thinking to GenAI tools. At first, it’s small. You use ChatGPT to summarise a document or translate a foreign post. Then it’s helping draft your reports. Then it’s generating leads. And eventually, you’re not thinking as critically as you used to. You’re verifying less, questioning less, relying more. We tell ourselves we’re “working smarter.” But somewhere along the way, we stop noticing how much of the actual thinking is being offloaded.

Like any other anxious white-collar worker, I'm no stranger to what an AI-augmented workday could look like. But what drove me to the original research was this:

Confidence in AI replaces confidence in self and with it, the thinking disappears.

On AGI and our unthinkable future

John Herrman, New York Magazine:

AGI, like G-less AI, automation, and even mechanization, are indeed stories, but they’re also sequels: This time, the technology isn’t just inconceivable and inevitable; it’s anthropomorphized and given a will of its own. If mechanization conjured images of factories, automation conjured images of factories without people, and AI conjured humanoid machine assistants, AGI and ASI conjure an economy, and a wider world, in which humans are either made limitlessly rich and powerful by superhuman machines or dominated and subjugated (or perhaps even killed) by them (Industrial Revolution 3: The Robot Awakens). In imagining centralized machine authoritarianism in the future, AGI creates a sort of authoritarian, exclusionary discourse now. A narrative emerges in which the decisions of AGI stakeholders — AI firms, their investors, and maybe a few government leaders — are all that matter. The rest of us inhabit the roles of subject and audience but not author.

2025 AI Index report: the majority did not feel threatened by AI

IEEE Spectrum distilled Stanford University's 2025 AI Index report. All 400+ pages of it.

Here are the highlights: 12 graphs that explain AI’s current landscape.

What surprised me the most was the last graph. The majority of the global survey respondents did not feel threatened by AI.

While 60 percent of respondents from 32 countries believe that AI will change how they do their jobs, only 36 percent expected to be replaced.

[NYT Opinion] AI is a parasite

Tressie McMillan Cottom's NYT piece reframes AI with a biological metaphor that sticks:

A.I. is a parasite. It attaches itself to a robust learning ecosystem and speeds up some parts of the decision process. The parasite and the host can peacefully coexist as long as the parasite does not starve its host. The political problem with A.I.'s hype is that its most compelling use case is starving the host — fewer teachers, fewer degrees, fewer workers, fewer healthy information environments.