AI at work

A collection of 5 issues

AI in the newsroom, some viewpoints

AI should be considered a hand, not a brain.

That's from Susie Cagle, a writer and artist for ProPublica, The Guardian, Wired, and The Nation. It's part of CJR's collected viewpoints piece, How We're Using AI, with "we're" referring to reporters, editors, and media executives.

Sidenote: A hand, not a brain. I've been mulling over my own use of GenAI. As a hand, it extends what I'm capable of. As a hand, it still lets me do the deciding and doesn't dim my vision.

There's an important bit about the modern newsroom, as depicted by Claire Leibowicz, Head of the AI and Media Integrity Program at the nonprofit Partnership on AI:

Newsrooms are simultaneously preparing for threats (e.g., by cryptographically certifying their media to assert its authenticity) and embracing AI as a way to reduce costs, tell stories, and even build trust with audiences and reimagine the news. Sometimes this embrace seems pragmatic, evidence-based, and even revolutionary. At others, it seems like an overeager corrective to a collective sense that newsrooms missed the social media moment—an impulsive fix for the industry’s business-model woes.

Those threats are real, with grave consequences. Among the threats cited are plagiarism, layoffs, dulled skill sets, environmental costs, AI slop, and the homogenization of global news. No wonder we'll continue to worry about GenAI even as newsrooms find LLMs useful for transcription, language translation, and even for finding "needles of corruption in the haystacks of data produced by political campaigns."
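That parenthetical about "cryptographically certifying their media" is worth unpacking. At its simplest, a newsroom signs the hash of a media file with a private key; anyone holding the published public key can then check that the file hasn't been altered since it was signed. Below is a minimal sketch in Python using the cryptography package. The file name is a stand-in, and real provenance efforts (such as the C2PA standard) go further by embedding signed metadata in the asset itself.

```python
# Minimal sketch of certifying a media file: sign its hash, verify it later.
# Illustrative only; real newsroom provenance uses standards like C2PA.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The newsroom generates a keypair once and publishes the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hash the media file ("photo.jpg" is hypothetical) and sign the digest.
media_bytes = open("photo.jpg", "rb").read()
digest = hashlib.sha256(media_bytes).digest()
signature = private_key.sign(digest)

# A reader re-hashes the copy they received and checks the signature.
try:
    public_key.verify(signature, hashlib.sha256(media_bytes).digest())
    print("Authentic: this file is byte-for-byte what the newsroom signed.")
except InvalidSignature:
    print("Altered, or not signed by this newsroom.")
```

Changing even a single byte of the file changes the digest and fails verification; that, not secrecy, is what "asserting authenticity" buys.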

Some pointers on how to use AI at work

You’ll know it when you see it: a text or a whole chunk of email, fully generated or partially polished by AI. What makes it so? There are tells, according to this and this. It’s a shame that I now have to think twice before using the em dash.

Laziness and sloppy writing aside, we do need to talk about AI norms at work.

Kevin Delaney at Charter wrote a 4-point summary on “How to use ChatGPT without being annoying”:

1. Don't be AI’s middleman. Any task you’re using genAI for should still involve some effort. If you’re brainstorming with colleagues, don’t send them the 20 ideas ChatGPT or Claude gives you. (Admitting you used the tool doesn’t make this much better.) Select the best ideas and then send those to your colleagues, along with a description of what you think about each.

2. Verify facts. It’s well established at this point that genAI tools occasionally make things up. If you would have been embarrassed to share a document with errors before genAI, you should feel the same way now.

3. Ask yourself, “Would I accept this level of quality from a colleague?” If the answer is no, don’t pass it along yet; edit it until you’re happy with the output, then send it to them. 

4. Provide context the AI tool doesn’t have. You know things about your company and the project you’re working on that genAI tools aren't privy to. Give them that context in your prompts; edit what they give you to make it work for your company.

Duolingo goes AI-first

Duolingo may have brought its owl mascot back to life, but there’s no bringing back the jobs lost to AI. The company’s CEO just announced they’re “going to become AI-first,” though according to Blood in the Machine author Brian Merchant, they already are: translators and writers were let go months ago.

From Merchant’s piece on Substack, The AI jobs crisis is here, now:

Well, I have bad news. The AI jobs crisis has arrived. It’s here, right now. It just doesn’t look quite like many expected it to.

The AI jobs crisis does not, as I’ve written before, look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s DOGE firing tens of thousands of federal employees while waving the banner of “an AI-first strategy.”

Are we losing our critical thinking skills to AI?

An OSINT analyst’s piece, pointing to a Microsoft Research and Carnegie Mellon paper, begins with a troubling picture of how we might end up working:

I’ve seen it firsthand, analysts running solid investigations, then slowly shifting more and more of the thinking to GenAI tools. At first, it’s small. You use ChatGPT to summarise a document or translate a foreign post. Then it’s helping draft your reports. Then it’s generating leads. And eventually, you’re not thinking as critically as you used to. You’re verifying less, questioning less, relying more. We tell ourselves we’re “working smarter.” But somewhere along the way, we stop noticing how much of the actual thinking is being offloaded.

Like any other anxious white-collar worker, I'm no stranger to what an AI-augmented workday could look like. But what drove me to the original research was this:

Confidence in AI replaces confidence in self and with it, the thinking disappears.

2025 AI Index report: the majority did not feel threatened by AI

IEEE Spectrum distilled Stanford University’s 2025 AI Index report. All 400+ pages of it.

Here are the highlights: 12 graphs that explain AI’s current landscape.

What surprised me most was the last graph: the majority of the global survey respondents did not feel threatened by AI.

While 60 percent of respondents from 32 countries believe that AI will change how they do their jobs, only 36 percent expect to be replaced.