A dispute between Politico management and its union could set a precedent for how journalists use AI in newsrooms.
Last year, Politico started using AI tools to generate news summaries. This year, in partnership with Y Combinator-backed startup Capitol AI, it launched the Policy Intelligence Assistant for paying subscribers.
The union says this AI rollout violates their contract and claims no notice was given. Here’s PEN union chair Ariel Wittenberg, quoted in Wired:
The company is required to give us 60 days notice of any use of new technology that will materially and substantively impact bargaining unit job duties.
And here’s Arianna Skibell, the union’s vice chair for contract enforcement:
Politico’s contract stipulates that the publication needs to use AI in a manner that follows the company’s standards of journalistic ethics. We’re not against AI, but it should be held to the same ethical and style standards as our political journalists.
The gist of the 404 Media story is that a syndicated summer guide (things to do, books to read, etc.) published in the Chicago Sun-Times and the Philadelphia Inquirer was filled with AI-generated fake books and misinformation.
The independent tech news site also released a podcast episode, AI Slop Summer.
Jason Koebler, author of the 404 Media story, has been posting behind-the-scenes updates on Bluesky.
I spoke to the person who AI-generated the Chicago Sun-Times reading list. Says he's very embarrassed. This was part of a generic package inserted into newspapers and other publications, so likely to run elsewhere. He didn't know it'd be in Chicago Sun-Times
www.404media.co/chicago-sun-...
The Chicago Sun-Times issued a response in a Bluesky thread.
On Sunday, May 18, the print and e-paper editions of the Chicago Sun-Times included a special section titled the Heat Index: Your Guide to the Best of Summer, featuring a summer reading list that our circulation department licensed from a national content partner. 🧵
Look at the comments. One unhappy subscriber responded:
Nowhere in this “What We Are Doing” do I see a pledge to your subscribers, of which I am one, to not use AI-generated content, either in-house or from 3rd-party providers. And if a syndicator can’t make that same pledge, their content doesn’t belong in our newspaper. Not so hard, is it?
Others asked about hiring actual writers and journalists. Among them was a writer who lost her job to AI.
I’m a writer who was recently laid off from my full time job and “replaced” by AI.
Spoiler alert, AI can’t produce quality content the way humans can, and any journalist worthy of the title would never submit anything without fact checking.
Fortunately, I’m available for hire! 😊
A headline from The Atlantic asked the question on everyone's mind: “AI in Newspapers. How Did This Happen?” The article surfaced the many issues that plague the industry.
There are layers to this story, all of them a depressing case study. The very existence of a package like “Heat Index” is the result of a local-media industry that’s been hollowed out by the internet, plummeting advertising, private-equity firms, and a lack of investment and interest in regional newspapers. In this precarious environment, thinned-out and underpaid editorial staff under constant threat of layoffs and with few resources are forced to cut corners for publishers who are frantically trying to turn a profit in a dying industry. It stands to reason that some of these harried staffers, and any freelancers they employ, now armed with automated tools such as generative AI, would use them to stay afloat.
Another, from Nieman Lab, featured Reddit comments, including one from a subscriber:
“Do they use AI consistently in their work? How did the editors … not catch this?” Reddit user xxxlovelit wrote. “As a subscriber, I am livid! What is the point of subscribing to a hard copy paper if they are just going to include AI slop too!?”
That's from Susie Cagle, a writer and artist for ProPublica, The Guardian, Wired, and The Nation. It's part of CJR's collected viewpoints piece, How We're Using AI, with "we're" referring to reporters, editors, and media executives.
Sidenote: A hand, not a brain. I've been mulling over my use of GenAI. As a hand, it extends what I'm capable of. Because it's only a hand, I still get to decide, and my vision isn't dimmed.
There's an important bit about the modern newsroom, as depicted by Claire Leibowicz, Head of the AI and Media Integrity Program at the nonprofit Partnership on AI:
Newsrooms are simultaneously preparing for threats (e.g., by cryptographically certifying their media to assert its authenticity) and embracing AI as a way to reduce costs, tell stories, and even build trust with audiences and reimagine the news. Sometimes this embrace seems pragmatic, evidence-based, and even revolutionary. At others, it seems like an overeager corrective to a collective sense that newsrooms missed the social media moment—an impulsive fix for the industry’s business-model woes.
Those threats are real, with grave consequences. Among those cited are plagiarism, layoffs, the dulling of skill sets, environmental costs, AI slop, and the homogenization of global news. No wonder we'll continue to worry about GenAI even as newsrooms find LLMs useful for transcription, language translation, and even for finding "needles of corruption in the haystacks of data produced by political campaigns."