Detecting User Effects and Mapping Information Needs in Automated News Pipelines Using AI Agents

Modular journalism is the practice of structuring news into discrete, meaningful units, each designed to fulfill a specific user information need. By combining AI agents with editorial logic, we can detect bias, assess story readiness, and generate tailored coverage.

Research Update and a Peek into the Crew’s Log

I’ve picked up my research on modular journalism from where I left off last fall and tested an AI agent pipeline to generate modular content by transforming existing news artifacts. I ran the full flow to get a general feel for the approach, then started building each piece for real. I discovered a few interesting things along the way:

👉 AI agent generation based on a deeply structured news system is infinitely more precise and disciplined, and yields richer results than garden-variety AI agent generation.

👉 Training AI agents to work with structured news is a task for journalism expertise, not engineering.

👉 More precision requires a deeper and more precise taxonomy, not more engineering. The modular API tripled its number of entities during the project and is now the core of the prompting (see the first sketch after this list).

👉 The bias detector, one of the agents in the pipeline, spots rhetorical patterns such as biased framing, loaded language, and missing context solely through lexical and structural cues, not by checking facts, people, or truth. Even at this early stage, it shows how deeply rooted these practices are in almost all the journalistic content we consume (the second sketch after this list illustrates the cue-based idea).

👉 In some countries and news markets — those without a BBC or an AP — biased framing, loaded language, or missing context reach extreme levels and are the accepted norm, even for ethical and high-profile organizations.

👉 AI agents and structured news allow us to reinvent journalism for users in ways we wouldn’t be able to—or have the courage to—without them.

👉 Dysfunctional journalism seems to have many lexical and structural patterns in common with sloppy journalism and propaganda.
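
To make the taxonomy point concrete, here is a minimal sketch of what one entity in a user-needs taxonomy might look like and how it could steer a prompt. All names and fields (`ModuleType`, `user_need`, `prompt_guidance`) are illustrative assumptions of mine; the project's actual modular API schema is not shown here.

```python
from dataclasses import dataclass

@dataclass
class ModuleType:
    """One entity in a hypothetical modular-news taxonomy.

    Field names are illustrative; the real modular API schema differs.
    """
    name: str              # e.g. "context", "key_fact", "perspective"
    user_need: str         # the user information need this module serves
    prompt_guidance: str   # editorial rules injected into the agent prompt

# A richer taxonomy means richer prompts: each entity carries its own
# editorial instructions, so the agent is steered by journalism
# expertise rather than by extra engineering.
TAXONOMY = [
    ModuleType(
        name="context",
        user_need="Help me understand why this matters",
        prompt_guidance="Summarize background; attribute claims; avoid framing language.",
    ),
    ModuleType(
        name="key_fact",
        user_need="Keep me up to date",
        prompt_guidance="One verifiable fact per module; cite the source artifact.",
    ),
]

def build_prompt(module: ModuleType, article_text: str) -> str:
    """Compose an agent prompt from a taxonomy entity plus a source article."""
    return (
        f"You are generating a '{module.name}' module.\n"
        f"User need: {module.user_need}\n"
        f"Editorial rules: {module.prompt_guidance}\n\n"
        f"Source article:\n{article_text}"
    )

if __name__ == "__main__":
    print(build_prompt(TAXONOMY[0], "…article text…"))
```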
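
And to illustrate what "lexical and structural cues, not facts" can mean in practice, here is a toy, rule-based sketch of sentence-level cue scoring. The real detector is an LLM agent; the cue lists and thresholds below are invented for illustration only.

```python
import re

# Illustrative cue lists: invented for this sketch, not the agent's real lexicon.
LOADED_WORDS = {"disastrous", "shocking", "outrageous", "tantrum", "reckless"}
ABSOLUTES = {"all", "every", "never", "always", "most", "infinitely"}
HEDGES = ("reportedly", "allegedly", "according to", "suggests", "appears")

def score_sentence(sentence: str) -> str:
    """Label one sentence 'red', 'warning', or 'ok' from surface cues alone.

    Nothing here checks facts, people, or truth: it only counts loaded
    words and absolutes, and credits hedging or attribution.
    """
    lowered = sentence.lower()
    words = re.findall(r"[a-z']+", lowered)
    loaded = sum(w in LOADED_WORDS for w in words)
    absolutes = sum(w in ABSOLUTES for w in words)
    hedged = any(h in lowered for h in HEDGES)

    signal = loaded + absolutes - (1 if hedged else 0)
    if signal >= 2:
        return "red"
    if signal == 1:
        return "warning"
    return "ok"

if __name__ == "__main__":
    examples = [
        "The policy was a disastrous, shocking failure for all involved.",
        "The policy reportedly fell short of its targets, according to auditors.",
    ]
    for s in examples:
        print(f"{score_sentence(s):7} <- {s}")  # red <- ..., ok <- ...
```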

How I rewrote this list several times to fix my off-pitch cues

The first time I ran my list through the bias detector, I had a moment of irritation: every single sentence got a red card. Then I realized I had forgotten to tell the agent that this was technically an opinion piece, not news. I ran it again and still got a handful of red cards: too emphatic, too Italian, too many big words, not enough caveats, not enough data. Come on, agent, this is a short summary of a river of words; it's okay to cut some corners. The agent kept staring at me in silence, so, sentence by sentence, I changed my language to bring it at least into warning territory. The final result is the list above: two red marks are left, and I will live with them. I also realized I need Agent 2 whenever I feel opinionated, to read the social cues that are at first invisible to me.
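
For the curious, here is a hedged sketch of what that second run might look like, assuming a simple prompt protocol. `call_agent`, the GENRE header, and the verdict labels are my inventions, not the project's actual interface.

```python
def call_agent(prompt: str) -> str:
    """Stub standing in for the real LLM/agent call; wire to your client of choice."""
    return "(agent verdicts would appear here)"

def detect_bias(text: str, genre: str = "news") -> str:
    """Run the bias detector with the genre declared up front.

    Forgetting the genre line is what earned the first run its wall of
    red cards: opinion pieces are judged by looser rules than news.
    """
    prompt = (
        f"GENRE: {genre}\n"
        "For each sentence return one verdict: red / warning / ok.\n"
        "Judge only lexical and structural cues (framing, loaded language,\n"
        "missing attribution); never judge factual accuracy.\n\n"
        f"TEXT:\n{text}"
    )
    return call_agent(prompt)

# First run (genre defaulted to "news"): red cards everywhere.
# Second run: detect_bias(my_list, genre="opinion") -> still a handful of reds.
```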

👉 See more test results from the bias detector agent [here].


👉 See the latest update on the modular journalism research introducing AI agents [here].

👉 See the updated and expanded map of user information needs and user effects [here].

👉 See the theory behind this approach to user-needs-based modularity [here].

Trucks, tantrums, and the ballot box: why what Americans drive says how they vote