Research Update and a Peek into the Crew’s Log
I’ve picked up my research on modular journalism from where I left off last fall and tested an AI agent pipeline to generate modular content by transforming existing news artifacts. I ran the full flow to get a general feel for the approach, then started building each piece for real. I discovered a few interesting things along the way:
👉 AI agent generation based on a deeply structured news system is far more precise and disciplined, and yields richer results, than garden-variety AI agent generation.
👉 Training AI agents to work with structured news is a task for journalism expertise, not engineering.
👉 More precision requires a deeper, more fine-grained taxonomy, not more engineering. The modular API tripled its number of entities during the project and is now the core of the prompting (a sketch of how structured entities can feed a prompt follows after this list).
👉 The bias detector, one of the agents in the pipeline, is tasked with spotting rhetorical patterns such as biased framing, loaded language, or missing context solely through lexical and structural cues, not by looking at facts, people, or truth. Even at this early stage, it shows how deeply rooted these practices are in almost all of the journalistic content we consume (see the cue-scanning sketch after this list).
👉 In some countries and news markets — those without a BBC or an AP — biased framing, loaded language, or missing context reach extreme levels and are the accepted norm, even for ethical and high-profile organizations.
👉 AI agents and structured news allow us to reinvent journalism for users in ways we wouldn’t be able to—or have the courage to—without them.
👉 Dysfunctional journalism seems to have many lexical and structural patterns in common with sloppy journalism and propaganda.
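To make the taxonomy point concrete, here is a minimal sketch of what taxonomy-driven prompting can look like. The `Module` type, the entity kinds, and the prompt wording are placeholders I am using for illustration; they are not the actual modular API, which has far more entities.

```python
# Minimal sketch of taxonomy-driven prompting.
# All entity kinds and attributes here are hypothetical illustrations,
# not the real modular API schema.
from dataclasses import dataclass, field


@dataclass
class Module:
    """One structured unit extracted from a news artifact."""
    kind: str          # e.g. "claim", "quote", "context", "timeline_event"
    text: str
    attributes: dict = field(default_factory=dict)


def build_prompt(modules: list[Module], task: str) -> str:
    """Turn structured modules into a prompt; the taxonomy, not free text,
    carries the precision."""
    lines = [f"Task: {task}", "Source modules:"]
    for m in modules:
        attrs = ", ".join(f"{k}={v}" for k, v in m.attributes.items())
        lines.append(f"- [{m.kind}] {m.text}" + (f" ({attrs})" if attrs else ""))
    lines.append("Use only the modules above; do not introduce outside facts.")
    return "\n".join(lines)


if __name__ == "__main__":
    modules = [
        Module("claim", "The council approved the budget on Tuesday.",
               {"source": "council minutes"}),
        Module("quote", "\"We had no other option,\" the mayor said.",
               {"speaker": "mayor"}),
    ]
    print(build_prompt(modules, "Write a neutral two-sentence summary."))
```

The point of the sketch is that the agent never sees a raw article, only typed modules, which is where the extra precision and discipline come from.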
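And here is a rough idea of the kind of checks behind the bias detector. This is a rule-based stand-in, not the agent itself, and the cue lists are invented for the example; the actual detector works over a much richer set of lexical and structural patterns.

```python
# Rule-based stand-in for the bias-detector agent: it looks only at
# lexical and structural cues, never at facts, people, or truth.
# The cue lists below are illustrative placeholders, not the project's lexicon.
import re

LOADED_TERMS = {"slammed", "radical", "chaos", "so-called", "regime"}
UNHEDGED_ABSOLUTES = {"always", "never", "everyone", "undeniably"}


def detect_cues(paragraph: str) -> list[str]:
    findings = []
    words = {w.lower().strip(".,!?\"'") for w in paragraph.split()}

    loaded = words & LOADED_TERMS
    if loaded:
        findings.append(f"loaded language: {sorted(loaded)}")

    absolutes = words & UNHEDGED_ABSOLUTES
    if absolutes:
        findings.append(f"unhedged absolutes: {sorted(absolutes)}")

    # Structural cue: no attribution marker anywhere in the paragraph,
    # a possible sign of missing context.
    if not re.search(r"\b(said|according to|reported|stated)\b", paragraph, re.I):
        findings.append("no attribution marker in paragraph")

    return findings


if __name__ == "__main__":
    sample = "The radical council slammed critics and plunged the city into chaos."
    for finding in detect_cues(sample):
        print("-", finding)
```

Even a toy scan like this flags most of the sample paragraphs I fed it, which matches how pervasive these patterns turned out to be.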