As a profession, journalism rarely considers its impact directly. The idea of tracking the effects of journalism is old, beginning in discussions of the newly professionalized press in the early 20th century and flowering in the “agenda-setting” research of the 1970s. But pre-Internet, there was usually no way to know what happened to a story after it was published, and the question seems to have been mostly ignored for a very long time. The metrics newsrooms have traditionally used tended to be fairly imprecise: Did a law change? It’s also very hard to track what happens to a story once it is released into the wild, and even harder to know for sure whether any particular change was really caused by that story. But it’s almost certainly possible to do better than nothing.

Asking about impact gets us to the idea that the journalistic task might not be complete until a story changes something in the thoughts or actions of the user. A study conducted after the mid-term elections showed that a large fraction of voters were misinformed about basic issues, such as the expert consensus on climate change or the predicted costs of the recently passed healthcare bill. Though coverage of the study focused on the fact that Fox News viewers scored worse than others, that missed the point: no news source came out particularly well.

Now a major newsroom is publicly asking the question, and the problem is figuring out which data to pay attention to and which to ignore. Metrics are powerful tools for insight and decision-making, but they are not ends in themselves, because they will never exactly represent what is important. Choosing metrics poorly, or misunderstanding their limitations, can make things worse. That’s why the first step in choosing metrics is to articulate what you want to measure, regardless of whether or not there’s an easy way to measure it.