To determine whether the algorithms were characterizing the literature in a way that seemed correct, at least for this initial phase of our research, we chose a series of case studies that follow familiar patterns and illustrate common challenges of reporting on deadline. Some were studies we knew had been covered poorly by the press because the resulting news stories had been analyzed by scientists and/or journalists. (We relied on excellent analyses by sources such as HealthNewsReview, Neuroskeptic, and the Knight Science Journalism Tracker, and on input from our scientific advisors.) Others were studies on the same topic that found similar or different results, so we could see whether the algorithms could capture that similarity or difference. Still others were studies whose history we knew, or whose results we knew had held up (or not) over time, so we could check Science Surveyor’s characterizations against a known outcome.