Last month Google announced an AI “co-scientist,” a research assistant that “uses advanced reasoning to help scientists synthesize vast amounts of literature and generate novel hypotheses,” the company said. Put more simply, AI is getting smarter, and its applications are no longer limited to simple queries.
This came almost a year after the release of NotebookLM, a user-facing product in the same family of tools that Google considers part of the “agentic era” of AI experimentation.
While projects like Astra and Mariner showcase the breadth of that agentic era, Google’s research assistant has shown what reasoning tools can do for science itself: in two days, it cracked a hypothesis about antibiotic resistance that researchers had been investigating for a decade. The professor behind that discovery, José Penadés of Imperial College London, said the system had no access to his team’s unpublished results, meaning the AI reviewed the literature, analyzed data and generated hypotheses faster, and more accurately, than any tool before it.
“It’s not just that the top hypothesis they provide was the right one,” Penadés said. “It’s that they provide another four, and all of them made sense.”
Over the past few weeks, I have taken it upon myself to figure out how these advances will transform the way scientific knowledge is created, shared and understood. I prompted Sora, OpenAI’s text-to-video generator, to create scientific infographics, fed NotebookLM hundreds of publicly available academic papers and YouTube explainers and listened to panel discussions between journalists and scientists to gauge the sentiment behind such capabilities.
My conclusion: efficiency means little to nothing when the story lacks human verification. Here’s what I mean:
For science communicators, AI-driven tools reduce the barrier to understanding technical concepts. Journalists and educators can leverage AI to interpret research findings accurately, fact-check information and generate summaries that make science more approachable for the public.
However, reading science stories also means reading stories about public health and community impact. To make a story worthwhile for the reader, it must have a real-world anchor that turns abstract theory into something that can be felt.
Still, there are reasons to doubt these tools. The videos Sora generated from scientific research lacked accuracy, had few guardrails against misinformation and left me with the reporting instinct that letting them proliferate without oversight would be unethical.

The need for collaborative fact-checking between journalists and scientists has never been greater. Based on my qualitative testing of models that generate visuals simply to satisfy a prompt, I expect audience trust and humanized angles to remain at the forefront of effective science communication.