Over the last year in particular, entire industries have been caught out by Generative AI, the branch of artificial intelligence that draws on everything already known to create something new. For a time, GenAI’s potential was constrained first by the materials the technology was exposed to, then by its ability to understand them in context. But with accelerating speed those limitations are being overcome.
This is giving rise to a challenging duality: the future is both already here and still unknown. Companies must step up their preparations for profound changes that, for now, remain intangible.
From explanation to exploration
New generations of GenAI are emerging almost monthly, even weekly. GenAI is now moving beyond contextual intelligence to distil and create new insight, swiftly developing the potential to understand and predict the previously unknowable.
From early conversational capabilities, through reasoning, the technology is already delivering ‘agentic’ capabilities (goal-driven abilities to act independently and make decisions, with human intervention only where needed). And there are early signs that “innovating AI” is emerging: AI capable of creating novel frameworks, generating fresh hypotheses, and pioneering new approaches. This creative potential pushes AI from merely processing information to actively shaping the future of scientific discovery, applying it to problems yet to be solved.
OpenAI, one of several major players vying for dominance, has just raised $40 billion in a new funding round, valuing the company at $300 billion – an unprecedented amount for a startup.
This level of investment underscores the belief that upcoming capabilities will be significant on a human scale. Meanwhile, the other GenAI foundation model players, including Anthropic, Google, Meta, and xAI, are hardly idle, and new heavy-hitters are emerging outside the US.
At the core of the latest advances is the accelerated pace of large language model (LLM) development. LLMs are deep learning models, trained on extensive data sets, that are capable of performing a range of natural language processing (NLP) and analysis tasks, including identifying complex data patterns, risks and anomalies.
In parallel, a growing movement towards open-source GenAI models is making the technology more accessible and customisable, alongside the option of proprietary GenAI models.
Crossing new frontiers in scientific discovery
In life sciences, there are persuasive reasons to keep pace with the latest developments and harness them as they evolve. Initially considered a technology in search of a purpose, GenAI is poised to become a gamechanger in scientific discovery and new knowledge generation – at speed and at scale.
In human intelligence terms, AI models have already surpassed human expert performance since the launch of ChatGPT in November 2022, as measured by the Massive Multitask Language Understanding (MMLU) benchmark (1).
Recent advances in agentic AI models have even created the need for a new benchmark, “Humanity’s Last Exam” (2), intended to be the final closed-ended academic measure of AI capabilities (we’ll see).
The promise of advanced reasoning, a highlighted benefit of DeepSeek’s latest AI model, has enormous scope in science, enabling logical inference and advanced decision-making. Google and OpenAI both offer Deep Research agents that go off and perform their own searches, combining reasoning and agentic capabilities.
As reasoning capabilities continue to improve, and as the technology becomes more context-aware, the potential to accelerate scientific discovery through the creation of new knowledge becomes real: the ability to project forward and ask “What if?” and “What next?”.
Already OpenAI’s Deep Research is optimised for intelligence gathering, data analysis and multi-step reasoning. It employs end-to-end reinforcement learning for complex search and synthesis tasks, effectively combining LLM reasoning with real-time internet browsing.
Meanwhile Google has recently introduced its AI co-scientist (3), a multi-agent AI system built with Gemini 2.0 as a “virtual scientific collaborator”. It is designed “to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries”. Given a research goal, it will suggest novel hypotheses and novel research plans. The “co-scientist” deploys specialised agents that each fulfil a different role a scientist would perform, while its “orchestrator” coordinates them, ready to break new scientific frontiers.
Embracing an uncertain future
With all of this potential, the strategic question for biopharma research and development (R&D) organisations becomes one of how to keep pace with these technology developments and build them into business as usual. In other words, how can they prepare for a future that is simultaneously already here and continuously shape-shifting?
Up to now, most established companies have been experimenting with GenAI to see how it might improve operational efficiency, accuracy, consistency and compliance across specific pain points in Safety/Pharmacovigilance, Regulatory, Quality, and some Clinical and Pre-Clinical processes. These initial pilots have been largely about becoming familiar with the technology and assessing its trustworthiness and value. Others have gone further, creating lab-like constructs for experimentation.
As valid as these approaches have been up to now, the hastening pace of technology development and the intangibility around what’s coming means that the industry now needs to embed AI more intrinsically within its infrastructure and culture. This is about becoming AI-ready and AI-first, rather than simply “receptive to” what the technology can do.
Being discerning about external AI expertise
When previously hyped technologies or business change models emerged (e.g. blockchain or Six Sigma), one approach was to “sprinkle” champions across the business. In the case of AI, some organisations are taking a venture-capital-like approach, bringing in non-native talent for key roles – visionaries and master-crafters from other industries where, historically, tech innovation runs deeper.
But AI is moving so quickly, and its likely impact is so fundamental to life sciences, that experts need to be “neck deep” in it to be of strategic value.
One of the biggest challenges is the duality companies are now grappling with: the simultaneous need to get moving with deeper AI use today, while gearing up for a tomorrow that is likely to look very different. This has widespread “change” implications, at the level of mindset and method, and from both a technical and cultural perspective – today and tomorrow.
For this reason, strategic partnerships are proving a safer route – with tech companies that are fully up to speed with the latest developments, enmeshed in the technology and its expanding applications, and actively building sector-specific solutions. Even so, companies will need to choose their AI advocates wisely, as “AI washing” is now commonplace among consultants and service providers.
The good news is that many IT and data teams in pharma are already well versed in the technology, with high ambitions for it. The key now is to bring AI’s potential to fruition where it could make a strategic difference. This will involve engaging with the organisation’s real problem areas and asking how emerging iterations of the technology might hold the key.
References and notes
1. Measuring Massive Multitask Language Understanding, Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021): https://arxiv.org/pdf/2009.03300
2. Humanity’s Last Exam, November 2024, https://agi.safe.ai/
3. Accelerating scientific breakthroughs with an AI co-scientist, Google Research blog, February 2025: https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/