The secret of successful AI pilots in R&D
January 13, 2026 | 8 min read
By Ann-Marie Roche

Around 88% of AI pilot programs in R&D fail. What are the secrets of the 12% that take flight?
“Tools without users don’t generate any value,” warns Birgit “Bea” Braun, R&D Fellow for Digital Innovation at Dow Chemical, as she gets to the heart of the challenges of making corporate research work with artificial intelligence. “Any data foundation without insights doesn’t create value either.”
Although these observations may appear obvious, they point to exactly where most organizations stumble.
The gap between hype and execution
A recent webinar, ‘AI in R&D workflows: lessons learned, common pitfalls, and what’s next,’ brought together Bea and experts from Elsevier to discuss what sets the successful application of AI apart from costly failures. Their insights show a more complex path forward than the hype implies.
The stakes are high. Jelena Sevo, Executive Vice President at Elsevier, frames the challenge: “On many metrics and in many fields, R&D productivity has been declining. Each dollar we invest in R&D has been delivering a little less innovation over time. There is a strong belief that AI has the potential to change that.”
Certainly, many leaders feel the pressure to adopt AI for greater efficiency and productivity, driven by the fear of falling behind other companies. This sense of urgency can lead to imperfect implementations of the technology. In fact, IDC reports that 88% of AI pilots fail to reach production.
Meanwhile, Elsevier’s Corporate Researcher of the Future 2025 Report highlights a key trust gap: only 27% of corporate researchers trust AI tools, while 42% consider them unreliable.
Lessons learned: What really works
After nearly 15 years of implementing digital solutions across chemical manufacturing and research, Bea has one main message: understand the result you want and then find the right solution, AI or otherwise, to achieve it.
She emphasizes that successful AI tools need three key ingredients: quality data, effective processing tools and people who actually use them. “When you look at any digital solution, whether it’s a machine learning model, data-driven model or LLM, you really need to ensure you have all three ingredients that enable you to build a solution that truly addresses the problem,” Bea explains.
Start with what you know
One important lesson: remember what you already know. Dow has achieved success by combining its extensive body of fundamental scientific knowledge with data-driven models – a method known as hybrid modeling. This principle applies equally to large language models. General LLMs often won’t provide the detailed insights needed for specific R&D questions unless you add your own knowledge base.
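As a rough illustration of what adding your own knowledge base can look like in practice, here is a minimal sketch in Python. The toy documents, the keyword-overlap retrieval and the prompt template are hypothetical placeholders, not Dow's or Elsevier's implementation; a production system would use embeddings, a vector store and a governed model API behind the final call.

```python
# Minimal retrieval-augmented prompting sketch (illustrative only).
# The knowledge base, scoring rule and prompt template are hypothetical;
# a real system would use embeddings, a vector store and a vetted LLM API.

KNOWLEDGE_BASE = [
    "Batch 42: copolymer viscosity dropped 15% after switching initiator supplier.",
    "Lab note 2023-07: catalyst X deactivates above 80 C in aqueous media.",
    "Report R-118: a hybrid kinetics-plus-ML model halved the number of scale-up trials.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Prepend retrieved internal context so a general LLM can answer a specific R&D question."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("Why did the copolymer viscosity drop in batch 42?"))
```

The point is not the few lines of code but the pattern: the general model supplies the language, while the organization's own knowledge supplies the specifics.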
Augment, don’t replace
This institutional knowledge extends far beyond often-scattered databases, files and lab books. It also includes human expertise, which should be supported, not replaced.
Frederik van den Broek, Senior Director of Professional Services at Elsevier, shares from his experience helping organizations worldwide implement AI at scale: “AI is about bringing new ideas. The decision still lies with the scientists. It just brings them new ideas or summarizes a very long email – those sorts of tasks that make life easier so they can spend more time on what really matters.”
Build trust through transparency
To assist these individuals effectively, you first need to build trust. Ben Geary, Portfolio Delivery Lead for AI Innovations at Elsevier, emphasizes an important lesson: trust depends on transparency.
“Scientific research requires clarity and understanding, but too often, AI is like a black box where the internal workings aren’t clear to the user,” Ben explains. “How can you trust something to help with your critical research if you can’t understand the thought process, logic or sources that a model is using?”
Common pitfalls: Where organizations go wrong
They skimp on upskilling
The most fundamental pitfall? The “If we build it, they will come” misconception. Bea stresses that upskilling and training are essential. Dow has created a Citizen Data Science program that enables researchers to develop expertise in areas that interest them. Not everyone needs to do everything. The key is helping people improve their traditional research with digital skills that matter to them.
They fall for the hype
Another common mistake is following the herd. Referencing Gartner’s hype cycle, Frederik notes that generative AI in 2024 was approaching the “trough of disillusionment.” “Don’t do AI or generative AI just because it’s at the top of the hype,” he cautions. But he also notes that traditional AI technologies such as deep learning and machine learning, which sat at the same peak eight years ago, are now respected mainstream fields.
They put garbage in, and get garbage out
Data quality remains a constant challenge. “It’s garbage in, garbage out,” Frederik states. “If your data is not good, the models you build on that will not produce anything you want.” He notes that getting your data in order is like planting a tree – the best time to start was 20 years ago, but the second-best time is right now.
They underestimate the importance of metadata
Organizations often underestimate the complexity of data in R&D environments. As Bea points out, research is inherently creative, with different hypotheses requiring various data structures and contexts. “Data capture and context are pretty difficult,” she notes. “This understanding of how to capture this is really important for any sort of digital solution.”
They overlook the black box problem
Finally, there’s that black box problem. Many AI models aren’t deterministic – ask the same question twice, get different answers. “That’s not a problem as long as you’re aware of it,” Frederik says, “but if you’re not aware of that and don’t take it into account, it can have unexpected consequences.”
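A toy sketch makes the point, assuming nothing more than a sampling-based generator (the distribution and names below are made up, not any vendor's API): the same prompt answered at a non-zero temperature can come back differently on every call, while fixing a random seed, or always taking the most likely option, makes runs repeatable.

```python
import random

# Toy next-word distribution for one prompt (made-up numbers, illustration only).
CANDIDATES = ["increases", "decreases", "is unchanged"]
WEIGHTS = [0.5, 0.3, 0.2]

def sampled_answer(seed=None):
    """Sample one continuation; an unseeded RNG mimics a non-deterministic model."""
    rng = random.Random(seed)
    return rng.choices(CANDIDATES, weights=WEIGHTS, k=1)[0]

def greedy_answer():
    """Always take the most likely continuation -- the analogue of temperature zero."""
    return max(zip(WEIGHTS, CANDIDATES))[1]

print([sampled_answer() for _ in range(3)])        # may differ from run to run
print([sampled_answer(seed=7) for _ in range(3)])  # identical on every run
print(greedy_answer())                             # always "increases"
```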
What’s next
Looking ahead, the panelists see opportunities despite ongoing challenges. As Jelena notes, “The industry faces a critical gap between adoption and trust that requires addressing head-on.”
Embrace transparency
Bea identifies transparency and interpretability as major hurdles to wider adoption of LLMs. “This is a key challenge for the existing architectures of large language models,” she says. “I think there’s still quite a bit of work for us to do before we really get there.”
Responsible AI and security require continuous attention. “This is really important for an industrial application,” Bea emphasizes. “There are many different steps in the process where responsible and FAIR AI is crucial. But there’s also an element of ensuring decisions are traceable and that there’s understanding behind them.”
Learn from the mistakes of others
As the regulatory landscape continues to evolve, with frameworks such as the European Union’s AI Act establishing new compliance requirements, the picture only grows more complex. So why waste time repeating others’ missteps? Different industries and companies sit at different levels of digital maturity, and those further behind can take advantage of that. For example, as Frederik points out, laggards can learn best practices from the pioneers in Pharma: “Why make the same mistakes they did?”
Put users first, tech second
The way forward requires reversing the usual approach. “Bring the people who are meant to use a tool into the process from the very beginning,” Bea recommends. When users help shape solutions early, they understand the tools’ capabilities and limitations, feel a sense of ownership and naturally become advocates.
The 12% of AI pilots that succeed share one common trait: they solve real problems for real people. Everything else is just expensive experimentation.
To dive deeper, watch the full webinar: AI in R&D workflows: lessons learned, common pitfalls, and what’s next.
Contributor

Ann-Marie Roche
Senior Director of Customer Engagement Marketing
Elsevier
Read more about Ann-Marie Roche