A chat with GPThibault: “Forget the Metaverse, ChatGPT is already useful”

June 7, 2023

By Ann-Marie Roche


Since OpenAI’s release of ChatGPT, Elsevier's Thibault Geoui — AKA GPThibault — has immersed himself in its potential for the life sciences. He shares his findings via his popular newsletter GPThibault Pulse. Already, both the chatbot and the newsletter have transformed his work life. But what does Downton Abbey have to do with it? It was time for a chat …

Greetings, fellow tech enthusiasts! GPThibault Pulse is back with our weekly dose of PromptEngineering, insider tips, and news on Generative AI and Life Sciences. Get ready to geek out with us as we explore the latest trends and technologies in these exciting fields. So put on your thinking caps and get ready to dive into the world of GPThibault Pulse!

And so opens another edition of the LinkedIn newsletter GPThibault Pulse, which arose from “a perfect storm,” according to its author, Dr Thibault Geoui. In December 2022, just as he was taking on a new role at Elsevier as Senior Director of Biomedical Innovation, ChatGPT was released. “It was part of my new job to track emerging technologies — particularly if they could be relevant to improving patient outcomes and enhancing drug development,” he said. “So obviously, I had to jump in.”

And jump he did — as did his 690 subscribers. So, we decided to tap into his infectious enthusiasm for his newfound passion through an old-fashioned human-to-human chat.

Before ChatGPT, did you have a previous wow moment related to AI?

I had it with AlphaFold, which fit with my PhD background working on protein 3D structure using X-ray crystallography (more specifically, structural proteomics of the Epstein-Barr virus, which is actually becoming trendy again since it now seems to be linked with autoimmune disease). At that time, I was working on one of the first projects that tried to map out the three-dimensional shapes of all the proteins that make up a virus. I was proud to publish a couple of papers around that. But then AlphaFold came along and made my PhD irrelevant [laughs].

And a lot has happened since AlphaFold was released less than two years ago…

AI is absolutely an exponential thing. I’m now having a wow moment almost daily. Before, AI was called “AI” for marketing purposes — to make it sound sexier. But in the end, it was statistics or algorithms, and really only used for super specific applications, such as winning at chess or predicting if something is a cat. It was more of a gimmick. But those were also stepping stones towards something truly useful — like AlphaFold.

As Senior Director of Elsevier Life Sciences, Dr Thibault Geoui leads biomedical innovation to support the company’s growth and evolution in using FAIR data and new technologies to reimagine the research space. He has 20+ years of driving innovations in life sciences, from launching over 30 lab products to repositioning Elsevier's flagship chemistry solution and advancing drug R&D with AI.

Thibault Geoui, PhD

And with ChatGPT, when did you start yelling “Gamechanger!”?

Like many people, I followed a certain love-hate process as I got to understand it and its limitations. But last Christmas, I was with my parents and started to play with it more. I decided to try writing our seasonal greeting cards with ChatGPT. So we asked it to write one in the style of Downton Abbey’s Violet Crawley — that distinctive old lady. It started writing and it was totally in her voice and tone. We were amazed. Around the same time, I was talking to a friend who was procrastinating on sending a complaint letter to his landlord and I provided him with a perfect letter in about 10 seconds. 

And here’s the most amazing thing: this product is still in beta! I've launched a lot of products, and I can tell you a beta usually still needs to be shaped. But this thing was actually already useful! Fast forward to now, and it’s something I use daily for many different things. But I’m also very mindful of regulation and compliance — so I don’t use it for any company confidential stuff.  

What are your favorite use cases?

It's really difficult to answer this question. We design our products at Elsevier for 20 or 50 use cases. But this thing has unlimited use cases because you can interact with it. And I think retrospectively, this is what really blew me away initially: that we can interact with something that’s like a — and I hate to use this word — human. I also think this “humanness” is why this thing triggers all these strong reactions. It’s sparking all these discussions on what’s intelligence. For instance, there’s a very interesting Microsoft research paper, “Sparks of Artificial General Intelligence,” that’s upsetting a lot of people. And I’m not sure where I stand. But I am still impressed every day, and it’s already proved transformative in how it's impacting my work. It’s like having a super assistant.

But what aspects around ChatGPT get you worried?

Well, there’s this open letter asking for a ban on the development of any Large Language Models (LLMs) larger than GPT-4. And much smarter people than me signed this letter. They are literally saying it’s potentially a WMD — a Weapon of Mass Destruction. And I think that goes too far. But I might be wrong. Who knows?

Then there’s this really long and interesting technical report the OpenAI team released when GPT-4 was launched. It has a part that talks about testing whether this thing can leave its box, become autonomous and start to destroy the world. Suddenly, this paper starts to read like a science fiction book — like Daemon by one of my favorite writers, Daniel Suarez, which is about an AI that escapes, propagates, starts to control humans, etcetera.

That fear of a sci-fi novel coming to life …

Right. But many people who work in this field — scientists and high-tech entrepreneurs — have been nurtured by science fiction throughout their lives. So, I think now there’s a very blurred line between any real threat and what people have in their subconscious because of what they read. So that’s one thing.

But the other thing is more real: there are limitations. For example, LLMs hallucinate: they will give very wrong answers very assertively. They will just tell you crazy stuff. So, while it’s good for certain stuff, we also need to understand the limitations.

With any speedy development come less-than-thought-out regulations. How do you see that playing out?

Well, there were all sorts of reactions when it first came out. Italy banned it for 40 days for GDPR reasons — and regulating personal data is a completely legitimate concern. And some countries, US states and companies are banning it outright. Others are putting together guidelines, while Singapore is showing more vision by introducing it into classrooms and teaching people how to use it. And certainly, we need to figure out how to integrate this “new normal” into the curriculum — we need to make sure our kids are still using their brains and not just generating their homework.

At the end of the day, it’s out in the wild so it’s not going to disappear. While I don’t think it’s a potential WMD, it’s also not perfect. And I think it has a lot of potential for misuse when it comes to disinformation. So, there’s a lot to figure out in terms of regulation.

But I do believe people using GPT with their brains will be more efficient than people with GPT who don’t use their brains. There’s already a research paper that shows people using GPT at work are 50% faster and have a 50% better output than people not using it. They’re also happier since it’s automating a lot of super boring stuff. But we do need to use it responsibly.

What was the direct inspiration for starting the newsletter?

There were a couple of things. When I started my new role in innovation, I was reading a lot and wanted to start organizing and sharing my thoughts online. So I started to write a lot. But I am a very slow writer, and GPT helped me with that. My favorite prompt is “proofread” — or “improve this part or that part.” So that was really useful. It accelerated my writing workflow enough so I could actually start a newsletter.
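For readers curious what such a workflow looks like in code rather than in the chat window, here is a minimal sketch of assembling a proofreading request for a chat-style LLM. The prompt wording and the helper function `build_proofread_messages` are illustrative assumptions, not Geoui's actual prompts; only the messages-with-roles format mirrors OpenAI's chat API.

```python
# Illustrative sketch: a "proofread" prompt in the chat-message format
# used by OpenAI-style APIs. Prompt text and function name are hypothetical.

def build_proofread_messages(draft: str) -> list[dict]:
    """Return a chat-format message list asking the model to proofread `draft`."""
    return [
        {"role": "system",
         "content": "You are a careful copy editor. Fix grammar and typos, "
                    "but preserve the author's voice and meaning."},
        {"role": "user",
         "content": f"Proofread the following text:\n\n{draft}"},
    ]

# These messages would then be sent to a chat completion endpoint;
# here we only build and inspect the request payload.
messages = build_proofread_messages("Their is alot to figure out in terms of regulation.")
print(messages[1]["content"])
```

The point of the system message is to constrain the model to editing rather than rewriting — the same effect Geoui gets by asking it to "proofread" instead of "improve."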

The newsletter also offered me a way to start to better connect with people with similar interests. And it’s working: I got 300,000 views in the first five months of this year compared to 50,000 for all of last year. So, it was really about developing a community where I can test my ideas — particularly around innovation — without having to form some sort of expert committee. For instance, I put up a video on medical science liaisons (MSLs), and someone reached out who has the largest MSL group on LinkedIn with 30,000 members. And last time I was in Amsterdam, I had a drink with him and we discussed many potential product ideas.

Returning to that infinity of use cases, what do you see as the most promising use cases for the life sciences in the short term?

That’s really a good question to which I have a pretty bad answer because we are still figuring it out. But I think one of the very promising use cases is that today, people still interact with information rather awkwardly. If you look at our portfolio, we have five or six different products — which means we have five or six different experiences in how someone can interact with information. But in the case of ChatGPT, you talk to it in your own words and get a personalized experience every time. So if we connect our information to an LLM, it means everyone can start asking questions the way they want and get the information in the format they want — for instance, as a graph, an innovation framework or a JIRA ticket…

Or in the voice of Aunt Violet … So, do you think this ‘translator/interpreter’ role can be applied to the triple helix model of innovation — to help the academics, the business people and the regulators all get on the same page more quickly?

So here's the thing: I don't think that GPT will solve every problem. But I think it can help with understanding what other people are doing. Because it makes it very easy to ask questions about things that you don't understand. So yes, it might really help since you can break things down into the way someone else wants to consume it. 

What do you see as the next leap forward? Getting rid of the hallucination problem?

I think the biggest thing will be when we can train our own language model with licensed data — with the idea that if we train these things with the right data, we may avoid hallucinations. That’s the theory anyway. Already, Stanford is experimenting with smaller language models that show promise and that are also a lot cheaper than GPT. And actually, the CEO of OpenAI just announced that he thinks the age of LLMs is already over and people are now talking a lot about Augmented Language Models, or autonomous agents, being the next big thing. And a more recent development is how LLMs are being used as controllers for other apps. It’s all happening very fast, so stay tuned.

How do you hope things play out in the next year? 

Basically, I hope it doesn’t get banned and there will be much more experimentation. And just as with any new technology, we need intelligent regulations formulated by educated regulators. So really, it’s about education. And we need people to demonstrate where this tool has the most value. So over the next year, we are going to see much more diversity as we follow a typical innovation curve. A lot of industry early adopters will demonstrate what works, and then it will have much wider adoption in many different important domains. And we at Elsevier will actively look into how we can support our customers if they want to adopt this technology to support drug R&D.

What would you say to those late adopters or those who gave up on ChatGPT after the first hallucination?

Don't give up. Start to look at how other people have been using it and play with it. When you have 100 million people using a tool within a month or two, it's not hype but a turning point. For me, it's proven more impressive than the first time I got on the internet in the late 1990s — when modems were slow and the websites were not very useful. And look at the Metaverse: at best, it's an experiment that looks like a 1980s video game — it’s just not very useful yet. But ChatGPT is already very useful. Maybe it can answer your most burning question.

Contributor

Ann-Marie Roche

Senior Director of Customer Engagement Marketing

Elsevier