
‘AI scientists’ write science papers without human input. This is worrying


Scientific discovery is one of the most sophisticated human activities. First, scientists must understand existing knowledge and identify a significant gap. Next, they must formulate a research question and design and conduct an experiment in search of an answer. Then, they must analyze and interpret the results of the experiment, which may raise another research question.

Can such a complex process be automated? Last week, Sakana AI Labs announced the creation of an “AI Scientist” – an artificial intelligence system that they claim can make scientific discoveries in the field of machine learning in a fully automated way.

Using generative large language models (LLMs) like those behind ChatGPT and other AI chatbots, the system can brainstorm, select a promising idea, code a new algorithm, plot the results, and write a paper summarizing the experiment and its findings, complete with references. Sakana claims the AI tool can carry out the entire lifecycle of a scientific experiment at a cost of just US$15 per paper – less than the cost of a scientist’s lunch.
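To make these stages concrete, here is a minimal sketch of such an automated research loop. It is an illustration only, not Sakana’s implementation: every function below is a hypothetical placeholder, and in a real system each stage would be backed by LLM calls and a code executor.

# A minimal sketch of an automated research loop of the kind Sakana describes.
# All function bodies are hypothetical stubs; a real system would back each
# stage with calls to a large language model and a sandboxed code executor.

def brainstorm_ideas(n: int) -> list[str]:
    """Stage 1: an LLM proposes candidate research ideas (stubbed here)."""
    return [f"idea-{i}" for i in range(n)]

def is_novel(idea: str) -> bool:
    """Stage 2: discard ideas too similar to prior work (stubbed here)."""
    return True  # a real system would query a literature index

def run_experiment(idea: str) -> dict:
    """Stage 3: generate and execute experiment code (stubbed)."""
    return {"idea": idea, "metric": 0.42}

def write_paper(result: dict) -> str:
    """Stage 4: an LLM drafts the paper, plots and references included."""
    return f"Paper on {result['idea']} (metric={result['metric']})"

def pipeline() -> list[str]:
    return [write_paper(run_experiment(i)) for i in brainstorm_ideas(3) if is_novel(i)]

if __name__ == "__main__":
    for paper in pipeline():
        print(paper)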

These are big claims. Do they hold up? And even if they do, would an army of AI scientists churning out research papers at inhuman speed be good news for science?

How computers can ‘do science’

Much science is done in the open, and almost all scientific knowledge is written down somewhere (or we would have no way of “knowing” it). Millions of scientific papers are freely available online in repositories like arXiv and PubMed.

LLMs trained on this data learn the language of science and its patterns. So it is perhaps not surprising that a generative LLM can produce something that looks like a good scientific paper – it has ingested many examples it can mimic.

What is less clear is whether an AI system can produce an interesting scientific paper. Crucially, good science requires novelty.

But is it interesting?

Scientists do not want to learn about things that are already known. Instead, they want to learn new things, especially new things that are very different from what is already known. This requires judgments about the scope and value of the contribution.

The Sakana system attempts to address interestingness in two ways. First, it “scores” new paper ideas for similarity with existing research (indexed in the Semantic Scholar repository). Anything too similar is discarded.
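As an illustration only – Sakana’s actual check queries the Semantic Scholar index – this kind of novelty filter can be sketched with a simple TF-IDF cosine-similarity test against a toy corpus. The 0.8 threshold below is an arbitrary assumption.

# A hypothetical novelty filter: flag an idea if it is too close to any
# abstract in a toy corpus. A stand-in for Sakana's Semantic Scholar check.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_abstracts = [
    "Adaptive learning rates for stochastic gradient descent.",
    "Attention mechanisms for neural machine translation.",
]

def too_similar(idea: str, corpus: list[str], threshold: float = 0.8) -> bool:
    """Return True if the idea's best match in the corpus exceeds the threshold."""
    matrix = TfidfVectorizer().fit_transform(corpus + [idea])
    scores = cosine_similarity(matrix[-1], matrix[:-1])
    return scores.max() >= threshold

print(too_similar("A new attention mechanism for translation.", existing_abstracts))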

Second, Sakana’s system introduces a “peer review” stage – using another LLM to judge the quality and novelty of the generated paper. Here again, there are plenty of examples of peer review available online, on sites such as openreview.net, that can provide guidance on how to critique a paper. LLMs have ingested these examples as well.
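A sketch of this LLM-as-reviewer idea might look like the following; the rubric prompt and the call_llm stub are hypothetical stand-ins for whatever chat-completion API a real system would use.

# A hypothetical "peer review" stage: a second model scores the generated
# paper against a rubric. call_llm is a stub, not a real API.
REVIEW_PROMPT = """You are a reviewer for a machine learning conference.
Rate the following paper from 1 (reject) to 10 (strong accept) on novelty,
soundness and clarity, then give a one-paragraph justification.

Paper:
{paper}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to an LLM here.
    return "Novelty: 4/10. Soundness: 5/10. Clarity: 6/10. Reject."

def review(paper: str) -> str:
    return call_llm(REVIEW_PROMPT.format(paper=paper))

print(review("We propose a minor variant of dropout..."))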

AI can be a poor judge of AI output

Reaction to Sakana AI’s output has been mixed. Some have described it as “endless scientific nonsense”.

Even the system’s own review of its output judges the papers to be weak. This is likely to improve as the technology develops, but the question of whether automated scientific papers are valuable remains open.

The ability of LLMs to assess research quality is also an open question. My own work (soon to be published in Research Synthesis Methods) shows that LLMs are not very good at assessing the risk of bias in medical research studies, although this may also improve over time.

Sakana’s system automates discoveries in computational research, which is much easier to automate than other types of science that require physical experiments. Sakana’s experiments are done with code, which is also structured text of the kind LLMs can be trained to produce.

AI tools will aid scientists, not replace them

AI researchers have been developing systems to support science for decades. Given the vast volume of published research, even finding publications relevant to a specific scientific question can be challenging.

Specialized search tools use AI to help scientists find and synthesize existing work. These include the above-mentioned Semantic Scholar, but also newer systems such as Elicit, Research Rabbit, scite, and Consensus.

Text-mining tools such as PubTator dig deeper into research papers to identify key points of focus, such as specific genetic mutations and diseases, and their established relationships. This is particularly useful for curating and organizing scientific information.

Machine learning has also been used to support the synthesis and analysis of medical evidence, in tools such as RobotReviewer. Summaries that compare and contrast the claims in research papers, like those produced by Scholarcy, help in conducting literature reviews.

All of these tools are intended to help scientists do their work more effectively, not replace them.

AI research could exacerbate existing problems

While Sakana AI says it doesn’t see the role of human scientists diminishing, the company’s vision of a “fully AI-powered scientific ecosystem” will have a major impact on science.

One concern is that if AI-generated papers flood the scientific literature, future AI systems may be trained on AI outputs and suffer model collapse. This means they could become increasingly ineffective at innovating.

However, the implications for science go well beyond the impact on AI science systems themselves.

There are already bad actors in science, including “paper mills” churning out fake papers. This problem will only get worse when a scientific paper can be produced with US$15 and a vague initial prompt.

The need to check for errors in piles of automatically generated research could quickly overwhelm the capacity of real scientists. The peer review system is arguably already broken, and pouring more research of dubious quality into the system will not fix it.

Science is fundamentally based on trust. Scientists insist on the integrity of the scientific process so that we can be assured that our understanding of the world (and now, the world’s machines) is valid and improving.

A scientific ecosystem where AI systems are the key players raises fundamental questions about the meaning and value of this process, and what level of trust we should have in AI scientists. Is this the scientific ecosystem we want?

(Author: Karin Verspoor, Dean, School of Computing Technologies, RMIT University)

(Disclosure statement: Karin Verspoor receives funding from the Australian Research Council, the Medical Research Future Fund, the National Health and Medical Research Council and Elsevier B.V. She is affiliated with BioGrid Australia and is a co-founder of the Australian Alliance for Artificial Intelligence in Healthcare.)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
