Research repository ArXiv will ban authors for a year if they let AI do all the work
ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers.

Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced “archive”) has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research. 

ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop. 

In its latest move, Thomas Dietterich — the chair of arXiv’s computer science section — posted Thursday that “if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.” 

That incontrovertible evidence could include things like “hallucinated references” and comments to or from the LLM, Dietterich said. If such evidence is found, a paper’s authors will face “a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue.”

Note that this isn’t an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take “full responsibility” for the content, “irrespective of how the contents are generated.” So if researchers copy-paste “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content” directly from an LLM, then they’re still responsible for it. 

Dietterich told 404 Media that this will be a “one-strike” rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty. Authors will also be able to appeal the decision.

Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs — though to be fair, scientists aren’t the only ones getting caught using citations that were made up by AI.
