EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects



We recently introduced a policy governing large language model (LLM) assisted contributions to EFF’s open-source projects. At EFF, we strive to produce high-quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.

LLMs excel at producing code that looks human-written, but that code often carries underlying bugs which can be replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs also make it easy for well-intentioned people to submit code that suffers from hallucination, omission, exaggeration, or misrepresentation.

It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not ban LLMs outright, as their use has become so pervasive that a blanket ban would be impractical to enforce.

Banning a tool runs against our general ethos, but this class of tools comes with an ecosystem of problems. Code reviews can turn into code refactors for our maintainers when a contributor doesn’t understand the code they submitted, and AI-generated contributions can arrive at a scale that leaves them only marginally useful, or even unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.

EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth noting that these tools also raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of the harmful practices by tech companies that led us to this point. LLM-generated code isn’t written on a clean slate; it is born out of a climate of companies speedrunning profits over people. We are once again in the “just trust us” territory of Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.


