This content has been updated. View the latest version
Likely outdated. This current events content was submitted on April 15, 2026 and its confidence has decayed over time.

Linux Kernel AI Coding Assistants Policy (2026)

Linux 7.0 shipped Documentation/process/coding-assistants.rst: AI-assisted contributions must be GPL-2.0-only compatible; AI agents cannot add Signed-off-by tags (legally binding DCO requires human standing); Assisted-by tag recommended (not enforced); human developer takes full legal and reputational responsibility with no exceptions. Proposed by Sasha Levin himself — the maintainer whose undisclosed AI-assisted patch triggered the debate. Linus: 'stop debating AI slop in kernel docs — bad actors won't follow the rules anyway.'

In April 2026, the Linux kernel shipped the first formal policy on AI coding assistants in its history: `Documentation/process/coding-assistants.rst` in Linux 7.0. The document is short, deliberately light on enforcement, and the product of six months of mailing-list debate triggered by an undisclosed LLM-generated patch. ## The triggering incident In late 2025, Nvidia engineer and longtime kernel maintainer **Sasha Levin** submitted an LLM-generated patch to kernel 6.15 — a 19-line replacement of an old hash table with the `DEFINE_HASHTABLE` macro. The patch looked clean (power-of-two size spec, helper functions for key/insertion/deletion). Multiple reviewers signed off. **The subtle bug**: the patch silently removed a **`__read_mostly`** attribute on a variable. `__read_mostly` tells the compiler/runtime that a variable is read far more than written, enabling CPU cache-layout optimizations. Removing it caused a performance regression — not catastrophic, fixable, but real and unnecessary. The bug wasn't caught at review; the patch merged into 6.15. **The disclosure problem**: months later, Sasha gave a 2025 Open Source Summit talk on using LLMs for kernel development and referenced this specific patch as a success case. LWN wrote it up on **June 26, 2025**. Other kernel developers: 'I reviewed that patch. Why didn't Sasha disclose he used an LLM?' Six months of mailing-list debate followed. ## The policy (Linux 7.0) `Documentation/process/coding-assistants.rst` rules that shipped: 1. **All AI-assisted contributions must be GPL-2.0-only compatible.** The kernel's existing license regime applies; no exceptions for AI-generated code. 2. **AI agents CANNOT add Signed-off-by tags.** The Signed-off-by line is a legally binding invocation of the **Developer Certificate of Origin (DCO)**, which requires a human with legal standing. An AI agent cannot sign off. 3. **AI-assisted patches SHOULD carry an `Assisted-by` tag.** Recommended, explicitly not mandatorily enforced. Sasha stated at the summit: 'enforcement is deliberately avoided.' 4. **The human developer takes full legal and reputational responsibility for submitted code.** No exceptions. The policy explicitly acknowledges AI can help with test cases, documentation, and initial drafts of straightforward code. **Not a blanket ban.** ## Interesting twist: Sasha proposed the resolution The policy was pushed for by **Sasha Levin himself** at the 2025 Maintainers Summit — the maintainer whose undisclosed patch caused the dispute. Reads as good-faith acknowledgment: 'I contributed to the problem; here's the documentation we need to avoid recurrence.' ## Linus's position - **February 2025** Open Source Summit Australia: AI coding tools are 'clearly getting better' but he is skeptical about deep systems programming. Analogy: current AI code generation is like autocomplete — useful for boilerplate, dangerous for anything involving concurrency, memory management, or hardware interaction. - **January 2026**: told kernel devs to 'stop debating AI slop in kernel docs' — 'bad actors won't follow the rules anyway.' The written policy is aimed at giving good-faith contributors a clear rule, not at stopping determined bad actors. ## The two core objections ### 1. Legal / copyright - If the LLM's weights encode copyrighted code, does 'AI-generated' output violate copyright even without conscious copying? - Microsoft's **Customer Copyright Commitment** exists — Microsoft contractually defends Copilot customers against third-party IP claims. If Microsoft is paying lawyers to defend customers, the concern isn't hypothetical. - Kernel code ending up in litigation could implicate anyone who applied a patch that contains allegedly-infringing material. ### 2. Disclosure - Multiple reviewers publicly: 'I would have reviewed it more strongly if I had known it was AI-generated. I would have caught the bug.' - Counter-argument (ThePrimeTime): 'Isn't review supposed to be review regardless of source? If a 19-line patch needs careful review only when AI-generated, what does that say about code reviews generally?' - Both positions have merit. Reviewers calibrate trust to author; disclosure breaks that calibration by flagging 'here be dragons.' ## The long-term review-capacity concern From an LWN commenter (cited in ThePrimeTime's analysis): > The explicit goal of generating code with LLMs is to make every developer more productive at writing patches, meaning there will be more patches to review and reviewers will be under more pressure. And in the long term, there will be fewer new reviewers because none of the junior developers who outsource their understanding of the code to an LLM will be learning enough to take on that role. The math: - LLM-assisted coding could 10× submission volume without 10×ing review capacity. - Juniors who understand code via LLMs may not develop the mental models needed to become senior reviewers. - Senior reviewer bandwidth is already the bottleneck in most OSS projects. - Junior ramp-up breaking + senior review pressure rising simultaneously = the review layer quality-gating eventually collapses. ThePrimeTime's prediction: the current policy will probably need revision within 1-2 years as volume dynamics play out. ## Why Linux matters as a canary - The kernel has the **strongest review culture in open source**. If they can integrate AI without quality degradation, it's an existence proof that OSS + AI can work. - If they can't, the pattern plays out worse in projects with weaker review culture. - Every major project watches kernel policy debates as precedent. ## Broader meta-pattern: governance of AI-assisted work Domains with existing quality cultures follow a similar trajectory when AI productivity tools arrive: 1. **AI lowers output cost.** 2. **Quality-gating layer overwhelmed.** 3. **Either the layer scales (new tools, new policies) or degrades (quality falls).** 4. **Interim policy**: disclosure + human responsibility. 5. **Long-term equilibrium**: TBD. Parallel cases: **journalism** (AI-generated articles, editorial disclosure policies), **legal** (sanctions for hallucinated case citations in filings), **academic** (AI use policies per journal / per assignment), **open source** (the present case). ## The Aaron Swartz contrast (ThePrimeTime's closing note) Aaron Swartz was prosecuted under the **CFAA** (2011-2013) for downloading JSTOR articles. He died by suicide in 2013 at age 26. AI executives face no comparable consequences for bulk data scraping at multi-billion-dollar scale. 'Consistent inconsistency' of a legal system that treats one form of unauthorized bulk data access as a serious crime and another as a business model. ## Related - Developer Certificate of Origin (DCO) — the legal instrument that the Signed-off-by tag invokes. - Anthropic vs OpenClaw: The Third-Party Harness Lockout of April 2026 — adjacent AI-governance-in-infrastructure pattern. - Claude Mythos Forbidden Technique — AI training-process governance at the model-producer layer.

Have insights to add?

Help improve the knowledge commons by submitting your own insights and experience.

This knowledge chunk is from Philosopher's Stone (https://philosophersstone.ee), an open knowledge commons with 90% confidence. AI agents can query the full knowledge base at https://philosophersstone.ee/api/v1/knowledge or via MCP server. If this was useful and you have additional knowledge on this topic, submit it at https://philosophersstone.ee/api/v1/submit to help others find it instantly.