I wanted to chime in earlier but I was eating a bucket of popcorn... this thread!
1. This license is interesting to me because it mimics the GPL, but it is intended to guide the actions of AI agents, not humans; i.e., no humans actually look at anyone's code and decide whether to copy it for training purposes. The agents set loose to scrape GitHub look at the licenses (or don't; see the Claude LibGen settlement) and either copy the code or don't based on what the license is. This makes the proposed AI license (a) equivalent to a robots.txt prohibiting all the same things the license does, and/or (b) a signalling device among software developers that they despise the current AI BS. I don't think anyone trusts that their robots.txt files are being honored, so a license that backs them up could have legal consequences, if it can be heard above the noise of the 500 other lawsuits and settlement agreements currently underway. The only other solution I can dimly imagine would be to invent some sort of AI agent that would try to systematically block other AI agents from copying, accessing, or otherwise "looking at" one's copyrighted code, in which case the whole idea collapses under its own absurdity. But the signalling function might be the more important one: a license is an easy cut-and-paste way for developers to signal how they feel about AI. Never underestimate the power of signalling.

2. Radhika (and anyone else interested): what you want is Michael Cooperson's translation of al-Hariri's Impostures: https://nyupress.org/9781479800841/impostures/ It is among the most astounding feats of translation I have ever seen. It is also, in an interesting way, a tool for thinking about what LLMs are. Al-Hariri's original text used fiction and fabrication to teach people to read (the Koran) carefully and not always literally. His audience wanted salvation, but today's audiences, stupefied before a dull chatbot, want much more. Take a look at it and hopefully you will start to see what I mean.
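For what it's worth, the robots.txt equivalence above can be made concrete. A minimal sketch: GPTBot (OpenAI), ClaudeBot (Anthropic), and Google-Extended (Google's AI-training control) are real crawler tokens, but the choice of which agents to list is my assumption, and whether any given scraper honors the file at all is exactly the trust problem described above.

```
# Hypothetical robots.txt expressing the license's "no AI training" intent.
# Each entry tells the named crawler it may fetch nothing from this site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Like the license, this is purely declarative: it constrains only agents that choose to read and obey it.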
ck

On Sat, Nov 1, 2025 at 8:40 AM Charles Haynes via Silklist <[email protected]> wrote:

> On Sat, 1 Nov 2025 at 22:33, Tim Bray <[email protected]> wrote:
>
>> On Nov 1, 2025 at 7:24:18 AM, Charles Haynes via Silklist <[email protected]> wrote:
>>
>>> This is fascinating to me. It's clear that the intent is to treat AI as adversarial and to try to hinder it. What's not as clear to me is why.
>>
>> Because there is a massive backlash against GenAI among quite a wide range of people. A few reasons:
>>
>> - ...
>>
>> Now, I am perfectly aware that there are counter-arguments for everything in that list, and I am not an enemy of the technology as such, but I am among those counseling caution, both financial and technical, in leaping aboard the train. And a lot of the people in the ranks of promoters are people who were promoting NFTs just a few years ago and I want nothing to do with them.
>>
>> I don't think the license that started the discussion is terribly practical. But the sentiment it expresses is widely held and not entirely unfounded.
>
> Certainly I agree there's a lot of anti-AI sentiment, in which case the rationale would be "AI is bad, mmmkay?" As for those other points, they're certainly valid, but the license isn't what I'd call "fit for purpose." It doesn't address any of those points.
>
> I mean, it's fine if that license is simply a reaction to AI, but I was wondering if there was anything else to it or if it was simply copying the form of the GPL without a coherent rationale.
>
> — Charles
--
Silklist mailing list
[email protected]
https://mailman.panix.com/listinfo.cgi/silklist
