This is fascinating to me. The intent is clearly to treat AI as adversarial and to try to hinder it. What's not as clear is why. Some people seem to take it as given that this is desirable, but haven't really articulated what the goal is.
Are we trying to prevent the singularity? Slow down the arrival of superintelligence? React to the way current models were trained by running roughshod over existing creators' rights? Or is it just "AI is bad, mmmkay?" Without knowing the goals of such a thing, or the perceived threats it's trying to address, it's hard to know whether this is appropriate or would be effective. Unlike, say, the GPL, where both the goals and the threat are explicit and articulable.

— Charles

On Fri, 31 Oct 2025 at 12:46, Udhay Shankar N via Silklist <[email protected]> wrote:
>
> https://vanderessen.com/posts/hopl/
>
> <quote>
> The idea is that any software published under this license would be
> forbidden to be used by AI. The scope of the AI ban is maximal. It is
> forbidden for AI to analyze the source code, but also to use the software.
> Even indirect use of the software is forbidden. If, for example, a backend
> system were to include such software, it would be forbidden for AI to make
> requests to such a system.
> </quote>
>
> --
> ((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))
--
Silklist mailing list
[email protected]
https://mailman.panix.com/listinfo.cgi/silklist
