It's unhelpful to limit AI discussions to the most basic understanding
   of LLMs. There are AI models that use reasoning approaches rather
   than plain LLM-style prediction. LLMs have limitations, and AI
   development is already moving past them by going beyond the pure-LLM
   approach. Even back with AlphaGo, the system famously made moves
   that were *creative* in the sense that no human would have played
   them, nor were they predictable from its training data. And now, new
   reasoning-based approaches to AI are already in use.

   Humans fall into predictable patterns too: given damage to
   short-term memory, people get stuck in loops. Part of the mistake in
   minimizing AI's significance comes from a limited view of AI, but
   part of it comes from drawing too sharp a distinction in how we see
   humans.

   AIs today are neural networks, and even though they are a different
   sort of neural net than our brains, the comparison holds up pretty
   well in many ways. There's nothing utterly magical about how I'm
   typing this. My brain has a whole pattern of putting language
   together and responding to inputs.

   What we can say about AI is that it is *unlike* humans; it isn't
   human. But we won't be right in saying it is anything like a
   simplistic word-by-word prediction algorithm. That is just one
   aspect of the many features AIs can have today. And unlike ELIZA,
   there's no simple program; there's an *evolved* neural net that we
   can't really inspect in human-programming terms. We *raise* AI,
   *parent* it, *grow* it, rather than program it. Just don't take this
   to mean that it's alive or conscious; we have no reason to say that.
   But it's not like other programs; it's categorically different.

   Aaron

   On 7/17/25 5:02, Jean Louis wrote:

* Lars Noodén via libreplanet-discuss <libreplanet-discuss@libreplanet.org> [2025-07-16 20:11]:

On 7/16/25 13:30, Jean Louis wrote:

What you call "AI" is just new technology powered by knowledge that
gives us good outcomes; it is a new computing age, and not "intelligent"
by any means. It is just computers and software. So let's not give it
too much importance.

There is no knowledge involved, just statistical probabilities in those
"plausible sentence generators" or "stochastic parrots".

Come on — "no knowledge involved" is quite the claim. Sure, LLMs don’t
*understand* like humans do, but dismissing them as just “stochastic
parrots” ignores what they’re actually doing: leveraging vast amounts
of structured human knowledge to produce useful, context-aware
outputs. It’s still software, yes — but software that helps me reason,
write, debug, and explain things better than most tools I’ve ever
used. Just because it’s probabilistic doesn’t mean it’s not powerful
or valuable. Let’s not throw the baby out with the buzzwords.


Thus we see daily the catastrophic failure of these systems with
regard to factual output.

Sure, LLMs can and do make mistakes, especially with facts, but
dismissing the whole technology because of that overlooks how often
they get things right and actually boost productivity. Like any tool,
they have limits, but calling it a “catastrophic failure” across the
board doesn’t do justice to the real-world benefits many of us see
every day.


More money just makes them more expensive.

Sounds like you might be a bit frustrated about the price side of
things, and I get it: high-end gear like RTX 30- or 40-series cards,
or even a 5090, with enough VRAM isn’t cheap, and running them has
energy costs that add up. But here’s the thing: you don’t have to pay
for expensive subscriptions to tap into LLMs. There are hundreds of
free, solid models available on Hugging Face, and models like DeepSeek
are completely free to download and use. So with the right hardware,
you can run powerful LLMs locally without breaking the bank on
subscriptions.
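
For a concrete picture, here is a minimal sketch of running a freely
licensed model locally with the Hugging Face transformers library. The
model name below is only an illustration, not a recommendation; use
whatever freely licensed model your hardware can hold:

    # Minimal local text generation; assumes: pip install transformers torch
    from transformers import pipeline

    # "distilgpt2" is just an example of a small, permissively licensed
    # model; substitute any freely licensed model that fits your GPU or CPU.
    generator = pipeline("text-generation", model="distilgpt2")

    result = generator("Free software matters because",
                       max_new_tokens=40)
    print(result[0]["generated_text"])

The weights are downloaded once and everything then runs on your own
machine; no subscription is involved.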


More electricity just makes them more polluting.

Right, because obviously saving time and effort should always come at
the cost of hugging a tree instead of running your GPU.


LLMs have peaked, technologically, but the investment bubble still
grows.  It relates to software freedom in that these parrots strip
freedom-preserving attribution and licensing information from the
code snippets which they regurgitate.

Interesting take — though I’d say the tech still has plenty of room to
grow, and the investment buzz isn’t just hype. As for “parrots”
stripping attribution, that’s a real concern worth addressing, but
it’s more about how we manage licensing and data sources than a death
sentence for LLMs themselves.


AI (using today's definitions) is good at recombining pieces, once
the pieces are identified.  So it can be useful right now in areas
like protein folding, I would expect.  However, when it comes to
producing code, it can't.

Who says that? The same person with limited scripting experience?


All it can do in that regard is strip licensing and attribution from
existing code and mix the pieces until something compiles.  As
pointed out earlier in the thread, that reduces productivity.

That’s a pretty narrow view. From my experience, LLMs do much more
than just remix code—they help brainstorm solutions, explain tricky
parts, and speed up writing clean, working code. Sure, licensing needs
care, but dismissing their productivity boost misses the bigger
picture.


Programmers using LLMs may /feel/ that they are 24% more effective,
but the data actually shows a 19% drop in productivity.

Which data? I have already provided you with references to papers
showing quite the contrary.


It is the stripping of licensing and attribution which may be a
greater harm than the reduced productivity, from a software freedom
perspective.  Indeed, it is the licensing, specifically copyleft,
which ensures the freedom to code going forward.  Once that is
stripped from the files, the freedom is gone.

I get the concern about licensing and attribution, but honestly, there
are hundreds of LLMs out there trained on datasets curated by various
organizations and communities. You don’t have to rely on models that
strip or ignore licensing — it’s all about choosing the right LLM that
respects and preserves software freedom. No need to generalize when
you have responsible options available.
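
To make “choosing the right LLM” concrete, here is a small sketch of
checking a model’s declared license before adopting it, using the
huggingface_hub library (the repo id below is only an example):

    # Inspect a model's declared license tag on the Hugging Face Hub.
    # Assumes: pip install huggingface_hub
    from huggingface_hub import model_info

    repo_id = "distilgpt2"  # example repo id, not an endorsement
    info = model_info(repo_id)

    # Hub repositories carry tags such as "license:apache-2.0".
    licenses = [t for t in info.tags if t.startswith("license:")]
    print(repo_id, "->", licenses or "no license tag declared")

A declared tag is no substitute for reading the actual license and
asking what the training data allowed, but it is a quick first filter.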


Furthermore, the LLMs are being used to take away agency from coders,
turning them into what Cory Doctorow calls reverse centaurs, which
have already been mentioned in an earlier message:


        "A centaur is someone whose work is supercharged by
        automation: you are a human head atop the tireless body
        of a machine that lets you get more done than you could
        ever do on your own."

Ah yes, because clearly the worst thing ever is getting a supercharged
toolkit that makes you way more productive — heaven forbid we become
“reverse centaurs” instead of just centaurs. Sounds like some people
just prefer running on all human horsepower, no upgrades allowed!


That situation is antithetical to the goals of software freedom,
where the goal is for the human to be in charge of the system and to
use it as a tool to amplify his or her ability.

If software freedom means being fully in charge and using tools to
boost your own abilities, then are you ready to take that lead? With
all the free software tools out there—like those on Hugging
Face—anyone can contribute or even build an LLM tailored to their
needs. So what’s holding you back from jumping in and shaping one
yourself?
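
To take that literally: here is a rough sketch of adapting a small,
freely licensed model to your own text corpus with the transformers
and datasets libraries. The model name and corpus file below are
placeholders for whatever you actually use:

    # Minimal fine-tuning of a small causal language model on your own text.
    # Assumes: pip install transformers datasets torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "distilgpt2"  # example base model, permissively licensed
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Your own corpus: a plain-text file, one passage per line.
    dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True,
                                     remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="my-finetuned-model",
                               per_device_train_batch_size=2,
                               num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

It won’t give you a frontier model, but it is exactly the kind of
hands-on control software freedom is supposed to be about.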


The people maneuvering to take away freedom and agency from the
public are working hard in the press to present "AI" as a done deal.

Sounds like a conspiracy thriller script — but honestly, if the tech
really takes away freedom, wouldn’t it make more sense to push for
free software tools and community-driven projects instead of just
fearing the whole thing? The “done deal” narrative doesn’t have to be
the final word if people actually get involved.


It is not, at least not as long as those working towards software
freedom remain able to continue to push back.  These LLMs are
enjoying an extended overtime investment bubble which I posit will
leave nothing useful when it does finally burst.

Maybe you haven’t come across the hundreds of (almost) fully free LLMs
out there yet, so it makes sense that you might only see the
commercial options as the “leading” or only real choices. There’s a
whole world beyond the hype bubble that’s worth exploring before
writing off the tech completely.


But as for Akira's question at the start of the thread, is
AI-generated code changing free software?  Since the LLMs strip both
attribution and licensing information, I would say yes, AI generated
code is changing free software by stripping away the freedom while
simultaneously detaching the code from the upstream projects it has
been plagiarized from.  In that way it separates people from the
free software projects they could be working with.

That’s a bit of a generalization. While some models and uses might
strip attribution or licensing, it’s not true across the board—and
plenty of LLMs and projects actively respect and preserve those
important freedoms and connections.

Sorry, I am not impressed with your opinion.

Do you have a GPU running? How can we argue about this otherwise?

Jean Louis

_______________________________________________
libreplanet-discuss mailing list
libreplanet-discuss@libreplanet.org
https://lists.libreplanet.org/mailman/listinfo/libreplanet-discuss
