* Henrik Ahlgren <pa...@seestieto.com> [2025-03-18 11:25]:
> Richard Stallman <r...@gnu.org> writes:
> 
> > Is there any configuration of Minuet that you can use with all free
> > software, all running on your own computer?  If so, which
> > configurations can do that?
> 
> I haven't used Minuet specifically, but since I believe it can use any
> LLM that can be inferenced under Ollama, there are plenty of free models
> available. For instance DeepSeek-R1 is under the MIT license and some
> distilled variants like DeepSeek-R1-Distill-Qwen-7B are derived from
> Apache 2.0 licensed models. (Of course, nobody really knows what it
> actually means in legal terms, and Chinese companies are not really well
> known for taking copyright seriously.)
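
To answer the original question concretely: yes, such a configuration
can run entirely on your own computer with free software. Below is a
minimal sketch, assuming an Ollama server is already running locally
on its default port 11434 and that a freely licensed model has been
pulled; the model tag "deepseek-r1:7b" is only an example and may
differ on your system.

    # Minimal sketch: query a locally running Ollama server over its
    # HTTP API, using only the Python standard library.
    # Assumes Ollama listens on localhost:11434 (its default) and the
    # example model tag "deepseek-r1:7b" has already been pulled.
    import json
    import urllib.request

    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "deepseek-r1:7b",   # example model tag
            "prompt": "Explain the four freedoms of free software.",
            "stream": False,             # ask for one complete JSON reply
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])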

What the Apache-2.0 license means is clear from the license text
itself. Those who need an interpretation can ask the same Large
Language Model (LLM) to explain it to them.

Let's not generalize about Chinese companies. 

I have not seen any Chinese company make mistakes with licensing. I
have, however, seen Western users relicense works illegally, presenting
as free software something that was in fact proprietary.

Companies in general have attorneys and take care of licensing. Laws
are strict in China, and no company wants a bad image.

> (BTW, I feel the dominance of NVIDIA hardware in the field of BS
> generation is perhaps the biggest practical threat to software freedom
> right now.)

I don't think it is a threat. Their decision to publish some
proprietary software drives other companies to publish free software.
Their decision to keep chips proprietary drives some companies to
provide chips with free software.

You will see a lot of changes soon, and NVIDIA may have trouble
catching up with all the new technologies. There are chips now that
are claimed to be 20 times faster and more than whatever NVIDIA can
offer. We will see which direction it moves.

The Large Language Model (LLM) community is founded on free software,
and it is important to keep telling its members where it all came
from: the GNU Project and Richard Stallman.

Right now it is more important than ever to speak about free software.

We have to reinforce the notion of the Four Freedoms so that the
general understanding among the public doesn't get diluted.

I don't see how NVIDIA is any threat to free software, because they
are clear about their licensing. For example, the NVIDIA Canary 1B
model is excellent for speech recognition, and it is clearly licensed
under CC-BY-NC-4.0; however, they do not claim anywhere that it is
free software.

And NVIDIA has the NVIDIA Open Model License:
https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf

It would be good for the FSF to evaluate that license. I think it is
a free software license that complies with the four freedoms.

How is NVIDIA a threat if they do provide free software models? Why
not say thank you?

Some of NVIDIA's free software models:
https://huggingface.co/nvidia/domain-classifier
https://huggingface.co/nvidia/Cosmos-0.1-Tokenizer-CV8x16x16
https://huggingface.co/nvidia/stt_en_conformer_transducer_large
https://huggingface.co/nvidia/ssl_en_nest_xlarge_v1.0
https://huggingface.co/nvidia/DeepSeek-R1-FP4
https://huggingface.co/nvidia/parakeet-rnnt-0.6b
https://huggingface.co/nvidia/parakeet-tdt_ctc-110m
https://huggingface.co/nvidia/Minitron-4B-Base
https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct
https://huggingface.co/nvidia/Minitron-8B-Base
https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World

The real threat comes from Google and META, because they openly claim
to have "Open Source" licenses, thus deceiving people.

> And the ollama and and llama.cpp are free (MIT):
> https://github.com/ollama/ollama
> https://github.com/ggml-org/llama.cpp

Yes, that is free software, though it is named after the proprietary
Llama models from META.

> I find the names of these projects unfortunate. The field evolves
> rapidly, and software projects often adopt names inspired by popular
> early movers, even when they later develop into generic frameworks
> applicable to multiple LLMs.

Because the software is free, everybody can fork it and remove
support for proprietary models. That is, however, tedious.

> As you well know, the licensing landscape surrounding many LLM-related
> technologies can be quite perplexing. It's uncertain whether traditional
> software licenses are suitable for model weights (which after all are
> not software in the traditional sense), and the inference code typically
> represents only a small fraction of the entire stack.

The model itself is data, like a picture. Personally, I don't think
the user needs to have everything necessary to replicate the model in
order for it to be free software. A model is full of numbers, not
programming code. If such a bunch of numbers is given to you under a
free software license, the question is how much you can really modify
it; but if you can modify it, if you have the freedom to do so, that
is already fine.
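
As a small illustration that the weights are just numbers which one is
free to inspect and change, here is a minimal sketch. It assumes the
weights are distributed in the common safetensors format and that the
Python safetensors package is installed; the file names are only
examples.

    # Minimal sketch: model weights are plain arrays of numbers that
    # can be loaded, inspected, modified, and saved again.
    # Assumes the "safetensors" Python package; file names are examples.
    from safetensors.numpy import load_file, save_file

    tensors = load_file("model.safetensors")   # dict: name -> numpy array
    for name, weights in tensors.items():
        print(name, weights.shape, weights.dtype)

    # A crude modification: scale one tensor and write a changed model out.
    first = next(iter(tensors))
    tensors[first] = tensors[first] * 0.5
    save_file(tensors, "model-modified.safetensors")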

There are, however, projects that aim to reproduce such models from
scratch:
https://huggingface.co/open-r1

The user doesn't need the freedom to replicate the creation of the
model.

I understand many will disagree, but I do not look at the model as
source code, nor as something even comprehensible by any human being.

It is explainable but not comprehensible when you look at it. It is
similar to looking into a PNG picture. You can know it is a picture,
and you can know that the numbers represent pixels, but as a human you
cannot comprehend the sequence of those numbers. Source code, by
contrast, can be comprehended.
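
Here is a tiny sketch of that analogy. It assumes the Pillow imaging
library is installed, and the file name is only an example.

    # Minimal sketch of the PNG analogy: the file decodes to a long
    # sequence of pixel numbers that a human cannot meaningfully read.
    # Assumes the Pillow library; "picture.png" is an example file name.
    from PIL import Image

    image = Image.open("picture.png").convert("RGB")
    pixels = list(image.getdata())   # e.g. [(12, 200, 87), (13, 201, 90), ...]
    print(image.size, "->", len(pixels), "pixels")
    print(pixels[:5])   # explainable as colors, not comprehensible as a whole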

For me, the LLM model is a mirror of some parts of written,
illustrated, or audible human knowledge.

It mirrors digital information, the way a picture does. How did you
make the picture? I don't think that is relevant; what matters is that
the model can be modified, distributed, used, and studied. But you
cannot trace it all the way back to its origins, just as you cannot
trace the origins of the pictures, text, and sound that were used to
train the model.

Where does a specific color come from? There are too many details of
its origin.

We are now in a different stage of computing, where the computer
absorbs a different type of data. The data is not necessarily precise,
it may be inaccurate, and the computer works by probability, not by
rules. I agree that rule-based systems would be more powerful and more
readily recognized as true artificial intelligence, and I anticipate a
merger of the two, probabilistic systems meeting rule-based systems.

Still, treating the Large Language Model (LLM) as a probabilistic
system is what is happening right now.

Models can learn automatically, and there are those that are adapted
in real time. Information like that can't be said to be code, but as
long as you can freely modify it, use it, and train it, that is for me
free software.

It is like a game with its assets.

But we now have Large Language Models (LLMs) creating worlds. There
are LLMs that create LLMs based on previous LLMs, all built on
randomness and probability.

If you let a computer absorb information in such a continuously
running manner, there is no way we can go back and understand exactly
how it was created in order to replicate it.

There is, however, certainly a way to build one from scratch.

It is clear that we can use a computer to obtain an image; that image
could come from a camera, and it would not be reproducible at all. But
we would get a digital mirror made of numbers, which we are free to
modify under a free software license, though not necessarily able to
reproduce.

We can't insist on the reproducibility of models. We are in an age
where it is still possible, but the progress of humanity will be so
much faster, and keeping all the information from which the tensors
were created will become an impossible task.

Tensors may be considered compressed human knowledge, but uncompressed
it is huge. A model of 8 billion parameters could take up to 32 GB on
disk, while the uncompressed knowledge the model consists of could
range from 10 to 100 terabytes and more; it could reach petabytes of
data.
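
The 32 GB figure follows from simple arithmetic: roughly, the parameter
count times the bytes used per parameter, which depends on the numeric
precision of the tensors. A minimal sketch of that calculation:

    # Minimal sketch: a model's disk size is roughly the parameter count
    # multiplied by the bytes used per parameter at a given precision.
    parameters = 8_000_000_000      # an 8 billion parameter model

    sizes = [("float32", 4), ("float16", 2), ("int8", 1)]
    for precision, bytes_per_parameter in sizes:
        gigabytes = parameters * bytes_per_parameter / 1e9
        print(f"{precision}: about {gigabytes:.0f} GB")

    # float32: about 32 GB, float16: about 16 GB, int8: about 8 GB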

In general, inspecting such a model, or building one from scratch, is
also technically out of reach for the majority of people.

> But I don't feel it is a reason not to try to promote LLM related
> free software, since people are going to use LLMs no matter what we
> think (simply because they get benefit from it), and if nobody
> fights for freedom, the only option for them might be to use
> proprietary software

That has not been so for a long time. There is no danger of "not
having" free software Large Language Models (LLMs). The largest
companies are working on them, such as Microsoft, IBM, NVIDIA, and
Mozilla, and there are many others, like Qwen, DeepSeek, and EuroLLM.
So the way forward is with free software, and the commercial industry
has recognized that it has to build with it to get its market share.
Their money comes from provisioning that software, as Alibaba Cloud
provisions Qwen, DeepSeek, and other models, and Microsoft Azure
provisions Phi-4 for customers.

> and then the situation will become even worse. So personally I don't
> feel like rejecting all LLM related project simply because right now
> it's still mostly bullshit driven by Big Tech is helpful in the long
> run

Just to make sure: the word "bullshit" in the previous context refers
to Large Language Model (LLM) output produced without thinking.

ChatGPT is bullshit | Ethics and Information Technology
https://link.springer.com/article/10.1007/s10676-024-09775-5

And I do not see "bullshit driven by Big Tech" myself. I see progress,
advances, and new scientifically explained methods that are helpful to
other people, and I can see international scientific collaboration,
wanted or unwanted; it is there.

It is now very hard to start limiting that progress.

> since I find it almost certain that the LLMs are here to stay in one
> form or another. (On a positive note, I see the majority of "AI"
> software released today is actually free software, even if the
> people/companies doing it call it "open source" and have very little
> interest on what it actually means.)

Companies like META and Google know very well what Open Source is,
and they and others are intentionally abusing that term.

A company like Microsoft, a long-time abuser, obviously doesn't make
those mistakes. They are releasing almost everything under free
software licenses.

-- 
Jean Louis

---
via emacs-tangents mailing list 
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)