Sam Hartman <hartm...@debian.org> writes:

>>>>>> "Simon" == Simon Josefsson <si...@josefsson.org> writes:
>
> firmware blob
>     Simon> for a future SoC CPU that includes camera functionality, it
>     Simon> seems possible that would make use of some LLM model to have
>     Simon> better face recognition for example.
>
> Do you perhaps mean model or machine learning model rather than LLM?
> I think that LLMs are large enough (even the "small ones") that we'd be
> aware if they were in non-free-firmware today, and at least the tasks
> you are talking about sound like they would be better approached by
> machine learning rather than something specifically directed at natural
> language.

Yes, sorry, my terminology is sloppy and I tend to conceptually merge
all of these.  From my point of view they are not necessarily any
different from an include-in-Debian-or-not perspective.  Is there a
significant difference between any of these terms for this discussion?

Size of the model is the only difference I can guess at, but I also
believe there are LLMs smaller than some of the bigger machine learning
models, so I don't think that is very relevant.  Having a small LLM in a
non-free firmware blob to bootstrap a text-to-speech or speech-to-text
input method on a laptop doesn't seem far-fetched to me.

/Simon
