On Fri, Jan 30, 2026 at 01:20:42PM +0800, hongxu via lists.openembedded.org 
wrote:
> Hi all,
> 
> ChatGPT[1] has been popular for several years; perhaps you also know
> Deepseek[2], an open-source language model that can also run locally.
> 
> The local runner to run Deepseek is Ollama[3]. It lets you download and run
> large language models (LLMs) on your own machine, without relying on
> cloud-hosted services.
> 
> That is a good fit for Yocto, so I integrated ollama into a Yocto-based Linux
> system: built from source, installed offline, and deployed.
> 
> For the convenience of review, it is hosted temporarily on my GitHub [4]:
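Side note for reviewers: once ollama is running, it serves a REST API on localhost:11434 by default, so a target image can be exercised without the CLI. A minimal sketch of building a request body for the documented /api/generate endpoint (the model tag below is just an example):

```python
import json

# Default endpoint of a locally running ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> str:
    """Build the JSON body for a non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

# Example model tag; any model pulled onto the target would work.
body = build_generate_request("deepseek-r1:1.5b", "Why is the sky blue?")
```

Posting that body with any HTTP client (urllib, curl, ...) returns the completion as JSON.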

I had a question about your contribution policy, but I checked your README and
MAINTAINERS files, which mention sending patches to either yocto@ or yocto-patches@.


> - BUILD.md [5] provides the steps to build and run on the CPU by default
> - BUILD-cuda-x86-64.md [6] provides the steps to build and run on an
>   NVIDIA GPU with CUDA for x86-64
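If I skimmed [5] correctly, consuming the layer presumably reduces to a conf fragment like this (the layer path is a placeholder and the recipe name is taken from your list — assumptions on my part):

```
# bblayers.conf: register the layer (path is a placeholder)
BBLAYERS += "/path/to/meta-ollama"

# local.conf: install the ollama application into the image
IMAGE_INSTALL:append = " ollama"
```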
> 
> I would now like to contribute meta-ollama to Yocto. The layer provides:
> 
> - Recipe ollama: provides the ollama application.

Would you be open to a llama.cpp client besides ollama?


>   It runs on the CPU by default, and optionally on an NVIDIA GPU
> with CUDA.
> 
> - Recipe llama3.2-1b and llama3.2-3b: large language models by Meta [7]
> 
> - Recipe gemma2-9b and gemma2-2b: large language models by Google [8]
> 
> - Recipe deepseek-r1-7b and deepseek-r1-1dot5b: large language
> models by Deepseek [9]

Would you be open to the Qwen3 LLM by Alibaba? To GPT LLMs by OpenAI?


> - Recipe nvidia-open-gpu-kernel-module: the kernel module for NVIDIA GPUs
> 
> - Recipe nvidia-driver-x86-64: provides the firmware for the NVIDIA kernel
>   module, the CUDA library, and the nvidia-smi application for the x86-64
>   BSP. NOTE: no sources, only binaries from NVIDIA [10]; the license is
>   Proprietary [11]
> 
> - dynamic-layers/meta-tegra: customizes meta-tegra to support CUDA for x86-64

I'm not going to ask about other architectures for now...

-- 
Denys
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#2228):
https://lists.openembedded.org/g/openembedded-architecture/message/2228
-=-=-=-=-=-=-=-=-=-=-=-