On 2/3/26 03:17, Denys Dmytriyenko via lists.yoctoproject.org wrote:

On Fri, Jan 30, 2026 at 01:20:42PM +0800, hongxu via lists.openembedded.org 
wrote:
Hi all,

ChatGPT [1] has been popular for several years; perhaps you also know
DeepSeek [2], an open-source language model that can run locally.

A local runner for DeepSeek is Ollama [3]. It lets you download and run
large language models (LLMs) on your own machine, without relying on
cloud-hosted services.
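
For anyone new to Ollama, typical local usage looks roughly like the
following (a sketch assuming Ollama is already installed on the target;
the model tag is one of the DeepSeek variants mentioned below):

```shell
# Start the Ollama server; by default it listens on localhost:11434
ollama serve &

# Download a model and chat with it entirely on the local machine,
# no cloud-hosted service involved
ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b "Why is the sky blue?"
```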

That is a good fit for Yocto, so I integrated Ollama into a Yocto-based Linux
system: built from source, with offline install and deployment.

For the convenience of review, it is hosted on my GitHub [4] temporarily.
I had a question about your contribution policy, but I checked your README and
MAINTAINERS that mention sending patches to either yocto@ or yocto-patches@

It should be [email protected]; I've corrected https://github.com/hongxu-jia/meta-ollama/blob/main/MAINTAINERS.md


- BUILD.md [5] provides the steps to build and run on CPU (the default)
- BUILD-cuda-x86-64.md [6] provides the steps to build and run on an
   NVIDIA GPU with CUDA for x86-64

I would now like to contribute meta-ollama to Yocto. The layer provides:

- Recipe ollama: provides the ollama application.
Will you be open to llama.cpp client besides ollama?
I need some time to do the investigation and make the decision.

   It runs on the CPU by default, and optionally on an NVIDIA GPU
with CUDA

- Recipes llama3.2-1b and llama3.2-3b: large language models by Meta [7]

- Recipes gemma2-9b and gemma2-2b: large language models by Google [8]

- Recipes deepseek-r1-7b and deepseek-r1-1dot5b: large language
models by DeepSeek [9]
Will you be open to Qwen3 LLM by Alibaba? GPT LLM by OpenAI?
I need some time to do the investigation and make the decision.

- Recipe nvidia-open-gpu-kernel-module: the open kernel module for NVIDIA GPUs

- Recipe nvidia-driver-x86-64: provides firmware for the NVIDIA kernel module,
   the CUDA library, and the nvidia-smi application for the x86-64 BSP.
   NOTE: no sources, only binaries from NVIDIA [10]; the license is
   Proprietary [11]

- dynamic-layers/meta-tegra: customizes meta-tegra to support CUDA for x86-64
I'm not going to ask about other architectures for now...

For other architectures, the CPU path should work by default, but AMD GPUs are
not supported.

The architectures in meta-tegra support CUDA, but this layer does not support
the machines in meta-tegra.
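
To make the layer description above concrete, enabling it in an image
could look like this (a sketch only: the layer path is a placeholder,
and the recipe names are taken from the list above, not verified against
the repository):

```conf
# conf/bblayers.conf -- register the layer (path is a placeholder)
BBLAYERS += "/path/to/meta-ollama"

# conf/local.conf -- install the ollama application plus one model recipe
# (recipe names from the layer description above)
IMAGE_INSTALL:append = " ollama deepseek-r1-1dot5b"
```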

//Hongxu

--
Denys
