This is the Codex LLM agent from OpenAI. It can use tools within the filesystem to carry out a variety of tasks, particularly coding-related activities.
The pipeline of llama.cpp + codex enables an entirely native, OpenBSD-based, AMDGPU-accelerated LLM inference agent system. This is one of the last versions of Codex to support the "Chat Completions" API, which llama.cpp currently serves. (Future Codex versions will require "Responses" API support in llama.cpp.)
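To illustrate how the two pieces connect: llama.cpp's llama-server exposes an OpenAI-compatible Chat Completions endpoint (e.g. `llama-server -m model.gguf --port 8080`), and Codex can be pointed at it through its TOML configuration. The sketch below is a minimal, hypothetical `~/.codex/config.toml`; the provider id and model name are assumptions, and key names may vary between Codex versions.

```toml
# Hypothetical ~/.codex/config.toml pointing Codex at a local llama-server.
# Provider id "llamacpp" and the model name are placeholders, not from the source.
model = "your-local-model"
model_provider = "llamacpp"

[model_providers.llamacpp]
name = "llama.cpp"
# llama-server's OpenAI-compatible endpoint on the default port
base_url = "http://127.0.0.1:8080/v1"
# Select the Chat Completions wire protocol rather than the newer Responses API
wire_api = "chat"
```

With a config along these lines, Codex sends its requests to the local llama-server instead of OpenAI's hosted API, keeping the whole agent loop on the OpenBSD machine.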
codex.tgz
Description: application/tar-gz
