This is an automated email from the ASF dual-hosted git repository.
junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm-ffi.git
The following commit(s) were added to refs/heads/main by this push:
new 8caa0cb feat: Introduce `<tvm/ffi/tvm_ffi.h>` (#369)
8caa0cb is described below
commit 8caa0cbe70e369a525192be0e0f159702dacb71c
Author: Junru Shao <[email protected]>
AuthorDate: Sun Jan 4 14:50:44 2026 -0800
feat: Introduce `<tvm/ffi/tvm_ffi.h>` (#369)
Analogous to:
- `nanobind/nanobind.h`:
https://nanobind.readthedocs.io/en/latest/basics.html#creating-your-first-extension
- `pybind11/pybind11.h`:
https://pybind11.readthedocs.io/en/stable/basics.html#header-and-namespace-conventions
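For downstream C++ code, the change amounts to swapping the per-component includes for a single umbrella include. A minimal sketch drawn from the files touched in this patch (headers under `tvm/ffi/extra` are intentionally not covered by the umbrella header and stay explicit):

    // Before: per-component includes
    // #include <tvm/ffi/function.h>
    // #include <tvm/ffi/container/tensor.h>
    // #include <tvm/ffi/reflection/registry.h>

    // After: one umbrella include for the core C++ APIs
    #include <tvm/ffi/tvm_ffi.h>
    // APIs under tvm/ffi/extra (e.g. module loading) are still included separately
    #include <tvm/ffi/extra/module.h>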
---
docs/get_started/quickstart.rst | 53 ++++++++++++++------------
docs/packaging/python_packaging.rst | 6 +++
examples/python_packaging/pyproject.toml | 21 +++++------
examples/python_packaging/src/extension.cc | 4 +-
examples/quickstart/compile/add_one_cpu.cc | 3 +-
examples/quickstart/compile/add_one_cuda.cu | 3 +-
examples/quickstart/load/load_cpp.cc | 2 +-
examples/quickstart/load/load_cuda.cc | 2 +-
include/tvm/ffi/tvm_ffi.h | 58 +++++++++++++++++++++++++++++
9 files changed, 108 insertions(+), 44 deletions(-)
diff --git a/docs/get_started/quickstart.rst b/docs/get_started/quickstart.rst
index 958b0a4..8dd0fe5 100644
--- a/docs/get_started/quickstart.rst
+++ b/docs/get_started/quickstart.rst
@@ -54,6 +54,14 @@ Source Code
Suppose we implement a C++ function ``AddOne`` that performs elementwise ``y =
x + 1`` for a 1-D ``float32`` vector. The source code (C++, CUDA) is:
+.. hint::
+
+ Include the umbrella header to access all the core C++ APIs.
+
+ .. code-block:: cpp
+
+ #include <tvm/ffi/tvm_ffi.h>
+
.. tabs::
.. group-tab:: C++
@@ -62,7 +70,7 @@ Suppose we implement a C++ function ``AddOne`` that performs elementwise ``y = x
.. literalinclude:: ../../examples/quickstart/compile/add_one_cpu.cc
:language: cpp
- :emphasize-lines: 8, 17
+ :emphasize-lines: 7, 16
:start-after: [example.begin]
:end-before: [example.end]
@@ -70,7 +78,7 @@ Suppose we implement a C++ function ``AddOne`` that performs elementwise ``y = x
.. literalinclude:: ../../examples/quickstart/compile/add_one_cuda.cu
:language: cpp
- :emphasize-lines: 15, 22, 26
+ :emphasize-lines: 14, 21, 25
:start-after: [example.begin]
:end-before: [example.end]
@@ -166,9 +174,9 @@ the ``add_one_cpu.so`` or ``add_one_cuda.so`` into
:py:class:`tvm_ffi.Module`.
.. code-block:: python
- import tvm_ffi
- mod : tvm_ffi.Module = tvm_ffi.load_module("add_one_cpu.so")
- func : tvm_ffi.Function = mod.add_one_cpu
+ import tvm_ffi
+ mod : tvm_ffi.Module = tvm_ffi.load_module("add_one_cpu.so")
+ func : tvm_ffi.Function = mod.add_one_cpu
``mod.add_one_cpu`` retrieves a callable :py:class:`tvm_ffi.Function` that
accepts tensors from host frameworks
directly. This process is done zero-copy, without any boilerplate code, under
extremely low latency.
@@ -292,20 +300,19 @@ Compile and run it with:
.. code-block:: cpp
- // Linked with `add_one_cpu.o` or `add_one_cuda.o`
- #include <tvm/ffi/function.h>
- #include <tvm/ffi/container/tensor.h>
+ // Linked with `add_one_cpu.o` or `add_one_cuda.o`
+ #include <tvm/ffi/tvm_ffi.h>
- // declare reference to the exported symbol
- extern "C" int __tvm_ffi_add_one_cpu(void*, const TVMFFIAny*, int32_t,
TVMFFIAny*);
+ // declare reference to the exported symbol
+ extern "C" int __tvm_ffi_add_one_cpu(void*, const TVMFFIAny*, int32_t,
TVMFFIAny*);
- namespace ffi = tvm::ffi;
+ namespace ffi = tvm::ffi;
- int bundle_add_one(ffi::TensorView x, ffi::TensorView y) {
- void* closure_handle = nullptr;
- ffi::Function::InvokeExternC(closure_handle, __tvm_ffi_add_one_cpu, x, y);
- return 0;
- }
+ int bundle_add_one(ffi::TensorView x, ffi::TensorView y) {
+ void* closure_handle = nullptr;
+ ffi::Function::InvokeExternC(closure_handle, __tvm_ffi_add_one_cpu, x, y);
+ return 0;
+ }
.. _ship-to-rust:
@@ -318,13 +325,13 @@ This procedure is identical to those in C++ and Python:
.. code-block:: rust
- fn run_add_one(x: &Tensor, y: &Tensor) -> Result<()> {
- let module = tvm_ffi::Module::load_from_file("add_one_cpu.so")?;
- let func = module.get_function("add_one_cpu")?;
- let typed_fn = into_typed_fn!(func, Fn(&Tensor, &Tensor) -> Result<()>);
- typed_fn(x, y)?;
- Ok(())
- }
+ fn run_add_one(x: &Tensor, y: &Tensor) -> Result<()> {
+ let module = tvm_ffi::Module::load_from_file("add_one_cpu.so")?;
+ let func = module.get_function("add_one_cpu")?;
+ let typed_fn = into_typed_fn!(func, Fn(&Tensor, &Tensor) -> Result<()>);
+ typed_fn(x, y)?;
+ Ok(())
+ }
.. hint::
diff --git a/docs/packaging/python_packaging.rst b/docs/packaging/python_packaging.rst
index c4856cf..3dd6a41 100644
--- a/docs/packaging/python_packaging.rst
+++ b/docs/packaging/python_packaging.rst
@@ -47,6 +47,12 @@ internals. We will cover three checkpoints:
Export C++ to Python
--------------------
+Include the umbrella header to access the core TVM-FFI C++ API.
+
+.. code-block:: cpp
+
+ #include <tvm/ffi/tvm_ffi.h>
+
TVM-FFI offers three ways to expose code:
- C symbols in TVM FFI ABI: Export code as plain C symbols. This is the
recommended way for
diff --git a/examples/python_packaging/pyproject.toml b/examples/python_packaging/pyproject.toml
index 7652bb4..20daed7 100644
--- a/examples/python_packaging/pyproject.toml
+++ b/examples/python_packaging/pyproject.toml
@@ -45,16 +45,13 @@ wheel.py-api = "py3"
wheel.packages = ["python/my_ffi_extension"]
# The install dir matches the import name
wheel.install-dir = "my_ffi_extension"
-# [pyproject.build.end]
-
-# Build configuration
-build-dir = "build"
-build.verbose = true
-
-# CMake configuration
+# Build the extension in-place under "./build-wheel"
+build-dir = "build-wheel"
+# Specify minimum CMake version
cmake.version = "CMakeLists.txt"
-cmake.build-type = "RelWithDebInfo"
-
-# Logging
-logging.level = "INFO"
-minimum-version = "build-system.requires"
+# Specify CMake build type
+cmake.build-type = "Release"
+# Pass custom CMake definitions
+[tool.scikit-build.cmake.define]
+CMAKE_EXPORT_COMPILE_COMMANDS = "ON"
+# [pyproject.build.end]
diff --git a/examples/python_packaging/src/extension.cc b/examples/python_packaging/src/extension.cc
index 81fb003..bf6cfea 100644
--- a/examples/python_packaging/src/extension.cc
+++ b/examples/python_packaging/src/extension.cc
@@ -20,9 +20,7 @@
* \file extension.cc
* \brief Example of a tvm-ffi based library that registers various functions.
*/
-#include <tvm/ffi/function.h>
-#include <tvm/ffi/object.h>
-#include <tvm/ffi/reflection/registry.h>
+#include <tvm/ffi/tvm_ffi.h>
#include <cstdint>
diff --git a/examples/quickstart/compile/add_one_cpu.cc b/examples/quickstart/compile/add_one_cpu.cc
index 3ddf6cf..57f951e 100644
--- a/examples/quickstart/compile/add_one_cpu.cc
+++ b/examples/quickstart/compile/add_one_cpu.cc
@@ -19,8 +19,7 @@
// [example.begin]
// File: compile/add_one_cpu.cc
-#include <tvm/ffi/container/tensor.h>
-#include <tvm/ffi/function.h>
+#include <tvm/ffi/tvm_ffi.h>
namespace tvm_ffi_example_cpu {
diff --git a/examples/quickstart/compile/add_one_cuda.cu b/examples/quickstart/compile/add_one_cuda.cu
index 2b743e8..de6c389 100644
--- a/examples/quickstart/compile/add_one_cuda.cu
+++ b/examples/quickstart/compile/add_one_cuda.cu
@@ -19,9 +19,8 @@
// [example.begin]
// File: compile/add_one_cuda.cu
-#include <tvm/ffi/container/tensor.h>
#include <tvm/ffi/extra/c_env_api.h>
-#include <tvm/ffi/function.h>
+#include <tvm/ffi/tvm_ffi.h>
namespace tvm_ffi_example_cuda {
diff --git a/examples/quickstart/load/load_cpp.cc b/examples/quickstart/load/load_cpp.cc
index b00db42..cee261e 100644
--- a/examples/quickstart/load/load_cpp.cc
+++ b/examples/quickstart/load/load_cpp.cc
@@ -18,8 +18,8 @@
*/
// [main.begin]
// File: load/load_cpp.cc
-#include <tvm/ffi/container/tensor.h>
#include <tvm/ffi/extra/module.h>
+#include <tvm/ffi/tvm_ffi.h>
namespace {
namespace ffi = tvm::ffi;
diff --git a/examples/quickstart/load/load_cuda.cc b/examples/quickstart/load/load_cuda.cc
index 8e2e55f..07e43ff 100644
--- a/examples/quickstart/load/load_cuda.cc
+++ b/examples/quickstart/load/load_cuda.cc
@@ -18,8 +18,8 @@
*/
// [main.begin]
// File: load/load_cuda.cc
-#include <tvm/ffi/container/tensor.h>
#include <tvm/ffi/extra/module.h>
+#include <tvm/ffi/tvm_ffi.h>
namespace {
namespace ffi = tvm::ffi;
diff --git a/include/tvm/ffi/tvm_ffi.h b/include/tvm/ffi/tvm_ffi.h
new file mode 100644
index 0000000..be26aed
--- /dev/null
+++ b/include/tvm/ffi/tvm_ffi.h
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * \file tvm/ffi/tvm_ffi.h
+ * \brief Umbrella header for the core TVM-FFI C++ APIs (excluding tvm/ffi/extra).
+ *
+ * \code{.cpp}
+ * #include <tvm/ffi/tvm_ffi.h>
+ * \endcode
+ */
+#ifndef TVM_FFI_TVM_FFI_H_
+#define TVM_FFI_TVM_FFI_H_
+
+#include <tvm/ffi/any.h>
+#include <tvm/ffi/base_details.h>
+#include <tvm/ffi/c_api.h>
+#include <tvm/ffi/cast.h>
+#include <tvm/ffi/container/array.h>
+#include <tvm/ffi/container/container_details.h>
+#include <tvm/ffi/container/map.h>
+#include <tvm/ffi/container/shape.h>
+#include <tvm/ffi/container/tensor.h>
+#include <tvm/ffi/container/tuple.h>
+#include <tvm/ffi/container/variant.h>
+#include <tvm/ffi/dtype.h>
+#include <tvm/ffi/endian.h>
+#include <tvm/ffi/error.h>
+#include <tvm/ffi/function.h>
+#include <tvm/ffi/function_details.h>
+#include <tvm/ffi/memory.h>
+#include <tvm/ffi/object.h>
+#include <tvm/ffi/optional.h>
+#include <tvm/ffi/reflection/access_path.h>
+#include <tvm/ffi/reflection/accessor.h>
+#include <tvm/ffi/reflection/creator.h>
+#include <tvm/ffi/reflection/overload.h>
+#include <tvm/ffi/reflection/registry.h>
+#include <tvm/ffi/rvalue_ref.h>
+#include <tvm/ffi/string.h>
+#include <tvm/ffi/type_traits.h>
+
+#endif // TVM_FFI_TVM_FFI_H_