junrushao commented on code in PR #168: URL: https://github.com/apache/tvm-ffi/pull/168#discussion_r2441473103
##########
docs/get_started/quickstart.rst:
##########

```diff
@@ -20,12 +20,12 @@ Quick Start

 This guide walks through shipping a minimal ``add_one`` function that computes
 ``y = x + 1`` in C++ and CUDA.
-
-TVM-FFI's Open ABI and FFI makes it possible to **build once, ship everywhere**. That said,
-a single shared library works across:
+TVM-FFI's Open ABI and FFI make it possible to **ship one library**.
+We can build a single shared library that works across many environments:

 - **ML frameworks**, e.g. PyTorch, JAX, NumPy, CuPy, etc., and
 - **languages**, e.g. C++, Python, Rust, etc.
+- **language ABI versions**, e.g. ship one wheel to support multiple Python versions, including free-threaded Python.
```

Review Comment:
   Maybe be more specific to Python? Crossing C++ ABI versions seems like a bad idea.
   ```suggestion
   - **Python ABI versions**, e.g. ship one wheel to support multiple Python versions, including free-threaded Python.
   ```

##########
docs/get_started/quickstart.rst:
##########

```diff
@@ -50,7 +50,7 @@ Write a Simple ``add_one``

 Source Code
 ~~~~~~~~~~~

-Suppose we implement a C++ function ``AddOne`` that performs elementwise ``y = x + 1`` for a 1-D ``float32`` vector. The source code (C++, CUDA) is:
+Suppose we implement a C++ function ``AddOne`` that performs element-wise ``y = x + 1`` for a 1-D ``float32`` vector. The source code (C++, CUDA) is:
```

Review Comment:
   `elementwise` is a word.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected]
