gemini-code-assist[bot] commented on code in PR #283: URL: https://github.com/apache/tvm-ffi/pull/283#discussion_r2561682583
########## docs/guides/cubin_launcher.rst: ##########
@@ -0,0 +1,422 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements. See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership. The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License. You may obtain a copy of the License at
+..
+..   http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied. See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+CUBIN Launcher Guide
+====================
+
+This guide demonstrates how to load and launch CUDA kernels from CUBIN (CUDA Binary) modules using TVM-FFI. The CUBIN launcher enables you to execute pre-compiled or runtime-compiled CUDA kernels efficiently through the CUDA Runtime API.
+
+Overview
+--------
+
+TVM-FFI provides utilities for loading and launching CUDA kernels from CUBIN modules. The implementation is in ``tvm/ffi/extra/cuda/cubin_launcher.h`` and provides:
+
+- :cpp:class:`tvm::ffi::CubinModule`: RAII wrapper for loading CUBIN modules from memory
+- :cpp:class:`tvm::ffi::CubinKernel`: Handle for launching CUDA kernels with specified parameters
+- :c:macro:`TVM_FFI_EMBED_CUBIN`: Macro for embedding CUBIN data at compile time
+- :c:macro:`TVM_FFI_EMBED_CUBIN_GET_KERNEL`: Macro for retrieving kernels from embedded CUBIN
+
+The CUBIN launcher supports:
+
+- Loading CUBIN from memory (embedded data or runtime-generated)
+- Multi-GPU execution using CUDA primary contexts
+- Kernel parameter management and launch configuration
+- Integration with NVRTC, Triton, and other CUDA compilation tools
+
+**Build Integration:**
+
+TVM-FFI provides convenient tools for embedding CUBIN data at build time:
+
+- **CMake utilities** (``cmake/Utils/EmbedCubin.cmake``): Functions for compiling CUDA to CUBIN and embedding it into C++ code
+- **Python utility** (``python -m tvm_ffi.utils.embed_cubin``): Command-line tool for embedding CUBIN into object files
+- **Python API** (:py:func:`tvm_ffi.cpp.load_inline`): Runtime embedding via the ``embed_cubin`` parameter
+
+Python Usage
+------------
+
+Basic Workflow
+~~~~~~~~~~~~~~
+
+The typical workflow for launching CUBIN kernels from Python involves:
+
+1. **Generate CUBIN**: Compile your CUDA kernel to CUBIN format
+2. **Define C++ Wrapper**: Write C++ code to load and launch the kernel
+3. **Load Module**: Use :py:func:`tvm_ffi.cpp.load_inline` with the ``embed_cubin`` parameter
+4. **Call Kernel**: Invoke the kernel function from Python
+
+Example: NVRTC Compilation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here is a complete example using NVRTC to compile CUDA source at runtime.
+
+**Step 1: Compile CUDA source to CUBIN using NVRTC**
+
+.. literalinclude:: ../../examples/cubin_launcher/example_nvrtc_cubin.py
+   :language: python
+   :start-after: [cuda_source.begin]
+   :end-before: [cuda_source.end]
+   :dedent: 4
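Since the ``literalinclude`` above pulls from an example file that is not part of this diff, here is a minimal sketch of what Step 1 amounts to, assuming the ``nvrtc_compile`` helper shown in the header documentation further down; the kernel source is a stand-in for the example file's:

```python
# Sketch only: compile a CUDA source string to CUBIN bytes at runtime.
from tvm_ffi.cpp import nvrtc

cuda_source = r"""
extern "C" __global__ void add_one(const float* in, float* out, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[i] + 1.0f;
}
"""

cubin_bytes = nvrtc.nvrtc_compile(cuda_source)  # raw CUBIN as Python bytes
```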
+
+**Step 2: Define C++ wrapper with embedded CUBIN**
+
+.. literalinclude:: ../../examples/cubin_launcher/example_nvrtc_cubin.py
+   :language: python
+   :start-after: [cpp_wrapper.begin]
+   :end-before: [cpp_wrapper.end]
+   :dedent: 4
+
+**Key Points:**
+
+- The ``embed_cubin`` parameter is a dictionary mapping CUBIN names to their binary data
+- CUBIN names in ``embed_cubin`` must match the names used in :c:macro:`TVM_FFI_EMBED_CUBIN`
+- Use the ``cuda_sources`` parameter (instead of ``cpp_sources``) to automatically link with the CUDA libraries
+- The C++ wrapper handles device management, stream handling, and kernel launching
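Putting the two steps together, loading and calling the kernel from Python looks roughly like the following sketch; it mirrors the ``load_inline`` call in the header documentation's Option 3 below, while the exported function name and the use of ``torch`` tensors are illustrative:

```python
# Sketch: embed the CUBIN produced in Step 1 and call the C++ wrapper.
# "my_kernels" must match TVM_FFI_EMBED_CUBIN(my_kernels) in cpp_code.
import torch
from tvm_ffi import cpp

mod = cpp.load_inline(
    "my_module",
    cuda_sources=cpp_code,                    # C++ wrapper from Step 2
    embed_cubin={"my_kernels": cubin_bytes},  # CUBIN from Step 1
    extra_ldflags=["-lcudart"],
)

x = torch.arange(1024, dtype=torch.float32, device="cuda")
y = torch.empty_like(x)
mod.add_one(x, y)  # the wrapper launches the embedded kernel
```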
+
+Example: Using Triton Kernels
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can compile Triton kernels to CUBIN and launch them through TVM-FFI.
+
+**Step 1: Define and compile Triton kernel**
+
+.. literalinclude:: ../../examples/cubin_launcher/example_triton_cubin.py
+   :language: python
+   :start-after: [triton_kernel.begin]
+   :end-before: [triton_kernel.end]
+   :dedent: 4
+
+**Step 2: Define C++ wrapper to launch the Triton kernel**
+
+.. literalinclude:: ../../examples/cubin_launcher/example_triton_cubin.py
+   :language: python
+   :start-after: [cpp_wrapper.begin]
+   :end-before: [cpp_wrapper.end]
+   :dedent: 4
+
+.. note::
+
+   Triton kernels may require extra dummy parameters in the argument list. Check the compiled kernel's signature to determine the exact parameter count needed.
+
+C++ Usage
+---------
+
+Embedding CUBIN at Compile Time
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The recommended approach in C++ is to embed CUBIN data directly into your shared library:
+
+.. literalinclude:: ../../examples/cubin_launcher/embedded_cubin/src/lib_embedded.cc
+   :language: cpp
+   :start-after: [example.begin]
+   :end-before: [example.end]
+
+**Key Points:**
+
+- Use ``static auto kernel`` to cache the kernel lookup for efficiency
+- Kernel arguments must be pointers to the actual values (use ``&`` to take addresses)
+- :cpp:type:`tvm::ffi::dim3` supports 1D, 2D, or 3D configurations: ``dim3(x)``, ``dim3(x, y)``, ``dim3(x, y, z)``
+- ``TVMFFIEnvGetStream`` retrieves the correct CUDA stream for the device
+- Always check kernel launch results with :c:macro:`TVM_FFI_CHECK_CUDA_ERROR` (which checks CUDA Runtime API errors)
+
+Loading CUBIN at Runtime
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can also load CUBIN modules dynamically from memory:
+
+.. literalinclude:: ../../examples/cubin_launcher/dynamic_cubin/src/lib_dynamic.cc
+   :language: cpp
+   :start-after: [example.begin]
+   :end-before: [example.end]
+
+Embedding CUBIN with CMake Utilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+TVM-FFI provides CMake utility functions that simplify the CUBIN embedding process. This is the recommended approach for CMake-based projects.
+
+**Using CMake Utilities:**
+
+.. literalinclude:: ../../examples/cubin_launcher/embedded_cubin/CMakeLists.txt
+   :language: cmake
+   :start-after: [cmake_example.begin]
+   :end-before: [cmake_example.end]
+
+**Available CMake Functions:**
+
+- ``tvm_ffi_generate_cubin()``: Compiles CUDA source to CUBIN using nvcc
+
+  - ``OUTPUT``: Path to the output CUBIN file
+  - ``SOURCE``: Path to the CUDA source file
+  - ``ARCH``: Target GPU architecture (default: ``native`` for auto-detection)
+  - ``OPTIONS``: Additional nvcc compiler options (optional)
+  - ``DEPENDS``: Additional dependencies (optional)
+
+- ``tvm_ffi_embed_cubin()``: Compiles C++ source and embeds CUBIN data
+
+  - ``OUTPUT``: Path to the output combined object file
+  - ``SOURCE``: Path to the C++ source file with the ``TVM_FFI_EMBED_CUBIN`` macro
+  - ``CUBIN``: Path to the CUBIN file to embed
+  - ``NAME``: Symbol name used in the ``TVM_FFI_EMBED_CUBIN(name)`` macro
+  - ``DEPENDS``: Additional dependencies (optional)
+
+The utilities automatically handle:
+
+- Compiling the C++ source to an intermediate object file
+- Creating CUBIN symbols with proper naming
+- Merging object files using ``ld -r``
+- Adding a ``.note.GNU-stack`` section for security
+- Localizing symbols to prevent conflicts
+
+Embedding CUBIN with Python Utility
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For more advanced use cases or non-CMake build systems, you can use the Python command-line utility to embed CUBIN data into existing object files.
+
+**Command-Line Usage:**
+
+.. code-block:: bash
+
+   # Step 1: Compile C++ source to object file
+   g++ -c -fPIC -std=c++17 -I/path/to/tvm-ffi/include mycode.cc -o mycode.o
+
+   # Step 2: Embed CUBIN into the object file
+   python -m tvm_ffi.utils.embed_cubin \
+       --output-obj mycode_with_cubin.o \
+       --input-obj mycode.o \
+       --cubin kernel.cubin \
+       --name my_kernels
+
+   # Step 3: Link into final library
+   g++ -o mylib.so -shared mycode_with_cubin.o -lcudart
+
+**Python API:**
+
+.. code-block:: python
+
+   from pathlib import Path
+   from tvm_ffi.utils.embed_cubin import embed_cubin
+
+   embed_cubin(
+       cubin_path=Path("kernel.cubin"),
+       input_obj_path=Path("mycode.o"),
+       output_obj_path=Path("mycode_with_cubin.o"),
+       name="my_kernels",
+       verbose=True,  # Optional: print detailed progress
+   )
+
+The Python utility performs these steps:
+
+1. Creates an intermediate CUBIN object file using ``ld -r -b binary``
+2. Adds a ``.note.GNU-stack`` section for security
+3. Renames symbols to match the TVM-FFI format (``__tvm_ffi__cubin_<name>``)
+4. Merges with the input object file using relocatable linking
+5. Localizes symbols to prevent conflicts when multiple object files use the same name
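For a build script that drives all of the above from Python, the pieces compose naturally. A minimal sketch, assuming ``nvcc`` and ``g++`` are on ``PATH`` and reusing the placeholder paths and flags from the shell example:

```python
# Sketch: compile -> embed -> link, driven from a Python build script.
import subprocess
from pathlib import Path

from tvm_ffi.utils.embed_cubin import embed_cubin

# Compile the CUDA kernel to CUBIN (same nvcc invocation as above)
subprocess.run(
    ["nvcc", "--cubin", "-arch=sm_75", "kernel.cu", "-o", "kernel.cubin"],
    check=True,
)

# Compile the C++ wrapper to an object file
subprocess.run(
    ["g++", "-c", "-fPIC", "-std=c++17", "-I/path/to/tvm-ffi/include",
     "mycode.cc", "-o", "mycode.o"],
    check=True,
)

# Embed the CUBIN; "my_kernels" must match TVM_FFI_EMBED_CUBIN(my_kernels)
embed_cubin(
    cubin_path=Path("kernel.cubin"),
    input_obj_path=Path("mycode.o"),
    output_obj_path=Path("mycode_with_cubin.o"),
    name="my_kernels",
)

# Link the final shared library against the CUDA Runtime
subprocess.run(
    ["g++", "-o", "mylib.so", "-shared", "mycode_with_cubin.o", "-lcudart"],
    check=True,
)
```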
+
+Manual CUBIN Embedding
+~~~~~~~~~~~~~~~~~~~~~~
+
+For reference, here's how to manually embed CUBIN using objcopy and ld:
+
+**Step 1: Compile CUDA kernel to CUBIN**
+
+.. code-block:: bash
+
+   nvcc --cubin -arch=sm_75 kernel.cu -o kernel.cubin
+
+**Step 2: Convert CUBIN to object file**
+
+.. code-block:: bash
+
+   ld -r -b binary -o kernel_data.o kernel.cubin
+
+**Step 3: Rename symbols with objcopy**
+
+.. code-block:: bash
+
+   objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents \
+       --redefine-sym _binary_kernel_cubin_start=__tvm_ffi__cubin_my_kernels \
+       --redefine-sym _binary_kernel_cubin_end=__tvm_ffi__cubin_my_kernels_end \
+       kernel_data.o
+
+**Step 4: Link with your library**
+
+.. code-block:: bash
+
+   g++ -o mylib.so -shared mycode.cc kernel_data.o -Wl,-z,noexecstack -lcuda

Review Comment:
The manual linking example uses `-lcuda`, which links against the CUDA Driver API. However, `cubin_launcher.h` uses the CUDA Runtime API (e.g., `cudaLibraryLoadData`, `cudaLaunchKernel`), which requires linking against the CUDA Runtime library with `-lcudart`. Using `-lcuda` might lead to linking errors. Please update the example to use `-lcudart` for consistency with the implementation and other examples.

```suggestion
   g++ -o mylib.so -shared mycode.cc kernel_data.o -Wl,-z,noexecstack -lcudart
```

########## include/tvm/ffi/extra/cuda/cubin_launcher.h: ##########
@@ -0,0 +1,623 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * \file tvm/ffi/extra/cuda/cubin_launcher.h
+ * \brief CUDA CUBIN launcher utility for loading and executing CUDA kernels.
+ *
+ * This header provides a lightweight C++ wrapper around CUDA Runtime API
+ * for loading CUBIN modules and launching kernels. It supports:
+ * - Loading CUBIN from memory (embedded data)
+ * - Multi-GPU execution using CUDA primary contexts
+ * - Kernel parameter management and launch configuration
+ */
+#ifndef TVM_FFI_EXTRA_CUBIN_LAUNCHER_H_
+#define TVM_FFI_EXTRA_CUBIN_LAUNCHER_H_
+
+#include <cuda_runtime.h>
+#include <tvm/ffi/error.h>
+#include <tvm/ffi/extra/c_env_api.h>
+#include <tvm/ffi/string.h>
+
+#include <cstdint>
+#include <cstring>
+
+namespace tvm {
+namespace ffi {
+
+/*!
+ * \brief Macro for checking CUDA runtime API errors.
+ *
+ * This macro checks the return value of CUDA runtime API calls and throws
+ * a RuntimeError with detailed error information if the call fails.
+ *
+ * \param stmt The CUDA runtime API call to check.
+ */
+#define TVM_FFI_CHECK_CUDA_ERROR(stmt)                                              \
+  do {                                                                              \
+    cudaError_t __err = (stmt);                                                     \
+    if (__err != cudaSuccess) {                                                     \
+      const char* __err_name = cudaGetErrorName(__err);                             \
+      const char* __err_str = cudaGetErrorString(__err);                            \
+      TVM_FFI_THROW(RuntimeError) << "CUDA Runtime Error: " << __err_name << " ("   \
+                                  << static_cast<int>(__err) << "): " << __err_str; \
+    }                                                                               \
+  } while (0)
+
+/*!
+ * \brief A simple 3D dimension type for CUDA kernel launch configuration.
+ *
+ * This struct mimics the behavior of dim3 from the CUDA Runtime API and provides
+ * a compatible interface for kernel launch configuration. It can be constructed
+ * from 1, 2, or 3 dimensions.
+ */
+struct dim3 {
+  /*! \brief X dimension (number of blocks in x-direction or threads in x-direction) */
+  unsigned int x;
+  /*! \brief Y dimension (number of blocks in y-direction or threads in y-direction) */
+  unsigned int y;
+  /*! \brief Z dimension (number of blocks in z-direction or threads in z-direction) */
+  unsigned int z;
+
+  /*! \brief Default constructor initializes to (1, 1, 1) */
+  dim3() : x(1), y(1), z(1) {}
+
+  /*! \brief Construct with x dimension, y and z default to 1 */
+  explicit dim3(unsigned int x_) : x(x_), y(1), z(1) {}
+
+  /*! \brief Construct with x and y dimensions, z defaults to 1 */
+  dim3(unsigned int x_, unsigned int y_) : x(x_), y(y_), z(1) {}
+
+  /*! \brief Construct with all three dimensions */
+  dim3(unsigned int x_, unsigned int y_, unsigned int z_) : x(x_), y(y_), z(z_) {}
+};
+
+/*!
+ * \brief Macro to embed a CUBIN module with static initialization.
+ *
+ * This macro declares external symbols for embedded CUBIN data and creates
+ * a singleton struct to manage the CubinModule instance. The CUBIN data
+ * symbols should be named `__tvm_ffi__cubin_<name>` and `__tvm_ffi__cubin_<name>_end`,
+ * typically created using objcopy and ld.
+ *
+ * \par Creating Embedded CUBIN with TVM-FFI Utilities
+ * TVM-FFI provides utilities to simplify CUBIN embedding. You have two options:
+ *
+ * \par Option 1: CMake Utility (Recommended)
+ * Use the `tvm_ffi_embed_cubin` CMake function:
+ * \code{.cmake}
+ * # Find tvm_ffi package (provides tvm_ffi_embed_cubin utility)
+ * find_package(tvm_ffi CONFIG REQUIRED)
+ * find_package(CUDAToolkit REQUIRED)
+ *
+ * # Compile CUDA kernel to CUBIN
+ * tvm_ffi_generate_cubin(
+ *   OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/kernel.cubin
+ *   SOURCE src/kernel.cu
+ *   ARCH native  # or sm_75, sm_80, etc.
+ * )
+ *
+ * # Embed CUBIN into C++ object file
+ * tvm_ffi_embed_cubin(
+ *   OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/mycode_with_cubin.o
+ *   SOURCE src/mycode.cc
+ *   CUBIN ${CMAKE_CURRENT_BINARY_DIR}/kernel.cubin
+ *   NAME my_kernels  # Must match TVM_FFI_EMBED_CUBIN(my_kernels) in code
+ * )
+ *
+ * # Link into shared library
+ * add_library(mylib SHARED ${CMAKE_CURRENT_BINARY_DIR}/mycode_with_cubin.o)
+ * target_link_libraries(mylib PRIVATE tvm_ffi_header CUDA::cudart)
+ * \endcode
+ *
+ * \par Option 2: Python Utility
+ * Use the `tvm_ffi.utils.embed_cubin` command-line tool:
+ * \code{.bash}
+ * # Step 1: Compile CUDA kernel to CUBIN
+ * nvcc --cubin -arch=sm_75 kernel.cu -o kernel.cubin
+ *
+ * # Step 2: Compile C++ source to object file
+ * g++ -c -fPIC -std=c++17 -I/path/to/tvm-ffi/include mycode.cc -o mycode.o
+ *
+ * # Step 3: Embed CUBIN using Python utility
+ * python -m tvm_ffi.utils.embed_cubin \
+ *     --output-obj mycode_with_cubin.o \
+ *     --input-obj mycode.o \
+ *     --cubin kernel.cubin \
+ *     --name my_kernels
+ *
+ * # Step 4: Link into shared library
+ * g++ -o mylib.so -shared mycode_with_cubin.o -lcudart
+ * \endcode
+ *
+ * The utilities automatically handle:
+ * - Symbol renaming to the __tvm_ffi__cubin_<name> format
+ * - Adding the .note.GNU-stack section for security
+ * - Symbol localization to prevent conflicts
+ *
+ * \par Usage in C++ Code
+ * In your C++ source file, use the embedded CUBIN:
+ * \code{.cpp}
+ * #include <tvm/ffi/extra/cuda/cubin_launcher.h>
+ *
+ * // Declare the embedded CUBIN module (name must match CMake NAME parameter)
+ * TVM_FFI_EMBED_CUBIN(my_kernels);
+ *
+ * void MyFunction() {
+ *   // Get kernel from embedded CUBIN (cached in static variable for efficiency)
+ *   static auto kernel = TVM_FFI_EMBED_CUBIN_GET_KERNEL(my_kernels, "my_kernel");
+ *   // Use kernel...
+ * }
+ * \endcode
+ *
+ * \note CMake Setup: To use the utilities, add to your CMakeLists.txt:
+ * \code{.cmake}
+ * find_package(tvm_ffi CONFIG REQUIRED)  # Provides tvm_ffi_embed_cubin utility
+ * \endcode
+ *
+ * \par Option 3: Python Integration with load_inline
+ * When using `tvm_ffi.cpp.load_inline()` with the `embed_cubin` parameter,
+ * the CUBIN data is automatically embedded using the Python utility internally:
+ * \code{.py}
+ * from tvm_ffi import cpp
+ * from tvm_ffi.cpp import nvrtc
+ *
+ * # Compile CUDA source to CUBIN
+ * cubin_bytes = nvrtc.nvrtc_compile(cuda_source)
+ *
+ * # Load with embedded CUBIN - automatically handles embedding
+ * mod = cpp.load_inline(
+ *     "my_module",
+ *     cuda_sources=cpp_code,
+ *     embed_cubin={"my_kernels": cubin_bytes},
+ *     extra_ldflags=["-lcudart"]
+ * )
+ * \endcode
+ *
+ * \param name The identifier for this embedded CUBIN module (must match the
+ *             symbol names created with objcopy or the key in the embed_cubin dict).
+ *
+ * \see TVM_FFI_EMBED_CUBIN_GET_KERNEL
+ * \see CubinModule
+ * \see CubinKernel
+ */
+#define TVM_FFI_EMBED_CUBIN(name)                           \
+  extern "C" const char __tvm_ffi__cubin_##name[];          \
+  extern "C" const char __tvm_ffi__cubin_##name##_end[];    \
+  namespace {                                               \
+  struct EmbedCubinModule_##name {                          \
+    tvm::ffi::CubinModule mod{__tvm_ffi__cubin_##name};     \
+    static EmbedCubinModule_##name* Global() {              \
+      static EmbedCubinModule_##name inst;                  \
+      return &inst;                                         \
+    }                                                       \
+  };                                                        \
+  } /* anonymous namespace */
+
+/*!
+ * \brief Macro to get a kernel from an embedded CUBIN module.
+ *
+ * This macro retrieves a kernel by name from a previously declared embedded
+ * CUBIN module (using TVM_FFI_EMBED_CUBIN). The result is a CubinKernel object
+ * that can be used to launch the kernel with specified parameters.
+ *
+ * \par Performance Tip
+ * It's recommended to store the result in a static variable to avoid repeated
+ * kernel lookups, which improves performance:
+ * \code{.cpp}
+ * static auto kernel = TVM_FFI_EMBED_CUBIN_GET_KERNEL(my_kernels, "kernel_name");
+ * \endcode
+ *
+ * \par Complete Example
+ * \code{.cpp}
+ * // Declare embedded CUBIN module
+ * TVM_FFI_EMBED_CUBIN(my_kernels);
+ *
+ * void LaunchKernel(tvm::ffi::TensorView input, tvm::ffi::TensorView output) {
+ *   // Get kernel (cached in static variable for efficiency)
+ *   static auto kernel = TVM_FFI_EMBED_CUBIN_GET_KERNEL(my_kernels, "add_one");
+ *
+ *   // Prepare kernel arguments
+ *   void* in_ptr = input.data_ptr();
+ *   void* out_ptr = output.data_ptr();
+ *   int64_t n = input.size(0);
+ *   void* args[] = {&in_ptr, &out_ptr, &n};
+ *
+ *   // Configure launch
+ *   tvm::ffi::dim3 grid((n + 255) / 256);
+ *   tvm::ffi::dim3 block(256);
+ *
+ *   // Get stream and launch
+ *   DLDevice device = input.device();
+ *   cudaStream_t stream = static_cast<cudaStream_t>(
+ *       TVMFFIEnvGetStream(device.device_type, device.device_id));
+ *
+ *   cudaError_t result = kernel.Launch(args, grid, block, stream);
+ *   TVM_FFI_CHECK_CUDA_ERROR(result);
+ * }
+ * \endcode
+ *
+ * \param name The identifier of the embedded CUBIN module (must match the name
+ *             used in TVM_FFI_EMBED_CUBIN).
+ * \param kernel_name The name of the kernel function as it appears in the CUBIN
+ *                    (typically the function name for `extern "C"` kernels).
+ * \return A CubinKernel object for the specified kernel.
+ *
+ * \see TVM_FFI_EMBED_CUBIN
+ * \see CubinKernel::Launch
+ */
+#define TVM_FFI_EMBED_CUBIN_GET_KERNEL(name, kernel_name) \
+  (EmbedCubinModule_##name::Global()->mod[kernel_name])
+
+// Forward declaration
+class CubinKernel;
+
+/*!
+ * \brief CUDA CUBIN module loader and manager.
+ *
+ * This class provides a RAII wrapper around the CUDA Runtime API's library management.
+ * It loads a CUBIN module from memory and manages the library handle automatically.
+ * The library is unloaded when the CubinModule object is destroyed.
+ *
+ * \par Features
+ * - Load CUBIN from memory (embedded data or runtime-generated)
+ * - Automatic resource management (RAII pattern)
+ * - Multi-GPU execution using CUDA primary contexts
+ * - Retrieve multiple kernels from the same module
+ *
+ * \par Example Usage
+ * \code{.cpp}
+ * // Load CUBIN from memory
+ * tvm::ffi::Bytes cubin_data = ...;
+ * tvm::ffi::CubinModule module(cubin_data);
+ *
+ * // Get kernels by name
+ * tvm::ffi::CubinKernel kernel1 = module["add_one"];
+ * tvm::ffi::CubinKernel kernel2 = module.GetKernel("mul_two");
+ *
+ * // Launch kernels
+ * void* args[] = {...};
+ * tvm::ffi::dim3 grid(32), block(256);
+ * cudaStream_t stream = ...;
+ * kernel1.Launch(args, grid, block, stream);
+ * \endcode
+ *
+ * \note This class is movable but not copyable.
+ * \see TVM_FFI_EMBED_CUBIN for embedding CUBIN at compile time
+ * \see CubinKernel for kernel launching
+ */
+class CubinModule {
+ public:
+  /*!
+   * \brief Load CUBIN module from memory.
+   *
+   * \param bytes CUBIN binary data as a Bytes object.
+   * \note CUDA Runtime API automatically initializes on first use.
+   */
+  explicit CubinModule(const Bytes& bytes) {
+    TVM_FFI_CHECK_CUDA_ERROR(
+        cudaLibraryLoadData(&library_, bytes.data(), nullptr, nullptr, 0, nullptr, nullptr, 0));
+  }
+
+  /*!
+   * \brief Load CUBIN module from raw memory buffer.
+   *
+   * \param code Pointer to CUBIN binary data.
+   * \note CUDA Runtime API automatically initializes on first use.
+   * \note This constructor is primarily used by TVM_FFI_EMBED_CUBIN macro.
+   * \note The code buffer must be null-terminated; size parameter is not required
+   *       as cudaLibraryLoadData can determine the size from the data itself.

Review Comment:
The comment for the `CubinModule(const char* code)` constructor states that the code buffer must be null-terminated. This is misleading, as the embedded CUBIN data is a binary ELF image, not a C-style string. `cudaLibraryLoadData` determines the size from the ELF header and does not rely on null termination. To avoid confusion, I suggest clarifying this.

```suggestion
   * \note The `code` buffer points to an ELF image. `cudaLibraryLoadData` can
   *       determine the size from the data itself, so it does not need to be null-terminated.
```
