mulanxiaodingdang commented on issue #18146:
URL: https://github.com/apache/tvm/issues/18146#issuecomment-3068882209

   Thanks for your response; I have a couple of follow-up questions.
   I changed the `PackedFunc` usages in the code to `tvm::ffi::Function`, but I am still encountering the following error:
   
   ```text
   (base) hgs@hgs:~/tvm_25_06/tvm/my_example$ g++ -std=c++17 -O2 -fPIC -I/home/hgs/tvm_25_06/tvm/include -I/home/hgs/tvm_25_06/tvm/3rdparty/dmlc-core/include -I/home/hgs/tvm_25_06/tvm/3rdparty/dlpack/include -DDMLC_USE_LOGGING_LIBRARY=\<tvm/runtime/logging.h\> -o lib/compiled_artifact cc_deploy.cc -L/home/hgs/tvm_25_06/tvm/tvm_build -ldl -pthread -ltvm -ltvm_runtime
   cc_deploy.cc: In function ‘int main()’:
   cc_deploy.cc:30:36: error: conversion from ‘tvm::ffi::Any’ to non-scalar type ‘tvm::runtime::Module’ requested
      30 |     Module mod = vm_load_executable();
         |                  ~~~~~~~~~~~~~~~~~~^~
   cc_deploy.cc:55:40: error: conversion from ‘tvm::ffi::Any’ to non-scalar type ‘tvm::runtime::NDArray’ requested
      55 |     tvm::runtime::NDArray output = main(input);
         |                                    ~~~~^~~~~~~
   ```
   
   
   The original C++ code is as follows:
   ```cpp
   // https://discuss.tvm.apache.org/t/deploy-relax-ir-using-c-api/17989
   #include <iostream>
   #include <tvm/runtime/relax_vm/executable.h>   // Relax VM executable
   #include <tvm/runtime/logging.h>               // TVM logging
   #include <tvm/runtime/memory/memory_manager.h> // memory management
   #include <tvm/runtime/data_type.h>             // data type definitions
   #include <tvm/ffi/function.h>

   // Using-declarations to shorten the code below
   using tvm::runtime::relax_vm::VMExecutable;  // Relax VM executable class
   using tvm::runtime::Module;                  // TVM module class
   //using tvm::runtime::PackedFunc;            // packed-function interface
   using tvm::runtime::memory::AllocatorType;   // allocator type

   int main()
   {
       std::string path = "./compiled_artifact.so";

       // Load the shared object into a Module.
       Module m = Module::LoadFromFile(path);
       std::cout << m << std::endl;

       tvm::ffi::Function vm_load_executable = m.GetFunction("vm_load_executable");
       CHECK(vm_load_executable != nullptr)
           << "Error: File `" << path
           << "` is not built by RelaxVM, because `vm_load_executable` does not exist";

       // Create a VM from the Executable in the Module.
       Module mod = vm_load_executable();
       tvm::ffi::Function vm_initialization = mod.GetFunction("vm_initialization");
       CHECK(vm_initialization != nullptr)
           << "Error: File `" << path
           << "` is not built by RelaxVM, because `vm_initialization` does not exist";

       // Initialize the VM
       tvm::Device device{kDLCPU, 0};
       vm_initialization(static_cast<int>(device.device_type), static_cast<int>(device.device_id),
                         static_cast<int>(AllocatorType::kPooled), static_cast<int>(kDLCPU), 0,
                         static_cast<int>(AllocatorType::kPooled));

       tvm::ffi::Function main = mod.GetFunction("main");
       CHECK(main != nullptr)
           << "Error: File `" << path
           << "` does not contain the expected entry function, `main`";

       // Create and initialize the input array
       auto i32 = tvm::runtime::DataType::Int(32);
       tvm::runtime::NDArray input = tvm::runtime::NDArray::Empty({3, 3}, i32, device);
       int numel = input.Shape()->Product();
       for (int i = 0; i < numel; ++i)
           static_cast<int*>(input->data)[i] = 42;

       // Run the main function
       tvm::runtime::NDArray output = main(input);
       for (int i = 0; i < numel; ++i)
           std::cout << static_cast<int*>(output->data)[i] << std::endl;
   }
   ```
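   For context, my current reading of the two errors (which may be wrong) is that calling a `tvm::ffi::Function` now returns a type-erased `tvm::ffi::Any`, which does not implicitly convert to `Module` or `NDArray`, so an explicit cast would be needed at those two call sites. The same symptom can be reproduced with plain `std::any` in a standalone sketch (no TVM involved; `make_value` below is just an illustration, not a TVM API):

   ```cpp
   #include <any>
   #include <cassert>
   #include <string>

   // Toy stand-in for a type-erased FFI return value: like tvm::ffi::Any,
   // std::any refuses to convert implicitly to a concrete type.
   std::any make_value() { return std::string("hello"); }

   int main() {
       // std::string s = make_value();  // would not compile: no implicit conversion
       std::string s = std::any_cast<std::string>(make_value());  // explicit cast
       assert(s == "hello");
       return 0;
   }
   ```

   If `tvm::ffi::Any` exposes something like `cast<Module>()`, I assume the two failing lines would need that, but I have not been able to confirm the exact method name from the headers.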
   
   If it's convenient, could you provide me with the complete deployment code? 
Thank you.
   

