[GitHub] [tvm] Joetibz commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2023-02-03 Thread via GitHub


Joetibz commented on issue #4272:
URL: https://github.com/apache/tvm/issues/4272#issuecomment-1415973447

   
   import ctypes
   
   import tvm
   from tvm.contrib import graph_runtime as runtime
   
   libvta_path = "/home/xilinx/tvm/build/libvta.so"
   ctypes.CDLL(libvta_path, ctypes.RTLD_GLOBAL)
   
   # load compiled model
    with open("graph.json", "r") as graph_file:
        graph = graph_file.read()
    with open("params.params", "rb") as params_file:
        params = bytearray(params_file.read())
    lib = tvm.module.load("./lib.tar")  # newer TVM releases: tvm.runtime.load_module
   
   ctx = tvm.ext_dev(0)
   
   module = runtime.create(graph, lib, ctx)
   module.load_params(params)
    Okay, so if I run my own Python script like this, I should be able to
    run TVM on the board without RPC. Yes, I am aware TVM has improved a
    lot since then, but I am open to any contributions you might suggest
    about running TVM on the FPGA board.
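   For completeness, after the executor is created and the parameters are loaded, the remaining step is to feed an input and read back the output. A minimal sketch of that step, assuming a recent TVM where the runtime module is called graph_executor (older releases call it graph_runtime), and where input_name is a hypothetical placeholder for whatever input name the model was compiled with:

```python
try:
    # Newer TVM exposes the graph runtime as graph_executor;
    # older releases import it as tvm.contrib.graph_runtime.
    from tvm.contrib import graph_executor
    HAVE_TVM = True
except ImportError:
    HAVE_TVM = False


def run_once(graph_json, lib, dev, params, input_name, data):
    """Create an executor, load params, run one inference, return output 0."""
    module = graph_executor.create(graph_json, lib, dev)
    module.load_params(params)
    module.set_input(input_name, data)
    module.run()
    return module.get_output(0).numpy()
```

   The graph JSON string, the loaded lib, the device handle, and the params bytearray are exactly the objects already built in the script above; only the input name and input array come from the compiled model itself.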
   
   On Fri, Feb 3, 2023 at 2:13 PM Philipp Krones ***@***.***>
   wrote:
   
   > I'm not aware of any other modifications. However, when I used TVM it was
   > even pre-Apache time. So a lot has happened since then. So take all of the
   > suggestions here with a grain of salt.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] Joetibz commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2023-02-03 Thread via GitHub


Joetibz commented on issue #4272:
URL: https://github.com/apache/tvm/issues/4272#issuecomment-1415967290

   
   
    import ctypes

    import tvm
    from tvm.contrib import graph_runtime as runtime

    libvta_path = "/home/xilinx/tvm/build/libvta.so"
    ctypes.CDLL(libvta_path, ctypes.RTLD_GLOBAL)

    # load compiled model
    with open("graph.json", "r") as graph_file:
        graph = graph_file.read()
    with open("params.params", "rb") as params_file:
        params = bytearray(params_file.read())
    lib = tvm.module.load("./lib.tar")

    ctx = tvm.ext_dev(0)

    module = runtime.create(graph, lib, ctx)
    module.load_params(params)
   
    Okay, so if I run my own Python script like this, I should be able to
    run TVM on the board without RPC. Yes, I am aware TVM has improved a
    lot since then, but I am open to any contributions you might suggest
    about running TVM on the FPGA board directly.

    On Fri, Feb 3, 2023 at 2:13 PM Philipp Krones ***@***.***> wrote:
   
    > I'm not aware of any other modifications. However, when I used TVM it was
   > even pre-Apache time. So a lot has happened since then. So take all of the
   > suggestions here with a grain of salt.
   





[GitHub] [tvm] Joetibz commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2023-02-03 Thread via GitHub


Joetibz commented on issue #4272:
URL: https://github.com/apache/tvm/issues/4272#issuecomment-1415805878

    Hello, yes, thank you so much for your feedback. I am aware it has been a
    minute since you worked on TVM, but please bear with me; I need this for
    my studies. I am also trying to run the prediction of a cat from the FPGA
    directly, and I saw the modifications you made to get that working. In
    the case of the picture model running on the board, are there any other
    modifications I should add?
   
   On Fri, Feb 3, 2023 at 12:53 PM Philipp Krones ***@***.***>
   wrote:
   
   > I haven't worked with TVM in ages. But this is probably unrelated to this
   > issue. Either this lib is not installed on the device or you should try to
   > run the same commands with root privileges. The latter usually solved
   > things like this on the PYNQ for me.
   





[GitHub] [tvm] Joetibz commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2023-02-02 Thread via GitHub


Joetibz commented on issue #4272:
URL: https://github.com/apache/tvm/issues/4272#issuecomment-1414447359

   Hello guys, I am equally new to TVM, and I am trying to run inference on
   the Zynq board without RPC, but I keep getting the error stated below. I
   have not even gotten to making the changes discussed in this thread. Can
   I get some guidance?

   OSError: libmkl_intel_lp64.so.1: cannot open shared object file: No such
   file or directory
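
   That OSError means the dynamic loader cannot find Intel MKL, which an x86 TVM build may link against; a build targeting the Zynq's ARM CPU would normally not depend on it at all. As a quick, hedged stdlib sketch for checking whether the loader can see a given shared library (the MKL name below is just the one taken from the error message):

```python
import ctypes


def can_load(libname):
    """Return True if the dynamic loader can find and open libname."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False


# The C math library is usually visible on a glibc-based Linux system;
# the MKL library from the error typically is not on an ARM board.
print(can_load("libm.so.6"))
print(can_load("libmkl_intel_lp64.so.1"))
```

   If the MKL check fails on the board, the usual fixes are either installing the library the build expects or, more likely here, rebuilding TVM for the board without the MKL-enabled configuration.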
   

