jwfromm commented on a change in pull request #10722:
URL: https://github.com/apache/tvm/pull/10722#discussion_r833623465
##########
File path: python/tvm/driver/tvmc/runner.py
##########
@@ -530,58 +531,82 @@ def run_module(
assert device == "cpu"
dev = session.cpu()
- # TODO(gromero): Adjust for micro targets.
- if profile:
- logger.debug("Creating runtime with profiling enabled.")
-        module = debug_executor.create(tvmc_package.graph, lib, dev, dump_root="./prof")
+ if tvmc_package.use_vm:
+ assert inputs is not None and isinstance(
+ inputs, dict
+ ), "vm runner requires inputs to be provided as a dict"
+ exe = vm.VirtualMachine(lib, dev)
+ input_tensor = {}
+ for e, i in inputs.items():
+ input_tensor[e] = tvm.nd.array(i, dev)
+ exe.set_input("main", **input_tensor)
+ exe.invoke_stateful("main")
+ times = exe.benchmark(
+ dev,
+ **input_tensor,
+ func_name="main",
+ repeat=repeat,
+ number=number,
+ end_to_end=end_to_end,
+ )
+ exe_outputs = exe.get_outputs()
+ outputs = {}
+ for i, val in enumerate(exe_outputs):
+ output_name = "output_{}".format(i)
+ outputs[output_name] = val
else:
- if device == "micro":
-            logger.debug("Creating runtime (micro) with profiling disabled.")
-            module = tvm.micro.create_local_graph_executor(tvmc_package.graph, lib, dev)
+ # TODO(gromero): Adjust for micro targets.
+ if profile:
Review comment:
The VM also supports profiling using `VirtualMachineProfiler`. We should
support that in runner.
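As a rough sketch of that suggestion: the VM path could mirror the graph runtime's `debug_executor` split. The two classes below are real TVM runtime APIs (`tvm.runtime.vm.VirtualMachine` and `tvm.runtime.profiler_vm.VirtualMachineProfiler`), but the helper name and its wiring into runner.py are hypothetical.

```python
def create_vm_executor(lib, dev, profile=False):
    """Return a VM executor, using the profiler variant when requested.

    Sketch only: `create_vm_executor` is a hypothetical helper, analogous
    to how run_module picks debug_executor.create(...) when profiling the
    graph runtime. Imports are local so the sketch reads standalone.
    """
    if profile:
        # The profiler VM records per-op timing, analogous to
        # debug_executor.create(..., dump_root=...) for the graph runtime.
        from tvm.runtime.profiler_vm import VirtualMachineProfiler
        return VirtualMachineProfiler(lib, dev)
    from tvm.runtime.vm import VirtualMachine
    return VirtualMachine(lib, dev)
```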
##########
File path: python/tvm/driver/tvmc/compiler.py
##########
@@ -243,7 +244,8 @@ def compile_model(
PassContext.
additional_target_options: Optional[Dict[str, Dict[str, Any]]]
Additional target options in a dictionary to combine with initial
Target arguments
-
+ use_vm: bool
Review comment:
   After reading this code a little more, I think it would make sense to
drop the `use_vm` argument and instead specify that you should compile and run
with the VM by setting `executor=Executor("vm")` instead of the default
`executor=Executor("graph")`. I think this is consistent with the intention of
Executor.
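The suggested dispatch can be sketched as a small helper. Note that "graph" and "aot" are real TVM executor kinds; treating "vm" as an executor kind is the reviewer's proposal, assumed here, and the helper name is hypothetical.

```python
def uses_vm(executor_kind: str) -> bool:
    """Decide whether to compile and run through the VM.

    Hypothetical helper illustrating the suggestion above: the choice of
    runtime is carried by the Executor kind rather than a separate
    use_vm flag.
    """
    known = {"graph", "aot", "vm"}
    if executor_kind not in known:
        raise ValueError(f"unknown executor kind: {executor_kind!r}")
    return executor_kind == "vm"
```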
##########
File path: python/tvm/driver/tvmc/runner.py
##########
@@ -530,58 +531,82 @@ def run_module(
assert device == "cpu"
dev = session.cpu()
- # TODO(gromero): Adjust for micro targets.
- if profile:
- logger.debug("Creating runtime with profiling enabled.")
-        module = debug_executor.create(tvmc_package.graph, lib, dev, dump_root="./prof")
+ if tvmc_package.use_vm:
Review comment:
   Unfortunately the VM doesn't track any information about the inputs,
presumably because it's geared more towards dynamism. I think we'll just have to
check that inputs are explicitly provided.
##########
File path: python/tvm/driver/tvmc/model.py
##########
@@ -337,7 +385,21 @@ def import_package(self, package_path: str):
t = tarfile.open(package_path)
t.extractall(temp.relpath("."))
- if os.path.exists(temp.relpath("metadata.json")):
+ if self.use_vm:
Review comment:
Instead of requiring users to remember if a saved package uses the VM or
not, maybe we can combine the VM handling with the classic format handling and
set `self.use_vm` if we find `lib.tar` and `lib.so` instead of `mod.tar` and
`mod.so`.
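That detection could look something like the sketch below. The file names (`lib.tar`/`lib.so` for VM packages, `mod.tar`/`mod.so` for the classic graph format) come from the comment above; the function name and its error handling are assumptions of this sketch.

```python
import os


def detect_use_vm(extracted_dir: str) -> bool:
    """Infer whether an extracted package was built for the VM.

    Hypothetical helper for the suggestion above: rather than callers
    remembering use_vm, inspect which library files the archive contains.
    """
    has_vm_lib = any(
        os.path.exists(os.path.join(extracted_dir, name))
        for name in ("lib.tar", "lib.so")
    )
    has_graph_lib = any(
        os.path.exists(os.path.join(extracted_dir, name))
        for name in ("mod.tar", "mod.so")
    )
    if has_vm_lib and not has_graph_lib:
        return True
    if has_graph_lib and not has_vm_lib:
        return False
    raise ValueError("package contents are ambiguous or incomplete")
```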
##########
File path: python/tvm/driver/tvmc/model.py
##########
@@ -248,11 +284,12 @@ def export_classic_format(
def export_package(
self,
- executor_factory: GraphExecutorFactoryModule,
+        executor_factory: Union[GraphExecutorFactoryModule, tvm.runtime.vm.Executable],
Review comment:
   To look prettier, we can add `from tvm.runtime.vm import Executable` up
top and then just use `Executable` here.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]