Lunderberg commented on a change in pull request #10604:
URL: https://github.com/apache/tvm/pull/10604#discussion_r837560202
##########
File path: tests/python/contrib/test_hexagon/test_launcher.py
##########
@@ -40,6 +42,251 @@
# triggering TIME_WAIT state on the server socket. This prevents another
# server to bind to the same port until the wait time elapses.
+@requires_hexagon_toolchain
+def test_add_hvx(android_serial_number, tvm_tracker_host, tvm_tracker_port, adb_server_socket):
+    """
+    Starting with an elementwise-add computation, try various schedules /
+    optimizations to see the impact they have on performance.
+
+    The main motivation for this test is to explore the relationship between
+    these schedules / optimizations vs. how effectively the primfunc uses
+    the Hexagon's HVX units.
+    """
+
+ host_output_dir = tempfile.mkdtemp()
+
+ print("-"*80)
+ print("OUTPUT DIRECTORY: {}".format(host_output_dir))
+ print("-"*80)
+ print()
+
+ class benchmark_results_collection:
Review comment:
I think I'd see three advantages, two of which exist even if this is the
only test case.
1. This would separate the execution of the benchmarks from the test-case
generation, and both from the benchmarking framework itself. Currently, the
majority of the benchmarking occurs within 4 nested for-loops, and it isn't
immediately obvious on a first read that each benchmark case is
independent.
2. Individual benchmarks could be run independently, using pytest's existing
command line (e.g. `pytest path/to/benchmark_hexagon.py::test_matmul`)
3. If somebody wants to add a new benchmarked case, this structure makes it
obvious how to do so: copy an existing benchmark function. As it is, reading
through a 250-line function is possible, but it doesn't emphasize where the
change would need to be made.
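
To make the suggestion concrete, here is a minimal sketch of the structure described above: each benchmarked case becomes its own top-level pytest-discoverable function sharing one small driver, instead of one large function iterating over nested for-loops. All names here (`run_benchmark`, `elementwise_add`, `test_add_small`, `test_add_large`) are illustrative, not part of the actual PR.

```python
def run_benchmark(case_name, compute, args):
    """Shared driver: run one independent benchmark configuration.

    In the real test this would build the schedule, upload to the
    Hexagon device, and time it; here it just runs the computation.
    """
    result = compute(*args)
    print(f"{case_name}: {result}")
    return result


def elementwise_add(a, b):
    # Stand-in for the real schedule-and-run logic.
    return [x + y for x, y in zip(a, b)]


# Each case is a separate function, so pytest can select it
# individually, e.g. `pytest benchmark_hexagon.py::test_add_small`,
# and a new case is added by copying one of these functions.
def test_add_small():
    assert run_benchmark("add_small", elementwise_add, ([1, 2], [3, 4])) == [4, 6]


def test_add_large():
    assert run_benchmark("add_large", elementwise_add, ([10] * 4, [1] * 4)) == [11] * 4
```

With this layout, the independence of each case is visible at a glance, and the nested-loop sweep (if still wanted) can be expressed with `pytest.mark.parametrize` on the individual functions.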
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]