csullivan commented on code in PR #11018:
URL: https://github.com/apache/tvm/pull/11018#discussion_r856397727
##########
src/runtime/threading_backend.cc:
##########
@@ -34,13 +34,63 @@
#endif
#if defined(__hexagon__)
#include <dlfcn.h>
+#include <qurt.h>
+#include <stdlib.h>
+#define HEXAGON_STACK_SIZE 65536
+#define HEXAGON_STACK_ALIGNMENT 32
#endif
#include <algorithm>
#include <thread>
#define CURRENT_THREAD_HANDLE (static_cast<std::thread::native_handle_type>(0))
namespace tvm {
namespace runtime {
namespace threading {
+#ifdef __hexagon__
Review Comment:
One approach would be to introduce
```
template <typename ThreadType>
class ThreadGroup::Impl;
```
and then wrap both std::thread and QuRT thread into types with a common
interface that `ThreadGroup::Impl` calls into.
The thing I got hung up on there is the runtime dispatch. Right now it
should be doable from the device type: when it is `kDLHexagon`, dispatch to
`ThreadGroup::Impl<QuRTThreadInterface>`; when it is `kDLCPU`, dispatch to
`ThreadGroup::Impl<StdThreadInterface>`. However, there is impetus to move
Hexagon fully over to `kDLCPU`, in which case we could no longer do runtime
dispatch based on the device type.
##########
src/runtime/threading_backend.cc:
##########
@@ -34,13 +34,63 @@
#endif
#if defined(__hexagon__)
#include <dlfcn.h>
+#include <qurt.h>
+#include <stdlib.h>
+#define HEXAGON_STACK_SIZE 65536
+#define HEXAGON_STACK_ALIGNMENT 32
#endif
#include <algorithm>
#include <thread>
#define CURRENT_THREAD_HANDLE (static_cast<std::thread::native_handle_type>(0))
namespace tvm {
namespace runtime {
namespace threading {
+#ifdef __hexagon__
Review Comment:
Then the only other options for dispatch I see are:
(1) TVM compile-time dispatch: during codegen, do something specific based
on the target.
(2) Build-time dispatch, via either the preprocessor or changes to the
build system to conditionally compile one translation unit over another.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]