supersat commented on code in PR #11018:
URL: https://github.com/apache/tvm/pull/11018#discussion_r859117807


##########
src/runtime/threading_backend.cc:
##########
@@ -34,13 +34,63 @@
 #endif
 #if defined(__hexagon__)
 #include <dlfcn.h>
+#include <qurt.h>
+#include <stdlib.h>
+#define HEXAGON_STACK_SIZE 65536
+#define HEXAGON_STACK_ALIGNMENT 32
 #endif
 #include <algorithm>
 #include <thread>
 #define CURRENT_THREAD_HANDLE (static_cast<std::thread::native_handle_type>(0))
 namespace tvm {
 namespace runtime {
 namespace threading {
+#ifdef __hexagon__

Review Comment:
   I've refactored this PR to split up the pthread and QuRT threading 
implementations. This introduces the following changes:
   
   - ThreadGroup::Impl is now an abstract class.
   - A ThreadGroupImplTemplate class is introduced with common code between 
pthread and qurt implementations. It is a subclass of ThreadGroup::Impl. 
(AFAICT, you can't have a pointer to an unspecialized template class. We need a 
pointer to a concrete type for ThreadGroup to call.)
   - ThreadGroupPosixImpl now contains the bulk of the code from 
ThreadGroup::Impl. It inherits from ThreadGroupImplTemplate<std::thread>. This 
is in a new file src/runtime/posix/threading_posix.cc.
   - ThreadGroupHexagonImpl inherits from ThreadGroupImplTemplate<QuRTThread>, 
both of which are defined in a new file, src/runtime/hexagon/threading_hexagon.cc.
   - There are now two different versions of Yield(), one per backend.
   - CMakeLists.txt has been modified to either include 
src/runtime/posix/threading_posix.cc or 
src/runtime/hexagon/threading_hexagon.cc, depending on whether we're building 
for Hexagon.
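   To make the shape of the hierarchy concrete, here is a minimal sketch of 
the pattern described above. The member names (`Launch`, `NumThreads`) and the 
simplified `Impl` interface are illustrative placeholders, not the actual PR 
code; the point is only that ThreadGroup holds a pointer to the abstract base, 
while the template subclass factors out code common to both thread types:

   ```cpp
   #include <cassert>
   #include <memory>
   #include <thread>
   #include <vector>

   // Abstract class: ThreadGroup only ever holds an Impl*, so it needs no
   // knowledge of the underlying thread type.
   class Impl {
    public:
     virtual ~Impl() = default;
     virtual void Join() = 0;
     virtual int NumThreads() const = 0;
   };

   // Common code shared by the pthread and QuRT backends, parameterized on
   // the concrete thread type. Note ThreadGroup cannot point at this template
   // directly; it points at Impl, and each backend specializes the template.
   template <typename ThreadType>
   class ThreadGroupImplTemplate : public Impl {
    public:
     int NumThreads() const override { return static_cast<int>(threads_.size()); }

    protected:
     std::vector<ThreadType> threads_;
   };

   // POSIX backend: ThreadType = std::thread. A Hexagon backend would
   // analogously inherit from ThreadGroupImplTemplate<QuRTThread>.
   class ThreadGroupPosixImpl final
       : public ThreadGroupImplTemplate<std::thread> {
    public:
     template <typename F>
     void Launch(int n, F f) {
       for (int i = 0; i < n; ++i) threads_.emplace_back(f, i);
     }
     void Join() override {
       for (auto& t : threads_) t.join();
     }
   };

   int main() {
     auto impl = std::make_unique<ThreadGroupPosixImpl>();
     impl->Launch(2, [](int) { /* worker body */ });
     Impl* base = impl.get();  // usable through the abstract interface
     base->Join();
     assert(base->NumThreads() == 2);
     return 0;
   }
   ```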
   
   Honestly, this seems like a pretty ugly solution, especially when 
threading_backend.cc is already littered with #ifdefs for various platforms. 
Thoughts?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
