junrushao1994 commented on a change in pull request #10501:
URL: https://github.com/apache/tvm/pull/10501#discussion_r820169125
##########
File path: src/target/tag.cc
##########
@@ -70,14 +70,38 @@ Target TargetTag::AddTag(String name, Map<String, ObjectRef> config, bool overri
/********** Register Target tags **********/
+TVM_REGISTER_TARGET_TAG("raspberry-pi/4b-aarch64")
+ .set_config({{"kind", String("llvm")},
+ {"mtriple", String("aarch64-linux-gnu")},
+ {"mcpu", String("cortex-a72")},
+ {"mattr", Array<String>{"+neon"}},
+ {"num-cores", Integer(4)},
+               {"host", Map<String, ObjectRef>{{"kind", String("llvm")},
+                                               {"mtriple", String("aarch64-linux-gnu")},
+                                               {"mcpu", String("cortex-a72")},
+                                               {"mattr", Array<String>{"+neon"}},
+                                               {"num-cores", Integer(4)}}}});
+
+TVM_REGISTER_TARGET_TAG("nvidia/jetson-agx-xavier")
+ .set_config({{"kind", String("cuda")},
+ {"arch", String("sm_72")},
+ {"max_shared_memory_per_block", Integer(49152)},
+ {"max_threads_per_block", Integer(1024)},
+ {"thread_warp_size", Integer(32)},
+ {"registers_per_block", Integer(65536)},
+               {"host", Map<String, ObjectRef>{{"kind", String("llvm")},
+                                               {"mtriple", String("aarch64-linux-gnu")},
+                                               {"mcpu", String("carmel")},
+                                               {"num-cores", Integer(4)}}}});
+
#define TVM_REGISTER_CUDA_TAG(Name, Arch, SharedMem, RegPerBlock) \
TVM_REGISTER_TARGET_TAG(Name).set_config({ \
{"kind", String("cuda")}, \
{"arch", String(Arch)}, \
- {"shared_memory_per_block", Integer(SharedMem)}, \
- {"registers_per_block", Integer(RegPerBlock)}, \
+ {"max_shared_memory_per_block", Integer(SharedMem)}, \
Review comment:
Per discussion with @masahi, we renamed `shared_memory_per_block` to
`max_shared_memory_per_block` for consistency with the Vulkan settings. I'm not
fully convinced, but I'm happy to follow the convention Masa suggested.
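
For illustration only (this is a hypothetical Python sketch, not TVM's actual
implementation): the `TVM_REGISTER_TARGET_TAG(...).set_config(...)` pattern in
the diff amounts to a registry that maps a tag name like
"nvidia/jetson-agx-xavier" to a pre-baked target configuration, so users can
refer to well-known devices by name instead of spelling out every attribute:

```python
# Hypothetical sketch of a target-tag registry, mirroring the shape of the
# C++ TVM_REGISTER_TARGET_TAG macro in the diff above. Names and structure
# are illustrative, not TVM's real internals.
TARGET_TAGS = {}

def register_target_tag(name, config):
    """Associate a tag name with a canned target configuration."""
    TARGET_TAGS[name] = config
    return config

# Config values taken from the diff; note the renamed
# "max_shared_memory_per_block" key discussed in the review comment.
register_target_tag("nvidia/jetson-agx-xavier", {
    "kind": "cuda",
    "arch": "sm_72",
    "max_shared_memory_per_block": 49152,
    "max_threads_per_block": 1024,
    "thread_warp_size": 32,
    "registers_per_block": 65536,
    "host": {
        "kind": "llvm",
        "mtriple": "aarch64-linux-gnu",
        "mcpu": "carmel",
        "num-cores": 4,
    },
})

# Looking up a tag yields the full config in one step.
config = TARGET_TAGS["nvidia/jetson-agx-xavier"]
print(config["max_shared_memory_per_block"])  # 49152
```

In TVM proper, the same lookup happens when a user writes
`tvm.target.Target("nvidia/jetson-agx-xavier")` in Python.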
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]