junrushao1994 commented on a change in pull request #8817:
URL: https://github.com/apache/tvm/pull/8817#discussion_r694394696



##########
File path: src/tir/schedule/concrete_schedule.cc
##########
@@ -208,6 +211,25 @@ Schedule ConcreteScheduleNode::Copy() const {
   }
 
 /******** Schedule: Schedule: Sampling ********/
+
+void ConcreteScheduleNode::Seed(support::LinearCongruentialEngine::TRandState seed) {
+  support::LinearCongruentialEngine(&rand_state_).Seed(seed == -1 ? std::random_device()() : seed);
+}
+support::LinearCongruentialEngine::TRandState ConcreteScheduleNode::ForkSeed() {
+  // For reproducibility, we compute the new seed using the RNG's random state and a
+  // different set of parameters. Note that both 32767 and 1999999973 are prime numbers.
+  return (support::LinearCongruentialEngine(&rand_state_)() * 32767) % 1999999973;
+}

Review comment:
       I actually agree with Tristan's theory in general. Thank you for bringing this up! Indeed, seeding parallel PRNGs requires really careful thought to avoid the streams repeating each other quickly, and an LCG may not be the best candidate for guaranteeing such a property.
   
   Fortunately, in our particular use case this is not a practical problem. Here is a quick experiment, supposing we have 128 threads and 10k trials: https://gist.github.com/junrushao1994/ea986add81b01b89fd99a5a7d41d087a. The result is that there is no repetition at all, and this setup is a stricter condition than our practical usage.
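
   (For illustration only, here is a rough sketch of that kind of experiment — not the actual gist. The MINSTD LCG parameters a = 48271, m = 2^31 − 1 are an assumption; the fork-seed formula mirrors the `ForkSeed` in the diff above.)

```python
# Rough sketch: fork 128 streams from one parent LCG and count repeated
# draws across 128 * 10k samples. MINSTD parameters are an assumption.
A, M = 48271, 2**31 - 1  # assumed minstd_rand parameters

def lcg(state):
    """Plain multiplicative LCG stream."""
    while True:
        state = (A * state) % M
        yield state

def fork_seed(parent):
    # Mirrors ForkSeed in the diff: advance the parent once,
    # then mix with a different pair of primes.
    return (next(parent) * 32767) % 1999999973

parent = lcg(42)
streams = [lcg(fork_seed(parent)) for _ in range(128)]

draws = set()
total = 0
for s in streams:
    for _ in range(10_000):
        draws.add(next(s))
        total += 1

print("repeated draws:", total - len(draws))
```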
   
   To further address the concern, we have architecturally designed the PRNG interface to be generic and STL-compliant, so it can easily be switched to any splittable PRNG in the future if new interesting use cases arise. Therefore, I don't expect this to become an architectural issue :-)
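
   (To make the "switchable engine" point concrete, here is a hypothetical sketch — the class and method names are illustrative, not TVM's API. The idea is that callers only depend on "next number" plus seed/fork, so the LCG could later be replaced by a splittable PRNG such as SplitMix64 without touching them.)

```python
class MinStdEngine:
    """Plain LCG (assumed minstd_rand parameters), for illustration."""
    A, M = 48271, 2**31 - 1

    def __init__(self, seed):
        self.state = seed % self.M or 1  # avoid the degenerate all-zero orbit

    def __call__(self):
        self.state = (self.A * self.state) % self.M
        return self.state

    def fork(self):
        # Same prime-mixing as the ForkSeed in the diff, for illustration.
        return type(self)((self() * 32767) % 1999999973)

class SplitMix64Engine:
    """A genuinely splittable PRNG that could replace the LCG later."""
    MASK = (1 << 64) - 1
    GAMMA = 0x9E3779B97F4A7C15

    def __init__(self, seed):
        self.state = seed & self.MASK

    def __call__(self):
        self.state = (self.state + self.GAMMA) & self.MASK
        z = self.state
        z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & self.MASK
        z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & self.MASK
        return z ^ (z >> 31)

    def fork(self):
        return type(self)(self())

def sample_trials(engine, n):
    """Caller is engine-agnostic: it only uses __call__ and fork()."""
    child = engine.fork()
    return [child() for _ in range(n)]

for eng in (MinStdEngine(7), SplitMix64Engine(7)):
    print(type(eng).__name__, sample_trials(eng, 3))
```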
   
   Thanks again for the discussion!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
