Paolo Bonzini, on Wed 23 Oct 2013 08:51:21 +0100, wrote:
> > +void icmp6_init(Slirp *slirp)
> > +{
> > +    srand(time(NULL));
> > +    ra_timer = timer_new_s(QEMU_CLOCK_VIRTUAL, ra_timer_handler, slirp);
> > +    timer_mod(ra_timer, qemu_clock_get_s(QEMU_CLOCK_VIRTUAL) + NDP_Interval);
> > +}
>
> Should the granularity of the timer really be seconds?  Or should you
> use the existing milli/nanosecond interface and scale the interval, so
> that you really get a uniformly distributed random value, even for very
> small MaxRtrAdvInterval (e.g. for min=3, max=4 you won't get any other
> value than 3 or 4, which is not really uniformly distributed).
I don't think we need to care about fine granularity: we are not going to
run more than one RA at a time anyway.  Also, the RFC itself says that when
the max value is less than 9, the default min value is equal to the max
value, i.e. the delay is always the same.

Samuel
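For what it's worth, if millisecond granularity were wanted, the delay
could be drawn uniformly over the whole interval before arming the timer.
This is only a sketch: the constants `MinRtrAdvInterval_ms` and
`MaxRtrAdvInterval_ms` are illustrative names (RFC 4861 terminology), not
identifiers from the patch.

```c
/* Sketch: pick the next router-advertisement delay with millisecond
 * granularity, so the result is (approximately) uniform over
 * [min, max] even for small intervals such as min=3s, max=4s.
 * Constant names are illustrative, not from the patch. */
#include <stdint.h>
#include <stdlib.h>

#define MinRtrAdvInterval_ms 3000
#define MaxRtrAdvInterval_ms 4000

static int64_t next_ra_delay_ms(void)
{
    int64_t range = MaxRtrAdvInterval_ms - MinRtrAdvInterval_ms + 1;
    /* rand() % range has slight modulo bias, which is harmless here */
    return MinRtrAdvInterval_ms + rand() % range;
}
```

The value returned could then be passed to a millisecond-scale timer
(QEMU provides `timer_new_ms()` alongside the seconds interface) instead
of the one-second granularity used in the patch.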