> - do not allow bfd to call bfd_may_decay when decay_min_rx
>   is less than rmt_min_tx.

Why? I don't think we should need this. If the rmt_min_tx is greater
than our min_rx, according to the bfd protocol we'll simply use the
rmt_min_tx as our rx interval (see bfd_rx_interval()). In other words,
I don't think it's correct from a feature design perspective to have
the min_rx value depend on the rmt_min_tx value. They're completely
independent.
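(For reference, this is roughly what bfd_rx_interval() does today --
paraphrasing from memory, so the code in the tree may differ slightly:

    static long long int
    bfd_rx_interval(const struct bfd *bfd)
    {
        /* Never expect packets faster than the remote end has agreed
         * to send them: a large rmt_min_tx automatically stretches our
         * rx interval without touching min_rx. */
        return MAX(bfd->min_rx, bfd->rmt_min_tx);
    }

So the rmt_min_tx > min_rx case is already handled at the point where
the interval is computed.)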
> -    bfd->rmt_min_tx = MAX(ntohl(msg->min_tx) / 1000, 1);
> +    rmt_min_tx = MAX(ntohl(msg->min_tx) / 1000, 1);
> +    if (bfd->rmt_min_tx != rmt_min_tx) {
> +        /* Resets the decay, when rmt_min_tx
> +         * becomes smaller than decay_min_rx. */
> +        if (bfd->decay_min_rx && (bfd->decay_min_rx < bfd->rmt_min_tx)
> +            && (bfd->decay_min_rx > rmt_min_tx)) {
> +            bfd_decay_update(bfd);
> +        }
> +        bfd->rmt_min_tx = rmt_min_tx;
> +    }

Basically, I think we should delete all of this and just do what was
there before.

> +    }
> +    ovs_mutex_unlock(&mutex);
> +}
> +

Redundant newline added here.

> +    diff = bfd_rx_packets(bfd) - bfd->rx_packets;
> +    expect = (MAX(time_msec() - bfd->decay_last_detect_time, 2000)
> +              / bfd->min_rx) + (2000 / bfd->cfg_min_rx + 1);

All of this feels awfully fragile to me. Instead of estimating the
number of BFD packets we've received and subtracting it out, why don't
we simply count the number of BFD packets we've received in
bfd_process_packet()? I.e., we ditch the bfd->decay_last_detect_time
field, and instead every time bfd_process_packet() is called we
increment bfd->rx_packets. Thus, bfd_rx_packets(bfd) - bfd->rx_packets
won't include the bfd control packets we've received. If that number is
less than some threshold (giving us some wiggle room), we go ahead and
drop into decay. Note that it's entirely possible that the diff will be
negative, in which case we also drop into decay.

What do you think?
Ethan
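P.S. To make the counting idea concrete, here's a rough, untested
sketch. bfd_account_rx(), bfd_should_decay(), and DECAY_RX_THRESHOLD
are made-up names for illustration, and the struct is stripped down to
the one field that matters:

    #include <stdbool.h>
    #include <stdint.h>

    /* Made-up threshold, just to give ourselves some wiggle room. */
    #define DECAY_RX_THRESHOLD 5

    struct bfd {
        uint64_t rx_packets;    /* Control packets we've consumed. */
        /* ... */
    };

    /* Stands in for the real bfd_rx_packets(), which reads the total
     * rx packet counter of the monitored interface. */
    uint64_t bfd_rx_packets(const struct bfd *);

    /* Called once per control packet from bfd_process_packet(). */
    static inline void
    bfd_account_rx(struct bfd *bfd)
    {
        bfd->rx_packets++;
    }

    /* Decay decision: if (nearly) everything the interface received
     * was our own control traffic, the link is idle.  The difference
     * can come out negative if the counters race; that still means
     * idle, so we decay in that case too. */
    static inline bool
    bfd_should_decay(const struct bfd *bfd)
    {
        int64_t diff = (int64_t) (bfd_rx_packets(bfd) - bfd->rx_packets);
        return diff < DECAY_RX_THRESHOLD;
    }

The nice property is that there's nothing time-based left to estimate.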