On Wed, 5 Sep 2012 13:20:47 -0400 Steve Simmons <[email protected]> wrote:
> After reading the code involved (both 1.4.8 and current master) it
> appears this is a non-issue in most cases. The hash bucket in use gets
> wrapped back to the beginning, presumably expiring out semi-random
> stuff. However, there may be another problem.
>
> Note the first error message on Sept 4. This states that the max
> bucket was 32. Several other times it complains about 31. Reading the
> code, it appears the bucket is actually overflowed due to a flaw in
> hash_bucket_stat() as it appears in both src/afs/LINUX/osi_alloc.c and
> src/afs/LINUX24/osi_alloc.c. As long as it's using the same bucket
> it'll happily increment cur_bucket_len without checking it against the
> max. As shown above, this condition does apply occasionally.

It's not reporting that the max bucket is 32; it's reporting that there
is a hash bucket whose length is 32 (exceeding the 'max' bucket length
of 30). So, it is correct that cur_bucket_len is incremented without
checking any bounds; it is just keeping track of how many items are in
the current bucket. Then, when we move on to the next bucket, we record
how many items were in our bucket and increment the appropriate
afs_linux_hash_bucket_dist entry. That's when we check the bounds and
issue that warning.

As far as I can tell at a quick glance, this code is pretty useless. The
function populates the afs_linux_hash_bucket_dist array, which records
how many hash buckets exist with a particular length. But nothing ever
reads the results; all we do is periodically populate the array. It
seems like we could just get rid of get_hash_stats() entirely, unless
I'm misreading it.

-- 
Andrew Deason
[email protected]

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
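For anyone following along without the source handy, the accounting
pattern being described can be sketched roughly like this. This is only
an illustrative stand-in, not the actual OpenAFS code: MAX_BUCKET_LEN,
record_bucket_len(), and tally_buckets() are hypothetical names, and
only afs_linux_hash_bucket_dist and the 30/32 figures come from the
discussion above.

```c
#include <stdio.h>

/* Hypothetical stand-in for the real limit; the warning above suggests
 * the 'max' tracked bucket length is 30. */
#define MAX_BUCKET_LEN 30

/* afs_linux_hash_bucket_dist[len] counts how many hash buckets were
 * observed with exactly `len` items. */
static int afs_linux_hash_bucket_dist[MAX_BUCKET_LEN + 1];

/* Called once per bucket, after all of its items have been counted.
 * Note the bounds check happens only here, at record time -- while the
 * walk is still inside one bucket, cur_bucket_len is incremented
 * without any check, matching the behavior described above. */
static void record_bucket_len(int cur_bucket_len)
{
    if (cur_bucket_len > MAX_BUCKET_LEN) {
        printf("warning: bucket length %d exceeds max %d\n",
               cur_bucket_len, MAX_BUCKET_LEN);
        cur_bucket_len = MAX_BUCKET_LEN; /* clamp before indexing */
    }
    afs_linux_hash_bucket_dist[cur_bucket_len]++;
}

/* Walk a list of per-bucket item counts and build the distribution. */
static void tally_buckets(const int *bucket_lens, int nbuckets)
{
    for (int i = 0; i < nbuckets; i++)
        record_bucket_len(bucket_lens[i]);
}
```

A bucket of length 32 would trip the warning but still be folded into
the last distribution slot, so nothing is overflowed; the array is
simply written and, as noted, never read back.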
