tl;dr Did you by any chance compile tor with bufferevents enabled (--enable-bufferevents)?
Let's trace the path of the sent-bytes string. The heartbeat code (src/or/status.c) reads the bytes sent in log_heartbeat() using 'uint64_t get_bytes_written(void)' and stores the result in a uint64_t. It then passes that value to 'static char *bytes_to_usage(uint64_t bytes)', which has an 'if' chain checking the magnitude of 'bytes' so that it can return a meaningful string. In this case we ended up in the 'else' branch, which is taken iff '(bytes >= (1<<30))', i.e. a gigabyte or more. There, 'bytes' (a uint64_t) is cast to a double and printed into a string. (A sketch of this logic is at the end of this mail.)

The only problem I can see here is if 'bytes' is bigger than what the mantissa of a double can represent, in which case we start losing precision. The mantissa is usually™ 53 bits, which can represent ~9 petabytes; so it's going to take a while, and it's probably irrelevant to this thread's problem. (A small demo of that limit is below, too.) I don't see an integer overflow or underflow happening anywhere either.

From what I can gather, 'bytes' arrived in bytes_to_usage() with a value around 48.08*(2^30). Did you by any chance compile tor with bufferevents enabled?
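For concreteness, here's a minimal sketch of the shape of that conversion. This is NOT the real Tor source: the function name suffix, the smaller-unit thresholds, and the snprintf/strdup buffer handling are my assumptions; only the '(bytes >= (1<<30))' gigabyte branch and the uint64_t-to-double cast mirror the code path discussed above.

    /* Sketch only -- not Tor's actual bytes_to_usage(). Smaller-unit
     * branches and buffer handling are assumed; the final branch is
     * the one this thread's value took. Caller frees the result. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static char *bytes_to_usage_sketch(uint64_t bytes)
    {
      char buf[32];
      if (bytes < (1 << 20)) {
        /* kilobyte range (assumed threshold) */
        snprintf(buf, sizeof(buf), "%" PRIu64 " kB", bytes >> 10);
      } else if (bytes < (1 << 30)) {
        /* megabyte range (assumed threshold) */
        snprintf(buf, sizeof(buf), "%.2f MB", ((double)bytes) / (1 << 20));
      } else {
        /* >= 1 GB: the uint64_t is cast to double right here */
        snprintf(buf, sizeof(buf), "%.2f GB", ((double)bytes) / (1 << 30));
      }
      return strdup(buf);
    }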
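And a tiny demo of the precision limit mentioned above. The values and formatting are mine; the 2^53 boundary itself is just a property of IEEE-754 doubles:

    /* Shows where a uint64_t -> double cast starts to lose precision:
     * 2^53 is the largest range where every integer is exact. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
      uint64_t exact   = (uint64_t)1 << 53;        /* 9007199254740992 */
      uint64_t rounded = ((uint64_t)1 << 53) + 1;  /* rounds back down */
      printf("%llu -> %.1f\n", (unsigned long long)exact,   (double)exact);
      printf("%llu -> %.1f\n", (unsigned long long)rounded, (double)rounded);
      /* Both lines print 9007199254740992.0: the first lost byte shows
       * up around ~9 PB, nowhere near the ~48 GB reported here. */
      return 0;
    }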
