On 6/3/20 7:58 PM, Wei Wang wrote:
> It is possible that encoded_size==0, but unencoded_size !=0. For example,
> a page is written with the same data that it already has.
That really contains 0 bytes?  Not even the ones that say "same data"?
You certainly have a magical compression algorithm there.  Or bad accounting.

> The encoding_rate is expected to reflect if the page is xbzrle encoding
> friendly.
> The larger, the more friendly, so 0 might not be a good representation here.
>
> Maybe, we could change UINT64_MAX above to "~0ULL" to avoid the issue?

~0ull is no different from UINT64_MAX -- indeed, they are *exactly* the
same value -- and it is not an exactly representable floating-point value.

If unencoded_size != 0 and (somehow) encoded_size == 0, then
unencoded_size / encoded_size = Inf, which is indeed the limit of
n / x as x -> 0.  Which is *also* printable by %0.2f.

I still contend that the middle if should be removed, and you should
print out whatever's left.  Either NaN or Inf is instructive.
Certainly nothing in the middle cares about the actual value.

r~
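
For illustration, a minimal standalone sketch (assuming IEEE-754
floating-point semantics; the helper and variable names here are made up
and are not the actual migration code) of what dropping the special case
and printing whatever the division produces would look like:

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Illustrative only: compute the rate with no guard on encoded_size.
     * With IEEE-754 doubles, 0.0/0.0 -> NaN and n/0.0 -> Inf, and both
     * are printable by printf's %f conversion.
     */
    static double encoding_rate(uint64_t unencoded_size, uint64_t encoded_size)
    {
        return (double)unencoded_size / (double)encoded_size;
    }

    int main(void)
    {
        printf("%0.2f\n", encoding_rate(4096, 128)); /* 32.00 */
        printf("%0.2f\n", encoding_rate(4096, 0));   /* inf */
        printf("%0.2f\n", encoding_rate(0, 0));      /* nan */
        return 0;
    }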