On 06/12/2018 03:51 PM, Richard Henderson wrote:
On 06/10/2018 03:14 PM, Philippe Mathieu-Daudé wrote:
    xen_pv_printf(xendev, 1, "type \"%s\", fileproto \"%s\", filename \"%s\","
-                 " size %" PRId64 " (%" PRId64 " MB)\n",
+                 " size %" PRId64 " (%llu MB)\n",
                  blkdev->type, blkdev->fileproto, blkdev->filename,
-                 blkdev->file_size, blkdev->file_size >> 20);
+                 blkdev->file_size, blkdev->file_size / MiB);
Having to change printf markup is exactly why you shouldn't use ULL in MiB.
Conversely, M_BYTE was already ULL, so if you don't use ULL in MiB,
you'll have to change other printf markup at the sites where you are
converting existing M_BYTE uses over to MiB.
One benefit of using the widest possible type: we avoid the risk of silent
truncation. Potential downsides: wasted processing time (where 32 bits
would have sufficed), and compilers might start warning when we narrow a
64-bit value into a 32-bit variable (but I think we already ignore that
warning).
One benefit of using the natural type that holds the value: whether the
math is 64-bit is explicit, determined by the type of the other operand
being multiplied by the macro. Potential downside: a 32*32 multiply
assigned to a 64-bit result may be botched (but hopefully Coverity will
flag it).
So there are tradeoffs either way, and you at least need to document in
your commit messages what auditing you have done to show that any type
changes introduced by your patches are safe.
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org