On Thu 22 Feb 2018 04:59:21 PM CET, Eric Blake wrote:
> Our code was already checking that we did not attempt to
> allocate more clusters than what would fit in an INT64 (the
> physical maximum if we can access a full off_t's worth of
> data). But this does not catch smaller limits enforced by
> various spots in the qcow2 image description: L1 and normal
> clusters of L2 are documented as having bits 63-56 reserved
> for other purposes, capping our maximum offset at 64PB (bit
> 55 is the maximum bit set). And for compressed images with
> 2M clusters, the cap drops the maximum offset to bit 48, or
> a maximum offset of 512TB. If we overflow that offset, we
> would write compressed data into one place, but try to
> decompress from another, which won't work.
>
> I don't have 512TB handy to prove whether things break if we
> compress so much data that we overflow that limit, and don't
> think that iotests can (quickly) test it either. Test 138
> comes close (it corrupts an image into thinking something lives
> at 32PB, which is half the maximum for L1 sizing - although
> it relies on 512-byte clusters). But that test points out
> that we will generally hit other limits first (such as running
> out of memory for the refcount table, or exceeding file system
> limits like 16TB on ext4, etc), so this is more a theoretical
> safety valve than something likely to be hit.
>
> Signed-off-by: Eric Blake <ebl...@redhat.com>
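
For anyone following along, the limits described above boil down to a
check along these lines. This is only a minimal sketch, not the patch
itself: MAX_STD_CLUSTER_OFFSET, max_compressed_offset() and
allocation_fits() are illustrative names, though the bit math follows
the qcow2 spec, where a compressed cluster descriptor stores the host
offset in x = 62 - (cluster_bits - 8) bits.

#include <stdbool.h>
#include <stdint.h>

/* Standard L1/L2 entries: bits 63-56 are reserved, so the host
 * offset must fit in bits 0-55 (bit 55 set at most, i.e. < 64PB). */
#define MAX_STD_CLUSTER_OFFSET ((UINT64_C(1) << 56) - 1)

/* Compressed cluster descriptor: the offset gets
 * x = 62 - (cluster_bits - 8) bits, so 2M clusters (cluster_bits
 * = 21) leave 49 offset bits, i.e. bit 48 is the highest (512TB). */
static uint64_t max_compressed_offset(int cluster_bits)
{
    int offset_bits = 62 - (cluster_bits - 8);
    return (UINT64_C(1) << offset_bits) - 1;
}

/* Does an allocation of nbytes at host_offset stay within the
 * format's addressable range?  The subtraction is done on the limit
 * side so the comparison itself cannot wrap around. */
static bool allocation_fits(uint64_t host_offset, uint64_t nbytes,
                            bool compressed, int cluster_bits)
{
    uint64_t limit = compressed ? max_compressed_offset(cluster_bits)
                                : MAX_STD_CLUSTER_OFFSET;

    return host_offset <= limit && nbytes <= limit - host_offset;
}

Without such a cap, an offset past the addressable bits would be
silently truncated when stored in the descriptor, which is exactly the
write-here-but-read-there failure mode described for compressed
clusters above.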
Reviewed-by: Alberto Garcia <be...@igalia.com>

Berto