On 02/09/2018 04:36 AM, Piotr Sarna wrote:
BlockSizes structure used in block size probing has uint32_t types
for logical and physical sizes. These fields are wrongly declared as
uint16_t in BlockConf, which results, among other errors, in 0 being
stored instead of 65536 (a value that at least the future LizardFS
block device driver will report).
This commit makes BlockConf's physical_block_size and logical_block_size
fields uint32_t to avoid inconsistencies.
Signed-off-by: Piotr Sarna <sa...@skytechnology.pl>
---
- const int64_t max = 32768;
+ const int64_t max = 2147483648;
@@ -762,9 +762,9 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 }
const PropertyInfo qdev_prop_blocksize = {
- .name = "uint16",
- .description = "A power of two between 512 and 32768",
- .get = get_uint16,
+ .name = "uint32",
+ .description = "A power of two between 512 and 2147483648",
+ .get = get_uint32,
I can understand a block size that needs more than 16 bits, but all the
way up to 2G seems rather perverse (we have to perform read-modify-write
on anything smaller than the blocksize, and for a 2G blocksize that
means roughly 4G of I/O per small write: 2G read plus 2G written back).
Would it be better to cap this at 1M for now?
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org