On 3/4/21 5:24 PM, Kevin Wolf wrote:
> Am 24.02.2021 um 11:47 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> We are going to use it in more places, calculating
>> "s->tracks << BDRV_SECTOR_BITS" doesn't look good.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>> @@ -771,6 +770,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
>>          ret = -EFBIG;
>>          goto fail;
>>      }
>> +    s->cluster_size = s->tracks << BDRV_SECTOR_BITS;
>>  
>>      s->bat_size = le32_to_cpu(ph.bat_entries);
>>      if (s->bat_size > INT_MAX / sizeof(uint32_t)) {
> Checking the context, I saw this a few lines above:
>
>     if (s->tracks > INT32_MAX/513) {
>
> Is the 513 intentional?
>
> Kevin
>
I cannot remember why I wrote it that way at the time, but the
original commit message was:

commit d25d59802021a747812472780d80a0e792078f40
Author: Denis V. Lunev <d...@openvz.org>
Date:   Mon Jul 28 20:23:55 2014 +0400

    parallels: 2TB+ parallels images support
   
    Parallels has released, in recent updates of Parallels Server 5/6, a
    new addition to its image format. Images with the signature
    WithouFreSpacExt have offsets in the catalog coded not as offsets in
    sectors (multiples of 512 bytes) but as offsets in blocks (i.e.
    header->tracks * 512).
   
    In this case all 64 bits of header->nb_sectors are used for image size.
   
    This patch implements support of this for qemu-img and also adds a
    specific check for an incorrect image. Images with a block size greater
    than INT_MAX/513 are not supported. The biggest Parallels image cluster
    size seen in the field is 1 MB, so this limit will not hurt anyone.
   
    Signed-off-by: Denis V. Lunev <d...@openvz.org>
    CC: Jeff Cody <jc...@redhat.com>
    CC: Kevin Wolf <kw...@redhat.com>
    CC: Stefan Hajnoczi <stefa...@redhat.com>
    Reviewed-by: Jeff Cody <jc...@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>

Thus I believe that the 513 is intentional.
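
For what it's worth, here is a small standalone sketch of the arithmetic
(my own illustration, not taken from the patch; SECTOR_SIZE stands in for
QEMU's BDRV_SECTOR_SIZE): with tracks bounded by INT32_MAX / 513, the byte
cluster size tracks * 512 is guaranteed to fit in a signed 32-bit int,
with one extra sector's worth of headroom to spare.

    /*
     * Sketch only: verify that the INT32_MAX / 513 bound keeps the byte
     * cluster size (tracks * 512) within a signed 32-bit int.
     */
    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512   /* stand-in for QEMU's BDRV_SECTOR_SIZE */

    int main(void)
    {
        int64_t tracks = INT32_MAX / 513;             /* largest accepted value */
        int64_t cluster_bytes = tracks * SECTOR_SIZE; /* == tracks << 9 */

        /* tracks * 513 <= INT32_MAX implies tracks * 512 + tracks <= INT32_MAX */
        assert(cluster_bytes + tracks <= INT32_MAX);
        printf("max tracks = %" PRId64 ", cluster size = %" PRId64 " bytes\n",
               tracks, cluster_bytes);
        return 0;
    }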

Den
