Re: Question about (and problem with) pflash data access

2020-02-12 Thread Alexey Kardashevskiy



On 13/02/2020 10:50, Philippe Mathieu-Daudé wrote:
> Cc'ing Paolo and Alexey.
> 
> On 2/13/20 12:09 AM, Guenter Roeck wrote:
>> On Wed, Feb 12, 2020 at 10:39:30PM +0100, Philippe Mathieu-Daudé wrote:
>>> Cc'ing Jean-Christophe and Peter.
>>>
>>> On 2/12/20 7:46 PM, Guenter Roeck wrote:
>>>> Hi,
>>>>
>>>> I have been playing with pflash recently. For the most part it works,
>>>> but I do have an odd problem when trying to instantiate pflash on sx1.
>>>>
>>>> My data file looks as follows.
>>>>
>>>> 0000000 0001 0000 0000 0000 0000 0000 0000 0000
>>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0002000 0002 0000 0000 0000 0000 0000 0000 0000
>>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0004000 0003 0000 0000 0000 0000 0000 0000 0000
>>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> ...
>>>>
>>>> In the sx1 machine, this becomes:
>>>>
>>>> 0000000 6001 0000 0000 0000 0000 0000 0000 0000
>>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0002000 6002 0000 0000 0000 0000 0000 0000 0000
>>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0004000 6003 0000 0000 0000 0000 0000 0000 0000
>>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> ...
>>>>
>>>> pflash is instantiated with "-drive
>>>> file=flash.32M.test,format=raw,if=pflash".
>>>>
>>>> I don't have much success with pflash tracing - data accesses don't
>>>> show up there.
>>>>
>>>> I did find a number of problems with the sx1 emulation, but I have no clue
>>>> what is going on with pflash. As far as I can see pflash works fine on
>>>> other machines. Can someone give me a hint what to look out for ?
>>>
>>> This is specific to the SX1, introduced in commit 997641a84ff:
>>>
>>>   64 static uint64_t static_read(void *opaque, hwaddr offset,
>>>   65 unsigned size)
>>>   66 {
>>>   67 uint32_t *val = (uint32_t *) opaque;
>>>   68 uint32_t mask = (4 / size) - 1;
>>>   69
>>>   70 return *val >> ((offset & mask) << 3);
>>>   71 }
>>>
>>> Only guessing, this looks like some hw parity, and I imagine you need to
>>> write the parity bits in your flash.32M file before starting QEMU,
>>> then it
>>> would appear "normal" within the guest.
>>>
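A quick stand-alone way to see what this helper returns (detached from QEMU;
the 0x00213090 chip-select value is only an example picked for illustration,
not something taken from this thread):

#include <stdint.h>
#include <stdio.h>

/* Same logic as the quoted static_read(), compilable outside QEMU. */
static uint64_t static_read(void *opaque, uint64_t offset, unsigned size)
{
    uint32_t *val = (uint32_t *) opaque;
    uint32_t mask = (4 / size) - 1;

    return *val >> ((offset & mask) << 3);
}

int main(void)
{
    uint32_t cs0val = 0x00213090;   /* made-up backing value */

    /* 32-bit read at offset 0: mask is 0, the whole word comes back. */
    printf("size=4 offset=0 -> 0x%08x\n", (unsigned)static_read(&cs0val, 0, 4));
    /* 8-bit read at offset 1: mask is 3, byte 1 is shifted into the low bits. */
    printf("size=1 offset=1 -> 0x%02x\n", (unsigned)(uint8_t)static_read(&cs0val, 1, 1));
    /* 16-bit read at offset 2: mask is 1, so offset 2 masks down to 0. */
    printf("size=2 offset=2 -> 0x%04x\n", (unsigned)(uint16_t)static_read(&cs0val, 2, 2));
    return 0;
}

Printing the three cases makes it easier to reason about what a guest would
see for sub-word accesses to these chip-select registers.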
>> I thought this might be related, but that is not the case. I added log
>> messages, and even ran the code in gdb. static_read() and static_write()
>> are not executed.
>>
>> Also,
>>
>>  memory_region_init_io(&cs[0], NULL, &static_ops, &cs0val,
>>    "sx1.cs0", OMAP_CS0_SIZE - flash_size);
>>   ^^
>>  memory_region_add_subregion(address_space,
>>  OMAP_CS0_BASE + flash_size, &cs[0]);
>>  ^^
>>
>> suggests that the code is only executed for memory accesses _after_
>> the actual flash. The memory tree is:
>>
>> memory-region: system
>>   0000000000000000-ffffffffffffffff (prio 0, i/o): system
>>     0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>>     0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0
> 
> Eh, two memory regions with the same size and the same priority... Is this legal?


I'd say yes, if used with memory_region_set_enabled() to make sure only
one of them is enabled at a time. Having both enabled is weird and we
should print a warning. Thanks,
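As a rough illustration of that pattern (a minimal sketch, not code from the
sx1 board; the region names, the RAM backing and the 32 MiB size are
assumptions made for the example):

#include "qemu/osdep.h"
#include "qemu/units.h"
#include "qapi/error.h"
#include "exec/memory.h"

/* Two regions covering the same guest-physical range; only one is live. */
static MemoryRegion flash_a;
static MemoryRegion flash_b;

static void map_overlapping_flash(MemoryRegion *sysmem, Object *owner)
{
    memory_region_init_ram(&flash_a, owner, "demo.flash-a", 32 * MiB,
                           &error_fatal);
    memory_region_init_ram(&flash_b, owner, "demo.flash-b", 32 * MiB,
                           &error_fatal);

    memory_region_add_subregion(sysmem, 0x00000000, &flash_a);
    memory_region_add_subregion(sysmem, 0x00000000, &flash_b);

    /* Keep exactly one of the two overlapping regions enabled. */
    memory_region_set_enabled(&flash_a, true);
    memory_region_set_enabled(&flash_b, false);
}

Flipping the two memory_region_set_enabled() calls later switches which
region the guest sees without having to remap anything.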



> 
> (qemu) info mtree -f -d
> FlatView #0
>  AS "memory", root: system
>  AS "cpu-memory-0", root: system
>  Root memory region: system
>   0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>   0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
>   0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
>   0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs3
>   0000000010000000-0000000011ffffff (prio 0, ram): omap1.dram
>   0000000020000000-000000002002ffff (prio 0, ram): omap1.sram
>   ...
>   Dispatch
>     Physical sections
>   #0 @0000000000000000..ffffffffffffffff (noname) [unassigned]
>   #1 @0000000000000000..0000000001ffffff omap_sx1.flash0-1 [not dirty]
>   #2 @0000000002000000..0000000003ffffff sx1.cs0 [ROM]
>   #3 @0000000004000000..0000000007ffffff sx1.cs1 [watch]
>   #4 @0000000008000000..000000000bffffff sx1.cs3
>   #5 @0000000010000000..0000000011ffffff omap1.dram
>   #6 @0000000020000000..000000002002ffff omap1.sram
>   ...
>     Nodes (9 bits per level, 6 levels) ptr=[3] skip=4
>   [0]
>   0   skip=3  ptr=[3]
>   1..511  skip=1  ptr=NIL
>   [1]
>   0   skip=2  ptr=[3]
>   1..511  skip=1  ptr=NIL
>   [2]
>   0   skip=1  ptr=[3]
>   1..511  skip=1  ptr=NIL
>   [3]
>   0   skip=1  ptr=[4]
>   1   skip=1  ptr=[5]
>   2   skip=2  ptr=[7]
>   3..13   skip=1  ptr=NIL
>  14   skip=2  ptr=[9]
>  15   skip=2  ptr=[11]
>  16..511  skip=1  ptr=NIL
> 

Re: [Qemu-block] [PATCH v2 0/3] Don't write headers if BDS is INACTIVE

2017-11-01 Thread Alexey Kardashevskiy
On 31/10/17 00:10, Jeff Cody wrote:
> Changes from v1->v2:
> 
> * Drop previous parallels patches, just check BDRV_O_INACTIVE now
>   (Kevin)
> 
> git-backport-diff -r qemu/master.. -u github/master
> Key:
> [----] : patches are identical
> [####] : number of functional differences between upstream/downstream patch
> [down] : patch is downstream-only
> The flags [FC] indicate (F)unctional and (C)ontextual differences, 
> respectively
> 
> 001/3:[----] [--] 'block/vhdx.c: Don't blindly update the header'
> 002/3:[down] 'block/parallals: Do not update header or truncate image when 
> INMIGRATE'
> 003/3:[----] [--] 'qemu-iotests: update unsupported image formats in 194'



Tested-by: Alexey Kardashevskiy <a...@ozlabs.ru>


> 
> v1:
> 
> VHDX and Parallels both blindly write headers to the image file
> if the images are opened R/W.  This causes an assert if the QEMU run
> state is INMIGRATE.
> 
> Jeff Cody (3):
>   block/vhdx.c: Don't blindly update the header
>   block/parallals: Do not update header or truncate image when INMIGRATE
>   qemu-iotests: update unsupported image formats in 194
> 
>  block/parallels.c  | 7 ++-
>  block/vhdx.c   | 7 ---
>  tests/qemu-iotests/194 | 2 +-
>  3 files changed, 3 insertions(+), 13 deletions(-)
> 


-- 
Alexey
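For readers who are not following the individual patches: the approach
described above (skip the header update when the BDS is inactive) boils down
to a check on BDRV_O_INACTIVE before touching the image, roughly like the
sketch below. The helper shape and call site are hypothetical; only the flag
check reflects what the series does.

#include "qemu/osdep.h"
#include "block/block_int.h"

/* Hedged sketch only, not the actual patch: write_fn stands in for the
 * driver's real header-update routine. */
static int maybe_update_header(BlockDriverState *bs,
                               int (*write_fn)(BlockDriverState *bs))
{
    if (bs->open_flags & BDRV_O_INACTIVE) {
        /* Inactive image (e.g. incoming migration): leave it alone,
         * a write here would trip the block layer's assertion. */
        return 0;
    }

    return write_fn(bs);
}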



Re: [Qemu-block] [Qemu-devel] [PATCH] qcow2: allocate cluster_cache/cluster_data on demand

2017-08-30 Thread Alexey Kardashevskiy
On 31/08/17 03:20, Stefan Hajnoczi wrote:
> On Tue, Aug 22, 2017 at 02:56:00PM +1000, Alexey Kardashevskiy wrote:
>> On 19/08/17 12:46, Alexey Kardashevskiy wrote:
>>> On 19/08/17 01:18, Eric Blake wrote:
>>>> On 08/18/2017 08:31 AM, Stefan Hajnoczi wrote:
>>>>> Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
>>>>> * cluster_size + 512 bytes upfront.  Allocate s->cluster_cache and
>>>>> s->cluster_data when the first read operation is performed on a
>>>>> compressed cluster.
>>>>>
>>>>> The buffers are freed in .bdrv_close().  .bdrv_open() no longer has any
>>>>> code paths that can allocate these buffers, so remove the free functions
>>>>> in the error code path.
>>>>>
>>>>> Reported-by: Alexey Kardashevskiy <a...@ozlabs.ru>
>>>>> Cc: Kevin Wolf <kw...@redhat.com>
>>>>> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
>>>>> ---
>>>>> Alexey: Does this improve your memory profiling results?
>>>>
>>>> Is this a regression from earlier versions? 
>>>
>>> Hm, I have not thought about this.
>>>
>>> So. I did bisect and this started happening from
>>> 9a4c0e220d8a4f82b5665d0ee95ef94d8e1509d5
>>> "hw/virtio-pci: fix virtio behaviour"
>>>
>>> Before that, the very same command line would take less than 1GB of
>>> resident memory. That thing basically enforces virtio-1.0 for QEMU <=2.6
>>> which means that upstream with "-machine pseries-2.6" works fine (less than
>>> 1GB), "-machine pseries-2.7" does not (close to 7GB, sometimes even 9GB).
>>>
>>> Then I tried bisecting again, with
>>> "scsi=off,disable-modern=off,disable-legacy=on" on my 150 virtio-block
>>> devices, started from
>>> e266d421490e0 "virtio-pci: add flags to enable/disable legacy/modern" (it
>>> added the disable-modern switch) which uses 2GB of memory.
>>>
>>> I ended up with ada434cd0b44 "virtio-pci: implement cfg capability".
>>>
>>> Then I removed proxy->modern_as on v2.10.0-rc3 (see below) and got 1.5GB of
>>> used memory (yay!)
>>>
>>> I do not really know how to reinterpret all of this, do you?
>>
>>
>> Anyone, ping? Should I move the conversation to the original thread? Any
>> hacks to try with libc?
> 
> I suggest a new top-level thread with Michael Tsirkin CCed.


I am continuing in the original "Memory use with >100 virtio devices"
thread; the problem is more generic than virtio, it is just easier to
reproduce with virtio, that's all.



-- 
Alexey



Re: [Qemu-block] [Qemu-devel] [PATCH] qcow2: allocate cluster_cache/cluster_data on demand

2017-08-21 Thread Alexey Kardashevskiy
On 19/08/17 12:46, Alexey Kardashevskiy wrote:
> On 19/08/17 01:18, Eric Blake wrote:
>> On 08/18/2017 08:31 AM, Stefan Hajnoczi wrote:
>>> Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
>>> * cluster_size + 512 bytes upfront.  Allocate s->cluster_cache and
>>> s->cluster_data when the first read operation is performed on a
>>> compressed cluster.
>>>
>>> The buffers are freed in .bdrv_close().  .bdrv_open() no longer has any
>>> code paths that can allocate these buffers, so remove the free functions
>>> in the error code path.
>>>
>>> Reported-by: Alexey Kardashevskiy <a...@ozlabs.ru>
>>> Cc: Kevin Wolf <kw...@redhat.com>
>>> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
>>> ---
>>> Alexey: Does this improve your memory profiling results?
>>
>> Is this a regression from earlier versions? 
> 
> Hm, I have not thought about this.
> 
> So. I did bisect and this started happening from
> 9a4c0e220d8a4f82b5665d0ee95ef94d8e1509d5
> "hw/virtio-pci: fix virtio behaviour"
> 
> Before that, the very same command line would take less than 1GB of
> resident memory. That thing basically enforces virtio-1.0 for QEMU <=2.6
> which means that upstream with "-machine pseries-2.6" works fine (less than
> 1GB), "-machine pseries-2.7" does not (close to 7GB, sometimes even 9GB).
> 
> Then I tried bisecting again, with
> "scsi=off,disable-modern=off,disable-legacy=on" on my 150 virtio-block
> devices, started from
> e266d421490e0 "virtio-pci: add flags to enable/disable legacy/modern" (it
> added the disable-modern switch) which uses 2GB of memory.
> 
> I ended up with ada434cd0b44 "virtio-pci: implement cfg capability".
> 
> Then I removed proxy->modern_as on v2.10.0-rc3 (see below) and got 1.5GB of
> used memory (yay!)
> 
> I do not really know how to reinterpret all of this, do you?


Anyone, ping? Should I move the conversation to the original thread? Any
hacks to try with libc?



> 
> 
> Note: 1GB..9GB numbers from below are the peak values from valgrind's
> massif. This is pretty much resident memory used by QEMU process. In my
> testing I did not enable KVM and I did not start the guest (i.e. used -S).
> 150 virtio-block devices, 2GB RAM for the guest.
> 
> Also, while bisecting, I only paid attention to whether it was 1..2GB or 6..9GB -
> all tests fit these two ranges; for any given sha1 the amount of memory
> would be stable, but among "good" commits it could change between 1GB and 2GB.
> 
> 
> 
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 5d14bd6..7ad447a 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -1783,6 +1783,7 @@ static void virtio_pci_realize(PCIDevice *pci_dev,
> Error **errp)
> /* PCI BAR regions must be powers of 2 */
> pow2ceil(proxy->notify.offset + proxy->notify.size));
> 
> +#if 0
>  memory_region_init_alias(&proxy->modern_cfg,
>   OBJECT(proxy),
>   "virtio-pci-cfg",
> @@ -1791,7 +1792,7 @@ static void virtio_pci_realize(PCIDevice *pci_dev,
> Error **errp)
>   memory_region_size(&proxy->modern_bar));
> 
>  address_space_init(&proxy->modern_as, &proxy->modern_cfg,
> "virtio-pci-cfg-as");
> -
> +#endif
>  if (proxy->disable_legacy == ON_OFF_AUTO_AUTO) {
>  proxy->disable_legacy = pcie_port ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
>  }
> @@ -1860,10 +1861,10 @@ static void virtio_pci_realize(PCIDevice *pci_dev,
> Error **errp)
> 
>  static void virtio_pci_exit(PCIDevice *pci_dev)
>  {
> -VirtIOPCIProxy *proxy = VIRTIO_PCI(pci_dev);
> +//VirtIOPCIProxy *proxy = VIRTIO_PCI(pci_dev);
> 
>  msix_uninit_exclusive_bar(pci_dev);
> -address_space_destroy(&proxy->modern_as);
> +//address_space_destroy(&proxy->modern_as);
>  }
> 
>  static void virtio_pci_reset(DeviceState *qdev)
> 
> 
> 
> 
> 
>> Likely, it is NOT -rc4
>> material, and thus can wait for 2.11; but it should be fine for -stable
>> as part of 2.10.1 down the road.
>>
>>> +++ b/block/qcow2-cluster.c
>>> @@ -1516,6 +1516,23 @@ int qcow2_decompress_cluster(BlockDriverState *bs, 
>>> uint64_t cluster_offset)
>>>  nb_csectors = ((cluster_offset >> s->csize_shift) & s->csize_mask) 
>>> + 1;
>>>  sector_offset = coffset & 511;
>>>  csize = nb_csectors * 512 - sector_offset;
>>>

Re: [Qemu-block] [Qemu-devel] [PATCH] qcow2: allocate cluster_cache/cluster_data on demand

2017-08-19 Thread Alexey Kardashevskiy
On 19/08/17 12:46, Alexey Kardashevskiy wrote:
> On 19/08/17 01:18, Eric Blake wrote:
>> On 08/18/2017 08:31 AM, Stefan Hajnoczi wrote:
>>> Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
>>> * cluster_size + 512 bytes upfront.  Allocate s->cluster_cache and
>>> s->cluster_data when the first read operation is performed on a
>>> compressed cluster.
>>>
>>> The buffers are freed in .bdrv_close().  .bdrv_open() no longer has any
>>> code paths that can allocate these buffers, so remove the free functions
>>> in the error code path.
>>>
>>> Reported-by: Alexey Kardashevskiy <a...@ozlabs.ru>
>>> Cc: Kevin Wolf <kw...@redhat.com>
>>> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
>>> ---
>>> Alexey: Does this improve your memory profiling results?
>>
>> Is this a regression from earlier versions? 
> 
> Hm, I have not thought about this.
> 
> So. I did bisect and this started happening from
> 9a4c0e220d8a4f82b5665d0ee95ef94d8e1509d5
> "hw/virtio-pci: fix virtio behaviour"
> 
> Before that, the very same command line would take less than 1GB of
> resident memory. That thing basically enforces virtio-1.0 for QEMU <=2.6
> which means that upstream with "-machine pseries-2.6" works fine (less than
> 1GB), "-machine pseries-2.7" does not (close to 7GB, sometimes even 9GB).
> 
> Then I tried bisecting again, with
> "scsi=off,disable-modern=off,disable-legacy=on" on my 150 virtio-block
> devices, started from
> e266d421490e0 "virtio-pci: add flags to enable/disable legacy/modern" (it
> added the disable-modern switch) which uses 2GB of memory.
> 
> I ended up with ada434cd0b44 "virtio-pci: implement cfg capability".
> 
> Then I removed proxy->modern_as on v2.10.0-rc3 (see below) and got 1.5GB of
> used memory (yay!)
> 
> I do not really know how to reinterpret all of this, do you?
> 
> 
> Note: 1GB..9GB numbers from below are the peak values from valgrind's

s/from below/from above/ , sorry, bad cut-n-paste :)


-- 
Alexey





Re: [Qemu-block] [PATCH] qcow2: allocate cluster_cache/cluster_data on demand

2017-08-19 Thread Alexey Kardashevskiy
On 18/08/17 23:31, Stefan Hajnoczi wrote:
> Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
> * cluster_size + 512 bytes upfront.  Allocate s->cluster_cache and
> s->cluster_data when the first read operation is performed on a
> compressed cluster.
> 
> The buffers are freed in .bdrv_close().  .bdrv_open() no longer has any
> code paths that can allocate these buffers, so remove the free functions
> in the error code path.
> 
> Reported-by: Alexey Kardashevskiy <a...@ozlabs.ru>
> Cc: Kevin Wolf <kw...@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
> ---
> Alexey: Does this improve your memory profiling results?

Yes, it does:

was:
12.81% (1,023,193,088B)
now:
05.36% (393,893,888B)


Tested-by: Alexey Kardashevskiy <a...@ozlabs.ru>

>  block/qcow2-cluster.c | 17 +
>  block/qcow2.c | 12 
>  2 files changed, 17 insertions(+), 12 deletions(-)
> 
> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
> index f06c08f64c..c47600a44e 100644
> --- a/block/qcow2-cluster.c
> +++ b/block/qcow2-cluster.c
> @@ -1516,6 +1516,23 @@ int qcow2_decompress_cluster(BlockDriverState *bs, 
> uint64_t cluster_offset)
>  nb_csectors = ((cluster_offset >> s->csize_shift) & s->csize_mask) + 
> 1;
>  sector_offset = coffset & 511;
>  csize = nb_csectors * 512 - sector_offset;
> +
> +/* Allocate buffers on first decompress operation, most images are
> + * uncompressed and the memory overhead can be avoided.  The buffers
> + * are freed in .bdrv_close().
> + */
> +if (!s->cluster_data) {
> +/* one more sector for decompressed data alignment */
> +s->cluster_data = qemu_try_blockalign(bs->file->bs,
> +QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size + 512);
> +if (!s->cluster_data) {
> +return -EIO;
> +}
> +}
> +if (!s->cluster_cache) {
> +s->cluster_cache = g_malloc(s->cluster_size);
> +}
> +
>  BLKDBG_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
>  ret = bdrv_read(bs->file, coffset >> 9, s->cluster_data,
>  nb_csectors);
> diff --git a/block/qcow2.c b/block/qcow2.c
> index 40ba26c111..0ac201910a 100644
> --- a/block/qcow2.c
> +++ b/block/qcow2.c
> @@ -1360,16 +1360,6 @@ static int qcow2_do_open(BlockDriverState *bs, QDict 
> *options, int flags,
>  goto fail;
>  }
>  
> -s->cluster_cache = g_malloc(s->cluster_size);
> -/* one more sector for decompressed data alignment */
> -s->cluster_data = qemu_try_blockalign(bs->file->bs, 
> QCOW_MAX_CRYPT_CLUSTERS
> -* s->cluster_size + 512);
> -if (s->cluster_data == NULL) {
> -error_setg(errp, "Could not allocate temporary cluster buffer");
> -ret = -ENOMEM;
> -goto fail;
> -}
> -
>  s->cluster_cache_offset = -1;
>  s->flags = flags;
>  
> @@ -1507,8 +1497,6 @@ static int qcow2_do_open(BlockDriverState *bs, QDict 
> *options, int flags,
>  if (s->refcount_block_cache) {
>  qcow2_cache_destroy(bs, s->refcount_block_cache);
>  }
> -g_free(s->cluster_cache);
> -qemu_vfree(s->cluster_data);
>  qcrypto_block_free(s->crypto);
>  qapi_free_QCryptoBlockOpenOptions(s->crypto_opts);
>  return ret;
> 


-- 
Alexey



Re: [Qemu-block] [Qemu-devel] [PATCH] qcow2: allocate cluster_cache/cluster_data on demand

2017-08-19 Thread Alexey Kardashevskiy
On 19/08/17 01:18, Eric Blake wrote:
> On 08/18/2017 08:31 AM, Stefan Hajnoczi wrote:
>> Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
>> * cluster_size + 512 bytes upfront.  Allocate s->cluster_cache and
>> s->cluster_data when the first read operation is performed on a
>> compressed cluster.
>>
>> The buffers are freed in .bdrv_close().  .bdrv_open() no longer has any
>> code paths that can allocate these buffers, so remove the free functions
>> in the error code path.
>>
>> Reported-by: Alexey Kardashevskiy <a...@ozlabs.ru>
>> Cc: Kevin Wolf <kw...@redhat.com>
>> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
>> ---
>> Alexey: Does this improve your memory profiling results?
> 
> Is this a regression from earlier versions? 

Hm, I have not thought about this.

So. I did bisect and this started happening from
9a4c0e220d8a4f82b5665d0ee95ef94d8e1509d5
"hw/virtio-pci: fix virtio behaviour"

Before that, the very same command line would take less than 1GB of
resident memory. That thing basically enforces virtio-1.0 for QEMU <=2.6
which means that upstream with "-machine pseries-2.6" works fine (less than
1GB), "-machine pseries-2.7" does not (close to 7GB, sometimes even 9GB).

Then I tried bisecting again, with
"scsi=off,disable-modern=off,disable-legacy=on" on my 150 virtio-block
devices, started from
e266d421490e0 "virtio-pci: add flags to enable/disable legacy/modern" (it
added the disable-modern switch) which uses 2GB of memory.

I ended up with ada434cd0b44 "virtio-pci: implement cfg capability".

Then I removed proxy->modern_as on v2.10.0-rc3 (see below) and got 1.5GB of
used memory (yay!)

I do not really know how to reinterpret all of this, do you?


Note: 1GB..9GB numbers from below are the peak values from valgrind's
massif. This is pretty much resident memory used by QEMU process. In my
testing I did not enable KVM and I did not start the guest (i.e. used -S).
150 virtio-block devices, 2GB RAM for the guest.

Also, while bisecting, I only paid attention to whether it was 1..2GB or 6..9GB -
all tests fit these two ranges; for any given sha1 the amount of memory
would be stable, but among "good" commits it could change between 1GB and 2GB.



diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 5d14bd6..7ad447a 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1783,6 +1783,7 @@ static void virtio_pci_realize(PCIDevice *pci_dev,
Error **errp)
/* PCI BAR regions must be powers of 2 */
pow2ceil(proxy->notify.offset + proxy->notify.size));

+#if 0
 memory_region_init_alias(&proxy->modern_cfg,
  OBJECT(proxy),
  "virtio-pci-cfg",
@@ -1791,7 +1792,7 @@ static void virtio_pci_realize(PCIDevice *pci_dev,
Error **errp)
  memory_region_size(&proxy->modern_bar));

 address_space_init(&proxy->modern_as, &proxy->modern_cfg,
"virtio-pci-cfg-as");
-
+#endif
 if (proxy->disable_legacy == ON_OFF_AUTO_AUTO) {
 proxy->disable_legacy = pcie_port ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
 }
@@ -1860,10 +1861,10 @@ static void virtio_pci_realize(PCIDevice *pci_dev,
Error **errp)

 static void virtio_pci_exit(PCIDevice *pci_dev)
 {
-VirtIOPCIProxy *proxy = VIRTIO_PCI(pci_dev);
+//VirtIOPCIProxy *proxy = VIRTIO_PCI(pci_dev);

 msix_uninit_exclusive_bar(pci_dev);
-address_space_destroy(&proxy->modern_as);
+//address_space_destroy(&proxy->modern_as);
 }

 static void virtio_pci_reset(DeviceState *qdev)





> Likely, it is NOT -rc4
> material, and thus can wait for 2.11; but it should be fine for -stable
> as part of 2.10.1 down the road.
> 
>> +++ b/block/qcow2-cluster.c
>> @@ -1516,6 +1516,23 @@ int qcow2_decompress_cluster(BlockDriverState *bs, 
>> uint64_t cluster_offset)
>>  nb_csectors = ((cluster_offset >> s->csize_shift) & s->csize_mask) 
>> + 1;
>>  sector_offset = coffset & 511;
>>  csize = nb_csectors * 512 - sector_offset;
>> +
>> +/* Allocate buffers on first decompress operation, most images are
>> + * uncompressed and the memory overhead can be avoided.  The buffers
>> + * are freed in .bdrv_close().
>> + */
>> +if (!s->cluster_data) {
>> +/* one more sector for decompressed data alignment */
>> +s->cluster_data = qemu_try_blockalign(bs->file->bs,
>> +QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size + 512);
>> +if (!s->cluster_data) {
>> +return -EIO;
> 
> Is -ENOMEM any better than -EIO here?
> 


-- 
Alexey
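For reference, the variant Eric is asking about would only change the error
code in the hunk quoted above; everything else stays as in Stefan's patch
(a sketch of that one-word change, not a real counter-proposal):

    if (!s->cluster_data) {
        /* one more sector for decompressed data alignment */
        s->cluster_data = qemu_try_blockalign(bs->file->bs,
                QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size + 512);
        if (!s->cluster_data) {
            return -ENOMEM;   /* report the allocation failure as ENOMEM */
        }
    }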


