On Thu, 13 Sep 2018 07:16:05 +0200
Cédric Le Goater wrote:
> So that we don't have to call qdev_get_machine() to get the machine
> class and the sPAPRIrq backend holding the number of MSIs.
>
> Signed-off-by: Cédric Le Goater
> ---
Reviewed-by: Greg Kurz
> include/hw/pci-host/spapr.h | 5
On Wednesday, September 12, 2018 10:34 PM, Eric Blake wrote:
> On 7/16/18 6:54 AM, Igor Mammedov wrote:
>
> >> +
> >> +#include "unistd.h"
> >> +#include "fcntl.h"
> >> +#include "qemu/osdep.h"
> >> +#include "sysemu/numa.h"
> >> +#include "hw/i386/pc.h"
> >> +#include "hw/i386/acpi-build.h"
> >>
On Wed, 09/12 19:10, Paolo Bonzini wrote:
> This is a preparation for the next patch, and also a very small
> optimization. Compute the timeout only once, before invoking
> try_poll_mode, and adjust it in run_poll_handlers. The adjustment
> is the polling time when polling fails, or zero
>
> > This patch adds virtio-pmem Qemu device.
> >
> > This device presents memory address range information to guest
> > which is backed by file backend type. It acts like persistent
> > memory device for KVM guest. Guest can perform read and
> > persistent write operations on this
Peter,
On 9/12/18 11:52 AM, Peter Xu wrote:
On Tue, Sep 11, 2018 at 11:49:49AM -0500, Brijesh Singh wrote:
Now that amd-iommu supports interrupt remapping, enable GASup in the IVRS
table and GASup in the extended feature register to indicate that the IOMMU
supports guest virtual APIC mode.
Note that
On Wed, 09/12 19:10, Paolo Bonzini wrote:
> Commit 70232b5253 ("aio-posix: Don't count ctx->notifier as progress when
> 2018-08-15), by not reporting progress, causes aio_poll to execute the
> system call when polling succeeds because of ctx->notifier. This introduces
> latency before the call to
On Wed, 09/12 14:42, Paolo Bonzini wrote:
> On 12/09/2018 13:50, Fam Zheng wrote:
> >> I think it's okay if it is invoked. The sequence is first you stop the
> >> vq, then you drain the BlockBackends, then you switch AioContext. All
> >> that matters is the outcome when
On Tue, Sep 11, 2018 at 06:41:24AM +0200, Cédric Le Goater wrote:
> On 09/11/2018 03:48 AM, David Gibson wrote:
> > On Mon, Sep 10, 2018 at 01:02:20PM +0200, Cédric Le Goater wrote:
> >> The number of MSI interrupts a sPAPR machine can allocate is in
> >> direct relation
On Thu, Sep 13, 2018 at 07:16:05AM +0200, Cédric Le Goater wrote:
> So that we don't have to call qdev_get_machine() to get the machine
> class and the sPAPRIrq backend holding the number of MSIs.
>
> Signed-off-by: Cédric Le Goater
Applied, thanks.
> ---
> include/hw/pci-host/spapr.h | 5
Brijesh,
On 9/11/18 11:49 PM, Brijesh Singh wrote:
Emulate the interrupt remapping support when guest virtual APIC is
enabled.
See the IOMMU spec: https://support.amd.com/TechDocs/48882_IOMMU.pdf
(section 2.2.5.2) for detailed information.
When VAPIC is enabled, it uses interrupt remapping as
> > --- a/docs/specs/standard-vga.txt
> > +++ b/docs/specs/standard-vga.txt
> > @@ -61,7 +61,7 @@ MMIO area spec
> > Likewise applies to the pci variant only for obvious reasons.
> > - - 03ff : reserved, for possible virtio extension.
> > + - 03ff : edid data blob.
>
>
> > +if
On 09/06/2018 07:03 PM, Juan Quintela wrote:
guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Changelog in v6:
Thanks to Juan's review, in this version we
1) move flush compressed data to find_dirty_block() where it hits the end
of memblock
2) use save_page_use_compression instead
On 2018-09-12 22:08, Tony Krowiak wrote:
> This patch provides documentation describing the AP architecture and
> design concepts behind the virtualization of AP devices. It also
> includes an example of how to configure AP devices for exclusive
> use of KVM guests.
>
> Signed-off-by: Tony
OK, thanks for the confirmation, John, so seems like this bug has been
fixed in the past and we can close it now.
** Changed in: qemu
Status: Incomplete => Fix Released
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
On Tue, Sep 11, 2018 at 07:26:43PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao
>
> Like commit 16b4226 ("hw/acpi-build: Add a check for memory-less NUMA
> node"), we also need to check the memory length for NUMA nodes on ARM.
>
> Signed-off-by: Shannon Zhao
> ---
> hw/arm/virt-acpi-build.c |
On Thu, Sep 13, 2018 at 11:10 AM Zhang Chen wrote:
>
>
> On Wed, Sep 12, 2018 at 3:50 PM Jason Wang wrote:
>
>>
>>
>> On 10 Sep 2018 16:16, Zhang Chen wrote:
>> > Hi All.
>> > Have any comments?
>> > Ping...
>> >
>> > Thanks
>> > Zhang Chen
>>
>> I've queued them with some tweaks on the commit
Hi Luiz,
Thanks for the review.
>
> > This patch adds virtio-pmem driver for KVM guest.
> >
> > Guest reads the persistent memory range information from
> > Qemu over VIRTIO and registers it on nvdimm_bus. It also
> > creates a nd_region object with the persistent memory
> > range
On 09/13/2018 07:48 AM, Thomas Huth wrote:
> On 2018-09-12 22:08, Tony Krowiak wrote:
>> From: Tony Krowiak
>>
>> Introduces the base object model for virtualizing AP devices.
>>
>> Signed-off-by: Tony Krowiak
>> ---
> [...]
>> diff --git a/hw/s390x/ap-bridge.c b/hw/s390x/ap-bridge.c
>> new
On Wed, 09/12 19:10, Paolo Bonzini wrote:
> It is valid for an aio_set_fd_handler to happen concurrently with
> aio_poll. In that case, poll_disable_cnt can change under the heels
> of aio_poll, and the assertion on poll_disable_cnt can fail in
> run_poll_handlers.
>
> Therefore, this patch
There are two callers for vtd_sync_shadow_page_table_range(), one
provided a valid context entry and one not. Move that fetching
operation into the caller vtd_sync_shadow_page_table() where we need to
fetch the context entry.
Meanwhile, we should handle VTD_FR_CONTEXT_ENTRY_P properly when
On 09/13/2018 08:29 AM, Christian Borntraeger wrote:
>>> +++ b/hw/s390x/ap-bridge.c
>> [...]
>>> +void s390_init_ap(void)
>>> +{
>>> +DeviceState *dev;
>>> +
>>> +/* Create bridge device */
>>> +dev = qdev_create(NULL, TYPE_AP_BRIDGE);
>>> +
> From: Alex Bennée [mailto:alex.ben...@linaro.org]
> Pavel Dovgalyuk writes:
>
> > This patch adds support for dynamically loaded plugins.
> > Every plugin is a dynamic library with a set of optional exported
> > functions that will be called from QEMU.
> >
> > Signed-off-by: Pavel Dovgalyuk
>
> From: Alex Bennée [mailto:alex.ben...@linaro.org]
> Pavel Dovgalyuk writes:
>
> > From: Pavel Dovgalyuk
> >
> > These are samples of the instrumenting interface and implementation
> > of some instruction tracing tasks.
> >
> > Signed-off-by: Pavel Dovgalyuk
> > ---
> >
On 2018-09-13 10:54, Fam Zheng wrote:
On Thu, 09/13 10:31, yuchen...@synology.com wrote:
From: yuchenlin
There is a rare case in which the size of the last compressed cluster
is larger than the cluster size, which causes the file not to be
aligned at the sector boundary.
The code looks good to
Brijesh / Peter,
On 9/13/18 10:15 AM, Peter Xu wrote:
On Wed, Sep 12, 2018 at 01:59:06PM -0500, Brijesh Singh wrote:
[...]
}
return _as[devfn]->as;
}
@@ -1172,6 +1274,10 @@ static void amdvi_realize(DeviceState *dev, Error **err)
return;
}
+/* Pseudo
On Thu, 09/13 10:29, Paolo Bonzini wrote:
> On 13/09/2018 08:56, Fam Zheng wrote:
> >> +/* No need to order poll_disable_cnt writes against other updates;
> >> + * the counter is only used to avoid wasting time and latency on
> >> + * iterated polling when the system call will be
On Mon, Sep 03, 2018 at 04:32:10PM +, Ryan El Kochta wrote:
> This patch adds a new option to the input-linux object:
>
> grab_toggle=key-key-key
"grab-toggle" (no underscore) please.
I'm still not convinced we need that much flexibility.
I would go for a fixed list of combinations.
On 13/09/2018 08:03, Fam Zheng wrote:
> On Wed, 09/12 14:42, Paolo Bonzini wrote:
>> On 12/09/2018 13:50, Fam Zheng wrote:
I think it's okay if it is invoked. The sequence is first you stop the
vq, then you drain the BlockBackends, then you switch AioContext. All
that matters is
On 12/09/2018 10:17, Pavel Dovgalyuk wrote:
> GDB remote protocol supports reverse debugging of the targets.
> It includes 'reverse step' and 'reverse continue' operations.
> The first one finds the previous step of the execution,
> and the second one is intended to stop at the last breakpoint
Brijesh/Peter,
On 9/13/18 4:14 AM, Brijesh Singh wrote:
On 09/11/2018 11:52 PM, Peter Xu wrote:
...
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 5c2c638..1cbc8ba 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2565,7 +2565,8 @@ build_amd_iommu(GArray
On Thu, 09/13 16:29, yuchen...@synology.com wrote:
> From: yuchenlin
>
> There is a rare case in which the size of the last compressed cluster
> is larger than the cluster size, which causes the file not to be
> aligned at the sector boundary.
>
> There are three reasons to do it. First, if vmdk
On 09/12/2018 03:57 PM, Fam Zheng wrote:
On Fri, 09/07 21:39, Fei Li wrote:
Add a new Error parameter for vnc_display_init() to handle errors
in its caller: vnc_init_func(), just like vnc_display_open() does.
And let the call trace propagate the Error.
Besides, make
Hi Kirill,
That's a bit tricky to debug; could you build qemu from git and try and
bisect between 2.12.0 and 3.0 to see which commit broke it?
https://bugs.launchpad.net/bugs/1792193
On 13/09/2018 11:11, Paolo Bonzini wrote:
> On 13/09/2018 08:03, Fam Zheng wrote:
>> On Wed, 09/12 14:42, Paolo Bonzini wrote:
>>> On 12/09/2018 13:50, Fam Zheng wrote:
> I think it's okay if it is invoked. The sequence is first you stop the
> vq, then you drain the BlockBackends, then
On 12/09/2018 10:19, Pavel Dovgalyuk wrote:
> This patch tries to wake up the vCPU when it sleeps and the icount warp
> checkpoint isn't met. It means that the vCPU has something to do, because
> there is no other reason for a non-matching warp checkpoint.
What happens if !replay_has_checkpoint()?
* Marc-André Lureau (marcandre.lur...@redhat.com) wrote:
> hostmem-file and hostmem-memfd use the whole object path for the
> memory region name, but hostname-ram uses only the path component (the
> basename):
>
> qemu -m 1024 -object memory-backend-ram,id=mem,size=1G -numa node,memdev=mem
>
On 12/09/2018 18:01, Li Qiang wrote:
> From: Li Qiang
>
> Signed-off-by: Li Qiang
This cannot happen, since TLB_NOTDIRTY is only added to the addr_write
member (see accel/tcg/cputlb.c).
Paolo
> ---
> exec.c | 7 +++
> 1 file changed, 7 insertions(+)
>
> diff --git a/exec.c b/exec.c
>
Hi Peter,
On 09/13/2018 09:55 AM, Peter Xu wrote:
There are two callers for vtd_sync_shadow_page_table_range(), one
provided a valid context entry and one not. Move that fetching
operation into the caller vtd_sync_shadow_page_table() where we need to
fetch the context entry.
Meanwhile, we
On Thu, Sep 13, 2018 at 10:16:20AM +0200, Maxime Coquelin wrote:
> Hi Peter,
>
> On 09/13/2018 09:55 AM, Peter Xu wrote:
> > There are two callers for vtd_sync_shadow_page_table_range(), one
> > provided a valid context entry and one not. Move that fetching
> > operation into the caller
On 09/12/2018 03:55 PM, Fam Zheng wrote:
On Fri, 09/07 21:38, Fei Li wrote:
Currently, when qemu_signal_init() fails it only returns a non-zero
value without propagating any Error. But its callers need a
non-null err when running error_report_err(err), or else a 0->msg
dereference occurs.
To avoid such
On Thu, Sep 13, 2018 at 03:15:27PM +0700, Suravee Suthikulpanit wrote:
> Brijesh / Peter,
>
> On 9/13/18 10:15 AM, Peter Xu wrote:
> > On Wed, Sep 12, 2018 at 01:59:06PM -0500, Brijesh Singh wrote:
> >
> > [...]
> >
> > > > >}
> > > > >return _as[devfn]->as;
> > > > >}
> > >
On Thu, 09/13 16:46, Fei Li wrote:
>
>
> On 09/12/2018 03:55 PM, Fam Zheng wrote:
> > On Fri, 09/07 21:38, Fei Li wrote:
> > > Currently, when qemu_signal_init() fails it only returns a non-zero
> > > value but without propagating any Error. But its callers need a
> > > non-null err when runs
Previously, if the size of the initrd was >= 2G, qemu exited with an error:
root@haswell-OptiPlex-9020:/home/lizj#
/home/lizhijian/lkp/qemu-colo/x86_64-softmmu/qemu-system-x86_64 -kernel
./vmlinuz-4.16.0-rc4 -initrd large.cgz -nographic
qemu: error reading initrd large.cgz: No such file or directory
On 09/12/2018 04:20 PM, Fam Zheng wrote:
On Fri, 09/07 21:39, Fei Li wrote:
Make qemu_thread_create() return a Boolean to indicate if it succeeds
rather than failing with an error. And add an Error parameter to hold
the error message and let the callers handle it.
Besides, directly return
On Thu, 09/13 15:47, yuchenlin wrote:
> On 2018-09-13 10:54, Fam Zheng wrote:
> > On Thu, 09/13 10:31, yuchen...@synology.com wrote:
> > > From: yuchenlin
> > >
> > > There is a rare case which the size of last compressed cluster
> > > is larger than the cluster size, which will cause the file
From: yuchenlin
There is a rare case in which the size of the last compressed cluster
is larger than the cluster size, which causes the file not to be
aligned at the sector boundary.
There are three reasons to do it. First, if vmdk doesn't align at
the sector boundary, there may be many undefined
On 13/09/2018 08:56, Fam Zheng wrote:
>> +/* No need to order poll_disable_cnt writes against other updates;
>> + * the counter is only used to avoid wasting time and latency on
>> + * iterated polling when the system call will be ultimately necessary.
>> + * Changing handlers is a
Am 12.09.2018 um 19:03 hat Denis V. Lunev geschrieben:
> On 09/12/2018 04:15 PM, Kevin Wolf wrote:
> > Am 12.09.2018 um 14:03 hat Denis Plotnikov geschrieben:
> >> On 10.09.2018 15:41, Kevin Wolf wrote:
> >>> Am 29.06.2018 um 14:40 hat Denis Plotnikov geschrieben:
> Fixes the problem of ide
On 13/09/2018 06:21, Mark Cave-Ayland wrote:
> Indeed, see the Based-on header attached to the cover letter: it is
> dependent upon the lsi53c8xx_create() removal patchset at
> https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg00797.html
> which Paolo has queued here:
>
On 09/13/2018 04:25 AM, David Gibson wrote:
> On Tue, Sep 11, 2018 at 07:55:03AM +0200, Cédric Le Goater wrote:
>> The new layout using static IRQ numbers does not leave much space for
>> the dynamic MSI range, only 0x100 IRQ numbers. Increase the total
>> number of IRQs for newer machines and
On 12/09/2018 10:19, Pavel Dovgalyuk wrote:
> + uint64_t id = replay_get_current_step();
> + replay_add_event(REPLAY_ASYNC_EVENT_BH_ONESHOT, cb, opaque, id);
Why does it need an id, while REPLAY_ASYNC_EVENT_BH does not?
Paolo
> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> On 12/09/2018 10:19, Pavel Dovgalyuk wrote:
> > This patch tries to wake up the vCPU when it sleeps and the icount warp
> > checkpoint isn't met. It means that the vCPU has something to do, because
> > there is no other reason for a non-matching warp
On Thu, Sep 13, 2018 at 03:36:28PM +0700, Suravee Suthikulpanit wrote:
> Brijesh/Peter,
>
> On 9/13/18 4:14 AM, Brijesh Singh wrote:
> >
> >
> > On 09/11/2018 11:52 PM, Peter Xu wrote:
> > ...
> >
> > > >
> > > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> > > > index
On Thu, 13 Sep 2018 03:06:27 -0400 (EDT)
Pankaj Gupta wrote:
> >
> > > This patch adds virtio-pmem Qemu device.
> > >
> > > This device presents memory address range information to guest
> > > which is backed by file backend type. It acts like persistent
> > > memory device for KVM
Especially the combination of iothreads, block jobs and drain tends to
lead to hangs currently. This series fixes a few of these bugs, although
there are more of them, to be addressed in separate patches.
The primary goal of this series is to fix the scenario from:
job_finish_sync() needs to release the AioContext lock of the job before
calling aio_poll(). Otherwise, callbacks called by aio_poll() would
possibly take the lock a second time and run into a deadlock with a
nested AIO_WAIT_WHILE() call.
Also, job_drain() without aio_poll() isn't necessarily
All callers in QEMU proper hold the AioContext lock when calling
job_finish_sync(). test-blockjob should do the same when it calls the
function indirectly through job_cancel_sync().
Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
---
include/qemu/job.h| 6 ++
tests/test-blockjob.c | 6
Amongst others, job_finalize_single() calls the .prepare/.commit/.abort
callbacks of the individual job driver. Recently, their use was adapted
for all block jobs so that they involve code calling AIO_WAIT_WHILE()
now. Such code must be called under the AioContext lock for the
respective job, but
This adds tests for calling AIO_WAIT_WHILE() in the .commit and .abort
callbacks. Both reasons why .abort could be called for a single job are
tested: Either .run or .prepare could return an error.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 116
This is a regression test for a deadlock that could occur in callbacks
called from the aio_poll() in bdrv_drain_poll_top_level(). The
AioContext lock wasn't released and therefore would be taken a second
time in the callback. This would cause a possible AIO_WAIT_WHILE() in
the callback to hang.
/20180913
for you to fetch changes up to 418fe4f69648b4f3f0afd2588926deafac81cfe8:
tests/migration: Enable the migration test on s390x, too (2018-09-13 14:06:26
+0200)
migration/next for 20180913
Hi this patchset has all pending
Block jobs claim in .drained_poll() that they are in a quiescent state
as soon as job->deferred_to_main_loop is true. This is obviously wrong,
they still have a completion BH to run. We only get away with this
because commit 91af091f923 added an unconditional aio_poll(false) to the
drain
From: Wei Huang
Recently a new configure option, CROSS_CC_GUEST, was added to
$(TARGET)-softmmu/config-target.mak to support TCG-related tests. This
patch tries to leverage this option to support cross compilation when the
migration boot block file is being re-generated:
* The x86 related
Commit 5cdc9b76e3 ("vl.c: Remove dead assignment")
removed the sockets calculation when 'sockets' wasn't provided on the
CLI, since there weren't any users for it back then. Existing checks
are neither reachable:
} else if (sockets * cores * threads < cpus) {
nor triggerable:
if (sockets * cores *
Changelog since v7:
* drop repetitive sentence in deprecation doc (Eric Blake )
Changelog since v5:
* add(v6) and then remove(v7) Notes section to/from deprecation doc
(Eduardo Habkost )
* fix up wording and math formatting in deprecation doc
(Eduardo Habkost )
* drop !socket
> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> On 12/09/2018 10:19, Pavel Dovgalyuk wrote:
> > + uint64_t id = replay_get_current_step();
> > + replay_add_event(REPLAY_ASYNC_EVENT_BH_ONESHOT, cb, opaque, id);
>
> Why does it need an id, while REPLAY_ASYNC_EVENT_BH does not?
Because
On Wed, 12 Sep 2018 01:12:43 +
"Liu, Jingqi" wrote:
> On Monday, July 16, 2018 8:29 PM, Igor Mammedov wrote:
> > On Tue, 19 Jun 2018 23:20:57 +0800
> > Liu Jingqi wrote:
> >
> > > OSPM evaluates HMAT only during system initialization.
> > > Any changes to the HMAT state at runtime or
On Wed, 29 Aug 2018 17:36:09 +0200
David Hildenbrand wrote:
> To factor out plugging and unplugging of memory device we need access to
> the memory region. So let's replace get_region_size() by
> get_memory_region().
>
> If any memory device will in the future have multiple memory regions
>
A bdrv_drain operation must ensure that all parents are quiesced, this
includes BlockBackends. Otherwise, callbacks called by requests that are
completed on the BDS layer, but not quite yet on the BlockBackend layer
could still create new requests.
Signed-off-by: Kevin Wolf
Reviewed-by: Fam
From: Xiao Guangrong
It avoids touching the compression locks if xbzrle and compression
are both enabled
Signed-off-by: Xiao Guangrong
Reviewed-by: Juan Quintela
Message-Id: <20180906070101.27280-4-xiaoguangr...@tencent.com>
Signed-off-by: Juan Quintela
---
migration/ram.c | 4 +++-
1 file
When starting an active commit job, other callbacks can run before
mirror_start_job() calls bdrv_ref() where needed and cause the nodes to
go away. Add another pair of bdrv_ref/unref() around it to protect
against this case.
Signed-off-by: Kevin Wolf
---
block/mirror.c | 11 +++
1 file
From: Jose Ricardo Ziviani
This patch adds a small hint for the failure case of the load snapshot
process. It may be useful for users to remember that the VM
configuration has changed between the save and load processes.
(qemu) loadvm vm-20180903083641
Unknown savevm section or instance
bdrv_drain_poll_top_level() was buggy because it didn't release the
AioContext lock of the node to be drained before calling aio_poll().
This way, callbacks called by aio_poll() would possibly take the lock a
second time and run into a deadlock with a nested AIO_WAIT_WHILE() call.
However, it
-smp [cpus],sockets/cores/threads[,maxcpus] should describe topology
so that total number of logical CPUs [sockets * cores * threads]
would be equal to [maxcpus]; however, historically we didn't have
such a check in QEMU and it is possible to start a VM with an invalid
topology.
Deprecate invalid
As discussed during "[PATCH v4 00/29] vhost-user for input & GPU"
review, let's define a common set of backend conventions to help with
management layer implementation, and interoperability.
Cc: libvir-l...@redhat.com
Cc: Gerd Hoffmann
Cc: Daniel P. Berrangé
Cc: Changpeng Liu
Cc: Dr. David
On Mon, 10 Sep 2018 17:49:46 +0400
Marc-André Lureau wrote:
> memfd_backend_memory_alloc/file_backend_memory_alloc both needlessly
> are calling host_memory_backend_mr_inited(), which creates an
> illusion that alloc could be called multiple times but it isn't, it's
> called once from
On Fri, Sep 07, 2018 at 06:08:48PM -0400, Bandan Das wrote:
> v2:
> Same as v1 but with another minor cleanup
> patch. The write buffer breakup is still WIP.
>
> A documentation fix and changes to return the
> right error code on write failures.
Added to usb queue.
thanks,
Gerd
On Wed, 29 Aug 2018 17:36:12 +0200
David Hildenbrand wrote:
> Keep it simple for now and simply set the static property, that will
> fail once realized.
I'd merge this with previous patch and mention that set_addr will replace
'addr' property setting in the next patch where preliminary steps
On Thu, 13 Sep 2018 02:58:21 -0400 (EDT)
Pankaj Gupta wrote:
> Hi Luiz,
>
> Thanks for the review.
>
> >
> > > This patch adds virtio-pmem driver for KVM guest.
> > >
> > > Guest reads the persistent memory range information from
> > > Qemu over VIRTIO and registers it on nvdimm_bus. It
job_completed() had a problem with double locking that was recently
fixed independently by two different commits:
"job: Fix nested aio_poll() hanging in job_txn_apply"
"jobs: add exit shim"
One fix removed the first aio_context_acquire(), the other fix removed
the other one. Now we have a bug
In the context of draining a BDS, the .drained_poll callback of block
jobs is called. If this returns true (i.e. there is still some activity
pending), the drain operation may call aio_poll() with blocking=true to
wait for completion.
As soon as the pending activity is completed and the job
blk_unref() first decreases the refcount of the BlockBackend and calls
blk_delete() if the refcount reaches zero. Requests can still be in
flight at this point, they are only drained during blk_delete():
At this point, arbitrary callbacks can run. If any callback takes a
temporary BlockBackend
This extends the existing drain test with a block job to include
variants where the block job runs in a different AioContext.
Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
---
tests/test-bdrv-drain.c | 92 +
1 file changed, 86 insertions(+), 6
On Wed, 29 Aug 2018 17:36:10 +0200
David Hildenbrand wrote:
> Document the functions and when to not expect errors.
>
> Signed-off-by: David Hildenbrand
> ---
> include/hw/mem/memory-device.h | 13 +
> 1 file changed, 13 insertions(+)
>
> diff --git
Even if AIO_WAIT_WHILE() is called in the home context of the
AioContext, we still want to allow the condition to change depending on
other threads as long as they kick the AioWait. Specifically, block jobs
can be running in an I/O thread and should then be able to kick a drain
in the main loop
Request callbacks can do pretty much anything, including operations that
will yield from the coroutine (such as draining the backend). In that
case, a decreased in_flight would be visible to other code and could
lead to a drain completing while the callback hasn't actually completed
yet.
bdrv_do_drained_begin/end() assume that they are called with the
AioContext lock of bs held. If we call drain functions from a coroutine
with the AioContext lock held, we yield and schedule a BH to move out of
coroutine context. This means that the lock for the home context of the
coroutine is
This is a regression test for a deadlock that occurred in block job
completion callbacks (via job_defer_to_main_loop) because the AioContext
lock was taken twice: once in job_finish_sync() and then again in
job_defer_to_main_loop_bh(). This would cause AIO_WAIT_WHILE() to hang.
Signed-off-by:
From: "Dr. David Alan Gilbert"
Clang correctly errors out moaning that rdma_return_path
is used uninitialised in the earlier error paths.
Make it NULL so that the error path ignores it.
Fixes: 55cc1b5937a8e709e4c102e74b206281073aab82
Signed-off-by: Dr. David Alan Gilbert
Reported-by: Cornelia
From: Xiao Guangrong
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times there was no free thread to compress data
busy-rate: rate of thread busy
compressed-size: number of bytes after compression
compression-rate: rate of compressed size
From: Xiao Guangrong
As Peter pointed out:
| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
| per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
| it's per-host-page granularity
|
| An example is that when we migrate a 2M
From: Wei Huang
The x86 boot block header currently is generated with a shell script.
To better support other CPUs (e.g. aarch64), we convert the script
into Makefile. This allows us to 1) support cross-compilation easily,
and 2) avoid creating a script file for every architecture.
Note that,
From: Xiao Guangrong
ram_find_and_save_block() can return a negative value if any error happens;
however, it is completely ignored in the current code
Signed-off-by: Xiao Guangrong
Reviewed-by: Juan Quintela
Message-Id: <20180903092644.25812-5-xiaoguangr...@tencent.com>
Signed-off-by: Juan Quintela
---
From: Thomas Huth
We can re-use the s390-ccw bios code to implement a small firmware
for a s390x guest which prints out the "A" and "B" characters and
modifies the memory, as required for the migration test.
Signed-off-by: Thomas Huth
Message-Id:
From: Wei Huang
This patch adds migration test support for aarch64. The test code, which
implements the same functionality as x86, is booted as a kernel in qemu.
Here are the design choices we make for aarch64:
* We choose this -kernel approach because aarch64 QEMU doesn't provide a
On 13.09.18 14:52, Kevin Wolf wrote:
> job_completed() had a problem with double locking that was recently
> fixed independently by two different commits:
>
> "job: Fix nested aio_poll() hanging in job_txn_apply"
> "jobs: add exit shim"
>
> One fix removed the first aio_context_acquire(), the
On Sat, Sep 8, 2018 at 11:11 AM Mark Cave-Ayland
wrote:
>
> Whilst the PReP specification describes how all PCI IRQs are routed via IRQ
> 15 on the interrupt controller, the real 40p machine has routing quirk in
> that the LSI SCSI device is routed to IRQ 13.
Is it a routing quirk or does 40p
Emilio G. Cota writes:
> Signed-off-by: Emilio G. Cota
Reviewed-by: Alex Bennée
> ---
> target/i386/translate.c | 32 ++--
> 1 file changed, 18 insertions(+), 14 deletions(-)
>
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index
Emilio G. Cota writes:
> Signed-off-by: Emilio G. Cota
Reviewed-by: Alex Bennée
> ---
> target/i386/translate.c | 472
> 1 file changed, 236 insertions(+), 236 deletions(-)
>
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index
Emilio G. Cota writes:
> Signed-off-by: Emilio G. Cota
Reviewed-by: Alex Bennée
> ---
> target/i386/translate.c | 1174 ---
> 1 file changed, 594 insertions(+), 580 deletions(-)
>
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index