[PATCH] doc: Adapt example for numa setting.

2022-04-28 Thread Jack Wang
Add the SGX NUMA setting to one leftover example; without the NUMA setting, qemu errors out with the message below: qemu-7.0: Parameter 'sgx-epc.0.node' is missing Fixes: d1889b36098c ("doc: Add the SGX numa description") Cc: Yu Zhang Signed-off-by: Jack Wang --- docs/system/i386/sgx.rst | 2
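
For context, a minimal sketch of what the fixed example in docs/system/i386/sgx.rst looks like once the NUMA binding is added (the memdev id "mem1", the size and the node value are illustrative, not copied from the patch):

    qemu-system-x86_64 \
        -object memory-backend-epc,id=mem1,size=64M,prealloc=on \
        -M sgx-epc.0.memdev=mem1,sgx-epc.0.node=0

Without the sgx-epc.0.node property, qemu 7.0 rejects the command line with the error quoted above.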

[PATCH v3] migration/rdma: set the REUSEADDR option for destination

2022-02-17 Thread Jack Wang
number. Set the REUSEADDR option for the destination; this allows the address to be reused and avoids rdma_bind_addr erroring out. Signed-off-by: Jack Wang Reviewed-by: Pankaj Gupta --- v3: add Reviewed-by tags from David and Pankaj. v2: extend commit message as discussed with Pankaj and David
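
As a rough illustration of the change (not the verbatim patch), the destination side can request address reuse on its listening rdma_cm_id via librdmacm's rdma_set_option() before binding; the helper and variable names here are illustrative:

    #include <rdma/rdma_cma.h>

    static int set_reuseaddr(struct rdma_cm_id *listen_id)
    {
        int reuse = 1;

        /* Allow the listen address to be re-bound quickly, so a repeated
         * incoming migration does not fail inside rdma_bind_addr(). */
        return rdma_set_option(listen_id, RDMA_OPTION_ID,
                               RDMA_OPTION_ID_REUSEADDR,
                               &reuse, sizeof(reuse));
    }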

[PATCH v2] migration/rdma: set the REUSEADDR option for destination

2022-02-08 Thread Jack Wang
number. Set the REUSEADDR option for the destination; this allows the address to be reused and avoids rdma_bind_addr erroring out. Signed-off-by: Jack Wang --- v2: extend commit message as discussed with Pankaj and David --- migration/rdma.c | 7 +++ 1 file changed, 7 insertions(+) diff --git

[PATCH 1/2] migration/rdma: Increase the backlog from 5 to 128

2022-02-01 Thread Jack Wang
So it can handle more incoming requests. Signed-off-by: Jack Wang --- migration/rdma.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/migration/rdma.c b/migration/rdma.c index c7c7a384875b..2e223170d06d 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -4238,7 +4238,7
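
The change itself is a one-line tweak to the listen call in migration/rdma.c; a sketch of the call (surrounding names are illustrative):

    /* Queue up to 128 pending connection requests instead of 5, so a
     * burst of incoming RDMA connections is not refused. */
    ret = rdma_listen(listen_id, 128);
    if (ret < 0) {
        /* handle the error */
    }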

[PATCH 2/2] migration/rdma: set the REUSEADDR option for destination

2022-02-01 Thread Jack Wang
This allows the address to be reused and avoids rdma_bind_addr erroring out. Signed-off-by: Jack Wang --- migration/rdma.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/migration/rdma.c b/migration/rdma.c index 2e223170d06d..b498ef013c77 100644 --- a/migration/rdma.c +++ b/migration

Re: io_uring possibly the culprit for qemu hang (linux-5.4.y)

2020-10-01 Thread Jack Wang
Stefano Garzarella wrote on Thu, Oct 1, 2020 at 10:59 AM: > > +Cc: qemu-devel@nongnu.org > > Hi, > > On Thu, Oct 01, 2020 at 01:26:51AM +0900, Ju Hyung Park wrote: > > Hi everyone. > > > > I have recently switched to a setup running QEMU 5.0 (which supports > > io_uring) for a Windows 10 guest on Linux v5.4.63.

Re: [PATCH 3/3] target/i386: modify Icelake-Client and Icelake-Server CPU model number

2020-02-27 Thread Jack Wang
Chenyi Qiang wrote on Thu, Feb 27, 2020 at 10:07 AM: > > According to the Intel Icelake family list, Icelake-Client uses model > number 126(0x7D) 0x7D is 125 in decimal (126 would be 0x7E), so the commit message needs to be fixed. Cheers Jack Wang

Re: [Qemu-devel] Overcommitting cpu results in all vms offline

2018-09-17 Thread Jack Wang
Stefan Priebe - Profihost AG wrote on Mon, Sep 17, 2018 at 9:00 AM: > > Hi, > > On 17.09.2018 at 08:38, Jack Wang wrote: > > Stefan Priebe - Profihost AG wrote on Sun, Sep 16, 2018 at 3:31 PM: > >> > >> Hello, > >> > >> while overcommitting cpu I had several situations wh

Re: [Qemu-devel] Overcommitting cpu results in all vms offline

2018-09-17 Thread Jack Wang
Stefan Priebe - Profihost AG wrote on Sun, Sep 16, 2018 at 3:31 PM: > > Hello, > > while overcommitting cpu I had several situations where all vms went offline > while two vms saturated all cores. > > I believed all vms would stay online but would just not be able to use all > their cores? > > My original idea

Re: [Qemu-devel] [Bug 1523246] Re: Virtio-blk does not support TRIM

2017-04-19 Thread Jack Wang
Latest effort was one month ago: https://patchwork.kernel.org/patch/9645537/ 2017-04-19 14:08 GMT+02:00 Mike Mol : > discard support for virtio-blk is on the QEMU TODO list: > > http://wiki.qemu-project.org/ToDo/Block#virtio- > blk_discard_support_.5BPeter_Lieven.5D > > -- >

Re: [Qemu-devel] [RFC] block io lost in the guest , possible related to qemu?

2013-10-30 Thread Jack Wang
On 10/30/2013 10:50 AM, Stefan Hajnoczi wrote: On Fri, Oct 25, 2013 at 05:01:54PM +0200, Jack Wang wrote: We've seen guest block io lost in a VM. Any response will be helpful. The environment is: guest os: Ubuntu 13.04 running a busy database workload with xfs on a disk exported with virtio-blk

Re: [Qemu-devel] [RFC] block io lost in the guest , possible related to qemu?

2013-10-28 Thread Jack Wang
Hello Kevin, Stefan, Any comments or wild guesses about the bug? Regards, Jack On 10/25/2013 05:01 PM, Jack Wang wrote: Hi Experts, We've seen guest block io lost in a VM. Any response will be helpful. The environment is: guest os: Ubuntu 13.04 running a busy database workload with xfs on a disk

Re: [Qemu-devel] [RFC] block io lost in the guest , possible related to qemu?

2013-10-28 Thread Jack Wang
, Jack On Mon, Oct 28, 2013 at 10:15 AM, Jack Wang xjtu...@gmail.com wrote: Hello Kevin, Stefan, Any comments or wild guesses about the bug? Regards, Jack On 10/25/2013 05:01 PM, Jack Wang wrote: Hi Experts, We've seen guest block io lost in a VM. Any response will be helpful

[Qemu-devel] [RFC] block io lost in the guest , possible related to qemu?

2013-10-25 Thread Jack Wang
Hi Experts, We've seen guest block io lost in a VM. Any response will be helpful. The environment is: guest os: Ubuntu 13.04 running a busy database workload with xfs on a disk exported with virtio-blk. The exported vdb has very high in-flight io, over 300. Some time later a lot of io processes are in D state, looks

[Qemu-devel] [RFC] O_EXCL or not open block device

2013-09-12 Thread Jack Wang
Hi all, We're using qemu to export md-raid to a guest OS, and we saw a deadlock on MD (which is already fixed by Neil); please see the thread below: http://marc.info/?l=linux-raid&m=137894040228125&w=2 As Neil suggested, it would be good for userspace applications to call open() with the O_EXCL flag, to avoid
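
A minimal sketch of the suggested userspace behavior (the device path and helper name are illustrative): on Linux, opening a block device with O_EXCL and without O_CREAT requests an exclusive open, which fails with EBUSY while another exclusive holder, such as a running MD array, still owns the device:

    #include <fcntl.h>
    #include <stdio.h>

    int open_exclusive(const char *dev)
    {
        /* e.g. dev = "/dev/md0" */
        int fd = open(dev, O_RDWR | O_EXCL);
        if (fd < 0)
            perror("exclusive open failed");   /* EBUSY if already claimed */
        return fd;
    }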

Re: [Qemu-devel] [RFC] O_EXCL or not open block device

2013-09-12 Thread Jack Wang
On 09/12/2013 04:27 PM, Kevin Wolf wrote: On 12.09.2013 at 15:58, Stefan Hajnoczi wrote: On Thu, Sep 12, 2013 at 01:27:32PM +0200, Jack Wang wrote: Hi all, We're using qemu to export md-raid to a guest OS, and we saw a deadlock on MD (which is already fixed by Neil); please see the thread below