Re: [Qemu-devel] QEMU crashed when reconnecting over iscsi protocol

2018-12-19 Thread Bob Chen
BTW, the iscsi server I used is scsi-target-utils (https://github.com/fujita/tgt). Bob Chen wrote on Wed, Dec 19, 2018 at 7:34 PM: > I looked into the source code and found some reconnect methods in > libiscsi. Are they able to work? > > QEMU: 2.12.1 > libiscsi: 1.18.0 (https://gith

[Qemu-devel] QEMU crashed when reconnecting over iscsi protocol

2018-12-19 Thread Bob Chen
I looked into the source code and found some reconnect methods in libiscsi. Are they able to work? QEMU: 2.12.1 libiscsi: 1.18.0 (https://github.com/sahlberg/libiscsi) (gdb) f #0 0x7fcd956933bd in iscsi_reconnect (iscsi=0x7fcd97f206d0) at connect.c:461 461 memcpy(tmp_iscsi->old_iscsi,

Re: [Qemu-devel] [QEMU + SPDK] The demo in the official document is not working

2018-04-23 Thread Bob Chen
the right fix, either in seabios or SPDK, this will be fixed very soon. Best Regards, Changpeng Liu 2018-04-23 16:19 GMT+08:00 Bob Chen <a175818...@gmail.com>: > Hi, > > I was trying to run qemu with spdk, referring to > http://www.spdk.io/doc/vhost.html#vhost_qemu_config .

Re: [Qemu-devel] [SPDK] qemu process hung at boot-up, no explicit errors or warnings

2018-04-23 Thread Bob Chen
2018-04-21 1:34 GMT+08:00 John Snow <js...@redhat.com>: > > > On 04/20/2018 07:13 AM, Bob Chen wrote: > > 2.11.1 could work, qemu is no longer occupying 100% CPU. That's > > interesting... > > > > Does 2.12 use 100% even at the firmware menu? Maybe we're

[Qemu-devel] [QEMU + SPDK] The demo in the official document is not working

2018-04-23 Thread Bob Chen
Hi, I was trying to run qemu with spdk, referring to http://www.spdk.io/doc/ vhost.html#vhost_qemu_config . Steps were strictly followed. # Environment: latest CentOS 7 kernel, nvme ssd, spdk v18.01.x, > dpdk 17.11.1, qemu 2.11.1 cd spdk > sudo su > ulimit -l unlimited > HUGEMEM=2048
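The truncated steps above follow the spdk.io vhost guide. A minimal sketch of that flow is below, using the v18.01-era RPC names from that guide (newer SPDK releases renamed them, e.g. `construct_vhost_scsi_controller` became `vhost_create_scsi_controller`); the bdev sizes, socket paths, and core mask are illustrative:

```shell
# Host side: reserve hugepages and bind the NVMe device, then start the vhost target
cd spdk
sudo su
ulimit -l unlimited
HUGEMEM=2048 scripts/setup.sh

app/vhost/vhost -S /var/tmp -m 0x3 &
scripts/rpc.py construct_malloc_bdev 64 512 -b Malloc0        # 64 MiB test bdev
scripts/rpc.py construct_vhost_scsi_controller vhost.0
scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Malloc0

# QEMU side: guest memory must be shared hugepage-backed for vhost-user
qemu-system-x86_64 -enable-kvm -m 1G \
  -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=char0,path=/var/tmp/vhost.0 \
  -device vhost-user-scsi-pci,chardev=char0 \
  -drive file=guest.img,if=virtio
```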

Re: [Qemu-devel] [SPDK] qemu process hung at boot-up, no explicit errors or warnings

2018-04-20 Thread Bob Chen
;js...@redhat.com>: > Forwarding to qemu-block. > > On 04/19/2018 06:13 AM, Bob Chen wrote: > > Hi, > > > > I was trying to run qemu with spdk, referring to > > http://www.spdk.io/doc/vhost.html#vhost_qemu_config > > > > Everything went well since I had alrea

[Qemu-devel] [SPDK] qemu process hung at boot-up, no explicit errors or warnings

2018-04-19 Thread Bob Chen
Hi, I was trying to run qemu with spdk, referring to http://www.spdk.io/doc/vhost.html#vhost_qemu_config Everything went well since I had already set up hugepages, vfio, vhost targets, vhost-scsi device(vhost-block was also tested), etc, without errors or warnings reported. But at the last step

Re: [Qemu-devel] Latest v2.12.0-rc4 has compiling error, rc3 is OK

2018-04-18 Thread Bob Chen
I think you can edit the GitHub repo's description to tell people not to download releases from that site. 2018-04-18 17:29 GMT+08:00 Peter Maydell <peter.mayd...@linaro.org>: > On 18 April 2018 at 09:09, Bob Chen <a175818...@gmail.com> wrote: > > I found that

Re: [Qemu-devel] Latest v2.12.0-rc4 has compiling error, rc3 is OK

2018-04-18 Thread Bob Chen
I found that it has nothing to do with the release version; the GitHub one simply does not work... So are the tarballs that GitHub and qemu.org provide totally different things? 2018-04-18 14:52 GMT+08:00 Bob Chen <a175818...@gmail.com>: > No, I downloaded the release tarball fr

Re: [Qemu-devel] Latest v2.12.0-rc4 has compiling error, rc3 is OK

2018-04-18 Thread Bob Chen
No, I downloaded the release tarball from github. 2018-04-18 14:25 GMT+08:00 Stefan Weil <s...@weilnetz.de>: > Am 18.04.2018 um 08:19 schrieb Bob Chen: > > fatal error: ui/input-keymap-atset1-to-qcode.c: No such file or > directory > > > > Build on my CentOS 7.

[Qemu-devel] Latest v2.12.0-rc4 has compiling error, rc3 is OK

2018-04-18 Thread Bob Chen
fatal error: ui/input-keymap-atset1-to-qcode.c: No such file or directory Build on my CentOS 7.

Re: [Qemu-devel] [GPU and VFIO] qemu hang at startup, VFIO_IOMMU_MAP_DMA is extremely slow

2018-01-01 Thread Bob Chen
Ping... Was it because VFIO_IOMMU_MAP_DMA needs contiguous memory and my host was not able to provide it immediately? 2017-12-26 19:37 GMT+08:00 Bob Chen <a175818...@gmail.com>: > > > 2017-12-26 18:51 GMT+08:00 Liu, Yi L <yi.l@intel.com>: > >> > -Orig

Re: [Qemu-devel] [GPU and VFIO] qemu hang at startup, VFIO_IOMMU_MAP_DMA is extremely slow

2017-12-26 Thread Bob Chen
2017-12-26 18:51 GMT+08:00 Liu, Yi L <yi.l@intel.com>: > > -Original Message- > > From: Qemu-devel [mailto:qemu-devel-bounces+yi.l.liu= > intel@nongnu.org] > > On Behalf Of Bob Chen > > Sent: Tuesday, December 26, 2017 6:30 PM > > To: qemu-de

[Qemu-devel] [GPU and VFIO] qemu hang at startup, VFIO_IOMMU_MAP_DMA is extremely slow

2017-12-26 Thread Bob Chen
Hi, I have a host server with multiple GPU cards, and was assigning them to qemu with VFIO. I found that when setting up the last free GPU, the qemu process would hang there and take almost 10 minutes before finishing startup. I did some digging with gdb, and found the slowest part occurred at the

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-11-30 Thread Bob Chen
Hi, After 3 months of work and investigation, and tedious mail discussions with Nvidia, I think some progress has been made in terms of GPUDirect (p2p) in a virtual environment. The only remaining issue, then, is the low bidirectional bandwidth between two sibling GPUs under the same PCIe

Re: [Qemu-devel] [PATCH 0/3] vfio/pci: Add NVIDIA GPUDirect P2P clique support

2017-11-20 Thread Bob Chen
It was my mistake, please ignore. This patch series does work. 2017-10-26 18:45 GMT+08:00 Bob Chen <a175818...@gmail.com>: > There seem to be some bugs in these patches, causing my VM failed to boot. > > Test case: > > 0. Merge these 3 patches in to release 2.10.1 > > 1

Re: [Qemu-devel] [PATCH 0/3] vfio/pci: Add NVIDIA GPUDirect P2P clique support

2017-10-26 Thread Bob Chen
There seem to be some bugs in these patches, causing my VM to fail to boot. Test case: 0. Merge these 3 patches into release 2.10.1 1. qemu-system-x86_64_2.10.1 ... \ -device vfio-pci,host=04:00.0 \ -device vfio-pci,host=05:00.0 \ -device vfio-pci,host=08:00.0 \ -device vfio-pci,host=09:00.0 \
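For context, the patch series being tested adds an experimental `x-nv-gpudirect-clique` property to vfio-pci (devices sharing a clique ID are advertised to the NVIDIA driver as p2p-capable peers). A hedged sketch of how the test command line above would use it; host addresses are taken from the report, clique assignments are illustrative:

```shell
# Group the four passed-through GPUs into two p2p cliques (IDs 0 and 1)
qemu-system-x86_64 ... \
  -device vfio-pci,host=04:00.0,x-nv-gpudirect-clique=0 \
  -device vfio-pci,host=05:00.0,x-nv-gpudirect-clique=0 \
  -device vfio-pci,host=08:00.0,x-nv-gpudirect-clique=1 \
  -device vfio-pci,host=09:00.0,x-nv-gpudirect-clique=1
```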

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-09-01 Thread Bob Chen
GMT+08:00 Alex Williamson <alex.william...@redhat.com>: > On Wed, 30 Aug 2017 17:41:20 +0800 > Bob Chen <a175818...@gmail.com> wrote: > > > I think I have observed what you said... > > > > The link speed on host remained 8GT/s until I finished running > &

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-30 Thread Bob Chen
I think I have observed what you said... The link speed on host remained 8GT/s until I finished running p2pBandwidthLatencyTest for the first time. Then it became 2.5GT/s... # lspci -s 09:00.0 -vvv 09:00.0 3D controller: NVIDIA Corporation GM204GL [Tesla M60] (rev a1) Subsystem: NVIDIA

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-29 Thread Bob Chen
+08:00 Michael S. Tsirkin <m...@redhat.com>: > On Tue, Aug 22, 2017 at 10:56:59AM -0600, Alex Williamson wrote: > > On Tue, 22 Aug 2017 15:04:55 +0800 > > Bob Chen <a175818...@gmail.com> wrote: > > > > > Hi, > > > > > > I go

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-08 Thread Bob Chen
Plus: 1 GB hugepages improved neither bandwidth nor latency. Results remained the same. 2017-08-08 9:44 GMT+08:00 Bob Chen <a175818...@gmail.com>: > 1. How to test the KVM exit rate? > > 2. The switches are separate devices of PLX Technology > > # lspci -s 07:08.0 -nn >

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-07 Thread Bob Chen
+ TransBlk- ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans- 2017-08-07 23:52 GMT+08:00 Alex Williamson <alex.william...@redhat.com>: > On Mon, 7 Aug 2017 21:00:04 +0800 > Bob Chen <a175818...@gmail.com> wrote: > > > Bad news... The performance had dropped drama

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-07 Thread Bob Chen
Besides, I checked the lspci -vvv output, no capabilities of Access Control are seen. 2017-08-01 23:01 GMT+08:00 Alex Williamson <alex.william...@redhat.com>: > On Tue, 1 Aug 2017 17:35:40 +0800 > Bob Chen <a175818...@gmail.com> wrote: > > > 2017-08-01 13:4

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-07 Thread Bob Chen
16.28 16.68 4.03 Is it because the heavy load of CPU emulation has caused a bottleneck? 2017-08-01 23:01 GMT+08:00 Alex Williamson <alex.william...@redhat.com>: > On Tue, 1 Aug 2017 17:35:40 +0800 > Bob Chen <a175818...@gmail.com> wrote: > > > 2017-08-01 13:4

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-08-01 Thread Bob Chen
2017-08-01 13:46 GMT+08:00 Alex Williamson <alex.william...@redhat.com>: > On Tue, 1 Aug 2017 13:04:46 +0800 > Bob Chen <a175818...@gmail.com> wrote: > > > Hi, > > > > This is a sketch of my hardware topology. > > >

Re: [Qemu-devel] About virtio device hotplug in Q35! [External mail, handle with caution]

2017-07-31 Thread Bob Chen
Hi, This is a sketch of my hardware topology.

        CPU0   <- QPI ->   CPU1
         |                  |
  Root Port (at PCIe.0)   Root Port (at PCIe.1)
      /      \               /      \
  Switch    Switch       Switch    Switch
    /  \

Re: [Qemu-devel] [Device passthrough] Is there a way to passthrough PCIE switch/bridge ?

2017-07-24 Thread Bob Chen
(without traversing the PCIe Host Bridge) PIX = Connection traversing a single PCIe switch NV# = Connection traversing a bonded set of # NVLinks 2017-07-24 14:03 GMT+08:00 Bob Chen <a175818...@gmail.com>: > > - Bob >

[Qemu-devel] [Device passthrough] Is there a way to passthrough PCIE switch/bridge ?

2017-07-24 Thread Bob Chen
- Bob

[Qemu-devel] Questions about GPU passthrough + multiple PCIE switches on host

2017-06-29 Thread Bob Chen
Hi folks, I have 8 GPU cards that need to be passed through to 1 VM. These cards are placed at 2 PCIE switches on the host server, in case there might be a bandwidth limit within a single bus. So what is the correct QEMU bus parameter if I want to achieve the best performance? The QEMU's pcie.0/1 parameter
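One way to expose the two host switches to the guest is a `pxb-pcie` expander bus per switch, with the GPUs behind `pcie-root-port` devices on each expander, so the guest topology roughly mirrors the host's. A hedged sketch; bus numbers, chassis IDs, and host addresses below are illustrative, not from the original mail:

```shell
qemu-system-x86_64 -machine q35 ... \
  -device pxb-pcie,id=pcie.1,bus_nr=64,bus=pcie.0 \
  -device pcie-root-port,id=rp1,bus=pcie.1,chassis=1 \
  -device vfio-pci,host=04:00.0,bus=rp1 \
  -device pcie-root-port,id=rp2,bus=pcie.1,chassis=2 \
  -device vfio-pci,host=05:00.0,bus=rp2 \
  -device pxb-pcie,id=pcie.2,bus_nr=128,bus=pcie.0 \
  -device pcie-root-port,id=rp3,bus=pcie.2,chassis=3 \
  -device vfio-pci,host=08:00.0,bus=rp3
```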

[Qemu-devel] How to upgrade QEMU?

2017-02-14 Thread Bob Chen
Hi folks, I am about to upgrade my QEMU version from an ancient 1.1.2 to the latest. My plan is to override all the installed files with the new ones. I used to rename all the qemu-xxx binaries under /usr/local/bin with a version suffix, so they are not my concern. Just
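The side-by-side scheme described above (version-suffixed binaries under /usr/local/bin) can be sketched like this; the version number and configure options are illustrative assumptions, not from the original mail:

```shell
# Build the new QEMU and install it, then suffix the binary so the
# old 1.1.2 binaries stay available until the switch-over.
wget https://download.qemu.org/qemu-2.8.0.tar.xz
tar xf qemu-2.8.0.tar.xz && cd qemu-2.8.0
./configure --prefix=/usr/local --target-list=x86_64-softmmu
make -j"$(nproc)"
sudo make install
sudo mv /usr/local/bin/qemu-system-x86_64 /usr/local/bin/qemu-system-x86_64-2.8.0
```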

Re: [Qemu-devel] [Nbd] [Qemu-block] How to online resize qemu disk with nbd protocol?

2017-01-22 Thread Bob Chen
Hi folks, My time schedule doesn't allow me to wait for the community's solution, so I started to work on a quick fix, which is to add a 'bdrv_truncate' function to the current NBD BlockDriver. Basically it's an 'active resize' implementation. I also realized that the 'bdrv_truncate' caller

Re: [Qemu-devel] [Qemu-block] [Nbd] How to online resize qemu disk with nbd protocol?

2017-01-12 Thread Bob Chen
There might be a time window between the NBD server's resize and the client's `re-read size` request. Is it safe? What about an active `resize` request from the client? Considering some NBD servers might have the capability to do instant resizing, not applying to LVM or host block device, of

[Qemu-devel] How to online resize qemu disk with nbd protocol?

2017-01-12 Thread Bob Chen
Hi, My qemu runs on a 3rd-party distributed block storage, and the disk backend protocol is nbd. I noticed that there are differences between a default local qcow2 disk and my nbd disk, in terms of resizing the disk on the fly. A local qcow2 disk works no matter whether using qemu-img resize or qemu
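The two resize paths being compared can be sketched as follows; the image path, drive id `drive_2`, and monitor socket path are illustrative assumptions:

```shell
# Local qcow2: offline with qemu-img, or online through the HMP monitor.
qemu-img resize /path/vm.qcow2 +10G
echo 'block_resize drive_2 30G' | socat - unix-connect:/path/hmp.sock

# With an NBD backend, block_resize cannot grow the remote export itself;
# the server and client must agree on the new size out of band.
```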

Re: [Qemu-devel] Live migration + cpu/mem hotplug

2017-01-09 Thread Bob Chen
6PM +0800, Bob Chen wrote: > > Hi, > > > > According to the docs, the destination Qemu must have the exactly same > > parameters as the source one. So if the source has just finished cpu or > > memory hotplug, what would the dest's parameters be like? > > > &

[Qemu-devel] Live migration + cpu/mem hotplug

2017-01-05 Thread Bob Chen
Hi, According to the docs, the destination QEMU must have exactly the same parameters as the source one. So if the source has just finished cpu or memory hotplug, what would the dest's parameters be like? Does a DIMM device, or logically a QOM object, have to be reflected on the new command-line
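A hedged sketch of how a hotplugged DIMM is usually reflected on the destination: the object and device hotplugged at runtime on the source are passed cold-plugged on the destination command line, with matching IDs. Sizes, IDs, and the port are illustrative:

```shell
# Source, at runtime (HMP):
#   object_add memory-backend-ram,id=mem1,size=1G
#   device_add pc-dimm,id=dimm1,memdev=mem1

# Destination command line (same IDs, same size):
qemu-system-x86_64 ... \
  -m 4G,slots=4,maxmem=16G \
  -object memory-backend-ram,id=mem1,size=1G \
  -device pc-dimm,id=dimm1,memdev=mem1 \
  -incoming tcp:0:4444
```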

[Qemu-devel] QEMU -smp parameter: Will multiple threads and cores improve performance?

2016-12-21 Thread Bob Chen
Hi, Which one has better performance: -smp 16, or -smp cores=4,threads=4,sockets=1? The scenario is guest VMs running on a cloud server. -Bob

[Qemu-devel] QEMU 1.1.2: block IO throttle might occasionally freeze running process's IO to zero

2016-11-30 Thread Bob Chen
Test case:
1. QEMU 1.1.2
2. Run fio inside the VM, give it some pressure. Watch the realtime throughput
3. block_set_io_throttle drive_2 1 0 0 2000 0 0  # throttle bps and iops, any value
4. Observed that the IO is very likely to freeze to zero. The fio process got stuck!
5. Kill the
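For reference, the HMP command in step 3 and its QMP equivalent (argument order in HMP is device, bps, bps_rd, bps_wr, iops, iops_rd, iops_wr; the monitor socket path is an illustrative assumption):

```shell
echo 'block_set_io_throttle drive_2 1 0 0 2000 0 0' | socat - unix-connect:/tmp/hmp.sock

# QMP form of the same throttle settings:
# { "execute": "block_set_io_throttle",
#   "arguments": { "device": "drive_2", "bps": 1, "bps_rd": 0, "bps_wr": 0,
#                  "iops": 2000, "iops_rd": 0, "iops_wr": 0 } }
```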

[Qemu-devel] cgroup blkio weight has no effect on qemu

2016-01-20 Thread Bob Chen
Sorry for disturbing by replying; I don't know why I'm not able to send a new mail. Hi folks, Could you enlighten me how to achieve proportional IO sharing by using cgroup, instead of qemu's io-throttling? My qemu config is like: -drive
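A hedged sketch of proportional sharing with the cgroup-v1 blkio controller; cgroup names and weights are illustrative, and note that `blkio.weight` only arbitrates under the CFQ scheduler and only when the disk is actually contended:

```shell
cat /sys/block/sda/queue/scheduler                  # should show [cfq]
mkdir -p /sys/fs/cgroup/blkio/vm_high /sys/fs/cgroup/blkio/vm_low
echo 800 > /sys/fs/cgroup/blkio/vm_high/blkio.weight
echo 200 > /sys/fs/cgroup/blkio/vm_low/blkio.weight
echo "$QEMU_PID_1" > /sys/fs/cgroup/blkio/vm_high/tasks
echo "$QEMU_PID_2" > /sys/fs/cgroup/blkio/vm_low/tasks
```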

[Qemu-devel] cgroup blkio.weight is not working for qemu

2016-01-20 Thread Bob Chen
Hi folks, Could you enlighten me how to achieve proportional IO sharing by using cgroup, instead of qemu's io-throttling? My qemu config is like: -drive file=$DISKFILE,if=none,format=qcow2,cache=none,aio=native -device virtio-blk-pci... Test command inside vm is like: dd if=/dev/vdc