BTW, the iSCSI server I used is scsi-target-utils (https://github.com/fujita/tgt).
Bob Chen wrote on Wed, Dec 19, 2018 at 7:34 PM:
I looked into the source code and found some reconnect methods in
libiscsi. Are they able to work?
QEMU: 2.12.1
libiscsi: 1.18.0 (https://github.com/sahlberg/libiscsi)
(gdb) f
#0  0x7fcd956933bd in iscsi_reconnect (iscsi=0x7fcd97f206d0) at connect.c:461
461             memcpy(tmp_iscsi->old_iscsi,
the right fix, either in SeaBIOS
or SPDK; this will be fixed very soon.
Best Regards,
Changpeng Liu
2018-04-23 16:19 GMT+08:00 Bob Chen <a175818...@gmail.com>:
> Hi,
>
> I was trying to run qemu with spdk, referring to
> http://www.spdk.io/doc/vhost.html#vhost_qemu_config .
2018-04-21 1:34 GMT+08:00 John Snow <js...@redhat.com>:
>
>
> On 04/20/2018 07:13 AM, Bob Chen wrote:
> > 2.11.1 could work, qemu is no longer occupying 100% CPU. That's
> > interesting...
> >
>
> Does 2.12 use 100% even at the firmware menu? Maybe we're
Hi,
I was trying to run qemu with spdk, referring to
http://www.spdk.io/doc/vhost.html#vhost_qemu_config . Steps were strictly followed.
# Environment: latest CentOS 7 kernel, NVMe SSD, spdk v18.01.x,
# dpdk 17.11.1, qemu 2.11.1
cd spdk
sudo su
ulimit -l unlimited
HUGEMEM=2048
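Per the vhost guide cited above, the HUGEMEM variable is normally passed to SPDK's setup script; a minimal sketch of the documented sequence (the script path is per SPDK v18.01 and is an assumption here, not taken from this message):

# Reserve hugepages and bind NVMe devices for SPDK, per the cited guide
cd spdk
sudo su
ulimit -l unlimited            # allow unlimited locked (pinned) memory
HUGEMEM=2048 scripts/setup.sh  # reserve 2 GB of hugepages, rebind NVMe to vfio/uio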
John Snow <js...@redhat.com>:
> Forwarding to qemu-block.
Hi,
I was trying to run qemu with spdk, referring to
http://www.spdk.io/doc/vhost.html#vhost_qemu_config
Everything went well, since I had already set up hugepages, vfio, vhost
targets, the vhost-scsi device (vhost-block was also tested), etc., without
errors or warnings reported.
But at the last step
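The last step in the cited guide is launching QEMU against the vhost-user socket. A minimal sketch of that invocation (the socket path, memory size, and shared hugepage backend follow the doc's example and are assumptions here, since the actual command is not shown):

qemu-system-x86_64 -enable-kvm -m 1G \
  -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=vhost_scsi0,path=/var/tmp/vhost.0 \
  -device vhost-user-scsi-pci,chardev=vhost_scsi0

Note the share=on memory backend; vhost-user requires guest RAM to be shareable with the SPDK process.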
I think you could edit the GitHub repo's description to tell people not to
download releases from that site.
2018-04-18 17:29 GMT+08:00 Peter Maydell <peter.mayd...@linaro.org>:
> On 18 April 2018 at 09:09, Bob Chen <a175818...@gmail.com> wrote:
> > I found that
I found that it has nothing to do with the release version; the GitHub
tarball is just not able to work...
So are the tarballs GitHub and qemu.org provide totally different things?
2018-04-18 14:52 GMT+08:00 Bob Chen <a175818...@gmail.com>:
No, I downloaded the release tarball from github.
2018-04-18 14:25 GMT+08:00 Stefan Weil <s...@weilnetz.de>:
> On 18.04.2018 at 08:19, Bob Chen wrote:
fatal error: ui/input-keymap-atset1-to-qcode.c: No such file or directory
Built on my CentOS 7.
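This error is characteristic of a QEMU 2.12 source tree missing the keycodemapdb submodule; GitHub's auto-generated tarballs do not bundle submodules, while the qemu.org release tarballs do, which matches the findings above. A sketch of the workaround when starting from git instead of a tarball:

# Fetch the keymap generator that GitHub tarballs omit, then build
git clone https://github.com/qemu/qemu.git
cd qemu
git submodule update --init ui/keycodemapdb
./configure && make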
Ping...
Was it because VFIO_IOMMU_MAP_DMA needs contiguous memory and my host was
not able to provide it immediately?
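For what it's worth, the mapping step pins all guest RAM for the VFIO container, which is a common cause of slow multi-GPU startup. One mitigation (an assumption of mine, not an answer from this thread) is to back guest RAM with preallocated hugepages so the pages already exist when VFIO pins them:

# Back guest RAM with preallocated hugepages to speed up VFIO pinning
qemu-system-x86_64 ... -m 16G \
  -object memory-backend-file,id=ram0,size=16G,mem-path=/dev/hugepages,prealloc=on,share=on \
  -numa node,memdev=ram0 \
  -device vfio-pci,host=09:00.0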
2017-12-26 18:51 GMT+08:00 Liu, Yi L <yi.l@intel.com>:
> > -----Original Message-----
> > From: Qemu-devel [mailto:qemu-devel-bounces+yi.l.liu=intel@nongnu.org]
> > On Behalf Of Bob Chen
> > Sent: Tuesday, December 26, 2017 6:30 PM
> > To: qemu-de
Hi,
I have a host server with multiple GPU cards, and was assigning them to
qemu with VFIO.
I found that when setting up the last free GPU, the qemu process would hang
there and take almost 10 minutes to finish startup. I did some digging with
gdb, and found the slowest part occurred at the
Hi,
After 3 months of work and investigation, and tedious mail discussions with
Nvidia, I think some progress has been made in terms of
GPUDirect (p2p) in a virtual environment.
The only remaining issue, then, is the low bidirectional bandwidth between
two sibling GPUs under the same PCIe
It was my mistake, please ignore. This patch works.
2017-10-26 18:45 GMT+08:00 Bob Chen <a175818...@gmail.com>:
There seem to be some bugs in these patches, causing my VM to fail to boot.
Test case:
0. Merge these 3 patches into release 2.10.1
1. qemu-system-x86_64_2.10.1 ... \
-device vfio-pci,host=04:00.0 \
-device vfio-pci,host=05:00.0 \
-device vfio-pci,host=08:00.0 \
-device vfio-pci,host=09:00.0 \
Alex Williamson <alex.william...@redhat.com>:
> On Wed, 30 Aug 2017 17:41:20 +0800
> Bob Chen <a175818...@gmail.com> wrote:
I think I have observed what you said...
The link speed on the host remained 8GT/s until I finished running
p2pBandwidthLatencyTest for the first time. Then it became 2.5GT/s...
# lspci -s 09:00.0 -vvv
09:00.0 3D controller: NVIDIA Corporation GM204GL [Tesla M60] (rev a1)
Subsystem: NVIDIA
Michael S. Tsirkin <m...@redhat.com>:
> On Tue, Aug 22, 2017 at 10:56:59AM -0600, Alex Williamson wrote:
> > On Tue, 22 Aug 2017 15:04:55 +0800
> > Bob Chen <a175818...@gmail.com> wrote:
Plus:
1 GB hugepages improved neither bandwidth nor latency. Results remained the
same.
2017-08-08 9:44 GMT+08:00 Bob Chen <a175818...@gmail.com>:
> 1. How to test the KVM exit rate?
>
> 2. The switches are separate devices of PLX Technology
>
> # lspci -s 07:08.0 -nn
>
+ TransBlk- ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
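On the first question, one common way to measure the exit rate (a suggestion of mine, not an answer from this thread) is perf's kvm tracepoints on the host:

# Count VM exits across all CPUs for 10 seconds
perf stat -e 'kvm:kvm_exit' -a sleep 10
# Or watch a live breakdown by exit reason
perf kvm stat live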
2017-08-07 23:52 GMT+08:00 Alex Williamson <alex.william...@redhat.com>:
> On Mon, 7 Aug 2017 21:00:04 +0800
> Bob Chen <a175818...@gmail.com> wrote:
>
> > Bad news... The performance had dropped drama
Besides, I checked the lspci -vvv output; no Access Control (ACS)
capabilities are seen.
2017-08-01 23:01 GMT+08:00 Alex Williamson <alex.william...@redhat.com>:
> On Tue, 1 Aug 2017 17:35:40 +0800
> Bob Chen <a175818...@gmail.com> wrote:
>
16.28 16.68 4.03
Is it because the heavy load of CPU emulation caused a bottleneck?
2017-08-01 23:01 GMT+08:00 Alex Williamson <alex.william...@redhat.com>:
> On Tue, 1 Aug 2017 17:35:40 +0800
> Bob Chen <a175818...@gmail.com> wrote:
>
2017-08-01 13:46 GMT+08:00 Alex Williamson <alex.william...@redhat.com>:
> On Tue, 1 Aug 2017 13:04:46 +0800
> Bob Chen <a175818...@gmail.com> wrote:
Hi,
This is a sketch of my hardware topology.
        CPU0  <- QPI ->  CPU1
         |                 |
Root Port (at PCIe.0)   Root Port (at PCIe.1)
      /      \              /      \
  Switch    Switch      Switch    Switch

(without traversing the PCIe Host Bridge)
PIX = Connection traversing a single PCIe switch
NV# = Connection traversing a bonded set of # NVLinks
Hi folks,
I have 8 GPU cards that need to be passed through to 1 VM.
These cards are placed behind 2 PCIe switches on the host server, in case
there might be a bandwidth limit within a single bus.
So what is the correct QEMU bus parameter if I want to achieve the best
performance? The QEMU pcie.0/1 parameter
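One way (a sketch of mine, not from the thread) to mirror the host's switch layout in the guest is QEMU's generic PCIe switch devices on a q35 machine, so that sibling GPUs share an emulated switch instead of all sitting on pcie.0; the host BDFs below are reused from the test case earlier in this digest:

qemu-system-x86_64 -machine q35 ... \
  -device pcie-root-port,id=rp0,chassis=1,slot=0,bus=pcie.0 \
  -device x3130-upstream,id=up0,bus=rp0 \
  -device xio3130-downstream,id=dp0,bus=up0,chassis=2,slot=0 \
  -device xio3130-downstream,id=dp1,bus=up0,chassis=2,slot=1 \
  -device vfio-pci,host=04:00.0,bus=dp0 \
  -device vfio-pci,host=05:00.0,bus=dp1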
Hi folks,
I am about to upgrade my QEMU version from an ancient 1.1.2 to the latest.
My plan is to overwrite all the installed files with the new ones. Since I
used to rename all the qemu-xxx binaries under /usr/local/bin with a
specific version suffix, they are not my concern.
Just
Hi folks,
My time schedule doesn't allow me to wait for the community's solution, so
I started to work on a quick fix, which is to add a 'bdrv_truncate' function
to the current NBD BlockDriver. Basically it's an 'active resize'
implementation.
I also realized that the 'bdrv_truncate' caller
There might be a time window between the NBD server's resize and the
client's `re-read size` request. Is it safe?
What about an active `resize` request from the client? Considering that some
NBD servers might have the capability to do instant resizing (not applying
to LVM or a host block device), of
Hi,
My qemu runs on a 3rd-party distributed block storage, and the disk backend
protocol is nbd.
I noticed that there are differences between a default local qcow2 disk and
my nbd disk, in terms of resizing the disk on the fly.
A local qcow2 disk works whether using qemu-img resize or qemu
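For reference, the two local-disk resize paths presumably being compared here (an assumption from context, since the sentence is cut off) are:

# Offline: grow the image file itself
qemu-img resize guest.qcow2 +10G
# Online: grow the attached drive through the monitor
(qemu) block_resize drive_2 30G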
Hi,
According to the docs, the destination QEMU must have exactly the same
parameters as the source. So if the source has just finished CPU or
memory hotplug, what would the destination's parameters look like?
Does the DIMM device, or logically the QOM object, have to be reflected on
the new command-line
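The usual practice (stated as an assumption here, not quoted from the docs) is yes: anything hotplugged on the source must be cold-plugged on the destination's command line before migration, e.g. for a DIMM added at runtime:

# On the source, hotplugged via the monitor:
(qemu) object_add memory-backend-ram,id=mem1,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1

# The destination must then be started with the equivalent config:
qemu-system-x86_64 ... -m 4G,slots=4,maxmem=32G \
  -object memory-backend-ram,id=mem1,size=1G \
  -device pc-dimm,id=dimm1,memdev=mem1 \
  -incoming tcp:0:4444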
Hi,
-smp 16
-smp cores=4,threads=4,sockets=1
Which one has better performance? The scenario is guest VMs running on a
cloud server.
-Bob
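Both lines give 16 vCPUs; the difference is the topology the guest scheduler sees. On QEMU of that era, plain -smp 16 defaults to 16 single-core sockets, while the second form exposes one socket with 4 cores of 4 threads each (my summary, not from the thread); this can be checked inside the guest:

# Compare the guest-visible topology under each -smp form
lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'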
Test case:
1. QEMU 1.1.2
2. Run fio inside the VM, give it some pressure. Watch the realtime
throughput.
3. block_set_io_throttle drive_2 1 0 0 2000 0 0 # throttle
bps and iops, any value
4. Observe that the IO is very likely to freeze to zero. The fio process
gets stuck!
5. Kill the
Sorry for disturbing by replying; I don't know why I'm not able to send a
new mail.
Hi folks,
Could you enlighten me on how to achieve proportional IO sharing by using
cgroup, instead of qemu's io-throttling?
My qemu config is like: -drive
file=$DISKFILE,if=none,format=qcow2,cache=none,aio=native -device
virtio-blk-pci...
Test command inside the vm is like: dd if=/dev/vdc
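One approach is a host-side weighting with cgroups (a sketch under assumptions: cgroup v1's blkio controller, weights that only take proportional effect under the CFQ scheduler on the backing disk, and example group names and PIDs of mine):

# Give vm2 roughly twice vm1's share of the backing disk
mkdir /sys/fs/cgroup/blkio/vm1 /sys/fs/cgroup/blkio/vm2
echo 300 > /sys/fs/cgroup/blkio/vm1/blkio.weight
echo 600 > /sys/fs/cgroup/blkio/vm2/blkio.weight
echo $QEMU_PID_VM1 > /sys/fs/cgroup/blkio/vm1/tasks
echo $QEMU_PID_VM2 > /sys/fs/cgroup/blkio/vm2/tasks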