Hi,
After three months of work and investigation, plus tedious mail discussions with
Nvidia, I think some progress has been made on
GPUDirect (p2p) in a virtualized environment.
The only remaining issue is the low bidirectional bandwidth between
two sibling GPUs under the same PCIe switch.
More updates:
1. This behavior was found not only on the M60, but also on the 1080 Ti and Titan Xp.
2. When not setting up the p2p capability, i.e. running the original QEMU
with the GPUs attached to the PCIe root bus, the LnkSta on the host always remains
at 8 GT/s. I don't know why the new p2p change would cause this.
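For anyone following along, the host-side negotiated speed can be scripted rather than eyeballed. A small sketch; the LnkSta line below is a made-up sample standing in for live lspci -vvv output (on a real host you would pipe lspci -s 09:00.0 -vvv into it):

```shell
# Pull the negotiated link speed out of an "LnkSta:" line from lspci -vvv.
# The sample line here is illustrative, not captured from this system.
sample='LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive+'
speed=$(printf '%s\n' "$sample" | sed -n 's/.*Speed \([0-9.]*GT\/s\).*/\1/p')
echo "$speed"
```

This prints 8GT/s for the sample above; a drop to 2.5GT/s after a p2p run would show up the same way.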
On Wed, 30 Aug 2017 17:41:20 +0800
Bob Chen wrote:
> I think I have observed what you said...
>
> The link speed on host remained 8GT/s until I finished running
> p2pBandwidthLatencyTest
> for the first time. Then it became 2.5GT/s...
>
>
> # lspci -s 09:00.0 -vvv
...
> LnkSta: Speed 8GT/s, Wi
I think I have observed what you said...
The link speed on host remained 8GT/s until I finished running
p2pBandwidthLatencyTest
for the first time. Then it became 2.5GT/s...
# lspci -s 09:00.0 -vvv
09:00.0 3D controller: NVIDIA Corporation GM204GL [Tesla M60] (rev a1)
Subsystem: NVIDIA Corporati
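That downshift alone would explain most of the loss: theoretical per-direction bandwidth scales with link speed and line-code efficiency (8b/10b at 2.5/5 GT/s, 128b/130b at 8 GT/s). A back-of-the-envelope comparison for a x16 link, protocol overhead ignored:

```shell
# GB/s per direction = GT/s per lane * lanes * encoding efficiency / 8 bits.
awk 'BEGIN {
    lanes = 16
    printf "2.5 GT/s (Gen1) x16: %.2f GB/s\n", 2.5 * lanes * 8 / 10 / 8
    printf "8.0 GT/s (Gen3) x16: %.2f GB/s\n", 8.0 * lanes * 128 / 130 / 8
}'
```

So a link stuck at 2.5 GT/s tops out around 4 GB/s per direction versus roughly 15.75 GB/s at 8 GT/s, before any other overhead.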
On Tue, 29 Aug 2017 18:41:44 +0800
Bob Chen wrote:
> The topology already has all GPUs directly attached to root bus 0. In
> this situation you can't see the LnkSta attribute in any of the capabilities.
Right, this is why I suggested viewing the physical device lspci info
from the host. I haven'
The topology already has all GPUs directly attached to root bus 0. In
this situation you can't see the LnkSta attribute in any of the capabilities.
The other approach, using an emulated switch, would show this attribute
at 8 GT/s, although the real bandwidth remains as low as usual.
2017-08-23 2:06 GMT+
On Tue, Aug 22, 2017 at 10:56:59AM -0600, Alex Williamson wrote:
> On Tue, 22 Aug 2017 15:04:55 +0800
> Bob Chen wrote:
>
> > Hi,
> >
> > I got a spec from Nvidia which illustrates how to enable GPU p2p in
> > virtualization environment. (See attached)
>
> Neat, looks like we should implement a
On Tue, 22 Aug 2017 15:04:55 +0800
Bob Chen wrote:
> Hi,
>
> I got a spec from Nvidia which illustrates how to enable GPU p2p in
> virtualization environment. (See attached)
Neat, looks like we should implement a new QEMU vfio-pci option,
something like nvidia-gpudirect-p2p-id=. I don't think
On Mon, Aug 07, 2017 at 09:52:24AM -0600, Alex Williamson wrote:
> I wonder if it has something to do
> with the link speed/width advertised on the switch port. I don't think
> the endpoint can actually downshift the physical link, so lspci on the
> host should probably still show the full bandwidth.
On Tue, 8 Aug 2017 09:44:56 +0800
Bob Chen wrote:
> 1. How to test the KVM exit rate?
You can use tracing: http://www.linux-kvm.org/page/Tracing
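In practice that means recording kvm:kvm_exit events (e.g. with perf record -e 'kvm:kvm_exit' -a, or via ftrace) and then aggregating by exit reason. A sketch of the aggregation step, run here on made-up sample trace lines rather than a real capture:

```shell
# Count KVM exits by reason; field 3 of each trace line is the exit reason.
# The three sample lines are illustrative, not from a real trace.
sample='kvm_exit: reason EPT_MISCONFIG rip 0xffffffffa001
kvm_exit: reason IO_INSTRUCTION rip 0xffffffffa002
kvm_exit: reason EPT_MISCONFIG rip 0xffffffffa003'
printf '%s\n' "$sample" |
    awk '{count[$3]++} END {for (r in count) print count[r], r}' |
    sort -rn
```

A p2p transfer that bounces through the hypervisor should show up as a spike in this count.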
> 2. The switches are separate devices of PLX Technology
>
> # lspci -s 07:08.0 -nn
> 07:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port
> PCI Express Gen 3 (8.0 GT/s) Switch [10b5:8747] (rev ca)
Plus:
1 GB hugepages neither improved bandwidth nor latency. Results remained the
same.
2017-08-08 9:44 GMT+08:00 Bob Chen :
> 1. How to test the KVM exit rate?
>
> 2. The switches are separate devices of PLX Technology
>
> # lspci -s 07:08.0 -nn
> 07:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port
> PCI Express Gen 3 (8.0 GT/s) Switch [10b5:8747] (rev ca)
1. How to test the KVM exit rate?
2. The switches are separate devices of PLX Technology
# lspci -s 07:08.0 -nn
07:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port
PCI Express Gen 3 (8.0 GT/s) Switch [10b5:8747] (rev ca)
# This is one of the Root Ports in the system.
[:0
On Mon, 7 Aug 2017 21:04:16 +0800
Bob Chen wrote:
> Besides, I checked the lspci -vvv output, no capabilities of Access Control
> are seen.
Are these switches onboard an NVIDIA card or are they separate
components? The examples I have on NVIDIA cards do include ACS:
+-02.0-[42-47]00.0-[43-
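Checking for ACS from the host can be scripted the same way as any other capability; the ACS capability line below is a made-up sample standing in for live output of something like lspci -s 07:08.0 -vvv:

```shell
# Count "Access Control Services" capability lines in (sample) lspci output.
# The capability offset and flag values here are illustrative only.
sample='Capabilities: [f24 v1] Access Control Services
	ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+'
printf '%s\n' "$sample" | grep -c 'Access Control Services'
```

A count of 0 against the real PLX ports would confirm ACS is absent (or hidden) on these switches.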
On Mon, 7 Aug 2017 21:00:04 +0800
Bob Chen wrote:
> Bad news... The performance had dropped dramatically when using emulated
> switches.
>
> I was referring to the PCIe doc at
> https://github.com/qemu/qemu/blob/master/docs/pcie.txt
>
> # qemu-system-x86_64_2.6.2 -enable-kvm -cpu host,kvm=off -
Besides, I checked the lspci -vvv output, no capabilities of Access Control
are seen.
2017-08-01 23:01 GMT+08:00 Alex Williamson :
> On Tue, 1 Aug 2017 17:35:40 +0800
> Bob Chen wrote:
>
> > 2017-08-01 13:46 GMT+08:00 Alex Williamson :
> >
> > > On Tue, 1 Aug 2017 13:04:46 +0800
> > > Bob Chen
Bad news... The performance had dropped dramatically when using emulated
switches.
I was referring to the PCIe doc at
https://github.com/qemu/qemu/blob/master/docs/pcie.txt
# qemu-system-x86_64_2.6.2 -enable-kvm -cpu host,kvm=off -machine
q35,accel=kvm -nodefaults -nodefconfig \
-device ioh3420,i
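For reference, the emulated-switch topology that docs/pcie.txt describes would look roughly like the sketch below. The device names are QEMU's (ioh3420 root port, x3130-upstream / xio3130-downstream switch ports), but the IDs, chassis/slot numbers, and host BDFs are placeholders, not the command actually used above:

```shell
# Sketch only: one emulated root port, one emulated switch, two passthrough
# GPUs behind its downstream ports. All IDs and addresses are examples.
qemu-system-x86_64 -enable-kvm -machine q35,accel=kvm -cpu host,kvm=off \
    -device ioh3420,id=root_port1,bus=pcie.0,chassis=1,slot=1 \
    -device x3130-upstream,id=upstream1,bus=root_port1 \
    -device xio3130-downstream,id=downstream1,bus=upstream1,chassis=2,slot=11 \
    -device xio3130-downstream,id=downstream2,bus=upstream1,chassis=2,slot=12 \
    -device vfio-pci,host=09:00.0,bus=downstream1 \
    -device vfio-pci,host=0a:00.0,bus=downstream2 \
    ...
```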
On Tue, 1 Aug 2017 17:35:40 +0800
Bob Chen wrote:
> 2017-08-01 13:46 GMT+08:00 Alex Williamson :
>
> > On Tue, 1 Aug 2017 13:04:46 +0800
> > Bob Chen wrote:
> >
> > > Hi,
> > >
> > > This is a sketch of my hardware topology.
> > >
> > > CPU0 <- QPI ->CPU1
> > >
On Tue, Aug 01, 2017 at 05:35:40PM +0800, Bob Chen wrote:
> How to display GPU's ACS settings? Like this?
>
> [420 v2] Advanced Error Reporting
> UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC-
> UnsupReq- ACSViol-
> UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- Un
2017-08-01 13:46 GMT+08:00 Alex Williamson :
> On Tue, 1 Aug 2017 13:04:46 +0800
> Bob Chen wrote:
>
> > Hi,
> >
> > This is a sketch of my hardware topology.
> >
> > CPU0 <- QPI ->CPU1
> >| |
> > Root Port(at PCIe.0)Ro
On Tue, 1 Aug 2017 13:04:46 +0800
Bob Chen wrote:
> Hi,
>
> This is a sketch of my hardware topology.
>
> CPU0  <- QPI ->  CPU1
>   |                |
> Root Port (at PCIe.0)     Root Port (at PCIe.1)
>    /      \                  /
Hi,
This is a sketch of my hardware topology.
CPU0  <- QPI ->  CPU1
  |                |
Root Port (at PCIe.0)     Root Port (at PCIe.1)
   /      \                  /      \
Switch    Switch         Switch    Switch
  /  \
On Wed, 26 Jul 2017 19:06:58 +0300
"Michael S. Tsirkin" wrote:
> On Wed, Jul 26, 2017 at 09:29:31AM -0600, Alex Williamson wrote:
> > On Wed, 26 Jul 2017 09:21:38 +0300
> > Marcel Apfelbaum wrote:
> >
> > > On 25/07/2017 11:53, 陈博 wrote:
> > > > To accelerate data traversing between devices
On Wed, Jul 26, 2017 at 09:29:31AM -0600, Alex Williamson wrote:
> On Wed, 26 Jul 2017 09:21:38 +0300
> Marcel Apfelbaum wrote:
>
> > On 25/07/2017 11:53, 陈博 wrote:
> > > To accelerate data traversing between devices under the same PCIE Root
> > > Port or Switch.
> > >
> > > See https://lists.n
On Wed, 26 Jul 2017 09:21:38 +0300
Marcel Apfelbaum wrote:
> On 25/07/2017 11:53, 陈博 wrote:
> > To accelerate data traversing between devices under the same PCIE Root
> > Port or Switch.
> >
> > See https://lists.nongnu.org/archive/html/qemu-devel/2017-07/msg07209.html
> >
>
> Hi,
>
> It m
On 25/07/2017 11:53, 陈博 wrote:
To accelerate data transfer between devices under the same PCIe Root
Port or Switch.
See https://lists.nongnu.org/archive/html/qemu-devel/2017-07/msg07209.html
Hi,
It may be possible, but maybe PCIe Switch assignment is not
the only way to go.
Adding Alex a
On Mon, Jul 24, 2017 at 08:05:54AM +0300, Marcel Apfelbaum wrote:
+ Anthony
> On 24/07/2017 4:47, Zhong Yang wrote:
> >Hello all,
> >
>
> Hi,
>
> >When we did virtio device hotplug in Q35 platform, which always failed in
> >hotplug.
> >
>
> Can we please see the QEMU command line and the desc
On 24/07/2017 4:47, Zhong Yang wrote:
Hello all,
Hi,
When we did virtio device hotplug on the Q35 platform, it always failed.
Can we please see the QEMU command line and the description
of the hotplug steps?
Would you please tell me how to configure the VM to make virtio device hotplug
work on the Q35 platform?
Hello all,
When we did virtio device hotplug on the Q35 platform, it always failed.
Would you please tell me how to configure the VM to make virtio device hotplug
work on the Q35 platform? Many thanks!
Regards,
Yang zhong
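In case it helps while the command line is being collected: on Q35, hot-plug generally needs a PCIe root port to plug into, so a minimal setup reserves an empty one at boot. The IDs and the choice of virtio-net-pci below are examples, not a confirmed fix:

```shell
# Sketch: boot with a spare root port (ioh3420), then hot-add into it.
qemu-system-x86_64 -machine q35 -enable-kvm -m 2G \
    -device ioh3420,id=rp1,bus=pcie.0,chassis=1,slot=1 \
    ...
# Later, from the QEMU monitor:
#   (qemu) device_add virtio-net-pci,id=nic1,bus=rp1
```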