On 2019-07-22 11:39 a.m., Timur Kristóf wrote:
> >
> > 1. Why is the GTT->VRAM copy so much slower than the VRAM->GTT
> > copy?
> >
> > 2. Why is the bus limited to 24 Gbit/sec? I would expect the
> > Thunderbolt port to give me at least 32 Gbit/sec for PCIe traffic.
>
> That's unrealistic I'm afraid. As I said on IRC, from the GPU POV
>
On Fri, 2019-07-05 at 09:36 -0400, Alex Deucher wrote:
> On Thu, Jul 4, 2019 at 6:55 AM Michel Dänzer
> wrote:
> > On 2019-07-03 1:04 p.m., Timur Kristóf wrote:
> > I took a look at amdgpu_device_get_pcie_info() and found that it
> > uses
> > pcie_bandwidth_available to determine the capabilities of the PCIe
> > port. However, pcie_bandwidth_available gives you only the current
> > bandwidth as set by the PCIe link status register, not the maximum
>
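[Editor's note: pcie_bandwidth_available() walks the link status registers, so it only reports the currently negotiated speed, not the link's capability. The same current-vs-maximum distinction is visible from userspace through the PCI sysfs attributes. A minimal sketch (the helper names are mine; the exact speed-string suffix varies across kernel versions):]

```python
from pathlib import Path

def parse_link_speed(text):
    """Parse a sysfs link-speed string such as '8.0 GT/s PCIe' into a float.

    Only the leading number is used, since the suffix differs between
    kernel versions ('8 GT/s' vs '8.0 GT/s PCIe').
    """
    return float(text.strip().split()[0])

def link_status(pci_addr):
    """Return (current GT/s, max GT/s, current width, max width) for a device.

    pci_addr is a full PCI address such as '0000:3a:00.0'; requires a kernel
    exposing the *_link_speed / *_link_width sysfs attributes.
    """
    dev = Path("/sys/bus/pci/devices") / pci_addr
    return (
        parse_link_speed((dev / "current_link_speed").read_text()),
        parse_link_speed((dev / "max_link_speed").read_text()),
        int((dev / "current_link_width").read_text()),
        int((dev / "max_link_width").read_text()),
    )
```

Comparing the current pair against the max pair shows whether the link trained below its capability, which is exactly the gap the dmesg warning in this thread points at.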
> > Thanks Marek, I didn't know about that option.
> > Tried it, here is the output: https://pastebin.com/raw/9SAAbbAA
> >
> > I'm not quite sure how to interpret the numbers, they are
> > inconsistent
> > with the results from both pcie_bw and amdgpu.benchmark, for
> > example
> > GTT->VRAM at a
On Thu, Jul 4, 2019 at 6:55 AM Michel Dänzer wrote:
>
> On 2019-07-03 1:04 p.m., Timur Kristóf wrote:
> > Can you point me to the place where amdgpu decides the PCIe link
> > speed?
> > I'd like to try to tweak it a little bit to see if that helps at
> > all.
>
> I'm not sure offhand, Alex or anyone?
Thus far, I started by looking at how the pp_dpm_pcie sysfs interface
works, and found
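[Editor's note: the pp_dpm_pcie file (under /sys/class/drm/card0/device/ on amdgpu) lists the PCIe DPM states one per line, with a trailing '*' marking the active state. A small parser for that format, as a sketch; the sample output below is illustrative, and the actual speeds and widths depend on the GPU and platform:]

```python
import re

def parse_pp_dpm_pcie(text):
    """Parse pp_dpm_pcie output into (index, GT/s, lanes, active) tuples.

    Assumed line format: '0: 2.5GT/s, x8', with '*' appended to the
    currently selected state.
    """
    states = []
    for line in text.strip().splitlines():
        m = re.match(r"(\d+):\s*([\d.]+)GT/s,\s*x(\d+)\s*(\*?)", line.strip())
        if m:
            states.append((int(m.group(1)), float(m.group(2)),
                           int(m.group(3)), m.group(4) == "*"))
    return states

# Illustrative sample; on a real system read
# /sys/class/drm/card0/device/pp_dpm_pcie instead.
sample = "0: 2.5GT/s, x8\n1: 8.0GT/s, x16 *"
print(parse_pp_dpm_pcie(sample))  # [(0, 2.5, 8, False), (1, 8.0, 16, True)]
```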
On 2019-07-03 1:04 p.m., Timur Kristóf wrote:
>
>>> There may be other factors, yes. I can't offer a good explanation
>>> on
>>> what exactly is happening, but it's pretty clear that amdgpu can't
>>> take
>>> full advantage of the TB3 link, so it seemed like a good idea to
>>> start
>>>
> > Okay, so I booted my system with amdgpu.benchmark=3
> > You can find the full dmesg log here: https://pastebin.com/zN9FYGw4
> >
> > The result is between 1-5 Gbit / sec depending on the transfer size
> > (the higher the better), which corresponds to neither the 8 Gbit /
> > sec
> > that the
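[Editor's note: the per-size amdgpu.benchmark results can be converted to Gbit/s for comparison with the link-rate figures in this thread. A sketch of the arithmetic; the size and time below are made-up examples, not taken from the log above:]

```python
def throughput_gbit_s(size_bytes, time_ms):
    """Convert one benchmark measurement (bytes moved, elapsed milliseconds)
    into Gbit/s."""
    return size_bytes * 8 / (time_ms / 1000) / 1e9

# e.g. a hypothetical 8 MiB copy taking 20 ms:
print(round(throughput_gbit_s(8 * 1024 * 1024, 20), 2))  # 3.36 Gbit/s
```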
You can run:
AMD_DEBUG=testdmaperf glxgears
It tests transfer sizes of up to 128 MB, and it tests ~60 slightly
different methods of transferring data.
Marek
On Mon, Jul 01, 2019 at 10:46:34AM -0400, Alex Deucher wrote:
> > 2. As far as I understood what Mika said, there isn't really a 2.5 GT/s
> > limitation there, since the virtual link should be running at 40 Gb/s
> > regardless of the reported speed of that device. Would it be possible
> > to run
> > > > Like I said the device really is limited to 2.5 GT/s even
> > > > though it
> > > > should be able to do 8 GT/s.
> > >
> > > There is Thunderbolt link between the host router (your host
> > > system)
> > > and
> > > the eGPU box. That link is not limited to 2.5 GT/s so even if the
> > >
> >
> > That's unfortunate, I would have expected there to be some sort of
> > PCIe
> > speed test utility.
> >
> > Now that I gave it a try, I can measure ~20 Gbit/sec when I run
> > Gnome
> > Wayland on this system (which forces the eGPU to send the
> > framebuffer
> > back and forth all the
On 2019-06-28 2:21 p.m., Timur Kristóf wrote:
>
> I haven't found a good way to measure the maximum PCIe throughput
> between the CPU and GPU,
amdgpu.benchmark=3
on the kernel command line will measure throughput for various transfer
sizes during driver initialization.
> but I did take a look
Hi guys,
I use an AMD RX 570 in a Thunderbolt 3 external GPU box.
dmesg gives me the following message:
pci 0000:3a:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4
link at 0000:04:04.0 (capable of 31.504 Gb/s with 8 GT/s x4 link)
Here is a tree view of the devices as well as
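[Editor's note: the two bandwidth figures in that dmesg line follow directly from the PCIe line encoding: 2.5 GT/s and 5 GT/s links use 8b/10b encoding (20% overhead), while 8 GT/s links use 128b/130b. A minimal sketch of the arithmetic; the function name is mine, not from the kernel:]

```python
def pcie_bandwidth_gbps(gts, lanes):
    """Effective PCIe bandwidth in Gb/s for a given link speed and width.

    Gen1 (2.5 GT/s) and gen2 (5 GT/s) use 8b/10b encoding; gen3 (8 GT/s)
    and later use 128b/130b.
    """
    encoding = 8 / 10 if gts <= 5.0 else 128 / 130
    return gts * lanes * encoding

# 2.5 GT/s x4, as currently negotiated:
print(pcie_bandwidth_gbps(2.5, 4))            # 8.0 Gb/s, matching dmesg
# 8 GT/s x4, what the card is capable of:
print(round(pcie_bandwidth_gbps(8.0, 4), 3))  # ~31.508 Gb/s; the kernel's
# per-lane lookup table rounds slightly differently and prints 31.504
```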
Hi Mika,
Thanks for your quick reply.
> > 1. Why are there four bridge devices? 04:00.0, 04:01.0 and 04:02.0
> > look
> > superfluous to me and nothing is connected to them. It actually
> > gives
> > me the feeling that the TB3 driver creates 4 devices with 2.5 GT/s
> > each, instead of one
> > Sure, though in this case 3 of those downstream ports are not
> > exposed
> > by the hardware, so it's a bit surprising to see them there.
>
> They lead to other peripherals on the TBT host router such as the TBT
> controller and xHCI. Also there are two downstream ports for
> extension
>
On Fri, Jun 28, 2019 at 03:33:56PM +0200, Timur Kristóf wrote:
> I have two more questions:
>
> 1. What is the best way to test that the virtual link is indeed capable
> of 40 Gbit / sec? So far I've been unable to figure out how to measure
> its maximum throughput.
I don't think there is any