On Thu, 1 Apr 2021 10:12:27 -0300
Jason Gunthorpe wrote:
> On Mon, Mar 29, 2021 at 05:10:53PM -0600, Alex Williamson wrote:
> > On Tue, 23 Mar 2021 16:32:13 -0300
> > Jason Gunthorpe wrote:
> >
> > > On Mon, Mar 22, 2021 at 10:40:16AM -0600, Alex Williamson wrote:
> > >
> > > > Of course
On Mon, 29 Mar 2021 17:10:53 -0600
Alex Williamson wrote:
> On Tue, 23 Mar 2021 16:32:13 -0300
> Jason Gunthorpe wrote:
>
> > On Mon, Mar 22, 2021 at 10:40:16AM -0600, Alex Williamson wrote:
> > > So unless you want to do some bitkeeper archaeology, we've always
> > > allowed driver probes to
On Mon, Mar 29, 2021 at 05:10:53PM -0600, Alex Williamson wrote:
> On Tue, 23 Mar 2021 16:32:13 -0300
> Jason Gunthorpe wrote:
>
> > On Mon, Mar 22, 2021 at 10:40:16AM -0600, Alex Williamson wrote:
> >
> > > Of course if you start looking at features like migration support,
> > > that's more
On Tue, 23 Mar 2021 16:32:13 -0300
Jason Gunthorpe wrote:
> On Mon, Mar 22, 2021 at 10:40:16AM -0600, Alex Williamson wrote:
>
> > Of course if you start looking at features like migration support,
> > that's more than likely not simply an additional region with optional
> > information, it
On 24/03/2021 06:32, Jason Gunthorpe wrote:
For the NVIDIA GPU, Max checked internally and we saw it looks very much
like how the Intel GPU works: only some PCI IDs trigger checking for the
feature the firmware thing is linked to.
And as Alexey noted, the table came up incomplete. But also those
On Mon, Mar 22, 2021 at 10:40:16AM -0600, Alex Williamson wrote:
> Of course if you start looking at features like migration support,
> that's more than likely not simply an additional region with optional
> information, it would need to interact with the actual state of the
> device. For those,
On Tue, Mar 23, 2021 at 02:17:09PM +0100, Christoph Hellwig wrote:
> On Mon, Mar 22, 2021 at 01:44:11PM -0300, Jason Gunthorpe wrote:
> > This isn't quite the scenario that needs solving. Let's go back to
> > Max's V1 posting:
> >
> > The mlx5_vfio_pci.c pci_driver matches this:
> >
> > + {
On Mon, Mar 22, 2021 at 01:44:11PM -0300, Jason Gunthorpe wrote:
> This isn't quite the scenario that needs solving. Let's go back to
> Max's V1 posting:
>
> The mlx5_vfio_pci.c pci_driver matches this:
>
> + { PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
> +
On Mon, Mar 22, 2021 at 04:11:25PM +0100, Christoph Hellwig wrote:
> On Fri, Mar 19, 2021 at 05:07:49PM -0300, Jason Gunthorpe wrote:
> > The way the driver core works is to first match against the already
> > loaded driver list, then trigger an event for module loading and when
> > new drivers
On Sun, 21 Mar 2021 09:58:18 -0300
Jason Gunthorpe wrote:
> On Fri, Mar 19, 2021 at 10:40:28PM -0600, Alex Williamson wrote:
>
> > > Well, today we don't, but Max here adds id_table's to the special
> > > devices and a MODULE_DEVICE_TABLE would come too if we do the flavours
> > > thing below.
On Fri, Mar 19, 2021 at 05:07:49PM -0300, Jason Gunthorpe wrote:
> The way the driver core works is to first match against the already
> loaded driver list, then trigger an event for module loading and when
> new drivers are registered they bind to unbound devices.
>
> So, the trouble is the
On Fri, Mar 19, 2021 at 10:40:28PM -0600, Alex Williamson wrote:
> > Well, today we don't, but Max here adds id_table's to the special
> > devices and a MODULE_DEVICE_TABLE would come too if we do the flavours
> > thing below.
>
> I think the id_tables are the wrong approach for IGD and NVLink
>
On Fri, 19 Mar 2021 19:59:43 -0300
Jason Gunthorpe wrote:
> On Fri, Mar 19, 2021 at 03:08:09PM -0600, Alex Williamson wrote:
> > On Fri, 19 Mar 2021 17:07:49 -0300
> > Jason Gunthorpe wrote:
> >
> > > On Fri, Mar 19, 2021 at 11:36:42AM -0600, Alex Williamson wrote:
> > > > On Fri, 19 Mar
On Fri, Mar 19, 2021 at 03:08:09PM -0600, Alex Williamson wrote:
> On Fri, 19 Mar 2021 17:07:49 -0300
> Jason Gunthorpe wrote:
>
> > On Fri, Mar 19, 2021 at 11:36:42AM -0600, Alex Williamson wrote:
> > > On Fri, 19 Mar 2021 17:34:49 +0100
> > > Christoph Hellwig wrote:
> > >
> > > > On Fri,
On Fri, 19 Mar 2021 17:07:49 -0300
Jason Gunthorpe wrote:
> On Fri, Mar 19, 2021 at 11:36:42AM -0600, Alex Williamson wrote:
> > On Fri, 19 Mar 2021 17:34:49 +0100
> > Christoph Hellwig wrote:
> >
> > > On Fri, Mar 19, 2021 at 01:28:48PM -0300, Jason Gunthorpe wrote:
> > > > The wrinkle I
On Fri, Mar 19, 2021 at 11:36:42AM -0600, Alex Williamson wrote:
> On Fri, 19 Mar 2021 17:34:49 +0100
> Christoph Hellwig wrote:
>
> > On Fri, Mar 19, 2021 at 01:28:48PM -0300, Jason Gunthorpe wrote:
> > > The wrinkle I don't yet have an easy answer to is how to load vfio_pci
> > > as a
On Fri, 19 Mar 2021 17:34:49 +0100
Christoph Hellwig wrote:
> On Fri, Mar 19, 2021 at 01:28:48PM -0300, Jason Gunthorpe wrote:
> > The wrinkle I don't yet have an easy answer to is how to load vfio_pci
> > as a universal "default" within the driver core lazy bind scheme and
> > still have
On Fri, Mar 19, 2021 at 01:28:48PM -0300, Jason Gunthorpe wrote:
> The wrinkle I don't yet have an easy answer to is how to load vfio_pci
> as a universal "default" within the driver core lazy bind scheme and
> still have working module autoloading... I'm hoping to get some
> research into this..
On Fri, Mar 19, 2021 at 05:20:33PM +0100, Christoph Hellwig wrote:
> On Fri, Mar 19, 2021 at 01:17:22PM -0300, Jason Gunthorpe wrote:
> > I think we talked about this.. We still need a better way to control
> > binding of VFIO modules - now that we have device-specific modules we
> > must have
On Fri, Mar 19, 2021 at 01:17:22PM -0300, Jason Gunthorpe wrote:
> I think we talked about this.. We still need a better way to control
> binding of VFIO modules - now that we have device-specific modules we
> must have these match tables to control what devices they connect
> to.
>
> Previously
On Fri, Mar 19, 2021 at 09:23:41AM -0600, Alex Williamson wrote:
> On Wed, 10 Mar 2021 14:57:57 +0200
> Max Gurtovoy wrote:
> > On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
> > > On 09/03/2021 19:33, Max Gurtovoy wrote:
> > >> +static const struct pci_device_id nvlink2gpu_vfio_pci_table[]
On Wed, 10 Mar 2021 14:57:57 +0200
Max Gurtovoy wrote:
> On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
> > On 09/03/2021 19:33, Max Gurtovoy wrote:
> >> +static const struct pci_device_id nvlink2gpu_vfio_pci_table[] = {
> >> + { PCI_VDEVICE(NVIDIA, 0x1DB1) }, /* GV100GL-A NVIDIA Tesla
>
On Thu, Mar 11, 2021 at 06:54:09PM +1100, Alexey Kardashevskiy wrote:
> Is there an idea how it is going to work? For example, the Intel IGD driver
> and vfio-pci-igd - how should the system pick one? If there is no
> MODULE_DEVICE_TABLE in vfio-pci-xxx, is the user supposed to try binding all
>
On Thu, Mar 11, 2021 at 11:44:38AM +0200, Max Gurtovoy wrote:
>
> On 3/11/2021 9:54 AM, Alexey Kardashevskiy wrote:
> >
> >
> > On 11/03/2021 13:00, Jason Gunthorpe wrote:
> > > On Thu, Mar 11, 2021 at 12:42:56PM +1100, Alexey Kardashevskiy wrote:
> > > > > > btw can the id list have only
On 3/11/2021 9:54 AM, Alexey Kardashevskiy wrote:
> On 11/03/2021 13:00, Jason Gunthorpe wrote:
> > On Thu, Mar 11, 2021 at 12:42:56PM +1100, Alexey Kardashevskiy wrote:
> > > btw can the id list have only vendor ids and not have device ids?
> > The PCI matcher is quite flexible, see the other patch
On 11/03/2021 13:00, Jason Gunthorpe wrote:
> On Thu, Mar 11, 2021 at 12:42:56PM +1100, Alexey Kardashevskiy wrote:
> > btw can the id list have only vendor ids and not have device ids?
> The PCI matcher is quite flexible, see the other patch from Max for
> the igd
ah cool, do this for NVIDIA
On Thu, Mar 11, 2021 at 12:42:56PM +1100, Alexey Kardashevskiy wrote:
> > > btw can the id list have only vendor ids and not have device ids?
> >
> > The PCI matcher is quite flexible, see the other patch from Max for
> > the igd
>
> ah cool, do this for NVIDIA GPUs then please, I just
On 11/03/2021 12:34, Jason Gunthorpe wrote:
> On Thu, Mar 11, 2021 at 12:20:33PM +1100, Alexey Kardashevskiy wrote:
> It is supposed to match exactly the same match table as the pci_driver
> above. We *don't* want different behavior from what the standard PCI
> driver matcher will do.
This is not
On Thu, Mar 11, 2021 at 12:20:33PM +1100, Alexey Kardashevskiy wrote:
> > It is supposed to match exactly the same match table as the pci_driver
> > above. We *don't* want different behavior from what the standard PCI
> > driver matcher will do.
>
> This is not a standard PCI driver though
It
On 11/03/2021 06:40, Jason Gunthorpe wrote:
> On Thu, Mar 11, 2021 at 01:24:47AM +1100, Alexey Kardashevskiy wrote:
> > > On 11/03/2021 00:02, Jason Gunthorpe wrote:
> > > > On Wed, Mar 10, 2021 at 02:57:57PM +0200, Max Gurtovoy wrote:
> > > > > > + .err_handler = _pci_core_err_handlers,
> > > > > > +};
> > > > > > +
> > > > > > +#ifdef
On 3/10/2021 4:19 PM, Alexey Kardashevskiy wrote:
> On 10/03/2021 23:57, Max Gurtovoy wrote:
> > On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
> > > On 09/03/2021 19:33, Max Gurtovoy wrote:
> > > > The new drivers introduced are nvlink2gpu_vfio_pci.ko and
> > > > npu2_vfio_pci.ko.
> > > > The first will be
On Thu, Mar 11, 2021 at 01:24:47AM +1100, Alexey Kardashevskiy wrote:
>
>
> On 11/03/2021 00:02, Jason Gunthorpe wrote:
> > On Wed, Mar 10, 2021 at 02:57:57PM +0200, Max Gurtovoy wrote:
> >
> > > > > + .err_handler = _pci_core_err_handlers,
> > > > > +};
> > > > > +
> > > > > +#ifdef
On 11/03/2021 00:02, Jason Gunthorpe wrote:
> On Wed, Mar 10, 2021 at 02:57:57PM +0200, Max Gurtovoy wrote:
> > > > + .err_handler = _pci_core_err_handlers,
> > > > +};
> > > > +
> > > > +#ifdef CONFIG_VFIO_PCI_DRIVER_COMPAT
> > > > +struct pci_driver *get_nvlink2gpu_vfio_pci_driver(struct pci_dev *pdev)
> > > > +{
> > > > + if
On 10/03/2021 23:57, Max Gurtovoy wrote:
> On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
> > On 09/03/2021 19:33, Max Gurtovoy wrote:
> > > The new drivers introduced are nvlink2gpu_vfio_pci.ko and
> > > npu2_vfio_pci.ko.
> > > The first will be responsible for providing special extensions for
> > > NVIDIA GPUs
On Wed, Mar 10, 2021 at 02:57:57PM +0200, Max Gurtovoy wrote:
> > > + .err_handler = _pci_core_err_handlers,
> > > +};
> > > +
> > > +#ifdef CONFIG_VFIO_PCI_DRIVER_COMPAT
> > > +struct pci_driver *get_nvlink2gpu_vfio_pci_driver(struct pci_dev *pdev)
> > > +{
> > > + if
On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
> On 09/03/2021 19:33, Max Gurtovoy wrote:
> > The new drivers introduced are nvlink2gpu_vfio_pci.ko and
> > npu2_vfio_pci.ko.
> > The first will be responsible for providing special extensions for
> > NVIDIA GPUs with NVLINK2 support for P9 platform (and
On 09/03/2021 19:33, Max Gurtovoy wrote:
> The new drivers introduced are nvlink2gpu_vfio_pci.ko and
> npu2_vfio_pci.ko.
> The first will be responsible for providing special extensions for
> NVIDIA GPUs with NVLINK2 support for P9 platform (and others in the
> future). The last will be responsible for
The new drivers introduced are nvlink2gpu_vfio_pci.ko and
npu2_vfio_pci.ko.
The first will be responsible for providing special extensions for
NVIDIA GPUs with NVLINK2 support on the P9 platform (and others in the
future). The last will be responsible for the POWER9 NPU2 unit (the
NVLink2 host bus adapter).
38 matches