On Fri, May 25, 2007 at 01:16:57PM -0700, Jonathan Lundell wrote:
> On May 24, 2007, at 10:51 PM, Andi Kleen wrote:
>
> >>Do we have a feel for how much performance we're losing on those
> >>systems which _could_ do MSI, but which will end up defaulting
> >>to not using it?
> >
> >At least on 10GB ethernet it is a significant difference; you usually
> >cannot go anywhere near line speed without MSI.
> >
> >I suspect it is visible on
On Fri, May 25, 2007 at 01:35:01PM -0700, Greg KH wrote:
...
> And again, over time, like years, this list is going to grow way beyond
> a manageable thing, especially as any new chipset that comes out in 2009
> is going to have working MSI, right?  I think our blacklist is easier to
> manage over
Eric W. Biederman wrote:
> @@ -1677,43 +1650,16 @@ static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
> 	return 0;
> }
>
> -/* Check the hypertransport MSI mapping to know whether MSI is enabled or not */
> +/* Enable MSI on hypertransport chipsets supporting MSI */
> static void
On Fri, May 25, 2007 at 03:06:22PM -0600, Eric W. Biederman wrote:
> Greg KH <[EMAIL PROTECTED]> writes:
> > It's a trade off, and I'd like to choose the one that, over the long
> > term, causes the least amount of work and maintenance.  I think the
> > current blacklist meets that goal.
>
> A reasonable
> I think for most of Intel I can reduce my test to:
> 	if (bus == 0 && device == 0 && function == 0 && vendor == Intel &&
> 	    has a PCI Express capability) {
> 		enable_msi_on_all_busses();
> 	}
MSI was working on every Intel PCI-X chipset I ever saw too...
- R.
Greg KH <[EMAIL PROTECTED]> writes:
>> MSI appears to have enough problems that enabling it in a kernel
>> that is supposed to run lots of different hardware (like a distro
>> kernel) is a recipe for disaster.
>
> Oh, I agree it's a major pain in the ass at times...
>
> But I'm real hesitant to change things
> > In addition to PCI INTx compatible interrupt emulation, PCI Express
> > requires support of MSI or MSI-X or both.
> Which suggests that INTx support is required.
>
> I do not find any wording that suggests the opposite.
> I do see it stated that it is intended to EOL support for INTx at some
On Fri, May 25, 2007 at 09:17:35AM -0600, Eric W. Biederman wrote:
> Greg KH <[EMAIL PROTECTED]> writes:
>
> > Originally I would have thought this would be a good idea, but now that
> > Vista is out, which supports MSI, I don't think we are going to need
> > this in the future.  All new chipsets should support MSI fine and this
> > table will only grow in the future, while the
On May 24, 2007, at 10:51 PM, Andi Kleen wrote:
Do we have a feel for how much performace we're losing on those
systems which _could_ do MSI, but which will end up defaulting
to not using it?
At least on 10GB ethernet it is a significant difference; you usually
cannot go anywhere near line
> Hmm...
> I find in section 6.1:
> > In addition to PCI INTx compatible interrupt emulation, PCI Express
> > requires support of MSI or MSI-X or both.
> Which suggests that INTx support is required.
Unfortunately, this can be equally well read to suggest that MSI/MSI-X is
not required, but that
Roland Dreier <[EMAIL PROTECTED]> writes:
> > - In spec hardware does not require MSI to generate interrupts
> > Which leaves enabling MSI optional.
>
> Actually at least the Qlogic/Pathscale PCI Express ipath adapters
> cannot generate INTx interrupts -- they definitely do require MSI to
> operate.

Oh yeah... when I first found out
On 05/25/2007 11:17 AM, Eric W. Biederman wrote:
>
> MSI appears to have enough problems that enabling it in a kernel
> that is supposed to run lots of different hardware (like a distro
> kernel) is a recipe for disaster.

Ubuntu and Fedora have disabled it and added a "pci=msi" option
to enable it if
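As a practical aside (not from the thread itself): whether MSI is actually in use on a running system can be checked from userspace, and the mainline counterpart of the distro switch is pci=nomsi, which disables MSI globally. Output strings vary by kernel and lspci version, so treat this as a diagnostic sketch:

```shell
# Interrupt lines backed by MSI are labelled "PCI-MSI" in /proc/interrupts:
grep -i msi /proc/interrupts

# Per-device view: lspci reports the MSI capability and whether it is
# enabled ("Enable+" vs "Enable-"); run as root so the capability
# lists are readable:
lspci -vv | grep -i 'message signal'

# To turn MSI off globally on a mainline kernel, boot with:
#   pci=nomsi
```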
From: Michael Ellerman <[EMAIL PROTECTED]>
Date: Fri, 25 May 2007 15:14:10 +1000

> On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
> > Currently we blacklist known bad msi configurations which means we
> > keep getting MSI enabled on chipsets that either do not support MSI,
> > or MSI is implemented improperly.  Since the normal IRQ routing
> > mechanism seems to work even when
Michael Ellerman <[EMAIL PROTECTED]> writes:
> On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly. Since the
Michael Ellerman [EMAIL PROTECTED] writes:
On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
Currently we blacklist known bad msi configurations which means we
keep getting MSI enabled on chipsets that either do not support MSI,
or MSI is implemented improperly. Since the normal
From: Michael Ellerman [EMAIL PROTECTED]
Date: Fri, 25 May 2007 15:14:10 +1000
On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
Currently we blacklist known bad msi configurations which means we
keep getting MSI enabled on chipsets that either do not support MSI,
or MSI is
Greg KH [EMAIL PROTECTED] writes:
Originally I would have thought this would be a good idea, but now that
Vista is out, which supports MSI, I don't think we are going to need
this in the future. All new chipsets should support MSI fine and this
table will only grow in the future, while the
On 05/25/2007 11:17 AM, Eric W. Biederman wrote:
MSI appears to have enough problems that enabling it in a kernel
that is supposed to run lots of different hardware (like a distro
kernel) is a recipe for disaster.
Ubuntu and Fedora have disabled it and added a pci=msi option
to enable it if
- In spec hardware does not require MSI to generate interrupts
Which leaves enabling MSI optional.
Actually at least the Qlogic/Pathscale PCI Express ipath adapters
cannot generate INTx interrupts -- they definitely do require MSI to
operate.
- R.
-
To unsubscribe from this list: send the
- In spec hardware does not require MSI to generate interrupts
Which leaves enabling MSI optional.
Actually at least the Qlogic/Pathscale PCI Express ipath adapters
cannot generate INTx interrupts -- they definitely do require MSI to
operate.
Oh yeah... when I first found out
Roland Dreier [EMAIL PROTECTED] writes:
- In spec hardware does not require MSI to generate interrupts
Which leaves enabling MSI optional.
Actually at least the Qlogic/Pathscale PCI Express ipath adapters
cannot generate INTx interrupts -- they definitely do require MSI to
Hmm...
I find in section 6.1:
In addition to PCI INTx compatible interrupt emulation, PCI Express
requires support of MSI or MSI-X or both.
Which suggests that INTx support is required.
Unfortunately, this can be equally well read to suggest that MSI/MSI-X is
not required, but that
On May 24, 2007, at 10:51 PM, Andi Kleen wrote:
Do we have a feel for how much performace we're losing on those
systems which _could_ do MSI, but which will end up defaulting
to not using it?
At least on 10GB ethernet it is a significant difference; you usually
cannot go anywhere near line
On Fri, May 25, 2007 at 09:17:35AM -0600, Eric W. Biederman wrote:
Greg KH [EMAIL PROTECTED] writes:
Originally I would have thought this would be a good idea, but now that
Vista is out, which supports MSI, I don't think we are going to need
this in the future. All new chipsets should
In addition to PCI INTx compatible interrupt emulation, PCI Express
requires support of MSI or MSI-X or both.
Which suggests that INTx support is required.
I do not find any wording that suggest the opposite.
I do see it stated that it is intended to EOL support for INTx at
some
Greg KH [EMAIL PROTECTED] writes:
MSI appears to have enough problems that enabling it in a kernel
that is supposed to run lots of different hardware (like a distro
kernel) is a recipe for disaster.
Oh, I agree it's a major pain in the ass at times...
But I'm real hesitant to change things
I think for most of Intel I can reduce my test to:
If (bus == 0 , device == 0, function == 0 vendor == Intel
has a pci express capability) {
Enable msi on all busses().
}
MSI was working on every Intel PCI-X chipset I ever saw too...
- R.
-
To unsubscribe from this list:
On Fri, May 25, 2007 at 03:06:22PM -0600, Eric W. Biederman wrote:
Greg KH [EMAIL PROTECTED] writes:
It's a trade off, and I'd like to choose the one that over the long
term, causes the least ammount of work and maintaiblity. I think the
current blacklist meets that goal.
A reasonable
Greg KH <[EMAIL PROTECTED]> writes:
> On Thu, May 24, 2007 at 10:19:09PM -0600, Eric W. Biederman wrote:
>>
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly.  Since the normal IRQ routing
>> mechanism seems to work even when
On Thu, May 24, 2007 at 09:31:57PM -0700, Andrew Morton wrote:
> On Thu, 24 May 2007 22:19:09 -0600 [EMAIL PROTECTED] (Eric W. Biederman)
> wrote:
>
> > Currently we blacklist known bad msi configurations which means we
> > keep getting MSI enabled on chipsets that either do not support MSI,
> >
Andrew Morton <[EMAIL PROTECTED]> writes:
> On Thu, 24 May 2007 22:19:09 -0600 [EMAIL PROTECTED] (Eric W. Biederman)
> wrote:
>
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented