On Fri, Oct 11, 2013 at 04:29:39PM -0400, Mark Lord wrote:
static int xx_alloc_msix_irqs(struct xx_dev *dev, int nvec)
{
	nvec = roundup_pow_of_two(nvec);	/* assume 0 nvec = 16 */
	xx_disable_all_irqs(dev);
	pci_lock_msi(dev->pdev);
	rc =
On Thu, Oct 10, 2013 at 07:17:18PM -0400, Mark Lord wrote:
Just to help us all understand the loop issue..
Here's an example of driver code which uses the existing MSI-X interfaces,
for a device which can work with either 16, 8, 4, 2, or 1 MSI-X interrupt.
This is from a new driver I'm
On 13-10-11 04:41 AM, Alexander Gordeev wrote:
On Thu, Oct 10, 2013 at 07:17:18PM -0400, Mark Lord wrote:
Just to help us all understand the loop issue..
Here's an example of driver code which uses the existing MSI-X interfaces,
for a device which can work with either 16, 8, 4, 2, or 1 MSI-X
On Wed, Oct 09, 2013 at 03:24:08PM +1100, Benjamin Herrenschmidt wrote:
On Tue, 2013-10-08 at 20:55 -0700, H. Peter Anvin wrote:
Why not add a minimum number to pci_enable_msix(), i.e.:
pci_enable_msix(pdev, msix_entries, nvec, minvec)
... which means nvec is the number of interrupts
On 10/10/2013 03:17 AM, Alexander Gordeev wrote:
On Wed, Oct 09, 2013 at 03:24:08PM +1100, Benjamin Herrenschmidt wrote:
Ok, this suggestion sounded in one or another form by several people.
What about naming it pcim_enable_msix_range() and wrapping it in a couple
more helpers to complete an API:
On Thu, Oct 10, 2013 at 09:28:27AM -0700, H. Peter Anvin wrote:
On 10/10/2013 03:17 AM, Alexander Gordeev wrote:
On Wed, Oct 09, 2013 at 03:24:08PM +1100, Benjamin Herrenschmidt wrote:
Ok, this suggestion sounded in one or another form by several people.
What about naming it
Just to help us all understand the loop issue..
Here's an example of driver code which uses the existing MSI-X interfaces,
for a device which can work with either 16, 8, 4, 2, or 1 MSI-X interrupt.
This is from a new driver I'm working on right now:
static int xx_alloc_msix_irqs(struct xx_dev
On Mon, Oct 07, 2013 at 02:01:11PM -0400, Tejun Heo wrote:
Hmmm... yeah, the race condition could be an issue, as multiple MSI
allocation might fail even if the driver can and does explicitly handle
multiple allocation, if the quota gets reduced in between.
BTW, should we care about the quota getting
Hello,
On Tue, Oct 08, 2013 at 02:22:16PM +0200, Alexander Gordeev wrote:
If we talk about pSeries quota, then the current pSeries pci_enable_msix()
implementation is racy internally and could fail if the quota went down
*while* pci_enable_msix() is executing. In this case the loop will have
Hello,
On Wed, Oct 09, 2013 at 02:57:16PM +0200, Alexander Gordeev wrote:
On Mon, Oct 07, 2013 at 02:01:11PM -0400, Tejun Heo wrote:
Hmmm... yeah, the race condition could be an issue as multiple msi
allocation might fail even if the driver can and explicitly handle
multiple allocation if
On Mon, Oct 07, 2013 at 09:48:01PM +0100, Ben Hutchings wrote:
There is one major flaw in the min-max approach - the generic MSI layer
will have to take decisions on the exact number of MSIs to request, not
the device drivers.
[...
No, the min-max functions should be implemented using the same loop
Hello,
On Tue, Oct 08, 2013 at 11:07:16AM +0200, Alexander Gordeev wrote:
Multiple MSIs is just a handful of drivers, really. MSI-X impact still
Yes, so it's pretty nice to try out things there before going full-on.
will be huge. But if we opt for a different name for the new pci_enable_msix()
On Tue, Oct 08, 2013 at 03:33:30PM +1100, Michael Ellerman wrote:
On Wed, Oct 02, 2013 at 12:29:04PM +0200, Alexander Gordeev wrote:
This technique proved to be confusing and error-prone. A vast share
of device drivers simply fail to follow the described guidelines.
To clarify: a vast share of
On Mon, Oct 07, 2013 at 02:21:17PM -0400, Tejun Heo wrote:
Whee that's a lot more than I expected. I was just scanning
multiple msi users. Maybe we can stage the work in more manageable
steps so that you don't have to go through massive conversion only to
do it all over again afterwards
On Mon, Oct 07, 2013 at 02:01:11PM -0400, Tejun Heo wrote:
What about introducing pci_lock_msi() and pci_unlock_msi() and letting
device drivers handle their ranges and specifics in a race-safe manner?
I am not proposing to introduce it right now (since it appears pSeries has not
been hitting the
On Tue, Oct 08, 2013 at 09:33:02AM +0200, Alexander Gordeev wrote:
On Tue, Oct 08, 2013 at 03:33:30PM +1100, Michael Ellerman wrote:
On Wed, Oct 02, 2013 at 12:29:04PM +0200, Alexander Gordeev wrote:
This technique proved to be confusing and error-prone. A vast share
of device drivers
On 13-10-02 06:29 AM, Alexander Gordeev wrote:
..
This update converts pci_enable_msix() and pci_enable_msi_block()
interfaces to canonical kernel functions and makes them return an
error code in case of failure or 0 in case of success.
Rather than silently break dozens of drivers in mysterious
On 10/02/2013 03:29 AM, Alexander Gordeev wrote:
As a result, device drivers will cease to use the overcomplicated
repeated fallbacks technique and resort to a straightforward
pattern - determine the number of MSI/MSI-X interrupts required
before calling pci_enable_msi_block() and
On Tue, 2013-10-08 at 20:55 -0700, H. Peter Anvin wrote:
Why not add a minimum number to pci_enable_msix(), i.e.:
pci_enable_msix(pdev, msix_entries, nvec, minvec)
... which means nvec is the number of interrupts *requested*, and
minvec is the minimum acceptable number (otherwise fail).
Hey, guys.
On Sun, Oct 06, 2013 at 09:10:30AM +0200, Alexander Gordeev wrote:
On Sun, Oct 06, 2013 at 05:19:46PM +1100, Benjamin Herrenschmidt wrote:
On Sun, 2013-10-06 at 08:02 +0200, Alexander Gordeev wrote:
In fact, in the current design to address the quota race decently the
drivers
Hello, Alexander.
On Wed, Oct 02, 2013 at 12:48:16PM +0200, Alexander Gordeev wrote:
Alexander Gordeev (77):
PCI/MSI: Fix return value when populate_msi_sysfs() failed
PCI/MSI/PPC: Fix wrong RTAS error code reporting
PCI/MSI/s390: Fix single MSI only check
PCI/MSI/s390: Remove
On Mon, 2013-10-07 at 14:01 -0400, Tejun Heo wrote:
I don't think the same race condition would happen with the loop. The
problem case is where multiple msi(x) allocation fails completely
because the global limit went down before inquiry and allocation. In
the loop based interface, it'd
On Tue, 2013-10-08 at 07:10 +1100, Benjamin Herrenschmidt wrote:
On Mon, 2013-10-07 at 14:01 -0400, Tejun Heo wrote:
I don't think the same race condition would happen with the loop. The
problem case is where multiple msi(x) allocation fails completely
because the global limit went down
On Sun, 2013-10-06 at 09:10 +0200, Alexander Gordeev wrote:
On Sun, Oct 06, 2013 at 05:19:46PM +1100, Benjamin Herrenschmidt wrote:
On Sun, 2013-10-06 at 08:02 +0200, Alexander Gordeev wrote:
In fact, in the current design to address the quota race decently the
drivers would have to
On Wed, Oct 02, 2013 at 12:29:04PM +0200, Alexander Gordeev wrote:
This series is against next branch in Bjorn's repo:
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
Currently pci_enable_msi_block() and pci_enable_msix() interfaces
return an error code in case of failure, 0 in
On Sun, Oct 06, 2013 at 08:46:26AM +1100, Benjamin Herrenschmidt wrote:
On Sat, 2013-10-05 at 16:20 +0200, Alexander Gordeev wrote:
So my point is - drivers should first obtain a number of MSIs they *can*
get, then *derive* a number of MSIs the device is fine with and only then
request that
On Sun, 2013-10-06 at 08:02 +0200, Alexander Gordeev wrote:
On Sun, Oct 06, 2013 at 08:46:26AM +1100, Benjamin Herrenschmidt wrote:
On Sat, 2013-10-05 at 16:20 +0200, Alexander Gordeev wrote:
So my point is - drivers should first obtain a number of MSIs they *can*
get, then *derive* a
On Sun, Oct 06, 2013 at 05:19:46PM +1100, Benjamin Herrenschmidt wrote:
On Sun, 2013-10-06 at 08:02 +0200, Alexander Gordeev wrote:
In fact, in the current design to address the quota race decently the
drivers would have to protect the *loop* to prevent the quota change
between a
On Fri, Oct 04, 2013 at 10:29:16PM +0100, Ben Hutchings wrote:
On Fri, 2013-10-04 at 10:29 +0200, Alexander Gordeev wrote:
All I can see there is that Tejun didn't think that the global limits
and positive return values were implemented by any architecture.
I would say more than just that :) I
On Sat, 2013-10-05 at 16:20 +0200, Alexander Gordeev wrote:
So my point is - drivers should first obtain a number of MSIs they *can*
get, then *derive* a number of MSIs the device is fine with and only then
request that number. Not terribly different from memory or any other type
of resource
On Thu, Oct 03, 2013 at 11:49:45PM +0100, Ben Hutchings wrote:
On Wed, 2013-10-02 at 12:48 +0200, Alexander Gordeev wrote:
This update converts pci_enable_msix() and pci_enable_msi_block()
interfaces to canonical kernel functions and makes them return an
error code in case of failure or 0 in
It seems to me that a more useful interface would take a minimum and
maximum number of vectors from the driver. This wouldn't allow the
driver to specify that it could only accept, say, any even number within
a certain range, but you could still leave the current functions
available for
On Fri, Oct 04, 2013 at 09:31:49AM +0100, David Laight wrote:
Mmmm.. I am not sure I am getting it. Could you please rephrase?
One possibility is for drivers that can use a lot of interrupts to
request a minimum number initially and then request the additional
ones much later on.
That
On Fri, 2013-10-04 at 10:29 +0200, Alexander Gordeev wrote:
On Thu, Oct 03, 2013 at 11:49:45PM +0100, Ben Hutchings wrote:
On Wed, 2013-10-02 at 12:48 +0200, Alexander Gordeev wrote:
This update converts pci_enable_msix() and pci_enable_msi_block()
interfaces to canonical kernel functions
On Wed, 2013-10-02 at 12:48 +0200, Alexander Gordeev wrote:
This series is against next branch in Bjorn's repo:
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
Currently pci_enable_msi_block() and pci_enable_msix() interfaces
return an error code in case of failure, 0 in case of
This series is against next branch in Bjorn's repo:
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
Currently pci_enable_msi_block() and pci_enable_msix() interfaces
return an error code in case of failure, 0 in case of success, and a
positive value which indicates the number of