On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily
because of high lock congestion for high-performance nvm devices. To
remove the congestion within the traditional block layer, a multi-queue
block layer is being implemented.
This
On Fri, 11 Oct 2013, Matias Bjorling wrote:
The doorbell code is repeated various places. Refactor it into its own function
for clarity.
Signed-off-by: Matias Bjorling m...@bjorling.me
Looks good to me.
Reviewed-by: Keith Busch keith.bu...@intel.com
---
drivers/block/nvme-core.c | 29
On Fri, 28 Feb 2014, Kent Overstreet wrote:
On Thu, Feb 27, 2014 at 12:22:54PM -0500, Matthew Wilcox wrote:
On Wed, Feb 26, 2014 at 03:39:49PM -0800, Kent Overstreet wrote:
We do this by adding calls to blk_queue_split() to the various
make_request functions that need it - a few can already
Looks good to me. This won't apply in linux-nvme yet and it may be a
little while before it does, so this might be considered to go upstream
through a different tree if you want this in sooner.
On Tue, 4 Mar 2014, Paul Bolle wrote:
Building nvme-core.o on 32 bit x86 triggers a rather impressive
-by: Alexander Gordeev agord...@redhat.com
Reviewed-by: Keith Busch keith.bu...@intel.com
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
On Thu, 20 Feb 2014, Paul Bolle wrote:
On Tue, 2014-02-18 at 10:02 +0100, Geert Uytterhoeven wrote:
And these popped up in v3.14-rc1 on 32 bit x86. This patch makes these
warnings go away. Compile tested only (on 32 and 64 bit x86).
Review is appreciated, because the code I'm touching here
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily
because of high lock congestion for high-performance nvm devices. To
remove the congestion
On Tue, 22 Oct 2013, Matias Bjorling wrote:
Den 22-10-2013 18:55, Keith Busch skrev:
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily
On Tue, 8 Oct 2013, Matias Bjørling wrote:
Convert the driver to blk mq.
The patch consists of:
* Initialization of mq data structures.
* Convert function calls from bio to request data structures.
* IO queues are split into an admin queue and io queues.
* bio splits are removed as it should be
On Tue, 8 Oct 2013, Jens Axboe wrote:
On Tue, Oct 08 2013, Matthew Wilcox wrote:
On Tue, Oct 08, 2013 at 11:34:20AM +0200, Matias Bjørling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily because
of high lock congestion for high-performance nvm devices. To remove
indefinitely.
Signed-off-by: Keith Busch keith.bu...@intel.com
---
drivers/base/core.c|4
include/linux/device.h |1 +
2 files changed, 5 insertions(+)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 20da3ad..71b83bb 100644
--- a/drivers/base/core.c
+++ b/drivers/base
irq if performing the shutdown asynchronously.
Signed-off-by: Keith Busch keith.bu...@intel.com
---
drivers/block/nvme-core.c | 28 ++--
include/linux/nvme.h |1 +
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nvme-core.c b/drivers
to the nvm-express driver
here so there's at least one user, assuming this is acceptable.
Keith Busch (2):
driver-core: allow asynchronous device shutdown
NVMe: Complete shutdown asynchronously
drivers/base/core.c |4
drivers/block/nvme-core.c | 28
On Tue, 10 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'd like to run xfstests on this, but it is failing mkfs.xfs. I honestly
don't know much about this area, but I think this may be from the recent
chunk sectors patch causing a
On Tue, 10 Jun 2014, Jens Axboe wrote:
On Jun 10, 2014, at 9:52 AM, Keith Busch keith.bu...@intel.com wrote:
On Tue, 10 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'd like to run xfstests on this, but it is failing mkfs.xfs. I honestly
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 01:29 PM, Keith Busch wrote:
I have two devices, one formatted 4k, the other 512. The 4k is used as
the TEST_DEV and 512 is used as SCRATCH_DEV. I'm always hitting a BUG when
unmounting the scratch dev in xfstests generic/068. The bug looks
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 03:10 PM, Keith Busch wrote:
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 01:29 PM, Keith Busch wrote:
I have two devices, one formatted 4k, the other 512. The 4k is used as
the TEST_DEV and 512 is used as SCRATCH_DEV. I'm always
On Wed, 11 Jun 2014, Matias Bjørling wrote:
I've rebased nvmemq_review and added two patches from Jens that add
support for requests with single range virtual addresses.
Keith, will you take it for a spin and see if it fixes 068 for you?
There might still be a problem with some flushes, I'm
On Thu, 12 Jun 2014, Matias Bjørling wrote:
On 06/12/2014 12:51 AM, Keith Busch wrote:
So far so good: it passed the test that was previously failing. I'll
let the remaining xfstests run and see what happens.
Great.
The flushes were a fluke. I haven't been able to reproduce them.
Cool, most
On Thu, 12 Jun 2014, Keith Busch wrote:
On Thu, 12 Jun 2014, Matias Bjørling wrote:
On 06/12/2014 12:51 AM, Keith Busch wrote:
So far so good: it passed the test that was previously failing. I'll
let the remaining xfstests run and see what happens.
Great.
The flushes were a fluke. I haven't
On Fri, 13 Jun 2014, Jens Axboe wrote:
On 06/12/2014 06:06 PM, Keith Busch wrote:
When cancelling IOs, we have to check if the hwctx has a valid tags
for some reason. I have 32 cores in my system and as many queues, but
It's because unused queues are torn down, to save memory.
blk-mq
On Fri, 13 Jun 2014, Jens Axboe wrote:
On 06/13/2014 09:05 AM, Keith Busch wrote:
Here are the performance drops observed with blk-mq with the existing
driver as baseline:
CPU : Drop
....:......
  0 : -6%
  8 : -36%
 16 : -12%
We need the hints back for sure, I'll run some of the same
On Fri, 13 Jun 2014, Jens Axboe wrote:
OK, same setup as mine. The affinity hint is really screwing us over, no
question about it. We just need a:
irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector, hctx->cpumask);
in the ->init_hctx() methods to fix that up.
That brings us to roughly the
On Wed, 28 May 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I am concerned about device hot removal since the h/w queues can be
freed at any time. I *think* blk-mq helps with this in that the driver
will not see a new request after calling
On Thu, 29 May 2014, Jens Axboe wrote:
On 2014-05-28 21:07, Keith Busch wrote:
Barring any bugs in the code, then yes, this should work. On the scsi-mq
side, extensive error injection and pulling has been done, and it seems to
hold up fine now. The ioctl path would need to be audited.
It's
On Thu, 29 May 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'm pretty darn sure this new nvme_remove can cause a process
with an open reference to use queues after they're freed in the
nvme_submit_sync_command path, maybe even the admin tags
On Fri, 13 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
static void nvme_reset_notify(struct pci_dev *pdev, bool prepare)
{
- struct nvme_dev *dev = pci_get_drvdata(pdev);
+ struct nvme_dev *dev = pci_get_drvdata(pdev);
-
On Mon, 2 Jun 2014, Matias Bjørling wrote:
Hi Matthew and Keith,
Here is an updated patch with the feedback from the previous days. It's against
Jens' for-3.16/core tree. You may use the nvmemq_wip_review branch at:
I'm testing this on my normal hardware now. As I feared, hot removal
doesn't
0f ab
Jun 2 16:45:40 kbgrz1 kernel: [ 265.760706] RSP 8804283e7d80
Jun 2 16:45:40 kbgrz1 kernel: [ 265.764705] CR2:
Jun 2 16:45:40 kbgrz1 kernel: [ 265.768531] ---[ end trace 785048a51785f51e
]---
On Mon, 2 Jun 2014, Keith Busch wrote:
On Mon, 2 Jun 2014, Matias
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
Still fails as before:
[ 88.933881] BUG: unable to handle kernel NULL pointer dereference at
0014
[ 88.942900] IP: [811c51b8] blk_mq_map_queue+0xf/0x1e
[
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
BTW, if you want to test this out yourself, it's pretty simple to
recreate. I just run a simple user admin program sending nvme passthrough
commands in a tight loop, then run:
# echo
On Wed, 4 Jun 2014, Matias Bjørling wrote:
On 06/04/2014 12:27 AM, Keith Busch wrote:
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
BTW, if you want to test this out yourself, it's pretty simple to
recreate. I just run a simple
On Wed, 4 Jun 2014, Jens Axboe wrote:
On 06/04/2014 12:28 PM, Keith Busch wrote:
Are you testing against 3.13? You really need the current tree for this,
otherwise I'm sure you'll run into issues (as you appear to be :-)
I'm using Matias' current tree:
git://github.com/MatiasBjorling/linux
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
This update fixes an oddity where a device is first added
to dev_list and then removed from it in case of initialization
failure, instead of just being added in case of success.
Signed-off-by: Alexander Gordeev agord...@redhat.com
---
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
This is an attempt to keep handling of the admin queue in a
single scope. This update also fixes an IRQ leak in case
nvme_setup_io_queues() failed to allocate enough iomem
and bailed out with -ENOMEM errno.
This definitely seems to improve the code
On Tue, 21 Jan 2014, Alexander Gordeev wrote:
This is an attempt to keep handling of the admin queue in a
single scope. This update also fixes an IRQ leak in case
nvme_setup_io_queues() failed to allocate enough iomem
and bailed out with -ENOMEM errno.
Signed-off-by: Alexander Gordeev
On Fri, 17 Jan 2014, Bjorn Helgaas wrote:
On Fri, Jan 17, 2014 at 9:02 AM, Alexander Gordeev agord...@redhat.com wrote:
In case MSI-X and MSI initialization failed the function
irq_set_affinity_hint() is called with uninitialized value
in dev->entry[0].vector. This update fixes the issue.
On Thu, 10 Jul 2014, Bjorn Helgaas wrote:
[+cc LKML, Greg KH for driver core async shutdown question]
On Tue, Jun 24, 2014 at 10:48:57AM -0600, Keith Busch wrote:
To provide context why I want to do this asynchronously, NVM-Express has
one PCI device per controller, of which there could
On Tue, 24 Jun 2014, Matias Bjorling wrote:
Den 16-06-2014 17:57, Keith Busch skrev:
This latest is otherwise stable on my dev machine.
May I add an Acked-by from you?
Totally up to Willy, but my feeling is not just yet. I see the value
this driver provides, but I would need to give
On Tue, 24 Jun 2014, Matias Bjørling wrote:
On Tue, Jun 24, 2014 at 10:33 PM, Keith Busch keith.bu...@intel.com wrote:
On Tue, 24 Jun 2014, Matias Bjorling wrote:
Den 16-06-2014 17:57, Keith Busch skrev:
This latest is otherwise stable on my dev machine.
May I add an Acked-by from you
Signed-off-by: Keith Busch keith.bu...@intel.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: x...@kernel.org
---
kernel/irq/irqdesc.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 7339e42..1487a12 100644
--- a/kernel
On Mon, 30 Jun 2014, David Rientjes wrote:
On Mon, 30 Jun 2014, Keith Busch wrote:
Signed-off-by: Keith Busch keith.bu...@intel.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: x...@kernel.org
Acked-by: David Rientjes rient...@google.com
This is definitely a fix for genirq: Provide generic
irq_free_hwirqs() always calls irq_free_descs() with cnt == 0,
which makes it a no-op, because the function's own loop has
already decremented the count to zero before that call.
Fixes: 7b6ef1262549f6afc5c881aaef80beb8fd15f908
Signed-off-by: Keith Busch keith.bu...@intel.com
Cc: Thomas Gleixner t...@linutronix.de
Acked
-by: Keith Busch keith.bu...@intel.com
---
This was briefly discussed here:
http://lists.infradead.org/pipermail/linux-nvme/2014-August/001120.html
This patch goes one step further and fixes the same problem for partitions
and disks.
block/genhd.c | 18 +-
block/partition
-by: Keith Busch keith.bu...@intel.com
---
v1-v2:
Applied comments from Willy: fixed gfp mask in idr_alloc to not wait,
and preload.
block/genhd.c | 24 ++--
block/partition-generic.c |2 +-
2 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/block
On Thu, 21 Aug 2014, Matias Bjørling wrote:
On 08/19/2014 12:49 AM, Keith Busch wrote:
I see the driver's queue suspend logic is removed, but I didn't mean to
imply it was safe to do so without replacing it with something else. I
thought maybe we could use the blk_stop/start_queue() functions
inode so that two different disks that have
a major/minor collision can coexist.
Signed-off-by: Keith Busch keith.bu...@intel.com
---
Maybe this is terrible idea!?
This came from proposals to the nvme driver that remove the dynamic
partitioning that was recently added, and I wanted to know why
On Fri, 22 Aug 2014, Christoph Hellwig wrote:
On Fri, Aug 22, 2014 at 10:28:16AM -0600, Keith Busch wrote:
When using the GENHD_FL_EXT_DEVT disk flags, a newly added device may
be assigned the same major/minor as one that was previously removed but
opened, and the pesky userspace refuses
On Fri, 22 Aug 2014, Keith Busch wrote:
On Fri, 22 Aug 2014, Christoph Hellwig wrote:
On Fri, Aug 22, 2014 at 10:28:16AM -0600, Keith Busch wrote:
When using the GENHD_FL_EXT_DEVT disk flags, a newly added device may
be assigned the same major/minor as one that was previously removed
On Sun, 10 Aug 2014, Matias Bjørling wrote:
On Sat, Jul 26, 2014 at 11:07 AM, Matias Bjørling m...@bjorling.me wrote:
This converts the NVMe driver to a blk-mq request-based driver.
Willy, do you need me to make any changes to the conversion? Can you
pick it up for 3.17?
Hi Matias,
I'm
On Thu, 14 Aug 2014, Jens Axboe wrote:
On 08/14/2014 02:25 AM, Matias Bjørling wrote:
The result is set to BLK_MQ_RQ_QUEUE_ERROR, or am I mistaken?
Looks OK to me, looking at the code, 'result' is initialized to
BLK_MQ_RQ_QUEUE_BUSY though. Which looks correct, we don't want to error
on a
On Thu, 14 Aug 2014, Matias Bjorling wrote:
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with slab debugging? Matias, might be worth trying.
The queue's tags were freed in
On Thu, 14 Aug 2014, Jens Axboe wrote:
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with slab debugging? Matias, might be worth trying.
The allocation and freeing of blk-mq parts seems a bit
On Fri, 15 Aug 2014, Matias Bjørling wrote:
* NVMe queues are merged with the tags structure of blk-mq.
I see the driver's queue suspend logic is removed, but I didn't mean to
imply it was safe to do so without replacing it with something else. I
thought maybe we could use the
On Wed, 8 Oct 2014, Matias Bjørling wrote:
NVMe devices are identified by the vendor specific bits:
Bit 3 in OACS (device-wide). Currently made per device, as the nvme
namespace is missing in the completion path.
The NVM-Express 1.2 actually defined this bit for Namespace Management,
so I
On Tue, 30 Sep 2014, Matias Bjørling wrote:
@@ -1967,27 +1801,30 @@ static struct nvme_ns *nvme_alloc_ns(struct nvme_dev
*dev, unsigned nsid,
{
...
- ns->queue->queue_flags = QUEUE_FLAG_DEFAULT;
+ queue_flag_set_unlocked(QUEUE_FLAG_DEFAULT, ns->queue);
Instead of the above, you
starts minors at the last defined misc
minor (255) and works up to the max possible.
Signed-off-by: Keith Busch keith.bu...@intel.com
---
drivers/char/misc.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/drivers/char/misc.c b/drivers/char/misc.c
index
On Tue, 9 Dec 2014, Arnd Bergmann wrote:
On Monday 08 December 2014 16:01:50 Keith Busch wrote:
This increases the number of available miscellaneous character device
dynamic minors from 63 to the max minor, 1M.
Dynamic minors previously started at 63 and went down to zero. That's not
enough
On Wed, 21 Jan 2015, Yan Liu wrote:
For IO passthrough command, it uses an IO queue associated with the device.
Actually, this patch does not modify that part.
This patch is not really focused on io queues; instead, it is more about
namespace protection from other namespaces' user IOs. The
On Fri, 23 Jan 2015, Christoph Hellwig wrote:
On Fri, Jan 23, 2015 at 04:22:02PM +, Keith Busch wrote:
The namespace id should be enforced on block devices, but is there a
problem allowing arbitrary commands through the management char device?
I have a need for a pure passthrough
On Sun, 25 Jan 2015, Christoph Hellwig wrote:
On Fri, Jan 23, 2015 at 03:57:06PM -0800, Yan Liu wrote:
When a passthrough IO command is issued with a specific block device file
descriptor, it should be applied to
the namespace which is associated with that block device file descriptor. This
On Wed, 21 Jan 2015, Yan Liu wrote:
When a passthrough IO command is issued with a specific block device file
descriptor, it should be applied to
the namespace which is associated with that block device file descriptor. This
patch makes such a passthrough
command ignore the nsid in nvme_passthru_cmd
On Thu, 22 Jan 2015, Christoph Hellwig wrote:
On Thu, Jan 22, 2015 at 12:47:24AM +, Keith Busch wrote:
The IOCTL's purpose was to let someone submit completely arbitrary
commands on IO queues. This technically shouldn't even need a namespace
handle, but we don't have a request_queue
On Thu, 22 Jan 2015, Christoph Hellwig wrote:
On Thu, Jan 22, 2015 at 03:21:28PM +, Keith Busch wrote:
But if you really need to restrict namespace access, shouldn't that be
enforced on the target side with reservations or similar mechanism?
Think for example about containers where we
On Mon, 9 Feb 2015, Mike Snitzer wrote:
On Mon, Feb 09 2015 at 11:38am -0500,
Dongsu Park dongsu.p...@profitbricks.com wrote:
So that commit 6d6285c45f5a should be either reverted, or moved to
linux-dm tree, doesn't it?
Cheers,
Dongsu
[1]
-by: Keith Busch keith.bu...@intel.com
On Sun, 22 Mar 2015, Steven Noonan wrote:
This happens on boot, and then eventually results in an RCU stall.
[8.047533] nvme :05:00.0: Device not ready; aborting initialisation
Note that the above is expected with this hardware (long story).
Although 3.19.x prints the above and then
On Mon, 23 Feb 2015, Arnd Bergmann wrote:
A patch that was added to 4.0-rc1 in the last minute caused a
build break in the NVMe driver unless integrity support is
also enabled:
drivers/block/nvme-core.c: In function 'nvme_dif_remap':
drivers/block/nvme-core.c:523:24: error: dereferencing
On Thu, 22 Jan 2015, Christoph Hellwig wrote:
On Thu, Jan 22, 2015 at 04:02:08PM -0800, Yan Liu wrote:
When a passthrough IO command is issued with a specific block device file
descriptor, it should be applied to
the namespace which is associated with that block device file descriptor. This
On Tue, 28 Apr 2015, Christoph Hellwig wrote:
This seems to lack support for QUEUE_FLAG_SG_GAPS to work around
the retarded PRP format in the NVMe driver.
Mighty strong words, sir! I'm missing the context here, but I'll say PRP
is much more efficient for h/w to process over SGL, and the
On Tue, 12 May 2015, Nicholas Krause wrote:
This changes the function nvme_alloc_queue to use the kernel error
code -ENOMEM when failing to allocate the memory required for the
nvme_queue structure pointer, nvme, in order to correctly return
to the caller the reason for this function's
On Wed, 13 May 2015, Matthew Wilcox wrote:
On Wed, May 13, 2015 at 12:21:18PM -0400, Nicholas Krause wrote:
This removes the include statement for the header file
linux/mm.h in the file nvme-core.c, due to this driver file never
calling any functions from the header file linux/mm.h
On Wed, 15 Apr 2015, Matias Bjørling wrote:
@@ -2316,7 +2686,9 @@ static int nvme_dev_add(struct nvme_dev *dev)
struct nvme_id_ctrl *ctrl;
void *mem;
dma_addr_t dma_addr;
- int shift = NVME_CAP_MPSMIN(readq(dev->bar->cap)) + 12;
+ u64 cap = readq(dev->bar->cap);
+
On Thu, 16 Apr 2015, James R. Bergsten wrote:
My two cents worth is that it's (always) better to put ALL the commands into
one place so that the entire set can be viewed at once and thus avoid
inadvertent overloading of an opcode. Otherwise you don't know what you
don't know.
Yes, but these
On Thu, 16 Apr 2015, Javier González wrote:
On 16 Apr 2015, at 16:55, Keith Busch keith.bu...@intel.com wrote:
Otherwise it looks pretty good to me, but I think it would be cleaner if
the lightnvm stuff is not mixed in the same file with the standard nvme
command set. We might end up splitting
On Wed, 17 Jun 2015, Dheepthi K wrote:
Memory freeing order has been corrected in case of
allocation failure.
This isn't necessary. The nvme_dev is zero'ed on allocation, and
kfree(NULL or (void *)0) is okay to do.
Signed-off-by: Dheepthi K dheepth...@gracelabs.com
---
of
different size [-Wint-to-pointer-cast]
In order to shut up that warning, this introduces a new
temporary variable that uses a double cast to extract
the pointer from an __u64 structure member.
Thanks for the fix.
Acked-by: Keith Busch keith.bu...@intel.com
Signed-off-by: Arnd Bergmann a...@arndb.de
On Thu, 21 May 2015, Parav Pandit wrote:
Avoid disabling interrupts and holding q_lock for a queue
which is just getting initialized.
With this change, online_queues is also incremented without
the lock during the queue setup stage.
If power management nvme_suspend() kicks in during queue setup time,
On Fri, 22 May 2015, Parav Pandit wrote:
On Fri, May 22, 2015 at 8:18 PM, Keith Busch keith.bu...@intel.com wrote:
The rcu protection on nvme queues was removed with the blk-mq conversion
as we rely on that layer for h/w access.
o.k. But above is at level where data I/Os are not even active
On Thu, 21 May 2015, Parav Pandit wrote:
On Fri, May 22, 2015 at 1:04 AM, Keith Busch keith.bu...@intel.com wrote:
The q_lock is held to protect polling from reading inconsistent data.
ah, yes. I can see the nvme_kthread can poll the CQ while it's getting
created through nvme_resume().
I
On Fri, 22 May 2015, Parav Pandit wrote:
During normal positive path probe,
(a) device is added to dev_list in nvme_dev_start()
(b) nvme_kthread got created, which will eventually refers to
dev->queues[qid] to check for NULL.
(c) dev_start() worker thread has started probing device and creating
On Fri, 22 May 2015, Parav Pandit wrote:
On Fri, May 22, 2015 at 9:53 PM, Keith Busch keith.bu...@intel.com wrote:
A memory barrier before incrementing the dev->queue_count (and assigning
the pointer in the array before that) should address this concern.
Sure. mb() will solve the publisher
On Fri, 22 May 2015, Parav Pandit wrote:
I agree that nvmeq won't be NULL after mb(); that alone is not sufficient.
What I have proposed in previous email is,
Converting,
struct nvme_queue *nvmeq = dev->queues[i];
if (!nvmeq)
continue;
spin_lock_irq(nvmeq->q_lock);
to replace with,
On Mon, 17 Aug 2015, Bjorn Helgaas wrote:
On Wed, Jul 29, 2015 at 04:18:53PM -0600, Keith Busch wrote:
The new pcie tuning will check the device's MPS against the parent bridge
when it is initially added to the pci subsystem, prior to attaching
to a driver. If MPS is mismatched, the downstream
On Tue, 11 Aug 2015, Christoph Hellwig wrote:
This series adds support for a simplified Persistent Reservation API
to the block layer. The intent is that both in-kernel and userspace
consumers can use the API instead of having to hand craft SCSI or NVMe
command through the various pass through
From: Dave Jiang dave.ji...@intel.com
This is in preparation for un-exporting the pcie_set_mps() function
symbol. A driver should not be changing the MPS as that is the
responsibility of the PCI subsystem.
Signed-off-by: Dave Jiang dave.ji...@intel.com
---
drivers/infiniband/hw/qib/qib_pcie.c |
, or explicit request to rescan.
Signed-off-by: Keith Busch keith.bu...@intel.com
Cc: Dave Jiang dave.ji...@intel.com
Cc: Austin Bolen austin_bo...@dell.com
Cc: Myron Stowe mst...@redhat.com
Cc: Jon Mason jdma...@kudzu.us
Cc: Bjorn Helgaas bhelg...@google.com
---
arch/arm/kernel/bios32.c
to update the
downstream port to match the upstream port if it is capable.
Dave Jiang (2):
QIB: Removing usage of pcie_set_mps()
PCIE: Remove symbol export for pcie_set_mps()
Keith Busch (1):
pci: Default MPS tuning to match upstream port
arch/arm/kernel/bios32.c | 12
From: Dave Jiang dave.ji...@intel.com
The setting of PCIe MPS should be left to the PCI subsystem and not
the driver. An ill-configured MPS by the driver could cause the device
to not function or destabilize the entire system. Removing the exported
symbol.
Signed-off-by: Dave Jiang
On Tue, 4 Aug 2015, Christoph Hellwig wrote:
NVMe support currently isn't included as I don't have a multihost
NVMe setup to test on, but if I can find a volunteer to test it I'm
happy to write the code for it.
Looks pretty good so far. I'd be happy to give try it out with NVMe
subsystems.
--
On Wed, 15 Jul 2015, Bart Van Assche wrote:
* With blk-mq and scsi-mq optimal performance can only be achieved if
the relationship between MSI-X vector and NUMA node does not change
over time. This is necessary to allow a blk-mq/scsi-mq driver to
ensure that interrupts are processed on the
g these features should either not be placed below VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
arch/x86/Kconfig | 17 ++
arch/x86/include/asm/vmd.h | 10 +
arch/x86/kernel/apic/msi.c | 38 ++
PCI-e segments will continue to use the lower 16 bits as required by
ACPI. Special domains may use the full 32-bits.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
lib/filter.c |2 +-
lib/pci.h|2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/fi
c operations rather than defining
this as a PCI FIXUP.
Fixed memory leak if irq_domain creation failed.
Keith Busch (4):
pci: skip child bus with conflicting resources
x86/pci: allow pci domain specific dma ops
x86/pci: Initial commit for new VMD device driver
pciutils: Allow
New x86 pci h/w will require dma operations specific to that domain. This
patch allows those domains to register their operations, and sets devices
as they are discovered in that domain to use them.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
arch/x86/include/asm/device.h
And use the max bus resource from the parent rather than assume 255.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/pci/probe.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 8361d27..1cb3be7
the same configuration checks.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
v1 -> v2: Fixed corrupted patch and subject line spelling error
include/linux/bio.h | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/include/linu
the same configuration checks.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
include/linux/bio.h | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index b9b6e04..f0c46d0 100644
--- a/i
intel.com>
Tested-by: Keith Busch <keith.bu...@intel.com>
---
kernel/irq/msi.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
index 6b0c0b7..5e15cb4 100644
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -109,9 +10
And use the max bus resource from the parent rather than assume 255.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/pci/probe.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index f14a970..ae5a4b3