Signed-off-by: Keith Busch
Cc: Thomas Gleixner
Cc: x...@kernel.org
---
kernel/irq/irqdesc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 7339e42..1487a12 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
On Mon, 30 Jun 2014, David Rientjes wrote:
On Mon, 30 Jun 2014, Keith Busch wrote:
Signed-off-by: Keith Busch
Cc: Thomas Gleixner
Cc: x...@kernel.org
Acked-by: David Rientjes
This is definitely a fix for "genirq: Provide generic hwirq allocation
facility", but the changel
irq_free_hwirqs() always calls irq_free_descs() with cnt == 0,
which makes it a no-op because the loop decrements the count to
zero before passing it on.
Fixes: 7b6ef1262549f6afc5c881aaef80beb8fd15f908
Signed-off-by: Keith Busch
Cc: Thomas Gleixner
Acked-by: David Rientjes
---
kernel/irq/irqdesc.c
On Fri, 13 Jun 2014, Jens Axboe wrote:
On 06/12/2014 06:06 PM, Keith Busch wrote:
When cancelling IOs, we have to check if the hwctx has a valid tags
for some reason. I have 32 cores in my system and as many queues, but
It's because unused queues are torn down, to save memory.
blk-mq
On Fri, 13 Jun 2014, Jens Axboe wrote:
On 06/13/2014 09:05 AM, Keith Busch wrote:
Here are the performance drops observed with blk-mq with the existing
driver as baseline:
CPU : Drop
  0 : -6%
  8 : -36%
 16 : -12%
We need the hints back for sure, I'll run some of the same
On Fri, 13 Jun 2014, Jens Axboe wrote:
OK, same setup as mine. The affinity hint is really screwing us over, no
question about it. We just need a:
irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector, hctx->cpumask);
in the ->init_hctx() methods to fix that up.
That brings us to roughly
On Fri, 13 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
static void nvme_reset_notify(struct pci_dev *pdev, bool prepare)
{
- struct nvme_dev *dev = pci_get_drvdata(pdev);
+ struct nvme_dev *dev = pci_get_drvdata(pdev);
-
On Thu, 21 Aug 2014, Matias Bjørling wrote:
On 08/19/2014 12:49 AM, Keith Busch wrote:
I see the driver's queue suspend logic is removed, but I didn't mean to
imply it was safe to do so without replacing it with something else. I
thought maybe we could use the blk_stop/start_queue() functions
inode so that two different disks that have
a major/minor collision can coexist.
Signed-off-by: Keith Busch
---
Maybe this is terrible idea!?
This came from proposals to the nvme driver that remove the dynamic
partitioning that was recently added, and I wanted to know why exactly
it was failing
On Fri, 22 Aug 2014, Christoph Hellwig wrote:
On Fri, Aug 22, 2014 at 10:28:16AM -0600, Keith Busch wrote:
When using the GENHD_FL_EXT_DEVT disk flags, a newly added device may
be assigned the same major/minor as one that was previously removed but
opened, and the pesky userspace refuses
On Fri, 22 Aug 2014, Keith Busch wrote:
On Fri, 22 Aug 2014, Christoph Hellwig wrote:
On Fri, Aug 22, 2014 at 10:28:16AM -0600, Keith Busch wrote:
When using the GENHD_FL_EXT_DEVT disk flags, a newly added device may
be assigned the same major/minor as one that was previously removed
On Sun, 10 Aug 2014, Matias Bjørling wrote:
On Sat, Jul 26, 2014 at 11:07 AM, Matias Bjørling wrote:
This converts the NVMe driver to a blk-mq request-based driver.
Willy, do you need me to make any changes to the conversion? Can you
pick it up for 3.17?
Hi Matias,
I'm starting to get a
On Thu, 14 Aug 2014, Jens Axboe wrote:
On 08/14/2014 02:25 AM, Matias Bjørling wrote:
The result is set to BLK_MQ_RQ_QUEUE_ERROR, or am I mistaken?
Looks OK to me, looking at the code, 'result' is initialized to
BLK_MQ_RQ_QUEUE_BUSY though. Which looks correct, we don't want to error
on a
On Thu, 14 Aug 2014, Matias Bjorling wrote:
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with slab debugging? Matias, might be worth trying.
The queue's tags were freed in
On Thu, 14 Aug 2014, Jens Axboe wrote:
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with slab debugging? Matias, might be worth trying.
The allocation and freeing of blk-mq parts seems a bit
On Fri, 15 Aug 2014, Matias Bjørling wrote:
* NVMe queues are merged with the tags structure of blk-mq.
I see the driver's queue suspend logic is removed, but I didn't mean to
imply it was safe to do so without replacing it with something else. I
thought maybe we could use the
On Wed, Feb 28, 2018 at 10:53:31AM +0800, jianchao.wang wrote:
> On 02/27/2018 11:13 PM, Keith Busch wrote:
> > On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wrote:
> >> Currently, adminq and ioq0 share the same irq vector. This is
> >> unfair for both adminq a
On Wed, Feb 28, 2018 at 11:46:20PM +0800, jianchao.wang wrote:
>
> the irqbalance may migrate the adminq irq away from cpu0.
No, irqbalance can't touch managed IRQs. See irq_can_set_affinity_usr().
Thanks, applied.
On Wed, Feb 28, 2018 at 04:31:37PM -0600, wenxiong wrote:
> On 2018-02-15 14:05, wenxi...@linux.vnet.ibm.com wrote:
> > From: Wen Xiong
> >
> > With b2a0eb1a0ac72869c910a79d935a0b049ec78ad9(nvme-pci: Remove watchdog
> > timer), EEH recovery stops working on ppc.
> >
> > After removing watchdog
On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
> When the adminq is free, ioq0 irq completion path has to invoke nvme_irq
> twice, one for itself,
> one for adminq completion irq action.
Let's be a little more careful on the terminology when referring to spec
defined features:
On Thu, Mar 01, 2018 at 11:03:30PM +0800, Ming Lei wrote:
> If all CPUs for the 1st IRQ vector of admin queue are offline, then I
> guess NVMe can't work any more.
Yikes, with respect to admin commands, it appears you're right if your
system allows offlining CPU0.
> So looks it is a good idea to
On Thu, Mar 01, 2018 at 11:12:08AM -0600, Wen Xiong wrote:
>Hi Keith,
>
>It is perfect! I go with it.
Thanks, queued up for 4.16.
On Thu, Mar 01, 2018 at 01:52:20AM +0100, Christoph Hellwig wrote:
> Looks fine,
>
> and we should pick this up for 4.16 independent of the rest, which
> I might need a little more review time for.
>
> Reviewed-by: Christoph Hellwig
Thanks, queued up for 4.16.
On Thu, Mar 01, 2018 at 11:00:51PM +, Stephen Bates wrote:
>
> P2P is about offloading the memory and PCI subsystem of the host CPU
> and this is achieved no matter which p2p_dev is used.
Even within a device, memory attributes for its various regions may not be
the same. There's a
On Tue, Mar 13, 2018 at 06:45:00PM +0800, Ming Lei wrote:
> On Tue, Mar 13, 2018 at 05:58:08PM +0800, Jianchao Wang wrote:
> > Currently, adminq and ioq1 share the same irq vector which is set
> > affinity to cpu0. If a system allows cpu0 to be offlined, the adminq
> > will not be able work any
Thanks, applied for 4.17.
On Mon, Mar 12, 2018 at 11:47:12PM -0400, Sinan Kaya wrote:
>
> The spec is recommending code to use "Hotplug Surprise" to differentiate
> these two cases we are looking for.
>
> The use case Keith is looking for is for hotplug support.
> The case I and Oza are more interested is for error
y INT interrupt.
>
> With current code we do not acknowledge the interrupt back in dpc_irq()
> and we get dpc interrupt storm.
>
> This patch acknowledges the interrupt in interrupt handler.
>
> Signed-off-by: Oza Pawandeep
Thanks, this looks good to me.
Reviewed-by: Keith Busch
On Wed, Mar 14, 2018 at 02:52:30PM -0600, Keith Busch wrote:
>
> Reviewed-by: Keith Busch
On Tue, Mar 27, 2018 at 08:00:33PM +0200, Matias Bjørling wrote:
> Compiling on 32 bits system produces a warning for the shift width
> when shifting 32 bit integer with 64bit integer.
>
> Make sure that offset always is 64bit, and use macros for retrieving
> lower and upper bits of the offset.
On Thu, Feb 15, 2018 at 02:49:56PM +0100, Julien Durillon wrote:
> I opened an issue here:
> https://github.com/dracutdevs/dracut/issues/373 for dracut. You can
> read there how dracuts enters an infinite loop.
>
> TL;DR: in linux-4.14, trying to find the last "slave" of /dev/dm-0
> ends with a
On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > On Wed, Feb 28, 2018 at 10:34:11PM +0530, Oza Pawandeep wrote:
>
> > That difference has been there since the beginning of DPC, so it has
> > nothing to do with *this* series EXCEPT for
On Mon, Mar 12, 2018 at 08:16:38PM +0530, p...@codeaurora.org wrote:
> On 2018-03-12 19:55, Keith Busch wrote:
> > On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> > > On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > > > On Wed, Feb 28, 2018 at 10:34:1
On Mon, Mar 12, 2018 at 09:04:47PM +0530, p...@codeaurora.org wrote:
> On 2018-03-12 20:28, Keith Busch wrote:
> > I'm not sure I understand. The link is disabled while DPC is triggered,
> > so if anything, you'd want to un-enumerate everything below the
> > contained
> >
On Mon, Mar 12, 2018 at 10:21:29AM -0700, Alexander Duyck wrote:
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 024a1beda008..9cab9d0d51dc 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -1953,6 +1953,7 @@ static inline void pci_mmcfg_late_init(void) { }
>
On Mon, Mar 12, 2018 at 01:41:07PM -0400, Sinan Kaya wrote:
> I was just writing a reply to you. You acted first :)
>
> On 3/12/2018 1:33 PM, Keith Busch wrote:
> >>> After releasing a slot from DPC, the link is allowed to retrain. If
> >>> there
> >&
On Mon, Mar 12, 2018 at 11:09:34AM -0700, Alexander Duyck wrote:
> On Mon, Mar 12, 2018 at 10:40 AM, Keith Busch wrote:
> > On Mon, Mar 12, 2018 at 10:21:29AM -0700, Alexander Duyck wrote:
> >> diff --git a/include/linux/pci.h b/include/linux/pci.h
> >> index 024a1b
Hi Jianchao,
The patch tests fine on all hardware I had. I'd like to queue this up
for the next 4.16-rc. Could you send a v3 with the cleanup changes Andy
suggested and a changelog aligned with Ming's insights?
Thanks,
Keith
On Mon, Mar 12, 2018 at 02:47:30PM -0500, Bjorn Helgaas wrote:
> [+cc Alex]
>
> On Mon, Mar 12, 2018 at 08:25:51AM -0600, Keith Busch wrote:
> > On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> > > On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > > >
On Wed, Mar 28, 2018 at 03:57:47PM +0200, Arnd Bergmann wrote:
> @@ -2233,8 +2233,8 @@ int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct
> nvme_ns *ns,
> c.get_log_page.lid = log_page;
> c.get_log_page.numdl = cpu_to_le16(dwlen & ((1 << 16) - 1));
> c.get_log_page.numdu =
Thanks, applied.
Thanks, applied.
Thanks, applied.
On Wed, Mar 28, 2018 at 10:06:46AM +0200, Christoph Hellwig wrote:
> For PCIe devices the right policy is not a round robin but to use
> the pcie device closer to the node. I did a prototype for that
> long ago and the concept can work. Can you look into that and
> also make that policy used
On Wed, Mar 21, 2018 at 03:06:05AM -0700, Matias Bjørling wrote:
> > outside of nvme core so that we can use it form lightnvm.
> >
> > Signed-off-by: Javier González
> > ---
> > drivers/lightnvm/core.c | 11 +++
> > drivers/nvme/host/core.c | 6 ++--
> >
On Wed, Mar 21, 2018 at 11:48:09PM +0800, Ming Lei wrote:
> On Wed, Mar 21, 2018 at 01:10:31PM +0100, Marta Rybczynska wrote:
> > > On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
> > >> NVMe driver uses threads for the work at device reset, including enabling
> > >> the PCIe
On Wed, Mar 21, 2018 at 08:27:07PM +0100, Matias Bjørling wrote:
> Enable the lightnvm integration to use the nvme_get_log_ext()
> function.
>
> Signed-off-by: Matias Bjørling
Thanks, applied to nvme-4.17.
On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
>
> So I suspect we'll need to go with a patch like this, just with a way
> better changelog.
I have to agree this is required for that use case. I'll run some
quick tests and propose an alternate changelog.
Longer term, the
On Tue, Feb 20, 2018 at 02:21:37PM +0100, Peter Zijlstra wrote:
> Also, set_current_state(TASK_RUNNING) is dodgy (similarly in
> __blk_mq_poll), why do you need that memory barrier?
You're right. The subsequent revision that was committed removed the
barrier. The commit is here:
On Thu, Apr 26, 2018 at 02:25:15PM -0600, Johannes Thumshirn wrote:
> Keith reported that command submission and command completion
> tracepoints have the order of the cmdid and qid fields swapped.
>
> While it isn't easily possible to change the command submission
> tracepoint, as there is a
Thank you, applied for the next nvme 4.17-rc.
On Mon, May 07, 2018 at 06:57:54AM +, Bharat Kumar Gogada wrote:
> Hi,
>
> Does anyone have any inputs ?
Hi,
I recall we did observe issues like this when legacy interrupts were
used, so the driver does try to use MSI/MSIx if possible.
The nvme_timeout() is called from the block layer when
and nvme_dev.
> Second, it makes it clearer what error is being passed on:
> 'return -ENODEV' vs 'goto out', where 'result' happens to be -ENODEV
>
> CC: Keith Busch
> Signed-off-by: Alexandru Gagniuc
Ah, that's just wrapping a function that has a single out. The challenge
is to f
On Fri, May 11, 2018 at 11:57:52AM -0500, Bjorn Helgaas wrote:
> We reported several corrected errors before the nvme timeout:
>
> [12750.281158] nvme nvme0: controller is down; will reset: CSTS=0x,
> PCI_STATUS=0x10
> [12750.297594] nvme nvme0: I/O 455 QID 2 timeout, disable
On Fri, May 11, 2018 at 11:26:11AM -0600, Keith Busch wrote:
> I trust you know the offsets here, but it's hard to tell what this
> is doing with hard-coded addresses. Just to be safe and for clarity,
> I recommend the 'CAP_*+' with a mask.
>
> For example, disabling ASPM L1.
On Thu, May 03, 2018 at 05:00:35PM +0200, Johannes Thumshirn wrote:
> After commit bb06ec31452f ("nvme: expand nvmf_check_if_ready checks")
> resetting of the loopback nvme target failed as we forgot to switch
> it's state to NVME_CTRL_CONNECTING before we reconnect the admin
> queues. Therefore
On Wed, May 16, 2018 at 12:35:15PM +, Bharat Kumar Gogada wrote:
> Hi,
>
> As per NVME specification:
> 7.5.1.1 Host Software Interrupt Handling
> It is recommended that host software utilize the Interrupt Mask Set and
> Interrupt Mask Clear (INTMS/INTMC)
> registers to efficiently handle
On Wed, May 16, 2018 at 06:44:22PM -0400, Sinan Kaya wrote:
> On 5/16/2018 5:33 PM, Alexandru Gagniuc wrote:
> > AER status bits are sticky, and they survive system resets. Downstream
> > devices are usually taken care of after re-enumerating the downstream
> > busses, as the AER bits are cleared
On Thu, May 17, 2018 at 11:15:59AM +, Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > As per NVME specification:
> > > 7.5.1.1 Host Software Interrupt Handling It is recommended that host
> > > software utilize the Interrupt Mask Set and Interrupt Mask Clear
> > > (INTMS/INTMC) registers to
fault status from new events if the driver hasn't seen the
power fault clear from the previous handling attempt.
Fixes: fad214b0aa72 ("PCI: pciehp: Process all hotplug events before looking
for new ones")
Cc: # 4.9+
Cc: Mayurkumar Patel
Signed-off-by: Keith Busch
---
Resending due to
us patch from Abhishek Shah.
>
> Fixes: 8ffaadf7 ("NVMe: Use CMB for the IO SQes if available")
> Cc: sta...@vger.kernel.org
> Reported-by: Abhishek Shah
> Signed-off-by: Christoph Hellwig
This looks good.
Reviewed-by: Keith Busch
On Sat, Sep 30, 2017 at 02:30:16PM +0530, Abhishek Shah wrote:
> > On a similar note, we also break CMB usage in virtualization with direct
> > assigned devices: the guest doesn't know the host physical bus address,
> > so it sets the CMB queue address incorrectly there, too. I don't know of
> > a
On Fri, Sep 29, 2017 at 10:59:26AM +0530, Abhishek Shah wrote:
> Currently, NVMe PCI host driver is programming CMB dma address as
> I/O SQs addresses. This results in failures on systems where 1:1
> outbound mapping is not used (example Broadcom iProc SOCs) because
> CMB BAR will be programmed
ck
Looks good.
Reviewed-by: Keith Busch
groups
> and adds them to the device before sending out uevents.
>
> Signed-off-by: Martin Wilck
Is NVMe the only one having this problem? Was putting our attributes in
the disk's kobj a bad choice?
Anyway, looks fine to me.
Reviewed-by: Keith Busch
On Tue, Jan 09, 2018 at 10:03:11AM +0800, Jianchao Wang wrote:
> Hello
Sorry for the distraction, but could you possibly fix the date on your
machine? For some reason, lists.infradead.org sorts threads by the time
you claim to have sent your message rather than the time it was received,
and
5e ("nvme: add hostid token to fabric
> options").
>
> Fixes: 6bfe04255d5e ("nvme: add hostid token to fabric options")
> Reported-by: Alexander Potapenko
> Signed-off-by: Johannes Thumshirn
Thanks for the report and the fix. It'd still be good to use the kzalloc
variant in addition to this.
Reviewed-by: Keith Busch
On Thu, Jan 11, 2018 at 01:09:39PM +0800, Jianchao Wang wrote:
> The calculation of iod and avg_seg_size maybe meaningless if
> nvme_pci_use_sgls returns before uses them. So calculate
> just before use them.
The compiler will do the right thing here, but I see what you mean. I
think Christoph
On Thu, Jan 11, 2018 at 06:50:40PM +0100, Maik Broemme wrote:
> I've re-run the test with 4.15rc7.r111.g5f615b97cdea and the following
> patches from Keith:
>
> [PATCH 1/4] PCI/AER: Return appropriate value when AER is not supported
> [PATCH 2/4] PCI/AER: Provide API for getting AER information
>
On Thu, Jan 04, 2018 at 12:01:34PM -0700, Logan Gunthorpe wrote:
> Register the CMB buffer as p2pmem and use the appropriate allocation
> functions to create and destroy the IO SQ.
>
> If the CMB supports WDS and RDS, publish it for use as p2p memory
> by other devices.
<>
> + if (qid &&
On Fri, Jan 05, 2018 at 11:19:28AM -0700, Logan Gunthorpe wrote:
> Although it is not explicitly stated anywhere, pci_alloc_p2pmem() should
> always be at least 4k aligned. This is because the gen_pool that implements
> it is created with PAGE_SHIFT for its min_alloc_order.
Ah, I see that now.
On Wed, Dec 13, 2017 at 05:01:58PM -0700, Alex Williamson wrote:
> @@ -109,6 +109,7 @@ static void interrupt_event_handler(struct work_struct
> *work)
> struct dpc_dev *dpc = container_of(work, struct dpc_dev, work);
> struct pci_dev *dev, *temp, *pdev = dpc->dev->port;
> struct
else we may never see it execute due to further incoming interrupts.
> A software generated DPC floods the system otherwise.
>
> Signed-off-by: Alex Williamson
Thanks, looks good.
Reviewed-by: Keith Busch
On Thu, Dec 14, 2017 at 06:21:55PM -0600, Bjorn Helgaas wrote:
> [+cc Rajat, Keith, linux-kernel]
>
> On Thu, Dec 14, 2017 at 07:47:01PM +0100, Maik Broemme wrote:
> > I have a Samsung 960 PRO NVMe SSD (Non-Volatile memory controller:
> > Samsung Electronics Co Ltd NVMe SSD Controller
On Wed, Dec 27, 2017 at 02:20:18AM -0800, Oza Pawandeep wrote:
> DPC should enumerate the devices after recovering the link, which is
> achieved by implementing error_resume callback.
Wouldn't that race with the link-up event that pciehp currently handles?
On Fri, Dec 29, 2017 at 12:54:17PM +0530, Oza Pawandeep wrote:
> This patch addresses the race condition between AER and DPC for recovery.
>
> Current DPC driver does not do recovery, e.g. calling end-point's driver's
> callbacks, which sanitize the device.
> DPC driver implements link_reset
On Fri, Dec 29, 2017 at 11:30:02PM +0530, p...@codeaurora.org wrote:
> On 2017-12-29 22:53, Keith Busch wrote:
>
> > 2. A DPC event suppresses the error message required for the Linux
> > AER driver to run. How can AER and DPC run concurrently?
>
> I afraid I could
On Fri, Nov 03, 2017 at 01:53:40PM +0100, Christoph Hellwig wrote:
> > - if (ns && ns->ms &&
> > + if (ns->ms &&
> > (!ns->pi_type || ns->ms != sizeof(struct t10_pi_tuple)) &&
> > !blk_integrity_rq(req) && !blk_rq_is_passthrough(req))
> > return BLK_STS_NOTSUPP;
>
the 'ph' format, which would look like this:
01 02 03 04 05 06 07 08
The change will make it look like this:
01-02-03-04-05-06-07-08
I think that was the original intention.
Reviewed-by: Keith Busch
On Sat, Nov 04, 2017 at 09:18:25AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 03, 2017 at 09:02:04AM -0600, Keith Busch wrote:
> > If the namespace has metadata, but the request doesn't have a metadata
> > payload attached to it for whatever reason, we can't constr
On Mon, Nov 06, 2017 at 10:13:24AM +0100, Christoph Hellwig wrote:
> On Sat, Nov 04, 2017 at 09:38:45AM -0600, Keith Busch wrote:
> > That's not quite right. For non-PI metadata formats, we use the
> > 'nop_profile', which gets the metadata buffer allocated so we can safely
>
On Thu, Aug 10, 2017 at 11:23:31AM +0200, Johannes Thumshirn wrote:
> From: Keith Busch
>
> We need to return an error if a timeout occurs on any NVMe command during
> initialization. Without this, the nvme reset work will be stuck. A timeout
> will have a negative error code,
On Mon, Aug 07, 2017 at 01:57:11PM -0600, Jon Derrick wrote:
> Add myself as VMD maintainer
>
> Signed-off-by: Jon Derrick
Thanks for adding.
Acked-by: Keith Busch
> ---
> MAINTAINERS | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/MAINTAINERS b/MAI
On Mon, Aug 07, 2017 at 08:45:25AM -0700, James Bottomley wrote:
> On Mon, 2017-08-07 at 20:01 +0530, Kashyap Desai wrote:
> >
> > We have to attempt this use case and see how it behaves. I have not
> > tried this, so not sure if things are really bad or just some tuning
> > may be helpful. I
On Tue, Aug 08, 2017 at 12:33:40PM +0530, Sreekanth Reddy wrote:
> On Tue, Aug 8, 2017 at 9:34 AM, Keith Busch wrote:
> >
> > It looks like they can make existing nvme tooling work with little
> > effort if they have the driver implement NVME_IOCTL_ADMIN_COMMAND,
On Mon, Aug 14, 2017 at 03:59:48PM -0500, Bjorn Helgaas wrote:
> On Tue, Aug 01, 2017 at 03:11:52AM -0400, Keith Busch wrote:
> > We've encountered a particular platform that under some circumstances
> > always has the power fault detected status raised. The pciehp irq handler
On Tue, Aug 15, 2017 at 01:48:25PM -0700, Bjorn Helgaas wrote:
> On Mon, Aug 14, 2017 at 06:11:23PM -0400, Keith Busch wrote:
> > On Mon, Aug 14, 2017 at 03:59:48PM -0500, Bjorn Helgaas wrote:
> > > On Tue, Aug 01, 2017 at 03:11:52AM -0400, Keith Busch wrote:
> > > >
On Fri, Mar 30, 2018 at 09:04:46AM +, Eric H. Chang wrote:
> We internally call PCIe-retimer as HBA. It's not a real Host Bus Adapter that
> translates the interface from PCIe to SATA or SAS. Sorry for the confusion.
Please don't call a PCIe retimer an "HBA"! :)
While your experiment is
On Thu, Apr 12, 2018 at 10:34:37AM -0400, Sinan Kaya wrote:
> On 4/12/2018 10:06 AM, Bjorn Helgaas wrote:
> >
> > I think the scenario you are describing is two systems that are
> > identical except that in the first, the endpoint is below a hotplug
> > bridge, while in the second, it's below a
On Thu, Apr 12, 2018 at 08:39:54AM -0600, Keith Busch wrote:
> On Thu, Apr 12, 2018 at 10:34:37AM -0400, Sinan Kaya wrote:
> > On 4/12/2018 10:06 AM, Bjorn Helgaas wrote:
> > >
> > > I think the scenario you are describing is two systems that are
> > >
On Thu, Apr 12, 2018 at 12:27:20PM -0400, Sinan Kaya wrote:
> On 4/12/2018 11:02 AM, Keith Busch wrote:
> >
> > Also, I thought the plan was to keep hotplug and non-hotplug the same,
> > except for the very end: if not a hotplug bridge, initiate the rescan
> > automat
Thanks, applied for 4.17-rc1.
I was a little surprised git was able to apply this since the patch
format is off, but it worked!
On Mon, Apr 09, 2018 at 10:41:49AM -0400, Oza Pawandeep wrote:
> This patch renames error recovery to generic name with pcie prefix
>
> Signed-off-by: Oza Pawandeep
Looks fine.
Reviewed-by: Keith Busch
off-by: Oza Pawandeep
Looks fine.
Reviewed-by: Keith Busch
On Mon, Apr 09, 2018 at 10:41:51AM -0400, Oza Pawandeep wrote:
> This patch implements generic pcie_port_find_service() routine.
>
> Signed-off-by: Oza Pawandeep
Looks good.
Reviewed-by: Keith Busch
On Mon, Apr 09, 2018 at 10:41:53AM -0400, Oza Pawandeep wrote:
> +/**
> + * pcie_wait_for_link - Wait for link till it's active/inactive
> + * @pdev: Bridge device
> + * @active: waiting for active or inactive ?
> + *
> + * Use this to wait till link becomes active or inactive.
> + */
> +bool
On Mon, Apr 09, 2018 at 10:41:52AM -0400, Oza Pawandeep wrote:
> +static int find_dpc_dev_iter(struct device *device, void *data)
> +{
> + struct pcie_port_service_driver *service_driver;
> + struct device **dev;
> +
> + dev = (struct device **) data;
> +
> + if (device->bus ==
Thanks, staged for 4.18.
On Fri, Mar 30, 2018 at 06:18:50PM -0300, Rodrigo R. Galvao wrote:
> When trying to issue write_zeroes command against TARGET the nr_sector is
> being incremented by 1, which ends up hitting the following condition at
> __blkdev_issue_zeroout:
>
> if ((sector | nr_sects) & bs_mask)
>