From: Hans Holmberg
Write failures should not happen under normal circumstances,
so in order to bring the chunk back into a known state as soon
as possible, evacuate all the valid data out of the line and let the
fw judge if the block can be written to in the next
From: Hans Holmberg
Smeta write errors were previously ignored. Skip these
lines instead and throw them back on the free
list, so the chunks will go through a reset cycle
before we attempt to use the line again.
Signed-off-by: Hans Holmberg
From: Hans Holmberg
The write error recovery path is incomplete, so rework it
to do resubmits directly from the write buffer.
When a write error occurs, the remaining sectors in the chunk are
mapped out and invalidated and the request
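The resubmit-from-the-write-buffer approach described above can be sketched in plain C. Everything here (names, struct layout, the resubmit loop) is a hypothetical illustration, not the actual pblk code: failed entries stay in the buffer, get flagged, and are retried by the next writer pass instead of going through a separate recovery path.

```c
#include <stddef.h>

/* Illustrative sketch only: names and structure are hypothetical. */
struct wb_entry {
    int lba;            /* logical block the entry maps */
    int needs_resubmit; /* set when a write to the device failed */
};

/* Mark a range of buffer entries as failed so they are resubmitted. */
static void wb_mark_failed(struct wb_entry *wb, size_t from, size_t to)
{
    for (size_t i = from; i < to; i++)
        wb[i].needs_resubmit = 1;
}

/* Writer pass: retry failed entries first; returns how many were retried. */
static int wb_resubmit(struct wb_entry *wb, size_t n)
{
    int retried = 0;

    for (size_t i = 0; i < n; i++) {
        if (wb[i].needs_resubmit) {
            /* a real implementation would issue the write here */
            wb[i].needs_resubmit = 0;
            retried++;
        }
    }
    return retried;
}
```

The point of the design is that no separate recovery queue is needed: the write buffer already holds the data until it is persisted, so a failed write only has to be re-flagged.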
From: Hans Holmberg
This patch series fixes the (currently incomplete) write error handling
in pblk by:
* queuing and re-submitting failed writes in the write buffer
* evacuating valid data in lines with write failures, so the
chunk(s) with write failures
On 04/23/2018 04:30 PM, Logan Gunthorpe wrote:
> Signed-off-by: Logan Gunthorpe
> ---
>  drivers/pci/Kconfig  | 9 +
>  drivers/pci/p2pdma.c | 45 ++---
>  drivers/pci/pci.c    | 6 ++
>
On Mon, Apr 23, 2018 at 07:04:18PM +0200, Christoph Hellwig wrote:
> This way we have one central definition of it, and user can select it as
> needed. Note that we also add a second ARCH_HAS_SWIOTLB symbol to
> indicate the architecture supports swiotlb at all, so that we can still
> make the
Some PCI devices may have memory mapped in a BAR space that's
intended for use in peer-to-peer transactions. In order to enable
such transactions the memory must be registered with ZONE_DEVICE pages
so it can be used by DMA interfaces in existing drivers.
Add an interface for other subsystems to
The DMA address used when mapping PCI P2P memory must be the PCI bus
address. Thus, introduce pci_p2pmem_[un]map_sg() to map the correct
addresses when using P2P memory.
For this, we assume that an SGL passed to these functions contains either
all P2P memory or no P2P memory.
Signed-off-by: Logan
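The homogeneity assumption above can be illustrated with a minimal self-contained sketch; the types and names here are stand-ins, not the kernel's scatterlist API. Since an SGL is either all P2P memory or none, inspecting the first entry is enough to pick the mapping path.

```c
#include <stddef.h>

/* Hypothetical stand-ins for page/scatterlist entries. */
struct fake_page { int is_p2p; };
struct fake_sg  { struct fake_page *page; };

/* Returns 1 when the list should take the P2P mapping path.
 * Only the first entry is checked, relying on the rule that
 * an SGL never mixes P2P and regular memory. */
static int sg_is_p2p(const struct fake_sg *sgl, size_t nents)
{
    return nents > 0 && sgl[0].page->is_p2p;
}
```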
For P2P requests, we must use the pci_p2pmem_[un]map_sg() functions
instead of the dma_map_sg functions.
With that, we can then indicate PCI_P2P support in the request queue.
For this, we create an NVME_F_PCI_P2P flag which tells the core to
set QUEUE_FLAG_PCI_P2P in the request queue.
Add helpers to allocate and free the SGL in a struct nvmet_req:
int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq)
void nvmet_req_free_sgl(struct nvmet_req *req)
This will be expanded in a future patch to implement peer-to-peer
memory DMAs and should be common with all target
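A minimal sketch of the intended calling pattern, using stand-in types rather than the real nvmet structures: allocate the SGL once per request and free it unconditionally on completion. Only the alloc/free pairing mirrors the proposed helpers; the struct layout is hypothetical.

```c
#include <stdlib.h>

/* Stand-in types; the real code uses struct nvmet_req and scatterlists. */
struct fake_sg  { void *buf; };
struct fake_req { struct fake_sg *sg; int sg_cnt; };

/* Allocate the request's SGL; returns 0 on success, -1 on failure. */
static int req_alloc_sgl(struct fake_req *req)
{
    req->sg = calloc(1, sizeof(*req->sg));
    if (!req->sg)
        return -1;
    req->sg_cnt = 1;
    return 0;
}

/* Free the SGL; safe to call even if allocation never happened. */
static void req_free_sgl(struct fake_req *req)
{
    free(req->sg);
    req->sg = NULL;
    req->sg_cnt = 0;
}
```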
Add a sysfs group to display statistics about P2P memory that is
registered in each PCI device.
Attributes in the group display the total amount of P2P memory, the
amount available and whether it is published or not.
Signed-off-by: Logan Gunthorpe
---
QUEUE_FLAG_PCI_P2P is introduced meaning a driver's request queue
supports targeting P2P memory.
REQ_PCI_P2P is introduced to indicate a particular bio request is
directed to/from PCI P2P memory. A request with this flag is not
accepted unless the corresponding queues have the QUEUE_FLAG_PCI_P2P
Add a reStructuredText file describing how to write drivers
with support for P2P DMA transactions. The document describes
how to use the APIs that were added in the previous few
commits.
Also add an index for the PCI documentation tree even though this
is the only PCI document that has been
Use the new helpers introduced in the previous patch to allocate
the SGLs for the request.
Since we use req.transfer_len as the length of the SGL, it is
set earlier and cleared on any error. It also seems unnecessary
to accumulate the length, as the map_sgl functions should only ever
be
We create a configfs attribute in each nvme-fabrics target port to
enable p2p memory use. When enabled, the port will only use p2p
memory if a p2p memory device can be found which is behind the
same switch hierarchy as the RDMA port and all the block devices in
use. If the user enabled it
For peer-to-peer transactions to work the downstream ports in each
switch must not have the ACS flags set. At this time there is no way
to dynamically change the flags and update the corresponding IOMMU
groups so this is done at enumeration time before the groups are
assigned.
This effectively
In order to use PCI P2P memory, the pci_p2pmem_[un]map_sg() functions
must be called to map the correct PCI bus address.
To do this, check the first page in the scatter list to see if it is P2P
memory or not. At the moment, scatter lists that contain P2P memory must
be homogeneous so if the first page
Register the CMB buffer as p2pmem and use the appropriate allocation
functions to create and destroy the IO submission queues.
If the CMB supports WDS and RDS, publish it for use as P2P memory
by other devices.
We can now drop the __iomem safety on the buffer seeing that, by
convention,
Add a new directory in the driver API guide for PCI specific
documentation.
This is in preparation for adding a new PCI P2P DMA driver writers
guide which will go in this directory.
Signed-off-by: Logan Gunthorpe
Cc: Jonathan Corbet
Cc: Mauro Carvalho
Introduce a quirk to use CMB-like memory on older devices that have
an exposed BAR but do not advertise support for using CMBLOC and
CMBSIZE.
We'd like to use some of these older cards to test P2P memory.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Sagi Grimberg
On 2018/04/23 19:09, Tetsuo Handa wrote:
> By the way, I got a newbie question regarding commit 5318ce7d46866e1d ("bdi:
> Shutdown writeback on all cgwbs in cgwb_bdi_destroy()"). It uses clear_bit()
> to clear WB_shutting_down bit so that threads waiting at wait_on_bit() will
> wake up. But
On Mon, Apr 23, 2018 at 07:04:17PM +0200, Christoph Hellwig wrote:
> swiotlb is only used as a library of helpers for xen-swiotlb if Xen support
> is enabled on arm, so don't build it by default.
>
CCing Stefano
> Signed-off-by: Christoph Hellwig
> ---
> arch/arm/Kconfig | 3 ++-
>
Thanks, James. The idea of cutting communications with Scsi_Host at
bsg_unregister_queue(..) time and leaving bsg_class_device to
its own fate makes a lot of sense, conceptually. But there are
implementation issues that are difficult to work around.
bsg.c creates bsg_class_device and takes a
swiotlb is only used as a library of helpers for xen-swiotlb if Xen support
is enabled on arm, so don't build it by default.
Signed-off-by: Christoph Hellwig
---
arch/arm/Kconfig | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/Kconfig
This symbol is now always identical to CONFIG_ARCH_DMA_ADDR_T_64BIT, so
remove it.
Signed-off-by: Christoph Hellwig
Acked-by: Bjorn Helgaas
---
drivers/pci/Kconfig | 4
drivers/pci/bus.c | 4 ++--
include/linux/pci.h | 2 +-
3 files changed, 3
This way we have one central definition of it, and user can select it as
needed. Note that we also add a second ARCH_HAS_SWIOTLB symbol to
indicate the architecture supports swiotlb at all, so that we can still
make the usage optional for a few architectures that want this feature
to be user
swiotlb now selects the DMA_DIRECT_OPS config symbol, so this will
always be true.
Signed-off-by: Christoph Hellwig
---
lib/swiotlb.c | 4
1 file changed, 4 deletions(-)
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index fece57566d45..6954f7ad200a 100644
--- a/lib/swiotlb.c
+++
This way we have one central definition of it, and user can select it as
needed.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
arch/alpha/Kconfig | 4 +---
arch/arm/Kconfig| 3 ---
arch/arm64/Kconfig
Define this symbol if the architecture either uses 64-bit pointers or
PHYS_ADDR_T_64BIT is set. This covers 95% of the old arch magic. We only
need an additional select for Xen on ARM (why anyway?), and we now always
set ARCH_DMA_ADDR_T_64BIT on mips boards with 64-bit physical addressing
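In Kconfig terms, the rule above could be expressed roughly as follows. This is a sketch of the intent, not necessarily the exact hunk from the patch:

```
config ARCH_DMA_ADDR_T_64BIT
	def_bool 64BIT || PHYS_ADDR_T_64BIT
```

With a `def_bool` expression like this, every architecture-specific `select` of the symbol becomes redundant, which is what lets the per-arch definitions be removed.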
This way we have one central definition of it, and user can select it as
needed. Note that we now also always select it when CONFIG_DMA_API_DEBUG
is selected, which fixes some incorrect checks in a few network drivers.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
This way we have one central definition of it, and user can select it as
needed.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
arch/powerpc/Kconfig | 4 +---
arch/s390/Kconfig| 5 ++---
arch/sparc/Kconfig | 5 +
Instead, select PHYS_ADDR_T_64BIT directly for 32-bit architectures that
need a 64-bit phys_addr_t type.
Signed-off-by: Christoph Hellwig
---
arch/arc/Kconfig | 4 +---
arch/arm/kernel/setup.c| 2 +-
arch/arm/mm/Kconfig
This avoids selecting IOMMU_HELPER just for this function, and we only
use it once or twice in normal builds, so this is often even a size
reduction.
Signed-off-by: Christoph Hellwig
---
arch/alpha/Kconfig | 3 ---
arch/arm/Kconfig| 3 ---
This function is only used by built-in code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
lib/iommu-helper.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index
This code is only used by sparc, and all new iommu drivers should use the
drivers/iommu/ framework. Also remove the unused exports.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
{include/linux =>
Hi all,
this series aims for a single definition of the Kconfig symbol. To get
there, various cleanups, mostly of config symbols, are included as well.
Changes since V2 are a fixed s/Reviewed/Signed-off/ for me, and a few
Reviewed-by tags. I'd like to start merging this into the dma-mapping
On Mon, 23 Apr 2018 14:43:13 +0200
Steffen Maier wrote:
> > - TP_printk("[%s] %d", __entry->comm, __entry->nr_rq)
> > + TP_printk("[%s] %d %s", __entry->comm, __entry->nr_rq,
> > + __entry->explicit ? "Sync" : "Async")
> > );
> >
> > /**
>
> This
On 04/17/2018 12:00 PM, Bean Huo (beanhuo) wrote:
#Cat trace
iozone-4055 [000] 665.039276: block_unplug: [iozone] 1 Sync
iozone-4055 [000] ...1 665.039278: block_rq_insert: 8,48 WS 0 () 39604352 + 128 tag=18 [iozone]
iozone-4055 [000] ...1 665.039280: block_rq_issue: 8,48 WS 0
On 04/16/2018 04:33 PM, Bean Huo (beanhuo) wrote:
Print the request tag along with other information in block trace events
when tracing requests, and the unplug type (Sync / Async).
Signed-off-by: Bean Huo
---
include/trace/events/block.h | 36
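The Sync/Async annotation shown in the quoted TP_printk hunk earlier boils down to a single ternary on the unplug type. A trivial stand-alone sketch (the helper name is hypothetical; the real change formats the string inline in the trace event):

```c
#include <string.h>

/* An explicit unplug (issued synchronously by the submitter) prints
 * "Sync"; a timer-driven unplug prints "Async". */
static const char *unplug_type(int explicit_unplug)
{
    return explicit_unplug ? "Sync" : "Async";
}
```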
On Fri, Apr 20, 2018 at 9:49 PM, Javier Gonzalez wrote:
>> On 19 Apr 2018, at 09.39, Hans Holmberg
>> wrote:
>>
>> From: Hans Holmberg
>>
>> Write failures should not happen under normal circumstances,
>> so in
On Fri, Apr 20, 2018 at 9:38 PM, Javier Gonzalez wrote:
>> On 19 Apr 2018, at 09.39, Hans Holmberg
>> wrote:
>>
>> From: Hans Holmberg
>>
>> The write error recovery path is incomplete, so rework
>> the write
On 04/19/2018 10:18 PM, Omar Sandoval wrote:
> On Thu, Apr 19, 2018 at 01:44:41PM -0600, Jens Axboe wrote:
>> On 4/19/18 1:41 PM, Bart Van Assche wrote:
>>> On Thu, 2018-04-19 at 12:13 -0700, Omar Sandoval wrote:
On Thu, Apr 19, 2018 at 11:53:30AM -0700, Omar Sandoval wrote:
> Thanks for
On 2018/04/20 1:05, syzbot wrote:
> kasan: CONFIG_KASAN_INLINE enabled
> kasan: GPF could be caused by NULL-ptr deref or user memory access
> general protection fault: [#1] SMP KASAN
> Dumping ftrace buffer:
> (ftrace buffer empty)
> Modules linked in:
> CPU: 0 PID: 28 Comm: kworker/u4:2
On Fri, Apr 20, 2018 at 10:47:31AM +, Stanislav Kinsburskii wrote:
>
> #include
> #include
> @@ -1649,6 +1650,7 @@ static int __init netback_init(void)
> PTR_ERR(xen_netback_dbg_root));
> #endif /* CONFIG_DEBUG_FS */
>
> + (void) xen_netbk_fi_init();
If you
On 18/4/23 15:35, Paolo Valente wrote:
>
>
>> Il giorno 23 apr 2018, alle ore 08:05, Joseph Qi ha
>> scritto:
>>
>> Hi Paolo,
>
> Hi Joseph,
> thanks for chiming in.
>
>> What's your idle and latency config?
>
> I didn't set them at all, as the only (explicit)
Hello Ming.
Ming Lei - 18.04.18, 18:46:
> On Mon, Apr 16, 2018 at 03:12:30PM +0200, Martin Steigerwald wrote:
> > Ming Lei - 16.04.18, 02:45:
> > > On Sun, Apr 15, 2018 at 06:31:44PM +0200, Martin Steigerwald
wrote:
> > > > Hi Ming.
> > > >
> > > > Ming Lei - 15.04.18, 17:43:
> > > > > Hi Jens,
Hi Jianchao.
jianchao.wang - 17.04.18, 16:34:
> On 04/17/2018 08:10 PM, Martin Steigerwald wrote:
> > For testing it I add it to 4.16.2 with the patches I have already?
>
> You could try to only apply this patch to have a test. :)
I tested 4.16.3 with just your patch (+ the unrelated btrfs
Hi Paolo
When I ran the test script, I got this:
8:0 rbps=1000 wbps=0 riops=0 wiops=0 idle=0 latency=max
The idle is 0, so I'm afraid io.low would not work.
Please refer to the following code in tg_set_limit
/* force user to configure all settings for low limit */
if
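As the quoted comment in tg_set_limit suggests, io.low requires all settings to be configured explicitly before it takes effect. A hypothetical example line (device numbers and values are illustrative only, not from the thread):

```
# echo "8:0 rbps=10485760 wbps=max riops=max wiops=max idle=1000 latency=10" > io.low
```

Leaving idle at 0 (as in the output above) is exactly the case the check in tg_set_limit is meant to catch.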
> Il giorno 23 apr 2018, alle ore 08:35, jianchao.wang
> ha scritto:
>
> Hi Paolo
>
> On 04/23/2018 01:32 PM, Paolo Valente wrote:
>> Thanks for sharing this fix. I tried it too, but nothing changes in
>> my test :(
>
> That's really sad.
>
>> At this point,
> Il giorno 23 apr 2018, alle ore 08:05, Joseph Qi ha
> scritto:
>
> Hi Paolo,
Hi Joseph,
thanks for chiming in.
> What's your idle and latency config?
I didn't set them at all, as the only (explicit) requirement in my
basic test is that one of the groups is guaranteed
Hi Jianchao.
jianchao.wang - 17.04.18, 16:34:
> On 04/17/2018 08:10 PM, Martin Steigerwald wrote:
> > For testing it I add it to 4.16.2 with the patches I have already?
>
> You could try to only apply this patch to have a test.
Compiling now to have a test.
Thanks,
--
Martin
Hi Paolo
On 04/23/2018 01:32 PM, Paolo Valente wrote:
> Thanks for sharing this fix. I tried it too, but nothing changes in
> my test :(
That's really sad.
> At this point, my doubt is still: am I getting io.low limit right? I
> understand that an I/O-bound group should be guaranteed a rbps
Hi Paolo,
What's your idle and latency config?
IMO, io.low will allow other groups to use more bandwidth if a cgroup's
average idle time is high or its latency is low. In such cases, the low
limit won't be guaranteed.
Thanks,
Joseph
On 18/4/22 17:23, Paolo Valente wrote:
> Hi Shaohua, all,
> at last, I started