On 2/1/18 6:02 PM, Joseph Qi wrote:
> Hi Bart,
>
> On 18/2/2 00:16, Bart Van Assche wrote:
>> On Thu, 2018-02-01 at 09:53 +0800, Joseph Qi wrote:
>>> I'm afraid the risk may also exist in blk_cleanup_queue, which will
>>> set queue_lock to the default internal lock.
On 2/2/18 8:03 AM, Arnd Bergmann wrote:
> skd includes slab_def.h to get access to the slab cache object size.
> However, including this header breaks when we use SLUB or SLOB instead of
> the SLAB allocator, since the structure layout is completely different,
> as shown by this warning when we
Add the IBNBD Makefile and Kconfig, and the corresponding lines in the
upper block layer files.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
---
drivers/block/Kconfig | 2 ++
README with description of major sysfs entries.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
---
drivers/block/ibnbd/README | 272 +
1
This is the main functionality of the ibnbd-client module, which provides
an interface to map a remote device as a local block device /dev/ibnbd
and feeds IBTRS with IO requests.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
This provides helper functions for IO submission to a file or block device.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
---
drivers/block/ibnbd/ibnbd-srv-dev.c | 410
This series introduces IBNBD/IBTRS modules.
IBTRS (InfiniBand Transport) is a reliable high speed transport library
which allows for establishing connection between client and server
machines via RDMA. It is optimized to transfer (read/write) IO blocks
in the sense that it follows the BIO
This is a set of library functions existing as an ibtrs-core module,
used by the client and server modules.
Mainly these functions wrap IB and RDMA calls and provide a somewhat
higher abstraction for implementing the IBTRS protocol on the client
or server side.
Signed-off-by: Roman Pen
This is the main functionality of the ibtrs-server module, which accepts
a set of RDMA connections (a so-called IBTRS session), creates/destroys
sysfs entries associated with an IBTRS session and notifies the upper
layer (the user of the IBTRS API) about RDMA requests or link events.
Signed-off-by: Roman Pen
This header describes the main structs and functions used by the
ibnbd-server module, namely structs for managing sessions from different
clients and mapped (opened) devices.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack
This header describes the main structs and functions used by the
ibnbd-client module, mainly for managing IBNBD sessions and mapped block
devices, and creating and destroying sysfs entries.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
This is the sysfs interface to IBNBD block devices on client side:
/sys/kernel/ibnbd_client/
|- map_device
| *** maps remote device
|
|- devices/
*** all mapped devices
/sys/block/ibnbd/ibnbd_client/
|- unmap_device
| *** unmaps device
|
|- state
This is the main functionality of the ibnbd-server module, which handles
IBTRS events and IBNBD protocol requests, like map (open) or unmap (close)
device. The server side is also responsible for processing incoming IBTRS
IO requests and forwarding them to local mapped devices.
Signed-off-by: Roman Pen
Cc: Danil Kipnis
Cc: Jack Wang
---
MAINTAINERS | 14 ++
1 file changed, 14 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 18994806e441..fad9c2529f8a
These are common private headers with IBTRS protocol structures,
logging, sysfs and other helper functions, which are used on
both client and server sides.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
README with description of major sysfs entries.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
---
drivers/infiniband/ulp/ibtrs/README | 238
1
Add IBTRS Makefile, Kconfig and also corresponding lines into upper
layer infiniband/ulp files.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
---
drivers/infiniband/Kconfig
This header describes the main structs and functions used by the
ibtrs-client module, mainly for managing IBTRS sessions, creating/destroying
sysfs entries, and accounting statistics on the client side.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Introduce a public header which provides a set of API functions to
establish RDMA connections from a client to a server machine using the
IBTRS protocol, which manages RDMA connections for each session and
does multipathing and load balancing.
Main functions for client (active) side:
ibtrs_clt_open() -
This header describes the main structs and functions used by the
ibtrs-server module, mainly for accepting IBTRS sessions, creating/destroying
sysfs entries, and accounting statistics on the server side.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
This introduces a set of functions used on the server side to account
statistics of RDMA data sent/received.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
---
This is the sysfs interface to IBNBD mapped devices on server side:
/sys/kernel/ibnbd_server/devices//
|- block_dev
| *** link pointing to the corresponding block device sysfs entry
|
|- sessions//
| *** sessions directory
|
|- read_only
| *** is
This is the sysfs interface to IBTRS sessions on server side:
/sys/kernel/ibtrs_server//
*** IBTRS session accepted from a client peer
|
|- paths//
*** established paths from a client in a session
|
|- disconnect
| *** disconnect path
|
|-
On 2/2/18 1:07 AM, kemi wrote:
> Hi, Jens
> Could you help to merge this patch to your tree? Thanks
Yes, I'll queue it up, thanks.
--
Jens Axboe
This introduces a set of functions used on the client side to account
statistics of RDMA data sent/received, the number of IOs in flight,
latency, CPU migrations, etc. Almost all statistics are collected
using percpu variables.
Signed-off-by: Roman Pen
Signed-off-by: Danil
This is the main functionality of the ibtrs-client module, which manages
a set of RDMA connections for each IBTRS session and does multipathing,
load balancing and failover of RDMA requests.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
This is the sysfs interface to IBTRS sessions on client side:
/sys/kernel/ibtrs_client//
*** IBTRS session created by ibtrs_clt_open() API call
|
|- max_reconnect_attempts
| *** number of reconnect attempts for session
|
|- add_path
| *** adds another connection
skd includes slab_def.h to get access to the slab cache object size.
However, including this header breaks when we use SLUB or SLOB instead of
the SLAB allocator, since the structure layout is completely different,
as shown by this warning when we build this driver in one of the invalid
On 2/2/18 7:08 AM, Roman Pen wrote:
> This is the main functionality of the ibnbd-client module, which provides
> an interface to map a remote device as a local block device /dev/ibnbd
> and feeds IBTRS with IO requests.
Kill the legacy IO path for this; the driver should only support
blk-mq. Hence kill off
These are common private headers with IBNBD protocol structures,
logging, sysfs and other helper functions, which are used on
both client and server sides.
Signed-off-by: Roman Pen
Signed-off-by: Danil Kipnis
Cc: Jack Wang
On Fri, Feb 02 2018 at 1:19am -0500,
NeilBrown wrote:
> On Mon, Jan 29 2018, Mike Snitzer wrote:
>
> > I'd like to enable bio-based DM to _not_ need to clone bios. But to do
> > so each bio-based DM target's required per-bio-data would need to be
> > provided by upper layer
On Fri, 2018-02-02 at 09:02 +0800, Joseph Qi wrote:
> We triggered this race when using single queue. I'm not sure if it
> exists in multi-queue.
Regarding the races between modifying the queue_lock pointer and the code that
uses that pointer, I think the following construct in
On Fri, 2018-02-02 at 16:07 +, Bart Van Assche wrote:
> On Fri, 2018-02-02 at 15:08 +0100, Roman Pen wrote:
> > Since the first version the following was changed:
> >
> >- Load-balancing and IO fail-over using multipath features were added.
> >- Major parts of the code were rewritten,
On Fri, 2018-02-02 at 15:08 +0100, Roman Pen wrote:
> o Simple configuration of IBNBD:
>- Server side is completely passive: volumes do not need to be
> explicitly exported.
That sounds like a security hole? I think the ability to configure whether or
not an initiator is allowed to log
On Fri, 2018-02-02 at 15:09 +0100, Roman Pen wrote:
> +Entries under /sys/kernel/ibnbd_client/
> +===
> [ ... ]
You will need Greg KH's permission to add new entries directly under
/sys/kernel.
Since I think that it is unlikely that he will give that
On Fri, 2018-02-02 at 15:08 +0100, Roman Pen wrote:
> Since the first version the following was changed:
>
>- Load-balancing and IO fail-over using multipath features were added.
>- Major parts of the code were rewritten, simplified and overall code
> size was reduced by a quarter.
On Fri, 2018-02-02 at 15:08 +0100, Roman Pen wrote:
> +static inline struct ibtrs_tag *
> +__ibtrs_get_tag(struct ibtrs_clt *clt, enum ibtrs_clt_con_type con_type)
> +{
> + size_t max_depth = clt->queue_depth;
> + struct ibtrs_tag *tag;
> + int cpu, bit;
> +
> + cpu = get_cpu();
>
Hi Bart,
On 18/2/3 00:21, Bart Van Assche wrote:
> On Fri, 2018-02-02 at 09:02 +0800, Joseph Qi wrote:
>> We triggered this race when using single queue. I'm not sure if it
>> exists in multi-queue.
>
> Regarding the races between modifying the queue_lock pointer and the code that
> uses that
Quite a few HBAs (such as HPSA, megaraid, mpt3sas, ...) support multiple
reply queues, but the tag space is often HBA-wide.
These HBAs have switched to use pci_alloc_irq_vectors(PCI_IRQ_AFFINITY)
for automatic affinity assignment.
Now 84676c1f21e8ff5(genirq/affinity: assign vectors to all possible CPUs)
This patch changes tags->breserved_tags, tags->bitmap_tags and
tags->active_queues into pointers, and prepares for supporting global tags.
No functional change.
Cc: Laurence Oberman
Cc: Mike Snitzer
Cc: Christoph Hellwig
Signed-off-by:
Hi All,
This patchset supports global tags which was started by Hannes originally:
https://marc.info/?l=linux-block&m=149132580511346&w=2
Also introduce 'force_blk_mq' in 'struct scsi_host_template', so that a
driver can avoid supporting two IO paths (legacy and blk-mq), especially
recent
This patch introduces the 'g_global_tags' parameter so that we can
test this feature with null_blk easily.
No obvious performance drop is seen with global_tags when the whole hw
depth is kept the same:
1) no 'global_tags', each hw queue depth is 1, and 4 hw queues
modprobe null_blk queue_mode=2
Now 84676c1f21e8ff5(genirq/affinity: assign vectors to all possible CPUs)
has been merged to V4.16-rc, and it is easy to allocate all offline CPUs
for some irq vectors, this can't be avoided even though the allocation
is improved.
For example, on a 8cores VM, 4~7 are not-present/offline, 4 queues
From the SCSI driver's view, it is a bit troublesome to support both blk-mq
and non-blk-mq at the same time, especially when drivers need to support
multiple hw queues.
This patch introduces 'force_blk_mq' to scsi_host_template so that drivers
can provide blk-mq-only support, and driver code can avoid the
On Fri, Feb 02 2018 at 11:08am -0500,
Mike Snitzer wrote:
>
> But if the bioset enhancements are implemented properly then the kernels
> N biosets shouldn't need to be in doubt. They'll all just adapt to have
> N backing mempools (N being for N conflicting front_pad
Hi Jan,
On Thu, Jan 25, 2018 at 12:57:27PM +0100, Jan Kara wrote:
> Hello,
>
> this is about a problem I have identified last month and for which I still
> don't have good solution. Some discussion of the problem happened here [1]
> where also technical details are posted but culprit of the
Hi, Jens
Could you help to merge this patch to your tree? Thanks
On Nov 3, 2017 10:29, kemi wrote:
>
>
> On Oct 24, 2017 09:16, Kemi Wang wrote:
>> It's expensive to set buffer flags that are already set, because that
>> causes a costly cache line transition.
>>
>> A common case is setting the
On Mon, Jan 29 2018, Mike Snitzer wrote:
> We currently don't restack the queue_limits if the lowest, or
> intermediate, layer of an IO stack changes.
>
> This is particularly unfortunate in the case of FLUSH/FUA which may
> change if/when a HW controller's BBU fails; whereby requiring the device