Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-06-04 Thread Danil Kipnis
Hi Doug,

thanks for the feedback. You read the cover letter correctly: our
transport library implements multipath (load balancing and failover)
on top of the RDMA API. Its name "IBTRS" is slightly misleading in
that regard: it can sit on top of RoCE as well. The library allows for
"bundling" multiple RDMA "paths" (source addr - destination addr
pairs) into one "session". So our session consists of one or more
paths, and each path under the hood consists of as many QPs (each
connecting source with destination) as there are CPUs on the client
system. The user load (in our case IBNBD is a block device and
generates block requests) is balanced across those QPs on a per-CPU
basis.
As I understand it, this is something very different from what SMC-R
is doing. Am I right? Do you know what stage MP-RDMA development is
currently at?
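
To make the layout concrete, here is a rough sketch (the names are
invented for illustration and do not match the actual IBTRS code):

#include <linux/atomic.h>
#include <linux/smp.h>
#include <rdma/ib_verbs.h>

/* Illustrative only: a session bundles several paths; each path holds
 * one QP per client CPU, and requests are spread over them per-CPU. */
struct ex_path {
	struct ib_qp **qps;		/* one QP per CPU on the client */
};

struct ex_session {
	struct ex_path *paths;		/* one or more RDMA paths */
	unsigned int nr_paths;
	atomic_t last_path;		/* for balancing across paths */
};

/* Pick the QP a request goes out on: choose a path, then take the QP
 * belonging to the CPU that submitted the request. */
static struct ib_qp *ex_pick_qp(struct ex_session *sess)
{
	unsigned int p;

	p = (unsigned int)atomic_inc_return(&sess->last_path) % sess->nr_paths;
	return sess->paths[p].qps[raw_smp_processor_id()];
}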

Best,

Danil Kipnis.

P.S. Sorry for the duplicate, if any; the first mail was returned because of HTML.

On Thu, Feb 8, 2018 at 7:10 PM Bart Van Assche  wrote:
>
> On Thu, 2018-02-08 at 18:38 +0100, Danil Kipnis wrote:
> > thanks for the link to the article. To the best of my understanding,
> > the authors suggest authenticating the devices first and only then
> > authenticating the users of those devices in order to grant access to
> > a corporate service. They also mention in the presentation the
> > current trend of moving corporate services into the cloud. But I
> > think this is not about the devices that such a cloud itself is built
> > of. Isn't a cloud first built out of devices connected via IB, and
> > only then are users (and their devices) given access to the services
> > of that cloud as a whole? If a malicious user has already plugged his
> > device into an IB switch of the cloud's internal infrastructure,
> > isn't it game over anyway? Couldn't he just take the hard drives
> > instead of mapping them?
>
> Hello Danil,
>
> It seems like we each have been focussing on different aspects of the article.
> The reason I referred to that article is because I read the following in
> that article: "Unlike the conventional perimeter security model, BeyondCorp
> doesn’t gate access to services and tools based on a user’s physical location
> or the originating network [ ... ] The zero trust architecture spells trouble
> for traditional attacks that rely on penetrating a tough perimeter to waltz
> freely within an open internal network." Suppose e.g. that an organization
> decides to use RoCE or iWARP for connectivity between block storage initiator
> systems and block storage target systems and that it has a single company-
> wide Ethernet network. If the target system does not restrict access based
> on initiator IP address then any penetrator would be able to access all the
> block devices exported by the target after a SoftRoCE or SoftiWARP initiator
> driver has been loaded. If the target system however restricts access based
> on the initiator IP address then that would make it harder for a penetrator
> to access the exported block storage devices. Instead of just penetrating the
> network access, IP address spoofing would have to be used or access would
> have to be obtained to a system that has been granted access to the target
> system.
>
> Thanks,
>
> Bart.
>
>
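
To make the restriction Bart describes concrete: a target could keep
an allow-list of initiator addresses and reject logins from anywhere
else. A minimal sketch with invented names and a hard-coded list (not
code from any of the drivers discussed):

#include <linux/in.h>
#include <linux/inet.h>
#include <linux/kernel.h>
#include <linux/string.h>

/* Hypothetical allow-list check a target might run at connect time. */
static const char * const allowed_initiators[] = {
	"192.168.122.10",
	"192.168.122.11",
};

static bool initiator_allowed(const struct sockaddr_in *peer)
{
	char addr[INET_ADDRSTRLEN];
	size_t i;

	snprintf(addr, sizeof(addr), "%pI4", &peer->sin_addr);
	for (i = 0; i < ARRAY_SIZE(allowed_initiators); i++)
		if (!strcmp(addr, allowed_initiators[i]))
			return true;
	return false;			/* reject the connection attempt */
}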


-- 
Danil Kipnis
Linux Kernel Developer


Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-02-08 Thread Danil Kipnis
On Wed, Feb 7, 2018 at 6:32 PM, Bart Van Assche  wrote:
> On Wed, 2018-02-07 at 18:18 +0100, Roman Penyaev wrote:
>> So the question is: are there real life setups where
>> some of the local IB network members can be untrusted?
>
> Hello Roman,
>
> You may want to read more about the latest evolutions with regard to network
> security. An article that I can recommend is the following: "Google reveals
> own security regime policy trusts no network, anywhere, ever"
> (https://www.theregister.co.uk/2016/04/06/googles_beyondcorp_security_policy/).
>
> If data centers start deploying RDMA across the entire data center
> (maybe they are already doing this), then I think they will want to
> restrict access to block devices to only those initiator systems that
> need it.
>
> Thanks,
>
> Bart.
>
>

Hi Bart,

thanks for the link to the article. To the best of my understanding,
the authors suggest authenticating the devices first and only then
authenticating the users of those devices in order to grant access to
a corporate service. They also mention in the presentation the current
trend of moving corporate services into the cloud. But I think this is
not about the devices that such a cloud itself is built of. Isn't a
cloud first built out of devices connected via IB, and only then are
users (and their devices) given access to the services of that cloud
as a whole? If a malicious user has already plugged his device into an
IB switch of the cloud's internal infrastructure, isn't it game over
anyway? Couldn't he just take the hard drives instead of mapping them?

Thanks,

Danil.


Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-02-06 Thread Danil Kipnis
On Mon, Feb 5, 2018 at 7:38 PM, Bart Van Assche <bart.vanass...@wdc.com> wrote:
> On 02/05/18 08:40, Danil Kipnis wrote:
>>
>> It just occurred to me that we could easily extend the interface in
>> such a way that each client (i.e. each session) would have its own
>> directory on the server side with the devices it can access. I.e.
>> instead of just one "dev_search_path" per server, any client would
>> only be able to access devices under dev_search_path/session_name
>> (the session name must already be generated by each client in a
>> unique way). This way one would have explicit control over which
>> devices can be accessed by which clients. Do you think that would do
>> it?
>
>
> Hello Danil,
>
> That sounds interesting to me. However, I think that approach requires
> configuring client access completely before the kernel target side
> module is loaded. It does not allow configuring permissions dynamically
> after the kernel target module has been loaded. Additionally, I don't
> see how to support attributes per (initiator, block device) pair with
> that approach.
> LIO e.g. supports the
> /sys/kernel/config/target/srpt/*/*/acls/*/lun_*/write_protect attribute. You
> may want to implement similar functionality if you want to convince more
> users to use IBNBD.
>
> Thanks,
>
> Bart.

Hello Bart,

the configuration (which devices can be accessed by a particular
client) can also happen after the kernel target module is loaded. The
directory in dev_search_path is a module parameter and is fixed. It
contains for example "/ibnbd_devices/". But a particular client X
would only be able to access the devices located in the subdirectory
"/ibnbd_devices/client_x/" (the session name here is client_x). One
can add or remove the devices from that directory (those are just
symlinks to /dev/xxx) at any time - before or after the server module
is loaded. But you are right, we need something additional in order to
be able to specify which devices a client can access writable and
which read-only. Maybe additional subdirectories "wr" and "ro" for
each client: those under /ibnbd_devices/client_x/ro/ can only be read
by client_x, and those in /ibnbd_devices/client_x/wr/ can also be
written to? A rough sketch of the idea follows below.
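
Something like this on the server side (a sketch with made-up names;
the real dev_search_path handling in the module may differ):

#include <linux/errno.h>
#include <linux/kernel.h>

/*
 * Build the only path the server is willing to open for a session.
 * With dev_search_path = "/ibnbd_devices" and session "client_x", a
 * writable mapping is resolved under /ibnbd_devices/client_x/wr/ and
 * a read-only one under /ibnbd_devices/client_x/ro/.
 */
static int ex_resolve_dev(char *buf, size_t len,
			  const char *dev_search_path,
			  const char *session_name,
			  const char *dev_name, bool writable)
{
	int n = snprintf(buf, len, "%s/%s/%s/%s",
			 dev_search_path, session_name,
			 writable ? "wr" : "ro", dev_name);

	return (n < 0 || (size_t)n >= len) ? -ENAMETOOLONG : 0;
}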

Thanks,

Danil.


Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-02-05 Thread Danil Kipnis
On Mon, Feb 5, 2018 at 3:17 PM, Sagi Grimberg  wrote:
>
>>>> Hi Bart,
>>>>
>>>> Another two cents from me :)
>>>> On Fri, Feb 2, 2018 at 6:05 PM, Bart Van Assche 
>>>> wrote:
>>>>>
>>>>>
>>>>> On Fri, 2018-02-02 at 15:08 +0100, Roman Pen wrote:
>>>>>>
>>>>>>
>>>>>> o Simple configuration of IBNBD:
>>>>>>  - Server side is completely passive: volumes do not need to be
>>>>>>    explicitly exported.
>>>>>
>>>>>
>>>>>
>>>>> That sounds like a security hole? I think the ability to configure
>>>>> whether or not an initiator is allowed to log in is essential and
>>>>> also which volumes an initiator has access to.


>>>> Our design targets a well-controlled production environment, so
>>>> security is handled in another layer.
>>>
>>>
>>>
>>> What will happen to a new adopter of the code you are contributing?
>>
>>
>> Hi Sagi, Hi Bart,
>> thanks for your feedback.
>> We considered the "storage cluster" setup, where each ibnbd client
>> has access to each ibnbd server. Each ibnbd server manages the
>> devices under its "dev_search_path" and can provide access to them
>> to any ibnbd client in the network.
>
>
> I don't understand how that helps?
>
>> On top of that, the ibnbd server has an additional
>> "artificial" restriction that a device can be mapped in writable
>> mode by only one client at a time.
>
>
> I think one would still need the option to disallow readable export as
> well.

It just occurred to me that we could easily extend the interface in
such a way that each client (i.e. each session) would have its own
directory on the server side with the devices it can access. I.e.
instead of just one "dev_search_path" per server, any client would
only be able to access devices under dev_search_path/session_name
(the session name must already be generated by each client in a
unique way). This way one would have explicit control over which
devices can be accessed by which clients. Do you think that would do
it?


Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

2018-02-05 Thread Danil Kipnis
>
>> Hi Bart,
>>
>> Another two cents from me :)
>> On Fri, Feb 2, 2018 at 6:05 PM, Bart Van Assche 
>> wrote:
>>>
>>> On Fri, 2018-02-02 at 15:08 +0100, Roman Pen wrote:

>>>> o Simple configuration of IBNBD:
>>>>  - Server side is completely passive: volumes do not need to be
>>>>    explicitly exported.
>>>
>>>
>>> That sounds like a security hole? I think the ability to configure
>>> whether or not an initiator is allowed to log in is essential and
>>> also which volumes an initiator has access to.
>>
>> Our design targets a well-controlled production environment, so
>> security is handled in another layer.
>
>
> What will happen to a new adopter of the code you are contributing?

Hi Sagi, Hi Bart,
thanks for your feedback.
We considered the "storage cluster" setup, where each ibnbd client has
access to each ibnbd server. Each ibnbd server manages the devices
under its "dev_search_path" and can provide access to them to any
ibnbd client in the network. On top of that, the ibnbd server has an
additional "artificial" restriction that a device can be mapped in
writable mode by only one client at a time. A sketch of that check is
below.
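
For illustration, the single-writer restriction could look roughly
like this (invented names, not the actual IBNBD server code):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/types.h>

/*
 * A device may be mapped read-only by many clients, but in writable
 * mode by at most one client at a time.
 */
struct ex_dev {
	struct mutex lock;
	bool open_write;	/* true while one client maps it writable */
};

static int ex_map_dev(struct ex_dev *dev, bool writable)
{
	int ret = 0;

	mutex_lock(&dev->lock);
	if (writable) {
		if (dev->open_write)
			ret = -EBUSY;	/* already mapped writable */
		else
			dev->open_write = true;
	}
	mutex_unlock(&dev->lock);
	return ret;
}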

-- 
Danil