Hi folks, 

For those who missed it, the fun was here :-) : 
https://youtu.be/IgpVOOVNJc0?t=3715 

Frederic. 

----- On Oct 11, 2017, at 17:05, Jake Young <[email protected]> wrote: 

> On Wed, Oct 11, 2017 at 8:57 AM Jason Dillaman <[email protected]> wrote:

>> On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López <[email protected]> wrote:

>>> As far as I understand, there are two ways of setting up iSCSI for Ceph:

>>> 1- using the kernel (lrbd), only available on SUSE, CentOS, Fedora...

>> The target_core_rbd approach is only utilized by SUSE (and its derivatives
>> like PetaSAN) as far as I know. This was the initial approach for Red
>> Hat-derived kernels as well, until the upstream kernel maintainers indicated
>> that they really do not want a specialized target backend for just krbd. The
>> next attempt was to re-use the existing target_core_iblock to interface with
>> krbd via the kernel's block layer, but that hit similar upstream walls trying
>> to get support for SCSI command passthrough to the block layer.

>>> 2- using userspace (tcmu, ceph-iscsi-config, ceph-iscsi-cli)

>> The TCMU approach is what upstream and Red Hat-derived kernels will support
>> going forward.
>> The lrbd project was developed by SUSE to assist with configuring a cluster
>> of iSCSI gateways via the CLI. The ceph-iscsi-config + ceph-iscsi-cli
>> projects are similar in goal but take a slightly different approach.
>> ceph-iscsi-config provides a set of common Python libraries that can be
>> re-used by ceph-iscsi-cli and ceph-ansible for deploying and configuring the
>> gateway. The ceph-iscsi-cli project provides the gwcli tool, which acts as a
>> cluster-aware replacement for targetcli.
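
For anyone curious what that looks like in practice, a minimal gwcli session
follows the same interactive pattern as targetcli. This is only a sketch: the
IQNs, gateway names and addresses, and the rbd/disk_1 image below are
placeholders, and the exact command syntax differs between ceph-iscsi-cli
releases, so check gwcli's built-in help against your version.

    # gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:iscsi-igw
    /iscsi-targets> cd iqn.2003-01.com.example.iscsi-gw:iscsi-igw/gateways
    /gateways> create ceph-gw-1 192.168.0.11
    /gateways> create ceph-gw-2 192.168.0.12
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=10G
    /> cd /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:iscsi-igw/hosts
    /hosts> create iqn.1994-05.com.example:client1
    /hosts> disk add rbd/disk_1

The resulting target/disk/client layout is kept in a shared configuration
object in the cluster, which is what makes gwcli cluster-aware: the other
gateways pick up the definitions instead of each node being configured
independently.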

>>> I don't know which one is better. I see that official support is pointing
>>> to TCMU, but I haven't run any benchmarks.

>> We (upstream Ceph) provide documentation for the TCMU approach because that
>> is what is available against generic upstream kernels (starting with 4.14
>> when it's out). Since it uses librbd (which still needs to undergo some
>> performance improvements) instead of krbd, we know that librbd 4k IO
>> performance is slower compared to krbd, but 64k and 128k IO performance is
>> comparable. However, I think most iSCSI tuning guides would already tell you
>> to use larger block sizes (i.e. 64K NTFS blocks or 32K-128K ESX blocks).
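
If you want to check that block-size behaviour on your own cluster before
putting iSCSI in front of it, a rough comparison can be run against a scratch
image. The pool/image name below is a placeholder, the rbd bench syntax is the
Luminous-era form, and both tools write to the image, so don't point them at
real data:

    # librbd path: 4K vs 64K writes through librbd
    rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 1G rbd/disk_1
    rbd bench --io-type write --io-size 64K --io-threads 16 --io-total 4G rbd/disk_1

    # krbd path: map the image and drive the same sizes through the block layer
    # (adjust /dev/rbd0 to whatever "rbd map" prints)
    rbd map rbd/disk_1
    fio --name=krbd-4k --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based
    fio --name=krbd-64k --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=64k --iodepth=16 --runtime=60 --time_based

Numbers taken this way exclude the iSCSI layer itself, but they give a feel
for how much of the gap Jason describes comes from librbd vs krbd rather than
from the gateway.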

>>> Has anyone tried both? Do they give the same output? Are both able to
>>> manage multiple iSCSI targets mapped to a single RBD disk?

>> Assuming you mean multiple portals mapped to the same RBD disk, the answer
>> is yes, both approaches should support ALUA. The ceph-iscsi-config tooling
>> will only configure Active/Passive because we believe there are certain edge
>> conditions that could result in data corruption if configured for
>> Active/Active ALUA.
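
On a Linux initiator, that Active/Passive layout means multipathd should treat
the gateway paths as ALUA failover paths rather than spreading IO across them.
A sketch of the relevant /etc/multipath.conf stanza follows; the device
matching and timeout values are illustrative and may need adjusting for your
distribution and gateway version:

    devices {
            device {
                    vendor                 "LIO-ORG"
                    hardware_handler       "1 alua"
                    path_grouping_policy   "failover"
                    prio                   "alua"
                    path_selector          "queue-length 0"
                    path_checker           "tur"
                    failback               60
                    fast_io_fail_tmo       25
                    no_path_retry          "queue"
            }
    }

After reloading multipathd, "multipath -ll" should show one path group active
(the gateway currently owning the LUN) and the other merely enabled, which is
the failover behaviour the tooling configures on the target side.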

>> The TCMU approach also does not currently support SCSI persistent
>> reservation groups (needed for Windows clustering) because that support
>> isn't available in the upstream kernel. The SUSE kernel has an approach that
>> utilizes two round-trips to the OSDs for each IO to simulate PGR support.
>> Earlier this summer I believe SUSE started to look into how to get generic
>> PGR support merged into the upstream kernel, using corosync/dlm to
>> synchronize the states between multiple nodes in the target. I am not sure
>> of the current state of that work, but it would benefit all LIO targets when
>> complete.

>>> I will try to do my own testing, but if anyone has tried it already, that
>>> would be really helpful.

>>> Jorge Pinilla López
>>> [email protected]



>> --
>> Jason

> Thanks Jason!

> You should cut and paste that answer into a blog post on ceph.com. It is a
> great summary of where things stand.

> Jake

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
