Hi Sam, 

Pacemaker will take care of HA failover, but you will need to propagate
the persistent reservation (PR) data yourself.
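For the Pacemaker side, a minimal active/passive sketch using the pcs shell might look like the following. All resource names, the IP, and the availability of the OCF rbd agent are illustrative assumptions (check what your distribution actually ships), and note this does nothing about PR propagation:

```shell
# Hedged sketch only: maps the RBD image, starts the LIO target, and
# brings up a floating IP on whichever node is active.

# RBD map resource (an OCF "rbd" agent has shipped with some Ceph
# packages; verify the provider name on your system):
pcs resource create rbd-lun ocf:ceph:rbd \
    name=iscsi-lun pool=rbd cephconf=/etc/ceph/ceph.conf \
    op monitor interval=10s

# Floating IP the initiators connect to (example address):
pcs resource create iscsi-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s

# LIO configuration restore service (target.service on CentOS):
pcs resource create lio-target systemd:target op monitor interval=30s

# Group keeps them colocated and ordered: rbd map -> target -> VIP
pcs resource group add iscsi-ha rbd-lun lio-target iscsi-vip
```

On failover Pacemaker tears the group down on the failed node and brings it up on the passive one; initiators reconnect to the same VIP.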
If you are interested in a solution that works out of the box with
Windows, have a look at PetaSAN (www.petasan.org).
It works well with MS Hyper-V / Storage Spaces / Scale-Out File Server.

Cheers
/Maged 

On 2017-08-09 18:42, Samuel Soulard wrote:

> Hmm :(  Even for an Active/Passive configuration?  I'm guessing we will need 
> to do something with Pacemaker in the meantime? 
> 
> On Wed, Aug 9, 2017 at 12:37 PM, Jason Dillaman <jdill...@redhat.com> wrote:
> 
>> I can probably say that it won't work out of the gate for Hyper-V,
>> since it most likely will require iSCSI persistent reservations. That
>> support is still being added to the kernel, because right now PR state
>> isn't being distributed to all the target portal group nodes.
>> 
>> On Wed, Aug 9, 2017 at 12:30 PM, Samuel Soulard
>> 
>> <samuel.soul...@gmail.com> wrote:
>>> Thanks! We'll revisit this subject once it is released. Waiting on this
>>> to perform some tests for Hyper-V/VMware iSCSI LUNs :)
>>> 
>>> Sam
>>> 
>>> On Wed, Aug 9, 2017 at 10:35 AM, Jason Dillaman <jdill...@redhat.com> wrote:
>>>> 
>>>> Yes, RHEL/CentOS 7.4 or kernel 4.13 (once it's released).
>>>> 
>>>> On Wed, Aug 9, 2017 at 6:56 AM, Samuel Soulard <samuel.soul...@gmail.com>
>>>> wrote:
>>>>> Hi Jason,
>>>>> 
>>>>> Oh the documentation is awesome:
>>>>> 
>>>>> https://github.com/ritz303/ceph/blob/6ab7bc887b265127510c3c3fde6dbad0e047955d/doc/rbd/iscsi-target-cli.rst
>>>>> 
>>>>> So I assume that this is not yet available for CentOS and requires us to
>>>>> wait until CentOS 7.4 is released?
>>>>> 
>>>>> Thanks for the documentation, it makes everything more clear.
>>>>> 
>>>>> On Tue, Aug 8, 2017 at 9:37 PM, Jason Dillaman <jdill...@redhat.com>
>>>>> wrote:
>>>>>> 
>>>>>> We are working hard to formalize active/passive iSCSI configuration
>>>>>> across Linux/Windows/ESX via LIO. We have integrated librbd into LIO's
>>>>>> tcmu-runner and have developed a set of support applications to
>>>>>> manage the clustered configuration of your iSCSI targets. There is
>>>>>> some preliminary documentation here [1] that will be merged once we
>>>>>> can finish our testing.
>>>>>> 
>>>>>> [1] https://github.com/ceph/ceph/pull/16182
>>>>>> 
>>>>>> On Tue, Aug 8, 2017 at 4:45 PM, Samuel Soulard
>>>>>> <samuel.soul...@gmail.com>
>>>>>> wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> Platform: CentOS 7, Luminous 12.1.2
>>>>>>>
>>>>>>> First time here, but are there any guides or guidelines out there on
>>>>>>> how to configure iSCSI gateways in HA, so that if one gateway fails,
>>>>>>> I/O can continue on the passive node?
>>>>>>>
>>>>>>> What I've done so far:
>>>>>>> -iSCSI node with a Ceph client maps the RBD on boot
>>>>>>> -RBD has the exclusive-lock and layering features enabled
>>>>>>> -targetd service made dependent on rbdmap.service
>>>>>>> -RBD exported through an iSCSI LUN
>>>>>>> -Windows iSCSI initiator can map the LUN and format / write to it
>>>>>>> (awesome)
>>>>>>>
>>>>>>> Now I have no idea where to start to set up an active/passive scenario
>>>>>>> for LUNs exported with LIO. Any ideas?
>>>>>>>
>>>>>>> Also, the web dashboard seems to hint that it can get stats for the
>>>>>>> various clients of the iSCSI gateways; I'm not sure where it pulls that
>>>>>>> information from. Is Luminous now shipping an iSCSI daemon of some sort?
>>>>>>>
>>>>>>> Thanks all!
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> ceph-users mailing list
>>>>>>> ceph-users@lists.ceph.com
>>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>>
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Jason
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Jason
>>> 
>>> 
>> 
>> --
>> Jason
> 

  
