[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Idan Shaby
Hi,

I think that there's already a bug on this issue:
*Bug 1577529* - [RFE] Support multiple hosts in posix storage domain path for cephfs


Regards,
Idan

On Thu, Aug 30, 2018 at 1:41 AM, Nir Soffer  wrote:

> On Thu, Aug 30, 2018 at 1:24 AM Stack Korora 
> wrote:
>
>> On 08/29/2018 10:44 AM, Stack Korora wrote:
>> > On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
>> >> Hi,
>> >>
>> >> maybe a foolish guess: Did you try this
>> >>
>> >> https://www.spinics.net/lists/ceph-devel/msg30958.html
>> >>
>> >> Mit freundlichen Grüßen,
>> >>
>> >> Markus Stockhausen
>> >> Head of Software Technology
>> > Thanks, I thought about that but I have not tried it. I will add it to
>> > my list to check today and will report back if it works (though I don't
>> > see why it wouldn't). It is good to know that someone else has at least
>> > had success with having a DNS entry for the multiple CephFS monitor
>> hosts.
>>
>> A single DNS entry did not work. Red Hat's oVirt did not like mounting
>> it even though it works fine via command line. :-/
>>
>> I now have a Red Hat ticket open so we will see what happens on that
>> front.
>>
>
> I can confirm that multiple host:port entries in a mount spec are not
> supported by the current code.
>
> You can see all the supported formats here:
> https://github.com/oVirt/vdsm/blob/d43376f3b2e913f3ee0ef226b5c196eb03da708f/tests/storage/fileutil_test.py#L182
>
> Nir
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SQVS6HFE4WXC2T36Q3JWE4AFBUVHTCNO/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Nir Soffer
On Thu, Aug 30, 2018 at 1:24 AM Stack Korora 
wrote:

> On 08/29/2018 10:44 AM, Stack Korora wrote:
> > On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
> >> Hi,
> >>
> >> maybe a foolish guess: Did you try this
> >>
> >> https://www.spinics.net/lists/ceph-devel/msg30958.html
> >>
> >> Mit freundlichen Grüßen,
> >>
> >> Markus Stockhausen
> >> Head of Software Technology
> > Thanks, I thought about that but I have not tried it. I will add it to
> > my list to check today and will report back if it works (though I don't
> > see why it wouldn't). It is good to know that someone else has at least
> > had success with having a DNS entry for the multiple CephFS monitor
> hosts.
>
> A single DNS entry did not work. Red Hat's oVirt did not like mounting
> it even though it works fine via command line. :-/
>
> I now have a Red Hat ticket open so we will see what happens on that front.
>

I can confirm that multiple host:port entries in a mount spec are not
supported by the current code.

You can see all the supported formats here:
https://github.com/oVirt/vdsm/blob/d43376f3b2e913f3ee0ef226b5c196eb03da708f/tests/storage/fileutil_test.py#L182
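
To illustrate (a simplified sketch only, not the actual vdsm code, and the
host names and addresses below are made up): the spec is expected to name a
single host before the path, so a comma-separated monitor list simply does
not match:

import re

# One host (optionally with :port), a colon, then an absolute path.
# This only approximates the formats exercised by the test linked above.
SPEC = re.compile(r'^(?P<host>[^,:]+(:\d+)?):(?P<path>/.*)$')

def parse(spec):
    m = SPEC.match(spec)
    if m is None:
        raise ValueError("unsupported mount spec: %s" % spec)
    return m.group('host'), m.group('path')

for spec in ("mon1.example.com:/ovirt/data",
             "mon1.example.com:6789:/ovirt/data",
             "10.0.0.1,10.0.0.2,10.0.0.3:/ovirt/data"):
    try:
        print(spec, "->", parse(spec))
    except ValueError as err:
        print(err)

The first two specs parse; the comma-separated one is rejected, which matches
the failures reported in this thread.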

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IEHMS5UZIF4HXYY7YEP6H66TMU74DAWW/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
On 08/29/2018 10:44 AM, Stack Korora wrote:
> On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
>> Hi,
>>
>> maybe a foolish guess: Did you try this
>>
>> https://www.spinics.net/lists/ceph-devel/msg30958.html
>>
>> Mit freundlichen Grüßen,
>>
>> Markus Stockhausen
>> Head of Software Technology
> Thanks, I thought about that but I have not tried it. I will add it to
> my list to check today and will report back if it works (though I don't
> see why it wouldn't). It is good to know that someone else has at least
> had success with having a DNS entry for the multiple CephFS monitor hosts.

A single DNS entry did not work. Red Hat's oVirt did not like mounting
it even though it works fine via command line. :-/
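
(For reference, the kind of manual mount that does work from the shell -- the
host name and paths below are made up:

mount -t ceph cephmon.example.com:/ovirt/data /mnt/test \
    -o name=admin,secretfile=/etc/ceph/admin.secret

It is the same single-name spec, entered as the POSIX storage domain path in
oVirt, that gets rejected.)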

I now have a Red Hat ticket open so we will see what happens on that front.

Thanks!
~Stack~
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4RFPEUFOIGHKA6MD2JPC72SBD6GHIZPZ/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
> Hi,
>
> maybe a foolish guess: Did you try this
>
> https://www.spinics.net/lists/ceph-devel/msg30958.html
>
> Mit freundlichen Grüßen,
>
> Markus Stockhausen
> Head of Software Technology

Thanks, I thought about that but I have not tried it. I will add it to
my list to check today and will report back if it works (though I don't
see why it wouldn't). It is good to know that someone else has at least
had success with having a DNS entry for the multiple CephFS monitor hosts.
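
(If I understand the suggestion correctly, the idea is a single DNS name that
resolves to all of the monitors -- names and addresses below are made up:

cephmon.example.com.  IN  A  10.0.0.1
cephmon.example.com.  IN  A  10.0.0.2
cephmon.example.com.  IN  A  10.0.0.3

and then a single-host path such as cephmon.example.com:/ovirt/data for the
Data Domain, instead of listing the three monitors with commas.)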

~Stack~
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3VJO3TMKB6WVOPORFDLC6D6OFWJDQGZS/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
On 08/29/2018 09:28 AM, Nir Soffer wrote:
>
>
> On Wed, 29 Aug 2018, 15:48 Stack Korora wrote:
>
> Greetings,
>
> My setup is a complete Red Hat install.
> Manager OS: RHEL 7.5
> Hypervisors OS: RHEL 7.5
> Running Red Hat CephFS (with their Ceph repos on all of the systems)
> with Red Hat Virtualization (aka oVirt).
> Everything is fully patched and updated as of yesterday morning.
>
> Yes, I have valid Red Hat support but I figured this was an odd enough
> problem that the community (and the Red-Hat-ers who hang out on this
> list) might have a better idea of where to start. (Although I
> might open
> a ticket anyway just because that is what support is for, right? :)
>
> Quick background:
>
> Your /etc/fstab when you mount an NFS share should probably look
> something like this:
> <server-ip>:/path/ /mount/point nfs <options> 0 0
>
> Just one IP is needed. Since part of the redundancy for Ceph is in the
> monitors, to mount CephFS the fstab should look something like this:
>
> <mon1-ip>,<mon2-ip>,<mon3-ip>:/path/ /mount/point ceph <options> 0 0
>
> Both the Ceph community and Red Hat recommend the comma separator for
> mounting multiple CephFS monitor nodes. (See section 4.2 point 3)
> 
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/mounting_and_unmounting_ceph_file_systems
>
>
> Now to oVirt/RHV.
>
> When I mount my Data Domain path as a Posix file system with a path of
> "<mon-ip>:/path/" it works splendidly well (especially after
> the last Red Hat kernel update!). I've done a bunch of stuff to it and
> it seems to work every time. However, I don't have the redundancy of
> multiple Ceph Monitors.
>
> When I mount my Data Domain path as a Posix file system with a path of
> "<mon1-ip>,<mon2-ip>,<mon3-ip>:/path/"
> most things seem to work. But I noticed a higher rate of failures. The
> only failure that I can trigger 100% of the time though is to mount a
> second data import domain and attempt to copy a vm disk from the
> import
> into the CephFS Data domain. Then I get an error like this:
>
> VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed:
> low level Image copy failed:
> (u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error:
> ParamsList: sep , in
> 
> /rhev/data-center/mnt/<mon1-ip>,<mon2-ip>,<mon3-ip>:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)
>
> Uh, oh. It seems that the commas in the mount path are causing the
> problems. So I went looking through the logs for "sep , in" and
> found a
> bunch more hits which makes me think that this is actually the problem
> message.
>
> I've switched back to just one IP address for the time being but I
> obviously want the Ceph redundancy back. While running on just one IP,
> the vm disk that refused to copy before had no problem copying. The
> _only_ change I made was dropping two of the three IP's from the Data
> Domain path option.
>
> Is this a bug, or did I do something wrong?
>
>
>
> Looks like a bug, maybe vdsm is not parsing the mount spec correctly.
>
> Please file a vdsm bug and attach vdsm logs showing the entire flow.
>
> Regardless, I'm not sure how well oVirt with cephfs is tested, or
> recommended.
>
> Adding Yaniv to add more info on this.
>
> Nir

Thank you. I can file a report today.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJKZDJGU5TSF2HQXK3F6C6QPO5IQDWQ3/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Nir Soffer
On Wed, 29 Aug 2018, 15:48 Stack Korora,  wrote:

> Greetings,
>
> My setup is a complete Red Hat install.
> Manager OS: RHEL 7.5
> Hypervisors OS: RHEL 7.5
> Running Red Hat CephFS (with their Ceph repos on all of the systems)
> with Red Hat Virtualization (aka oVirt).
> Everything is fully patched and updated as of yesterday morning.
>
> Yes, I have valid Red Hat support but I figured this was an odd enough
> problem that the community (and the Red-Hat-ers who hang out on this
> list) might have a better idea of where to start. (Although I might open
> a ticket anyway just because that is what support is for, right? :)
>
> Quick background:
>
> Your /etc/fstab when you mount an NFS share should probably look something
> like this:
> <server-ip>:/path/ /mount/point nfs <options> 0 0
>
> Just one IP is needed. Since part of the redundancy for Ceph is in the
> monitors, to mount CephFS the fstab should look something like this:
>
> <mon1-ip>,<mon2-ip>,<mon3-ip>:/path/ /mount/point ceph <options> 0 0
>
> Both the Ceph community and Red Hat recommend the comma separator for
> mounting multiple CephFS monitor nodes. (See section 4.2 point 3)
>
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/mounting_and_unmounting_ceph_file_systems
>
>
> Now to oVirt/RHV.
>
> When I mount my Data Domain path as a Posix file system with a path of
> "<mon-ip>:/path/" it works splendidly well (especially after
> the last Red Hat kernel update!). I've done a bunch of stuff to it and
> it seems to work every time. However, I don't have the redundancy of
> multiple Ceph Monitors.
>
> When I mount my Data Domain path as a Posix file system with a path of
> "<mon1-ip>,<mon2-ip>,<mon3-ip>:/path/"
> most things seem to work. But I noticed a higher rate of failures. The
> only failure that I can trigger 100% of the time though is to mount a
> second data import domain and attempt to copy a vm disk from the import
> into the CephFS Data domain. Then I get an error like this:
>
> VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed:
> low level Image copy failed:
> (u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error:
> ParamsList: sep , in
>
> /rhev/data-center/mnt/<mon1-ip>,<mon2-ip>,<mon3-ip>:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)
>
> Uh, oh. It seems that the commas in the mount path are causing the
> problems. So I went looking through the logs for "sep , in" and found a
> bunch more hits which makes me think that this is actually the problem
> message.
>
> I've switched back to just one IP address for the time being but I
> obviously want the Ceph redundancy back. While running on just one IP,
> the vm disk that refused to copy before had no problem copying. The
> _only_ change I made was dropping two of the three IP's from the Data
> Domain path option.
>
> Is this a bug, or did I do something wrong?
>


Looks like a bug, maybe vdsm is not parsing the mount spec correctly.

Please file a vdsm bug and attach vdsm logs showing the entire flow.

Regardless, I'm not sure how well oVirt with cephfs is tested, or
recommended.

Adding Yaniv to add more info on this.

Nir


> Does anyone have a suggestion for me to try?
>
> Thank you!
> ~Stack~
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PD7WJVCN3UVU2SUAASMKE5OB24BERMMO/