[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Idan Shaby
Hi,

I think that there's already a bug on this issue:
Bug 1577529 - [RFE] Support multiple hosts in posix storage domain path for cephfs


Regards,
Idan

On Thu, Aug 30, 2018 at 1:41 AM, Nir Soffer  wrote:

> On Thu, Aug 30, 2018 at 1:24 AM Stack Korora 
> wrote:
>
>> On 08/29/2018 10:44 AM, Stack Korora wrote:
>> > On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
>> >> Hi,
>> >>
>> >> maybe a foolish guess: Did you try this
>> >>
>> >> https://www.spinics.net/lists/ceph-devel/msg30958.html
>> >>
>> >> Mit freundlichen Grüßen,
>> >>
>> >> Markus Stockhausen
>> >> Head of Software Technology
>> > Thanks, I thought about that but I have not tried it. I will add it to
>> > my list to check today and will report back if it works (though I don't
>> > see why it wouldn't). It is good to know that someone else has at least
>> > had success with having a DNS entry for the multiple CephFS monitor
>> hosts.
>>
>> A single DNS entry did not work. Red Hat's oVirt did not like mounting
>> it even though it works fine via command line. :-/
>>
>> I now have a Red Hat ticket open so we will see what happens on that
>> front.
>>
>
> I can confirm that multiple hosts:port in a mount spec is not supported by
> the current code.
>
> You can see all the supported formats here:
> https://github.com/oVirt/vdsm/blob/d43376f3b2e913f3ee0ef226b5c196
> eb03da708f/tests/storage/fileutil_test.py#L182
>
> Nir
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/IEHMS5UZIF4HXYY7YEP6H66TMU74DAWW/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SQVS6HFE4WXC2T36Q3JWE4AFBUVHTCNO/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Nir Soffer
On Thu, Aug 30, 2018 at 1:24 AM Stack Korora 
wrote:

> On 08/29/2018 10:44 AM, Stack Korora wrote:
> > On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
> >> Hi,
> >>
> >> maybe a foolish guess: Did you try this
> >>
> >> https://www.spinics.net/lists/ceph-devel/msg30958.html
> >>
> >> Mit freundlichen Grüßen,
> >>
> >> Markus Stockhausen
> >> Head of Software Technology
> > Thanks, I thought about that but I have not tried it. I will add it to
> > my list to check today and will report back if it works (though I don't
> > see why it wouldn't). It is good to know that someone else has at least
> > had success with having a DNS entry for the multiple CephFS monitor
> hosts.
>
> A single DNS entry did not work. Red Hat's oVirt did not like mounting
> it even though it works fine via command line. :-/
>
> I now have a Red Hat ticket open so we will see what happens on that front.
>

I can confirm that multiple hosts:port in a mount spec is not supported by
the current code.

You can see all the supported formats here:
https://github.com/oVirt/vdsm/blob/d43376f3b2e913f3ee0ef226b5c196eb03da708f/tests/storage/fileutil_test.py#L182
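For illustration only, a minimal sketch of the two path forms being discussed, using hypothetical monitor addresses and mount points (not the poster's actual setup):

```
# Single monitor host:port in the POSIX storage domain path - parsed by vdsm today:
mount -t ceph 192.0.2.10:6789:/ovirt/data /rhev/data-center/mnt/example

# Comma-separated monitor list - accepted by mount.ceph on the command line,
# but not by the current vdsm mount-spec parser:
mount -t ceph 192.0.2.10:6789,192.0.2.11:6789,192.0.2.12:6789:/ovirt/data /rhev/data-center/mnt/example
```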

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IEHMS5UZIF4HXYY7YEP6H66TMU74DAWW/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
On 08/29/2018 10:44 AM, Stack Korora wrote:
> On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
>> Hi,
>>
>> maybe a foolish guess: Did you try this
>>
>> https://www.spinics.net/lists/ceph-devel/msg30958.html
>>
>> Mit freundlichen Grüßen,
>>
>> Markus Stockhausen
>> Head of Software Technology
> Thanks, I thought about that but I have not tried it. I will add it to
> my list to check today and will report back if it works (though I don't
> see why it wouldn't). It is good to know that someone else has at least
> had success with having a DNS entry for the multiple CephFS monitor hosts.

A single DNS entry did not work. Red Hat's oVirt did not like mounting
it even though it works fine via command line. :-/

I now have a Red Hat ticket open so we will see what happens on that front.

Thanks!
~Stack~
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4RFPEUFOIGHKA6MD2JPC72SBD6GHIZPZ/


[ovirt-users] Re: Turn Off Email Alerts

2018-08-29 Thread Martin Sivak
Hi,

> Yes, indeed!
>
> How can I change this internal Python setting?

The same as the notification regex. Using hosted-engine
--set-shared-config <key> <value> --type=<type>

Executing hosted-engine --get-shared-config xxx (or anything else that
does not exist) should give you the list of all types and keys you can
change (on 4.2 for sure and 4.1 very probably too).
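As a concrete sketch (the xxx key is deliberately bogus, exactly as described above; the notify key and the broker type are taken from the rest of this thread):

```
# A non-existent key makes the tool print every known key and its type:
hosted-engine --get-shared-config xxx

# General form for changing a value:
hosted-engine --set-shared-config notify.state_transition "maintenance|start|stop" --type=broker
```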

Best regards

Martin Sivak


On Wed, Aug 29, 2018 at 11:56 PM, Douglas Duckworth
 wrote:
> Yes, indeed!
>
> How can I change this internal Python setting?
>
> On Wed, Aug 29, 2018, 5:43 PM Martin Sivak  wrote:
>>
>> Hi,
>>
>> two clarifications:
>>
>> Hosted engine is sending those emails using a built-in Python SMTP
>> client that talks directly to the SMTP server specified at install
>> time. We default to localhost, but you might have changed it.
>>
>> > notify.state_transition : maintenance|start|stop|migrate|up|down, type :
>> > broker
>>
>> The value here is a regular expression that is matched against the
>> state transition string in the email.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Wed, Aug 29, 2018 at 10:34 PM, Douglas Duckworth
>>  wrote:
>> > Thanks for sharing
>> >
>> > I may want to do that
>> >
>> > Though first I want to understand how the emails are arriving.
>> >
>> > I stopped ovirt-engine-notifier.service and postfix.service on all hosts
>> > and
>> > the hosted engine.  So how are the emails being delivered?  They are not
>> > running sendmail so I don't understand what daemon is sending these
>> > messages.
>> >
>> > Thanks,
>> >
>> > Douglas Duckworth, MSc, LFCS
>> > HPC System Administrator
>> > Scientific Computing Unit
>> > Weill Cornell Medicine
>> > 1300 York Avenue
>> > New York, NY 10065
>> > E: d...@med.cornell.edu
>> > O: 212-746-6305
>> > F: 212-746-8690
>> >
>> >
>> >
>> > On Wed, Aug 29, 2018 at 3:16 PM, Simone Tiraboschi 
>> > wrote:
>> >>
>> >> Hi,
>> >> you can change the list of status you want to be notified about with
>> >> hosted-engine --set-shared-config notify.state_transition
>> >> The default is:
>> >>
>> >> [root@hehost01 ~]# hosted-engine --get-shared-config
>> >> notify.state_transition --type=broker
>> >>
>> >> notify.state_transition : maintenance|start|stop|migrate|up|down, type
>> >> :
>> >> broker
>> >>
>> >>
>> >> On Wed, Aug 29, 2018 at 7:47 PM Douglas Duckworth
>> >>  wrote:
>> >>>
>> >>> I agree however we are in testing phase so I make changes a lot.
>> >>> Therefore alerts are not presently needed.
>> >>>
>> >>> So how do I turn them off?
>> >>>
>> >>> These steps do not work on hosted engine:
>> >>>
>> >>>
>> >>> me@ovirt-engine[~]$ sudo systemctl stop postfix.service
>> >>> me@ovirt-engine[~]$ sudo systemctl stop ovirt-engine-notifier.service
>> >>> me@ovirt-engine[~]$ sudo systemctl status
>> >>> ovirt-engine-notifier.service
>> >>> ● ovirt-engine-notifier.service - oVirt Engine Notifier
>> >>>Loaded: loaded
>> >>> (/usr/lib/systemd/system/ovirt-engine-notifier.service;
>> >>> enabled; vendor preset: disabled)
>> >>>Active: inactive (dead) since Wed 2018-08-29 13:41:55 EDT; 3s ago
>> >>>   Process: 1814
>> >>>
>> >>> ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py
>> >>> --redirect-output --systemd=notify $EXTRA_ARGS start (code=exited,
>> >>> status=0/SUCCESS)
>> >>>  Main PID: 1814 (code=exited, status=0/SUCCESS)
>> >>>
>> >>> Aug 25 12:09:31 ovirt-engine.pbtech systemd[1]: Starting oVirt Engine
>> >>> Notifier...
>> >>> Aug 25 12:09:33 ovirt-engine.pbtech systemd[1]: Started oVirt Engine
>> >>> Notifier.
>> >>> Aug 29 13:41:54 ovirt-engine.pbtech systemd[1]: Stopping oVirt Engine
>> >>> Notifier...
>> >>> Aug 29 13:41:55 ovirt-engine.pbtech systemd[1]: Stopped oVirt Engine
>> >>> Notifier.
>> >>>
>> >>> I still get an email every time I put a host in maint mode for
>> >>> example.
>> >>>
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Douglas Duckworth, MSc, LFCS
>> >>> HPC System Administrator
>> >>> Scientific Computing Unit
>> >>> Weill Cornell Medicine
>> >>> 1300 York Avenue
>> >>> New York, NY 10065
>> >>> E: d...@med.cornell.edu
>> >>> O: 212-746-6305
>> >>> F: 212-746-8690
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Aug 28, 2018 at 4:43 PM, Johan Bernhardsson 
>> >>> wrote:
>> 
>>  Those alerts are also coming from hosted-engine that keeps ovirt
>>  manager
>>  running.
>> 
>>  I would rather have a filter in my email client for them than
>>  disabling
>>  all of the alerting stuff
>> 
>>  /Johan
>> 
>>  On August 28, 2018 22:36:34 Douglas Duckworth
>>  
>>  wrote:
>> >
>> > Hi
>> >
>> > Can someone please help?  I keep getting ovirt alerts via email
>> > despite
>> > turning off postfix and ovirt-engine-notifier.service
>> >
>> > Thanks,
>> >
>> > Douglas Duckworth, MSc, LFCS
>> > HPC System Administrator
>> > Scientific Computing Unit
>> > Weill Cornell Medicine
>> > 1300 York Avenue
>> > New York, NY 10065
>> > E: d...@med.cornell

[ovirt-users] Re: Turn Off Email Alerts

2018-08-29 Thread Douglas Duckworth
Yes, indeed!

How can I change this internal Python setting?

On Wed, Aug 29, 2018, 5:43 PM Martin Sivak  wrote:

> Hi,
>
> two clarifications:
>
> Hosted engine is sending those emails using a built-in Python SMTP
> client that talks directly to the SMTP server specified at install
> time. We default to localhost, but you might have changed it.
>
> > notify.state_transition : maintenance|start|stop|migrate|up|down, type :
> broker
>
> The value here is a regular expression that is matched against the
> state transition string in the email.
>
> Best regards
>
> Martin Sivak
>
> On Wed, Aug 29, 2018 at 10:34 PM, Douglas Duckworth
>  wrote:
> > Thanks for sharing
> >
> > I may want to do that
> >
> > Though first I want to understand how the emails are arriving.
> >
> > I stopped ovirt-engine-notifier.service and postfix.service on all hosts
> and
> > the hosted engine.  So how are the emails being delivered?  They are not
> > running sendmail so I don't understand what daemon is sending these
> messages.
> >
> > Thanks,
> >
> > Douglas Duckworth, MSc, LFCS
> > HPC System Administrator
> > Scientific Computing Unit
> > Weill Cornell Medicine
> > 1300 York Avenue
> > New York, NY 10065
> > E: d...@med.cornell.edu
> > O: 212-746-6305
> > F: 212-746-8690
> >
> >
> >
> > On Wed, Aug 29, 2018 at 3:16 PM, Simone Tiraboschi 
> > wrote:
> >>
> >> Hi,
> >> you can change the list of status you want to be notified about with
> >> hosted-engine --set-shared-config notify.state_transition
> >> The default is:
> >>
> >> [root@hehost01 ~]# hosted-engine --get-shared-config
> >> notify.state_transition --type=broker
> >>
> >> notify.state_transition : maintenance|start|stop|migrate|up|down, type :
> >> broker
> >>
> >>
> >> On Wed, Aug 29, 2018 at 7:47 PM Douglas Duckworth
> >>  wrote:
> >>>
> >>> I agree however we are in testing phase so I make changes a lot.
> >>> Therefore alerts are not presently needed.
> >>>
> >>> So how do I turn them off?
> >>>
> >>> These steps do not work on hosted engine:
> >>>
> >>>
> >>> me@ovirt-engine[~]$ sudo systemctl stop postfix.service
> >>> me@ovirt-engine[~]$ sudo systemctl stop ovirt-engine-notifier.service
> >>> me@ovirt-engine[~]$ sudo systemctl status
> ovirt-engine-notifier.service
> >>> ● ovirt-engine-notifier.service - oVirt Engine Notifier
> >>>Loaded: loaded
> (/usr/lib/systemd/system/ovirt-engine-notifier.service;
> >>> enabled; vendor preset: disabled)
> >>>Active: inactive (dead) since Wed 2018-08-29 13:41:55 EDT; 3s ago
> >>>   Process: 1814
> >>>
> ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py
> >>> --redirect-output --systemd=notify $EXTRA_ARGS start (code=exited,
> >>> status=0/SUCCESS)
> >>>  Main PID: 1814 (code=exited, status=0/SUCCESS)
> >>>
> >>> Aug 25 12:09:31 ovirt-engine.pbtech systemd[1]: Starting oVirt Engine
> >>> Notifier...
> >>> Aug 25 12:09:33 ovirt-engine.pbtech systemd[1]: Started oVirt Engine
> >>> Notifier.
> >>> Aug 29 13:41:54 ovirt-engine.pbtech systemd[1]: Stopping oVirt Engine
> >>> Notifier...
> >>> Aug 29 13:41:55 ovirt-engine.pbtech systemd[1]: Stopped oVirt Engine
> >>> Notifier.
> >>>
> >>> I still get an email every time I put a host in maint mode for example.
> >>>
> >>>
> >>> Thanks,
> >>>
> >>> Douglas Duckworth, MSc, LFCS
> >>> HPC System Administrator
> >>> Scientific Computing Unit
> >>> Weill Cornell Medicine
> >>> 1300 York Avenue
> >>> New York, NY 10065
> >>> E: d...@med.cornell.edu
> >>> O: 212-746-6305
> >>> F: 212-746-8690
> >>>
> >>>
> >>>
> >>> On Tue, Aug 28, 2018 at 4:43 PM, Johan Bernhardsson 
> >>> wrote:
> 
>  Those alerts are also coming from hosted-engine that keeps ovirt
> manager
>  running.
> 
>  I would rather have a filter in my email client for them than
> disabling
>  all of the alerting stuff
> 
>  /Johan
> 
>  On August 28, 2018 22:36:34 Douglas Duckworth <
> dod2...@med.cornell.edu>
>  wrote:
> >
> > Hi
> >
> > Can someone please help?  I keep getting ovirt alerts via email
> despite
> > turning off postfix and ovirt-engine-notifier.service
> >
> > Thanks,
> >
> > Douglas Duckworth, MSc, LFCS
> > HPC System Administrator
> > Scientific Computing Unit
> > Weill Cornell Medicine
> > 1300 York Avenue
> > New York, NY 10065
> > E: d...@med.cornell.edu
> > O: 212-746-6305
> > F: 212-746-8690
> >
> >
> >
> > On Fri, Aug 24, 2018 at 8:59 AM, Douglas Duckworth
> >  wrote:
> >>
> >> Hi
> >>
> >> How do I turn off hosted engine alerts?  We are in a testing phase
> so
> >> these are not needed.  I have disabled postfix on all hosts as well
> as
> >> stopped the ovirt notification daemon on the hosted engine.  I kept
> it
> >> running while putting /dev/null in
> >>
> /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
> >> for mail server.  Yet I still get alerts for every 

[ovirt-users] Re: Turn Off Email Alerts

2018-08-29 Thread Martin Sivak
Hi,

two clarifications:

Hosted engine is sending those emails using a built-in Python SMTP
client that talks directly to the SMTP server specified at install
time. We default to localhost, but you might have changed it.

> notify.state_transition : maintenance|start|stop|migrate|up|down, type : 
> broker

The value here is a regular expression that is matched against the
state transition string in the email.
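For example, a hedged sketch of narrowing the default regex (how the broker treats a value that matches nothing is an assumption here, so verify before relying on it to silence all mail):

```
# Only mail on start/stop transitions instead of the full default list:
hosted-engine --set-shared-config notify.state_transition "start|stop" --type=broker

# A value matching no transition string should effectively disable the notifications:
hosted-engine --set-shared-config notify.state_transition "nomail" --type=broker
```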

Best regards

Martin Sivak

On Wed, Aug 29, 2018 at 10:34 PM, Douglas Duckworth
 wrote:
> Thanks for sharing
>
> I may want to do that
>
> Though first I want to understand how the emails are arriving.
>
> I stopped ovirt-engine-notifier.service and postfix.service on all hosts and
> the hosted engine.  So how are the emails being delivered?  They are not
> running sendmail so I don't understand what daemon is sending these messages.
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> 1300 York Avenue
> New York, NY 10065
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
>
>
> On Wed, Aug 29, 2018 at 3:16 PM, Simone Tiraboschi 
> wrote:
>>
>> Hi,
>> you can change the list of status you want to be notified about with
>> hosted-engine --set-shared-config notify.state_transition
>> The default is:
>>
>> [root@hehost01 ~]# hosted-engine --get-shared-config
>> notify.state_transition --type=broker
>>
>> notify.state_transition : maintenance|start|stop|migrate|up|down, type :
>> broker
>>
>>
>> On Wed, Aug 29, 2018 at 7:47 PM Douglas Duckworth
>>  wrote:
>>>
>>> I agree however we are in testing phase so I make changes a lot.
>>> Therefore alerts are not presently needed.
>>>
>>> So how do I turn them off?
>>>
>>> These steps do not work on hosted engine:
>>>
>>>
>>> me@ovirt-engine[~]$ sudo systemctl stop postfix.service
>>> me@ovirt-engine[~]$ sudo systemctl stop ovirt-engine-notifier.service
>>> me@ovirt-engine[~]$ sudo systemctl status ovirt-engine-notifier.service
>>> ● ovirt-engine-notifier.service - oVirt Engine Notifier
>>>Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-notifier.service;
>>> enabled; vendor preset: disabled)
>>>Active: inactive (dead) since Wed 2018-08-29 13:41:55 EDT; 3s ago
>>>   Process: 1814
>>> ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py
>>> --redirect-output --systemd=notify $EXTRA_ARGS start (code=exited,
>>> status=0/SUCCESS)
>>>  Main PID: 1814 (code=exited, status=0/SUCCESS)
>>>
>>> Aug 25 12:09:31 ovirt-engine.pbtech systemd[1]: Starting oVirt Engine
>>> Notifier...
>>> Aug 25 12:09:33 ovirt-engine.pbtech systemd[1]: Started oVirt Engine
>>> Notifier.
>>> Aug 29 13:41:54 ovirt-engine.pbtech systemd[1]: Stopping oVirt Engine
>>> Notifier...
>>> Aug 29 13:41:55 ovirt-engine.pbtech systemd[1]: Stopped oVirt Engine
>>> Notifier.
>>>
>>> I still get an email every time I put a host in maint mode for example.
>>>
>>>
>>> Thanks,
>>>
>>> Douglas Duckworth, MSc, LFCS
>>> HPC System Administrator
>>> Scientific Computing Unit
>>> Weill Cornell Medicine
>>> 1300 York Avenue
>>> New York, NY 10065
>>> E: d...@med.cornell.edu
>>> O: 212-746-6305
>>> F: 212-746-8690
>>>
>>>
>>>
>>> On Tue, Aug 28, 2018 at 4:43 PM, Johan Bernhardsson 
>>> wrote:

 Those alerts are also coming from hosted-engine that keeps ovirt manager
 running.

 I would rather have a filter in my email client for them than disabling
 all of the alerting stuff

 /Johan

 On August 28, 2018 22:36:34 Douglas Duckworth 
 wrote:
>
> Hi
>
> Can someone please help?  I keep getting ovirt alerts via email despite
> turning off postfix and ovirt-engine-notifier.service
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> 1300 York Avenue
> New York, NY 10065
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
>
>
> On Fri, Aug 24, 2018 at 8:59 AM, Douglas Duckworth
>  wrote:
>>
>> Hi
>>
>> How do I turn off hosted engine alerts?  We are in a testing phase so
>> these are not needed.  I have disabled postfix on all hosts as well as
>> stopped the ovirt notification daemon on the hosted engine.  I kept it
>> running while putting /dev/null in
>> /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
>> for mail server.  Yet I still get alerts for every thing done such as
>> putting hosts in maintenance mode.  Very confusing.
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/

[ovirt-users] Re: Turn Off Email Alerts

2018-08-29 Thread Douglas Duckworth
Thanks for sharing

I may want to do that

Though first I want to understand how the emails are arriving.

I stopped ovirt-engine-notifier.service and postfix.service on all hosts
and the hosted engine.  So how are the emails being delivered?  They are not
running sendmail so I don't understand what daemon is sending these messages.

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690



On Wed, Aug 29, 2018 at 3:16 PM, Simone Tiraboschi 
wrote:

> Hi,
> you can change the list of status you want to be notified about
> with hosted-engine --set-shared-config notify.state_transition
> The default is:
>
> [root@hehost01 ~]# hosted-engine --get-shared-config
> notify.state_transition --type=broker
>
> notify.state_transition : maintenance|start|stop|migrate|up|down, type :
> broker
>
>
> On Wed, Aug 29, 2018 at 7:47 PM Douglas Duckworth 
> wrote:
>
>> I agree however we are in testing phase so I make changes a lot.
>> Therefore alerts are not presently needed.
>>
>> So how do I turn them off?
>>
>> These steps do not work on hosted engine:
>>
>>
>> me@ovirt-engine[~]$ sudo systemctl stop postfix.service
>> me@ovirt-engine[~]$ sudo systemctl stop ovirt-engine-notifier.service
>> me@ovirt-engine[~]$ sudo systemctl status ovirt-engine-notifier.service
>> ● ovirt-engine-notifier.service - oVirt Engine Notifier
>>Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-notifier.service;
>> enabled; vendor preset: disabled)
>>Active: inactive (dead) since Wed 2018-08-29 13:41:55 EDT; 3s ago
>>   Process: 1814 ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-
>> notifier/ovirt-engine-notifier.py --redirect-output --systemd=notify
>> $EXTRA_ARGS start (code=exited, status=0/SUCCESS)
>>  Main PID: 1814 (code=exited, status=0/SUCCESS)
>>
>> Aug 25 12:09:31 ovirt-engine.pbtech systemd[1]: Starting oVirt Engine
>> Notifier...
>> Aug 25 12:09:33 ovirt-engine.pbtech systemd[1]: Started oVirt Engine
>> Notifier.
>> Aug 29 13:41:54 ovirt-engine.pbtech systemd[1]: Stopping oVirt Engine
>> Notifier...
>> Aug 29 13:41:55 ovirt-engine.pbtech systemd[1]: Stopped oVirt Engine
>> Notifier.
>>
>> I still get an email every time I put a host in maint mode for example.
>>
>>
>> Thanks,
>>
>> Douglas Duckworth, MSc, LFCS
>> HPC System Administrator
>> Scientific Computing Unit
>> Weill Cornell Medicine
>> 1300 York Avenue
>> 
>> New York, NY 10065
>> 
>> E: d...@med.cornell.edu
>> O: 212-746-6305
>> F: 212-746-8690
>>
>>
>>
>> On Tue, Aug 28, 2018 at 4:43 PM, Johan Bernhardsson 
>> wrote:
>>
>>> Those alerts are also coming from hosted-engine that keeps ovirt manager
>>> running.
>>>
>>> I would rather have a filter in my email client for them than disabling
>>> all of the alerting stuff
>>>
>>> /Johan
>>>
>>> On August 28, 2018 22:36:34 Douglas Duckworth 
>>> wrote:
>>>
 Hi

 Can someone please help?  I keep getting ovirt alerts via email despite
 turning off postfix and ovirt-engine-notifier.service

 Thanks,

 Douglas Duckworth, MSc, LFCS
 HPC System Administrator
 Scientific Computing Unit
 Weill Cornell Medicine
 1300 York Avenue
 
 New York, NY 10065
 
 E: d...@med.cornell.edu
 O: 212-746-6305
 F: 212-746-8690



 On Fri, Aug 24, 2018 at 8:59 AM, Douglas Duckworth <
 dod2...@med.cornell.edu> wrote:

> Hi
>
> How do I turn off hosted engine alerts?  We are in a testing phase so
> these are not needed.  I have disabled postfix on all hosts as well as
> stopped the ovirt notification daemon on the hosted engine.  I kept it
> running while putting /dev/null in /usr/share/ovirt-engine/
> services/ovirt-engine-notifier/ovirt-engine-notifier.conf for mail
> server.  Yet I still get alerts for every thing done such as putting hosts
> in maintenance mode.  Very confusing.
>

 ___
 Users mailing list

[ovirt-users] Re: Turn Off Email Alerts

2018-08-29 Thread Simone Tiraboschi
Hi,
you can change the list of status you want to be notified about
with hosted-engine --set-shared-config notify.state_transition
The default is:

[root@hehost01 ~]# hosted-engine --get-shared-config
notify.state_transition --type=broker

notify.state_transition : maintenance|start|stop|migrate|up|down, type :
broker


On Wed, Aug 29, 2018 at 7:47 PM Douglas Duckworth 
wrote:

> I agree however we are in testing phase so I make changes a lot.
> Therefore alerts are not presently needed.
>
> So how do I turn them off?
>
> These steps do not work on hosted engine:
>
>
> me@ovirt-engine[~]$ sudo systemctl stop postfix.service
> me@ovirt-engine[~]$ sudo systemctl stop ovirt-engine-notifier.service
> me@ovirt-engine[~]$ sudo systemctl status ovirt-engine-notifier.service
> ● ovirt-engine-notifier.service - oVirt Engine Notifier
>Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-notifier.service;
> enabled; vendor preset: disabled)
>Active: inactive (dead) since Wed 2018-08-29 13:41:55 EDT; 3s ago
>   Process: 1814
> ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py
> --redirect-output --systemd=notify $EXTRA_ARGS start (code=exited,
> status=0/SUCCESS)
>  Main PID: 1814 (code=exited, status=0/SUCCESS)
>
> Aug 25 12:09:31 ovirt-engine.pbtech systemd[1]: Starting oVirt Engine
> Notifier...
> Aug 25 12:09:33 ovirt-engine.pbtech systemd[1]: Started oVirt Engine
> Notifier.
> Aug 29 13:41:54 ovirt-engine.pbtech systemd[1]: Stopping oVirt Engine
> Notifier...
> Aug 29 13:41:55 ovirt-engine.pbtech systemd[1]: Stopped oVirt Engine
> Notifier.
>
> I still get an email every time I put a host in maint mode for example.
>
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> Weill Cornell Medicine
> 1300 York Avenue
> New York, NY 10065
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
>
>
> On Tue, Aug 28, 2018 at 4:43 PM, Johan Bernhardsson 
> wrote:
>
>> Those alerts are also coming from hosted-engine that keeps ovirt manager
>> running.
>>
>> I would rather have a filter in my email client for them than disabling
>> all of the alerting stuff
>>
>> /Johan
>>
>> On August 28, 2018 22:36:34 Douglas Duckworth 
>> wrote:
>>
>>> Hi
>>>
>>> Can someone please help?  I keep getting ovirt alerts via email despite
>>> turning off postfix and ovirt-engine-notifier.service
>>>
>>> Thanks,
>>>
>>> Douglas Duckworth, MSc, LFCS
>>> HPC System Administrator
>>> Scientific Computing Unit
>>> Weill Cornell Medicine
>>> 1300 York Avenue
>>> 
>>> New York, NY 10065
>>> 
>>> E: d...@med.cornell.edu
>>> O: 212-746-6305
>>> F: 212-746-8690
>>>
>>>
>>>
>>> On Fri, Aug 24, 2018 at 8:59 AM, Douglas Duckworth <
>>> dod2...@med.cornell.edu> wrote:
>>>
 Hi

 How do I turn off hosted engine alerts?  We are in a testing phase so
 these are not needed.  I have disabled postfix on all hosts as well as
 stopped the ovirt notification daemon on the hosted engine.  I kept it
 running while putting /dev/null in
 /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
  for
 mail server.  Yet I still get alerts for every thing done such as putting
 hosts in maintenance mode.  Very confusing.

>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> 
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> 
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X6YRFWUAKYFY2HQF56HGUK3BPXJL2HBH/
>>> 
>>>
>>
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.or

[ovirt-users] Bricks do not appear in oVirt Manager UI under Hosts unless manually created on nodes.

2018-08-29 Thread payden.pringle+ovirtlist
I have a CentOS 7.5 system running the oVirt-Engine package as the manager for 
three oVirt Nodes (version 4.2). Two of the nodes have a 200GB disk and one has 
a 2GB disk. I am attempting to create a Gluster Volume that replicates between 
the two 200GB disks with the 2GB as Arbiter. 

When I create the bricks by going to Hosts > host1 > Storage Devices > Create 
Brick (with the 200GB device selected), it succeeds, but no bricks appear under 
the Bricks tab for the Host. Doing the same for host2 and host3 results in the 
same problem. 

If I run the following command on a host `gluster volume create test-volume 
replica 3 arbiter 1 192.168.5.{50,60,40}:/gluster-bricks/200GBbrick/test`, it 
creates the volume correctly and the bricks then appear as they should. 
However, then these Bricks are labeled as "Down" and when I go to Storage > 
Volumes, I get this error:
```
Uncaught exception occurred. Please try reloading the page. Details: 
(TypeError) : Cannot read property 'a' of null
Please have your administrator check the UI logs
```
The Volume is also shown as "Down". SELinux is enabled on my CentOS 7.5 system. 

I'm not really sure what the correct method is to create bricks then a Volume 
in Gluster through the oVirt Management UI, but I've followed the guides and 
this is where it has got me. 
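Not part of the original post, but as a hedged diagnostic sketch (volume and brick paths follow the example command above), the gluster CLI can show what the engine is expected to import:

```
# How gluster itself sees the volume, its bricks and their status:
gluster volume info test-volume
gluster volume status test-volume

# All three peers should show as connected on every host:
gluster peer status
```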
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDVRBKYSGHY72RLV2S7EN774BZNIUOY5/


[ovirt-users] Re: Turn Off Email Alerts

2018-08-29 Thread Douglas Duckworth
I agree however we are in testing phase so I make changes a lot.  Therefore
alerts are not presently needed.

So how do I turn them off?

These steps do not work on hosted engine:


me@ovirt-engine[~]$ sudo systemctl stop postfix.service
me@ovirt-engine[~]$ sudo systemctl stop ovirt-engine-notifier.service
me@ovirt-engine[~]$ sudo systemctl status ovirt-engine-notifier.service
● ovirt-engine-notifier.service - oVirt Engine Notifier
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-notifier.service;
enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2018-08-29 13:41:55 EDT; 3s ago
  Process: 1814
ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py
--redirect-output --systemd=notify $EXTRA_ARGS start (code=exited,
status=0/SUCCESS)
 Main PID: 1814 (code=exited, status=0/SUCCESS)

Aug 25 12:09:31 ovirt-engine.pbtech systemd[1]: Starting oVirt Engine
Notifier...
Aug 25 12:09:33 ovirt-engine.pbtech systemd[1]: Started oVirt Engine
Notifier.
Aug 29 13:41:54 ovirt-engine.pbtech systemd[1]: Stopping oVirt Engine
Notifier...
Aug 29 13:41:55 ovirt-engine.pbtech systemd[1]: Stopped oVirt Engine
Notifier.

I still get an email every time I put a host in maint mode for example.


Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690



On Tue, Aug 28, 2018 at 4:43 PM, Johan Bernhardsson  wrote:

> Those alerts are also coming from hosted-engine that keeps ovirt manager
> running.
>
> I would rather have a filter in my email client for them than disabling
> all of the alerting stuff
>
> /Johan
>
> On August 28, 2018 22:36:34 Douglas Duckworth 
> wrote:
>
>> Hi
>>
>> Can someone please help?  I keep getting ovirt alerts via email despite
>> turning off postfix and ovirt-engine-notifier.service
>>
>> Thanks,
>>
>> Douglas Duckworth, MSc, LFCS
>> HPC System Administrator
>> Scientific Computing Unit
>> Weill Cornell Medicine
>> 1300 York Avenue
>> 
>> New York, NY 10065
>> 
>> E: d...@med.cornell.edu
>> O: 212-746-6305
>> F: 212-746-8690
>>
>>
>>
>> On Fri, Aug 24, 2018 at 8:59 AM, Douglas Duckworth <
>> dod2...@med.cornell.edu> wrote:
>>
>>> Hi
>>>
>>> How do I turn off hosted engine alerts?  We are in a testing phase so
>>> these are not needed.  I have disabled postfix on all hosts as well as
>>> stopped the ovirt notification daemon on the hosted engine.  I kept it
>>> running while putting /dev/null in /usr/share/ovirt-engine/ser
>>> vices/ovirt-engine-notifier/ovirt-engine-notifier.conf for mail
>>> server.  Yet I still get alerts for every thing done such as putting hosts
>>> in maintenance mode.  Very confusing.
>>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> 
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> 
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/X6YRFWUAKYFY2HQF56HGUK3BPXJL2HBH/
>> 
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LIBA3OSRJUMGAGYOQDQAGLCYOWWNWJQ2/


[ovirt-users] Re: Weird Memory Leak Issue

2018-08-29 Thread Robert O'Kane
Not for me. After restarting the engine, it doesn't matter if I restart the hypervisors, I only get the "protocol=gluster" when I restart the VMs... migration is
not enough.


"virsh -r dumpxml "


Cheers,

Robert O'Kane


On 08/29/2018 06:54 PM, Edward Clay wrote:

Not sure how you're set up.  I only had to migrate the VMs off to another 
hypervisor then put the hypervisor in maintenance which seems to unmount the 
one gluster volume.  After upgrading, rebooting and activating the HV I could 
migrate VMs back to it.  This seemed to work for me.


From: Robert O'Kane 
Sent: Wednesday, August 29, 2018 10:42:56 AM
To: users@ovirt.org
Subject: [ovirt-users] Re: Weird Memory Leak Issue

**Security Notice - This external email is NOT from The Hut Group**

Ah, the FUSE mounts... I just saw last week that the upgrade to 4.2 removed the 
"LibgfApiSupported" flag by default.

That is possibly why the leak simply appeared

OK, I was wondering where this came from. Tomorrow I will upgrade and test. I 
still will have to eventually reboot the VMs to get the gluster mounts 
again :-/



On 08/29/2018 05:18 PM, Cole Johnson wrote:

Great! I'll look for the update.

On Wed, Aug 29, 2018 at 7:50 AM Darrell Budic  wrote:


There’s a memory leak in gluster 3.12.9 - 3.12.12 on fuse mounted volumes, 
sounds like what you’re seeing.

The fix is in 3.12.13, which should be showing up today or tomorrow in the 
centos repos (currently available from the testing repo). I’ve been running it 
overnight on one host to test, looks like they got it.


From: Cole Johnson 
Subject: [ovirt-users] Weird Memory Leak Issue
Date: August 29, 2018 at 9:35:39 AM CDT
To: users@ovirt.org

Hello,
I have a hyperconverged, self hosted ovirt cluster with three hosts,
running 4 VM's. The hosts are running the latest ovirt node. The
VM's are Linux, Windows server 2016, and Windows Server 2008r2. The
problem is that any host running the 2008r2 VM will run out of memory
after 8-10 hours, causing any VM on the host to be paused and making
the host all but unresponsive. This problem seems to only exist with
this specific VM. None of the other running VM's have this problem.
I can resolve the problem by migrating the VM to a different host,
then putting the host into maintenance mode, then activating it back.
The leak appears to be in glusterfsd. Is there anything I can do to
permanently fix this?


__

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLNILVLZ4D3BJO4JFW4UYQMZLPWOQJ6T/



--
Robert O'Kane
Systems Administrator
Kunsthochschule für Medien Köln
Peter-Welter-Platz 2
50676 Köln

fon: +49(221)20189-223
fax: +49(221)20189-49223
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F7EY4FX7UOBGWEXMVFNAAC22TDON3DSM/

Edward Clay
Systems Administrator
The Hut Group

Tel:
Email: edward.c...@uk2group.com

For the purposes of this email, the "company" means The Hut Group Limited, a 
company registered in England and Wales (company number 6539496) whose registered office 
is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any 
of its respective subsidiaries.

Confidentiality Notice
This e-mail is confidential and intended for the use of the named recipient 
only. If you are not the intended recipient please notify us by telephone 
immediately on +44(0)1606 811888 or return it to us by e-mail. Please then 
delete it from your system and note that any use, dissemination, forwarding, 
printing or copying is strictly prohibited. Any views or opinions are solely 
those of the author and do not necessarily represent those of the company.

Encryptions and Viruses
Please note that this e-mail and any attachments have not been encrypted. They 
may therefore be liable to be compromised. Please also note that it is your 
responsibility to scan this e-mail and any attachments for viruses. We do not

[ovirt-users] Re: Weird Memory Leak Issue

2018-08-29 Thread Edward Clay
Not sure how you're set up.  I only had to migrate the VMs off to another 
hypervisor then put the hypervisor in maintenance which seems to unmount the 
one gluster volume.  After upgrading, rebooting and activating the HV I could 
migrate VMs back to it.  This seemed to work for me.


From: Robert O'Kane 
Sent: Wednesday, August 29, 2018 10:42:56 AM
To: users@ovirt.org
Subject: [ovirt-users] Re: Weird Memory Leak Issue

**Security Notice - This external email is NOT from The Hut Group**

Ah, the FUSE mounts... I just saw last week that the upgrade to 4.2 removed the 
"LibgfApiSupported" flag by default.

That is possibly why the leak simply appeared

OK, I was wondering where this came from. Tomorrow I will upgrade and test. I 
still will have to eventually reboot the VMs to get the gluster mounts 
again :-/



On 08/29/2018 05:18 PM, Cole Johnson wrote:
> Great! I'll look for the update.
>
> On Wed, Aug 29, 2018 at 7:50 AM Darrell Budic  wrote:
>>
>> There’s a memory leak in gluster 3.12.9 - 3.12.12 on fuse mounted volumes, 
>> sounds like what you’re seeing.
>>
>> The fix is in 3.12.13, which should be showing up today or tomorrow in the 
>> centos repos (currently available from the testing repo). I’ve been running 
>> it overnight on one host to test, looks like they got it.
>>
>> 
>> From: Cole Johnson 
>> Subject: [ovirt-users] Weird Memory Leak Issue
>> Date: August 29, 2018 at 9:35:39 AM CDT
>> To: users@ovirt.org
>>
>> Hello,
>> I have a hyperconverged, self hosted ovirt cluster with three hosts,
>> running 4 VM's. The hosts are running the latest ovirt node. The
>> VM's are Linux, Windows server 2016, and Windows Server 2008r2. The
>> problem is that any host running the 2008r2 VM will run out of memory
>> after 8-10 hours, causing any VM on the host to be paused and making
>> the host all but unresponsive. This problem seems to only exist with
>> this specific VM. None of the other running VM's have this problem.
>> I can resolve the problem by migrating the VM to a different host,
>> then putting the host into maintenance mode, then activating it back.
>> The leak appears to be in glusterfsd. Is there anything I can do to
>> permanently fix this?

__
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: 
> https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLNILVLZ4D3BJO4JFW4UYQMZLPWOQJ6T/
>

--
Robert O'Kane
Systems Administrator
Kunsthochschule für Medien Köln
Peter-Welter-Platz 2
50676 Köln

fon: +49(221)20189-223
fax: +49(221)20189-49223
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F7EY4FX7UOBGWEXMVFNAAC22TDON3DSM/

Edward Clay
Systems Administrator
The Hut Group

Tel:
Email: edward.c...@uk2group.com

For the purposes of this email, the "company" means The Hut Group Limited, a 
company registered in England and Wales (company number 6539496) whose 
registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester 
Airport, M90 3DQ and/or any of its respective subsidiaries.

Confidentiality Notice
This e-mail is confidential and intended for the use of the named recipient 
only. If you are not the intended recipient please notify us by telephone 
immediately on +44(0)1606 811888 or return it to us by e-mail. Please then 
delete it from your system and note that any use, dissemination, forwarding, 
printing or copying is strictly prohibited. Any views or opinions are solely 
those of the author and do not necessarily represent those of the company.

Encryptions and Viruses
Please note that this e-mail and any attachments have not been encrypted. They 
may therefore be liable to be compromised. Please also note that it is your 
responsibility to scan this e-mail and any attachments for viruses. We do not, 
to the extent permitted by law, accept any liability (whether in contract, 
negligence or otherwise) for any virus infection and/or external compromise of 
security and/or c

[ovirt-users] Re: Weird Memory Leak Issue

2018-08-29 Thread Robert O'Kane

Ah, the FUSE mounts...   I just saw last week that the upgrade to 4.2 removed the 
"LibgfApiSupported" flag by default.

That is possibly why the leak simply appeared

OK, I was wondering where this came from. Tomorrow I will upgrade and test. I 
still will have to eventually reboot the VMs to get the gluster mounts 
again :-/
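Not from the original mail, but a hedged sketch of how that engine setting could be inspected and re-enabled on the engine VM (the cluster-version argument is an assumption; check it against your own deployment first):

```
# Current value of the libgfapi flag:
engine-config -g LibgfApiSupported

# Re-enable it for 4.2 cluster level and restart the engine to apply:
engine-config -s LibgfApiSupported=true --cver=4.2
systemctl restart ovirt-engine
```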



On 08/29/2018 05:18 PM, Cole Johnson wrote:

Great! I'll look for the update.

On Wed, Aug 29, 2018 at 7:50 AM Darrell Budic  wrote:


There’s a memory leak in gluster 3.12.9 - 3.12.12 on fuse mounted volumes, 
sounds like what you’re seeing.

The fix is in 3.12.13, which should be showing up today or tomorrow in the 
centos repos (currently available from the testing repo). I’ve been running it 
overnight on one host to test, looks like they got it.


From: Cole Johnson 
Subject: [ovirt-users] Weird Memory Leak Issue
Date: August 29, 2018 at 9:35:39 AM CDT
To: users@ovirt.org

Hello,
I have a hyperconverged, self hosted ovirt cluster with three hosts,
running 4 VM's.  The hosts are running the latest ovirt node.  The
VM's are Linux, Windows server 2016, and Windows Server 2008r2.  The
problem is that any host running the 2008r2 VM will run out of memory
after 8-10 hours, causing any VM on the host to be paused and making
the host all but unresponsive. This problem seems to only exist with
this specific VM.  None of the other running VM's have this problem.
I can resolve the problem by migrating the VM to a different host,
then putting the host into maintenance mode, then activating it back.
The leak appears to be in glusterfsd.  Is there anything I can do to
permanently fix this?


__

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLNILVLZ4D3BJO4JFW4UYQMZLPWOQJ6T/



--
Robert O'Kane
Systems Administrator
Kunsthochschule für Medien Köln
Peter-Welter-Platz 2
50676 Köln

fon: +49(221)20189-223
fax: +49(221)20189-49223
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F7EY4FX7UOBGWEXMVFNAAC22TDON3DSM/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
> Hi,
>
> maybe a foolish guess: Did you try this
>
> https://www.spinics.net/lists/ceph-devel/msg30958.html
>
> Mit freundlichen Grüßen,
>
> Markus Stockhausen
> Head of Software Technology

Thanks, I thought about that but I have not tried it. I will add it to
my list to check today and will report back if it works (though I don't
see why it wouldn't). It is good to know that someone else has at least
had success with having a DNS entry for the multiple CephFS monitor hosts.

~Stack~
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3VJO3TMKB6WVOPORFDLC6D6OFWJDQGZS/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
On 08/29/2018 09:28 AM, Nir Soffer wrote:
>
>
> On Wed, 29 Aug 2018, 15:48 Stack Korora,  > wrote:
>
> Greetings,
>
> My setup is a complete Red Hat install.
> Manager OS: RHEL 7.5
> Hypervisors OS: RHEL 7.5
> Running Red Hat CephFS (with their Ceph repos on all of the systems)
> with Red Hat Virtualization (aka oVirt).
> Everything is fully patched and updated as of yesterday morning.
>
> Yes, I have valid Red Hat support but I figured this was an odd enough
> problem that the community (and the Red-Hat-ers who hang out on this
> list) might have a better idea of where to start. (Although I
> might open
> a ticket anyway just because that is what support is for, right? :)
>
> Quick background:
>
> Your /etc/fstab when you mount an NFS share should probably look
> something like
> this:
> <server IP>:/path/ /mount/point nfs <options> 0 0
>
> Just one IP is needed. Since part of the redundancy for Ceph is in the
> monitors, to mount CephFS the fstab should look something like this:
>
> <mon1 IP>,<mon2 IP>,<mon3 IP>:/path/
> /mount/point ceph <options> 0 0
>
> Both the Ceph community and Red Hat recommend the comma separator for
> mounting multiple CephFS monitor nodes. (See section 4.2 point 3)
> 
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/mounting_and_unmounting_ceph_file_systems
>
>
> Now to oVirt/RHV.
>
> When I mount my Data Domain path as a Posix file system with a path of
> ":/path/" it works splendidly well (especially
> after
> the last Red Hat kernel update!). I've done a bunch of stuff to it and
> it seems to work every time. However, I don't have the redundancy of
> multiple Ceph Monitors.
>
> When I mount my Data Domain path as a Posix file system with a path of
> ",,:/path/"
> most things seem to work. But I noticed a higher rate of failures. The
> only failure that I can trigger 100% of the time though is to mount a
> second data import domain and attempt to copy a vm disk from the
> import
> into the CephFS Data domain. Then I get an error like this:
>
> VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed:
> low level Image copy failed:
> (u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error:
> ParamsList: sep , in
> 
> /rhev/data-center/mnt/,,:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)
>
> Uh, oh. It seems that the commas in the mount path are causing the
> problems. So I went looking through the logs for "sep , in" and
> found a
> bunch more hits which makes me think that this is actually the problem
> message.
>
> I've switched back to just one IP address for the time being but I
> obviously want the Ceph redundancy back. While running on just one IP,
> the vm disk that refused to copy before had no problem copying. The
> _only_ change I made was dropping two of the three IP's from the Data
> Domain path option.
>
> Is this a bug, or did I do something wrong?
>
>
>
> Looks like a bug, maybe vdsm is not parsing the mount spec correctly.
>
> Please file a vdsm bug and attach vdsm logs showing the entire flow.
>
> Regardless, I'm not sure how well oVirt with cephfs is tested, or
> recommended.
>
> Adding Yaniv to add more info on this.
>
> Nir

Thank you. I can file a report today.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJKZDJGU5TSF2HQXK3F6C6QPO5IQDWQ3/


[ovirt-users] Re: Weird Memory Leak Issue

2018-08-29 Thread Cole Johnson
Great! I'll look for the update.

On Wed, Aug 29, 2018 at 7:50 AM Darrell Budic  wrote:
>
> There’s a memory leak in gluster 3.12.9 - 3.12.12 on fuse mounted volumes, 
> sounds like what you’re seeing.
>
> The fix is in 3.12.13, which should be showing up today or tomorrow in the 
> centos repos (currently available from the testing repo). I’ve been running 
> it overnight on one host to test, looks like they got it.
>
> 
> From: Cole Johnson 
> Subject: [ovirt-users] Weird Memory Leak Issue
> Date: August 29, 2018 at 9:35:39 AM CDT
> To: users@ovirt.org
>
> Hello,
> I have a hyperconverged, self hosted ovirt cluster with three hosts,
> running 4 VM's.  The hosts are running the latest ovirt node.  The
> VM's are Linux, Windows server 2016, and Windows Server 2008r2.  The
> problem is that any host running the 2008r2 VM will run out of memory
> after 8-10 hours, causing any VM on the host to be paused and making
> the host all but unresponsive. This problem seems to only exist with
> this specific VM.  None of the other running VM's have this problem.
> I can resolve the problem by migrating the VM to a different host,
> then putting the host into maintenance mode, then activating it back.
> The leak appears to be in glusterfsd.  Is there anything I can do to
> permanently fix this?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A223GXBU32TQGGVA2KADYTIBHPEF3EID/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLNILVLZ4D3BJO4JFW4UYQMZLPWOQJ6T/


[ovirt-users] Re: Next Gluster Updates?

2018-08-29 Thread Darrell Budic
3.12.13 is now showing up  in the storage repo.

I can confirm it fixes the leak I’ve been seeing since 3.12.9 (upgraded one of 
my nodes and ran it overnight). Hurray!
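A small sketch for checking a node (package names assume the CentOS storage SIG builds that oVirt pulls in):

```
# Installed gluster version on this host:
rpm -q glusterfs glusterfs-server

# What the enabled repos would now update it to:
yum check-update 'glusterfs*'
```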

> From: Sahina Bose 
> Subject: [ovirt-users] Re: Next Gluster Updates?
> Date: August 28, 2018 at 3:28:27 AM CDT
> To: Robert OKane; Gluster Devel
> Cc: users
> 
> 
> 
> On Mon, Aug 27, 2018 at 5:51 PM, Robert O'Kane  > wrote:
> I had a bug request in Bugzilla for Gluster being killed due to a memory 
> leak. The Gluster People say it is fixed in gluster-3.12.13
> 
> When will Ovirt have this update?  I am getting tired of having to restart my 
> hypervisors every week or so...
> 
> I currently have ovirt-release42-4.2.5.1-1.el7.noarch  and yum check-updates 
> shows me no new gluster versions.(still 3.12.11)
> 
> oVirt will pick it up as soon as the gluster release is pushed to CentOS 
> storage repo - http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.12/ 
> 
> 
> Niels, Shyam - any ETA for gluster-3.12.13 in CentOS
> 
> 
> Cheers,
> 
> Robert O'Kane
> 
> -- 
> Robert O'Kane
> Systems Administrator
> Kunsthochschule für Medien Köln
> Peter-Welter-Platz 2
> 50676 Köln
> 
> fon: +49(221)20189-223
> fax: +49(221)20189-49223
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/L7ZTIQA3TAM7IR4LCTWMXXCSGCLWUJJN/
>  
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2ZKCL2QUVEQRPPV4I3EUYXMAE6PGZUNM/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NGKDOCJBQWZEDFCHMTJGJLLQGCVSMEWK/


[ovirt-users] Re: Weird Memory Leak Issue

2018-08-29 Thread Darrell Budic
There’s a memory leak in gluster 3.12.9 - 3.12.12 on fuse mounted volumes, 
sounds like what you’re seeing.

The fix is in 3.12.13, which should be showing up today or tomorrow in the 
centos repos (currently available from the testing repo). I’ve been running it 
overnight on one host to test, looks like they got it.

> From: Cole Johnson 
> Subject: [ovirt-users] Weird Memory Leak Issue
> Date: August 29, 2018 at 9:35:39 AM CDT
> To: users@ovirt.org
> 
> Hello,
> I have a hyperconverged, self hosted ovirt cluster with three hosts,
> running 4 VM's.  The hosts are running the latest ovirt node.  The
> VM's are Linux, Windows server 2016, and Windows Server 2008r2.  The
> problem is that any host running the 2008r2 VM will run out of memory
> after 8-10 hours, causing any VM on the host to be paused and making
> the host all but unresponsive. This problem seems to only exist with
> this specific VM.  None of the other running VM's have this problem.
> I can resolve the problem by migrating the VM to a different host,
> then putting the host into maintenance mode, then activating it back.
> The leak appears to be in glusterfsd.  Is there anything I can do to
> permanently fix this?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A223GXBU32TQGGVA2KADYTIBHPEF3EID/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGP2JDZH3BRZFNMDR5DI3V2U4SXIFSAY/


[ovirt-users] Weird Memory Leak Issue

2018-08-29 Thread Cole Johnson
Hello,
I have a hyperconverged, self hosted ovirt cluster with three hosts,
running 4 VM's.  The hosts are running the latest ovirt node.  The
VM's are Linux, Windows server 2016, and Windows Server 2008r2.  The
problem is that any host running the 2008r2 VM will run out of memory
after 8-10 hours, causing any VM on the host to be paused and making
the host all but unresponsive. This problem seems to only exist with
this specific VM.  None of the other running VM's have this problem.
I can resolve the problem by migrating the VM to a different host,
then putting the host into maintenance mode, then activating it back.
The leak appears to be in glusterfsd.  Is there anything I can do to
permanently fix this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A223GXBU32TQGGVA2KADYTIBHPEF3EID/


[ovirt-users] Re: Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Nir Soffer
On Wed, 29 Aug 2018, 15:48 Stack Korora,  wrote:

> Greetings,
>
> My setup is a complete Red Hat install.
> Manager OS: RHEL 7.5
> Hypervisors OS: RHEL 7.5
> Running Red Hat CephFS (with their Ceph repos on all of the systems)
> with Red Hat Virtualization (aka oVirt).
> Everything is fully patched and updated as of yesterday morning.
>
> Yes, I have valid Red Hat support but I figured this was an odd enough
> problem that the community (and the Red-Hat-ers who hang out on this
> list) might have a better idea of where to start. (Although I might open
> a ticket anyway just because that is what support is for, right? :)
>
> Quick background:
>
> Your /etc/fstab entry when you mount an NFS share should probably look
> something like this:
>
> <server ip>:/path/ /mount/point nfs <options> 0 0
>
> Just one IP is needed. Since part of the redundancy for Ceph is in the
> monitors, to mount CephFS the fstab entry should look something like this:
>
> <mon1 ip>,<mon2 ip>,<mon3 ip>:/path/ /mount/point ceph <options> 0 0
>
> Both the Ceph community and Red Hat recommend the comma separator for
> mounting multiple CephFS monitor nodes. (See section 4.2 point 3)
>
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/mounting_and_unmounting_ceph_file_systems
>
>
> Now to oVirt/RHV.
>
> When I mount my Data Domain path as a POSIX file system with a path of
> "<monitor ip>:/path/" it works splendidly well (especially after
> the last Red Hat kernel update!). I've done a bunch of stuff to it and
> it seems to work every time. However, I don't have the redundancy of
> multiple Ceph Monitors.
>
> When I mount my Data Domain path as a POSIX file system with a path of
> "<mon1 ip>,<mon2 ip>,<mon3 ip>:/path/"
> most things seem to work. But I noticed a higher rate of failures. The
> only failure that I can trigger 100% of the time, though, is to mount a
> second data import domain and attempt to copy a VM disk from the import
> into the CephFS Data domain. Then I get an error like this:
>
> VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed:
> low level Image copy failed:
> (u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error:
> ParamsList: sep , in
>
> /rhev/data-center/mnt/,,:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)
>
> Uh, oh. It seems that the commas in the mount path are causing the
> problems. So I went looking through the logs for "sep , in" and found a
> bunch more hits which makes me think that this is actually the problem
> message.
>
> I've switched back to just one IP address for the time being but I
> obviously want the Ceph redundancy back. While running on just one IP,
> the vm disk that refused to copy before had no problem copying. The
> _only_ change I made was dropping two of the three IP's from the Data
> Domain path option.
>
> Is this a bug, or did I do something wrong?
>


Looks like a bug, maybe vdsm is not parsing the mount spec correctly.

Please file a vdsm bug and attach vdsm logs showing the entire flow.
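
The relevant logs are usually found in the standard locations (adjust if your
installation differs):

# On the hypervisor that performed the copy
/var/log/vdsm/vdsm.log

# On the engine machine
/var/log/ovirt-engine/engine.log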

Regardless, I'm not sure how well oVirt with CephFS is tested, or whether it
is recommended.

Adding Yaniv to add more info on this.

Nir


> Does anyone have a suggestion for me to try?
>
> Thank you!
> ~Stack~
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VVKOIQIDEH5ZV5XPVO3ZTJKFZPVF2GG/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PD7WJVCN3UVU2SUAASMKE5OB24BERMMO/


[ovirt-users] Multiple CephFS Monitors cause issues with oVirt

2018-08-29 Thread Stack Korora
Greetings,

My setup is a complete Red Hat install.
Manager OS: RHEL 7.5
Hypervisors OS: RHEL 7.5
Running Red Hat CephFS (with their Ceph repos on all of the systems)
with Red Hat Virtualization (aka oVirt).
Everything is fully patched and updated as of yesterday morning.

Yes, I have valid Red Hat support but I figured this was an odd enough
problem that the community (and the Red-Hat-ers who hang out on this
list) might have a better idea of where to start. (Although I might open
a ticket anyway just because that is what support is for, right? :)

Quick background:

Your /etc/fstab entry when you mount an NFS share should probably look
something like this:

<server ip>:/path/ /mount/point nfs <options> 0 0

Just one IP is needed. Since part of the redundancy for Ceph is in the
monitors, to mount CephFS the fstab entry should look something like this:

<mon1 ip>,<mon2 ip>,<mon3 ip>:/path/ /mount/point ceph <options> 0 0

Both the Ceph community and Red Hat recommend the comma separator for
mounting multiple CephFS monitor nodes. (See section 4.2 point 3)
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/mounting_and_unmounting_ceph_file_systems
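
For reference, a multi-monitor CephFS mount from the command line and the matching
fstab entry look roughly like the following; the monitor addresses, mount point and
secret file path are placeholders:

# Manual mount against three monitors
mount -t ceph 192.168.1.1:6789,192.168.1.2:6789,192.168.1.3:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Equivalent /etc/fstab line
192.168.1.1:6789,192.168.1.2:6789,192.168.1.3:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev 0 0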


Now to oVirt/RHV.

When I mount my Data Domain path as a POSIX file system with a path of
"<monitor ip>:/path/" it works splendidly well (especially after
the last Red Hat kernel update!). I've done a bunch of stuff to it and
it seems to work every time. However, I don't have the redundancy of
multiple Ceph Monitors.

When I mount my Data Domain path as a POSIX file system with a path of
"<mon1 ip>,<mon2 ip>,<mon3 ip>:/path/"
most things seem to work. But I noticed a higher rate of failures. The
only failure that I can trigger 100% of the time, though, is to mount a
second data import domain and attempt to copy a VM disk from the import
into the CephFS Data domain. Then I get an error like this:

VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed:
low level Image copy failed:
(u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error:
ParamsList: sep , in
/rhev/data-center/mnt/,,:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)

Uh, oh. It seems that the commas in the mount path are causing the
problems. So I went looking through the logs for "sep , in" and found a
bunch more hits which makes me think that this is actually the problem
message.

I've switched back to just one IP address for the time being but I
obviously want the Ceph redundancy back. While running on just one IP,
the vm disk that refused to copy before had no problem copying. The
_only_ change I made was dropping two of the three IP's from the Data
Domain path option.

Is this a bug, or did I do something wrong?

Does anyone have a suggestion for me to try?

Thank you!
~Stack~
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VVKOIQIDEH5ZV5XPVO3ZTJKFZPVF2GG/


[ovirt-users] Re: Windows 10 vs others windows

2018-08-29 Thread Petr Kotas
Hi Carl,

It may be that you are hitting the same issue discussed in this
thread: "[ovirt-users] Windows vm and cpu type".

Maybe you do not have all the required CPU flags turned on in the BIOS?
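
A quick way to verify on the hypervisor that the virtualization extensions are
exposed at all (hardware-dependent; vmx is Intel VT-x, svm is AMD-V):

# No output here usually means the extension is disabled in the BIOS
grep -E -o 'vmx|svm' /proc/cpuinfo | sort | uniq -c

# libvirt's view of the host CPU model and features
virsh capabilities | grep -A 5 '<cpu>'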

Best,
Petr

On Tue, Aug 28, 2018 at 5:13 PM carl langlois 
wrote:

> Hi
>
> Why, when I try to create a Windows 10 machine, do I get an error that the
> guest OS is not supported on the CPU, but not with Windows 8 or 7?
>
> Thanks,
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GYOFKCJXTUI4ZHU3PHGYT4MYRETZM7G7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/23CQU3ICVIDRWSZEMUYUESNEN5A4IPQA/


[ovirt-users] Re: snapshots upload

2018-08-29 Thread Shani Leviim
Hi David,

It seems that you're trying to create a RAW SPARSE disk.

According to [1], the RAW sparse disk configuration is supported on NFS
storage domains, but not on block domains like iSCSI or FCP.
Can you please verify the target storage domain's type in your script?

[1]
https://www.ovirt.org/documentation/admin-guide/chap-Virtual_Machine_Disks/#understanding-virtual-disks
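
If the target really is a block domain (iSCSI or FCP), a combination that is
accepted there is COW (qcow2) sparse or RAW preallocated. A minimal sketch of
checking the domain type and picking a compatible format with the Python SDK could
look like this (connection details are placeholders; sd_name is taken from the
original script):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

sd_name = 'data_sas3'
sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=%s' % sd_name)[0]

# sd.storage.type is e.g. StorageType.NFS, StorageType.ISCSI or StorageType.FCP
on_block = sd.storage.type in (types.StorageType.ISCSI, types.StorageType.FCP)

disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='restored-disk',
        provisioned_size=20 * 2**30,
        # Block domains accept COW sparse or RAW preallocated;
        # RAW sparse is only valid on file domains such as NFS.
        format=types.DiskFormat.COW if on_block else types.DiskFormat.RAW,
        sparse=True,
        storage_domains=[types.StorageDomain(name=sd_name)],
    )
)

connection.close()

If the goal is to preserve the qcow2 snapshot chain from the backup, using COW
format on either domain type may be the simpler option.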


*Regards,*

*Shani Leviim*

On Wed, Aug 29, 2018 at 10:40 AM, David David  wrote:

> hi all
>
> ovirt engine 4.2.5.2-1.el7
>
> ovirt node:
> KVM Version: 2.9.0 - 16.el7_4.14.1
> LIBVIRT Version: libvirt-3.2.0-14.el7_4.9
> VDSM Version: vdsm-4.20.27.1-1.el7.centos
>
> Can't restore a VM by following this instruction: https://ovirt.org/develop/
> release-management/features/storage/backup-restore-disk-snapshots/
> error message:
>
> # python upload_disk_snapshots.py
> Creating disk: 414d6613-5cfe-493c-ae6c-aa29caa32983
> Traceback (most recent call last):
>   File "upload_disk_snapshots.py", line 305, in 
> disk = create_disk(base_volume, disk_id, sd_name, disks_service)
>   File "upload_disk_snapshots.py", line 186, in create_disk
> name=sd_name
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line
> 6715, in add
> return self._internal_add(disk, headers, query, wait)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 232, in _internal_add
> return future.wait() if wait else future
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 55, in wait
> return self._code(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 229, in callback
> self._check_fault(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 132, in _check_fault
> self._raise_error(response, body)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> 118, in _raise_error
> raise error
> ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
> "[Cannot add Virtual Disk. Disk configuration (RAW Sparse) is incompatible
> with the storage domain type.]". HTTP response code is 400.
>
> # tree 414d6613-5cfe-493c-ae6c-aa29caa32983/
> 414d6613-5cfe-493c-ae6c-aa29caa32983/
> ├── 3610d5fd-6f55-46d9-a226-c06eee8e21e6
> └── f77207b2-6e5b-4464-bd6f-5ae6d776435d
>
> 414d6613-5cfe-493c-ae6c-aa29caa32983 - disk id
> 3610d5fd-6f55-46d9-a226-c06eee8e21e6 - base image file
> f77207b2-6e5b-4464-bd6f-5ae6d776435d - snapshot1 file
>
> # qemu-img info 
> 414d6613-5cfe-493c-ae6c-aa29caa32983/3610d5fd-6f55-46d9-a226-c06eee8e21e6
>
> image: 414d6613-5cfe-493c-ae6c-aa29caa32983/3610d5fd-6f55-
> 46d9-a226-c06eee8e21e6
> file format: qcow2
> virtual size: 20G (21474836480 bytes)
> disk size: 22G
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
> # qemu-img info 
> 414d6613-5cfe-493c-ae6c-aa29caa32983/f77207b2-6e5b-4464-bd6f-5ae6d776435d
>
> image: 414d6613-5cfe-493c-ae6c-aa29caa32983/f77207b2-6e5b-
> 4464-bd6f-5ae6d776435d
> file format: qcow2
> virtual size: 20G (21474836480 bytes)
> disk size: 1.0G
> cluster_size: 65536
> backing file: 3610d5fd-6f55-46d9-a226-c06eee8e21e6 (actual path:
> 414d6613-5cfe-493c-ae6c-aa29caa32983/3610d5fd-6f55-46d9-a226-c06eee8e21e6)
> backing file format: qcow2
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
> upload_disk_snapshots.py:
> ==
> if __name__ == "__main__":
>
> # Set storage domain name
> sd_name = 'data_sas3'
>
> # Set OVF file path
> ovf_file_path = 'f4fdaf18-b944-4d22-879b-e235145a93f6.ovf'
>
> # Disk to upload
> disk_path = '414d6613-5cfe-493c-ae6c-aa29caa32983'
> disk_id = os.path.basename(disk_path)
> ==
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/EATGJIKM3ZHUVKXYA2ZRDNLBDDSEM4XG/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LFJF3IQEXKKEYKUSBBWOLH7EAQLP73RM/


[ovirt-users] snapshots upload

2018-08-29 Thread David David
hi all

ovirt engine 4.2.5.2-1.el7

ovirt node:
KVM Version: 2.9.0 - 16.el7_4.14.1
LIBVIRT Version: libvirt-3.2.0-14.el7_4.9
VDSM Version: vdsm-4.20.27.1-1.el7.centos

Can't restore a VM by following this instruction:
https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots/
error message:

# python upload_disk_snapshots.py
Creating disk: 414d6613-5cfe-493c-ae6c-aa29caa32983
Traceback (most recent call last):
  File "upload_disk_snapshots.py", line 305, in 
disk = create_disk(base_volume, disk_id, sd_name, disks_service)
  File "upload_disk_snapshots.py", line 186, in create_disk
name=sd_name
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line
6715, in add
return self._internal_add(disk, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 232,
in _internal_add
return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55,
in wait
return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 229,
in callback
self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132,
in _check_fault
self._raise_error(response, body)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118,
in _raise_error
raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
"[Cannot add Virtual Disk. Disk configuration (RAW Sparse) is incompatible
with the storage domain type.]". HTTP response code is 400.

# tree 414d6613-5cfe-493c-ae6c-aa29caa32983/
414d6613-5cfe-493c-ae6c-aa29caa32983/
├── 3610d5fd-6f55-46d9-a226-c06eee8e21e6
└── f77207b2-6e5b-4464-bd6f-5ae6d776435d

414d6613-5cfe-493c-ae6c-aa29caa32983 - disk id
3610d5fd-6f55-46d9-a226-c06eee8e21e6 - base image file
f77207b2-6e5b-4464-bd6f-5ae6d776435d - snapshot1 file

# qemu-img info
414d6613-5cfe-493c-ae6c-aa29caa32983/3610d5fd-6f55-46d9-a226-c06eee8e21e6
image:
414d6613-5cfe-493c-ae6c-aa29caa32983/3610d5fd-6f55-46d9-a226-c06eee8e21e6
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 22G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

# qemu-img info
414d6613-5cfe-493c-ae6c-aa29caa32983/f77207b2-6e5b-4464-bd6f-5ae6d776435d
image:
414d6613-5cfe-493c-ae6c-aa29caa32983/f77207b2-6e5b-4464-bd6f-5ae6d776435d
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 1.0G
cluster_size: 65536
backing file: 3610d5fd-6f55-46d9-a226-c06eee8e21e6 (actual path:
414d6613-5cfe-493c-ae6c-aa29caa32983/3610d5fd-6f55-46d9-a226-c06eee8e21e6)
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

upload_disk_snapshots.py:
==
if __name__ == "__main__":

# Set storage domain name
sd_name = 'data_sas3'

# Set OVF file path
ovf_file_path = 'f4fdaf18-b944-4d22-879b-e235145a93f6.ovf'

# Disk to upload
disk_path = '414d6613-5cfe-493c-ae6c-aa29caa32983'
disk_id = os.path.basename(disk_path)
==
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EATGJIKM3ZHUVKXYA2ZRDNLBDDSEM4XG/