Re: [ovirt-users] Shutdown all VM's command line

2018-01-09 Thread Yaniv Kaul
On Jan 10, 2018 1:56 AM, "Marcelo Leandro"  wrote:

You can do this with the Python SDK: build a list of the VMs and shut them
all down. If you want, I can make a script for you tomorrow.


Or with Ansible.
But in 4.2, I believe the VMs are registered with libvirt and will be shut
down nicely with the host.
Y.
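
For reference, a minimal sketch of such a shutdown script with the oVirt
Python SDK 4 (the engine URL, credentials and CA path below are placeholders
to adjust for your own setup):

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine FQDN
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
vms_service = connection.system_service().vms_service()

# Collect the VMs that are currently up and ask each guest to shut down.
running = vms_service.list(search='status=up')
for vm in running:
    print('Shutting down %s' % vm.name)
    vms_service.vm_service(vm.id).shutdown()

# Optionally wait until they are all down before rebooting the host.
while any(vms_service.vm_service(vm.id).get().status != types.VmStatus.DOWN
          for vm in running):
    time.sleep(5)

connection.close()

shutdown() asks the guest to power off gracefully (ACPI / guest agent); stop()
on the same VM service would force it off instead.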



Marcelo Leandro

On 09/01/2018 8:30 PM, "Wesley Stewart"  wrote:

> Is there an easy way to do this?
>
> I have a UPS connected to my FreeNAS box, and using NUT and upsc I can
> monitor the battery level on my CentOS 7 host.
>
> In the case of an emergency, if my UPS goes below, let's say, 40% battery,
> I would like a script that shuts down all running VMs and then reboots
> the host.
>
> Otherwise I am afraid of sending a straight reboot command to the host
> without taking care of the virtual machines first.
>
> Thanks!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-live 4.2 - missing stable ISO? + Networking question - when to setup host & in NM or ifcfg?

2018-01-09 Thread Sam McLeod
I'm trying to find the stable / current oVirt-live ISO to download.

According to the official documentation, it looks like there is only the legacy 
4.1 ISO, or nightly / unstable builds of 4.2(.2?)?

SRC: https://www.ovirt.org/download/ovirt-live/ 


---

Alternatively, if one is to deploy the 'self-hosted-engine' on top of CentOS
7.4, the documentation doesn't make it clear what you should vs. shouldn't set up
on the host prior to deploying the hosted engine.

For example, some big questions pop up around networking, e.g.:

- Should I be setting those up during CentOS's install using the installer
(which I believe configures them with NetworkManager), or should I be setting
up the ifcfg files by hand, without touching NetworkManager, via the
server's remote console after the install has finished?

- Is it OK for me to set up my first two NICs in a mode 1 (A/P) or 4 (XOR) bond
(these are connected to a 'dumb' switch and provide internet access, and later
will be used for management activities such as VM migrations)?

- Is it OK for me to set up my second two NICs in a LACP bond (required as they
connect to our core switches) and to add VLANs on top of that bond, including the
storage VLAN required for iSCSI access, which is later required to host the
hosted-engine?

SRC: 
https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
 


---

I think the hardest thing is that the documentation for oVirt seems very poorly 
maintained, or it's at least scattered around various links or different guides.

Perhaps this isn't obvious to people who are already familiar with the
components, terminology and setup practices of oVirt / RHEV, but for someone
like me coming from XenServer it's confusing as anything.


Example diagram of infrastructure: https://i.imgur.com/U4hCP3a.png 


--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer 
or partners.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Rebuilding my infra..

2018-01-09 Thread Kasturi Narra
Hi carl,

  During deployment via the cockpit + gdeploy plugin, when you input the hosts,
the host in the third text box will be considered the arbiter host.

Thanks
kasturi



On Tue, Jan 9, 2018 at 11:26 PM, carl langlois 
wrote:

> Some questions about the arbiter box.
>
> 1- Let's say I am using an old box with minimal specs for the arbiter. Do I
> need to mark it as an arbiter box? Do the installer scripts need to know
> that it is an arbiter box?
> 2- When I get my hands on a new host for running VMs, do I still need the
> arbiter box?
>
> Thanks
>
> On Tue, Jan 9, 2018 at 2:28 AM, Yedidyah Bar David 
> wrote:
>
>> On Mon, Jan 8, 2018 at 10:57 PM, Karli Sjöberg 
>> wrote:
>>
>>>
>>>
>>> On 8 Jan 2018 21:48, Vinícius Ferrão  wrote:
>>>
>>> If I’m not wrong GlusterFS in oVirt requires 3 hosts.
>>>
>>>
>>> Wasn't there a Riding Hood not long ago that said they'd change it down
>>> to 1 with 4.2, just to have something to get you started? To lower the bar
>>> for POC systems, like all-in-one?
>>>
>>
>> There is this:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1494112
>>
>> One dependent bug is in POST. Although I was asked to
>> review the patch, I don't know much about the general
>> plan, including whether that's enough or more bugs/fixes
>> will be needed. Adding Sahina.
>>
>> Best regards,
>>
>>
>>>
>>> /K
>>>
>>>
>>> Here’s the RHHI guide, it’s pretty much the same for oVirt:
>>> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html/deploying_red_hat_hyperconverged_infrastructure/
>>>
>>> > On 8 Jan 2018, at 18:10, carl langlois wrote:
>>> >
>>> > Hi all
>>> >
>>> > After screwing up my infra with the update to 4.2 (probably a bad
>>> > manipulation), I am planning a rebuild of the entire infra. First I want to
>>> > replace my NFS storage with GlusterFS storage. All the documentation tells me
>>> > that I need 3 hosts, but for the moment I only have 2, though I am planning
>>> > to add more later.
>>> >
>>> > So does it make sense to start with 2 hosts and use GlusterFS as the
>>> > storage domain (let's say with a replica of two, with all its limitations)?
>>> > If it makes sense,
>>> > 1- what is the best way to do it?
>>> > 2- how hard will it be to add the 3rd host when available and make it
>>> > replica 2+arbiter?
>>> >
>>> > Also, in a setup where I have 3 hosts (replica 2+arbiter), can all
>>> > 3 hosts run user VMs?
>>> >
>>> > Thanks for your inputs.
>>> >
>>> > Carl
>>> >
>>> > ___
>>> > Users mailing list
>>> > Users@ovirt.org
>>> > http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>> Didi
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Planned restart of production services

2018-01-09 Thread Evgheni Dereveanchin
All services are finally back up and running. The updated kernel refused to
boot on Gerrit which caused the delay.
I also restored the proper IP for Gerrit so if you see any name resolution
issues - please flush your DNS resolver cache.
For any other issues - please open a ticket at jira.ovirt.org so that we
can take a look at it.

On Wed, Jan 10, 2018 at 1:34 AM, Evgheni Dereveanchin 
wrote:

> Unfortunately, the outage is still ongoing due to an unexpected issue with
> the Gerrit system.
> Keeping Jenkins offline until that is resolved. Package repositories are
> up and running. Will update you on further progress.
>
> On Tue, Jan 9, 2018 at 11:39 PM, Evgheni Dereveanchin  > wrote:
>
>> Hi everyone,
>>
>> I will be restarting several production systems within the following hour
>> to apply security updates.
>> The following services may be unreachable for some period of time:
>> - resources.ovirt.org - package repositories
>> - gerrit.ovirt.org - code review
>> - jenkins.ovirt.org - CI master
>>
>> It will not be possible to submit/review patches, clone repositories or
>> run CI jobs during this period. Package repositories will also be
>> unreachable for a short period of time.
>>
>> I will let you know once the maintenance is complete.
>>
>> --
>> Regards,
>> Evgheni Dereveanchin
>>
>
>
>
> --
> Regards,
> Evgheni Dereveanchin
>



-- 
Regards,
Evgheni Dereveanchin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Planned restart of production services

2018-01-09 Thread Evgheni Dereveanchin
Unfortunately, the outage is still ongoing due to an unexpected issue with
the Gerrit system.
Keeping Jenkins offline until that is resolved. Package repositories are up
and running. Will update you on further progress.

On Tue, Jan 9, 2018 at 11:39 PM, Evgheni Dereveanchin 
wrote:

> Hi everyone,
>
> I will be restarting several production systems within the following hour
> to apply security updates.
> The following services may be unreachable for some period of time:
> - resources.ovirt.org - package repositories
> - gerrit.ovirt.org - code review
> - jenkins.ovirt.org - CI master
>
> It will not be possible to submit/review patches, clone repositories or
> run CI jobs during this period. Package repositories will also be
> unreachable for a short period of time.
>
> I will let you know once the maintenance is complete.
>
> --
> Regards,
> Evgheni Dereveanchin
>



-- 
Regards,
Evgheni Dereveanchin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Non-RHEL RPMs gone after RHV-H upgrade

2018-01-09 Thread Colin Coe
Hi all

We're running RHV 4.1.6.  Yesterday I upgraded the RHV-H nodes in our DEV
environment to 20180102 and found that all the non-RHEL RPMs are now gone.
Their associated config files in /etc are still there.  The RPMs in
question were from HPE SPP plus a monitoring system client (Xymon).

I had thought that non-RHEL RPMs would persist after a host upgrade.

Am I wrong on this?

Thanks

CC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Shutdown all VM's command line

2018-01-09 Thread Marcelo Leandro
You can do this with the Python SDK: build a list of the VMs and shut them
all down. If you want, I can make a script for you tomorrow.


Marcelo Leandro

On 09/01/2018 8:30 PM, "Wesley Stewart"  wrote:

> Is there an easy way to do this?
>
> I have a UPS connected to my FreeNAS box, and using NUT and upsc I can
> monitor the battery level on my CentOS 7 host.
>
> In the case of an emergency, if my UPS goes below, let's say, 40% battery,
> I would like a script that shuts down all running VMs and then reboots
> the host.
>
> Otherwise I am afraid of sending a straight reboot command to the host
> without taking care of the virtual machines first.
>
> Thanks!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Shutdown all VM's command line

2018-01-09 Thread Wesley Stewart
Is there an easy way to do this?

I have a UPS connected to my FreeNAS box, and using NUT and upsc I can
monitor the battery level on my CentOS 7 host.

In the case of an emergency, if my UPS goes below, let's say, 40% battery, I
would like a script that shuts down all running VMs and then reboots
the host.

Otherwise I am afraid of sending a straight reboot command to the host
without taking care of the virtual machines first.

Thanks!
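
For what it's worth, a minimal polling sketch of that idea in Python (the UPS
name "ups@freenas", the 40% threshold and the path of the VM-shutdown script
are assumptions to adjust for your setup):

import subprocess
import time

THRESHOLD = 40  # percent

def battery_charge(ups='ups@freenas'):
    # `upsc <ups> battery.charge` prints the current charge reported by NUT.
    out = subprocess.check_output(['upsc', ups, 'battery.charge'])
    return float(out.strip())

while True:
    if battery_charge() <= THRESHOLD:
        # e.g. an SDK-based "shut down all VMs" script, then reboot the host
        subprocess.call(['/usr/local/bin/shutdown-all-vms.py'])
        subprocess.call(['systemctl', 'reboot'])
        break
    time.sleep(60)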
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Planned restart of production services

2018-01-09 Thread Evgheni Dereveanchin
Hi everyone,

I will be restarting several production systems within the following hour
to apply security updates.
The following services may be unreachable for some period of time:
- resources.ovirt.org - package repositories
- gerrit.ovirt.org - code review
- jenkins.ovirt.org - CI master

It will not be possible to submit/review patches, clone repositories or run
CI jobs during this period. Package repositories will also be unreachable
for a short period of time.

I will let you know once the maintenance is complete.

-- 
Regards,
Evgheni Dereveanchin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Nir Soffer
On Tue, Jan 9, 2018 at 12:33 AM ~Stack~  wrote:

> On 01/08/2018 07:15 AM, Gianluca Cecchi wrote:
> > Probably he refers to this blog:
> >
> https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/
> >
> > with:
> > "
> > *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
> > certified as a storage domain for virtual machines. This provides more
> > infrastructure and deployment choices for engineers and architects.
> > "
> >
> > It seems to be a documented feature that didn't get any mention in the oVirt 4.2
> > release notes:
> > https://ovirt.org/release/4.2.0/
> >
> > But I think in general, given a version, it is not guaranteed that what
> > in RHEV maps with what in oVirt and vice versa.
> > I don't know if this one about Ceph via iSCSI is one of them.
>
> ErrrWHAA???
>
> If Ceph support is in oVirt, I am about to be extremely excited. I
> just racked the hardware for a new oVirt install today and the Ceph gear
> is showing up in a few weeks. I was planning on setting up a dedicated
> NFS server for VMs, essentially having two storage domains, but if I can
> just have Ceph... I would be a very happy sysadmin!
>

Ceph is supported since 3.6 via Cinder.

But the support is not complete: some features are not available with Ceph,
mainly integration with other storage types, like moving disks to/from Ceph
and other storage, live storage migration, image upload and download,
and VM leases.

But it may be good enough to keep you happy!

Nir


> ~Stack~
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Abdurrahman A. Ibrahim
Thank you for the explanation. :)

Best regards,
Ab

On Tue, Jan 9, 2018 at 12:47 PM, Yaniv Kaul  wrote:

>
>
> On Mon, Jan 8, 2018 at 3:27 PM, Abdurrahman A. Ibrahim <
> a.rahman.at...@gmail.com> wrote:
>
>> Yup, Gianluca is right.
>> My bad to mention RHV 4.2 Beta release notes instead of the CaptainKVM blog
>> post "https://www.ovirt.org/documentation/vmm-guide/chap-Additional_Configuration/"
>> """
>>
>>> *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested
>>> and certified as a storage domain for virtual machines. This provides more
>>> infrastructure and deployment choices for engineers and architects.
>>
>> """
>>
>> Do we have any oVirt documentation mentioned that?
>>
>
> I'm afraid not. For oVirt, it's just another iSCSI target like any other
> iSCSI storage.
> Y.
>
>
>>
>> Best regards,
>> Ab
>>
>> On Mon, Jan 8, 2018 at 2:15 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Mon, Jan 8, 2018 at 2:06 PM, Fred Rolland 
>>> wrote:
>>>
 Hi,

 Do you have a link about this information?

 Thanks,
 Freddy



>>>
>>> Probably he refers to this blog:
>>> https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/
>>>
>>> with:
>>> "
>>> *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested
>>> and certified as a storage domain for virtual machines. This provides more
>>> infrastructure and deployment choices for engineers and architects.
>>> "
>>>
>>> It seems to be a documented feature that didn't get any mention in the oVirt 4.2
>>> release notes:
>>> https://ovirt.org/release/4.2.0/
>>>
>>> But I think in general, given a version, it is not guaranteed that what
>>> in RHEV maps with what in oVirt and vice versa.
>>> I don't know if this one about Ceph via iSCSI is one of them.
>>> Gianluca
>>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Rebuilding my infra..

2018-01-09 Thread carl langlois
Some questions about the arbiter box.

1- Let's say I am using an old box with minimal specs for the arbiter. Do I
need to mark it as an arbiter box? Do the installer scripts need to know
that it is an arbiter box?
2- When I get my hands on a new host for running VMs, do I still need the
arbiter box?

Thanks

On Tue, Jan 9, 2018 at 2:28 AM, Yedidyah Bar David  wrote:

> On Mon, Jan 8, 2018 at 10:57 PM, Karli Sjöberg 
> wrote:
>
>>
>>
>> On 8 Jan 2018 21:48, Vinícius Ferrão  wrote:
>>
>> If I’m not wrong GlusterFS in oVirt requires 3 hosts.
>>
>>
>> Wasn't there a Riding Hood not long ago that said they'd change it down
>> to 1 with 4.2, just to have something to get you started? To lower the bar
>> for POC systems, like all-in-one?
>>
>
> There is this:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1494112
>
> One dependent bug is in POST. Although I was asked to
> review the patch, I don't know much about the general
> plan, including whether that's enough or more bugs/fixes
> will be needed. Adding Sahina.
>
> Best regards,
>
>
>>
>> /K
>>
>>
>> Here’s the RHHI guide, it’s pretty much the same for oVirt:
>> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html/deploying_red_hat_hyperconverged_infrastructure/
>>
>> > On 8 Jan 2018, at 18:10, carl langlois wrote:
>> >
>> > Hi all
>> >
>> > After screwing up my infra with the update to 4.2 (probably a bad
>> > manipulation), I am planning a rebuild of the entire infra. First I want to
>> > replace my NFS storage with GlusterFS storage. All the documentation tells me
>> > that I need 3 hosts, but for the moment I only have 2, though I am planning
>> > to add more later.
>> >
>> > So does it make sense to start with 2 hosts and use GlusterFS as the
>> > storage domain (let's say with a replica of two, with all its limitations)?
>> > If it makes sense,
>> > 1- what is the best way to do it?
>> > 2- how hard will it be to add the 3rd host when available and make it
>> > replica 2+arbiter?
>> >
>> > Also, in a setup where I have 3 hosts (replica 2+arbiter), can all 3
>> > hosts run user VMs?
>> >
>> > Thanks for your inputs.
>> >
>> > Carl
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Didi
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Luca 'remix_tj' Lorenzetto
On Tue, Jan 9, 2018 at 2:45 PM, Matthias Leopold
 wrote:

> just for the record:
> we are running a production oVirt 4.1 cluster with storage on a Ceph 12.2
> cluster connected as an external provider, type: OpenStack Volume (= Cinder)
>

Yup, but that's different from using iSCSI, since you're using
qemu directly with librbd as the storage module.

Luca


-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Konstantin Shalygin

if I can
just have Ceph...I would be a very happy sys admin!


What is stopping you from starting to use Ceph via librbd NOW? All you need is
OpenStack Cinder as a volume manager wrapper.


You can check the librbd version of your hosts via the oVirt manager (see
attached screenshot).




I read in the RHV 4.2 Beta release notes that Ceph will be supported using iSCSI.
I have tried to check the community documentation regarding Ceph support but
had no luck. Do we have such a document?


This is just usual iSCSI

http://docs.ceph.com/docs/master/rbd/iscsi-overview/
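
Since the gateway just presents iSCSI LUNs, attaching one as an oVirt data
domain looks like any other iSCSI domain. A sketch with the Python SDK, where
the engine URL, credentials, host name, portal address, target IQN and LUN id
are all placeholders for your own environment:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
sds_service = connection.system_service().storage_domains_service()

# Add an iSCSI data domain backed by a LUN exported by the Ceph iSCSI gateway.
sds_service.add(
    types.StorageDomain(
        name='ceph_iscsi',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),
        storage=types.HostStorage(
            type=types.StorageType.ISCSI,
            logical_units=[
                types.LogicalUnit(
                    id='360014051234567890abcdef',    # LUN id as seen by the host
                    address='iscsi-gw.example.com',   # gateway portal
                    port=3260,
                    target='iqn.2003-01.com.example.iscsi-gw:ceph-igw',
                ),
            ],
        ),
    ),
)
connection.close()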



k

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
for example
./ovirt-engine/host-deploy/ovirt-host-mgmt-2018010924-dipovirt03.cnc.sk-15970547.log

The Ansible firewalld module does not work if firewalld is not running.
In my case the cluster setting is set to 'iptables', so this was my mistake.

Consider this solved ;)

Peter

On 09/01/2018 15:01, Yedidyah Bar David wrote:
> On Tue, Jan 9, 2018 at 3:25 PM, Peter Hudec  > wrote:
> 
> It's not a bug as I'm digging.
> 
> 
> Very well :-)
>  
> 
> 
> In logs I found
> 
> 
> Which logs?
>  
> 
> 
> 2018-01-09 08:23:22,421+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
> 2018-01-09 08:23:22,422+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'
> 
> So how to disable iptables and enable firewalld ?
> 
> 
> If host-deploy, then it's a per-host/per-cluster option you should
> be able to choose in the web admin ui.
>  
> 
> 
>         Peter
> 
> On 09/01/2018 13:47, Yedidyah Bar David wrote:
> > (Adding Ondra for the firewalld stuff. But I think it's probably
> > easier to debug if you open a bug and attach logs there).
> >
> > On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec  
> > >> wrote:
> >
> >     If I run host reinstall with custom firewall rules in
> >     /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml the task 
> will
> >     fails due the firewalld is not running.
> >
> >     The reinstall task will disable firewalld and enable 
> iptables-services.
> >     I'm little bit confused ;(
> >
> >     ---
> >     - name: Enable additional port on firewalld
> >       firewalld:
> >         port: "10050/tcp"
> >         permanent: yes
> >         immediate: yes
> >         state: enabled
> >
> >
> >     2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
> >     /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
> >     dipovirt01.cnc.sk 
> 
> >     2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable additional 
> port
> >     on firewalld] *
> >     2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal: 
> [dipovirt01.cnc.sk 
> >     ]:
> >     FAILED! => {"changed": false, "module_stderr": "Shared connection to
> >     dipovirt01.cnc.sk 
>  closed.\r\n",
> >     "module_stdout": "Traceback (most recent
> >     call last):\r\n  File
> >     \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
> >     \r\n    main()\r\n  File
> >     \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
> >     main\r\n    module.fail(msg='firewall is not currently running, 
> unable
> >     to perform immediate actions without a running firewall
> >     daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute
> >     'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
> >     2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
> >     
> *
> >
> >
> >     After reinstalation the status of firewalld is
> >     [PROD] r...@dipovirt01.cnc.sk 
> >:
> >     /var/log/vdsm # systemctl status firewalld
> >     ● firewalld.service - firewalld - dynamic firewall daemon
> >        Loaded: loaded (/usr/lib/systemd/system/firewalld.service; 
> disabled;
> >     vendor preset: enabled)
> >        Active: inactive (dead)
> >          Docs: man:firewalld(1)
> >
> >
> >     So how could I switch to firewalld? package iptables-service could 
> not
> >     be removed due the dependencies.
> >
> >             Peter
> >
> >     On 09/01/2018 09:35, Yedidyah Bar David wrote:
> >     >
> >     >     1) firewalld
> >     >     after upgrade the hot server, the i needed to stop firewalld. 
> It seems,
> >     >     that, the rules are not generated correctly. The engine was 
> not able to
> >     >     connect to the host. How do I could fix it?
> >     >
> >     >
> >     > Please check/share relevant files from 
> /var/log/ovirt-engine/ansible/
> >     > and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
> >     > attach them there.
> >
> >
> >     --
> >     *Peter Hudec*
> >     Infraštruktúrny architekt
> >     phu...@cnc.sk   > 
> >     

Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Yedidyah Bar David
On Tue, Jan 9, 2018 at 3:25 PM, Peter Hudec  wrote:

> It's not a bug as I'm digging.
>

Very well :-)


>
> In logs I found
>

Which logs?


>
> 2018-01-09 08:23:22,421+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
> 2018-01-09 08:23:22,422+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'
>
> So how to disable iptables and enable firewalld ?
>

If host-deploy, then it's a per-host/per-cluster option you should
be able to choose in the web admin ui.


>
> Peter
>
> On 09/01/2018 13:47, Yedidyah Bar David wrote:
> > (Adding Ondra for the firewalld stuff. But I think it's probably
> > easier to debug if you open a bug and attach logs there).
> >
> > On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec  > > wrote:
> >
> > If I run host reinstall with custom firewall rules in
> > /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml the task
> will
> > fails due the firewalld is not running.
> >
> > The reinstall task will disable firewalld and enable
> iptables-services.
> > I'm little bit confused ;(
> >
> > ---
> > - name: Enable additional port on firewalld
> >   firewalld:
> > port: "10050/tcp"
> > permanent: yes
> > immediate: yes
> > state: enabled
> >
> >
> > 2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
> > /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
> > dipovirt01.cnc.sk 
> > 2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable additional
> port
> > on firewalld] *
> > 2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal: [dipovirt01.cnc.sk
> > ]:
> > FAILED! => {"changed": false, "module_stderr": "Shared connection to
> > dipovirt01.cnc.sk  closed.\r\n",
> > "module_stdout": "Traceback (most recent
> > call last):\r\n  File
> > \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
> > \r\nmain()\r\n  File
> > \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
> > main\r\nmodule.fail(msg='firewall is not currently running,
> unable
> > to perform immediate actions without a running firewall
> > daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute
> > 'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
> > 2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
> > 
> *
> >
> >
> > After reinstalation the status of firewalld is
> > [PROD] r...@dipovirt01.cnc.sk :
> > /var/log/vdsm # systemctl status firewalld
> > ● firewalld.service - firewalld - dynamic firewall daemon
> >Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
> disabled;
> > vendor preset: enabled)
> >Active: inactive (dead)
> >  Docs: man:firewalld(1)
> >
> >
> > So how could I switch to firewalld? package iptables-service could
> not
> > be removed due the dependencies.
> >
> > Peter
> >
> > On 09/01/2018 09:35, Yedidyah Bar David wrote:
> > >
> > > 1) firewalld
> > > after upgrade the hot server, the i needed to stop firewalld.
> It seems,
> > > that, the rules are not generated correctly. The engine was
> not able to
> > > connect to the host. How do I could fix it?
> > >
> > >
> > > Please check/share relevant files from
> /var/log/ovirt-engine/ansible/
> > > and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
> > > attach them there.
> >
> >
> > --
> > *Peter Hudec*
> > Infraštruktúrny architekt
> > phu...@cnc.sk   > >
> >
> > *CNC, a.s.*
> > Borská 6, 841 04 Bratislava
> > Recepcia: +421 2  35 000 100 
> >
> > Mobil:+421 905 997 203 
> > *www.cnc.sk *  > >
> >
> >
> >
> >
> > --
> > Didi
>
>
> --
> *Peter Hudec*
> Infraštruktúrny architekt
> phu...@cnc.sk 
>
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Recepcia: +421 2  35 000 100
>
> Mobil:+421 905 997 203
> *www.cnc.sk* 
>
>


-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Matthias Leopold



Am 2018-01-08 um 23:32 schrieb ~Stack~:

On 01/08/2018 07:15 AM, Gianluca Cecchi wrote:

Probably he refers to this blog:
https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/

with:
"
*Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
certified as a storage domain for virtual machines. This provides more
infrastructure and deployment choices for engineers and architects.
"

It seems to be a documented feature that didn't get any mention in the oVirt 4.2
release notes:
https://ovirt.org/release/4.2.0/

But I think in general, given a version, it is not guaranteed that what
in RHEV maps with what in oVirt and vice versa.
I don't know if this one about Ceph via iSCSI is one of them.


ErrrWHAA???

If Ceph support is in oVirt, I am about to be extremely excited. I
just racked the hardware for a new oVirt install today and the Ceph gear
is showing up in a few weeks. I was planning on setting up a dedicated
NFS server for VMs, essentially having two storage domains, but if I can
just have Ceph... I would be a very happy sysadmin!

~Stack~



just for the record:
we are running a production oVirt 4.1 cluster with storage on a Ceph
12.2 cluster connected as an external provider, type: OpenStack Volume
(= Cinder)


Matthias











___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
Thanks. The upgrade is in progress. I needed to solve some issues first.

Peter

On 09/01/2018 14:39, Martin Perina wrote:
> 
> 
> On Tue, Jan 9, 2018 at 2:25 PM, Peter Hudec  > wrote:
> 
> It's not a bug as I'm digging.
> 
> In logs I found
> 
> 2018-01-09 08:23:22,421+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
> 2018-01-09 08:23:22,422+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'
> 
> So how to disable iptables and enable firewalld ?
> 
> 
> ​Hi,
> 
> firewall type is a cluster level option. Please go to Clusters, edit
> selected cluster and change Firewall type to firewalld. After that you
> need to execute Reinstall on all hosts​ in the cluster to switch from
> iptables to firewalld on them.
> 
> Btw, I assume this is upgraded cluster, so please make sure that VDSM
> 4.20 (from oVirt 4.2) is installed on all hosts before making this change.
> 
> Thanks
> 
> Martin
> 
> 
>         Peter
> 
> On 09/01/2018 13:47, Yedidyah Bar David wrote:
> > (Adding Ondra for the firewalld stuff. But I think it's probably
> > easier to debug if you open a bug and attach logs there).
> >
> > On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec  
> > >> wrote:
> >
> >     If I run host reinstall with custom firewall rules in
> >     /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml the
> task will
> >     fails due the firewalld is not running.
> >
> >     The reinstall task will disable firewalld and enable
> iptables-services.
> >     I'm little bit confused ;(
> >
> >     ---
> >     - name: Enable additional port on firewalld
> >       firewalld:
> >         port: "10050/tcp"
> >         permanent: yes
> >         immediate: yes
> >         state: enabled
> >
> >
> >     2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
> >     /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
> >     dipovirt01.cnc.sk 
> 
> >     2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable
> additional port
> >     on firewalld] *
> >     2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal:
> [dipovirt01.cnc.sk 
> >     ]:
> >     FAILED! => {"changed": false, "module_stderr": "Shared
> connection to
> >     dipovirt01.cnc.sk 
>  closed.\r\n",
> >     "module_stdout": "Traceback (most recent
> >     call last):\r\n  File
> >     \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
> >     \r\n    main()\r\n  File
> >     \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
> >     main\r\n    module.fail(msg='firewall is not currently
> running, unable
> >     to perform immediate actions without a running firewall
> >     daemon')\r\nAttributeError: 'AnsibleModule' object has no
> attribute
> >     'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
> >     2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
> >   
>  *
> >
> >
> >     After reinstalation the status of firewalld is
> >     [PROD] r...@dipovirt01.cnc.sk 
> >:
> >     /var/log/vdsm # systemctl status firewalld
> >     ● firewalld.service - firewalld - dynamic firewall daemon
> >        Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
> disabled;
> >     vendor preset: enabled)
> >        Active: inactive (dead)
> >          Docs: man:firewalld(1)
> >
> >
> >     So how could I switch to firewalld? package iptables-service
> could not
> >     be removed due the dependencies.
> >
> >             Peter
> >
> >     On 09/01/2018 09:35, Yedidyah Bar David wrote:
> >     >
> >     >     1) firewalld
> >     >     after upgrade the hot server, the i needed to stop
> firewalld. It seems,
> >     >     that, the rules are not generated correctly. The engine
> was not able to
> >     >     connect to the host. How do I could fix it?
> >     >
> >     >
> >     > Please check/share relevant files from
> /var/log/ovirt-engine/ansible/
> >     > and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a
> bug and
> >     > attach them there.
> >
> >
> >     --
> >     *Peter Hudec*
> >     Infraštruktúrny architekt
> >     phu...@cnc.sk  

Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Martin Perina
On Tue, Jan 9, 2018 at 2:25 PM, Peter Hudec  wrote:

> It's not a bug as I'm digging.
>
> In logs I found
>
> 2018-01-09 08:23:22,421+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
> 2018-01-09 08:23:22,422+0100 DEBUG otopi.context
> context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'
>
> So how to disable iptables and enable firewalld ?
>

​Hi,

firewall type is a cluster level option. Please go to Clusters, edit
selected cluster and change Firewall type to firewalld. After that you need
to execute Reinstall on all hosts​ in the cluster to switch from iptables
to firewalld on them.

Btw, I assume this is upgraded cluster, so please make sure that VDSM 4.20
(from oVirt 4.2) is installed on all hosts before making this change.
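
The same cluster-level switch can also be scripted; a sketch with the Python
SDK for 4.2 (the cluster name "Default" and the connection details are
placeholders, and hosts still need the Reinstall afterwards for the change to
take effect):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
clusters_service = connection.system_service().clusters_service()

# Look up the cluster and flip its firewall type to firewalld.
cluster = clusters_service.list(search='name=Default')[0]
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(firewall_type=types.FirewallType.FIREWALLD)
)
connection.close()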

Thanks

Martin


> Peter
>
> On 09/01/2018 13:47, Yedidyah Bar David wrote:
> > (Adding Ondra for the firewalld stuff. But I think it's probably
> > easier to debug if you open a bug and attach logs there).
> >
> > On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec  > > wrote:
> >
> > If I run host reinstall with custom firewall rules in
> > /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml the task
> will
> > fails due the firewalld is not running.
> >
> > The reinstall task will disable firewalld and enable
> iptables-services.
> > I'm little bit confused ;(
> >
> > ---
> > - name: Enable additional port on firewalld
> >   firewalld:
> > port: "10050/tcp"
> > permanent: yes
> > immediate: yes
> > state: enabled
> >
> >
> > 2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
> > /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
> > dipovirt01.cnc.sk 
> > 2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable additional
> port
> > on firewalld] *
> > 2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal: [dipovirt01.cnc.sk
> > ]:
> > FAILED! => {"changed": false, "module_stderr": "Shared connection to
> > dipovirt01.cnc.sk  closed.\r\n",
> > "module_stdout": "Traceback (most recent
> > call last):\r\n  File
> > \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
> > \r\nmain()\r\n  File
> > \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
> > main\r\nmodule.fail(msg='firewall is not currently running,
> unable
> > to perform immediate actions without a running firewall
> > daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute
> > 'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
> > 2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
> > 
> *
> >
> >
> > After reinstalation the status of firewalld is
> > [PROD] r...@dipovirt01.cnc.sk :
> > /var/log/vdsm # systemctl status firewalld
> > ● firewalld.service - firewalld - dynamic firewall daemon
> >Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
> disabled;
> > vendor preset: enabled)
> >Active: inactive (dead)
> >  Docs: man:firewalld(1)
> >
> >
> > So how could I switch to firewalld? package iptables-service could
> not
> > be removed due the dependencies.
> >
> > Peter
> >
> > On 09/01/2018 09:35, Yedidyah Bar David wrote:
> > >
> > > 1) firewalld
> > > after upgrade the hot server, the i needed to stop firewalld.
> It seems,
> > > that, the rules are not generated correctly. The engine was
> not able to
> > > connect to the host. How do I could fix it?
> > >
> > >
> > > Please check/share relevant files from
> /var/log/ovirt-engine/ansible/
> > > and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
> > > attach them there.
> >
> >
> > --
> > *Peter Hudec*
> > Infraštruktúrny architekt
> > phu...@cnc.sk   > >
> >
> > *CNC, a.s.*
> > Borská 6, 841 04 Bratislava
> > Recepcia: +421 2  35 000 100 
> >
> > Mobil:+421 905 997 203 
> > *www.cnc.sk *  > >
> >
> >
> >
> >
> > --
> > Didi
>
>
> --
> *Peter Hudec*
> Infraštruktúrny architekt
> phu...@cnc.sk 
>
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Recepcia: +421 2  35 000 100
>
> Mobil:+421 905 997 203
> *www.cnc.sk* 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat 

Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
It's not a bug, as far as I can tell from my digging.

In logs I found

2018-01-09 08:23:22,421+0100 DEBUG otopi.context
context.dumpEnvironment:831 ENV NETWORK/firewalldEnable=bool:'False'
2018-01-09 08:23:22,422+0100 DEBUG otopi.context
context.dumpEnvironment:831 ENV NETWORK/iptablesEnable=bool:'True'

So how to disable iptables and enable firewalld ?

Peter

On 09/01/2018 13:47, Yedidyah Bar David wrote:
> (Adding Ondra for the firewalld stuff. But I think it's probably
> easier to debug if you open a bug and attach logs there).
> 
> On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec  > wrote:
> 
> If I run host reinstall with custom firewall rules in
> /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml the task will
> fails due the firewalld is not running.
> 
> The reinstall task will disable firewalld and enable iptables-services.
> I'm little bit confused ;(
> 
> ---
> - name: Enable additional port on firewalld
>   firewalld:
>     port: "10050/tcp"
>     permanent: yes
>     immediate: yes
>     state: enabled
> 
> 
> 2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
> /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
> dipovirt01.cnc.sk 
> 2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable additional port
> on firewalld] *
> 2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal: [dipovirt01.cnc.sk
> ]:
> FAILED! => {"changed": false, "module_stderr": "Shared connection to
> dipovirt01.cnc.sk  closed.\r\n",
> "module_stdout": "Traceback (most recent
> call last):\r\n  File
> \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
> \r\n    main()\r\n  File
> \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
> main\r\n    module.fail(msg='firewall is not currently running, unable
> to perform immediate actions without a running firewall
> daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute
> 'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
> 2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
> *
> 
> 
> After reinstalation the status of firewalld is
> [PROD] r...@dipovirt01.cnc.sk :
> /var/log/vdsm # systemctl status firewalld
> ● firewalld.service - firewalld - dynamic firewall daemon
>    Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled;
> vendor preset: enabled)
>    Active: inactive (dead)
>      Docs: man:firewalld(1)
> 
> 
> So how could I switch to firewalld? package iptables-service could not
> be removed due the dependencies.
> 
>         Peter
> 
> On 09/01/2018 09:35, Yedidyah Bar David wrote:
> >
> >     1) firewalld
> >     after upgrade the hot server, the i needed to stop firewalld. It 
> seems,
> >     that, the rules are not generated correctly. The engine was not 
> able to
> >     connect to the host. How do I could fix it?
> >
> >
> > Please check/share relevant files from /var/log/ovirt-engine/ansible/
> > and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
> > attach them there.
> 
> 
> --
> *Peter Hudec*
> Infraštruktúrny architekt
> phu...@cnc.sk   >
> 
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Recepcia: +421 2  35 000 100 
> 
> Mobil:+421 905 997 203 
> *www.cnc.sk *  >
> 
> 
> 
> 
> -- 
> Didi


-- 
*Peter Hudec*
Infraštruktúrny architekt
phu...@cnc.sk 

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2  35 000 100

Mobil:+421 905 997 203
*www.cnc.sk* 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Yedidyah Bar David
(Adding Ondra for the firewalld stuff. But I think it's probably
easier to debug if you open a bug and attach logs there).

On Tue, Jan 9, 2018 at 2:34 PM, Peter Hudec  wrote:

> If I run host reinstall with custom firewall rules in
> /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml the task will
> fails due the firewalld is not running.
>
> The reinstall task will disable firewalld and enable iptables-services.
> I'm little bit confused ;(
>
> ---
> - name: Enable additional port on firewalld
>   firewalld:
> port: "10050/tcp"
> permanent: yes
> immediate: yes
> state: enabled
>
>
> 2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
> /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
> dipovirt01.cnc.sk
> 2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable additional port
> on firewalld] *
> 2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal: [dipovirt01.cnc.sk]:
> FAILED! => {"changed": false, "module_stderr": "Shared connection to
> dipovirt01.cnc.sk closed.\r\n", "module_stdout": "Traceback (most recent
> call last):\r\n  File
> \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
> \r\nmain()\r\n  File
> \"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
> main\r\nmodule.fail(msg='firewall is not currently running, unable
> to perform immediate actions without a running firewall
> daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute
> 'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
> 2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
> *
>
>
> After reinstalation the status of firewalld is
> [PROD] r...@dipovirt01.cnc.sk: /var/log/vdsm # systemctl status firewalld
> ● firewalld.service - firewalld - dynamic firewall daemon
>Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled;
> vendor preset: enabled)
>Active: inactive (dead)
>  Docs: man:firewalld(1)
>
>
> So how could I switch to firewalld? package iptables-service could not
> be removed due the dependencies.
>
> Peter
>
> On 09/01/2018 09:35, Yedidyah Bar David wrote:
> >
> > 1) firewalld
> > after upgrade the hot server, the i needed to stop firewalld. It
> seems,
> > that, the rules are not generated correctly. The engine was not able
> to
> > connect to the host. How do I could fix it?
> >
> >
> > Please check/share relevant files from /var/log/ovirt-engine/ansible/
> > and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
> > attach them there.
>
>
> --
> *Peter Hudec*
> Infraštruktúrny architekt
> phu...@cnc.sk 
>
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Recepcia: +421 2  35 000 100
>
> Mobil:+421 905 997 203
> *www.cnc.sk* 
>
>


-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
If I run a host reinstall with custom firewall rules in
/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml, the task
fails because firewalld is not running.

The reinstall task will disable firewalld and enable iptables-services.
I'm a little bit confused ;(

---
- name: Enable additional port on firewalld
  firewalld:
port: "10050/tcp"
permanent: yes
immediate: yes
state: enabled


2018-01-09 13:27:30,103 p=13550 u=ovirt |  included:
/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml for
dipovirt01.cnc.sk
2018-01-09 13:27:30,134 p=13550 u=ovirt |  TASK [Enable additional port
on firewalld] *
2018-01-09 13:27:32,089 p=13550 u=ovirt |  fatal: [dipovirt01.cnc.sk]:
FAILED! => {"changed": false, "module_stderr": "Shared connection to
dipovirt01.cnc.sk closed.\r\n", "module_stdout": "Traceback (most recent
call last):\r\n  File
\"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 936, in
\r\nmain()\r\n  File
\"/tmp/ansible_2Ilnjq/ansible_module_firewalld.py\", line 788, in
main\r\nmodule.fail(msg='firewall is not currently running, unable
to perform immediate actions without a running firewall
daemon')\r\nAttributeError: 'AnsibleModule' object has no attribute
'fail'\r\n", "msg": "MODULE FAILURE", "rc": 0}
2018-01-09 13:27:32,095 p=13550 u=ovirt |  PLAY RECAP
*


After reinstallation the status of firewalld is
[PROD] r...@dipovirt01.cnc.sk: /var/log/vdsm # systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled;
vendor preset: enabled)
   Active: inactive (dead)
 Docs: man:firewalld(1)


So how could I switch to firewalld? The iptables-services package could not
be removed due to dependencies.

Peter

On 09/01/2018 09:35, Yedidyah Bar David wrote:
> 
> 1) firewalld
> after upgrade the hot server, the i needed to stop firewalld. It seems,
> that, the rules are not generated correctly. The engine was not able to
> connect to the host. How do I could fix it?
> 
> 
> Please check/share relevant files from /var/log/ovirt-engine/ansible/
> and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
> attach them there.


-- 
*Peter Hudec*
Infraštruktúrny architekt
phu...@cnc.sk 

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2  35 000 100

Mobil:+421 905 997 203
*www.cnc.sk* 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
The guest migration issue was due to firewalld; in the logs I found:

2018-01-09 10:12:06,700+0100 ERROR (migsrc/d6e3745b) [virt.vm]
(vmId='d6e3745b-1444-42a3-8cc0-29eaf59b8520') Failed to migrate
(migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
382, in run
self._setupVdsConnection()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
219, in _setupVdsConnection
client = self._createClient(port)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
206, in _createClient
client_socket = utils.create_connected_socket(host, int(port), sslctx)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 950, in
create_connected_socket
sock.connect((host, port))
  File "/usr/lib64/python2.7/site-packages/M2Crypto/SSL/Connection.py",
line 181, in connect
self.socket.connect(addr)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 113] No route to host

Disabling firewalld solved the issue, so back to the original
problem with the firewall ;)



On 09/01/2018 11:08, Yedidyah Bar David wrote:
> On Tue, Jan 9, 2018 at 12:04 PM, Peter Hudec  > wrote:
> 
> The quick fix is to follow
> 
> https://gerrit.ovirt.org/#/c/84802/2/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/IOvfBuilder.java
> 
> 
> 
> and remove the trailing '/' in
> 
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py
> 
> 
> Adding Denis. Thanks for the logs!
>  
> 
> 
> 
> On 09/01/2018 10:45, Peter Hudec wrote:
> > the old  hypervisoer /oVirt 4.1.8/ got probblem to release the HE due
> > this exception.  The HE is on the NFS store.
> >
> > MainThread::INFO::2018-01-09
> >
> 
> 10:40:28,497::upgrade::998::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade_35_36)
> > Host configuration is already up-to-date
> > MainThread::INFO::2018-01-09
> >
> 
> 10:40:28,498::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
> > Reloading vm.conf from the shared storage domain
> > MainThread::INFO::2018-01-09
> >
> 
> 10:40:28,498::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> > Trying to get a fresher copy of vm configuration from the OVF_STORE
> > MainThread::INFO::2018-01-09
> >
> 
> 10:40:28,498::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Extracting Engine VM OVF from the OVF_STORE
> > MainThread::INFO::2018-01-09
> >
> 
> 10:40:28,498::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > OVF_STORE volume path:
> >
> 
> /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbeb79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
> > MainThread::INFO::2018-01-09
> >
> 
> 10:40:28,517::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> > Found an OVF for HE VM, trying to convert
> > MainThread::ERROR::2018-01-09
> >
> 
> 10:40:28,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> > Traceback (most recent call last):
> >   File
> >
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> > line 191, in _run_agent
> >     return action(he)
> >   File
> >
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> > line 64, in action_proper
> >     return he.start_monitoring()
> >   File
> >
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> > line 421, in start_monitoring
> >     self._config.refresh_vm_conf()
> >   File
> >
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 496, in refresh_vm_conf
> >     content_from_ovf = self._get_vm_conf_content_from_ovf_store()
> >   File
> >
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 438, in _get_vm_conf_content_from_ovf_store
> >     conf = ovf2VmParams.confFromOvf(heovf)
> >   File
> >
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py",
> > line 283, in confFromOvf
> >     vmConf = toDict(ovf)
> >   File
> >
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py",
> > line 210, in toDict
> >     vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS
> + 'id']
> >   File 

Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-09 Thread Doron Fediuck
On 9 January 2018 at 13:54, Yaniv Kaul  wrote:

>
>
> On Mon, Jan 8, 2018 at 11:52 PM, Sam McLeod 
> wrote:
>
>> Hi Yaniv,
>>
>> Thanks for your detailed reply, it's very much appreciated.
>>
>> On 5 Jan 2018, at 8:34 pm, Yaniv Kaul  wrote:
>>
>> Indeed, greenfield deployment has its advantages.
>>
>>>
>>> The down side to that is juggling iSCSI LUNs, I'll have to migrate VMs
>>> on XenServer off one LUN at a time, remove that LUN from XenServer and add
>>> it to oVirt as new storage, and continue - but if it's what has to be done,
>>> we'll do it.
>>>
>>
>> The migration of VMs has three parts:
>> - VM configuration data (from name to number of CPUs, memory, etc.)
>>
>>
>> That's not too much of an issue for us, we have a pretty standard set of
>> configuration for performance / sizing.
>>
>> - Data - the disks themselves.
>>
>>
>> This is the big one, for most hosts at least the data is on a dedicated
>> logical volume, for example if it's postgresql, it would be LUKS on top of
>> a logical volume for /var/lib/pgsql etc
>>
>> - Adjusting VM internal data (paths, boot kernel, grub?, etc.)
>>
>>
>> Everything is currently PVHVM which uses standard grub2, you could
>> literally dd any one of our VMs to a physical disk and boot it in any
>> x86/64 machine.
>>
>> The first item could be automated. Unfortunately, it was a bit of a
>> challenge to find a common automation platform. For example, we have great
>> Ansible support, which I could not find for XenServer (but[1], which may be
>> a bit limited). Perhaps if there aren't too many VMs, this could be done
>> manually. If you use Foreman, btw, then it could probably be used for both
>> to provision?
>> The 2nd - data movement could be done in at least two-three ways:
>> 1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
>> 2. (My preferred option), copy using 'dd' from LUN/LV/raw and upload
>> using the oVirt upload API (example in Python[2]). I think that's an easy
>> to implement option and provides the flexibility to copy from pretty much
>> any source to oVirt.
>>
>>
>> A key thing here would be how quickly the oVirt API can ingest the data,
>> our storage LUNs are 100% SSD each LUN can easily provide at least 1000MB/s
>> and around 2M 4k write IOP/s and 2-4M 4k read IOP/s so we always find
>> hypervisors disk virtualisation mechanisms to be the bottleneck - but
>> adding an API to the mix, especially one that is single threaded (if that
>> does the data stream processing) could be a big performance problem.
>>
>
> Well, it's just for the data copy. We can do ~300 or so MBps in a single
> upload API call, but you can copy multiple disks via multiple hypervisors
> in parallel. In addition, if you are using 'dd' you might even be able to
> use sg_xcopy (if it's the same storage) - worth looking into it.
> In any case, we have concrete plans to improve the performance of the
> upload API.
>
>>
>> 3. There are ways to convert XVA to qcow2 - I saw some references on the
>> Internet, never tried any.
>>
>>
>> This is something I was thinking of potentially doing, I can actually
>> export each VM as an OVF/OVA package - since that's very standard I'm
>> assuming oVirt can likely import them and convert to qcow2 or raw/LVM?
>>
>
> Well, in theory, OVF/OVA is a standard. In practice, it's far from it - it
> defines how the XML should look and what it contains, but a VMware
> para-virtual NIC is not a para-virtual Xen NIC is not an oVirt
> para-virtual NIC, so the fact it describes a NIC means nothing when it
> comes to cross-platform compatibility.
>
>
While exporting, please ensure you include snapshots. You can learn more on
snapshot tree export support in Xen here:
https://xenserver.org/partners/18-sdk-development/114-import-export-vdi.html


>
>>
>> As for the last item, I'm really not sure what changes are needed, if at
>> all. I don't know the disk convention, for example (/dev/sd* for SCSI disk
>> -> virtio-scsi, but are there are other device types?)
>>
>>
>> Xen's virtual disks are all /dev/xvd[a-z]
>> Thankfully, we partition everything as LVM and partitions (other than
>> boot I think) are mounted as such.
>>
>
> And there's nothing that needs to address such path as /dev/xvd* ?
> Y.
>
>
>
>>
>>
>> I'd be happy to help with any adjustment needed for the Python script
>> below.
>>
>>
>> Very much appreciated, when I get to the point where I'm happy with the
>> basic architectural design and POC deployment of oVirt - that's when I'll
>> be testing importing VMs / data in various ways and have made note of these
>> scripts.
>>
>>
>> Y.
>>
>> [1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
>> [2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
>>
>>
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-09 Thread Yaniv Kaul
On Mon, Jan 8, 2018 at 11:52 PM, Sam McLeod 
wrote:

> Hi Yaniv,
>
> Thanks for your detailed reply, it's very much appreciated.
>
> On 5 Jan 2018, at 8:34 pm, Yaniv Kaul  wrote:
>
> Indeed, greenfield deployment has its advantages.
>
>>
>> The down side to that is juggling iSCSI LUNs, I'll have to migrate VMs on
>> XenServer off one LUN at a time, remove that LUN from XenServer and add it
>> to oVirt as new storage, and continue - but if it's what has to be done,
>> we'll do it.
>>
>
> The migration of VMs has three parts:
> - VM configuration data (from name to number of CPUs, memory, etc.)
>
>
> That's not too much of an issue for us, we have a pretty standard set of
> configuration for performance / sizing.
>
> - Data - the disks themselves.
>
>
> This is the big one, for most hosts at least the data is on a dedicated
> logical volume, for example if it's postgresql, it would be LUKS on top of
> a logical volume for /var/lib/pgsql etc
>
> - Adjusting VM internal data (paths, boot kernel, grub?, etc.)
>
>
> Everything is currently PVHVM which uses standard grub2, you could
> literally dd any one of our VMs to a physical disk and boot it in any
> x86/64 machine.
>
> The first item could be automated. Unfortunately, it was a bit of a
> challenge to find a common automation platform. For example, we have great
> Ansible support, which I could not find for XenServer (but[1], which may be
> a bit limited). Perhaps if there aren't too many VMs, this could be done
> manually. If you use Foreman, btw, then it could probably be used for both
> to provision?
> The 2nd - data movement could be done in at least two-three ways:
> 1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
> 2. (My preferred option), copy using 'dd' from LUN/LV/raw and upload using
> the oVirt upload API (example in Python[2]). I think that's an easy to
> implement option and provides the flexibility to copy from pretty much any
> source to oVirt.
>
>
> A key thing here would be how quickly the oVirt API can ingest the data,
> our storage LUNs are 100% SSD each LUN can easily provide at least 1000MB/s
> and around 2M 4k write IOP/s and 2-4M 4k read IOP/s so we always find
> hypervisors disk virtualisation mechanisms to be the bottleneck - but
> adding an API to the mix, especially one that is single threaded (if that
> does the data stream processing) could be a big performance problem.
>

Well, it's just for the data copy. We can do ~300 or so MBps in a single
upload API call, but you can copy multiple disks via multiple hypervisors
in parallel. In addition, if you are using 'dd' you might even be able to
use sg_xcopy (if it's the same storage) - worth looking into it.
In any case, we have concrete plans to improve the performance of the
upload API.
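
As a rough illustration of that copy-and-upload flow (a minimal sketch only:
the LV paths and work directory are placeholders, and it assumes the
upload_disk.py example referenced as [2] below has been adapted to take the
image path as a command line argument):

#!/usr/bin/env python
# Sketch: raw-copy each Xen logical volume with dd, then push the resulting
# image through the oVirt upload API via the SDK example script.
import subprocess

SOURCE_LVS = [
    "/dev/vg_xen/vm01-disk0",
    "/dev/vg_xen/vm02-disk0",
]

def export_and_upload(lv_path, workdir="/var/tmp"):
    raw_path = "%s/%s.raw" % (workdir, lv_path.rsplit("/", 1)[-1])
    # Block-level copy of the source LV into a sparse raw image.
    subprocess.check_call(
        ["dd", "if=" + lv_path, "of=" + raw_path, "bs=8M", "conv=sparse"])
    # Upload through the imageio API; several of these can run in parallel
    # (e.g. one per hypervisor), since a single transfer does ~300 MBps.
    subprocess.check_call(["python", "upload_disk.py", raw_path])

if __name__ == "__main__":
    for lv in SOURCE_LVS:
        export_and_upload(lv)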

>
> 3. There are ways to convert XVA to qcow2 - I saw some references on the
> Internet, never tried any.
>
>
> This is something I was thinking of potentially doing, I can actually
> export each VM as an OVF/OVA package - since that's very standard I'm
> assuming oVirt can likely import them and convert to qcow2 or raw/LVM?
>

Well, in theory, OVF/OVA is a standard. In practice, it's far from it - it
defines how the XML should look and what it contains, but a VMware
para-virtual NIC is not a para-virtual Xen NIC is not an oVirt
para-virtual NIC, so the fact it describes a NIC means nothing when it
comes to cross-platform compatibility.


>
> As for the last item, I'm really not sure what changes are needed, if at
> all. I don't know the disk convention, for example (/dev/sd* for SCSI disk
> -> virtio-scsi, but are there other device types?)
>
>
> Xen's virtual disks are all /dev/xvd[a-z]
> Thankfully, we partition everything as LVM and partitions (other than boot
> I think) are mounted as such.
>

And there's nothing that needs to address paths such as /dev/xvd*?
Y.
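
A quick way to audit a guest for that before the switch (a minimal sketch;
the file list is an assumption, and mounts addressed by VG/LV name as
described above are unaffected):

#!/usr/bin/env python
# Sketch: look for hard-coded /dev/xvd* references that would break once the
# disks show up under a different device name on oVirt (virtio/virtio-scsi).
import re

FILES_TO_CHECK = [
    "/etc/fstab",
    "/etc/default/grub",
    "/boot/grub2/grub.cfg",
]

def find_xvd_references(path):
    hits = []
    try:
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                if re.search(r"/dev/xvd[a-z]", line):
                    hits.append((lineno, line.rstrip()))
    except IOError:
        pass  # file not present in this guest
    return hits

if __name__ == "__main__":
    for path in FILES_TO_CHECK:
        for lineno, line in find_xvd_references(path):
            print("%s:%d: %s" % (path, lineno, line))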



>
>
> I'd be happy to help with any adjustment needed for the Python script
> below.
>
>
> Very much appreciated, when I get to the point where I'm happy with the
> basic architectural design and POC deployment of oVirt - that's when I'll
> be testing importing VMs / data in various ways and have made note of these
> scripts.
>
>
> Y.
>
> [1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
> [2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sd
> k/examples/upload_disk.py
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Yaniv Kaul
On Mon, Jan 8, 2018 at 3:27 PM, Abdurrahman A. Ibrahim <
a.rahman.at...@gmail.com> wrote:

> Yup, Gianluca is right.
> My bad for mentioning the RHV 4.2 Beta release notes instead of the CaptainKVM
> blog post "https://www.ovirt.org/documentation/vmm-guide/chap-Additional_Configuration/"
> """
>
>> *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
>> certified as a storage domain for virtual machines. This provides more
>> infrastructure and deployment choices for engineers and architects.
>
> """
>
> Do we have any oVirt documentation mentioning that?
>

I'm afraid not. For oVirt, it's just another iSCSI target like any other
iSCSI storage.
Y.


>
> Best regards,
> Ab
>
> On Mon, Jan 8, 2018 at 2:15 PM, Gianluca Cecchi  > wrote:
>
>> On Mon, Jan 8, 2018 at 2:06 PM, Fred Rolland  wrote:
>>
>>> Hi,
>>>
>>> Do you have a link about this information?
>>>
>>> Thanks,
>>> Freddy
>>>
>>>
>>>
>>
>> Probably he refers to this blog:
>> https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/
>>
>> with:
>> "
>> *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
>> certified as a storage domain for virtual machines. This provides more
>> infrastructure and deployment choices for engineers and architects.
>> "
>>
>> It seems a described feature that didn't get any referral in oVirt 4.2
>> release notes:
>> https://ovirt.org/release/4.2.0/
>>
>> But I think in general, given a version, it is not guaranteed that what
>> in RHEV maps with what in oVirt and viceversa.
>> I don't know if this one about Ceph via iSCSI is one of them.
>> Gianluca
>>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Abdurrahman A. Ibrahim
Yup, Gianluca is right.
My bad for mentioning the RHV 4.2 Beta release notes instead of the CaptainKVM blog
post "
https://www.ovirt.org/documentation/vmm-guide/chap-Additional_Configuration/
"
"""

> *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
> certified as a storage domain for virtual machines. This provides more
> infrastructure and deployment choices for engineers and architects.

"""

Do we have any oVirt documentation mentioning that?

Best regards,
Ab

On Mon, Jan 8, 2018 at 2:15 PM, Gianluca Cecchi 
wrote:

> On Mon, Jan 8, 2018 at 2:06 PM, Fred Rolland  wrote:
>
>> Hi,
>>
>> Do you have a link about this information?
>>
>> Thanks,
>> Freddy
>>
>>
>>
>
> Probably he refers to this blog:
> https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/
>
> with:
> "
> *Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
> certified as a storage domain for virtual machines. This provides more
> infrastructure and deployment choices for engineers and architects.
> "
>
> It seems a described feature that didn't get any referral in oVirt 4.2
> release notes:
> https://ovirt.org/release/4.2.0/
>
> But I think in general, given a version, it is not guaranteed that what in
> RHEV maps with what in oVirt and viceversa.
> I don't know if this one about Ceph via iSCSI is one of them.
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Yedidyah Bar David
On Tue, Jan 9, 2018 at 12:04 PM, Peter Hudec  wrote:

> quick fix is follow the
> https://gerrit.ovirt.org/#/c/84802/2/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/IOvfBuilder.java
>
> and remove trailing '/' in
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py
>

Adding Denis. Thanks for the logs!


>
>
> On 09/01/2018 10:45, Peter Hudec wrote:
> > the old hypervisor /oVirt 4.1.8/ has a problem releasing the HE due to
> > this exception.  The HE is on the NFS store.
> >
> > MainThread::INFO::2018-01-09
> > 10:40:28,497::upgrade::998::ovirt_hosted_engine_ha.lib.
> upgrade.StorageServer::(upgrade_35_36)
> > Host configuration is already up-to-date
> > MainThread::INFO::2018-01-09
> > 10:40:28,498::config::493::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(refresh_vm_conf)
> > Reloading vm.conf from the shared storage domain
> > MainThread::INFO::2018-01-09
> > 10:40:28,498::config::416::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> > Trying to get a fresher copy of vm configuration from the OVF_STORE
> > MainThread::INFO::2018-01-09
> > 10:40:28,498::ovf_store::132::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Extracting Engine VM OVF from the OVF_STORE
> > MainThread::INFO::2018-01-09
> > 10:40:28,498::ovf_store::134::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > OVF_STORE volume path:
> > /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-
> aca316a95d1f/3513775f-d6b0-4423-be19-bbeb79c72ad2/7ee3f450-5976-48f8-b667-
> 27b48f6cf778
> > MainThread::INFO::2018-01-09
> > 10:40:28,517::config::435::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> > Found an OVF for HE VM, trying to convert
> > MainThread::ERROR::2018-01-09
> > 10:40:28,523::agent::205::ovirt_hosted_engine_ha.agent.
> agent.Agent::(_run_agent)
> > Traceback (most recent call last):
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/agent/agent.py",
> > line 191, in _run_agent
> > return action(he)
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/agent/agent.py",
> > line 64, in action_proper
> > return he.start_monitoring()
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/agent/hosted_engine.py",
> > line 421, in start_monitoring
> > self._config.refresh_vm_conf()
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 496, in refresh_vm_conf
> > content_from_ovf = self._get_vm_conf_content_from_ovf_store()
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> > line 438, in _get_vm_conf_content_from_ovf_store
> > conf = ovf2VmParams.confFromOvf(heovf)
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/lib/ovf/ovf2VmParams.py",
> > line 283, in confFromOvf
> > vmConf = toDict(ovf)
> >   File
> > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_
> ha/lib/ovf/ovf2VmParams.py",
> > line 210, in toDict
> > vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS +
> 'id']
> >   File "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__
> > (src/lxml/lxml.etree.c:55336)
> > KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id'
> >
> >
> > On 09/01/2018 10:18, Peter Hudec wrote:
> >> The HA is flapping between 3400 and 0. ;(
> >> And I'm also not able to migrate any other VM to this host.
> >>
> >> Logs from the /var/log/ovirt-hosted-engine-ha/agent.log file
> >>
> >> MainThread::INFO::2018-01-08
> >> 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(refresh)
> >> Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True,
> 'extra':
> >> 'metadata_parse_version=1\nmetadata_feature_version=1\
> ntimestamp=8232312
> >> (Mon Jan  8 21:44:29
> >> 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=8232316 (Mon Jan  8
> >> 21:44:33
> >> 2018)\nconf_on_shared_storage=True\nmaintenance=False\
> nstate=EngineUp\nstopped=False\n',
> >> 'hostname': 'dipovirt03.cnc.sk', 'host-id': 1, 'engine-status':
> >> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 3400,
> >> 'stopped': False, 'maintenance': False, 'crc32': 'f28d4648',
> >> 'local_conf_timestamp': 8232316, 'host-ts': 8232312}
> >> MainThread::INFO::2018-01-08
> >> 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(refresh)
> >> Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True,
> 'extra':
> >> 'metadata_parse_version=1\nmetada...skipping...
> >> neVMOVF) OVF_STORE volume path:
> >> /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-
> aca316a95d1f/3513775f-d6b0-4423-be19-bbe
> >> b79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
> >> MainThread::INFO::2018-01-09
> >> 

Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
The quick fix is to follow
https://gerrit.ovirt.org/#/c/84802/2/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/IOvfBuilder.java

and remove the trailing '/' in
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py
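
In other words, the workaround amounts to something like this (illustration
only -- the exact constant name and surrounding code in ovf2VmParams.py may
differ):

# Before (what the traceback below shows): the namespace used for the
# attribute lookup carries a trailing slash, so the key becomes
# '{http://schemas.dmtf.org/ovf/envelope/1/}id' and lxml raises KeyError.
#
#   OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1/}'
#
# After dropping the trailing slash, the lookup key matches the OVF that the
# engine writes into the OVF_STORE:
OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1}'

# The failing line from the traceback then resolves correctly:
#   vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id']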


On 09/01/2018 10:45, Peter Hudec wrote:
> the old hypervisor /oVirt 4.1.8/ has a problem releasing the HE due to
> this exception.  The HE is on the NFS store.
> 
> MainThread::INFO::2018-01-09
> 10:40:28,497::upgrade::998::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade_35_36)
> Host configuration is already up-to-date
> MainThread::INFO::2018-01-09
> 10:40:28,498::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
> Reloading vm.conf from the shared storage domain
> MainThread::INFO::2018-01-09
> 10:40:28,498::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> Trying to get a fresher copy of vm configuration from the OVF_STORE
> MainThread::INFO::2018-01-09
> 10:40:28,498::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2018-01-09
> 10:40:28,498::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> OVF_STORE volume path:
> /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbeb79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
> MainThread::INFO::2018-01-09
> 10:40:28,517::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> Found an OVF for HE VM, trying to convert
> MainThread::ERROR::2018-01-09
> 10:40:28,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Traceback (most recent call last):
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 191, in _run_agent
> return action(he)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 64, in action_proper
> return he.start_monitoring()
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 421, in start_monitoring
> self._config.refresh_vm_conf()
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> line 496, in refresh_vm_conf
> content_from_ovf = self._get_vm_conf_content_from_ovf_store()
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
> line 438, in _get_vm_conf_content_from_ovf_store
> conf = ovf2VmParams.confFromOvf(heovf)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py",
> line 283, in confFromOvf
> vmConf = toDict(ovf)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py",
> line 210, in toDict
> vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id']
>   File "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__
> (src/lxml/lxml.etree.c:55336)
> KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id'
> 
> 
> On 09/01/2018 10:18, Peter Hudec wrote:
>> The HA is flapping between 3400 and 0. ;(
>> And I'm also not able to migrate any other VM to this host.
>>
>> Logs from the /var/log/ovirt-hosted-engine-ha/agent.log file
>>
>> MainThread::INFO::2018-01-08
>> 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
>> Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra':
>> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=8232312
>> (Mon Jan  8 21:44:29
>> 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=8232316 (Mon Jan  8
>> 21:44:33
>> 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n',
>> 'hostname': 'dipovirt03.cnc.sk', 'host-id': 1, 'engine-status':
>> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 3400,
>> 'stopped': False, 'maintenance': False, 'crc32': 'f28d4648',
>> 'local_conf_timestamp': 8232316, 'host-ts': 8232312}
>> MainThread::INFO::2018-01-08
>> 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
>> Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True, 'extra':
>> 'metadata_parse_version=1\nmetada...skipping...
>> neVMOVF) OVF_STORE volume path:
>> /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbe
>> b79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
>> MainThread::INFO::2018-01-09
>> 10:15:13,904::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
>> Global metadata: {'maintenance': False}
>> MainThread::INFO::2018-01-09
>> 10:15:13,905::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
>> Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra':
>> 

[ovirt-users] Re: Re: How power-saving schedule policy applied?

2018-01-09 Thread Alex Shen
Martin, 
Thanks for your response.
I've understood why oVirt keeps host resources reserved. Meanwhile, I totally
agree with you that the CLI is a better way to do a timeline-based power saving
policy.

Best regards


> Set ‘HostInReserve’ to 0. Bingo, cloudstack03 goes down as expected.
>
> But when I launch a new VM I get an error popup dialog which informs me
> that there isn’t enough memory to start a new VM. Is this the normal
> way the power saving mechanism works? Can’t the power saving policy
> scheduling module wake up a host to launch the VM automatically?
>

We decided against this. Booting a host can take a long time + there is some 
time before our management agent reports in and we can't wait that long when 
starting a VM. That is why we keep the empty host in reserve. That way the VM 
can be started immediately and the additional host's boot time does not matter 
too much.

> Also, is there any timeline-based power saving policy I can apply? Like
> from 8 o’clock AM to 6 o’clock PM, 10 hosts up, 6 o’clock PM to next 8 o’clock AM,
> just keep 3 hosts alive.

We do not have UI for this, but it is easily done using cron and REST API 
(either directly or using some SDK).
We were even working on it for a while, but it was put on backburner since 
sysadmins know cron well and the UI would have to be limited anyway.

Best regards

--
Martin Sivak
SLA / oVirt

On Tue, Jan 9, 2018 at 8:55 AM, Alex Shen  wrote:
>
> Hi Martin, Karli
>
>
>
> Thank a lot for your help.
>
>
>
> According to your instruction, I added two parameters into cluster 
> configuration, as below.
>
> 1)  enable power cycling mechanisms
>
> 2)  reserve one host standing by
>
> My test bed:
>
> 3 hosts which are real servers, cloudstack01/cloudstack03/cloudstack04.
>
> Heavy load on cloudstack01, 21 vms
>
> Light load on cloudstack03, 1 vms
>
> No thing on cloudstack04
>
>
>
> When I shut down the last vm on cloudstack03(because I can’t migrate it to 
> cloudstack01, I guess the load is already exceeding the threshold), the 
> cloudstack04 goes to ‘maintenance’ then goes to ‘down’ state. Yepp, POWER 
> SAVING.
>
>
>
>
>
> I tried further test:
>
> Set ‘HostInReserve’ to 0. Bingo, cloudstack03 goes down as expected.
>
> But when I launch a new VM I get an error popup dialog which inform me that 
> there isn’t enough memory resource to start a new VM. Is it normal way that 
> power saving mechanism works? Can’t power saving policy scheduling module 
> wake up a host to launch VM automatically?
>
>
>
> Also, Is there any timeline base power saving policy I can apply? Like from 
> 8’clock AM to 6’clock PM, 10 hosts up, 6’clock PM to next 8’clock AM, just 
> keep 3 hosts alive.
>
>
>
> Best regards
>
> Alex
>
>
>
>
>
> From: Martin Sivak [mailto:msi...@redhat.com]
> Sent: 8 January 2018 19:43
> To: Karli Sjöberg 
> Cc: Alex Shen ; users 
> Subject: Re: [ovirt-users] How power-saving schedule policy applied?
>
>
>
> Hi,
>
>
>
> and here is a fresh screenshot just for you :)
>
>
>
> You need to edit the cluster, select scheduling policy tab and add two 
> parameters to your Power saving policy. One enables the power cycling 
> mechanisms and the second one (HostsInReserve) controls how many empty hosts 
> are allowed to stay up. When the host is not empty anymore a new one will be 
> started.
>
>
>
> Best regards
>
>
>
> --
>
> Martin Sivak
>
> SLA / oVirt
>
>
>
> On Mon, Jan 8, 2018 at 12:29 PM, Karli Sjöberg  wrote:
>
>
>
>
>
> On 8 Jan 2018 12:07, Alex Shen  wrote:
>
> Hi,
>
>
>
> I’m wondering how to apply power-saving schedule in ovirt cluster(version is 
> 4.2.0.2-1.el7.centos)? Is there any instruction or manual with UI snapshots 
> guidance? That will be appreciated.
>
>
>
> I’ve configed all host with ipmilan protocol and test ok. I suppose 
> that power saving policy should converge VMs into hosts one by one. 
> Something like that. From the beginning, all hosts are in ‘power off’ 
> state, if a VM launch, power saving scheduling module should power on 
> one host, and assign VM into that host. With many VMs activated, 
> exceeding the threshold, power saving scheduling module should turn on 
> another host…
>
>
>
> Jepp, that's basically how it works:)
>
>
>
> /K
>
>
>
>
>
> If anyone had played power saving scheduling policy, please share me your 
> scenarios or some hints. Thanks a lot.
>
>
>
> Alex
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Luca 'remix_tj' Lorenzetto
On Mon, Jan 8, 2018 at 2:15 PM, Gianluca Cecchi
 wrote:
> On Mon, Jan 8, 2018 at 2:06 PM, Fred Rolland  wrote:
>>
>> Hi,
>>
>> Do you have a link about this information?
>>
>> Thanks,
>> Freddy
>>
>>
>
>
> Probably he refers to this blog:
> https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/
>
> with:
> "
> Support for Ceph via iSCSI – The Ceph iSCSI target has been tested and
> certified as a storage domain for virtual machines. This provides more
> infrastructure and deployment choices for engineers and architects.
> "
>
> It seems a described feature that didn't get any referral in oVirt 4.2
> release notes:
> https://ovirt.org/release/4.2.0/
>
> But I think in general, given a version, it is not guaranteed that what in
> RHEV maps with what in oVirt and viceversa.
> I don't know if this one about Ceph via iSCSI is one of them.


I suppose they have tested ceph iscsi gateway

http://docs.ceph.com/docs/master/rbd/iscsi-overview/

And they have certified it as compatible.

I don't really know why it shouldn't be, but the certification means that
opening an SR to RH when using this setup will get the appropriate
help.

Luca

-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
the old hypervisor /oVirt 4.1.8/ has a problem releasing the HE due to
this exception.  The HE is on the NFS store.

MainThread::INFO::2018-01-09
10:40:28,497::upgrade::998::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade_35_36)
Host configuration is already up-to-date
MainThread::INFO::2018-01-09
10:40:28,498::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2018-01-09
10:40:28,498::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2018-01-09
10:40:28,498::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2018-01-09
10:40:28,498::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbeb79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
MainThread::INFO::2018-01-09
10:40:28,517::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::ERROR::2018-01-09
10:40:28,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Traceback (most recent call last):
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 191, in _run_agent
return action(he)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 64, in action_proper
return he.start_monitoring()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 421, in start_monitoring
self._config.refresh_vm_conf()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 496, in refresh_vm_conf
content_from_ovf = self._get_vm_conf_content_from_ovf_store()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 438, in _get_vm_conf_content_from_ovf_store
conf = ovf2VmParams.confFromOvf(heovf)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py",
line 283, in confFromOvf
vmConf = toDict(ovf)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py",
line 210, in toDict
vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id']
  File "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__
(src/lxml/lxml.etree.c:55336)
KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id'


On 09/01/2018 10:18, Peter Hudec wrote:
> The HA is flapping between 3400 and 0. ;(
> And I'm also not able to migrate any other VM to this host.
> 
> Logs from the /var/log/ovirt-hosted-engine-ha/agent.log file
> 
> MainThread::INFO::2018-01-08
> 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=8232312
> (Mon Jan  8 21:44:29
> 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=8232316 (Mon Jan  8
> 21:44:33
> 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n',
> 'hostname': 'dipovirt03.cnc.sk', 'host-id': 1, 'engine-status':
> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 3400,
> 'stopped': False, 'maintenance': False, 'crc32': 'f28d4648',
> 'local_conf_timestamp': 8232316, 'host-ts': 8232312}
> MainThread::INFO::2018-01-08
> 21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True, 'extra':
> 'metadata_parse_version=1\nmetada...skipping...
> neVMOVF) OVF_STORE volume path:
> /var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbe
> b79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
> MainThread::INFO::2018-01-09
> 10:15:13,904::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2018-01-09
> 10:15:13,905::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=38598
> (Tue Jan  9 09:23:33
> 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=38598 (Tue Jan  9
> 09:23:34
> 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n',
> 'hostname': 'dipovirt03.cnc.sk', 'alive': False, 'host-id': 1,
> 'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'},
> 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32':
> '4c1d1890', 

Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
The HA is flapping between 3400 and 0. ;(
And I'm also not able to migrate any other VM to this host.

Logs from the /var/log/ovirt-hosted-engine-ha/agent.log file

MainThread::INFO::2018-01-08
21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=8232312
(Mon Jan  8 21:44:29
2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=8232316 (Mon Jan  8
21:44:33
2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n',
'hostname': 'dipovirt03.cnc.sk', 'host-id': 1, 'engine-status':
{'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 3400,
'stopped': False, 'maintenance': False, 'crc32': 'f28d4648',
'local_conf_timestamp': 8232316, 'host-ts': 8232312}
MainThread::INFO::2018-01-08
21:44:45,805::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True, 'extra':
'metadata_parse_version=1\nmetada...skipping...
neVMOVF) OVF_STORE volume path:
/var/run/vdsm/storage/3981424d-a55c-4f07-bff2-aca316a95d1f/3513775f-d6b0-4423-be19-bbe
b79c72ad2/7ee3f450-5976-48f8-b667-27b48f6cf778
MainThread::INFO::2018-01-09
10:15:13,904::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2018-01-09
10:15:13,905::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host dipovirt03.cnc.sk (id 1): {'conf_on_shared_storage': True, 'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=38598
(Tue Jan  9 09:23:33
2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=38598 (Tue Jan  9
09:23:34
2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n',
'hostname': 'dipovirt03.cnc.sk', 'alive': False, 'host-id': 1,
'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'},
'score': 3400, 'stopped': False, 'maintenance': False, 'crc32':
'4c1d1890', 'local_conf_timestamp': 38598, 'host-ts': 38598}
MainThread::INFO::2018-01-09
10:15:13,905::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host dipovirt02.cnc.sk (id 3): {'conf_on_shared_storage': True, 'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=40677
(Tue Jan  9 09:24:11
2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=40677 (Tue Jan  9
09:24:11
2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n',
'hostname': 'dipovirt02.cnc.sk', 'alive': False, 'host-id': 3,
'engine-status': {'reason': 'vm not running on this host', 'health':
'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped':
False, 'maintenance': False, 'crc32': '3bf104bc',
'local_conf_timestamp': 40677, 'host-ts': 40677}
MainThread::INFO::2018-01-09
10:15:13,905::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 2): {'engine-health': {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
True, 'mem-free': 39540.0, 'maintenance': False, 'cpu-load': 0.0432,
'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-01-09
10:15:13,905::states::775::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Another host already took over..
MainThread::INFO::2018-01-09
10:15:13,928::state_decorators::88::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Timeout cleared while transitioning  -> 
MainThread::INFO::2018-01-09
10:15:14,046::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineStarting-EngineForceStop) sent? sent
MainThread::INFO::2018-01-09
10:15:14,464::hosted_engine::494::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
Current state EngineForceStop (score: 3400)
MainThread::INFO::2018-01-09
10:15:14,467::hosted_engine::1002::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff`
MainThread::INFO::2018-01-09
10:15:15,198::hosted_engine::1007::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
stdout:
MainThread::INFO::2018-01-09
10:15:15,198::hosted_engine::1008::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
stderr: Command VM.destroy with args {'vmID':
'9a8ea503-f598-433e-9751-93aee3e7b347'} failed:
(code=1, message=Virtual machine does not exist: {'vmId':
u'9a8ea503-f598-433e-9751-93aee3e7b347'})

MainThread::ERROR::2018-01-09
10:15:15,199::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
Failed to stop engine vm with /usr/sbin/hosted-engine --vm-poweroff:
Command VM.destroy with args {'vmID':

Re: [ovirt-users] Re: How power-saving schedule policy applied?

2018-01-09 Thread Martin Sivak
> Set ‘HostInReserve’ to 0. Bingo, cloudstack03 goes down as expected.
>
> But when I launch a new VM I get an error popup dialog which informs me that
> there isn’t enough memory
> to start a new VM. Is this the normal way the power saving mechanism works? Can’t
> the power saving policy scheduling module
> wake up a host to launch the VM automatically?
>

We decided against this. Booting a host can take a long time + there
is some time before our management agent reports in
and we can't wait that long when starting a VM. That is why we keep
the empty host in reserve. That way the VM can be
started immediately and the additional host's boot time does not
matter too much.
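
For reference, those two policy parameters can also be set from the Python
SDK instead of the UI -- a minimal sketch, assuming ovirtsdk4, a cluster
named 'Default' that already uses the power_saving policy, and placeholder
credentials (HostsInReserve is named in this thread; the
EnableAutomaticHostPowerManagement property name is my assumption):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]

# Override the power saving policy properties on the cluster: enable the
# power cycling mechanism and keep one empty host in reserve.
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(
        custom_scheduling_policy_properties=[
            types.Property(name='EnableAutomaticHostPowerManagement',
                           value='true'),
            types.Property(name='HostsInReserve', value='1'),
        ],
    )
)

connection.close()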

> Also, is there any timeline-based power saving policy I can apply? Like from
> 8 o’clock AM to 6 o’clock PM, 10 hosts up,
> 6 o’clock PM to next 8 o’clock AM, just keep 3 hosts alive.

We do not have UI for this, but it is easily done using cron and REST
API (either directly or using some SDK).
We were even working on it for a while, but it was put on backburner
since sysadmins know cron well and the UI would have to be limited
anyway.
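
As an illustration of the cron + SDK approach (a minimal sketch, assuming
ovirtsdk4; the host names, hours and connection details are placeholders, and
the same could be driven against the REST API directly):

#!/usr/bin/env python
# Hypothetical cron job (run e.g. hourly): keep the extra hosts active during
# business hours and move them into maintenance overnight, leaving the power
# saving policy to work with a smaller set of hosts at night.
import datetime

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

EXTRA_HOSTS = ['cloudstack03', 'cloudstack04']   # only needed 08:00-18:00

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
business_hours = 8 <= datetime.datetime.now().hour < 18

for name in EXTRA_HOSTS:
    host = hosts_service.list(search='name=%s' % name)[0]
    host_service = hosts_service.host_service(host.id)
    if business_hours and host.status == types.HostStatus.MAINTENANCE:
        host_service.activate()      # bring the host back for the day
    elif not business_hours and host.status == types.HostStatus.UP:
        host_service.deactivate()    # into maintenance for the night

connection.close()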

Best regards

--
Martin Sivak
SLA / oVirt

On Tue, Jan 9, 2018 at 8:55 AM, Alex Shen  wrote:
>
> Hi Martin, Karli
>
>
>
> Thank a lot for your help.
>
>
>
> According to your instruction, I added two parameters into cluster 
> configuration, as below.
>
> 1)  enable power cycling mechanisms
>
> 2)  reserve one host standing by
>
> My test bed:
>
> 3 hosts which are real servers, cloudstack01/cloudstack03/cloudstack04.
>
> Heavy load on cloudstack01, 21 vms
>
> Light load on cloudstack03, 1 vms
>
> No thing on cloudstack04
>
>
>
> When I shut down the last vm on cloudstack03(because I can’t migrate it to 
> cloudstack01, I guess the load is already exceeding the threshold), the 
> cloudstack04 goes to ‘maintenance’ then goes to ‘down’ state. Yepp, POWER 
> SAVING.
>
>
>
>
>
> I tried further test:
>
> Set ‘HostInReserve’ to 0. Bingo, cloudstack03 goes down as expected.
>
> But when I launch a new VM I get an error popup dialog which inform me that 
> there isn’t enough memory resource to start a new VM. Is it normal way that 
> power saving mechanism works? Can’t power saving policy scheduling module 
> wake up a host to launch VM automatically?
>
>
>
> Also, Is there any timeline base power saving policy I can apply? Like from 
> 8’clock AM to 6’clock PM, 10 hosts up, 6’clock PM to next 8’clock AM, just 
> keep 3 hosts alive.
>
>
>
> Best regards
>
> Alex
>
>
>
>
>
> From: Martin Sivak [mailto:msi...@redhat.com]
> Sent: 8 January 2018 19:43
> To: Karli Sjöberg 
> Cc: Alex Shen ; users 
> Subject: Re: [ovirt-users] How power-saving schedule policy applied?
>
>
>
> Hi,
>
>
>
> and here is a fresh screenshot just for you :)
>
>
>
> You need to edit the cluster, select scheduling policy tab and add two 
> parameters to your Power saving policy. One enables the power cycling 
> mechanisms and the second one (HostsInReserve) controls how many empty hosts 
> are allowed to stay up. When the host is not empty anymore a new one will be 
> started.
>
>
>
> Best regards
>
>
>
> --
>
> Martin Sivak
>
> SLA / oVirt
>
>
>
> On Mon, Jan 8, 2018 at 12:29 PM, Karli Sjöberg  wrote:
>
>
>
>
>
> On 8 Jan 2018 12:07, Alex Shen  wrote:
>
> Hi,
>
>
>
> I’m wondering how to apply power-saving schedule in ovirt cluster(version is 
> 4.2.0.2-1.el7.centos)? Is there any instruction or manual with UI snapshots 
> guidance? That will be appreciated.
>
>
>
> I’ve configed all host with ipmilan protocol and test ok. I suppose that 
> power saving policy should converge VMs into hosts one by one. Something like 
> that. From the beginning, all hosts are in ‘power off’ state, if a VM launch, 
> power saving scheduling module should power on one host, and assign VM into 
> that host. With many VMs activated, exceeding the threshold, power saving 
> scheduling module should turn on another host…
>
>
>
> Jepp, that's basically how it works:)
>
>
>
> /K
>
>
>
>
>
> If anyone had played power saving scheduling policy, please share me your 
> scenarios or some hints. Thanks a lot.
>
>
>
> Alex
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Yedidyah Bar David
On Tue, Jan 9, 2018 at 10:13 AM, Peter Hudec  wrote:

> Hi,
>
> maybe it was already here, but I haven't found it quickly in archive ;(
>
> I upgrade the hosted engine and one hots, my notes and questions.
> The upgrade goes well, I only needed to manually fix the memy value in
> database for hosted engine
>
> [ ERROR ] schema.sh: FATAL: Cannot execute sql command:
> --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_
> 02_0140_add_max_memory_constraint.sql
> [ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
> refresh failed
>

You mean you changed something in the db and then ran engine-setup again
and it worked?

Can you please share both setup logs and the change you made in between?
Thanks.


>
>
> 1) firewalld
> after upgrading the host server, I needed to stop firewalld. It seems
> that the rules are not generated correctly. The engine was not able to
> connect to the host. How could I fix it?
>

Please check/share relevant files from /var/log/ovirt-engine/ansible/
and /var/log/ovirt-engine/host-deploy/ . Or perhaps file a bug and
attach them there.


>
> 2) old repo removal
> Could I remove the 4.1 repo? If yes, what is the best way to do that?
>

I think 'yum remove ovirt-release41', or remove the relevant files in
/etc/yum.repos.d.


>
> 3) Hosted Engine HA:
> Hosted Engine HA on upgraded hosts is 3400, the same as on the 4.1
> hosts. Is this good or bad?
>

It's good.

Best regards,


>
> regards
> Peter
>
> --
> *Peter Hudec*
> Infrastructure Architect
> phu...@cnc.sk 
>
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Reception: +421 2  35 000 100
>
> Mobil:+421 905 997 203
> *www.cnc.sk* 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Suggestions on changing hosts' network

2018-01-09 Thread Irit Goihman
On Mon, Jan 8, 2018 at 6:08 PM, Gianluca Cecchi 
wrote:

> On Mon, Jan 8, 2018 at 12:57 PM, Edward Haas  wrote:
>
>>

>>>
>>> hello,
>>> any update on how to give JSON representation (or other way) to use
>>> vdsm-client and change ip/gateway of ovirt-ng node?
>>> Thanks
>>>
>>
>> How to use vdsm-client in general: Just check its man page.
>> You will need to fill up the existing management network details in the
>> json format, replacing just the IP address.
>> The main concern here is, that if you missed something, it may be removed.
>> You should follow https://access.redhat.com/solutions/168983 for full
>> details.
>>
>> I would suggest using a different approach, although I have not tested it
>> myself:
>> Edit the persisted relevant configuration files:
>> /var/lib/vdsm/persistence/netconf
>> (Change the IP there, without touching the other stuff)
>> Then, reboot the host.. It should identify that the existing config is
>> not in sync with the persisted one and a reconfig will be issued
>> automatically.
>> The risk here is that the config will not successfully get applied, so
>> make sure you save the previous version.
>>
>> Thanks,
>> Edy.
>>
>>
> Actually I see some misalignment between what I see in man page, what I
> get with "-h" option and real command usage.
> Eg in oVirt 4.1.7 (vdsm-client-4.19.37-1.el7.centos.noarch):
>
> man vdsm-client
>
> "
>Invoking commands with complex parameters
>For invoking commands with complex or many arguments, you can read
> a JSON dictionary from a file:
>
>vdsm-client Lease info -f lease.json
>
>where lease.json file content is:
>
>{
>"lease": {
>"sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
>"lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
>}
>}
> "
>
> But actually if I create a json file and execute
>
> [root@ovirtng4101 ~]# vdsm-client Host setupNetworks -f network.json
>
> I get:
>
> usage: vdsm-client [-h] [-a HOST] [-p PORT] [--unsecure] [--insecure]
>[--timeout TIMEOUT] [-f FILE]
>namespace method [arg=value] ...
> vdsm-client: error: unrecognized arguments: -f
>
Please try running:
# vdsm-client -f network.json Host setupNetworks
There is an error in the man page, I will fix it.
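
For the record, the network.json for that call would look roughly like this
(a sketch only: the nic name and addresses are placeholders, and the full
attribute set should be copied from your existing management network, e.g.
from the persisted netconf file under /var/lib/vdsm/persistence mentioned in
this thread, changing just ipaddr/gateway):

{
    "networks": {
        "ovirtmgmt": {
            "nic": "em1",
            "bootproto": "none",
            "ipaddr": "192.0.2.10",
            "netmask": "255.255.255.0",
            "gateway": "192.0.2.1",
            "defaultRoute": true,
            "bridged": true
        }
    },
    "bondings": {},
    "options": {
        "connectivityCheck": false
    }
}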

I have to try it in 4.1.8 if solved
>
> The workaround to change the file in place (in my case the ipaddr and gateway
> fields): /var/lib/vdsm/persistence/netconf.1515365150707441369/
> nets/ovirtmgmt
> and then shutdown/power on seems to work ok instead. I can reinstall the
> host and the change remains persistent across reboots
>
> Thanks for the pointer for RHEV doc, because I didn't find it at first
> glance and I have to do the same for a RHEV eval too; but it seems the
> problem is present also there with RHV-H installed in December (nodectl
> info reports that current layer is rhvh-4.1-0.20171101.0+1; probably the
> problem has been already solved, I have to check). I can give more details
> on this off-list if you like as this doesn't directly relate with oVirt.
>
> Thanks in the mean time for the persistence file method that I think/hope
> will work for RHV-H too if I don't update the image
>
> Gianluca
>
> PS: ok also for the "not officially supported." advise. This is only a
> test where two hosts are to be moved from datacenter 1 to datacenter 2...
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

IRIT GOIHMAN

SOFTWARE ENGINEER

EMEA VIRTUALIZATION R

Red Hat EMEA 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 4.2 upgrade questions

2018-01-09 Thread Peter Hudec
Hi,

maybe it was already here, but I haven't found it quickly in archive ;(

I upgraded the hosted engine and one host; my notes and questions follow.
The upgrade went well, I only needed to manually fix the memory value in
the database for the hosted engine

[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed


1) firewalld
after upgrading the host server, I needed to stop firewalld. It seems
that the rules are not generated correctly. The engine was not able to
connect to the host. How could I fix it?

2) old repo removal
Could I remove the 4.1 repo? If yes, what is the best way to do that?

3) Hosted Engine HA:
Hosted Engine HA on upgraded hosts is 3400, the same as on the 4.1
hosts. Is this good or bad?

regards
Peter

-- 
*Peter Hudec*
Infrastructure Architect
phu...@cnc.sk 

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Reception: +421 2  35 000 100

Mobil:+421 905 997 203
*www.cnc.sk* 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users