Re: [ovirt-users] Two ovirt-engine manage one hypervisor

2016-06-15 Thread Yedidyah Bar David
On Thu, Jun 16, 2016 at 5:33 AM, Sandvik Agustin
 wrote:
> Hi users,
>
> Good day, is it possible to configure two ovirt-engine instances to manage one
> hypervisor? My purpose for this is: if the first ovirt-engine fails, I
> still have the second ovirt-engine to manage the hypervisor.
>
> Is this possible? Or is there any suggestion similar to my purpose?

The "normal" solution is hosted-engine, which has HA - the engine
runs in a VM, and HA daemons monitor it and the hosts, and if there
is a problem they can start it on another host.

There were discussions in the past, which you can find in the list archives,
about running two engines against a single database, and current bottom line
is that it's not supported, will not work, and iiuc will require some
significant development investment to support.

You might manage to have an active/passive solution - install an engine
on two machines, configure both to use the same remote database, but
make sure only one of them is active at any given time. Not sure if that's
considered "fully supported", but might come close.

You can find on the net docs/resources about creating a redundant
postgresql cluster.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host usb

2016-06-15 Thread Fernando Fuentes
I got the vm to start but the vm wont see the usb device.

Any ideas?

-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org

On Wed, Jun 15, 2016, at 09:21 PM, Fernando Fuentes wrote:
> After upgrading my oVirt to 3.6 and configuring my host USB passthrough
> again, my VM is unable to start with the following error message:
> 
> VM methub is down with error. Exit message: Node device not found: no
> node device with matching name 'usb'
> 
> The host is running Centos 7 x86_64
> Any ideas?
> 
> Regards,
> 
> 
> -- 
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Two ovirt-engine manage one hypervisor

2016-06-15 Thread Sandvik Agustin
Hi users,

Good day, is it possible to configure two ovirt-engine instances to manage one
hypervisor? My purpose for this is: if the first ovirt-engine fails, I
still have the second ovirt-engine to manage the hypervisor.

Is this possible? Or is there any suggestion similar to my purpose?

TIA
Sandvik
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] host usb

2016-06-15 Thread Fernando Fuentes
After upgrading my oVirt to 3.6 and configuring my host USB passthrough
again, my VM is unable to start with the following error message:

VM methub is down with error. Exit message: Node device not found: no
node device with matching name 'usb'

The host is running Centos 7 x86_64
Any ideas?
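
A hedged suggestion, not from the original thread: the error looks like the
passthrough configuration references a node device literally named 'usb', so
listing the real USB node-device names on the host and cross-checking the
vendor:product IDs may help pinpoint the right one:

virsh nodedev-list --cap usb_device
lsusb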

Regards,


-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems exporting VM's

2016-06-15 Thread Luciano Natale
Finally did it. Not failing for the time being!

On Fri, May 13, 2016 at 9:31 PM, Luciano Natale  wrote:

> Ok! I'll do!
>
> On Wed, May 11, 2016 at 9:30 AM, Nir Soffer  wrote:
>
>> On Sun, May 8, 2016 at 3:14 AM, Luciano Natale 
>> wrote:
>> > Hi everyone. I've been having trouble when exporting VMs. I get an error
>> > when moving the image. I've created a whole new storage domain exclusively
>> > for this issue, and the same thing happens. It's not always the same VM
>> > that fails, but once it fails on a certain storage domain, I cannot export
>> > it anymore. Please tell me which logs are relevant so I can post them,
>> > along with any other relevant information I can provide, and maybe someone
>> > can help me get through this problem.
>> >
>> > Ovirt version is 3.5.4.2-1.el6.
>>
>> Please upgrade to the latest 3.5 version, and report whether this issue
>> still exists there.
>>
>> Nir
>>
>
>
>
> --
> Luciano Natale
>



-- 
Luciano Natale
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Woes

2016-06-15 Thread Nic Seltzer
Hey folks,

Just wanted to follow up here. Restarting the hosted-engine through the steps
provided by Simone seems to have resolved the issue. Now I get to do a retro
and figure out why that paused a VM; something I may come back to this list
for. Thank you for your help and quick response.

Thank you again,



nic "have you tried turning it off and on again" seltzer


On Wed, Jun 15, 2016 at 3:40 PM, Patrick Russell <
patrick.russ...@volusion.com> wrote:

> We had some funkiness with hosted-engine, and the steps Simone suggested are
> essentially what we went through to get it all back to normal. Just remember
> to be patient; it seems the agent can take some time to poll all the hosts.
>
> -Patrick
>
> On Wed, Jun 15, 2016 at 4:12 PM, Nic Seltzer 
> wrote:
>
>> Has anyone else experienced a similar issue? Is the advised action to
>> reboot the hosted-engine? I defer to the expertise on this mailing list
>> so that I might help others.
>>
>> Thanks,
>>
>> On Tue, Jun 14, 2016 at 3:11 PM, Simone Tiraboschi 
>> wrote:
>>
>>> On Tue, Jun 14, 2016 at 8:45 PM, Nic Seltzer 
>>> wrote:
>>> > Hello!
>>> >
>>> > I'm looking for someone who can help me out with a hosted-engine setup
>>> that
>>> > I have. I experienced a power event a couple of weeks ago. Initially,
>>> things
>>> > seemed to have come back fine, but the other day, I noticed that one
>>> of the
>>> > nodes for the cluster was down. I tried to drop it into maintenance
>>> mode
>>> > (which never completed) and reboot it then "Confirm the Host has been
>>> > rebooted". Neither of these steps allowed the host to re-enter the
>>> cluster.
>>> > Has anyone encountered this? At this point, I'd like to reboot the
>>> > hosted-engine, but I can't find documentation instructing me on "how".
>>> I'm
>>>
>>> hosted-engine --set-maintenance --mode=global
>>> hosted-engine --vm-shutdown
>>> hosted-engine --vm-status # poll till the VM is down
>>> hosted-engine --vm-start
>>> hosted-engine --set-maintenance --mode=none
>>>
>>> > also open to other suggestions or references to documentation that
>>> will help
>>> > triage my issue.
>>> >
>>> > Thanks!
>>> >
>>> >
>>> >
>>> > nic
>>> >
>>> > ___
>>> > Users mailing list
>>> > Users@ovirt.org
>>> > http://lists.ovirt.org/mailman/listinfo/users
>>> >
>>>
>>
>>
>>
>> --
>> Nic Seltzer
>> Esports Ops Tech | Riot Games
>> Cell: +1.402.431.2642 | NA Summoner: Riot Dankeboop
>> http://www.riotgames.com
>> http://www.leagueoflegends.com
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Patrick Russell | Manager, Private Cloud Infrastructure
> 512.605.2378 |  patrick.russ...@volusion.com
> www.volusion.com | www.material.com
>
> Volusion, Inc. | More successful businesses are built here.
>
> This email and any attached files are intended solely for the use of the
> individual(s) or entity(ies) to whom they are addressed, and may contain
> confidential information. If you have received this email in error, please
> notify me immediately by responding to this email and do not forward or
> otherwise distribute or copy this email.
>
>


-- 
Nic Seltzer
Esports Ops Tech | Riot Games
Cell: +1.402.431.2642 | NA Summoner: Riot Dankeboop
http://www.riotgames.com
http://www.leagueoflegends.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.5 to 3.6 upgrade

2016-06-15 Thread Fernando Fuentes

I think I fixed my own issue.

TIA! :D

-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org

On Wed, Jun 15, 2016, at 04:34 PM, Fernando Fuentes wrote:
> Hello All!
> 
> I am upgrading from 3.5 to 3.6 and I am running into some issues with
> yum...
> 
> I am running Centos 6.8 x86_64
> 
> Any ideas?
> 
> Resolving Dependencies
> --> Running transaction check
> ---> Package ovirt-engine-setup.noarch 0:3.5.6.2-1.el6 will be updated
> ---> Package ovirt-engine-setup.noarch 0:3.6.6.2-1.el6 will be an update
> ---> Package ovirt-engine-setup-base.noarch 0:3.5.6.2-1.el6 will be
> updated
> ---> Package ovirt-engine-setup-base.noarch 0:3.6.6.2-1.el6 will be an
> update
> --> Processing Dependency: ovirt-engine-lib >= 3.6.6.2-1.el6 for
> package: ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
> --> Processing Dependency: otopi >= 1.4.1 for package:
> ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:3.5.6.2-1.el6 will be updated
> --> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
> 3.5.6.2-1.el6 for package:
> ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:3.6.6.2-1.el6 will be an update
> --> Processing Dependency:
> ovirt-engine-setup-plugin-vmconsole-proxy-helper = 3.6.6.2-1.el6 for
> package: ovirt-engine-setup-plugin-ovirt-engine-3.6.6.2-1.el6.noarch
> --> Processing Dependency: ovirt-engine-extension-aaa-jdbc for package:
> ovirt-engine-setup-plugin-ovirt-engine-3.6.6.2-1.el6.noarch
> ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
> 0:3.5.6.2-1.el6 will be updated
> ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
> 0:3.6.6.2-1.el6 will be an update
> --> Processing Dependency: ovirt-setup-lib for package:
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.6.2-1.el6.noarch
> ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
> 0:3.5.6.2-1.el6 will be updated
> ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
> 0:3.6.6.2-1.el6 will be an update
> --> Running transaction check
> ---> Package otopi.noarch 0:1.3.2-1.el6 will be updated
> --> Processing Dependency: otopi = 1.3.2-1.el6 for package:
> otopi-java-1.3.2-1.el6.noarch
> ---> Package otopi.noarch 0:1.4.1-1.el6 will be an update
> ---> Package ovirt-engine-extension-aaa-jdbc.noarch 0:1.0.7-1.el6 will
> be installed
> ---> Package ovirt-engine-lib.noarch 0:3.5.6.2-1.el6 will be updated
> ---> Package ovirt-engine-lib.noarch 0:3.6.6.2-1.el6 will be an update
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:3.5.6.2-1.el6 will be updated
> --> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
> 3.5.6.2-1.el6 for package:
> ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
> ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
> 0:3.6.6.2-1.el6 will be installed
> ---> Package ovirt-setup-lib.noarch 0:1.0.1-1.el6 will be installed
> --> Running transaction check
> ---> Package otopi-java.noarch 0:1.3.2-1.el6 will be updated
> ---> Package otopi-java.noarch 0:1.4.1-1.el6 will be an update
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:3.5.6.2-1.el6 will be updated
> --> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
> 3.5.6.2-1.el6 for package:
> ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
> --> Processing Conflict: ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
> conflicts ovirt-engine-reports-setup < 3.6.0
> --> Restarting Dependency Resolution with new changes.
> --> Running transaction check
> ---> Package ovirt-engine-reports-setup.noarch 0:3.5.5-2.el6 will be
> updated
> ---> Package ovirt-engine-reports-setup.noarch 0:3.6.5.1-1.el6 will be
> an update
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:3.5.6.2-1.el6 will be updated
> --> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
> 3.5.6.2-1.el6 for package:
> ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
> --> Processing Conflict: ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
> conflicts ovirt-engine-dwh-setup < 3.6.0
> --> Restarting Dependency Resolution with new changes.
> --> Running transaction check
> ---> Package ovirt-engine-dwh-setup.noarch 0:3.5.5-1.el6 will be updated
> ---> Package ovirt-engine-dwh-setup.noarch 0:3.6.6-1.el6 will be an
> update
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:3.5.6.2-1.el6 will be updated
> --> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
> 3.5.6.2-1.el6 for package:
> ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
> --> Finished Dependency Resolution
> Error: Package: ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
> (@ovirt-3.5)
>Requires: ovirt-engine-setup-plugin-ovirt-engine =
>3.5.6.2-1.el6
>Removing:
>ovirt-engine-setup-plugin-ovirt-engine-3.5.6.2-1.el6.noarch
>(@ovirt-3.5)
>  

Re: [ovirt-users] WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!

2016-06-15 Thread Nir Soffer
On Thu, Jun 16, 2016 at 12:31 AM, Claude Durocher
 wrote:
> I get this warning in vdsm logs with ovirt 3.6 installed on CentOS 7.2 :
>
> WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
> it!
>
> The service is effectively running but disabled in systemd. In
> /etc/lvm/lvm.conf I have :
>
> use_lvmetad = 1
>
> Can someone please advise on how to solve this?

You see LVM warnings in vdsm debug logs; this is not an actual vdsm warning,
and you can safely ignore it.

If you would like to stop these warnings, you can stop and mask lvm2-lvmetad:

systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

And disable it in lvm.conf:

use_lvmetad = 0

This has no effect on vdsm, which already ignores lvmetad using command-line
configuration (look for --config in vdsm.log).
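
A quick, hedged way to confirm that on a host (the log path is the usual vdsm
location):

grep -m1 -o 'use_lvmetad=0' /var/log/vdsm/vdsm.log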

We are considering adding this to vdsm installation.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 3.5 to 3.6 upgrade

2016-06-15 Thread Fernando Fuentes
Hello All!

I am upgrading from 3.5 to 3.6 and I am running into some issues with
yum...

I am running Centos 6.8 x86_64

Any ideas?

Resolving Dependencies
--> Running transaction check
---> Package ovirt-engine-setup.noarch 0:3.5.6.2-1.el6 will be updated
---> Package ovirt-engine-setup.noarch 0:3.6.6.2-1.el6 will be an update
---> Package ovirt-engine-setup-base.noarch 0:3.5.6.2-1.el6 will be
updated
---> Package ovirt-engine-setup-base.noarch 0:3.6.6.2-1.el6 will be an
update
--> Processing Dependency: ovirt-engine-lib >= 3.6.6.2-1.el6 for
package: ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
--> Processing Dependency: otopi >= 1.4.1 for package:
ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
0:3.5.6.2-1.el6 will be updated
--> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
3.5.6.2-1.el6 for package:
ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
0:3.6.6.2-1.el6 will be an update
--> Processing Dependency:
ovirt-engine-setup-plugin-vmconsole-proxy-helper = 3.6.6.2-1.el6 for
package: ovirt-engine-setup-plugin-ovirt-engine-3.6.6.2-1.el6.noarch
--> Processing Dependency: ovirt-engine-extension-aaa-jdbc for package:
ovirt-engine-setup-plugin-ovirt-engine-3.6.6.2-1.el6.noarch
---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
0:3.5.6.2-1.el6 will be updated
---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
0:3.6.6.2-1.el6 will be an update
--> Processing Dependency: ovirt-setup-lib for package:
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.6.2-1.el6.noarch
---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
0:3.5.6.2-1.el6 will be updated
---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
0:3.6.6.2-1.el6 will be an update
--> Running transaction check
---> Package otopi.noarch 0:1.3.2-1.el6 will be updated
--> Processing Dependency: otopi = 1.3.2-1.el6 for package:
otopi-java-1.3.2-1.el6.noarch
---> Package otopi.noarch 0:1.4.1-1.el6 will be an update
---> Package ovirt-engine-extension-aaa-jdbc.noarch 0:1.0.7-1.el6 will
be installed
---> Package ovirt-engine-lib.noarch 0:3.5.6.2-1.el6 will be updated
---> Package ovirt-engine-lib.noarch 0:3.6.6.2-1.el6 will be an update
---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
0:3.5.6.2-1.el6 will be updated
--> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
3.5.6.2-1.el6 for package:
ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
0:3.6.6.2-1.el6 will be installed
---> Package ovirt-setup-lib.noarch 0:1.0.1-1.el6 will be installed
--> Running transaction check
---> Package otopi-java.noarch 0:1.3.2-1.el6 will be updated
---> Package otopi-java.noarch 0:1.4.1-1.el6 will be an update
---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
0:3.5.6.2-1.el6 will be updated
--> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
3.5.6.2-1.el6 for package:
ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
--> Processing Conflict: ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
conflicts ovirt-engine-reports-setup < 3.6.0
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package ovirt-engine-reports-setup.noarch 0:3.5.5-2.el6 will be
updated
---> Package ovirt-engine-reports-setup.noarch 0:3.6.5.1-1.el6 will be
an update
---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
0:3.5.6.2-1.el6 will be updated
--> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
3.5.6.2-1.el6 for package:
ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
--> Processing Conflict: ovirt-engine-setup-base-3.6.6.2-1.el6.noarch
conflicts ovirt-engine-dwh-setup < 3.6.0
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package ovirt-engine-dwh-setup.noarch 0:3.5.5-1.el6 will be updated
---> Package ovirt-engine-dwh-setup.noarch 0:3.6.6-1.el6 will be an
update
---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
0:3.5.6.2-1.el6 will be updated
--> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine =
3.5.6.2-1.el6 for package:
ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
--> Finished Dependency Resolution
Error: Package: ovirt-engine-setup-plugin-allinone-3.5.6.2-1.el6.noarch
(@ovirt-3.5)
   Requires: ovirt-engine-setup-plugin-ovirt-engine =
   3.5.6.2-1.el6
   Removing:
   ovirt-engine-setup-plugin-ovirt-engine-3.5.6.2-1.el6.noarch
   (@ovirt-3.5)
   ovirt-engine-setup-plugin-ovirt-engine = 3.5.6.2-1.el6
   Updated By:
   ovirt-engine-setup-plugin-ovirt-engine-3.6.6.2-1.el6.noarch
   (ovirt-3.6)
   ovirt-engine-setup-plugin-ovirt-engine = 3.6.6.2-1.el6
   Available:
   ovirt-engine-setup-plugin-ovirt-engine-3.6.0.3-1.el6.noarch
   (ovirt-3.6)
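
For readers hitting the same conflict: the resolver error above points at
ovirt-engine-setup-plugin-allinone, a 3.5-only package that has no 3.6
counterpart. A hedged fix, not confirmed in this thread, is to remove it before
retrying the upgrade:

yum remove ovirt-engine-setup-plugin-allinone
yum update "ovirt-engine-setup*"
engine-setup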

Re: [ovirt-users] Hosted Engine Woes

2016-06-15 Thread Nic Seltzer
Has anyone else experienced a similar issue? Is the advised action to
reboot the hosted-engine? I defer to the expertise on this mailing list
so that I might help others.

Thanks,

On Tue, Jun 14, 2016 at 3:11 PM, Simone Tiraboschi 
wrote:

> On Tue, Jun 14, 2016 at 8:45 PM, Nic Seltzer 
> wrote:
> > Hello!
> >
> > I'm looking for someone who can help me out with a hosted-engine setup
> that
> > I have. I experienced a power event a couple of weeks ago. Initially,
> things
> > seemed to have come back fine, but the other day, I noticed that one of
> the
> > nodes for the cluster was down. I tried to drop it into maintenance mode
> > (which never completed) and reboot it then "Confirm the Host has been
> > rebooted". Neither of these steps allowed the host to re-enter the
> cluster.
> > Has anyone encountered this? At this point, I'd like to reboot the
> > hosted-engine, but I can't find documentation instructing me on "how".
> I'm
>
> hosted-engine --set-maintenance --mode=global
> hosted-engine --vm-shutdown
> hosted-engine --vm-status # poll till the VM is down
> hosted-engine --vm-start
> hosted-engine --set-maintenance --mode=none
>
> > also open to other suggestions or references to documentation that will
> help
> > triage my issue.
> >
> > Thanks!
> >
> >
> >
> > nic
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>



-- 
Nic Seltzer
Esports Ops Tech | Riot Games
Cell: +1.402.431.2642 | NA Summoner: Riot Dankeboop
http://www.riotgames.com
http://www.leagueoflegends.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] best way to migrate VMs from VMware to oVirt

2016-06-15 Thread Brett I. Holcomb



On 06/15/2016 01:41 PM, Cam Mac wrote:

Hi,

I haven't had any luck using the oVirt GUI or virt-v2v (see earlier 
email), and I need to find a way to migrate quite a few Windows hosts 
(Windows 7, 2012, 2008, 2k3 etc) into my test oVirt cluster as a PoC 
so I can make a compelling case for getting rid of VMware. Using OVF 
files looks like a lot more manual work as compared to the GUI or 
virt-v2v, with their nice conversion features.


Any suggestions?

Thanks,

Cam


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Are you getting errors?

Here's what I have in my notes, but you may already have tried it, and the
GUI doesn't seem to have a way to do it. I migrated from both ESXi 6 and
VMware Workstation 11. First I exported as OVA.


* Make sure the export directory is mounted.
* virt-v2v doesn't like being run as root, so run it as the vdsm user, and
you need to specify the shell:


  su - vdsm -s /bin/bash

For Export storage located on another computer remote to the one running 
virt-v2v use the host:/export format


  virt-v2v -i ova -of raw -o rhev -os ovhost1:/srv/exports/ovirt/export1 \
      --network VLAN100 -oa sparse -on <vm name> <path to ova file>


For Export storage located on a server that is doing exporting use this 
format with local directory path for -os


  virt-v2v -i ova -of raw -o rhev -os /srv/exports/ovirt/export1 -oa sparse \
      --network VLAN100 -on <vm name> /path/<to ova file>


I moved my ova files to my host (I run hosted engine deployment). Then I 
su'd and ran the second command since my host exports the oVirt Export 
directory.


Once the command completes I run the oVirt admin import and select the 
VMs from the list and move them over.  I have to change the location of 
the storage since it defaults to the Engine storage and not my iSCSI 
storage.


I've done it mainly on Linux VMs and a Win 7 VM, and it worked. I haven't
tried any servers yet.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] best way to migrate VMs from VMware to oVirt

2016-06-15 Thread Nir Soffer
On Wed, Jun 15, 2016 at 8:41 PM, Cam Mac  wrote:
> Hi,
>
> I haven't had any luck using the oVirt GUI or virt-v2v (see earlier email),
> and I need to find a way to migrate quite a few Windows hosts (Windows 7,
> 2012, 2008, 2k3 etc) into my test oVirt cluster as a PoC so I can make a
> compelling case for getting rid of VMware. Using OVF files looks like a lot
> more manual work as compared to the GUI or virt-v2v, with their nice
> conversion features.
>
> Any suggestions?

I think the best way is to use our import external VM feature.

Can you point us to the mail about this?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.6.7 Fourth Release Candidate is now available for testing

2016-06-15 Thread Rafael Martins
The oVirt Project is pleased to announce the availability of the Fourth Release
Candidate of oVirt 3.6.7 for testing, as of June 15th, 2016

This release is available now for:
* Fedora 22
* Red Hat Enterprise Linux 6.7 or later
* CentOS Linux (or similar) 6.7 or later

* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 22

This release candidate includes the following updated packages:

* ovirt-engine

See the release notes [1] for installation / upgrade instructions and a list
of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is available [2].
* A new oVirt Appliance will be available soon.
* Mirrors[3] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 3.6.7 release highlights: 
http://www.ovirt.org/release/3.6.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: 
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/3.6.7/
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/ovirt-live/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] best way to migrate VMs from VMware to oVirt

2016-06-15 Thread Cam Mac
Hi,

I haven't had any luck using the oVirt GUI or virt-v2v (see earlier email),
and I need to find a way to migrate quite a few Windows hosts (Windows 7,
2012, 2008, 2k3 etc) into my test oVirt cluster as a PoC so I can make a
compelling case for getting rid of VMware. Using OVF files looks like a lot
more manual work as compared to the GUI or virt-v2v, with their nice
conversion features.

Any suggestions?

Thanks,

Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Gianluca Cecchi
On 15/Jun/2016 17:29, "Giorgio Bersano"  wrote:
>
> 2016-06-15 12:21 GMT+02:00 Donny Davis :
> > Do you have a requirement for 3d acceleration on the VDI guests?
>
> On the first deployment no, it is basically for frontend and backend
> office activity.
> But we would probably also be asked to try it for multimedia activities,
> like casual guests watching internet videos at the local public
> library.
> So two different kinds of devices, I suppose.
>
> Anything to suggest?
>
> Thank you,
> Giorgio.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

Why not a NUC, or a similar device from other vendors that are emerging?
I don't know about the LAN security part, but you can find a NUC5CPYH with
a Celeron or a NUC5PPYH with a Pentium at 140 and 170 euros respectively. Both
have a 6 W TDP.
You have to add at least memory, but with a further 20 euros you get 4 GB.
Disk is optional; you can use SDXC or boot from LAN.
Just a suggestion for further investigation.
I'm currently using a top-of-the-line NUC6 as a hypervisor without any problem
with 3-4 VMs, so I think a bottom-of-the-line NUC can serve well as a thin
client without the cost.
Some of them also have a Kensington anti-theft slot, which can be a good idea
given their size.
HTH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt training request

2016-06-15 Thread Gonzalo Faramiñan
Thank you Dan.

2016-06-14 11:46 GMT-03:00 Dan Yasny :
> If you want something official, I think RH318, the RHCVA course should cover
> everything you need.

I knew about this Virtualization course. Maybe I'll go for it.

> There are also two (slightly out of date) books available:
> https://www.amazon.ca/Getting-Started-Alexey-Lesovsky-2013-11-22/dp/B01FGLUZMA/ref=sr_1_1?ie=UTF8=1465915525=8-1=ovirt
> https://www.amazon.ca/Getting-Started-Red-Enterprise-Virtualization/dp/1782167404/ref=sr_1_2?s=books=UTF8=1465915578=1-2=red+hat+enterprise+virtualization
>
> For anything else, there's plenty of documentation available
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.0.0 GA postponed to Monday, June 20

2016-06-15 Thread Rafael Martins
- Original Message -
> From: "Gianluca Cecchi" 
> To: "Rafael Martins" 
> Cc: "users" , de...@ovirt.org, annou...@ovirt.org
> Sent: Wednesday, June 15, 2016 7:02:50 PM
> Subject: Re: [ovirt-users] [ANN] oVirt 4.0.0 GA postponed to Monday, June 20
> 
> On 15/Jun/2016 15:38, "Rafael Martins"  wrote:
> >
> > Hi,
> > In order to give time to fix some last minute blockers, the GA is pushed
> to
> > Monday, June 20.
> >
> > This should provide sufficient time for us to properly fix and test oVirt
> 4.0,
> > in order to make sure we have no critical issues in the 4.0.0 version,
> that
> > was originally scheduled for today.
> >
> > Thanks in advance,
> >
> > Rafael Martins
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> 
> Hello,
> Is there a milestone for syncing translation files to have them in the final GA?
> At this time I'm working on master for Italian in Zanata, and due to the latest
> changes it has now dropped to about 84%.
> I'm going to resync it before the end of the weekend.
> Gianluca
> 

I think it is probably too late, but I'm not really into the translation
process. If someone knows it better, please add to this.

Thanks,
Rafael
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.0.0 GA postponed to Monday, June 20

2016-06-15 Thread Rafael Martins
- Original Message -
> From: "Nathanaël Blanchet" 
> To: users@ovirt.org, "devel" 
> Sent: Wednesday, June 15, 2016 5:04:10 PM
> Subject: Re: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.0.0 GA postponed to 
> Monday, June 20
> 
> Hi all,
> Can someone explain what makes oVirt 4.0 a major release, compared to
> the previous ones?
> More than a year to release the minor 3.6 release, and only 8 months for
> 4.0...

Hi,
the draft release notes can give you an overview of the changes. There are a
lot of them :)

http://www.ovirt.org/release/4.0.0/

Thanks,
Rafael

> On 15/06/2016 15:38, Rafael Martins wrote:
> > Hi,
> > In order to give time to fix some last minute blockers, the GA is pushed to
> > Monday, June 20.
> >
> > This should provide sufficient time for us to properly fix and test oVirt
> > 4.0,
> > in order to make sure we have no critical issues in the 4.0.0 version, that
> > was originally scheduled for today.
> >
> > Thanks in advance,
> >
> > Rafael Martins
> > ___
> > Announce mailing list
> > annou...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/announce
> 
> --
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.0.0 GA postponed to Monday, June 20

2016-06-15 Thread Gianluca Cecchi
On 15/Jun/2016 15:38, "Rafael Martins"  wrote:
>
> Hi,
> In order to give time to fix some last minute blockers, the GA is pushed
to
> Monday, June 20.
>
> This should provide sufficient time for us to properly fix and test oVirt
4.0,
> in order to make sure we have no critical issues in the 4.0.0 version,
that
> was originally scheduled for today.
>
> Thanks in advance,
>
> Rafael Martins
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

Hello,
Is there a milestone for syncing translation files to have them in the final GA?
At this time I'm working on master for Italian in Zanata, and due to the latest
changes it has now dropped to about 84%.
I'm going to resync it before the end of the weekend.
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

2016-06-15 Thread Simone Tiraboschi
On Wed, Jun 15, 2016 at 5:49 PM, Simone Tiraboschi  wrote:
> On Wed, Jun 15, 2016 at 5:16 PM, Madhuranthakam, Ravi Kumar
>  wrote:
>> Hi,
>> Thanks for your reply. Looking forward to getting this feature integrated in
>> upcoming releases.
>>
>> One more question
>>
>> 1) I have a few raw disks (/dev/sdx or some LVMs) and I need to attach them
>> directly to an oVirt VM without any file-system bottlenecks.
>> But this VM is part of a data center (shared type).
>> We are not planning to use any shared storage or a shared data center.
>>
>> The use case here is that one of the VMs (an HP StoreVirtual Appliance) is
>> our software-defined storage, which will consume raw disks attached to the
>> local host.
>>
>> How do I do that? I see that attach disk shows only NFS/iSCSI/direct LUN
>> options.
>>
>> Thanks for your time.
>
> You can expose it over iSCSI via targetcli and directly attach it to
> the single VM as a direct LUN.

Just one more thing: you'll probably also have to filter those devices
out of LVM.
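
A rough sketch of the targetcli side and of the LVM filter (IQNs, backstore
names, device paths and the filter line are illustrative assumptions, not
values from this thread):

# expose /dev/sdx as a block-backed iSCSI LUN
targetcli /backstores/block create name=vsa_disk dev=/dev/sdx
targetcli /iscsi create iqn.2016-06.org.example:vsa-target
targetcli /iscsi/iqn.2016-06.org.example:vsa-target/tpg1/luns create /backstores/block/vsa_disk
targetcli /iscsi/iqn.2016-06.org.example:vsa-target/tpg1/acls create iqn.1994-05.com.example:initiator-host
targetcli saveconfig

# and keep the host's LVM away from the raw disk, e.g. in /etc/lvm/lvm.conf:
#   filter = [ "r|^/dev/sdx$|" ]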

>> ~Ravi
>>
>> -Original Message-
>> From: Simone Tiraboschi [mailto:stira...@redhat.com]
>> Sent: Monday, June 13, 2016 5:42 PM
>> To: Madhuranthakam, Ravi Kumar ; Yedidyah 
>> Bar David 
>> Cc: users@ovirt.org
>> Subject: Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI
>>
>> On Thu, Jun 9, 2016 at 9:00 AM, Madhuranthakam, Ravi Kumar 
>>  wrote:
>>> Is there any solution to it on oVirt 3.6?
>>
>> You can try to follow the discussion here:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1240466
>>
>> Basically you have to:
>> - take a backup of the engine with engine-backup
>> - deploy from scratch on a host pointing to the new storage domain
>> - - if you are going to use the engine appliance, here you have to avoid 
>> automatically executing engine setup since:
>> - - - you have to manually copy the backup to the new VM
>> - - - you have to run engine-backup to restore it,
>> - - - only after that you can execute engine-setup
>> - at the end you can continue with hosted-engine setup ***
>> - then you have to run hosted-engine --deploy again on each host to point to 
>> the new storage domain
>>
>> *** the flow is currently broken here: hosted-engine-setup will fail since:
>> - the old hosted-engine storage domain is already in the engine (since you 
>> restored the DB) but you are deploying on a different one
>> - the engine VM is already in the DB but you are deploying with a new VM
>> - all the hosted-engine host are already in the engine DB
>>
>> So you'll probably need to manually edit the engine-DB just after DB 
>> recovery in order to:
>> - remove the hosted-engine storage domain from the engine DB
>> - remove the hosted-engine VM from the engine DB
>> - remove all the hosted-engine host from the engine DB since you are going 
>> to redeploy them
>>
>> We are looking into adding this capability to engine-backup.
>>
>>> I am also planning to move hosted engine from NFS storage to ISCSI .
>>>
>>>
>>>
>>> ~Ravi
>>>
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

2016-06-15 Thread Simone Tiraboschi
On Wed, Jun 15, 2016 at 5:16 PM, Madhuranthakam, Ravi Kumar
 wrote:
> Hi,
> Thanks for your reply. Looking forward to getting this feature integrated in
> upcoming releases.
>
> One more question
>
> 1) I have a few raw disks (/dev/sdx or some LVMs) and I need to attach them
> directly to an oVirt VM without any file-system bottlenecks.
> But this VM is part of a data center (shared type).
> We are not planning to use any shared storage or a shared data center.
>
> The use case here is that one of the VMs (an HP StoreVirtual Appliance) is
> our software-defined storage, which will consume raw disks attached to the
> local host.
>
> How do I do that? I see that attach disk shows only NFS/iSCSI/direct LUN
> options.
>
> Thanks for your time.

You can expose it over iSCSI via targetcli and directly attach it to
the single VM as a direct LUN.

> ~Ravi
>
> -Original Message-
> From: Simone Tiraboschi [mailto:stira...@redhat.com]
> Sent: Monday, June 13, 2016 5:42 PM
> To: Madhuranthakam, Ravi Kumar ; Yedidyah 
> Bar David 
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI
>
> On Thu, Jun 9, 2016 at 9:00 AM, Madhuranthakam, Ravi Kumar 
>  wrote:
>> Is there any solution to it on oVirt 3.6?
>
> You can try to follow the discussion here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1240466
>
> Basically you have to:
> - take a backup of the engine with engine-backup
> - deploy from scratch on a host pointing to the new storage domain
> - - if you are going to use the engine appliance, here you have to avoid 
> automatically executing engine setup since:
> - - - you have to manually copy the backup to the new VM
> - - - you have to run engine-backup to restore it,
> - - - only after that you can execute engine-setup
> - at the end you can continue with hosted-engine setup ***
> - then you have to run hosted-engine --deploy again on each host to point to 
> the new storage domain
>
> *** the flow is currently broken here: hosted-engine-setup will fail since:
> - the old hosted-engine storage domain is already in the engine (since you 
> restored the DB) but you are deploying on a different one
> - the engine VM is already in the DB but you are deploying with a new VM
> - all the hosted-engine host are already in the engine DB
>
> So you'll probably need to manually edit the engine-DB just after DB recovery 
> in order to:
> - remove the hosted-engine storage domain from the engine DB
> - remove the hosted-engine VM from the engine DB
> - remove all the hosted-engine host from the engine DB since you are going to 
> redeploy them
>
> We are looking into adding this capability to engine-backup.
>
>> I am also planning to move hosted engine from NFS storage to ISCSI .
>>
>>
>>
>> ~Ravi
>>
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

2016-06-15 Thread Madhuranthakam, Ravi Kumar
Hi,
Thanks for your reply. Looking forward to getting this feature integrated in
upcoming releases.

One more question 

1) I have a few raw disks (/dev/sdx or some LVMs) and I need to attach them
directly to an oVirt VM without any file-system bottlenecks.
But this VM is part of a data center (shared type).
We are not planning to use any shared storage or a shared data center.

The use case here is that one of the VMs (an HP StoreVirtual Appliance) is our
software-defined storage, which will consume raw disks attached to the local
host.

How do I do that? I see that attach disk shows only NFS/iSCSI/direct LUN
options.

Thanks for your time.


~Ravi

-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com] 
Sent: Monday, June 13, 2016 5:42 PM
To: Madhuranthakam, Ravi Kumar ; Yedidyah 
Bar David 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

On Thu, Jun 9, 2016 at 9:00 AM, Madhuranthakam, Ravi Kumar 
 wrote:
> Is there any solution to it on oVirt 3.6?

You can try to follow the discussion here:
https://bugzilla.redhat.com/show_bug.cgi?id=1240466

Basically you have to:
- take a backup of the engine with engine-backup
- deploy from scratch on a host pointing to the new storage domain
- - if you are going to use the engine appliance, here you have to avoid 
automatically executing engine setup since:
- - - you have to manually copy the backup to the new VM
- - - you have to run engine-backup to restore it,
- - - only after that you can execute engine-setup
- at the end you can continue with hosted-engine setup ***
- then you have to run hosted-engine --deploy again on each host to point to 
the new storage domain

*** the flow is currently broken here: hosted-engine-setup will fail since:
- the old hosted-engine storage domain is already in the engine (since you 
restored the DB) but you are deploying on a different one
- the engine VM is already in the DB but you are deploying with a new VM
- all the hosted-engine host are already in the engine DB

So you'll probably need to manually edit the engine-DB just after DB recovery 
in order to:
- remove the hosted-engine storage domain from the engine DB
- remove the hosted-engine VM from the engine DB
- remove all the hosted-engine host from the engine DB since you are going to 
redeploy them

We are looking into adding this capability to engine-backup.
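
A minimal sketch of the backup/restore step described above (file names are
illustrative, and the exact engine-backup options should be checked against
your version):

# on the old engine
engine-backup --mode=backup --file=engine.backup --log=backup.log
# copy engine.backup to the new engine VM, then on that VM:
engine-backup --mode=restore --file=engine.backup --log=restore.log --provision-db
engine-setup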

> I am also planning to move hosted engine from NFS storage to ISCSI .
>
>
>
> ~Ravi
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

2016-06-15 Thread Madhuranthakam, Ravi Kumar
Hi,
Thanks for your reply. Looking forward to getting this feature integrated in
upcoming releases.

One more question 

1) I have a few raw disks (/dev/sdx or some LVMs) and I need to attach them
directly to an oVirt VM without any file-system bottlenecks.
But this VM is part of a data center (shared type).
We are not planning to use any shared storage or a shared data center.

The use case here is that one of the VMs (a StoreVirtual Appliance) is our
software-defined storage, which will consume raw disks attached to the local
host.

How do I do that? I see that attach disk shows only NFS/iSCSI/direct LUN
options.

Thanks for your time.


~Ravi

-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com] 
Sent: Monday, June 13, 2016 5:42 PM
To: Madhuranthakam, Ravi Kumar ; Yedidyah 
Bar David 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

On Thu, Jun 9, 2016 at 9:00 AM, Madhuranthakam, Ravi Kumar 
 wrote:
> Is there any solution to it on oVirt 3.6?

You can try to follow the discussion here:
https://bugzilla.redhat.com/show_bug.cgi?id=1240466

Basically you have to:
- take a backup of the engine with engine-backup
- deploy from scratch on a host pointing to the new storage domain
- - if you are going to use the engine appliance, here you have to avoid 
automatically executing engine setup since:
- - - you have to manually copy the backup to the new VM
- - - you have to run engine-backup to restore it,
- - - only after that you can execute engine-setup
- at the end you can continue with hosted-engine setup ***
- then you have to run hosted-engine --deploy again on each host to point to 
the new storage domain

*** the flow is currently broken here: hosted-engine-setup will fail since:
- the old hosted-engine storage domain is already in the engine (since you 
restored the DB) but you are deploying on a different one
- the engine VM is already in the DB but you are deploying with a new VM
- all the hosted-engine host are already in the engine DB

So you'll probably need to manually edit the engine-DB just after DB recovery 
in order to:
- remove the hosted-engine storage domain from the engine DB
- remove the hosted-engine VM from the engine DB
- remove all the hosted-engine host from the engine DB since you are going to 
redeploy them

We are looking into adding this capability to engine-backup.

> I am also planning to move hosted engine from NFS storage to ISCSI .
>
>
>
> ~Ravi
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Recall: Migrate hosted-engine from NFS to ISCSI

2016-06-15 Thread Madhuranthakam, Ravi Kumar
Madhuranthakam, Ravi Kumar would like to recall the message, "[ovirt-users] 
Migrate hosted-engine from NFS to ISCSI".
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Giorgio Bersano
2016-06-15 12:21 GMT+02:00 Donny Davis :
> Do you have a requirement for 3d acceleration on the VDI guests?

On the first deployment no, it is basically for frontend and backend
office activity.
But we would probably also be asked to try it for multimedia activities,
like casual guests watching internet videos at the local public
library.
So two different kinds of devices, I suppose.

Anything to suggest?

Thank you,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Giorgio Bersano
2016-06-15 12:56 GMT+02:00 Ondra Machacek :
> On 06/15/2016 12:26 PM, Michal Skrivanek wrote:
>>
>>
>>> On 15 Jun 2016, at 12:18, Giorgio Bersano 
>>> wrote:
>>>
>>> Hi everyone,
>>> I've been asked to deploy a VDI solution based on our oVirt
>>> infrastructure.
>>> What we have in production is a 3.6 manager (standalone, not HE) with
>>> a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
>>> fully redundant networking.
>>>
>>> What is not clear to me is the client side, especially because we have
>>> been asked to implement a thin client solution but I've been almost
>>> unable to find suitable devices.
>>
>>
>> if that client can still be a PC, albeit diskless, it’s still easier and
>> probably cheaper than any other special hw.
>>
>>>
>>> Is there anyone in this list willing to share his/her experience on
>>> this topic? Probably my search skill is low but I've only seen
>>> references to IGEL. Other brands?
>>
>>
>> not that I know of, and even that one had (or still has?) some issues
>> with SPICE performance, as it's not kept up to date
>>
>>> There is another strong requirement: our network infrastructure makes
>>> use of 802.1x to authenticate client devices and it would be highly
>>> advisable to respect that constraint.
>>
>>
>> for the VDI connections? I don’t think SPICE supports that, but please
>> bring it up on spice list to make sure.
>> if it would be for oVirt user portal then, I guess with pluggable aaa we
>> can support anything. Ondro?
>>
>
> It depends on the use case; if an Apache module which uses RADIUS is OK, then
> yes, it should work.
> The problem is that we currently support only LDAP as an authorization backend.

Hi, here I'm speaking of wired network authentication (and nothing more).
What we have in place now: network ports are confined to a VLAN that is only
useful for authenticating the PC (Windows). When the PC boots, it
interacts with the RADIUS server (FreeRADIUS) using PEAP-MSCHAPv2. If
the PC is registered in Active Directory and authenticates against
it (at machine level, not user level), the switch port is given a VLAN
based on attributes stored in AD and is enabled to communicate
without restrictions.

With thin clients we would like to have something similar, but it would
be fine even to directly instruct FreeRADIUS to enable the port and
set the VLAN on the basis of the thin client's MAC address.
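
A minimal sketch of that MAC-based approach (the users file path depends on the
FreeRADIUS version, and the MAC address, password convention and VLAN id are
illustrative assumptions):

# append a MAC-as-identity entry that returns a VLAN via tunnel attributes
cat >> /etc/raddb/users <<'EOF'
"aa-bb-cc-dd-ee-ff" Cleartext-Password := "aa-bb-cc-dd-ee-ff"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-Id = "100"
EOF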

I've just discovered that Wyse ThinOS thin clients (Dell) support
802.1x; I wonder if it is compatible with oVirt...
Time to search on the spice lists, as Michal suggested.

Thanks,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.0.0 GA postponed to Monday, June 20

2016-06-15 Thread Nathanaël Blanchet

Hi all,
Can someone explain what makes oVirt 4.0 a major release, compared to
the previous ones?
More than a year to release the minor 3.6 release, and only 8 months for
4.0...


On 15/06/2016 15:38, Rafael Martins wrote:

Hi,
In order to give time to fix some last minute blockers, the GA is pushed to
Monday, June 20.

This should provide sufficient time for us to properly fix and test oVirt 4.0,
in order to make sure we have no critical issues in the 4.0.0 version, that
was originally scheduled for today.

Thanks in advance,

Rafael Martins
___
Announce mailing list
annou...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/announce


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-15 Thread Charles Kozler
>> Thread-482175::INFO::2016-06-14
>> 12:59:30,429::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
>> Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
>> 36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.metadata'

> This is also not normal, it means the storage disappeared.


> This seems to indicate there is some kind of issue with your network..
> are you sure that your firewall allows connections over lo interface
> and to the storage server?


Yes very much so. The network is 10.0.16.0/24 - this is the ovirtmgmt +
storage network and is 100% isolated and dedicated with no firewall between
oVirt nodes and storage. There is no firewall on the local server either.
Basically I have:

ovirtmgmt - bond0 in mode 2 (default when not using LACP in oVirt it
appears) - connects to dedicated storage switches. nodes1-3 are 10.0.16.5,
6, and 7 respectively
VM NIC - bond1 - trunk port for VLAN tagging in active/passive bond. This
is the VM network path. This connects to two different switches

storage is located at 10.0.16.100 (cluster IP / storage-vip is hostname),
10.0.16.101 (storage node 1), 10.0.16.102 (storage node 2), 10.0.16.103
(nas01, dedicated storage for ovirt engine outside of clustered storage for
other VMs)

Cluster IP of 10.0.16.100 is where VM storage goes
NAS IP of 10.0.16.103 is where oVirt engine storage is

All paths to the oVirt engine and other nodes are 100% clear with no
failures or firewalls between oVirt nodes and storage

[root@njsevcnp01 ~]# for i in $( seq 100 103 ); do ping -c 1 10.0.16.$i |
grep -i "\(rece\|time=\)"; echo "--"; done
64 bytes from 10.0.16.100: icmp_seq=1 ttl=64 time=0.071 ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
--
64 bytes from 10.0.16.101: icmp_seq=1 ttl=64 time=0.065 ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
--
64 bytes from 10.0.16.102: icmp_seq=1 ttl=64 time=0.099 ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
--
64 bytes from 10.0.16.103: icmp_seq=1 ttl=64 time=0.219 ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
--

This is dedicated storage for oVirt environment

[root@njsevcnp01 ~]# df -h | grep -i rhev
nas01:/volume1/vm_os/ovirt36_engine  2.2T  295G  1.9T  14%
/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt36__engine
storage-vip:/fast_ha-gv0 792G  125G  668G  16%
/rhev/data-center/mnt/glusterSD/storage-vip:_fast__ha-gv0
storage-vip:/slow_nonha-gv0  1.8T  212G  1.6T  12%
/rhev/data-center/mnt/glusterSD/storage-vip:_slow__nonha-gv0


>> > 09:24:59,874::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> > Error: 'Failed to start monitor , options {'hostname':
>> > 'njsevcnp01'}: Connection timed out' - trying to restart agent
>> > MainThread::WARNING::2016-06-15

> and connection timeout between agent and broker.

Everything I am providing right now is from njsevcnp01; why would it
time out between agent and broker on the same box? Because the broker is not
accepting connections? But the broker logs show it is accepting and handling
connections.

Acknowledged on the SMTP errors. At this time I am just trying to get
clustering working again, because as of now I cannot live migrate the hosted
engine since it appears to be a split-brain type of issue.

What do I need to do to resolve this stale-data issue and get the cluster
working again / agents and brokers talking to themselves again?

Should I shut down the platform and delete the lock files then bring it
back up again?

Thanks for your help Martin!

On Wed, Jun 15, 2016 at 10:38 AM, Martin Sivak  wrote:

> >
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
> > line 24, in send_email
> > server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
> >   File "/usr/lib64/python2.7/smtplib.py", line 255, in __init__
> > (code, msg) = self.connect(host, port)
> >   File "/usr/lib64/python2.7/smtplib.py", line 315, in connect
> > self.sock = self._get_socket(host, port, self.timeout)
> >   File "/usr/lib64/python2.7/smtplib.py", line 290, in _get_socket
> > return socket.create_connection((host, port), timeout)
> >   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
> > raise err
> > error: [Errno 110] Connection timed out
>
> So you have connection timeout here (it is trying to reach the
> localhost smtp server)
>
> >> > 09:24:59,874::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> >> > Error: 'Failed to start monitor , options {'hostname':
> >> > 'njsevcnp01'}: Connection timed out' - trying to restart agent
> >> > MainThread::WARNING::2016-06-15
>
> and connection timeout between agent and broker.
>
> > Thread-482175::INFO::2016-06-14
> > 12:59:30,429::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
> > Cleaning up stale LV link
> 

Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-15 Thread Martin Sivak
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
> line 24, in send_email
> server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
>   File "/usr/lib64/python2.7/smtplib.py", line 255, in __init__
> (code, msg) = self.connect(host, port)
>   File "/usr/lib64/python2.7/smtplib.py", line 315, in connect
> self.sock = self._get_socket(host, port, self.timeout)
>   File "/usr/lib64/python2.7/smtplib.py", line 290, in _get_socket
> return socket.create_connection((host, port), timeout)
>   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
> raise err
> error: [Errno 110] Connection timed out

So you have a connection timeout here (it is trying to reach the
localhost SMTP server).
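
A quick, hedged check of the notifier's SMTP target (the broker config path is
the usual one; port 25 is only an assumed default):

grep -i smtp /etc/ovirt-hosted-engine-ha/broker.conf
timeout 5 bash -c '</dev/tcp/localhost/25' && echo "SMTP port reachable"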

>> > 09:24:59,874::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> > Error: 'Failed to start monitor , options {'hostname':
>> > 'njsevcnp01'}: Connection timed out' - trying to restart agent
>> > MainThread::WARNING::2016-06-15

and connection timeout between agent and broker.

> Thread-482175::INFO::2016-06-14
> 12:59:30,429::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
> Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
> 36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.metadata'

This is also not normal, it means the storage disappeared.


This seems to indicate there is some kind of issue with your network..
are you sure that your firewall allows connections over lo interface
and to the storage server?


Martin

On Wed, Jun 15, 2016 at 4:11 PM, Charles Kozler  wrote:
> Martin -
>
> Anything I should be looking for specifically? The only errors I see are
> SMTP errors when it tries to send a notification, but nothing indicating what
> the notification is or might be. I see this repeated about every minute:
>
> Thread-482115::INFO::2016-06-14
> 12:58:54,431::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
> Connection established
> Thread-482109::INFO::2016-06-14
> 12:58:54,491::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
> Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
> 36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.lockspace'
> Thread-482109::INFO::2016-06-14
> 12:58:54,515::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
> Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
> 36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.metadata'
>
> nas01 is the primary storage for the engine (as previously noted)
>
> Thread-482175::INFO::2016-06-14
> 12:59:30,398::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
> Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
> 36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.lockspace'
> Thread-482175::INFO::2016-06-14
> 12:59:30,429::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
> Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
> 36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.metadata'
>
>
> But otherwise the broker looks like its accepting and handling connections
>
> Thread-481980::INFO::2016-06-14
> 12:59:33,105::mem_free::53::mem_free.MemFree::(action) memFree: 26491
> Thread-482193::INFO::2016-06-14
> 12:59:33,977::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
> Connection established
> Thread-482193::INFO::2016-06-14
> 12:59:34,033::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
> Connection closed
> Thread-482194::INFO::2016-06-14
> 12:59:34,034::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
> Connection established
> Thread-482194::INFO::2016-06-14
> 12:59:34,035::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
> Connection closed
> Thread-482195::INFO::2016-06-14
> 12:59:34,035::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
> Connection established
> Thread-482195::INFO::2016-06-14
> 12:59:34,036::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
> Connection closed
> Thread-482196::INFO::2016-06-14
> 12:59:34,037::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
> Connection established
> Thread-482196::INFO::2016-06-14
> 12:59:34,037::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
> Connection closed
> Thread-482197::INFO::2016-06-14
> 12:59:38,544::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
> Connection established
> Thread-482197::INFO::2016-06-14
> 

Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-15 Thread Charles Kozler
Martin -

Anything I should be looking for specifically? The only errors I see are
smtp errors when it tries to send a notification but nothing indicating
what the notification is / might be. I see this repeated about every minute

Thread-482115::INFO::2016-06-14
12:58:54,431::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482109::INFO::2016-06-14
12:58:54,491::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.lockspace'
Thread-482109::INFO::2016-06-14
12:58:54,515::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.metadata'

nas01 is the primary storage for the engine (as previously noted)

Thread-482175::INFO::2016-06-14
12:59:30,398::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.lockspace'
Thread-482175::INFO::2016-06-14
12:59:30,429::storage_backends::120::ovirt_hosted_engine_ha.lib.storage_backends::(_check_symlinks)
Cleaning up stale LV link '/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt
36__engine/c6323975-2966-409d-b9e0-48370a513a98/ha_agent/hosted-engine.metadata'


But otherwise the broker looks like its accepting and handling connections

Thread-481980::INFO::2016-06-14
12:59:33,105::mem_free::53::mem_free.MemFree::(action) memFree: 26491
Thread-482193::INFO::2016-06-14
12:59:33,977::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482193::INFO::2016-06-14
12:59:34,033::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482194::INFO::2016-06-14
12:59:34,034::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482194::INFO::2016-06-14
12:59:34,035::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482195::INFO::2016-06-14
12:59:34,035::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482195::INFO::2016-06-14
12:59:34,036::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482196::INFO::2016-06-14
12:59:34,037::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482196::INFO::2016-06-14
12:59:34,037::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482197::INFO::2016-06-14
12:59:38,544::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482197::INFO::2016-06-14
12:59:38,598::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482198::INFO::2016-06-14
12:59:38,598::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482198::INFO::2016-06-14
12:59:38,599::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482199::INFO::2016-06-14
12:59:38,600::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482199::INFO::2016-06-14
12:59:38,600::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482200::INFO::2016-06-14
12:59:38,601::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
Connection established
Thread-482200::INFO::2016-06-14
12:59:38,602::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Connection closed
Thread-482179::INFO::2016-06-14
12:59:40,339::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
System load total=0.0078, engine=0., non-engine=0.0078


Thread-482178::INFO::2016-06-14
12:59:49,745::mem_free::53::mem_free.MemFree::(action) memFree: 26500
Thread-481977::ERROR::2016-06-14
12:59:50,263::notifications::35::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email)
[Errno 110] Connection timed out
Traceback (most recent call last):
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 24, in send_email
server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
  File "/usr/lib64/python2.7/smtplib.py", line 255, in __init__
(code, msg) = self.connect(host, port)
  File "/usr/lib64/python2.7/smtplib.py", line 315, in 

Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-15 Thread Martin Sivak
Charles, check the broker log too please. It is possible that the
broker process is running, but is not accepting connections for
example.

Martin

On Wed, Jun 15, 2016 at 3:32 PM, Charles Kozler  wrote:
> Actually, broker is the only thing acting "right" between broker and agent.
> Broker is up when I bring the system up but agent is restarting all the
> time. Have a look
>
> The 11th is when I restarted this node after doing 'reinstall' in the web UI
>
> ● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
> Communications Broker
>Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled;
> vendor preset: disabled)
>Active: active (running) since Sat 2016-06-11 13:09:51 EDT; 3 days ago
>  Main PID: 1285 (ovirt-ha-broker)
>CGroup: /system.slice/ovirt-ha-broker.service
>└─1285 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon
>
> Jun 15 09:23:56 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:mgmt_bridge.MgmtBridge:Found bridge ovirtmgmt with ports
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> closed
> Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
> INFO:mem_free.MemFree:memFree: 26408
>
> Uptime of proc ..
>
> # ps -Aef | grep -i broker
> vdsm   1285  1  2 Jun11 ?02:27:50 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon
>
> But the agent... is restarting all the time
>
> # ps -Aef | grep -i ovirt-ha-agent
> vdsm  76116  1  0 09:19 ?00:00:01 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>
> 9:19 AM ET is last restart. Even the logs say it
>
> [root@njsevcnp01 ovirt-hosted-engine-ha]# grep -i 'restarting agent'
> agent.log | wc -l
> 232719
>
> And the restarts keep happening, roughly every 35 seconds:
>
> [root@njsevcnp01 ovirt-hosted-engine-ha]# tail -n 300 agent.log | grep -i
> 'restarting agent'
> MainThread::WARNING::2016-06-15
> 09:23:53,029::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '6'
> MainThread::WARNING::2016-06-15
> 09:24:28,953::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '7'
> MainThread::WARNING::2016-06-15
> 09:25:04,879::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '8'
> MainThread::WARNING::2016-06-15
> 09:25:40,790::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '9'
> MainThread::WARNING::2016-06-15
> 09:26:17,136::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '0'
> MainThread::WARNING::2016-06-15
> 09:26:53,063::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '1'
>
> Full log of restart is like this saying "connection timed out" but it's not
> saying *what* is timing out, so I have nothing else to really go on here
>
> [root@njsevcnp01 ovirt-hosted-engine-ha]# tail -n 300 agent.log | grep -i
> restart
> MainThread::ERROR::2016-06-15
> 09:24:23,948::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 'Failed to start monitor , options {'hostname':
> 'njsevcnp01'}: Connection timed out' - trying to restart agent
> MainThread::WARNING::2016-06-15
> 09:24:28,953::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '7'
> MainThread::ERROR::2016-06-15
> 09:24:59,874::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 'Failed to start monitor , options {'hostname':
> 'njsevcnp01'}: Connection timed out' - trying to restart agent
> MainThread::WARNING::2016-06-15
> 09:25:04,879::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Restarting agent, attempt '8'
> MainThread::ERROR::2016-06-15
> 09:25:35,785::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 'Failed to start 

[ovirt-users] [ANN] oVirt 4.0.0 GA postponed to Monday, June 20

2016-06-15 Thread Rafael Martins
Hi,
In order to give time to fix some last-minute blockers, the GA has been pushed to
Monday, June 20.

This should provide sufficient time for us to properly fix and test oVirt 4.0,
in order to make sure we have no critical issues in the 4.0.0 version, which
was originally scheduled for today.

Thanks in advance,

Rafael Martins
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-15 Thread Charles Kozler
Actually, broker is the only thing acting "right" between broker and agent.
Broker is up when I bring the system up but agent is restarting all the
time. Have a look

The 11th is when I restarted this node after doing 'reinstall' in the web UI

● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
enabled; vendor preset: disabled)
   Active: active (running) since Sat 2016-06-11 13:09:51 EDT; 3 days ago
 Main PID: 1285 (ovirt-ha-broker)
   CGroup: /system.slice/ovirt-ha-broker.service
   └─1285 /usr/bin/python
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon

Jun 15 09:23:56 njsevcnp01 ovirt-ha-broker[1285]:
INFO:mgmt_bridge.MgmtBridge:Found bridge ovirtmgmt with ports
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
established
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
closed
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
established
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
closed
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
established
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
closed
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
established
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
closed
Jun 15 09:23:58 njsevcnp01 ovirt-ha-broker[1285]:
INFO:mem_free.MemFree:memFree: 26408

Uptime of proc ..

# ps -Aef | grep -i broker
vdsm   1285  1  2 Jun11 ?02:27:50 /usr/bin/python
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon

But the agent... is restarting all the time

# ps -Aef | grep -i ovirt-ha-agent
vdsm  76116  1  0 09:19 ?00:00:01 /usr/bin/python
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon

9:19 AM ET is last restart. Even the logs say it

[root@njsevcnp01 ovirt-hosted-engine-ha]# grep -i 'restarting agent'
agent.log | wc -l
232719

And the restarts keep happening, roughly every 35 seconds:

[root@njsevcnp01 ovirt-hosted-engine-ha]# tail -n 300 agent.log | grep -i
'restarting agent'
MainThread::WARNING::2016-06-15
09:23:53,029::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '6'
MainThread::WARNING::2016-06-15
09:24:28,953::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '7'
MainThread::WARNING::2016-06-15
09:25:04,879::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '8'
MainThread::WARNING::2016-06-15
09:25:40,790::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '9'
MainThread::WARNING::2016-06-15
09:26:17,136::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '0'
MainThread::WARNING::2016-06-15
09:26:53,063::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '1'

Full log of restart is like this saying "connection timed out" but it's not
saying *what* is timing out, so I have nothing else to really go on here

[root@njsevcnp01 ovirt-hosted-engine-ha]# tail -n 300 agent.log | grep -i
restart
MainThread::ERROR::2016-06-15
09:24:23,948::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: 'Failed to start monitor , options {'hostname':
'njsevcnp01'}: Connection timed out' - trying to restart agent
MainThread::WARNING::2016-06-15
09:24:28,953::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '7'
MainThread::ERROR::2016-06-15
09:24:59,874::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: 'Failed to start monitor , options {'hostname':
'njsevcnp01'}: Connection timed out' - trying to restart agent
MainThread::WARNING::2016-06-15
09:25:04,879::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '8'
MainThread::ERROR::2016-06-15
09:25:35,785::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: 'Failed to start monitor , options {'hostname':
'njsevcnp01'}: Connection timed out' - trying to restart agent
MainThread::WARNING::2016-06-15
09:25:40,790::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '9'
MainThread::ERROR::2016-06-15
09:26:12,131::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: 'Failed to start monitor , options {'hostname':
'njsevcnp01'}: Connection 

[ovirt-users] Decommission Master Storage Domain

2016-06-15 Thread Neil
Hi guys,

I've searched around a little but don't see much on how to do this.

I'm running ovirt-engine-3.5.6.2-1.el6.noarch on Centos 6.x

I have 4 x Centos 6.x hosts connected to an FC SAN with two different RAID
arrays configured on it, one new RAID and one old RAID.
The new RAID is shared as a new FC storage domain, the old RAID as my old
Master storage domain.

I have moved all VM's using LSM to the new storage domain and I would like
to remove my old storage domain now, so that the old physical hard disks
can be removed out of my SAN.

If I go to "Disks" on the old storage domain I only see two named OVF_store
etc, so it looks like it's ready to be decommissioned.

How can I promote my new domain to the master and remove/destroy my old
master domain and can it all be done without any VM downtime?

Any help is greatly appreciated.

Thanks!

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-15 Thread Martin Sivak
> Jun 14 08:11:11 njsevcnp01 ovirt-ha-agent[15713]: ovirt-ha-agent
> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Connection closed:
> Connection timed out
> Jun 14 08:11:11 njsevcnp01.fixflyer.com ovirt-ha-agent[15713]:
> ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: 'Failed
> to start monitor , options {'hostname': 'njsevcnp01'}:
> Connection timed out' - trying to restart agent

Broker is broken or down. Check the status of ovirt-ha-broker service.

> The other interesting thing is this log from node01. The odd thing is that
> it seems there is some split brain somewhere in oVirt because this log is
> from node02 but it is asking the engine and it's getting back "vm not running
> on this host' rather than 'stale data'. But I don't know engine internals

This is another piece that points to broker or storage issues. Agent
collects local data and then publishes them to other nodes through
broker. So it is possible for the agent to know the status of the VM
locally, but not be able to publish it.

hosted-engine command line tool then reads the synchronization
whiteboard too, but it does not see anything that was not published
and ends up reporting stale data.
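
To illustrate the distinction, here is a purely conceptual mock (not the
actual ovirt-hosted-engine-ha code): each agent knows its own engine status,
but other hosts only ever see what was successfully written to the shared
whiteboard, so a failed publish on one host shows up as stale data everywhere
else.

    class Whiteboard(object):
        # toy model of the shared metadata area
        def __init__(self):
            self.published = {}     # host -> last successfully published status

        def publish(self, host, status, broker_ok):
            if broker_ok:           # publishing only works when the broker answers
                self.published[host] = status

        def view_from(self, other_host):
            # roughly what vm-status style tools report about the other hosts
            return dict((h, s) for h, s in self.published.items()
                        if h != other_host)

    wb = Whiteboard()
    wb.publish('njsevcnp02', 'engine vm up', broker_ok=False)  # broker timed out
    print(wb.view_from('njsevcnp01'))  # {} -> the other hosts report stale data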

>> What is the status of the hosted engine services? systemctl status
>> ovirt-ha-agent ovirt-ha-broker

Please check the services.

Best regards

Martin

On Tue, Jun 14, 2016 at 2:16 PM, Charles Kozler  wrote:
> Martin -
>
> One thing I noticed on all of the nodes is this:
>
> Jun 14 08:11:11 njsevcnp01 ovirt-ha-agent[15713]: ovirt-ha-agent
> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Connection closed:
> Connection timed out
> Jun 14 08:11:11 njsevcnp01.fixflyer.com ovirt-ha-agent[15713]:
> ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: 'Failed
> to start monitor , options {'hostname': 'njsevcnp01'}:
> Connection timed out' - trying to restart agent
>
> Then the agent is restarted
>
> [root@njsevcnp01 ~]# ps -Aef | grep -i ovirt-ha-agent | grep -iv grep
> vdsm  15713  1  0 08:09 ?00:00:01 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>
> I dont know why the connection would time out because as you can see that
> log is from node01 and I cant figure out why its timing out on the
> connection
>
> The other interesting thing is this log from node01. The odd thing is that
> it seems there is some split brain somewhere in oVirt because this log is
> from node02 but it is asking the engine and it's getting back "vm not running
> on this host' rather than 'stale data'. But I don't know engine internals
>
> MainThread::INFO::2016-06-14
> 08:13:05,163::state_machine::171::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host njsevcnp02 (id 2): {hostname: njsevcnp02, host-id: 2, engine-status:
> {reason: vm not running on this host, health: bad, vm: down, detail:
> unknown}, score: 0, stopped: True, maintenance: False, crc32: 25da07df,
> host-ts: 3030}
> MainThread::INFO::2016-06-14
> 08:13:05,163::state_machine::171::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host njsevcnp03 (id 3): {hostname: njsevcnp03, host-id: 3, engine-status:
> {reason: vm not running on this host, health: bad, vm: down, detail:
> unknown}, score: 0, stopped: True, maintenance: False, crc32: c67818cb,
> host-ts: 10877406}
>
>
> And that same log on node02 where the engine is running
>
>
> MainThread::INFO::2016-06-14
> 08:15:44,451::state_machine::171::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host njsevcnp01 (id 1): {hostname: njsevcnp01, host-id: 1, engine-status:
> {reason: vm not running on this host, health: bad, vm: down, detail:
> unknown}, score: 0, stopped: True, maintenance: False, crc32: 260dbf06,
> host-ts: 327}
> MainThread::INFO::2016-06-14
> 08:15:44,451::state_machine::171::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host njsevcnp03 (id 3): {hostname: njsevcnp03, host-id: 3, engine-status:
> {reason: vm not running on this host, health: bad, vm: down, detail:
> unknown}, score: 0, stopped: True, maintenance: False, crc32: c67818cb,
> host-ts: 10877406}
> MainThread::INFO::2016-06-14
> 08:15:44,451::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 2): {engine-health: {health: good, vm: up, detail: up}, bridge:
> True, mem-free: 20702.0, maintenance: False, cpu-load: None, gateway: True}
> MainThread::INFO::2016-06-14
> 08:15:44,452::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1465906544.45 type=state_transition
> detail=StartState-ReinitializeFSM hostname=njsevcnp02
>
>
>
>
>
>
>
>
> On Tue, Jun 14, 2016 at 7:59 AM, Martin Sivak  wrote:
>>
>> Hi,
>>
>> is there anything interesting in the hosted engine log files?
>> /var/log/ovirt-hosted-engine-ha/agent.log
>>
>> There should be something appearing there every 10 seconds 

Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Ondra Machacek

On 06/15/2016 12:26 PM, Michal Skrivanek wrote:



On 15 Jun 2016, at 12:18, Giorgio Bersano  wrote:

Hi everyone,
I've been asked to deploy a VDI solution based on our oVirt infrastructure.
What we have in production is a 3.6 manager (standalone, not HE) with
a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
fully redundant networking.

What is not clear to me is the client side, especially because we have
been asked to implement a thin client solution but I've been almost
unable to find suitable devices.


if that client can still be a PC, albeit diskless, it’s still easier and 
probably cheaper than any other special hw.



Is there anyone in this list willing to share his/her experience on
this topic? Probably my search skill is low but I've only seen
references to IGEL. Other brands?


Not that I know of, and even that one had (or still has?) some issues with
SPICE performance, as it’s not kept up to date


There is another strong requirement: our network infrastructure makes
use of 802.1x to authenticate client devices and it would be highly
advisable to respect that constraint.


for the VDI connections? I don’t think SPICE supports that, but please bring it 
up on spice list to make sure.
if it would be for oVirt user portal then, I guess with pluggable aaa we can 
support anything. Ondro?



It depends on the use case: if an Apache module which uses RADIUS is OK, then
yes, it should work.

The problem is that we currently support only LDAP as the authorization backend.



TIA,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] free-IPA Multi-Master Authentication Problem

2016-06-15 Thread Donny Davis
How did you setup the authentication. DId you use AAA or
engine-manage-domains ?

Do you *have* to use kerberos, or can you just use ldap?

If you have no requirement to use kerberos, then I would just use simple
AAA ldap.

How are you load balancing the IPA servers?  Does fail over work for other
things? IE client machines connected to the IPA realm?

On Tue, Jun 7, 2016 at 9:49 AM, Kilian Ries  wrote:

> Indeed there was a faulty record for the IPA2 - i corrected that. Now the
> engine-log shows the correct ldap-address:
>
> ###
>
> 2016-06-07 15:20:43,940 ERROR
> [org.ovirt.engine.extensions.aaa.builtin.kerberosldap.LdapSearchExceptionHandler]
> (ajp--127.0.0.1-8702-3) Ldap authentication failed. Please check that the
> login name , password and path are correct.
> 2016-06-07 15:20:43,946 ERROR
> [org.ovirt.engine.extensions.aaa.builtin.kerberosldap.DirectorySearcher]
> (ajp--127.0.0.1-8702-3) Failed ldap search server ldap://
> auth02.intern.eu:389 using user kr...@intern.eu due to Kerberos error.
> Please check log for further details.. We should not try the next server
> 2016-06-07 15:20:43,951 ERROR
> [org.ovirt.engine.extensions.aaa.builtin.kerberosldap.LdapAuthenticateUserCommand]
> (ajp--127.0.0.1-8702-3) Failed authenticating user: kries to domain
> intern.eu. Ldap Query Type is getUserByName
> 2016-06-07 15:20:43,954 ERROR
> [org.ovirt.engine.extensions.aaa.builtin.kerberosldap.LdapAuthenticateUserCommand]
> (ajp--127.0.0.1-8702-3) Kerberos error. Please check log for further
> details.
> 2016-06-07 15:20:43,957 ERROR
> [org.ovirt.engine.extensions.aaa.builtin.kerberosldap.LdapBrokerCommandBase]
> (ajp--127.0.0.1-8702-3) Failed to run command LdapAuthenticateUserCommand.
> Domain is intern.eu. User is kries.
> 2016-06-07 15:20:43,961 INFO
> [org.ovirt.engine.core.bll.aaa.LoginBaseCommand] (ajp--127.0.0.1-8702-3)
> Cant login user "kries" with authentication profile "intern.eu" because
> the authentication failed.
> 2016-06-07 15:20:43,968 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ajp--127.0.0.1-8702-3) Correlation ID: null, Call Stack: null, Custom
> Event ID: -1, Message: User kr...@intern.eu failed to log in.
> 2016-06-07 15:20:43,971 WARN
> [org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand]
> (ajp--127.0.0.1-8702-3) CanDoAction of action LoginAdminUser failed for
> user kr...@intern.eu. Reasons: USER_FAILED_TO_AUTHENTICATE
>
> ###
>
> I'm still not able to login to oVirt via IPA2
>
> krb5kdc and dirsrv-acces Log don't show anything new.
>
> 
> From: Ondra Machacek 
> Sent: Monday, 6 June 2016 14:31
> To: Kilian Ries; users@ovirt.org
> Subject: Re: AW: [ovirt-users] free-IPA Multi-Master Authentication Problem
>
> It looks fine, thanks.
> Looking at the oVirt log I see IPA server FQDN:
>
>   auth02.intern.customer-virt.eu.intern.customer-virt.eu
>
> Looking at krb realm, I guess this should be -
> auth02.intern.customer-virt.eu
>
> Do you use SRV records or did you pass --ldap-servers to manage-domains?
> If SRV, then you maybe misconfigured DNS, if --ldap-servers, you should
> edit configuration with proper FQDN.
>
> On 06/06/2016 11:00 AM, Kilian Ries wrote:
> > Hello,
> >
> > here is the krb5kdc log from IPA2:
> >
> >
> > ###
> > Jun 03 17:18:22 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> AS_REQ (1 etypes {23}) 192.168.210.45: NEEDED_PREAUTH:
> kr...@intern.customer-virt.eu for krbtgt/
> intern.customer-virt...@intern.customer-virt.eu, Additional
> pre-authentication required
> > Jun 03 17:18:22 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> closing down fd 12
> > Jun 03 17:18:22 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> AS_REQ (1 etypes {23}) 192.168.210.45: ISSUE: authtime 1464967102, etypes
> {rep=23 tkt=18 ses=23}, kr...@intern.customer-virt.eu for krbtgt/
> intern.customer-virt...@intern.customer-virt.eu
> > Jun 03 17:18:22 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> closing down fd 12
> > Jun 03 17:18:40 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> AS_REQ (1 etypes {23}) 192.168.210.45: NEEDED_PREAUTH:
> kr...@intern.customer-virt.eu for krbtgt/
> intern.customer-virt...@intern.customer-virt.eu, Additional
> pre-authentication required
> > Jun 03 17:18:40 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> closing down fd 12
> > Jun 03 17:18:40 auth02.intern.customer-virt.eu krb5kdc[1284](info):
> AS_REQ (1 etypes {23}) 192.168.210.45: ISSUE: authtime 1464967120, etypes
> {rep=23 tkt=18 ses=23}, kr...@intern.customer-virt.eu for krbtgt/
> intern.customer-virt...@intern.customer-virt.eu
> > Jun 03 17:18:40 auth02.intern.customer-virt.eu krb5kdc[1284](info):
> closing down fd 12
> > Jun 03 17:18:40 auth02.intern.customer-virt.eu krb5kdc[1283](info):
> AS_REQ (1 etypes {23}) 192.168.210.45: NEEDED_PREAUTH:
> kr...@intern.customer-virt.eu for krbtgt/
> intern.customer-virt...@intern.customer-virt.eu, Additional

Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Michal Skrivanek

> On 15 Jun 2016, at 12:18, Giorgio Bersano  wrote:
> 
> Hi everyone,
> I've been asked to deploy a VDI solution based on our oVirt infrastructure.
> What we have in production is a 3.6 manager (standalone, not HE) with
> a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
> fully redundant networking.
> 
> What is not clear to me is the client side, especially because we have
> been asked to implement a thin client solution but I've been almost
> unable to find suitable devices.

if that client can still be a PC, albeit diskless, it’s still easier and 
probably cheaper than any other special hw.

> 
> Is there anyone in this list willing to share his/her experience on
> this topic? Probably my search skill is low but I've only seen
> references to IGEL. Other brands?

Not that I know of, and even that one had (or still has?) some issues with
SPICE performance, as it’s not kept up to date

> There is another strong requirement: our network infrastructure makes
> use of 802.1x to authenticate client devices and it would be highly
> advisable to respect that constraint.

for the VDI connections? I don’t think SPICE supports that, but please bring it 
up on spice list to make sure.
if it would be for oVirt user portal then, I guess with pluggable aaa we can 
support anything. Ondro?

> 
> TIA,
> Giorgio.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Donny Davis
You can also use Ansible to provision your 802.1x network gear along with
the client machines; this way everything is provisioned the same way every
time.

On Wed, Jun 15, 2016 at 6:21 AM, Donny Davis  wrote:

> Do you have a requirement for 3d acceleration on the VDI guests?
>
> On Wed, Jun 15, 2016 at 6:18 AM, Giorgio Bersano <
> giorgio.bers...@gmail.com> wrote:
>
>> Hi everyone,
>> I've been asked to deploy a VDI solution based on our oVirt
>> infrastructure.
>> What we have in production is a 3.6 manager (standalone, not HE) with
>> a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
>> fully redundant networking.
>>
>> What is not clear to me is the client side, especially because we have
>> been asked to implement a thin client solution but I've been almost
>> unable to find suitable devices.
>>
>> Is there anyone in this list willing to share his/her experience on
>> this topic? Probably my search skill is low but I've only seen
>> references to IGEL. Other brands?
>> There is another strong requirement: our network infrastructure makes
>> use of 802.1x to authenticate client devices and it would be highly
>> advisable to respect that constraint.
>>
>> TIA,
>> Giorgio.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDI experience to share?

2016-06-15 Thread Donny Davis
Do you have a requirement for 3d acceleration on the VDI guests?

On Wed, Jun 15, 2016 at 6:18 AM, Giorgio Bersano 
wrote:

> Hi everyone,
> I've been asked to deploy a VDI solution based on our oVirt infrastructure.
> What we have in production is a 3.6 manager (standalone, not HE) with
> a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
> fully redundant networking.
>
> What is not clear to me is the client side, especially because we have
> been asked to implement a thin client solution but I've been almost
> unable to find suitable devices.
>
> Is there anyone in this list willing to share his/her experience on
> this topic? Probably my search skill is low but I've only seen
> references to IGEL. Other brands?
> There is another strong requirement: our network infrastructure makes
> use of 802.1x to authenticate client devices and it would be highly
> advisable to respect that constraint.
>
> TIA,
> Giorgio.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to build and install ovirt to the Product Environment

2016-06-15 Thread Donny Davis
There is no sense in building the packages yourself, when the oVirt
community takes care of that for you. Especially if you are talking about a
production workload.

On Tue, Jun 14, 2016 at 2:57 AM, Martin Perina  wrote:

>
>
> On Tue, Jun 14, 2016 at 2:26 AM, Dewey Du  wrote:
>
>> Yes, the RPMs run well. But I want to build from source and install it on
>> production also.
>>
>
> Hi,
>
> in that case I'd recommend building RPMs from source and install them. But
> be aware that engine is only one part whole set of RPMs which oVirt project
> contains. Anyway if you want to build an RPM from source, please take a
> look at README.adoc in root directory in short here are steps:
>
>   make dist
>   rpmbuild -ts ovirt-engine-X.Y.Z.tar.gz
>   yum-builddep   # should be replaced with the real name of the .src.rpm from the previous step
>   rpmbuild -tb ovirt-engine-X.Y.Z.tar.gz
>
> Created RPMs are stored in $HOME/rpmbuild/RPMS
>
>
> Be aware that if you want to install those RPMs you will still need other
> RPMs from oVirt project like otopi, ovirt-host-deploy, ovirt-setup-lib,
> ovirt-engine-extension-aaa-jdbc an others. Building all of them is quite
> huge task, so that's why I recomended you RPM installation.
>
> Martin Perina
>
>
>
>>
>> On Tue, Jun 14, 2016 at 12:36 AM, Martin Perina 
>> wrote:
>>
>>>
>>>
>>> On Mon, Jun 13, 2016 at 6:27 PM, Nir Soffer  wrote:
>>>
 For such issues better use de...@ovirt.org mailing list:
 http://lists.ovirt.org/mailman/listinfo/devel

 Nir

 On Mon, Jun 13, 2016 at 6:58 PM, Dewey Du  wrote:
 > To build and install ovirt-engine at your home folder under the ovirt-engine
 > directory, execute the following command:
 >
 > $ make clean install-dev PREFIX="${PREFIX}"
 >
 > What about installing a Product Environment? Is the following command
 > right?

>>>
>>> Do you want to use oVirt in production? If so, then I'd highly
>>> recommend to use latest stable version installed from RPMs. More info can
>>> be found at
>>>
>>> http://www.ovirt.org/download/
>>>
>>> Martin Perina
>>>
>>>
>>>
 >
 > $ make clean install PREFIX="${PREFIX}"
 >
 >
 > ___
 > Users mailing list
 > Users@ovirt.org
 > http://lists.ovirt.org/mailman/listinfo/users
 >
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VDI experience to share?

2016-06-15 Thread Giorgio Bersano
Hi everyone,
I've been asked to deploy a VDI solution based on our oVirt infrastructure.
What we have in production is a 3.6 manager (standalone, not HE) with
a 3.5 cluster (CentOS 6) and a 3.6 cluster (CentOS 7), iSCSI storage,
fully redundant networking.

What is not clear to me is the client side, especially because we have
been asked to implement a thin client solution but I've been almost
unable to find suitable devices.

Is there anyone in this list willing to share his/her experience on
this topic? Probably my search skill is low but I've only seen
references to IGEL. Other brands?
There is another strong requirement: our network infrastructure makes
use of 802.1x to authenticate client devices and it would be highly
advisable to respect that constraint.

TIA,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RHEV-M installation failure

2016-06-15 Thread Alexis HAUSER
It is telling you where the log file to check is:

Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160614145427-u8mxun.log

That would give more details
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Solution for the multiple incompatible GPUs

2016-06-15 Thread Martin Polednik

On 15/06/16 09:53 +0200, Arman Khalatyan wrote:

Hi,
I am looking for some solution with multiple GPUs(FX5600quadro and
TESLA2050)
The drivers are not compatible on bare metal, therefore I was trying to
use passthrough as described here:
http://www.ovirt.org/develop/release-management/features/engine/hostdev-passthrough/

After installing the nvidia driver on the guest everything looks nice.
After running nvidia-smi 2 or 3 times to see the GPU status, the device
disappears.
I just tested with both gpus, with different drivers from nvidia, same
situation.

My environment is host and guest-Centos7.2, ovirt 3.6.6 engine(but host has
3.6.7RC due to the fix in passthrough gui)

Are there success stories with GPU + oVirt?


Actually the cards you are using should work pretty well. If I
understand correctly, the device disappears from the guest? That would
most likely be a problem in NVIDIA's smi tool or drivers. Still,
supplying VDSM logs (from start to VM's destruction) from the host
could help us debug the issue.


Thanks,
Arman.



***

Dr. Arman Khalatyan  eScience -SuperComputing
Leibniz-Institut für Astrophysik Potsdam (AIP)
An der Sternwarte 16, 14482 Potsdam, Germany

***



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host kernel upgrade

2016-06-15 Thread Nathanaël Blanchet



On 14/06/2016 15:30, Rafael Almeida wrote:
"kernel is hot patched ", mm which version of CentOS use your? i use: 
ovirt-engine-3.6.6.2 over centos 3.10.0-327.18.2.el7.x86_64 and 
periodically updates the kernel.


I've seen kpatch and ksplice. What is your implementation?
Yes, it was about kpatch, but you're right, kpatch must be applied manually
from a diff against the kernel source, and is not applied by default.

http://jensd.be/651/linux/linux-live-kernel-patching-with-kpatch-on-centos-7


greetings



On 06/14/2016 08:14 AM, Nathanaël Blanchet wrote:
Since el7, you don't need to reboot anymore after your kernel 
upgrade, kernel is hot patched.


On 14/06/2016 15:09, Rafael Almeida wrote:

Great, thnx


On 06/13/2016 06:12 PM, Nir Soffer wrote:

On Tue, Jun 14, 2016 at 1:12 AM, Rafael Almeida
 wrote:
Hello, friends, is it safe to reboot my host after updating the kernel on my
CentOS 7.2 x64? The oVirt engine 3.6 runs over this CentOS on an
independent host. What is the frequency at which the
hosts/hypervisors communicate with the oVirt engine?
The hypervisors do not communicate with the engine; the engine communicates
with them,

so you can safely reboot the engine host.

Nir








--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Solution for the multiple incompatible GPUs

2016-06-15 Thread Arman Khalatyan
Hi,
I am looking for some solution with multiple GPUs(FX5600quadro and
TESLA2050)
The drivers are not compatible on bare metal, therefore I was trying to
use passthrough as described here:
http://www.ovirt.org/develop/release-management/features/engine/hostdev-passthrough/

After installing the nvidia driver on the guest everything looks nice.
After running nvidia-smi 2 or 3 times to see the GPU status, the device
disappears.
I just tested with both gpus, with different drivers from nvidia, same
situation.

My environment is host and guest-Centos7.2, ovirt 3.6.6 engine(but host has
3.6.7RC due to the fix in passthrough gui)

Are there success stories with GPU + oVirt?
Thanks,
Arman.



***

 Dr. Arman Khalatyan  eScience -SuperComputing
 Leibniz-Institut für Astrophysik Potsdam (AIP)
 An der Sternwarte 16, 14482 Potsdam, Germany

***
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RHEV-M installation failure

2016-06-15 Thread Yedidyah Bar David
On Wed, Jun 15, 2016 at 1:15 AM, Grant Lowe  wrote:
> Hi all,
>
>
>
> I’m trying to install an RHEV-M image on an RHEV-V hypervisor.

Please be more specific about what you want to do. What guide are you
following?

What's "RHEV-V"?

> When I do,
> the installation finishes with this error:
>
[snip]
> 2016-06-14 15:00:21 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc
> misc._terminate:170 Hosted Engine deployment failed: this system is not
> reliable, please check the issue, fix and redeploy

This means you are trying to deploy a hosted-engine. Is that what you want?

This error is not enough to understand what failed.

>
> 2016-06-14 15:00:21 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160614145427-u8mxun.log

Please check above log and/or post a link to it.

If you can't understand yourself the error from checking this file, it's
quite likely we won't either - in that case, please check/post also vdsm
logs, perhaps engine logs (from the engine vm).

> So what do I need to do? I’m new to the whole oVirt world. So any and all
> help is appreciated.

Welcome and good luck!
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users