Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Artyom Lukianov
I see that I verified it on version 
ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from this 
version and above.
Thanks
- Original Message -
From: Andrew Lau and...@andrewklau.com
To: Artyom Lukianov aluki...@redhat.com
Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
Sent: Saturday, May 24, 2014 2:51:15 PM
Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
become operational...

Simply starting the ha-agents manually seems to bring up the VM
however it doesn't come up in the chkconfig list.

The next host that gets configured works fine. What steps get
configured in that final stage that perhaps I could manually run
rather than rerolling for a third time?

On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com wrote:
 Hi,

 Are these patches merged into 3.4.1? I seem to be hitting this issue
 now, twice in a row.
 The second BZ is also marked as private.

 On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com wrote:
 It has a number of the same bugs:
 https://bugzilla.redhat.com/show_bug.cgi?id=1080513
 https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is already 
 merged, so if you take the latest oVirt it should include it.
 The one thing you can do until then is to restart the host and start the 
 deployment process from the beginning.
 Thanks

 - Original Message -
 From: Tobias Honacker tob...@honacker.info
 To: users@ovirt.org
 Sent: Thursday, May 1, 2014 6:06:47 PM
 Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 Hi all,

 I hit this bug yesterday.

 Packages:

 ovirt-host-deploy-1.2.0-1.el6.noarch
 ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
 ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
 ovirt-release-11.2.0-1.noarch
 ovirt-hosted-engine-ha-1.1.2-1.el6.noarch

 After setting up the hosted engine (running great) the setup aborted with 
 this message:

 [ INFO  ] The VDSM Host is now operational
 [ ERROR ] Waiting for cluster 'Default' to become operational...
 [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no 
 attribute '__dict__'
 [ INFO  ] Stage: Clean up
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination

 What is the next step I have to take so that the HA features of the 
 hosted-engine will take care of keeping the VM alive?

 best regards
 tobias

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Removing Snapshot sub tasks

2014-05-25 Thread Mohyedeen Nazzal
Greetings,
When removing a snapshot, why is the following sub task executed:

   - Merging snapshot of disk DiskName?

Attached a screenshot of the tasks executed.

I'm just wondering why there is a need to perform a merge?


Thanks,
Mohyedeen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 3.2.2 --> 3.2.3 Database rename failed (Solved)

2014-05-25 Thread Eli Mesika


- Original Message -
 From: Neil nwilson...@gmail.com
 To: Sven Kieske s.kie...@mittwald.de, Juan Antonio Hernandez Fernandez 
 jhern...@redhat.com
 Cc: Juergen Gotteswinter squa...@gmail.com, users@ovirt.org
 Sent: Friday, May 23, 2014 11:22:53 AM
 Subject: Re: [ovirt-users] engine upgrade 3.2.2 --> 3.2.3 Database rename 
 failed (Solved)
 
 Hi guys,
 
 I've managed to resolve this problem. Firstly, after doing the fresh
 re-install of my original Dreyou 3.2.2 ovirt-engine rollback, I hadn't
 run engine-cleanup; then, when I restored my DB, I used restore.sh -u
 postgres -f /root/ovirt.sql instead of doing a manual db restore, and
 between the two of them that got rid of the issue. I'm assuming it was
 the engine-cleanup that sorted out the db renaming problem, though.
 
 Once that was done I then managed to upgrade to 3.3 and I'll now do
 the 3.4 upgrade.

Hi
Keep in mind that backup.sh/restore.sh are obsolete in 3.4 and you should use 
the engine-backup utility from now on...
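For reference, a minimal backup/restore round-trip with it looks like this
(file names here are just placeholders):

  engine-backup --mode=backup --scope=all --file=engine.backup --log=backup.log
  # on the target machine, after installing ovirt-engine but before engine-setup;
  # note that restore expects the empty database to exist already (see --help):
  engine-backup --mode=restore --file=engine.backup --log=restore.log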
Thanks
Eli Mesika


 
 Thanks very much for those who assisted.
 
 Regards.
 
 Neil Wilson.
 
 On Thu, May 22, 2014 at 6:12 AM, Neil nwilson...@gmail.com wrote:
  Hi guys,  sorry to repost but getting a bit desperate. Is anyone able to
  assist?
 
  Thanks.
 
  Regards.
 
  Neil Wilson
 
  On 21 May 2014 12:06 PM, Neil nwilson...@gmail.com wrote:
 
  Hi guys,
 
  Just a little more info on the problem. I've upgraded another oVirt
  system before from Dreyou and it worked perfectly, however on this
  particular system we had to restore from backups (DB, PKI and
  /etc/ovirt-engine) as the physical machine that was hosting the
  engine died, so perhaps that is why we're encountering this problem
  this time around...
 
  Any help is greatly appreciated.
 
  Thank you.
 
  Regards.
 
  Neil Wilson.
 
 
 
  On Wed, May 21, 2014 at 11:46 AM, Sven Kieske s.kie...@mittwald.de
  wrote:
   Hi,
  
   I don't know the exact resolution for this, but I'll add some people
   who managed to make it work, following this tutorial:
   http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start33
  
   See this thread on the users ML:
  
   http://lists.ovirt.org/pipermail/users/2013-December/018341.html
  
   HTH
  
  
   On 20.05.2014 17:00, Neil wrote:
   Hi guys,
  
   I'm trying to upgrade from Dreyou to the official repo, I've installed
   the official 3.2 repo (I'll do the 3.3 update once this works). I've
   updated to ovirt-engine-setup.noarch 0:3.2.3-1.el6 and when I run
   engine-upgrade it bombs out when trying to rename my database with the
   following error...
  
   [root@engine01 /]#  cat
   /var/log/ovirt-engine/ovirt-engine-upgrade_2014_05_20_16_34_21.log
   2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
   pgpass file /etc/ovirt-engine/.pgpass, fetching DB host value
   2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
   pgpass file /etc/ovirt-engine/.pgpass, fetching DB port value
   2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
   pgpass file /etc/ovirt-engine/.pgpass, fetching DB user value
   2014-05-20 16:34:21::DEBUG::common_utils::332::root:: YUM: VERB:
   Loaded plugins: refresh-packagekit, versionlock
   2014-05-20 16:34:21::INFO::engine-upgrade::969::root:: Info:
   /etc/ovirt-engine/.pgpass file found. Continue.
   2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
   pgpass file /etc/ovirt-engine/.pgpass, fetching DB admin value
   2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
   pgpass file /etc/ovirt-engine/.pgpass, fetching DB host value
   2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
   pgpass file /etc/ovirt-engine/.pgpass, fetching DB port value
   2014-05-20 16:34:21::DEBUG::common_utils::481::root:: running sql
   query 'SELECT pg_database_size('engine')' on db server: 'localhost'.
   2014-05-20 16:34:21::DEBUG::common_utils::434::root:: Executing
   command -- '/usr/bin/psql -h localhost -p 5432 -U postgres -d
   postgres -c SELECT pg_database_size('engine')'
   2014-05-20 16:34:21::DEBUG::common_utils::472::root:: output =
   pg_database_size
   --
11976708
   (1 row)
  
  
   2014-05-20 16:34:21::DEBUG::common_utils::473::root:: stderr =
   2014-05-20 16:34:21::DEBUG::common_utils::474::root:: retcode = 0
   2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
   point of '/var/cache/yum' at '/'
   2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
   available space on /var/cache/yum
   2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
   on /var/cache/yum is 172329
   2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
   point of '/var/lib/ovirt-engine/backups' at '/'
   2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
   available space on /var/lib/ovirt-engine/backups
   2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
   on 

Re: [ovirt-users] Removing Snapshot sub tasks

2014-05-25 Thread Meital Bourvine
Hi, 

I'll try to explain - I hope that I'll get it right :) 

Let's say that you have a vm with one disk, and you create a snapshot - what it 
actually does is create another disk and start writing to it from that point. 
Now let's say that you create another snapshot - again, it'll create another 
disk and start writing to it. 
Now if you want to remove the first snapshot, you need to merge the information 
that was written on this snapshot, so you won't lose data in the second 
snapshot. 
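
Conceptually it's the same as a qcow2 backing chain. A hand-rolled sketch with 
qemu-img (an illustration only - these are not the exact commands the engine 
runs): 

  qemu-img create -f qcow2 base.qcow2 10G
  qemu-img create -f qcow2 -b base.qcow2 snap1.qcow2   # snapshot 1: new overlay, writes go here
  qemu-img create -f qcow2 -b snap1.qcow2 snap2.qcow2  # snapshot 2: another overlay on top
  # "removing" snapshot 1 = merging its writes down and re-parenting snapshot 2:
  qemu-img commit snap1.qcow2                          # push snap1's data into base.qcow2
  qemu-img rebase -u -b base.qcow2 snap2.qcow2         # snap2 now points directly at base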

- Original Message -

 From: Mohyedeen Nazzal mohyedeen.naz...@gmail.com
 To: Users@ovirt.org
 Sent: Sunday, May 25, 2014 9:40:57 AM
 Subject: [ovirt-users] Removing Snapshot sub tasks

 Greetings,
 When removing a snapshot, why is the following sub task executed:

 * Merging snapshot of disk DiskName?

 Attached a screenshot of the tasks executed.

 I'm just wondering why there is a need to perform a merge?

 Thanks,
 Mohyedeen

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Removing Snapshot sub tasks

2014-05-25 Thread Mohyedeen Nazzal
Thanks Meital,
It seems reasonable now.

Thanks again.


On Sun, May 25, 2014 at 10:52 AM, Meital Bourvine mbour...@redhat.comwrote:

 Hi,

 I'll try to explain - I hope that I'll get it right :)

  Let's say that you have a vm with one disk, and you create a snapshot -
  what it actually does is create another disk and start writing to it from
  that point. Now let's say that you create another snapshot - again,
  it'll create another disk and start writing to it.
  Now if you want to remove the first snapshot, you need to merge the
  information that was written on this snapshot, so you won't lose data in
  the second snapshot.


 --

 From: Mohyedeen Nazzal mohyedeen.naz...@gmail.com
 To: Users@ovirt.org
 Sent: Sunday, May 25, 2014 9:40:57 AM
 Subject: [ovirt-users] Removing Snapshot sub tasks


 Greetings,
  When removing a snapshot, why is the following sub task executed:

 - Merging snapshot of disk DiskName?

 Attached a screenshot of the tasks executed.

  I'm just wondering why there is a need to perform a merge?


 Thanks,
 Mohyedeen


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Andrew Lau
On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com wrote:
 I see that I verified it on version 
 ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from this 
 version and above.
 Thanks
I can only seem to get 1.1.2.1; is the patched version being released soon?

 - Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
 Sent: Saturday, May 24, 2014 2:51:15 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 Simply starting the ha-agents manually seems to bring up the VM
 however it doesn't come up in the chkconfig list.

 The next host that gets configured works fine. What steps get
 configured in that final stage that perhaps I could manually run
 rather than rerolling for a third time?

 On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com wrote:
 Hi,

 Are these patches merged into 3.4.1? I seem to be hitting this issue
 now, twice in a row.
 The second BZ is also marked as private.

 On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com wrote:
 It has a number of the same bugs:
 https://bugzilla.redhat.com/show_bug.cgi?id=1080513
 https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is already 
 merged, so if you take the latest oVirt it should include it.
 The one thing you can do until then is to restart the host and start the 
 deployment process from the beginning.
 Thanks

 - Original Message -
 From: Tobias Honacker tob...@honacker.info
 To: users@ovirt.org
 Sent: Thursday, May 1, 2014 6:06:47 PM
 Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 Hi all,

 I hit this bug yesterday.

 Packages:

 ovirt-host-deploy-1.2.0-1.el6.noarch
 ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
 ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
 ovirt-release-11.2.0-1.noarch
 ovirt-hosted-engine-ha-1.1.2-1.el6.noarch

 After setting up the hosted engine (running great) the setup aborted with 
 this message:

 [ INFO  ] The VDSM Host is now operational
 [ ERROR ] Waiting for cluster 'Default' to become operational...
 [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no 
 attribute '__dict__'
 [ INFO  ] Stage: Clean up
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination

 What is the next step I have to take so that the HA features of the 
 hosted-engine will take care of keeping the VM alive?

 best regards
 tobias

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Yedidyah Bar David
- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: users users@ovirt.org
 Sent: Sunday, May 25, 2014 1:02:18 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...
 
 On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com wrote:
  I see that I verified it on version
  ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from this
  version and above.
  Thanks
 I can only seem to get 1.1.2.1; is the patched version being released soon?

ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch is an internal version and should
not be confused with those on ovirt.org.

[1] contains 1.1.3-1. The 3.4.1 release notes also mention that BZ 1088572 was
solved by it.

[1] http://resources.ovirt.org/pub/ovirt-3.4/rpm/fc19/noarch/

 
  - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Artyom Lukianov aluki...@redhat.com
  Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
  Sent: Saturday, May 24, 2014 2:51:15 PM
  Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to
  become operational...
 
  Simply starting the ha-agents manually seems to bring up the VM
  however it doesn't come up in the chkconfig list.
 
  The next host that gets configured works fine. What steps get
  configured in that final stage that perhaps I could manually run
  rather than rerolling for a third time?
 
  On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com wrote:
  Hi,
 
  Are these patches merged into 3.4.1? I seem to be hitting this issue
  now, twice in a row.
  The second BZ is also marked as private.
 
  On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com
  wrote:
  It has a number of the same bugs:
  https://bugzilla.redhat.com/show_bug.cgi?id=1080513
  https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is
  already merged, so if you take the latest oVirt it should include it.
  The one thing you can do until then is to restart the host and start the
  deployment process from the beginning.
  Thanks
 
  - Original Message -
  From: Tobias Honacker tob...@honacker.info
  To: users@ovirt.org
  Sent: Thursday, May 1, 2014 6:06:47 PM
  Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to
  become operational...
 
  Hi all,
 
  I hit this bug yesterday.
 
  Packages:
 
  ovirt-host-deploy-1.2.0-1.el6.noarch
  ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
  ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
  ovirt-release-11.2.0-1.noarch
  ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
 
  After setting up the hosted engine (running great) the setup aborted
  with this message:
 
  [ INFO  ] The VDSM Host is now operational
  [ ERROR ] Waiting for cluster 'Default' to become operational...
  [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no
  attribute '__dict__'
  [ INFO  ] Stage: Clean up
  [ INFO  ] Stage: Pre-termination
  [ INFO  ] Stage: Termination
 
  What is the next step I have to take so that the HA features of the
  hosted-engine will take care of keeping the VM alive?
 
  best regards
  tobias
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Andrew Lau
On Sun, May 25, 2014 at 8:52 PM, Yedidyah Bar David d...@redhat.com wrote:
 - Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Artyom Lukianov aluki...@redhat.com
 Cc: users users@ovirt.org
 Sent: Sunday, May 25, 2014 1:02:18 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com wrote:
  I see that I verified it on version
  ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from this
  version and above.
  Thanks
 I can only seem to get 1.1.2.1; is the patched version being released soon?

 ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch is an internal version and 
 should
 not be confused with those on ovirt.org.

I wonder why I got 1.1.2.1 when I ran the install only just yesterday...
although I do see 1.1.3.1 in the repo


 [1] contains 1.1.3-1. The 3.4.1 release notes also mention that BZ 1088572
 was solved by it.

 [1] http://resources.ovirt.org/pub/ovirt-3.4/rpm/fc19/noarch/


  - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Artyom Lukianov aluki...@redhat.com
  Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
  Sent: Saturday, May 24, 2014 2:51:15 PM
  Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to
  become operational...
 
  Simply starting the ha-agents manually seems to bring up the VM
  however it doesn't come up in the chkconfig list.
 
  The next host that gets configured works fine. What steps get
  configured in that final stage that perhaps I could manually run
  rather than rerolling for a third time?
 
  On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com wrote:
  Hi,
 
  Are these patches merged into 3.4.1? I seem to be hitting this issue
  now, twice in a row.
  The second BZ is also marked as private.
 
  On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com
  wrote:
  It has a number of the same bugs:
  https://bugzilla.redhat.com/show_bug.cgi?id=1080513
  https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is
  already merged, so if you take the latest oVirt it should include it.
  The one thing you can do until then is to restart the host and start the
  deployment process from the beginning.
  Thanks
 
  - Original Message -
  From: Tobias Honacker tob...@honacker.info
  To: users@ovirt.org
  Sent: Thursday, May 1, 2014 6:06:47 PM
  Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to
  become operational...
 
  Hi all,
 
  I hit this bug yesterday.
 
  Packages:
 
  ovirt-host-deploy-1.2.0-1.el6.noarch
  ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
  ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
  ovirt-release-11.2.0-1.noarch
  ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
 
  After setting up the hosted engine (running great) the setup aborted
  with this message:
 
  [ INFO  ] The VDSM Host is now operational
  [ ERROR ] Waiting for cluster 'Default' to become operational...
  [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no
  attribute '__dict__'
  [ INFO  ] Stage: Clean up
  [ INFO  ] Stage: Pre-termination
  [ INFO  ] Stage: Termination
 
  What is the next step I have to take so that the HA features of the
  hosted-engine will take care of keeping the VM alive?
 
  best regards
  tobias
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 --
 Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] experience with AMD Kabini (Jaguar) CPUs

2014-05-25 Thread Doron Fediuck


- Original Message -
 From: i iordanov iiorda...@gmail.com
 To: users@ovirt.org
 Sent: Sunday, May 25, 2014 12:51:57 AM
 Subject: [ovirt-users] experience with AMD Kabini (Jaguar) CPUs
 
 Hey guys,
 
 I need a system to test oVirt and Opaque on, but I want it to be
 super-low power and near-silent. Therefore I am thinking of grabbing
 one of the new 25W AMD Kabini CPUs. Will oVirt work well on that
 architecture? If not, what would you recommend?
 
 Many thanks!
 iordan
 
 --
 The conscious mind has only one thread of execution.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

Hey,
oVirt works with libvirt and kvm.

In order to know what is supported with kvm you can check:
http://www.linux-kvm.org/page/Processor_support

As for libvirt, they provide a wiki on troubleshooting it:
http://wiki.libvirt.org/page/Libvirt_identifies_host_processor_as_a_different_model_from_the_hardware_documentation
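
As a quick sanity check on any candidate box (generic Linux commands, nothing
oVirt-specific):

  egrep -c '(vmx|svm)' /proc/cpuinfo   # >0 means the CPU advertises hardware virt (AMD-V on Kabini)
  lsmod | grep kvm                     # kvm and kvm_amd should be loaded on an AMD host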

Hope that helps,
Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SLA : RAM scheduling

2014-05-25 Thread Doron Fediuck


- Original Message -
 From: Gilad Chaplik gchap...@redhat.com
 To: Nathanaël Blanchet blanc...@abes.fr
 Cc: users users@ovirt.org
 Sent: Saturday, May 24, 2014 11:52:10 AM
 Subject: Re: [ovirt-users] SLA : RAM scheduling
 
 - Original Message -
  From: Gilad Chaplik gchap...@redhat.com
  To: Nathanaël Blanchet blanc...@abes.fr
  Cc: Karli Sjöberg karli.sjob...@slu.se, users users@ovirt.org
  Sent: Saturday, May 24, 2014 11:49:48 AM
  Subject: Re: [ovirt-users] SLA : RAM scheduling
  
  Hi Nathanaël,
  
  You have 2 ways to get what you're after (quick/slow):
  1) install 'oVirt's external scheduling proxy', and write an extremely
  simple
  weight function that orders hosts by used memory, then add that to your
  cluster policy.
  2) open an RFE for oVirt 3.4 to have that in
  (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).
 
 By 3.4 I mean 3.4.x (either way, for (2) you'll need to upgrade), but I'm not
 sure it will make it.
 
  
  let me know if you consider (1), and I'll assist.
  
  anyway I suggest you'll open an RFE for 3.5.
  
  Thanks,
  Gilad.
  
  - Original Message -
   From: Nathanaël Blanchet blanc...@abes.fr
   To: Karli Sjöberg karli.sjob...@slu.se
   Cc: users users@ovirt.org
   Sent: Friday, May 23, 2014 7:38:40 PM
   Subject: Re: [ovirt-users] SLA : RAM scheduling
   
    Even distribution is for CPU only.
   
    On 23/05/2014 17:48, Karli Sjöberg wrote:
   
   
   
   
   
    On 23 May 2014 17:13, Nathanaël Blanchet
    blanc...@abes.fr wrote:


On 23/05/2014 17:11, Nathanaël Blanchet wrote:
 Hello,
 On ovirt 3.4, is it possible to schedule vm distribution depending on
 host RAM availability?
 Concretely, I had to manually move all the vms to the second host of
 the cluster, and this led to reaching 90% memory occupation on the
 destination host. When my first host came back after its reboot, none
 of the vms on the second host automatically migrated back to the first
 one, which had all of its RAM free. How can I make this happen?
 
... so that RAM is evenly distributed across both hosts... hope that's
clear enough...
   
   Sounds like you just want to apply the cluster policy for even
   distribution.
   Have you assigned any policy for that cluster?
   
   /K
   

--
Nathanaël Blanchet

Network monitoring
Operations and maintenance unit
Information systems department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. +33 (0)4 67 54 84 55
Fax +33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
   
   --
   Nathanaël Blanchet
   
    Network monitoring
    Operations and maintenance unit
    Information systems department
    227 avenue Professeur-Jean-Louis-Viala
    34193 MONTPELLIER CEDEX 5
    Tel. +33 (0)4 67 54 84 55
    Fax +33 (0)4 67 54 84 14 blanc...@abes.fr
   

Sounds like this RFE:
https://bugzilla.redhat.com/show_bug.cgi?id=1093038

FWIW, you can implement your own logic in Python until we
get to implement the above RFE.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can HA Agent control NFS Mount?

2014-05-25 Thread Doron Fediuck


- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: users users@ovirt.org
 Sent: Saturday, May 24, 2014 9:59:26 AM
 Subject: [ovirt-users] Can HA Agent control NFS Mount?
 
 Hi,
 
 I was just wondering, within the whole complexity of hosted-engine:
 would it be possible for the hosted-engine ha-agent to control the mount
 point?
 
 I'm basing this off a few people I've been talking to who have their
 NFS server running on the same host that the hosted-engine servers are
 running on. Most of them also run that on top of gluster.
 
 The main motive for this is that currently, if the NFS server is running
 on localhost and the server goes for a clean shutdown, it will hang:
 the NFS mount is hard mounted, and since the NFS server has gone away,
 we're stuck waiting indefinitely for it to cleanly unmount (which it
 never will).
 
 If instead one of the ha components could unmount this NFS mount when it
 shuts down, that could potentially prevent this. There are other
 alternatives and I know this is not the supported scenario, but I'm just
 hoping to bounce a few ideas.
 
 Thanks,
 Andrew

Hi Andrew,
Indeed we're not looking into the Gluster flow now as it has some
known issues. Additionally (just to make it clear) local nfs will
not provide any tolerance if the hosting server dies. So we should
be looking at shared storage regardless of the hypervisors.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Persisting glusterfs configs on an oVirt node

2014-05-25 Thread Doron Fediuck


- Original Message -
 From: Simon Barrett simon.barr...@tradingscreen.com
 To: users@ovirt.org
 Sent: Friday, May 23, 2014 11:29:39 AM
 Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node
 
 
 
 I am working through the setup of oVirt node for a 3.4.1 deployment.
 
 
 
 I set up some glusterfs volumes/bricks on oVirt Node Hypervisor release 3.0.4
 (1.0.201401291204.el6) and created a storage domain. All was working OK
 until I rebooted the node and found that the glusterfs configuration had not
 been retained.
 
 
 
 Is there something I should be doing to persist any glusterfs configuration
 so it survives a node reboot?
 
 
 
 Many thanks,
 
 
 
 Simon
 

Hi Simon,
it actually sounds like a bug to me, as the node is supposed to support
gluster.

Ryan / Fabian- thoughts?

Either way I suggest you take a look at the link below:
http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
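
In the meantime, the usual workaround on oVirt Node is the persist command
described in that link. A sketch (the exact gluster config paths may differ
on your image, so treat these as examples):

  persist /var/lib/glusterd
  persist /etc/glusterfs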

Let us know how it works.

Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Yedidyah Bar David
- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: Artyom Lukianov aluki...@redhat.com, users users@ovirt.org
 Sent: Sunday, May 25, 2014 2:00:24 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...
 
 On Sun, May 25, 2014 at 8:52 PM, Yedidyah Bar David d...@redhat.com wrote:
  - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Artyom Lukianov aluki...@redhat.com
  Cc: users users@ovirt.org
  Sent: Sunday, May 25, 2014 1:02:18 PM
  Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
  to become operational...
 
  On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com
  wrote:
   I see that I verified it on version
   ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from
   this
   version and above.
   Thanks
  I can only seem to get 1.1.2.1; is the patched version being released soon?
 
  ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch is an internal version and
  should
  not be confused with those on ovirt.org.
 
 I wonder why I got 1.1.2.1 when I ran the install only just yesterday...
 although I do see 1.1.3.1 in the repo

No idea - verified now that it works for me. Perhaps some local caching?
Did you try 'yum clean all'?

 
 
  [1] contains 1.1.3-1. The 3.4.1 release notes also mention that BZ 1088572
  was solved by it.
 
  [1] http://resources.ovirt.org/pub/ovirt-3.4/rpm/fc19/noarch/
 
 
   - Original Message -
   From: Andrew Lau and...@andrewklau.com
   To: Artyom Lukianov aluki...@redhat.com
   Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
   Sent: Saturday, May 24, 2014 2:51:15 PM
   Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
   to
   become operational...
  
   Simply starting the ha-agents manually seems to bring up the VM
   however it doesn't come up in the chkconfig list.
  
   The next host that gets configured works fine. What steps get
   configured in that final stage that perhaps I could manually run
   rather than rerolling for a third time?
  
   On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com
   wrote:
   Hi,
  
   Are these patches merged into 3.4.1? I seem to be hitting this issue
   now, twice in a row.
   The second BZ is also marked as private.
  
   On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com
   wrote:
   It has a number of the same bugs:
   https://bugzilla.redhat.com/show_bug.cgi?id=1080513
   https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is
   already merged, so if you take the latest oVirt it should include it.
   The one thing you can do until then is to restart the host and start the
   deployment process from the beginning.
   Thanks
  
   - Original Message -
   From: Tobias Honacker tob...@honacker.info
   To: users@ovirt.org
   Sent: Thursday, May 1, 2014 6:06:47 PM
   Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
   to
   become operational...
  
   Hi all,
  
   I hit this bug yesterday.
  
   Packages:
  
   ovirt-host-deploy-1.2.0-1.el6.noarch
   ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
   ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
   ovirt-release-11.2.0-1.noarch
   ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
  
   After setting up the hosted engine (running great) the setup aborted
   with this message:
  
   [ INFO  ] The VDSM Host is now operational
   [ ERROR ] Waiting for cluster 'Default' to become operational...
   [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has
   no
   attribute '__dict__'
   [ INFO  ] Stage: Clean up
   [ INFO  ] Stage: Pre-termination
   [ INFO  ] Stage: Termination
  
   What is the next step I have to take so that the HA features of the
   hosted-engine will take care of keeping the VM alive?
  
   best regards
   tobias
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
  --
  Didi
 

-- 
Didi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Andrew Lau
On Sun, May 25, 2014 at 10:25 PM, Yedidyah Bar David d...@redhat.com wrote:
 - Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: Artyom Lukianov aluki...@redhat.com, users users@ovirt.org
 Sent: Sunday, May 25, 2014 2:00:24 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 On Sun, May 25, 2014 at 8:52 PM, Yedidyah Bar David d...@redhat.com wrote:
  - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Artyom Lukianov aluki...@redhat.com
  Cc: users users@ovirt.org
  Sent: Sunday, May 25, 2014 1:02:18 PM
  Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
  to become operational...
 
  On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com
  wrote:
   I see that I verified it on version
   ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from
   this
   version and above.
   Thanks
   I can only seem to get 1.1.2.1; is the patched version being released soon?
 
  ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch is an internal version and
  should
  not be confused with those on ovirt.org.

  I wonder why I got 1.1.2.1 when I ran the install only just yesterday...
  although I do see 1.1.3.1 in the repo

 No idea - verified now that it works for me. Perhaps some local caching?
 Did you try 'yum clean all'?

It was a fresh install, I just tried yum clean all and a yum update, nothing.

Are my repos correct?
[root@ov-hv1-2a-08-23 ~]# cat /etc/yum.repos.d/ovirt.repo

[ovirt-stable]
name=Latest oVirt Releases
baseurl=http://ovirt.org/releases/stable/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0


# Latest oVirt 3.4 releases

[ovirt-3.4-stable]
name=Latest oVirt 3.4.z Releases
baseurl=http://ovirt.org/releases/3.4/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0


[ovirt-3.4-prerelease]
name=Latest oVirt 3.4 Pre Releases (Beta to Release Candidate)
baseurl=http://resources.ovirt.org/releases/3.4_pre/rpm/EL/$releasever/
enabled=0
skip_if_unavailable=1
gpgcheck=0


# Latest oVirt 3.3 releases

[ovirt-3.3-stable]
name=Latest oVirt 3.3.z Releases
baseurl=http://resources.ovirt.org/releases/3.3/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[ovirt-3.3-prerelease]
name=Latest oVirt 3.3.z Pre Releases (Beta to Release Candidate)
baseurl=http://resources.ovirt.org/releases/3.3_pre/rpm/EL/$releasever/
enabled=0
skip_if_unavailable=1
gpgcheck=0

I still seem to be getting:
[root@ov-hv1-2a-08-23 ~]# rpm -qa | grep ovirt
ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
ovirt-release-11.2.0-1.noarch
ovirt-host-deploy-1.2.0-1.el6.noarch
ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch



 
  [1] contains 1.1.3-1. The 3.4.1 release notes also mention that BZ 1088572
  was solved by it.
 
  [1] http://resources.ovirt.org/pub/ovirt-3.4/rpm/fc19/noarch/
 
 
   - Original Message -
   From: Andrew Lau and...@andrewklau.com
   To: Artyom Lukianov aluki...@redhat.com
   Cc: users users@ovirt.org, Sandro Bonazzola sbona...@redhat.com
   Sent: Saturday, May 24, 2014 2:51:15 PM
   Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
   to
   become operational...
  
   Simply starting the ha-agents manually seems to bring up the VM
   however it doesn't come up in the chkconfig list.
  
   The next host that gets configured works fine. What steps get
   configured in that final stage that perhaps I could manually run
   rather than rerolling for a third time?
  
   On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com
   wrote:
   Hi,
  
   Are these patches merged into 3.4.1? I seem to be hitting this issue
   now, twice in a row.
   The second BZ is also marked as private.
  
   On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com
   wrote:
    It has a number of the same bugs:
    https://bugzilla.redhat.com/show_bug.cgi?id=1080513
    https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is
    already merged, so if you take the latest oVirt it should include it.
    The one thing you can do until then is to restart the host and start the
    deployment process from the beginning.
   Thanks
  
   - Original Message -
   From: Tobias Honacker tob...@honacker.info
   To: users@ovirt.org
   Sent: Thursday, May 1, 2014 6:06:47 PM
   Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
   to
   become operational...
  
   Hi all,
  
    I hit this bug yesterday.
  
   Packages:
  
   ovirt-host-deploy-1.2.0-1.el6.noarch
   ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
   ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
   ovirt-release-11.2.0-1.noarch
   ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
  
    After setting up the hosted engine (running great) the setup aborted
    with this message:
  
   [ INFO  ] The VDSM Host is now operational
   [ ERROR ] Waiting for cluster 'Default' to become operational...
   [ ERROR 

Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Yedidyah Bar David
- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: Artyom Lukianov aluki...@redhat.com, users users@ovirt.org
 Sent: Sunday, May 25, 2014 3:38:07 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...
 
 On Sun, May 25, 2014 at 10:25 PM, Yedidyah Bar David d...@redhat.com wrote:
  - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Yedidyah Bar David d...@redhat.com
  Cc: Artyom Lukianov aluki...@redhat.com, users users@ovirt.org
  Sent: Sunday, May 25, 2014 2:00:24 PM
  Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
  to become operational...
 
  On Sun, May 25, 2014 at 8:52 PM, Yedidyah Bar David d...@redhat.com
  wrote:
   - Original Message -
   From: Andrew Lau and...@andrewklau.com
   To: Artyom Lukianov aluki...@redhat.com
   Cc: users users@ovirt.org
   Sent: Sunday, May 25, 2014 1:02:18 PM
   Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster
   'Default'
   to become operational...
  
   On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com
   wrote:
I see that I verified it on version
ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from
this
version and above.
Thanks
    I can only seem to get 1.1.2.1; is the patched version being released
    soon?
  
   ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch is an internal version
   and
   should
   not be confused with those on ovirt.org.
 
   I wonder why I got 1.1.2.1 when I ran the install only just yesterday...
   although I do see 1.1.3.1 in the repo
 
  No idea - verified now that it works for me. Perhaps some local caching?
  Did you try 'yum clean all'?
 
 It was a fresh install, I just tried yum clean all and a yum update, nothing.
 
 Are my repos correct?
 [root@ov-hv1-2a-08-23 ~]# cat /etc/yum.repos.d/ovirt.repo
 
 [ovirt-stable]
 name=Latest oVirt Releases
 baseurl=http://ovirt.org/releases/stable/rpm/EL/$releasever/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0
 
 
 # Latest oVirt 3.4 releases
 
 [ovirt-3.4-stable]
 name=Latest oVirt 3.4.z Releases
 baseurl=http://ovirt.org/releases/3.4/rpm/EL/$releasever/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0
 
 
 [ovirt-3.4-prerelease]
 name=Latest oVirt 3.4 Pre Releases (Beta to Release Candidate)
 baseurl=http://resources.ovirt.org/releases/3.4_pre/rpm/EL/$releasever/
 enabled=0
 skip_if_unavailable=1
 gpgcheck=0
 
 
 # Latest oVirt 3.3 releases
 
 [ovirt-3.3-stable]
 name=Latest oVirt 3.3.z Releases
 baseurl=http://resources.ovirt.org/releases/3.3/rpm/EL/$releasever/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0
 
 [ovirt-3.3-prerelease]
 name=Latest oVirt 3.3.z Pre Releases (Beta to Release Candidate)
 baseurl=http://resources.ovirt.org/releases/3.3_pre/rpm/EL/$releasever/
 enabled=0
 skip_if_unavailable=1
 gpgcheck=0

Seems ok, but note that the 'resources.ovirt.org/releases' URLs are
obsolete and recent release packages (e.g. the one pointed at from
the 3.4.1 release notes) point at 'resources.ovirt.org/pub'.

 
 I still seem to be getting:
 [root@ov-hv1-2a-08-23 ~]# rpm -qa | grep ovirt
 ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
 ovirt-release-11.2.0-1.noarch
 ovirt-host-deploy-1.2.0-1.el6.noarch
 ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
 ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch

This shows what you have installed. What do you get from
'yum list ovirt-hosted-engine-setup' ?
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Andrew Lau
On Sun, May 25, 2014 at 10:59 PM, Yedidyah Bar David d...@redhat.com wrote:
 - Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Yedidyah Bar David d...@redhat.com
 Cc: Artyom Lukianov aluki...@redhat.com, users users@ovirt.org
 Sent: Sunday, May 25, 2014 3:38:07 PM
 Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to 
 become operational...

 On Sun, May 25, 2014 at 10:25 PM, Yedidyah Bar David d...@redhat.com wrote:
  - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Yedidyah Bar David d...@redhat.com
  Cc: Artyom Lukianov aluki...@redhat.com, users users@ovirt.org
  Sent: Sunday, May 25, 2014 2:00:24 PM
  Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default'
  to become operational...
 
  On Sun, May 25, 2014 at 8:52 PM, Yedidyah Bar David d...@redhat.com
  wrote:
   - Original Message -
   From: Andrew Lau and...@andrewklau.com
   To: Artyom Lukianov aluki...@redhat.com
   Cc: users users@ovirt.org
   Sent: Sunday, May 25, 2014 1:02:18 PM
   Subject: Re: [ovirt-users] Hosted Engine - Waiting for cluster
   'Default'
   to become operational...
  
   On Sun, May 25, 2014 at 4:04 PM, Artyom Lukianov aluki...@redhat.com
   wrote:
I see that I verified it on version
ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch, so it must work from
this
version and above.
Thanks
    I can only seem to get 1.1.2.1; is the patched version being released
    soon?
  
   ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch is an internal version
   and
   should
   not be confused with those on ovirt.org.
 
   I wonder why I got 1.1.2.1 when I ran the install only just yesterday...
   although I do see 1.1.3.1 in the repo
 
  No idea - verified now that it works for me. Perhaps some local caching?
  Did you try 'yum clean all'?

 It was a fresh install, I just tried yum clean all and a yum update, nothing.

 Are my repos correct?
 [root@ov-hv1-2a-08-23 ~]# cat /etc/yum.repos.d/ovirt.repo

 [ovirt-stable]
 name=Latest oVirt Releases
 baseurl=http://ovirt.org/releases/stable/rpm/EL/$releasever/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0


 # Latest oVirt 3.4 releases

 [ovirt-3.4-stable]
 name=Latest oVirt 3.4.z Releases
 baseurl=http://ovirt.org/releases/3.4/rpm/EL/$releasever/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0


 [ovirt-3.4-prerelease]
 name=Latest oVirt 3.4 Pre Releases (Beta to Release Candidate)
 baseurl=http://resources.ovirt.org/releases/3.4_pre/rpm/EL/$releasever/
 enabled=0
 skip_if_unavailable=1
 gpgcheck=0


 # Latest oVirt 3.3 releases

 [ovirt-3.3-stable]
 name=Latest oVirt 3.3.z Releases
 baseurl=http://resources.ovirt.org/releases/3.3/rpm/EL/$releasever/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0

 [ovirt-3.3-prerelease]
 name=Latest oVirt 3.3.z Pre Releases (Beta to Release Candidate)
 baseurl=http://resources.ovirt.org/releases/3.3_pre/rpm/EL/$releasever/
 enabled=0
 skip_if_unavailable=1
 gpgcheck=0

 Seems ok, but note that the 'resources.ovirt.org/releases' URLs are
 obsolete and recent release packages (e.g. the one pointed at from
 the 3.4.1 release notes) point at 'resources.ovirt.org/pub'.

I haven't modified the URLs; they are just what came from ovirt-release.


 I still seem to be getting:
 [root@ov-hv1-2a-08-23 ~]# rpm -qa | grep ovirt
 ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
 ovirt-release-11.2.0-1.noarch
 ovirt-host-deploy-1.2.0-1.el6.noarch
 ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
 ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch

 This shows what you have installed. What do you get from
 'yum list ovirt-hosted-engine-setup' ?

[root@ov-hv1-2a-08-23 yum.repos.d]# yum list ovirt-hosted-engine-setup
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
ovirt-epel/metalink                                      | 3.3 kB 00:00
 * base: mirror.as24220.net
 * epel: mirror.optus.net
 * extras: mirror.as24220.net
 * ovirt-epel: mirror.optus.net
 * ovirt-jpackage-6.0-generic: mirror.ibcp.fr
 * updates: centos.melb.au.glomirror.com.au
ovirt-3.3-stable                                         | 2.9 kB 00:00
ovirt-3.4-stable                                         | 2.9 kB 00:00
ovirt-glusterfs-epel                                     | 2.9 kB 00:00
ovirt-glusterfs-noarch-epel                              | 2.9 kB 00:00
ovirt-jpackage-6.0-generic                               | 1.9 kB 00:00
ovirt-stable                                             | 2.9 kB 00:00
Installed Packages
ovirt-hosted-engine-setup.noarch        1.1.2-1.el6        @ovirt-3.4-stable

 --
 Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Joop
Reinstall ovirt.repo using the resources.ovirt.org/pub path. There is an 
ovirt-release.rpm; use that.
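
Something along these lines should do it (the package URL is from memory -
check resources.ovirt.org/pub/yum-repo/ for the exact current name first):

  yum remove ovirt-release
  yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm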


Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...

2014-05-25 Thread Andrew Lau
Thanks, that fixed it.

Cheers

On Sun, May 25, 2014 at 11:24 PM, Joop jvdw...@xs4all.nl wrote:
 Reinstall ovirt.repo using the resources.ovirt.org/pub path. There is a
 ovirt-release.rpm, use that.

 Joop


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vnic profile custom properties menu not visible in 3.4.1

2014-05-25 Thread Gianluca Cecchi
Hello,
I have an all-in-one environment based on f19 and 3.4.1.
My host's main interface (and so the ovirtmgmt bridge) is on 192.168.1.x.
I'm trying to set up a natted network for my VMs (it will be 192.168.125.x).
I already completed the vdsm and libvirt part following Dan's blog page here:
http://developerblog.redhat.com/2014/02/25/extending-rhev-vdsm-hooks/
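
(For reference, the libvirt side is just a plain NAT network. A sketch of the
kind of definition the blog post walks through, with the addresses adjusted to
my 192.168.125.x range - the XML here is from memory, not copied from the post:)

cat > natted.xml <<'EOF'
<network>
  <name>natted</name>
  <forward mode='nat'/>
  <ip address='192.168.125.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.125.2' end='192.168.125.254'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define natted.xml
virsh net-start natted
virsh net-autostart natted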

I created a new vnic profile named natted for my ovirtmgmt network.
But I'm not able to see in webadmin how to add a custom property extnet
to this vnic profile and set it to the natted value.
I already restarted vdsmd and then ovirt-engine (not a full restart of the
server yet) after installing the vdsm-hook-extnet-4.14.8.1-0.fc19 package.

See below my screenshots, I see no way to add it.
https://drive.google.com/file/d/0BwoPbcrMv8mvZF93ekI2a1V2clk/edit?usp=sharing
https://drive.google.com/file/d/0BwoPbcrMv8mvbkFVZEF3X2VhNVE/edit?usp=sharing

Is it perhaps a command line only option?

Based on this page I wouldn't expect so:
http://www.ovirt.org/Features/Vnic_Profiles

I instead see a "Please select a key" dropdown menu where I can only select
"SecurityGroups"...

NOTE: I need this because I have to set up an openvpn tunnel between the
server where I have the all-in-one setup and a remote network.
Unfortunately both networks, despite having different internet providers, use
192.168.1.x for their internal networks (argh! providers, please be more
imaginative... so many private networks are available: don't stop at the
first one... ;-)
which means I can't establish routing between the two networks once the tunnel
is up. I'm trying to solve this with a VM on a separate NATted network,
setting up the openvpn tunnel from that VM, in the hope that it will then be
able to route to the 192.168.1.x internal network on the destination side.

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vnic profile custom properties menu not visible in 3.4.1

2014-05-25 Thread Gianluca Cecchi
On Sun, May 25, 2014 at 4:31 PM, Gianluca Cecchi
gianluca.cec...@gmail.com wrote:

[snip]



 But I'm not able to see in webadmin how to add a custom property extnet
 to this vnic profile and set it to the natted value.



 [snip]




 Is it perhaps a command line only option?

 Based on this page I wouldn't expect so:
 http://www.ovirt.org/Features/Vnic_Profiles


  I instead see a "Please select a key" dropdown menu where I can only
  select "SecurityGroups"...

 [snip]

OK.
so after reading this page
http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm_hooks/extnet/README;h=0778dbb3ef85c5ae179fb0f6c9ceeabc268abe89;hb=HEAD

BTW: is it in any package?
[root@tekkaman ovirt-engine]# rpm -ql vdsm-hook-extnet
/usr/libexec/vdsm/hooks/before_device_create/50_extnet
/usr/libexec/vdsm/hooks/before_nic_hotplug/50_extnet
[root@tekkaman ovirt-engine]#

before
[root@tekkaman ovirt-engine]# engine-config -g CustomDeviceProperties
CustomDeviceProperties:  version: 3.0
CustomDeviceProperties:  version: 3.1
CustomDeviceProperties:  version: 3.2
CustomDeviceProperties:  version: 3.3
CustomDeviceProperties:
{type=interface;prop={SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12},
*)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}} version: 3.4

then
[root@tekkaman ovirt-engine]# engine-config -s
CustomDeviceProperties='{type=interface;prop={extnet=^[a-zA-Z0-9_ ---]+$}}'
Please select a version:
1. 3.0
2. 3.1
3. 3.2
4. 3.3
5. 3.4
5

after:
[root@tekkaman ovirt-engine]# engine-config -g CustomDeviceProperties
CustomDeviceProperties:  version: 3.0
CustomDeviceProperties:  version: 3.1
CustomDeviceProperties:  version: 3.2
CustomDeviceProperties:  version: 3.3
CustomDeviceProperties: {type=interface;prop={extnet=^[a-zA-Z0-9_ ---]+$}}
version: 3.4

# systemctl restart ovirt-engine

BTW: is it correct that what was previously the SecurityGroups property is now
simply overwritten by extnet? Or is there a syntax with which I can add extnet
without deleting SecurityGroups (even if I don't understand its usage -
possibly OpenStack integration, based on its name...)?
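
(If -s simply replaces the whole value, then presumably both properties could
be kept by listing them semicolon-separated inside prop={} - I haven't
verified this, so treat it as a guess:)

engine-config -s CustomDeviceProperties='{type=interface;prop={extnet=^[a-zA-Z0-9_ ---]+$;SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}, *)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}}' --cver=3.4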

Anyway, now in the oVirt webadmin I can edit the natted vnic profile and
extnet shows up inside the dropdown menu (there isn't any header marking it
as a custom property or similar, which would be useful).

shutdown and poweroff vm
start vm

and now it correctly gets the IP 192.168.125.91, which is inside the dhcp
range defined for the natted network

virsh dumpxml f19
...
<interface type='network'>
  <mac address='00:1a:4a:a8:01:55'/>
  <source network='natted'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <filterref filter='vdsm-no-mac-spoofing'/>
  <link state='up'/>
  <alias name='net0'/>
  <address type='pci' domain='0x' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
...

The web admin portal shows the correct new IP of the VM, as sent by vdagent.

The VM is able to reach the Internet, and
http://www.whatismyip.com/
reports consistent, identical information both from my all-in-one
server and from my VM.

Now I'm going to test whether I can solve the openvpn issue.

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can HA Agent control NFS Mount?

2014-05-25 Thread Bob Doolittle

Just for the record, what Andrew reports is not specific to GlusterFS.

I have not yet found a way to shut down my single-node Hosted deployment 
cleanly without experiencing NFS hangs/timeouts on the way down.

My NFS storage is local to my host.

Also curious is that when I say poweroff it actually reboots and comes 
up again. Could that be due to the timeouts on the way down?


-Bob

On 05/25/2014 08:13 AM, Doron Fediuck wrote:


- Original Message -

From: Andrew Lau and...@andrewklau.com
To: users users@ovirt.org
Sent: Saturday, May 24, 2014 9:59:26 AM
Subject: [ovirt-users] Can HA Agent control NFS Mount?

Hi,

I was just wondering, within the whole complexity of hosted-engine:
would it be possible for the hosted-engine ha-agent to control the mount
point?

I'm basing this off a few people I've been talking to who have their
NFS server running on the same host that the hosted-engine servers are
running on. Most of them also run that on top of gluster.

The main motive for this is that currently, if the NFS server is running on
localhost and the server goes for a clean shutdown, it will hang:
the NFS mount is hard mounted, and since the NFS server has gone away,
we're stuck waiting indefinitely for it to cleanly unmount (which it
never will).

If instead one of the ha components could unmount this NFS mount when it
shuts down, that could potentially prevent this. There are other
alternatives and I know this is not the supported scenario, but I'm just
hoping to bounce a few ideas.

Thanks,
Andrew

Hi Andrew,
Indeed we're not looking into the Gluster flow now as it has some
known issues. Additionally (just to make it clear) local nfs will
not provide any tolerance if the hosting server dies. So we should
be looking at shared storage regardless of the hypervisors.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Creating a VM for Ubuntu 14 (Trusty Tahr)

2014-05-25 Thread Bob Doolittle

Hi,

I notice that when creating a new VM and selecting the OS there is no 
choice available for Ubuntu 14.04 Trusty Tahr at this time.
How much difference does the OS selection make? Would I be safer 
choosing Ubuntu 13.10 Saucy Salamander or Other?


Thanks,
Bob

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can HA Agent control NFS Mount?

2014-05-25 Thread Joop

On 25-5-2014 19:38, Bob Doolittle wrote:


Also curious is that when I say poweroff it actually reboots and 
comes up again. Could that be due to the timeouts on the way down?


Ah, that's something my F19 host does too. Some more info: if engine 
hasn't been started on the host then I can shut it down and it will 
power off. If engine has been run on it then it will reboot.

It's not vdsm (I think) because my shutdown sequence is (on my f19 host):
 service ovirt-ha-agent stop
 service ovirt-ha-broker stop
 service vdsmd stop
 ssh root@engine01 init 0
init 0

I don't use maintenance mode because when I power on my host (= my 
desktop) I want engine to power on automatically, which it does, most of 
the time within 10 min.

I think wdmd or sanlock are causing the reboot instead of poweroff
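
One way to check, if you want to confirm (standard sanlock tooling, nothing
hosted-engine specific): run this before shutting down and look at
/var/log/sanlock.log afterwards - a lease still held at shutdown keeps the
watchdog armed, and wdmd then fires a reset instead of a clean poweroff.

  sanlock client status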

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can HA Agent control NFS Mount?

2014-05-25 Thread Bob Doolittle


On 05/25/2014 02:51 PM, Joop wrote:

On 25-5-2014 19:38, Bob Doolittle wrote:


Also curious is that when I say poweroff it actually reboots and 
comes up again. Could that be due to the timeouts on the way down?


Ah, that's something my F19 host does too. Some more info: if engine 
hasn't been started on the host then I can shut it down and it will 
power off. If engine has been run on it then it will reboot.

It's not vdsm (I think) because my shutdown sequence is (on my f19 host):
 service ovirt-ha-agent stop
 service ovirt-ha-broker stop
 service vdsmd stop
 ssh root@engine01 init 0
init 0

I don't use maintenance mode because when I power on my host (= my 
desktop) I want engine to power on automatically, which it does, most of 
the time within 10 min.


For comparison, I see this issue and I *do* use maintenance mode 
(because presumably that's the 'blessed' way to shut things down and I'm 
scared to mess this complex system up by straying off the beaten path 
;). My process is:


ssh root@engine init 0
(wait for vdsClient -s 0 list | grep Status: to show the vm as down)
hosted-engine --set-maintenance --mode=global
poweroff

And then on startup:
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start

There are two issues here. I am not sure if they are related or not.
1. The NFS timeout during shutdown (Joop do you see this also? Or just #2?)
2. The system reboot instead of poweroff (which messes up remote machine 
management)


Thanks,
 Bob


I think wdmd or sanlock are causing the reboot instead of poweroff

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Can i do this with ovirt?

2014-05-25 Thread Grant Tailor
What I want to do is have the hosts use their local storage; I do not want
to use NAS or NFS for storage of host or virtual machine data/files. I want
each host to have its local SATA disks as its storage, just like in
Ganeti.

Can ovirt do this?
It is OK to have ISO images on the ovirt-engine/management host, but I do
not want NFS or shared storage when it comes to the hosts in the
infrastructure.

Please advise.

Thanks!!!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] File system permissions

2014-05-25 Thread Maurice James

Are these the correct permissions for oVirt VM disks on the storage filesystem? 

-rw-rw----. 2 vdsm kvm 53687091200 Feb 13 21:17 b73d4ebf-975b-42d4-8d2e-df3a524a3d94 
-rw-rw----. 2 vdsm kvm     1048576 Feb 13 20:25 b73d4ebf-975b-42d4-8d2e-df3a524a3d94.lease 
-rw-r--r--. 2 vdsm kvm         280 Feb 13 21:17 b73d4ebf-975b-42d4-8d2e-df3a524a3d94.meta 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can HA Agent control NFS Mount?

2014-05-25 Thread Andrew Lau
On Mon, May 26, 2014 at 5:10 AM, Bob Doolittle b...@doolittle.us.com wrote:

 On 05/25/2014 02:51 PM, Joop wrote:

 On 25-5-2014 19:38, Bob Doolittle wrote:


 Also curious is that when I say poweroff it actually reboots and comes
 up again. Could that be due to the timeouts on the way down?

 Ah, that's something my F19 host does too. Some more info: if engine
 hasn't been started on the host then I can shut it down and it will power
 off. If engine has been run on it then it will reboot.
 It's not vdsm (I think) because my shutdown sequence is (on my f19 host):
  service ovirt-ha-agent stop
  service ovirt-ha-broker stop
  service vdsmd stop
  ssh root@engine01 init 0
 init 0

 I don't use maintenance mode because when I power on my host (= my desktop)
 I want engine to power on automatically, which it does, most of the time
 within 10 min.


 For comparison, I see this issue and I *do* use maintenance mode (because
 presumably that's the 'blessed' way to shut things down and I'm scared to
 mess this complex system up by straying off the beaten path ;). My process
 is:

 ssh root@engine init 0
 (wait for vdsClient -s 0 list | grep Status: to show the vm as down)
 hosted-engine --set-maintenance --mode=global
 poweroff

 And then on startup:
 hosted-engine --set-maintenance --mode=none
 hosted-engine --vm-start

 There are two issues here. I am not sure if they are related or not.
 1. The NFS timeout during shutdown (Joop do you see this also? Or just #2?)
 2. The system reboot instead of poweroff (which messes up remote machine
 management)


For 1, I was wondering if perhaps we could have an option to specify
the mount options. If I understand correctly, applying a soft mount
instead of a hard mount would prevent this from happening. I'm however
not sure of the implications this would have on data integrity...
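
For illustration, a soft NFS mount with shortened timeouts would look
something like this (host and export names are placeholders, and whether
'soft' is safe under VM images is exactly the open question):

  mount -t nfs -o soft,timeo=30,retrans=3 nfshost:/engine-domain /rhev/data-center/mnt/nfshost:_engine-domain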

I would really like to see it happen in the ha-agent: as it's the one
which connects/mounts the storage, it should also unmount it on shutdown.
However, its stability is flaky at best. I've noticed that if `df`
hangs because another NFS mount has timed out, the agent will
die. That's not a good sign... this was what actually caused my
hosted-engine to run twice in one case.

 Thanks,
  Bob


 I think wdmd or sanlock are causing the reboot instead of poweroff

 Joop

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users