Re: [ovirt-users] All in one question

2015-08-03 Thread Neil
Hi Matthew,

Wow, thank you very much for the quick response!

I've gone ahead and tried to do what you've suggested, but I'm a bit
confused as to how the storage is going to work...

If I add the storage domain on the Local datacenter, then I can only choose
local_storage, and when I try to add this storage as NFS, oVirt says the
storage domain isn't empty (which it isn't), so I'm confused as to how both
hosts will work together on the same storage domain?

All I'm wanting to achieve is a two-host cluster with NFS storage from one
host; am I not overcomplicating things by using the All-in-one installation?

Thank you, and apologies if I've perhaps misunderstood.

Regards.

Neil Wilson.


On Mon, Aug 3, 2015 at 10:03 AM, Matthew Lagoe matthew.la...@subrigo.net
wrote:

 Since the storage is NFS it really doesn’t matter where it is located at
 so long as all hosts can talk to it via ip



 Basically if you have the nfs storage locally or otherwise you can simply
 add another host to the datacenter and then you should be able to use the
 storage across the new host



 Keep in mind you will need to make it so the external host is able to
 access the nfs share



 *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
 Behalf Of *Neil
 *Sent:* Monday, August 03, 2015 12:58 AM
 *To:* users@ovirt.org
 *Subject:* [ovirt-users] All in one question



 Hi guys,



 Please excuse this if it sounds like a dumb question, it's my first time
 doing an All-in-one oVirt installation



 I've installed the All-in-one on one physical machine, and configured this
 as a host in the cluster, and my intention was to use local NFS storage as
 the primary storage domain for the VM's, but then add a second host to the
 cluster which would access this NFS primary storage domain on the original
 All-in-one installation...

 After doing the install when I log in I see that when you do an
 All-in-one install you end up with a local_cluster as well as a
 Default cluster and you can't add another host to the local_cluster, so
 it appears I'll need to add the second host to the Default which I'm
 assuming means I won't be able to share the primary NFS storage between the
 two clusters and I won't get live migration between my two physical hosts
 across the clusters?



 Could anyone confirm if my assumptions are correct please?



 Thank you!



 Regards.



 Neil Wilson.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace add failure', 'Message too long'

2015-08-03 Thread Punit Dambiwal
Hi Maor,

I submitted the bug here :-
https://bugzilla.redhat.com/show_bug.cgi?id=1249851

Thanks,
Punit

On Mon, Aug 3, 2015 at 4:08 PM, Maor Lipchuk mlipc...@redhat.com wrote:

 Can you please open a bug at
 https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt with all the
 relevant logs so we can track this

 Thanks,
 Maor


 - Original Message -
  From: Punit Dambiwal hypu...@gmail.com
  To: Maor Lipchuk mlipc...@redhat.com
  Cc: Ala Hino ah...@redhat.com, users@ovirt.org
  Sent: Monday, August 3, 2015 5:42:42 AM
  Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace add
 failure', 'Message too long'
 
  Hi Maor,
 
  Already tried restarting the vdsm and sanlock services, but nothing helped...
 
  Yes... I was facing some Gluster issues earlier, but those are all resolved;
  now I am stuck on the oVirt part... please help to solve this issue.
 
  On Sun, Aug 2, 2015 at 4:22 PM, Maor Lipchuk mlipc...@redhat.com
 wrote:
 
   Hi Punit,
  
    Thanks for the logs. I didn't find the origin of the problem;
    perhaps you can try to see if stopping the VDSM service and restarting the
    sanlock service helps.
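    (On an EL7 host that would be roughly the following; vdsmd and sanlock are
    the standard service names, adjust if yours differ:)

    systemctl stop vdsmd
    systemctl restart sanlock
    systemctl start vdsmd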
  
   Ala, I know you handled some Gluster issues recently, do you have any
   insight about this? Have you encountered something similar?
  
   Regards,
   Maor
  
  
   - Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Maor Lipchuk mlipc...@redhat.com
Cc: users@ovirt.org
Sent: Friday, July 31, 2015 11:48:41 AM
Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace
 add
   failure', 'Message too long'
   
Hi Maor,
   
Please find the below logs :-
   
1. Engine logs :- http://paste.ubuntu.com/11971901/
2. Sanlock lock (HV1) :- http://paste.ubuntu.com/11971916/
3. VDSM Logs (HV1) :- http://paste.ubuntu.com/11971926/
4. Sanlock lock (HV2) :- http://paste.ubuntu.com/11971950/
5. VDSM Logs (HV2) :- http://paste.ubuntu.com/11971955/
6. Var Messages (HV1) :- http://paste.ubuntu.com/11971967/
7. Var Messages (HV2) :- http://paste.ubuntu.com/11971977/
   
    The storage can be mounted, but in oVirt it fails to attach...
   
[image: Inline image 1]
   
On Thu, Jul 30, 2015 at 3:59 PM, Maor Lipchuk mlipc...@redhat.com
   wrote:
   




 - Original Message -
  From: Maor Lipchuk mlipc...@redhat.com
  To: Punit Dambiwal hypu...@gmail.com
  Cc: users@ovirt.org
  Sent: Thursday, July 30, 2015 10:56:02 AM
  Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock
 lockspace
   add
 failure', 'Message too long'
 
  Thanks Punit, I will take a look.
  meanwhile can you please also add the sanlock log and the
 /var/log/messages
  log

 Also, please attach the full engine and vdsm logs as well

 
  Regards,
  Maor
 
 
 
 
  - Original Message -
   From: Punit Dambiwal hypu...@gmail.com
   To: Maor Lipchuk mlipc...@redhat.com
   Cc: users@ovirt.org, Dan Kenigsberg dan...@redhat.com,
 Itamar
 Heim
   ih...@redhat.com
   Sent: Thursday, July 30, 2015 4:15:17 AM
   Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock
   lockspace
 add
   failure', 'Message too long'
  
   Hi Maor,
  
   Ovirt Version :- 3.5.2.1-1.el7.centos
   VDSM :- vdsm-4.16.20-0.el7.centos
   Glusterfs version :- glusterfs-3.6.3-1.el7
  
  
 http://permalink.gmane.org/gmane.comp.emulators.ovirt.user/19732
  All of those are
    there, but still I am facing the same issue..
  
   
   [root@stor1 ~]# gluster volume info 3TB
  
   Volume Name: 3TB
   Type: Replicate
   Volume ID: 78d1f376-178d-4b01-90c0-5dac90b50a6c
   Status: Started
   Number of Bricks: 1 x 3 = 3
   Transport-type: tcp
   Bricks:
   Brick1: stor1:/bricks/b/vol2
   Brick2: stor2:/bricks/b/vol2
   Brick3: stor3:/bricks/b/vol2
   Options Reconfigured:
   storage.owner-gid: 36
   storage.owner-uid: 36
   cluster.server-quorum-type: server
   cluster.quorum-type: auto
   network.remote-dio: enable
   cluster.eager-lock: enable
   performance.stat-prefetch: off
   performance.io-cache: off
   performance.read-ahead: off
   performance.quick-read: off
   auth.allow: *
   user.cifs: enable
   nfs.disable: off
   [root@stor1 ~]#
   
  
   [image: Inline image 1]
  
  
  
   On Wed, Jul 29, 2015 at 11:04 PM, Maor Lipchuk 
   mlipc...@redhat.com
 wrote:
  
Hi Punit,
   
Which oVirt version and VDSM version are you using?
The exception looks very similar to the scenario described
 here:
   
 http://permalink.gmane.org/gmane.comp.emulators.ovirt.user/19732
 Does it help?
   
Regards,
Maor
   
   
- Original 

Re: [ovirt-users] VM Migration Fails

2015-08-03 Thread Donny Davis
OK, I was just wondering.

I'm glad to see you got it fixed.
On Aug 3, 2015 6:07 PM, s k sokratis1...@outlook.com wrote:

 Because it breaks other custom configuration (non-oVirt related).


 I will probably try SELinux in enforcing mode when migrating to CentOS 7
 hosts.

 --
 Date: Mon, 3 Aug 2015 13:21:48 -0400
 Subject: RE: [ovirt-users] VM Migration Fails
 From: do...@cloudspin.me
 To: sokratis1...@outlook.com
 CC: users@ovirt.org

 Is there any reason that you have SE Linux in permissive. oVirt will run
 with SE Linux in enforcing
 On Aug 3, 2015 1:09 PM, s k sokratis1...@outlook.com wrote:

 I checked the vdsm.log and the source host was throwing this error:

 libvirtError: unsupported configuration: Unable to find security driver
 for label selinux


 I fixed it by disabling selinux (it was running in permissive mode before)
 and rebooting the host since selinux was also disabled on the other host.


 --
 Date: Mon, 3 Aug 2015 12:41:57 -0400
 Subject: Re: [ovirt-users] VM Migration Fails
 From: do...@cloudspin.me
 To: sokratis1...@outlook.com
 CC: users@ovirt.org

 Can you send the log from each node

 /var/log/vdsm/vdsm.log
 On Aug 3, 2015 12:39 PM, s k sokratis1...@outlook.com wrote:

 Hi,


 I'm having trouble migrating VMs in a 2-node cluster. VMs can start
 normally on both nodes if they are shutdown first but live migration fails
 and the following is thrown in the engine.log:



 2015-08-03 19:28:28,566 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-42) [46f46347] Rerun vm
 2ceb9c65-1920-49fe-9db1-6c9470e50a65. Called from vds ovirt-srv-02
 2015-08-03 19:28:28,615 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Failed in MigrateStatusVDS
 method
 2015-08-03 19:28:28,617 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Command
 MigrateStatusVDSCommand(HostName = ovirt-srv-02, HostId =
 be3da0c4-f898-4aa4-89c7-239282a03959,
 vmId=2ceb9c65-1920-49fe-9db1-6c9470e50a65) execution failed. Exception:
 VDSErrorException: VDSGenericException: VDSErrorException: Failed to
 MigrateStatusVDS, error = Fatal error during migration, code = 12
 2015-08-03 19:28:28,623 ERROR
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Correlation ID: 43b38529,
 Job ID: 21468e21-c78c-41ea-bc99-c56ff4526820, Call Stack: null, Custom
 Event ID: -1, Message: Migration failed due to Error: Fatal error during
 migration. Trying to migrate to another Host (VM: testvm01, Source:
 ovirt-srv-02, Destination: ovirt-srv-03).


 Both nodes run on CentOS 6.6 with the following versions:


 OS Version: *RHEL - 6 - 6.el6.centos.12.2*
 Kernel Version: *2.6.32 - 504.30.3.el6.x86_64*
 KVM Version: *0.12.1.2 - 2.448.el6_6.4*
 LIBVIRT Version: *libvirt-0.10.2-46.el6_6.6*
 VDSM Version: *vdsm-4.16.20-1.git3a90f62.el6*


 Any thoughts?


 Thanks,


 Sokratis

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Migration Fails

2015-08-03 Thread s k
Because it breaks other custom configuration (non-oVirt related).

I will probably try SELinux in enforcing mode when migrating to CentOS 7 hosts.

Date: Mon, 3 Aug 2015 13:21:48 -0400
Subject: RE: [ovirt-users] VM Migration Fails
From: do...@cloudspin.me
To: sokratis1...@outlook.com
CC: users@ovirt.org

Is there any reason that you have SE Linux in permissive. oVirt will run with 
SE Linux in enforcing
On Aug 3, 2015 1:09 PM, s k sokratis1...@outlook.com wrote:



I checked the vdsm.log and the source host was throwing this error:
libvirtError: unsupported configuration: Unable to find security driver for 
label selinux

I fixed it by disabling selinux (it was running in permissive mode before) and 
rebooting the host since selinux was also disabled on the other host.

Date: Mon, 3 Aug 2015 12:41:57 -0400
Subject: Re: [ovirt-users] VM Migration Fails
From: do...@cloudspin.me
To: sokratis1...@outlook.com
CC: users@ovirt.org

Can you send the log from each node 
/var/log/vdsm/vdsm.log
On Aug 3, 2015 12:39 PM, s k sokratis1...@outlook.com wrote:



Hi,

I'm having trouble migrating VMs in a 2-node cluster. VMs can start normally on 
both nodes if they are shutdown first but live migration fails and the 
following is thrown in the engine.log:


2015-08-03 19:28:28,566 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-42) [46f46347] Rerun vm 
 2ceb9c65-1920-49fe-9db1-6c9470e50a65. Called from vds ovirt-srv-02
 2015-08-03 19:28:28,615 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Failed in MigrateStatusVDS 
 method
 2015-08-03 19:28:28,617 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Command 
MigrateStatusVDSCommand(HostName = ovirt-srv-02, HostId = 
be3da0c4-f898-4aa4-89c7-239282a03959, 
vmId=2ceb9c65-1920-49fe-9db1-6c9470e50a65) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
 MigrateStatusVDS, error = Fatal error during migration, code = 12
 2015-08-03 19:28:28,623 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Correlation ID: 43b38529, Job 
ID: 21468e21-c78c-41ea-bc99-c56ff4526820, Call Stack: null, Custom Event ID: 
-1, Message: Migration failed due to Error: Fatal error during migration. 
Trying to migrate to another Host (VM: testvm01, Source: ovirt-srv-02, 
Destination: ovirt-srv-03).

Both nodes run on CentOS 6.6 with the following versions:

 OS Version: RHEL - 6 - 6.el6.centos.12.2
 Kernel Version: 2.6.32 - 504.30.3.el6.x86_64
 KVM Version: 0.12.1.2 - 2.448.el6_6.4
 LIBVIRT Version: libvirt-0.10.2-46.el6_6.6
 VDSM Version: vdsm-4.16.20-1.git3a90f62.el6

Any thoughts?

Thanks,

Sokratis  

___

Users mailing list

Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users


  
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Testing Ovirt 3.6

2015-08-03 Thread wodel youchi
Hi,

I redid the installation with FC22 for both the host and the VM engine, and I
still have the same problems:
- No VM engine on the web UI
- Cannot start a created VM, DB error

Then I tested with FC22 for the host and CentOS 7 for the VM engine:
- Still no VM engine on the web UI
- But this time there was no DB error and the created VM did start.

Regards.



2015-08-03 14:40 GMT+01:00 Sandro Bonazzola sbona...@redhat.com:

 No, no specific known issue.

 On Sat, Aug 1, 2015 at 8:57 PM, Maor Lipchuk mlipc...@redhat.com wrote:

 Sandro, Eyal,
 Is there any known issue of this specific build?

 Regards,
 Maor

 - Original Message -
 From: wodel youchi wodel.you...@gmail.com
 To: Maor Lipchuk mlipc...@redhat.com
 Cc: users users@ovirt.org
 Sent: Saturday, August 1, 2015 3:24:21 PM
 Subject: Re: [ovirt-users] Testing Ovirt 3.6

 Hi,

 Here are the logs

 engine.log

 hosted-engine setup log

 vdsm.log

 agent and broker logs


 About the PostgreSQL function: it exists as gethostnetworksbycluster(uuid),
 but the web GUI is calling it with parameters that are not defined.

 2015-07-31 22:05:20,449 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
 (default task-23) [7acb8bf] Data access error during C
 anDoActionFailure.: org.springframework.jdbc.BadSqlGrammarException:
 PreparedStatementCallback; bad SQL grammar [select * fro
 m  gethostnetworksbycluster(?, ?, ?)]; nested exception is
 org.postgresql.util.PSQLException: ERROR: function gethostnetworks
 bycluster(uuid, unknown, character varying) does not exist
  Hint: No function matches the given name and argument types. You might
 need to add explicit type casts.
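 (A quick way to check which signatures actually exist in the engine database
 is to list the function from psql; "engine" is the default DB name, adjust it
 if yours differs:)

 su - postgres -c "psql engine -c '\df gethostnetworksbycluster'"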



 Regards

 2015-08-01 10:01 GMT+01:00 Maor Lipchuk mlipc...@redhat.com:

  Hi wodel,
 
  Can you please attach the engine.log, and also the hosted-engine log?
 
  Regards,
  Maor
 
 
  - Original Message -
   From: wodel youchi wodel.you...@gmail.com
   To: users users@ovirt.org
   Sent: Saturday, August 1, 2015 1:01:57 AM
   Subject: [ovirt-users] Testing Ovirt 3.6
  
   Hi,
  
   I have installed ovirt 3.6 hosted-engine on Fedora22 for test.
   using NFS4 as storage for the vm engine, vms, iso and export.
  
   I am using ovirt-release- master repository
  
   I have some problems
  
   1 - the VM engine is not showing up on the webgui.
  
   2 - I cannot start a VM after its creation; I get an error about a failed
   connection to the DB. In the engine's error log I have an exception about a
   nonexistent function 'gethostnetworksbycluster'.
  
   3 - I couldn't import my old export domain
  
   4 - I couldn't import my old vm domain.
  
   thanks in advance.
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 




 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt reports question

2015-08-03 Thread qinglong.d...@horebdata.cn
Hi all,

I have installed engine 3.5.3.1, dwh 3.5.3 and reports 3.5.3, and I set up the
engine, dwh and reports correctly. But when I clicked the "Reports Portal" link
on the homepage of the engine I got "page not found". When I clicked "Show
Report" in the administrator portal I also got "page not found". When I directly
entered https://domain/ovirt-engine-reports in the address bar of the browser I
could get to the login page of reports.

Dolny
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt reports question

2015-08-03 Thread Yedidyah Bar David
On Tue, Aug 4, 2015 at 5:52 AM, qinglong.d...@horebdata.cn
qinglong.d...@horebdata.cn wrote:
 Hi all,

 I have installed engine 3.5.3.1, dwh 3.5.3 and reports 3.5.3, and I set up
 the engine, dwh and reports correctly. But when I clicked the "Reports Portal"
 link on the homepage of the engine I got "page not found". When I clicked
 "Show Report" in the administrator portal I also got "page not found". When I
 directly entered https://domain/ovirt-engine-reports in the address bar of
 the browser I could get to the login page of reports.

"page not found" with what URL in the browser address bar?

Do you have the correct address in
/etc/ovirt-engine/engine.conf.d/10-setup-reports-access.conf?

Is it resolvable both from your client and inside the engine server?
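(For a quick check, something along these lines from both the client and the
engine machine should do; replace the FQDN with whatever the link points at:)

  getent hosts engine.example.com
  curl -kI https://engine.example.com/ovirt-engine-reports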

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] All in one question

2015-08-03 Thread Neil
Hi guys,

Please excuse this if it sounds like a dumb question, it's my first time
doing an All-in-one oVirt installation

I've installed the All-in-one on one physical machine and configured it
as a host in the cluster. My intention was to use local NFS storage as
the primary storage domain for the VMs, and then add a second host to the
cluster which would access this NFS primary storage domain on the original
All-in-one installation...
After doing the install, when I log in I see that an All-in-one install
leaves you with a "local_cluster" as well as a "Default" cluster, and you
can't add another host to "local_cluster". So it appears I'll need to add the
second host to "Default", which I'm assuming means I won't be able to share
the primary NFS storage domain between the two clusters and I won't get live
migration between my two physical hosts across the clusters?

Could anyone confirm if my assumptions are correct please?

Thank you!

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace add failure', 'Message too long'

2015-08-03 Thread Maor Lipchuk
Can you please open a bug at 
https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt with all the relevant 
logs so we can track this

Thanks,
Maor


- Original Message -
 From: Punit Dambiwal hypu...@gmail.com
 To: Maor Lipchuk mlipc...@redhat.com
 Cc: Ala Hino ah...@redhat.com, users@ovirt.org
 Sent: Monday, August 3, 2015 5:42:42 AM
 Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace add 
 failure', 'Message too long'
 
 Hi Maor,
 
  Already tried restarting the vdsm and sanlock services, but nothing helped...
  
  Yes... I was facing some Gluster issues earlier, but those are all resolved;
  now I am stuck on the oVirt part... please help to solve this issue.
 
 On Sun, Aug 2, 2015 at 4:22 PM, Maor Lipchuk mlipc...@redhat.com wrote:
 
  Hi Punit,
 
   Thanks for the logs. I didn't find the origin of the problem;
   perhaps you can try to see if stopping the VDSM service and restarting the
   sanlock service helps.
 
  Ala, I know you handled some Gluster issues recently, do you have any
  insight about this? Have you encountered something similar?
 
  Regards,
  Maor
 
 
  - Original Message -
   From: Punit Dambiwal hypu...@gmail.com
   To: Maor Lipchuk mlipc...@redhat.com
   Cc: users@ovirt.org
   Sent: Friday, July 31, 2015 11:48:41 AM
   Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace add
  failure', 'Message too long'
  
   Hi Maor,
  
   Please find the below logs :-
  
   1. Engine logs :- http://paste.ubuntu.com/11971901/
   2. Sanlock lock (HV1) :- http://paste.ubuntu.com/11971916/
   3. VDSM Logs (HV1) :- http://paste.ubuntu.com/11971926/
   4. Sanlock lock (HV2) :- http://paste.ubuntu.com/11971950/
   5. VDSM Logs (HV2) :- http://paste.ubuntu.com/11971955/
   6. Var Messages (HV1) :- http://paste.ubuntu.com/11971967/
   7. Var Messages (HV2) :- http://paste.ubuntu.com/11971977/
  
    The storage can be mounted, but in oVirt it fails to attach...
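    (For reference, a manual test mount on one of the hypervisors looks roughly
    like this; the volume name comes from the gluster volume info output further
    down, and the mount point is just an example:)

    mount -t glusterfs stor1:/3TB /mnt/gluster-test
    ls -ln /mnt/gluster-test    # ownership should show 36:36 (vdsm:kvm)
    umount /mnt/gluster-test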
  
   [image: Inline image 1]
  
   On Thu, Jul 30, 2015 at 3:59 PM, Maor Lipchuk mlipc...@redhat.com
  wrote:
  
   
   
   
   
- Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Punit Dambiwal hypu...@gmail.com
 Cc: users@ovirt.org
 Sent: Thursday, July 30, 2015 10:56:02 AM
 Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock lockspace
  add
failure', 'Message too long'

 Thanks Punit, I will take a look.
 meanwhile can you please also add the sanlock log and the
/var/log/messages
 log
   
Also, please attach the full engine and vdsm logs as well
   

 Regards,
 Maor




 - Original Message -
  From: Punit Dambiwal hypu...@gmail.com
  To: Maor Lipchuk mlipc...@redhat.com
  Cc: users@ovirt.org, Dan Kenigsberg dan...@redhat.com, Itamar
Heim
  ih...@redhat.com
  Sent: Thursday, July 30, 2015 4:15:17 AM
  Subject: Re: [ovirt-users] can't add Datastorge | 'Sanlock
  lockspace
add
  failure', 'Message too long'
 
  Hi Maor,
 
  Ovirt Version :- 3.5.2.1-1.el7.centos
  VDSM :- vdsm-4.16.20-0.el7.centos
  Glusterfs version :- glusterfs-3.6.3-1.el7
 
  http://permalink.gmane.org/gmane.comp.emulators.ovirt.user/19732
All of those are
  there, but still I am facing the same issue..
 
  
  [root@stor1 ~]# gluster volume info 3TB
 
  Volume Name: 3TB
  Type: Replicate
  Volume ID: 78d1f376-178d-4b01-90c0-5dac90b50a6c
  Status: Started
  Number of Bricks: 1 x 3 = 3
  Transport-type: tcp
  Bricks:
  Brick1: stor1:/bricks/b/vol2
  Brick2: stor2:/bricks/b/vol2
  Brick3: stor3:/bricks/b/vol2
  Options Reconfigured:
  storage.owner-gid: 36
  storage.owner-uid: 36
  cluster.server-quorum-type: server
  cluster.quorum-type: auto
  network.remote-dio: enable
  cluster.eager-lock: enable
  performance.stat-prefetch: off
  performance.io-cache: off
  performance.read-ahead: off
  performance.quick-read: off
  auth.allow: *
  user.cifs: enable
  nfs.disable: off
  [root@stor1 ~]#
  
 
  [image: Inline image 1]
 
 
 
  On Wed, Jul 29, 2015 at 11:04 PM, Maor Lipchuk 
  mlipc...@redhat.com
wrote:
 
   Hi Punit,
  
   Which oVirt version and VDSM version are you using?
   The exception looks very similar to the scenario described here:
   http://permalink.gmane.org/gmane.comp.emulators.ovirt.user/19732
    Does it help?
  
   Regards,
   Maor
  
  
   - Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: users@ovirt.org, Dan Kenigsberg dan...@redhat.com,
  Itamar
   Heim ih...@redhat.com
Sent: Wednesday, July 29, 2015 5:22:57 AM
Subject: [ovirt-users] can't add Datastorge | 'Sanlock
  lockspace
add
   failure', 'Message too long'
   
 

Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a host

2015-08-03 Thread Konstantinos Christidis

Hello,

Sorry for my late response. I reproduced the error in a lab environment
(oVirt 3.5/CentOS 7.1) with 2 hosts (ovhv00 and ovhv01) and a replicated
GlusterFS volume.
I activated maintenance mode on host ovhv01 and then stopped network.service
(instead of rebooting).
The result is always the same: the Data Center becomes Non Responsive, my
storage becomes red and inactive, and most VMs become paused due to an
unknown storage error.


This is the engine log
https://paste.fedoraproject.org/250877/58925314/raw/

Thanks,

K.



On 07/29/2015 12:21 PM, Artyom Lukianov wrote:

Can you please provide engine log(/var/log/ovirt-engine/engine.log)?

- Original Message -
From: Konstantinos Christidis kochr...@ekt.gr
To: users@ovirt.org
Cc: Artyom Lukianov aluki...@redhat.com
Sent: Wednesday, July 29, 2015 9:40:26 AM
Subject: Re: [ovirt-users] Data Center becomes Non Responsive when I reboot a   
host

Maintenance mode is already enabled. All VMs finish migration successfully.
Now I stop glusterd service on this host (systemctl stop
glusterd.service) and nothing bad happens, which means that distributed
replica glusterfs works fine.
Then I stop vdsmd service (systemctl stop vdsmd.service) and everything
works fine.
When I administratively set ovirtmgmt network down or reboot this host,
my Data Center becomes Non Responsive, my storage becomes red and
inactive, and most VMs become paused due to unknown storage error.

K.




On 07/28/2015 06:09 PM, Artyom Lukianov wrote:

Just put host to maintenance mode, if it have vms it will migrate them 
automatically on other host.

- Original Message -
From: Konstantinos Christidis kochr...@ekt.gr
To: users@ovirt.org
Sent: Tuesday, July 28, 2015 1:15:15 PM
Subject: [ovirt-users] Data Center becomes Non Responsive when I reboot a   
host

Hello ovirt users,

I have 4 hosts with a distributed replicated 2x2 GlusterFS storage.
(oVirt3.5/CentOS7)

When I reboot a host (in maintenance mode and not my SPM host) my Data
Center becomes Non Responsive, my storage becomes red and inactive,
and many VMs become paused due to unknown storage error. The same
happens if I administratively set ovirtmgmt network down (to a host in
maintenance mode and not my SPM host) with ifconfig ovirtmgmt down.
I know that management network (ovirtmgmt) is required by default and is
part of oVirt monitoring process but is there anything I can do in order
to reboot a host without causing this mess?

Thanks,

K.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Poor guest write speeds

2015-08-03 Thread Matthew Lagoe
With ZFS you can use sync=disabled to enable async behaviour on the storage
side as well, just FYI.
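(For example; the pool/dataset name here is only illustrative, and note this
carries the same data-loss-on-power-failure caveat as async NFS:)

  zfs get sync tank/vmstore           # check the current setting
  zfs set sync=disabled tank/vmstore  # acknowledge writes immediately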

 

On one of ours we have 18 x 2tb drives across 6 x 3-way-mirrors with 3 intel 
7310 ssds 3 way mirrored for zil

 

With about 100 VMs running I get roughly 1400 IOPS write and 2230 IOPS read,
with about 600 MB/s read and 220 MB/s write, running the tests from a Windows
2008 R2 VM.

 

From: Donny Davis [mailto:do...@cloudspin.me] 
Sent: Sunday, August 02, 2015 05:49 PM
To: Matthew Lagoe
Cc: Alan Murrell; users
Subject: Re: [ovirt-users] Poor guest write speeds

 

Thank you for reporting back.

 

I don't have to use async with my NFS share. It's not the same comparison, but
just for metrics, I am running an

 

 HP DL380 with 2 x 6-core CPUs and 24 GB of RAM, with 16 disks in RAIDZ2 across
4 vdevs. The backend is ZFS. Connections are 10GbE.

 

I get great performance from my setup.

 

On Sat, Aug 1, 2015 at 9:00 PM, Matthew Lagoe matthew.la...@subrigo.net wrote:

You can run without async so long as your nfs server can handle sync writes
quickly


-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Alan Murrell
Sent: Saturday, August 01, 2015 05:37 PM
To: users
Subject: Re: [ovirt-users] Poor guest write speeds

Hi Donny (and everyone else who uses NFS-backed storage)

You mentioned that you get really good performance in oVirt, and I am curious
what you use for your NFS export options in exportfs. The 'async'
option fixed my performance issues, but of course this would not be a
recommended option in a production environment, at least not unless the NFS
server is running with a BBU on the RAID card.

Just curious.  Thanks! :-)

-Alan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





 

-- 

Donny Davis

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] All in one question

2015-08-03 Thread Matthew Lagoe
Since the storage is NFS it really doesn't matter where it is located, so
long as all hosts can talk to it via IP.

 

Basically, whether you have the NFS storage locally or elsewhere, you can simply
add another host to the datacenter, and then you should be able to use the
storage from the new host.

 

Keep in mind you will need to make sure the external host is able to access
the NFS share.
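(A minimal sketch of what that looks like on the NFS server, assuming the
domain lives under /exports/data; 36:36 is the vdsm:kvm UID/GID oVirt expects,
and in practice you would restrict the export to your hosts' addresses rather
than using *:)

  mkdir -p /exports/data
  chown 36:36 /exports/data
  echo '/exports/data  *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
  exportfs -ra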

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Neil
Sent: Monday, August 03, 2015 12:58 AM
To: users@ovirt.org
Subject: [ovirt-users] All in one question

 

Hi guys,

 

Please excuse this if it sounds like a dumb question, it's my first time doing 
an All-in-one oVirt installation

 

I've installed the All-in-one on one physical machine, and configured this as a 
host in the cluster, and my intention was to use local NFS storage as the 
primary storage domain for the VM's, but then add a second host to the cluster 
which would access this NFS primary storage domain on the original All-in-one 
installation...

After doing the install when I log in I see that when you do an All-in-one 
install you end up with a local_cluster as well as a Default cluster and 
you can't add another host to the local_cluster, so it appears I'll need to 
add the second host to the Default which I'm assuming means I won't be able 
to share the primary NFS storage between the two clusters and I won't get live 
migration between my two physical hosts across the clusters?

 

Could anyone confirm if my assumptions are correct please? 

 

Thank you!

 

Regards.

 

Neil Wilson.

 

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] All in one question

2015-08-03 Thread Matthew Lagoe
You will need to add a subfolder (or the like) so that the storage domain path
you specify is empty.
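(Something along these lines on the NFS server, with the path purely as an
example and 36:36 being the vdsm:kvm user/group oVirt expects:)

  mkdir -p /exports/data/ovirt-sd
  chown 36:36 /exports/data/ovirt-sd

Then point the new storage domain at host:/exports/data/ovirt-sd.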

 

 

Then all your problems should go away 

 

From: Neil [mailto:nwilson...@gmail.com] 
Sent: Monday, August 03, 2015 07:49 AM
To: Matthew Lagoe
Cc: users@ovirt.org
Subject: Re: [ovirt-users] All in one question

 

Hi Matthew,

 

Wow, thank you very much for the quick response!

 

I've gone ahead and tried to do what you've suggested, but I'm a bit confused 
as to how the storage is going to work...

 

If I add the storage domain on the Local datacenter, then I can only choose 
local_storage and then when I try add this storage as NFS, ovirt says the 
storage domain isn't empty (which it isn't) so I'm confused as to how both 
hosts will work together on the same storage domain?

 

All I'm wanting to achieve is a two host cluster with NFS storage from one 
host, is it not over complicating things using the AllinOne installation?

 

Thank you, and apologies if I've perhaps misunderstood.

 

Regards.

 

Neil Wilson.

 

 

On Mon, Aug 3, 2015 at 10:03 AM, Matthew Lagoe matthew.la...@subrigo.net 
wrote:

Since the storage is NFS it really doesn’t matter where it is located at so 
long as all hosts can talk to it via ip

 

Basically if you have the nfs storage locally or otherwise you can simply add 
another host to the datacenter and then you should be able to use the storage 
across the new host

 

Keep in mind you will need to make it so the external host is able to access 
the nfs share

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Neil
Sent: Monday, August 03, 2015 12:58 AM
To: users@ovirt.org
Subject: [ovirt-users] All in one question

 

Hi guys,

 

Please excuse this if it sounds like a dumb question, it's my first time doing 
an All-in-one oVirt installation

 

I've installed the All-in-one on one physical machine, and configured this as a 
host in the cluster, and my intention was to use local NFS storage as the 
primary storage domain for the VM's, but then add a second host to the cluster 
which would access this NFS primary storage domain on the original All-in-one 
installation...

After doing the install when I log in I see that when you do an All-in-one 
install you end up with a local_cluster as well as a Default cluster and 
you can't add another host to the local_cluster, so it appears I'll need to 
add the second host to the Default which I'm assuming means I won't be able 
to share the primary NFS storage between the two clusters and I won't get live 
migration between my two physical hosts across the clusters?

 

Could anyone confirm if my assumptions are correct please? 

 

Thank you!

 

Regards.

 

Neil Wilson.

 

 

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Postponing oVirt 3.6.0 beta

2015-08-03 Thread Sandro Bonazzola
Hi,
in order to solve an issue [1] discovered while testing the candidate build
for the beta release, we need to postpone the beta release by at least one day
to get the fix in.
Thanks,

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1249671
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Postponing oVirt 3.5.4 GA to next week

2015-08-03 Thread Sandro Bonazzola
Hi,
in order to provide a more stable release, oVirt 3.5.4 GA has been
postponed to *2015-08-13.*
Fixes improving network stability for VDSM have been added and are
currently under QE verification.
Thanks.

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Migration Fails

2015-08-03 Thread Donny Davis
Can you send the log from each node

/var/log/vdsm/vdsm.log
On Aug 3, 2015 12:39 PM, s k sokratis1...@outlook.com wrote:

 Hi,


 I'm having trouble migrating VMs in a 2-node cluster. VMs can start
 normally on both nodes if they are shutdown first but live migration fails
 and the following is thrown in the engine.log:



 2015-08-03 19:28:28,566 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-42) [46f46347] Rerun vm
 2ceb9c65-1920-49fe-9db1-6c9470e50a65. Called from vds ovirt-srv-02
 2015-08-03 19:28:28,615 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Failed in MigrateStatusVDS
 method
 2015-08-03 19:28:28,617 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Command
 MigrateStatusVDSCommand(HostName = ovirt-srv-02, HostId =
 be3da0c4-f898-4aa4-89c7-239282a03959,
 vmId=2ceb9c65-1920-49fe-9db1-6c9470e50a65) execution failed. Exception:
 VDSErrorException: VDSGenericException: VDSErrorException: Failed to
 MigrateStatusVDS, error = Fatal error during migration, code = 12
 2015-08-03 19:28:28,623 ERROR
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Correlation ID: 43b38529,
 Job ID: 21468e21-c78c-41ea-bc99-c56ff4526820, Call Stack: null, Custom
 Event ID: -1, Message: Migration failed due to Error: Fatal error during
 migration. Trying to migrate to another Host (VM: testvm01, Source:
 ovirt-srv-02, Destination: ovirt-srv-03).


 Both nodes run on CentOS 6.6 with the following versions:


 OS Version: *RHEL - 6 - 6.el6.centos.12.2*
 Kernel Version: *2.6.32 - 504.30.3.el6.x86_64*
 KVM Version: *0.12.1.2 - 2.448.el6_6.4*
 LIBVIRT Version: *libvirt-0.10.2-46.el6_6.6*
 VDSM Version: *vdsm-4.16.20-1.git3a90f62.el6*


 Any thoughts?


 Thanks,


 Sokratis

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Community Newsletter: June/July 2015 Edition

2015-08-03 Thread Brian Proffitt
OSCON, Red Hat Summit, LinuxCon, KVM Forum... with so many events this summer
where oVirt is making a splash, it's a wonder we get any work done. Yet
oVirt continues to gain popularity and respect among those seeking a
true open source solution for virtual data center management.

-
Software Releases
-

oVirt 3.5.3 Final Release is now available
http://lists.ovirt.org/pipermail/users/2015-June/033283.html

oVirt 3.6.0 Second Alpha Release is now available for testing
http://lists.ovirt.org/pipermail/users/2015-June/033473.html

New MoVirt release - introduction of augmented reality
http://noisedoll.blogspot.com/2015/07/new-movirt-release-introduction-of.html


In the Community


Video Tutorial, oVirt [20-part video series in Portuguese]
https://www.youtube.com/playlist?list=PLuivEGDibLthQL4kjAKYyDXGlX7QxxM2r

An open cloud-based virtual lab environment for computer security
education: A pilot study evaluation of oVirt
http://eprints.leedsbeckett.ac.uk/1478/
http://www.cms.livjm.ac.uk/VIBRANT/wp-content/uploads/2015/06/CloudLabs4.pdf
(Slides)

A fresh look for the oVirt Administrator Portal Dashboard
http://uxd-stackabledesign.rhcloud.com/fresh-look-ovirt-administrator-portal-dashboard/

Red Hat Enterprise Virtualization Hypervisor: KVM now & in the future -
2015 Red Hat Summit Video
https://youtu.be/bOeH-bjTSLw

Exploring open hyperconverged infrastructure solutions
http://summitblog.redhat.com/2015/06/25/exploring-open-hyperconverged-infrastructure-solutions/

Moving Focus to the Upstream
http://community.redhat.com/blog/2015/06/moving-focus-to-the-upstream/

oVirt in the 16th edition of the International Free Software Forum (FISL)
http://dougsland.livejournal.com/124552.html

oVirt's Aurangabad Meetup
http://www.meetup.com/Aurangabad-Ovirt-Meetup/events/223693898/

oVirt Server Virtualization Review [Chinese]
http://ow.ly/QrkuI


Deep Dives and Technical Discussions


Moving your Virtual Machines to oVirt with ease [Video]
https://youtu.be/7vd8X6t9eBk

How to setup oVirt 3.4 virtualization on CentOS 6.6
http://www.serenity-networks.com/linux/how-to-setup-ovirt-3-4-virtualization-on-centos-6-6/

UDS Enterprise & oVirt integration
https://www.udsenterprise.com/en/blog/2015/06/02/uds-enterprise-ovirt-integration/

oVirt Engine - squeezed into a container
http://dummdida.tumblr.com/post/120685318395/ovirt-engine-squeezed-into-a-container

vNuma in RHEVM
http://ramunix.blogspot.co.il/2015/06/vnuma-in-rhevm.html?m=1

oVirt Hosted Engine Backup and Restore [HOWTO]
http://www.ovirt.org/OVirt_Hosted_Engine_Backup_and_Restore

Set CPU family for a specific VM on oVirt 3.5
http://admin-reminder.blogspot.co.il/2015/06/set-cpu-family-for-specific-vm-on-ovirt.html?m=1

Managing Gluster Volume Snapshots using oVirt
http://shtripat.blogspot.in/2015/07/oVirtAndGlusterVolumeSnapshots.html

oVirt with LDAP authentication source
http://firstyear.id.au/entry/30

KVM Nested running on AMD oVirt!
http://cafe-ti.blog.br/2143~kvm-nested-on-amd-running-ovirt.html


Brian Proffitt
-- 

Principal Community Analyst
Open Source and Standards
b...@redhat.com
+1.574.383.9BKP
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Migration Fails

2015-08-03 Thread s k
I checked the vdsm.log and the source host was throwing this error:
libvirtError: unsupported configuration: Unable to find security driver for 
label selinux

I fixed it by disabling SELinux (it was running in permissive mode before) and
rebooting the host, since SELinux was also disabled on the other host.
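(For anyone hitting the same thing, the quick way to check and persist the
mode is roughly:)

  getenforce    # shows Enforcing / Permissive / Disabled
  sestatus      # more detail
  # to make it persistent across reboots, set SELINUX=disabled
  # (or enforcing / permissive) in /etc/selinux/config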

Date: Mon, 3 Aug 2015 12:41:57 -0400
Subject: Re: [ovirt-users] VM Migration Fails
From: do...@cloudspin.me
To: sokratis1...@outlook.com
CC: users@ovirt.org

Can you send the log from each node 
/var/log/vdsm/vdsm.log
On Aug 3, 2015 12:39 PM, s k sokratis1...@outlook.com wrote:



Hi,

I'm having trouble migrating VMs in a 2-node cluster. VMs can start normally on 
both nodes if they are shutdown first but live migration fails and the 
following is thrown in the engine.log:


2015-08-03 19:28:28,566 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-42) [46f46347] Rerun vm 
2ceb9c65-1920-49fe-9db1-6c9470e50a65. Called from vds ovirt-srv-02
2015-08-03 19:28:28,615 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Failed in MigrateStatusVDS 
method
2015-08-03 19:28:28,617 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Command 
MigrateStatusVDSCommand(HostName = ovirt-srv-02, HostId = 
be3da0c4-f898-4aa4-89c7-239282a03959, 
vmId=2ceb9c65-1920-49fe-9db1-6c9470e50a65) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Fatal error during migration, code = 12
2015-08-03 19:28:28,623 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Correlation ID: 43b38529, Job 
ID: 21468e21-c78c-41ea-bc99-c56ff4526820, Call Stack: null, Custom Event ID: 
-1, Message: Migration failed due to Error: Fatal error during migration. 
Trying to migrate to another Host (VM: testvm01, Source: ovirt-srv-02, 
Destination: ovirt-srv-03).

Both nodes run on CentOS 6.6 with the following versions:

OS Version: RHEL - 6 - 6.el6.centos.12.2
Kernel Version: 2.6.32 - 504.30.3.el6.x86_64
KVM Version: 0.12.1.2 - 2.448.el6_6.4
LIBVIRT Version: libvirt-0.10.2-46.el6_6.6
VDSM Version: vdsm-4.16.20-1.git3a90f62.el6

Any thoughts?

Thanks,

Sokratis  

___

Users mailing list

Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users


  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Migration Fails

2015-08-03 Thread Donny Davis
Is there any reason that you have SELinux in permissive mode? oVirt will run
with SELinux in enforcing mode.
On Aug 3, 2015 1:09 PM, s k sokratis1...@outlook.com wrote:

 I checked the vdsm.log and the source host was throwing this error:

 libvirtError: unsupported configuration: Unable to find security driver
 for label selinux


 I fixed it by disabling selinux (it was running in permissive mode before)
 and rebooting the host since selinux was also disabled on the other host.


 --
 Date: Mon, 3 Aug 2015 12:41:57 -0400
 Subject: Re: [ovirt-users] VM Migration Fails
 From: do...@cloudspin.me
 To: sokratis1...@outlook.com
 CC: users@ovirt.org

 Can you send the log from each node

 /var/log/vdsm/vdsm.log
 On Aug 3, 2015 12:39 PM, s k sokratis1...@outlook.com wrote:

 Hi,


 I'm having trouble migrating VMs in a 2-node cluster. VMs can start
 normally on both nodes if they are shutdown first but live migration fails
 and the following is thrown in the engine.log:



 2015-08-03 19:28:28,566 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-42) [46f46347] Rerun vm
 2ceb9c65-1920-49fe-9db1-6c9470e50a65. Called from vds ovirt-srv-02
 2015-08-03 19:28:28,615 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Failed in MigrateStatusVDS
 method
 2015-08-03 19:28:28,617 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Command
 MigrateStatusVDSCommand(HostName = ovirt-srv-02, HostId =
 be3da0c4-f898-4aa4-89c7-239282a03959,
 vmId=2ceb9c65-1920-49fe-9db1-6c9470e50a65) execution failed. Exception:
 VDSErrorException: VDSGenericException: VDSErrorException: Failed to
 MigrateStatusVDS, error = Fatal error during migration, code = 12
 2015-08-03 19:28:28,623 ERROR
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-8-thread-38) [46f46347] Correlation ID: 43b38529,
 Job ID: 21468e21-c78c-41ea-bc99-c56ff4526820, Call Stack: null, Custom
 Event ID: -1, Message: Migration failed due to Error: Fatal error during
 migration. Trying to migrate to another Host (VM: testvm01, Source:
 ovirt-srv-02, Destination: ovirt-srv-03).


 Both nodes run on CentOS 6.6 with the following versions:


 OS Version: *RHEL - 6 - 6.el6.centos.12.2*
 Kernel Version: *2.6.32 - 504.30.3.el6.x86_64*
 KVM Version: *0.12.1.2 - 2.448.el6_6.4*
 LIBVIRT Version: *libvirt-0.10.2-46.el6_6.6*
 VDSM Version: *vdsm-4.16.20-1.git3a90f62.el6*


 Any thoughts?


 Thanks,


 Sokratis

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM Migration Fails

2015-08-03 Thread s k
Hi,

I'm having trouble migrating VMs in a 2-node cluster. VMs can start normally on 
both nodes if they are shutdown first but live migration fails and the 
following is thrown in the engine.log:


2015-08-03 19:28:28,566 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-42) [46f46347] Rerun vm 
2ceb9c65-1920-49fe-9db1-6c9470e50a65. Called from vds ovirt-srv-02
2015-08-03 19:28:28,615 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Failed in MigrateStatusVDS 
method
2015-08-03 19:28:28,617 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Command 
MigrateStatusVDSCommand(HostName = ovirt-srv-02, HostId = 
be3da0c4-f898-4aa4-89c7-239282a03959, 
vmId=2ceb9c65-1920-49fe-9db1-6c9470e50a65) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Fatal error during migration, code = 12
2015-08-03 19:28:28,623 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-38) [46f46347] Correlation ID: 43b38529, Job 
ID: 21468e21-c78c-41ea-bc99-c56ff4526820, Call Stack: null, Custom Event ID: 
-1, Message: Migration failed due to Error: Fatal error during migration. 
Trying to migrate to another Host (VM: testvm01, Source: ovirt-srv-02, 
Destination: ovirt-srv-03).

Both nodes run on CentOS 6.6 with the following versions:

OS Version: RHEL - 6 - 6.el6.centos.12.2
Kernel Version: 2.6.32 - 504.30.3.el6.x86_64
KVM Version: 0.12.1.2 - 2.448.el6_6.4
LIBVIRT Version: libvirt-0.10.2-46.el6_6.6
VDSM Version: vdsm-4.16.20-1.git3a90f62.el6

Any thoughts?

Thanks,

Sokratis

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Testing Ovirt 3.6

2015-08-03 Thread Sandro Bonazzola
No, no specific known issue.

On Sat, Aug 1, 2015 at 8:57 PM, Maor Lipchuk mlipc...@redhat.com wrote:

 Sandro, Eyal,
 Is there any known issue of this specific build?

 Regards,
 Maor

 - Original Message -
 From: wodel youchi wodel.you...@gmail.com
 To: Maor Lipchuk mlipc...@redhat.com
 Cc: users users@ovirt.org
 Sent: Saturday, August 1, 2015 3:24:21 PM
 Subject: Re: [ovirt-users] Testing Ovirt 3.6

 Hi,

 Here are the logs

 engine.log

 hosted-engine setup log

 vdsm.log

 agent and broker logs


 About the PostgreSQL function: it exists as gethostnetworksbycluster(uuid),
 but the web GUI is calling it with parameters that are not defined.

 2015-07-31 22:05:20,449 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
 (default task-23) [7acb8bf] Data access error during C
 anDoActionFailure.: org.springframework.jdbc.BadSqlGrammarException:
 PreparedStatementCallback; bad SQL grammar [select * fro
 m  gethostnetworksbycluster(?, ?, ?)]; nested exception is
 org.postgresql.util.PSQLException: ERROR: function gethostnetworks
 bycluster(uuid, unknown, character varying) does not exist
  Hint: No function matches the given name and argument types. You might
 need to add explicit type casts.



 Regards

 2015-08-01 10:01 GMT+01:00 Maor Lipchuk mlipc...@redhat.com:

  Hi wodel,
 
  Can you please attach the engine.log, and also the hosted-engine log?
 
  Regards,
  Maor
 
 
  - Original Message -
   From: wodel youchi wodel.you...@gmail.com
   To: users users@ovirt.org
   Sent: Saturday, August 1, 2015 1:01:57 AM
   Subject: [ovirt-users] Testing Ovirt 3.6
  
   Hi,
  
   I have installed ovirt 3.6 hosted-engine on Fedora22 for test.
   using NFS4 as storage for the vm engine, vms, iso and export.
  
   I am using ovirt-release- master repository
  
   I have some problems
  
   1 - the VM engine is not showing up on the webgui.
  
   2 - I cannot start a VM after its creation; I get an error about a failed
   connection to the DB. In the engine's error log I have an exception about a
   nonexistent function 'gethostnetworksbycluster'.
  
   3 - I couldn't import my old export domain
  
   4 - I couldn't import my old vm domain.
  
   thanks in advance.
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 




-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Prereqs for Self Hosted Engine Hyper Converged Gluster Support

2015-08-03 Thread Michael DePaulo
Hi Didi,

1st of all, note that I am trying this on my home network. This is not
a production environment by any means.

On Mon, Aug 3, 2015 at 1:52 AM, Yedidyah Bar David d...@redhat.com wrote:
 On Mon, Aug 3, 2015 at 6:44 AM, Michael DePaulo mikedep...@gmail.com wrote:
 Hi,

 I am trying to follow these instructions on 3.6 alpha 3:
 http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support

 The 1st time I ran ovirt-hosted-engine-setup (which was unsuccessful),

 What exactly was unsuccessful? Did it continue beyond the 'Closing up' stage?

I ran a total of 25 runs yesterday due to multiple issues that I ran
into. The reason why the setup failed seemed to be that the hosted
engine VM did not reboot and start up the engine quickly enough, so it
failed during health.py.
2015-08-02 14:03:26 ERROR otopi.context context._executeMethod:164
Failed to execute stage 'Closing up': timed out

I eventually worked around this by increasing the time.sleep() call to
a larger value in the installed health.py, and then recompiling the
.pyc and .pyo files.
https://github.com/oVirt/ovirt-hosted-engine-setup/blob/c6bc631b81c241c2bf5b5313997e7e575874ac8f/src/plugins/ovirt-hosted-engine-setup/engine/health.py#L161
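(For the record, regenerating the compiled files after editing looks roughly
like this; the installed path is an assumption from memory and may differ on
your system:)

  cd /usr/share/ovirt-hosted-engine-setup/plugins/ovirt-hosted-engine-setup/engine/
  python -m py_compile health.py      # rebuilds health.pyc
  python -O -m py_compile health.py   # rebuilds health.pyo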

I think I experienced a bug whereby the while loop is only running 2
or 3 times, but I do not see a bug in the code.

Anyway, I planned to send a separate email about this issue, hopefully
after I have investigated the possible bug with the loop and prepared
a patch.

 I saw the prompt:

 Do you want to configure this host for providing GlusterFS
 storage? (Yes, No)[No]:

 Now I no longer see it.

 Please check/post setup logs (of all relevant runs), from
 /var/log/ovirt-hosted-engine-setup

This log shows the fact that I was not prompted to configure storage,
and it shows the timed out failure during health.py.
http://pastebin.com/5x3Xhss7



 Must gluster not have any peers or volumes already? Is there some
 other prereq that I am missing?

 Not sure, adding Sandro.

 Note that the official support for HC was postponed to 4.0 [1], but some
 parts were already implemented and should work.

 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1175354

 Best,
 --
 Didi

Thank you,
-Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vhostmd

2015-08-03 Thread Arsène Gschwind

Hi,

I'm running oVirt 3.5 and trying to set up vhostmd on EL7, but I'm not able
to find the vdsm-hook-vhostmd package for EL7. Is there some reason this RPM
isn't available?
Not having found the hook, I've tried to set up a custom disk for vhostmd,
but I could not figure out how to do this, since it only allows adding
disk devices from storage domains.

Has anyone set up and used vhostmd?

Thank for any help/hint.
Rgds,
Arsène
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users