[ovirt-users] Mix local and shared storage same data center on ovirt3.6 rc?

2015-10-06 Thread Liam Curtis
Hello,
Sorry for the double post, if that's the case (I wasn't sure whether the list address is case-sensitive).

Loving oVirt... I have reinstalled many a time trying to understand it, and I
thought I had this working, though now that everything is operating properly
it seems this functionality is not possible.

I am running a hosted engine over GlusterFS and would also like to use some
of the other bricks I have set up on the Gluster host, but when I try to
create a new Gluster cluster in the data center, I get the error message:

Failed to connect host  to Storage Pool Default. (which makes sense,
as the storage is local, but I am trying to overcome this limitation)

I don't want to use only Gluster shared storage in the same data center. Is there
any way to work around this?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Any way to correlate a VM disk (e.g. /dev/vda) to the vdsm ovirt disk?

2015-10-06 Thread Raz Tamir
Hi ccox,
You can see the disk ID to device mapping if you execute 'ls -l
/dev/disk/by-id/'.
A second, easier way is to make sure you have the guest agent installed on
your guest virtual machine; then, using the REST API, you can run a GET command:
GET on .../api/vms/{vm_id}/disks

You will see an attribute called "" .
I hope that helps
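In case it helps: for a virtio disk, the `/dev/disk/by-id/` link name is derived from the disk serial, which oVirt sets to the disk UUID (truncated to 20 characters by virtio-blk, to my knowledge). A small sketch of the mapping; the example serial is the one from the question, and the function name is made up:

```shell
# Assumption: a virtio disk's udev link is "virtio-" + its serial, and the
# virtio-blk serial carries the first 20 characters of the oVirt disk UUID.
serial_from_link() {
  # $1: a /dev/disk/by-id link name, e.g. "virtio-978e00a3-b4c9-4962-b"
  printf '%s\n' "${1#virtio-}"
}
serial_from_link "virtio-978e00a3-b4c9-4962-b"   # prints 978e00a3-b4c9-4962-b
```

On the guest, `ls -l /dev/disk/by-id/` would then show something like `virtio-978e00a3-b4c9-4962-b -> ../../vda`, which you can match back to the disk whose serial/UUID starts with that prefix in the engine.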



Thanks,
Raz Tamir
Red Hat Israel

On Tue, Oct 6, 2015 at 11:07 PM,  wrote:

> I want to correlate virtual disks back to their originating storage under
> ovirt. Is there any way to do this?
>
> e.g. (made up example)
>
> /dev/vda
>
> maps to ovirt disk
>
> disk1_vm serial 978e00a3-b4c9-4962-bc4f-ffc9267acdd8
>




Re: [ovirt-users] Support for windows10

2015-10-06 Thread Budur Nagaraju
any updates ?

On Tue, Oct 6, 2015 at 4:40 PM, Budur Nagaraju  wrote:

> I selected Win 8 64-bit under the "Operating System" field but was unable to
> install successfully.
> Do I need to select Win 2012? But it's a server version; please suggest.
>
>
>
>
> On Tue, Oct 6, 2015 at 4:08 PM, Vinzenz Feenstra 
> wrote:
>
>> On 10/06/2015 12:37 PM, Budur Nagaraju wrote:
>>
>> I'm using oVirt 3.5 and am unable to find Windows 10 in the field
>> "Operating System".
>>
>> Because there was no Windows 10 when we released oVirt 3.5 :-)
>> You should choose the latest Windows version available; that should be OK
>> then.
>>
>>
>> On Tue, Oct 6, 2015 at 3:57 PM, Vinzenz Feenstra 
>> wrote:
>>
>>> On 10/06/2015 11:15 AM, Eli Mesika wrote:
>>>
 CCing Vinzenz

 - Original Message -

> From: "Budur Nagaraju" 
> To: "users" 
> Sent: Tuesday, October 6, 2015 8:59:50 AM
> Subject: [ovirt-users] Support for windows10
>
> Hi
>
> Does Ovirt supports windows 10 OS ?
>
 From our latest tests we can confirm that the latest builds of Win10
>>> are working on oVirt; some earlier builds had problems, which Microsoft
>>> had already addressed before the release.
>>> HTH
>>>
>>>
> Thanks,
> Nagaraju
>
>
>
>
>>>
>>> --
>>> Regards,
>>>
>>> Vinzenz Feenstra | Senior Software Engineer
>>> RedHat Engineering Virtualization R & D
>>> Phone: +420 532 294 625
>>> IRC: vfeenstr or evilissimo
>>>
>>> Better technology. Faster innovation. Powered by community collaboration.
>>> See how it works at redhat.com
>>>
>>>
>>
>>
>> --
>> Regards,
>>
>> Vinzenz Feenstra | Senior Software Engineer
>> RedHat Engineering Virtualization R & D
>> Phone: +420 532 294 625
>> IRC: vfeenstr or evilissimo
>>
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>>
>


[ovirt-users] Support for windows10

2015-10-06 Thread Budur Nagaraju
Hi

Does oVirt support the Windows 10 OS?

Thanks,
Nagaraju


Re: [ovirt-users] restore

2015-10-06 Thread Yedidyah Bar David
On Fri, Sep 25, 2015 at 10:23 AM, Budur Nagaraju  wrote:
> HI
>
> Getting below error while restoring ovirt,
>
> [root@cstlb1 ovirtrestore]# engine-backup --mode=restore --file=ovirtbackup
> --log=ovirtlog
> Preparing to restore:
> - Unpacking file 'ovirtbackup'
> Restoring:
> - Files
> FATAL: Can't connect to database 'engine'. Please see
> '/usr/bin/engine-backup --help'.
> [root@cstlb1 ovirtrestore]#

Please check [1] and report with more details if you still have problems.

[1] http://www.ovirt.org/Ovirt-engine-backup#Restore

Best,
-- 
Didi


[ovirt-users] Any way to correlate a VM disk (e.g. /dev/vda) to the vdsm ovirt disk?

2015-10-06 Thread ccox
I want to correlate virtual disks back to their originating storage under
ovirt. Is there any way to do this?

e.g. (made up example)

/dev/vda

maps to ovirt disk

disk1_vm serial 978e00a3-b4c9-4962-bc4f-ffc9267acdd8



[ovirt-users] LDAP authentication with TLS

2015-10-06 Thread Steve Dainard
Hello,

Trying to configure Ovirt 3.5.3.1-1.el7.centos for LDAP authentication.

I've configured the appropriate aaa profile, but I'm getting TLS errors
when I search for users to add via oVirt:

The connection reader was unable to successfully complete TLS
negotiation: javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: No trusted certificate
found caused by sun.security.validator.ValidatorException: No trusted
certificate found

I added the external CA certificate using keytool as per
https://github.com/oVirt/ovirt-engine-extension-aaa-ldap with
appropriate adjustments of course:

keytool -importcert -noprompt -trustcacerts -alias myrootca \
   -file myrootca.pem -keystore myrootca.jks -storepass changeit

I know this certificate works, and can connect to LDAP with TLS as I'm
using the same LDAP configuration/certificate with SSSD.

Can anyone clarify whether I should be adding the external CA
certificate or the LDAP host certificate with keytool or any other
suggestions?

Thanks,
Steve


Re: [ovirt-users] LDAP authentication with TLS

2015-10-06 Thread Alon Bar-Lev
Hi,

Can you please send me the profile, the keystore you created and the output of:

openssl s_client -connect server:636 -showcerts < /dev/null

Thanks!
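As a side note, the PEM certificate blocks in that s_client output can be cut out for inspection or for keytool import; a small sketch (the function name is made up):

```shell
# Keep only the PEM certificate blocks from whatever arrives on stdin
# (e.g. the output of `openssl s_client -showcerts`).
extract_certs() {
  awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/'
}
```

Usage would be something like `openssl s_client -connect server:636 -showcerts < /dev/null 2>/dev/null | extract_certs > chain.pem`, then `openssl x509 -in chain.pem -noout -issuer -subject` to see which CA actually signed the server certificate and therefore which certificate belongs in the keystore.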

- Original Message -
> From: "Steve Dainard" 
> To: "users" 
> Sent: Tuesday, October 6, 2015 11:50:41 PM
> Subject: [ovirt-users] LDAP authentication with TLS
> 
> Hello,
> 
> Trying to configure Ovirt 3.5.3.1-1.el7.centos for LDAP authentication.
> 
> I've configured the appropriate aaa profile but I'm getting TLS errors
>  when I search for users to add via ovirt:
> 
> The connection reader was unable to successfully complete TLS
> negotiation: javax.net.ssl.SSLHandshakeException:
> sun.security.validator.ValidatorException: No trusted certificate
> found caused by sun.security.validator.ValidatorException: No trusted
> certificate found
> 
> I added the external CA certificate using keytool as per
> https://github.com/oVirt/ovirt-engine-extension-aaa-ldap with
> appropriate adjustments of course:
> 
> keytool -importcert -noprompt -trustcacerts -alias myrootca \
>-file myrootca.pem -keystore myrootca.jks -storepass changeit
> 
> I know this certificate works, and can connect to LDAP with TLS as I'm
> using the same LDAP configuration/certificate with SSSD.
> 
> Can anyone clarify whether I should be adding the external CA
> certificate or the LDAP host certificate with keytool or any other
> suggestions?
> 
> Thanks,
> Steve


Re: [ovirt-users] moving storage and importing vms issue

2015-10-06 Thread Jiří Sléžka

Hello,

I have not had much time to deal with this issue until today. Happily, I
recovered the lost disk image.


As I mentioned before, I found the lost disk image (volume), but it wasn't
accessible because its logical volume (I'm using FC storage) wasn't
active...


I double-checked that this image is not used anywhere and set it active:

lvchange -a y 
/dev/088e7ed9-84c7-4fbd-a570-f37fa986a772/0681822f-3ac8-473b-95ce-380f8ab4de06


Then I was able to back up my data (in fact I created a new VM disk and
dd'd this volume onto it).
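For reference, whether an LV is active can be read from the attribute string shown by `lvs -o lv_attr` before running lvchange; a small decoding sketch (per my reading of lvs(8), the 5th attribute character is the state flag, 'a' when active):

```shell
# Decode the state bit of an lvs "lv_attr" string, e.g. from
# `lvs --noheadings -o lv_attr <vg>/<lv>`.
lv_state() {
  case "$(printf '%s' "$1" | cut -c5)" in
    a) echo active ;;
    *) echo inactive ;;
  esac
}
lv_state "-wi-a-----"   # a typical attr string for an active linear LV
lv_state "-wi-------"
```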


BTW, after the successful recovery I still have this orphaned image on my
storage, which is not visible from the manager. How can I correctly remove
it (from storage and from the engine)?


It looks like this (http://www.ovirt.org/Features/Orphaned_Images)
utility would have helped a lot ;-)


Cheers, Jiri




any hope and/or hint for me?

Before I moved the storage I (partly live) migrated disks to this storage
(we have about 5 LUNs). There could possibly be some issues there. Just a
guess - could it mean that some disks would stay on the original storage as
orphaned images?

It would be useful to have some low-level util to display all images
(also orphaned) and their properties and correlations with vms.
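Until such a utility exists, a rough substitute is to diff the storage-side image list against the disks the engine shows; a sketch assuming two hypothetical files with one UUID per line:

```shell
# Lists image UUIDs present on the storage domain but unknown to the engine.
# $1: UUIDs from `vdsClient -s 0 getImagesList <sd_uuid>`
# $2: disk UUIDs collected from the engine (Disks tab or REST API)
orphan_images() {
  sort -u "$1" > /tmp/_imgs.$$
  sort -u "$2" > /tmp/_disks.$$
  # lines only in the first file = candidates for orphaned images
  comm -23 /tmp/_imgs.$$ /tmp/_disks.$$
  rm -f /tmp/_imgs.$$ /tmp/_disks.$$
}
```

The file names and helper are made up; the point is just that `comm -23` over the two sorted lists yields the images the engine does not know about.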

Kind regards,

Jiri



Dne 10.9.2015 v 16:31 Eli Mesika napsal(a):

Adding Allon M

- Original Message -

From: "Jiří Sléžka" 
To: "Eli Mesika" 
Cc: users@ovirt.org, "Omer Frenkel" 
Sent: Thursday, September 10, 2015 4:07:48 PM
Subject: Re: [ovirt-users] moving storage and importing vms issue

Hello,


- Original Message -

From: "Jiří Sléžka" 
To: emes...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, September 10, 2015 1:50:14 PM
Subject: Re: [ovirt-users] moving storage and importing vms issue

Hello,


- Original Message -

From: "Jiří Sléžka" 
To: users@ovirt.org
Sent: Thursday, September 10, 2015 1:30:29 AM
Subject: [ovirt-users] moving storage and importing vms issue

Hello,

I am working on some consolidation of our RHEV/oVirt servers and
I moved
one storage to new oVirt datacenter (put it into maintenance,
detached
it from old and imported into new datacenter) which worked pretty
good.

Then I tried to import all the vms which worked also great except
for
three of them.

These VMs are stuck in the VM Import sub-tab and are quietly failing
import attempts (I can only see the failed task "Importing VM
clavius-winxp
from configuration to Cluster CIT-oVirt" but no related event and/or
explanation).

There is only one host in this datacenter/cluster, which is the SPM. I can't
find anything interesting in vdsm.log (a short span around the import time
is in the attachment).


Can you please attach also engine.log ?


sure

well, here I can see an error... it looks like some db and/or snapshot
issue.


Yes, it seems ImportVmFromConfigurationCommand tries to add
snapshots with
the empty GUID (000..0).
This causes a violation of the primary key of the snapshots table.
CCing Omer F on that



Well, it also looks like I lost one secondary disk from one correctly
imported VM.

is there a way to show all images on some storage domain?

I found that my storage is this

[root@ovirt04 ~]# vdsClient -s 0 getStorageDomainInfo
088e7ed9-84c7-4fbd-a570-f37fa986a772
uuid = 088e7ed9-84c7-4fbd-a570-f37fa986a772
vguuid = MkMpr6-o9c1-LBUq-rZ0E-ZRSg-X31T-2aU1PV
state = OK
version = 3
role = Master
type = FCP
class = Data
pool = ['0002-0002-0002-0002-02b9']
name = oVirt-SlowStorage

but I have had no luck finding how to display all images on it.


try

# vdsClient -s getImagesList "088e7ed9-84c7-4fbd-a570-f37fa986a772"


yes, it works :-)

Now I have the list of imgUUIDs on this storage. When I compare it against
the Disks tab in the oVirt manager, I see 5 images that are not visible in
the manager.

346ad5af-9db8-46eb-9a45-172ce3213496
45493042-67f5-4dcd-8dae-5b2c213aa95a
fb8f3165-5976-4094-9d37-ea0b09124547
e15288bc-30ec-4a77-837b-bdc7de37a08b
be5c56de-6a22-4d1a-8579-f0f5d501d90c

Then I tried to find out more about these images:

[root@ovirt04 ~]# vdsClient -s 0 getVolumesList
"088e7ed9-84c7-4fbd-a570-f37fa986a772"
"0002-0002-0002-0002-02b9"
"346ad5af-9db8-46eb-9a45-172ce3213496"
eeca0e49-ba6d-4b4b-9eb4-731b90b48091 : Exported by virt-v2v.
da00feb8-991d-4b91-b424-6931daf00c83 : Parent is
eeca0e49-ba6d-4b4b-9eb4-731b90b48091



[root@ovirt04 ~]# vdsClient -s 0 getVolumesList
"088e7ed9-84c7-4fbd-a570-f37fa986a772"
"0002-0002-0002-0002-02b9"
"45493042-67f5-4dcd-8dae-5b2c213aa95a"

d2916b5d-50e4-482c-aa6b-e26d2c78ef46 : Exported by virt-v2v.



[root@ovirt04 ~]# vdsClient -s 0 getVolumesList
"088e7ed9-84c7-4fbd-a570-f37fa986a772"
"0002-0002-0002-0002-02b9"
"fb8f3165-5976-4094-9d37-ea0b09124547"
cc83caa4-e366-4fd6-94b7-d16089aa29d6 : Parent is
53c5003d-80de-4dfd-b5d8-50537a3a54d6

53c5003d-80de-4dfd-b5d8-50537a3a54d6 : imported by virt-v2v.



[root@ovirt04 ~]# 

[ovirt-users] Conflicts in hooks

2015-10-06 Thread nicolas

Hi,

We're running version 3.5.3.1-1 of oVirt, along with GlusterFS storage. 
Recently we're having plenty of events like these:


2015-Oct-06, 07:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 07:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 07:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-06, 05:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 05:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 05:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-06, 03:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 03:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 03:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-06, 01:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 01:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 01:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 23:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 23:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 23:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 21:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 21:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 21:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 19:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 19:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 19:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 17:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 17:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 17:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.


In the logs I see messages like these:

2015-10-06 07:38:15,898 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-39) [53af3676] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-10-06 07:38:15,913 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-39) [53af3676] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-10-06 07:38:15,928 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-39) [53af3676] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.


What might this be caused by?

Thanks,

Nicolás


Re: [ovirt-users] Support for windows10

2015-10-06 Thread Eli Mesika
CCing Vinzenz 

- Original Message -
> From: "Budur Nagaraju" 
> To: "users" 
> Sent: Tuesday, October 6, 2015 8:59:50 AM
> Subject: [ovirt-users] Support for windows10
> 
> Hi
> 
> Does Ovirt supports windows 10 OS ?
> 
> Thanks,
> Nagaraju
> 
> 


Re: [ovirt-users] Testing ovirt 3.6 RC

2015-10-06 Thread Sandro Bonazzola
On Fri, Oct 2, 2015 at 6:04 PM, wodel youchi  wrote:

> Hi,
>
> Just to not bother you with this problem again.
>
> can the following be a minimum hardware to test ovirt 3.6? (I used the
> same previously with 3.5)
>
> - one machine supporting kvm
> - 8 Gb RAM
> - one nic card
> - storage (in my case NFS4)
> - 2 Gb of RAM to hosted-engine
>

2 GB of RAM for the hosted-engine VM is under the minimum requirements of
ovirt-engine.
A minimum of 4 GB of RAM is required, and 16 GB is recommended.




>
>
> May be I am doing things the wrong way.
>
> thanks
>
> 2015-09-30 23:05 GMT+01:00 wodel youchi :
>
>> Hi again,
>>
>> I did redeploy from scratch; same problem:
>>
>> the engine VM crashes after importing the hosted_storage.
>>
>> 2015-09-30 10:24 GMT+01:00 wodel youchi :
>>
>>> Hi,
>>>
>>> Yes I did update the VM engine.
>>>
>>> 2015-09-30 8:35 GMT+01:00 Simone Tiraboschi :
>>>


 On Tue, Sep 29, 2015 at 9:51 PM, wodel youchi 
 wrote:

> Hi,
>
> has the problem of the engine VM not being shown in the web UI been corrected?
>
> I have updated my installation from oVirt 3.6 beta 6 to RC (I didn't
> redeploy from scratch), and I still have the same problem:
>
>
> no engine VM in the web UI, and attaching the hosted-storage to the default DC
> causes the engine VM to crash.
>

 That patch has been merged:
 https://bugzilla.redhat.com/show_bug.cgi?id=1261996

 The patch is on the engine side, not on the host.
 Did you also update ovirt-engine on the engine VM?


>
> Do I have to redeploy from scratch?
>
> PS: I am using NFS4 for storage.
>
> thanks in advance.
>
>
>

>>>
>>
>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] Conflicts in hooks

2015-10-06 Thread knarra

Hi nicolas,

This issue happens if there are any conflicts between the hook 
scripts present on the servers of your cluster. Try to resolve the 
conflicts by clicking on the 'Resolve Conflicts' button, 
and you should no longer see these messages.


Thanks
kasturi.
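For background, a conflict means the engine found the same hook script with differing content (or properties) across the servers of the cluster; conceptually it is a checksum comparison like the sketch below (the helper name is made up, and the path is an assumption: Gluster hook scripts normally live under /var/lib/glusterd/hooks/1/ on each server):

```shell
# Compare two copies of the same hook script fetched from two hosts.
hooks_differ() {
  a=$(cksum "$1" | cut -d' ' -f1)   # checksum of the script on host A
  b=$(cksum "$2" | cut -d' ' -f1)   # checksum of the script on host B
  if [ "$a" = "$b" ]; then echo in-sync; else echo conflict; fi
}
```

If the copies differ, syncing the script content across servers (or using the 'Resolve Conflicts' dialog to pick one version) makes the event go away.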

On 10/06/2015 01:13 PM, nico...@devels.es wrote:

Hi,

We're running version 3.5.3.1-1 of oVirt, along with GlusterFS 
storage. Recently we're having plenty of events like these:


[...]


Why might it be caused by?

Thanks,

Nicolás


Re: [ovirt-users] MAC address recycling

2015-10-06 Thread Daniel Helgenberger


On 02.10.2015 07:54, Martin Mucha wrote:
> 
> 
> - Original Message -
>>
>>
>> On 27.09.2015 12:25, Martin Mucha wrote:
>>> Hi,
>>>
>>> danken, I do not remember I saw such a bug.
>>>
>>> In 3.5 and 3.6 there were some changes in the MAC pool implementation and its
>>> usage in the system, but the order in which MACs are assigned remained unchanged.
>>> Yes, if you request a MAC from the pool, return it, and request it again, you
>>> will always end up with the same MAC.
>>>
>>> When looking for an available MAC in the pool, we iterate through the available
>>> ranges, selecting the first one with an available MAC:
>>> org.ovirt.engine.core.bll.network.macpoolmanager.MacsStorage#getRangeWithAvailableMac
>>>
>>> and select the 'leftmost' available MAC from it:
>>> org.ovirt.engine.core.bll.network.macpoolmanager.Range#findUnusedMac
>>
>> Thanks clearing up this behaviour!
>>
>> I would open an RFE?
> 
> Yes, please. I'm currently on PTO, and this has to be planned anyway (and 
> that's not done by me). Please specify there as well whether the solution 
> I described in my last mail would be fine for you, or whether you actually 
> need some delay. I believe new methods for replacing would be the ideal 
> solution, but I don't know all your constraints, and I did not look into 
> the code thoroughly enough to be sure how easy/problematic it is. 
> 
FYI:
https://bugzilla.redhat.com/show_bug.cgi?id=1269301

>>
>>>
>>> I understand your problem. We can either a) impose some delay before returning
>>> MACs to the pool or b) randomize MAC acquisition, but we should also specify
>>> how the system should behave when there are not sufficient MACs in the system.
>>> When there is only a small number of MACs left, a) will block other
>>> requests for a MAC when there's no need to do so, and b) will return the same
>>> MAC anyway if there's just one left. Even worse, with a low number of
>>> available MACs (for example 5) and randomized selection, it may work/fail
>>> unpredictably.
>>>
>>> Maybe more proper would be creating a new method on the MAC pool for 'MAC
>>> renew/replace'; that's the actual use case you need, not
>>> delaying/randomizing. You need a different MAC. A method like "I want some MAC
>>> address(es), but not this one(s)". Returned MAC addresses would be
>>> immediately available for others, and the search for another MAC can sensibly
>>> fail ("this MAC address cannot be replaced; there isn't another one").
>>>
>>> M.
>>>
>>> - Original Message -
 On Thu, Sep 24, 2015 at 01:39:42PM +, Daniel Helgenberger wrote:
> Hello,
>
> I recently experienced an issue with mac address uses in ovirt with
> foreman[1].
>
> Bottom line, a mac address was recycled causing an issue where I could
> not
> rebuild a host because of a stale DHCP reservation record.
>
> What is the current behavior regarding the reuse of MAC addresses for new
> VMs?
> Can I somehow delay the recycling of a MAC?

 Martin, I recall we had a bug asking not to immediately re-use a
 recently-released MAC address. Was this possible?

>>>
>>
>> --
>> Daniel Helgenberger
>> m box bewegtbild GmbH
>>
>> P: +49/30/2408781-22
>> F: +49/30/2408781-10
>>
>> ACKERSTR. 19
>> D-10115 BERLIN
>>
>>
>> www.m-box.de  www.monkeymen.tv
>>
>> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
>> Handelsregister: Amtsgericht Charlottenburg / HRB 112767
>>
> 

-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767


Re: [ovirt-users] Support for windows10

2015-10-06 Thread Vinzenz Feenstra

On 10/06/2015 11:15 AM, Eli Mesika wrote:

CCing Vinzenz

- Original Message -

From: "Budur Nagaraju" 
To: "users" 
Sent: Tuesday, October 6, 2015 8:59:50 AM
Subject: [ovirt-users] Support for windows10

Hi

Does Ovirt supports windows 10 OS ?
From our latest tests we can confirm that the latest builds of Win10 
are working on oVirt; some earlier builds had problems, which Microsoft 
had already addressed before the release.

HTH


Thanks,
Nagaraju






--
Regards,

Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



Re: [ovirt-users] Support for windows10

2015-10-06 Thread Budur Nagaraju
I'm using oVirt 3.5 and am unable to find Windows 10 in the field
"Operating System".

On Tue, Oct 6, 2015 at 3:57 PM, Vinzenz Feenstra 
wrote:

> On 10/06/2015 11:15 AM, Eli Mesika wrote:
>
>> CCing Vinzenz
>>
>> - Original Message -
>>
>>> From: "Budur Nagaraju" 
>>> To: "users" 
>>> Sent: Tuesday, October 6, 2015 8:59:50 AM
>>> Subject: [ovirt-users] Support for windows10
>>>
>>> Hi
>>>
>>> Does Ovirt supports windows 10 OS ?
>>>
>> From our latest tests we can confirm that the latest builds of Win10 are
> working on oVirt; some earlier builds had problems, which Microsoft had
> already addressed before the release.
> HTH
>
>
>>> Thanks,
>>> Nagaraju
>>>
>>>
>>>
>>>
>
> --
> Regards,
>
> Vinzenz Feenstra | Senior Software Engineer
> RedHat Engineering Virtualization R & D
> Phone: +420 532 294 625
> IRC: vfeenstr or evilissimo
>
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>


Re: [ovirt-users] Support for windows10

2015-10-06 Thread Vinzenz Feenstra

On 10/06/2015 12:37 PM, Budur Nagaraju wrote:
I'm using oVirt 3.5 and am unable to find Windows 10 in the field 
"Operating System".

Because there was no Windows 10 when we released oVirt 3.5 :-)
You should choose the latest Windows version available; that should be 
OK then.


On Tue, Oct 6, 2015 at 3:57 PM, Vinzenz Feenstra > wrote:


On 10/06/2015 11:15 AM, Eli Mesika wrote:

CCing Vinzenz

- Original Message -

From: "Budur Nagaraju" >
To: "users" >
Sent: Tuesday, October 6, 2015 8:59:50 AM
Subject: [ovirt-users] Support for windows10

Hi

Does Ovirt supports windows 10 OS ?

From our latest tests we can confirm that the latest builds of
Win10 are working on oVirt; some earlier builds had problems,
which Microsoft had already addressed before the release.
HTH


Thanks,
Nagaraju





-- 
Regards,


Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625 
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community
collaboration.
See how it works at redhat.com 





--
Regards,

Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



Re: [ovirt-users] To start vm (runonce) with Rest API

2015-10-06 Thread Artyom Lukianov
You run a POST request on 
https://your_rhevm_ip/ovirt-engine/api/vms/6ea131f6-814c-430f-9594-7ee214a44c94/start
 with a body:
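The XML body appears to have been stripped by the list archive. A sketch of what such a run-once body attaching a floppy payload could look like; the element names are assumptions based on the oVirt 3.x REST API and should be checked against /api?schema for your version:

```xml
<action>
  <vm>
    <payloads>
      <payload type="floppy">
        <files>
          <file>
            <name>user.cfg</name>
            <content>example floppy content</content>
          </file>
        </files>
      </payload>
    </payloads>
  </vm>
</action>
```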

- Original Message -
From: "Qingyun Ao" 
To: users@ovirt.org
Sent: Wednesday, September 30, 2015 9:20:51 AM
Subject: [ovirt-users] To start vm (runonce) with Rest API

Hello, all, 


Before starting a VM for the first time (run once), we can attach a floppy drive 
to the VM. How can I compose the XML body to do this with the oVirt REST API? 


-- 
Best regards, 
Ao Qingyun 
aoqing...@gmail.com 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Conflicts in hooks

2015-10-06 Thread nicolas

Hi kasturi,

Thanks, it seems solved now; I hadn't even realized there was a 
'Gluster Hooks' tab before you wrote!


Regards,

Nicolás

El 2015-10-06 10:21, knarra escribió:

Hi nicolas,

This issue happens if there are any conflicts between the hook
scripts present on the servers of your cluster. Try to resolve the
conflicts by clicking on the "Resolve Conflicts" button,
and you should no longer see these messages.

Thanks
kasturi.

On 10/06/2015 01:13 PM, nico...@devels.es wrote:

Hi,

We're running version 3.5.3.1-1 of oVirt, along with GlusterFS 
storage. Recently we're having plenty of events like these:


2015-Oct-06, 07:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 07:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 07:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-06, 05:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 05:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 05:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-06, 03:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 03:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 03:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-06, 01:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-06, 01:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-06, 01:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 23:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 23:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 23:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 21:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 21:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 21:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 19:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 19:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 19:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.
2015-Oct-05, 17:38 Detected conflict in hook 
set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.
2015-Oct-05, 17:38 Detected conflict in hook 
delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-Oct-05, 17:38 Detected conflict in hook 
start-POST-31ganesha-start.sh of Cluster Cluster.


In the logs I see messages like these:

2015-10-06 07:38:15,898 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-39) [53af3676] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: Detected conflict in 
hook start-POST-31ganesha-start.sh of Cluster Cluster.
2015-10-06 07:38:15,913 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-39) [53af3676] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: Detected conflict in 
hook delete-POST-57glusterfind-delete-post.py of Cluster Cluster.
2015-10-06 07:38:15,928 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-39) [53af3676] Correlation ID: null, 
Call Stack: null, Custom Event ID: -1, Message: Detected conflict in 
hook set-POST-32gluster_enable_shared_storage.sh of Cluster Cluster.


What might this be caused by?

Thanks,

Nicolás
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] Support for windows10

2015-10-06 Thread Budur Nagaraju
I selected Win 8 64-bit under the "operating system" field but was unable to
install successfully. Do I need to select Win 2012? But it's a server
version; please suggest.




On Tue, Oct 6, 2015 at 4:08 PM, Vinzenz Feenstra 
wrote:

> On 10/06/2015 12:37 PM, Budur Nagaraju wrote:
>
> I'm using oVirt 3.5 and am unable to find Windows 10 in the field
> "operating system".
>
> Because there was no Windows 10 when we released oVirt 3.5 :-)
> You should choose the latest Windows version available; that should be OK
> then.
>
>
> On Tue, Oct 6, 2015 at 3:57 PM, Vinzenz Feenstra 
> wrote:
>
>> On 10/06/2015 11:15 AM, Eli Mesika wrote:
>>
>>> CCing Vinzenz
>>>
>>> - Original Message -
>>>
 From: "Budur Nagaraju" 
 To: "users" 
 Sent: Tuesday, October 6, 2015 8:59:50 AM
 Subject: [ovirt-users] Support for windows10

 Hi

 Does oVirt support the Windows 10 OS?

>> From our latest tests we can confirm that the latest builds of Win10 are
>> working on oVirt; some earlier releases had problems which had already been
>> addressed by Microsoft before the release.
>> HTH
>>
>>
 Thanks,
 Nagaraju


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>
>> --
>> Regards,
>>
>> Vinzenz Feenstra | Senior Software Engineer
>> RedHat Engineering Virtualization R & D
>> Phone: +420 532 294 625
>> IRC: vfeenstr or evilissimo
>>
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>>
>
>
> --
> Regards,
>
> Vinzenz Feenstra | Senior Software Engineer
> RedHat Engineering Virtualization R & D
> Phone: +420 532 294 625
> IRC: vfeenstr or evilissimo
>
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>


[ovirt-users] Hanging remove snapshot tasks

2015-10-06 Thread Arsène Gschwind

Hi,

I would like to update oVirt from 3.5.3 to 3.5.4, but engine-setup fails 
due to hanging remove-snapshot tasks.


[ INFO  ] Cleaning async tasks and compensations
  The following system tasks have been found running in the system:
  The following commands have been found running in the system:
  The following compensations have been found running in the 
system:
  org.ovirt.engine.core.bll.RemoveSnapshotCommand 
org.ovirt.engine.core.common.businessentities.Snapshot
  org.ovirt.engine.core.bll.RemoveSnapshotCommand 
org.ovirt.engine.core.common.businessentities.Snapshot
  org.ovirt.engine.core.bll.RemoveSnapshotCommand 
org.ovirt.engine.core.common.businessentities.Snapshot
  org.ovirt.engine.core.bll.RemoveSnapshotCommand 
org.ovirt.engine.core.common.businessentities.Snapshot

  Would you like to try to wait for that?

These tasks are not showing up in the task monitor, and I have no idea how 
to clear them so that I am able to update.
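For tasks like these that never appear in the Tasks pane, the engine packages also ship a database utility, taskcleaner.sh, that can list and clear zombie async tasks. This is an assumption based on 3.5-era packaging (the path below may differ on your install); read its usage output and take an engine database backup before deleting anything:

```python
# Hedged sketch: locate oVirt's task-cleaning DB utility on the engine
# host. The path is assumed from oVirt 3.5-era engine packages; invoked
# with no arguments it should only print its usage, not delete anything,
# but verify that yourself before going further.
import os
import subprocess

TASKCLEANER = "/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh"

def taskcleaner_cmd(extra_args=()):
    """Build the command line; flags come from the script's own usage text."""
    return [TASKCLEANER, *extra_args]

cmd = taskcleaner_cmd()
if os.path.exists(TASKCLEANER):
    subprocess.run(cmd)  # on the engine host: prints usage / task info
else:
    print("taskcleaner.sh not found; run this on the engine host")
```

Only after inspecting what it reports (and with a fresh engine-backup in hand) would clearing the stuck RemoveSnapshotCommand entries and re-running engine-setup be worth trying.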


Thanks for any hint/help

rgds, Arsène


Re: [ovirt-users] Corrupted VM's

2015-10-06 Thread Nir Soffer
On Tue, Oct 6, 2015 at 10:18 AM, Neil  wrote:

> Hi guys,
>
> I had a strange issue on the 3rd of September and I've only got round to
> checking what caused it now. Basically about 4 or 5 Windows Server VM's got
> completely corrupted. When I press start I'd just get a blank screen and
> nothing would display, I tried various things but no matter what I wouldn't
> even get the Seabios display showing the VM was even posting
> The remaining 10 VM's were fine, it was just these 4 or 5 that got
> corrupted and to recover I had to do a full DR restore of the VM's.
>
> I'm concerned that the issue might appear again, which is why I'm mailing
> the list now, does anyone have any clues as to what might have caused this?
> All logs on the FC SAN were fine and all hosts appeared normal...
>
> The following are my versions...
>
> CentOS release 6.5 (Final)
> ovirt-release34-1.0.3-1.noarch
> ovirt-host-deploy-1.2.3-1.el6.noarch
> ovirt-engine-lib-3.4.4-1.el6.noarch
> ovirt-iso-uploader-3.4.4-1.el6.noarch
> ovirt-engine-cli-3.4.0.5-1.el6.noarch
> ovirt-engine-setup-base-3.4.4-1.el6.noarch
> ovirt-engine-websocket-proxy-3.4.4-1.el6.noarch
> ovirt-engine-backend-3.4.4-1.el6.noarch
> ovirt-engine-tools-3.4.4-1.el6.noarch
> ovirt-engine-dbscripts-3.4.4-1.el6.noarch
> ovirt-engine-3.4.4-1.el6.noarch
> ovirt-engine-setup-3.4.4-1.el6.noarch
> ovirt-engine-sdk-python-3.4.4.0-1.el6.noarch
> ovirt-image-uploader-3.4.3-1.el6.noarch
> ovirt-host-deploy-java-1.2.3-1.el6.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.4.4-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.4-1.el6.noarch
> ovirt-engine-restapi-3.4.4-1.el6.noarch
> ovirt-engine-userportal-3.4.4-1.el6.noarch
> ovirt-engine-webadmin-portal-3.4.4-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.4.4-1.el6.noarch
>
> CentOS release 6.5 (Final)
> vdsm-python-zombiereaper-4.14.11.2-0.el6.noarch
> vdsm-cli-4.14.11.2-0.el6.noarch
> vdsm-python-4.14.11.2-0.el6.x86_64
> vdsm-4.14.11.2-0.el6.x86_64
> vdsm-xmlrpc-4.14.11.2-0.el6.noarch
>
> Below are the sanlock.logs from two of my hosts and attached is my
> ovirt-engine.log from the date of the issue...
>
> Node02
> 2015-09-03 10:34:53+0200 33184492 [7369]: 0e6991ae aio timeout 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 7
> 2015-09-03 10:34:53+0200 33184492 [7369]: s1 delta_renew read rv -202
> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
> 2015-09-03 10:34:53+0200 33184492 [7369]: s1 renewal error -202
> delta_length 10 last_success 33184461
> 2015-09-03 10:35:04+0200 33184503 [7369]: 0e6991ae aio timeout 0
> 0x7fbd7910:0x7fbd7920:0x7fbd7feff000 ioto 10 to_count 8
> 2015-09-03 10:35:04+0200 33184503 [7369]: s1 delta_renew read rv -202
> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
> 2015-09-03 10:35:04+0200 33184503 [7369]: s1 renewal error -202
> delta_length 11 last_success 33184461
> 2015-09-03 10:35:05+0200 33184504 [7369]: 0e6991ae aio collect 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 result 1048576:0 other free r
> 2015-09-03 10:35:05+0200 33184504 [7369]: 0e6991ae aio collect 0
> 0x7fbd7910:0x7fbd7920:0x7fbd7feff000 result 1048576:0 match reap
> 2015-09-03 11:03:00+0200 33186178 [7369]: 0e6991ae aio timeout 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 ioto 10 to_count 9
> 2015-09-03 11:03:00+0200 33186178 [7369]: s1 delta_renew read rv -202
> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
> 2015-09-03 11:03:00+0200 33186178 [7369]: s1 renewal error -202
> delta_length 10 last_success 33186147
> 2015-09-03 11:03:07+0200 33186185 [7369]: 0e6991ae aio collect 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 result 1048576:0 other free
> 2015-09-03 11:10:18+0200 33186616 [7369]: 0e6991ae aio timeout 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 10
> 2015-09-03 11:10:18+0200 33186616 [7369]: s1 delta_renew read rv -202
> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
> 2015-09-03 11:10:18+0200 33186616 [7369]: s1 renewal error -202
> delta_length 10 last_success 33186586
> 2015-09-03 11:10:21+0200 33186620 [7369]: 0e6991ae aio collect 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 result 1048576:0 other free
> 2015-09-03 12:39:14+0200 33191953 [7369]: 0e6991ae aio timeout 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 ioto 10 to_count 11
> 2015-09-03 12:39:14+0200 33191953 [7369]: s1 delta_renew read rv -202
> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
> 2015-09-03 12:39:14+0200 33191953 [7369]: s1 renewal error -202
> delta_length 10 last_success 33191922
> 2015-09-03 12:39:19+0200 33191957 [7369]: 0e6991ae aio collect 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 result 1048576:0 other free
> 2015-09-03 12:40:10+0200 33192008 [7369]: 0e6991ae aio timeout 0
> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 12
> 2015-09-03 12:40:10+0200 33192008 [7369]: s1 delta_renew read rv -202
> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
> 

Re: [ovirt-users] Testing ovirt 3.6 RC

2015-10-06 Thread wodel youchi
Thanks,

I know that the minimum is 4 GB for the engine VM; what I want to know
about is importing the hosted_storage into the default DC.
After all the updates, the hosted_storage cannot be imported into the DC;
the engine VM crashes systematically.

2015-10-06 10:42 GMT+01:00 Sandro Bonazzola :

>
>
> On Fri, Oct 2, 2015 at 6:04 PM, wodel youchi 
> wrote:
>
>> Hi,
>>
>> Just to not bother you with this problem again.
>>
>> can the following be a minimum hardware to test ovirt 3.6? (I used the
>> same previously with 3.5)
>>
>> - one machine supporting kvm
>> - 8 Gb RAM
>> - one nic card
>> - storage (in my case NFS4)
>> - 2 Gb of RAM to hosted-engine
>>
>
> 2 GB of RAM for the hosted engine VM is under the minimum requirements of
> ovirt-engine.
> A minimum of 4 GB of RAM is required and 16 GB are recommended
>
>
>
>
>>
>>
>> Maybe I am doing things the wrong way.
>>
>> thanks
>>
>> 2015-09-30 23:05 GMT+01:00 wodel youchi :
>>
>>> Hi again,
>>>
>>> I did redeploy from scratch, same problem
>>>
>>> the VM engine crashes after importing the hosted_storage.
>>>
>>> 2015-09-30 10:24 GMT+01:00 wodel youchi :
>>>
 Hi,

 Yes I did update the VM engine.

 2015-09-30 8:35 GMT+01:00 Simone Tiraboschi :

>
>
> On Tue, Sep 29, 2015 at 9:51 PM, wodel youchi 
> wrote:
>
>> Hi,
>>
>> is the problem about the VM engine not being shown on webui corrected?
>>
>> I have updated my installation ovirt 3.6 beta 6 to RC (I didn't
>> redeploy from scratch), still the same problem
>>
>>
>> no vm engine on webui, attaching the hosted-storage to the default
>> DC, causes the VM engine to crash.
>>
>
> That patch has been merged:
> https://bugzilla.redhat.com/show_bug.cgi?id=1261996
>
> The patch is on the engine side, not on the host.
> Did you also update ovirt-engine on the engine VM?
>
>
>>
>> Do I have to redeploy from scratch?
>>
>> PS: I am using NFS4 for storage.
>>
>> thanks in advance.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>

>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>


Re: [ovirt-users] Corrupted VM's

2015-10-06 Thread Neil
Hi Nir,

Thank you for coming back to me. I see in the ovirt-engine log that one VM
said it ran out of space; do you think perhaps the SAN itself was
over-allocated somehow? Could this cause the issue shown in the logs?

In order to restore I had to delete some old VM's, so this would have freed
up lots of space and, in doing so, perhaps resolved the problem?

Just a thought really.

Thank you.

Regards.

Neil Wilson.



On Tue, Oct 6, 2015 at 2:24 PM, Nir Soffer  wrote:

> On Tue, Oct 6, 2015 at 10:18 AM, Neil  wrote:
>
>> Hi guys,
>>
>> I had a strange issue on the 3rd of September and I've only got round to
>> checking what caused it now. Basically about 4 or 5 Windows Server VM's got
>> completely corrupted. When I press start I'd just get a blank screen and
>> nothing would display, I tried various things but no matter what I wouldn't
>> even get the Seabios display showing the VM was even posting
>> The remaining 10 VM's were fine, it was just these 4 or 5 that got
>> corrupted and to recover I had to do a full DR restore of the VM's.
>>
>> I'm concerned that the issue might appear again, which is why I'm mailing
>> the list now, does anyone have any clues as to what might have caused this?
>> All logs on the FC SAN were fine and all hosts appeared normal...
>>
>> The following are my versions...
>>
>> CentOS release 6.5 (Final)
>> ovirt-release34-1.0.3-1.noarch
>> ovirt-host-deploy-1.2.3-1.el6.noarch
>> ovirt-engine-lib-3.4.4-1.el6.noarch
>> ovirt-iso-uploader-3.4.4-1.el6.noarch
>> ovirt-engine-cli-3.4.0.5-1.el6.noarch
>> ovirt-engine-setup-base-3.4.4-1.el6.noarch
>> ovirt-engine-websocket-proxy-3.4.4-1.el6.noarch
>> ovirt-engine-backend-3.4.4-1.el6.noarch
>> ovirt-engine-tools-3.4.4-1.el6.noarch
>> ovirt-engine-dbscripts-3.4.4-1.el6.noarch
>> ovirt-engine-3.4.4-1.el6.noarch
>> ovirt-engine-setup-3.4.4-1.el6.noarch
>> ovirt-engine-sdk-python-3.4.4.0-1.el6.noarch
>> ovirt-image-uploader-3.4.3-1.el6.noarch
>> ovirt-host-deploy-java-1.2.3-1.el6.noarch
>> ovirt-engine-setup-plugin-websocket-proxy-3.4.4-1.el6.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.4-1.el6.noarch
>> ovirt-engine-restapi-3.4.4-1.el6.noarch
>> ovirt-engine-userportal-3.4.4-1.el6.noarch
>> ovirt-engine-webadmin-portal-3.4.4-1.el6.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-3.4.4-1.el6.noarch
>>
>> CentOS release 6.5 (Final)
>> vdsm-python-zombiereaper-4.14.11.2-0.el6.noarch
>> vdsm-cli-4.14.11.2-0.el6.noarch
>> vdsm-python-4.14.11.2-0.el6.x86_64
>> vdsm-4.14.11.2-0.el6.x86_64
>> vdsm-xmlrpc-4.14.11.2-0.el6.noarch
>>
>> Below are the sanlock.logs from two of my hosts and attached is my
>> ovirt-engine.log from the date of the issue...
>>
>> Node02
>> 2015-09-03 10:34:53+0200 33184492 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 7
>> 2015-09-03 10:34:53+0200 33184492 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 10:34:53+0200 33184492 [7369]: s1 renewal error -202
>> delta_length 10 last_success 33184461
>> 2015-09-03 10:35:04+0200 33184503 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd7910:0x7fbd7920:0x7fbd7feff000 ioto 10 to_count 8
>> 2015-09-03 10:35:04+0200 33184503 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 10:35:04+0200 33184503 [7369]: s1 renewal error -202
>> delta_length 11 last_success 33184461
>> 2015-09-03 10:35:05+0200 33184504 [7369]: 0e6991ae aio collect 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 result 1048576:0 other free r
>> 2015-09-03 10:35:05+0200 33184504 [7369]: 0e6991ae aio collect 0
>> 0x7fbd7910:0x7fbd7920:0x7fbd7feff000 result 1048576:0 match reap
>> 2015-09-03 11:03:00+0200 33186178 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 ioto 10 to_count 9
>> 2015-09-03 11:03:00+0200 33186178 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 11:03:00+0200 33186178 [7369]: s1 renewal error -202
>> delta_length 10 last_success 33186147
>> 2015-09-03 11:03:07+0200 33186185 [7369]: 0e6991ae aio collect 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 result 1048576:0 other free
>> 2015-09-03 11:10:18+0200 33186616 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 10
>> 2015-09-03 11:10:18+0200 33186616 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 11:10:18+0200 33186616 [7369]: s1 renewal error -202
>> delta_length 10 last_success 33186586
>> 2015-09-03 11:10:21+0200 33186620 [7369]: 0e6991ae aio collect 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 result 1048576:0 other free
>> 2015-09-03 12:39:14+0200 33191953 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 ioto 10 to_count 11
>> 2015-09-03 12:39:14+0200 33191953 [7369]: s1 delta_renew read 

Re: [ovirt-users] Testing ovirt 3.6 RC

2015-10-06 Thread Sandro Bonazzola
On Tue, Oct 6, 2015 at 2:52 PM, wodel youchi  wrote:

> Thanks,
>
> I know that the minimum is 4 GB for the engine VM; what I want to know
> about is importing the hosted_storage into the default DC.
> After all the updates, the hosted_storage cannot be imported into the DC;
> the engine VM crashes systematically.
>

Adding Roy



>
> 2015-10-06 10:42 GMT+01:00 Sandro Bonazzola :
>
>>
>>
>> On Fri, Oct 2, 2015 at 6:04 PM, wodel youchi 
>> wrote:
>>
>>> Hi,
>>>
>>> Just to not bother you with this problem again.
>>>
>>> can the following be a minimum hardware to test ovirt 3.6? (I used the
>>> same previously with 3.5)
>>>
>>> - one machine supporting kvm
>>> - 8 Gb RAM
>>> - one nic card
>>> - storage (in my case NFS4)
>>> - 2 Gb of RAM to hosted-engine
>>>
>>
>> 2 GB of RAM for the hosted engine VM is under the minimum requirements of
>> ovirt-engine.
>> A minimum of 4 GB of RAM is required and 16 GB are recommended
>>
>>
>>
>>
>>>
>>>
>>> Maybe I am doing things the wrong way.
>>>
>>> thanks
>>>
>>> 2015-09-30 23:05 GMT+01:00 wodel youchi :
>>>
 Hi again,

 I did redeploy from scratch, same problem

 the VM engine crashes after importing the hosted_storage.

 2015-09-30 10:24 GMT+01:00 wodel youchi :

> Hi,
>
> Yes I did update the VM engine.
>
> 2015-09-30 8:35 GMT+01:00 Simone Tiraboschi :
>
>>
>>
>> On Tue, Sep 29, 2015 at 9:51 PM, wodel youchi wrote:
>>
>>> Hi,
>>>
>>> is the problem about the VM engine not being shown on webui
>>> corrected?
>>>
>>> I have updated my installation ovirt 3.6 beta 6 to RC (I didn't
>>> redeploy from scratch), still the same problem
>>>
>>>
>>> no vm engine on webui, attaching the hosted-storage to the default
>>> DC, causes the VM engine to crash.
>>>
>>
>> That patch has been merged:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1261996
>>
>> The patch is on the engine side, not on the host.
>> Did you also update ovirt-engine on the engine VM?
>>
>>
>>>
>>> Do I have to redeploy from scratch?
>>>
>>> PS: I am using NFS4 for storage.
>>>
>>> thanks in advance.
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>

>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] Managing two DCs with ovirt

2015-10-06 Thread Barak Korren
You could combine method #1 with ManageIQ [1] to get unified management.

Barak.

[1] http://manageiq.org/

On 6 October 2015 at 19:25, Darrell Budic  wrote:
> I use method 1. One thing to consider is that the engine manages HA VMs, 
> migrations, etc. It doesn’t need much bandwidth, but if it can’t talk to 
> nodes, no migrations can happen, either for load balancing or in case of a 
> node or storage failure.
>
> If you had very solid networking, it’s probably fine, but I find it works 
> better in my situation to run a self hosted engine for each cluster.
>
>   -Darrell
>
>> On Oct 5, 2015, at 2:17 PM, wodel youchi  wrote:
>>
>> Hi,
>>
>> I need some help to decide which is better / feasible with ovirt to manage 
>> two or more distant DCs.
>>
>> Let's say that we have two distant DCs to virtualize with oVirt.
>>
>> we have two options to manage them:
>>
>> 1- install two engines, one on each DC. The good side is that if one DC is
>> down, we can still manage the other one; the downside is that we will have
>> two consoles to manage.
>>
>> 2- install one engine to manage the two DCs. The good side is the use of one
>> console to rule them all :-) The downside is that if the DC containing the
>> engine goes down, there is no way to manage the other one.
>>
>> Is there (or will there be in the future) a way, for example, to create a
>> slave engine in the second DC which can take over and let the admin manage
>> the second DC?
>>
>> thanks in advance
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team


Re: [ovirt-users] Managing two DCs with ovirt

2015-10-06 Thread Darrell Budic
I use method 1. One thing to consider is that the engine manages HA VMs, 
migrations, etc. It doesn’t need much bandwidth, but if it can’t talk to nodes, 
no migrations can happen, either for load balancing or in case of a node or 
storage failure.

If you had very solid networking, it’s probably fine, but I find it works 
better in my situation to run a self hosted engine for each cluster.

  -Darrell

> On Oct 5, 2015, at 2:17 PM, wodel youchi  wrote:
> 
> Hi,
> 
> I need some help to decide which is better / feasible with ovirt to manage 
> two or more distant DCs.
> 
> Let's say that we have two distant DCs to virtualize with oVirt.
> 
> we have two options to manage them:
> 
> 1- install two engines, one on each DC. The good side is that if one DC is
> down, we can still manage the other one; the downside is that we will have
> two consoles to manage.
> 
> 2- install one engine to manage the two DCs. The good side is the use of one
> console to rule them all :-) The downside is that if the DC containing the
> engine goes down, there is no way to manage the other one.
> 
> Is there (or will there be in the future) a way, for example, to create a
> slave engine in the second DC which can take over and let the admin manage
> the second DC?
> 
> thanks in advance
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
