[ovirt-users] Re: Are the Host storage requirements changed in oVirt 4.4?

2020-05-20 Thread Stefano Stagnaro
Correction: even if 63 GiB is sufficient for Kickstart, it turns out
that the current ovirt-engine-appliance-4.4-20200520111649.1.el8.x86_64.rpm
needs an additional 197 MiB to install.

In the end, I was able to install oVirt Node with the Engine Appliance on top
with a 64 GiB local hard drive.
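For anyone scripting a kickstart around these numbers, here is a minimal pre-flight sketch of the size check. The 64 GiB threshold comes from the result above; the `lsblk` invocation mentioned in the comment is an assumption about where the size would come from in a real run:

```shell
# Pre-flight check: does the target disk meet the new 63 GiB minimum plus
# ~200 MiB of headroom for the engine appliance RPM? In a real run,
# DISK_BYTES would come from something like: lsblk -bdno SIZE /dev/sda
MIN_GIB=64
DISK_BYTES=$((64 * 1024 * 1024 * 1024))   # sample value: a 64 GiB drive
DISK_GIB=$((DISK_BYTES / 1024 / 1024 / 1024))
if [ "$DISK_GIB" -ge "$MIN_GIB" ]; then
  echo "OK: ${DISK_GIB} GiB is enough for oVirt Node + Engine Appliance"
else
  echo "too small: ${DISK_GIB} GiB < ${MIN_GIB} GiB"
fi
```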

— Stefano

On Wed, May 20, 2020 at 18:22 Stefano Stagnaro <
stefano.stagn...@gmail.com> wrote:

> It seems the new minimum local storage for a host is now 63 GiB,
> compared to 55 GiB in 4.3.
>
> If so, the four Installation Guides should be updated to reflect the new
> value.
>
> Thank you and congratulations for the new release.
>
> — Stefano
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGTEDV47SQDWKTDZS3XG2LE7Z3FTZ4WA/


[ovirt-users] Are the Host storage requirements changed in oVirt 4.4?

2020-05-20 Thread Stefano Stagnaro
It seems the new minimum local storage for a host is now 63 GiB,
compared to 55 GiB in 4.3.

If so, the four Installation Guides should be updated to reflect the new
value.

Thank you and congratulations for the new release.

— Stefano


[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Stefano Stagnaro
Thank you Simone for the clarifications.

I've redeployed with both management and storage FQDNs; now everything seems to 
be in its place.
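For reference, "both management and storage FQDNs" means each node resolves under two names, one per subnet. A sketch of the resulting name resolution follows; the `-storage` hostnames are an invented convention for illustration, not the exact names I used:

```
# /etc/hosts (or DNS) on every node and on the engine - illustrative only
192.168.110.10  engine.ovirt
192.168.110.11  node1.ovirt           # management / front-end
192.168.110.12  node2.ovirt
192.168.110.13  node3.ovirt
192.168.210.11  node1-storage.ovirt   # Gluster / back-end
192.168.210.12  node2-storage.ovirt
192.168.210.13  node3-storage.ovirt
```

During the hyperconverged deployment the storage-side names go into the Gluster wizard, while the management-side names are what the engine uses to add the hosts.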

I only have a couple of questions:

1) In the Gluster deployment wizard, sections 1 (Hosts) and 2 (Additional Hosts) 
are misleading; they should be renamed to something like "Host Configuration: 
Storage side" / "Host Configuration: Management side".

2) What is the real function of the "Gluster Network" cluster traffic type? 
What does it actually do?

Thanks,
Stefano.


[ovirt-users] [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Stefano Stagnaro
Hi,

I've deployed an oVirt HC cluster starting with the latest oVirt Node 4.3.6; this is my 
simple network plan (the FQDNs resolve only the front-end addresses):

                front-end        back-end
engine.ovirt    192.168.110.10   -
node1.ovirt     192.168.110.11   192.168.210.11
node2.ovirt     192.168.110.12   192.168.210.12
node3.ovirt     192.168.110.13   192.168.210.13

In the end I followed the RHHI-V 1.6 Deployment Guide where, in chapter 9 [1], 
it suggests creating a logical network for Gluster traffic. Now I can indeed 
see the back-end addresses added to the address pool:

[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node3.ovirt
Uuid: 3fe33e8b-d073-4d7a-8bda-441c42317c92
State: Peer in Cluster (Connected)
Other names:
192.168.210.13

Hostname: node2.ovirt
Uuid: a95a9233-203d-4280-92b9-04217fa338d8
State: Peer in Cluster (Connected)
Other names:
192.168.210.12
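The "Other names" entries can also be pulled out programmatically, to confirm every peer is known by its back-end address. A sketch follows; the sample output is hard-coded from the listing above, whereas in practice you would pipe the real `gluster peer status`:

```shell
# Extract the extra addresses ("Other names") each peer is known by.
peers=$(cat <<'EOF'
Hostname: node3.ovirt
State: Peer in Cluster (Connected)
Other names:
192.168.210.13

Hostname: node2.ovirt
State: Peer in Cluster (Connected)
Other names:
192.168.210.12
EOF
)
# Print the lines between each "Other names:" header and the next blank line.
backend_addrs=$(printf '%s\n' "$peers" | awk '/^Other names:/{grab=1;next} /^$/{grab=0} grab')
echo "$backend_addrs"
```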

The problem is that the Gluster traffic still seems to flow over the management 
interfaces:

[root@node1 ~]# tcpdump -i ovirtmgmt portrange 49152-49664
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ovirtmgmt, link-type EN10MB (Ethernet), capture size 262144 bytes
14:04:58.746574 IP node2.ovirt.49129 > node1.ovirt.49153: Flags [.], ack 
484303246, win 18338, options [nop,nop,TS val 6760049 ecr 6760932], length 0
14:04:58.753050 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
2507489191:2507489347, ack 2889633200, win 20874, options [nop,nop,TS val 
6760055 ecr 6757892], length 156
14:04:58.753131 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
156:312, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753142 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
312:468, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753148 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
468:624, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753203 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
624:780, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753216 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq 
780:936, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892], length 
156
14:04:58.753231 IP node1.ovirt.49152 > node2.ovirt.49131: Flags [.], ack 936, 
win 15566, options [nop,nop,TS val 6760978 ecr 6760055], length 0
...

and not yet on eth1, the interface I dedicated to Gluster:

[root@node1 ~]# tcpdump -i eth1 portrange 49152-49664
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes

What am I missing here? What can I do to force the Gluster traffic to actually 
flow over the dedicated Gluster network?
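Besides tcpdump, the established brick connections themselves show which subnet Gluster binds to; if the local endpoints sit on the management subnet, relabeling networks in the engine alone won't move the traffic. A sketch with hard-coded sample output follows (in practice it would come from something like `ss -tn 'sport >= 49152 and sport <= 49664'`; exact fields vary by ss version):

```shell
# Sample connection list: local address:port on the left, peer on the right.
ss_sample=$(cat <<'EOF'
ESTAB 0 0 192.168.110.11:49152 192.168.110.12:49131
ESTAB 0 0 192.168.110.11:49153 192.168.110.13:49129
EOF
)
# Count brick connections touching the management vs. the storage subnet.
mgmt=$(printf '%s\n' "$ss_sample" | grep -c '192\.168\.110\.')
stor=$(printf '%s\n' "$ss_sample" | grep -c '192\.168\.210\.')
echo "mgmt=$mgmt stor=$stor"
```

In this sample every brick connection involves only 192.168.110.x addresses, matching what tcpdump showed; bricks follow the hostnames the peers were probed with, which is why redeploying with separate storage FQDNs (as in my follow-up) fixed it.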

Thank you,
Stefano.

[1] https://red.ht/2MiZ4Ge


[ovirt-users] Re: oVirt conference 2019 - Rome - October 4th

2019-09-12 Thread Stefano Stagnaro
For those who won't be able to join, please help us spread the word about the 
event on LinkedIn [1], Twitter [2] and Facebook [3], and by emailing any 
contacts who might be interested.

Thank you very much for your help.

[1] https://lnkd.in/g93fqfU
[2] https://twitter.com/ovirt/status/1171369779485192192
[3] https://www.facebook.com/events/2279341448854815/

--

Stefano Stagnaro


[ovirt-users] Re: Moving (or removing) an HE host should be prevented

2019-06-18 Thread Stefano Stagnaro
I've been pointed to https://bugzilla.redhat.com/show_bug.cgi?id=1702016 

Thank you Simone. 


[ovirt-users] Moving (or removing) an HE host should be prevented

2019-06-06 Thread Stefano Stagnaro
I've realized that moving an HE host to another DC/cluster is not prevented, 
leading to an awkward situation in which the host is UP in the new cluster but 
still retains the "silver crown". Moreover, "hosted-engine --vm-status" on another HE 
host still shows the departed host with score 0. The same situation can be 
reached by removing an HE host that was previously put into Maintenance.

I think those operations should be prevented for HE hosts, and a warning message 
like "Please undeploy HE first" should be displayed. Otherwise, trigger the HE 
undeployment on host move/remove.

B.R.,
Stefano Stagnaro


[ovirt-users] How to kill hanged task listed in engine but not in vdsm-client?

2018-11-26 Thread Stefano Stagnaro
Hello,

I'm struggling with a hung task. I can see it from the REST API (and UI) as:

<job>
  <description>Adding Disk</description>
  <auto_cleared>true</auto_cleared>
  <external>false</external>
  <last_updated>2018-11-23T18:40:59.466+01:00</last_updated>
  <start_time>2018-11-23T18:06:18.512+01:00</start_time>
  <status>started</status>
</job>

The job has been hung since a user killed the fallocate process on the host. 
However, I can't kill the task from the hosts themselves, since vdsm-client 
returns zero tasks on any host:

# vdsm-client Host getAllTasksInfo
{}

How can I deal with it? I already tried restarting ovirt-engine.service, but 
the task is still there and prevents me from putting the involved storage 
domain into maintenance.

I also tried the "clear" method of the "jobs" service via the REST API, but it 
fails since it's not an external job.
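Since vdsm sees no tasks, the job only lives on the engine side. Here is a small sketch of how one might flag such a stale job by its age; the timestamps are hard-coded from the job above, and GNU date is assumed:

```shell
# Flag a job as stale if it has been running for more than 3 hours.
start='2018-11-23 18:06:18 +0100'   # the job's start_time
now='2018-11-23 22:40:59 +0100'     # frozen "now" so the example is reproducible
age_sec=$(( $(date -d "$now" +%s) - $(date -d "$start" +%s) ))
if [ "$age_sec" -gt $((3 * 3600)) ]; then
  echo "stale job: running for $((age_sec / 60)) minutes"
fi
```

For actually clearing such a job, ovirt-engine ships DB maintenance helpers (e.g. taskcleaner.sh under /usr/share/ovirt-engine/setup/dbutils/), though I would not touch the engine DB without guidance.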

Thank you,
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro


[ovirt-users] oVirt 4.2.3 on RHEL/CentOS 7.5

2018-05-14 Thread Stefano Stagnaro
Hello,

since the oVirt 4.2.3 release notes still indicate availability for EL 7.4, I 
would like to know whether the release has been tested on EL 7.5 yet and 
whether it's safe to install VDSM on it (instead of oVirt Node).

Thank you.
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro



Re: [ovirt-users] Frequent vm migration failure

2018-04-17 Thread Stefano Stagnaro
On Thu, 2018-04-12 at 20:20 +0200, Michal Skrivanek wrote:
> > On 12 Apr 2018, at 18:26, Stefano Stagnaro <stefanos@prismatelecomt
> > esting.com> wrote:
> > Hi,
> > 
> > I recently upgraded an oVirt deployment from 3.6 to 4.0 and then
> > 4.1.9 (my current release). Since then, when migrating many VMs
> > simultaneously I always experience a few migration failures, roughly
> > 1 in 10 VMs. The failure can occur on any host; moreover, after a
> > couple of failures the destination host falls into Error status and I
> > have to manually re-activate it or wait 30 min.
> > 
> > The typical error found in the vdsm log (from the source host) is:
> > 2018-04-12 17:01:32,097+0200 ERROR (migsrc/3192dfe7) [virt.vm]
> > (vmId='3192dfe7-eeac-4626-8c86-e49facc9006f') migration destination
> > error: Fatal error during migration (migration:287)
> > 
> > Please find the logs of source host (v15.ovirt), destination host
> > (v14.ovirt) and engine here: https://www.dropbox.com/sh/xhf8ry4ih40
> > poxd/AABxiFCIxDe14HSx2DqLE61ya?dl=0
> > 
> > Some of the VMs affected by the migration failure are:
> > svn     3192dfe7-eeac-4626-8c86-e49facc9006f
> > wood    a8e83ff0-dfed-4074-b6b6-e947b8ebb952
> > qnx66   5697c4a4-9e40-4dd6-aba2-c8ab9904a584
> 
> can you also include qemu log from /var/log/libvirt/qemu/?

Hi Michal, I've added libvirt logs for relevant VMs on the previous
Dropbox share.
> btw you seem to be using the legacy migration policy, throttling the
> speed significantly. Please read up on the migration enhancements in
> 4.0:
> https://www.ovirt.org/develop/release-management/features/virt/migration-enhancements/

I've already moved to Minimal Downtime and then to Post-copy, with the same
results: VM migrations continue to fail randomly.
> Thanks,
> michal

Thanks,
Stefano.

> > Thank you very much for your help.
> > 
> > -- 
> > Stefano Stagnaro
> > 
> > Prisma Telecom Testing S.r.l.
> > Via Petrocchi, 4
> > 20127 Milano – Italy
> > 
> > Tel. 02 26113507 int 339
> > e-mail: stefa...@prismatelecomtesting.com
> > skype: stefano.stagnaro
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Frequent vm migration failure

2018-04-12 Thread Stefano Stagnaro
Hi,

I recently upgraded an oVirt deployment from 3.6 to 4.0 and then 4.1.9 (my 
current release). Since then, when migrating many VMs simultaneously I always 
experience a few migration failures, roughly 1 in 10 VMs. The failure can occur 
on any host; moreover, after a couple of failures the destination host falls 
into Error status and I have to manually re-activate it or wait 30 min.

The typical error found in the vdsm log (from the source host) is:
2018-04-12 17:01:32,097+0200 ERROR (migsrc/3192dfe7) [virt.vm] 
(vmId='3192dfe7-eeac-4626-8c86-e49facc9006f') migration destination error: 
Fatal error during migration (migration:287)

Please find the logs of source host (v15.ovirt), destination host (v14.ovirt) 
and engine here: 
https://www.dropbox.com/sh/xhf8ry4ih40poxd/AABxiFCIxDe14HSx2DqLE61ya?dl=0

Some of the VMs affected by the migration failure are:
svn     3192dfe7-eeac-4626-8c86-e49facc9006f
wood    a8e83ff0-dfed-4074-b6b6-e947b8ebb952
qnx66   5697c4a4-9e40-4dd6-aba2-c8ab9904a584

Thank you very much for your help.

-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro


Re: [ovirt-users] HE vm fails to migrate due to host CPU incompatibility even after changing cluster CPU type

2017-03-09 Thread Stefano Stagnaro
On Wed, 2017-03-08 at 13:58 +0100, Simone Tiraboschi wrote:
> 
> 
> On Wed, Mar 8, 2017 at 1:02 PM, Sandro Bonazzola <sbona...@redhat.com> wrote:
> 
> 
> On Sat, Mar 4, 2017 at 12:04 PM, Stefano Stagnaro 
> <stefa...@prismatelecomtesting.com> wrote:
> Hi guys,
> 
> > I've started an oVirt 4.1 HE deployment on a Broadwell-based 
> > server. Then I added a second, older Nehalem-based host to HE. I 
> > downgraded the cluster CPU type to Nehalem to accommodate host2, and it 
> > finally reached score 3400. However, when I try to migrate the HE VM it 
> > fails with the following error:
> 
> 2017-03-03 20:19:51,814 ERROR (migsrc/b0d38435) [virt.vm] 
> (vmId='b0d38435-5774-4ca9-ad24-70b57b5bc25d') unsupported configuration: 
> guest and host CPU are not compatible: Host CPU does not provide required 
> features: pclmuldq, fma, pcid, x2apic, movbe, tsc-deadline, aes, xsave, avx, 
> fsgsbase, bmi1, hle, avx2, smep, bmi2, erms, invpcid, rtm, rdseed, adx, smap, 
> 3dnowprefetch; try using 'Broadwell-noTSX' CPU model (migration:265)
> 
> > I believe the problem is in the HE VM XML, where the CPU is 
> > still configured as Broadwell. How can I change this specific setting 
> > without losing the deployment? Please find all the relevant logs at the 
> > following link: https://www.dropbox.com/sh/njl9aofhdw10ses/AADf2Ql4GKVIKcbgLivbmjC2a
> > 
> > Besides that, I believe this is wrong behavior, because HE 
> > should follow the cluster properties (or otherwise not reach score 
> > 3400); do you think it is worth opening an issue on Bugzilla?
> 
> 
> I would consider opening a BZ to track this. Adding some people who 
> may have some insight on the issue.
> 
> 
> The definition for the engine VM is extracted by ovirt-ha-agent from 
> the OVF_STORE volume; I'm not sure why the engine doesn't update it when you 
> change cluster properties. I think it's worth filing a bug.
> Stefano, did you try simply changing the number of cores for the engine VM 
> from the engine, to force a configuration update?
> 
> Thank you,
> Stefano.
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community 
> collaboration.
> See how it works at redhat.com
> 
> 

Hi Simone,

I tried changing the number of cores and, after a reboot, the CPU model 
changed to Nehalem. The HostedEngine VM is now migrating correctly.

I will open a ticket anyway, pointing out this specific behavior.

Thank you very much for the support!

-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro



[ovirt-users] HE vm fails to migrate due to host CPU incompatibility even after changing cluster CPU type

2017-03-04 Thread Stefano Stagnaro

Hi guys,

I've started an oVirt 4.1 HE deployment on a Broadwell-based server. 
Then I added a second, older Nehalem-based host to HE. I downgraded 
the cluster CPU type to Nehalem to accommodate host2, and it finally 
reached score 3400. However, when I try to migrate the HE VM it 
fails with the following error:


2017-03-03 20:19:51,814 ERROR (migsrc/b0d38435) [virt.vm] 
(vmId='b0d38435-5774-4ca9-ad24-70b57b5bc25d') unsupported configuration: 
guest and host CPU are not compatible: Host CPU does not provide 
required features: pclmuldq, fma, pcid, x2apic, movbe, tsc-deadline, 
aes, xsave, avx, fsgsbase, bmi1, hle, avx2, smep, bmi2, erms, invpcid, 
rtm, rdseed, adx, smap, 3dnowprefetch; try using 'Broadwell-noTSX' CPU 
model (migration:265)
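The feature list in the error is exactly what the stale Broadwell definition demands and Nehalem lacks. Below is a quick sketch of checking a host's flags against such a list; the sample flag set is an invented Nehalem-like subset, and note that libvirt feature names don't always match /proc/cpuinfo flag names exactly (e.g. pclmuldq vs. pclmulqdq):

```shell
# Which required guest CPU features are missing from this host?
# host_flags would normally come from: grep -m1 '^flags' /proc/cpuinfo
required='pclmuldq fma pcid x2apic movbe aes xsave avx bmi1 avx2 smep bmi2'
host_flags='fpu vme de pse tsc msr pae sse sse2 ssse3 sse4_1 sse4_2 popcnt lahf_lm'
missing=''
for f in $required; do
  case " $host_flags " in
    *" $f "*) ;;                  # feature present
    *) missing="$missing $f" ;;   # feature absent on this host
  esac
done
echo "missing:$missing"
```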


I believe the problem is in the HE VM XML, where the CPU is still 
configured as Broadwell. How can I change this specific setting without 
losing the deployment? Please find all the relevant logs at the 
following link: 
https://www.dropbox.com/sh/njl9aofhdw10ses/AADf2Ql4GKVIKcbgLivbmjC2a


Besides that, I believe this is wrong behavior, because HE should 
follow the cluster properties (or otherwise not reach score 3400); do you 
think it is worth opening an issue on Bugzilla?


Thank you,
Stefano.



Re: [ovirt-users] Hosted Engine installation fails while deploying in hyper-converged configuration

2016-04-22 Thread Stefano Stagnaro
I can confirm the problem.

Thank you!

On Fri, 2016-04-22 at 15:02 +0200, Simone Tiraboschi wrote:
> On Fri, Apr 22, 2016 at 2:48 PM, Stefano Stagnaro
> <stefa...@prismatelecomtesting.com> wrote:
> > [root@h4 ~]# /usr/sbin/dmidecode -s system-uuid
> > Not Settable
> 
> Ok, the issue is there.
> Please check your BIOS/UEFI settings.
> 
> > On Fri, 2016-04-22 at 14:35 +0200, Simone Tiraboschi wrote:
> >> On Fri, Apr 22, 2016 at 2:26 PM, Stefano Stagnaro
> >> <stefa...@prismatelecomtesting.com> wrote:
> >> > Ciao Simone,
> >> >
> >> > here it is:
> >> >
> >> > [root@h4 ~]# vdsClient -s 0 getVdsCaps
> >> > Traceback (most recent call last):
> >> >   File "/usr/share/vdsm/vdsClient.py", line 3001, in 
> >> > code, message = commands[command][0](commandArgs)
> >> >   File "/usr/share/vdsm/vdsClient.py", line 542, in do_getCap
> >> > return self.ExecAndExit(self.s.getVdsCapabilities())
> >> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
> >> > return self.__send(self.__name, args)
> >> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
> >> > verbose=self.__verbose
> >> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
> >> > return self.single_request(host, handler, request_body, verbose)
> >> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1306, in single_request
> >> > return self.parse_response(response)
> >> >   File "/usr/lib/python2.7/site-packages/vdsm/vdscli.py", line 43, in 
> >> > wrapped_parse_response
> >> > return old_parse_response(*args, **kwargs)
> >> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1482, in parse_response
> >> > return u.close()
> >> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 794, in close
> >> > raise Fault(**self._stack[0])
> >> > Fault: :cannot marshal None 
> >> > unless allow_none is enabled">
> >> >
> >> > Thanks.
> >>
> >> Probably it's failing reading the UUID of that server.
> >> Can you please try executing
> >>  /usr/sbin/dmidecode -s system-uuid
> >> ?
> >>
> >> > On Fri, 2016-04-22 at 13:36 +0200, Simone Tiraboschi wrote:
> >> >> On Fri, Apr 22, 2016 at 1:07 PM, Stefano Stagnaro
> >> >> <stefa...@prismatelecomtesting.com> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > while deploying Hosted Engine in hyper-converged configuration, 
> >> >> > installation fails with following error:
> >> >> > [ ERROR ] Failed to execute stage 'Environment customization':  >> >> > 1: ":cannot marshal None unless 
> >> >> > allow_none is enabled">
> >> >>
> >> >> Ciao Stefano,
> >> >> it's failing here:
> >> >>
> >> >> Traceback (most recent call last):
> >> >>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146,
> >> >> in _executeMethod
> >> >> method['method']()
> >> >>   File 
> >> >> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py",
> >> >> line 116, in _customization
> >> >> self.environment[ohostedcons.VDSMEnv.VDS_CLI]
> >> >>   File 
> >> >> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vds_info.py",
> >> >> line 31, in capabilities
> >> >> result = conn.getVdsCapabilities()
> >> >>
> >> >> since it probably get a None value somewhere in the result of 
> >> >> getVdsCapabilities
> >> >>
> >> >> Can you please attach also the output of
> >> >>   vdsClient -s 0 getVdsCaps
> >> >> ?
> >> >>
> >> >> thanks
> >> >>
> >> >>
> >> >> >
> >> >> > I followed Ramesh post: 
> >> >> > http://blogs-ramesh.blogspot.it/2016/01/ovirt-and-gluster-hyperconvergence.html
> >> >> >
> >> >> > My setup includes 3 hosts with:
> >> >> > - ovirt 3.6.3
> >> >> > - glusterfs 3.7.8
> >> >> > - vdsm 4.17.23

Re: [ovirt-users] Hosted Engine installation fails while deploying in hyper-converged configuration

2016-04-22 Thread Stefano Stagnaro
[root@h4 ~]# /usr/sbin/dmidecode -s system-uuid
Not Settable
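The "Not Settable" output is the root cause: hosted-engine setup reads the SMBIOS system UUID via getVdsCapabilities, and a missing UUID surfaces as the "cannot marshal None" fault. A sanity check one could run before deploying might look like this (the regex assumes the usual RFC 4122 textual form):

```shell
# Validate the SMBIOS system UUID; a healthy host prints an RFC 4122-style UUID.
# uuid would normally come from: /usr/sbin/dmidecode -s system-uuid
uuid='Not Settable'   # the broken value reported in this thread
if printf '%s' "$uuid" | grep -qiE '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  echo "system UUID ok: $uuid"
else
  echo "invalid system UUID: '$uuid' - fix it in the BIOS/UEFI settings"
fi
```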

On Fri, 2016-04-22 at 14:35 +0200, Simone Tiraboschi wrote:
> On Fri, Apr 22, 2016 at 2:26 PM, Stefano Stagnaro
> <stefa...@prismatelecomtesting.com> wrote:
> > Ciao Simone,
> >
> > here it is:
> >
> > [root@h4 ~]# vdsClient -s 0 getVdsCaps
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/vdsClient.py", line 3001, in 
> > code, message = commands[command][0](commandArgs)
> >   File "/usr/share/vdsm/vdsClient.py", line 542, in do_getCap
> > return self.ExecAndExit(self.s.getVdsCapabilities())
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
> > return self.__send(self.__name, args)
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
> > verbose=self.__verbose
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
> > return self.single_request(host, handler, request_body, verbose)
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1306, in single_request
> > return self.parse_response(response)
> >   File "/usr/lib/python2.7/site-packages/vdsm/vdscli.py", line 43, in 
> > wrapped_parse_response
> > return old_parse_response(*args, **kwargs)
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1482, in parse_response
> > return u.close()
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 794, in close
> > raise Fault(**self._stack[0])
> > Fault: :cannot marshal None unless 
> > allow_none is enabled">
> >
> > Thanks.
> 
> Probably it's failing reading the UUID of that server.
> Can you please try executing
>  /usr/sbin/dmidecode -s system-uuid
> ?
> 
> >> On Fri, 2016-04-22 at 13:36 +0200, Simone Tiraboschi wrote:
> >> On Fri, Apr 22, 2016 at 1:07 PM, Stefano Stagnaro
> >> <stefa...@prismatelecomtesting.com> wrote:
> >> > Hi,
> >> >
> >> > while deploying Hosted Engine in hyper-converged configuration, 
> >> > installation fails with following error:
> >> > [ ERROR ] Failed to execute stage 'Environment customization':  >> > ":cannot marshal None unless allow_none is 
> >> > enabled">
> >>
> >> Ciao Stefano,
> >> it's failing here:
> >>
> >> Traceback (most recent call last):
> >>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146,
> >> in _executeMethod
> >> method['method']()
> >>   File 
> >> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py",
> >> line 116, in _customization
> >> self.environment[ohostedcons.VDSMEnv.VDS_CLI]
> >>   File 
> >> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vds_info.py",
> >> line 31, in capabilities
> >> result = conn.getVdsCapabilities()
> >>
> >> since it probably get a None value somewhere in the result of 
> >> getVdsCapabilities
> >>
> >> Can you please attach also the output of
> >>   vdsClient -s 0 getVdsCaps
> >> ?
> >>
> >> thanks
> >>
> >>
> >> >
> >> > I followed Ramesh post: 
> >> > http://blogs-ramesh.blogspot.it/2016/01/ovirt-and-gluster-hyperconvergence.html
> >> >
> >> > My setup includes 3 hosts with:
> >> > - ovirt 3.6.3
> >> > - glusterfs 3.7.8
> >> > - vdsm 4.17.23.2
> >> > - replica 3 gluster volume
> >> > - no firewall
> >> > - selinux enforcing
> >> > - 1 network for management, 1 network for gluster
> >> >
> >> > I've uploaded logs on pastebin:
> >> > - hosted-engine --deploy http://pastebin.com/aZwwdt51
> >> > - ovirt-hosted-engine-setup.log http://pastebin.com/114zfbAB
> >> > - rhev-data-center-mnt-glusterSD-h4gfs.newvirt:_engine.log 
> >> > http://pastebin.com/GL4yyT95
> >> > - vdsm.log http://pastebin.com/WjSm0cHQ
> >> >
> >> >
> >> > Thank you very much for your help.
> >> >
> >> > --
> >> > Stefano Stagnaro
> >> >
> >> > Prisma Telecom Testing S.r.l.
> >> > Via Petrocchi, 4
> >> > 20127 Milano – Italy
> >> >
> >> > Tel. 02 26113507 int 339
> >> > e-mail: stefa...@prismatelecomtesting.com
> >> > skype: stefano.stagnaro
> >> >
> >
> >

-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro



Re: [ovirt-users] Hosted Engine installation fails while deploying in hyper-converged configuration

2016-04-22 Thread Stefano Stagnaro
Ciao Simone,

here it is:

[root@h4 ~]# vdsClient -s 0 getVdsCaps
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 3001, in 
code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 542, in do_getCap
return self.ExecAndExit(self.s.getVdsCapabilities())
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1306, in single_request
return self.parse_response(response)
  File "/usr/lib/python2.7/site-packages/vdsm/vdscli.py", line 43, in 
wrapped_parse_response
return old_parse_response(*args, **kwargs)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1482, in parse_response
return u.close()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 794, in close
raise Fault(**self._stack[0])
Fault: :cannot marshal None unless 
allow_none is enabled">

Thanks.

On Fri, 2016-04-22 at 13:36 +0200, Simone Tiraboschi wrote:
> On Fri, Apr 22, 2016 at 1:07 PM, Stefano Stagnaro
> <stefa...@prismatelecomtesting.com> wrote:
> > Hi,
> >
> > while deploying Hosted Engine in hyper-converged configuration, 
> > installation fails with following error:
> > [ ERROR ] Failed to execute stage 'Environment customization':  > ":cannot marshal None unless allow_none is 
> > enabled">
> 
> Ciao Stefano,
> it's failing here:
> 
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146,
> in _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py",
> line 116, in _customization
> self.environment[ohostedcons.VDSMEnv.VDS_CLI]
>   File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vds_info.py",
> line 31, in capabilities
> result = conn.getVdsCapabilities()
> 
> since it probably get a None value somewhere in the result of 
> getVdsCapabilities
> 
> Can you please attach also the output of
>   vdsClient -s 0 getVdsCaps
> ?
> 
> thanks
> 
> 
> >
> > I followed Ramesh post: 
> > http://blogs-ramesh.blogspot.it/2016/01/ovirt-and-gluster-hyperconvergence.html
> >
> > My setup includes 3 hosts with:
> > - ovirt 3.6.3
> > - glusterfs 3.7.8
> > - vdsm 4.17.23.2
> > - replica 3 gluster volume
> > - no firewall
> > - selinux enforcing
> > - 1 network for management, 1 network for gluster
> >
> > I've uploaded logs on pastebin:
> > - hosted-engine --deploy http://pastebin.com/aZwwdt51
> > - ovirt-hosted-engine-setup.log http://pastebin.com/114zfbAB
> > - rhev-data-center-mnt-glusterSD-h4gfs.newvirt:_engine.log 
> > http://pastebin.com/GL4yyT95
> > - vdsm.log http://pastebin.com/WjSm0cHQ
> >
> >
> > Thank you very much for your help.
> >
> > --
> > Stefano Stagnaro
> >
> > Prisma Telecom Testing S.r.l.
> > Via Petrocchi, 4
> > 20127 Milano – Italy
> >
> > Tel. 02 26113507 int 339
> > e-mail: stefa...@prismatelecomtesting.com
> > skype: stefano.stagnaro
> >




Re: [ovirt-users] CentOS Virtualization SIG is not aligned with latest oVirt release

2016-04-22 Thread Stefano Stagnaro
Hi Sandro,

thank you very much for the explanation.

On Fri, 2016-04-22 at 13:35 +0200, Sandro Bonazzola wrote:
> 
> 
> On Fri, Apr 22, 2016 at 1:26 PM, Stefano Stagnaro
> <stefa...@prismatelecomtesting.com> wrote:
> Hi,
> 
> today oVirt 3.6.5 has been released and I tried to perform a
> fresh installation through the CentOS Virtualization SIG:
> 
> # yum install centos-release-ovirt36
> 
> which has installed, for dependencies, also
> - centos-release-gluster37
> - centos-release-virt-common
> - centos-release-qemu-ev
> - centos-release-storage-common
> 
> I immediately noticed that oVirt is still at version 3.6.3
> (instead of 3.6.5). This makes it difficult to move to the CentOS
> Virtualization SIG.
> 
> Besides that, glusterfs is still at version 3.7.8 (instead of
> 3.7.11). I understand this is a different SIG but oVirt is
> affected in some way.
> 
> 
> Virt SIG RPMs are rebuilt from oVirt RPMs and need to pass a testing
> period before going to the release repository, so the Virt SIG usually
> comes ~1 week after the oVirt GA.
> For gluster it's the same; 3.7.11 is currently in
> testing: http://cbs.centos.org/koji/buildinfo?buildID=10550
> You can help get it promoted to release by providing feedback on the
> centos-devel mailing list.
> 
> 
>  
> 
> Thank you,
> --
> Stefano Stagnaro
> 
> Prisma Telecom Testing S.r.l.
> Via Petrocchi, 4
> 20127 Milano – Italy
> 
> Tel. 02 26113507 int 339
> e-mail: stefa...@prismatelecomtesting.com
> skype: stefano.stagnaro
> 
> 
> 
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community
> collaboration.
> See how it works at redhat.com
> 




Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Stefano Stagnaro
The installation completed successfully with the ovirt-release35-005-1.noarch.rpm 
you provided.

Thanks.
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro


On Thu, 2015-07-09 at 17:18 +0200, Sandro Bonazzola wrote:
 On 09/07/2015 11:48, Stefano Stagnaro wrote:
  Hi,
  
  adding epel-6 gives another error:
  
  Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
 Requires: vdsm = 4.16.20-0.el6
 Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
 vdsm = 4.16.20-0.el6
 Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
 vdsm = 4.16.20-1.git3a90f62.el6
 Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
 vdsm = 4.16.7-1.gitdb83943.el6
 Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
 vdsm = 4.16.10-0.el6
 Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
 vdsm = 4.16.10-8.gitc937927.el6
 Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
 vdsm = 4.16.14-0.el6
  
  ovirt-3.5-dependencies.repo already provides epel packages but pyxattr and 
  userspace-rcu are missing in the whitelist.
  
  Appending pyxattr,userspace-rcu in the includepkgs seems to have resolved 
  it.
  
  Can anyone fix it?
 
 Fix: https://gerrit.ovirt.org/43397
 
 test builds are available here: 
 http://jenkins.ovirt.org/job/ovirt-release_master_gerrit/164/
 
 Stefano, can you help verifying?
 
  
  Thanks,
  
 
 





Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Stefano Stagnaro
Hi,

adding epel-6 gives another error:

Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
   Requires: vdsm = 4.16.20-0.el6
   Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
   vdsm = 4.16.20-0.el6
   Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
   vdsm = 4.16.20-1.git3a90f62.el6
   Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.7-1.gitdb83943.el6
   Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.10-0.el6
   Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.10-8.gitc937927.el6
   Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.14-0.el6

ovirt-3.5-dependencies.repo already provides EPEL packages, but pyxattr and 
userspace-rcu are missing from the whitelist.

Appending pyxattr,userspace-rcu to the includepkgs line seems to have resolved it.
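For reference, this kind of manual fix — appending missing packages to a repo's includepkgs whitelist — can also be scripted. A minimal sketch using Python's configparser; the file path and section name in the commented usage are assumptions, check the actual .repo file:

```python
import configparser

def append_includepkgs(repo_path, section, extra_pkgs):
    """Append package names to a yum repo section's includepkgs whitelist."""
    cfg = configparser.ConfigParser()
    cfg.read(repo_path)
    pkgs = cfg.get(section, "includepkgs", fallback="").split()
    for pkg in extra_pkgs:
        if pkg not in pkgs:          # avoid duplicating existing entries
            pkgs.append(pkg)
    cfg.set(section, "includepkgs", " ".join(pkgs))
    with open(repo_path, "w") as f:
        cfg.write(f)

# hypothetical usage for the whitelist discussed in this thread:
# append_includepkgs("/etc/yum.repos.d/ovirt-3.5-dependencies.repo",
#                    "ovirt-3.5-epel", ["pyxattr", "userspace-rcu"])
```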

Can anyone fix it?

Thanks,
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro

On Thu, 2015-07-09 at 02:45 -0400, Darshan Narayana Murthy wrote:
 Hi,
 
Can you please try enabling epel 6 repo. It should have the needed
 dependencies.
 
 -Regards
 Darshan N
 
 - Original Message -
  From: Stefano Stagnaro stefa...@prismatelecomtesting.com
  To: users@ovirt.org
  Sent: Wednesday, July 8, 2015 4:09:42 PM
  Subject: [ovirt-users] GlusterFS host installation failed
  
  Hi,
  
  host installation in a GlusterFS cluster is failing due to dependency
  errors:
  
  Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
  (ovirt-3.5-glusterfs-epel)
 Requires: pyxattr
  Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
  (ovirt-3.5-glusterfs-epel)
 Requires: liburcu-cds.so.1()(64bit)
  Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
  (ovirt-3.5-glusterfs-epel)
 Requires: liburcu-bp.so.1()(64bit)
  
  oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6
  
  I've installed the following repo:
  http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
  
  Thank you,
  --
  Stefano Stagnaro
  
  Prisma Telecom Testing S.r.l.
  Via Petrocchi, 4
  20127 Milano – Italy
  
  Tel. 02 26113507 int 339
  e-mail: stefa...@prismatelecomtesting.com
  skype: stefano.stagnaro
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  





[ovirt-users] GlusterFS host installation failed

2015-07-08 Thread Stefano Stagnaro
Hi,

host installation in a GlusterFS cluster is failing due to dependency errors:

Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
   Requires: pyxattr
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
   Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
   Requires: liburcu-bp.so.1()(64bit)

oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6

I've installed the following repo: 
http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

Thank you,
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro



Re: [ovirt-users] Sync Error on Master Domain after adding a second one

2015-03-04 Thread Stefano Stagnaro
Hi Liron,

I've reproduced the issue with a fresh deployment of oVirt 3.5.2rc. I've 
provided you with new screencasts and relevant logs for both cases (see inline 
comments):

screencast for case 1: 
https://www.dropbox.com/s/fdrcwmpy03v5xri/Screencast%20from%2004-03-2015%2010%3A53%3A32.webm?dl=1
screencast for case 2: 
https://www.dropbox.com/s/w72bf86n9v2pvdw/Screencast%20from%2004-03-2015%2015%3A18%3A45.webm?dl=1
logs for case 1: 
https://www.dropbox.com/sh/bl24umw0w1anclb/AAC0Oq7c6oXWetw-tp-55c37a?dl=0
logs for case 2: 
https://www.dropbox.com/sh/rp3pdda68nox099/AABtZGKDfFCH3sD6FZPvxRmEa?dl=0

Please note that I'm using different networks for Management (192.168.48.0/24) 
and GlusterFS replica (192.168.50.0/24):

        management FQDN       GlusterFS FQDN
node 1: s20.ovirt.prisma      s20gfs.ovirt.prisma
node 2: s21.ovirt.prisma      s21gfs.ovirt.prisma

On Sun, 2015-03-01 at 04:55 -0500, Liron Aravot wrote:
 Hi Stefano,
 thanks for the great input!
 
 I went over the logs (does the screencast use the same domains? I don't have 
 the logs from that run) - the master domain deactivation (and the master role 
 migration to the new domain) fails with an error copying the master fs 
 content to the new domain via tar copy (see the error in [1]).
 
 1. Is there a chance of an inconsistent storage access problem with any of 
 the domains?
The storage domains rely on GlusterFS volumes created for this purpose. VMs run 
correctly.

 2. Does the issue reproduce always or only in some of the runs?
The issue reproduces always but:
case 1) if DATA and DATA_NEW are both created pointing to s20gfs, the issue 
reproduces and the Master role changes (Screencast 1).
case 2) if DATA is pointing to s20 and DATA_NEW to s20gfs, the issue reproduces 
and the Master role flips but does not change (Screencast 2).

 3. Have you tried to run an operation that creates a task? A creation of a 
 disk for example.
Every operation like creating or moving a disk works correctly.

 
 thanks,
 Liron.
 
 
 
 [1]:
 Thread-9875::DEBUG::2015-02-25 
 15:06:57,969::clusterlock::349::Storage.SANLock::(release) Cluster lock for 
 domain 08298f60-4919-4f86-9233-827c1089779a success
 fully released
 Thread-9875::ERROR::2015-02-25 
 15:06:57,969::task::866::Storage.TaskManager.Task::(_setError) 
 Task=`2a434209-3e96-4d1e-8d1b-8c7463889f6a`::Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/task.py, line 873, in _run
 return fn(*args, **kargs)
   File /usr/share/vdsm/logUtils.py, line 45, in wrapper
 res = f(*args, **kwargs)
   File /usr/share/vdsm/storage/hsm.py, line 1246, in deactivateStorageDomain
 pool.deactivateSD(sdUUID, msdUUID, masterVersion)
   File /usr/share/vdsm/storage/securable.py, line 77, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 1097, in deactivateSD
 self.masterMigrate(sdUUID, newMsdUUID, masterVersion)
   File /usr/share/vdsm/storage/securable.py, line 77, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 816, in masterMigrate
 exclude=('./lost+found',))
   File /usr/share/vdsm/storage/fileUtils.py, line 68, in tarCopy
 raise TarCopyFailed(tsrc.returncode, tdst.returncode, out, err)
 TarCopyFailed: (1, 0, '', '')
 Thread-9875::DEBUG::2015-02-25 
 15:06:57,969::task::885::Storage.TaskManager.Task::(_run) 
 Task=`2a434209-3e96-4d1e-8d1b-8c7463889f6a`::Task._run: 2a434209-3e96
 -4d1e-8d1b-8c7463889f6a ('62a034ca-63df-44f2-9a87-735ddd257a6b', 
 '0002-0002-0002-0002-022f', 
 '08298f60-4919-4f86-9233-827c1089779a', 34) {} failed
  - stopping task
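For context, the failing fileUtils.tarCopy streams the master filesystem contents into the new domain through a pipe between two tar processes. A rough, simplified Python sketch of the same idea (an illustration only, not vdsm's actual implementation) looks like:

```python
import io
import os
import tarfile

class TarCopyFailed(Exception):
    """Raised when the copy fails, mirroring vdsm's exception name."""

def tar_copy(src, dst, exclude=()):
    """Copy a directory tree by archiving src into an in-memory tar
    and extracting it into dst, skipping excluded top-level entries."""
    buf = io.BytesIO()
    try:
        with tarfile.open(fileobj=buf, mode="w") as tar:
            for name in os.listdir(src):
                if name in exclude or "./" + name in exclude:
                    continue
                tar.add(os.path.join(src, name), arcname=name)
        buf.seek(0)
        with tarfile.open(fileobj=buf, mode="r") as tar:
            tar.extractall(dst)
    except OSError as exc:
        # vdsm instead raises TarCopyFailed with both tar return codes
        raise TarCopyFailed(str(exc))
```

In the traceback above the exception carries `(1, 0, '', '')`, i.e. the source-side tar exited non-zero, which is why checking storage access on the master domain is a reasonable first step.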
 
 - Original Message -
  From: Stefano Stagnaro stefa...@prisma-eng.com
  To: Vered Volansky ve...@redhat.com
  Cc: users@ovirt.org
  Sent: Friday, February 27, 2015 4:54:31 PM
  Subject: Re: [ovirt-users] Sync Error on Master Domain after adding a 
  second one
  
  I think I finally managed to replicate the problem:
  
  1. deploy a datacenter with a virt only cluster and a gluster only cluster
  2. create a first GlusterFS Storage Domain (e.g. DATA) and activate it
  (should become Master)
  3. create a second GlusterFS Storage Domain (e.g. DATA_NEW) and activate it
  4. put DATA in maintenance
  
  Both Storage Domains flows between the following states:
  https://www.dropbox.com/s/x542q1epf40ar5p/Screencast%20from%2027-02-2015%2015%3A09%3A29.webm?dl=1
  
  Webadmin Events shows: Sync Error on Master Domain between Host v10 and
  oVirt Engine. Domain: DATA is marked as Master in oVirt Engine database but
  not on the Storage side. Please consult with Support on how to fix this
  issue.
  
  It seems DATA can be deactivated at the second attempt.
  
  --
  Stefano Stagnaro
  
  Prisma Engineering S.r.l.
  Via Petrocchi, 4
  20127 Milano – Italy
  
  Tel. 02 26113507 int 339
  e-mail: stefa...@prisma-eng.com
  skype: stefano.stagnaro
  
On Wed, 2015-02-25 at 15:41 +0100

Re: [ovirt-users] Sync Error on Master Domain after adding a second one

2015-03-01 Thread Stefano Stagnaro
I've double checked and the correct flow from point 5 is:

DATA --> Preparing for maintenance --> Unknown --> Data Center is being 
initialized --> Active

DATA_R3 --> Active --> Locked --> Unknown --> Active

The weird thing is that after a few attempts, DATA finally went into 
maintenance. Now I'm on 
ovirt-engine-3.5.3-0.0.master.20150226123132.gitbea0538.el6.noarch

I'll try to reproduce it from the beginning.

-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro

On Wed, 2015-02-25 at 15:41 +0100, Stefano Stagnaro wrote:
 This is what I've done basically:
 
 1. added a new data domain (DATA_R3);
 2. activated the new data domain - both domains in active state;
 3. moved Disks from DATA to DATA_R3;
 4. tried to put the old data domain in maintenance (from webadmin or shell);
 5. both domains became inactive;
 6. DATA_R3 came back in active;
 7. DATA domain went in being initialized;
 8. Webadmin shows the error Sync Error on Master Domain between...;
 9. DATA domain completed the reconstruction and came back in active.
 
 Please find engine and vdsm logs here: 
 https://www.dropbox.com/sh/uuwwo8sxcg4ffqp/AAAx6UrwI3jbsN4oraJuDx9Fa?dl=0
 





Re: [ovirt-users] Sync Error on Master Domain after adding a second one

2015-02-27 Thread Stefano Stagnaro
I think I finally managed to replicate the problem:

1. deploy a datacenter with a virt only cluster and a gluster only cluster
2. create a first GlusterFS Storage Domain (e.g. DATA) and activate it (should 
become Master)
3. create a second GlusterFS Storage Domain (e.g. DATA_NEW) and activate it
4. put DATA in maintenance

Both Storage Domains flows between the following states: 
https://www.dropbox.com/s/x542q1epf40ar5p/Screencast%20from%2027-02-2015%2015%3A09%3A29.webm?dl=1

Webadmin Events shows: Sync Error on Master Domain between Host v10 and oVirt 
Engine. Domain: DATA is marked as Master in oVirt Engine database but not on 
the Storage side. Please consult with Support on how to fix this issue.

It seems DATA can be deactivated at the second attempt.

-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro

On Wed, 2015-02-25 at 15:41 +0100, Stefano Stagnaro wrote:
 This is what I've done basically:
 
 1. added a new data domain (DATA_R3);
 2. activated the new data domain - both domains in active state;
 3. moved Disks from DATA to DATA_R3;
 4. tried to put the old data domain in maintenance (from webadmin or shell);
 5. both domains became inactive;
 6. DATA_R3 came back in active;
 7. DATA domain went in being initialized;
 8. Webadmin shows the error Sync Error on Master Domain between...;
 9. DATA domain completed the reconstruction and came back in active.
 
 Please find engine and vdsm logs here: 
 https://www.dropbox.com/sh/uuwwo8sxcg4ffqp/AAAx6UrwI3jbsN4oraJuDx9Fa?dl=0
 





Re: [ovirt-users] Sync Error on Master Domain after adding a second one

2015-02-25 Thread Stefano Stagnaro
This is what I've done basically:

1. added a new data domain (DATA_R3);
2. activated the new data domain - both domains in active state;
3. moved Disks from DATA to DATA_R3;
4. tried to put the old data domain in maintenance (from webadmin or shell);
5. both domains became inactive;
6. DATA_R3 came back in active;
7. DATA domain went in being initialized;
8. Webadmin shows the error Sync Error on Master Domain between...;
9. DATA domain completed the reconstruction and came back in active.

Please find engine and vdsm logs here: 
https://www.dropbox.com/sh/uuwwo8sxcg4ffqp/AAAx6UrwI3jbsN4oraJuDx9Fa?dl=0

-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro


On Wed, 2015-02-25 at 07:05 -0500, Vered Volansky wrote:
 Please specify the exact flow leading to this error, in terms of adding a 
 new domain, the statuses of both domains when an operation is performed, etc.
 What are the statuses of both domains?
 
 In case this is not the same issue, we'll need to have a look at the full 
 engine and vdsm logs.
 
 - Original Message -
  From: Stefano Stagnaro stefa...@prisma-eng.com
  To: users@ovirt.org
  Sent: Wednesday, February 25, 2015 1:18:41 PM
  Subject: [ovirt-users] Sync Error on Master Domain after adding a second one
  
  I'm testing oVirt 3.5.2 nightly with 1 host for engine, 2 for virt (v10,v11)
  and 2 for GlusterFS (s20,s21). The Master Data Domain (named DATA) rely on
  GlusterFS.
  
  I've added second Data Domain (named DATA_R3) in order to switch the Master
  role and remove the old one. Every time I try to put the old Data Domain in
  maintenance I got the following error:
  
  Sync Error on Master Domain between Host v10 and oVirt Engine. Domain:
  DATA_R3 is marked as Master in oVirt Engine database but not on the Storage
  side. Please consult with Support on how to fix this issue.
  
  Same error if I try to put DATA in maintenance from the shell:
  
  # action storagedomain '62a034ca-63df-44f2-9a87-735ddd257a6b' deactivate
  --datacenter-identifier '0002-0002-0002-0002-022f'
  
  I cannot switch to the new Master Data Domain neither I can put the Data
  Center in maintenance.
  
  I'm not sure if it is related to bug 1183977.  I've already upgraded to
  ovirt-engine-3.5.2-0.0.master.20150224122113.git410d88b.el6.noarch but the
  problem still happen.
  
  Thanks,
  --
  Stefano Stagnaro
  
  Prisma Engineering S.r.l.
  Via Petrocchi, 4
  20127 Milano – Italy
  
  Tel. 02 26113507 int 339
  e-mail: stefa...@prisma-eng.com
  skype: stefano.stagnaro
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  





[ovirt-users] Sync Error on Master Domain after adding a second one

2015-02-25 Thread Stefano Stagnaro
I'm testing oVirt 3.5.2 nightly with 1 host for engine, 2 for virt (v10,v11) 
and 2 for GlusterFS (s20,s21). The Master Data Domain (named DATA) rely on 
GlusterFS.

I've added a second Data Domain (named DATA_R3) in order to switch the Master 
role and remove the old one. Every time I try to put the old Data Domain in 
maintenance I got the following error:

Sync Error on Master Domain between Host v10 and oVirt Engine. Domain: DATA_R3 
is marked as Master in oVirt Engine database but not on the Storage side. 
Please consult with Support on how to fix this issue.

Same error if I try to put DATA in maintenance from the shell:

# action storagedomain '62a034ca-63df-44f2-9a87-735ddd257a6b' deactivate 
--datacenter-identifier '0002-0002-0002-0002-022f'

I cannot switch to the new Master Data Domain, nor can I put the Data Center 
in maintenance.

I'm not sure if it is related to bug 1183977. I've already upgraded to 
ovirt-engine-3.5.2-0.0.master.20150224122113.git410d88b.el6.noarch but the 
problem still happens.

Thanks,
-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro



Re: [ovirt-users] Hosted Engine deployment failed

2014-12-01 Thread Stefano Stagnaro
I've uploaded all the vdsm-related logs to the previous Dropbox folder.

Thank you.

On Sun, 2014-11-30 at 02:01 -0500, Yedidyah Bar David wrote:
 - Original Message -
  From: Stefano Stagnaro stefa...@prisma-eng.com
  To: users@ovirt.org
  Sent: Friday, November 28, 2014 9:37:22 PM
  Subject: [ovirt-users] Hosted Engine deployment failed
  
  Hi, I can not manage to deploy oVirt 3.5 with Hosted Engine. Every
  deployments ends up with:
  
  [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
  password for console connection. The VM may not have been created:
  please check VDSM logs
  
  I'm using an external NFSv4 server for storage. I don't know if it is
  related to BZ#1148712 but deploying with setenforce 0 doesn't change
  the result for me.
  
  Plese find setup logs here:
  https://www.dropbox.com/sh/yvo3zubm6uyqgsh/AAAOeRu8i3tLRUpK2t_ylxoza?dl=0
 
 Please post vdsm logs. Thanks!



[ovirt-users] Hosted Engine deployment failed

2014-11-28 Thread Stefano Stagnaro
Hi, I cannot manage to deploy oVirt 3.5 with Hosted Engine. Every 
deployment ends with:


[ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary 
password for console connection. The VM may not have been created: 
please check VDSM logs


I'm using an external NFSv4 server for storage. I don't know if it is 
related to BZ#1148712 but deploying with setenforce 0 doesn't change 
the result for me.


Please find the setup logs here: 
https://www.dropbox.com/sh/yvo3zubm6uyqgsh/AAAOeRu8i3tLRUpK2t_ylxoza?dl=0


Thank you,
Stefano.


Re: [ovirt-users] Hosted Engine fail after upgrading to 3.5

2014-10-29 Thread Stefano Stagnaro
Hi,

please find the new logs at the same place: 
https://www.dropbox.com/sh/qh2rbews45ky2g8/AAC4_4_j94cw6sI_hfaSFg-Fa?dl=0

Thank you,
-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro

On Wed, 2014-10-29 at 08:19 +0100, Jiri Moskovcak wrote:
 On 10/27/2014 06:07 PM, Stefano Stagnaro wrote:
  Hi Jirka,
 
  after truncating the metadata file the Engine is running again.
 
  Unfortunately now the HA migration is not working anymore. If I put the 
  host with the running Engine in local maintenance, the HA agent goes through 
  EngineMigratingAway -> ReinitializeFSM -> LocalMaintenance, but the VM never 
  migrates.
 
  I can read this error in the agent.log:
 
  MainThread::ERROR::2014-10-27 
  18:02:51,053::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
   Failed to migrate
  Traceback (most recent call last):
 File 
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
   line 863, in _monitor_migration
   vm_id,
 File 
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py,
   line 85, in run_vds_client_cmd
   response['status']['message'])
  DetailedError: Error 12 from migrateStatus: Fatal error during migration
 
  Thank you,
 
 
 Hi,
 to debug the migration failure I gonna need the engine.log.
 
 Thanks,
 Jirka



Re: [ovirt-users] Hosted Engine fail after upgrading to 3.5

2014-10-27 Thread Stefano Stagnaro
Hi Jirka,

after truncating the metadata file the Engine is running again.

Unfortunately now the HA migration is not working anymore. If I put the host 
with the running Engine in local maintenance, the HA agent goes through 
EngineMigratingAway -> ReinitializeFSM -> LocalMaintenance, but the VM never 
migrates.

I can read this error in the agent.log:

MainThread::ERROR::2014-10-27 
18:02:51,053::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
 Failed to migrate
Traceback (most recent call last):
  File 
/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
 line 863, in _monitor_migration
vm_id,
  File 
/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py, 
line 85, in run_vds_client_cmd
response['status']['message'])
DetailedError: Error 12 from migrateStatus: Fatal error during migration

Thank you,
-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro


On Fri, 2014-10-24 at 15:00 +0200, Jiri Moskovcak wrote:
 On 10/24/2014 02:12 PM, Stefano Stagnaro wrote:
  Hi Jirka,
 
  thank you for the reply. I've uploaded all the relevant logs in here: 
  https://www.dropbox.com/sh/qh2rbews45ky2g8/AAC4_4_j94cw6sI_hfaSFg-Fa?dl=0
 
  Thank you,
 
 
 Hi Stefano,
 I'd say, that agent is not able to parse the metadata from the previous 
 version, so as a workaround before I fix it you can try to zero out the 
 metadata file (backup the original just in case..)
 
 1. stop agent and broker on all hosts
 2. truncate the file
 
 this should do the trick:
 
 $ service ovirt-ha-agent stop; service ovirt-ha-broker stop;
 $ truncate --size 0 
 /rhev/data-center/mnt/ov0nfs:_engine/e4e8282e-6bde-4332-ad68-313287b4fc65/ha_agent/hosted-engine.metadata
  
 
 $ truncate --size 1M 
 /rhev/data-center/mnt/ov0nfs:_engine/e4e8282e-6bde-4332-ad68-313287b4fc65/ha_agent/hosted-engine.metadata
 $ service ovirt-ha-broker start; service ovirt-ha-agent start
 
 
 --Jirka
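The two truncate calls above first wipe the metadata file and then re-extend it to 1 MiB of zeros. The same reset, with the backup the thread recommends, can be sketched in Python (the size comes from the commands above; the HA services must still be stopped first):

```python
import os
import shutil

MIB = 1024 * 1024

def reset_metadata(path, size=MIB):
    """Back up, then zero out, a hosted-engine metadata file:
    truncate to 0 bytes, then re-extend to `size` (reads back as zeros)."""
    shutil.copy2(path, path + ".bak")  # keep the original, just in case
    with open(path, "r+b") as f:
        f.truncate(0)       # equivalent of: truncate --size 0 <file>
        f.truncate(size)    # equivalent of: truncate --size 1M <file>
    return os.path.getsize(path)
```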



[ovirt-users] Hosted Engine fail after upgrading to 3.5

2014-10-24 Thread Stefano Stagnaro
Hi. After upgrading from 3.4 to 3.5 (I've followed the official RHEV 
documentation) the Hosted Engine VM cannot boot up anymore.

hosted-engine --vm-status says Engine status: unknown stale-data
agent.log says: Error: 'NoneType' object has no attribute 'iteritems'

Some logs:
- agent on node ov0h21: http://fpaste.org/144822/
- agent on node ov0h21: http://fpaste.org/144824/
- hosted-engine --vm-status: http://fpaste.org/144825/

Thank you,
-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro



Re: [ovirt-users] Hosted Engine fail after upgrading to 3.5

2014-10-24 Thread Stefano Stagnaro
Hi Jirka,

thank you for the reply. I've uploaded all the relevant logs in here: 
https://www.dropbox.com/sh/qh2rbews45ky2g8/AAC4_4_j94cw6sI_hfaSFg-Fa?dl=0

Thank you,
-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro


On Fri, 2014-10-24 at 13:30 +0200, Jiri Moskovcak wrote:
 On 10/24/2014 01:11 PM, Stefano Stagnaro wrote:
  Hi. After upgrading from 3.4 to 3.5 (I've followed the official RHEV 
  documentation) the Hosted Engine VM cannot boot up anymore.
 
  hosted-engine --vm-status says Engine status: unknown stale-data
  agent.log says: Error: ''NoneType' object has no attribute 'iteritems''
 
  Some logs:
  - agent on node ov0h21: http://fpaste.org/144822/
  - agent on node ov0h21: http://fpaste.org/144824/
  - hosted-engine --vm-status: http://fpaste.org/144825/
 
 Hi Stefano,
 can you please provide also the broker log?
 
 Thank you,
 Jirka
 
 
  Thank you,
 
 



[ovirt-users] Reachability problems after putting ovirtmgmt and Gluster on different networks

2014-09-09 Thread Stefano Stagnaro
Hi, I have a question about a HostedEngine deployment. I followed Jason's 
"oVirt 3.4, Glusterized" guide and it seems to work. I have two hosts for 
virtualization (which also provide HA for the HostedEngine) and two other 
hosts for GlusterFS.

Now I would like to configure separate networks for management and Gluster. On 
my Catalyst I've configured two VLANs with two different default gateways. My 
aim is to add the Gluster hosts with their ovirtmgmt IPs and then add the 
storage domain pointing to the same hosts but using their Gluster network IPs.

Unfortunately, the HostedEngine has only one nic (on ovirtmgmt), so I'm not 
able to reach the Gluster host at its Gluster network IP even though the 
L3 switch has all the networks configured. Reading this article 
http://brainscraps.wikia.com/wiki/Setup_Gateway_Routing_On_Multiple_Network_Interfaces
 I believe the problem is on the Gluster host, which only has the ovirtmgmt 
gateway.
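To illustrate the reachability reasoning: a host answers on-link only for networks it has an address in, and everything else goes out of its single default gateway, so a box that sits only on ovirtmgmt cannot reach a Gluster-VLAN address unless the gateways route between the VLANs. A small sketch with Python's ipaddress module (the subnets below are purely hypothetical examples, not taken from this setup):

```python
import ipaddress

# hypothetical addressing plan: one mgmt VLAN, one gluster VLAN
NETWORKS = {
    "ovirtmgmt": ipaddress.ip_network("192.168.48.0/24"),
    "gluster":   ipaddress.ip_network("192.168.50.0/24"),
}

def on_link_networks(local_ips, peer_ip):
    """Return the networks on which this host can reach peer_ip directly,
    i.e. without going through its default gateway."""
    peer = ipaddress.ip_address(peer_ip)
    return [name for name, net in NETWORKS.items()
            if peer in net
            and any(ipaddress.ip_address(ip) in net for ip in local_ips)]

# a VM with a single nic on ovirtmgmt cannot reach a gluster-VLAN address
# on-link; that traffic must be routed by the gateways instead:
print(on_link_networks(["192.168.48.10"], "192.168.50.20"))  # []
```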

I've found out that someone has tried to add a second nic to the HostedEngine 
by editing the /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in 
template prior to starting the deployment.

My question is: is there a better way to achieve this (I'm surely missing 
something)? Is it worth keeping ovirtmgmt separate from the storage network? 
In production I'll have five 1 Gbps ports per host (which can be bonded as 
preferred).

Thank you,
-- 
Stefano Stagnaro

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro



Re: [ovirt-users] VM HostedEngine is down. Exit message: internal error Failed to acquire lock: error -243

2014-07-15 Thread Stefano Stagnaro

Hi,

since bug 1093366 is evidently blocking the Hosted Engine feature, 
it should be added as a blocker for the oVirt 3.4.3 tracker (bug 1107968).


All the more so now that the proposed patches seem to have fixed the 
problem (I've run at least 30 Hosted Engine migrations without errors).


Thank you,
Stefano.

On 06/06/2014 05:12 AM, Andrew Lau wrote:

Hi,

I'm seeing this weird message in my engine log

2014-06-06 03:06:09,380 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-79) RefreshVmList vm id
85d4cfb9-f063-4c7c-a9f8-2b74f5f7afa5 status = WaitForLaunch on vds
ov-hv2-2a-08-23 ignoring it in the refresh until migration is done
2014-06-06 03:06:12,494 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-89) START, DestroyVDSCommand(HostName =
ov-hv2-2a-08-23, HostId = c04c62be-5d34-4e73-bd26-26f805b2dc60,
vmId=85d4cfb9-f063-4c7c-a9f8-2b74f5f7afa5, force=false,
secondsToWait=0, gracefully=false), log id: 62a9d4c1
2014-06-06 03:06:12,561 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-89) FINISH, DestroyVDSCommand, log id:
62a9d4c1
2014-06-06 03:06:12,652 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-89) Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM HostedEngine is down. Exit
message: internal error Failed to acquire lock: error -243.

It also appears to occur on the other hosts in the cluster, except the
host which is running the hosted-engine. So right now 3 servers, it
shows up twice in the engine UI.

The engine VM continues to run peacefully, without any issues on the
host which doesn't have that error.

Any ideas?







Re: [ovirt-users] VM HostedEngine is down. Exit message: internal error Failed to acquire lock: error -243

2014-07-14 Thread Stefano Stagnaro

Hi,

since bug 1093366 is evidently blocking the Hosted Engine feature, 
it should be added as a blocker for the oVirt 3.4.3 tracker (bug 1107968).


All the more so now that the proposed patches seem to have fixed the 
problem (I've run at least 30 Hosted Engine migrations without errors).


Thank you,
Stefano.

On 06/06/2014 05:12 AM, Andrew Lau wrote:

Hi,

I'm seeing this weird message in my engine log

2014-06-06 03:06:09,380 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-79) RefreshVmList vm id
85d4cfb9-f063-4c7c-a9f8-2b74f5f7afa5 status = WaitForLaunch on vds
ov-hv2-2a-08-23 ignoring it in the refresh until migration is done
2014-06-06 03:06:12,494 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-89) START, DestroyVDSCommand(HostName =
ov-hv2-2a-08-23, HostId = c04c62be-5d34-4e73-bd26-26f805b2dc60,
vmId=85d4cfb9-f063-4c7c-a9f8-2b74f5f7afa5, force=false,
secondsToWait=0, gracefully=false), log id: 62a9d4c1
2014-06-06 03:06:12,561 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-89) FINISH, DestroyVDSCommand, log id:
62a9d4c1
2014-06-06 03:06:12,652 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-89) Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM HostedEngine is down. Exit
message: internal error Failed to acquire lock: error -243.

It also appears to occur on the other hosts in the cluster, except the
host which is running the hosted-engine. So right now 3 servers, it
shows up twice in the engine UI.

The engine VM continues to run peacefully, without any issues on the
host which doesn't have that error.

Any ideas?





[Users] Bottleneck writing to a VM w/ mounted GlusterFS

2013-09-28 Thread Stefano Stagnaro
Hello,

I'm testing oVirt 3.3 with the GlusterFS libgfapi back-end. I'm using one node 
for the engine and one for VDSM. From the VMs I'm mounting a second GlusterFS 
volume from a third storage server.

I'm experiencing very poor transfer rates (38 MB/s) writing from a client to a 
VM on the mounted GlusterFS. On the other hand, from the VM itself I can move a 
big file from the root vda (libgfapi) to the mounted GlusterFS at 70 MB/s.

I can't really figure out where the bottleneck could be. I'm using only the 
default ovirtmgmt network.
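For a sense of scale (my own back-of-the-envelope math, treating 1 MB as 10^6 bytes and ignoring protocol overhead): 38 MB/s is only about a third of what a single 1 Gbps link can carry, and 70 MB/s a bit over half, so the slow path does not look limited by raw link bandwidth:

```python
def to_mbit(mb_per_s, byte_unit=10**6):
    """Convert a MB/s transfer rate to megabits per second."""
    return mb_per_s * byte_unit * 8 / 10**6

GIGE = 1000  # nominal 1 GbE line rate in Mbit/s, before overhead
for rate in (38, 70):
    mbit = to_mbit(rate)
    print(f"{rate} MB/s ~= {mbit:.0f} Mbit/s ({mbit / GIGE:.0%} of 1 GbE)")
```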

Thank you for your help, any hint will be appreciated.

Regards,
-- 
Stefano Stagnaro
IT Manager

Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prisma-eng.com
skype: stefano.stagnaro



