Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Kiril L
This should clear things up a bit:
https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/

On Thu, Jun 4, 2015 at 9:40 AM, Юрий Полторацкий
 wrote:
> Hi,
>
> Pardon me for jumping in subj, but...
>
> Could you explain me in a few words or give a link where to read about
> "split-brain"?
>
> I made a lab: a cluster (both virt and gluster services) based on two servers,
> with these options:
>  cluster.server-quorum-type none
>  cluster.quorum-type fixed
>  cluster.quorum-count 1
> I got a cluster that worked just fine even with only one host alive.
>
> I ran a VM on host B (not the SPM) and created some files, then stopped the
> network service on the SPM host A (so the host went down, with no power
> management), created some large files on the VM again, and after that manually
> rebooted host A. There was no unexpected result: the VM kept working without
> problems, and I rebooted it several times with no issues with any of the files
> I had created before.
>
> I want to build a cluster with only two hosts, because I have only two good
> servers free, and only 8x3TB HDDs (for RAID10 in each host). Getting one more
> server and 4 HDDs is not realistic in the near future.
>
> Thanks.
>
>
> 2015-06-03 16:54 GMT+03:00 Simone Tiraboschi :
>>
>>
>>
>> - Original Message -
>> > From: "Kiril L" 
>> > To: users@ovirt.org
>> > Sent: Wednesday, June 3, 2015 3:32:52 PM
>> > Subject: [ovirt-users] Is it a plausible configuration?
>> >
>> > Would you please tell me if this configuration is doable, or if there is
>> > something that I am missing?
>> >
>> > I would like to use only two servers (X and Y) for VDI and Gluster
>> > based storage.
>> > Hosted engine for oVirt and replicated volumes between X and Y for the
>> > gluster storage. Is a third machine Z a must?
>>
>> It also works with just two hosts, but it's not that safe: a replica-2
>> GlusterFS volume affected by split-brain may not be able to self-heal, while
>> with replica-3 you can rely on quorum enforcement simply because you are
>> using an odd number of hosts.
>>
>> For oVirt 3.6 we are working on
>> http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
>>
>>
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Юрий Полторацкий
Hi,

Pardon me for jumping in subj, but...

Could you explain me in a few words or give a link where to read about
"split-brain"?

I made a lab: a cluster (both virt and gluster services) based on two servers,
with these options:
 cluster.server-quorum-type none
 cluster.quorum-type fixed
 cluster.quorum-count 1
I got a cluster that worked just fine even with only one host alive.

I ran a VM on host B (not the SPM) and created some files, then stopped the
network service on the SPM host A (so the host went down, with no power
management), created some large files on the VM again, and after that manually
rebooted host A. There was no unexpected result: the VM kept working without
problems, and I rebooted it several times with no issues with any of the files
I had created before.

I want to build a cluster with only two hosts, because I have only two good
servers free, and only 8x3TB HDDs (for RAID10 in each host). Getting one more
server and 4 HDDs is not realistic in the near future.

Thanks.


2015-06-03 16:54 GMT+03:00 Simone Tiraboschi :

>
>
> - Original Message -
> > From: "Kiril L" 
> > To: users@ovirt.org
> > Sent: Wednesday, June 3, 2015 3:32:52 PM
> > Subject: [ovirt-users] Is it a plausible configuration?
> >
> > Would you please tell me if this configuration is doable, or if there is
> > something that I am missing?
> >
> > I would like to use only two servers (X and Y) for VDI and Gluster
> > based storage.
> > Hosted engine for oVirt and replicated volumes between X and Y for the
> > gluster storage. Is a third machine Z a must?
>
> It also works with just two hosts, but it's not that safe: a replica-2
> GlusterFS volume affected by split-brain may not be able to self-heal, while
> with replica-3 you can rely on quorum enforcement simply because you are
> using an odd number of hosts.
>
> For oVirt 3.6 we are working on
> http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
>
>
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Why not bond0

2015-06-03 Thread Michael Burman
Hi,

Create a bond via Setup Networks >> from the Bonding Mode list choose
'custom' >> in the new 'Custom Mode' field enter mode=0 miimon=100 and approve
the operation.
You can't choose bond0 from the list, but you can create the custom bond modes
0, 3 and 6 as explained above ^^

Kind regards,
Michael

- Original Message -
From: "肖力" 
To: "Dan Kenigsberg" 
Cc: users@ovirt.org
Sent: Thursday, June 4, 2015 5:11:07 AM
Subject: Re: [ovirt-users] Why not bond0


I am confused about bond modes.
I read in the RHEV documentation:
Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual
machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual
machine (bridgeless) networks only.
In oVirt I cannot choose bond mode 0, but I can choose bond mode 5.
But on my host I have used bond mode 0 for two years, and it works fine.
Can someone explain this? Thanks!





On 2015-05-31 23:38:09, "Dan Kenigsberg" wrote:
>On Sun, May 24, 2015 at 09:50:44AM +0800, 肖力 wrote:
>> Hi nic bond why not choice bond0 ?
>>
>
>Could you rephrase your question? I don't understand it. 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-- 
Michael Burman
RedHat Israel, RHEV-M QE Network Team

Mobile: 054-5355725
IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Why not bond0

2015-06-03 Thread 肖力


I am confused about bond modes.
I read in the RHEV documentation:
Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual
machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual
machine (bridgeless) networks only.
In oVirt I cannot choose bond mode 0, but I can choose bond mode 5.
But on my host I have used bond mode 0 for two years, and it works fine.
Can someone explain this? Thanks!








On 2015-05-31 23:38:09, "Dan Kenigsberg" wrote:
>On Sun, May 24, 2015 at 09:50:44AM +0800, 肖力 wrote:
>> Hi nic bond why not choice bond0 ?
>>
>
>Could you rephrase your question? I don't understand it.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Kiril L
Commodity hardware for storage and virtualization.

I wanted to use the hosts for both storage and virtualization. Two hosts are
all I need, but in order to protect the data I thought I could go with 3, using
the third one just for data storage, so it could get away with a weak CPU. To
avoid a performance penalty, the 3rd host would need to have the same CPU as
the other two.

From what I read, if the hosts have different generations of CPU models,
they use only the features present in all models.
Does that relate to instructions only?

To be precise, X and Y have 2 CPUs each. Does that mean the third host also
needs to have 2 CPUs?
What will happen if I use just one CPU which has fewer cores (than each of
those in X and Y) but the same instruction set extensions?
On Jun 3, 2015 6:27 PM, "Simone Tiraboschi"  wrote:

>
>
> - Original Message -
> > From: "Kiril L" 
> > To: users@ovirt.org
> > Sent: Wednesday, June 3, 2015 4:25:30 PM
> > Subject: Re: [ovirt-users] Is it a plausible configuration?
> >
> > On Wed, Jun 3, 2015 at 4:54 PM, Simone Tiraboschi 
> > wrote:
> > >
> > >
> > > - Original Message -
> > >> From: "Kiril L" 
> > >> To: users@ovirt.org
> > >> Sent: Wednesday, June 3, 2015 3:32:52 PM
> > >> Subject: [ovirt-users] Is it a plausible configuration?
> > >>
> > >> Would you please tell me if this configuration is doable, or if there is
> > >> something that I am missing?
> > >>
> > >> I would like to use only two servers (X and Y) for VDI and Gluster
> > >> based storage.
> > >> Hosted engine for oVirt and replicated volumes between X and Y for the
> > >> gluster storage. Is a third machine Z a must?
> > >
> > > It also works with just two hosts, but it's not that safe: a replica-2
> > > GlusterFS volume affected by split-brain may not be able to self-heal,
> > > while with replica-3 you can rely on quorum enforcement simply because
> > > you are using an odd number of hosts.
> > >
> > > For oVirt 3.6 we are working on
> > >
> http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
> > >
> > >
> > >> ___
> > >> Users mailing list
> > >> Users@ovirt.org
> > >> http://lists.ovirt.org/mailman/listinfo/users
> > >>
> >
> > I do not like the risk part! In that case I will have to wait for a
> > third machine then.
> >
> > So in the future there will be something useful for me, but I did not get
> > it - what exactly will be different between oVirt 3.5 with quorum
> > enforcement and oVirt 3.6 with Hyper Converged Gluster Support?
>
> Using the same piece of commodity hardware for virtualization purposes and
> also as a node of your shared storage is the basic idea of hyper-converging.
> So you are basically trying to do manually what the setup could do for you
> in the next release.
> Unfortunately I have to add that this path is not the easiest one and there
> are a lot of aspects to be carefully configured in order to get a robust and
> reliable deployment.
>
>
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Run Once Python SDK

2015-06-03 Thread Mike Lowery
Hello,

I was looking into performing the "run once" option via the SDK with an
attached ISO. The plan is to attach an ISO and force a VM to boot
from the CD.

Basically:

1) Mount an ISO in "run once" mode
2) Make "CD-ROM" the first item in the boot sequence
3) Boot the VM
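
So far I am imagining something along these lines with the Python SDK (just a
rough sketch, assuming ovirt-engine-sdk-python 3.x; the engine URL, credentials,
VM name and ISO file id below are placeholders), but I am not sure it is the
right approach:

from ovirtsdk.api import API
from ovirtsdk.xml import params

# connect to the engine (URL and credentials are placeholders)
api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret', insecure=True)

vm = api.vms.get(name='myvm')

# "run once" = start the VM with a one-time configuration:
# attach the ISO as a CD and put cdrom first in the boot sequence
run_once = params.Action(vm=params.VM(
    os=params.OperatingSystem(boot=[params.Boot(dev='cdrom')]),
    cdroms=params.CdRoms(cdrom=[
        params.CdRom(file=params.File(id='my-install.iso'))]),
))

vm.start(run_once)
api.disconnect()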

Any help would be appreciated.

Thanks,

Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [libvirt-users] Bug in Snapshot Removing

2015-06-03 Thread Kashyap Chamarthy
On Sun, May 31, 2015 at 10:56:40PM +, Soeren Malchow wrote:
> Dear all
> 
> I am not sure if the mail just did not get any attention between all
> the mails and this time it is also going to the libvirt mailing list.

Saw your mails, but it was hard to parse them.

> I am experiencing a problem with VMs becoming unresponsive when
> removing snapshots (live merge), and I think there is a serious
> problem.

Can you do an independent live merge test just with plain libvirt and
explain how exactly you can trigger it?

For example, I wrote some notes here which I normally use to test live
disk image chain merge:


http://wiki.libvirt.org/page/Live-merge-an-entire-disk-image-chain-including-current-active-disk

Can you reproduce your bug when testing with a variant of the above?
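
If it helps, the same flow can also be driven from the libvirt Python
bindings. A minimal sketch (assuming libvirt-python is installed; the domain
name 'testvm' and the disk target 'vda' are placeholders):

import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('testvm')   # placeholder domain name
disk = 'vda'                        # placeholder disk target

# Active-layer commit: merge the active image down into its backing chain
dom.blockCommit(disk, None, None, 0,
                libvirt.VIR_DOMAIN_BLOCK_COMMIT_ACTIVE)

# Wait for the block job to copy everything, then pivot to the merged image
while True:
    info = dom.blockJobInfo(disk, 0)
    if info and info['cur'] == info['end']:
        break
    time.sleep(1)

dom.blockJobAbort(disk, libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)

If the hang reproduces with just this, that points at libvirt/QEMU rather than
VDSM.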

> Here are the previous mails,
> 
> http://lists.ovirt.org/pipermail/users/2015-May/033083.html
> 
> The problem is on a system with everything on the latest version,
> CentOS 7.1 and ovirt 3.5.2.1 all upgrades applied.
> 
> This Problem did NOT exist before upgrading to CentOS 7.1 with an
> environment running ovirt 3.5.0 and 3.5.1 and Fedora 20 with the
> libvirt-preview repo activated.
> 
> I think this is a bug in libvirt, not ovirt itself, but I am not sure.
> The actual file throwing the exception is in VDSM
> (/usr/share/vdsm/virt/vm.py, line 697).
> 
> We are very willing to help, test and supply log files in anyway we
> can.
> 
> Regards Soeren
> 

-- 
/kashyap
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bug in Snapshot Removing

2015-06-03 Thread Soeren Malchow
Hi,

This is not happening every time. The last time I had this, a script was
running, and something like the 9th VM and the 23rd VM had a problem; it is
not always the same VMs, and it is not about the OS (it happens for Windows
and Linux alike).

And as I said, it also happened when I tried to remove the snapshots
sequentially. Here is the code (I know it is probably not the most elegant
way, but I am not a developer); the code actually has correct indentation.

<― snip ―>

print "Snapshot deletion"
try:
    time.sleep(300)
    Connect()
    vms = api.vms.list()
    for vm in vms:
        print ("Deleting snapshots for %s ") % vm.name
        snapshotlist = vm.snapshots.list()
        for snapshot in snapshotlist:
            if snapshot.description != "Active VM":
                time.sleep(30)
                snapshot.delete()
                try:
                    while api.vms.get(name=vm.name).snapshots.get(
                            id=snapshot.id).snapshot_status == "locked":
                        print ("Waiting for snapshot %s on %s deletion to finish") % (snapshot.description, vm.name)
                        time.sleep(60)
                except Exception as e:
                    print ("Snapshot %s does not exist anymore") % snapshot.description
        print ("Snapshot deletion for %s done") % vm.name
    print ("Deletion of snapshots done")
    api.disconnect()
except Exception as e:
    print ("Something went wrong when deleting the snapshots\n%s") % str(e)

<― snip ―>


Cheers
Soeren





On 03/06/15 15:20, "Adam Litke"  wrote:

>On 03/06/15 07:36 +, Soeren Malchow wrote:
>>Dear Adam
>>
>>First we were using a Python script that was working on 4 threads and
>>therefore removing 4 snapshots at a time throughout the cluster; that
>>still caused problems.
>>
>>Now I took the snapshot removal out of the threaded part and I am just
>>looping through each snapshot on each VM one after another, even with
>>'sleeps' in between, but the problem remains.
>>But I am getting the impression that it is a problem with the amount of
>>snapshots that are deleted in a certain time: if I delete manually and
>>one after another (meaning every 10 min or so) I do not have problems;
>>if I delete manually and do several at once, and on one VM the next one
>>just after one finished, the risk seems to increase.
>
>Hmm.  In our lab we extensively tested removing a snapshot for a VM
>with 4 disks.  This means 4 block jobs running simultaneously.  Less
>than 10 minutes later (closer to 1 minute) we would remove a second
>snapshot for the same VM (again involving 4 block jobs).  I guess we
>should rerun this flow on a fully updated CentOS 7.1 host to see about
>local reproduction.  Seems your case is much simpler than this though.
>Is this happening every time or intermittently?
>
>>I do not think it is the number of VMs because we had this on hosts with
>>only 3 or 4 VMs running.
>>
>>I will try restarting libvirt and see what happens.
>>
>>We are not using RHEL 7.1, only CentOS 7.1.
>>
>>Is there anything else we can look at when this happens again?
>
>I'll defer to Eric Blake for the libvirt side of this.  Eric, would
>enabling debug logging in libvirtd help to shine some light on the
>problem?
>
>-- 
>Adam Litke

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Simone Tiraboschi


- Original Message -
> From: "Kiril L" 
> To: users@ovirt.org
> Sent: Wednesday, June 3, 2015 4:25:30 PM
> Subject: Re: [ovirt-users] Is it a plausible configuration?
> 
> On Wed, Jun 3, 2015 at 4:54 PM, Simone Tiraboschi 
> wrote:
> >
> >
> > - Original Message -
> >> From: "Kiril L" 
> >> To: users@ovirt.org
> >> Sent: Wednesday, June 3, 2015 3:32:52 PM
> >> Subject: [ovirt-users] Is it a plausible configuration?
> >>
> >> Would you please tell me if this configuration is doable, or if there is
> >> something that I am missing?
> >>
> >> I would like to use only two servers (X and Y) for VDI and Gluster
> >> based storage.
> >> Hosted engine for oVirt and replicated volumes between X and Y for the
> >> gluster storage. Is a third machine Z a must?
> >
> > It also works with just two hosts, but it's not that safe: a replica-2
> > GlusterFS volume affected by split-brain may not be able to self-heal,
> > while with replica-3 you can rely on quorum enforcement simply because
> > you are using an odd number of hosts.
> >
> > For oVirt 3.6 we are working on
> > http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
> >
> >
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> 
> I do not like the risk part! In that case I will have to wait for a
> third machine then.
> 
> So in the future there will be something useful for me, but I did not get
> it - what exactly will be different between oVirt 3.5 with quorum
> enforcement and oVirt 3.6 with Hyper Converged Gluster Support?

Using the same piece of commodity hardware for virtualization purposes and also
as a node of your shared storage is the basic idea of hyper-converging.
So you are basically trying to do manually what the setup could do for you in
the next release.
Unfortunately I have to add that this path is not the easiest one, and there are
a lot of aspects to be carefully configured in order to get a robust and
reliable deployment.
 

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-guest-agent on EL7 not starting up correctly, requires restart

2015-06-03 Thread Daniel Helgenberger


On 03.06.2015 15:00, Vinzenz Feenstra wrote:
> Hi Daniel,
Hello Vinzenz!
>
> the oVirt Guest Agent service won't be started just by installing it via yum;
> you have to run: service ovirt-guest-agent start
> However, IIRC, with the next boot of the VM it will start, since it is
> enabled by default.
>
> If you deploy the guest agent you have to start it explicitly.
> (This behavior is aligned with the Fedora packaging guidelines)

The deployment is done by Foreman; the package is installed directly by
kickstart. When installed, it is enabled.

Yet, after the reboot of the machine, the service is started by systemd
but is not working. Only a manual restart fixes this.

However, when I just tried a reboot on a random host it was working...
So - never mind and thanks ;)

>
> Regards,
>
> On 06/03/2015 12:49 PM, Daniel Helgenberger wrote:
>> Hello,
>>
>> after deploying a few CentOS7.1 hosts I realized that the
>> ovirt-guest-agent does not start up correctly at system boot, requiring
>> a manual restart. Afterwards it is reporting data to engine.
>>
>> I have only these lines:
>>
>> Jun 03 12:39:39 pipeline.int.m-box.de systemd[1]: Starting oVirt Guest
>> Agent...
>> Jun 03 12:39:40 pipeline.int.m-box.de systemd[1]: Started oVirt Guest Agent.
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7136]:
>> pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
>> was met by user "ovirtagent"
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7136]: running
>> '/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
>> on behalf of 'ovirtagent'
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7163]:
>> pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
>> was met by user "ovirtagent"
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7163]: running
>> '/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
>> on behalf of 'ovirtagent'
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7169]:
>> pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
>> was met by user "ovirtagent"
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7169]: running
>> '/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
>> on behalf of 'ovirtagent'
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7175]:
>> pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
>> was met by user "ovirtagent"
>> Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7175]: running
>> '/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
>> on behalf of 'ovirtagent'
>>
>> Versions:
>> Linux pipeline.int.m-box.de 3.10.0-229.4.2.el7.x86_64 #1 SMP Wed May 13
>> 10:06:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>>
>> qemu-guest-agent-2.1.0-4.el7.x86_64
>> ovirt-guest-agent-common-1.0.10-2.el7.noarch
>
>

-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handeslregister: Amtsgericht Charlottenburg / HRB 112767
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Kiril L
On Wed, Jun 3, 2015 at 4:54 PM, Simone Tiraboschi  wrote:
>
>
> - Original Message -
>> From: "Kiril L" 
>> To: users@ovirt.org
>> Sent: Wednesday, June 3, 2015 3:32:52 PM
>> Subject: [ovirt-users] Is it a plausible configuration?
>>
>> Would you please tell me if this configuration is doable, or if there is
>> something that I am missing?
>>
>> I would like to use only two servers (X and Y) for VDI and Gluster
>> based storage.
>> Hosted engine for oVirt and replicated volumes between X and Y for the
>> gluster storage. Is a third machine Z a must?
>
> It also works with just two hosts, but it's not that safe: a replica-2
> GlusterFS volume affected by split-brain may not be able to self-heal, while
> with replica-3 you can rely on quorum enforcement simply because you are
> using an odd number of hosts.
>
> For oVirt 3.6 we are working on 
> http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
>
>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>

I do not like the risk part! In that case I will have to wait for a
third machine then.

So in the future there will be something useful for me, but I did not get
it - what exactly will be different between oVirt 3.5 with quorum
enforcement and oVirt 3.6 with Hyper Converged Gluster Support?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Yedidyah Bar David
- Original Message -
> From: "Kiril L" 
> To: users@ovirt.org
> Sent: Wednesday, June 3, 2015 4:32:52 PM
> Subject: [ovirt-users] Is it a plausible configuration?
> 
> Would you please tell me if this configuration is doable, or if there is
> something that I am missing?
> 
> I would like to use only two servers (X and Y) for VDI and Gluster
> based storage.
> Hosted engine for oVirt and replicated volumes between X and Y for the
> gluster storage. Is a third machine Z a must?

Without that, you risk a split brain.

In 3.6, what you want to do will be supported "out of the box" [1], but will
require 3 hosts [2].

[1] 
http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
[2] http://www.ovirt.org/Features/Self_Hosted_Engine_Gluster_Support#GlusterFS

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Simone Tiraboschi


- Original Message -
> From: "Kiril L" 
> To: users@ovirt.org
> Sent: Wednesday, June 3, 2015 3:32:52 PM
> Subject: [ovirt-users] Is it a plausible configuration?
> 
> Would you please tell me if this configuration is doable, or if there is
> something that I am missing?
> 
> I would like to use only two servers (X and Y) for VDI and Gluster
> based storage.
> Hosted engine for oVirt and replicated volumes between X and Y for the
> gluster storage. Is a third machine Z a must?

It also works with just two hosts, but it's not that safe: a replica-2 GlusterFS
volume affected by split-brain may not be able to self-heal, while with
replica-3 you can rely on quorum enforcement simply because you are using an
odd number of hosts.

For oVirt 3.6 we are working on 
http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support


> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Is it a plausible configuration?

2015-06-03 Thread Kiril L
Would you please tell me if this configuration is doable, or if there is
something that I am missing?

I would like to use only two servers (X and Y) for VDI and Gluster
based storage.
Hosted engine for oVirt and replicated volumes between X and Y for the
gluster storage. Is a third machine Z a must?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bug in Snapshot Removing

2015-06-03 Thread Adam Litke

On 03/06/15 07:36 +, Soeren Malchow wrote:

Dear Adam

First we were using a Python script that was working on 4 threads and
therefore removing 4 snapshots at a time throughout the cluster; that
still caused problems.

Now I took the snapshot removal out of the threaded part and I am just
looping through each snapshot on each VM one after another, even with
'sleeps' in between, but the problem remains.
But I am getting the impression that it is a problem with the amount of
snapshots that are deleted in a certain time: if I delete manually and one
after another (meaning every 10 min or so) I do not have problems; if I
delete manually and do several at once, and on one VM the next one just
after one finished, the risk seems to increase.


Hmm.  In our lab we extensively tested removing a snapshot for a VM
with 4 disks.  This means 4 block jobs running simultaneously.  Less
than 10 minutes later (closer to 1 minute) we would remove a second
snapshot for the same VM (again involving 4 block jobs).  I guess we
should rerun this flow on a fully updated CentOS 7.1 host to see about
local reproduction.  Seems your case is much simpler than this though.
Is this happening every time or intermittently?


I do not think it is the number of VMs because we had this on hosts with
only 3 or 4 VMs running.

I will try restarting libvirt and see what happens.

We are not using RHEL 7.1, only CentOS 7.1.

Is there anything else we can look at when this happens again?


I'll defer to Eric Blake for the libvirt side of this.  Eric, would
enabling debug logging in libvirtd help to shine some light on the
problem?

--
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-guest-agent on EL7 not starting up correctly, requires restart

2015-06-03 Thread Vinzenz Feenstra

Hi Daniel,

the oVirt Guest Agent service won't be started just by installing it via yum;
you have to run: service ovirt-guest-agent start
However, IIRC, with the next boot of the VM it will start, since it is
enabled by default.

If you deploy the guest agent you have to start it explicitly.
(This behavior is aligned with the Fedora packaging guidelines)


Regards,

On 06/03/2015 12:49 PM, Daniel Helgenberger wrote:

Hello,

after deploying a few CentOS7.1 hosts I realized that the
ovirt-guest-agent does not start up correctly at system boot, requiring
a manual restart. Afterwards it is reporting data to engine.

I have only these lines:

Jun 03 12:39:39 pipeline.int.m-box.de systemd[1]: Starting oVirt Guest
Agent...
Jun 03 12:39:40 pipeline.int.m-box.de systemd[1]: Started oVirt Guest Agent.
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7136]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7136]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7163]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7163]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7169]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7169]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7175]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7175]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'

Versions:
Linux pipeline.int.m-box.de 3.10.0-229.4.2.el7.x86_64 #1 SMP Wed May 13
10:06:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

qemu-guest-agent-2.1.0-4.el7.x86_64
ovirt-guest-agent-common-1.0.10-2.el7.noarch



--
Regards,

Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Virtualised ovirt-engine?

2015-06-03 Thread Gianluca Cecchi
On Wed, Jun 3, 2015 at 1:36 PM, Christian Ejlertsen  wrote:


> I can find a draft describing virtualising the ovirt engine on the ovirt
> nodes, but I can't find it described much in other places; not saying it's
> not there :), but at least my eyes do not find it :)
>

Hello,
if you mean an engine hosted inside the ovirt infra it manages, it is
called Self Hosted Engine:

See description and new installation workflow here:
http://www.ovirt.org/Features/Self_Hosted_Engine

In case you need to migrate to it if you already have an external engine
(it is also referred inside the link above):
http://www.ovirt.org/Migrate_to_Hosted_Engine

HIH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Virtualised ovirt-engine?

2015-06-03 Thread Christian Ejlertsen
Hello

I'm trying to find out a few things about ovirt, besides the fact that it's a
fantastic product.
I can find a draft describing virtualising the ovirt engine on the ovirt
nodes, but I can't find it described much in other places; not saying it's not
there :), but at least my eyes do not find it :)

Does anyone have some insight towards this?

Thank you very much in advance.

- Christian 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-guest-agent on EL7 not starting up correctly, requires restart

2015-06-03 Thread Daniel Helgenberger
Hello,

after deploying a few CentOS7.1 hosts I realized that the
ovirt-guest-agent does not start up correctly at system boot, requiring
a manual restart. Afterwards it is reporting data to engine.

I have only these lines:

Jun 03 12:39:39 pipeline.int.m-box.de systemd[1]: Starting oVirt Guest
Agent...
Jun 03 12:39:40 pipeline.int.m-box.de systemd[1]: Started oVirt Guest Agent.
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7136]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7136]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7163]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7163]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7169]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7169]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7175]:
pam_succeed_if(ovirt-locksession:auth): requirement "user = ovirtagent"
was met by user "ovirtagent"
Jun 03 12:40:02 pipeline.int.m-box.de userhelper[7175]: running
'/usr/share/ovirt-guest-agent/LockActiveSession.py' with root privileges
on behalf of 'ovirtagent'

Versions:
Linux pipeline.int.m-box.de 3.10.0-229.4.2.el7.x86_64 #1 SMP Wed May 13
10:06:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

qemu-guest-agent-2.1.0-4.el7.x86_64
ovirt-guest-agent-common-1.0.10-2.el7.noarch
-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handeslregister: Amtsgericht Charlottenburg / HRB 112767
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

2015-06-03 Thread Markus Stockhausen
OMG.

Got this message one day after we upgraded to 3.5.2. We hit the bug and I opened
BZ1227693. Before that we were on 3.5.1 and everything worked fine. Just give me
feedback on what I can test for you.

Markus


From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of Soeren
Malchow [soeren.malc...@mcon.net]
Sent: Wednesday, June 3, 2015 09:29
To: Allon Mureinik
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

Dear Allon,

We only upgraded the engine and the VDSM on the hypervisors; the OS itself
stayed the same (Fedora 20). With 3.5.0 live merge worked, with 3.5.2.1 it did
not.

However, we have already migrated the hypervisors to CentOS 7.1, since we were
not really comfortable using virt-preview on Fedora all the time, so we cannot
test this anymore.

Regards
Soeren

From: Allon Mureinik <amure...@redhat.com>
Date: Tuesday 2 June 2015 15:11
To: Soeren Malchow <soeren.malc...@mcon.net>
Cc: "Users@ovirt.org" <Users@ovirt.org>, Adam Litke <ali...@redhat.com>
Subject: Re: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

What have you upgraded? The engine? The hypervisors?

Can you include the results of "rpm -qa | grep ovirt" from the engine and "rpm 
-qa | egrep "vdsm|libvirt|qemu"  " on the hypervisors?

From: "Soeren Malchow" mailto:soeren.malc...@mcon.net>>
To: Users@ovirt.org
Sent: Thursday, May 21, 2015 12:32:53 PM
Subject: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

Dear all,

In our environment the “Live Merge” capability is gone after the upgrade to 
ovirt 3.5.2.1

It was working before and we had our backup relying in this.

Any idea what happened ?

Environment

Hosted Engine on CentOS 6.6 with ovirt 3.5.2.1
Compute hosts on Fedora 20 with vdsm 4.16.14 and libvirt 1.2.9.1 from the 
libvirt-preview repo (for live merge)
Storage -> CentOS 7.1 with gluster 3.6.3

Cheers
Soeren

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bug in Snapshot Removing

2015-06-03 Thread Soeren Malchow
Dear Adam

First we were using a Python script that was working on 4 threads and
therefore removing 4 snapshots at a time throughout the cluster; that
still caused problems.

Now I took the snapshot removal out of the threaded part and I am just
looping through each snapshot on each VM one after another, even with
'sleeps' in between, but the problem remains.
But I am getting the impression that it is a problem with the amount of
snapshots that are deleted in a certain time: if I delete manually and one
after another (meaning every 10 min or so) I do not have problems; if I
delete manually and do several at once, and on one VM the next one just
after one finished, the risk seems to increase.

I do not think it is the number of VMs because we had this on hosts with
only 3 or 4 VMs running.

I will try restarting libvirt and see what happens.

We are not using RHEL 7.1, only CentOS 7.1.

Is there anything else we can look at when this happens again?

Regards
Soeren 



On 02/06/15 18:53, "Adam Litke"  wrote:

>Hello Soeren.
>
>I've started to look at this issue and I'd agree that at first glance
>it looks like a libvirt issue.  The 'cannot acquire state change lock'
>messages suggest a locking bug or severe contention at least.  To help
>me better understand the problem I have a few questions about your
>setup.
>
>From your earlier report it appears that you have 15 VMs running on
>the failing host.  Are you attempting to remove snapshots from all VMs
>at the same time?  Have you tried with fewer concurrent operations?
>I'd be curious to understand if the problem is connected to the
>number of VMs running or the number of active block jobs.
>
>Have you tried RHEL-7.1 as a hypervisor host?
>
>Rather than rebooting the host, does restarting libvirtd cause the VMs
>to become responsive again?  Note that this operation may cause the
>host to move to Unresponsive state in the UI for a short period of
>time.
>
>Thanks for your report.
>
>On 31/05/15 23:39 +, Soeren Malchow wrote:
>>And sorry, another update: it does partly kill the VM; it was still
>>pingable when I wrote the last mail, but no SSH and no SPICE console was
>>possible
>>
>>From: Soeren Malchow <soeren.malc...@mcon.net>
>>Date: Monday 1 June 2015 01:35
>>To: Soeren Malchow <soeren.malc...@mcon.net>,
>>"libvirt-us...@redhat.com" <libvirt-us...@redhat.com>, users <users@ovirt.org>
>>Subject: Re: [ovirt-users] Bug in Snapshot Removing
>>
>>Small addition again:
>>
>>This error shows up in the log while removing snapshots WITHOUT
>>rendering the VMs unresponsive
>>
>>―
>>Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1657]:
>>Timed out during operation: cannot acquire state change lock
>>Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[6839]: vdsm
>>vm.Vm ERROR vmId=`56848f4a-cd73-4eda-bf79-7eb80ae569a9`::Error getting
>>block job info
>>
>>Traceback (most recent call last):
>>File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs…
>>―
>>
>>
>>
>>From: Soeren Malchow <soeren.malc...@mcon.net>
>>Date: Monday 1 June 2015 00:56
>>To: "libvirt-us...@redhat.com" <libvirt-us...@redhat.com>, users <users@ovirt.org>
>>Subject: [ovirt-users] Bug in Snapshot Removing
>>
>>Dear all
>>
>>I am not sure if the mail just did not get any attention between all the
>>mails, and this time it is also going to the libvirt mailing list.
>>
>>I am experiencing a problem with VMs becoming unresponsive when removing
>>snapshots (live merge), and I think there is a serious problem.
>>
>>Here are the previous mails,
>>
>>http://lists.ovirt.org/pipermail/users/2015-May/033083.html
>>
>>The problem is on a system with everything on the latest version, CentOS
>>7.1 and ovirt 3.5.2.1 all upgrades applied.
>>
>>This Problem did NOT exist before upgrading to CentOS 7.1 with an
>>environment running ovirt 3.5.0 and 3.5.1 and Fedora 20 with the
>>libvirt-preview repo activated.
>>
>>I think this is a bug in libvirt, not ovirt itself, but I am not sure.
>>The actual file throwing the exception is in VDSM
>>(/usr/share/vdsm/virt/vm.py, line 697).
>>
>>We are very willing to help, test and supply log files in anyway we can.
>>
>>Regards
>>Soeren
>>
>
>>___
>>Users mailing list
>>Users@ovirt.org
>>http://lists.ovirt.org/mailman/listinfo/users
>
>
>-- 
>Adam Litke

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

2015-06-03 Thread Soeren Malchow
Dear Allon,

We only upgraded the engine and the VDSM on the hypervisors; the OS itself
stayed the same (Fedora 20). With 3.5.0 live merge worked, with 3.5.2.1 it did
not.

However, we have already migrated the hypervisors to CentOS 7.1, since we were
not really comfortable using virt-preview on Fedora all the time, so we cannot
test this anymore.

Regards
Soeren

From: Allon Mureinik <amure...@redhat.com>
Date: Tuesday 2 June 2015 15:11
To: Soeren Malchow <soeren.malc...@mcon.net>
Cc: "Users@ovirt.org" <Users@ovirt.org>, Adam Litke <ali...@redhat.com>
Subject: Re: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

What have you upgraded? The engine? The hypervisors?

Can you include the results of "rpm -qa | grep ovirt" from the engine and "rpm 
-qa | egrep "vdsm|libvirt|qemu"  " on the hypervisors?

From: "Soeren Malchow" mailto:soeren.malc...@mcon.net>>
To: Users@ovirt.org
Sent: Thursday, May 21, 2015 12:32:53 PM
Subject: [ovirt-users] Live Merge vanished after upgrade to 3.5.2.1

Dear all,

In our environment the “Live Merge” capability is gone after the upgrade to 
ovirt 3.5.2.1

It was working before and we had our backup relying in this.

Any idea what happened ?

Environment

Hosted Engine on CentOS 6.6 with ovirt 3.5.2.1
Compute hosts on Fedora 20 with vdsm 4.16.14 and libvirt 1.2.9.1 from the 
libvirt-preview repo (for live merge)
Storage -> CentOS 7.1 with gluster 3.6.3

Cheers
Soeren

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Katello integration

2015-06-03 Thread Moti Asayag


- Original Message -
> From: "Nathanaël Blanchet" 
> To: masa...@redhat.com
> Cc: users@ovirt.org
> Sent: Wednesday, June 3, 2015 9:38:57 AM
> Subject: Katello integration
> 
> Hello, I read your wiki page about Katello but I can't see it listed as a
> target on the Google sheet
> https://docs.google.com/spreadsheets/d/1vUwi0y54SV7nYZC1DXo_hLPyG7hvtWDHfa6zo8u3WCY/htmlview#

I've updated the Google sheet for it.

> Is this still targeted to 3.6? The screenshot tells me that the web part is
> already ready.

It is targeted to 3.6. The feature is currently enabled via the REST API; the
web support for errata for the engine/hosts/VMs is still in progress.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users