[ovirt-users] Storage slowly expanding

2017-08-31 Thread Jim Kusznir
Hi all:

I have several VMs, all thin provisioned, on my small storage (self-hosted
gluster / hyperconverged cluster).  I'm now noticing that some of my VMs
(especially my only Windows VM) are using even MORE disk space than the
space they were allocated.

Example: Windows VM: virtual size at creation: 30GB (thin
provisioned).  Actual disk space in use inside the guest: 19GB.  According to the Storage ->
Disks tab, it's currently using 39GB.  How do I get that down?
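(For reference, one way to compare a disk's virtual size with what it actually
occupies on file-based storage such as gluster; the image path below is only
illustrative, the real one sits under the storage domain mount:)

  # "virtual size" vs. "disk size" of the image file
  qemu-img info /rhev/data-center/mnt/<mount>/<sd-uuid>/images/<disk-uuid>/<img-uuid>
  # allocation as the filesystem itself sees it
  du -sh /rhev/data-center/mnt/<mount>/<sd-uuid>/images/<disk-uuid>/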

I have two other VMs with a somewhat heavy DB load (Zabbix and Unifi);
both of those are also larger than their created maximum size, despite the disk
inside the VM not being fully utilized.

None of these have snapshots.

How do I fix this?

Thanks!
--Jim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Jim Kusznir
Hi all:

Sorry to hijack the thread, but I was about to start essentially the same
thread.

I have a 3-node cluster; all three are hosts and gluster nodes (replica 2 +
arbiter).  I DO have mnt_options=backup-volfile-servers= set:

storage=192.168.8.11:/engine
mnt_options=backup-volfile-servers=192.168.8.12:192.168.8.13

I had an issue today where 192.168.8.11 went down.  ALL VMs immediately
paused, including the engine (all VMs were running on host2:192.168.8.12).
I couldn't get any gluster stuff working until host1 (192.168.8.11) was
restored.
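(A couple of checks that can show whether the backup servers were actually
picked up by the running mount and how the volume is laid out; standard
gluster CLI, host and volume names as above:)

  # the mount helper should pass backup-volfile-servers on to the client as
  # extra --volfile-server arguments (worth verifying on the host)
  ps ax | grep '[g]lusterfs' | grep engine

  # replica layout and any quorum options on the engine volume
  gluster volume info engine
  gluster volume status engine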

What's wrong / what did I miss?

(This was set up "manually" through the article on setting up a self-hosted
gluster cluster back when 4.0 was new... I've upgraded it to 4.1 since.)

Thanks!
--Jim


On Thu, Aug 31, 2017 at 12:31 PM, Charles Kozler 
wrote:

> Typo..."Set it up and then failed that **HOST**"
>
> And upon that host going down, the storage domain went down. I only have
> hosted storage domain and this new one - is this why the DC went down and
> no SPM could be elected?
>
> I dont recall this working this way in early 4.0 or 3.6
>
> On Thu, Aug 31, 2017 at 3:30 PM, Charles Kozler 
> wrote:
>
>> So I've tested this today and I failed a node. Specifically, I setup a
>> glusterfs domain and selected "host to use: node1". Set it up and then
>> failed that VM
>>
>> However, this did not work and the datacenter went down. My engine stayed
>> up, however, it seems configuring a domain to pin to a host to use will
>> obviously cause it to fail
>>
>> This seems counter-intuitive to the point of glusterfs or any redundant
>> storage. If a single host has to be tied to its function, this introduces a
>> single point of failure
>>
>> Am I missing something obvious?
>>
>> On Thu, Aug 31, 2017 at 9:43 AM, Kasturi Narra  wrote:
>>
>>> yes, right.  What you can do is edit the hosted-engine.conf file and
>>> there is a parameter as shown below [1] and replace h2 and h3 with your
>>> second and third storage servers. Then you will need to restart
>>> ovirt-ha-agent and ovirt-ha-broker services in all the nodes .
>>>
>>> [1] 'mnt_options=backup-volfile-servers=:'
>>>
>>> On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler 
>>> wrote:
>>>
 Hi Kasturi -

 Thanks for feedback

 > If cockpit+gdeploy plugin would be have been used then that would
 have automatically detected glusterfs replica 3 volume created during
 Hosted Engine deployment and this question would not have been asked

 Actually, doing hosted-engine --deploy it too also auto detects
 glusterfs.  I know glusterfs fuse client has the ability to failover
 between all nodes in cluster, but I am still curious given the fact that I
 see in ovirt config node1:/engine (being node1 I set it to in hosted-engine
 --deploy). So my concern was to ensure and find out exactly how engine
 works when one node goes away and the fuse client moves over to the other
 node in the gluster cluster

 But you did somewhat answer my question, the answer seems to be no (as
 default) and I will have to use hosted-engine.conf and change the parameter
 as you list

 So I need to do something manual to create HA for engine on gluster?
 Yes?

 Thanks so much!

 On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra 
 wrote:

> Hi,
>
>During Hosted Engine setup question about glusterfs volume is being
> asked because you have setup the volumes yourself. If cockpit+gdeploy
> plugin would be have been used then that would have automatically detected
> glusterfs replica 3 volume created during Hosted Engine deployment and 
> this
> question would not have been asked.
>
>During new storage domain creation when glusterfs is selected there
> is a feature called 'use managed gluster volumes' and upon checking this
> all glusterfs volumes managed will be listed and you could choose the
> volume of your choice from the dropdown list.
>
> There is a conf file called /etc/hosted-engine/hosted-engine.conf
> where there is a parameter called backup-volfile-servers="h1:h2" and if 
> one
> of the gluster node goes down engine uses this parameter to provide ha /
> failover.
>
>  Hope this helps !!
>
> Thanks
> kasturi
>
>
>
> On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler 
> wrote:
>
>> Hello -
>>
>> I have successfully created a hyperconverged hosted engine setup
>> consisting of 3 nodes - 2 for VM's and the third purely for storage. I
>> manually configured it all, did not use ovirt node or anything. Built the
>> gluster volumes myself
>>
>> However, I noticed that when setting up the hosted engine and even
>> when adding a new storage domain with glusterfs type, it 

Re: [ovirt-users] [ovirt-devel] vdsm vds.dispatcher

2017-08-31 Thread Gary Pedretty
By someone, I assume you mean some other process running on the host, or 
possibly the engine?

Gary



Gary Pedretty   g...@ravnalaska.net
Systems Manager   www.flyravn.com
Ravn Alaska   907-450-7251
5245 Airport Industrial Road   907-450-7238 fax
Fairbanks, Alaska 99709
Serving All of Alaska
Second greatest commandment: “Love your neighbor as yourself” Matt 22:39
Green, green as far as the eyes can see





> On Aug 31, 2017, at 6:17 AM, Martin Sivak wrote:
> 
> One more thing:
> 
> MOM's getStatistics is actually called by VDSM stats reporting code,
> so my guess here is that someone queries VDSM for stats pretty hard,
> VDSM then asks MOM for details.
> 
> Martin
> 
> On Thu, Aug 31, 2017 at 4:14 PM, Martin Sivak wrote:
>> Hi,
>> 
>>> 2017-08-27 23:15:41,199 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:41,200 - mom.RPCServer - INFO - getStatistics()
>>> 2017-08-27 23:15:43,946 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:43,947 - mom.RPCServer - INFO - getStatistics()
>> 
>> These are logs from mom's RPC server, someone is calling MOM way too
>> often. Well about 25 times per minute if my math is right.
>> 
>> The only client I know about is actually VDSM.
>> 
>> Martin
>> 
>> 
>> On Mon, Aug 28, 2017 at 9:17 AM, Gary Pedretty wrote:
>>> Be glad to provide logs to help diagnose this.  I see nothing unusual in the
>>> vdsm.log
>>> 
>>> mom.log shows the following almost as frequently as the messages log entries
>>> 
>>> 2017-08-27 23:15:41,199 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:41,200 - mom.RPCServer - INFO - getStatistics()
>>> 2017-08-27 23:15:43,946 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:43,947 - mom.RPCServer - INFO - getStatistics()
>>> 
>>> 



[ovirt-users] Configure Wifi Interface with oVirt Host

2017-08-31 Thread Sec For
Hello,

My engine was configured with enp3s0


[ovirt-users] Fwd: Configure Wifi Interface with oVirt Host

2017-08-31 Thread Sec For
Hello,

My engine was configured with the *enp3s0* interface => ovirtmgmt, which now says
out of sync (earlier I was using a LAN connection).

Now I have moved to wifi, where I have the *wlp2s0* interface. When I click
Host -> Setup Networks, it doesn't show the wlp2s0 interface to link with the ovirt
engine.

How do we connect the wifi interface wlp2s0 to the ovirt engine?

Cheers
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Charles Kozler
Typo..."Set it up and then failed that **HOST**"

And upon that host going down, the storage domain went down. I only have
the hosted storage domain and this new one - is this why the DC went down and
no SPM could be elected?

I don't recall this working this way in early 4.0 or 3.6.

On Thu, Aug 31, 2017 at 3:30 PM, Charles Kozler 
wrote:

> So I've tested this today and I failed a node. Specifically, I setup a
> glusterfs domain and selected "host to use: node1". Set it up and then
> failed that VM
>
> However, this did not work and the datacenter went down. My engine stayed
> up, however, it seems configuring a domain to pin to a host to use will
> obviously cause it to fail
>
> This seems counter-intuitive to the point of glusterfs or any redundant
> storage. If a single host has to be tied to its function, this introduces a
> single point of failure
>
> Am I missing something obvious?
>
> On Thu, Aug 31, 2017 at 9:43 AM, Kasturi Narra  wrote:
>
>> yes, right.  What you can do is edit the hosted-engine.conf file and
>> there is a parameter as shown below [1] and replace h2 and h3 with your
>> second and third storage servers. Then you will need to restart
>> ovirt-ha-agent and ovirt-ha-broker services in all the nodes .
>>
>> [1] 'mnt_options=backup-volfile-servers=:'
>>
>> On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler 
>> wrote:
>>
>>> Hi Kasturi -
>>>
>>> Thanks for feedback
>>>
>>> > If cockpit+gdeploy plugin would be have been used then that would
>>> have automatically detected glusterfs replica 3 volume created during
>>> Hosted Engine deployment and this question would not have been asked
>>>
>>> Actually, doing hosted-engine --deploy it too also auto detects
>>> glusterfs.  I know glusterfs fuse client has the ability to failover
>>> between all nodes in cluster, but I am still curious given the fact that I
>>> see in ovirt config node1:/engine (being node1 I set it to in hosted-engine
>>> --deploy). So my concern was to ensure and find out exactly how engine
>>> works when one node goes away and the fuse client moves over to the other
>>> node in the gluster cluster
>>>
>>> But you did somewhat answer my question, the answer seems to be no (as
>>> default) and I will have to use hosted-engine.conf and change the parameter
>>> as you list
>>>
>>> So I need to do something manual to create HA for engine on gluster? Yes?
>>>
>>> Thanks so much!
>>>
>>> On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra 
>>> wrote:
>>>
 Hi,

During Hosted Engine setup question about glusterfs volume is being
 asked because you have setup the volumes yourself. If cockpit+gdeploy
 plugin would be have been used then that would have automatically detected
 glusterfs replica 3 volume created during Hosted Engine deployment and this
 question would not have been asked.

During new storage domain creation when glusterfs is selected there
 is a feature called 'use managed gluster volumes' and upon checking this
 all glusterfs volumes managed will be listed and you could choose the
 volume of your choice from the dropdown list.

 There is a conf file called /etc/hosted-engine/hosted-engine.conf
 where there is a parameter called backup-volfile-servers="h1:h2" and if one
 of the gluster node goes down engine uses this parameter to provide ha /
 failover.

  Hope this helps !!

 Thanks
 kasturi



 On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler 
 wrote:

> Hello -
>
> I have successfully created a hyperconverged hosted engine setup
> consisting of 3 nodes - 2 for VM's and the third purely for storage. I
> manually configured it all, did not use ovirt node or anything. Built the
> gluster volumes myself
>
> However, I noticed that when setting up the hosted engine and even
> when adding a new storage domain with glusterfs type, it still asks for
> hostname:/volumename
>
> This leads me to believe that if that one node goes down (ex:
> node1:/data), then ovirt engine wont be able to communicate with that
> volume because its trying to reach it on node 1 and thus, go down
>
> I know glusterfs fuse client can connect to all nodes to provide
> failover/ha but how does the engine handle this?
>
>
>

>>>
>>
>


Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Charles Kozler
So I've tested this today and I failed a node. Specifically, I set up a
glusterfs domain and selected "host to use: node1". Set it up and then
failed that VM.

However, this did not work and the datacenter went down. My engine stayed
up; however, it seems configuring a domain pinned to a single "host to use" will
obviously cause it to fail when that host goes down.

This seems counter-intuitive to the point of glusterfs or any redundant
storage. If a single host has to be tied to its function, this introduces a
single point of failure.

Am I missing something obvious?

On Thu, Aug 31, 2017 at 9:43 AM, Kasturi Narra  wrote:

> yes, right.  What you can do is edit the hosted-engine.conf file and there
> is a parameter as shown below [1] and replace h2 and h3 with your second
> and third storage servers. Then you will need to restart ovirt-ha-agent and
> ovirt-ha-broker services in all the nodes .
>
> [1] 'mnt_options=backup-volfile-servers=:'
>
> On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler 
> wrote:
>
>> Hi Kasturi -
>>
>> Thanks for feedback
>>
>> > If cockpit+gdeploy plugin would be have been used then that would have
>> automatically detected glusterfs replica 3 volume created during Hosted
>> Engine deployment and this question would not have been asked
>>
>> Actually, doing hosted-engine --deploy it too also auto detects
>> glusterfs.  I know glusterfs fuse client has the ability to failover
>> between all nodes in cluster, but I am still curious given the fact that I
>> see in ovirt config node1:/engine (being node1 I set it to in hosted-engine
>> --deploy). So my concern was to ensure and find out exactly how engine
>> works when one node goes away and the fuse client moves over to the other
>> node in the gluster cluster
>>
>> But you did somewhat answer my question, the answer seems to be no (as
>> default) and I will have to use hosted-engine.conf and change the parameter
>> as you list
>>
>> So I need to do something manual to create HA for engine on gluster? Yes?
>>
>> Thanks so much!
>>
>> On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra  wrote:
>>
>>> Hi,
>>>
>>>During Hosted Engine setup question about glusterfs volume is being
>>> asked because you have setup the volumes yourself. If cockpit+gdeploy
>>> plugin would be have been used then that would have automatically detected
>>> glusterfs replica 3 volume created during Hosted Engine deployment and this
>>> question would not have been asked.
>>>
>>>During new storage domain creation when glusterfs is selected there
>>> is a feature called 'use managed gluster volumes' and upon checking this
>>> all glusterfs volumes managed will be listed and you could choose the
>>> volume of your choice from the dropdown list.
>>>
>>> There is a conf file called /etc/hosted-engine/hosted-engine.conf
>>> where there is a parameter called backup-volfile-servers="h1:h2" and if one
>>> of the gluster node goes down engine uses this parameter to provide ha /
>>> failover.
>>>
>>>  Hope this helps !!
>>>
>>> Thanks
>>> kasturi
>>>
>>>
>>>
>>> On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler 
>>> wrote:
>>>
 Hello -

 I have successfully created a hyperconverged hosted engine setup
 consisting of 3 nodes - 2 for VM's and the third purely for storage. I
 manually configured it all, did not use ovirt node or anything. Built the
 gluster volumes myself

 However, I noticed that when setting up the hosted engine and even when
 adding a new storage domain with glusterfs type, it still asks for
 hostname:/volumename

 This leads me to believe that if that one node goes down (ex:
 node1:/data), then ovirt engine wont be able to communicate with that
 volume because its trying to reach it on node 1 and thus, go down

 I know glusterfs fuse client can connect to all nodes to provide
 failover/ha but how does the engine handle this?



>>>
>>
>


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Yuval Turgeman
Yes that would do it, thanks for the update :)

On Thu, Aug 31, 2017 at 5:21 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> all of the nodes that already made updates in the past have
>
> /etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
> /etc/yum.repos.d/ovirt-4.1-pre.repo
>
> i went through the logs in /var/log/ovirt-engine/host-deploy/ and my own
> notes and discovered/remembered that this being presented with RC versions
> started on 20170707 when i updated my nodes from 4.1.2 to
> 4.1.3-0.3.rc3.20170622082156.git47b4302 (!). probably there was a short
> timespan when you erroneously published a RC version in the wrong repo, my
> nodes "caught" it and dragged this along until today when i finally cared
> ;-) I moved the /etc/yum.repos.d/ovirt-4.1-pre*.repo files away and now
> everything seems fine
>
> Regards
> Matthias
>
> On 2017-08-31 at 15:25, Yuval Turgeman wrote:
>
>> Hi,
>>
>> Don't quite understand how you got to that 4.1.6 rc, it's only available
>> in the pre release repo, can you paste the yum repos that are enabled on
>> your system ?
>>
>> Thanks,
>> Yuval.
>>
>> On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold <
>> matthias.leop...@meduniwien.ac.at> wrote:
>>
>> Hi,
>>
>> thanks a lot.
>>
>> So i understand everything is fine with my nodes and i'll wait until
>> the update GUI shows the right version to update (4.1.5 at the
>> moment).
>>
>> Regards
>> Matthias
>>
>>
>> On 2017-08-31 at 14:56, Yuval Turgeman wrote:
>>
>> Hi,
>>
>> oVirt node ng is shipped with a placeholder rpm preinstalled.
>> The image-update rpms obsolete the placeholder rpm, so once a
>> new image-update rpm is published, yum update will pull those
>> packages.  So you have 1 system that was a fresh install and the
>> others were upgrades.
>> Next, the post install script for those image-update rpms will
>> install --justdb the image-update rpms to the new image (so
>> running yum update in the new image won't try to pull again the
>> same version).
>>
>> Regarding the 4.1.6 it's very strange, we'll need to check the
>> repos to see why it was published.
>>
>> As for nodectl, if there are no changes, it won't be updated and
>> you'll see an "old" version or a version that doesn't seem to be
>> matching the current image, but it is ok, we are thinking of
>> changing its name to make it less confusing.
>>
>> Hope this helps,
>> Yuval.
>>
>>
>> On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold wrote:
>>
>>  hi,
>>
>>  i still don't completely understand the oVirt Node update
>> process
>>  and the involved rpm packages.
>>
>>  We have 4 nodes, all running oVirt Node 4.1.3. Three of
>> them show as
>>  available updates
>> 'ovirt-node-ng-image-update-4.
>> 1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
>>  (i don't want run release candidates), one of them shows
>>  'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is
>> what i
>>  like). The node that doesn't want to upgrade to
>> '4.1.6-0.1.rc1'
>>  lacks the rpm package
>>  'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch',
>> only has
>> 'ovirt-node-ng-image-update-pl
>> aceholder-4.1.3-1.el7.centos.noarch'.
>>  Also the version of ovirt-node-ng-nodectl is
>>  '4.1.3-0.20170709.0.el7' instead of
>> '4.1.3-0.20170705.0.el7'. This
>>  node was the last one i installed and never made a version
>> update
>>  before.
>>
>>  I only began using oVirt starting with 4.1, but already
>> completed
>>  minor version upgrades of oVirt nodes. IIRC this 'mysterious'
>>  ovirt-node-ng-image-update package comes into place when
>> updating a
>>  node for the first time after initial installation. Usually i
>>  wouldn't care about all of this, but now i have this RC
>> update
>>  situation that i don't want. How is this supposed to work?
>> How can i
>>  resolve it?
>>
>>  thx
>>  matthias
>>

Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Matthias Leopold

Hi,

all of the nodes that already made updates in the past have

/etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
/etc/yum.repos.d/ovirt-4.1-pre.repo

I went through the logs in /var/log/ovirt-engine/host-deploy/ and my own
notes and discovered/remembered that being presented with RC versions
started on 20170707, when I updated my nodes from 4.1.2 to
4.1.3-0.3.rc3.20170622082156.git47b4302 (!). Probably there was a short
timespan when an RC version was erroneously published in the wrong repo;
my nodes "caught" it and dragged it along until today, when I finally
cared ;-) I moved the /etc/yum.repos.d/ovirt-4.1-pre*.repo files away,
and now everything seems fine.
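(Roughly what that amounted to on each node, as a sketch using the repo file
names above:)

  # see which repos yum currently considers enabled
  yum repolist enabled

  # park the pre-release repo files somewhere out of the way
  mkdir -p /root/disabled-repos
  mv /etc/yum.repos.d/ovirt-4.1-pre*.repo /root/disabled-repos/

  # confirm which image-update version yum now offers
  yum clean all
  yum check-update 'ovirt-node-ng-image-update*'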


Regards
Matthias

On 2017-08-31 at 15:25, Yuval Turgeman wrote:

Hi,

Don't quite understand how you got to that 4.1.6 rc, it's only available 
in the pre release repo, can you paste the yum repos that are enabled on 
your system ?


Thanks,
Yuval.

On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold wrote:


Hi,

thanks a lot.

So i understand everything is fine with my nodes and i'll wait until
the update GUI shows the right version to update (4.1.5 at the moment).

Regards
Matthias


On 2017-08-31 at 14:56, Yuval Turgeman wrote:

Hi,

oVirt node ng is shipped with a placeholder rpm preinstalled.
The image-update rpms obsolete the placeholder rpm, so once a
new image-update rpm is published, yum update will pull those
packages.  So you have 1 system that was a fresh install and the
others were upgrades.
Next, the post install script for those image-update rpms will
install --justdb the image-update rpms to the new image (so
running yum update in the new image won't try to pull again the
same version).

Regarding the 4.1.6 it's very strange, we'll need to check the
repos to see why it was published.

As for nodectl, if there are no changes, it won't be updated and
you'll see an "old" version or a version that doesn't seem to be
matching the current image, but it is ok, we are thinking of
changing its name to make it less confusing.

Hope this helps,
Yuval.


On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold wrote:

 hi,

 i still don't completely understand the oVirt Node update
process
 and the involved rpm packages.

 We have 4 nodes, all running oVirt Node 4.1.3. Three of
them show as
 available updates

'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'

 (i don't want run release candidates), one of them shows
 'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
 like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1'
 lacks the rpm package
 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch',
only has

'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.

 Also the version of ovirt-node-ng-nodectl is
 '4.1.3-0.20170709.0.el7' instead of
'4.1.3-0.20170705.0.el7'. This
 node was the last one i installed and never made a version
update
 before.

 I only began using oVirt starting with 4.1, but already
completed
 minor version upgrades of oVirt nodes. IIRC this 'mysterious'
 ovirt-node-ng-image-update package comes into place when
updating a
 node for the first time after initial installation. Usually i
 wouldn't care about all of this, but now i have this RC update
 situation that i don't want. How is this supposed to work?
How can i
 resolve it?

 thx
 matthias




-- 
Matthias Leopold

IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241 
Fax: +43 1 40160-921200 




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / 

Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Kasturi Narra
Yes, right.  What you can do is edit the hosted-engine.conf file; there
is a parameter as shown below [1]. Replace h2 and h3 with your second
and third storage servers. Then you will need to restart the ovirt-ha-agent and
ovirt-ha-broker services on all the nodes.

[1] 'mnt_options=backup-volfile-servers=<h2>:<h3>'
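(As a sketch of the above, with h2/h3 standing in for your second and third
storage servers, and hosted-engine.conf being the file referenced in this
thread:)

  # add/adjust the mount options in hosted-engine.conf on every host:
  mnt_options=backup-volfile-servers=h2:h3

  # restart the HA services on every host so they pick it up:
  systemctl restart ovirt-ha-broker ovirt-ha-agent

  # then check that the agents still see the engine storage:
  hosted-engine --vm-status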

On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler 
wrote:

> Hi Kasturi -
>
> Thanks for feedback
>
> > If cockpit+gdeploy plugin would be have been used then that would have
> automatically detected glusterfs replica 3 volume created during Hosted
> Engine deployment and this question would not have been asked
>
> Actually, doing hosted-engine --deploy it too also auto detects
> glusterfs.  I know glusterfs fuse client has the ability to failover
> between all nodes in cluster, but I am still curious given the fact that I
> see in ovirt config node1:/engine (being node1 I set it to in hosted-engine
> --deploy). So my concern was to ensure and find out exactly how engine
> works when one node goes away and the fuse client moves over to the other
> node in the gluster cluster
>
> But you did somewhat answer my question, the answer seems to be no (as
> default) and I will have to use hosted-engine.conf and change the parameter
> as you list
>
> So I need to do something manual to create HA for engine on gluster? Yes?
>
> Thanks so much!
>
> On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra  wrote:
>
>> Hi,
>>
>>During Hosted Engine setup question about glusterfs volume is being
>> asked because you have setup the volumes yourself. If cockpit+gdeploy
>> plugin would be have been used then that would have automatically detected
>> glusterfs replica 3 volume created during Hosted Engine deployment and this
>> question would not have been asked.
>>
>>During new storage domain creation when glusterfs is selected there is
>> a feature called 'use managed gluster volumes' and upon checking this all
>> glusterfs volumes managed will be listed and you could choose the volume of
>> your choice from the dropdown list.
>>
>> There is a conf file called /etc/hosted-engine/hosted-engine.conf
>> where there is a parameter called backup-volfile-servers="h1:h2" and if one
>> of the gluster node goes down engine uses this parameter to provide ha /
>> failover.
>>
>>  Hope this helps !!
>>
>> Thanks
>> kasturi
>>
>>
>>
>> On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler 
>> wrote:
>>
>>> Hello -
>>>
>>> I have successfully created a hyperconverged hosted engine setup
>>> consisting of 3 nodes - 2 for VM's and the third purely for storage. I
>>> manually configured it all, did not use ovirt node or anything. Built the
>>> gluster volumes myself
>>>
>>> However, I noticed that when setting up the hosted engine and even when
>>> adding a new storage domain with glusterfs type, it still asks for
>>> hostname:/volumename
>>>
>>> This leads me to believe that if that one node goes down (ex:
>>> node1:/data), then ovirt engine wont be able to communicate with that
>>> volume because its trying to reach it on node 1 and thus, go down
>>>
>>> I know glusterfs fuse client can connect to all nodes to provide
>>> failover/ha but how does the engine handle this?
>>>
>>>
>>>
>>
>


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Yuval Turgeman
Hi,

I don't quite understand how you got to that 4.1.6 RC; it's only available in
the pre-release repo. Can you paste the yum repos that are enabled on your
system?

Thanks,
Yuval.

On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> thanks a lot.
>
> So i understand everything is fine with my nodes and i'll wait until the
> update GUI shows the right version to update (4.1.5 at the moment).
>
> Regards
> Matthias
>
>
> On 2017-08-31 at 14:56, Yuval Turgeman wrote:
>
>> Hi,
>>
>> oVirt node ng is shipped with a placeholder rpm preinstalled.
>> The image-update rpms obsolete the placeholder rpm, so once a new
>> image-update rpm is published, yum update will pull those packages.  So you
>> have 1 system that was a fresh install and the others were upgrades.
>> Next, the post install script for those image-update rpms will install
>> --justdb the image-update rpms to the new image (so running yum update in
>> the new image won't try to pull again the same version).
>>
>> Regarding the 4.1.6 it's very strange, we'll need to check the repos to
>> see why it was published.
>>
>> As for nodectl, if there are no changes, it won't be updated and you'll
>> see an "old" version or a version that doesn't seem to be matching the
>> current image, but it is ok, we are thinking of changing its name to make
>> it less confusing.
>>
>> Hope this helps,
>> Yuval.
>>
>>
>> On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold <
>> matthias.leop...@meduniwien.ac.at> wrote:
>>
>> hi,
>>
>> i still don't completely understand the oVirt Node update process
>> and the involved rpm packages.
>>
>> We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as
>> available updates
>> 'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.git
>> d646d2f.el7.centos'
>> (i don't want run release candidates), one of them shows
>> 'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
>> like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1'
>> lacks the rpm package
>> 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch', only has
>> 'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.
>> Also the version of ovirt-node-ng-nodectl is
>> '4.1.3-0.20170709.0.el7' instead of '4.1.3-0.20170705.0.el7'. This
>> node was the last one i installed and never made a version update
>> before.
>>
>> I only began using oVirt starting with 4.1, but already completed
>> minor version upgrades of oVirt nodes. IIRC this 'mysterious'
>> ovirt-node-ng-image-update package comes into place when updating a
>> node for the first time after initial installation. Usually i
>> wouldn't care about all of this, but now i have this RC update
>> situation that i don't want. How is this supposed to work? How can i
>> resolve it?
>>
>> thx
>> matthias
>>
>>
>>
>>
> --
> Matthias Leopold
> IT Systems & Communications
> Medizinische Universität Wien
> Spitalgasse 23 / BT 88 /Ebene 00
> A-1090 Wien
> Tel: +43 1 40160-21241
> Fax: +43 1 40160-921200
>


Re: [ovirt-users] unsupported configuration: Unable to find security driver for model selinux

2017-08-31 Thread Charles Kozler
Also, to add to this: I figured all nodes now need to be "equal" in terms of
SELinux, so I went on node 1, set SELinux to permissive and rebooted,
and then vdsmd wouldn't start, which showed the host as non-responsive in the
engine UI. Upon inspection of the log, it was because of the missing sebool
module. So I ran 'vdsm-tool configure --force' and then vdsmd started fine.
Once I did this, the host came up in the web UI.
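(Condensed, the recovery on node 1 looked roughly like this; the exact way you
switch SELinux to permissive may differ:)

  # make node 1 permissive on the next boot, then reboot
  sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
  reboot

  # after the reboot vdsmd refused to start, so reconfigure and start it
  vdsm-tool configure --force
  systemctl start vdsmd
  systemctl status vdsmd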

Tested migrating a VM to it and it worked with no issue

Hope this helps someone else who lands in this situation. However, I'd like
to know what the expected environment for oVirt is. It would be helpful to
have some checks along the way for this condition if it's a blocker for
functionality.

On Thu, Aug 31, 2017 at 9:09 AM, Charles Kozler 
wrote:

> Hello,
>
> I recently installed ovirt cluster on 3 nodes and saw that I could only
> migrate one way
>
> Reviewing the logs I found this
>
> 2017-08-31 09:04:30,685-0400 ERROR (migsrc/1eca84bd) [virt.vm]
> (vmId='1eca84bd-2796-469d-a071-6ba2b21d82f4') unsupported configuration:
> Unable to find security driver for model selinux (migration:287)
> 2017-08-31 09:04:30,698-0400 ERROR (migsrc/1eca84bd) [virt.vm]
> (vmId='1eca84bd-2796-469d-a071-6ba2b21d82f4') Failed to migrate
> (migration:429)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 411, in run
> self._startUnderlyingMigration(time.time())
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 487, in _startUnderlyingMigration
> self._perform_with_conv_schedule(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 563, in _perform_with_conv_schedule
> self._perform_migration(duri, muri)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line
> 529, in _perform_migration
> self._vm._dom.migrateToURI3(duri, params, flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 69, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
> 123, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in
> wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in
> migrateToURI3
> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
> dom=self)
> libvirtError: unsupported configuration: Unable to find security driver
> for model selinux
>
>
> Which led me to this
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1013617
>
> I could migrate from node1 -> node 2 but not node2 -> node1, so obviously
> I had something different with node 1. In this case, it was selinux
>
> On node 1 it is set to disabled but on node 2 it is set to permissive. I
> am not sure how they got different but I wanted to update this list with
> this finding
>
> Node 2 was setup directly via web UI in the engine with host -> new.
> Perhaps I manually set node 1 to disabled
>
> Does ovirt / libvirt expect permissive? Or does it expect enforcing? Or
> does it need to be both the same matching?
>
> thanks!
>


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Matthias Leopold

Hi,

thanks a lot.

So I understand everything is fine with my nodes, and I'll wait until the
update GUI shows the right version to update to (4.1.5 at the moment).


Regards
Matthias


On 2017-08-31 at 14:56, Yuval Turgeman wrote:

Hi,

oVirt node ng is shipped with a placeholder rpm preinstalled.
The image-update rpms obsolete the placeholder rpm, so once a new 
image-update rpm is published, yum update will pull those packages.  So 
you have 1 system that was a fresh install and the others were upgrades.
Next, the post install script for those image-update rpms will install 
--justdb the image-update rpms to the new image (so running yum update 
in the new image won't try to pull again the same version).


Regarding the 4.1.6 it's very strange, we'll need to check the repos to 
see why it was published.


As for nodectl, if there are no changes, it won't be updated and you'll 
see an "old" version or a version that doesn't seem to be matching the 
current image, but it is ok, we are thinking of changing its name to 
make it less confusing.


Hope this helps,
Yuval.


On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold wrote:


hi,

i still don't completely understand the oVirt Node update process
and the involved rpm packages.

We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as
available updates

'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
(i don't want run release candidates), one of them shows
'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1'
lacks the rpm package
'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch', only has
'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.
Also the version of ovirt-node-ng-nodectl is
'4.1.3-0.20170709.0.el7' instead of '4.1.3-0.20170705.0.el7'. This
node was the last one i installed and never made a version update
before.

I only began using oVirt starting with 4.1, but already completed
minor version upgrades of oVirt nodes. IIRC this 'mysterious'
ovirt-node-ng-image-update package comes into place when updating a
node for the first time after initial installation. Usually i
wouldn't care about all of this, but now i have this RC update
situation that i don't want. How is this supposed to work? How can i
resolve it?

thx
matthias






--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200


Re: [ovirt-users] oVirt engine with different VM id

2017-08-31 Thread Misak Khachatryan
OK, I did a right click on the storage domain and did Destroy. It got
re-imported, and the Engine VM too.

Now it seems OK,

Thank you very much.

Best regards,
Misak Khachatryan


On Thu, Aug 31, 2017 at 5:11 PM, Misak Khachatryan  wrote:
> Hi,
>
> it's grayed out on web interface, is there any other way? Trying to
> detach gives error
>
> VDSM command DetachStorageDomainVDS failed: Storage domain does not
> exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)
> Failed to detach Storage Domain hosted_storage from Data Center
> Default. (User: admin@internal-authz)
>
>
> Best regards,
> Misak Khachatryan
>
>
> On Thu, Aug 31, 2017 at 4:22 PM, Martin Sivak  wrote:
>> Hi,
>>
>> you can remote the hosted engine storage domain from the engine as
>> well. It should also be re-imported.
>>
>> We had cases where destroying the domain ended up with a locked SD,
>> but removing the SD and re-importing is the proper way here.
>>
>> Best regards
>>
>> PS: Re-adding the mailing list, we should really set a proper Reply-To 
>> header..
>>
>> Martin Sivak
>>
>> On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan  wrote:
>>> Hi,
>>>
>>> I would love to, but:
>>>
>>> Error while executing action:
>>>
>>> HostedEngine:
>>>
>>> Cannot remove VM. The relevant Storage Domain's status is Inactive.
>>>
>>> it seems i should somehow fix storage domain first ...
>>>
>>> engine=# update storage_domain_static set id =
>>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
>>> 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
>>> ERROR:  update or delete on table "storage_domain_static" violates
>>> foreign key constraint "disk_profiles_storage_domain_id_fkey" on table
>>> "disk_profiles"
>>> DETAIL:  Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still
>>> referenced from table "disk_profiles".
>>>
>>> engine=# update disk_profiles set storage_domain_id =
>>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
>>> 'a6d71571-a13a-415b-9f97-635f17cbe67d';
>>> ERROR:  insert or update on table "disk_profiles" violates foreign key
>>> constraint "disk_profiles_storage_domain_id_fkey"
>>> DETAIL:  Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c)
>>> is not present in table "storage_domain_static".
>>>
>>> engine=# select * from storage_domain_static;
>>>  id  |   storage
>>>  |  storage_name  | storage_domain_type | storage_type |
>>> storage_domain_format_type | _create_date  |
>>> _update_date  | recoverable | last_time_used_as_maste
>>> r | storage_description | storage_comment | wipe_after_delete |
>>> warning_low_space_indicator | critical_space_action_blocker |
>>> first_metadata_device | vg_metadata_device | discard_after_delete
>>> --+--++-+--++---+---+-+
>>> --+-+-+---+-+---+---++--
>>> 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 |
>>> ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository |
>>>4 |8 | 0  | 2016-11-02
>>> 21:27:22.118586+04 |   | t   |
>>>  | | | f |
>>> |   |
>>>  || f
>>> 51c903f6-df83-4510-ac69-c164742ca6e7 |
>>> 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso|
>>>2 |7 | 0  | 2016-11-02
>>> 23:26:21.296635+04 |   | t   |
>>> 0 | | | f |
>>>   10 | 5 |
>>>   || f
>>> ece1f05c-97c9-4482-a1a5-914397cddd35 |
>>> dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export |
>>>3 |1 | 0  | 2016-12-14
>>> 11:28:15.736746+04 | 2016-12-14 11:33:12.872562+04 | t   |
>>> 0 | Export  | | f |
>>>   10 | 5 |
>>>   || f
>>> 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 |
>>> d1e9e3c8-aaf3-43de-ae80-101e5bd2574f | data   |
>>>0 |7 | 4  | 2016-11-02
>>> 23:24:43.402629+04 | 2017-02-22 17:20:42.721092+04 | t   |
>>> 0 | | | f |
>>>   10 | 5 |
>>>   || f
>>> c44343af-cc4a-4bb7-a548-0c6f609d60d5 |
>>> 

Re: [ovirt-users] oVirt engine with different VM id

2017-08-31 Thread Misak Khachatryan
Hi,

it's grayed out in the web interface, is there any other way? Trying to
detach gives an error:

VDSM command DetachStorageDomainVDS failed: Storage domain does not
exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)
Failed to detach Storage Domain hosted_storage from Data Center
Default. (User: admin@internal-authz)


Best regards,
Misak Khachatryan


On Thu, Aug 31, 2017 at 4:22 PM, Martin Sivak  wrote:
> Hi,
>
> you can remote the hosted engine storage domain from the engine as
> well. It should also be re-imported.
>
> We had cases where destroying the domain ended up with a locked SD,
> but removing the SD and re-importing is the proper way here.
>
> Best regards
>
> PS: Re-adding the mailing list, we should really set a proper Reply-To 
> header..
>
> Martin Sivak
>
> On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan  wrote:
>> Hi,
>>
>> I would love to, but:
>>
>> Error while executing action:
>>
>> HostedEngine:
>>
>> Cannot remove VM. The relevant Storage Domain's status is Inactive.
>>
>> it seems i should somehow fix storage domain first ...
>>
>> engine=# update storage_domain_static set id =
>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
>> 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
>> ERROR:  update or delete on table "storage_domain_static" violates
>> foreign key constraint "disk_profiles_storage_domain_id_fkey" on table
>> "disk_profiles"
>> DETAIL:  Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still
>> referenced from table "disk_profiles".
>>
>> engine=# update disk_profiles set storage_domain_id =
>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
>> 'a6d71571-a13a-415b-9f97-635f17cbe67d';
>> ERROR:  insert or update on table "disk_profiles" violates foreign key
>> constraint "disk_profiles_storage_domain_id_fkey"
>> DETAIL:  Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c)
>> is not present in table "storage_domain_static".
>>
>> engine=# select * from storage_domain_static;
>>  id  |   storage
>>  |  storage_name  | storage_domain_type | storage_type |
>> storage_domain_format_type | _create_date  |
>> _update_date  | recoverable | last_time_used_as_maste
>> r | storage_description | storage_comment | wipe_after_delete |
>> warning_low_space_indicator | critical_space_action_blocker |
>> first_metadata_device | vg_metadata_device | discard_after_delete
>> --+--++-+--++---+---+-+
>> --+-+-+---+-+---+---++--
>> 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 |
>> ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository |
>>4 |8 | 0  | 2016-11-02
>> 21:27:22.118586+04 |   | t   |
>>  | | | f |
>> |   |
>>  || f
>> 51c903f6-df83-4510-ac69-c164742ca6e7 |
>> 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso|
>>2 |7 | 0  | 2016-11-02
>> 23:26:21.296635+04 |   | t   |
>> 0 | | | f |
>>   10 | 5 |
>>   || f
>> ece1f05c-97c9-4482-a1a5-914397cddd35 |
>> dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export |
>>3 |1 | 0  | 2016-12-14
>> 11:28:15.736746+04 | 2016-12-14 11:33:12.872562+04 | t   |
>> 0 | Export  | | f |
>>   10 | 5 |
>>   || f
>> 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 |
>> d1e9e3c8-aaf3-43de-ae80-101e5bd2574f | data   |
>>0 |7 | 4  | 2016-11-02
>> 23:24:43.402629+04 | 2017-02-22 17:20:42.721092+04 | t   |
>> 0 | | | f |
>>   10 | 5 |
>>   || f
>> c44343af-cc4a-4bb7-a548-0c6f609d60d5 |
>> 8b54ce35-3187-4fba-a2c7-6b604d077f5b | hosted_storage |
>>1 |7 | 4  | 2016-11-02
>> 23:26:13.165435+04 | 2017-02-22 17:20:42.721092+04 | t   |
>> 0 | | | f |
>>   10 | 5 |
>>   || f
>> 

[ovirt-users] unsupported configuration: Unable to find security driver for model selinux

2017-08-31 Thread Charles Kozler
Hello,

I recently installed ovirt cluster on 3 nodes and saw that I could only
migrate one way

Reviewing the logs I found this

2017-08-31 09:04:30,685-0400 ERROR (migsrc/1eca84bd) [virt.vm]
(vmId='1eca84bd-2796-469d-a071-6ba2b21d82f4') unsupported configuration:
Unable to find security driver for model selinux (migration:287)
2017-08-31 09:04:30,698-0400 ERROR (migsrc/1eca84bd) [virt.vm]
(vmId='1eca84bd-2796-469d-a071-6ba2b21d82f4') Failed to migrate
(migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411,
in run
self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 487,
in _startUnderlyingMigration
self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 563,
in _perform_with_conv_schedule
self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 529,
in _perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69,
in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
123, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 944, in
wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirtError: unsupported configuration: Unable to find security driver for
model selinux


Which led me to this

https://bugzilla.redhat.com/show_bug.cgi?id=1013617

I could migrate from node1 -> node 2 but not node2 -> node1, so obviously I
had something different with node 1. In this case, it was selinux

On node 1 it is set to disabled but on node 2 it is set to permissive. I am
not sure how they got different but I wanted to update this list with this
finding
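(A quick way to compare this across hosts before trying a migration; host
names below are just placeholders:)

  # on each host: runtime mode and the mode configured for the next boot
  getenforce
  grep '^SELINUX=' /etc/selinux/config

  # or from one machine with ssh access to all nodes
  for h in node1 node2 node3; do ssh "$h" 'echo "$(hostname): $(getenforce)"'; done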

Node 2 was setup directly via web UI in the engine with host -> new.
Perhaps I manually set node 1 to disabled

Does ovirt / libvirt expect permissive? Or does it expect enforcing? Or
does it need to be both the same matching?

thanks!


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Yuval Turgeman
Hi,

oVirt Node NG is shipped with a placeholder rpm preinstalled.
The image-update rpms obsolete the placeholder rpm, so once a new
image-update rpm is published, yum update will pull those packages.  So you
have 1 system that was a fresh install and the others were upgrades.
Next, the post-install script for those image-update rpms will install
(--justdb) the image-update rpms into the new image, so running yum update in
the new image won't try to pull the same version again.
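(To see where a given node stands, something like this should work; nodectl
ships with oVirt Node:)

  # which placeholder / image-update rpms are in the rpm database
  rpm -qa | grep ovirt-node-ng-image-update

  # which image layers the node actually has, and which one is booted
  nodectl info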

Regarding the 4.1.6 it's very strange, we'll need to check the repos to see
why it was published.

As for nodectl, if there are no changes, it won't be updated and you'll see
an "old" version or a version that doesn't seem to be matching the current
image, but it is ok, we are thinking of changing its name to make it less
confusing.

Hope this helps,
Yuval.


On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> hi,
>
> i still don't completely understand the oVirt Node update process and the
> involved rpm packages.
>
> We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as
> available updates 'ovirt-node-ng-image-update-4.
> 1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos' (i don't want run
> release candidates), one of them shows 
> 'ovirt-node-ng-image-update-4.1.5-1.el7.centos'
> (this is what i like). The node that doesn't want to upgrade to
> '4.1.6-0.1.rc1' lacks the rpm package 
> 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch',
> only has 'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.
> Also the version of ovirt-node-ng-nodectl is '4.1.3-0.20170709.0.el7'
> instead of '4.1.3-0.20170705.0.el7'. This node was the last one i installed
> and never made a version update before.
>
> I only began using oVirt starting with 4.1, but already completed minor
> version upgrades of oVirt nodes. IIRC this 'mysterious'
> ovirt-node-ng-image-update package comes into place when updating a node
> for the first time after initial installation. Usually i wouldn't care
> about all of this, but now i have this RC update situation that i don't
> want. How is this supposed to work? How can i resolve it?
>
> thx
> matthias
>


[ovirt-users] [ANN] oVirt 4.1.6 Second Release Candidate is now available

2017-08-31 Thread Lev Veyde
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.6 for testing, as of August 31st, 2017.

Starting from 4.1.5, oVirt supports libgfapi [5]. Using libgfapi provides a
real performance boost for oVirt when using GlusterFS.
Due to a known issue [6], using this will break live storage migration.
This is expected to be fixed soon. If you do not use live storage
migration you can give it a try. See [7] for more details on how to enable
it.
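(The feature page linked as [7] describes how to turn this on from the engine
side; as a rough sketch, assuming the LibgfApiSupported config key documented
there:)

  # on the engine machine; key name and version scope taken from [7], please verify
  engine-config -s LibgfApiSupported=true --cver=4.1
  systemctl restart ovirt-engine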

This release is available now for:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* oVirt Node 4.1

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Live is already available[4]
- oVirt Node is already available[4]

Additional Resources:
* Read more about the oVirt 4.1.6 release highlights:
http://www.ovirt.org/release/4.1.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.6/
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
[5]
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
[6] https://bugzilla.redhat.com/show_bug.cgi?id=1306562
[7]
http://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/


-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 


Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Charles Kozler
Hi Kasturi -

Thanks for feedback

> If cockpit+gdeploy plugin would be have been used then that would have
automatically detected glusterfs replica 3 volume created during Hosted
Engine deployment and this question would not have been asked

Actually, hosted-engine --deploy also auto-detects glusterfs.
I know the glusterfs fuse client has the ability to fail over between all nodes
in the cluster, but I am still curious, given that I see node1:/engine in the ovirt
config (node1 being what I set it to in hosted-engine --deploy).
So my concern was to find out exactly how the engine behaves when one
node goes away and the fuse client moves over to the other node in the
gluster cluster.

But you did somewhat answer my question: the answer seems to be no (by
default), and I will have to use hosted-engine.conf and change the parameter
as you list.

So I need to do something manual to create HA for engine on gluster? Yes?

Thanks so much!

On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra  wrote:

> Hi,
>
>During Hosted Engine setup question about glusterfs volume is being
> asked because you have setup the volumes yourself. If cockpit+gdeploy
> plugin would be have been used then that would have automatically detected
> glusterfs replica 3 volume created during Hosted Engine deployment and this
> question would not have been asked.
>
>During new storage domain creation when glusterfs is selected there is
> a feature called 'use managed gluster volumes' and upon checking this all
> glusterfs volumes managed will be listed and you could choose the volume of
> your choice from the dropdown list.
>
> There is a conf file called /etc/hosted-engine/hosted-engine.conf
> where there is a parameter called backup-volfile-servers="h1:h2" and if one
> of the gluster node goes down engine uses this parameter to provide ha /
> failover.
>
>  Hope this helps !!
>
> Thanks
> kasturi
>
>
>
> On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler 
> wrote:
>
>> Hello -
>>
>> I have successfully created a hyperconverged hosted engine setup
>> consisting of 3 nodes - 2 for VM's and the third purely for storage. I
>> manually configured it all, did not use ovirt node or anything. Built the
>> gluster volumes myself
>>
>> However, I noticed that when setting up the hosted engine and even when
>> adding a new storage domain with glusterfs type, it still asks for
>> hostname:/volumename
>>
>> This leads me to believe that if that one node goes down (ex:
>> node1:/data), then ovirt engine wont be able to communicate with that
>> volume because its trying to reach it on node 1 and thus, go down
>>
>> I know glusterfs fuse client can connect to all nodes to provide
>> failover/ha but how does the engine handle this?
>>
>>
>>
>


Re: [ovirt-users] oVirt engine with different VM id

2017-08-31 Thread Martin Sivak
Hi,

you can remove the hosted engine storage domain from the engine as
well. It should also be re-imported.

We had cases where destroying the domain ended up with a locked SD,
but removing the SD and re-importing is the proper way here.

Best regards

PS: Re-adding the mailing list, we should really set a proper Reply-To header..

Martin Sivak

On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan  wrote:
> Hi,
>
> I would love to, but:
>
> Error while executing action:
>
> HostedEngine:
>
> Cannot remove VM. The relevant Storage Domain's status is Inactive.
>
> it seems i should somehow fix storage domain first ...
>
> engine=# update storage_domain_static set id =
> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
> 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
> ERROR:  update or delete on table "storage_domain_static" violates
> foreign key constraint "disk_profiles_storage_domain_id_fkey" on table
> "disk_profiles"
> DETAIL:  Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still
> referenced from table "disk_profiles".
>
> engine=# update disk_profiles set storage_domain_id =
> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
> 'a6d71571-a13a-415b-9f97-635f17cbe67d';
> ERROR:  insert or update on table "disk_profiles" violates foreign key
> constraint "disk_profiles_storage_domain_id_fkey"
> DETAIL:  Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c)
> is not present in table "storage_domain_static".
>
> engine=# select * from storage_domain_static;
>  id | storage | storage_name | storage_domain_type | storage_type | storage_domain_format_type | _create_date | _update_date | recoverable | last_time_used_as_master | storage_description | storage_comment | wipe_after_delete | warning_low_space_indicator | critical_space_action_blocker | first_metadata_device | vg_metadata_device | discard_after_delete
>  072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | 4 | 8 | 0 | 2016-11-02 21:27:22.118586+04 |  | t |  |  |  | f |  |  |  |  | f
>  51c903f6-df83-4510-ac69-c164742ca6e7 | 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso | 2 | 7 | 0 | 2016-11-02 23:26:21.296635+04 |  | t | 0 |  |  | f | 10 | 5 |  |  | f
>  ece1f05c-97c9-4482-a1a5-914397cddd35 | dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export | 3 | 1 | 0 | 2016-12-14 11:28:15.736746+04 | 2016-12-14 11:33:12.872562+04 | t | 0 | Export |  | f | 10 | 5 |  |  | f
>  07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 | d1e9e3c8-aaf3-43de-ae80-101e5bd2574f | data | 0 | 7 | 4 | 2016-11-02 23:24:43.402629+04 | 2017-02-22 17:20:42.721092+04 | t | 0 |  |  | f | 10 | 5 |  |  | f
>  c44343af-cc4a-4bb7-a548-0c6f609d60d5 | 8b54ce35-3187-4fba-a2c7-6b604d077f5b | hosted_storage | 1 | 7 | 4 | 2016-11-02 23:26:13.165435+04 | 2017-02-22 17:20:42.721092+04 | t | 0 |  |  | f | 10 | 5 |  |  | f
>  004ca4dd-c621-463d-b514-ccfe07ef99d7 | b31a7de9-e789-4ece-9f99-4b150bf581db | virt4-Local | 0 | 4 | 4 | 2017-03-23 09:02:26.37006+04 | 2017-03-23 09:02:31.887534+04 | t | 0 |  |  | f | 10 | 5 |  |  | f
> (6 rows)
>
> engine=# select * from storage_domain_dynamic;
>  id | available_disk_size | used_disk_size 

[ovirt-users] oVirt engine with different VM id

2017-08-31 Thread Misak Khachatryan
Hi,

Yesterday someone powered off our storage, and all my 3 hosts lost
their disks. After 2 days of recovery I managed to bring back
everything except the engine VM, which is online but not visible to
the engine itself.

I did a new deployment of the VM, restored the backup and ran engine
setup. After manual database updates, all my VMs and hosts are OK now,
but not the engine: the engine VM is running with a different VM id
than the one in the database.

I've tried this with no luck.

engine=# update vm_static set vm_guid =
'75072b32-6f93-4c38-8f18-825004072c1a' where vm_guid =(select
vm_guid from vm_static where vm_name = 'HostedEngine');
ERROR:  update or delete on table "vm_static" violates foreign key
constraint "fk_disk_vm_element_vm_static" on table "disk_vm_element"
DETAIL:  Key (vm_guid)=(d81ccb53-2594-49db-b69a-04c73b504c59) is still
referenced from table "disk_vm_element".
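
(A quick way to see why these updates keep failing is to list every table
that still holds a foreign key into vm_static; this is a generic PostgreSQL
catalog query, not anything oVirt-specific, and the table names it returns
are simply whatever the schema contains:

engine=# SELECT conrelid::regclass AS referencing_table, conname
         FROM pg_constraint
         WHERE confrelid = 'vm_static'::regclass AND contype = 'f';

Every table listed there still references the old vm_guid and would have to
be updated consistently as well, which is why hand-editing the database gets
messy quickly.)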


Right now i've deployed engine on all 3 hosts but see this picture:

[root@virt3 ~]# hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!




[root@virt3 ~]#  vdsClient -s 0 list

75072b32-6f93-4c38-8f18-825004072c1a
   Status = Up
   statusTime = 4397337690
   kvmEnable = true
   emulatedMachine = pc
   afterMigrationStatus =
   pid = 5280
   devices = [{'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': '2b6b0e87-c86a-4144-ad39-40d5bfe25df1', 'alias': 'console0'},
       {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'target': 16777216, 'alias': 'balloon0'},
       {'specParams': {'source': 'random'}, 'alias': 'rng0', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type': 'rng'},
       {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}},
       {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}},
       {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}},
       {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}},
       {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x2'}},
       {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}},
       {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}},
       {'device': 'vga', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}},
       {'device': 'vnc', 'type': 'graphics', 'port': '5900'},
       {'nicModel': 'pv', 'macAddr': '00:16:3e:01:29:95', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'specParams': {}, 'deviceId': 'd348a068-063b-4a40-9119-a3d34f6c7db4', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'},
       {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': 'e738b50b-c200-4429-8489-4519325339c7', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
       {'poolID': '----',
        'volumeInfo': {'path': 'engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d', 'protocol': 'gluster', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': 'virt1'}, {'port': '0', 'transport': 'tcp', 'name': 'virt2'}, {'port': '0', 'transport': 'tcp', 'name': 'virt3'}]},
        'index': '0', 'iface': 'virtio', 'apparentsize': '62277025792', 'specParams': {}, 'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '3255476224', 'type': 'disk', 'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'reqsize': '0', 'format': 'raw', 'deviceId': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'disk',
        'path': '/var/run/vdsm/storage/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'alias': 'virtio-disk0',
        'volumeChain': [{'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'leaseOffset': 0, 'volumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 

Re: [ovirt-users] Question on Datacenters / clusters / data domains

2017-08-31 Thread Eduardo Mayoral
Thank you very much, Mr Leviim!

This made things clear.

Eduardo Mayoral Jimeno (emayo...@arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153

On 28/08/17 11:14, Shani Leviim wrote:
> Hi Eduardo,
> Welcome aboard!
>
> First, you may find some relevant information in here:
> http://www.ovirt.org/documentation/admin-guide/administration-guide/
>  .
>
> Regarding your questions:
> * A data domain in an oVirt Data Center must be available to every
> Host on the Data Center: Am I right?
> Yes, you're right.
>
> * Can I manually migrate VMs between Datacenters?
> VM migration can't be performed between data centers, so you can't use
> the 'migrate VM' function.
> In order to "migrate" your VM between different data centers, you can
> use the 'export' and 'import' functions and an 'export domain':
> By creating an export domain for one of your DCs (each DC can have up
> to one export domain), and exporting your VM to that storage domain,
> you can then detach the export domain from that DC and attach it to
> the other DC, and by importing your VM there you'll finish the
> transaction.
>
> Another option is to detach the VM's storage domain from one DC and
> attach it to the second one.
> That way you'll move the whole storage domain between your DCs.
>
> If you have any further questions, don't hesitate to ask :)
>
> *Regards,
> *
> *Shani Leviim
> *
>
> On Thu, Aug 24, 2017 at 2:51 PM, Eduardo Mayoral  > wrote:
>
> Hi,
>
> First of all, sorry for the naive question, but I have not
> been able
> to find good guidance on the docs.
>
> I come from the VMWare environment, now I am starting to migrate
> some workload from VMWare to oVirt (v4.1.4 , CentOS 7.3 hosts).
>
> In VMWare I am used to have one datacenter, several host clusters,
> and a bunch of iSCSI Datastores, but we do not map every iSCSI
> LUN/datastore to every host. Actually we used to do that, but we hit
> limits on the number of iSCSI paths with our infrastructure.
>
> Rather than that, we have groups of LUNs/Datastores mapped to the
> ESXi hosts which form a given VMware cluster. Then we have a couple of
> datastores mapped to every ESXi in the vmware datacenter, and we use
> those to store the ISO images and as storage that we use when we
> need to
> migrate VMs between clusters for some reason.
>
> Given the role of the Master data domain and the SPM in oVIrt
> it is
> my understanding that I cannot replicate this kind of setup in
> oVirt: a
> data domain in an oVirt Data Center must be available to every Host on
> the Data Center: Am I right?
>
> So, our current setup is still small, but I am concerned that
> as it
> grows, if I stay with one Datacenter, several clusters and a group of
> data domains mapped to every host I may run again into problems
> with the
> number of iSCSI paths (the limit in VMWare was around 1024), it is
> easy
> to reach that limit as it is (number of hosts) * (number of LUNs) *
> (number of paths/LUN).
>
> If I split my setup in several datacenters controlled by a single
> oVirt-engine in order to keep the number of iSCSI paths
> reasonable. Can
> I manually migrate VMs between Datacenters? I assume that in order
> to do
> that, those datacenters will need to share some data domain , Can this
> be done? Maybe with NFS?
>
> Thanks for your help!
>
> --
> Eduardo Mayoral Jimeno (emayo...@arsys.es )
> Administrador de sistemas. Departamento de Plataformas. Arsys
> internet.
> +34 941 620 145 ext. 5153 
>
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
> 
>
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Clarification on Broadwell as cluster cpu type

2017-08-31 Thread Gianluca Cecchi
On Thu, Aug 31, 2017 at 11:15 AM, Jakub Niedermertl 
wrote:

> Hello Gianluca,
>
> ultimate source of truth for the engine is line [1] and possibly
> subsequent update clauses. It contains Broadwell-noTSX for 4.1 as well as
> planned 4.2.
>
> Regards
> Jakub
>
> [1]: https://github.com/oVirt/ovirt-engine/blob/5e1e11ec6a560c0bc8d2be849b8b5ba6c6427f34/packaging/dbscripts/upgrade/pre_upgrade/_config.sql#L422
>
>
>
Thanks for confirmation, Jakub.
I knew that it works and it is already in the code, in fact I'm indeed
using it on a couple of NUC6i5 systems. See here:
https://drive.google.com/file/d/0BwoPbcrMv8mvQkZCVW5BT0c4eEk/view?usp=sharing


I took the time to submit a doc bug for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1487155

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Clarification on Broadwell as cluster cpu type

2017-08-31 Thread Jakub Niedermertl
Hello Gianluca,

ultimate source of truth for the engine is line [1] and possibly subsequent
update clauses. It contains Broadwell-noTSX for 4.1 as well as planned 4.2.

Regards
Jakub

[1]:
https://github.com/oVirt/ovirt-engine/blob/5e1e11ec6a560c0bc8d2be849b8b5ba6c6427f34/packaging/dbscripts/upgrade/pre_upgrade/_config.sql#L422
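
A related way to check this without reading the source is to ask the running
engine itself; on the engine machine that list is exposed through the
'ServerCPUList' config value (a rough sketch, assuming engine-config is
available and that this is the key backed by line [1]):

    engine-config -g ServerCPUList --cver=4.1 | tr ';' '\n' | grep Broadwell

This should print the Broadwell and Broadwell-noTSX entries if the engine
offers them for 4.1 cluster levels.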

On Wed, Aug 30, 2017 at 6:23 PM, Gianluca Cecchi 
wrote:

> Hello,
> having to deploy some HP Gen 9 servers as hypervisors, with Intel
> E5-2680 v4 (aka Broadwell) CPUs, I remember that in the oVirt web admin UI
> I have been able to set it as the cpu type for a cluster (and also
> Broadwell-noTSX) for many months at least...
> And in fact I already have some clusters with that setting.
> But now incidentally I go here:
> https://www.ovirt.org/documentation/admin-guide/chap-Clusters/
>
> and see that Broadwell is not listed among the supported cpu types... is
> there any reason?
> Also if I go into official RHEV 4.1 docs here it is missing in the list:
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/planning_and_prerequisites_guide/requirements#cpu_requirements
>
> Should I submit a doc bug?
> Thanks,
> Gianluca
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Node update question

2017-08-31 Thread Matthias Leopold

hi,

I still don't completely understand the oVirt Node update process and
the involved rpm packages.


We have 4 nodes, all running oVirt Node 4.1.3. Three of them show
'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
as an available update (I don't want to run release candidates), one of
them shows 'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what
I like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1' lacks
the rpm package 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch'
and only has
'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'. Also,
the version of ovirt-node-ng-nodectl is '4.1.3-0.20170709.0.el7' instead
of '4.1.3-0.20170705.0.el7'. This node was the last one I installed and
it has never had a version update.


I only began using oVirt starting with 4.1, but have already completed
minor version upgrades of oVirt nodes. IIRC this 'mysterious'
ovirt-node-ng-image-update package comes into play when updating a node
for the first time after the initial installation. Usually I wouldn't
care about all of this, but now I have this RC update situation that I
don't want. How is this supposed to work? How can I resolve it?
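
(If it helps while sorting this out, the state of a node and the update rpms
involved can be inspected on each node with something like the following -- a
sketch that assumes plain yum on oVirt Node 4.1 and that the package names
match the ones above:

    nodectl info
    rpm -q ovirt-node-ng-nodectl ovirt-node-ng-image-update ovirt-node-ng-image-update-placeholder
    yum list available 'ovirt-node-ng-image-update*'

Comparing that output between the node that offers 4.1.5 and the ones that
offer the 4.1.6 RC should at least show where the difference comes from.)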


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Kasturi Narra
Hi,

   During Hosted Engine setup the question about the glusterfs volume is
asked because you set up the volumes yourself. If the cockpit+gdeploy
plugin had been used, it would have automatically detected the glusterfs
replica 3 volume created during Hosted Engine deployment and this question
would not have been asked.

   During new storage domain creation, when glusterfs is selected there is
a feature called 'Use managed gluster volumes'; upon checking this, all
managed glusterfs volumes will be listed and you can choose the volume of
your choice from the dropdown list.

There is a conf file called /etc/ovirt-hosted-engine/hosted-engine.conf
where there is a parameter called backup-volfile-servers="h1:h2"; if one of
the gluster nodes goes down, the engine uses this parameter to provide ha /
failover.
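
As a rough illustration only (the hostnames are placeholders, not taken from
this thread, and the storage= line must already point at your engine
volume), the relevant part of that file typically ends up looking like:

    # /etc/ovirt-hosted-engine/hosted-engine.conf
    storage=host1:/engine
    mnt_options=backup-volfile-servers=host2:host3

After changing it, the ovirt-ha-broker and ovirt-ha-agent services on each
host need a restart to pick up the new mount options.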

 Hope this helps !!

Thanks
kasturi



On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler 
wrote:

> Hello -
>
> I have successfully created a hyperconverged hosted engine setup
> consisting of 3 nodes - 2 for VMs and the third purely for storage. I
> manually configured it all, did not use oVirt Node or anything. Built the
> gluster volumes myself
>
> However, I noticed that when setting up the hosted engine and even when
> adding a new storage domain with glusterfs type, it still asks for
> hostname:/volumename
>
> This leads me to believe that if that one node goes down (ex:
> node1:/data), then the oVirt engine won't be able to communicate with that
> volume because it's trying to reach it on node 1 and will thus go down
>
> I know glusterfs fuse client can connect to all nodes to provide
> failover/ha but how does the engine handle this?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users