Re: [ovirt-users] Info from 4.2 beta2 to 4.2

2017-12-22 Thread Sandro Bonazzola
On 22 Dec 2017 14:22, "Gianluca Cecchi"  wrote:

Hello,
I have a host where I only installed ovirt-node-ng from the 4.2 beta2 ISO,
and I haven't run any hosted-engine deploy on it yet.
If I now want to do that with the final 4.2, what is the correct step
(other than starting from scratch with the final 4.2 ISO)?

yum update
(after installing the 4.2 repo file)
gives me what is shown below, and not a full image layer; is that correct?


Yuval, can you please assist here?






Thanks,
Gianluca

# yum update
...
Dependencies Resolved

================================================================================
 Package                    Arch    Version                 Repository                    Size
================================================================================
Updating:
 ansible                    noarch  2.4.2.0-0.el7           ovirt-4.2-centos-ovirt42      7.6 M
 collectd                   x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools     624 k
 collectd-disk              x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools      26 k
 collectd-netlink           x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools      27 k
 collectd-write_http        x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools      33 k
 gdeploy                    noarch  2.0.6-1.el7             ovirt-4.2-centos-gluster312   203 k
 glusterfs                  x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   558 k
 glusterfs-api              x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    97 k
 glusterfs-cli              x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   197 k
 glusterfs-client-xlators   x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   854 k
 glusterfs-events           x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    62 k
 glusterfs-fuse             x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   141 k
 glusterfs-geo-replication  x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   230 k
 glusterfs-libs             x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   399 k
 glusterfs-rdma             x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    64 k
 glusterfs-server           x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   1.2 M
 openvswitch                x86_64  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      4.6 M
 openvswitch-ovn-common     x86_64  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      1.4 M
 openvswitch-ovn-host       x86_64  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      802 k
 python2-gluster            x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    39 k
 python2-openvswitch        noarch  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      167 k
 qemu-img-ev                x86_64  10:2.9.0-16.el7_4.11.1  ovirt-4.2-centos-qemu-ev      2.2 M

Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-22 Thread Gary Pedretty
Unable to upgrade a self-hosted engine installation to 4.2.0 due to the 
following error


ovirt-engine-setup-plugin-ovirt-engine conflicts with 
ovirt-engine-4.0.6.3-1.el7.centos.noarch

When trying to do the first step of updating the hosted engine vm


Gary



Gary Pedretty                    g...@ravnalaska.net
Systems Manager                  www.flyravn.com
Ravn Alaska                      W 907-450-7251
5245 Airport Industrial Road     C 907-388-2247
Fairbanks, Alaska 99709
Serving All of Alaska
Second greatest commandment: “Love your neighbor as yourself” Matt 22:39
White as far as the eye can see. Must be winter.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Minor issue upgrading to 4.2

2017-12-22 Thread Chris Adams
I upgraded a CentOS 7 oVirt 4.1.7 (initially installed as 3.5 if it
matters) test oVirt cluster to 4.2.0, and ran into one minor issue.  The
update installed firewalld on the host, which was set to start on boot.
This replaced the iptables rules with a blank firewalld setup that only
allowed SSH, which kept the host from working.

Stopping and disabling firewalld, then reloading iptables, got the host
back working.
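In concrete terms, the recovery amounts to something like the following (a
sketch only, assuming the stock CentOS 7 iptables-services setup that oVirt
4.1 had been using):

   # stop firewalld and keep it from coming back on the next boot
   systemctl stop firewalld
   systemctl disable firewalld

   # bring back the iptables-based rules the engine had configured
   systemctl enable iptables
   systemctl restart iptables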

In a quick search, I didn't see anything noting that firewalld was now
required, and it didn't seem to be configured correctly if oVirt was
trying to use it.

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] non-operational host issues following 4.2 upgrade

2017-12-22 Thread Jason Brooks
I was able to get my hosts active. During the upgrade, my master data
domain's metadata was corrupted -- I had duplicates of some of the
dom_md files, and my metadata file was corrupt. Vdsm was looking at
that metadata file and throwing up its hands. I added a new data
domain but it couldn't take over as master because my old data domain
was messed up. I ended up creating a new metadata file in that domain,
and my hosts came up. It might be nice to have some way of resetting
corrupt metadata or at least of making the error clearer.
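For anyone unfamiliar with the files involved: that metadata lives inside the
storage domain itself. On a file-based (NFS/gluster) domain the layout is
roughly the following (a sketch; the mount point and UUID are placeholders):

   # vdsm mounts file-based domains under /rhev/data-center/mnt/
   ls /rhev/data-center/mnt/<server>:<_export_path>/<sd-uuid>/dom_md/
   # typically: ids  inbox  leases  metadata  outbox  (plus xleases on newer domain versions)

   # 'metadata' is a plain-text key=value description of the domain (role, pool UUID, version, ...)
   cat /rhev/data-center/mnt/<server>:<_export_path>/<sd-uuid>/dom_md/metadata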

I did have a gluster hiccup during the upgrade -- the upgrade brought
my gluster version from 3.8 to 3.12, and the other peers in the
cluster refused connections from my first upgraded host. I upgraded
all the others, and got them all talking to each other again, but it
may have been during that time that my master data domain metadata
became corrupted. I haven't noticed any issues w/ my vms yet, and all
through the migration travail, I was able to keep 5 important VMs
running. They kept chugging away, even though their host and
surrounding hosts were unhealthy.

Anyway, I'm back ;)

Jason

On Thu, Dec 21, 2017 at 9:42 AM, Jason Brooks  wrote:
> On Wed, Dec 20, 2017 at 11:47 PM, Sandro Bonazzola  
> wrote:
>>
>>
>> 2017-12-21 4:26 GMT+01:00 Jason Brooks :
>>>
>>> Hi all, I upgraded my 4 host converged gluster/ovirt lab setup to 4.2
>>> yesterday, and now 3 of my hosts won't connect to my main data domain,
>>> so they're non-operational when I try to activate them.
>>>
>>> Here's what seems like a relevant passage of vdsm.log:
>>> https://paste.fedoraproject.org/paste/JZuxul6-HZjjl8uHzgqL-w
>>
>>
>>
>> Adding some relevant developers.
>> Jason, do you mind opening a bug on
>> https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm to track this?
>
> I filed an issue here: https://bugzilla.redhat.com/show_bug.cgi?id=1528391
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] two questions about 4.2 feature

2017-12-22 Thread Yaniv Kaul
On Fri, Dec 22, 2017 at 1:42 PM, Nathanaël Blanchet 
wrote:

> Hi all,
>
> On 4.2, it seems that it is not possible anymore to move a disk to an
> other storage domain through the vm disk tab (still possible from the
> storage disk tab).
>

We can look at this gap.


>
> Secondly, while the new design is great, is there a possibility to keep
> the old one for any needs?
>

No, it's gone. Let us know what functionality you are missing from it. I
believe (after several months of using the new one) that the new one is
faster and more intuitive.
(well, it's considerably faster performance-wise too, but I'm talking about
faster to navigate to perform specific actions).
Y.



>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] self-hosted-engine

2017-12-22 Thread Yaniv Kaul
On Fri, Dec 22, 2017 at 8:02 PM, Blaster  wrote:

> Just installing 4.2 coming from a 3.6.3 configuration.
>
> When I go to the admin page, it appears all of the management pieces are
> there to manage the hypervisor, similar to the old All In One configuration.
>
> What purpose does the Self Hosted Engine piece do from there?  Do I only
> need that now if I want to manage a cluster?
>

Self-hosted engine allows you to manage several hosts (vs. all-in-one which
was for a single host) without a dedicated host for the Engine.
The Engine is a VM running on specific hosts - which can be 'deployed' or
'undeployed' as candidates to run it easily from within the UI.
Y.


>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2 and host console

2017-12-22 Thread Sandro Bonazzola
On 22 Dec 2017 10:12 PM, "Yaniv Kaul"  wrote:



On Dec 22, 2017 7:33 PM, "Gianluca Cecchi" 
wrote:

Hello, after upgrading the engine and then a plain CentOS 7.4 host from 4.1 to
4.2, I see in the host section that if I select the line for the host, right
click and choose "Host Console", it tries to go to the typical cockpit port
9090 of node-ng...
Is this an error, or in 4.2 is host console access available for plain OS
hosts too?
In that case, is there any service I have to enable on the host?
It seems indeed my host is not currently listening on port 9090.


You need Cockpit plus the firewall settings that allow you to get to it.


The Cockpit service should be up and running after the upgrade; oVirt host
deploy takes care of it. The firewall is configured by the engine unless you
disabled firewall configuration in the host configuration dialog.
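If it is not, a quick manual check/enable on the host looks roughly like this
(a sketch for a plain CentOS 7 host using firewalld; adjust if you manage
iptables yourself):

   # Cockpit is socket-activated; make sure the socket is enabled and listening
   systemctl enable --now cockpit.socket
   ss -tlnp | grep 9090

   # open the Cockpit port (TCP 9090) in firewalld
   firewall-cmd --permanent --add-service=cockpit   # or --add-port=9090/tcp if no cockpit service definition exists
   firewall-cmd --reload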

Didi, can you help here? Gianluca, can you share host upgrade logs?



Y.

Thanks,
Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2 and host console

2017-12-22 Thread Yaniv Kaul
On Dec 22, 2017 7:33 PM, "Gianluca Cecchi" 
wrote:

Hello, after upgrading the engine and then a plain CentOS 7.4 host from 4.1 to
4.2, I see in the host section that if I select the line for the host, right
click and choose "Host Console", it tries to go to the typical cockpit port
9090 of node-ng...
Is this an error, or in 4.2 is host console access available for plain OS
hosts too?
In that case, is there any service I have to enable on the host?
It seems indeed my host is not currently listening on port 9090.


You need Cockpit plus the firewall settings that allow you to get to it.
Y.

Thanks,
Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 host not compatible

2017-12-22 Thread Paul Dyer
It seems that I am using net_persistence = ifcfg, and that I have lost the
definitions for the logical networks.
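For reference, the persistence mode and the resulting files can be checked
like this (a sketch, assuming default vdsm configuration paths):

   # which network persistence model is vdsm using?
   grep -r net_persistence /etc/vdsm/vdsm.conf /etc/vdsm/vdsm.conf.d/ 2>/dev/null

   # with ifcfg persistence, each logical network ends up as an initscripts file, e.g.
   ls /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt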

I have recovered these, and was able to set up the logical networks again.

It is all working now.

Paul

On Fri, Dec 22, 2017 at 1:46 PM, Paul Dyer  wrote:

> My setup is RHEL 7.4, with the host separate from the engine.
>
> The ovirt-release42 rpm was added to the engine host, but not to the
> virtualization host.   The vhost was still running v4.1 rpms.   I installed
> ovirt-release42 on the vhost, then updated the rest of the rpms with "yum
> update".I am still getting an error on activation of the vhost...
>
>  Host parasol does not comply with the cluster Intel networks, the
> following networks are missing on host: 'data30,data40,ovirtmgmt'
>
> It seems like the networks bridges are not there anymore??
>
> Paul
>
>
>
> On Fri, Dec 22, 2017 at 12:46 PM, Paul Dyer  wrote:
>
>> Hi,
>>
>> I have upgraded to ovirt 4.2 without issue.   But I cannot find a way to
>> upgrade the host compatibility in the new OVirt Manager.
>>
>> I get this error when activating the host...
>>
>> host parasol is compatible with versions (3.6,4.0,4.1) and cannot join
>> Cluster Intel which is set to version 4.2.
>>
>> Thanks,
>> Paul
>>
>>
>> --
>> Paul Dyer,
>> Mercury Consulting Group, RHCE
>> 504-302-8750 <(504)%20302-8750>
>>
>
>
>
> --
> Paul Dyer,
> Mercury Consulting Group, RHCE
> 504-302-8750 <(504)%20302-8750>
>



-- 
Paul Dyer,
Mercury Consulting Group, RHCE
504-302-8750
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] self-hosted-engine

2017-12-22 Thread Blaster
Just installing 4.2 coming from a 3.6.3 configuration.

When I go to the admin page, it appears all of the management pieces are there 
to manage the hypervisor, similar to the old All In One configuration.

What purpose does the Self Hosted Engine piece serve there?  Do I only need 
that now if I want to manage a cluster?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt-Log-Collector - Engine Crashes During

2017-12-22 Thread Langley, Robert
Thank you for chiming in. I appreciate the help.

The Engine's disk has 51GB available for /tmp of the 80GB disk.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade to 4.2 Postgresql Error

2017-12-22 Thread Gabriel Stein
Well, that's the problem! I interrupted the yum process and that broke my
yum setup. I tried to fix it and ended up with a beautiful kernel panic. But
that's my fault, not oVirt's.

I will install on another host, but first I will need to recover the host
with the kernel panic. I will open another thread for that problem.

All the best!

Gabriel

Gabriel Stein
--
Gabriel Ferraz Stein
Tel.: +49 (0)  170 2881531

2017-12-22 11:35 GMT+01:00 Yaniv Kaul :

>
>
> On Dec 22, 2017 11:56 AM, "Gabriel Stein"  wrote:
>
> Well, it worked: I could upgrade the hosted-engine by changing
> /etc/locale.conf to en_US.UTF-8. I also deactivated puppet, in case someone
> else is using puppet for provisioning.
>
> Now I'm having problems updating the host, but it's just that annoying
> bug from vdsm (you need to restart it)
>
>
> Can you share more information?
> And the log, if possible.
> Y.
>
>
> Thanks a lot!
>
> Best Regards,
>
> Gabriel
> PS: Looking forward for the resolution in Bugzilla.
>
> Gabriel Stein
> --
> Gabriel Ferraz Stein
> Tel.: +49 (0)  170 2881531
>
> 2017-12-21 17:15 GMT+01:00 Simone Tiraboschi :
>
>>
>>
>> On Thu, Dec 21, 2017 at 4:30 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Dec 21, 2017 at 8:41 AM, Sandro Bonazzola 
>>> wrote:
>>>


 2017-12-20 16:07 GMT+01:00 Gabriel Stein :

> Hi!
>
> well, I'm a update fever and I decided to update my ovirt to 4.2.0.
>

 Thanks for this valuable feedback! Simone has already replied and will
 check your setup logs.




>
> How I'm doingi it?
>
> I'm following the rules!
>
> 1 - Global Maintenance Mode
> 2 - Yum Install ovirt-release-4.2
> 3- yum update ovirt-setup*.
>
> But, by the engine-setup I have a conflict with the system collation
> and postgresql. Unfortunatelly I changed via puppet to the german
> collation(de_DE-UTF8) of ovirt-engine hosted vm(because it's a standard 
> for
> us) but this was after the engine-setup and the DB Configuration from
> Postgresql.
>

 Adding also Didi


>
> I think that I can easily change the system collation to us-US-UFT8
> but I'm afraid that I can "destroy" my hosted-engine VM with that change,
> is hosted-engine so sensible?
>
> How I know that error?  The logs are saying that(and the error in on
> postgresql upgrade part of setup)!
>
> Performing Consistency Checks
> -----------------------------
> Checking cluster versions                                   ok
> Checking database user is the install user                  ok
> Checking database connection settings                       ok
> Checking for prepared transactions                          ok
> Checking for reg* system OID user data types                ok
> Checking for contrib/isn with bigint-passing mismatch       ok
> Checking for invalid "line" user columns                    ok
> Creating dump of global objects                             ok
> Creating dump of database schemas
>   engine
>   ovirt_engine_history
>   postgres
>   template1
>                                                             ok
>
> lc_collate values for database "postgres" do not match:
>   old "en_US.UTF-8", new "de_DE.UTF-8"
> Failure, exiting
>
> I would be thankful if someone could give me some hint about that!
>

>>> OK, reproduced.
>>> The issue happens if and only if you changed system wide locales after
>>> having installed ovirt-engine but before upgrading it to 4.2.
>>> I'm going to open a bug to track it.
>>>
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1528371
>>
>>
>>>
>>> In engine-setup we are explicitly setting en_US.UTF-8 as the locale of
>>> the engine DB but we are not touching at all the locale of postgres own DB
>>> which match the system wide locale and your issue is indeed on the postgres
>>> DB, not on the engine one.
>>> pg_upgrade cannot change it on the fly on upgrades.
>>>
>>> I tried to find a working fix with env variables but unfortunately
>>> nothing I tried worked.
>>> The only workaround I was able to find is to temporarily set the locale
>>> you had at the first successful engine-setup execution time (en_US.UTF-8
>>> in your case, but it varies) under /etc/locale.conf and only then execute
>>> engine-setup to upgrade it to 4.2.
>>> After that you can safely switch back /etc/locale.conf to whatever you
>>> need.
>>> None of the env variables I tried seem relevant for this specific issue.
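In shell terms, the workaround above is roughly the following (a sketch; use
whatever locale your first engine-setup ran with):

   # temporarily restore the locale used at first engine-setup time
   cp /etc/locale.conf /etc/locale.conf.bak
   echo 'LANG="en_US.UTF-8"' > /etc/locale.conf
   # (re-login or export LANG accordingly so the change is picked up)

   engine-setup    # performs the 4.2 upgrade, including pg_upgrade

   # afterwards, switch back to whatever locale you actually want
   mv /etc/locale.conf.bak /etc/locale.conf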
>>>
>>>
>>>
>>>

> Best Regards,
>
> Gabriel
> PS: If I go to devconf in Brno I will pay a lot of beers to the
> developer of the engine-setup rollback! 

[ovirt-users] Ovirt 4.2 and host console

2017-12-22 Thread Gianluca Cecchi
Hello, after upgrading the engine and then a plain CentOS 7.4 host from 4.1 to
4.2, I see in the host section that if I select the line for the host, right
click and choose "Host Console", it tries to go to the typical cockpit port
9090 of node-ng...
Is this an error, or in 4.2 is host console access available for plain OS
hosts too?
In that case, is there any service I have to enable on the host?
It seems indeed my host is not currently listening on port 9090.
Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] two questions about 4.2 feature

2017-12-22 Thread Benny Zlotnik
Regarding the first question: there is a bug open for this issue [1]

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1513987

On Fri, Dec 22, 2017 at 1:42 PM, Nathanaël Blanchet 
wrote:

> Hi all,
>
> On 4.2, it seems that it is not possible anymore to move a disk to an
> other storage domain through the vm disk tab (still possible from the
> storage disk tab).
>
> Secondly, while the new design is great, is there a possibility to keep
> the old one for any needs?
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Remove all traces of a host from oVirt Engine

2017-12-22 Thread Simone Tiraboschi
On Fri, Dec 22, 2017 at 3:53 PM, Vinícius Ferrão  wrote:

> Hello,
>
> I’m installing a brand new oVirt 4.2 infrastructure and for whatever
> reasons I don’t know one host got in an unstable state (due to
> misconfigurations of iSCSI networks) and it fails to be up.
>
> After giving up on the host I just reinstalled it and tried to add it
> again on the engine. To make me surprise, the problem persisted. So there’s
> an issue on the engine.
>
> The question is: how can I completely remove all traces os a given host on
> the ovirt engine? The server was removed gracefully from the web interface,
> but it appears that something nasty was keep in the engine.
>

Setting it to maintenance mode and then removing it should be enough, but
maybe something is left in the storage domains area.
Can you please share your engine.log?



>
> Thanks,
> V.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-22 Thread Blaster


> On Dec 22, 2017, at 12:32 AM, Sandro Bonazzola  wrote:
> 
> 
> 
> 2017-12-21 18:20 GMT+01:00 Blaster  >:
> What a great Christmas present!
> 
> I'm still running 3.6.3 All-In-One configuration on Fedora 22.  So it looks 
> like I'll be starting from scratch.
> 
> Is there any decent way yet to take my on disk VM images and easily attach 
> them?  I put in an RFE for that which looks like it didn't make it into 4.2, 
> now slated for 4.2.1.
> 
> The old way I used to do this was to create new VMs with new disks, then just 
> copy over my old VMs to the disk files of the new VMs.  I'm assuming I'll 
> still have to use that method.
> 
> Are you running all-in-one with only this host or do you have more hosts 
> attached to this engine?
> Is your data storage domain local? Or is it shared / provided by a different 
> host?
> 

I am running a single host configuration with local SATA disk.  When I need to 
change hosts, I generally just move the disks (or copy data via NFS) to the new 
host, create new VMs, then move the old VM disks into the new VM directory with 
the new file names and boot.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Remove all traces of a host from oVirt Engine

2017-12-22 Thread Vinícius Ferrão
Hello,

I’m installing a brand new oVirt 4.2 infrastructure and, for reasons I don’t
know, one host got into an unstable state (due to misconfigurations of the
iSCSI networks) and it fails to come up.

After giving up on the host I just reinstalled it and tried to add it again
to the engine. To my surprise, the problem persisted. So there’s an issue on
the engine side.

The question is: how can I completely remove all traces of a given host from
the oVirt engine? The server was removed gracefully from the web interface,
but it appears that something nasty was kept in the engine.

Thanks,
V.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Info from 4.2 beta2 to 4.2

2017-12-22 Thread Gianluca Cecchi
Hello,
I have an host where I only installed ovirt-node-ng from iso of 4.2 beta2
and then I didn't run any deploy of hosted engine yet.
If I want to start doing it but with the final 4.2 what is the correct step
to do (other than starting from scratch with final 4.2 iso)?

yum update
(after installing 4.2 repo file)
gives me what below and not a full image layer, is it correct?

Thanks,
Gianluca

# yum update
...
Dependencies Resolved

================================================================================
 Package                    Arch    Version                 Repository                    Size
================================================================================
Updating:
 ansible                    noarch  2.4.2.0-0.el7           ovirt-4.2-centos-ovirt42      7.6 M
 collectd                   x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools     624 k
 collectd-disk              x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools      26 k
 collectd-netlink           x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools      27 k
 collectd-write_http        x86_64  5.8.0-2.el7             ovirt-4.2-centos-opstools      33 k
 gdeploy                    noarch  2.0.6-1.el7             ovirt-4.2-centos-gluster312   203 k
 glusterfs                  x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   558 k
 glusterfs-api              x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    97 k
 glusterfs-cli              x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   197 k
 glusterfs-client-xlators   x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   854 k
 glusterfs-events           x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    62 k
 glusterfs-fuse             x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   141 k
 glusterfs-geo-replication  x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   230 k
 glusterfs-libs             x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   399 k
 glusterfs-rdma             x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    64 k
 glusterfs-server           x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312   1.2 M
 openvswitch                x86_64  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      4.6 M
 openvswitch-ovn-common     x86_64  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      1.4 M
 openvswitch-ovn-host       x86_64  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      802 k
 python2-gluster            x86_64  3.12.3-1.el7            ovirt-4.2-centos-gluster312    39 k
 python2-openvswitch        noarch  1:2.7.3-1.1fc27.el7     ovirt-4.2-centos-ovirt42      167 k
 qemu-img-ev                x86_64  10:2.9.0-16.el7_4.11.1  ovirt-4.2-centos-qemu-ev      2.2 M
 qemu-kvm-common-ev         x86_64  10:2.9.0-16.el7_4.11.1  ovirt-4.2-centos-qemu-ev      914 k
 qemu-kvm-ev                x86_64  10:2.9.0-16.el7_4.11.1  ovirt-4.2-centos-qemu-ev      2.8 M

Transaction Summary

Re: [ovirt-users] Regarding Ovirt Installation

2017-12-22 Thread Matteo Capuano


From: Simone Tiraboschi
Sent: Thursday, 21 December, 21:24
Subject: Re: [ovirt-users] Regarding Ovirt Installation
To: Matteo Capuano
Cc: Martin Sivak, ruth john, users




On Thu, Dec 21, 2017 at 8:19 PM, Matteo Capuano 
> wrote:
@Ruth John:

- if not can anyone please help me to install atleast on one to make me 
understand where am i doing the mistake.

I'm using oVirt on a OVH's dedicated server. The bare metal works as host and 
the engine is installed on a OVH's public cloud VM. I've also nested three 
other oVirts inside the host.
Maybe I could help you, what's your issue? I'd put my money on the OVH network
configuration :)

In 4.2 we have a better OVN support which should definitively help there.
Why not write and share a small blog post with your oVirt on OVH hints?

I'd be glad to do it!
Maybe I'll be able to use some spare time during the holidays.



@Martin Sivak

- one of the new features of oVirt 4.2 is support for Replica 1 all in one 
setup using hosted engine and gluster in hyper-converged mode.

Interesting. Maybe I don't understand it because I'm a newbie, but... what's
the meaning of a replica 1 solution? I mean, if I have only one host, where's
the replica?

Of course there is no replica with replica 1, but you could start like that,
then add two more hosts in the future and gain replica 3 with just a little
reconfiguration, probably also without any downtime.
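The gluster-side change is roughly the following (a sketch only; in a
hyperconverged oVirt setup you would normally drive this from the engine
UI/gdeploy rather than by hand, and the host names and brick paths here are
placeholders):

   # grow an existing single-brick volume into a 3-way replica
   gluster peer probe host2
   gluster peer probe host3
   gluster volume add-brick engine replica 3 \
       host2:/gluster_bricks/engine/brick \
       host3:/gluster_bricks/engine/brick
   gluster volume heal engine full    # let the new bricks sync up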



Cheer

Matteo

On Thu, Dec 21, 2017 at 2:21 PM, Martin Sivak 
> wrote:
Hi,

one of the new features of oVirt 4.2 is support for Replica 1 all in
one setup using hosted engine and gluster in hyper-converged mode.

So it should be again possible to use just a single host for
everything, I am not sure we have a documentation ready for that
though.

Best regards

Martin Sivak

On Thu, Dec 21, 2017 at 2:15 PM, Simone Tiraboschi 
> wrote:
>
>
> On Thu, Dec 21, 2017 at 1:59 PM, ruth john 
> >
> wrote:
>>
>> Sir, buying nfs storage would cost me a lot. Can't I use it directly on
>> the provided storage?
>
>
> We don't have anymore an all-in-one installation where the engine and vdsm
> runs altogether in the same machine; the proposed replacement is
> hosted-engine which doesn't work with local storage since it's supposed to
> be able to restart the engine VM somewhere else for HA reasons and the local
> storage is against that by definition.
> If you have three machines I'd suggest an hyperconverged gluster deployment
> with replica 3.
>
> If you want to try it on a single machine keep present that NFS in loop-back
> is discouraged so maybe you could try iSCSI or, maybe with a small hack,
> gluster in replica 1 in loopback
>
>
>>
>>
>>
>> On Dec 21, 2017 1:25 PM, "Simone Tiraboschi" 
>> > wrote:
>>
>>
>>
>> On Wed, Dec 20, 2017 at 10:45 PM, ruth john 
>> >
>> wrote:
>>>
>>> I am delighted with the interface and other features of the Ovirt but was
>>> never able to install it properly, is that true OVirt doesn't support
>>> Hetzner Dedicated and OVH dedicated?
>>
>>
>> I personally know about a friend who is running it on an a couple of
>> dedicated OVH machines with NFS storage provided by OVH.
>> No idea about Hetzner.
>>
>>
>>>
>>> if not can anyone please help me to install atleast on one to make me
>>> understand where am i doing the mistake.
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-22 Thread Nathanaël Blanchet

It depends on your storage configuration...

 * If you're using local storage, you can attach an additional shared
   storage domain next to the local one, then move your VMs to it and
   then detach this domain from your old oVirt.

   After creating from scratch a new domain in your new 4.2 oVirt, you
   will attach the previous shared domain and finally import the VMs
   from it. This operation is nearly similar to the export domain
   feature, except it only imports the VM definitions, so you don't
   have to copy the disks to the new domain.

 * If you're already using shared storage, detach it from the old
   oVirt and, after attaching it to the new one, you just have to
   reimport the VM definitions.


Le 21/12/2017 à 18:20, Blaster a écrit :

What a great Christmas present!

I'm still running 3.6.3 All-In-One configuration on Fedora 22. So it 
looks like I'll be starting from scratch.


Is there any decent way yet to take my on disk VM images and easily 
attach them?  I put in an RFE for that which looks like it didn't make 
it into 4.2, now slated for 4.2.1.


The old way I used to do this was to create new VMs with new disks, 
then just copy over my old VMs to the disk files of the new VMs.  I'm 
assuming I'll still have to use that method.



On 12/20/2017 3:40 AM, Sandro Bonazzola wrote:


The oVirt project is excited to announce the general availability of 
oVirt 4.2.0, as of December 20, 2017.






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-22 Thread Sandro Bonazzola
2017-12-22 12:57 GMT+01:00 Sandro Bonazzola :

>
>
> 2017-12-22 12:46 GMT+01:00 Jiffin Tony Thottan :
>
>> On Friday 22 December 2017 03:08 PM, Sahina Bose wrote:
>>
>>
>>
>> On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> 2017-12-21 17:01 GMT+01:00 Stefano Danzi :
>>>


 Il 21/12/2017 16:37, Sandro Bonazzola ha scritto:



 2017-12-21 14:26 GMT+01:00 Stefano Danzi :

> Solved by installing the glusterfs-gnfs package.
> Anyway, it could be nice to move the hosted engine to gluster
>
>
 Adding some gluster folks. Are we missing a dependency somewhere?
 During the upgrade nfs on gluster stopped to work here and adding the
 missing dep solved.
 Stefano please confirm, you were on gluster 3.8 (oVirt 4.1) and now you
 are on gluster 3.12 (ovirt 4.2)

 Sandro I confirm the version.
 Host are running CentOS 7.4.1708
 before the upgrade there was gluster 3.8 in oVirt 4.1
 now I have gluster 3.12 in oVirt 4.2


>>> Thanks Stefano, I alerted glusterfs team, they'll have a look.
>>>
>>
>>
>> [Adding Jiffin to take a look and confirm]
>>
>> I think this is to do with the separation of nfs components in gluster
>> 3.12 (see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The
>> recommended nfs solution with gluster is nfs-ganesha, and hence the gluster
>> nfs is no longer installed by default.
>>
>> Hi,
>>
>> For gluster nfs you need to install the gluster-gnfs package. As Sahina
>> said, it is a change from 3.12 onwards, I guess
>>
>
> Sahina, can you please ensure that if glusterfs with nfs support is
> selected in ovirt-engine, gluster-gnfs is installed when deploying the host?
>
>
Tracking on https://bugzilla.redhat.com/show_bug.cgi?id=1528615


>
>
>>
>> Regards,
>> Jiffin
>>
>>
>>
>>
>>
>>>


>
> Il 21/12/2017 11:37, Stefano Danzi ha scritto:
>
>
>
> Il 21/12/2017 11:30, Simone Tiraboschi ha scritto:
>
>
>
> On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi 
> wrote:
>
>> Hello!
>> I have a test system with one phisical host and hosted engine running
>> on it.
>> Storage is gluster but hosted engine mount it as nfs.
>>
>> After the upgrade gluster no longer activate nfs.
>> The command "gluster volume set engine nfs.disable off" doesn't help.
>>
>> How I can re-enable nfs? O better how I can migrate self hosted
>> engine to native glusterfs?
>>
>
>
> Ciao Stefano,
> could you please attach the output of
>   gluster volume info engine
>
> adding Kasturi here
>
>
> [root@ovirt01 ~]# gluster volume info engine
>
> Volume Name: engine
> Type: Distribute
> Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
> Options Reconfigured:
> server.event-threads: 4
> client.event-threads: 4
> network.ping-timeout: 30
> server.allow-insecure: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> nfs.disable: off
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> features.shard-block-size: 512MB
>
>
>
>
> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
>
> ___
> Users mailing 
> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


 --

 SANDRO BONAZZOLA

 ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

 Red Hat EMEA 
 
 TRIED. TESTED. TRUSTED. 



>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA 
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>>
>>>
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, 

Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-22 Thread Sandro Bonazzola
2017-12-22 12:46 GMT+01:00 Jiffin Tony Thottan :

> On Friday 22 December 2017 03:08 PM, Sahina Bose wrote:
>
>
>
> On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2017-12-21 17:01 GMT+01:00 Stefano Danzi :
>>
>>>
>>>
>>> Il 21/12/2017 16:37, Sandro Bonazzola ha scritto:
>>>
>>>
>>>
>>> 2017-12-21 14:26 GMT+01:00 Stefano Danzi :
>>>
 Solved by installing the glusterfs-gnfs package.
 Anyway, it could be nice to move the hosted engine to gluster


>>> Adding some gluster folks. Are we missing a dependency somewhere?
>>> During the upgrade nfs on gluster stopped to work here and adding the
>>> missing dep solved.
>>> Stefano please confirm, you were on gluster 3.8 (oVirt 4.1) and now you
>>> are on gluster 3.12 (ovirt 4.2)
>>>
>>> Sandro I confirm the version.
>>> Host are running CentOS 7.4.1708
>>> before the upgrade there was gluster 3.8 in oVirt 4.1
>>> now I have gluster 3.12 in oVirt 4.2
>>>
>>>
>> Thanks Stefano, I alerted glusterfs team, they'll have a look.
>>
>
>
> [Adding Jiffin to take a look and confirm]
>
> I think this is to do with the separation of nfs components in gluster
> 3.12 (see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The
> recommended nfs solution with gluster is nfs-ganesha, and hence the gluster
> nfs is no longer installed by default.
>
> Hi,
>
> For gluster nfs you need to install the gluster-gnfs package. As Sahina
> said, it is a change from 3.12 onwards, I guess
>

Sahina, can you please ensure that if glusterfs with nfs support is
selected in ovirt-engine, gluster-gnfs is installed when deploying the host?



>
> Regards,
> Jiffin
>
>
>
>
>
>>
>>>
>>>

 Il 21/12/2017 11:37, Stefano Danzi ha scritto:



 Il 21/12/2017 11:30, Simone Tiraboschi ha scritto:



 On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi 
 wrote:

> Hello!
> I have a test system with one phisical host and hosted engine running
> on it.
> Storage is gluster but hosted engine mount it as nfs.
>
> After the upgrade gluster no longer activate nfs.
> The command "gluster volume set engine nfs.disable off" doesn't help.
>
> How I can re-enable nfs? O better how I can migrate self hosted engine
> to native glusterfs?
>


 Ciao Stefano,
 could you please attach the output of
   gluster volume info engine

 adding Kasturi here


 [root@ovirt01 ~]# gluster volume info engine

 Volume Name: engine
 Type: Distribute
 Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 1
 Transport-type: tcp
 Bricks:
 Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
 Options Reconfigured:
 server.event-threads: 4
 client.event-threads: 4
 network.ping-timeout: 30
 server.allow-insecure: on
 storage.owner-gid: 36
 storage.owner-uid: 36
 cluster.server-quorum-type: server
 cluster.quorum-type: auto
 network.remote-dio: enable
 cluster.eager-lock: enable
 performance.stat-prefetch: off
 performance.io-cache: off
 performance.read-ahead: off
 performance.quick-read: off
 nfs.disable: off
 performance.low-prio-threads: 32
 cluster.data-self-heal-algorithm: full
 cluster.locking-scheme: granular
 cluster.shd-max-threads: 8
 cluster.shd-wait-qlength: 1
 features.shard: on
 user.cifs: off
 features.shard-block-size: 512MB




 ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>




 ___
 Users mailing 
 listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA 
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>>
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] two questions about 4.2 feature

2017-12-22 Thread Nathanaël Blanchet

Hi all,

On 4.2, it seems that it is not possible anymore to move a disk to an 
other storage domain through the vm disk tab (still possible from the 
storage disk tab).


Secondly, while the new design is great, is there a possibility to keep 
the old one for any needs?


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] share also your successful upgrades experience

2017-12-22 Thread Simone Tiraboschi
On Fri, Dec 22, 2017 at 12:06 PM, Martin Perina  wrote:

>
>
> On Fri, Dec 22, 2017 at 11:59 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2017-12-22 11:44 GMT+01:00 Gianluca Cecchi :
>>
>>> On Thu, Dec 21, 2017 at 2:35 PM, Sandro Bonazzola 
>>> wrote:
>>>
 Hi,
 now that oVirt 4.2.0 has been released, we're starting to see some
 reports about issues that for now are related to not so common deployments.
 We'd also like to get some feedback from those who upgraded to this
 amazing release without any issue and add these positive feedback under our
 developers (digital) Christmas tree as a gift for the effort put in this
 release.
 Looking forward to your positive reports!

 Not having positive feedback? Let us know too!
 We are putting an effort in the next weeks to promptly assist whoever
 hit troubles during or after the upgrade. Let us know in this
 users@ovirt.org mailing list (preferred) or on IRC using  irc.oftc.net
 server and #ovirt channel.

 We are also closely monitoring bugzilla.redhat.com for new bugs on
 oVirt project, so you can report issues there as well.

 Thanks,
 --

 SANDRO BONAZZOLA

>>>
>>>
>>> Hi Sandro,
>>> nice to see final 4.2!
>>>
>>> I successfully update a test/lab nested HCI cluster from oVirt 4.1.7 +
>>> Gluster 3.10 to oVirt 4.2 + Gluster 3.12 (automatically picked by the
>>> upgrade)
>>> 3 hosts with CentOS 7.4
>>>
>>
>> Thanks for the report Gianluca!
>>
>>
>>
>>>
>>> Basically following here:
>>> https://ovirt.org/documentation/how-to/hosted-engine/#upgrad
>>> e-hosted-engine
>>>
>>> steps 5,6 substituted by reboot of the upgraded host.
>>>
>>> My running C6 VM had no downtime during upgrade of the three hosts
>>>
>>> Only problem I registered was the first start of engine on the first
>>> upgraded host (that should be step 7 in link above), where I got error on
>>> engine (not able to access web admin portal); I see this in server.log
>>> (see full attach here https://drive.google.com/file/
>>> d/1UQAllZfjueVGkXDsBs09S7THGDFn9YPa/view?usp=sharing )
>>>
>>> 2017-12-22 00:40:17,674+01 INFO  [org.quartz.core.QuartzScheduler]
>>> (ServerService Thread Pool -- 63) Scheduler 
>>> DefaultQuartzScheduler_$_NON_CLUSTERED
>>> started.
>>> 2017-12-22 00:40:17,682+01 INFO  [org.jboss.as.clustering.infinispan]
>>> (ServerService Thread Pool -- 63) WFLYCLINF0002: Started timeout-base cache
>>> from ovirt-engine container
>>> 2017-12-22 00:44:28,611+01 ERROR 
>>> [org.jboss.as.controller.management-operation]
>>> (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting
>>> for service container stability. Operation will roll back. Step that first
>>> updated the service container was 'add' at address '[
>>> ("core-service" => "management"),
>>> ("management-interface" => "native-interface")
>>> ]'
>>>
>>
>> Adding Vaclav, maybe something in Wildfly? Martin, any hint on engine
>> side?
>>
>
> Yeah, I've already seen such an error a few times; it usually happens when
> access to storage is really slow or the host itself is overloaded and
> WildFly is not able to start up properly within the default 300-second
> interval.
>
> If this is going to happen often, we will have to raise that timeout for
> all installations.
>
>
This is under investigation here:
https://bugzilla.redhat.com/show_bug.cgi?id=1528292

We had more than one evidence in fresh installs with the new ansible flow
but, up to now, no evidence on upgrades.



>
>>
>>
>>
>>> 2017-12-22 00:44:28,722+01 INFO  [org.wildfly.extension.undertow]
>>> (ServerService Thread Pool -- 65) WFLYUT0022: Unregistered web context:
>>> '/ovirt-engine/apidoc' from server 'default-server'
>>>
>>> Then I restarted the hosted engine vm on the same first upgraded host
>>> and it was able now to correctly start and web admin portal ok. The
>>> corresponding lines in server.log had become:
>>>
>>> 2017-12-22 00:48:17,536+01 INFO  [org.quartz.core.QuartzScheduler]
>>> (ServerService Thread Pool -- 63) Scheduler 
>>> DefaultQuartzScheduler_$_NON_CLUSTERED
>>> started.
>>> 2017-12-22 00:48:17,545+01 INFO  [org.jboss.as.clustering.infinispan]
>>> (ServerService Thread Pool -- 63) WFLYCLINF0002: Started timeout-base cache
>>> from ovirt-engine container
>>> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
>>> Thread Pool -- 25) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name :
>>> "ovirt-web-ui.war")
>>> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
>>> Thread Pool -- 25) WFLYSRV0010: Deployed "restapi.war" (runtime-name :
>>> "restapi.war")
>>> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
>>> Thread Pool -- 25) WFLYSRV0010: Deployed "engine.ear" (runtime-name :
>>> "engine.ear")
>>> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
>>> Thread Pool -- 25) 

Re: [ovirt-users] [Call for feedback] share also your successful upgrades experience

2017-12-22 Thread Sandro Bonazzola
2017-12-22 11:44 GMT+01:00 Gianluca Cecchi :

> On Thu, Dec 21, 2017 at 2:35 PM, Sandro Bonazzola 
> wrote:
>
>> Hi,
>> now that oVirt 4.2.0 has been released, we're starting to see some
>> reports about issues that for now are related to not so common deployments.
>> We'd also like to get some feedback from those who upgraded to this
>> amazing release without any issue and add these positive feedback under our
>> developers (digital) Christmas tree as a gift for the effort put in this
>> release.
>> Looking forward to your positive reports!
>>
>> Not having positive feedback? Let us know too!
>> We are putting an effort in the next weeks to promptly assist whoever hit
>> troubles during or after the upgrade. Let us know in this users@ovirt.org
>> mailing list (preferred) or on IRC using  irc.oftc.net server and #ovirt
>> channel.
>>
>> We are also closely monitoring bugzilla.redhat.com for new bugs on oVirt
>> project, so you can report issues there as well.
>>
>> Thanks,
>> --
>>
>> SANDRO BONAZZOLA
>>
>
>
> Hi Sandro,
> nice to see final 4.2!
>
> I successfully updated a test/lab nested HCI cluster from oVirt 4.1.7 +
> Gluster 3.10 to oVirt 4.2 + Gluster 3.12 (automatically picked by the
> upgrade):
> 3 hosts with CentOS 7.4
>

Thanks for the report Gianluca!



>
> Basically following here:
> https://ovirt.org/documentation/how-to/hosted-
> engine/#upgrade-hosted-engine
>
> steps 5,6 substituted by reboot of the upgraded host.
>
> My running C6 VM had no downtime during upgrade of the three hosts
>
> Only problem I registered was the first start of engine on the first
> upgraded host (that should be step 7 in link above), where I got error on
> engine (not able to access web admin portal); I see this in server.log
> (see full attach here https://drive.google.com/file/d/
> 1UQAllZfjueVGkXDsBs09S7THGDFn9YPa/view?usp=sharing )
>
> 2017-12-22 00:40:17,674+01 INFO  [org.quartz.core.QuartzScheduler]
> (ServerService Thread Pool -- 63) Scheduler 
> DefaultQuartzScheduler_$_NON_CLUSTERED
> started.
> 2017-12-22 00:40:17,682+01 INFO  [org.jboss.as.clustering.infinispan]
> (ServerService Thread Pool -- 63) WFLYCLINF0002: Started timeout-base cache
> from ovirt-engine container
> 2017-12-22 00:44:28,611+01 ERROR 
> [org.jboss.as.controller.management-operation]
> (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting
> for service container stability. Operation will roll back. Step that first
> updated the service container was 'add' at address '[
> ("core-service" => "management"),
> ("management-interface" => "native-interface")
> ]'
>

Adding Vaclav, maybe something in Wildfly? Martin, any hint on engine side?




> 2017-12-22 00:44:28,722+01 INFO  [org.wildfly.extension.undertow]
> (ServerService Thread Pool -- 65) WFLYUT0022: Unregistered web context:
> '/ovirt-engine/apidoc' from server 'default-server'
>
> Then I restarted the hosted engine vm on the same first upgraded host and
> it was able now to correctly start and web admin portal ok. The
> corresponding lines in server.log had become:
>
> 2017-12-22 00:48:17,536+01 INFO  [org.quartz.core.QuartzScheduler]
> (ServerService Thread Pool -- 63) Scheduler 
> DefaultQuartzScheduler_$_NON_CLUSTERED
> started.
> 2017-12-22 00:48:17,545+01 INFO  [org.jboss.as.clustering.infinispan]
> (ServerService Thread Pool -- 63) WFLYCLINF0002: Started timeout-base cache
> from ovirt-engine container
> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
> Thread Pool -- 25) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name :
> "ovirt-web-ui.war")
> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
> Thread Pool -- 25) WFLYSRV0010: Deployed "restapi.war" (runtime-name :
> "restapi.war")
> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
> Thread Pool -- 25) WFLYSRV0010: Deployed "engine.ear" (runtime-name :
> "engine.ear")
> 2017-12-22 00:48:23,248+01 INFO  [org.jboss.as.server] (ServerService
> Thread Pool -- 25) WFLYSRV0010: Deployed "apidoc.war" (runtime-name :
> "apidoc.war")
> 2017-12-22 00:48:24,175+01 INFO  [org.jboss.as.server] (Controller Boot
> Thread) WFLYSRV0212: Resuming server
> 2017-12-22 00:48:24,219+01 INFO  [org.jboss.as] (Controller Boot Thread)
> WFLYSRV0060: Http management interface listening on http://127.0.0.1:8706/
> management
>
> I also updated cluster and DC level to 4.2.
>
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration without shared storage

2017-12-22 Thread Michal Skrivanek

> On 21 Dec 2017, at 17:14, FERNANDO FREDIANI  wrote:
> 
> That is certainly going to be a very welcome feature and, if not there yet,
> it should be at the top of the roadmap. For planned maintenance it solves
> most downtime problems.
> 
> 

It is on the roadmap [1], but there’s no active work on it yet; oVirt is
primarily built around shared storage.
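For the record, plain libvirt/KVM can do it by copying the disks over the
wire during the migration; a minimal sketch outside of oVirt's management
(the VM and host names are placeholders):

   # live-migrate a VM between two KVM hosts without shared storage
   virsh migrate --live --persistent --copy-storage-all \
       myvm qemu+ssh://destination-host/system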

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1326857
> Fernando
> 
> On 21/12/2017 12:19, Pujan Shah wrote:
>> We have a bit odd setup where some of our clients have dedicated hosts and 
>> we also have some shared hosts. We can migrate client VMs from their 
>> dedicated host to shared host if we need ot do some maintenance. We don't 
>> have shared storage and currently we are using XenServer which supports live 
>> migration without shared storage. We recently started looking into KVM as an 
>> alternative and decided to try ovirt. To our surprise KVM supports live 
>> migration without shared storage but ovirt does not. 
>> (https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/)
>> 
>> I wanted to know if anyone has dealt with such a situation, and whether
>> this is something others are also looking for.
>> 
>> 
>> Regards,
>> Pujan Shah
>> Systemadministration
>> 
>> --
>> tel.: +49 (0) 221 / 95 168 - 74
>> mail: p...@dom.de
>> DOM Digital Online Media GmbH,
>> Bismarck Str. 60
>> 50672 Köln
>> 
>> http://www.dom.de/ 
>> 
>> Geschäftsführer: Markus Schulte
>> Handelsregister-Nr.: Amtsgericht Köln HRB 55347
>> UST.-Ident.Nr. DE 814 416 951
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users 
>> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade to 4.2 Postgresql Error

2017-12-22 Thread Yaniv Kaul
On Dec 22, 2017 11:56 AM, "Gabriel Stein"  wrote:

Well, it worked: I could upgrade the hosted-engine by changing /etc/locale.conf
to en_US.UTF-8. I also deactivated puppet, in case someone else is using puppet
for provisioning.

Now I'm having problems updating the host, but it's just that annoying bug
from vdsm (you need to restart it)


Can you share more information?
And the log, if possible.
Y.


Thanks a lot!

Best Regards,

Gabriel
PS: Looking forward for the resolution in Bugzilla.

Gabriel Stein
--
Gabriel Ferraz Stein
Tel.: +49 (0)  170 2881531

2017-12-21 17:15 GMT+01:00 Simone Tiraboschi :

>
>
> On Thu, Dec 21, 2017 at 4:30 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Dec 21, 2017 at 8:41 AM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> 2017-12-20 16:07 GMT+01:00 Gabriel Stein :
>>>
 Hi!

 well, I'm a update fever and I decided to update my ovirt to 4.2.0.

>>>
>>> Thanks for this valuable feedback! Simone has already replied and will
>>> check your setup logs.
>>>
>>>
>>>
>>>

 How I'm doingi it?

 I'm following the rules!

 1 - Global Maintenance Mode
 2 - Yum Install ovirt-release-4.2
 3- yum update ovirt-setup*.

 But, by the engine-setup I have a conflict with the system collation
 and postgresql. Unfortunatelly I changed via puppet to the german
 collation(de_DE-UTF8) of ovirt-engine hosted vm(because it's a standard for
 us) but this was after the engine-setup and the DB Configuration from
 Postgresql.

>>>
>>> Adding also Didi
>>>
>>>

 I think that I can easily change the system collation to us-US-UFT8 but
 I'm afraid that I can "destroy" my hosted-engine VM with that change, is
 hosted-engine so sensible?

 How I know that error?  The logs are saying that(and the error in on
 postgresql upgrade part of setup)!

 Performing Consistency Checks
 -----------------------------
 Checking cluster versions                                   ok
 Checking database user is the install user                  ok
 Checking database connection settings                       ok
 Checking for prepared transactions                          ok
 Checking for reg* system OID user data types                ok
 Checking for contrib/isn with bigint-passing mismatch       ok
 Checking for invalid "line" user columns                    ok
 Creating dump of global objects                             ok
 Creating dump of database schemas
   engine
   ovirt_engine_history
   postgres
   template1
                                                             ok

 lc_collate values for database "postgres" do not match:
   old "en_US.UTF-8", new "de_DE.UTF-8"
 Failure, exiting

 I would be thankful if someone could give me some hint about that!

>>>
>> OK, reproduced.
>> The issue happens if and only if you changed system wide locales after
>> having installed ovirt-engine but before upgrading it to 4.2.
>> I'm going to open a bug to track it.
>>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1528371
>
>
>>
>> In engine-setup we are explicitly setting en_US.UTF-8 as the locale of
>> the engine DB but we are not touching at all the locale of postgres own DB
>> which match the system wide locale and your issue is indeed on the postgres
>> DB, not on the engine one.
>> pg_upgrade cannot change it on the fly on upgrades.
>>
>> I tried to find a working fix with env variables but unfortunately
>> nothing I tried worked.
>> The only workaround I was able to find is to temporarily set the locale you
>> had at the first successful engine-setup execution time (en_US.UTF-8 in your
>> case, but it varies) under /etc/locale.conf and only then execute
>> engine-setup to upgrade it to 4.2.
>> After that you can safely switch back /etc/locale.conf to whatever you
>> need.
>> None of the env variables I tried seem relevant for this specific issue.
>>
>>
>>
>>
>>>
 Best Regards,

 Gabriel
 PS: If I go to devconf in Brno I will pay a lot of beers to the
 developer of the engine-setup rollback! Saved my life!

>>>
>>> The specific developer won't be there, but you're welcome to reach oVirt
>>> people there and share some beer :-)
>>>
>>>
>>>






 Gabriel Stein
 --
 Gabriel Ferraz Stein
 Tel.: +49 (0)  170 2881531

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA 
>>> 
>>> TRIED. 

Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-22 Thread Sahina Bose
On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola 
wrote:

>
>
> 2017-12-21 17:01 GMT+01:00 Stefano Danzi :
>
>>
>>
>> Il 21/12/2017 16:37, Sandro Bonazzola ha scritto:
>>
>>
>>
>> 2017-12-21 14:26 GMT+01:00 Stefano Danzi :
>>
>>> Solved by installing the glusterfs-gnfs package.
>>> Anyway, it could be nice to move the hosted engine to gluster
>>>
>>>
>> Adding some gluster folks. Are we missing a dependency somewhere?
>> During the upgrade nfs on gluster stopped to work here and adding the
>> missing dep solved.
>> Stefano please confirm, you were on gluster 3.8 (oVirt 4.1) and now you
>> are on gluster 3.12 (ovirt 4.2)
>>
>> Sandro I confirm the version.
>> Host are running CentOS 7.4.1708
>> before the upgrade there was gluster 3.8 in oVirt 4.1
>> now I have gluster 3.12 in oVirt 4.2
>>
>>
> Thanks Stefano, I alerted glusterfs team, they'll have a look.
>


[Adding Jiffin to take a look and confirm]

I think this is to do with the separation of nfs components in gluster 3.12
(see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The recommended
nfs solution with gluster is nfs-ganesha, and hence the gluster nfs is no
longer installed by default.
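So on an upgraded host the fix Stefano found boils down to something like this
(a sketch; the package comes from the CentOS Storage SIG repos used by oVirt
4.2):

   # install the gluster NFS (gnfs) server bits that are no longer pulled in by default
   yum install glusterfs-gnfs

   # re-enable the built-in NFS server on the volume and check that it comes up
   gluster volume set engine nfs.disable off
   gluster volume status engine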




>
>>
>>
>>>
>>> Il 21/12/2017 11:37, Stefano Danzi ha scritto:
>>>
>>>
>>>
>>> Il 21/12/2017 11:30, Simone Tiraboschi ha scritto:
>>>
>>>
>>>
>>> On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi 
>>> wrote:
>>>
 Hello!
 I have a test system with one phisical host and hosted engine running
 on it.
 Storage is gluster but hosted engine mount it as nfs.

 After the upgrade gluster no longer activate nfs.
 The command "gluster volume set engine nfs.disable off" doesn't help.

 How I can re-enable nfs? O better how I can migrate self hosted engine
 to native glusterfs?

>>>
>>>
>>> Ciao Stefano,
>>> could you please attach the output of
>>>   gluster volume info engine
>>>
>>> adding Kasturi here
>>>
>>>
>>> [root@ovirt01 ~]# gluster volume info engine
>>>
>>> Volume Name: engine
>>> Type: Distribute
>>> Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
>>> Options Reconfigured:
>>> server.event-threads: 4
>>> client.event-threads: 4
>>> network.ping-timeout: 30
>>> server.allow-insecure: on
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> network.remote-dio: enable
>>> cluster.eager-lock: enable
>>> performance.stat-prefetch: off
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> nfs.disable: off
>>> performance.low-prio-threads: 32
>>> cluster.data-self-heal-algorithm: full
>>> cluster.locking-scheme: granular
>>> cluster.shd-max-threads: 8
>>> cluster.shd-wait-qlength: 1
>>> features.shard: on
>>> user.cifs: off
>>> features.shard-block-size: 512MB
>>>
>>>
>>>
>>>
>>> ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing 
>>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-22 Thread Sandro Bonazzola
2017-12-21 17:01 GMT+01:00 Stefano Danzi :

>
>
> Il 21/12/2017 16:37, Sandro Bonazzola ha scritto:
>
>
>
> 2017-12-21 14:26 GMT+01:00 Stefano Danzi :
>
>> Solved by installing the glusterfs-gnfs package.
>> Anyway, it could be nice to move the hosted engine to gluster
>>
>>
> Adding some gluster folks. Are we missing a dependency somewhere?
> During the upgrade nfs on gluster stopped to work here and adding the
> missing dep solved.
> Stefano please confirm, you were on gluster 3.8 (oVirt 4.1) and now you
> are on gluster 3.12 (ovirt 4.2)
>
> Sandro I confirm the version.
> Host are running CentOS 7.4.1708
> before the upgrade there was gluster 3.8 in oVirt 4.1
> now I have gluster 3.12 in oVirt 4.2
>
>
Thanks Stefano, I alerted glusterfs team, they'll have a look.


>
>
>>
>> Il 21/12/2017 11:37, Stefano Danzi ha scritto:
>>
>>
>>
>> Il 21/12/2017 11:30, Simone Tiraboschi ha scritto:
>>
>>
>>
>> On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi  wrote:
>>
>>> Hello!
>>> I have a test system with one phisical host and hosted engine running on
>>> it.
>>> Storage is gluster but hosted engine mount it as nfs.
>>>
>>> After the upgrade gluster no longer activate nfs.
>>> The command "gluster volume set engine nfs.disable off" doesn't help.
>>>
>>> How I can re-enable nfs? O better how I can migrate self hosted engine
>>> to native glusterfs?
>>>
>>
>>
>> Ciao Stefano,
>> could you please attach the output of
>>   gluster volume info engine
>>
>> adding Kasturi here
>>
>>
>> [root@ovirt01 ~]# gluster volume info engine
>>
>> Volume Name: engine
>> Type: Distribute
>> Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
>> Options Reconfigured:
>> server.event-threads: 4
>> client.event-threads: 4
>> network.ping-timeout: 30
>> server.allow-insecure: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> nfs.disable: off
>> performance.low-prio-threads: 32
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 1
>> features.shard: on
>> user.cifs: off
>> features.shard-block-size: 512MB
>>
>>
>>
>>
>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>>
>>
>> ___
>> Users mailing 
>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users