Re: [ovirt-users] Minor issue upgrading to 4.2

2017-12-24 Thread Misak Khachatryan
Hi,

It seems I'm in the same situation: my cluster shows the firewall type
as iptables, and firewalld's status on the hosts is:

systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
  Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled;
vendor preset: enabled)
  Active: inactive (dead)
Docs: man:firewalld(1)

The problem I hit is that one of my VMs got paused a second time due to a
storage error.

3-host hyperconverged cluster with GlusterFS, oVirt 4.2.

Best regards,
Misak Khachatryan


On Sun, Dec 24, 2017 at 3:26 PM, Yaniv Kaul  wrote:
> Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1511013 - can you
> confirm?
> Y.
>
>
> On Sat, Dec 23, 2017 at 1:56 AM, Chris Adams  wrote:
>>
>> I upgraded a CentOS 7 oVirt 4.1.7 (initially installed as 3.5 if it
>> matters) test oVirt cluster to 4.2.0, and ran into one minor issue.  The
>> update installed firewalld on the host, which was set to start on boot.
>> This replaced the iptables rules with a blank firewalld setup that only
>> allowed SSH, which kept the host from working.
>>
>> Stopping and disabling firewalld, then reloading iptables, got the host
>> back working.
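
(For reference, the recovery described above corresponds roughly to the
following commands; a sketch that assumes the legacy iptables service from
the iptables-services package is still installed on the host:)

systemctl stop firewalld
systemctl disable firewalld
systemctl restart iptables    # reload the saved iptables rules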
>>
>> In a quick search, I didn't see anything noting that firewalld was now
>> required, and it didn't seem to be configured correctly if oVirt was
>> trying to use it.
>>
>> --
>> Chris Adams 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4.2: Can't add local storage

2017-12-24 Thread Sandro Bonazzola
Il 24 Dic 2017 18:28, "Blaster"  ha scritto:

Fresh install of 4.2 self hosted engine.

Right after install, I tried to add local storage.  I tried to put the
hosted engine VM into maintenance mode, but it just sat there forever in the
"going into maintenance" state and never actually entered maintenance.

The default Datacenter was in uninitialized state.

So I tried adding an NFS storage domain, which initialized the datacenter.
Now when I try to put the hosted engine VM into maintenance mode I get: Cannot
switch the hosts to maintenance mode. There are no available hosts capable
of running the engine VM.

What am I doing wrong?


You need to add a second host, so that the hosted engine can migrate, in
order to move the current host to maintenance.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-24 Thread Dan Kenigsberg
On Fri, Dec 22, 2017 at 5:35 PM, Blaster  wrote:
>
>
> On Dec 22, 2017, at 12:32 AM, Sandro Bonazzola  wrote:
>
>
>
> 2017-12-21 18:20 GMT+01:00 Blaster :
>>
>> What a great Christmas present!
>>
>> I'm still running a 3.6.3 All-In-One configuration on Fedora 22.  So it
>> looks like I'll be starting from scratch.
>>
>> Is there any decent way yet to take my on-disk VM images and easily attach
>> them?  I put in an RFE for that which looks like it didn't make it into 4.2,
>> and is now slated for 4.2.1.

Can you point to it, please?

>>
>> The old way I used to do this was to create new VMs with new disks, then
>> just copy my old VM images over the disk files of the new VMs.  I'm assuming
>> I'll still have to use that method.
>
>
> Are you running all-in-one with only this host or do you have more hosts
> attached to this engine?
> Is your data storage domain local? Or is it shared / provided by a different
> host?
>
>
> I am running a single host configuration with local SATA disk.  When I need
> to change hosts, I generally just move the disks (or copy data via NFS) to
> the new host, create new VMs, then move the old VM disks into the new VM
> directory with the new file names and boot.
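
(For readers following along: the "copy the old disk over the new one"
approach described above can be sketched roughly as below. The paths, file
names and storage layout are purely illustrative assumptions, not the
poster's actual setup.)

# 1. Create the new VM with an empty disk at least as large as the old one.
# 2. On the host, overwrite the new image file with the old one and
#    restore vdsm:kvm ownership (uid/gid 36 on a stock oVirt host).
qemu-img convert -O raw /backup/oldvm-disk1.img \
    /data/images/<new-disk-image-uuid>
chown 36:36 /data/images/<new-disk-image-uuid>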
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 4.2: Can't add local storage

2017-12-24 Thread Blaster

Fresh install of 4.2 self hosted engine.

Right after install, I tried to add local storage.  I tried to put the
hosted engine VM into maintenance mode, but it just sat there forever in the
"going into maintenance" state and never actually entered maintenance.


The default Datacenter was in uninitialized state.

So I tried adding an NFS storage domain, which initialized the 
datacenter.  Now when I try to put the hosted engine VM into maintenance mode
I get: Cannot switch the hosts to maintenance mode. There are no 
available hosts capable of running the engine VM.


What am I doing wrong?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Minor issue upgrading to 4.2

2017-12-24 Thread Chris Adams
My cluster shows iptables as the firewall type in the web UI, and
firewall_type is 0 in the database.
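
(Side note for anyone checking the same thing: the value can be inspected
directly in the engine database. A sketch, in which the table and column
names are assumptions based on the 4.2 schema, with 0 expected to mean
iptables and 1 firewalld:)

su - postgres -c "psql engine -c 'SELECT name, firewall_type FROM cluster;'"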

Once upon a time, Yaniv Kaul  said:
> Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1511013 - can you
> confirm?
> Y.
> 
> On Sat, Dec 23, 2017 at 1:56 AM, Chris Adams  wrote:
> 
> > I upgraded a CentOS 7 oVirt 4.1.7 (initially installed as 3.5 if it
> > matters) test oVirt cluster to 4.2.0, and ran into one minor issue.  The
> > update installed firewalld on the host, which was set to start on boot.
> > This replaced the iptables rules with a blank firewalld setup that only
> > allowed SSH, which kept the host from working.
> >
> > Stopping and disabling firewalld, then reloading iptables, got the host
> > back working.
> >
> > In a quick search, I didn't see anything noting that firewalld was now
> > required, and it didn't seem to be configured correctly if oVirt was
> > trying to use it.
> >
> > --
> > Chris Adams 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] took the plunge to 4.2 but not so sure it was a good idea

2017-12-24 Thread Yaniv Kaul
On Sun, Dec 24, 2017 at 1:53 PM, Jason Keltz  wrote:

> Quoting Yaniv Kaul :
>
> On Sun, Dec 24, 2017 at 4:34 AM, Jason Keltz  wrote:
>>
>>
>>> On 12/23/2017 5:38 PM, Jason Keltz wrote:
>>>
>>> Hi..

 I took the plunge to 4.2, but I think maybe I should have waited a
 bit...


>>> Can you specify what you upgraded, and in which order? Engine, hosts?
>> Cluster level, etc.?
>>
>>
> I was running 4.1.8 everywhere. I upgraded engine (standalone) to 4.2,
> then the 4 hosts. I stopped ovirt-engine, added the new repo for 4.2, ran
> the yum update of ovirt setup, ran engine-setup and that process worked
> flawlessly. No errors. I had just upgraded to 4.1.8 a few days ago, so all
> my ovirt infrastructure was running latest ovirt and I also upgraded engine
> and hosts to latest CentOS and latest kernel with the last 4.1.8 update.  I
> then upgraded cluster level. All the VMs were going to be upgraded as they
> were rebooted, and since it's the reboot that breaks console, and since a
> reinstall brings it back, I'm going to assume it's the switch from 4.1 to
> 4.2 cluster that breaks it.  If I submit this as a bug then what log/logs
> would I submit?


I think the vdsm log is going to be very helpful, but also the console (and
potentially the engine log). ovirt-log-collector should collect everything needed.
Thanks,
Y.
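
(For anyone unfamiliar with the tool mentioned above, a minimal run on the
engine host looks roughly like this; it prompts for the admin password and
bundles the engine, database and host logs into one archive. A sketch, not
an exact transcript of its options.)

ovirt-log-collector    # collect logs from the engine and all reachable hosts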


>
>
>
>>
>>> Initially, after upgrade to 4.2, the status of many of my hosts changed
 from "server" to "desktop".  That's okay - I can change them back.


>>> You mean the VM type?
>>
>>
>> Yes.  VM type. Most of the VMs switched from desktop to server after the
> update.
>
>
> My first VM, "archive", I had the ability to access console after the
 upgrade.  I rebooted archive, and I lost the ability (option is grayed
 out).  The VM boots, but I need access to the console.

 My second VM is called "dist".  That one, oVirt says, is running, but I
 can't access it, can't ping it, and there's no console either, so I
 literally can't get to it. I can reboot it, and shut it down, but it
 would
 be helpful to be able to access it.   What to do?

 I reinstalled "dist" because I needed the VM to be accessible on the

>>> network.  I was going to try detaching the disk from the existing dist
>>> server, and attaching it to a new dist VM, but I ended up inadvertently
>>> deleting the disk image.  I can't believe that under "storage" you can't
>>> detach a disk from a VM - you can only delete the disk.
>>>
>>> After reinstalling dist, I got back console, and network access!  I tried
>>> rebooting it several times, and console remains... so the loss of console
>>> has something to do with switching from a 4.1 VM to 4.2.
>>>
>>> I'm very afraid to reboot my engine because it seems like when I reboot
>>>
 hosts, I lose access to console.

 I rebooted one more VM for which I had console access, and again, I've

>>> lost it (at least network access remains). Now that this situation is
>>> repeatable, I'm hoping one of the oVirt gurus can send me the magical DB
>>> command to fix it.  Probably not a solution to reinstall my 37 VMs from
>>> kickstart.. that would be a headache.
>>>
>>> In addition, when I try to check for "host updates", I get an error that
>>>
 it can't check for host updates.  I ran a yum update on the hosts (after
 upgrading repo to 4.2 and doing a yum update) and all I'm looking for
 it to
 do is clear status, but it doesn't seem to work.

 The error in engine.log when I try to update any of the hosts is:

>>>
>>> 2017-12-23 19:11:36,479-05 INFO [org.ovirt.engine.core.bll.hos
>>> tdeploy.HostUpgradeCheckCommand] (default task-156)
>>> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command:
>>> HostUpgradeCheckCommand internal: false. Entities affected :  ID:
>>> 45f8b331-842e-48e7-9df8-56adddb93836 Type: VDSAction group
>>> EDIT_HOST_CONFIGURATION with role type ADMIN
>>> 2017-12-23 19:11:36,496-05 INFO [org.ovirt.engine.core.dal.dbb
>>> roker.auditloghandling.AuditLogDirector] (default task-156) [] EVENT_ID:
>>> HOST_AVAILABLE_UPDATES_STARTED(884), Started to check for available
>>> updates on host virt1.
>>> 2017-12-23 19:11:36,500-05 INFO [org.ovirt.engine.core.bll.hos
>>> tdeploy.HostUpgradeCheckInternalCommand] (EE-ManagedThreadFactory-comma
>>> ndCoordinator-Thread-7)
>>> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command:
>>> HostUpgradeCheckInternalCommand internal: true. Entities affected : ID:
>>> 45f8b331-842e-48e7-9df8-56adddb93836 Type: VDS
>>> 2017-12-23 19:11:36,504-05 INFO [org.ovirt.engine.core.common.
>>> utils.ansible.AnsibleExecutor]
>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
>>> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Executing Ansible command:
>>> ANSIBLE_STDOUT_CALLBACK=hostupgradeplugin [/usr/bin/ansible-playbook,
>>> --check, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,

Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-24 Thread Shirly Radco
--

SHIRLY RADCO

BI SOFTWARE ENGINEER

Red Hat Israel 

TRIED. TESTED. TRUSTED. 

On Sat, Dec 23, 2017 at 12:35 PM, Christophe TREFOIS <
christophe.tref...@uni.lu> wrote:

> Hi,
>
> Is it possible to include metrics store into an existing ELK stack?
>

We support our server side; if you run into issues with your own setup we
will not be able to help, but it is possible.

The oVirt metrics store setup also includes the index templates for
elasticsearch for metrics and logs.

You will need to create them by yourself if you plan to use your own setup.
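
(To make the above concrete: an index template is registered against the
Elasticsearch API roughly as below. This is only a sketch; the template
name, index pattern and field mappings are illustrative assumptions, not
the actual templates shipped with the oVirt metrics store.)

curl -X PUT 'http://localhost:9200/_template/ovirt-metrics' \
     -H 'Content-Type: application/json' -d '{
  "template": "project.ovirt-metrics*",
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "_default_": {
      "properties": {
        "hostname":   { "type": "keyword" },
        "@timestamp": { "type": "date" }
      }
    }
  }
}'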

Best,
Shirly



>
> Thanks,
>
> --
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
> 
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the
> original message and any copies.
> 
>
>
> On 20 Dec 2017, at 10:40, Sandro Bonazzola  wrote:
>
> The oVirt project is excited to announce the general availability of oVirt
> 4.2.0, as of December 20, 2017.
>
> This release unleashes an altogether more powerful and flexible open
> source virtualization solution that encompasses over 1000 individual
> changes and a wide range of enhancements across the engine, storage,
> network, user interface, and analytics.
>
> Key features include:
>
> - A redesigned Administration Portal, with an improved user-interface
> - A new VM portal for non-admin users, for a more streamlined experience
> - High Performance virtual machine type, for easy optimization of high
> performance workloads.
> - oVirt metrics store, a new monitoring solution providing complete
> infrastructure visibility
> - Support for virtual machine connectivity via software-defined networks
> - oVirt now supports Nvidia vGPU
> - The ovirt-ansible-roles set of packages helps users with common
> administration tasks
> - Virt-v2v now supports Debian/Ubuntu and EL and Windows-based virtual
> machines
>
> For more information about these and other features, check out the oVirt
> 4.2.0 blog post .
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le (with 4.1
> cluster compatibility only) architectures for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> * oVirt Node 4.2 (available for x86_64 only)
>
> See the release notes [3] for installation / upgrade instructions and a
> list of new features and bugs fixed.
>
> If you're managing more than one oVirt instance, OpenShift Origin or RDO,
> we also recommend trying ManageIQ .
>
>
> Notes:
> - oVirt Appliance is already available.
> - oVirt Node is already available [4]
> - oVirt Windows Guest Tools iso is already available [4]
>
> Additional Resources:
> * Read more about the oVirt 4.2.0 release highlights:
> http://www.ovirt.org/release/4.2.0/
> * Get more oVirt project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
>
> [1] https://www.ovirt.org/community/
> [2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
> [3] http://www.ovirt.org/release/4.2.0/
> [4] http://resources.ovirt.org/pub/ovirt-4.2/iso/
>
> --
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] took the plunge to 4.2 but not so sure it was a good idea

2017-12-24 Thread Jason Keltz

Quoting Yaniv Kaul :


On Sun, Dec 24, 2017 at 4:34 AM, Jason Keltz  wrote:



On 12/23/2017 5:38 PM, Jason Keltz wrote:


Hi..

I took the plunge to 4.2, but I think maybe I should have waited a bit...




Can you specify what you upgraded, and in which order? Engine, hosts?
Cluster level, etc.?



I was running 4.1.8 everywhere. I upgraded engine (standalone) to 4.2,  
then the 4 hosts. I stopped ovirt-engine, added the new repo for 4.2,  
ran the yum update of ovirt setup, ran engine-setup and that process  
worked flawlessly. No errors. I had just upgraded to 4.1.8 a few days  
ago, so all my ovirt infrastructure was running latest ovirt and I  
also upgraded engine and hosts to latest CentOS and latest kernel with  
the last 4.1.8 update.  I then upgraded cluster level. All the VMs  
were going to be upgraded as they were rebooted, and since it's the  
reboot that breaks console, and since a reinstall brings it back, I'm  
going to assume it's the switch from 4.1 to 4.2 cluster that breaks  
it.  If I submit this as a bug then what log/logs would I submit?
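
(For reference, the engine-side steps described above map roughly onto the
standard upgrade flow below; a sketch that assumes the released
ovirt-release42 repo package and a standalone engine host.)

systemctl stop ovirt-engine
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum update "ovirt-*-setup*"
engine-setup
# afterwards: yum update the remaining packages, reboot if a new kernel was
# pulled in, then upgrade the hosts and raise the cluster compatibility level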







Initially, after upgrade to 4.2, the status of many of my hosts changed
from "server" to "desktop".  That's okay - I can change them back.




You mean the VM type?


Yes.  VM type. Most of the VMs switched from desktop to server after  
the update.



My first VM, "archive", I had the ability to access console after the
upgrade.  I rebooted archive, and I lost the ability (option is grayed
out).  The VM boots, but I need access to the console.

My second VM is called "dist".  That one, oVirt says, is running, but I
can't access it, can't ping it, and there's no console either, so I
literally can't get to it. I can reboot it, and shut it down, but it would
be helpful to be able to access it.   What to do?

I reinstalled "dist" because I needed the VM to be accessible on the

network.  I was going to try detaching the disk from the existing dist
server, and attaching it to a new dist VM, but I ended up inadvertently
deleting the disk image.  I can't believe that under "storage" you can't
detach a disk from a VM - you can only delete the disk.

After reinstalling dist, I got back console, and network access!  I tried
rebooting it several times, and console remains... so the loss of console
has something to do with switching from a 4.1 VM to 4.2.

I'm very afraid to reboot my engine because it seems like when I reboot

hosts, I lose access to console.

I rebooted one more VM for which I had console access, and again, I've

lost it (at least network access remains). Now that this situation is
repeatable, I'm hoping one of the oVirt gurus can send me the magical DB
command to fix it.  Probably not a solution to reinstall my 37 VMs from
kickstart.. that would be a headache.

In addition, when I try to check for "host updates", I get an error that

it can't check for host updates.  I ran a yum update on the hosts (after
upgrading repo to 4.2 and doing a yum update) and all I'm looking for it to
do is clear status, but it doesn't seem to work.

The error in engine.log when I try to update any of the hosts is:


2017-12-23 19:11:36,479-05 INFO [org.ovirt.engine.core.bll.hos
tdeploy.HostUpgradeCheckCommand] (default task-156)
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command:
HostUpgradeCheckCommand internal: false. Entities affected :  ID:
45f8b331-842e-48e7-9df8-56adddb93836 Type: VDSAction group
EDIT_HOST_CONFIGURATION with role type ADMIN
2017-12-23 19:11:36,496-05 INFO [org.ovirt.engine.core.dal.dbb
roker.auditloghandling.AuditLogDirector] (default task-156) [] EVENT_ID:
HOST_AVAILABLE_UPDATES_STARTED(884), Started to check for available
updates on host virt1.
2017-12-23 19:11:36,500-05 INFO [org.ovirt.engine.core.bll.hos
tdeploy.HostUpgradeCheckInternalCommand]  
(EE-ManagedThreadFactory-commandCoordinator-Thread-7)

[ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command:
HostUpgradeCheckInternalCommand internal: true. Entities affected : ID:
45f8b331-842e-48e7-9df8-56adddb93836 Type: VDS
2017-12-23 19:11:36,504-05 INFO  
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]

(EE-ManagedThreadFactory-commandCoordinator-Thread-7)
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Executing Ansible command:
ANSIBLE_STDOUT_CALLBACK=hostupgradeplugin [/usr/bin/ansible-playbook,
--check, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
--inventory=/tmp/ansible-inventory1039100972039373314,
/usr/share/ovirt-engine/playbooks/ovirt-host-upgrade.yml] [Logfile: null]
2017-12-23 19:11:37,897-05 INFO  
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]

(EE-ManagedThreadFactory-commandCoordinator-Thread-7)
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Ansible playbook command has
exited with value: 4
2017-12-23 19:11:37,897-05 ERROR  
[org.ovirt.engine.core.bll.host.HostUpgradeManager]

(EE-ManagedThreadFactory-commandCoordinator-Thread-7)
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Failed to run 

Re: [ovirt-users] Minor issue upgrading to 4.2

2017-12-24 Thread Yaniv Kaul
Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1511013 - can you
confirm?
Y.

On Sat, Dec 23, 2017 at 1:56 AM, Chris Adams  wrote:

> I upgraded a CentOS 7 oVirt 4.1.7 (initially installed as 3.5 if it
> matters) test oVirt cluster to 4.2.0, and ran into one minor issue.  The
> update installed firewalld on the host, which was set to start on boot.
> This replaced the iptables rules with a blank firewalld setup that only
> allowed SSH, which kept the host from working.
>
> Stopping and disabling firewalld, then reloading iptables, got the host
> back working.
>
> In a quick search, I didn't see anything noting that firewalld was now
> required, and it didn't seem to be configured correctly if oVirt was
> trying to use it.
>
> --
> Chris Adams 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.2 host not compatible

2017-12-24 Thread Edward Haas
On Fri, Dec 22, 2017 at 11:09 PM, Paul Dyer  wrote:

> It seems that I am using net_persistence = ifcfg, and that I have lost the
> definitions for the logical networks.
>

Could you please share with us when and in what manner you lost these
definitions?
Is it re-creatable?
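
(Context for readers: the net_persistence mode mentioned above is a
host-side VDSM setting. A quick way to check it, assuming the default
/etc/vdsm/vdsm.conf location; "ifcfg" keeps initscripts-style files under
/etc/sysconfig/network-scripts, while "unified" uses VDSM's own store.)

grep -n 'net_persistence' /etc/vdsm/vdsm.conf
# expected here, per the report above:  net_persistence = ifcfg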


>
> I have recovered these, and was able to do setup logical networks.
>
> It is all working now.
>
> Paul
>
> On Fri, Dec 22, 2017 at 1:46 PM, Paul Dyer  wrote:
>
>> My setup is RHEL 7.4, with the host separate from the engine.
>>
>> The ovirt-release42 rpm was added to the engine host, but not to the
>> virtualization host.   The vhost was still running v4.1 rpms.   I installed
>> ovirt-release42 on the vhost, then updated the rest of the rpms with "yum
>> update".I am still getting an error on activation of the vhost...
>>
>>  Host parasol does not comply with the cluster Intel networks, the
>> following networks are missing on host: 'data30,data40,ovirtmgmt'
>>
>> It seems like the network bridges are not there anymore?
>>
>> Paul
>>
>>
>>
>> On Fri, Dec 22, 2017 at 12:46 PM, Paul Dyer  wrote:
>>
>>> Hi,
>>>
>>> I have upgraded to oVirt 4.2 without issue.   But I cannot find a way to
>>> upgrade the host compatibility in the new oVirt Manager.
>>>
>>> I get this error when activating the host...
>>>
>>> host parasol is compatible with versions (3.6,4.0,4.1) and cannot join
>>> Cluster Intel which is set to version 4.2.
>>>
>>> Thanks,
>>> Paul
>>>
>>>
>>> --
>>> Paul Dyer,
>>> Mercury Consulting Group, RHCE
>>> 504-302-8750 <(504)%20302-8750>
>>>
>>
>>
>>
>> --
>> Paul Dyer,
>> Mercury Consulting Group, RHCE
>> 504-302-8750 <(504)%20302-8750>
>>
>
>
>
> --
> Paul Dyer,
> Mercury Consulting Group, RHCE
> 504-302-8750 <(504)%20302-8750>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] took the plunge to 4.2 but not so sure it was a good idea

2017-12-24 Thread Yaniv Kaul
On Sun, Dec 24, 2017 at 4:34 AM, Jason Keltz  wrote:

>
> On 12/23/2017 5:38 PM, Jason Keltz wrote:
>
>> Hi..
>>
>> I took the plunge to 4.2, but I think maybe I should have waited a bit...
>>
>
Can you specify what you upgraded, and in which order? Engine, hosts?
Cluster level, etc.?


>
>> Initially, after upgrade to 4.2, the status of many of my hosts changed
>> from "server" to "desktop".  That's okay - I can change them back.
>>
>
You mean the VM type?


>
>> My first VM, "archive", I had the ability to access console after the
>> upgrade.  I rebooted archive, and I lost the ability (option is grayed
>> out).  The VM boots, but I need access to the console.
>>
>> My second VM is called "dist".  That one, oVirt says, is running, but I
>> can't access it, can't ping it, and there's no console either, so I
>> literally can't get to it. I can reboot it, and shut it down, but it would
>> be helpful to be able to access it.   What to do?
>>
>> I reinstalled "dist" because I needed the VM to be accessible on the
> network.  I was going to try detaching the disk from the existing dist
> server, and attaching it to a new dist VM, but I ended up inadvertently
> deleting the disk image.  I can't believe that under "storage" you can't
> detach a disk from a VM - you can only delete the disk.
>
> After reinstalling dist, I got back console, and network access!  I tried
> rebooting it several times, and console remains... so the loss of console
> has something to do with switching from a 4.1 VM to 4.2.
>
> I'm very afraid to reboot my engine because it seems like when I reboot
>> hosts, I lose access to console.
>>
>> I rebooted one more VM for which I had console access, and again, I've
> lost it (at least network access remains). Now that this situation is
> repeatable, I'm hoping one of the oVirt gurus can send me the magical DB
> command to fix it.  Probably not a solution to reinstall my 37 VMs from
> kickstart.. that would be a headache.
>
> In addition, when I try to check for "host updates", I get an error that
>> it can't check for host updates.  I ran a yum update on the hosts (after
>> upgrading repo to 4.2 and doing a yum update) and all I'm looking for it to
>> do is clear status, but it doesn't seem to work.
>>
>> The error in engine.log when I try to update any of the hosts is:
>
> 2017-12-23 19:11:36,479-05 INFO [org.ovirt.engine.core.bll.hos
> tdeploy.HostUpgradeCheckCommand] (default task-156)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command:
> HostUpgradeCheckCommand internal: false. Entities affected :  ID:
> 45f8b331-842e-48e7-9df8-56adddb93836 Type: VDSAction group
> EDIT_HOST_CONFIGURATION with role type ADMIN
> 2017-12-23 19:11:36,496-05 INFO [org.ovirt.engine.core.dal.dbb
> roker.auditloghandling.AuditLogDirector] (default task-156) [] EVENT_ID:
> HOST_AVAILABLE_UPDATES_STARTED(884), Started to check for available
> updates on host virt1.
> 2017-12-23 19:11:36,500-05 INFO [org.ovirt.engine.core.bll.hos
> tdeploy.HostUpgradeCheckInternalCommand] 
> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command:
> HostUpgradeCheckInternalCommand internal: true. Entities affected : ID:
> 45f8b331-842e-48e7-9df8-56adddb93836 Type: VDS
> 2017-12-23 19:11:36,504-05 INFO 
> [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Executing Ansible command:
> ANSIBLE_STDOUT_CALLBACK=hostupgradeplugin [/usr/bin/ansible-playbook,
> --check, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
> --inventory=/tmp/ansible-inventory1039100972039373314,
> /usr/share/ovirt-engine/playbooks/ovirt-host-upgrade.yml] [Logfile: null]
> 2017-12-23 19:11:37,897-05 INFO 
> [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Ansible playbook command has
> exited with value: 4
> 2017-12-23 19:11:37,897-05 ERROR 
> [org.ovirt.engine.core.bll.host.HostUpgradeManager]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Failed to run check-update of host
> 'virt1-mgmt'.
> 2017-12-23 19:11:37,897-05 ERROR 
> [org.ovirt.engine.core.bll.hostdeploy.HostUpdatesChecker]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] Failed to check if updates are
> available for host 'virt1' with error message 'Failed to run check-update
> of host 'virt1-mgmt'.'
> 2017-12-23 19:11:37,904-05 ERROR [org.ovirt.engine.core.dal.dbb
> roker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-commandCoordinator-Thread-7)
> [ae11a704-3b40-45d3-9850-932f6ed91ed9] EVENT_ID:
> HOST_AVAILABLE_UPDATES_FAILED(839), Failed to check for available updates
> on host virt1 with message 'Failed to run check-update of host
> 'virt1-mgmt'.'.
>
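
(One way to debug a failure like this is to re-run the same playbook by hand
on the engine host with verbose output. The sketch below is an assumption:
the engine's temporary inventory file is gone, so an ad-hoc inventory with
the host's address is passed instead, and the playbook is assumed to target
all hosts in that inventory.)

ANSIBLE_STDOUT_CALLBACK=debug /usr/bin/ansible-playbook -vvv --check \
    --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa \
    -i 'virt1-mgmt,' \
    /usr/share/ovirt-engine/playbooks/ovirt-host-upgrade.yml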
>
Can you share the complete 

[ovirt-users] Re: Host ovritnode1 installation failed. Command returned failure code 1 during SSH session 'root@X.X.X.X'.

2017-12-24 Thread M.I.S
Thank you for your help, my problem is solved.




------ Original message ------
From: "Kasturi Narra";
Date: Thursday, 7 December 2017, 9:59
To: "M.I.S"<1312121...@qq.com>;
Cc: "users";
Subject: Re: [ovirt-users] Host ovritnode1 installation failed. Command returned
failure code 1 during SSH session 'root@X.X.X.X'.



Hello,

 Looks like there is a problem with the repo which is present in your 
system. Can you please disable the repo and try installing the host again? That 
should solve the problem.
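
(The repo in question appears in the error below as ovirt-4.1-epel; a sketch
of disabling it on the host, assuming yum-utils is installed:)

yum-config-manager --disable ovirt-4.1-epel
yum clean all
# then retry adding the host from the engine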


Thanks
kasturi


On Thu, Dec 7, 2017 at 1:53 PM, M.I.S <1312121...@qq.com> wrote:
hi,
   I encountered a problem.
   I am getting an error when adding a host to ovirt-engine: Host ovritnode1
installation failed. Command returned failure code 1 during SSH session
'root@192.168.1.152'.
   PS: the user name and password are correct.


I checked engine log, the information is as follows:



2017-12-06 18:58:52,995-05 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Correlation ID: 
617b92e4, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: 
Installing Host ovirtnode1. Stage: Environment setup.
 
2017-12-06 18:58:53,029-05 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Correlation ID: 
617b92e4, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: 
Installing Host ovirtnode1. Stage: Environment packages setup.
 
2017-12-06 18:59:22,974-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), 
Correlation ID: 617b92e4, Call Stack: null, Custom ID: null, Custom Event ID: 
-1, Message: Failed to install Host ovirtnode1. Yum Cannot queue package 
iproute: Cannot retrieve metalink for repository: ovirt-4.1-epel/x86_64. Please 
verify its path and try again.
 
2017-12-06 18:59:22,999-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), 
Correlation ID: 617b92e4, Call Stack: null, Custom ID: null, Custom Event ID: 
-1, Message: Failed to install Host ovirtnode1. Failed to execute stage 
'Environment packages setup': Cannot retrieve metalink for repository: 
ovirt-4.1-epel/x86_64. Please verify its path and try again.


  How can I solve this problem? Please help analyze it, thank you!

___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-24 Thread Jiffin Tony Thottan

On Friday 22 December 2017 03:08 PM, Sahina Bose wrote:



On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola > wrote:




2017-12-21 17:01 GMT+01:00 Stefano Danzi >:



Il 21/12/2017 16:37, Sandro Bonazzola ha scritto:



2017-12-21 14:26 GMT+01:00 Stefano Danzi >:

Solved by installing the glusterfs-gnfs package.
Anyway, it would be nice to move the hosted engine to gluster.


Adding some gluster folks. Are we missing a dependency somewhere?
During the upgrade, NFS on gluster stopped working here, and
adding the missing dep solved it.
Stefano please confirm, you were on gluster 3.8 (oVirt 4.1)
and now you are on gluster 3.12 (ovirt 4.2)


Sandro I confirm the version.
Host are running CentOS 7.4.1708
before the upgrade there was gluster 3.8 in oVirt 4.1
now I have gluster 3.12 in oVirt 4.2


Thanks Stefano, I alerted glusterfs team, they'll have a look.



[Adding Jiffin to take a look and confirm]

I think this has to do with the separation of the NFS components in gluster
3.12 (see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The
recommended NFS solution with gluster is nfs-ganesha, and hence gluster
NFS is no longer installed by default.
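
(For anyone hitting the same thing, the workaround discussed in this thread
boils down to the commands below; a sketch, with the package name taken from
the thread and the volume name from this particular setup.)

yum install glusterfs-gnfs
gluster volume set engine nfs.disable off
gluster volume status engine    # the NFS Server process should now show as online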



Hi,

For gluster NFS you need to install the glusterfs-gnfs package. As Sahina
said, this is a change from 3.12 onwards, I guess.


Regards,
Jiffin






Il 21/12/2017 11:37, Stefano Danzi ha scritto:



Il 21/12/2017 11:30, Simone Tiraboschi ha scritto:



On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi
> wrote:

Hello!
I have a test system with one phisical host and
hosted engine running on it.
Storage is gluster but the hosted engine mounts it as NFS.

After the upgrade gluster no longer activates NFS.
The command "gluster volume set engine nfs.disable
off" doesn't help.

How can I re-enable NFS? Or better, how can I migrate the
self hosted engine to native glusterfs?



Ciao Stefano,
could you please attach the output of
  gluster volume info engine

adding Kasturi here


[root@ovirt01 ~]# gluster volume info engine

Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
features.shard-block-size: 512MB





___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





-- 


SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG
VIRTUALIZATION R&D

Red Hat EMEA 

  
TRIED. TESTED. TRUSTED. 







-- 


SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat