Re: [ovirt-users] took the plunge to 4.2 but not so sure it was a good idea

2017-12-23 Thread Jason Keltz


On 12/23/2017 5:38 PM, Jason Keltz wrote:

Hi..

I took the plunge to 4.2, but I think maybe I should have waited a bit...

Initially, after the upgrade to 4.2, the optimization type of many of my 
VMs changed from "server" to "desktop".  That's okay - I can change them 
back.


For my first VM, "archive", I still had console access after the 
upgrade.  Then I rebooted archive, and I lost it (the option is grayed 
out).  The VM boots, but I need access to the console.


My second VM is called "dist".  oVirt says that one is running, but 
I can't access it, can't ping it, and there's no console either, so I 
literally can't get to it. I can reboot it and shut it down, but it 
would be helpful to be able to access it.  What to do?


I reinstalled "dist" because I needed the VM to be accessible on the 
network.  I was going to try detaching the disk from the existing dist 
server and attaching it to a new dist VM, but I ended up inadvertently 
deleting the disk image.  I can't believe that under "storage" you can't 
detach a disk from a VM - you can only delete it.  (A possible workaround 
via the REST API is sketched below.)
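
If I'm reading the v4 REST API right, deleting a disk *attachment* (rather 
than the disk) detaches it without removing it from storage. Something 
like this, where the engine host, VM id, attachment id, and password are 
all placeholders:

# Untested sketch: detach a disk from a VM without deleting it from storage.
# engine.example.com, VM_ID, ATTACHMENT_ID and PASSWORD are placeholders.
curl -k -u admin@internal:PASSWORD -X DELETE \
  "https://engine.example.com/ovirt-engine/api/vms/VM_ID/diskattachments/ATTACHMENT_ID?detach_only=true"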


After reinstalling dist, I got back the console and network access!  I 
tried rebooting it several times, and the console remains... so the loss of 
console has something to do with a VM crossing from 4.1 to 4.2.


I'm very afraid to reboot my engine, because it seems like when I 
reboot VMs, I lose access to their console.


I rebooted one more VM for which I had console access, and again, I've 
lost it (at least network access remains). Now that this situation is 
repeatable, I'm hoping one of the oVirt gurus can send me the magical DB 
command to fix it.  Reinstalling my 37 VMs from kickstart is probably not 
a solution... that would be a headache.
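
While waiting for a guru, my guess is that the place to look is the 
engine DB's vm_device table; a read-only peek, with the caveat that the 
table and column names are my assumption about the schema, not a verified 
fix:

# Hypothetical read-only check on the engine DB (run on the engine VM):
# does each VM still have a graphics device after the upgrade?
su - postgres -c "psql engine -c \"SELECT vm_id, type, device FROM vm_device WHERE type = 'graphics';\""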


In addition, when I try to check for "host updates", I get an error 
that it can't check for host updates.  I ran a yum update on the hosts 
(after upgrading the repo to 4.2), and all I'm looking for it to do is 
clear the update status, but it doesn't seem to work.



The error in engine.log when I try to update any of the hosts is:

2017-12-23 19:11:36,479-05 INFO 
[org.ovirt.engine.core.bll.hostdeploy.HostUpgradeCheckCommand] (default 
task-156) [ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command: 
HostUpgradeCheckCommand internal: false. Entities affected :  ID: 
45f8b331-842e-48e7-9df8-56adddb93836 Type: VDSAction group 
EDIT_HOST_CONFIGURATION with role type ADMIN
2017-12-23 19:11:36,496-05 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-156) [] EVENT_ID: HOST_AVAILABLE_UPDATES_STARTED(884), 
Started to check for available updates on host virt1.
2017-12-23 19:11:36,500-05 INFO 
[org.ovirt.engine.core.bll.hostdeploy.HostUpgradeCheckInternalCommand] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-7) 
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Running command: 
HostUpgradeCheckInternalCommand internal: true. Entities affected : ID: 
45f8b331-842e-48e7-9df8-56adddb93836 Type: VDS
2017-12-23 19:11:36,504-05 INFO 
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-7) 
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Executing Ansible command: 
ANSIBLE_STDOUT_CALLBACK=hostupgradeplugin [/usr/bin/ansible-playbook, 
--check, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, 
--inventory=/tmp/ansible-inventory1039100972039373314, 
/usr/share/ovirt-engine/playbooks/ovirt-host-upgrade.yml] [Logfile: null]
2017-12-23 19:11:37,897-05 INFO 
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-7) 
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Ansible playbook command has 
exited with value: 4
2017-12-23 19:11:37,897-05 ERROR 
[org.ovirt.engine.core.bll.host.HostUpgradeManager] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-7) 
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Failed to run check-update of 
host 'virt1-mgmt'.
2017-12-23 19:11:37,897-05 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.HostUpdatesChecker] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-7) 
[ae11a704-3b40-45d3-9850-932f6ed91ed9] Failed to check if updates are 
available for host 'virt1' with error message 'Failed to run 
check-update of host 'virt1-mgmt'.'
2017-12-23 19:11:37,904-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-7) 
[ae11a704-3b40-45d3-9850-932f6ed91ed9] EVENT_ID: 
HOST_AVAILABLE_UPDATES_FAILED(839), Failed to check for available 
updates on host virt1 with message 'Failed to run check-update of host 
'virt1-mgmt'.'.
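
I guess the same command the engine ran (visible in the log above) could 
be replayed by hand on the engine host to surface the real Ansible 
failure - an ansible-playbook exit code of 4 usually means the target host 
was unreachable. Something like this, with the temporary inventory file 
recreated by hand since the engine's /tmp one is deleted after the run:

# Replay the engine's check-update playbook manually (on the engine host).
echo "virt1-mgmt" > /tmp/my-inventory
ANSIBLE_STDOUT_CALLBACK=hostupgradeplugin /usr/bin/ansible-playbook \
  --check \
  --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa \
  --inventory=/tmp/my-inventory \
  /usr/share/ovirt-engine/playbooks/ovirt-host-upgrade.yml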


Jason.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] took the plunge to 4.2 but not so sure it was a good idea

2017-12-23 Thread Jason Keltz

Hi..

I took the plunge to 4.2, but I think maybe I should have waited a bit...

Initially, after the upgrade to 4.2, the optimization type of many of my VMs 
changed from "server" to "desktop".  That's okay - I can change them back.


For my first VM, "archive", I still had console access after the 
upgrade.  Then I rebooted archive, and I lost it (the option is grayed 
out).  The VM boots, but I need access to the console.


My second VM is called "dist".  oVirt says that one is running, but I 
can't access it, can't ping it, and there's no console either, so I 
literally can't get to it. I can reboot it and shut it down, but it 
would be helpful to be able to access it.  What to do?


I'm very afraid to reboot my engine, because it seems like when I reboot 
VMs, I lose access to their console.


In addition, when I try to check for "host updates", I get an error that 
it can't check for host updates.  I ran a yum update on the hosts (after 
upgrading the repo to 4.2), and all I'm looking for it to do is clear the 
update status, but it doesn't seem to work.


Let me know the exact log files to provide, and I will provide details.

Thanks!

Jason.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] share also your successful upgrades experience

2017-12-23 Thread Gabriel Stein
Well, I finally upgraded everything to 4.2. Unfortunately I broke a server in
the upgrade process and needed much more time than expected (the host was
carrying the Hosted Engine and Gluster). I won't go further into those
problems, because I interrupted the upgrade process to try to fix it and
ended up with a kernel panic.

I recommend using tmux or screen for the upgrade.

My experience:

I used this tutorial for the upgrade process:
https://ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine

You can use it for the Hosted Engine VM and the hosts. Be patient, don't
interrupt the yum update process, and follow the instructions.
If your locale is anything other than en_US.UTF-8, please change it to
en_US.UTF-8 before the upgrade. Where? /etc/locale.conf on CentOS. If you
are using e.g. Puppet, please deactivate it during the upgrade so it doesn't
change the locale back (if something in Puppet manages it).
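
A minimal sketch of the locale change, assuming en_US.UTF-8 is the target:

# Check and set the system locale on CentOS 7 before the upgrade.
localectl status
localectl set-locale LANG=en_US.UTF-8
cat /etc/locale.conf   # should now read LANG=en_US.UTF-8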

Problems:

I'm still having problems with glusterfs (Peer Rejected) and it's unstable,
but that probably happened because I copied the glusterfs UUID from the
broken server and added the freshly installed server with the same IP back
to gluster. The usual recovery for a rejected peer is sketched below.
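
Treat this as a sketch of the common "Peer Rejected" recovery rather than a
recipe; <good-node> is a placeholder for a healthy peer:

# Run on the rejected node. glusterd.info holds the node UUID; keep it,
# wipe the rest of the local state, then re-sync config from a healthy peer.
systemctl stop glusterd
cd /var/lib/glusterd
find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
systemctl start glusterd
gluster peer probe <good-node>
gluster peer status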


IMHO:
Please check that you have the engine backup done. Save it somewhere safe:
NFS, or rsync it to another server (a minimal sketch below).
When running engine-setup after the yum update on the ovirt-engine VM: don't
do the in-place upgrade of PostgreSQL. It's really nice to have, but if you
can avoid the risk, why take it?
Keep the PostgreSQL backup.
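
A minimal sketch of the backup itself; the file names and the backup host
are only examples:

# Full engine backup (run on the engine VM), then copy it off the machine.
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log
rsync -av engine-backup.tar.gz backuphost:/backups/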

Previous Version: 4.1.x
Updated to: 4.2

Setup:

CentOS 7.4.1708
4 servers, 3 of them running Gluster for the engine.

If you have questions, just ask.

Best Regards,

Gabriel


Gabriel Stein
--
Gabriel Ferraz Stein
Tel.: +49 (0)  170 2881531

2017-12-22 12:19 GMT+01:00 Gianluca Cecchi :

> On Fri, Dec 22, 2017 at 12:06 PM, Martin Perina 
> wrote:
>
>>
>>

 Only problem I registered was the first start of the engine on the first
 upgraded host (that should be step 7 in the link above), where I got an
 error on the engine (not able to access the web admin portal); I see this
 in server.log (see the full attachment here:
 https://drive.google.com/file/d/1UQAllZfjueVGkXDsBs09S7THGDFn9YPa/view?usp=sharing )

 2017-12-22 00:40:17,674+01 INFO  [org.quartz.core.QuartzScheduler]
 (ServerService Thread Pool -- 63) Scheduler 
 DefaultQuartzScheduler_$_NON_CLUSTERED
 started.
 2017-12-22 00:40:17,682+01 INFO  [org.jboss.as.clustering.infinispan]
 (ServerService Thread Pool -- 63) WFLYCLINF0002: Started timeout-base cache
 from ovirt-engine container
 2017-12-22 00:44:28,611+01 ERROR 
 [org.jboss.as.controller.management-operation]
 (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting
 for service container stability. Operation will roll back. Step that first
 updated the service container was 'add' at address '[
 ("core-service" => "management"),
 ("management-interface" => "native-interface")
 ]'

>>>
>>> Adding Vaclav, maybe something in Wildfly? Martin, any hint on engine
>>> side?
>>>
>>
>> Yeah, I've already seen this error a few times; it usually happens when
>> access to storage is really slow or the host itself is overloaded, and
>> WildFly is not able to start up properly before the default 300-second
>> interval is over.
>>
>> If this is going to happen often, we will have to raise that timeout for
>> all installations.
>>
>>
> Ok, thanks for confirming what I suspected.
> Actually this particular environment is based on a single NUC where I have
> ESXi 6.0U2, and the 3 hosts are actually VMs of this vSphere environment, so
> the hosted engine is an L2 VM.
> And the replica 3 (with arbiter) across the hosts ultimately sits on a
> single physical SSD disk below...
> All in all it is already a great success that this kind of infrastructure
> has been able to update from 4.1 to 4.2... I use it basically for
> functional testing, and btw on vSphere there is also another CentOS 7 VM
> running ;-)
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-23 Thread Simone Tiraboschi
On 23 Dec 2017 04:33, "Gary Pedretty"  wrote:

Unable to upgrade a self-hosted engine installation to 4.2.0 due to the
following error


ovirt-engine-setup-plugin-ovirt-engine conflicts with
ovirt-engine-4.0.6.3-1.el7.centos.noarch

This happens when trying to do the first step of updating the hosted engine VM.


Hi,
you cannot go directly from 4.0 to 4.2; you have to do a stepped upgrade,
4.0 -> 4.1 -> 4.2. One step is sketched below.
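
As a rough sketch of one step on the engine VM (the release RPM URL is the
usual one, but please verify it against the release notes for each version;
repeat with ovirt-release42.rpm for the 4.1 -> 4.2 step):

# Engine upgrade step 4.0 -> 4.1 (sketch).
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update "ovirt-*-setup*"
engine-setup
yum update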



Gary



Gary Pedretty  g...@ravnalaska.net
Systems Manager  www.flyravn.com
Ravn Alaska  W 907-450-7251
5245 Airport Industrial Road  C 907-388-2247
Fairbanks, Alaska 99709
Serving All of Alaska. White as far as the eye can see. Must be winter.
Second greatest commandment: “Love your neighbor as yourself” Matt 22:39




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2.0 is now generally available

2017-12-23 Thread Christophe TREFOIS
Hi,

Is it possible to integrate the metrics store with an existing ELK stack?

Thanks,
-- 

Dr Christophe Trefois, Dipl.-Ing.  
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine  
6, avenue du Swing 
L-4367 Belvaux  
T: +352 46 66 44 6124 
F: +352 46 66 44 6949  
http://www.uni.lu/lcsb 

This message is confidential and may contain privileged information. 
It is intended for the named recipient only. 
If you receive it in error please notify me and permanently delete the original 
message and any copies. 



> On 20 Dec 2017, at 10:40, Sandro Bonazzola  wrote:
> 
> The oVirt project is excited to announce the general availability of oVirt 
> 4.2.0, as of December 20, 2017.
> 
> This release unleashes an altogether more powerful and flexible open source 
> virtualization solution that encompasses over 1000 individual changes and a 
> wide range of enhancements across the engine, storage, network, user 
> interface, and analytics.
> 
> Key features include:
> 
> - A redesigned Administration Portal, with an improved user-interface
> - A new VM portal for non-admin users, for a more streamlined experience
> - High Performance virtual machine type, for easy optimization of high 
> performance workloads.
> - oVirt metrics store, a new monitoring solution providing complete 
> infrastructure visibility
> - Support for virtual machine connectivity via software-defined networks
> - oVirt now supports Nvidia vGPU
> - The ovirt-ansible-roles set of packages helps users with common 
> administration tasks
> - Virt-v2v now supports Debian/Ubuntu and EL and Windows-based virtual 
> machines
> 
> For more information about these and other features, check out the oVirt 
> 4.2.0 blog post .
> 
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> 
> This release supports Hypervisor Hosts on x86_64 and ppc64le (with 4.1 
> cluster compatibility only) architectures for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> * oVirt Node 4.2 (available for x86_64 only)
> 
> See the release notes [3] for installation / upgrade instructions and a list 
> of new features and bugs fixed.
> 
> If you’re managing more than one oVirt instance, OpenShift Origin, or RDO, we 
> also recommend trying ManageIQ .
> 
> 
> Notes:
> - oVirt Appliance is already available.
> - oVirt Node is already available [4]
> - oVirt Windows Guest Tools iso is already available [4]
> 
> Additional Resources:
> * Read more about the oVirt 4.2.0 release highlights: 
> http://www.ovirt.org/release/4.2.0/ 
> * Get more oVirt project updates on Twitter: https://twitter.com/ovirt 
> 
> * Check out the latest project news on the oVirt blog: 
> http://www.ovirt.org/blog/ 
> 
> 
> [1] https://www.ovirt.org/community/ 
> [2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt 
> 
> [3] http://www.ovirt.org/release/4.2.0/ 
> [4] 
> http://resources.ovirt.org/pub/ovirt-4.2/iso/ 
> 
> 
> -- 
> SANDRO BONAZZOLA
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> Red Hat EMEA 
>   
> TRIED. TESTED. TRUSTED. 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2 and host console

2017-12-23 Thread Gianluca Cecchi
On Sat, Dec 23, 2017 at 11:18 AM, Gianluca Cecchi  wrote:

> On Fri, Dec 22, 2017 at 10:22 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On 22 Dec 2017 10:12 PM, "Yaniv Kaul"  wrote:
>>
>>
>>
>> On Dec 22, 2017 7:33 PM, "Gianluca Cecchi" 
>> wrote:
>>
>> Hello, after upgrading the engine and then a plain CentOS 7.4 host from 4.1
>> to 4.2, I see in the host section that if I select the host's row and right
>> click, there is a host console entry... that tries to go to the typical
>> cockpit port 9090 of node-ng...
>> Is this an error, or in 4.2 is host console access available for plain OS
>> hosts too?
>> In that case, is there any service I have to enable on the host?
>> It seems indeed my host is not currently listening on port 9090.
>>
>>
>> Cockpit, plus firewall settings, need to be enabled to get to it.
>>
>>
>> The cockpit service should be up and running after the upgrade; oVirt host
>> deploy takes care of it. The firewall is configured by the engine unless you
>> disabled firewall config in the host configuration dialog.
>>
>> Didi, can you help here? Gianluca, can you share host upgrade logs?
>>
>>
>>
> Hello,
> this is a plain CentOS host, not an ovirt-node-ng one, that was upgraded
> from 4.1.7 to 4.2.
> So the upgrade path was: put the host into maintenance, yum update,
> reboot.
>
> Indeed cockpit was installed as part of the yum update:
>
> Dec 22 10:40:07 Installed: cockpit-bridge-155-1.el7.centos.x86_64
> Dec 22 10:40:07 Installed: cockpit-system-155-1.el7.centos.noarch
> Dec 22 10:40:40 Installed: cockpit-networkmanager-155-1.el7.centos.noarch
> Dec 22 10:40:42 Installed: cockpit-ws-155-1.el7.centos.x86_64
> Dec 22 10:40:43 Installed: cockpit-155-1.el7.centos.x86_64
> Dec 22 10:40:52 Installed: cockpit-storaged-155-1.el7.centos.noarch
> Dec 22 10:40:57 Installed: cockpit-dashboard-155-1.el7.centos.x86_64
> Dec 22 10:41:35 Installed: cockpit-ovirt-dashboard-0.11.
> 3-0.1.el7.centos.noarch
>
> I also see that there is a systemd cockpit.service unit that is configured
> as static and requires a cockpit.socket unit, which in turn is WantedBy
> sockets.target.
>
> But if I run
>
> [root@ovirt01 ~]# remotectl certificate
> remotectl: No certificate found in dir: /etc/cockpit/ws-certs.d
> [root@ovirt01 ~]#
>
> So it seems that the cockpit.service ExecStartPre has never been run:
> ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root
> --group=cockpit-ws --selinux-type=etc_t
>
> [root@ovirt01 ~]# systemctl status cockpit.service
> ● cockpit.service - Cockpit Web Service
>Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static;
> vendor preset: disabled)
>Active: inactive (dead)
>  Docs: man:cockpit-ws(8)
> [root@ovirt01 ~]#
>
> Gianluca
>
>
>
I see these lines in /var/log/messages (on the host I have no iptables and no
firewalld active, and selinux is permissive):

Dec 22 10:40:40 ovirt01 yum[116794]: Installed:
cockpit-networkmanager-155-1.el7.centos.noarch
Dec 22 10:40:41 ovirt01 systemd: Reloading.
Dec 22 10:40:41 ovirt01 vdsm-tool: module dump_volume_chains could not load
to vdsm-tool: Traceback (most recent call last):#012  File
"/usr/bin/vdsm-tool", line 91, in load_modules#012mod_absp,
mod_desc)#012  File
"/usr/lib/python2.7/site-packages/vdsm/tool/dump_volume_chains.py", line
26, in #012from vdsm import client#012  File
"/usr/lib/python2.7/site-packages/vdsm/client.py", line 106, in
#012from vdsm.api import vdsmapi#012  File
"/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 29, in
#012from vdsm.common.logutils import Suppressed#012ImportError:
No module named logutils
Dec 22 10:40:41 ovirt01 systemd:
[/usr/lib/systemd/system/ip6tables.service:3] Failed to add dependency on
syslog.target,iptables.service, ignoring: Invalid argument
Dec 22 10:40:41 ovirt01 systemd: Binding to IPv6 address not available
since kernel does not support IPv6.
...
Dec 22 10:40:43 ovirt01 yum[116794]: Installed:
cockpit-155-1.el7.centos.x86_64
Dec 22 10:40:43 ovirt01 yum[116794]: Updated:
libmount-2.23.2-43.el7_4.2.x86_64
Dec 22 10:40:43 ovirt01 vdsmd_init_common.sh: vdsm: Running
upgraded_version_check
Dec 22 10:40:43 ovirt01 vdsmd_init_common.sh: vdsm: Running
check_is_configured
Dec 22 10:40:43 ovirt01 vdsm-tool: module dump_volume_chains could not load
to vdsm-tool: Traceback (most recent call last):#012  File
"/usr/bin/vdsm-tool", line 91, in load_modules#012mod_absp,
mod_desc)#012  File
"/usr/lib/python2.7/site-packages/vdsm/tool/dump_volume_chains.py", line
26, in #012from vdsm import client#012  File
"/usr/lib/python2.7/site-packages/vdsm/client.py", line 106, in
#012from vdsm.api import vdsmapi#012  File
"/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 29, in
#012from vdsm.common.logutils import Suppressed#012ImportError:
No module named logutils
...
Dec 22 10:40:57 ovirt01 yum[116794]: Installed:
cockpit-dashboard-155-1.el7.centos.x86_64
Dec 22 10:40:57 ovirt01 

Re: [ovirt-users] Ovirt 4.2 and host console

2017-12-23 Thread Gianluca Cecchi
On Fri, Dec 22, 2017 at 10:22 PM, Sandro Bonazzola 
wrote:

>
>
> On 22 Dec 2017 10:12 PM, "Yaniv Kaul"  wrote:
>
>
>
> On Dec 22, 2017 7:33 PM, "Gianluca Cecchi" 
> wrote:
>
> Hello, after upgrading the engine and then a plain CentOS 7.4 host from 4.1
> to 4.2, I see in the host section that if I select the host's row and right
> click, there is a host console entry... that tries to go to the typical
> cockpit port 9090 of node-ng...
> Is this an error, or in 4.2 is host console access available for plain OS
> hosts too?
> In that case, is there any service I have to enable on the host?
> It seems indeed my host is not currently listening on port 9090.
>
>
> Cockpit, plus firewall settings, need to be enabled to get to it.
>
>
> The cockpit service should be up and running after the upgrade; oVirt host
> deploy takes care of it. The firewall is configured by the engine unless you
> disabled firewall config in the host configuration dialog.
>
> Didi, can you help here? Gianluca, can you share host upgrade logs?
>
>
>
Hello,
this is a plain CentOS host, not an ovirt-node-ng one, that was upgraded
from 4.1.7 to 4.2.
So the upgrade path was: put the host into maintenance, yum update,
reboot.

Indeed cockpit was installed as part of the yum update:

Dec 22 10:40:07 Installed: cockpit-bridge-155-1.el7.centos.x86_64
Dec 22 10:40:07 Installed: cockpit-system-155-1.el7.centos.noarch
Dec 22 10:40:40 Installed: cockpit-networkmanager-155-1.el7.centos.noarch
Dec 22 10:40:42 Installed: cockpit-ws-155-1.el7.centos.x86_64
Dec 22 10:40:43 Installed: cockpit-155-1.el7.centos.x86_64
Dec 22 10:40:52 Installed: cockpit-storaged-155-1.el7.centos.noarch
Dec 22 10:40:57 Installed: cockpit-dashboard-155-1.el7.centos.x86_64
Dec 22 10:41:35 Installed:
cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch

I also see that there is a systemd cockpit.service unit that is configured
as static and requires a cockpit.socket unit, which in turn is WantedBy
sockets.target.

But if I run

[root@ovirt01 ~]# remotectl certificate
remotectl: No certificate found in dir: /etc/cockpit/ws-certs.d
[root@ovirt01 ~]#

So it seems that the cockpit.service ExecStartPre has never been run:
ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root
--group=cockpit-ws --selinux-type=etc_t

[root@ovirt01 ~]# systemctl status cockpit.service
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor
preset: disabled)
   Active: inactive (dead)
 Docs: man:cockpit-ws(8)
[root@ovirt01 ~]#
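
Since cockpit.service is socket-activated, my guess is that enabling the
socket is all that's missing; a sketch of what I'd try (the remotectl line
is simply the same command the unit's ExecStartPre runs):

# Enable socket activation; ExecStartPre should then run on first connect.
systemctl enable --now cockpit.socket
# Or generate the certificate by hand first, then start the socket:
/usr/sbin/remotectl certificate --ensure --user=root --group=cockpit-ws --selinux-type=etc_t
systemctl start cockpit.socket
ss -tln | grep 9090   # confirm the host now listens on the cockpit port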

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users