[ovirt-users] Ovirt Host Error while executing action: Cannot add Host. Host with the same address already exists

2017-01-13 Thread Jeramy Johnson
Hey guys, I wanted to know if anyone has ever received this error before: "Error 
while executing action: Cannot add Host. Host with the same address already 
exists", when trying to re-add a host that you deleted? I think the host may be 
only partially deleted, or may still have records somewhere. Let me know if anyone can help. 

Thanks,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM service won't start

2017-01-13 Thread paul.greene.va

Oh, I stumbled onto something relevant.

I noticed that on the host that was working correctly, the ifcfg-enp6s0 
file included a line for "BRIDGE=ovirtmgmt", and the other two didn't 
have that line. When I added that line on the other two hosts and 
restarted networking, I was able to get those hosts into the "Up" status.


That file is autogenerated by VDSM, so I wondered if it would survive a 
reboot. When I rebooted, the line had been removed again by VDSM.


So, I guess the final question is: how do I keep this BRIDGE line from 
being removed across reboots?
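For anyone comparing hosts the same way, the difference boils down to whether the ifcfg file carries the BRIDGE line. A minimal sketch in Python of that check (the device name and file contents below are illustrative examples, not the actual files from these hosts):

```python
# Sketch only: write a sample ifcfg-style file and check it for the
# BRIDGE line, the way one might compare the three hosts above.
# Device name and contents are examples, not the real hosts' files.
import os
import tempfile

sample = "DEVICE=enp6s0\nONBOOT=yes\nBOOTPROTO=dhcp\nBRIDGE=ovirtmgmt\n"

fd, path = tempfile.mkstemp(suffix="-ifcfg-enp6s0")
with os.fdopen(fd, "w") as f:
    f.write(sample)

with open(path) as f:
    has_bridge = any(line.strip() == "BRIDGE=ovirtmgmt" for line in f)

print("bridged" if has_bridge else "not bridged")
os.remove(path)  # clean up the temporary example file
```

Since the thread above establishes that VDSM regenerates these files from its own persisted configuration, a hand-added line is not expected to survive a reboot; that suggests the durable fix is to define the network on the VDSM/engine side rather than editing ifcfg directly.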



On 1/13/2017 2:54 PM, Nir Soffer wrote:

On Fri, Jan 13, 2017 at 9:24 PM, paul.greene.va
 wrote:

Output below ...



On 1/13/2017 1:47 PM, Nir Soffer wrote:

On Fri, Jan 13, 2017 at 5:45 PM, paul.greene.va
 wrote:

All,

I'm having an issue with the vdsmd service refusing to start on a fresh
install of RHEL 7.2, RHEV version 4.0.

It initially came up correctly, and the command "ip a" showed a
"vdsmdummy"
interface and a "ovirtmgmt" interface. However after a couple of reboots,
those interfaces disappeared, and running "systemctl status vdsmd"
generated
the message "Dependency failed for Virtual Desktop Server Manager / Job
vdsmd.service/start failed with result 'dependency'". It didn't say which
dependency, though.

I have 3 hosts; this is happening on 2 of the 3. For some odd
reason, the third host isn't having any problems.

In a Google search I found an instance where system clock timing was out
of
sync, and that messed it up. I checked all three hosts, as well as the
RHEV
manager and they all had chronyd running and the clocks appeared to be in
sync.

After a reboot the virtual interfaces usually initially come up, but go
down
again within a few minutes.

Running journalctl -xe gives these three messages:

"failed to start Virtual Desktop Server Manager network restoration"

"Dependency failed for Virtual Desktop Server Manager" (but it doesn't
say which dependency failed)

"Dependency failed for MOM instance configured for VDSM purposes"
(again, it doesn't say which dependency)

Any suggestions?

Can you share the output of:

systemctl status vdsmd
systemctl status mom
systemctl status libvirtd
journalctl -xe

Nir


Sure, here you go 



[root@rhevh03 vdsm]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: inactive (dead)

Jan 13 12:01:53 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
Server Manager.
Jan 13 12:01:53 rhevh03 systemd[1]: Job vdsmd.service/start failed with
result 'dependency'.
Jan 13 13:51:50 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
Server Manager.
Jan 13 13:51:50 rhevh03 systemd[1]: Job vdsmd.service/start failed with
result 'dependency'.
Jan 13 13:55:15 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
Server Manager.
Jan 13 13:55:15 rhevh03 systemd[1]: Job vdsmd.service/start failed with
result 'dependency'.



[root@rhevh03 vdsm]# systemctl status momd
● momd.service - Memory Overcommitment Manager Daemon
Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
Active: inactive (dead) since Fri 2017-01-13 13:53:09 EST; 2min 26s ago
   Process: 28294 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
/var/run/momd.pid (code=exited, status=0/SUCCESS)
  Main PID: 28298 (code=exited, status=0/SUCCESS)

Jan 13 13:53:09 rhevh03 systemd[1]: Starting Memory Overcommitment Manager
Daemon...
Jan 13 13:53:09 rhevh03 systemd[1]: momd.service: Supervising process 28298
which is not our child. We'll most likely not notice when it exits.
Jan 13 13:53:09 rhevh03 systemd[1]: Started Memory Overcommitment Manager
Daemon.
Jan 13 13:53:09 rhevh03 python[28298]: No worthy mechs found



[root@rhevh03 vdsm]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
preset: enabled)
   Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Fri 2017-01-13 13:50:47 EST; 8min ago
  Docs: man:libvirtd(8)
http://libvirt.org
  Main PID: 27964 (libvirtd)
CGroup: /system.slice/libvirtd.service
└─27964 /usr/sbin/libvirtd --listen

Jan 13 13:50:47 rhevh03 systemd[1]: Starting Virtualization daemon...
Jan 13 13:50:47 rhevh03 systemd[1]: Started Virtualization daemon.
Jan 13 13:53:09 rhevh03 libvirtd[27964]: libvirt version: 2.0.0, package:
10.el7_3.2 (Red Hat, Inc. ,
2016-11-10-04:43:57, x86-034.build.eng.bos.redhat.com)
Jan 13 13:53:09 rhevh03 libvirtd[27964]: hostname: rhevh03
Jan 13 13:53:09 rhevh03 libvirtd[27964]: End of file while reading data:
Input/output error


[root@rhevh03 vdsm]# journalctl -xe
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: File
"/usr/lib/python2.7/site-packages/vdsm/network/configu

Re: [ovirt-users] VDSM service won't start

2017-01-13 Thread paul.greene.va

Output below ...


On 1/13/2017 1:47 PM, Nir Soffer wrote:

On Fri, Jan 13, 2017 at 5:45 PM, paul.greene.va
 wrote:

All,

I'm having an issue with the vdsmd service refusing to start on a fresh
install of RHEL 7.2, RHEV version 4.0.

It initially came up correctly, and the command "ip a" showed a "vdsmdummy"
interface and a "ovirtmgmt" interface. However after a couple of reboots,
those interfaces disappeared, and running "systemctl status vdsmd" generated
the message "Dependency failed for Virtual Desktop Server Manager / Job
vdsmd.service/start failed with result 'dependency'". It didn't say which
dependency, though.

I have 3 hosts; this is happening on 2 of the 3. For some odd
reason, the third host isn't having any problems.

In a Google search I found an instance where system clock timing was out of
sync, and that messed it up. I checked all three hosts, as well as the RHEV
manager and they all had chronyd running and the clocks appeared to be in
sync.

After a reboot the virtual interfaces usually initially come up, but go down
again within a few minutes.

Running journalctl -xe gives these three messages:

"failed to start Virtual Desktop Server Manager network restoration"

"Dependency failed for Virtual Desktop Server Manager" (but it doesn't say
which dependency failed)

"Dependency failed for MOM instance configured for VDSM purposes" (again,
it doesn't say which dependency)

Any suggestions?

Can you share the output of:

systemctl status vdsmd
systemctl status mom
systemctl status libvirtd
journalctl -xe

Nir



Sure, here you go 



[root@rhevh03 vdsm]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; 
vendor preset: enabled)

   Active: inactive (dead)

Jan 13 12:01:53 rhevh03 systemd[1]: Dependency failed for Virtual 
Desktop Server Manager.
Jan 13 12:01:53 rhevh03 systemd[1]: Job vdsmd.service/start failed with 
result 'dependency'.
Jan 13 13:51:50 rhevh03 systemd[1]: Dependency failed for Virtual 
Desktop Server Manager.
Jan 13 13:51:50 rhevh03 systemd[1]: Job vdsmd.service/start failed with 
result 'dependency'.
Jan 13 13:55:15 rhevh03 systemd[1]: Dependency failed for Virtual 
Desktop Server Manager.
Jan 13 13:55:15 rhevh03 systemd[1]: Job vdsmd.service/start failed with 
result 'dependency'.




[root@rhevh03 vdsm]# systemctl status momd
● momd.service - Memory Overcommitment Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor 
preset: disabled)

   Active: inactive (dead) since Fri 2017-01-13 13:53:09 EST; 2min 26s ago
  Process: 28294 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d 
--pid-file /var/run/momd.pid (code=exited, status=0/SUCCESS)

 Main PID: 28298 (code=exited, status=0/SUCCESS)

Jan 13 13:53:09 rhevh03 systemd[1]: Starting Memory Overcommitment 
Manager Daemon...
Jan 13 13:53:09 rhevh03 systemd[1]: momd.service: Supervising process 
28298 which is not our child. We'll most likely not notice when it exits.
Jan 13 13:53:09 rhevh03 systemd[1]: Started Memory Overcommitment 
Manager Daemon.

Jan 13 13:53:09 rhevh03 python[28298]: No worthy mechs found



[root@rhevh03 vdsm]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; 
vendor preset: enabled)

  Drop-In: /etc/systemd/system/libvirtd.service.d
   └─unlimited-core.conf
   Active: active (running) since Fri 2017-01-13 13:50:47 EST; 8min ago
 Docs: man:libvirtd(8)
   http://libvirt.org
 Main PID: 27964 (libvirtd)
   CGroup: /system.slice/libvirtd.service
   └─27964 /usr/sbin/libvirtd --listen

Jan 13 13:50:47 rhevh03 systemd[1]: Starting Virtualization daemon...
Jan 13 13:50:47 rhevh03 systemd[1]: Started Virtualization daemon.
Jan 13 13:53:09 rhevh03 libvirtd[27964]: libvirt version: 2.0.0, 
package: 10.el7_3.2 (Red Hat, Inc. 
, 2016-11-10-04:43:57, 
x86-034.build.eng.bos.redhat.com)

Jan 13 13:53:09 rhevh03 libvirtd[27964]: hostname: rhevh03
Jan 13 13:53:09 rhevh03 libvirtd[27964]: End of file while reading data: 
Input/output error



[root@rhevh03 vdsm]# journalctl -xe
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: File 
"/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", 
line 951, in _exec_ifup
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: _exec_ifup_by_name(iface.name, 
cgroup)
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: File 
"/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", 
line 937, in _exec_ifup_by_name
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: raise 
ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: 
vdsm.network.errors.ConfigNetworkError: (29, 'Determining IP information 
for ovirtmgmt... failed.')

Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: Traceback (most recent call last):
Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: File "/usr/bin/vdsm-tool", 
line 2
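The key line in the traceback above is the ConfigNetworkError: 'Determining IP information for ovirtmgmt... failed.' appears to be the standard initscripts message for a DHCP timeout during ifup, which would fit the bridge coming up briefly and then failing. An illustrative Python sketch of the kind of ifcfg check involved (file contents are examples, not the real host's configuration):

```python
# Sketch only: parse a sample ifcfg-style file and flag a bridge that
# relies on DHCP at ifup time. The contents are illustrative examples.
def parse_ifcfg(text):
    """Parse simple KEY=VALUE lines into a dict (comments/blanks ignored)."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            conf[key] = value
    return conf

sample = "DEVICE=ovirtmgmt\nTYPE=Bridge\nBOOTPROTO=dhcp\nONBOOT=yes\n"
conf = parse_ifcfg(sample)
if conf.get("BOOTPROTO") == "dhcp":
    # ifup waits for a DHCP lease on the bridge; a slow or unreachable
    # DHCP server makes ifup fail with exactly this "Determining IP
    # information... failed" message.
    print("ovirtmgmt waits on DHCP at ifup")
```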

Re: [ovirt-users] VDSM service won't start

2017-01-13 Thread Nir Soffer
On Fri, Jan 13, 2017 at 9:24 PM, paul.greene.va
 wrote:
> Output below ...
>
>
>
> On 1/13/2017 1:47 PM, Nir Soffer wrote:
>>
>> On Fri, Jan 13, 2017 at 5:45 PM, paul.greene.va
>>  wrote:
>>>
>>> All,
>>>
>>> I'm having an issue with the vdsmd service refusing to start on a fresh
>>> install of RHEL 7.2, RHEV version 4.0.
>>>
>>> It initially came up correctly, and the command "ip a" showed a
>>> "vdsmdummy"
>>> interface and a "ovirtmgmt" interface. However after a couple of reboots,
>>> those interfaces disappeared, and running "systemctl status vdsmd"
>>> generated
>>> the message "Dependency failed for Virtual Desktop Server Manager / Job
>>> vdsmd.service/start failed with result 'dependency'". It didn't say which
>>> dependency, though.
>>>
>>> I have 3 hosts; this is happening on 2 of the 3. For some odd
>>> reason, the third host isn't having any problems.
>>>
>>> In a Google search I found an instance where system clock timing was out
>>> of
>>> sync, and that messed it up. I checked all three hosts, as well as the
>>> RHEV
>>> manager and they all had chronyd running and the clocks appeared to be in
>>> sync.
>>>
>>> After a reboot the virtual interfaces usually initially come up, but go
>>> down
>>> again within a few minutes.
>>>
>>> Running journalctl -xe gives these three messages:
>>>
>>> "failed to start Virtual Desktop Server Manager network restoration"
>>>
>>> "Dependency failed for Virtual Desktop Server Manager" (but it doesn't
>>> say which dependency failed)
>>>
>>> "Dependency failed for MOM instance configured for VDSM purposes"
>>> (again, it doesn't say which dependency)
>>>
>>> Any suggestions?
>>
>> Can you share the output of:
>>
>> systemctl status vdsmd
>> systemctl status mom
>> systemctl status libvirtd
>> journalctl -xe
>>
>> Nir
>>
>
> Sure, here you go 
>
>
>
> [root@rhevh03 vdsm]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: enabled)
>Active: inactive (dead)
>
> Jan 13 12:01:53 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
> Server Manager.
> Jan 13 12:01:53 rhevh03 systemd[1]: Job vdsmd.service/start failed with
> result 'dependency'.
> Jan 13 13:51:50 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
> Server Manager.
> Jan 13 13:51:50 rhevh03 systemd[1]: Job vdsmd.service/start failed with
> result 'dependency'.
> Jan 13 13:55:15 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
> Server Manager.
> Jan 13 13:55:15 rhevh03 systemd[1]: Job vdsmd.service/start failed with
> result 'dependency'.
>
>
>
> [root@rhevh03 vdsm]# systemctl status momd
> ● momd.service - Memory Overcommitment Manager Daemon
>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
> preset: disabled)
>Active: inactive (dead) since Fri 2017-01-13 13:53:09 EST; 2min 26s ago
>   Process: 28294 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
> /var/run/momd.pid (code=exited, status=0/SUCCESS)
>  Main PID: 28298 (code=exited, status=0/SUCCESS)
>
> Jan 13 13:53:09 rhevh03 systemd[1]: Starting Memory Overcommitment Manager
> Daemon...
> Jan 13 13:53:09 rhevh03 systemd[1]: momd.service: Supervising process 28298
> which is not our child. We'll most likely not notice when it exits.
> Jan 13 13:53:09 rhevh03 systemd[1]: Started Memory Overcommitment Manager
> Daemon.
> Jan 13 13:53:09 rhevh03 python[28298]: No worthy mechs found
>
>
>
> [root@rhevh03 vdsm]# systemctl status libvirtd
> ● libvirtd.service - Virtualization daemon
>Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
> preset: enabled)
>   Drop-In: /etc/systemd/system/libvirtd.service.d
>└─unlimited-core.conf
>Active: active (running) since Fri 2017-01-13 13:50:47 EST; 8min ago
>  Docs: man:libvirtd(8)
>http://libvirt.org
>  Main PID: 27964 (libvirtd)
>CGroup: /system.slice/libvirtd.service
>└─27964 /usr/sbin/libvirtd --listen
>
> Jan 13 13:50:47 rhevh03 systemd[1]: Starting Virtualization daemon...
> Jan 13 13:50:47 rhevh03 systemd[1]: Started Virtualization daemon.
> Jan 13 13:53:09 rhevh03 libvirtd[27964]: libvirt version: 2.0.0, package:
> 10.el7_3.2 (Red Hat, Inc. ,
> 2016-11-10-04:43:57, x86-034.build.eng.bos.redhat.com)
> Jan 13 13:53:09 rhevh03 libvirtd[27964]: hostname: rhevh03
> Jan 13 13:53:09 rhevh03 libvirtd[27964]: End of file while reading data:
> Input/output error
>
>
> [root@rhevh03 vdsm]# journalctl -xe
> Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line
> 951, in _exec_ifup
> Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: _exec_ifup_by_name(iface.name,
> cgroup)
> Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line
> 937, in _exec_ifup_by_name
> Jan 13 13:55:15 rhevh03 vdsm-tool[28334]: ra

Re: [ovirt-users] VDSM service won't start

2017-01-13 Thread Nir Soffer
On Fri, Jan 13, 2017 at 5:45 PM, paul.greene.va
 wrote:
> All,
>
> I'm having an issue with the vdsmd service refusing to start on a fresh
> install of RHEL 7.2, RHEV version 4.0.
>
> It initially came up correctly, and the command "ip a" showed a "vdsmdummy"
> interface and a "ovirtmgmt" interface. However after a couple of reboots,
> those interfaces disappeared, and running "systemctl status vdsmd" generated
> the message "Dependency failed for Virtual Desktop Server Manager / Job
> vdsmd.service/start failed with result 'dependency'". It didn't say which
> dependency, though.
>
> I have 3 hosts; this is happening on 2 of the 3. For some odd
> reason, the third host isn't having any problems.
>
> In a Google search I found an instance where system clock timing was out of
> sync, and that messed it up. I checked all three hosts, as well as the RHEV
> manager and they all had chronyd running and the clocks appeared to be in
> sync.
>
> After a reboot the virtual interfaces usually initially come up, but go down
> again within a few minutes.
>
> Running journalctl -xe gives these three messages:
>
> "failed to start Virtual Desktop Server Manager network restoration"
>
> "Dependency failed for Virtual Desktop Server Manager" (but it doesn't say
> which dependency failed)
>
> "Dependency failed for MOM instance configured for VDSM purposes" (again,
> it doesn't say which dependency)
>
> Any suggestions?

Can you share the output of:

systemctl status vdsmd
systemctl status mom
systemctl status libvirtd
journalctl -xe

Nir

>
> Paul
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm IOProcessClient WARNING Timeout waiting for communication thread for client

2017-01-13 Thread Nir Soffer
On Fri, Jan 13, 2017 at 7:41 PM, Bill James  wrote:
> Resending without the logs, except vdsm.log, since the list size limit is too small.
>
>
>
> On 1/13/17 8:50 AM, Bill James wrote:
>
> We have an oVirt system with 3 clusters, all running CentOS 7.
> The oVirt engine is running on a separate host:
> ovirt-engine-3.6.4.1-1.el7.centos.noarch.
> 2 of the clusters are running a newer version of oVirt, 3 nodes each:
> ovirt-engine-4.0.3-1.el7.centos.noarch, glusterfs-3.7.16-1.el7.x86_64,
> vdsm-4.18.11-1.el7.centos.x86_64.
> 1 cluster is still running the older version,
> ovirt-engine-3.6.4.1-1.el7.centos.noarch.

Which ioprocess version?

>
> Yes, we are in the process of upgrading the whole system to oVirt 4.0, but
> it takes time.
>
> One of the 2 clusters running oVirt 4 is complaining of timeouts when VDSM
> talks to Gluster. No warnings on the other 2 clusters.
>
>
>
> Thread-720062::DEBUG::2017-01-13
> 07:29:46,814::outOfProcess::87::Storage.oop::(getProcessPool) Creating
> ioprocess /rhev/data-center/mnt/glusterSD/ovirt1-gl.dmz.p
> rod.j2noc.com:_gv1
> Thread-720062::INFO::2017-01-13
> 07:29:46,814::__init__::325::IOProcessClient::(__init__) Starting client
> ioprocess-5874
> Thread-720062::DEBUG::2017-01-13
> 07:29:46,814::__init__::334::IOProcessClient::(_run) Starting ioprocess for
> client ioprocess-5874
> Thread-720062::DEBUG::2017-01-13
> 07:29:46,832::__init__::386::IOProcessClient::(_startCommunication) Starting
> communication thread for client ioprocess-5874
> Thread-720062::WARNING::2017-01-13
> 07:29:46,847::__init__::401::IOProcessClient::(_startCommunication) Timeout
> waiting for communication thread for client ioprocess-5874

This warning is harmless: it means that the ioprocess communication thread
did not start within 1 second.

This probably means that the host is overloaded; typically, new threads start
instantly.

Anyway, I think we are using too short a timeout. Can you open an ioprocess
bug for this?
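The pattern Nir describes can be sketched as follows; this is a simplified illustration of a start-up handshake with a fixed timeout, not the actual ioprocess code (all names are invented):

```python
# Sketch only: a client starts a communication thread and waits a short,
# fixed time for it to signal readiness. On a loaded host the signal can
# arrive late, producing a warning even though the thread runs fine later.
import threading

START_TIMEOUT = 1.0  # seconds, mirroring the ~1s window described above

def start_communication(work):
    started = threading.Event()

    def run():
        started.set()  # signal "communication thread is up"
        work()

    t = threading.Thread(target=run)
    t.daemon = True
    t.start()
    if not started.wait(START_TIMEOUT):
        # The warning path: the thread exists, it just hasn't been
        # scheduled yet, so this is noisy but not fatal.
        print("WARNING: Timeout waiting for communication thread")
    return t

t = start_communication(lambda: None)
t.join()
print("worker finished")
```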

>
>
> [2017-01-12 07:27:58.685680] I [MSGID: 106488]
> [glusterd-handler.c:1533:__glusterd_handle_cli_get_volume] 0-glusterd:
> Received get vol req
> The message "I [MSGID: 106488]
> [glusterd-handler.c:1533:__glusterd_handle_cli_get_volume] 0-glusterd:
> Received get vol req" repeated 31 times between [2017-01-12 07:27:58.685680]
> and [2017-01-12 07:29:46.971939]
>
>
> attached logs: engine.log supervdsm.log vdsm.log
> etc-glusterfs-glusterd.vol.log cli.log
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] PM proxy

2017-01-13 Thread Sandro Bonazzola
On 13 Jan 2017 at 08:17 AM, "Martin Perina"  wrote:

Hi Slava,

do you have at least one other host in the same cluster or DC which
doesn't have connection issues (in status Up or Maintenance)?
If so, please turn on debug logging for the power management part using the
following command:

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 --connect --user=admin@internal

and enter the following at the jboss-cli command prompt:

/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:add
/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:write-attribute(name=level,value=DEBUG)
quit

Afterwards you will see more details in engine.log about why other hosts were
rejected during the fence proxy selection process.

Btw, the above debug log changes are not permanent; they will be reverted on
ovirt-engine restart, or by using the following command:

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 --connect --user=admin@internal '/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:remove'


Regards

Martin Perina



Martin, do you mind creating a wiki page related to debugging and adding the
above procedure?




On Thu, Jan 12, 2017 at 4:42 PM, Slava Bendersky 
wrote:

> Hello Everyone,
> I need help with this error. What could be missing or misconfigured?
>
> 2017-01-12 05:17:31,444 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-38) [] Can not run fence action on host 'hosted_engine_1', no
> suitable proxy host was found
>
> I tried it from a shell on the host and it works fine.
> Right now I'm using the default DC and cluster settings for the PM proxy definition.
> Slava.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM service won't start

2017-01-13 Thread Dominik Holler
On Fri, 13 Jan 2017 10:45:43 -0500
"paul.greene.va"  wrote:

> 
> After a reboot the virtual interfaces usually initially come up, but
> go down again within a few minutes.
> 
> Running journalctl -xe gives these three messages:
> 
> "failed to start Virtual Desktop Server Manager network restoration"
> 
> "Dependency failed for Virtual Desktop Server Manager" (but it
> doesn't say which dependency failed)
> 
> "Dependency failed for MOM instance configured for VDSM purposes"
> (again, it doesn't say which dependency)
> 
> Any suggestions?
> 

Is libvirtd.service running?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] question regarding fencing proxies

2017-01-13 Thread cmc
Hi,

Can someone tell me how the engine decides which power management
proxy/proxies to use (using the default cluster/DC config)? I am using
DRAC 7 as the fence agent in my two-host cluster, and have noticed that
one of the hosts cannot contact the DRAC. My guess is that the engine
is using one host as a power management proxy, and hosts cannot
reach their own DRAC as they are on the same interface and VLAN.

Example scenario:

Engine uses host 2 as power management proxy. It can contact host 1’s
drac, but cannot contact its own drac. In the case of host 2 being
unreachable/kdumping etc, would the engine switch to use host 1 as the
proxy to contact host 2’s drac?

Thanks,

Cam

PS: I'd like to use the APC as an additional fencing agent; each host
has two PSUs connected to two different APCs. Is there a guide on how
to specify two ports on two different PDUs to control power on a host?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VDSM service won't start

2017-01-13 Thread paul.greene.va

All,

I'm having an issue with the vdsmd service refusing to start on a fresh 
install of RHEL 7.2, RHEV version 4.0.


It initially came up correctly, and the command "ip a" showed a 
"vdsmdummy" interface and an "ovirtmgmt" interface. However, after a 
couple of reboots, those interfaces disappeared, and running "systemctl 
status vdsmd" generated the message "Dependency failed for Virtual 
Desktop Server Manager / Job vdsmd.service/start failed with result 
'dependency'". It didn't say which dependency, though.


I have 3 hosts; this is happening on 2 of the 3. For some odd 
reason, the third host isn't having any problems.


In a Google search I found an instance where system clock timing was out 
of sync, and that messed it up. I checked all three hosts, as well as 
the RHEV manager and they all had chronyd running and the clocks 
appeared to be in sync.


After a reboot the virtual interfaces usually initially come up, but go 
down again within a few minutes.


Running journalctl -xe gives these three messages:

"failed to start Virtual Desktop Server Manager network restoration"

"Dependency failed for Virtual Desktop Server Manager" (but it doesn't 
say which dependency failed)


"Dependency failed for MOM instance configured for VDSM purposes"
(again, it doesn't say which dependency)


Any suggestions?

Paul

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] DWH URL in 4.0.6 ??

2017-01-13 Thread Alexander Wels
On Friday, January 13, 2017 9:30:09 AM EST Devin Acosta wrote:
> I upgraded to the latest 4.0.6 and see that the Data Warehouse process is
> running; did they change how you access the GUI for it?
> 
> Going to: https://{fqdn}/ovirt-engine-reports/
> no longer works on any of my deployments.

The DWH reports have not been available in oVirt since 4.0. The DWH process is 
still running because the dashboard uses the collected data, but the 
reports themselves are gone.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt 4 / Migration issue

2017-01-13 Thread Devin Acosta
Just for the record, for anyone else with this issue: I had to remove the
package:

rpm -e vdsm-hook-openstacknet

That resolved my migration issues.
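For context, the traceback in the quoted message below shows the hook reading os.environ[PLUGIN_TYPE_KEY] unconditionally, so any device without OpenStack NIC metadata raises KeyError('plugin_type'). A sketch of a more defensive lookup (only the PLUGIN_TYPE_KEY name comes from the traceback; everything else here is illustrative, not the actual hook code):

```python
# Sketch only: the hook crashes on os.environ[PLUGIN_TYPE_KEY] when the
# key is absent. A defensive variant simply skips devices that carry no
# OpenStack metadata instead of raising KeyError.
import os

PLUGIN_TYPE_KEY = "plugin_type"

def main():
    plugin_type = os.environ.get(PLUGIN_TYPE_KEY)
    if plugin_type is None:
        # No OpenStack metadata for this device: nothing for the hook to do.
        return "skipped"
    return "handled: " + plugin_type

os.environ.pop(PLUGIN_TYPE_KEY, None)  # simulate a plain (non-OpenStack) VM
print(main())
```

Removing the vdsm-hook-openstacknet package, as above, sidesteps the problem entirely when the OpenStack integration isn't needed.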


On Tue, Jan 10, 2017 at 2:46 PM, Devin Acosta 
wrote:

>
> I have a cluster that is running oVirt 4.0.5-2, and I notice in
> vdsm.log that when I try to migrate I keep seeing this error:
>
> Thread-54::ERROR::2017-01-10 16:42:31,465::migration::254::virt.vm::(_recover)
> vmId=`e2390382-ee5b-4552-a980-487143885802`::migration destination error:
> Destination hook failed: Hook Error: ('openstacknet hook: [unexpected
> error]: Traceback (most recent call last):\n  File "/usr/libexec/vdsm/hooks/
> before_device_migrate_destination/50_openstacknet", line 75, in
> \nmain()\n  File "/usr/libexec/vdsm/hooks/
> before_device_migrate_destination/50_openstacknet", line 47, in main\n
>  pluginType = os.environ[PLUGIN_TYPE_KEY]\n  File 
> "/usr/lib64/python2.7/UserDict.py",
> line 23, in __getitem__\nraise KeyError(key)\nKeyError:
> \'plugin_type\'\n\n\n',)
>
> Any idea how to fix this?
>
> --
>
> Devin Acosta
> Red Hat Certified Architect, LinuxStack
> 602-354-1220 <(602)%20354-1220> || de...@linuxguru.co
>



-- 

Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || de...@linuxguru.co
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] DWH URL in 4.0.6 ??

2017-01-13 Thread Devin Acosta
I upgraded to the latest 4.0.6 and see that the Data Warehouse process is
running; did they change how you access the GUI for it?

Going to: https://{fqdn}/ovirt-engine-reports/
no longer works on any of my deployments.


-- 

Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || de...@linuxguru.co
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to delete VM disk

2017-01-13 Thread cmc
Thanks Alexander. When I upgrade, I'll let you know if it doesn't resolve
the issue (and I'm happy to let you know if it does, too). I don't have a
time set for the upgrade at this point, however.

Cheers,

Cam

On Fri, Jan 13, 2017 at 1:36 PM, Alexander Wels  wrote:
> On Thursday, January 12, 2017 2:09:00 PM EST cmc wrote:
>> Hi Alexander,
>>
>> That is correct. When I click remove disk, it gives me a remove disk
>> dialogue, and when I click 'OK' (whether I tick 'remove permanently'
>> or not) it will throw an exception.
>>
>> Thanks,
>>
>> Cam
>>
>
> Hi,
>
> In that case this is highly likely an instance of
> https://bugzilla.redhat.com/show_bug.cgi?id=1391466, where some of the memory
> cleanup code we added was a little too aggressive in cleaning up some of the
> event handlers that were causing memory leaks. 4.0.6 should have that issue fixed.
>
> Basically, when the remove disk dialog pops up, the handlers/memory structures
> were cleaned up when they shouldn't have been, causing the exception you are
> seeing.
>
> Alexander
>
>> On Thu, Jan 12, 2017 at 1:53 PM, Alexander Wels  wrote:
>> > On Friday, December 30, 2016 11:45:20 AM EST cmc wrote:
>> >> Hi Alexander,
>> >>
>> >> Thanks. I've attached the log. Relevant error is the last entry.
>> >>
>> >> Kind regards,
>> >>
>> >> Cam
>> >
>> > Just to be clear on the flow when this occurs, you do the following on a
>> > VM
>> > that is shut down:
>> >
>> > 1. Select the VM in the VM grid.
>> > 2. Click edit and the edit VM dialog pops up.
>> > 3. In the General tab you scroll down a little until you see the Instance
>> > Images widget that has the disk listed. You have 3 options:
>> >   - Edit (edit disk)
>> >   - + (add new row, that will give you the option to attach/create a disk)
>> >   - - (remove disk)
>> >
>> > You click - (remove disk)?
>> > 4. You get the exception?
>> >
>> > Alexander
>> >
>> >> On Wed, Dec 14, 2016 at 3:12 PM, Alexander Wels  wrote:
>> >> > On Wednesday, December 14, 2016 11:51:49 AM EST cmc wrote:
>> >> >> Having some difficulty in getting the permutation string currently, as
>> >> >> I can't get a cache.html file to appear in the Network section of the
>> >> >> debugger, and both browsers I'm using (Chrome and Firefox) do not
>> >> >> print the permutation ID at the bottom of the console output. I'll see
>> >> >> if I can get some more detail on how this works from some searching
>> >> >
>> >> > I improved that; I just haven't updated the wiki. As soon as you install
>> >> > the symbol maps and can recreate the issue, the UI.log should
>> >> > have the unobfuscated stack trace, so you don't have to do all that
>> >> > stuff manually anymore.
>> >> >
>> >> >> On Wed, Dec 14, 2016 at 8:21 AM, Fred Rolland 
>> >
>> > wrote:
>> >> >> > The UI log is obfuscated.
>> >> >> > Can you please follow the instructions in [1] and reproduce, so that
>> >> >> > we get a human-readable log?
>> >> >> >
>> >> >> > Thanks
>> >> >> >
>> >> >> > [1]
>> >> >> > http://www.ovirt.org/develop/developer-guide/engine/engine-debug-obfuscated-ui/
>> >> >> >
>> >> >> > On Tue, Dec 13, 2016 at 7:42 PM, cmc  wrote:
>> >> >> >> Sorry, forgot the version: 4.0.5.5-1.el7.centos
>> >> >> >>
>> >> >> >> On Tue, Dec 13, 2016 at 5:37 PM, cmc  wrote:
>> >> >> >> > On the VM in the list of VMs, by right-clicking on it. It then
>> >> >> >> > gives
>> >> >> >> > you a pop up window to edit the VM, starting in the 'General'
>> >> >> >> > section
>> >> >> >> > (much as when you create a new one)
>> >> >> >> >
>> >> >> >> > Thanks,
>> >> >> >> >
>> >> >> >> > Cam
>> >> >> >> >
>> >> >> >> > On Tue, Dec 13, 2016 at 5:04 PM, Fred Rolland
>> >> >> >> > 
>> >> >> >> >
>> >> >> >> > wrote:
>> >> >> >> >> Hi,
>> >> >> >> >>
>> >> >> >> >> Which version are you using ?
>> >> >> >> >> When you mention "Edit", on which entity is it performed.?
>> >> >> >> >>
>> >> >> >> >> The disks are currently not part of the edit VM window.
>> >> >> >> >>
>> >> >> >> >> Thanks,
>> >> >> >> >> Freddy
>> >> >> >> >>
>> >> >> >> >> On Tue, Dec 13, 2016 at 6:06 PM, cmc  wrote:
>> >> >> >> >>> This VM wasn't running.
>> >> >> >> >>>
>> >> >> >> >>> On Tue, Dec 13, 2016 at 4:02 PM, Elad Ben Aharon
>> >> >> >> >>> 
>> >> >> >> >>>
>> >> >> >> >>> wrote:
>> >> >> >> >>> > In general, in order to delete a disk while it is attached to
>> >> >> >> >>> > a
>> >> >> >> >>> > running
>> >> >> >> >>> > VM,
>> >> >> >> >>> > the disk has to be deactivated (hotunplugged) first so it
>> >> >> >> >>> > won't
>> >> >> >> >>> > be
>> >> >> >> >>> > accessible for read and write from the VM.
>> >> >> >> >>> > In the 'edit' VM prompt there is no option to deactivate the
>> >> >> >> >>> > disk,
>> >> >> >> >>> > it
>> >> >> >> >>> > should
>> >> >> >> >>> > be done from the disks subtab under the virtual machine.
>> >> >> >> >>> >
>> >> >> >> >>> > On Tue, Dec 13, 2016 at 5:33 PM, cmc 
> wrote:
>> >> >> >> >>> >> Actually, I just tried to create a new 

Re: [ovirt-users] Unable to delete VM disk

2017-01-13 Thread Alexander Wels
On Thursday, January 12, 2017 2:09:00 PM EST cmc wrote:
> Hi Alexander,
> 
> That is correct. When I click remove disk, it gives me a remove disk
> dialogue, and when I click 'OK' (whether I tick 'remove permanently'
> or not) it will throw an exception.
> 
> Thanks,
> 
> Cam
> 

Hi,

In that case this is highly likely an instance of
https://bugzilla.redhat.com/show_bug.cgi?id=1391466 where some of the memory
cleanup code we added was a little too aggressive in cleaning up some of the
event handlers that were causing memory leaks. 4.0.6 should have that issue fixed.

Basically, when the remove disk dialog pops up, the handlers/memory structures
were cleaned up when they shouldn't have been, causing the exception you are
seeing.

Alexander

> On Thu, Jan 12, 2017 at 1:53 PM, Alexander Wels  wrote:
> > On Friday, December 30, 2016 11:45:20 AM EST cmc wrote:
> >> Hi Alexander,
> >> 
> >> Thanks. I've attached the log. Relevant error is the last entry.
> >> 
> >> Kind regards,
> >> 
> >> Cam
> > 
> > Just to be clear on the flow when this occurs, you do the following on a
> > VM
> > that is shut down:
> > 
> > 1. Select the VM in the VM grid.
> > 2. Click edit and the edit VM dialog pops up.
> > 3. In the General tab you scroll down a little until you see the instance
> > 
> > Images widget that has the disk listed. You have 3 options:
> >   - Edit (edit disk)
> >   - + (add new row, that will give you the option to attach/create a disk)
> >   - - (remove disk)
> > 
> > You click - (remove disk)?
> > 4. You get the exception?
> > 
> > Alexander
> > 
> >> On Wed, Dec 14, 2016 at 3:12 PM, Alexander Wels  wrote:
> >> > On Wednesday, December 14, 2016 11:51:49 AM EST cmc wrote:
> >> >> Having some difficulty in getting the permutation string currently, as
> >> >> I can't get a cache.html file to appear in the Network section of the
> >> >> debugger, and both browsers I'm using (Chrome and FIrefox) do not
> >> >> print the permutation ID at the bottom of the console output. I'll see
> >> >> if I can get some more detail on how this works from some searching
> >> > 
> >> > I improved that, I just haven't updated the wiki, as soon as you
> >> > install
> >> > the symbol maps, and you can recreate the issue, then the UI.log should
> >> > have the unobfuscated stack trace, so you don't have to do all that
> >> > stuff
> >> > manually anymore.
> >> > 
> >> >> On Wed, Dec 14, 2016 at 8:21 AM, Fred Rolland 
> > 
> > wrote:
> >> >> > The UI log is obfuscated.
> >> >> > Can you please follow instruction on [1] and reproduce so that we
> >> >> > get a
> >> >> > human readable log.
> >> >> > 
> >> >> > Thanks
> >> >> > 
> >> >> > [1]
> >> >> > http://www.ovirt.org/develop/developer-guide/engine/engine-debug-obfuscated-ui/
> >> >> > 
> >> >> > On Tue, Dec 13, 2016 at 7:42 PM, cmc  wrote:
> >> >> >> Sorry, forgot the version: 4.0.5.5-1.el7.centos
> >> >> >> 
> >> >> >> On Tue, Dec 13, 2016 at 5:37 PM, cmc  wrote:
> >> >> >> > On the VM in the list of VMs, by right-clicking on it. It then
> >> >> >> > gives
> >> >> >> > you a pop up window to edit the VM, starting in the 'General'
> >> >> >> > section
> >> >> >> > (much as when you create a new one)
> >> >> >> > 
> >> >> >> > Thanks,
> >> >> >> > 
> >> >> >> > Cam
> >> >> >> > 
> >> >> >> > On Tue, Dec 13, 2016 at 5:04 PM, Fred Rolland
> >> >> >> > 
> >> >> >> > 
> >> >> >> > wrote:
> >> >> >> >> Hi,
> >> >> >> >> 
> >> >> >> >> Which version are you using ?
> >> >> >> >> When you mention "Edit", on which entity is it performed.?
> >> >> >> >> 
> >> >> >> >> The disks are currently not part of the edit VM window.
> >> >> >> >> 
> >> >> >> >> Thanks,
> >> >> >> >> Freddy
> >> >> >> >> 
> >> >> >> >> On Tue, Dec 13, 2016 at 6:06 PM, cmc  wrote:
> >> >> >> >>> This VM wasn't running.
> >> >> >> >>> 
> >> >> >> >>> On Tue, Dec 13, 2016 at 4:02 PM, Elad Ben Aharon
> >> >> >> >>> 
> >> >> >> >>> 
> >> >> >> >>> wrote:
> >> >> >> >>> > In general, in order to delete a disk while it is attached to
> >> >> >> >>> > a
> >> >> >> >>> > running
> >> >> >> >>> > VM,
> >> >> >> >>> > the disk has to be deactivated (hotunplugged) first so it
> >> >> >> >>> > won't
> >> >> >> >>> > be
> >> >> >> >>> > accessible for read and write from the VM.
> >> >> >> >>> > In the 'edit' VM prompt there is no option to deactivate the
> >> >> >> >>> > disk,
> >> >> >> >>> > it
> >> >> >> >>> > should
> >> >> >> >>> > be done from the disks subtab under the virtual machine.
> >> >> >> >>> > 
> >> >> >> >>> > On Tue, Dec 13, 2016 at 5:33 PM, cmc  
wrote:
> >> >> >> >>> >> Actually, I just tried to create a new disk via the 'Edit'
> >> >> >> >>> >> menu
> >> >> >> >>> >> once
> >> >> >> >>> >> I'd deleted it from the 'Disks' tab, and it threw an
> >> >> >> >>> >> exception.
> >> >> >> >>> >> 
> >> >> >> >>> >> Attached is the console log.
> >> >> >> >>> >> 
> >> >> >> >>> >> On Tue, Dec 13, 2016 at 3:24 PM, cmc  
wrote:
> >> >> >> >>> >> > Hi Elad,
> >> >> >> >>> >> > 
> >> >>

Re: [ovirt-users] Hosted Engine: Add another host

2017-01-13 Thread Simone Tiraboschi
On Fri, Jan 13, 2017 at 2:19 PM, gregor  wrote:

> Hi,
>
> finally these are my steps to add a host to my cluster:
>
>
Yes, this is the expected flow.
If you are planning to add a lot of hosts, you could also consider
evaluating the Foreman integration:
http://www.ovirt.org/develop/release-management/features/foreman/foremanintegration/


> - Install CentOS 7 minimal
> - Configure NTP (otherwise the installer stopped with an error)
>

Could you please provide more details about this?


> - Configure Network and DNS
> - Add the oVirt repo: yum -y install
> http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
> - Now add the host from the web management
>
> Thanks for your help and links to the resources.
>
> greetings
> gregor
>
> On 12/01/17 09:40, Simone Tiraboschi wrote:
> >
> >
> > On Wed, Jan 11, 2017 at 9:29 PM, Gianluca Cecchi
> > mailto:gianluca.cec...@gmail.com>> wrote:
> >
> > On Wed, Jan 11, 2017 at 5:50 PM, gregor  > > wrote:
> >
> > Hi,
> >
> > I have a hosted-engine setup on one host. Today I try to add
> another
> > host from the UI but this gives me some errors without detail.
> >
> > Is there a way to add a new host from the shell?
> >
> >
> > Deploying additional hosted-engine hosts from the shell has been
> > deprecated; deploying from the web UI is the recommended way.
> > Could you please check host-deploy logs on the engine VM to check what
> > went wrong?
> >
> >
> > Not a node [1] because I plan to use docker as well on the host,
> > it's a
> > test environment.
> > Or is it better to install the host as node?
> >
> > cheers
> > gregor
> >
> > [1] http://www.ovirt.org/node/
> >
> >
> > It would be useful to understand the errors you get in web ui,
> > because they could be similar also in command line deploy
> >
> > I think you can follow what happened in 3.6 as described here:
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment.html
> >
> > For oVirt and CentOS I think that these below should be the commands
> > to run on your second host (see the other details explained in the
> > web page above, that could be different in some way in 4.0 vs 3.6)
> >
> > # yum install
> > http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
> > 
> > # yum install ovirt-hosted-engine-setup
> > # hosted-engine --deploy
> >
> > HIH,
> > Gianluca
> >
> > ___
> > Users mailing list
> > Users@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> >
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine: Add another host

2017-01-13 Thread gregor
Hi,

finally these are my steps to add a host to my cluster:

- Install CentOS 7 minimal
- Configure NTP (otherwise the installer stopped with an error)
- Configure Network and DNS
- Add the oVirt repo: yum -y install
http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
- Now add the host from the web management

Thanks for your help and links to the resources.

greetings
gregor

On 12/01/17 09:40, Simone Tiraboschi wrote:
> 
> 
> On Wed, Jan 11, 2017 at 9:29 PM, Gianluca Cecchi
> mailto:gianluca.cec...@gmail.com>> wrote:
> 
> On Wed, Jan 11, 2017 at 5:50 PM, gregor  > wrote:
> 
> Hi,
> 
> I have a hosted-engine setup on one host. Today I try to add another
> host from the UI but this gives me some errors without detail.
> 
> Is there a way to add a new host from the shell?
> 
> 
> Deploying additional hosted-engine hosts from the shell has been
> deprecated; deploying from the web UI is the recommended way.
> Could you please check host-deploy logs on the engine VM to check what
> went wrong?
>  
> 
> Not a node [1] because I plan to use docker as well on the host,
> it's a
> test environment.
> Or is it better to install the host as node?
> 
> cheers
> gregor
> 
> [1] http://www.ovirt.org/node/
> 
> 
> It would be useful to understand the errors you get in web ui,
> because they could be similar also in command line deploy
> 
> I think you can follow what happened in 3.6 as described here:
> 
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment.html
> 
> 
> 
> For oVirt and CentOS I think that these below should be the commands
> to run on your second host (see the other details explained in the
> web page above, that could be different in some way in 4.0 vs 3.6)
> 
> # yum install
> http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
> 
> # yum install ovirt-hosted-engine-setup
> # hosted-engine --deploy
> 
> HIH,
> Gianluca
> 
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware upgrade Ovirt Engine

2017-01-13 Thread Roy Golan
On Jan 13, 2017 11:47 AM, "nicola gentile" 
wrote:

Hi,
My environment consists of:
1 engine
2 host
1 storage

I need to do a hardware upgrade for the engine manager only.

How do I proceed?

Best regards,

Nicola
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Use the engine backup tool [1]. That will take care of the DB and
configuration.

Make sure the status of your OVF_STORE disks is OK, with no warning events
on them (this means you have a cold backup of your VM configuration).

You can then just start the upgrade; your VMs will continue running with no
interference.

[1]
http://www.ovirt.org/develop/release-management/features/engine/engine-backup/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Request for oVirt Ansible modules testing feedback

2017-01-13 Thread Nathanaël Blanchet



On 10/01/2017 at 16:41, Yaniv Kaul wrote:



On Fri, Jan 6, 2017 at 3:51 PM, Nathanaël Blanchet wrote:


There was one last error in the script:

snap_service = snaps_service.snapshot_service(snap.id) instead of
snap_service = snaps_service.snap_service(snap.id)

For those who are interested in using a full remove_vm_snapshot
working script:


Perhaps worth contributing to the examples[1] of the SDK?
Y.

[1] https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples

Done, you're right, this is the best way to do!



import logging

import ovirtsdk4 as sdk

# Create the connection to the server:
connection = sdk.Connection(
  url='https://engine/ovirt-engine/api',
  username='admin@internal',
  password='passwd',
#  ca_file='ca.pem',
  insecure=True,
  debug=True,
  log=logging.getLogger(),
)

# Locate the virtual machines service and use it to find the virtual
# machine:
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# Locate the service that manages the snapshots of the virtual machine:
vm_service = vms_service.vm_service(vm.id)
snaps_service = vm_service.snapshots_service()
snaps = snaps_service.list()
snap = [s for s in snaps if s.description == 'My snapshot2'][0]

# Remove the snapshot:
snap_service = snaps_service.snapshot_service(snap.id)
snap_service.remove()

# Close the connection to the server:
connection.close()
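Building on the script above, a hedged sketch of how one might select only snapshots older than N days before calling remove(); only the pure filtering logic is shown runnable, and the SimpleNamespace objects are hypothetical stand-ins for the SDK's snapshot type (which exposes `description` and `date` attributes), not real SDK calls:

```python
from datetime import datetime, timedelta
from types import SimpleNamespace

def snapshots_older_than(snaps, days, now=None):
    """Return the snapshots whose creation date is more than `days` days ago."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [s for s in snaps if s.date < cutoff]

# Hypothetical stand-ins mimicking SDK snapshot objects:
now = datetime(2017, 1, 13)
snaps = [
    SimpleNamespace(description='old nightly', date=datetime(2016, 11, 1)),
    SimpleNamespace(description='My snapshot2', date=datetime(2017, 1, 10)),
]
old = snapshots_older_than(snaps, 30, now=now)
print([s.description for s in old])  # → ['old nightly']
```

In a real script you would feed it `snaps_service.list()` instead of the stand-ins and then loop over the result calling `snaps_service.snapshot_service(s.id).remove()`.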


On 06/01/2017 at 14:44, Nathanaël Blanchet wrote:



On 06/01/2017 at 13:39, Juan Hernández wrote:

On 01/06/2017 12:20 PM, Nathanaël Blanchet wrote:


On 04/01/2017 at 18:55, Juan Hernández wrote:

On 01/04/2017 05:38 PM, Nathanaël Blanchet wrote:

On 04/01/2017 at 15:41, Juan Hernández wrote:

On 01/04/2017 12:30 PM, Yaniv Kaul wrote:

On Wed, Jan 4, 2017 at 1:04 PM, Nicolas Ecarnot <nico...@ecarnot.net> wrote:

    Hello,

    On 04/01/2017 at 11:49, Nathanaël Blanchet wrote:

        On 04/01/2017 at 10:09, Andrea Ghelardi wrote:

            Personally I don't think ansible and ovirt-shell are
            mutually exclusive.

            Those who are in ansible and devops realms are not really
            scared by making python/ansible work with ovirt.

            From what I gather, playbooks are quite a de-facto
            pre-requisite to build up a real SaaC "Software as a Code"
            environment.

            On the other hand, ovirt-shell can and is a fast/easy way
            to perform "normal daily tasks".

        totally agree but ovirt-shell is deprecated in 4.1 and will be
        removed in 4.2. Ansible or sdk4 are proposed as an alternative.

    Could someone point me to an URL where sdk4 is fully documented, as
    I have to get ready for ovirt-shell deprecation?

The Rest API is partially documented under https:///api/model.
It's not complete yet. All new features in 4.0 are documented and we are
working on the 'older' features now. (contributions are welcome!)

    I'm sure no one at Redhat

Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-13 Thread Mark Greenall
Just been catching up with all the threads and I saw mention of some 
iscsid.conf settings which reminded me we also changed some of those from 
default as per the Dell Optimizing SAN Environment for Linux Guide previously 
mentioned.

Changed from default in /etc/iscsi/iscsid.conf
node.session.initial_login_retry_max = 12
node.session.cmds_max = 1024
node.session.queue_depth = 128
node.startup = manual
node.session.iscsi.FastAbort = No
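A quick way to check whether a host's iscsid.conf has drifted from the values above is a small parser over the `key = value` format. This is a sketch: the `sample` string below is illustrative only; in practice you would read the real file, e.g. `parse_iscsid_conf(open('/etc/iscsi/iscsid.conf').read())`.

```python
def parse_iscsid_conf(text):
    """Parse iscsid.conf-style 'key = value' lines, skipping comments/blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        settings[key.strip()] = value.strip()
    return settings

# The values changed from default, per the list above:
desired = {
    'node.session.initial_login_retry_max': '12',
    'node.session.cmds_max': '1024',
    'node.session.queue_depth': '128',
    'node.startup': 'manual',
    'node.session.iscsi.FastAbort': 'No',
}

# Illustrative file contents; one setting commented out, two missing:
sample = """\
node.session.cmds_max = 1024
node.startup = manual
#node.session.queue_depth = 128
"""
current = parse_iscsid_conf(sample)
drift = {k: v for k, v in desired.items() if current.get(k) != v}
print(sorted(drift))  # settings missing, commented out, or not at the desired value
```

Running the same check on each host makes it easy to spot the one node whose config was never updated.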

As mentioned by a couple of people, I do just hope this is a case of an
optimization conflict between Ovirt and Equallogic. I just don't understand why
every now and again a host will come up and stay up. In the Ovirt Equallogic
cluster I have currently battled to get three of the hosts up (and running
guests); I am left with the fourth host, which I'm using for this
investigation, and it just refuses to stay up. It may not be specifically
related to Ovirt 4.x, but I do know we never used to have this much of a battle
getting nodes online. I'm quite happy to change settings on this one host but
can't make cluster-wide changes, as they would likely bring all the guests down.

As some added information, here are the iSCSI connection details for one of the
storage domains. As mentioned, we are using the 2 x 10Gb iSCSI HBAs in an LACP
group in Ovirt and Cisco. Hence we see a login from the same source address
(but two different interfaces) to the same (single) persistent address, which is
the controller's virtual group address. The Current Portal addresses are the
Equallogic active controller's eth0 and eth1 addresses.

Target: 
iqn.2001-05.com.equallogic:4-42a846-654479033-f9888b77feb584ec-lnd-ion-db-tprm-dstore01
 (non-flash)
Current Portal: 10.100.214.76:3260,1
Persistent Portal: 10.100.214.77:3260,1
**
Interface:
**
Iface Name: bond1.10
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:a53470a0ae32
Iface IPaddress: 10.100.214.59
Iface HWaddress: 
Iface Netdev: uk1iscsivlan10
SID: 95
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 10.100.214.75:3260,1
Persistent Portal: 10.100.214.77:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:a53470a0ae32
Iface IPaddress: 10.100.214.59
Iface HWaddress: 
Iface Netdev: 
SID: 96
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

Thanks,
Mark
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hardware upgrade Ovirt Engine

2017-01-13 Thread nicola gentile
Hi,
My environment consists of:
1 engine
2 host
1 storage

I need to do a hardware upgrade for the engine manager only.

How do I proceed?

Best regards,

Nicola
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

2017-01-13 Thread Marcin Mirecki
Please push the patch into: https://gerrit.ovirt.org/ovirt-provider-ovn
(let me know if you need some directions)



- Original Message -
> From: "Sverker Abrahamsson" 
> To: "Marcin Mirecki" 
> Cc: "Ovirt Users" 
> Sent: Monday, January 9, 2017 1:45:37 PM
> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network
> 
> Ok, found it. The issue is right here:
> 
>  
>  
>  
>  
>  
>  
>  
>  
>  
>   interfaceid="912cba79-982e-4a87-868e-241fedccb59a" />
>  
>  
> 
> There are two virtualport elements, the first without an id and the second
> with one. On h2 I had fixed this, which was the patch I posted earlier,
> although I switched back to using br-int after understanding that was the
> correct way. When that hook was copied to h1, the port gets attached fine.
> 
> Patch with updated testcase attached.
> 
> /Sverker
> 
> 
> Den 2017-01-09 kl. 10:41, skrev Sverker Abrahamsson:
> > This is the content of vdsm.log on h1 at this time:
> >
> > 2017-01-06 20:54:12,636 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> > call VM.create succeeded in 0.01 seconds (__init__:515)
> > 2017-01-06 20:54:12,636 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') VM wrapper has started
> > (vm:1901)
> > 2017-01-06 20:54:12,636 INFO  (vm/6dd5291e) [vds] prepared volume
> > path:
> > /rhev/data-center/mnt/h2-int.limetransit.com:_var_lib_exports_iso/1d49c4bc-0fec-4503-a583-d476fa3a370d/images/----/CentOS-7-x86_64-NetInstall-1611.iso
> > (clientIF:374)
> > 2017-01-06 20:54:12,743 INFO  (vm/6dd5291e) [root]  (hooks:108)
> > 2017-01-06 20:54:12,847 INFO  (vm/6dd5291e) [root]  (hooks:108)
> > 2017-01-06 20:54:12,863 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645')  > encoding='UTF-8'?>
> > http://ovirt.org/vm/tune/1.0"; type="kvm">
> > CentOS7_3
> > 6dd5291e-6556-4d29-8b4e-ea896e627645
> > 1048576
> > 1048576
> > 4294967296
> > 16
> > 
> > 
> > 
> >  > path="/var/lib/libvirt/qemu/channels/6dd5291e-6556-4d29-8b4e-ea896e627645.com.redhat.rhevm.vdsm"
> > />
> > 
> > 
> > 
> >  > path="/var/lib/libvirt/qemu/channels/6dd5291e-6556-4d29-8b4e-ea896e627645.org.qemu.guest_agent.0"
> > />
> > 
> > 
> > 
> > 
> > 
> > 
> >  > vram="32768" />
> > 
> >  > passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> >  > interfaceid="912cba79-982e-4a87-868e-241fedccb59a" />
> > 
> > 
> > 
> >  > file="/rhev/data-center/mnt/h2-int.limetransit.com:_var_lib_exports_iso/1d49c4bc-0fec-4503-a583-d476fa3a370d/images/----/CentOS-7-x86_64-NetInstall-1611.iso"
> > startupPolicy="optional" />
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > hvm
> > 
> > 
> > 
> > 
> > 
> > oVirt
> > oVirt Node
> > 7-3.1611.el7.centos
> >  > name="serial">62f1adff-b29e-4a7c-abba-c2c4c73248c6
> >  > name="uuid">6dd5291e-6556-4d29-8b4e-ea896e627645
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > SandyBridge
> > 
> > 
> > 
> > 
> > 
> > 
> >  (vm:1988)
> > 2017-01-06 20:54:13,046 INFO  (libvirt/events) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') CPU running: onResume
> > (vm:4863)
> > 2017-01-06 20:54:13,058 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') Starting connection
> > (guestagent:245)
> > 2017-01-06 20:54:13,060 INFO  (vm/6dd5291e) [virt.vm]
> > (vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') CPU running: domain
> > initialization (vm:4863)
> > 2017-01-06 20:54:15,154 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> > call Host.getVMFullList succeeded in 0.01 seconds (__init__:515)
> > 2017-01-06 20:54:17,571 INFO  (periodic/2) [dispatcher] Run and
> > protect: getVolumeSize(sdUUID=u'2ee54fb8-48f2-4576-8cff-f2346504b08b',
> > spUUID=u'584ebd64-0268-0193-025b-038e',
> > imgUUID=u'5a3aae57-ffe0-4a3b-aa87-8461669db7f9',
> > volUUID=u'b6a88789-fcb1-4d3e-911b-2a4d3b6c69c7', options=None)
> > (logUtils:49)
> > 2017-01-06 20:54:17,573 INFO  (periodic/2) [dispatche

Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-13 Thread Mark Greenall
Hi Nir,

Thanks very much for your feedback. It's really useful information. I keep my 
fingers crossed it leads to a solution for us.

All the settings we currently have were to try and optimise the Equallogic for 
Linux and Ovirt.

The multipath config settings came from this Dell Forum thread re: getting 
EqualLogic to work with Ovirt 
http://en.community.dell.com/support-forums/storage/f/3775/t/19529606

The udev settings were from the Dell Optimizing SAN Environment for Linux Guide
here:
http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/20371245/download

Perhaps some of the settings are now conflicting with Ovirt best practice as 
you optimise the releases.

As requested, here is the output of multipath -ll

[root@uk1-ion-ovm-08 rules.d]# multipath -ll
364842a3403798409cf7d555c6b8bb82e dm-237 EQLOGIC ,100E-00
size=1.5T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 48:0:0:0  sdan 66:112 active ready running
  `- 49:0:0:0  sdao 66:128 active ready running
364842a34037924a7bf7d25416b8be891 dm-212 EQLOGIC ,100E-00
size=345G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 42:0:0:0  sdah 66:16  active ready running
  `- 43:0:0:0  sdai 66:32  active ready running
364842a340379c497f47ee5fe6c8b9846 dm-459 EQLOGIC ,100E-00
size=175G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 86:0:0:0  sdbz 68:208 active ready running
  `- 87:0:0:0  sdca 68:224 active ready running
364842a34037944f2807fe5d76d8b1842 dm-526 EQLOGIC ,100E-00
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 96:0:0:0  sdcj 69:112 active ready running
  `- 97:0:0:0  sdcl 69:144 active ready running
364842a3403798426d37e05bc6c8b6843 dm-420 EQLOGIC ,100E-00
size=250G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 82:0:0:0  sdbu 68:128 active ready running
  `- 83:0:0:0  sdbw 68:160 active ready running
364842a340379449fbf7dc5406b8b2818 dm-199 EQLOGIC ,100E-00
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 38:0:0:0  sdad 65:208 active ready running
  `- 39:0:0:0  sdae 65:224 active ready running
364842a34037984543c7d35a86a8bc8ee dm-172 EQLOGIC ,100E-00
size=670G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 36:0:0:0  sdaa 65:160 active ready running
  `- 37:0:0:0  sdac 65:192 active ready running
364842a340379e4303c7dd5a76a8bd8b4 dm-140 EQLOGIC ,100E-00
size=1.5T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 33:0:0:0  sdx  65:112 active ready running
  `- 32:0:0:0  sdy  65:128 active ready running
364842a340379b44c7c7ed53b6c8ba8c0 dm-359 EQLOGIC ,100E-00
size=300G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 69:0:0:0  sdbi 67:192 active ready running
  `- 68:0:0:0  sdbh 67:176 active ready running
364842a3403790415d37ed5bb6c8b68db dm-409 EQLOGIC ,100E-00
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 80:0:0:0  sdbt 68:112 active ready running
  `- 81:0:0:0  sdbv 68:144 active ready running
364842a34037964f7807f15d86d8b8860 dm-527 EQLOGIC ,100E-00
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 98:0:0:0  sdck 69:128 active ready running
  `- 99:0:0:0  sdcm 69:160 active ready running
364842a34037944aebf7d85416b8ba895 dm-226 EQLOGIC ,100E-00
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 46:0:0:0  sdal 66:80  active ready running
  `- 47:0:0:0  sdam 66:96  active ready running
364842a340379f44f7c7e053c6c8b98d2 dm-360 EQLOGIC ,100E-00
size=450G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 70:0:0:0  sdbj 67:208 active ready running
  `- 71:0:0:0  sdbk 67:224 active ready running
364842a34037924276e7e051e6c8b084f dm-308 EQLOGIC ,100E-00
size=120G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 61:0:0:0  sdba 67:64  active ready running
  `- 60:0:0:0  sdaz 67:48  active ready running
364842a34037994b93b7d85a66a8b789a dm-37 EQLOGIC ,100E-00
size=270G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 20:0:0:0  sdl  8:176  active ready running
  `- 21:0:0:0  sdm  8:192  active ready running
364842a340379348d6e7e351e6c8b4865 dm-319 EQLOGIC ,100E-00
size=310G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 62:0:0:0  sdbb 67:80  active ready running
  `- 63:0:0:0  sdbc 67:96  active ready running
364842a34037994cd3b7db5a66a8bc8ff dm-70 EQLOGIC ,100E-00
size=270G features
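Output like the above (truncated here) is tedious to eyeball across dozens of LUNs. A hedged sketch of a sanity check that parses `multipath -ll` text and counts the listed paths per device, so any Equallogic LUN missing one of its two expected paths stands out; the regexes are assumptions based on the output format shown above:

```python
import re

def count_paths(multipath_ll):
    """Map each multipath WWID in `multipath -ll` output to its path count."""
    counts = {}
    current = None
    for line in multipath_ll.splitlines():
        # Device header lines look like: "3648...b82e dm-237 EQLOGIC ,100E-00"
        header = re.match(r'^(\S+)\s+dm-\d+\s', line)
        if header:
            current = header.group(1)
            counts[current] = 0
        # Path lines carry an H:C:T:L tuple followed by an sdX device name
        elif current and re.search(r'\d+:\d+:\d+:\d+\s+sd\w+', line):
            counts[current] += 1
    return counts

# One device block copied from the output above:
sample = """\
364842a3403798409cf7d555c6b8bb82e dm-237 EQLOGIC ,100E-00
size=1.5T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 48:0:0:0  sdan 66:112 active ready running
  `- 49:0:0:0  sdao 66:128 active ready running
"""
suspect = {wwid: n for wwid, n in count_paths(sample).items() if n < 2}
print(suspect)  # → {} (this LUN has both expected paths)
```

Feeding it the full `multipath -ll` output from each host would quickly show whether the host that refuses to stay up is losing paths.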

Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-13 Thread Nicolas Ecarnot

Hi Nir,

On 13/01/2017 at 00:10, Nir Soffer wrote:

On Thu, Jan 12, 2017 at 6:01 PM, Nicolas Ecarnot  wrote:

Hi,

As we are using very similar hardware and usage to Mark's (Dell PowerEdge
hosts, Dell Equallogic SAN, iSCSI, and tons of LUNs for all those VMs), I'm
jumping into this thread.

Can you share your multipath.conf that works with Dell Equallogic SAN?


Could you explain how this would be relevant?
Owners of Dell Equallogic PS6xxx SANs are pointing their oVirt hosts 
toward *ONE* ip.


I agree with Yaniv : this is an issue that is not related to 4.xx, but 
seems to never have been taken into account in oVirt.


--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users