Re: [ovirt-users] Suddenly all VM's down including HostedEngine & NFSshares (except HE) unmounted

2016-08-21 Thread Matt .
Some extra info:

I see a very high load on all hosts without any VM started:


top - 02:36:36 up 56 min,  1 user,  load average: 9.95, 8.11, 7.67
Tasks: 247 total,   1 running, 246 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.2 us,  0.8 sy,  0.0 ni, 97.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32780472 total, 31680580 free,   676704 used,   423188 buff/cache
KiB Swap: 25165820 total, 25165820 free,        0 used. 31827176 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  951 root      15  -5 1371512  35676   8648 S   4.7  0.1   0:48.57 supervdsmServer
  953 vdsm      20   0 5713492  52204   6496 S   4.7  0.2   1:00.16 ovirt-ha-broker
 4949 vdsm       0 -20 5148232 124364  12144 S   2.7  0.4   2:33.33 vdsm
 4952 vdsm      20   0  599528  21272   4932 S   0.7  0.1   0:16.44 python
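
A load average near 10 while the CPU sits 97.9% idle usually means
tasks stuck in uninterruptible sleep (state D), blocked on I/O
(typically dead NFS mounts) rather than doing real work. A quick check,
assuming nothing beyond standard procps:

# list tasks in uninterruptible sleep and what they are waiting on
ps axo state,pid,wchan:32,comm | awk '$1 == "D"'

If vdsm or ioprocess threads show up there, the load is blocked
storage I/O, not CPU.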


The logs also show this:

periodic/91::DEBUG::2016-08-22
02:33:29,621::task::597::Storage.TaskManager.Task::(_updateState)
Task=`65f12a56-dab5-4a19-a9ef-967e4b617087`::moving from state
preparing -> state finished
periodic/91::DEBUG::2016-08-22
02:33:29,621::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
periodic/91::DEBUG::2016-08-22
02:33:29,622::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
periodic/91::DEBUG::2016-08-22
02:33:29,622::task::995::Storage.TaskManager.Task::(_decref)
Task=`65f12a56-dab5-4a19-a9ef-967e4b617087`::ref 0 aborting False
periodic/89::ERROR::2016-08-22
02:33:29,636::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
Connection closed: Connection timed out
periodic/89::ERROR::2016-08-22
02:33:29,637::api::253::root::(_getHaInfo) failed to retrieve Hosted
Engine HA info
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 232,
in _getHaInfo
stats = instance.get_all_stats()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 103, in get_all_stats
self._configure_broker_conn(broker)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
dom_type=dom_type)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 176, in set_storage_domain
.format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options
{'dom_type': 'nfs3', 'sd_uuid':
'4093ad17-bef5-4e4b-9a16-259a98e20321'}: Connection timed out
periodic/89::DEBUG::2016-08-22
02:33:29,638::executor::182::Executor::(_run) Worker was discarded
periodic/86::WARNING::2016-08-22
02:33:30,936::periodic::269::virt.periodic.VmDispatcher::(__call__)
could not run  on
['5576ec24-112e-4995-89f8-57e40c43cc5a']
periodic/90::WARNING::2016-08-22
02:33:32,937::periodic::269::virt.periodic.VmDispatcher::(__call__)
could not run  on
['5576ec24-112e-4995-89f8-57e40c43cc5a']
Reactor thread::INFO::2016-08-22
02:33:33,234::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from :::172.16.30.11:56176
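
The RequestError above is the HA broker timing out against the
hosted-engine storage domain (sd_uuid 4093ad17-bef5-4e4b-9a16-259a98e20321).
A first check, sketched under the usual /rhev layout (the exact mount
directory name depends on the NFS export path):

# is the HE domain still listed as mounted?
grep /rhev/data-center/mnt /proc/mounts
# and does the domain directory answer without hanging? (10s cap)
timeout 10 ls /rhev/data-center/mnt/*/4093ad17-bef5-4e4b-9a16-259a98e20321

If the ls hits the timeout, the server side of that mount is gone even
though the mount entry is still present.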


The strange thing is that I can mount the NFS share my VMs and the HE
are on by hand without any issue. But somehow vdsmd fails, the /rhev
mounts that oVirt creates die, and a df -h times out.


I'm very curious what is going on here.
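
Since a plain df -h walks every mount and hangs on the first dead one,
checking the NFS mounts one by one with a timeout narrows down which
mount is actually broken. A rough sketch, with nothing oVirt-specific
assumed:

# stat each NFS mount with a 5s cap; dead ones print TIMEOUT
for m in $(awk '$3 ~ /^nfs/ {print $2}' /proc/mounts); do
    timeout 5 stat -f "$m" >/dev/null 2>&1 && echo "OK      $m" || echo "TIMEOUT $m"
done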

2016-08-22 2:01 GMT+02:00 Matt . :
> I see some very strange behaviour: hosts are randomly rebooting, and I
> get the feeling they crash on sanlock, as that is the last logline
> in /var/log/messages.
>
> I saw this happening on the latest kernel (after a yum update) and on
> different hardware; the filer is OK!
>
>
>
> Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5613
> [5294]: hosted-e close_task_aio 2 0x7f3728000960 busy
> Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5613
> [5294]: hosted-e close_task_aio 3 0x7f37280009b0 busy
> Aug 22 01:35:27 host-01 ovirt-ha-broker:
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> established
> Aug 22 01:35:27 host-01 journal: vdsm
> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Connection
> closed: Connection timed out
> Aug 22 01:35:27 host-01 journal: vdsm root ERROR failed to retrieve
> Hosted Engine HA info#012Traceback (most recent call last):#012  File
> "/usr/lib/python2.$
> Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5614
> [5901]: s2 delta_renew read rv -2 offset 0
> /rhev/data-center/mnt/flr-01
> Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5614
> [5901]: s2 renewal error -2 delta_length 10 last_success 5518
> Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5614
> [1024]: s2 kill 6871 sig 15 count 17
> Aug 22 01:35:28 host-01 wdmd[988]: test failed rem 24 now 5614 ping
> 5568 close 5578 renewal 5518 expire 5598 client 1024
> sanlock_4093ad17-bef5-4e4b-9a16-259$
> Aug 22 01:35:28 host-01 sanlock[1024]: 2016-08-

Re: [ovirt-users] Suddenly all VM's down including HostedEngine & NFSshares (except HE) unmounted

2016-08-21 Thread Matt .
I see some very strange behaviour: hosts are randomly rebooting, and I
get the feeling they crash on sanlock, as that is the last logline
in /var/log/messages.

I saw this happening on the latest kernel (after a yum update) and on
different hardware; the filer is OK!



Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5613
[5294]: hosted-e close_task_aio 2 0x7f3728000960 busy
Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5613
[5294]: hosted-e close_task_aio 3 0x7f37280009b0 busy
Aug 22 01:35:27 host-01 ovirt-ha-broker:
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
established
Aug 22 01:35:27 host-01 journal: vdsm
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Connection
closed: Connection timed out
Aug 22 01:35:27 host-01 journal: vdsm root ERROR failed to retrieve
Hosted Engine HA info#012Traceback (most recent call last):#012  File
"/usr/lib/python2.$
Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5614
[5901]: s2 delta_renew read rv -2 offset 0
/rhev/data-center/mnt/flr-01
Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5614
[5901]: s2 renewal error -2 delta_length 10 last_success 5518
Aug 22 01:35:27 host-01 sanlock[1024]: 2016-08-22 01:35:27+0200 5614
[1024]: s2 kill 6871 sig 15 count 17
Aug 22 01:35:28 host-01 wdmd[988]: test failed rem 24 now 5614 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:28 host-01 sanlock[1024]: 2016-08-22 01:35:28+0200 5615
[1024]: s2 kill 6871 sig 15 count 18
Aug 22 01:35:29 host-01 wdmd[988]: test failed rem 23 now 5615 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:29 host-01 sanlock[1024]: 2016-08-22 01:35:29+0200 5616
[1024]: s2 kill 6871 sig 15 count 19
Aug 22 01:35:30 host-01 wdmd[988]: test failed rem 22 now 5616 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:30 host-01 sanlock[1024]: 2016-08-22 01:35:30+0200 5617
[1024]: s2 kill 6871 sig 15 count 20
Aug 22 01:35:31 host-01 wdmd[988]: test failed rem 21 now 5617 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:31 host-01 ovirt-ha-agent:
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use D$
Aug 22 01:35:31 host-01 ovirt-ha-agent: pending = getattr(dispatcher,
'pending', lambda: 0)
Aug 22 01:35:31 host-01 ovirt-ha-agent:
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use D$
Aug 22 01:35:31 host-01 ovirt-ha-agent: pending = getattr(dispatcher,
'pending', lambda: 0)
Aug 22 01:35:31 host-01 sanlock[1024]: 2016-08-22 01:35:31+0200 5618
[1024]: s2 kill 6871 sig 15 count 21
Aug 22 01:35:32 host-01 wdmd[988]: test failed rem 20 now 5618 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:32 host-01 momd:
/usr/lib/python2.7/site-packages/mom/Collectors/GuestMemory.py:52:
DeprecationWarning: BaseException.message has been deprecat$
Aug 22 01:35:32 host-01 momd: self.stats_error('getVmMemoryStats():
%s' % e.message)
Aug 22 01:35:32 host-01 momd:
/usr/lib/python2.7/site-packages/mom/Collectors/GuestMemory.py:52:
DeprecationWarning: BaseException.message has been deprecat$
Aug 22 01:35:32 host-01 momd: self.stats_error('getVmMemoryStats():
%s' % e.message)
Aug 22 01:35:32 host-01 sanlock[1024]: 2016-08-22 01:35:32+0200 5619
[1024]: s2 kill 6871 sig 15 count 22
Aug 22 01:35:33 host-01 ovirt-ha-broker:
WARNING:engine_health.CpuLoadNoEngine:bad health status: Hosted Engine
is not up!
Aug 22 01:35:33 host-01 wdmd[988]: test failed rem 19 now 5619 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:33 host-01 sanlock[1024]: 2016-08-22 01:35:33+0200 5620
[1024]: s2 kill 6871 sig 15 count 23
Aug 22 01:35:34 host-01 wdmd[988]: test failed rem 18 now 5620 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:34 host-01 sanlock[1024]: 2016-08-22 01:35:34+0200 5621
[1024]: s2 kill 6871 sig 15 count 24
Aug 22 01:35:35 host-01 wdmd[988]: test failed rem 17 now 5621 ping
5568 close 5578 renewal 5518 expire 5598 client 1024
sanlock_4093ad17-bef5-4e4b-9a16-259$
Aug 22 01:35:35 host-01 sanlock[1024]: 2016-08-22 01:35:35+0200 5622
[10454]: 60c7bc7a close_task_aio 0 0x7f37180008c0 busy
Aug 22 01:35:35 host-01 sanlock[1024]: 2016-08-22 01:35:35+0200 5622
[10454]: 60c7bc7a close_task_aio 1 0x7f3718000910 busy
Aug 22 01:35:35 host-01 sanlock[1024]: 2016-08-22 01:35:35+0200 5622
[10454]: 60c7bc7a close_task_aio 2 0x7f3718000960 busy
Aug 22 01:35:35 host-01 sanlock[1024]: 2016-08-22 01:35:35+0200 5622
[10454]: 60c7bc7a close_task_aio 3 0x7f37180009b0 busy
Aug 22 01:35:35 host-01 sanlock[1024]: 2016-08-22 01:35:35+0200 5622
[1024]: s2
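
Those wdmd "test failed ... expire 5598" lines explain the random
reboots: sanlock can no longer renew its lease on the storage (the
"renewal error -2" line), so it first SIGTERMs the lease holder (the
"kill 6871 sig 15" lines), and if renewal is still failing when the
lease expires, the wdmd watchdog hard-resets the host. The reboots are
a symptom of the storage disappearing, not a separate crash. The lease
state can be watched live with the standard sanlock CLI, no special
options assumed:

sanlock client status      # lockspaces/resources and their state
sanlock client log_dump    # recent internal log, incl. renewal errors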

Re: [ovirt-users] Suddenly all VM's down including HostedEngine & NFSshares (except HE) unmounted

2016-08-21 Thread Matt .
The strange thing is that there are no duplicated IPs in the oVirt
environment, the storage, or anything the VMs are running on.

What happens, though, is that the statuses of all the agents change,
and why... don't ask me.

There is really nothing in the logs that shows this behaviour.

Restarting the broker and the agent and rebooting the hosts doesn't
work out. The only host I can start the HostedEngine on now is host-4,
whereas before I was able to start it on the other hosts in their
current states as well.

Something is wobbly in the communication between the agents, if you
ask me. This has been happening since 4.0.1.

--== Host 1 status ==--

Status up-to-date  : False
Hostname           : host-01.mydomain.tld
Host ID            : 1
Engine status      : unknown stale-data
Score              : 0
stopped            : True
Local maintenance  : False
crc32              : 6b73a02e
Host timestamp     : 2710
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=2710 (Sun Aug 21 21:52:56 2016)
    host-id=1
    score=0
    maintenance=False
    state=AgentStopped
    stopped=True


--== Host 2 status ==--

Status up-to-date  : False
Hostname           : host-02.mydomain.tld
Host ID            : 2
Engine status      : unknown stale-data
Score              : 0
stopped            : True
Local maintenance  : False
crc32              : 8e647fca
Host timestamp     : 509
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=509 (Sun Aug 21 21:53:00 2016)
    host-id=2
    score=0
    maintenance=False
    state=AgentStopped
    stopped=True


--== Host 3 status ==--

Status up-to-date  : False
Hostname           : host-01.mydomain.tld
Host ID            : 3
Engine status      : unknown stale-data
Score              : 0
stopped            : True
Local maintenance  : False
crc32              : 73748f9f
Host timestamp     : 2888
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=2888 (Sun Aug 21 00:16:12 2016)
    host-id=3
    score=0
    maintenance=False
    state=AgentStopped
    stopped=True


--== Host 4 status ==--

Status up-to-date  : False
Hostname           : host-02.mydomain.tld
Host ID            : 4
Engine status      : unknown stale-data
Score              : 3400
stopped            : False
Local maintenance  : False
crc32              : 86ef0447
Host timestamp     : 67879
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=67879 (Sun Aug 21 18:30:38 2016)
    host-id=4
    score=3400
    maintenance=False
    state=GlobalMaintenance
    stopped=False
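
Since restarting broker/agent alone did not clear the stale-data and
state=AgentStopped entries above, one recovery path to try once the
storage is stable again is to reinitialize the hosted-engine lockspace
from global maintenance. A hedged sketch, assuming the standard 4.0
hosted-engine CLI:

# on each host
systemctl restart ovirt-ha-broker ovirt-ha-agent

# then from one host, once the agents are back up
hosted-engine --set-maintenance --mode=global
hosted-engine --reinitialize-lockspace
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status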



2016-08-21 22:09 GMT+02:00 Charles Kozler :
> This usually happens when the SPM falls off or the master storage domain
> was unreachable for a brief period of time in some capacity. Your logs
> should say something about an underlying storage problem, as oVirt
> offlined or paused the VMs to avoid problems. I'd check the pathway to
> your master storage domain. You're probably right that something had a
> conflicting IP. This happened to me one time where someone brought up a
> system on an IP that matched my SPM's.
>
>
> On Aug 21, 2016 3:33 PM, "Matt ."  wrote:
>>
>> Hi All,
>>
>> I'm trying to tackle an issue on 4.0.2 where suddenly all VMs,
>> including the HostedEngine, go down at once.
>>
>> I have also seen that all NFS shares get unmounted except the
>> HostedEngine storage, which is on the same NFS device.
>>
>> I have checked the logs and there is nothing strange to see there,
>> but as I run a VRRP setup and do some tests as well, I wonder if a
>> duplicate IP was brought up. Could this cause the whole system to go
>> down and the Engine or VDSM to unmount the NFS shares? My switches
>> don't complain.
>>
>> It's strange that the HE share is the only one still available after
>> it happens.
>>
>> If so, this would be quite fragile, and we should tackle where it
>> goes wrong.
>>
>> Anyone seen this behaviour?
>>
>> Thanks,
>>
>> Matt

Re: [ovirt-users] Suddenly all VM's down including HostedEngine & NFSshares (except HE) unmounted

2016-08-21 Thread Charles Kozler
This usually happens when the SPM falls off or the master storage domain
was unreachable for a brief period of time in some capacity. Your logs
should say something about an underlying storage problem, as oVirt
offlined or paused the VMs to avoid problems. I'd check the pathway to
your master storage domain. You're probably right that something had a
conflicting IP. This happened to me one time where someone brought up a
system on an IP that matched my SPM's.
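
If a conflicting IP on the storage or VRRP side is the suspect,
duplicate address detection with arping from one of the hosts is a
cheap test. A sketch using iputils arping; the interface name em1 and
the address (taken from the vdsm log earlier in the thread) are
placeholders for whatever is actually in play:

# -D: duplicate address detection; exits non-zero if another
# machine answers for the probed address
arping -D -c 3 -I em1 172.16.30.11 && echo "no conflict" || echo "duplicate IP"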

On Aug 21, 2016 3:33 PM, "Matt ."  wrote:

> Hi All,
>
> I'm trying to tackle an issue on 4.0.2 where suddenly all VMs,
> including the HostedEngine, go down at once.
>
> I have also seen that all NFS shares get unmounted except the
> HostedEngine storage, which is on the same NFS device.
>
> I have checked the logs and there is nothing strange to see there,
> but as I run a VRRP setup and do some tests as well, I wonder if a
> duplicate IP was brought up. Could this cause the whole system to go
> down and the Engine or VDSM to unmount the NFS shares? My switches
> don't complain.
>
> It's strange that the HE share is the only one still available after
> it happens.
>
> If so, this would be quite fragile, and we should tackle where it
> goes wrong.
>
> Anyone seen this behaviour?
>
> Thanks,
>
> Matt


[ovirt-users] Suddenly all VM's down including HostedEngine & NFSshares (except HE) unmounted

2016-08-21 Thread Matt .
Hi All,

I'm trying to tackle an issue on 4.0.2 where suddenly all VMs,
including the HostedEngine, go down at once.

I have also seen that all NFS shares get unmounted except the
HostedEngine storage, which is on the same NFS device.

I have checked the logs and there is nothing strange to see there, but
as I run a VRRP setup and do some tests as well, I wonder if a
duplicate IP was brought up. Could this cause the whole system to go
down and the Engine or VDSM to unmount the NFS shares? My switches
don't complain.

It's strange that the HE share is the only one still available after
it happens.

If so, this would be quite fragile, and we should tackle where it goes
wrong.

Anyone seen this behaviour?

Thanks,

Matt