[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-30 Thread Johan Bernhardsson
Is storage working as it should?  Does the gluster mount point respond as 
it should? Can you write files to it?  Do the physical drives report that 
they are OK? Can you write to the physical drives directly (you shouldn't 
normally bypass the gluster mount point, but you do need to test the drives)?
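
For example, a quick sanity check might look something like this (the mount
path, volume name, and device are placeholders, not taken from this thread;
smartctl comes from the smartmontools package):

  # write through the gluster mount point that oVirt uses
  dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/<host>:_<volume>/healthtest bs=1M count=100 oflag=direct

  # check the brick filesystem and the physical disk underneath it
  df -h /gluster/brick1
  smartctl -a /dev/sdX
  dmesg | tail -n 50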


To me this sounds like broken (or nearly broken) hardware, or broken 
underlying filesystems.


If one of the drives malfunctions and times out, gluster will be slow and 
time out as well. It performs writes synchronously, so the slowest node slows 
down the whole system.


/Johan


On May 30, 2018 08:29:46 Jim Kusznir  wrote:
hosted-engine --deploy failed (would not come up on my existing gluster 
storage).  However, I realized no changes were written to my existing 
storage.  So, I went back to trying to get my old engine running.


hosted-engine --vm-status is now taking a very long time (5+ minutes) to 
return, and it returns stale information everywhere.  I thought perhaps the 
lockspace is corrupt, so I tried to clean that and the metadata, but both are 
failing (--clean-metadata has hung and I can't even ctrl-c out of it).


How can I reinitialize all the lockspace/metadata safely?  There are no 
engine or VMs running currently.
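
(For reference, the lockspace/metadata reset is normally done along these
lines, with the cluster in global maintenance and the HA services stopped;
this is only a sketch, so double-check the options against your oVirt version:)

  hosted-engine --set-maintenance --mode=global
  systemctl stop ovirt-ha-agent ovirt-ha-broker
  hosted-engine --reinitialize-lockspace --force
  hosted-engine --clean-metadata --force-clean
  systemctl start ovirt-ha-broker ovirt-ha-agent
  hosted-engine --set-maintenance --mode=none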


--Jim

On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir  wrote:
Well, things went from bad to very, very bad

It appears that during one of the 2 minute lockups, the fencing agents 
decided that another node in the cluster was down.  As a result, 2 of the 3 
nodes were simultaneously reset with fencing agent reboot.  After the nodes 
came back up, the engine would not start.  All running VMs (including VMs 
on the 3rd node that was not rebooted) crashed.


I've now been working for about 3 hours trying to get the engine to come 
up.  I don't know why it won't start.  hosted-engine --vm-start says it's 
starting, but it doesn't start (virsh doesn't show any VMs running).  I'm 
currently running --deploy, as I had run out of other options I could come 
up with.  I hope this will allow me to re-import all my existing VMs and 
allow me to start them back up after everything comes back up.


I do have an unverified geo-rep backup; I don't know if it is a good backup 
(there were several prior messages to this list, but I didn't get replies 
to my questions.  It was running in what I believe to be a "strange" state, 
and the data directories are larger than their source).


I'll see if my --deploy works, and if not, I'll be back with another 
message/help request.


When the dust settles and I'm at least minimally functional again, I really 
want to understand why all these technologies designed to offer redundancy 
conspired to reduce uptime and create failures where there weren't any 
otherwise.  I thought that hosted engine, 3 oVirt servers, and glusterfs 
with a minimum of replica 2+arbiter or replica 3 should have offered strong 
resilience against server failure or disk failure, and should have 
prevented / recovered from data corruption.  Instead, all of the above 
happened (once I get my cluster back up, I still have to try and recover my 
webserver VM, which won't boot due to XFS corrupt-journal issues created 
during the gluster crashes).  I think a lot of these issues were rooted 
in the upgrade from 4.1 to 4.2.


--Jim

On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir  wrote:
I also finally found the following in my system log on one server:

[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 
seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.

[10679.527144] glusterclogro   D 97209832bf40 0 14933  1 0x0080
[10679.527150] Call Trace:
[10679.527161]  [] schedule+0x29/0x70
[10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.527225]  [] ? wake_up_state+0x20/0x20
[10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.527260]  [] do_fsync+0x67/0xb0
[10679.527268]  [] ? system_call_after_swapgs+0xbc/0x160
[10679.527271]  [] SyS_fsync+0x10/0x20
[10679.527275]  [] system_call_fastpath+0x1c/0x21
[10679.527279]  [] ? system_call_after_swapgs+0xc8/0x160
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 
seconds.
[10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.

[10679.529956] glusterposixfsy D 972495f84f10 0 14941  1 0x0080
[10679.529961] Call Trace:
[10679.529966]  [] schedule+0x29/0x70
[10679.530003]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.530008]  [] ? wake_up_state+0x20/0x20
[10679.530038]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.530042]  [] do_fsync+0x67/0xb0
[10679.530046]  [] ? system_call_after_swapgs+0xbc/0x160
[10679.530050]  [] SyS_fdatasync+0x13/0x20
[10679.530054]  [] system_call_fastpath+0x1c/0x21
[10679.530058]  [] ? system_call_after_swapgs+0xc8/0x160
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 
seconds.
[10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.

[10679.533732] 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-30 Thread Jim Kusznir
At the moment, it is responding like I would expect.  I do know I have one
failed drive on one brick (hardware failure, OS removed drive completely;
the underlying /dev/sdb is gone).  I have a new disk on order (overnight),
but that is also only one brick of a replica 3 volume, so I would hope the
system could remain operational through a complete failure like that.

Since having the gluster-volume-starting problems, I have performed a test
in the engine volume, writing and removing a file and verifying it happened
on all three hosts; that worked.  The engine volume has all of its bricks,
as do two other volumes; it's only one volume that is shy one
brick.
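
(For what it's worth, the brick and heal state of the degraded volume can be
confirmed with the standard commands; data-hdd is the volume name that appears
elsewhere in this thread:)

  gluster volume status data-hdd
  gluster volume heal data-hdd info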

--Jim

On Tue, May 29, 2018 at 11:41 PM, Johan Bernhardsson  wrote:

> Is storage working as it should?  Does the gluster mount point respond as
> it should? Can you write files to it?  Does the physical drives say that
> they are ok? Can you write (you shouldn't bypass gluster mount point but
> you need to test the drives) to the physical drives?
>
> For me this sounds like broken or almost broken hardware or broken
> underlying filesystems.
>
> If one of the drives malfunction and timeout, gluster will be slow and
> timeout. It runs write in sync so the slowest node will slow down the whole
> system.
>
> /Johan
>
>
> On May 30, 2018 08:29:46 Jim Kusznir  wrote:
>
>> hosted-engine --deploy failed (would not come up on my existing gluster
>> storage).  However, I realized no changes were written to my existing
>> storage.  So, I went back to trying to get my old engine running.
>>
>> hosted-engine --vm-status is now taking a very long time (5+minutes) to
>> return, and it returns stail information everywhere.  I thought perhaps the
>> lockspace is corrupt, so tried to clean that and metadata, but both are
>> failing (--cleam-metadata has hung and I can't even ctrl-c out of it).
>>
>> How can I reinitialize all the lockspace/metadata safely?  There is no
>> engine or VMs running currently
>>
>> --Jim
>>
>> On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir  wrote:
>>
>>> Well, things went from bad to very, very bad
>>>
>>> It appears that during one of the 2 minute lockups, the fencing agents
>>> decided that another node in the cluster was down.  As a result, 2 of the 3
>>> nodes were simultaneously reset with fencing agent reboot.  After the nodes
>>> came back up, the engine would not start.  All running VMs (including VMs
>>> on the 3rd node that was not rebooted) crashed.
>>>
>>> I've now been working for about 3 hours trying to get the engine to come
>>> up.  I don't know why it won't start.  hosted-engine --vm-start says its
>>> starting, but it doesn't start (virsh doesn't show any VMs running).  I'm
>>> currently running --deploy, as I had run out of options for anything else I
>>> can come up with.  I hope this will allow me to re-import all my existing
>>> VMs and allow me to start them back up after everything comes back up.
>>>
>>> I do have an unverified geo-rep backup; I don't know if it is a good
>>> backup (there were several prior messages to this list, but I didn't get
>>> replies to my questions.  It was running in what I believe to be "strange",
>>> and the data directories are larger than their source).
>>>
>>> I'll see if my --deploy works, and if not, I'll be back with another
>>> message/help request.
>>>
>>> When the dust settles and I'm at least minimally functional again, I
>>> really want to understand why all these technologies designed to offer
>>> redundancy conspired to reduce uptime and create failures where there
>>> weren't any otherwise.  I thought with hosted engine, 3 ovirt servers and
>>> glusterfs with minimum replica 2+arb or replica 3 should have offered
>>> strong resilience against server failure or disk failure, and should have
>>> prevented / recovered from data corruption.  Instead, all of the above
>>> happened (once I get my cluster back up, I still have to try and recover my
>>> webserver VM, which won't boot due to XFS corrupt journal issues created
>>> during the gluster crashes).  I think a lot of these issues were rooted
>>> from the upgrade from 4.1 to 4.2.
>>>
>>> --Jim
>>>
>>> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir 
>>> wrote:
>>>
 I also finally found the following in my system log on one server:

 [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
 seconds.
 [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
 disables this message.
 [10679.527144] glusterclogro   D 97209832bf40 0 14933  1
 0x0080
 [10679.527150] Call Trace:
 [10679.527161]  [] schedule+0x29/0x70
 [10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340
 [xfs]
 [10679.527225]  [] ? wake_up_state+0x20/0x20
 [10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
 [10679.527260]  [] do_fsync+0x67/0xb0
 [10679.527268]  [] ? system_call_after_swapgs+0xbc/
 0x160
 [10679.527271]  [] 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-30 Thread Sahina Bose
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir  wrote:

> hosted-engine --deploy failed (would not come up on my existing gluster
> storage).  However, I realized no changes were written to my existing
> storage.  So, I went back to trying to get my old engine running.
>
> hosted-engine --vm-status is now taking a very long time (5+minutes) to
> return, and it returns stail information everywhere.  I thought perhaps the
> lockspace is corrupt, so tried to clean that and metadata, but both are
> failing (--cleam-metadata has hung and I can't even ctrl-c out of it).
>
> How can I reinitialize all the lockspace/metadata safely?  There is no
> engine or VMs running currently
>

I think the first thing to make sure of is that your storage is up and
running. Can you mount the gluster volumes and access the contents there?
Please provide the gluster volume info and gluster volume status of the
volumes that you're using.
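
For example, run on any of the gluster nodes (data-hdd is the volume named
elsewhere in this thread; repeat for the other volumes):

  gluster volume info data-hdd
  gluster volume status data-hdd detail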



> --Jim
>
> On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir  wrote:
>
>> Well, things went from bad to very, very bad
>>
>> It appears that during one of the 2 minute lockups, the fencing agents
>> decided that another node in the cluster was down.  As a result, 2 of the 3
>> nodes were simultaneously reset with fencing agent reboot.  After the nodes
>> came back up, the engine would not start.  All running VMs (including VMs
>> on the 3rd node that was not rebooted) crashed.
>>
>> I've now been working for about 3 hours trying to get the engine to come
>> up.  I don't know why it won't start.  hosted-engine --vm-start says its
>> starting, but it doesn't start (virsh doesn't show any VMs running).  I'm
>> currently running --deploy, as I had run out of options for anything else I
>> can come up with.  I hope this will allow me to re-import all my existing
>> VMs and allow me to start them back up after everything comes back up.
>>
>> I do have an unverified geo-rep backup; I don't know if it is a good
>> backup (there were several prior messages to this list, but I didn't get
>> replies to my questions.  It was running in what I believe to be "strange",
>> and the data directories are larger than their source).
>>
>> I'll see if my --deploy works, and if not, I'll be back with another
>> message/help request.
>>
>> When the dust settles and I'm at least minimally functional again, I
>> really want to understand why all these technologies designed to offer
>> redundancy conspired to reduce uptime and create failures where there
>> weren't any otherwise.  I thought with hosted engine, 3 ovirt servers and
>> glusterfs with minimum replica 2+arb or replica 3 should have offered
>> strong resilience against server failure or disk failure, and should have
>> prevented / recovered from data corruption.  Instead, all of the above
>> happened (once I get my cluster back up, I still have to try and recover my
>> webserver VM, which won't boot due to XFS corrupt journal issues created
>> during the gluster crashes).  I think a lot of these issues were rooted
>> from the upgrade from 4.1 to 4.2.
>>
>> --Jim
>>
>> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir  wrote:
>>
>>> I also finally found the following in my system log on one server:
>>>
>>> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
>>> seconds.
>>> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>> disables this message.
>>> [10679.527144] glusterclogro   D 97209832bf40 0 14933  1
>>> 0x0080
>>> [10679.527150] Call Trace:
>>> [10679.527161]  [] schedule+0x29/0x70
>>> [10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>> [10679.527225]  [] ? wake_up_state+0x20/0x20
>>> [10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
>>> [10679.527260]  [] do_fsync+0x67/0xb0
>>> [10679.527268]  [] ? system_call_after_swapgs+0xbc/
>>> 0x160
>>> [10679.527271]  [] SyS_fsync+0x10/0x20
>>> [10679.527275]  [] system_call_fastpath+0x1c/0x21
>>> [10679.527279]  [] ? system_call_after_swapgs+0xc8/
>>> 0x160
>>> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than
>>> 120 seconds.
>>> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>> disables this message.
>>> [10679.529956] glusterposixfsy D 972495f84f10 0 14941  1
>>> 0x0080
>>> [10679.529961] Call Trace:
>>> [10679.529966]  [] schedule+0x29/0x70
>>> [10679.530003]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>> [10679.530008]  [] ? wake_up_state+0x20/0x20
>>> [10679.530038]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
>>> [10679.530042]  [] do_fsync+0x67/0xb0
>>> [10679.530046]  [] ? system_call_after_swapgs+0xbc/
>>> 0x160
>>> [10679.530050]  [] SyS_fdatasync+0x13/0x20
>>> [10679.530054]  [] system_call_fastpath+0x1c/0x21
>>> [10679.530058]  [] ? system_call_after_swapgs+0xc8/
>>> 0x160
>>> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120
>>> seconds.
>>> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>>> disables this message.
>>> [10679.533732] 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-30 Thread Jim Kusznir
hosted-engine --deploy failed (would not come up on my existing gluster
storage).  However, I realized no changes were written to my existing
storage.  So, I went back to trying to get my old engine running.

hosted-engine --vm-status is now taking a very long time (5+ minutes) to
return, and it returns stale information everywhere.  I thought perhaps the
lockspace is corrupt, so I tried to clean that and the metadata, but both are
failing (--clean-metadata has hung and I can't even ctrl-c out of it).

How can I reinitialize all the lockspace/metadata safely?  There are no
engine or VMs running currently.

--Jim

On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir  wrote:

> Well, things went from bad to very, very bad
>
> It appears that during one of the 2 minute lockups, the fencing agents
> decided that another node in the cluster was down.  As a result, 2 of the 3
> nodes were simultaneously reset with fencing agent reboot.  After the nodes
> came back up, the engine would not start.  All running VMs (including VMs
> on the 3rd node that was not rebooted) crashed.
>
> I've now been working for about 3 hours trying to get the engine to come
> up.  I don't know why it won't start.  hosted-engine --vm-start says its
> starting, but it doesn't start (virsh doesn't show any VMs running).  I'm
> currently running --deploy, as I had run out of options for anything else I
> can come up with.  I hope this will allow me to re-import all my existing
> VMs and allow me to start them back up after everything comes back up.
>
> I do have an unverified geo-rep backup; I don't know if it is a good
> backup (there were several prior messages to this list, but I didn't get
> replies to my questions.  It was running in what I believe to be "strange",
> and the data directories are larger than their source).
>
> I'll see if my --deploy works, and if not, I'll be back with another
> message/help request.
>
> When the dust settles and I'm at least minimally functional again, I
> really want to understand why all these technologies designed to offer
> redundancy conspired to reduce uptime and create failures where there
> weren't any otherwise.  I thought with hosted engine, 3 ovirt servers and
> glusterfs with minimum replica 2+arb or replica 3 should have offered
> strong resilience against server failure or disk failure, and should have
> prevented / recovered from data corruption.  Instead, all of the above
> happened (once I get my cluster back up, I still have to try and recover my
> webserver VM, which won't boot due to XFS corrupt journal issues created
> during the gluster crashes).  I think a lot of these issues were rooted
> from the upgrade from 4.1 to 4.2.
>
> --Jim
>
> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir  wrote:
>
>> I also finally found the following in my system log on one server:
>>
>> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
>> seconds.
>> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>> [10679.527144] glusterclogro   D 97209832bf40 0 14933  1
>> 0x0080
>> [10679.527150] Call Trace:
>> [10679.527161]  [] schedule+0x29/0x70
>> [10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>> [10679.527225]  [] ? wake_up_state+0x20/0x20
>> [10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
>> [10679.527260]  [] do_fsync+0x67/0xb0
>> [10679.527268]  [] ? system_call_after_swapgs+0xbc/
>> 0x160
>> [10679.527271]  [] SyS_fsync+0x10/0x20
>> [10679.527275]  [] system_call_fastpath+0x1c/0x21
>> [10679.527279]  [] ? system_call_after_swapgs+0xc8/
>> 0x160
>> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120
>> seconds.
>> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>> [10679.529956] glusterposixfsy D 972495f84f10 0 14941  1
>> 0x0080
>> [10679.529961] Call Trace:
>> [10679.529966]  [] schedule+0x29/0x70
>> [10679.530003]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>> [10679.530008]  [] ? wake_up_state+0x20/0x20
>> [10679.530038]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
>> [10679.530042]  [] do_fsync+0x67/0xb0
>> [10679.530046]  [] ? system_call_after_swapgs+0xbc/
>> 0x160
>> [10679.530050]  [] SyS_fdatasync+0x13/0x20
>> [10679.530054]  [] system_call_fastpath+0x1c/0x21
>> [10679.530058]  [] ? system_call_after_swapgs+0xc8/
>> 0x160
>> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120
>> seconds.
>> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>> [10679.533732] glusteriotwr13  D 9720a83f 0 15486  1
>> 0x0080
>> [10679.533738] Call Trace:
>> [10679.533747]  [] schedule+0x29/0x70
>> [10679.533799]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>> [10679.533806]  [] ? wake_up_state+0x20/0x20
>> [10679.533846]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
>> [10679.533852]  [] do_fsync+0x67/0xb0
>> [10679.533858]  [] ? system_call_after_swapgs+0xbc/
>> 0x160
>> [10679.533863]  [] 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
Well, things went from bad to very, very bad

It appears that during one of the 2 minute lockups, the fencing agents
decided that another node in the cluster was down.  As a result, 2 of the 3
nodes were simultaneously reset with fencing agent reboot.  After the nodes
came back up, the engine would not start.  All running VMs (including VMs
on the 3rd node that was not rebooted) crashed.

I've now been working for about 3 hours trying to get the engine to come
up.  I don't know why it won't start.  hosted-engine --vm-start says it's
starting, but it doesn't start (virsh doesn't show any VMs running).  I'm
currently running --deploy, as I had run out of other options I could come
up with.  I hope this will allow me to re-import all my existing VMs and
allow me to start them back up after everything comes back up.

I do have an unverified geo-rep backup; I don't know if it is a good backup
(there were several prior messages to this list, but I didn't get replies
to my questions.  It was running in what I believe to be a "strange" state,
and the data directories are larger than their source).

I'll see if my --deploy works, and if not, I'll be back with another
message/help request.

When the dust settles and I'm at least minimally functional again, I really
want to understand why all these technologies designed to offer redundancy
conspired to reduce uptime and create failures where there weren't any
otherwise.  I thought that hosted engine, 3 oVirt servers, and glusterfs
with a minimum of replica 2+arbiter or replica 3 should have offered strong
resilience against server failure or disk failure, and should have
prevented / recovered from data corruption.  Instead, all of the above
happened (once I get my cluster back up, I still have to try and recover my
webserver VM, which won't boot due to XFS corrupt-journal issues created
during the gluster crashes).  I think a lot of these issues were rooted
in the upgrade from 4.1 to 4.2.

--Jim

On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir  wrote:

> I also finally found the following in my system log on one server:
>
> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
> seconds.
> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10679.527144] glusterclogro   D 97209832bf40 0 14933  1
> 0x0080
> [10679.527150] Call Trace:
> [10679.527161]  [] schedule+0x29/0x70
> [10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.527225]  [] ? wake_up_state+0x20/0x20
> [10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.527260]  [] do_fsync+0x67/0xb0
> [10679.527268]  [] ? system_call_after_swapgs+0xbc/0x160
> [10679.527271]  [] SyS_fsync+0x10/0x20
> [10679.527275]  [] system_call_fastpath+0x1c/0x21
> [10679.527279]  [] ? system_call_after_swapgs+0xc8/0x160
> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120
> seconds.
> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10679.529956] glusterposixfsy D 972495f84f10 0 14941  1
> 0x0080
> [10679.529961] Call Trace:
> [10679.529966]  [] schedule+0x29/0x70
> [10679.530003]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.530008]  [] ? wake_up_state+0x20/0x20
> [10679.530038]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.530042]  [] do_fsync+0x67/0xb0
> [10679.530046]  [] ? system_call_after_swapgs+0xbc/0x160
> [10679.530050]  [] SyS_fdatasync+0x13/0x20
> [10679.530054]  [] system_call_fastpath+0x1c/0x21
> [10679.530058]  [] ? system_call_after_swapgs+0xc8/0x160
> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120
> seconds.
> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10679.533732] glusteriotwr13  D 9720a83f 0 15486  1
> 0x0080
> [10679.533738] Call Trace:
> [10679.533747]  [] schedule+0x29/0x70
> [10679.533799]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.533806]  [] ? wake_up_state+0x20/0x20
> [10679.533846]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.533852]  [] do_fsync+0x67/0xb0
> [10679.533858]  [] ? system_call_after_swapgs+0xbc/0x160
> [10679.533863]  [] SyS_fdatasync+0x13/0x20
> [10679.533868]  [] system_call_fastpath+0x1c/0x21
> [10679.533873]  [] ? system_call_after_swapgs+0xc8/0x160
> [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120
> seconds.
> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10919.516663] glusterclogro   D 97209832bf40 0 14933  1
> 0x0080
> [10919.516677] Call Trace:
> [10919.516690]  [] schedule+0x29/0x70
> [10919.516696]  [] schedule_timeout+0x239/0x2c0
> [10919.516703]  [] ? blk_finish_plug+0x14/0x40
> [10919.516768]  [] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
> [10919.516774]  [] wait_for_completion+0xfd/0x140
> [10919.516782]  [] ? wake_up_state+0x20/0x20
> [10919.516821]  [] ? _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516859]  [] 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Krutika Dhananjay
Adding Ravi to look into the heal issue.

As for the fsync hang and subsequent IO errors, it seems a lot like
https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from
qemu had pointed out that this would be fixed by the following commit:

  commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini 
Date:   Wed Jun 21 16:35:46 2017 +0200

scsi: virtio_scsi: let host do exception handling

virtio_scsi tries to do exception handling after the default 30 seconds
timeout expires.  However, it's better to let the host control the
timeout, otherwise with a heavy I/O load it is likely that an abort will
also timeout.  This leads to fatal errors like filesystems going
offline.

Disable the 'sd' timeout and allow the host to do exception handling,
following the precedent of the storvsc driver.

Hannes has a proposal to introduce timeouts in virtio, but this provides
an immediate solution for stable kernels too.

[mkp: fixed typo]

Reported-by: Douglas Miller 
Cc: "James E.J. Bottomley" 
Cc: "Martin K. Petersen" 
Cc: Hannes Reinecke 
Cc: linux-s...@vger.kernel.org
Cc: sta...@vger.kernel.org
Signed-off-by: Paolo Bonzini 
Signed-off-by: Martin K. Petersen 


Adding Paolo/Kevin to comment.

As for the poor gluster performance, could you disable cluster.eager-lock
and see if that makes any difference:

# gluster volume set <volname> cluster.eager-lock off

Do also capture the volume profile again if you still see performance
issues after disabling eager-lock.
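
A capture is typically taken like this (substitute the real volume name; let
the window between start and info span a period of slowness):

# gluster volume profile <volname> start
# gluster volume profile <volname> info
# gluster volume profile <volname> stop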

-Krutika


On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir  wrote:

> I also finally found the following in my system log on one server:
>
> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
> seconds.
> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10679.527144] glusterclogro   D 97209832bf40 0 14933  1
> 0x0080
> [10679.527150] Call Trace:
> [10679.527161]  [] schedule+0x29/0x70
> [10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.527225]  [] ? wake_up_state+0x20/0x20
> [10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.527260]  [] do_fsync+0x67/0xb0
> [10679.527268]  [] ? system_call_after_swapgs+0xbc/0x160
> [10679.527271]  [] SyS_fsync+0x10/0x20
> [10679.527275]  [] system_call_fastpath+0x1c/0x21
> [10679.527279]  [] ? system_call_after_swapgs+0xc8/0x160
> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120
> seconds.
> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10679.529956] glusterposixfsy D 972495f84f10 0 14941  1
> 0x0080
> [10679.529961] Call Trace:
> [10679.529966]  [] schedule+0x29/0x70
> [10679.530003]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.530008]  [] ? wake_up_state+0x20/0x20
> [10679.530038]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.530042]  [] do_fsync+0x67/0xb0
> [10679.530046]  [] ? system_call_after_swapgs+0xbc/0x160
> [10679.530050]  [] SyS_fdatasync+0x13/0x20
> [10679.530054]  [] system_call_fastpath+0x1c/0x21
> [10679.530058]  [] ? system_call_after_swapgs+0xc8/0x160
> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120
> seconds.
> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10679.533732] glusteriotwr13  D 9720a83f 0 15486  1
> 0x0080
> [10679.533738] Call Trace:
> [10679.533747]  [] schedule+0x29/0x70
> [10679.533799]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.533806]  [] ? wake_up_state+0x20/0x20
> [10679.533846]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.533852]  [] do_fsync+0x67/0xb0
> [10679.533858]  [] ? system_call_after_swapgs+0xbc/0x160
> [10679.533863]  [] SyS_fdatasync+0x13/0x20
> [10679.533868]  [] system_call_fastpath+0x1c/0x21
> [10679.533873]  [] ? system_call_after_swapgs+0xc8/0x160
> [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120
> seconds.
> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [10919.516663] glusterclogro   D 97209832bf40 0 14933  1
> 0x0080
> [10919.516677] Call Trace:
> [10919.516690]  [] schedule+0x29/0x70
> [10919.516696]  [] schedule_timeout+0x239/0x2c0
> [10919.516703]  [] ? blk_finish_plug+0x14/0x40
> [10919.516768]  [] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
> [10919.516774]  [] wait_for_completion+0xfd/0x140
> [10919.516782]  [] ? wake_up_state+0x20/0x20
> [10919.516821]  [] ? _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516859]  [] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
> [10919.516902]  [] ? xfs_trans_read_buf_map+0x199/0x400
> [xfs]
> [10919.516940]  [] _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516977]  [] xfs_buf_read_map+0xf9/0x160 [xfs]
> [10919.517022]  [] xfs_trans_read_buf_map+0x199/0x400
> [xfs]
> [10919.517057]  [] xfs_da_read_buf+0xd4/0x100 [xfs]
> [10919.517091]  [] xfs_da3_node_read+0x23/0xd0 [xfs]
> 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
I also finally found the following in my system log on one server:

[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[10679.527144] glusterclogro   D 97209832bf40 0 14933  1
0x0080
[10679.527150] Call Trace:
[10679.527161]  [] schedule+0x29/0x70
[10679.527218]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.527225]  [] ? wake_up_state+0x20/0x20
[10679.527254]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.527260]  [] do_fsync+0x67/0xb0
[10679.527268]  [] ? system_call_after_swapgs+0xbc/0x160
[10679.527271]  [] SyS_fsync+0x10/0x20
[10679.527275]  [] system_call_fastpath+0x1c/0x21
[10679.527279]  [] ? system_call_after_swapgs+0xc8/0x160
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120
seconds.
[10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[10679.529956] glusterposixfsy D 972495f84f10 0 14941  1
0x0080
[10679.529961] Call Trace:
[10679.529966]  [] schedule+0x29/0x70
[10679.530003]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.530008]  [] ? wake_up_state+0x20/0x20
[10679.530038]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.530042]  [] do_fsync+0x67/0xb0
[10679.530046]  [] ? system_call_after_swapgs+0xbc/0x160
[10679.530050]  [] SyS_fdatasync+0x13/0x20
[10679.530054]  [] system_call_fastpath+0x1c/0x21
[10679.530058]  [] ? system_call_after_swapgs+0xc8/0x160
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120
seconds.
[10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[10679.533732] glusteriotwr13  D 9720a83f 0 15486  1
0x0080
[10679.533738] Call Trace:
[10679.533747]  [] schedule+0x29/0x70
[10679.533799]  [] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806]  [] ? wake_up_state+0x20/0x20
[10679.533846]  [] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852]  [] do_fsync+0x67/0xb0
[10679.533858]  [] ? system_call_after_swapgs+0xbc/0x160
[10679.533863]  [] SyS_fdatasync+0x13/0x20
[10679.533868]  [] system_call_fastpath+0x1c/0x21
[10679.533873]  [] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120
seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[10919.516663] glusterclogro   D 97209832bf40 0 14933  1
0x0080
[10919.516677] Call Trace:
[10919.516690]  [] schedule+0x29/0x70
[10919.516696]  [] schedule_timeout+0x239/0x2c0
[10919.516703]  [] ? blk_finish_plug+0x14/0x40
[10919.516768]  [] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774]  [] wait_for_completion+0xfd/0x140
[10919.516782]  [] ? wake_up_state+0x20/0x20
[10919.516821]  [] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859]  [] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902]  [] ? xfs_trans_read_buf_map+0x199/0x400
[xfs]
[10919.516940]  [] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977]  [] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022]  [] xfs_trans_read_buf_map+0x199/0x400
[xfs]
[10919.517057]  [] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091]  [] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126]  [] xfs_da3_node_lookup_int+0x6e/0x2f0
[xfs]
[10919.517160]  [] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194]  [] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233]  [] xfs_lookup+0x69/0x140 [xfs]
[10919.517271]  [] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278]  [] lookup_real+0x23/0x60
[10919.517283]  [] __lookup_hash+0x42/0x60
[10919.517288]  [] SYSC_renameat2+0x3a9/0x5a0
[10919.517296]  [] ? selinux_file_free_security+0x23/0x30
[10919.517304]  [] ? system_call_after_swapgs+0xc8/0x160
[10919.517309]  [] ? system_call_after_swapgs+0xbc/0x160
[10919.517313]  [] ? system_call_after_swapgs+0xc8/0x160
[10919.517318]  [] ? system_call_after_swapgs+0xbc/0x160
[10919.517323]  [] SyS_renameat2+0xe/0x10
[10919.517328]  [] SyS_rename+0x1e/0x20
[10919.517333]  [] system_call_fastpath+0x1c/0x21
[10919.517339]  [] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120
seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[11159.498978] glusteriotwr9   D 971fa0fa1fa0 0 15482  1
0x0080
[11159.498984] Call Trace:
[11159.498995]  [] ? bit_wait+0x50/0x50
[11159.498999]  [] schedule+0x29/0x70
[11159.499003]  [] schedule_timeout+0x239/0x2c0
[11159.499056]  [] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082]  [] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090]  [] ? ktime_get_ts64+0x52/0xf0
[11159.499093]  [] ? bit_wait+0x50/0x50
[11159.499097]  [] io_schedule_timeout+0xad/0x130
[11159.499101]  [] io_schedule+0x18/0x20
[11159.499104]  [] bit_wait_io+0x11/0x50
[11159.499107]  [] __wait_on_bit_lock+0x61/0xc0
[11159.499113]  [] __lock_page+0x74/0x90
[11159.499118]  [] ? wake_bit_function+0x40/0x40
[11159.499121]  [] __find_lock_page+0x54/0x70

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
I think this is the profile information for one of the volumes that lives
on the SSDs and is fully operational with no down/problem disks:

[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
--
Cumulative Stats:
   Block Size:256b+ 512b+
1024b+
 No. of Reads:  983  2696
1059
No. of Writes:0  1113
 302

   Block Size:   2048b+4096b+
8192b+
 No. of Reads:  852 88608
 53526
No. of Writes:  522812340
 76257

   Block Size:  16384b+   32768b+
 65536b+
 No. of Reads:54351241901
 15024
No. of Writes:21636  8656
8976

   Block Size: 131072b+
 No. of Reads:   524156
No. of Writes:   296071
 %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls          Fop
 ---------   -----------   -----------    -----------   ------------          ---
      0.00       0.00 us       0.00 us        0.00 us           4189      RELEASE
      0.00       0.00 us       0.00 us        0.00 us           1257   RELEASEDIR
      0.00      46.19 us      12.00 us      187.00 us             69        FLUSH
      0.00     147.00 us      78.00 us      367.00 us             86  REMOVEXATTR
      0.00     223.46 us      24.00 us     1166.00 us            149      READDIR
      0.00     565.34 us      76.00 us     3639.00 us             88    FTRUNCATE
      0.00     263.28 us      20.00 us    28385.00 us            228           LK
      0.00      98.84 us       2.00 us      880.00 us           1198      OPENDIR
      0.00      91.59 us      26.00 us    10371.00 us           3853       STATFS
      0.00     494.14 us      17.00 us   193439.00 us           1171     GETXATTR
      0.00     299.42 us      35.00 us     9799.00 us           2044     READDIRP
      0.00    1965.31 us     110.00 us   382258.00 us            321      XATTROP
      0.01     113.40 us      24.00 us    61061.00 us           8134         STAT
      0.01     755.38 us      57.00 us   607603.00 us           3196      DISCARD
      0.05    2690.09 us      58.00 us  2704761.00 us           3206         OPEN
      0.10  119978.25 us      97.00 us  9406684.00 us            154      SETATTR
      0.18     101.73 us      28.00 us   700477.00 us         313379        FSTAT
      0.23    1059.84 us      25.00 us  2716124.00 us          38255       LOOKUP
      0.47    1024.11 us      54.00 us  6197164.00 us          81455     FXATTROP
      1.72    2984.00 us      15.00 us 37098954.00 us         103020     FINODELK
      5.92   44315.32 us      51.00 us 24731536.00 us          23957        FSYNC
     13.27    2399.78 us      25.00 us 22089540.00 us         991005         READ
     37.00    5980.43 us      52.00 us 22099889.00 us        1108976        WRITE
     41.04    5452.75 us      13.00 us 22102452.00 us        1349053      INODELK

Duration: 10026 seconds
   Data Read: 80046027759 bytes
Data Written: 44496632320 bytes

Interval 1 Stats:
   Block Size:256b+ 512b+
1024b+
 No. of Reads:  983  2696
1059
No. of Writes:0   838
 185

   Block Size:   2048b+4096b+
8192b+
 No. of Reads:  852 85856
 51575
No. of Writes:  382705802
 57812

   Block Size:  16384b+   32768b+
 65536b+
 No. of Reads:52673232093
 14984
No. of Writes:13499  4908
4242

   Block Size: 131072b+
 No. of Reads:   460040
No. of Writes: 6411
 %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls          Fop
 ---------   -----------   -----------    -----------   ------------          ---
      0.00       0.00 us       0.00 us        0.00 us           2093      RELEASE
      0.00       0.00 us       0.00 us        0.00 us           1093   RELEASEDIR
      0.00      53.38 us      26.00 us      111.00 us             16        FLUSH
      0.00     145.14 us      78.00 us      367.00 us             71  REMOVEXATTR
      0.00     190.96 us     114.00 us      298.00 us             71      SETATTR
      0.00     213.38 us      24.00 us     1145.00 us             90      READDIR
      0.00     263.28 us      20.00 us    28385.00 us            228           LK
      0.00     101.76 us       2.00 us      880.00 us           1093      OPENDIR
      0.01      93.60 us      27.00 us    10371.00 us           3090       STATFS
      0.02     537.47 us      17.00 us   193439.00 us           1038     GETXATTR
      0.03     297.44 us      35.00 us     9799.00 us           1990     READDIRP
      0.03    2357.28 us     110.00 us   382258.00 us            253      XATTROP
      0.04     385.93 us      58.00 us    47593.00 us           2091         OPEN
      0.04     114.86 us      24.00 us    61061.00 us           7715         STAT

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
Thank you for your response.

I have 4 gluster volumes.  3 are replica 2 + arbitrator.  replica bricks
are on ovirt1 and ovirt2, arbitrator on ovirt3.  The 4th volume is replica
3, with a brick on all three ovirt machines.

The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same
in all three machines).  On ovirt3, the SSHD has reported hard IO failures,
and that brick is offline.  However, the other two replicas are fully
operational (although they still show contents in the heal info command
that won't go away, but that may be the case until I replace the failed
disk).

What is bothering me is that ALL 4 gluster volumes are showing horrible
performance issues.  At this point, as the bad disk has been completely
offlined, I would expect gluster to perform at normal speed, but that is
definitely not the case.

I've also noticed that the performance hits seem to come in waves: things
seem to work acceptably (but slow) for a while, then suddenly, it's as if
all disk IO on all volumes (including non-gluster local OS disk volumes for
the hosts) pauses for about 30 seconds, then IO resumes again.  During those
times, I start getting VM-not-responding and host-not-responding notices as
well as the applications having major issues.

I've shut down most of my VMs and am down to just my essential core VMs
(shedding about 75% of my VMs).  I am still experiencing the same issues.

Am I correct in believing that once the failed disk was brought offline
that performance should return to normal?

On Tue, May 29, 2018 at 1:27 PM, Alex K  wrote:

> I would check disks status and accessibility of mount points where your
> gluster volumes reside.
>
> On Tue, May 29, 2018, 22:28 Jim Kusznir  wrote:
>
>> On one ovirt server, I'm now seeing these messages:
>> [56474.239725] blk_update_request: 63 callbacks suppressed
>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
>>
>>
>>
>>
>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir 
>> wrote:
>>
>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>>>
>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|1|db_ctl_base|ERR|
>>> unix:/var/run/openvswitch/db.sock: database connection failed (No such
>>> file or directory)
>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|1|db_ctl_base|ERR|
>>> unix:/var/run/openvswitch/db.sock: database connection failed (No such
>>> file or directory)
>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|1|db_ctl_base|ERR|
>>> unix:/var/run/openvswitch/db.sock: database connection failed (No such
>>> file or directory)
>>> (appears a lot).
>>>
>>> I also found on the ssh session of that, some sysv warnings about the
>>> backing disk for one of the gluster volumes (straight replica 3).  The
>>> glusterfs process for that disk on that machine went offline.  Its my
>>> understanding that it should continue to work with the other two machines
>>> while I attempt to replace that disk, right?  Attempted writes (touching an
>>> empty file) can take 15 seconds, repeating it later will be much faster.
>>>
>>> Gluster generates a bunch of different log files, I don't know what ones
>>> you want, or from which machine(s).
>>>
>>> How do I do "volume profiling"?
>>>
>>> Thanks!
>>>
>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose  wrote:
>>>
 Do you see errors reported in the mount logs for the volume? If so,
 could you attach the logs?
 Any issues with your underlying disks. Can you also attach output of
 volume profiling?

 On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir 
 wrote:

> Ok, things have gotten MUCH worse this morning.  I'm getting random
> errors from VMs, right now, about a third of my VMs have been paused due 
> to
> storage issues, and most of the remaining VMs are not performing well.
>
> At this point, I am in full EMERGENCY mode, as my production services
> are now impacted, and I'm getting calls coming in with problems...
>
> I'd greatly appreciate help...VMs are running VERY slowly (when they
> run), and they are steadily getting worse.  I don't know why.  I was 
> seeing
> CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at 
> a
> time (while the VM became unresponsive and any VMs I was logged into that
> were linux were giving me the 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
Due to the cluster spiraling downward and increasing customer complaints, I
went ahead and finished the upgrade of the nodes to ovirt 4.2 and gluster
3.12.  It didn't seem to help at all.

I DO have one brick down on ONE of my 4 gluster
filesystems/exports/whatever.  The other 3 are fully available.  However, I
still see heavy IO wait, including on the perfectly healthy filesystem.
It's bad enough that I get ovirt e-mails warning of hosts going down and back
up, and VMs on the good gluster filesystem are reporting IO waits of greater
than 60% in top!  I have applications that are crashing due to the IO wait
issues.

I do think I got glusterfs profiling running, but I don't know how to get a
useful report out (it's in the oVirt GUI).  I did see read and write
operations showing about 30 seconds; I would have expected that to be MUCH
better.  (As I write this, my core VoIP server is now showing 99.1% IOWait
load... and that is customer calls failing/dropping).
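
(If it helps, the same data the GUI shows can be pulled from the command line
once profiling is started; a sketch, assuming the volume is named data-hdd as
elsewhere in this thread:)

  gluster volume profile data-hdd info

The %-latency and Max-Latency columns in that output are usually the quickest
way to see which operation type (FSYNC, WRITE, INODELK, ...) is stalling.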

PLEASE...how do I FIX this?

--JIm

On Tue, May 29, 2018 at 12:14 PM, Jim Kusznir  wrote:

> On one ovirt server, I'm now seeing these messages:
> [56474.239725] blk_update_request: 63 callbacks suppressed
> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
> [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
> [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
> [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
> [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
> [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
> [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
>
>
>
>
> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir  wrote:
>
>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>>
>> May 29 11:54:41 ovirt3 ovs-vsctl: 
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock:
>> database connection failed (No such file or directory)
>> May 29 11:54:51 ovirt3 ovs-vsctl: 
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock:
>> database connection failed (No such file or directory)
>> May 29 11:55:01 ovirt3 ovs-vsctl: 
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock:
>> database connection failed (No such file or directory)
>> (appears a lot).
>>
>> I also found on the ssh session of that, some sysv warnings about the
>> backing disk for one of the gluster volumes (straight replica 3).  The
>> glusterfs process for that disk on that machine went offline.  Its my
>> understanding that it should continue to work with the other two machines
>> while I attempt to replace that disk, right?  Attempted writes (touching an
>> empty file) can take 15 seconds, repeating it later will be much faster.
>>
>> Gluster generates a bunch of different log files, I don't know what ones
>> you want, or from which machine(s).
>>
>> How do I do "volume profiling"?
>>
>> Thanks!
>>
>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose  wrote:
>>
>>> Do you see errors reported in the mount logs for the volume? If so,
>>> could you attach the logs?
>>> Any issues with your underlying disks. Can you also attach output of
>>> volume profiling?
>>>
>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir 
>>> wrote:
>>>
 Ok, things have gotten MUCH worse this morning.  I'm getting random
 errors from VMs, right now, about a third of my VMs have been paused due to
 storage issues, and most of the remaining VMs are not performing well.

 At this point, I am in full EMERGENCY mode, as my production services
 are now impacted, and I'm getting calls coming in with problems...

 I'd greatly appreciate help...VMs are running VERY slowly (when they
 run), and they are steadily getting worse.  I don't know why.  I was seeing
 CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a
 time (while the VM became unresponsive and any VMs I was logged into that
 were linux were giving me the CPU stuck messages in my origional post).  Is
 all this storage related?

 I also have two different gluster volumes for VM storage, and only one
 had the issues, but now VMs in both are being affected at the same time and
 same way.

 --Jim

 On Mon, May 28, 2018 at 10:50 PM, Sahina Bose 
 wrote:

> [Adding gluster-users to look at the heal issue]
>
> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir 
> wrote:
>
>> Hello:
>>
>> I've been having some cluster and gluster performance issues lately.
>> I also found that my cluster was out of date, and was trying to apply
>> updates (hoping to fix some of these), and discovered the ovirt 4.1 repos
>> were taken 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Alex K
I would check disks status and accessibility of mount points where your
gluster volumes reside.

On Tue, May 29, 2018, 22:28 Jim Kusznir  wrote:

> On one ovirt server, I'm now seeing these messages:
> [56474.239725] blk_update_request: 63 callbacks suppressed
> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
> [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
> [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
> [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
> [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
> [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
> [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
>
>
>
>
> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir  wrote:
>
>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>>
>> May 29 11:54:41 ovirt3 ovs-vsctl:
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
>> connection failed (No such file or directory)
>> May 29 11:54:51 ovirt3 ovs-vsctl:
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
>> connection failed (No such file or directory)
>> May 29 11:55:01 ovirt3 ovs-vsctl:
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
>> connection failed (No such file or directory)
>> (appears a lot).
>>
>> I also found on the ssh session of that, some sysv warnings about the
>> backing disk for one of the gluster volumes (straight replica 3).  The
>> glusterfs process for that disk on that machine went offline.  Its my
>> understanding that it should continue to work with the other two machines
>> while I attempt to replace that disk, right?  Attempted writes (touching an
>> empty file) can take 15 seconds, repeating it later will be much faster.
>>
>> Gluster generates a bunch of different log files, I don't know what ones
>> you want, or from which machine(s).
>>
>> How do I do "volume profiling"?
>>
>> Thanks!
>>
>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose  wrote:
>>
>>> Do you see errors reported in the mount logs for the volume? If so,
>>> could you attach the logs?
>>> Any issues with your underlying disks. Can you also attach output of
>>> volume profiling?
>>>
>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir 
>>> wrote:
>>>
 Ok, things have gotten MUCH worse this morning.  I'm getting random
 errors from VMs, right now, about a third of my VMs have been paused due to
 storage issues, and most of the remaining VMs are not performing well.

 At this point, I am in full EMERGENCY mode, as my production services
 are now impacted, and I'm getting calls coming in with problems...

 I'd greatly appreciate help...VMs are running VERY slowly (when they
 run), and they are steadily getting worse.  I don't know why.  I was seeing
 CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a
 time (while the VM became unresponsive and any VMs I was logged into that
 were linux were giving me the CPU stuck messages in my origional post).  Is
 all this storage related?

 I also have two different gluster volumes for VM storage, and only one
 had the issues, but now VMs in both are being affected at the same time and
 same way.

 --Jim

 On Mon, May 28, 2018 at 10:50 PM, Sahina Bose 
 wrote:

> [Adding gluster-users to look at the heal issue]
>
> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir 
> wrote:
>
>> Hello:
>>
>> I've been having some cluster and gluster performance issues lately.
>> I also found that my cluster was out of date, and was trying to apply
>> updates (hoping to fix some of these), and discovered the ovirt 4.1 repos
>> were taken completely offline.  So, I was forced to begin an upgrade to
>> 4.2.  According to docs I found/read, I needed only add the new repo, do 
>> a
>> yum update, reboot, and be good on my hosts (did the yum update, the
>> engine-setup on my hosted engine).  Things seemed to work relatively 
>> well,
>> except for a gluster sync issue that showed up.
>>
>> My cluster is a 3 node hyperconverged cluster.  I upgraded the hosted
>> engine first, then engine 3.  When engine 3 came back up, for some reason
>> one of my gluster volumes would not sync.  Here's sample output:
>>
>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>
>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>
>> 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
On one ovirt server, I'm now seeing these messages:
[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
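
(To see which device-mapper volume dm-2 actually is, and which physical disk
sits underneath it, something like the following works; the column choices are
just an example:)

  ls -l /dev/mapper/          # symlink targets show which LV/multipath name maps to dm-2
  lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT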





[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):

May 29 11:54:41 ovirt3 ovs-vsctl:
ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl:
ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl:
ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
connection failed (No such file or directory)
(appears a lot).
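
That message just means ovs-vsctl cannot reach the OVS database socket, i.e.
ovsdb-server is not running on that host. A quick check/recovery, assuming the
usual CentOS/RHEL unit names:

systemctl status ovsdb-server ovs-vswitchd   # is the OVS database/daemon running?
systemctl restart openvswitch                # should recreate /var/run/openvswitch/db.sock
ovs-vsctl show                               # should connect once ovsdb-server is back up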

I also found, in the ssh session on that host, some sysv warnings about the
backing disk for one of the gluster volumes (straight replica 3).  The
glusterfs process for that disk on that machine went offline.  It's my
understanding that the volume should continue to work with the other two
machines while I attempt to replace that disk, right?  Attempted writes
(touching an empty file) can take 15 seconds; repeating the same write later
is much faster.

Gluster generates a number of different log files; I don't know which ones
you want, or from which machine(s).
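
For reference, the logs usually requested in cases like this are the FUSE
mount (client) log and the brick logs; on a stock GlusterFS install they live
roughly as below (exact file names depend on the mount and brick paths, so
the names here are examples):

ls /var/log/glusterfs/           # client/mount logs, plus glusterd.log and glustershd.log
ls /var/log/glusterfs/bricks/    # one log per brick, e.g. gluster-brick3-data-hdd.log
grep ' E ' /var/log/glusterfs/glustershd.log | tail -n 50   # recent self-heal daemon errors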

How do I do "volume profiling"?

Thanks!

On Tue, May 29, 2018 at 11:53 AM, Sahina Bose  wrote:

> Do you see errors reported in the mount logs for the volume? If so, could
> you attach the logs?
> Any issues with your underlying disks. Can you also attach output of
> volume profiling?
>
> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir  wrote:
>
>> Ok, things have gotten MUCH worse this morning.  I'm getting random
>> errors from VMs, right now, about a third of my VMs have been paused due to
>> storage issues, and most of the remaining VMs are not performing well.
>>
>> At this point, I am in full EMERGENCY mode, as my production services are
>> now impacted, and I'm getting calls coming in with problems...
>>
>> I'd greatly appreciate help...VMs are running VERY slowly (when they
>> run), and they are steadily getting worse.  I don't know why.  I was seeing
>> CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a
>> time (while the VM became unresponsive and any VMs I was logged into that
>> were linux were giving me the CPU stuck messages in my origional post).  Is
>> all this storage related?
>>
>> I also have two different gluster volumes for VM storage, and only one
>> had the issues, but now VMs in both are being affected at the same time and
>> same way.
>>
>> --Jim
>>
>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose  wrote:
>>
>>> [Adding gluster-users to look at the heal issue]
>>>
>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir 
>>> wrote:
>>>
 Hello:

 I've been having some cluster and gluster performance issues lately.  I
 also found that my cluster was out of date, and was trying to apply updates
 (hoping to fix some of these), and discovered the ovirt 4.1 repos were
 taken completely offline.  So, I was forced to begin an upgrade to 4.2.
 According to docs I found/read, I needed only add the new repo, do a yum
 update, reboot, and be good on my hosts (did the yum update, the
 engine-setup on my hosted engine).  Things seemed to work relatively well,
 except for a gluster sync issue that showed up.

 My cluster is a 3 node hyperconverged cluster.  I upgraded the hosted
 engine first, then engine 3.  When engine 3 came back up, for some reason
 one of my gluster volumes would not sync.  Here's sample output:

 [root@ovirt3 ~]# gluster volume heal data-hdd info
 Brick 172.172.1.11:/gluster/brick3/data-hdd
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4
 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4
 cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4
 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4
 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4
 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4
 ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4
 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4
 aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
 Status: Connected
 Number of entries: 8

 Brick 172.172.1.12:/gluster/brick3/data-hdd
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4
 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4
 cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
 

[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Sahina Bose
Do you see errors reported in the mount logs for the volume? If so, could
you attach the logs?
Any issues with your underlying disks? Can you also attach the output of
volume profiling?
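
For reference, "volume profiling" is Gluster's built-in per-brick profiler;
the output is typically gathered like this, with data-hdd as the example
volume and the capture window covering a period of slowness:

gluster volume profile data-hdd start
# ...reproduce the slow behaviour for a few minutes...
gluster volume profile data-hdd info > /tmp/data-hdd-profile.txt
gluster volume profile data-hdd stop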


[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Jim Kusznir
Ok, things have gotten MUCH worse this morning.  I'm getting random errors
from VMs; right now about a third of my VMs have been paused due to storage
issues, and most of the remaining VMs are not performing well.

At this point, I am in full EMERGENCY mode, as my production services are
now impacted, and I'm getting calls coming in with problems...

I'd greatly appreciate help... VMs are running VERY slowly (when they run),
and they are steadily getting worse.  I don't know why.  I was seeing CPU
peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time
(while the VM became unresponsive, and any Linux VMs I was logged into were
giving me the CPU-stuck messages from my original post).  Is all this
storage related?
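
When VMs get paused for storage errors like this, the first checks are
usually whether all bricks are up, whether heals are piling up, and whether
the gluster mounts on the hosts still respond; a rough triage pass might look
like this (volume name and oVirt mount path are examples and may differ):

gluster volume status data-hdd           # are all bricks and self-heal daemons online?
gluster volume heal data-hdd info        # outstanding heal entries per brick
gluster peer status                      # is any node disconnected?
df -h /rhev/data-center/mnt/glusterSD/*  # do the gluster storage-domain mounts still answer?
time touch /rhev/data-center/mnt/glusterSD/<domain-mount>/probe-file   # crude write-latency check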

I also have two different gluster volumes for VM storage; only one had the
issues at first, but now VMs in both are being affected at the same time and
in the same way.

--Jim


[ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-28 Thread Sahina Bose
[Adding gluster-users to look at the heal issue]
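
For the heal question quoted below, the usual way to check and re-trigger
healing on a replica-3 volume is roughly this (data-hdd as the volume name,
run from any node):

gluster volume heal data-hdd full                   # queue a full heal
gluster volume heal data-hdd info                   # entries still pending per brick
gluster volume heal data-hdd statistics heal-count  # pending-heal counts only
gluster volume heal data-hdd info split-brain       # confirm nothing is in split-brain

If the same entries never drain, the self-heal daemon log on each node
(/var/log/glusterfs/glustershd.log) usually says why the heal attempts fail.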

On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir  wrote:

> Hello:
>
> I've been having some cluster and gluster performance issues lately.  I
> also found that my cluster was out of date and was trying to apply updates
> (hoping to fix some of these), and I discovered the ovirt 4.1 repos had
> been taken completely offline.  So, I was forced to begin an upgrade to
> 4.2.  According to the docs I found/read, I needed only to add the new
> repo, do a yum update, and reboot to be good on my hosts (I did the yum
> update, plus engine-setup on my hosted engine).  Things seemed to work
> relatively well, except for a gluster sync issue that showed up.
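
For what it's worth, the 4.1 -> 4.2 host path described above usually amounts
to something like the following (the release-package URL is quoted from
memory, so treat it as an assumption and check the current oVirt docs):

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm   # add the 4.2 repos (assumed URL)
yum update    # update the host packages
reboot        # pick up the new kernel/vdsm stack
# on the hosted-engine VM, after its packages are updated:
engine-setup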
>
> My cluster is a 3-node hyperconverged cluster.  I upgraded the hosted
> engine first, then node 3 (ovirt3).  When node 3 came back up, for some
> reason one of my gluster volumes would not sync.  Here's sample output:
>
> [root@ovirt3 ~]# gluster volume heal data-hdd info
> Brick 172.172.1.11:/gluster/brick3/data-hdd
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-
> 7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-
> f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-
> b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-
> d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-
> 6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-
> 0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-
> e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-
> b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
> Status: Connected
> Number of entries: 8
>
> Brick 172.172.1.12:/gluster/brick3/data-hdd
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-
> 7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-
> f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-
> 0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-
> b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-
> d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-
> e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-
> b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-
> 6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
> Status: Connected
> Number of entries: 8
>
> Brick 172.172.1.13:/gluster/brick3/data-hdd
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-
> 0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-
> e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-
> b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-
> 6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-
> f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-
> 7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-
> b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-
> d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
> Status: Connected
> Number of entries: 8
>
> -
> It's been in this state for a couple of days now, and bandwidth monitoring
> shows no appreciable data moving.  I've repeatedly triggered a full heal
> from all three nodes in the cluster.  It's always the same files that need
> healing.
>
> When running "gluster volume heal data-hdd statistics", I sometimes see
> different information, but always some number of "heal failed" entries.  It
> shows 0 for split-brain.
>
> I'm not quite sure what to do.  I suspect it may be due to nodes 1 and 2
> still being on the older ovirt/gluster release, but I'm afraid to upgrade
> and reboot them until I have a good gluster sync (I don't want to create a
> split-brain issue).  How do I proceed with this?
>
> Second issue: I've been experiencing VERY POOR performance on most of my
>