[Gluster-users] KVM lockups on Gluster 4.1.1

2018-08-27 Thread Dmitry Melekhov

Hello!


Yesterday we hit something like this on 4.1.2.

CentOS 7.5.


The volume is replicated - two bricks and one arbiter.
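
For reference, a volume like this (replica 3 with one arbiter brick) is created roughly as below - hostnames and brick paths are only examples here, but the volume name 'pool' matches what shows up in the brick log further down:

gluster volume create pool replica 3 arbiter 1 \
    node1:/bricks/pool node2:/bricks/pool node3:/bricks/pool-arbiter
gluster volume start pool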


We rebooted the arbiter, waited for the heal to finish, and tried to live-migrate a VM
to another node (we run the VMs on the gluster nodes):



[2018-08-27 09:56:22.085411] I [MSGID: 115029] [server-handshake.c:763:server_setvolume] 0-pool-server: accepted client from CTX_ID:b55f4a90-e241-48ce-bd4d-268c8a956f4a-GRAPH_ID:0-PID:8887-HOST:son-PC_NAME:pool-client-6-RECON_NO:-0 (version: 4.1.2)
[2018-08-27 09:56:22.107609] I [MSGID: 115036] [server.c:483:server_rpc_notify] 0-pool-server: disconnecting connection from CTX_ID:b55f4a90-e241-48ce-bd4d-268c8a956f4a-GRAPH_ID:0-PID:8887-HOST:son-PC_NAME:pool-client-6-RECON_NO:-0
[2018-08-27 09:56:22.107747] I [MSGID: 101055] [client_t.c:444:gf_client_unref] 0-pool-server: Shutting down connection CTX_ID:b55f4a90-e241-48ce-bd4d-268c8a956f4a-GRAPH_ID:0-PID:8887-HOST:son-PC_NAME:pool-client-6-RECON_NO:-0
[2018-08-27 09:58:37.905829] I [MSGID: 115036] [server.c:483:server_rpc_notify] 0-pool-server: disconnecting connection from CTX_ID:c3eb6cfc-2ef9-470a-89d1-a87170d00da5-GRAPH_ID:0-PID:30292-HOST:father-PC_NAME:pool-client-6-RECON_NO:-0
[2018-08-27 09:58:37.905926] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28c831d8bc55}
[2018-08-27 09:58:37.905959] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2870a7d6bc55}
[2018-08-27 09:58:37.905979] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2880a7d6bc55}
[2018-08-27 09:58:37.905997] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28f031d8bc55}
[2018-08-27 09:58:37.906016] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28b07dd5bc55}
[2018-08-27 09:58:37.906034] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28e0a7d6bc55}
[2018-08-27 09:58:37.906056] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28b845d8bc55}
[2018-08-27 09:58:37.906079] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2858a7d8bc55}
[2018-08-27 09:58:37.906098] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2868a8d7bc55}
[2018-08-27 09:58:37.906121] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28f80bd7bc55}

...

[2018-08-27 09:58:37.907375] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=28a8cdd6bc55}
[2018-08-27 09:58:37.907393] W [inodelk.c:610:pl_inodelk_log_cleanup] 
0-pool-server: releasing lock on 12172afe-f0a4-4e10-bc0f-c5e4e0d9f318 
held by {client=0x7ffb58035bc0, pid=30292 lk-owner=2880cdd6bc55}
[2018-08-27 09:58:37.907476] I [socket.c:3837:socket_submit_reply] 
0-tcp.pool-server: not connected (priv->connected = -1)
[2018-08-27 09:58:37.907520] E [rpcsvc.c:1378:rpcsvc_submit_generic] 
0-rpc-service: failed to submit message (XID: 0xcb88cb, Program: 
GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport 
(tcp.pool-server)
[2018-08-27 09:58:37.910727] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
[2018-08-27 09:58:37.910814] E [rpcsvc.c:1378:rpcsvc_submit_generic] 
0-rpc-service: failed to submit message (XID: 0xcb88ce, Program: 
GlusterFS 4.x v1, ProgVers: 400, Proc: 30) to rpc-transport 
(tcp.pool-server)
[2018-08-27 09:58:37.910861] E [server.c:137:server_submit_reply] (-->/usr/lib64/glusterfs/4.1.2/xlator/debug/io-stats.so(+0x20084) [0x7ffb64379084] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0x605ba) [0x7ffb5fddf5ba] -->/usr/lib64/glusterfs/4.1.2/xlator/protocol/server.so(+0xafce) [0x7ffb5fd89fce] ) 0-: Reply submission failed
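
In case it helps anyone hitting the same thing: the locks the brick is cleaning up above can be inspected with a statedump, roughly like this (volume name 'pool' as in our logs; the dump path is the default and may differ on your setup):

gluster volume statedump pool
# dump files land under /var/run/gluster/ on each brick node; the inodelk
# sections show which client / lk-owner still holds a lock on a given gfid
grep -B2 -A5 'inodelk' /var/run/gluster/*.dump.*

# a lock that really is stale can be released per file, e.g. for the whole range:
gluster volume clear-locks pool /path/inside/volume kind granted inode 0,0-0
# (check the clear-locks usage message for the exact inode/entry/posix range syntax)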

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-27 Thread Hu Bert
Good Morning,

today I updated and rebooted all gluster servers: kernel update to
4.9.0-8 and gluster to 3.12.13. The reboots went fine, but on one of the
gluster servers (gluster13) one of the bricks came up at the
beginning but then lost its connection.

OK:

Status of volume: shared
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
[...]
Brick gluster11:/gluster/bricksdd1/shared      49155     0          Y       2506
Brick gluster12:/gluster/bricksdd1_new/shared  49155     0          Y       2097
Brick gluster13:/gluster/bricksdd1_new/shared  49155     0          Y       2136

Lost connection:

Brick gluster11:/gluster/bricksdd1/shared      49155     0          Y       2506
Brick gluster12:/gluster/bricksdd1_new/shared  49155     0          Y       2097
Brick gluster13:/gluster/bricksdd1_new/shared  N/A       N/A        N       N/A

gluster volume heal shared info:
Brick gluster13:/gluster/bricksdd1_new/shared
Status: Transport endpoint is not connected
Number of entries: -

The reboot was at 06:15:39; the brick then worked for a short period, but then
somehow disconnected.
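
For the record, this is roughly what I look at in such a case (the brick log name follows the usual scheme - slashes in the brick path become dashes):

gluster volume status shared
ps aux | grep bricksdd1_new        # is the brick process still running?
# the brick's own log usually contains the reason (or a crash backtrace):
less /var/log/glusterfs/bricks/gluster-bricksdd1_new-shared.log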

from gluster13:/var/log/glusterfs/glusterd.log:

[2018-08-28 04:27:36.944608] I [MSGID: 106005]
[glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management:
Brick gluster13:/gluster/bricksdd1_new/shared has disconnected from
glusterd.
[2018-08-28 04:28:57.869666] I
[glusterd-utils.c:6056:glusterd_brick_start] 0-management: starting a
fresh brick process for brick /gluster/bricksdd1_new/shared
[2018-08-28 04:35:20.732666] I [MSGID: 106143]
[glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick
/gluster/bricksdd1_new/shared on port 49157

After 'gluster volume start shared force' (then with new port 49157):

Brick gluster11:/gluster/bricksdd1/shared      49155     0          Y       2506
Brick gluster12:/gluster/bricksdd1_new/shared  49155     0          Y       2097
Brick gluster13:/gluster/bricksdd1_new/shared  49157     0          Y       3994

from /var/log/syslog:

Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: pending frames:
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: frame :
type(0) op(0)
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: frame :
type(0) op(0)
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]:
patchset: git://git.gluster.org/glusterfs.git
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: signal
received: 11
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: time of crash:
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]:
2018-08-28 04:27:36
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]:
configuration details:
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: argp 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: backtrace 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: dlfcn 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: libpthread 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: llistxattr 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: setfsid 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: spinlock 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: epoll.h 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: xattr.h 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: st_atim.tv_nsec 1
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]:
package-string: glusterfs 3.12.13
Aug 28 06:27:36 gluster13 gluster-bricksdd1_new-shared[2136]: -
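
Signal 11 means the brick process segfaulted. If a core file was written, a backtrace can be extracted with something like the following (the core location depends on kernel.core_pattern / systemd-coredump, and the matching glusterfs debug symbols need to be installed first); the brick log itself usually also prints a short backtrace right after the lines above:

gdb /usr/sbin/glusterfsd /path/to/core \
    -batch -ex 'thread apply all bt full' > /tmp/bricksdd1_new-backtrace.txt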

There are some errors and warnings in shared.log (the volume logfile), but
no error message telling me why
gluster13:/gluster/bricksdd1_new/shared has disconnected.

Well... at the moment the load is OK, all 3 servers at about 15 (but I
think it will go up when more users cause more traffic -> more
work for the servers). 'gluster volume heal shared info' shows no entries;
status:

Status of volume: shared
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster11:/gluster/bricksda1/shared   49152 0  Y   2482
Brick gluster12:/gluster/bricksda1/shared   49152 0  Y   2088
Brick gluster13:/gluster/bricksda1/shared   49152 0  Y   2115
Brick gluster11:/gluster/bricksdb1/shared   49153 0  Y   2489
Brick gluster12:/gluster/bricksdb1/shared   49153 0  Y   2094
Brick gluster13:/gluster/bricksdb1/shared   49153 0  Y   2116
Brick gluster11:/gluster/bricksdc1/shared   49154 0  Y   2497
Brick gluster12:/gluster/bricksdc1/shared   49154 0  Y   2095
Brick gluster13:/gluster/bricksdc1/shared   49154 0  Y   2127
Brick gluster11:/gluster/bricksdd1/shared   49155 0  Y   2506
Brick 

Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-27 Thread Jiffin Tony Thottan



On Monday 27 August 2018 01:57 PM, Pasi Kärkkäinen wrote:

Hi,

On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote:

The Gluster community is pleased to announce the release of Gluster
3.12.13 (packages available at [1,2,3]).

Release notes for the release can be found at [4].

Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.13/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes:
https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/


Hmm, I guess the release-notes link should say
https://gluster.readthedocs.io/en/latest/release-notes/3.12.13 instead... but
that page doesn't seem to exist (yet)?


It got fixed now :)

Thanks,
Jiffin





Thanks,

-- Pasi




Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-27 Thread Hu Bert
Yeah, on Debian xyz.log.1 is always the former logfile that has been
rotated by logrotate. I just checked the 3 servers: now it looks good, I
will check it again tomorrow. Very strange, maybe logrotate hasn't
worked properly.
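
A quick way to verify the rotation config itself is a logrotate dry run - the path below is where the Debian package usually puts it, adjust if yours differs:

logrotate -d /etc/logrotate.d/glusterfs-common   # -d = debug/dry run, rotates nothing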

the performance problems remain :-)

2018-08-27 15:41 GMT+02:00 Milind Changire :
> On Thu, Aug 23, 2018 at 5:28 PM, Pranith Kumar Karampuri
>  wrote:
>>
>> On Wed, Aug 22, 2018 at 12:01 PM Hu Bert  wrote:
>>>
>>> Just an addition: in general there are no log messages in
>>> /var/log/glusterfs/ (if you don't call 'gluster volume ...'), but on
>>> the node with the lowest load I see in cli.log.1:
>>>
>>> [2018-08-22 06:20:43.291055] I [socket.c:2474:socket_event_handler]
>>> 0-transport: EPOLLERR - disconnecting now
>>> [2018-08-22 06:20:46.291327] I [socket.c:2474:socket_event_handler]
>>> 0-transport: EPOLLERR - disconnecting now
>>> [2018-08-22 06:20:49.291575] I [socket.c:2474:socket_event_handler]
>>> 0-transport: EPOLLERR - disconnecting now
>>>
>>> every 3 seconds. Looks like this bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1484885 - but that should
>>> have been fixed in the 3.12.x release, and network is fine.
>>
>>
>> +Milind Changire
>
> That's odd. Presuming cli.log.1 is the logrotated file, it should be showing
> older log entries than cli.log. But it's not the case here.
> Or maybe there's something running on the command line on the node with the
> lowest load.
>
>>
>>>
>>> In cli.log there are only these entries:
>>>
>>> [2018-08-22 06:19:23.428520] I [cli.c:765:main] 0-cli: Started running
>>> gluster with version 3.12.12
>>> [2018-08-22 06:19:23.800895] I [MSGID: 101190]
>>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>>> thread with index 1
>>> [2018-08-22 06:19:23.800978] I [socket.c:2474:socket_event_handler]
>>> 0-transport: EPOLLERR - disconnecting now
>>> [2018-08-22 06:19:23.809366] I [input.c:31:cli_batch] 0-: Exiting with: 0
>>>
>>> Just wondered if this could be related anyhow.
>>>
>>> 2018-08-21 8:17 GMT+02:00 Pranith Kumar Karampuri :
>>> >
>>> >
>>> > On Tue, Aug 21, 2018 at 11:40 AM Hu Bert 
>>> > wrote:
>>> >>
>>> >> Good morning :-)
>>> >>
>>> >> gluster11:
>>> >> ls -l /gluster/bricksdd1/shared/.glusterfs/indices/xattrop/
>>> >> total 0
>>> >> -- 1 root root 0 Aug 14 06:14
>>> >> xattrop-006b65d8-9e81-4886-b380-89168ea079bd
>>> >>
>>> >> gluster12:
>>> >> ls -l /gluster/bricksdd1_new/shared/.glusterfs/indices/xattrop/
>>> >> total 0
>>> >> -- 1 root root 0 Jul 17 11:24
>>> >> xattrop-c7c6f765-ce17-4361-95fb-2fd7f31c7b82
>>> >>
>>> >> gluster13:
>>> >> ls -l /gluster/bricksdd1_new/shared/.glusterfs/indices/xattrop/
>>> >> total 0
>>> >> -- 1 root root 0 Aug 16 07:54
>>> >> xattrop-16b696a0-4214-4999-b277-0917c76c983e
>>> >>
>>> >>
>>> >> And here's the output of 'perf ...' which ran almost a minute - file
>>> >> grew pretty fast to a size of 17 GB and system load went up heavily.
>>> >> Had to wait a while until load dropped :-)
>>> >>
>>> >> fyi - load at the moment:
>>> >> load gluster11: ~90
>>> >> load gluster12: ~10
>>> >> load gluster13: ~50
>>> >>
>>> >> perf record --call-graph=dwarf -p 7897 -o
>>> >> /tmp/perf.gluster11.bricksdd1.out
>>> >> [ perf record: Woken up 9837 times to write data ]
>>> >> Warning:
>>> >> Processed 2137218 events and lost 33446 chunks!
>>> >>
>>> >> Check IO/CPU overload!
>>> >>
>>> >> [ perf record: Captured and wrote 16576.374 MB
>>> >> /tmp/perf.gluster11.bricksdd1.out (2047760 samples) ]
>>> >>
>>> >> Here's an excerpt.
>>> >>
>>> >> +1.93% 0.00%  glusteriotwr0[unknown]  [k]
>>> >> 0x
>>> >> +1.89% 0.00%  glusteriotwr28   [unknown]  [k]
>>> >> 0x
>>> >> +1.86% 0.00%  glusteriotwr15   [unknown]  [k]
>>> >> 0x
>>> >> +1.85% 0.00%  glusteriotwr63   [unknown]  [k]
>>> >> 0x
>>> >> +1.83% 0.01%  glusteriotwr0[kernel.kallsyms]  [k]
>>> >> entry_SYSCALL_64_after_swapgs
>>> >> +1.82% 0.00%  glusteriotwr38   [unknown]  [k]
>>> >> 0x
>>> >> +1.82% 0.01%  glusteriotwr28   [kernel.kallsyms]  [k]
>>> >> entry_SYSCALL_64_after_swapgs
>>> >> +1.82% 0.00%  glusteriotwr0[kernel.kallsyms]  [k]
>>> >> do_syscall_64
>>> >> +1.81% 0.00%  glusteriotwr28   [kernel.kallsyms]  [k]
>>> >> do_syscall_64
>>> >> +1.81% 0.00%  glusteriotwr15   [kernel.kallsyms]  [k]
>>> >> entry_SYSCALL_64_after_swapgs
>>> >> +1.81% 0.00%  glusteriotwr36   [unknown]  [k]
>>> >> 0x
>>> >> +1.80% 0.00%  glusteriotwr15   [kernel.kallsyms]  [k]
>>> >> do_syscall_64
>>> >> +1.78% 0.01%  glusteriotwr63   [kernel.kallsyms]  [k]
>>> >> entry_SYSCALL_64_after_swapgs
>>> >> +1.77% 0.00%  glusteriotwr63   [kernel.kallsyms]  [k]
>>> >> do_syscall_64
>>> >> +1.75% 0.01%  glusteriotwr38   

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-27 Thread Milind Changire
On Thu, Aug 23, 2018 at 5:28 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> On Wed, Aug 22, 2018 at 12:01 PM Hu Bert  wrote:
>
>> Just an addition: in general there are no log messages in
>> /var/log/glusterfs/ (if you don't call 'gluster volume ...'), but on
>> the node with the lowest load I see in cli.log.1:
>>
>> [2018-08-22 06:20:43.291055] I [socket.c:2474:socket_event_handler]
>> 0-transport: EPOLLERR - disconnecting now
>> [2018-08-22 06:20:46.291327] I [socket.c:2474:socket_event_handler]
>> 0-transport: EPOLLERR - disconnecting now
>> [2018-08-22 06:20:49.291575] I [socket.c:2474:socket_event_handler]
>> 0-transport: EPOLLERR - disconnecting now
>>
>> every 3 seconds. Looks like this bug:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484885 - but that should
>> have been fixed in the 3.12.x release, and network is fine.
>>
>
> +Milind Changire 
>
That's odd. Presuming cli.log.1 is the logrotated file, it should be
showing older log entries than cli.log. But it's not the case here.
Or maybe there's something running on the command line on the node with
the lowest load.
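
One quick way to check that would be something like this on that node (cli.log path assumed to be the default):

ps -ef | grep '[g]luster volume'    # anything (cron, monitoring, ...) invoking the CLI right now?
grep 'Started running' /var/log/glusterfs/cli.log* | tail    # how often the CLI gets started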


>
>> In cli.log there are only these entries:
>>
>> [2018-08-22 06:19:23.428520] I [cli.c:765:main] 0-cli: Started running
>> gluster with version 3.12.12
>> [2018-08-22 06:19:23.800895] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 1
>> [2018-08-22 06:19:23.800978] I [socket.c:2474:socket_event_handler]
>> 0-transport: EPOLLERR - disconnecting now
>> [2018-08-22 06:19:23.809366] I [input.c:31:cli_batch] 0-: Exiting with: 0
>>
>> Just wondered if this could be related anyhow.
>>
>> 2018-08-21 8:17 GMT+02:00 Pranith Kumar Karampuri :
>> >
>> >
>> > On Tue, Aug 21, 2018 at 11:40 AM Hu Bert 
>> wrote:
>> >>
>> >> Good morning :-)
>> >>
>> >> gluster11:
>> >> ls -l /gluster/bricksdd1/shared/.glusterfs/indices/xattrop/
>> >> total 0
>> >> -- 1 root root 0 Aug 14 06:14
>> >> xattrop-006b65d8-9e81-4886-b380-89168ea079bd
>> >>
>> >> gluster12:
>> >> ls -l /gluster/bricksdd1_new/shared/.glusterfs/indices/xattrop/
>> >> total 0
>> >> -- 1 root root 0 Jul 17 11:24
>> >> xattrop-c7c6f765-ce17-4361-95fb-2fd7f31c7b82
>> >>
>> >> gluster13:
>> >> ls -l /gluster/bricksdd1_new/shared/.glusterfs/indices/xattrop/
>> >> total 0
>> >> -- 1 root root 0 Aug 16 07:54
>> >> xattrop-16b696a0-4214-4999-b277-0917c76c983e
>> >>
>> >>
>> >> And here's the output of 'perf ...' which ran almost a minute - file
>> >> grew pretty fast to a size of 17 GB and system load went up heavily.
>> >> Had to wait a while until load dropped :-)
>> >>
>> >> fyi - load at the moment:
>> >> load gluster11: ~90
>> >> load gluster12: ~10
>> >> load gluster13: ~50
>> >>
>> >> perf record --call-graph=dwarf -p 7897 -o
>> >> /tmp/perf.gluster11.bricksdd1.out
>> >> [ perf record: Woken up 9837 times to write data ]
>> >> Warning:
>> >> Processed 2137218 events and lost 33446 chunks!
>> >>
>> >> Check IO/CPU overload!
>> >>
>> >> [ perf record: Captured and wrote 16576.374 MB
>> >> /tmp/perf.gluster11.bricksdd1.out (2047760 samples) ]
>> >>
>> >> Here's an excerpt.
>> >>
>> >> +1.93% 0.00%  glusteriotwr0[unknown]  [k]
>> >> 0x
>> >> +1.89% 0.00%  glusteriotwr28   [unknown]  [k]
>> >> 0x
>> >> +1.86% 0.00%  glusteriotwr15   [unknown]  [k]
>> >> 0x
>> >> +1.85% 0.00%  glusteriotwr63   [unknown]  [k]
>> >> 0x
>> >> +1.83% 0.01%  glusteriotwr0[kernel.kallsyms]  [k]
>> >> entry_SYSCALL_64_after_swapgs
>> >> +1.82% 0.00%  glusteriotwr38   [unknown]  [k]
>> >> 0x
>> >> +1.82% 0.01%  glusteriotwr28   [kernel.kallsyms]  [k]
>> >> entry_SYSCALL_64_after_swapgs
>> >> +1.82% 0.00%  glusteriotwr0[kernel.kallsyms]  [k]
>> >> do_syscall_64
>> >> +1.81% 0.00%  glusteriotwr28   [kernel.kallsyms]  [k]
>> >> do_syscall_64
>> >> +1.81% 0.00%  glusteriotwr15   [kernel.kallsyms]  [k]
>> >> entry_SYSCALL_64_after_swapgs
>> >> +1.81% 0.00%  glusteriotwr36   [unknown]  [k]
>> >> 0x
>> >> +1.80% 0.00%  glusteriotwr15   [kernel.kallsyms]  [k]
>> >> do_syscall_64
>> >> +1.78% 0.01%  glusteriotwr63   [kernel.kallsyms]  [k]
>> >> entry_SYSCALL_64_after_swapgs
>> >> +1.77% 0.00%  glusteriotwr63   [kernel.kallsyms]  [k]
>> >> do_syscall_64
>> >> +1.75% 0.01%  glusteriotwr38   [kernel.kallsyms]  [k]
>> >> entry_SYSCALL_64_after_swapgs
>> >> +1.75% 0.00%  glusteriotwr38   [kernel.kallsyms]  [k]
>> >> do_syscall_64
>> >> +1.74% 0.00%  glusteriotwr17   [unknown]  [k]
>> >> 0x
>> >> +1.74% 0.00%  glusteriotwr44   [unknown]  [k]
>> >> 0x
>> >> +1.73% 0.00%  glusteriotwr6[unknown]  [k]
>> >> 

Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-27 Thread Pasi Kärkkäinen
Hi,

On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote:
>The Gluster community is pleased to announce the release of Gluster
>3.12.13 (packages available at [1,2,3]).
> 
>Release notes for the release can be found at [4].
> 
>Thanks,
>Gluster community
> 
>[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.13/
>[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
>[3] https://build.opensuse.org/project/subprojects/home:glusterfs
>[4] Release notes:
>https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/
> 

Hmm, I guess the release-notes link should say
https://gluster.readthedocs.io/en/latest/release-notes/3.12.13 instead... but
that page doesn't seem to exist (yet)?



Thanks,

-- Pasi



Re: [Gluster-users] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-27 Thread lemonnierk
Hi,

Seems like you linked the 3.12.12 changelog instead of the 3.12.13 one.
Does it fix the memory leak problem?

Thanks

On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote:
> The Gluster community is pleased to announce the release of Gluster 
> 3.12.13 (packages available at [1,2,3]).
> 
> Release notes for the release can be found at [4].
> 
> Thanks,
> Gluster community
> 
> 
> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.13/
> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> [4] Release notes: 
> https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/
> 



-- 
PGP Fingerprint : 0x624E42C734DAC346
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users