Hi, 

I have received the log file but did not get a chance to look into it yet. 
I will let you know if I find anything or what we can do next. 

-- 
Ashish 


----- Original Message -----

From: "Mauro Tridici" <[email protected]> 
To: "Ashish Pandey" <[email protected]> 
Cc: "Gluster Users" <[email protected]> 
Sent: Wednesday, September 27, 2017 5:35:49 PM 
Subject: Re: [Gluster-users] df command shows transport endpoint mount error on 
gluster client v.3.10.5 + core dump 

Hi Ashish, 

I’m sorry to disturb you again, but I would like to know if you received the 
log files correctly. 
Thank you, 
Mauro Tridici 




On 26 Sep 2017, at 10:19, Mauro Tridici <[email protected]> wrote: 


Hi Ashish, 

attached you can find the gdb output (with bt and thread outputs) and the 
complete log file up to the moment of the crash. 

Thank you for your support. 
Mauro Tridici 

<tier2.log-20170924.gz> 

<gdb_bt_output_logs.txt> 



On 26 Sep 2017, at 10:11, Ashish Pandey <[email protected]> wrote: 

Hi, 

Following are the commands to get the debug info for gluster: 
gdb /usr/local/sbin/glusterfs  <core file path> 
Then, at the gdb prompt, type bt or backtrace: 
(gdb) bt 
You can also provide the output of "thread apply all bt": 
(gdb) thread apply all bt 
The above commands should be executed on the client node on which you mounted 
the gluster volume and where the crash occurred. 
However, I am not sure whether you have enabled core dump generation on your 
system and set up the core dump path. 
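[The steps above, plus the core dump setup, can be sketched as one shell session. This is only an illustrative sketch, not part of the original instructions: the core file name /core.15795 is taken from Mauro's gdb session quoted elsewhere in this thread, and the core_pattern value is just an example.]

```shell
# Hedged sketch only; paths and values are examples.

# Allow core dumps to be written (per-shell resource limit)
ulimit -c unlimited

# Optionally send cores to a predictable location (needs root);
# %e = executable name, %p = PID
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

# Collect both backtraces non-interactively into one file;
# /core.15795 is the core file name reported elsewhere in this thread
gdb -batch -ex "bt" -ex "thread apply all bt" \
    /usr/local/sbin/glusterfs /core.15795 > gdb_bt_output.txt 2>&1
```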

https://stackoverflow.com/questions/8305866/how-to-analyze-a-programs-core-dump-file-with-gdb
https://unix.stackexchange.com/questions/192716/how-to-set-the-core-dump-file-location-and-name
https://stackoverflow.com/questions/2065912/core-dumped-but-core-file-is-not-in-current-directory

-- 
Ashish 


----- Original Message -----

From: "Mauro Tridici" <[email protected]> 
To: "Ashish Pandey" <[email protected]> 
Cc: "Gluster Users" <[email protected]> 
Sent: Tuesday, September 26, 2017 1:21:42 PM 
Subject: Re: [Gluster-users] df command shows transport endpoint mount error on 
gluster client v.3.10.5 + core dump 

Hi Ashish, 

thank you for your answer. 
Do you need only the complete client log file, or something else in particular? 
Unfortunately, I have never used the "bt" command. Could you please provide a 
usage example? 

I will provide all logs you need. 
Thank you again, 
Mauro 



On 26 Sep 2017, at 09:30, Ashish Pandey <[email protected]> wrote: 

Hi Mauro, 

We would require the complete log file to debug this issue. 
Also, could you please provide some more information about the core after 
attaching gdb and running the command "bt"? 

-- 
Ashish 


----- Original Message -----

From: "Mauro Tridici" <[email protected]> 
To: "Gluster Users" <[email protected]> 
Sent: Monday, September 25, 2017 5:59:10 PM 
Subject: [Gluster-users] df command shows transport endpoint mount error on 
gluster client v.3.10.5 + core dump 

Dear Gluster Users, 

I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the 
following options: 

[root@s01 tier2]# gluster volume info 

Volume Name: tier2 
Type: Distributed-Disperse 
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 6 x (4 + 2) = 36 
Transport-type: tcp 
Bricks: 
Brick1: s01-stg:/gluster/mnt1/brick 
Brick2: s02-stg:/gluster/mnt1/brick 
Brick3: s03-stg:/gluster/mnt1/brick 
Brick4: s01-stg:/gluster/mnt2/brick 
Brick5: s02-stg:/gluster/mnt2/brick 
Brick6: s03-stg:/gluster/mnt2/brick 
Brick7: s01-stg:/gluster/mnt3/brick 
Brick8: s02-stg:/gluster/mnt3/brick 
Brick9: s03-stg:/gluster/mnt3/brick 
Brick10: s01-stg:/gluster/mnt4/brick 
Brick11: s02-stg:/gluster/mnt4/brick 
Brick12: s03-stg:/gluster/mnt4/brick 
Brick13: s01-stg:/gluster/mnt5/brick 
Brick14: s02-stg:/gluster/mnt5/brick 
Brick15: s03-stg:/gluster/mnt5/brick 
Brick16: s01-stg:/gluster/mnt6/brick 
Brick17: s02-stg:/gluster/mnt6/brick 
Brick18: s03-stg:/gluster/mnt6/brick 
Brick19: s01-stg:/gluster/mnt7/brick 
Brick20: s02-stg:/gluster/mnt7/brick 
Brick21: s03-stg:/gluster/mnt7/brick 
Brick22: s01-stg:/gluster/mnt8/brick 
Brick23: s02-stg:/gluster/mnt8/brick 
Brick24: s03-stg:/gluster/mnt8/brick 
Brick25: s01-stg:/gluster/mnt9/brick 
Brick26: s02-stg:/gluster/mnt9/brick 
Brick27: s03-stg:/gluster/mnt9/brick 
Brick28: s01-stg:/gluster/mnt10/brick 
Brick29: s02-stg:/gluster/mnt10/brick 
Brick30: s03-stg:/gluster/mnt10/brick 
Brick31: s01-stg:/gluster/mnt11/brick 
Brick32: s02-stg:/gluster/mnt11/brick 
Brick33: s03-stg:/gluster/mnt11/brick 
Brick34: s01-stg:/gluster/mnt12/brick 
Brick35: s02-stg:/gluster/mnt12/brick 
Brick36: s03-stg:/gluster/mnt12/brick 
Options Reconfigured: 
features.scrub: Active 
features.bitrot: on 
features.inode-quota: on 
features.quota: on 
performance.client-io-threads: on 
cluster.min-free-disk: 10 
cluster.quorum-type: auto 
transport.address-family: inet 
nfs.disable: on 
server.event-threads: 4 
client.event-threads: 4 
cluster.lookup-optimize: on 
performance.readdir-ahead: on 
performance.parallel-readdir: off 
cluster.readdir-optimize: on 
features.cache-invalidation: on 
features.cache-invalidation-timeout: 600 
performance.stat-prefetch: on 
performance.cache-invalidation: on 
performance.md-cache-timeout: 600 
network.inode-lru-limit: 50000 
performance.io-cache: off 
disperse.cpu-extensions: auto 
performance.io-thread-count: 16 
features.quota-deem-statfs: on 
features.default-soft-limit: 90 
cluster.server-quorum-type: server 
cluster.brick-multiplex: on 
cluster.server-quorum-ratio: 51% 

I also started a long write test (about 69 TB to be written) from different 
gluster clients. 
One of these clients returned the error "Transport endpoint is not connected" 
during the rsync copy process. 
A core dump file was generated by the crash; this is the output of gdb on the 
core dump: 

GNU gdb (GDB) Red Hat Enterprise Linux (7.2-50.el6) 
Copyright (C) 2010 Free Software Foundation, Inc. 
License GPLv3+: GNU GPL version 3 or later < http://gnu.org/licenses/gpl.html > 
This is free software: you are free to change and redistribute it. 
There is NO WARRANTY, to the extent permitted by law. Type "show copying" 
and "show warranty" for details. 
This GDB was configured as "x86_64-redhat-linux-gnu". 
For bug reporting instructions, please see: 
< http://www.gnu.org/software/gdb/bugs/ >... 
Missing separate debuginfo for the main executable file 
Try: yum --disablerepo='*' --enablerepo='*-debuginfo' install 
/usr/lib/debug/.build-id/fb/4d988b681faa09bb74becacd7a24f4186e8185 
[New Thread 15802] 
[New Thread 15804] 
[New Thread 15805] 
[New Thread 6856] 
[New Thread 30432] 
[New Thread 30486] 
[New Thread 1619] 
[New Thread 15806] 
[New Thread 15810] 
[New Thread 30412] 
[New Thread 15809] 
[New Thread 15799] 
[New Thread 30487] 
[New Thread 15795] 
[New Thread 15797] 
[New Thread 15798] 
[New Thread 15800] 
[New Thread 15801] 
Core was generated by `/usr/sbin/glusterfs --volfile-server=s01-stg 
--volfile-id=/tier2 /tier2'. 
Program terminated with signal 6, Aborted. 
#0 0x00000032a74328a5 in ?? () 
"/core.15795" is a core file. 
Please specify an executable to debug. 
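[The transcript above shows gdb opened on the core alone, which is why it asks for an executable and cannot resolve frame #0. The following is only a hedged sketch of re-running it with the matching binary and debuginfo; the build-id path is the one printed in the gdb banner above, and repository names vary by distribution.]

```shell
# Hedged sketch; exact repo names and paths vary by system.

# Install debuginfo matching the build-id printed by gdb
yum --disablerepo='*' --enablerepo='*-debuginfo' install \
    /usr/lib/debug/.build-id/fb/4d988b681faa09bb74becacd7a24f4186e8185

# Open the core together with the executable it came from
# ("Core was generated by /usr/sbin/glusterfs ...")
gdb /usr/sbin/glusterfs /core.15795
```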

No problems were detected on the gluster servers: no bricks down, and so on. 

Has anyone of you experienced the same problem? 
If yes, how did you resolve it? 
Below you can find the content of the client "/var/log/messages" and 
"/var/log/glusterfs" log files. 
Thank you in advance. 
Mauro Tridici 

-------------------------------------- 

In /var/log/syslog-ng/messages on the client (OS: CentOS 6.2, 16 cores, 64 GB 
RAM, gluster client v.3.10.5): 

Sep 23 10:42:43 login2 tier2[15795]: pending frames: 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(1) op(WRITE) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(1) op(WRITE) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(1) op(WRITE) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
[73 more identical lines omitted] 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(1) op(WRITE) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0) 
Sep 23 10:42:43 login2 tier2[15795]: patchset: 
git://git.gluster.org/glusterfs.git 
Sep 23 10:42:43 login2 tier2[15795]: signal received: 6 
Sep 23 10:42:43 login2 tier2[15795]: time of crash: 
Sep 23 10:42:43 login2 tier2[15795]: 2017-09-23 08:42:43 
Sep 23 10:42:43 login2 tier2[15795]: configuration details: 
Sep 23 10:42:43 login2 tier2[15795]: argp 1 
Sep 23 10:42:43 login2 tier2[15795]: backtrace 1 
Sep 23 10:42:43 login2 tier2[15795]: dlfcn 1 
Sep 23 10:42:43 login2 tier2[15795]: libpthread 1 
Sep 23 10:42:43 login2 tier2[15795]: llistxattr 1 
Sep 23 10:42:43 login2 tier2[15795]: setfsid 1 
Sep 23 10:42:43 login2 tier2[15795]: spinlock 1 
Sep 23 10:42:43 login2 tier2[15795]: epoll.h 1 
Sep 23 10:42:43 login2 tier2[15795]: xattr.h 1 
Sep 23 10:42:43 login2 tier2[15795]: st_atim.tv_nsec 1 
Sep 23 10:42:43 login2 tier2[15795]: package-string: glusterfs 3.10.5 
Sep 23 10:42:43 login2 tier2[15795]: --------- 
Sep 23 10:42:43 login2 abrt[8605]: file /usr/sbin/glusterfsd seems to be 
deleted 

In the /var/log/glusterfs log file: 

pending frames: 
frame : type(1) op(WRITE) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(1) op(WRITE) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(1) op(WRITE) 
frame : type(0) op(0) 
[73 more identical lines omitted] 
frame : type(1) op(WRITE) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
frame : type(0) op(0) 
patchset: git://git.gluster.org/glusterfs.git 
signal received: 6 
time of crash: 
2017-09-23 08:42:43 
configuration details: 
argp 1 
backtrace 1 
dlfcn 1 
libpthread 1 
llistxattr 1 
setfsid 1 
spinlock 1 
epoll.h 1 
xattr.h 1 
st_atim.tv_nsec 1 
package-string: glusterfs 3.10.5 
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x9c)[0x7fd8d8b8af3c] 
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x338)[0x7fd8d8b96538] 
/lib64/libc.so.6[0x32a7432920] 
/lib64/libc.so.6(gsignal+0x35)[0x32a74328a5] 
/lib64/libc.so.6(abort+0x175)[0x32a7434085] 
/lib64/libc.so.6[0x32a746ffe7] 
/lib64/libc.so.6[0x32a7475916] 
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x3181f)[0x7fd8cd51681f]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x31dad)[0x7fd8cd516dad]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x12027)[0x7fd8cd4f7027]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x11dc5)[0x7fd8cd4f6dc5]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x1475f)[0x7fd8cd4f975f]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x15178)[0x7fd8cd4fa178]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x31dc8)[0x7fd8cd516dc8]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x12027)[0x7fd8cd4f7027]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x11dc5)[0x7fd8cd4f6dc5]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x13f24)[0x7fd8cd4f8f24]
/usr/lib64/glusterfs/3.10.5/xlator/cluster/disperse.so(+0x33e12)[0x7fd8cd518e12]
/usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x2c420)[0x7fd8cd793420] 
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x7fd8d8955ad5] 
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x195)[0x7fd8d8956c85] 
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x7fd8d8951d68] 
/usr/lib64/glusterfs/3.10.5/rpc-transport/socket.so(+0x9ccd)[0x7fd8ce5f1ccd] 
/usr/lib64/glusterfs/3.10.5/rpc-transport/socket.so(+0xaffe)[0x7fd8ce5f2ffe] 
/usr/lib64/libglusterfs.so.0(+0x87806)[0x7fd8d8be8806] 
/lib64/libpthread.so.0[0x32a8007851] 
/lib64/libc.so.6(clone+0x6d)[0x32a74e811d] 
_______________________________________________ 
Gluster-users mailing list 
[email protected] 
http://lists.gluster.org/mailman/listinfo/gluster-users 














------------------------- 
Mauro Tridici 

Fondazione CMCC 
CMCC Supercomputing Center 
c/o Complesso Ecotekne - Università del Salento - 
Strada Prov.le Lecce - Monteroni sn 
73100 Lecce IT 
http://www.cmcc.it 

mobile: (+39) 327 5630841 
email: [email protected] 



