Re: [Gluster-users] Sent and Received peer request (Connected)

2018-01-15 Thread Dj Merrill

On 1/15/2018 8:00 AM, Atin Mukherjee wrote:
> What you’d need to do is to set ‘state=3’ for the peer which is not in
> connected state in /var/lib/glusterd/peers/ and then restart
> the glusterd service.



Thank you Atin, that worked perfectly!

On glusterfs2, I edited the uuid file for glusterfs1 and changed the 
state from 5 to 3, then restarted glusterd on glusterfs2.


On glusterfs1, I edited the uuid file for glusterfs2 and changed the 
state from 5 to 3, then restarted glusterd on glusterfs1.
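
For anyone hitting this later, the edit boils down to something like the
following on each affected server (just a sketch; <peer-uuid> is a
placeholder for the actual file name under /var/lib/glusterd/peers/,
which is the UUID of the other node):

   # change the peer state from 5 to 3 in the peer file (placeholder name)
   sed -i 's/^state=5$/state=3/' /var/lib/glusterd/peers/<peer-uuid>
   systemctl restart glusterd
   gluster peer status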


Now all three systems are reporting the proper peer status, and the 
volume status is also now reporting properly, even after a reboot of the 
servers.


Thank you!

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Sent and Received peer request (Connected)

2018-01-11 Thread Dj Merrill
This morning I did a rolling update from the latest 3.7.x to 3.12.4,
with no client activity.  "Rolling" as in, shut down the Gluster
services on the first server, update, reboot, wait until up and running,
proceed to the next server.  I anticipated that a 3.12 server might not
properly talk to a 3.7 server but since I had no client activity I was
not overly concerned.
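
For reference, the per-server loop was roughly the following (a sketch of
the procedure described above; it assumes the 3.12 repo is already
enabled and that glusterd is managed by systemd):

   systemctl stop glusterd
   yum -y update glusterfs\*
   reboot
   # after the server is back and has synced, verify before moving on:
   gluster peer status
   gluster volume status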

All three servers are now updated to 3.12.4.  The weirdness is that
although the third server reports a proper peer status of "Peer in
Cluster (Connected)" to both server 1 and 2, the first and second
servers report to each other "Sent and Received peer request
(Connected)" but show the proper "Peer in Cluster (Connected)" to the
third server.


[root@glusterfs1]# gluster peer status
Number of Peers: 2

Hostname: glusterfs2
Uuid: 0f4867d9-7be3-4dbc-83f6-ddcda58df607
State: Sent and Received peer request (Connected)

Hostname: glusterfs3
Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f
State: Peer in Cluster (Connected)



[root@glusterfs2]# gluster peer status
Number of Peers: 2

Hostname: glusterfs3
Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f
State: Peer in Cluster (Connected)

Hostname: glusterfs1
Uuid: 339533fd-5820-4077-a2a0-d39d21379960
State: Sent and Received peer request (Connected)



[root@glusterfs3]# gluster peer status
Number of Peers: 2

Hostname: glusterfs2
Uuid: 0f4867d9-7be3-4dbc-83f6-ddcda58df607
State: Peer in Cluster (Connected)

Hostname: glusterfs1
Uuid: 339533fd-5820-4077-a2a0-d39d21379960
State: Peer in Cluster (Connected)


In my notes I have a procedure for Rejected peers, which I tried:

   Stop glusterd (systemctl stop glusterd)
   In /var/lib/glusterd, delete everything except glusterd.info (the
UUID file)
   Start glusterd (systemctl start glusterd)
   Probe one of the good peers (gluster peer probe HOSTNAME), then probe
the second
   Restart glusterd, check 'gluster peer status'

This hasn't changed anything.  I've tried reboots of all servers, etc.
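
For reference, that procedure translates to roughly these commands (a
sketch; HOSTNAME1/HOSTNAME2 stand in for the two good peers):

   systemctl stop glusterd
   # remove everything under /var/lib/glusterd except glusterd.info
   find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
   systemctl start glusterd
   gluster peer probe HOSTNAME1
   gluster peer probe HOSTNAME2
   systemctl restart glusterd
   gluster peer status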


Any thoughts on how to correct this?

Thanks,

-Dj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing Gluster 3.9

2016-11-28 Thread Dj Merrill
On 11/28/2016 12:26 PM, Ben Werthmann wrote:
> This may be helpful as
> well: https://www.gluster.org/community/release-schedule/
> 

Definitely, thank you!  :-)

Part of my curiosity was why there are three actively supported
versions at the same time, and that helps.

-Dj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] yum errors

2016-09-06 Thread Dj Merrill
On 09/06/2016 01:54 AM, Kaushal M wrote:
>> Following down through the docs on that link, I find the CentOS Storage
>> > SIG repo has 3.7.13, and the Storage testing repo has 3.7.15.
>> >
>> > What is a typical timeframe for releases to transition from the testing
>> > repo to the normal repo?

> Releases transition from testing to the main repo after someone
> provides an ACK that they aren't really broken.
> Ideally that would be within a couple of days, but we forgot about
> 3.7.14 and it languished in testing for almost the whole month.
> We'll try to get 3.7.15 pushed into the main repo today, and you
> should have it available tomorrow.
> 

>> >
>> > Will this be the standard repo location going forth, replacing the repo
>> > on download.gluster.org?

> Yup. CentOS packages will be provided only through the CentOS Storage
> SIG from now on.
> 


Thank you, I appreciate the help!

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] yum errors

2016-09-05 Thread Dj Merrill
A few days ago we started getting errors from the Gluster yum repo:

http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-7/x86_64/repodata/repomd.xml:
[Errno 14] HTTP Error 404 - Not Found

Looking into this we found a readme file in that directory indicating:

RPMs for RHEL, CentOS, and other RHEL Clones are available from the
CentOS Storage SIG.

See
  https://wiki.centos.org/SpecialInterestGroup/Storage


Apparently I missed an announcement that the repo location was changing,
so I'm playing catch-up at the moment.
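
In case it helps anyone else catching up: switching a CentOS 7 machine
over to the Storage SIG repo appears to be something like the following
(a sketch; the old repo file name and the centos-release-gluster37
package name are my best guesses, so verify against the SIG
documentation first):

   # drop the old download.gluster.org repo file
   rm /etc/yum.repos.d/glusterfs-epel.repo
   # pull in the Storage SIG repo definition for the 3.7 series
   yum install centos-release-gluster37
   yum clean all
   yum update glusterfs\*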


Following down through the docs on that link, I find the CentOS Storage
SIG repo has 3.7.13, and the Storage testing repo has 3.7.15.

What is a typical timeframe for releases to transition from the testing
repo to the normal repo?

Will this be the standard repo location going forth, replacing the repo
on download.gluster.org?

Thanks,

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill

On 5/26/2016 5:11 PM, Gandalf Corvotempesta wrote:

>> Upgrade part of the Gluster infrastructure, then migrate your critical
>> items to the upgraded servers, then upgrade the rest, etc.

> This is exactly what I would like to achieve but it is not possible.



I am not sure I understand what isn't possible.

Install the new version of Gluster on a new server or reuse an existing 
server.  Set it up as a new Gluster instance (not part of the existing 
one).  Migrate all of your VMs to it.  When they are all migrated, 
upgrade the remaining servers and bring them into the new Gluster instance.


Depending on your VM infrastructure, you might be able to do this with 
no downtime; otherwise each VM will be down only for the time it takes 
to migrate, but you aren't taking down the entire Gluster 
infrastructure, which is the scenario you mentioned.


-Dj

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 05/26/2016 04:43 PM, Gandalf Corvotempesta wrote:
> If bringing down everything is really needed to upgrade, gluster cannot be
> considered highly available
> 
> Bringing down a single host or server is ok; what is not ok, and is nonsense,
> is bringing down the whole infrastructure as stated in the official docs.
> 

High availability has a lot to do with how you design your
infrastructure.  If you only have one server, for example, that isn't
going to provide high availability no matter what the software can do.

If you need 100% uptime for particular critical services, then design
around that requirement: multiple servers, with enough redundancy and
capability to migrate critical services (VMs, whatever) so that they
always stay up.

Upgrade part of the Gluster infrastructure, then migrate your critical
items to the upgraded servers, then upgrade the rest, etc.

This is more of a design and administrative issue than a software issue,
IMHO.

-Dj

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 05/26/2016 02:28 PM, Gandalf Corvotempesta wrote:
> As long as clients are able to talk with newer servers
> And what about major versions like 3.5 to 3.6 or 3.7?
> 

I believe that is one of the design criteria, for minor revs.

For major revs, I would personally want to take things offline rather
than do it "hot", but one of the people more experienced than I am will
have to chime in here.

-Dj

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 05/26/2016 11:57 AM, Gandalf Corvotempesta wrote:
> I've seen that the recommended procedure involves downtime: shutting
> down all clients and after that upgrading gluster
> 

Our upgrade procedure is to upgrade the servers first (shut down the
Gluster service on a server, upgrade that server, then reboot, then go
to the next server once it has come back online and sync'ed), then the
clients one by one.  No downtime, except rebooting the individual
machines one by one as running tasks allow.

Gluster serves our HPC Grid, so it is not uncommon to have a client on
the previous version for several days, until jobs finish running on it
and it can be rebooted.  Note this is going between minor revisions,
like 3.7.x to 3.7.y.
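
On the client side the per-machine step is roughly the following (a
sketch; the package list assumes the standard CentOS client packages):

   yum -y update glusterfs glusterfs-fuse glusterfs-libs
   reboot   # or drain jobs, then unmount and remount the volume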

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-22 Thread Dj Merrill
On 04/20/2016 07:32 PM, Atin Mukherjee wrote:
> Unfortunately there is no such document. But I can take you through a
> couple of code files [1] [2], where the first one defines all the volume
> tunables and their respective supported op-versions, while the latter has
> the exact numbers of all those version variables.
> 
> [1]
> https://github.com/gluster/glusterfs/blob/release-3.7/xlators/mgmt/glusterd/src/glusterd-volume-set.c
> [2]
> https://github.com/gluster/glusterfs/blob/release-3.7/libglusterfs/src/globals.h
> 
> ~Atin


Thanks, Atin, this is very helpful!

Looks like I have some research to do to figure out if any of the
features released since op-version=2 would be useful for us.

Is there any documentation outlining "recommended" settings for a 2
server replicated setup running the latest version of Gluster?

Thanks,

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-20 Thread Dj Merrill
On 04/20/2016 12:06 PM, Atin Mukherjee wrote:
>> Curious, is there any reason why this isn't automatically updated when
>> managing the updates with "yum update"?
> This is still manual as we want to let users choose whether they want
> to use a new feature or not. If they want to, then a manual bump up is
> required.

Hi Atin,
Does that imply that new features are automatically enabled when the
op-version is bumped up?

Is there a list somewhere of changes from op-version=2 to
op-version=30710 and what features are automatically enabled and/or
disabled with each op-version version?

Thanks,

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-20 Thread Dj Merrill
On 04/19/2016 05:42 PM, Atin Mukherjee wrote:
>> After a brief search, I discovered the following solution for RHGS: 
>> https://access.redhat.com/solutions/2050753 It suggests updating the 
>> op-version of the cluster after the upgrade. There isn't any evidence of 
>> this procedure in the community documentation (except for in this mailing 
>> list).
>> > 
>> > Unfortunately, neither 30709 nor 30708 is a valid op-version, so I moved to 30707:
> Although the op-version numbering is aligned with the release versions,
> that isn't a strict rule. If any new volume tunable option
> is introduced between releases, then the op-version is bumped up.
> Between the 3.7.7 and 3.7.9 releases there were no new options introduced,
> and hence the op-version stays at 30707 for the 3.7.9 release.


Curious, is there any reason why this isn't automatically updated when
managing the updates with "yum update"?

Just checked my system and it was set to "operating-version=2" even
though I have been on 3.7.x for a while and just updated to 3.7.11.

I just ran "gluster volume set all cluster.op-version 30710" successfully.
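
For anyone else checking theirs: on 3.7 the current cluster op-version
can be read straight out of glusterd's info file (a sketch, assuming the
default /var/lib/glusterd location):

   grep operating-version /var/lib/glusterd/glusterd.info
   # was operating-version=2 here; shows operating-version=30710 after the bump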

What have I been missing all these years?  :-)

Thanks,

-Dj


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.8-1 vs 3.7.8-3

2016-03-07 Thread Dj Merrill

On 3/7/2016 1:09 PM, Kaleb Keithley wrote:

> The %changelog of the glusterfs.spec file used to build the rpms!
> 
> `rpm -q --changelog glusterfs` (after updating).



Thank you!  :-)

-Dj

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] 3.7.8-1 vs 3.7.8-3

2016-03-07 Thread Dj Merrill
I noticed a release 3.7.8-3 appear for CentOS 7 in the glusterfs repo 
over the weekend.  Are there any release notes available noting the 
changes between 3.7.8-1 and 3.7.8-3?  I am probably just looking in the 
wrong place.


Thanks,

-Dj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs client crashes

2016-02-23 Thread Dj Merrill

On 2/23/2016 10:27 AM, Raghavendra Gowdappa wrote:

> Came across a glibc bug which could've caused some corruptions. On googling
> about possible problems, we found that there is an issue
> (https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in
> glibc-2.17-121.el7.


We have the latest version available for CentOS 7.2 installed, which is 
glibc-2.17-106.


It reports "Your libc is likely buggy."

I'm happy to test again once the 2.17-121 version is available for 
CentOS 7.2.
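
For anyone tracking this, a quick way to see what is installed and
whether the fixed build has shown up yet (a sketch):

   rpm -q glibc       # shows the installed build (2.17-106 here)
   yum list glibc     # also lists any newer build once 2.17-121.el7 reaches the CentOS repos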


Thank you,

-Dj

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs client crashes

2016-02-22 Thread Dj Merrill

On 2/21/2016 2:23 PM, Dj Merrill wrote:
> Very interesting.  They were reporting both bricks offline, but the
> processes on both servers were still running.  Restarting glusterfsd on
> one of the servers brought them both back online.

I realize I wasn't clear in my comments yesterday and would like to 
elaborate on this a bit further. The "very interesting" comment was 
sparked because when we were running 3.7.6, the bricks were not 
reporting as offline when a client was having an issue, so this is new 
behaviour now that we are running 3.7.8 (or a different issue entirely).


The other point that I was not clear on is that we may have one client 
reporting the "Transport endpoint is not connected" error, but the other 
40+ clients all continue to work properly. This is the case with both 
3.7.6 and 3.7.8.


Curious, how can the other clients continue to work fine if both Gluster 
3.7.8 servers are reporting the bricks as offline?


What does "offline" mean in this context?


Re: the server logs, here is what I've found so far listed on both 
gluster servers (glusterfs1 and glusterfs2):


[2016-02-21 08:06:02.785788] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing
[2016-02-21 18:48:20.677010] W [socket.c:588:__socket_rwv] 
0-gv0-client-1: readv on (sanitized IP of glusterfs2):49152 failed (No 
data available)
[2016-02-21 18:48:20.677096] I [MSGID: 114018] 
[client.c:2030:client_rpc_notify] 0-gv0-client-1: disconnected from 
gv0-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2016-02-21 18:48:31.148564] E [MSGID: 114058] 
[client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-1: 
failed to get the port number for remote subvolume. Please run 'gluster 
volume status' on server to see if brick process is running.
[2016-02-21 18:48:40.941715] W [socket.c:588:__socket_rwv] 0-glusterfs: 
readv on (sanitized IP of glusterfs2):24007 failed (No data available)
[2016-02-21 18:48:51.184424] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing
[2016-02-21 18:48:51.972068] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 
0-mgmt: Volume file changed
[2016-02-21 18:48:51.980210] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 
0-mgmt: Volume file changed
[2016-02-21 18:48:51.985211] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 
0-mgmt: Volume file changed
[2016-02-21 18:48:51.995002] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 
0-mgmt: Volume file changed
[2016-02-21 18:48:53.006079] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing
[2016-02-21 18:48:53.018104] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing
[2016-02-21 18:48:53.024060] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing
[2016-02-21 18:48:53.035170] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing
[2016-02-21 18:48:53.045637] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 
0-gv0-client-1: changing port to 49152 (from 0)
[2016-02-21 18:48:53.051991] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 
0-gv0-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-02-21 18:48:53.052439] I [MSGID: 114046] 
[client-handshake.c:1213:client_setvolume_cbk] 0-gv0-client-1: Connected 
to gv0-client-1, attached to remote volume '/export/brick1/sdb1'.
[2016-02-21 18:48:53.052486] I [MSGID: 114047] 
[client-handshake.c:1224:client_setvolume_cbk] 0-gv0-client-1: Server 
and Client lk-version numbers are not same, reopening the fds
[2016-02-21 18:48:53.052668] I [MSGID: 114035] 
[client-handshake.c:193:client_set_lk_version_cbk] 0-gv0-client-1: 
Server lk version = 1
[2016-02-21 18:48:31.148706] I [MSGID: 114018] 
[client.c:2030:client_rpc_notify] 0-gv0-client-1: disconnected from 
gv0-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2016-02-21 18:49:12.271865] W [socket.c:588:__socket_rwv] 0-glusterfs: 
readv on (sanitized IP of glusterfs2):24007 failed (No data available)
[2016-02-21 18:49:15.637745] W [socket.c:588:__socket_rwv] 
0-gv0-client-1: readv on (sanitized IP of glusterfs2):49152 failed (No 
data available)
[2016-02-21 18:49:15.637824] I [MSGID: 114018] 
[client.c:2030:client_rpc_notify] 0-gv0-client-1: disconnected from 
gv0-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2016-02-21 18:49:24.198431] E [socket.c:2278:socket_connect_finish] 
0-glusterfs: connection to (sanitized IP of glusterfs2):24007 failed 
(Connection refused)
[2016-02-21 18:49:26.204811] E [socket.c:2278:socket_connect_finish] 
0-gv0-client-1: connection to (sanitized IP of glusterfs2):24007 failed 
(Connection refused)
[2016-02-21 18:49:38.366559] I [MSGID: 108031] 
[afr-common.c:1883:afr_local_discovery_cbk] 0-gv0-replicate-0: selecti

Re: [Gluster-users] glusterfs client crashes

2016-02-21 Thread Dj Merrill

On 2/21/2016 1:27 PM, Gaurav Garg wrote:

> It seems that your brick processes are offline or all brick processes have
> crashed. Could you paste the output of 'gluster volume status' and 'gluster
> volume info' and attach the core file.



Very interesting.  They were reporting both bricks offline, but the 
processes on both servers were still running.  Restarting glusterfsd on 
one of the servers brought them both back online.


I am going to have to take a closer look at the logs on the servers.

Even after bringing them back up, the client is still reporting 
"Transport endpoint is not connected".  Is there anything other than a 
reboot that will change this state on the client?
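
(One thing I have not tried yet, noting it here as a sketch in case it
is enough: lazily unmounting and remounting the client mount point. The
mount point and volume name below are from our setup; adjust as needed.)

   umount -l /mnt/hpc
   mount -t glusterfs glusterfs1:/gv0 /mnt/hpc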



# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs1:/export/brick1/sdb1        49152     0          Y       15073
Brick glusterfs2:/export/brick1/sdb1        49152     0          Y       14068
Self-heal Daemon on localhost               N/A       N/A        Y       14063
Self-heal Daemon on glusterfs1              N/A       N/A        Y       7732

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs1:/export/brick2/sdb2        49154     0          Y       15089
Brick glusterfs2:/export/brick2/sdb2        49157     0          Y       14073
Self-heal Daemon on localhost               N/A       N/A        Y       14063
Self-heal Daemon on glusterfs1              N/A       N/A        Y       7732

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks


# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 1d31ea3c-a240-49fe-a68d-4218ac051b6d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/export/brick1/sdb1
Brick2: glusterfs2:/export/brick1/sdb1
Options Reconfigured:
performance.cache-max-file-size: 750MB
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.quota-timeout: 30
features.quota: off
performance.io-thread-count: 16
performance.write-behind-window-size: 1GB
performance.cache-size: 1GB
nfs.volume-access: read-only
nfs.disable: on
cluster.self-heal-daemon: enable

Volume Name: gv1
Type: Replicate
Volume ID: 7127b90b-e208-4aea-a920-4db195295d7a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/export/brick2/sdb2
Brick2: glusterfs2:/export/brick2/sdb2
Options Reconfigured:
performance.cache-size: 1GB
performance.write-behind-window-size: 1GB
nfs.disable: on
nfs.volume-access: read-only
performance.cache-max-file-size: 750MB
cluster.self-heal-daemon: enable


-Dj

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] glusterfs client crashes

2016-02-21 Thread Dj Merrill
Several weeks ago we started seeing some weird behaviour on our Gluster 
client systems.  Things would be working fine for several days, then the 
client could no longer access the Gluster filesystems, giving an error:


ls: cannot access /mnt/hpc: Transport endpoint is not connected

We were running version 3.7.6 and this version had been working fine for 
a few months until the above started happening.  Thinking that it may be 
an OS or kernel update causing the issue, when 3.7.8 came out, we 
upgraded in hopes that the issue might be addressed, but we are still 
having the issue.


All client machines are running CentOS 7.2 with the latest updates, and 
the problem is happening on several machines.  Not every Gluster client 
machine has had the problem, but enough different machines to make us 
think that this is more of a generic issue versus one that only affects 
specific types of machines (both Intel and AMD CPUs, different system 
manufacturers, etc.).


The log file included below from /var/log/glusterfs seems to be showing 
a crash of the glusterfs process if I am interpreting it correctly.  At 
the top you can see an entry made on the 17th, then no further entries 
until the crash today on the 21st.


We would greatly appreciate any help in tracking down the cause and 
possible fix for this.


The only way to temporarily "fix" the machines seems to be a reboot, 
which allows the machines to work properly for a few days before the 
issue happens again (a random number of days, with no pattern).



[2016-02-17 23:56:39.685754] I [MSGID: 109036] 
[dht-common.c:8043:dht_log_new_layout_for_dir_selfheal] 0-gv0-dht: 
Setting layout of /tmp/ktreraya/gms-scr/tmp/123277 with [Subvol_name: 
gv0-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],

pending frames:
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash:
2016-02-21 08:10:40
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.8
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7ff56ddcd042]
/lib64/libglusterfs.so.0(gf_print_trace+0x31d)[0x7ff56dde950d]
/lib64/libc.so.6(+0x35670)[0x7ff56c4bb670]
/lib64/libc.so.6(gsignal+0x37)[0x7ff56c4bb5f7]
/lib64/libc.so.6(abort+0x148)[0x7ff56c4bcce8]
/lib64/libc.so.6(+0x75317)[0x7ff56c4fb317]
/lib64/libc.so.6(+0x7d023)[0x7ff56c503023]
/usr/lib64/glusterfs/3.7.8/xlator/protocol/client.so(client_local_wipe+0x39)[0x7ff5600a46b9]
/usr/lib64/glusterfs/3.7.8/xlator/protocol/client.so(client3_3_getxattr_cbk+0x182)[0x7ff5600a7f62]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7ff56db9ba20]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1bf)[0x7ff56db9bcdf]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7ff56db97823]
/usr/lib64/glusterfs/3.7.8/rpc-transport/socket.so(+0x6636)[0x7ff5627a8636]
/usr/lib64/glusterfs/3.7.8/rpc-transport/socket.so(+0x9294)[0x7ff5627ab294]
/lib64/libglusterfs.so.0(+0x878ea)[0x7ff56de2e8ea]
/lib64/libpthread.so.0(+0x7dc5)[0x7ff56cc35dc5]
/lib64/libc.so.6(clone+0x6d)[0x7ff56c57c28d]


Thank you,

-Dj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users