hi itzik,
Could you attach the zipped glusterd logs present in /etc/glusterd/logs
on all the machines. That will help us figure out what the problem is.
Thanks
Pranith
- Original Message -
From: itzik bar koki...@yahoo.com
To: gluster-users@gluster.org
Sent: Wednesday, November
/gluster/glusterfs/qa-releases/glusterfs-3.1.1qa1.tar.gz
Thanks
Pranith
From: itzik bar koki...@yahoo.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Sent: Thursday, November 11, 2010 3:30:00 PM
Subject: Re: [Gluster-users] Creation of volume has been unsuccessful
127.0.0.1
hi Stefan,
Could you provide the following outputs:
on server-1:
gluster fsm log server-2
on server-2
gluster fsm log server-1
please zip the logs in /usr/local/var/log/glusterfs on both the servers and
attach them to this mail. That should help us identify the issue.
Thanks
Pranith.
-
hi Freddie,
A peer is Rejected during peer probe if the two peers have conflicting
volumes, i.e. volumes with the same name but different contents.
Is this what happened to you?
Thanks
Pranith.
- Original Message -
From: Freddie Gutierrez liquid...@elitecoders.com
To:
hi Maurice,
I am guessing something went wrong in setting up peers. Could you please
zip the directories /etc/glusterd and /usr/local/var/log/glusterfs and attach
to the mail. Please attach output of sudo gluster peer status as well. That
should help me find out the problem.
Pranith.
. There was a bug in 3.1.0 (1855)
which led to this problem, but that is fixed in 3.1.1. Did you do all your
operations on 3.1.1 or upgrade from 3.1.0?
Pranith.
- Original Message -
From: Maurice R Volaski maurice.vola...@einstein.yu.edu
To: Pranith Kumar. Karampuri prani...@gluster.com
Sent
on 192.168.1.3.
and vice versa.
Pranith.
- Original Message -
From: Freddie Gutierrez liquid...@elitecoders.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Sent: Monday, December 20, 2010 11:40:43 PM
Subject: Re: [Gluster-users] What does this mean? State: Peer Rejected
(Connected
Message -
From: Pranith Kumar. Karampuri prani...@gluster.com
To: Maurice R Volaski maurice.vola...@einstein.yu.edu
Cc: gluster-users gluster-users@gluster.org
Sent: Tuesday, December 21, 2010 8:42:21 AM
Subject: Re: [Gluster-users] Why is volume creation unsuccessful?
hi Maurice,
It seems
hi Piotr Skurczak,
gluster at the moment is not intended to be used over WAN. Hence we
won't be able to support this.
Thanks
Pranith.
- Original Message -
From: Piotr Skurczak piotr.skurc...@gmail.com
To: gluster-users@gluster.org
Sent: Sunday, January 9, 2011 11:51:57 PM
Subject:
hi Daniel,
The first problem seems like bug 2296: non-primary group membership was
treated as "other". It is already resolved and will be available in 3.1.3. Do
you know how to trigger the second problem?
Pranith.
- Original Message -
From: Daniel Zander zan...@ekp.uni-karlsruhe.de
.
Pranith.
- Original Message -
From: Daniel Zander zan...@ekp.uni-karlsruhe.de
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: gluster-users@gluster.org
Sent: Friday, January 28, 2011 3:44:46 PM
Subject: Re: [Gluster-users] GlusterFS 3.1.1: Permission denied and No such
file or directory
hi Patrick,
Because the volume is created as replica type with 2 bricks per replica,
you can add bricks only in multiples of two.
Pranith.
- Original Message -
From: Patrick Irvine p...@cybersites.ca
To: gluster-users@gluster.org
Sent: Monday, February 7, 2011 3:51:26 AM
hi Maciej,
I think there is a spurious cli lock present in that glusterd. Please stop
and restart the glusterd. That should fix it.
Pranith.
- Original Message -
From: Maciej Gałkiewicz maciejgalkiew...@ragnarson.com
To: gluster-users@gluster.org
Sent: Wednesday, February 16, 2011
This means one of the peers went down. Please check gluster peer status to
find out which peer is disconnected.
Pranith
- Original Message -
From: Aaron Roberts arobe...@domicilium.com
To: gluster-users@gluster.org
Sent: Thursday, February 24, 2011 7:00:53 PM
Subject: [Gluster-users]
Rebalance and EPERM: looks like you are hitting
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2369
Pranith.
- Original Message -
From: John Lao j...@cloud9analytics.com
To: gluster-users@gluster.org
Sent: Thursday, February 24, 2011 11:55:52 PM
Subject: [Gluster-users]
hi,
Glusterfs identifies files using a gfid; the same file on both replicas
contains the same gfid. When you edit a text file, a new backup
file (with a different gfid) is created, the data is written to it, and then it is
renamed to the original file, thus changing the gfid on the
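The edit-and-rename behaviour described above can be seen with plain filesystem tools; this is a generic sketch (not gluster-specific), using the inode number as a stand-in for the gfid:

```shell
# Editors commonly write changes to a temp file and rename it over the
# original. The replacement is a brand-new inode, which is why the file
# ends up with a new gfid on the bricks.
dir=$(mktemp -d)
cd "$dir"
echo original > a.txt
before=$(stat -c %i a.txt)      # inode number before the edit
echo edited > a.txt.tmp
mv a.txt.tmp a.txt              # atomic replace: a.txt is now a new inode
after=$(stat -c %i a.txt)
[ "$before" != "$after" ] && echo "inode (and hence gfid) changed"
```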
-Heal_on_Replicate
I executed the same example on my machine and it works fine.
root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
sds
root@pranith-laptop:/mnt/client# find .
.
./a.txt
root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
sds
DD
Pranith
- Original Message -
From: Pranith Kumar
What exactly do you mean by "after vim one.txt and changing the content of
that file on one node"? Did you edit the file on the backend, or bring one
server down and, after editing, bring the second server back up?
Pranith
- Original Message -
From: Daniel Müller
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 2:24:14 PM
Subject: AW: [Gluster-users] Strange behaviour glusterd 3.1
Both server running. I edit the file on one node1-server (in /mnt/glusterfs/)
changed the content, saved the file. Ssh
hi Mohit,
I have already sent a patch for the bug you raised about self-heal not
working in one case.
For a file to need self-heal, it should have either different extended
attributes for client-0/client-1 (assuming two replicas) with the same gfid
extended attribute, or a different gfid extended attribute.
Pranith.
- Original Message -
From: Pranith Kumar. Karampuri prani...@gluster.com
To: Mohit Anchlia mohitanch...@gmail.com
Cc: gluster-users@gluster.org
Sent: Saturday, March 12, 2011 7:29:58 AM
Subject: Re: [Gluster-users] Self heal doesn't seem to work when file
hi R.C.,
Could you please give the exact steps when you log the bug. Please also
give the output of gluster peer status on both machines after the restart, and
zip the files under /usr/local/var/log/glusterfs/ and /etc/glusterd on both
machines when this issue happens. This should help us
hi Mohit,
Self-heal happens whenever a lookup happens on an inconsistent file. The
commands 'ls -laR' and 'find' do a lookup on all the files recursively under
the directory we specify.
Pranith.
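The recursive-lookup trick above can be sketched as follows; on a real cluster you would point it at the client mount (e.g. /mnt/glusterfs), but since the commands are plain filesystem walks, a throwaway directory stands in here:

```shell
# A lookup is sent for every entry these commands touch, and self-heal is
# attempted on any inconsistent file found. MOUNT would be your glusterfs
# client mount point; a temp dir is used here for illustration.
MOUNT=$(mktemp -d)
touch "$MOUNT/f1" "$MOUNT/f2"
ls -laR "$MOUNT" > /dev/null && echo "recursive listing done"
find "$MOUNT" -print > /dev/null && echo "recursive find done"
```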
- Original Message -
From: Mohit Anchlia mohitanch...@gmail.com
To: Pranith Kumar. Karampuri prani
hi R.C.,
The network ping timeout for glusterfs is around 45 seconds. If the samba
client's ping timeout is less than this, the connection could be aborted.
Could you check whether the same happens after setting the ping-timeout to
something less than the samba client's?
example: gluster
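The truncated example presumably sets the volume option; a hedged sketch of the syntax (the volume name and the value 20 are placeholders, and network.ping-timeout is an assumed option name):

```
# Assumed syntax: set the volume's ping timeout (in seconds) below the
# samba client's timeout. VOLNAME and 20 are placeholders.
gluster volume set VOLNAME network.ping-timeout 20
gluster volume info VOLNAME   # verify the option took effect
```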
.
you should not inspect/getfattr the files on the mount point, you should do it
on individual bricks.
Pranith.
- Original Message -
From: Mohit Anchlia mohitanch...@gmail.com
To: Pranith Kumar. Karampuri prani...@gluster.com, gluster-users@gluster.org
Sent: Monday, March 21, 2011 10:25:05
One of our QA engineers faced a similar issue; in his case, the problem was
with iptables on that machine.
Pranith.
- Original Message -
From: James Burnash jburn...@knight.com
To: gluster-users@gluster.org
Sent: Wednesday, March 16, 2011 6:40:25 PM
Subject: [Gluster-users] What
Could you post the output of ls -l from the backends for the files for which
the write op fails. Knowing the strace output of the write operation that's
failing would also help.
Pranith.
- Original Message -
From: paul simpson p...@realisestudio.com
To: gluster-users@gluster.org
Sent: Tuesday,
hi,
Whenever a peer goes down, all the other machines in the cluster keep
trying to re-connect to it, and when the peer comes back up the
re-connection will succeed. The only times we have seen problems are a change
in IP address and issues with iptables. We will have to investigate
Please refer to
http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
Pranith.
- Original Message -
From: Panos Kontopoulos pa...@kontopoulos.org
To: gluster-users@gluster.org
Sent: Monday, March 21, 2011 12:53:00 PM
Subject:
Mohit,
Could you please zip and attach the logs from all the machines in the
cluster.
Pranith.
- Original Message -
From: Mohit Anchlia mohitanch...@gmail.com
To: gluster-users@gluster.org
Sent: Saturday, April 16, 2011 4:05:05 AM
Subject: [Gluster-users] mount error
LATEST Gluster
Looks like bug 2500: http://bugs.gluster.com/show_bug.cgi?id=2500.
The fix is not available in 3.1.4; it's available in future versions.
Pranith.
- Original Message -
From: Jorge Pascual jordy.pasc...@gmail.com
To: gluster-users@gluster.org
Sent: Tuesday, April 19, 2011 4:38:49 PM
hi Mohit,
Do you know what exact steps are leading to this problem?
Pranith.
- Original Message -
From: Mohit Anchlia mohitanch...@gmail.com
To: Amar Tumballi a...@gluster.com, gluster-users@gluster.org
Sent: Friday, April 29, 2011 9:49:33 PM
Subject: Re: [Gluster-users] Split
hi Martin,
Could you please send the output of getfattr with -m trusted* instead of
trusted.afr for the remaining 24 files from both servers. I would like to
see the gfids of these files on both machines.
Pranith.
- Original Message -
From: Martin Schenker
To: Pranith Kumar. Karampuri prani...@gluster.com, gluster-users@gluster.org
Sent: Friday, April 29, 2011 11:05:55 PM
Subject: Re: [Gluster-users] Server outage, file sync/self-heal doesn't
sync ALL files?!
Sorry, I had manually synced due to imminent server upgrades.
50 min. after the initial sync
Please check if outputs of getfattr -d -m trusted* on all the brick
directories differ.
Pranith.
- Original Message -
From: Mohit Anchlia mohitanch...@gmail.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: Amar Tumballi a...@gluster.com, gluster-users@gluster.org
Sent: Friday
hi Martin,
IO-stats is loaded by default. Please use the profile commands listed in
the following document to start/stop/display profile output.
http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Running_GlusterFS_Volume_Profile_Command
Pranith
- Original Message -
martin.schen...@profitbricks.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: Gluster General Discussion List gluster-users@gluster.org
Sent: Tuesday, May 10, 2011 11:23:03 AM
Subject: RE: [Gluster-users] How do I load the I/O stats translator for
theprofiling options?
That's what I did
This means glusterd is not upgraded yet, which is strange. Are you sure that
you upgraded glusterfs on all the machines in the cluster?
Pranith
- Original Message -
From: Martin Schenker martin.schen...@profitbricks.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: Gluster General
hi Remi,
Would it be possible to post the logs from the client, so that we can find
out what issue you are running into.
Pranith
- Original Message -
From: Remi Broemeling r...@goclio.com
To: gluster-users@gluster.org
Sent: Monday, May 16, 2011 10:47:33 PM
Subject: [Gluster-users] Rebuild
Martin,
Is this a distributed-replicate setup? Could you attach the vol-file of
the client.
Pranith
- Original Message -
From: Martin Schenker martin.schen...@profitbricks.com
To: gluster-users@gluster.org
Sent: Monday, May 16, 2011 2:49:29 PM
Subject: [Gluster-users] Client and
- Original Message -
From: Martin Schenker martin.schen...@profitbricks.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: gluster-users@gluster.org
Sent: Tuesday, May 17, 2011 11:13:32 AM
Subject: RE: [Gluster-users] Client and server file view, different
results?! Client can't see
, May 17, 2011 9:02:44 PM
Subject: Re: [Gluster-users] Rebuild Distributed/Replicated Setup
Hi Pranith. Sure, here is a pastebin sampling of logs from one of the hosts:
http://pastebin.com/1U1ziwjC
On Mon, May 16, 2011 at 20:48, Pranith Kumar. Karampuri prani...@gluster.com
wrote:
hi
is owned by another group.
In our case all users are in group2 (a secondary group); their directories
user1, user2, ... are all owned by username:primarygroup with 0700, so only
root can see the directories.
Thanks for the effort,
udo.
On 13.05.2011, at 07:09, Pranith Kumar. Karampuri wrote:
hi
and triggering a total self-heal is a very expensive operation; I wouldn't do
that.
Pranith.
- Original Message -
From: Remi Broemeling r...@goclio.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: gluster-users@gluster.org
Sent: Wednesday, May 18, 2011 8:21:33 PM
Subject: Re
- Original Message -
From: Pranith Kumar. Karampuri prani...@gluster.com
To: Remi Broemeling r...@goclio.com
Cc: gluster-users@gluster.org
Sent: Thursday, May 19, 2011 2:14:52 PM
Subject: Re: [Gluster-users] Rebuild Distributed/Replicated Setup
hi Remi,
This is a classic case of split-brain. See
Hi Whit,
Could you please zip the logs and send them across. They are generally
located in /usr/local/var/log/glusterfs/
I also need the following outputs:
On white:
gluster fsm log black
On black:
gluster fsm log white
We need to fix the error message saying that the peer is not in cluster.
...@profitbricks.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: gluster-users@gluster.org
Sent: Tuesday, May 17, 2011 3:49:30 PM
Subject: RE: [Gluster-users] Client and server file view, different
results?! Client can't see the right file.
It's version 3.1.3 (we tried 3.2.0 for about
Martin,
The output suggests that there are 2 replicas per volume, so the file should
be present on only 2 bricks. Why is the file present on 4 bricks? It should
be present either on pserver1213 or on pserver3 5. I am not sure why you are
expecting it to be there on 4 bricks.
Am I missing any
Need the logs from May 13th to 17th.
Pranith.
- Original Message -
From: Martin Schenker martin.schen...@profitbricks.com
To: Pranith Kumar. Karampuri prani...@gluster.com
Cc: gluster-users@gluster.org
Sent: Thursday, May 19, 2011 5:28:06 PM
Subject: RE: [Gluster-users] Client and server
hi Udo,
Any updates/corrections? I am unable to reproduce the bug you mentioned;
I must be missing some details.
Pranith
- Original Message -
From: Pranith Kumar. Karampuri prani...@gluster.com
To: Udo Waechter udo.waech...@uni-osnabrueck.de
Cc: Gluster Users gluster-users
Karim,
gfsbrick1 is not able to contact gfsbrick2 yet. This generally happens if
DNS resolution doesn't work as expected. Could you map them manually in
/etc/hosts on both machines?
something like:
on gfsbrick1
ip-of-brick2 gfsbrick2
on gfsbrick2
ip-of-brick1 gfsbrick1
Pranith.
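The manual mapping suggested above would look like this in /etc/hosts (the addresses are placeholders):

```
# /etc/hosts on gfsbrick1 (address is a placeholder)
192.0.2.2   gfsbrick2

# /etc/hosts on gfsbrick2
192.0.2.1   gfsbrick1
```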
-
No.
Pranith
- Original Message -
From: Karim Latouche klatou...@digitalchocolate.com
To: gluster-users@gluster.org
Sent: Tuesday, May 24, 2011 10:32:24 AM
Subject: Re: [Gluster-users] Trouble to add brick
Hi All ,
Thanks for your answers.
I followed your advices and checked everything.
hi James,
I looked at the pastebin sample and I see that all of the attrs are complete
zeros. Could you let me know what it is that I am missing?
Pranith
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on
behalf of Burnash,
of '/' (possible split-brain). Please
fix the file on all backend volumes.
Pranith.
From: Burnash, James [jburn...@knight.com]
Sent: Monday, June 13, 2011 11:56 PM
To: Pranith Kumar. Karampuri; Jeff Darcy(jda...@redhat.com);
gluster-users@gluster.org
To: Pranith Kumar. Karampuri; Jeff Darcy(jda...@redhat.com);
gluster-users@gluster.org
Subject: RE: [Gluster-users] Files present on the backend but havebecome
invisible from clients
Hi Pranith.
Yes, I do see those messages in my mount logs on the client:
root@jc1lnxsamm100
hi Larry,
V3.3 is still in beta. Are you sure you want to upgrade a production system
to V3.3?
Pranith
- Original Message -
From: Larry Bates larry.ba...@vitalesafe.com
To: gluster-users@gluster.org
Sent: Saturday, April 21, 2012 1:26:13 AM
Subject: [Gluster-users] Upgrading from V3.0
Setting up server-side replication with the command line is not possible. You
will need to write the volfiles to achieve that.
Would it be possible to let us know the use case where this is useful for you?
Pranith.
- Original Message -
From: lejeczek pelj...@yahoo.co.uk
To: Brian Candler
hi,
Not necessarily. The first time a file is accessed, whichever brick
responds fastest is the one where reads/stat etc. happen. Writes/create/rm
etc. happen on both bricks.
Pranith.
- Original Message -
From: Костырев Александр Алексеевич a.kosty...@serverc.ru
To:
Could you post the logs of the mount process so that we can analyse what is
going on.
Did you have data on bricks before you created the volume? Did you upgrade from
3.2?
Pranith
- Original Message -
From: olav johansen luxis2...@gmail.com
To: gluster-users@gluster.org
Sent: Thursday,
Brian,
Small correction: 'sending queries to *both* servers to check they are in
sync - even read accesses.' Read fops like stat/getxattr etc are sent to only
one brick.
Pranith.
- Original Message -
From: Brian Candler b.cand...@pobox.com
To: Fernando Frediani (Qube)
.
Following are some of the examples of read-fops:
.access
.stat
.fstat
.readlink
.getxattr
.fgetxattr
.readv
Pranith.
- Original Message -
From: Brian Candler b.cand...@pobox.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: olav
Brian,
The first point (1) is working as intended; allowing something like
that can get the volume into a very complicated state.
Please go through the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=812214
Pranith
- Original Message -
From: Brian Candler
hi Ling Ho,
It seems like you are using rdma, could you confirm?
I am suspecting a memory leak. Could you help me confirm whether that is the
case? Please post the output of the following:
1) When you start the brick, perform 'kill -USR1 pid-of-brick'. This will save
a file
hi Philip,
When this happens, could you post a statedump of the process so we can see
what is causing the memory usage?
Steps to grab a statedump of the process:
1) kill -USR1 pid-of-nfs-process
2) the file is located at /tmp/glusterdump.pid-of-nfs-process
Pranith.
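The two steps above as a transcript (hypothetical: it requires a running gluster NFS server, and the pgrep pattern is an assumption about the process name):

```
pid=$(pgrep -f 'glusterfs.*nfs' | head -n1)   # pid of the gluster NFS process
kill -USR1 "$pid"                             # asks the process to write a statedump
sleep 1                                       # give it a moment to finish writing
ls -l "/tmp/glusterdump.$pid"                 # the dump lands here
```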
- Original Message -
hi Toby,
In 3.3 there is an extra step that needs to be taken to recover from
split-brain. I am automating the process. Once I give the script to the Doc
team it will be documented. Will update you once it is done.
Pranith.
- Original Message -
From: Toby Corkindale
hi Samuel,
This is because of the bug: 832305
Please use http://review.gluster.com/3583 as the fix for this.
Pranith
- Original Message -
From: samuel sam...@gmail.com
To: gluster-users@gluster.org
Sent: Monday, June 25, 2012 4:34:46 PM
Subject: [Gluster-users] managing split brain
Jake,
Granular locking is the only way data-self-heal is performed at the moment.
Could you give us the steps to re-create this issue, so that we can test this
scenario locally. I will raise a bug with the info you provide.
This is roughly the info I am looking for:
1) What is the size of
hi Jake,
Thanks for the info! I will also try to recreate the problem with the info
so far.
Pranith
- Original Message -
From: Jake Grimmett j...@mrc-lmb.cam.ac.uk
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: gluster-users@gluster.org, Anand Avati anand.av...@gmail.com
Sent
Homer,
Could you give the output of
getfattr -d -m . -e hex
/export/data10/.glusterfs/d9/b0/d9b0c350-33ba-4090-ab08-f91f30dd661f
getfattr -d -m . -e hex
/export/data11/.glusterfs/d9/b0/d9b0c350-33ba-4090-ab08-f91f30dd661f
and also 'stat' of these files.
Pranith.
- Original Message -
Samuli,
As a side note I have to say that I have seen similar problems with
RAID-5 systems even when using them as a non-replicated iSCSI target. In
my experience it's definitely not good for hosting VM images.
I think the performance problems he mentioned (I/O performance etc) were
when a
Raised the following bug to track the issue:
https://bugzilla.redhat.com/show_bug.cgi?id=842254
Thanks!!
Pranith
- Original Message -
From: Anand Avati anand.av...@gmail.com
To: Jake Grimmett j...@mrc-lmb.cam.ac.uk
Cc: Pranith Kumar Karampuri pkara...@redhat.com, gluster-users
hi Amar,
This is the format we considered initially, but we did not go with it
because it may exceed 80 chars and wrap around on small terminals if we want
to add more fields in the future.
Pranith.
- Original Message -
From: Amar Tumballi ama...@redhat.com
To: Gluster Devel
Bryan,
Why not use --xml at the end of the command? That will print the output in
XML format. Would that make it easier to parse?
Pranith.
- Original Message -
From: Bryan Washer bwas...@netsuite.com
To: Amar Tumballi ama...@redhat.com, Gluster Devel
gluster-de...@nongnu.org,
No. Output formats like that generally start out nice, but as you start adding
more fields, formatting them becomes difficult IMO.
Pranith
- Original Message -
From: Stephan von Krawczynski sk...@ithnet.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: gluster-users gluster-users
Manhong Dai,
Thanks for the script. Could you give the volume configuration also so that
we can re-create the problem in our setups.
Pranith.
- Original Message -
From: Manhong Dai da...@umich.edu
To: Anand Avati anand.av...@gmail.com
Cc: gluster-users@gluster.org
Sent: Tuesday,
hi Dario,
Could you post the output of the following commands:
gluster volume heal VmDir info healed
gluster volume heal VmDir info split-brain
Also provide the output of 'getfattr -d -m . -e hex' On both the bricks for the
two files listed in the output of 'gluster volume heal VmDir info'
/Sep/2012, at 18:21, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
Dario,
Ok, that confirms that it is not a split-brain. Could you post the getfattr
output I requested as well? What is the size of the VM files?
Pranith
- Original Message -
From: Dario Berzano
these flags are reset. I
thought it was a persistent one. Seems like your VM files are doing fine.
Pranith.
- Original Message -
From: Dario Berzano dario.berz...@cern.ch
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Monday, September 17, 2012 1
s19n,
Makes sense. Could you please raise a bug for this.
Pranith.
- Original Message -
From: s19n mail...@s19n.net
To: gluster-users@gluster.org
Sent: Wednesday, October 10, 2012 3:18:49 PM
Subject: Re: [Gluster-users] Clearing the heal-failed and split-brain status
messages
*
hi,
Could you post output of 'getfattr -d -m . -e hex fpath' on both the
bricks.
Pranith.
- Original Message -
From: Nux! n...@li.nux.ro
To: gluster-users@gluster.org
Sent: Sunday, October 28, 2012 5:19:31 PM
Subject: Re: [Gluster-users] File has different size on different bricks
On
Jonathan,
Could you give us the directory structure details so that we can try to
re-create the issue. I am assuming each file is about 300 KB. Please give us
the depth of the directory structure and how many directories are at each
level.
Thanks
Pranith
- Original Message -
From:
-
From: Jonathan Lefman jonathan.lef...@essess.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: gluster-users@gluster.org
Sent: Friday, November 2, 2012 8:46:44 AM
Subject: Re: [Gluster-users] Very slow directory listing and high CPU usage on
replicated volume
Thanks Pranith. We have tried
Thanks for reporting this Brian.
Adding Anjana.
Pranith.
- Original Message -
From: Brian Candler b.cand...@pobox.com
To: gluster-users@gluster.org
Sent: Thursday, November 15, 2012 4:24:28 PM
Subject: [Gluster-users] Replacing a failed server - 3.3
I recently had to swap a server for a
This crash is because of some other reason; the fix for it is already merged
upstream.
http://review.gluster.com/4767
Thanks for reporting the issue.
Pranith.
- Original Message -
From: 符永涛 yongta...@gmail.com
To: Toby Corkindale toby.corkind...@strategicdata.com.au
Cc:
Michael,
Could you please raise a bug[1] with io-cache component in glusterfs.
Mention the content of this mail as the bug description.
Pranith.
[1]- bugzilla.redhat.com
- Original Message -
From: Michael michael.auckl...@gmail.com
To: gluster-users@gluster.org
Sent: Tuesday, April
Michael,
I posted the fix at http://review.gluster.com/4884. Please let me know the
bug id once you raise a bug for it. The patch needs a bug-id to be merged
upstream. Then I can port it back to 3.4
Pranith.
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
Could you attach the output of gluster volume status and gluster volume info.
Pranith.
- Original Message -
From: Khoi Mai khoi...@up.com
To: gluster-users@gluster.org
Sent: Wednesday, May 1, 2013 2:41:07 AM
Subject: [Gluster-users] Volume heal daemon 3.4alpha3
gluster volume heal
-heal daemon to debug
the issue further.
Pranith.
- Original Message -
From: Khoi Mai khoi...@up.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, May 2, 2013 8:00:24 PM
Subject: Re: [Gluster-users] Volume heal daemon 3.4alpha3
[root
Ryan,
The self-heal daemon's log files are the glustershd.log* files; glustershd
stands for gluster self-heal daemon. Could you send the self-heal daemon log
files from all the machines in the cluster.
Pranith.
- Original Message -
From: Ryan Aydelott ry...@mcs.anl.gov
To: gluster-users@gluster.org
hi Greg,
Could you let us know what logs appear in fw1's mount log when fw2 is taken
down. It would be nice if you could get us all the logs (a tarball, maybe?)
from fw1 when fw2 is taken down.
Pranith.
- Original Message -
From: Greg Scott
hi,
Could you post the getfattr output for any of these files. If the afr
changelog xattrs are zero, this means that at the time of accessing the file
there were some pending transactions in progress, but by the time self-heal
was attempted the transactions had completed, so no need to
Is this the case on both bricks of the replica? What is the version of glusterfs?
Pranith.
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Users gluster-users@gluster.org
Sent: Monday, August 5, 2013 2:22:07 PM
Subject: Re
Nux,
This can happen if there is a failure in write/truncate/ftruncate on both
the files. I am wondering how the file got into such a state.
Could you throw some light on what had happened?
Pranith.
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri
Karampuri pkara...@redhat.com
Cc: Gluster Users gluster-users@gluster.org
Sent: Monday, August 5, 2013 2:52:26 PM
Subject: Re: [Gluster-users] No active sinks for performing self-heal on file
On 05.08.2013 10:15, Pranith Kumar Karampuri wrote:
Nux,
This can happen
and things
should be good now.
step-4: Check that the md5sums of the files are the same as the backups we
took in step-1. (This step is not really required, but I am paranoid.)
step-5: Give Pranith logs :-)
Pranith.
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri pkara
hey,
Please change step-2: the value should be all zeros.
Step-2: On one of the bricks, execute: setfattr -n
trusted.afr.488_1152-client-0 -v 0x file-path
Pranith.
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Nux! n
Thanks for the logs, Nux. Now that the issue is fixed, let us try to figure
out what led to this situation for those two files.
Could you let me know when the server was rebooted?
Pranith
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri pkara...@redhat.com
machine.
Pranith
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Users gluster-users@gluster.org
Sent: Monday, August 5, 2013 5:12:27 PM
Subject: Re: [Gluster-users] No active sinks for performing self-heal on file
Understood!
Would you recommend as best practice healing all the volumes after boot
time (from rc.local or some custom init script)?
For healing to happen, all the connections to the bricks must be established;
do the healing as soon as that happens.
Pranith
Nux,
I suggest you don't do this. As soon as the bricks are connected, the gluster
self-heal daemon already does this, and every 10 minutes it checks whether
there is anything to be healed and heals it.
Pranith.
- Original Message -
From: Nux! n...@li.nux.ro
To: Pranith Kumar Karampuri