OK, I just hit the other issue too, where .shard doesn't get healed. :)
Investigating why that is the case. Give me some time.
-Krutika
On Wed, Aug 31, 2016 at 12:39 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Just figured the steps Anuradha has provided won't work if
ix the documentation appropriately.
-Krutika
On Wed, Aug 31, 2016 at 11:29 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Tried this.
>
> With me, only 'fake2' gets healed after I bring the 'empty' brick back up
> and it stops there unless I do a 'heal-full'.
>
> Is t
full sweep on subvol glustershard-client-2
>>> [2016-08-30 14:52:29.760213] I [MSGID: 108026]
>>> [afr-self-heald.c:656:afr_shd_full_healer] 0-glustershard-replicate-0:
>>> finished full sweep on subvol glustershard-client-2
>>>
>>
>> Just realized its still healin
On Tue, Aug 30, 2016 at 6:20 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
>
>
> On Tue, Aug 30, 2016 at 6:07 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay <kdhan...@redhat.com>
On Tue, Aug 30, 2016 at 6:07 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Could you also share the glustershd logs?
>>
>
> I'll get them when I get to work
, Aug 29, 2016 at 2:20 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Mon, Aug 29, 2016 at 7:01 AM, Anuradha Talur <ata...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> - Original Message -
>>> > From: "
Ignore. I just realised you're on 3.7.14. So the problem may not be
with the granular entry self-heal feature.
-Krutika
On Tue, Aug 30, 2016 at 10:14 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> OK. Do you also have granular-entry-heal on - just so that I can isolate
> the
ument]
>
> This was after replacing the drive the brick was on and trying to get it
> back into the system by setting the volume's fattr on the brick dir. I’ll
> try the suggested method here on it it shortly.
>
> -Darrell
>
>
> On Aug 29, 2016, at 7:25 AM, Krutika Dhan
at 5:47 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
>
> On Mon, Aug 29, 2016 at 7:14 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Mon, Aug 29, 2016 at 5:25 AM, Krutika Dhananjay <kdhan...@redhat.com>
>> wrote:
>>
Could you attach both client and brick logs? Meanwhile I will try these
steps out on my machines and see if it is easily recreatable.
-Krutika
On Mon, Aug 29, 2016 at 2:31 PM, David Gossage
wrote:
> Centos 7 Gluster 3.8.3
>
> Brick1:
Here's one from me:
Sharding in GlusterFS - Past, Present and Future
I intend to cover the following in this talk:
* What sharding is, and its benefits over striping in general
* Current design
* Use cases - VM image store/HC/ROBO
* Challenges - atomicity, synchronization across
ad Monday with shards not healing for hours be related to
> this?
>
> On 17 August 2016 at 15:10, Krutika Dhananjay <kdhan...@redhat.com> wrote:
> > Good question.
> >
> > Any attempt from a client to access /.shard or its contents from the
> mount
> > point
Good question.
Any attempt from a client to access /.shard or its contents from the mount
point will be met with an EPERM (Operation not permitted). We do not expose
.shard on the mount point.
-Krutika
On Wed, Aug 17, 2016 at 10:04 AM, Ravishankar N
wrote:
> On
Thanks, I just sent http://review.gluster.org/#/c/15161/1 to reduce the
log-level to DEBUG. Let's see what the maintainers have to say. :)
-Krutika
On Tue, Aug 16, 2016 at 5:50 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Mon, Aug 15, 2016 at 6:24 PM, Krutika Dhananj
g> may contact the Service
>> Desk at 803-777-1800 (serviced...@sc.edu) and the attachment file will
>> be released from quarantine and delivered.
>>
>>
>> On Sat, Aug 13, 2016 at 6:15 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On
'gluster v heal datastore4 statistics
> heal-count' on a 5 second loop block healing?
>
> To answer my own question - I don't think so, as it appears to be
> healing quite quickly now.
>
>
> On 15 August 2016 at 17:17, Krutika Dhananjay <kdhan...@redhat.com> wrote:
> >
Could you please attach the brick logs and glustershd logs?
Also share the volume configuration please (`gluster volume info`).
-Krutika
On Mon, Aug 15, 2016 at 12:19 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Moved to a new subject as its now an issue on our cluster.
>
> As
1. Could you share the output of `gluster volume heal info`?
2. `gluster volume info`
3. fuse mount logs of the affected volume(s)?
4. glustershd logs
5. Brick logs
-Krutika
On Sat, Aug 13, 2016 at 3:10 AM, David Gossage
wrote:
> On Fri, Aug 12, 2016 at 4:25 PM,
]);
> (gdb) p local->fop
> $1 = GF_FOP_WRITE
> (gdb)
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
>
>
> --
> From: kdhan...@redhat.com
> Date: Fri, 5 Aug 2016 10:48:36 +0530
>
> Subject: Re: [Gluster-users] Gluster 3.7.13 N
ri, 5 Aug 2016 10:48:36 +0530
>
> Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
> To: mahdi.ad...@outlook.com
> CC: gluster-users@gluster.org
>
> Also, could you print local->fop please?
>
> -Krutika
>
> On Fri, Aug 5, 2016 at 10:46 AM, Krutika Dhananjay <kdhan..
But if you *do* decide to test the feature, I'd suggest that you do so with
3.8.2 (which is slated to be released on Aug 10) as it contains some
important bug fixes. :)
-Krutika
On Sun, Aug 7, 2016 at 8:39 AM, David Gossage
wrote:
> On Sat, Aug 6, 2016 at 9:58 PM,
Also, could you print local->fop please?
-Krutika
On Fri, Aug 5, 2016 at 10:46 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Were the images being renamed (specifically to a pathname that already
> exists) while they were being written to?
>
> -Krutika
>
> On
Were the images being renamed (specifically to a pathname that already
exists) while they were being written to?
-Krutika
On Thu, Aug 4, 2016 at 1:14 PM, Mahdi Adnan wrote:
> Hi,
>
> Kindly check the following link for all 7 bricks logs;
>
> https://db.tt/YP5qTGXk
>
>
OK.
Could you also print the values of the following variables from the
original core:
i. i
ii. local->inode_list[0]
iii. local->inode_list[1]
-Krutika
On Wed, Aug 3, 2016 at 9:01 PM, Mahdi Adnan wrote:
> Hi,
>
> Unfortunately no, but i can setup a test bench and see
Do you have a test case that consistently recreates this problem?
-Krutika
On Wed, Aug 3, 2016 at 8:32 PM, Mahdi Adnan wrote:
> Hi,
>
> So i have updated to 3.7.14 and i still have the same issue with NFS.
> based on what i have provided so far from logs and dumps do
d make
> sure it doesn't have any freeze or locking issues then will roll this out
> to working cluster.
>
>
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Wed, Jul 27, 2016 at 8:37 AM, David Gossage <
> dgos
Sorry, I didn't make myself clear. The reason I asked YOU to do it is
because I tried it on my system and I'm not getting the backtrace (it's all
question marks).
Attach the core to gdb.
At the gdb prompt, go to frame 2 by typing
(gdb) f 2
There, for each of the variables i asked you to get the
Could you also print and share the values of the following variables from
the backtrace please:
i. cur_block
ii. last_block
iii. local->first_block
iv. odirect
v. fd->flags
vi. local->call_count
-Krutika
On Sat, Jul 30, 2016 at 5:04 PM, Mahdi Adnan
wrote:
> Hi,
>
> I
Yes please, could you file a bug against glusterfs for this issue?
-Krutika
On Wed, Jul 27, 2016 at 1:39 AM, David Gossage
wrote:
> Has a bug report been filed for this issue or should l I create one with
> the logs and results provided so far?
>
> *David Gossage*
FYI, there's been some progress on this issue and the same has been updated
on ovirt-users ML:
http://lists.ovirt.org/pipermail/users/2016-July/041413.html
-Krutika
On Fri, Jul 22, 2016 at 11:23 PM, David Gossage wrote:
>
>
>
> On Fri, Jul 22, 2016 at 9:37 AM,
The option is useful in preventing spurious heals from being reported in
`volume heal info` output.
-Krutika
On Sat, Jul 23, 2016 at 10:05 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> 3.7.13 has been running well now for several weeks for me on a rep 3
> sharded volume, VM
Please find my response inline:
On Mon, Jul 18, 2016 at 4:03 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-07-18 12:25 GMT+02:00 Krutika Dhananjay <kdhan...@redhat.com>:
> > Hi,
> >
> > The suggestion you gave was in fact consider
On Mon, Jul 18, 2016 at 3:55 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Hi,
>
> The suggestion you gave was in fact considered at the time of writing
> shard translator.
> Here are some of the considerations for sticking with a single directory
> as opposed to a
Hi,
The suggestion you gave was in fact considered at the time of writing shard
translator.
Here are some of the considerations for sticking with a single directory as
opposed to a two-tier classification of shards based on the initial chars
of the uuid string:
i) Even for a 4TB disk with the
You saw this with FUSE mount, correct?
-Krutika
2016-07-13 16:16 GMT+05:30 Frank Rothenstein <
f.rothenst...@bodden-kliniken.de>:
> *bump*
> Any news on this topic?
>
>
>
>
>
>
> __
> BODDEN-KLINIKEN Ribnitz-Damgarten
Frank,
Could you share your volume configuration (`gluster volume info <volname>`)?
-Krutika
On Tue, Jul 12, 2016 at 3:44 PM, David Gossage
wrote:
>
>
> On Tue, Jul 12, 2016 at 5:08 AM, Frank Rothenstein <
> f.rothenst...@bodden-kliniken.de> wrote:
>
>> Hi David,
>>
>> As
Yes, the bug was found in gfapi and is under review at
http://review.gluster.org/14854.
This codepath is never exercised with a FUSE mount.
-Krutika
On Thu, Jul 7, 2016 at 12:35 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 7 July 2016 at 16:32, Lindsay Mathieson
Could you share the client (nfs?) logs?
-Krutika
On Thu, Jul 7, 2016 at 1:14 PM, Kevin Lemonnier
wrote:
> Hi,
>
> I recently installed a 3.7.12 gluster to replace our 3.7.6 in production.
> I started moving a few VMs on it, it's accessed using NFS (not ganesha, as
> it's
Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 7 July 2016 at 15:42, Krutika Dhananjay <kdhan...@redhat.com> wrote:
> > could you please share the glusterfs client logs?
>
>
> Alas, being qemu/libgfapi there aren't an client logs :(
>
> I'll shut down the
Yes, could you please share the glusterfs client logs?
-Krutika
On Thu, Jul 7, 2016 at 5:12 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Becoming a serious problem. Since my misadventure with 3.7.12 and
> downgrading back to 3.7.11 i have daily freezes of VM where they
Try 64MB?
-Krutika
On Wed, Jun 29, 2016 at 3:33 PM, Kevin Lemonnier
wrote:
> Hi,
>
> I'm installing a test cluster with 3.7.12 and setting the usual options
> (since virt group doesn't exist on debian) by hand,
> and when I try to configure sharding as usual :
>
>
The option is useful in preventing spurious heals from being reported in
`volume heal info` output.
-Krutika
On Wed, Jun 29, 2016 at 7:24 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Noticed this is the options help:
> Option: cluster.locking-scheme
> Default Value:
Could you share the output of
getfattr -d -m . -e hex <file-path-on-brick>
from all of the bricks associated with datastore4?
-Krutika
On Fri, Jun 24, 2016 at 2:04 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> What does this mean?
>
> gluster v heal datastore4 info
> Brick
?
It is critical for some of the debugging we do. :)
-Krutika
On Wed, May 25, 2016 at 2:38 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Hi Kevin,
>
>
> If you actually ran into a 'read-only filesystem' issue, then it could
> possibly because of a bug in AFR
> tha
Hi Kevin,
If you actually ran into a 'read-only filesystem' issue, then it could
possibly be because of a bug in AFR
that Pranith recently fixed.
To confirm if that is indeed the case, could you tell me if you saw the
pause after a brick (single brick) was
down while IO was going on?
-Krutika
On
Hi,
I will try to recreate this issue tomorrow on my machines with the steps
that Lindsay provided in this thread. I will let you know the result soon
after that.
-Krutika
On Wednesday, May 18, 2016, Kevin Lemonnier wrote:
> Hi,
>
> Some news on this.
> Over the week end
reply?
-Krutika
On Tue, May 17, 2016 at 4:44 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 17/05/2016 7:56 PM, Krutika Dhananjay wrote:
>
> Would you know where the logs of individual vms are with proxmox?
>
>
> No i don't I'm afraid. Would they be a fun
Would you know where the logs of individual vms are with proxmox?
In those, do you see any libgfapi/gluster log messages?
-Krutika
On Tue, May 17, 2016 at 8:38 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 17 May 2016 at 10:02, WK wrote:
> > That being
Yes, that would probably be useful in terms of at least having access to
the client logs.
-Krutika
On Mon, May 16, 2016 at 12:18 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 16 May 2016 at 16:46, Krutika Dhananjay <kdhan...@redhat.com> wrote:
> > Coul
Hi,
Could you share the mount and glustershd logs for investigation?
-Krutika
On Sun, May 15, 2016 at 12:22 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 15/05/2016 12:45 AM, Lindsay Mathieson wrote:
>
> *First off I tried removing/adding a brick.*
>
> gluster v
> cluster.data-self-heal-algorithm: full
> > performance.readdir-ahead: on
> >
> >
> > It starts at 2 and jumps to 50 because the first server is doing
> something else for now,
> > and I use 50 to be the temporary third node. If everything goes well,
> I'l
Hmm, could you get me the output of `getfattr -d -m . -e hex
<path-to-vm-file-on-brick>` from all three bricks?
And also the `ls -l` output of this vm file as seen from the mount point.
-Krutika
On Wed, Apr 27, 2016 at 1:25 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> I'm getting the following file
It is part of group virt settings. So I would assume there is a reason why
the setting was considered necessary.
Why? Did you face any issues?
-Krutika
On Mon, Apr 25, 2016 at 6:36 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> is "performance.stat-prefetch: off" still needed for
in a
> replica? Lindsay caught this issue due to her due diligence taking on 'new'
> tech - and resolved the inconsistency, but tbh this shouldn't be an admin's
> job :(
>
>
>
> On Sun, Apr 24, 2016 at 7:06 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>&
rom the mountpoint will be met with an 'operation not permitted'
error.
-Krutika
On Sun, Apr 24, 2016 at 11:42 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 24/04/2016 2:56 PM, Krutika Dhananjay wrote:
>
>> Nope, it's not necessary for them to all have the xat
Nope, it's not necessary for them to all have the xattr.
Do you see anything at least in .glusterfs/indices/dirty on all bricks?
-Krutika
On Sun, Apr 24, 2016 at 4:17 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 24/04/2016 1:24 AM, Krutika Dhananjay wrote:
>
&
Each shard is also associated with a gfid.
So do you see the gfids of these 8 shards in the .glusterfs/indices/xattrop
directory on any of the bricks?
-Krutika
On Sat, Apr 23, 2016 at 8:51 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 23/04/2016 11:14 PM, Krutika
On Sat, Apr 23, 2016 at 2:44 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Give a file on a volume how do I identify it shards? it looks to be like a
> uuid for the file with a .xxx extension
>
>
>
1) Get the trusted.gfid extended attribute of the main file from the brick.
#
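(The command above is truncated in the archive.) For illustration, here is how a trusted.gfid hex value can be turned into the canonical uuid form and the shard names derived from it; the hex value and shard count below are made up, and shards beyond the first block live on the bricks under the hidden .shard directory as <gfid>.<N>:

```shell
# Hypothetical value as returned by:
#   getfattr -n trusted.gfid -e hex <brick-path>/<file>
hexgfid="0x25df28757db84b4bb31a35a841548222"

# Strip the 0x prefix and insert dashes to get the canonical uuid form.
g=${hexgfid#0x}
gfid=$(printf '%s\n' "$g" \
  | sed 's/^\(.\{8\}\)\(.\{4\}\)\(.\{4\}\)\(.\{4\}\)\(.\{12\}\)$/\1-\2-\3-\4-\5/')
echo "$gfid"

# The shards for this file are then named <gfid>.<N> under .shard:
for n in 1 2 3; do
    echo ".shard/$gfid.$n"
done
```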
Any heal in progress?
-Krutika
On Wed, Apr 20, 2016 at 8:17 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> 3.7.11 hasn't been going so well :( first glusterfsd crashed and
> zombified requiring a reboot. Now its hogging the CPU, at one time it was
> up to 1000%.
>
> Is it safe
On Tue, Apr 19, 2016 at 5:25 PM, Kevin Lemonnier
wrote:
> Hi,
>
> As stated in another thread, we currently have a 3 nodes cluster with
> sharding enabled used for storing VM disks.
> I am migrating that to a new 3.7.11 cluster to hopefully fix the problems
> with had
On Tue, Apr 19, 2016 at 5:30 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Is it possible to confirm the following understandings on my part?
>
>
> - heal info shows the list of files with uncommitted writes across the
> bricks that would need heals **if** a brick went down before
try to create a 3.7.10 cluster in the week end slowly move
> the VMs on it then,
> Thanks a lot for your help,
>
> Regards
>
>
> On Mon, Apr 18, 2016 at 07:58:44PM +0530, Krutika Dhananjay wrote:
> > Hi,
> >
> > Yeah, so the fuse mount log didn't convey much i
;
> Thanks for your help,
>
>
> On Mon, Apr 18, 2016 at 12:28:28PM +0530, Krutika Dhananjay wrote:
> > Sorry, I was referring to the glusterfs client logs.
> >
> > Assuming you are using FUSE mount, your log file will be in
> > /var/log/glusterfs/<mount-point-path>.log
> >
&
That's just a base file that all the gfid files are hard-linked to.
Since it is pointless to consume one inode for each gfid that needs a heal,
we use a base file
with an identifiable name (xattrop-*) and then hard-link the actual gfid
files representing pointers for heal
to this file.
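A quick sketch of the mechanism on a plain filesystem (scratch directory and names made up): hard links let several gfid-named entries share the single inode of the xattrop-* base file, which is exactly the saving described above:

```shell
# Scratch directory standing in for <brick>/.glusterfs/indices/xattrop
dir=$(mktemp -d)

touch "$dir/xattrop-0f1a2b3c"    # the base file
# Per-gfid entries are hard links to it, consuming no extra inodes:
ln "$dir/xattrop-0f1a2b3c" "$dir/25df2875-7db8-4b4b-b31a-35a841548222"
ln "$dir/xattrop-0f1a2b3c" "$dir/9c0d1e2f-aaaa-bbbb-cccc-ddddeeee0000"

links=$(stat -c %h "$dir/xattrop-0f1a2b3c")
echo "link count on base file: $links"

rm -rf "$dir"
```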
-Krutika
brick replacement.
> Should I maybe lower the shard size ? Won't solve the fact that 2 bricks
> on 3 aren't keeping the filesystem usable but might make the healing
> quicker right ?
>
> Thanks
>
> Le 17 avril 2016 17:56:37 GMT+02:00, Krutika Dhananjay <
> kdhan...
As per
https://github.com/gluster/glusterweb/blob/master/source/community/roadmap/3.8/index.md
,
it would be around end-of-May/June.
-Krutika
On Mon, Apr 18, 2016 at 1:34 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 18/04/2016 12:42 AM, Krutika Dhanan
Could you share the client logs and information about the approx time/day
when you saw this issue?
-Krutika
On Sat, Apr 16, 2016 at 12:57 AM, Kevin Lemonnier
wrote:
> Hi,
>
> We have a small glusterFS 3.7.6 cluster with 3 nodes running with proxmox
> VM's on it. I did set
Hmm I don't have an answer to that question right now because I'm no
on-disk fs expert.
As far as gluster is concerned, I don't actually see any issue except for
the following:
as of today, gluster's entry self-heal algorithm crawls the whole parent
directory and compares the
source and sink to
I just checked the code. All `statistics heal-count` does is count the
number of indices (in other words the number of files to be healed) present
per brick in the .glusterfs/indices/xattrop directory - which is where we
hold empty files representing inodes that need heal - and prints them. I
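A minimal sketch of that counting step, run against a scratch directory standing in for <brick>/.glusterfs/indices/xattrop (names are made up, and excluding the xattrop-* base file from the count is my assumption):

```shell
idx=$(mktemp -d)

# Two empty per-gfid index files plus the base file they link to:
touch "$idx/25df2875-7db8-4b4b-b31a-35a841548222" \
      "$idx/9c0d1e2f-aaaa-bbbb-cccc-ddddeeee0000" \
      "$idx/xattrop-0f1a2b3c"

# Count the per-gfid entries, i.e. the inodes that need heal:
count=$(find "$idx" -maxdepth 1 -type f ! -name 'xattrop-*' | wc -l | tr -d ' ')
echo "$count"

rm -rf "$idx"
```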
Hmm with thousands of entries that need heal and need to be reported by
heal-info, it seems that it is quite likely for
heal-info to take some time to report since it takes locks to examine if a
file needs heal or not, and the self-heal daemon
is also a contender for the same lock.
`heal
at 3:32 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Would you happen to know what those 6 entries that need heal correspond
> to? Assuming heal-info reported the status at least once without hanging.
> Also, could you share the contents of glfsheal-datastore.log, spec
, Apr 13, 2016 at 8:13 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 14/04/2016 12:19 AM, Krutika Dhananjay wrote:
>
>> Hmm what version of gluster was the hang seen on?
>>
>
>
> Ah yes, sorry - 3.7.9
>
> The heal was triggered by a "
Hi,
Just curious, are you seeing poor heal performance of VMs by any chance in
3.7.9 *even* with sharding?
-Krutika
On Thu, Apr 14, 2016 at 3:14 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On 04/14/2016 09:46 AM, Lindsay Mathieson wrote:
>
>> Sorry to bring this up again,
You can disable it, we were using that option to work around a caching
issue earlier.
That bug has been fixed now.
# gluster volume set <volname> performance.strict-write-ordering off
-Krutika
On Thu, Apr 14, 2016 at 8:27 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 14 April 2016 at
Hmm what version of gluster was the hang seen on?
-Krutika
On Wed, Apr 13, 2016 at 5:09 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> nb. iowait on vna was very high (20%) from other non gluster processes,
> but its now settled down to around 10%
>
> --
> Lindsay Mathieson
>
>
Hi,
There is one bug that was uncovered recently wherein the same file could
possibly get healed twice before marking that it no longer needs a heal.
Pranith sent a patch @ http://review.gluster.org/#/c/13766/ to fix this,
although IIUC this bug existed in versions < 3.7.9 as well.
Also because
If you want the existing files in your volume to get sharded, you would
need to
a. enable sharding on the volume and configure block size, both of which
you have already done,
b. cp the file(s) into the same volume with temporary names
c. once done, you can rename the temporary paths back to their
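A sketch of steps (b) and (c); a scratch directory stands in for the gluster fuse mount here, so the commands are only illustrative, since actual sharding happens only when the copy is written through a mount of the sharded volume:

```shell
# $MNT stands in for your gluster fuse mount (hypothetical paths).
MNT=$(mktemp -d)
printf 'vm disk contents\n' > "$MNT/vm1.img"   # pre-existing, unsharded file

cp "$MNT/vm1.img" "$MNT/vm1.img.tmp"           # (b) copy under a temporary name
mv "$MNT/vm1.img.tmp" "$MNT/vm1.img"           # (c) rename back over the original

contents=$(cat "$MNT/vm1.img")
echo "$contents"
rm -rf "$MNT"
```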
ould be able to coordinate that on their own.
>
>
> On March 14, 2016 8:43:56 PM PDT, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>>
>> Yes. 'heal-full' should be executed on the node with the highest uuid.
>>
>> Here's how I normally figure out what uuid is t
ormance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
>
>
> same error.
> and still mounting using glusterfs will work just fine.
>
> Respectfully
> *Mahdi A. Mahdi*
> <mahdi.ad...@outlook.com>
>
> On 03/15/2016 11:04 AM, Krut
ance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
>
> and after mounting it in ESXi and trying to clone a VM to it, i got the
> same error.
>
>
> Respectfully
> *Mahdi A. Mahdi*
>
>
> On 03/15/2016 10:44 AM, Krutika Dhananja
ed volume and enabled sharding and mounted it via glusterfs and it
> worked just fine, if i mount it with nfs it will fail and gives me the same
> errors.
>
> Respectfully
> *Mahdi A. Mahdi*
>
> On 03/15/2016 06:24 AM, Krutika Dhananjay wrote:
>
> Hi,
>
>
Yes. 'heal-full' should be executed on the node with the highest uuid.
Here's how I normally figure out which uuid is the highest:
Put all the nodes' uuids in a text file, one per line, sort them and get
the last uuid from the list.
To be more precise:
On any node, you can get the uuids of the
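With made-up uuids, the sort-and-pick-the-last step looks like this; on a real cluster the uuids would come from `gluster peer status` plus the local node's /var/lib/glusterd/glusterd.info:

```shell
# Hypothetical uuids, one per line, sorted lexicographically;
# the last one after sorting is the "highest".
highest=$(printf '%s\n' \
    "8aa1afcc-1234-4d0b-9f5e-000000000001" \
    "f3b9e2d1-5678-4c1a-8a2b-000000000002" \
    "03c47a90-9abc-4e5f-b6d7-000000000003" \
  | sort | tail -n 1)
echo "$highest"
```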
>
>
>
>
>
> [root@-nfs02 ~]# gluster volume info
>
>
>
> Volume Name: opt
>
> Type: Replicate
>
> Volume ID: 5b77070f-5378-45ec-9eda-5f7dd007ff8a
>
> Status: Started
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
Mar 10 2016 20:20:45.
>
>
> Respectfully
> *Mahdi A. Mahdi*
>
> Systems Administrator
> IT. Department
> Earthlink Telecommunications <https://www.facebook.com/earthlinktele>
>
> Cell: 07903316180
> Work: 3352
> Skype: mahdi.ad...@outlook.com
> On 03/14/2016
It would be better to use sharding over stripe for your vm use case. It
offers better distribution and utilisation of bricks and better heal
performance.
And it is well tested.
Couple of things to note before you do that:
1. Most of the bug fixes in sharding have gone into 3.7.8. So it is advised
A gfid mismatch should also be showing up in the form of a split-brain on
the parent directory of the entry in question.
1) In your case, does 'public_html/cello/ior_files' show itself up in the
output of `gluster volume heal info split-brain`?
2) And what version of gluster are you using?
3)
automatically.
HTH,
Krutika
On Fri, Mar 4, 2016 at 5:59 PM, Yannick Perret <yannick.per...@liris.cnrs.fr
> wrote:
> Le 04/03/2016 13:12, Krutika Dhananjay a écrit :
>
> You could configure read-subvolume for individual clients using the
> following mount option:
>
> mount -t glu
Could you try disabling client-side heals and see if it works for you?
Here's what you'd need to do:
# gluster volume set <volname> cluster.entry-self-heal off
# gluster volume set <volname> cluster.data-self-heal off
# gluster volume set <volname> cluster.metadata-self-heal off
-Krutika
On Wed, Mar 2, 2016 at 12:37 AM,
s the read child, use the following:
mount -t glusterfs -o "xlator-option=*replicate*.read-subvolume-index=1" \
<server>:/<volname> <mount-point>
-Krutika
- Original Message -
> From: "Yannick Perret" <yannick.per...@liris.cnrs.fr>
> To: "Krutika Dhananjay" <kdhan...@red
Hi,
So up until in 3.5.x, there was a read child selection mode called 'first
responder' where the brick that responds first for a particular client becomes
the read child.
After the replication module was rewritten for the most part from 3.6.0, this
mode was removed.
There exists a
Ok, and what version of glusterfs are you using?
-Krutika
- Original Message -
> From: "Yannick Perret" <yannick.per...@liris.cnrs.fr>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Thursday, Marc
What does "nearest" storage server mean? Are the clients residing in the
storage pool too? Or are they external to the cluster?
-Krutika
- Original Message -
> From: "Yannick Perret"
> To: gluster-users@gluster.org
> Sent: Thursday, March 3, 2016 5:38:33
- Original Message -
> From: "Kyle Maas"
> To: gluster-users@gluster.org
> Sent: Thursday, February 25, 2016 11:36:53 PM
> Subject: [Gluster-users] AFR Version used for self-heal
> How can I tell what AFR version a cluster is using for self-heal?
> The
Hi Dominique,
I saw the logs attached. At some point all bricks seem to have gone down as I
see
[2016-01-31 16:17:20.907680] E [MSGID: 108006] [afr-common.c:3999:afr_notify]
0-cluster1-replicate-0: All subvolumes are down. Going offline until atleast
one of them comes back up.
in the client
-
> From: "Klearchos Chaloulos (Nokia - GR/Athens)"
> <klearchos.chalou...@nokia.com>
> To: "EXT Krutika Dhananjay" <kdhan...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Friday, February 5, 2016 8:30:57 PM
> Subject: RE: [Gluster-users] Diffe
Hi,
Could you share the following pieces of information:
1) output of `gluster volume info `
2) the client/mount logs
3) glustershd logs
-Krutika
- Original Message -
> From: "Klearchos Chaloulos (Nokia - GR/Athens)"
>
> To:
Could you share the logs?
I'd like to look at the glustershd logs and etc-glusterfs-glusterd.vol.log
files.
-Krutika
- Original Message -
> From: "Lindsay Mathieson"
> To: "gluster-users"
> Sent: Saturday, January 23, 2016
it best).
3) Share the resultant glustershd.log from all three machines.
-Krutika
- Original Message -
> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>, "gluster-users"
> <gluster-use
side heal, you need to disable the self-heal-daemon. You can
do that with # gluster volume set <volname> cluster.self-heal-daemon off
HTH,
Krutika
- Original Message -
> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> To: "Krutika Dhananjay" <kdhan.
Also could you please attach glustershd.log files and the client logs?
-Krutika
- Original Message -
> From: "Krutika Dhananjay" <kdhan...@redhat.com>
> To: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> Cc: "gluster-users" <G