Having "expanding volume corruption" issue fixed only in 3.13 brunch
you better off recreating the thing
use the trick mentioned here
http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
kill volume, reset attributes, delete .glusterfs, add new and run stat
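Roughly, the steps look like this (an untested sketch -- the volume name
"myvol", the brick path /data/brick1, the hosts and the mount point are
all placeholders, not from the original post):

  gluster volume stop myvol && gluster volume delete myvol   # kill the volume
  # reset the attributes so gluster will accept the brick path again
  setfattr -x trusted.glusterfs.volume-id /data/brick1
  setfattr -x trusted.gfid /data/brick1
  # delete the .glusterfs metadata tree
  rm -rf /data/brick1/.glusterfs
  # add it anew
  gluster volume create myvol replica 3 host1:/data/brick1 host2:/data/brick1 host3:/data/brick1
  gluster volume start myvol
  # run stat on everything through the mount point to rebuild the metadata
  find /mnt/myvol -exec stat {} \; > /dev/null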
seems that
In my experience .glusterfs is easily recoverable by going to the
brick path (if you have files there) and running stat for each object,
but through the mount point, something like:
cd BRICKPATH
sudo find . -path ./.glusterfs -prune -o -exec stat 'MOUNTPATH/{}' \;
for example, if you need to recreate
Can you please test whether it is parallel-readdir or readdir-ahead that
gives the disconnects, so we know which one to disable?
parallel-readdir is doing magic, per a PDF from last year I ran across:
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
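To narrow it down, something along these lines should do (just a sketch,
"myvol" is a placeholder):

  # turn both off first and confirm the disconnects stop
  gluster volume set myvol performance.parallel-readdir off
  gluster volume set myvol performance.readdir-ahead off
  # then re-enable one at a time and watch the client logs
  gluster volume set myvol performance.readdir-ahead on
  # if the disconnects come back it's readdir-ahead; otherwise repeat
  # the test with performance.parallel-readdir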
-v
On Thu, Jan 25, 2018 at 8:20 AM, Alan
Same here, even after update to 3.12.5-2
[2018-01-26 02:48:58.113996] W [MSGID: 101174]
[graph.c:363:_log_if_unknown_option] 0-q-readdir-ahead-0: option
'parallel-readdir' is not recognized
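If it helps, you can check what the volume actually reports for those
options (sketch, "myvol" is a placeholder):

  gluster volume get myvol performance.parallel-readdir
  gluster volume get myvol performance.readdir-ahead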
-v
On Tue, Jan 23, 2018 at 12:09 PM, Alan Orth wrote:
> Hello,
>
> I saw that
On 01/26/2018 07:32 AM, Jim Kinney wrote:
Would it be good to have this small sequence of steps noted down
somewhere other than a mailing list archive? I imagine this is going
to be sought out by a few more users choosing this course.
+1!!!
Considering we are planning to modify the syntax
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote:
> On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N wrote:
> >
> > On 01/24/2018 07:20 PM, Hoggins! wrote:
> >
> > Hello,
> >
> > The subject says it all. I have a replica 3 cluster:
> >
> >
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N wrote:
>
>
> On 01/24/2018 07:20 PM, Hoggins! wrote:
>
> Hello,
>
> The subject says it all. I have a replica 3 cluster:
>
> gluster> volume info thedude
>
> Volume Name: thedude
> Type: Replicate
> Volume ID:
On 01/24/2018 07:20 PM, Hoggins! wrote:
Hello,
The subject says it all. I have a replica 3 cluster:
gluster> volume info thedude
Volume Name: thedude
Type: Replicate
Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
Status: Started
Snapshot Count: 0
On 01/25/2018 02:14 AM, César E. Portela wrote:
Hi All,
I have two glusterfs servers and backing them up is very slow, when it
does not fail outright.
I have thousands and thousands and thousands of files...
Apparently the .glusterfs directory bears some responsibility for the
backup failure.
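A common workaround, assuming the backup is taken with rsync directly
from the brick path, is to skip the .glusterfs tree (it only holds the
gfid hardlinks/symlinks; paths and hosts below are placeholders):

  rsync -a --exclude='/.glusterfs' /data/brick1/ backuphost:/backups/brick1/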
*sigh* trying again to correct the formatting ... apologies for the earlier mess.
Having a memory issue with Gluster 3.12.4 and not sure how to
troubleshoot. I don't *think* this is expected behavior. This is on an
updated CentOS 7 box. The setup is a simple two node replicated layout
where the two nodes act as both server and client. The volume in
question: Volume Name:
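One way to see where the memory is going (a sketch; "myvol" is a
placeholder and /var/run/gluster is the default statedump location):

  # dump the state of the brick processes
  gluster volume statedump myvol
  # look for pools/allocations that keep growing between successive dumps
  grep -E 'pool-name|num_allocs' /var/run/gluster/*.dump.*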
By the way, on a slightly related note, I'm pretty sure either
parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We
are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_6.
I updated my servers and clients to 3.12.4 and enabled these two options
after reading about them
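(For reference, the two options in question are set like this; the volume
name is a placeholder:)

  gluster volume set myvol performance.readdir-ahead on
  gluster volume set myvol performance.parallel-readdir on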
Hi Kotresh,
thanks for your response...
I have made further tests based on Ubuntu 16.04.3 (latest upgrades) and
gfs 3.12.5 with the following rsync versions:
1. ii rsync 3.1.1-3ubuntu1
2. ii rsync 3.1.1-3ubuntu1.2
3. ii rsync
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen
wrote:
> Pranith Kumar Karampuri wrote on 25.01.2018 07:09:
>
>> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen
>> wrote:
>>
>> Hi!
>>>
>>> Thank you very much for your help so far. Could you
Pranith Kumar Karampuri wrote on 25.01.2018 07:09:
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen
wrote:
Hi!
Thank you very much for your help so far. Could you please give an
example command showing how to use the aux-gfid-mount to remove locks? "gluster
vol clear-locks" seems
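(For what it's worth, the kind of invocation being discussed looks roughly
like this -- volume name, gfid, path and mount point are all placeholders:)

  # mount with gfid access so objects can be reached by gfid
  mount -t glusterfs -o aux-gfid-mount server1:/myvol /mnt/gfid
  stat /mnt/gfid/.gfid/11111111-2222-3333-4444-555555555555
  # clear the stale granted inode locks on the affected path
  gluster volume clear-locks myvol /path/to/file kind granted inode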