Hi Ravi,
I'd already done exactly that before, where step 3 was a simple `rm -rf
/nodirectwritedata/gluster/gvol0`. Do you have another suggestion for what the
cleanup or reformat should be?
Thank you.
On Wed, 22 May 2019 at 13:56, Ravishankar N wrote:
Hmm, so the volume info seems to indicate that the add-brick was
successful but the gfid xattr is missing on the new brick (as are the
actual files, barring the .glusterfs folder, according to your previous
mail).
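Before removing anything, it may be worth confirming what the new brick is actually missing. A minimal sketch, assuming the brick path from earlier in the thread and root access on the new node (a healthy brick root normally carries the trusted.gfid and trusted.glusterfs.volume-id xattrs):

```shell
# Check the identifying xattrs on the brick root (run as root on the
# new arbiter node; path is the one quoted earlier in the thread).
getfattr -n trusted.gfid -e hex /nodirectwritedata/gluster/gvol0
getfattr -n trusted.glusterfs.volume-id -e hex /nodirectwritedata/gluster/gvol0
```

If either command reports "No such attribute", the brick was never fully initialised by the add-brick.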
Do you want to try removing and adding it again?
1. `gluster volume
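The quoted steps are cut off above, but a remove-and-re-add cycle for an arbiter brick could look roughly like the following sketch. The hostname `gfs3` is an assumption (the thread only names gfs1); the volume name and brick path are taken from the thread:

```shell
# Sketch only -- adjust hostname/paths to your cluster.
# 1. Drop the arbiter brick, returning to a plain 2-way replica:
gluster volume remove-brick gvol0 replica 2 \
    gfs3:/nodirectwritedata/gluster/gvol0 force

# 2. Wipe the old brick directory, including the hidden .glusterfs
#    metadata and any stale extended attributes, then recreate it:
rm -rf /nodirectwritedata/gluster/gvol0
mkdir -p /nodirectwritedata/gluster/gvol0

# 3. Re-add it as the arbiter of a replica-3 volume:
gluster volume add-brick gvol0 replica 3 arbiter 1 \
    gfs3:/nodirectwritedata/gluster/gvol0
```

Step 2 is run on the arbiter node itself; steps 1 and 3 can be run from any node in the trusted pool.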
Hi Ravi,
Certainly. On the existing two nodes:
gfs1 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0
getfattr: Removing leading '/' from absolute path names
# file: nodirectwritedata/gluster/gvol0
trusted.afr.dirty=0x
Hi David,
Could you provide the `getfattr -d -m. -e hex
/nodirectwritedata/gluster/gvol0` output of all bricks and the output of
`gluster volume info`?
Thanks,
Ravi
On 22/05/19 4:57 AM, David Cunningham wrote:
Hi Sanju,
Here's what glusterd.log says on the new arbiter server when trying to add
the node:
[2019-05-22 00:15:05.963059] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0x3b2cd)
[0x7fe4ca9102cd]
-->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0xe6b85)
Today's meeting will happen a couple of hours from now, i.e. 1 PM EST at
https://bluejeans.com/486278655
I am not able to see the meeting in my calendar. I am not sure whether this
is the case just for me or whether it is not visible to others as well.
Either way, I will be waiting at the above mentioned
On Mon, May 20, 2019 at 9:05 PM Vlad Kopylov wrote:
>
> Thank you Prasanna.
>
> Do we have architecture somewhere?
Vlad,
Although the complete set of details might be missing at one place
right now, some pointers to start are available at,
https://github.com/gluster/gluster-block#gluster-block
Hi Martin,
Glad it worked! And yes, 3.7.6 is really old! :)
So the issue is occurring when the VM flushes outstanding data to disk, and
this is taking > 120s because there are a lot of buffered writes to flush,
possibly followed by an fsync too, which needs to sync them to disk (volume
profile would
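If the > 120s flush is what triggers the guest's hung-task warnings (an assumption; the original message is cut off here), one common mitigation is to cap how much dirty data the guest can buffer, so writeback starts earlier and the final flush stays short. A sketch with illustrative values, not tuned recommendations:

```shell
# Run inside the VM as root. Values are examples only.
sysctl -w vm.dirty_background_bytes=67108864   # begin async writeback at 64 MB
sysctl -w vm.dirty_bytes=268435456             # block writers beyond 256 MB dirty
```

Persisting these would go in /etc/sysctl.conf or a drop-in under /etc/sysctl.d/.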
Hello everyone,
we are still seeking a day and time to talk about interesting Samba /
GlusterFS issues. Here is a new list of possible dates and times.
May 22nd – 24th at 12:30 - 14:30 IST (9:00 - 11:00 CEST)
May 27th – 29th and 31st at 12:30 - 14:30 IST (9:00 - 11:00 CEST)
On May 30th