Hi,
I'm currently removing a few bricks from a distributed dispersed volume with
gluster volume remove-brick; I'm running GlusterFS 6.6. This triggered a
rebalance that is supposed to migrate the data off the bricks. This morning,
it had ~50,000 failures on each server. I found a whole
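For keeping an eye on those failure counts, the per-node numbers can be pulled out of the `gluster volume remove-brick ... status` table. A minimal sketch; the sample output below is illustrative of the GlusterFS 6 status layout, not captured from this cluster, and the node names are made up:

```python
# Hypothetical sketch: tally per-node failure counts from
# `gluster volume remove-brick <vol> <bricks> status` output.
# The sample mimics the GlusterFS 6 status table; adjust the
# column index if your version prints a different layout.

sample_status = """\
Node  Rebalanced-files  size   scanned  failures  skipped  status     run time in h:m:s
----  ----------------  ----   -------  --------  -------  ---------  -----------------
srv1             12045  1.2TB   980421     50112    23001  completed            8:12:33
srv2             11980  1.1TB   975210     49876    22410  completed            8:10:02
"""

def failure_counts(status_text):
    """Return {node: failures} parsed from the status table."""
    counts = {}
    for line in status_text.splitlines():
        fields = line.split()
        # Data rows have a node name plus numeric columns; the header
        # and separator rows fail the isdigit() check on column 4.
        if len(fields) >= 8 and fields[4].isdigit():
            counts[fields[0]] = int(fields[4])
    return counts

print(failure_counts(sample_status))
```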
Hi,
I have two broken directories that cannot be deleted from the client side. They
were probably broken by this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1698861
I have $dir and $dir_old with identical subdir structure on the brick side; they
share a GFID, and the content is gone from the client side. I'll delete that
line from potential_heal. It might be an interesting edge case, though.
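When digging into shared-GFID cases like this on the brick side, it helps that GlusterFS keeps every file reachable under a fixed path derived from its GFID: .glusterfs/<first two hex chars>/<next two>/<full GFID>. A minimal sketch; the brick root and GFID below are placeholders, not values from this cluster:

```python
import os

def gfid_brick_path(brick_root, gfid):
    """Map a canonical GFID to its entry under .glusterfs on a brick.

    GlusterFS stores every object at
    <brick>/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>.
    """
    gfid = gfid.lower()
    return os.path.join(brick_root, ".glusterfs", gfid[:2], gfid[2:4], gfid)

# Placeholder brick path and GFID, purely illustrative:
print(gfid_brick_path("/srv/brick1", "9f2d1c3a-0b4e-4f6d-8a7b-1234567890ab"))
# -> /srv/brick1/.glusterfs/9f/2d/9f2d1c3a-0b4e-4f6d-8a7b-1234567890ab
```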
Kind regards
Gudrun
On Monday, 2 December 2019 at 02:15 -0500, Ashish Pandey wrote:
>
>
> From: "Gudrun Mareike Amedick"
> To: "Ashish Pandey"
> Cc: "Gluster-users"
> Sent: Friday, November 29, 2019 8:45:13 PM
> Subject: Re: [Gluster-users] Trying to fix files
/23380/
>
> You can find the steps to use these scripts in README.md file
>
> ---
> Ashish
>
> From: "Gudrun Mareike Amedick"
> To: "Gluster-users"
> Sent: Thursday, November 28, 2019 3:57:18 PM
> Subject: [Gluster-users] Trying to fix files
Hi,
I have a distributed dispersed volume with files that refuse to heal, and I'm
trying to fix them manually.
I'm currently working on a file that is present on all bricks: the GFID exists
in the .glusterfs structure, and getfattr shows identical attributes for all
copies. They look like this:
#
kipped for $file3.
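When comparing those attributes across bricks, the value `getfattr -e hex` prints for trusted.gfid can be converted back into the canonical dashed GFID form. A small sketch; the sample hex value is made up:

```python
import uuid

def gfid_from_hex(xattr_value):
    """Convert a `getfattr -e hex` value (trusted.gfid=0x...) to a GFID.

    The xattr holds the raw 16-byte UUID, so stripping the 0x prefix
    and feeding the hex digits to uuid.UUID yields the dashed form.
    """
    return str(uuid.UUID(hex=xattr_value.removeprefix("0x")))

# Illustrative value, not from a real brick:
print(gfid_from_hex("0x9f2d1c3a0b4e4f6d8a7b1234567890ab"))
# -> 9f2d1c3a-0b4e-4f6d-8a7b-1234567890ab
```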
Kind regards,
Gudrun
On Thursday, 31 January 2019 at 14:46 +0530, Nithya Balachandran wrote:
>
>
> On Wed, 30 Jan 2019 at 19:12, Gudrun Mareike Amedick
> wrote:
> > Hi,
> >
> > a bit of additional info inline. On Monday, 28 January 2019 at 10:23 +0100, Frank Ruehlemann wrote:
Hi,
a bit of additional info inline. On Monday, 28 January 2019 at 10:23 +0100,
Frank Ruehlemann wrote:
> On Monday, 28 January 2019 at 09:50 +0530, Nithya Balachandran wrote:
> >
> > On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick <
> > g.amed...@uni-luebeck.de> wrote:
Hi all,
we have a problem with a distributed dispersed volume (GlusterFS 3.12). We have
files that lost their permissions or gained sticky bits. The files
themselves seem to be okay.
It looks like this:
# ls -lah $file1
---------- 1 www-data www-data 45M Jan 12 07:01 $file1
# ls -lah $file2
0a6420140012
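On a distributed volume, a brick file whose mode is nothing but a sticky bit (shown by ls as ---------T) is normally a DHT link-to pointer rather than real data, which is one way files appear to "gain" sticky bits. A small mode check along those lines; the helper name is mine:

```python
import stat

def looks_like_dht_linkto(mode):
    """True when a brick file's mode is exactly the sticky bit with no
    rwx bits set, i.e. the mode DHT gives its link-to pointer files
    (displayed by ls as ---------T)."""
    perm = stat.S_IMODE(mode)
    return perm == stat.S_ISVTX

# Mode 01000: sticky bit only, no permission bits.
print(looks_like_dht_linkto(0o1000))    # -> True
# Regular rw-r--r-- file:
print(looks_like_dht_linkto(0o100644))  # -> False
```

A real link-to pointer also carries the trusted.glusterfs.dht.linkto xattr, so the mode check alone is only a first filter.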
Does that mean that the crawlers didn't finish their jobs?
Kind regards
Gudrun
On Monday, 26 November 2018 at 20:20 +0530, Hari Gowtham wrote:
> Comments inline.
>
> On Mon, Nov 26, 2018 at 7:25 PM Gudrun Mareike Amedick
> wrote:
> >
> >
> >
that directory up with setting dirty and then
> doing a lookup.
> Again, for such a huge size, it will consume a lot of resources.
>
> On Mon, Nov 26, 2018 at 3:56 PM Gudrun Mareike Amedick
> wrote:
> >
> >
> > Hi,
> >
> > we have no notifications of O
:
> On Wed, Nov 21, 2018 at 8:55 PM Gudrun Mareike Amedick
> wrote:
> >
> >
> > Hi Hari,
> >
> > I disabled and re-enabled the quota and I saw the crawlers starting.
> > However, this caused a pretty high load on my servers (200+) and this seem
> >
upgrading soon) or could it break things?
Kind regards
Gudrun Amedick
On Tuesday, 20 November 2018 at 16:59 +0530, Hari Gowtham wrote:
> reply inline.
> On Tue, Nov 20, 2018 at 3:53 PM Gudrun Mareike Amedick
> wrote:
> >
> >
> > Hi,
> >
> > I think I know what
Hi,
I think I know what happened. According to the logs, the crawlers received a
SIGTERM (signal 15). They seem to have died before finishing, probably because
there was too much to do simultaneously. I have disabled and re-enabled quota
and will set the quotas again with more time in between.
Is there a way to restart a
Hi,
we're planning a dispersed volume with at least 50 project directories, each
with its own quota ranging between 0.1TB and 200TB. Comparing XFS project
quotas across several servers and bricks to make sure their totals match the
desired values doesn't really sound practical. It would
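Setting the limits on the volume itself would avoid that per-brick bookkeeping entirely, since `gluster volume quota <vol> limit-usage <dir> <size>` applies one limit per directory across all bricks. A sketch that only generates the commands; the volume name and project map are made up:

```python
# Hypothetical sketch: emit one `gluster volume quota ... limit-usage`
# command per project directory, instead of tracking XFS project
# quotas per brick. Volume name and projects are placeholders.

projects = {
    "/projects/alpha": "200TB",
    "/projects/beta": "0.1TB",
}

def quota_commands(volume, limits):
    """Build one limit-usage command per project directory."""
    return [
        f"gluster volume quota {volume} limit-usage {path} {limit}"
        for path, limit in sorted(limits.items())
    ]

for cmd in quota_commands("projvol", projects):
    print(cmd)
```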
Hi,
I'm currently facing the same behaviour.
Today, one of my users tried to delete a folder. It failed, saying the
directory wasn't empty. ls -lah showed an empty folder, but on the bricks I
found some files. Renaming the directory caused the files to reappear.
We're running gluster 3.12.7-1 on