As the crawl was shut down, a kill must have been issued.
It's not an OOM kill, as you didn't see the message in /var/log/messages.
So it was probably killed by you through some means, like a node restart,
a volume stop, or an explicit kill.
To fix this, we need to restart quota.
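A minimal sketch of that restart, assuming a volume named `myvol` (a placeholder; substitute your volume). Note that disabling quota drops the configured limits, so record them first:

```shell
# Save the current limits: disabling quota clears the quota configuration.
gluster volume quota myvol list > /root/quota-limits.bak

# Disable and re-enable quota; re-enabling kicks off a fresh crawl.
# (The disable step may prompt for confirmation.)
gluster volume quota myvol disable
gluster volume quota myvol enable

# Re-apply each limit afterwards, for example:
# gluster volume quota myvol limit-usage /projects 10TB
```

Re-enabling starts the crawl on every brick, so expect a similar load spike; staggering the limit-usage calls helps.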
Hi Hari,
I think I have indeed found a hint as to where the error is. As in, the script
gives me an error. This is what happens:
# python /root/glusterscripts/quotas/quota_fsch.py --sub-dir $broken_dir $brick
getfattr: Removing leading '/' from absolute path names
getfattr: Removing leading
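For context on what the script is inspecting: quota accounting lives in extended attributes on the brick-side directories, which getfattr prints hex-encoded (the "Removing leading '/'" notices are harmless). A sketch with a hypothetical brick path:

```shell
# Dump the quota xattrs of a directory on the brick (path is a placeholder):
getfattr -d -m 'trusted.glusterfs.quota' -e hex /data/brick1/dir 2>/dev/null

# The hex values are 64-bit byte counters; plain shell arithmetic decodes them:
echo $((0x0000000000100000))   # prints 1048576, i.e. 1 MiB
```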
Comments inline.
On Mon, Nov 26, 2018 at 7:25 PM Gudrun Mareike Amedick wrote:
> Do I understand correctly that I have to execute it once per brick on each
> server?
Yes. On all the servers.
Hi Hari,
I'm sorry to bother you again, but I have a few questions concerning the script.
Do I understand correctly that I have to execute it once per brick on each
server?
It is a dispersed volume, so the file size on brick side and on client side can
differ. Is that a problem?
Is it a
Yes. In that case you can run the script, see what errors it is
throwing, and then clean that directory up by setting dirty and
doing a lookup.
Again, for such a huge size, it will consume a lot of resources.
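The dirty-plus-lookup step can be sketched roughly as below. The xattr name and value are the ones Gluster 3.x quota uses, but verify them against your version before running; all paths here are placeholders:

```shell
# On each server, mark the broken directory dirty on the brick side:
setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /data/brick1/broken/dir

# Then force a lookup from a client mount so quota re-walks the accounting:
stat /mnt/vol/broken/dir > /dev/null
ls -lR /mnt/vol/broken/dir > /dev/null
```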
On Mon, Nov 26, 2018 at 3:56 PM Gudrun Mareike Amedick wrote:
Hi,
we have no notifications of OOM kills in /var/log/messages. So if I understood
this correctly, the crawls finished but my attributes weren't set
correctly? And this script should fix them?
Thanks for your help so far
Gudrun
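A quick way to double-check that no OOM kill happened (log locations vary; Debian 9 usually has /var/log/messages and/or /var/log/syslog):

```shell
# The kernel OOM killer leaves distinctive messages in the kernel log:
dmesg | grep -iE 'out of memory|oom-kill|killed process'

# And in the persisted logs (whichever exist on the node):
grep -iE 'out of memory|oom-kill|killed process' /var/log/messages /var/log/syslog 2>/dev/null
```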
On Thursday, 22.11.2018, at 13:03 +0530, Hari Gowtham wrote:
On Wed, Nov 21, 2018 at 8:55 PM Gudrun Mareike Amedick wrote:
Hi Hari,
I disabled and re-enabled the quota and I saw the crawlers starting. However,
this caused a pretty high load on my servers (200+), and this seems to
have gotten them killed again. At least, I have no crawlers running, the quotas
are not matching the output of du -h, and the crawler logs
reply inline.
On Tue, Nov 20, 2018 at 3:53 PM Gudrun Mareike Amedick wrote:
Hi,
I think I know what happened. According to the logs, the crawlers received a
signum(15), i.e. SIGTERM. They seem to have died before finishing. Probably too
much to do simultaneously. I have disabled and re-enabled quota and will set
the quotas again with more time.
Is there a way to restart a
Hi,
Can you check if the quota crawl finished? Until it has finished,
the quota list will show incorrect values.
Looking at the under-accounting, it looks like the crawl is not yet
finished (it does take a lot of time, as it has to crawl the whole
filesystem).
If the crawl has finished and
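A rough way to check whether the crawl is still running on a node (the log path is the usual one for 3.x, but confirm on your install):

```shell
# Crawl progress is logged per node; a still-growing log means it is running:
tail -n 20 /var/log/glusterfs/quota-crawl.log 2>/dev/null

# An active crawl also shows up in the process list while it runs:
ps aux | grep -i '[q]uota'
```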
Hi,
we're running a Distributed Dispersed volume with Gluster 3.12.14 on
Debian 9.6 (Stretch).
We migrated our data (>300TB) from a pure Distributed volume into this
Dispersed volume with cp, followed by multiple rsyncs.
After the migration was successful, we enabled quotas again with "gluster