Hi,
just a guess and easy to test/try: inodes? df -i?
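For example (a quick sketch; the brick path is the one mentioned later in this
thread and may differ on your system):
```
# compare block usage and inode usage on the brick filesystem
df -h /nodirectwritedata/gluster/gvol0
df -i /nodirectwritedata/gluster/gvol0
# if IUse% hits 100%, writes fail with ENOSPC even though blocks are still free
```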
regards,
Hubert
On Fri, 6 Mar 2020 at 04:42, David Cunningham wrote:
>
> Hi Aravinda,
>
> That's what was reporting 54% used, at the same time that GlusterFS was
> giving no space left on device errors. It's a bit worrying that they're not
> reporting the same thing.
Thank you
This is what I was looking for.
> On 6 Mar 2020 at 12:47, Aravinda VK wrote:
>
> gluster volume set help?
>
> —
> regards
> Aravinda Vishwanathapura
> https://kadalu.io
>
>> On 06-Mar-2020, at 6:39 AM, gil han Choi wrote:
>>
>> Hello
>>
>> I used a command to print out the default values and descriptions of all
>> options.
@Amar/Mohit, Do you see any issues with Posix reserve feature?
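For reference, the reserve threshold can be checked per volume (a sketch,
assuming the volume name "gvol0" that appears elsewhere in this thread):
```
# storage.reserve keeps a percentage of each brick free; once a brick crosses
# that threshold the posix layer returns ENOSPC even if df still shows space
gluster volume get gvol0 storage.reserve
```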
> On 06-Mar-2020, at 9:11 AM, David Cunningham wrote:
>
> Hi Aravinda,
>
> That's what was reporting 54% used, at the same time that GlusterFS was
> giving no space left on device errors. It's a bit worrying that they're not
> reporting the same thing.
gluster volume set help?
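For example (the grep filter is just one way to narrow the output down to a
single option):
```
# list every volume option with its default value and description
gluster volume set help
# show only one option, e.g. cluster.lookup-unhashed
gluster volume set help | grep -A 2 'Option: cluster.lookup-unhashed'
```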
—
regards
Aravinda Vishwanathapura
https://kadalu.io
> On 06-Mar-2020, at 6:39 AM, gil han Choi wrote:
>
> Hello
>
> I used a command to print out the default values and descriptions of all
> options.
> But I can't remember what command I used and can't find it.
>
Hi Aravinda,
That's what was reporting 54% used, at the same time that GlusterFS was
giving no space left on device errors. It's a bit worrying that they're not
reporting the same thing.
Thank you.
On Fri, 6 Mar 2020 at 16:33, Aravinda VK wrote:
> Hi David,
>
> What is it reporting for the brick's `df` output?
Hi David,
What is it reporting for the brick's `df` output?
```
df /nodirectwritedata/gluster/gvol0
```
—
regards
Aravinda Vishwanathapura
https://kadalu.io
> On 06-Mar-2020, at 2:52 AM, David Cunningham wrote:
>
> Hello,
>
> A major concern we have is that "df" was reporting only 54% used
Hello
I used a command to print out the default values and descriptions of all
options.
But I can't remember what command I used and can't find it.
Which command shows this information?
Option: cluster.lookup-unhashed
Default Value: on
Description: This option if set to ON, does a lookup through
Hello,
A major concern we have is that "df" was reporting only 54% used and yet
GlusterFS was giving "No space left on device" errors. We rely on "df" to
report the correct result to monitor the system and ensure stability. Does
anyone know what might have been going on here?
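A couple of checks that may narrow down the mismatch (a sketch; the volume and
brick names are taken from earlier in the thread):
```
# free space and free inodes per brick as Gluster itself reports them
gluster volume status gvol0 detail
# compare with the OS view of the brick filesystem
df -h /nodirectwritedata/gluster/gvol0
```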
Thanks in advance.
Hi,
Thank you for your message.
Since I had no response within one day, I uninstalled GlusterFS on all
nodes and installed version 7.2.
I re-created all volumes using the 4 servers in the peer, and now I'm rsyncing
all the data from the "old" bricks to the new volumes. This also saves me from
having to do a
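A sketch of that kind of copy, with hypothetical paths; when reading straight
off an old brick, the internal .glusterfs directory should be excluded so
Gluster's housekeeping metadata is not copied into the new volume:
```
# old brick read directly on disk -> new volume via its FUSE mount point
rsync -aH --exclude='.glusterfs' /data/old-brick/ /mnt/new-gvol0/
```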
Hi Aravinda,
Thanks for the reply. This test server is indeed the master server for
geo-replication to a slave.
I'm really surprised that geo-replication simply keeps writing logs until
all space is consumed, without cleaning them up itself. I didn't see any
warning about it in the
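If useful, a rough way to see how much space those logs occupy and to spot old
ones (a sketch; the path is the usual default log location and may differ per
install):
```
# total size of geo-replication logs on the master
du -sh /var/log/glusterfs/geo-replication/
# list rotated logs older than 30 days as cleanup candidates
find /var/log/glusterfs/geo-replication/ -name '*.log*' -mtime +30 -ls
```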