Note that live-streaming is available at:
https://fosdem.org/2018/schedule/streaming/
The talks will be archived too.
- Original Message -
> From: "Raghavendra Gowdappa"
> To: "Gluster Devel", "gluster-users"
After an OS update to CentOS 7.4 or Red Hat 6.9 and an update to Gluster 3.12.4, the
‘df’ command shows only part of the available space on the mount point for
multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and
clients.
We have 2 different server configurations.
We noticed something similar.
Out of interest, does du -sh . show the same size?
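As an illustrative sketch (the mount path here is hypothetical), comparing the two
on the client would look something like:

    # space as reported by the volume's aggregated statfs
    df -h /mnt/glustervol
    # actual usage summed from the files themselves
    cd /mnt/glustervol && du -sh .

If df under-reports while du matches the sum of the bricks' used space, the
problem is in how the volume aggregates per-brick free-space figures rather than
in the data itself.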
--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod
Words are my own opinions and do not necessarily represent those of my employer
or partners.
Hi,
Geo-replication expects the gfids (unique identifiers, similar to inode
numbers in backend file systems) to be the same
for a file on both the master and the slave gluster volume. If the data was
copied directly, by means other than geo-replication,
the gfids will differ. The crashes you are seeing are
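As an aside on the gfid point above: one can compare gfids by hand by reading
the trusted.gfid xattr directly from the brick backend on each side; the brick
and file paths below are hypothetical:

    # run as root on a master brick, then on the matching slave brick
    getfattr -n trusted.gfid -e hex /bricks/brick1/data/path/to/file

If the two hex values differ for the same file, it was not created through
geo-replication.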
Sam,
For du -sh on my newer volume, the result is 161T. The sum of the Used space in
the df -h output for all the bricks is ~163T, close enough for me to believe
everything is there. The total used space in the df -h of the mount point is
83T, roughly half of what is actually used.
Relevant lines from
Very similar to what we noticed.
I suspect it has something to do with the metadata or xattrs stored on the
filesystem, which give quite different results from the actual file sizes.
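One way to poke at that theory is to dump all trusted xattrs for a file
directly on a brick; the brick path here is hypothetical:

    # hex-encoded dump of every xattr on one backend file
    getfattr -d -m . -e hex /bricks/brick1/data/somefile

That shows entries such as trusted.gfid and other gluster-internal bookkeeping
without going through the client mount.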
--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod
Words are my own
Hi Eva,
Can you send us the following:
gluster volume info
gluster volume status
The log files and a tcpdump of df on a fresh mount point for that volume.
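For reference, gathering those might look roughly like this; the volume name,
server, and paths are hypothetical, and port 24007 is only the management port,
so an unfiltered capture may be needed to catch brick traffic:

    gluster volume info myvol
    gluster volume status myvol
    # capture traffic while running df on a fresh mount; stop it afterwards (kill %1)
    tcpdump -i any -s 0 -w /tmp/df.pcap port 24007 &
    mount -t glusterfs server1:/myvol /mnt/fresh
    df -h /mnt/fresh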
Thanks,
Nithya
On 31 January 2018 at 07:17, Freer, Eva B. wrote:
> After an OS update to CentOS 7.4 or Red Hat 6.9 and
I am fighting this issue:
Bug 1540376 – Tiered volume performance degrades badly after a
volume stop/start or system restart.
https://bugzilla.redhat.com/show_bug.cgi?id=1540376
Does anyone have any ideas on what might be causing this, and
what a fix or work-around might be?
Thanks!
~ Jeff
I found this on the mailing list:
> I found the issue. The CentOS 7 RPMs, upon upgrade, modify the .vol
> files. Among other things, they add "option shared-brick-count \d", using
> the number of bricks in the volume. This gives you an average free space
> per brick, instead of the total free space in the volume.
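As a quick check for this on a server, one can grep the generated volfiles
under the usual glusterd layout; the volume name is hypothetical:

    grep -r shared-brick-count /var/lib/glusterd/vols/myvol/

If bricks that live on separate filesystems show a shared-brick-count greater
than 1, each brick's free space gets divided by that count and df under-reports
the total.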
Tested it in two different environments lately, with exactly the same results.
I was trying to get better read performance from local mounts with
hundreds of thousands of maildir email files by using an SSD hot tier,
hoping that stat reads of the .glusterfs files, which do migrate to the
hot tier, would improve.
After seeing what
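For context, attaching an SSD tier to an existing volume with the 3.x tiering
feature looks roughly like this; the server names and brick paths are
hypothetical:

    # add an SSD-backed hot tier in front of the existing cold bricks
    gluster volume tier myvol attach server1:/ssd/brick1 server2:/ssd/brick1
    # watch promotion/demotion activity
    gluster volume tier myvol status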
Thank you, Raghavendra. I guess this cosmetic fix will be in 3.12.6?
I'm also looking forward to seeing stability fixes for parallel-readdir
and/or readdir-ahead in 3.12.x. :)
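For anyone following the thread, these are the two volume options in question;
enabling them is shown only as a sketch, with a hypothetical volume name:

    gluster volume set myvol performance.readdir-ahead on
    gluster volume set myvol performance.parallel-readdir on

parallel-readdir depends on readdir-ahead being enabled; both change how
directory listings are fetched from the bricks.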
Cheers,
On Mon, Jan 29, 2018 at 9:26 AM Raghavendra Gowdappa
wrote:
- Original Message -
> From: "Dan Ragle"
> To: "Raghavendra Gowdappa", "Ravishankar N"
> Cc: gluster-users@gluster.org, "Csaba Henk", "Niels de Vos", "Nithya
- Original Message -
> From: "Alan Orth"
> To: "Raghavendra Gowdappa"
> Cc: "gluster-users"
> Sent: Tuesday, January 30, 2018 1:37:40 PM
> Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS