Re: [Gluster-users] [FOSDEM'18] Optimizing Software Defined Storage for the Age of Flash

2018-01-30 Thread Raghavendra Gowdappa
Note that live-streaming is available at: https://fosdem.org/2018/schedule/streaming/ The talks will be archived too. - Original Message - > From: "Raghavendra Gowdappa" > To: "Gluster Devel" , "gluster-users" >

[Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Freer, Eva B.
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the ‘df’ command shows only part of the available space on the mount point for multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and clients. We have 2 different server configurations.

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Sam McLeod
We noticed something similar. Out of interest, does du -sh . show the same size? -- Sam McLeod (protoporpoise on IRC) https://smcleod.net https://twitter.com/s_mcleod Words are my own opinions and do not necessarily represent those of my employer or partners. > On 31 Jan 2018, at 12:47 pm,

Re: [Gluster-users] geo-replication initial setup with existing data

2018-01-30 Thread Kotresh Hiremath Ravishankar
Hi, Geo-replication expects the gfids (unique identifiers, similar to inode numbers in backend file systems) to be the same for a file on both the master and slave gluster volumes. If the data was copied directly by means other than geo-replication, the gfids will be different. The crashes you are seeing are
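The gfid comparison Kotresh describes can be sketched as follows. Reading `trusted.gfid` with `getfattr` on the brick is the standard way to inspect a file's gfid; the brick path in the comment and the two hex values below are made-up placeholders, not output from a real cluster.

```shell
# On each side, the gfid lives in the trusted.gfid xattr of the file on
# the brick (run as root on the brick host), e.g.:
#   getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file
# The values below are invented stand-ins for that output.
master_gfid="0x7b2f1d3e9c0a4f5b8d6e2a1c3f4b5d6e"
slave_gfid="0x1a2b3c4d5e6f708192a3b4c5d6e7f809"

if [ "$master_gfid" = "$slave_gfid" ]; then
    echo "gfid match: file is usable by geo-replication"
else
    echo "gfid mismatch: data was copied out-of-band; expect gsyncd errors"
fi
```

If the gfids differ, the usual remedy is to let geo-replication itself populate the slave rather than pre-seeding it by plain copy.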

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Freer, Eva B.
Sam, For du -sh on my newer volume, the result is 161T. The sum of the Used space in the df -h output for all the bricks is ~163T. Close enough for me to believe everything is there. The total for used space in the df -h of the mountpoint is 83T, roughly half of what is used. Relevant lines from
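Eva's sanity check, summing the per-brick "Used" figures and comparing the total against df of the mount, can be mimicked with a short awk pipeline. The 82T/81T figures are invented stand-ins for two bricks, chosen only so the sum lands near the ~163T she observed.

```shell
# Sum human-readable "used" figures (here all in TB) the way one would
# eyeball per-brick df output; "82T" + 0 coerces to 82 in awk.
total=$(printf '82T\n81T\n' | awk '{ sum += $1 + 0 } END { printf "%dT", sum }')
echo "total used across bricks: $total"
```

A mountpoint df reporting roughly half of that total is the symptom discussed in the rest of this thread.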

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Sam McLeod
Very similar to what we noticed too. I suspect it's something to do with the metadata or xattrs stored on the filesystem that gives quite different results from the actual file sizes. -- Sam McLeod (protoporpoise on IRC) https://smcleod.net https://twitter.com/s_mcleod Words are my own

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Nithya Balachandran
Hi Eva, Can you send us the following: the output of 'gluster volume info' and 'gluster volume status', the log files, and a tcpdump of df on a fresh mount point for that volume. Thanks, Nithya On 31 January 2018 at 07:17, Freer, Eva B. wrote: > After OS update to CentOS 7.4 or RedHat 6.9 and

[Gluster-users] Tiered volume performance degrades badly after a volume stop/start or system restart.

2018-01-30 Thread Jeff Byers
I am fighting this issue: Bug 1540376 – Tiered volume performance degrades badly after a volume stop/start or system restart. https://bugzilla.redhat.com/show_bug.cgi?id=1540376 Does anyone have any ideas on what might be causing this, and what a fix or work-around might be? Thanks! ~ Jeff

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Nithya Balachandran
I found this on the mailing list: "I found the issue. The CentOS 7 RPMs, upon upgrade, modify the .vol files. Among other things, they add "option shared-brick-count \d", using the number of bricks in the volume. This gives you the average free space per brick, instead of the total free space in
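The commonly reported workaround for this regression was to reset the option back to 1 in the generated volfiles so df stops dividing free space by the brick count. A minimal sketch, run here against a sample file rather than the real files under /var/lib/glusterd/vols/ (placeholder volume and brick names); note that glusterd can regenerate these files, so treat this as a temporary fix and check the release notes for your version:

```shell
# Sample of the generated volfile content described above; on a real
# server these live under /var/lib/glusterd/vols/<volume>/.
cat > /tmp/sample.vol <<'EOF'
volume myvol-posix
    type storage/posix
    option shared-brick-count 2
    option directory /bricks/brick1
end-volume
EOF

# Reset any shared-brick-count value to 1 so df reports the aggregate
# free space again instead of a per-brick average.
sed -i 's/option shared-brick-count [0-9]\+/option shared-brick-count 1/' /tmp/sample.vol
grep 'shared-brick-count' /tmp/sample.vol
```

After editing the volfiles on each server, remounting the volume (or restarting the brick processes) is typically needed for df to reflect the change.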

Re: [Gluster-users] Tiered volume performance degrades badly after a volume stop/start or system restart.

2018-01-30 Thread Vlad Kopylov
Tested it in two different environments lately with exactly the same results. I was trying to get better read performance from local mounts with hundreds of thousands of maildir email files by using SSD, hoping that stat reads of the .glusterfs files would improve once they migrate to the hot tier. After seeing what

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-30 Thread Alan Orth
Thank you, Raghavendra. I guess this cosmetic fix will be in 3.12.6? I'm also looking forward to seeing stability fixes to parallel-readdir and/or readdir-ahead in 3.12.x. :) Cheers, On Mon, Jan 29, 2018 at 9:26 AM Raghavendra Gowdappa wrote: > > > - Original Message

Re: [Gluster-users] Run away memory with gluster mount

2018-01-30 Thread Raghavendra Gowdappa
- Original Message - > From: "Dan Ragle" > To: "Raghavendra Gowdappa" , "Ravishankar N" > > Cc: gluster-users@gluster.org, "Csaba Henk" , "Niels de > Vos" , "Nithya >

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-30 Thread Raghavendra Gowdappa
- Original Message - > From: "Alan Orth" > To: "Raghavendra Gowdappa" > Cc: "gluster-users" > Sent: Tuesday, January 30, 2018 1:37:40 PM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS