Very similar to what we noticed. I suspect it has something to do with the metadata or xattrs stored on the filesystem, which give quite different results from the actual file sizes.
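One thing that may be worth checking (an assumption on my part, we haven't confirmed it applies to your volumes): in recent Gluster releases the mount's df figure is derived from each brick's statfs result scaled by a "shared-brick-count" option written into the generated brick volfiles under /var/lib/glusterd, and there have been reports of that value being set wrongly after 3.12.x upgrades. A minimal sketch of the check, assuming the standard glusterd layout and your volume name dataeng (the mock directory below is only so the grep is concrete):

```shell
# On a real server you would inspect the actual volfiles, e.g.:
#   grep -h 'shared-brick-count' /var/lib/glusterd/vols/dataeng/*.vol
# Here we mock up the layout purely to illustrate the check.
voldir=$(mktemp -d)
printf 'option shared-brick-count 1\n' > "$voldir/server-A.bricks-data_A1.vol"
printf 'option shared-brick-count 4\n' > "$voldir/server-A.bricks-data_A2.vol"

# Bricks that each sit on their own filesystem should report
# shared-brick-count 1; any other value makes df on the mount
# over- or under-report capacity.
suspect=$(grep -l 'shared-brick-count [^1]' "$voldir"/*.vol)
echo "suspect volfiles: $suspect"
```

If any volfile shows a value other than 1 for a brick on its own filesystem, that would line up with df shrinking while du and the bricks themselves look fine.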
--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer or partners.

> On 31 Jan 2018, at 2:14 pm, Freer, Eva B. <free...@ornl.gov> wrote:
>
> Sam,
>
> For du -sh on my newer volume, the result is 161T. The sum of the Used space
> in the df -h output for all the bricks is ~163T. Close enough for me to
> believe everything is there. The total for Used space in the df -h of the
> mountpoint is 83T, roughly half of what is used.
>
> Relevant lines from df -h on server-A:
> Filesystem         Size  Used  Avail  Use%  Mounted on
> /dev/sda1           59T   42T    17T   72%  /bricks/data_A1
> /dev/sdb1           59T   45T    14T   77%  /bricks/data_A2
> /dev/sdd1           59T   39M    59T    1%  /bricks/data_A4
> /dev/sdc1           59T  1.9T    57T    4%  /bricks/data_A3
> server-A:/dataeng  350T   83T   268T   24%  /dataeng
>
> And on server-B:
> Filesystem         Size  Used  Avail  Use%  Mounted on
> /dev/sdb1           59T   34T    25T   58%  /bricks/data_B2
> /dev/sdc1           59T  2.0T    57T    4%  /bricks/data_B3
> /dev/sdd1           59T   39M    59T    1%  /bricks/data_B4
> /dev/sda1           59T   38T    22T   64%  /bricks/data_B1
> server-B:/dataeng  350T   83T   268T   24%  /dataeng
>
> Eva Freer
>
> From: Sam McLeod <mailingli...@smcleod.net>
> Date: Tuesday, January 30, 2018 at 9:43 PM
> To: Eva Freer <free...@ornl.gov>
> Cc: "gluster-users@gluster.org" <gluster-users@gluster.org>, "Greene, Tami McFarlin" <gree...@ornl.gov>
> Subject: Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4
>
> We noticed something similar.
>
> Out of interest, does du -sh . show the same size?
>
> --
> Sam McLeod (protoporpoise on IRC)
> https://smcleod.net
> https://twitter.com/s_mcleod
>
> Words are my own opinions and do not necessarily represent those of my employer or partners.
>
>> On 31 Jan 2018, at 12:47 pm, Freer, Eva B.
>> <free...@ornl.gov> wrote:
>>
>> After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4,
>> the ‘df’ command shows only part of the available space on the mount point
>> for multi-brick volumes. All nodes are at 3.12.4. This occurs on both
>> servers and clients.
>>
>> We have 2 different server configurations.
>>
>> Configuration 1: A distributed volume of 8 bricks with 4 on each server. The
>> initial configuration had 4 bricks of 59TB each, with 2 on each server. Prior
>> to the update to CentOS 7.4 and Gluster 3.12.4, ‘df’ correctly showed the
>> size for the volume as 233TB. After the update, we added 2 bricks with 1 on
>> each server, but the output of ‘df’ still only listed 233TB for the volume.
>> We added 2 more bricks, again with 1 on each server. The output of ‘df’ now
>> shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB.
>>
>> Configuration 2: A distributed, replicated volume with 9 bricks on each
>> server, for a total of ~350TB of storage. After the server update to RHEL 6.9
>> and Gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No
>> changes were made to this volume after the update.
>>
>> In both cases, examining the bricks shows that the space and files are still
>> there, just not reported correctly with ‘df’. All machines have been
>> rebooted and the problem persists.
>>
>> Any help/advice you can give on this would be greatly appreciated.
>>
>> Thanks in advance.
>> Eva Freer
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
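For anyone else comparing numbers on a thread like this: the per-brick arithmetic above can be mechanised. A small sketch (the figures are copied from the df -h output earlier in the thread; the awk unit conversion is the only thing I have added, and it assumes human-readable suffixes as printed by GNU df):

```shell
# Brick lines captured from `df -h` on both servers, as posted above.
df_capture='
/dev/sda1  59T   42T  17T  72% /bricks/data_A1
/dev/sdb1  59T   45T  14T  77% /bricks/data_A2
/dev/sdd1  59T   39M  59T   1% /bricks/data_A4
/dev/sdc1  59T  1.9T  57T   4% /bricks/data_A3
/dev/sdb1  59T   34T  25T  58% /bricks/data_B2
/dev/sdc1  59T  2.0T  57T   4% /bricks/data_B3
/dev/sdd1  59T   39M  59T   1% /bricks/data_B4
/dev/sda1  59T   38T  22T  64% /bricks/data_B1
'

# Sum the Used column (field 3), normalising the suffixes to TB.
total=$(echo "$df_capture" | awk '
  /^\/dev\// {
    u    = $3
    unit = substr(u, length(u), 1)
    val  = substr(u, 1, length(u) - 1)
    if (unit == "T")      sum += val
    else if (unit == "G") sum += val / 1024
    else if (unit == "M") sum += val / 1024 / 1024
  }
  END { printf "%.1f", sum }')
echo "total used across bricks: ${total}T"
```

That yields ~163T used across the bricks, against the 83T df reports at the mount, which is exactly the discrepancy being described.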
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users