Hi Eva,

Can you send us the following:

- the output of gluster volume info
- the output of gluster volume status
- the log files, and a tcpdump of df run on a fresh mount point for that volume
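Something along these lines should work for collecting those (the volume name, server hostname, mount point, and output file names below are placeholders, so substitute your own):

    # On one of the server nodes: volume layout and brick status
    gluster volume info VOLNAME > volume-info.txt
    gluster volume status VOLNAME > volume-status.txt

    # On a client: start the capture, then do a fresh mount and run df on it
    tcpdump -i any -s 0 -w /tmp/df-capture.pcap host SERVER &
    mount -t glusterfs SERVER:/VOLNAME /mnt/freshmount
    df -h /mnt/freshmount
    kill %1    # stop the tcpdump

    # Client and brick logs are under /var/log/glusterfs/ by default
    tar czf gluster-logs.tgz /var/log/glusterfs/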
Thanks,
Nithya

On 31 January 2018 at 07:17, Freer, Eva B. <[email protected]> wrote:

> After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4,
> the ‘df’ command shows only part of the available space on the mount point
> for multi-brick volumes. All nodes are at 3.12.4. This occurs on both
> servers and clients.
>
> We have 2 different server configurations.
>
> Configuration 1: A distributed volume of 8 bricks with 4 on each server.
> The initial configuration had 4 bricks of 59TB each with 2 on each server.
> Prior to the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed
> the size for the volume as 233TB. After the update, we added 2 bricks with
> 1 on each server, but the output of ‘df’ still only listed 233TB for the
> volume. We added 2 more bricks, again with 1 on each server. The output of
> ‘df’ now shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB.
>
> Configuration 2: A distributed, replicated volume with 9 bricks on each
> server for a total of ~350TB of storage. After the server update to RHEL
> 6.9 and gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No
> changes were made to this volume after the update.
>
> In both cases, examining the bricks shows that the space and files are
> still there, just not reported correctly with ‘df’. All machines have been
> rebooted and the problem persists.
>
> Any help/advice you can give on this would be greatly appreciated.
>
> Thanks in advance.
>
> Eva Freer
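In the meantime, a quick way to cross-check the per-brick capacities against what the client sees is something like the following (the brick paths and volume name are placeholders for your actual layout):

    # On each server: capacity of the brick filesystems backing the volume
    df -h /bricks/brick1/VOLNAME /bricks/brick2/VOLNAME

    # On a client: what the mounted volume reports
    df -h /mnt/VOLNAME

    # For a pure distribute volume the client figure should be roughly the
    # sum of all the brick filesystems; for distribute-replicate, the sum
    # divided by the replica count.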
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
