Hi, I am running a distributed-replicated GlusterFS setup with 4 nodes. Everything currently works without problems, but when I run gluster volume status I see different free disk space on every node. Shouldn't gluster00 and gluster01 report the same free and used space, and likewise gluster02 and gluster03, since those are the replicated pairs?
root@gluster0:~# gluster volume status GV01 detail
Status of volume: GV01
------------------------------------------------------------------------------
Brick                : Brick gluster00.storage.domain:/brick/gv01
Port                 : 49163
Online               : Y
Pid                  : 3631
File System          : xfs
Device               : /dev/mapper/vg--gluster0-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 5.7TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922850330
------------------------------------------------------------------------------
Brick                : Brick gluster01.storage.domain:/brick/gv01
Port                 : 49163
Online               : Y
Pid                  : 2976
File System          : xfs
Device               : /dev/mapper/vg--gluster1-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 4.4TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922826116
------------------------------------------------------------------------------
Brick                : Brick gluster02.storage.domain:/brick/gv01
Port                 : 49163
Online               : Y
Pid                  : 3051
File System          : xfs
Device               : /dev/mapper/vg--gluster2-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 6.4TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922851020
------------------------------------------------------------------------------
Brick                : Brick gluster03.storage.domain:/brick/gv01
Port                 : N/A
Online               : N
Pid                  : 29822
File System          : xfs
Device               : /dev/mapper/vg--gluster3-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 6.2TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922847631

friendly regards,
Patrick
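Since the data lives in replica pairs, the interesting comparison is the free space *within* each pair rather than across all four bricks. Below is a minimal parsing sketch in Python using the values from the status output above; it assumes the usual convention that consecutive bricks in the `volume status` listing form a replica pair in a 2x2 distributed-replicated volume, and the helper names (parse_bricks, replica_pairs) are made up for illustration:

```python
import re

# Abbreviated sample of `gluster volume status GV01 detail` output,
# values taken from the post above.
SAMPLE = """\
Brick                : Brick gluster00.storage.domain:/brick/gv01
Online               : Y
Disk Space Free      : 5.7TB
Brick                : Brick gluster01.storage.domain:/brick/gv01
Online               : Y
Disk Space Free      : 4.4TB
Brick                : Brick gluster02.storage.domain:/brick/gv01
Online               : Y
Disk Space Free      : 6.4TB
Brick                : Brick gluster03.storage.domain:/brick/gv01
Online               : N
Disk Space Free      : 6.2TB
"""

def parse_bricks(text):
    """Return a list of {'name', 'online', 'free_tb'} dicts, one per brick."""
    bricks = []
    for line in text.splitlines():
        if m := re.match(r"Brick\s*:\s*Brick\s+(\S+)", line):
            bricks.append({"name": m.group(1)})
        elif m := re.match(r"Online\s*:\s*(\S+)", line):
            bricks[-1]["online"] = m.group(1) == "Y"
        elif m := re.match(r"Disk Space Free\s*:\s*([\d.]+)TB", line):
            bricks[-1]["free_tb"] = float(m.group(1))
    return bricks

def replica_pairs(bricks, replica_count=2):
    """Group bricks into replica sets by their order in the listing
    (assumption: brick order determines replica membership)."""
    return [bricks[i:i + replica_count]
            for i in range(0, len(bricks), replica_count)]

if __name__ == "__main__":
    for pair in replica_pairs(parse_bricks(SAMPLE)):
        free = [b["free_tb"] for b in pair]
        names = " / ".join(b["name"].split(".")[0] for b in pair)
        print(f"{names}: free {free}, delta {max(free) - min(free):.1f} TB")
```

One thing worth noting from the output above: gluster03 shows Online: N. A brick that is down accumulates pending self-heals, which is a common reason for free-space drift within a replica pair; `gluster volume heal GV01 info` would show whether heals are outstanding.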
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
