Re: [Gluster-users] GlusterFS disk full.

2015-09-13 Thread m...@eyes-works.com
I have now checked whether any snapshots exist.
Neither server has any snapshots.
Since there is no snapshot, the next time the disk-full problem
happens I will have to restart glusterfsd.
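
As a possible alternative to a full restart (a general Linux technique, not specific to GlusterFS, and untested against a live brick, so treat it with caution): the space held by a deleted-but-open file can be reclaimed by truncating the file through the holding process's /proc entry, e.g. ": > /proc/1775/fd/21" for the descriptor found earlier in this thread. A self-contained demonstration:

```shell
# Reproduce a deleted-but-open file in this shell, then reclaim its
# space via /proc -- the same operation one could apply to the brick
# process's descriptor instead of restarting glusterfsd.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/test.dat" bs=1M count=10 2>/dev/null
exec 3<> "$tmp/test.dat"              # hold the file open on fd 3
rm "$tmp/test.dat"                    # unlink it; blocks stay allocated
ls -l /proc/$$/fd | grep '(deleted)'  # the open fd still points at it
: > /proc/$$/fd/3                     # truncate via /proc: space freed
stat -L -c 'held bytes now: %s' /proc/$$/fd/3
exec 3>&-                             # closing the fd drops the inode
rm -rf "$tmp"
```

Truncating a file a daemon still intends to use can corrupt its state, so against glusterfsd this is only reasonable when the held file is known to be user-deleted data; restarting the brick process remains the safe option.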

(web1)
[root@web1 ~]# gluster snap list
No snapshots present
[root@web1 ~]# gluster snap info
No snapshots present
[root@web1 ~]# gluster snap status
No snapshots present
[root@web1 ~]# gluster snap delete all
No snapshots present

(web2)
[root@web2 ~]# gluster snap list
No snapshots present
[root@web2 ~]# gluster snap info
No snapshots present
[root@web2 ~]# gluster snap status
No snapshots present
[root@web2 ~]# gluster snap delete all
No snapshots present

-- 
m...@eyes-works.com 

On Fri, 11 Sep 2015 12:59:50 -0400
Alastair Neil  wrote:

> If you have an active snapshot, I expect the space will not be freed until
> you remove the snapshot.
> 
> On 11 September 2015 at 01:44, Fujii Yasuhiro  wrote:
> 
> > Hi.
> >
> > I have a question.
> >
> > GlusterFS disk space is not recovered automatically after the
> > filesystem fills up and I delete files; it is only recovered by
> > restarting glusterfsd. I can see the deleted file still open, and
> > glusterfsd does not release it; I don't know why.
> > Is this expected behavior?
> >
> > [version details, test output, and open-file listing snipped;
> > quoted in full in the original message below]

Re: [Gluster-users] GlusterFS disk full.

2015-09-11 Thread Alastair Neil
If you have an active snapshot, I expect the space will not be freed until
you remove the snapshot.

On 11 September 2015 at 01:44, Fujii Yasuhiro  wrote:

> Hi.
>
> I have a question.
>
> GlusterFS disk space is not recovered automatically after the
> filesystem fills up and I delete files; it is only recovered by
> restarting glusterfsd. I can see the deleted file still open, and
> glusterfsd does not release it; I don't know why.
> Is this expected behavior?
>
> [version details, test output, and open-file listing snipped;
> quoted in full in the original message below]

[Gluster-users] GlusterFS disk full.

2015-09-11 Thread Fujii Yasuhiro
Hi.

I have a question.

GlusterFS disk space is not recovered automatically after the
filesystem fills up and I delete files; it is only recovered by
restarting glusterfsd. I can see the deleted file still open, and
glusterfsd does not release it; I don't know why.
Is this expected behavior?
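
For context, holding space this way is standard POSIX unlink semantics rather than GlusterFS-specific behavior: removing the name only drops the directory entry, and the blocks are freed when the last open descriptor on the inode is closed. A minimal sketch:

```shell
# Deleted-but-open file: the name disappears but the space does not.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/test.dat" bs=1M count=10 2>/dev/null
exec 3< "$tmp/test.dat"        # a process keeps the file open
rm "$tmp/test.dat"             # directory entry gone, inode not freed
stat -L -c '%s' /proc/$$/fd/3  # prints 10485760: bytes still held
exec 3>&-                      # last close: now the space is released
rm -rf "$tmp"
```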

[version]
CentOS release 6.7 (Final)
Linux web2 2.6.32-573.3.1.el6.x86_64 #1 SMP Thu Aug 13 22:55:16 UTC
2015 x86_64 x86_64 x86_64 GNU/Linux
glusterfs-3.6.5-1.el6.x86_64
glusterfs-api-3.6.5-1.el6.x86_64
glusterfs-fuse-3.6.5-1.el6.x86_64
glusterfs-server-3.6.5-1.el6.x86_64
glusterfs-libs-3.6.5-1.el6.x86_64
glusterfs-cli-3.6.5-1.el6.x86_6

[test]
dd if=/dev/zero of=./test.dat bs=1M count=10
dd: writing `./test.dat': Input/output error
dd: closing output file `./test.dat': No space left on device

[root@web2 www_virtual]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda1             8124856   2853600   4851880  38% /
tmpfs                   509256         0    509256   0% /dev/shm
/dev/xvdf1           101441468 101419972         0 100% /glusterfs/vol01
web2:/vol_replica_01 101441408 101420032         0 100% /mnt/glusterfs

[root@web2 www_virtual]# rm test.dat
rm: remove regular file `test.dat'? y

[root@web2 www_virtual]# sync
[root@web2 www_virtual]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda1             8124856   2856744   4848736  38% /
tmpfs                   509256         0    509256   0% /dev/shm
/dev/xvdf1           101441468 101419972         0 100% /glusterfs/vol01
web2:/vol_replica_01 101441408 101420032         0 100% /mnt/glusterfs

Glusterfs is still disk full.
The other glusterfs server is same.

[find the deleted file]
(server web2)
[root@web2 www_virtual]# ls -l /proc/*/fd/* | grep deleted
lr-x------ 1 root root 64 Sep 11 14:15 /proc/1753/fd/14 ->
/var/lib/glusterd/snaps/missed_snaps_list (deleted)
lrwx------ 1 root root 64 Sep 11 14:17 /proc/1775/fd/21 ->
/glusterfs/vol01/brick/.glusterfs/1f/c2/1fc2b7b4-ecd6-4eff-a874-962c2283823f
(deleted)
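
The manual grep over /proc above can be wrapped into a small helper that also reports how much space each deleted-but-open file still holds under a given mount point (find_deleted is a hypothetical name used only for this sketch; where lsof is installed, "lsof +L1" gives a similar listing):

```shell
# find_deleted DIR: print descriptor, held bytes, and original path of
# deleted-but-open files that lived under DIR (hypothetical helper).
find_deleted() {
    for fd in /proc/[0-9]*/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            "$1"/*' (deleted)')
                size=$(stat -L -c %s "$fd" 2>/dev/null) || continue
                printf '%s\t%s\t%s\n' "$fd" "$size" "$target"
                ;;
        esac
    done
}

find_deleted /glusterfs/vol01    # would list /proc/1775/fd/21 here
```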

[root@web2 .glusterfs]# ps ax | grep 1775 | grep -v grep
 1775 ?        Ssl    5:43 /usr/sbin/glusterfsd -s web2 --volfile-id
vol_replica_01.web2.glusterfs-vol01-brick -p
/var/lib/glusterd/vols/vol_replica_01/run/web2-glusterfs-vol01-brick.pid
-S /var/run/1cf98ee59b5dff8cfd793b8ec39851db.socket --brick-name
/glusterfs/vol01/brick -l
/var/log/glusterfs/bricks/glusterfs-vol01-brick.log --xlator-option
*-posix.glusterd-uuid=029cf626-935f-4546-a8df-f9d79a6959da
--brick-port 49152 --xlator-option
vol_replica_01-server.listen-port=49152

[root@web2 www_virtual]# lsof -p 1775
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
glusterfs 1775 root  cwd    DIR  202,1     4096      2 /
glusterfs 1775 root  rtd    DIR  202,1     4096      2 /
glusterfs 1775 root  txt    REG  202,1    78056 266837 /usr/sbin/glusterfsd
glusterfs 1775 root  mem    REG  202,1     8560    854 /usr/lib64/glusterfs/3.6.5/auth/login.so
glusterfs 1775 root  mem    REG  202,1    13248    853 /usr/lib64/glusterfs/3.6.5/auth/addr.so
glusterfs 1775 root  mem    REG  202,1   224920   4334 /usr/lib64/glusterfs/3.6.5/xlator/protocol/server.so
glusterfs 1775 root  mem    REG  202,1   118760   2421 /usr/lib64/glusterfs/3.6.5/xlator/debug/io-stats.so
glusterfs 1775 root  mem    REG  202,1   119688   2507 /usr/lib64/glusterfs/3.6.5/xlator/features/quota.so
glusterfs 1775 root  mem    REG  202,1   139784   2491 /usr/lib64/glusterfs/3.6.5/xlator/features/marker.so
glusterfs 1775 root  mem    REG  202,1    42304   2475 /usr/lib64/glusterfs/3.6.5/xlator/features/index.so
glusterfs 1775 root  mem    REG  202,1    34360   2447 /usr/lib64/glusterfs/3.6.5/xlator/features/barrier.so
glusterfs 1775 root  mem    REG  202,1    46936   2561 /usr/lib64/glusterfs/3.6.5/xlator/performance/io-threads.so
glusterfs 1775 root  mem    REG  202,1   107984   2476 /usr/lib64/glusterfs/3.6.5/xlator/features/locks.so
glusterfs 1775 root  mem    REG  202,1    58440   3940 /usr/lib64/glusterfs/3.6.5/xlator/system/posix-acl.so
glusterfs 1775 root  mem    REG  202,1    90880 266322 /lib64/libgcc_s-4.4.7-20120601.so.1
glusterfs 1775 root  mem    REG  202,1    96200   2460 /usr/lib64/glusterfs/3.6.5/xlator/features/changelog.so
glusterfs 1775 root  mem    REG  202,1     3944 272350 /lib64/libaio.so.1.0.1
glusterfs 1775 root  mem    REG  202,1   188376   4337 /usr/lib64/glusterfs/3.6.5/xlator/storage/posix.so
glusterfs 1775 root  mem    REG  202,1    65928 273938 /lib64/libnss_files-2.12.so
glusterfs 1775 root  mem    REG  202,1   122040 264857 /lib64/libselinux.so.1
glusterfs 1775 root  mem    REG  202,1   110960 273940 /lib64/libresolv-2.12.so