I'm trying to configure a gluster client on an Amazon EC2 instance. A
setup script, which runs when the instance launches, installs
glusterfs-fuse and mounts the desired gluster volume(s). The script
has worked fine until recently. Now, when the script terminates, the
glusterfs client process apparently receives a SIGTERM and unmounts
the volume. Here's the relevant line from the gluster log:

> [2015-01-29 22:39:22.047550] W [glusterfsd.c:1099:cleanup_and_exit] 
> (-->/lib64/libc.so.6(clone+0x6d) [0x7f50cddd11ad] 
> (-->/lib64/libpthread.so.0(+0x7df3) [0x7f50ce488df3] 
> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f50cf184255]))) 0-: 
> received signum (15), shutting down

I would have thought that the mount would persist after the setup
script exits. And, as I mentioned, this all used to work. If I ssh to
the instance and re-mount the gluster volume (as root), it stays
mounted even after I disconnect.
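My working theory is that whatever runs the launch script signals the
script's whole process group on exit, taking the fuse client down with
it. Here is a small sketch of the idea (sleep stands in for the
glusterfs client, and the mount line in the comment uses placeholder
server/volume/mount-point names; setsid is a possible workaround I'm
describing, not what my script currently does): a child started under
setsid gets its own session and process group, so a group-wide SIGTERM
would miss it.

```shell
#!/bin/sh
# Sketch: a child started under setsid lands in its own session and
# process group, so a SIGTERM sent to the launching script's process
# group does not reach it. In the real script the equivalent would be
# something like:
#   setsid mount -t glusterfs gluster-server:/myvol /mnt/gluster
# (server, volume, and mount point are placeholders.)

setsid sleep 5 &           # child in a new session/process group
child=$!
sleep 1                    # give setsid a moment to take effect

child_pgid=$(ps -o pgid= -p "$child" | tr -d ' ')
script_pgid=$(ps -o pgid= -p "$$" | tr -d ' ')

if [ "$child_pgid" != "$script_pgid" ]; then
    echo "child detached: a group-wide SIGTERM would miss it"
fi
kill "$child" 2>/dev/null  # clean up the demo child
```

If that theory is right, the same detachment trick applied to the
mount command in the setup script should keep the fuse daemon alive
after the script exits.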

Does anyone have any idea what would cause the volume to get
unmounted?

-- 
Mark Sidell
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
