Package: glusterfs
Version: 3.2.7-3
Severity: important
I'm choosing severity important for now; I'd tend to call this an
RC bug, but I want to give the maintainer and other users a chance
to comment first.
Working system:
glusterfs, v3.0.5-1 on Debian/squeeze
Broken system:
glusterfs, v3.2.7-3 on Debian/wheezy
# dpkg --list glusterfs\* | awk '/^ii/ {print $2 " " $3}'
glusterfs-client 3.2.7-3
glusterfs-common 3.2.7-3
glusterfs-server 3.2.7-3
Problem description:
After dist-upgrading the system from Debian/squeeze to wheezy the
glusterfs share(s) can no longer be accessed (neither rebooting nor
remounting the share changes this):
# mount | grep gluster
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
# ls /mnt/glusterfs
ls: cannot access /mnt/glusterfs: Transport endpoint is not connected
I've uploaded the logs to http://people.debian.org/~mika/glusterfs/
I noticed that a fresh deployment of the configuration on
Debian/wheezy doesn't work either, so I assume the configuration
format changed in some details between 3.0.5 and 3.2.7.
Since this might break other people's systems as well, I thought it
was worth reporting. If you need further details please let me
know; I can easily reproduce the problem.
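For context: since the 3.1 series, upstream manages volumes through
glusterd and the gluster CLI, and hand-written client/server vol files
like the ones below are effectively deprecated. As a cross-check, a
sketch of an equivalent replicated setup via the CLI (the volume name
"mirror" is made up; hostnames and the brick path are taken from the
configuration in this report):

```shell
# Sketch of a 3.2-style setup: glusterd creates and manages the volume
# instead of hand-written vol files. Run on sp1, with glusterd running
# on both sp1 and sp2. Volume name "mirror" is hypothetical.
gluster peer probe sp2
gluster volume create mirror replica 2 \
    sp1:/var/lib/glusterfs/export sp2:/var/lib/glusterfs/export
gluster volume start mirror

# Client side: mount by volume name rather than by vol file.
mount -t glusterfs sp1:/mirror /mnt/glusterfs
```

If that works while the legacy vol files don't, it would confirm the
problem is in the old-style configuration handling.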
Details about configuration:
# grep gluster /etc/fstab
/etc/glusterfs/glusterfs.vol /mnt/glusterfs glusterfs defaults 0 0
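When reproducing, it may help to mount the legacy vol file by hand with
debug logging to see where the client actually fails (invocation is a
sketch of the old vol-file mount style as used above):

```shell
# Mount the client vol file manually with verbose logging; the client
# log ends up under /var/log/glusterfs/, named after the mount point.
glusterfs --volfile=/etc/glusterfs/glusterfs.vol \
          --log-level=DEBUG /mnt/glusterfs
```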
# cat /etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume
# cat /etc/glusterfs/glusterfsd.vol
volume posix1
type storage/posix
option directory /var/lib/glusterfs/export
end-volume
volume locks1
type features/locks
subvolumes posix1
end-volume
volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume
volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume
# cat /etc/glusterfs/glusterfs.vol
volume sp1-1
type protocol/client
option transport-type tcp
option remote-host sp1
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume
volume sp2-1
type protocol/client
option transport-type tcp
option remote-host sp2
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick1
end-volume
volume mirror-0
type cluster/replicate
subvolumes sp1-1 sp2-1
end-volume
volume readahead
type performance/read-ahead
option page-count 4
subvolumes mirror-0
end-volume
volume iocache
type performance/io-cache
  option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead
end-volume
volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume
volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume
volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume