Wei Dong wrote:
Hi All,

I'm experiencing a problem with booster when the server-side nodes export more than one volume. The symptom is that when I run "ls MOUNT_POINT" through booster, I get something like the following:

ls: closing directory MOUNT_POINT: File descriptor in bad state.

The server configuration file is the following:

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume


On the client side, the bricks on the same server are imported separately.
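To illustrate, each brick gets its own protocol/client volume in the client volfile, roughly along these lines (SERVER_IP is just a placeholder here, and the exact port option name may differ between GlusterFS versions):

# illustrative client-side import of the two bricks, one protocol/client volume each
volume client0
type protocol/client
option transport-type tcp
option remote-host SERVER_IP
option transport.socket.remote-port 7001
option remote-subvolume brick0
end-volume

volume client1
type protocol/client
option transport-type tcp
option remote-host SERVER_IP
option transport.socket.remote-port 7001
option remote-subvolume brick1
end-volume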


The problem only appears when I use booster. Nothing seems to go wrong when I mount GlusterFS directly. Everything is also fine if I export only one brick from each server. There are no warnings or errors in the log files in any of these cases.

Does anyone have an idea of what's happening?

Please post the contents of the booster FSTAB file. It'll tell us
which subvolume from the client volfile gets used by booster.
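For reference, a booster FSTAB entry usually consists of a client volfile path, a mount point, the type glusterfs and a comma-separated option list, roughly like the following (the paths, subvolume name and options below are only placeholders, not your actual configuration, and option names can vary between releases):

# illustrative booster fstab line; subvolume= picks the volume from the client volfile
/etc/glusterfs/glusterfs-client.vol /mnt/gluster glusterfs subvolume=brick0,logfile=/var/log/glusterfs/booster.log,loglevel=DEBUG

The subvolume option is what tells booster which volume from the client volfile to use, which is why seeing your entry would help.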

If the log file is available, do post that also.

Thanks
-Shehjar


- Wei
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
