Dear Amar,

Hello - did you manage to look into the directory-related problems since?
Thank you, Josef.

On Jan 3, 2009, at 09:53, Amar Tumballi (bulde) wrote:
hi 'At Work',

I got a similar report from another user of glusterfs on the macfuse mailing list too. I will look into these Mac 'directory'-related issues on Monday. Will get back to you after I investigate.

Regards,
Amar

2009/1/3 At Work <[email protected]>

What's more, I see that the proper permissions and UID are being forwarded to the remote filesystem - as the user of the service creating the files exists only on the "head" server, is it possible that the remote server is refusing to mkdir and chown directories? This would be odd, as it would seem logical that it is the mount-point server that decides who gets to read or write.

What of the "glusterfs-fuse" error I get every two seconds? Is this in your domain, or should I be asking this of the FUSE developers?

Thanks, best.

That's it exactly. As it stands I have the glusterfs server (or its server.vol file) on the sub-servers setting up (and exporting?) the bricks, and OS X uses only the client.vol file to import and assemble the remote bricks into a cluster. Also, yes, the problems are as you say: I can read/write files, but I cannot create/upload/rename directories.

Here is a copy of the server.vol files from two servers:

matserve01:

    volume posix01a
      type storage/posix
      option directory /raid01a/clients
    end-volume

    volume raid01a
      type features/locks
      subvolumes posix01a
    end-volume

    volume posix01b
      type storage/posix
      option directory /raid01b/clients
    end-volume

    volume raid01b
      type features/locks
      subvolumes posix01b
    end-volume

    ### Add network serving capability to above exports.
    volume server
      type protocol/server
      option transport-type tcp
      subvolumes raid01a raid01b
      option auth.addr.raid01a.allow 192.168.1.* # Allow access to "raid01a" volume
      option auth.addr.raid01b.allow 192.168.1.* # Allow access to "raid01b" volume
    end-volume

matserve02:

    volume posix02a
      type storage/posix
      option directory /raid02a/clients
    end-volume

    volume raid02a
      type features/locks
      subvolumes posix02a
    end-volume

    volume posix02b
      type storage/posix
      option directory /raid02b/clients
    end-volume

    volume raid02b
      type features/locks
      subvolumes posix02b
    end-volume

    ### Add network serving capability to above exports.
    volume server
      type protocol/server
      option transport-type tcp
      subvolumes raid02a raid02b
      option auth.addr.raid02a.allow 192.168.1.* # Allow access to "raid02a" volume
      option auth.addr.raid02b.allow 192.168.1.* # Allow access to "raid02b" volume
    end-volume

...and the client.vol file from the OS X server:

    ### Add client feature and attach to remote subvolume of server1
    # import RAID a's on matserve01 & matserve02
    volume rRaid01a
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.1.6   # IP address of the remote brick
      option remote-subvolume raid01a  # name of the remote volume
    end-volume

    volume rRaid02a
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.1.7   # IP address of the remote brick
      option remote-subvolume raid02a  # name of the remote volume
    end-volume

    ## add c, d, e, etc sections as bays expand for each server
    ###################

    ### Add client feature and attach to remote subvolume of server2
    # combine raid a's
    volume cluster0102a
      type cluster/afr
      subvolumes rRaid01a rRaid02a
    end-volume

    ## add c, d, e, etc sections as bays expand for each server
    ###################

...you may notice that I am, for the time being, assembling but one cluster (a) - for testing purposes.

Does all this seem correct to you?

On Jan 2, 2009, at 14:17, Krishna Srinivas wrote:

Schomburg,

You have 4 servers and one client.
Each server has to export 2 directories, /raid01a and /raid01b (FUSE does not play any role on the servers). On the client machine, glusterfs mounts using the client.vol file, combining all the exported directories. This would be a typical setup in your case. How is your setup? Can you mail the client.vol file? According to your mail, creation of directories fails, but creation/read/write of files is fine. Right?

Krishna

On Fri, Jan 2, 2009 at 5:01 PM, Jake Maul <[email protected]> wrote:

On Fri, Jan 2, 2009 at 3:55 AM, At Work <[email protected]> wrote:

Thank you for your rapid reply. Just one question: by "leave your fstab mount alone" do you mean leave it to mount the xfs disk on startup?

Yes. Mount your XFS partition via fstab as you normally would.

As for the rest... dunno what to tell ya. Maybe one of the glusterfs devs can chime in with some ideas.

Good luck,
Jake

This problem is odd to say the least - when I do a 'mount' after activating the glusterfs client and cluster on Leopard, I get the following:

    glusterfs on /Volumes/raid0102a (fusefs, local, synchronous)

...and on the Debian host server I get:

    fusectl on /sys/fs/fuse/connections type fusectl (rw) # seems to be a fuse connection - should fuse-accessible mounts go here?
    /dev/sdb1 on /raid01a type xfs (rw)                   # raid block a
    /dev/sdc1 on /raid01b type xfs (rw)                   # raid block b

...and in the glusterfs log I get:

    2009-01-02 11:06:42 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 576 ((null))
    2009-01-02 11:06:42 E [fuse-bridge.c:703:do_chmod] glusterfs-fuse: 2: CHMOD 576 ((null)) (fuse_loc_fill() failed)
    2009-01-02 11:06:42 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 576 ((null))
    2009-01-02 11:06:42 E [fuse-bridge.c:581:fuse_getattr] glusterfs-fuse: 1: GETATTR 576 (fuse_loc_fill() failed)
    2009-01-02 11:08:16 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
    2009-01-02 11:08:16 E [fuse-bridge.c:2193:fuse_getxattr] glusterfs-fuse: 2: GETXATTR (null)/578 (com.apple.FinderInfo) (fuse_loc_fill() failed)
    2009-01-02 11:08:16 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
    2009-01-02 11:08:16 E [fuse-bridge.c:2193:fuse_getxattr] glusterfs-fuse: 2: GETXATTR (null)/578 (com.apple.FinderInfo) (fuse_loc_fill() failed)
    2009-01-02 11:08:17 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
    2009-01-02 11:08:17 E [fuse-bridge.c:2193:fuse_getxattr] glusterfs-fuse: 0: GETXATTR (null)/578 (com.apple.FinderInfo) (fuse_loc_fill() failed)
    2009-01-02 11:09:58 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
    2009-01-02 11:09:58 E [fuse-bridge.c:581:fuse_getattr] glusterfs-fuse: 1: GETATTR 578 (fuse_loc_fill() failed)

...and the last two lines are repeated every few minutes.

Am I correct in understanding that I have no need for FUSE on the Debian servers? There seems to be a bridge failure of some sort going on here.

On Jan 2, 2009, at 08:34, Jake Maul wrote:

On the brick server (the content server... the one with the XFS-formatted volume), FUSE is actually not used or even needed as far as I can tell.
Leave your fstab mount alone, and treat GlusterFS as a pure replacement for NFS's /etc/exports. FUSE only comes into play on the client side, where it's no longer relevant what the underlying filesystem is. If I'm reading you right, your Xserve is the client in this scenario. Perhaps Mac OS X's FUSE implementation is strange somehow; I'm not familiar with it.

Otherwise, it sounds to me like you're doing it right. Sounds like either a permissions problem or a bug somewhere (first guesses would be Mac OS X's FUSE, or the GlusterFS client on OS X).

On Thu, Jan 1, 2009 at 11:55 PM, [email protected] <[email protected]> wrote:

Dear All,

I'm afraid I'm a bit new to this. I hope I'm not missing the obvious, but in all the documentation I can't seem to find a clear answer to my problem.

I have a head server (Leopard Xserve) that will be used as a mount point for four sub-servers (Debian Etch) that each have two SATA RAID 5 blocks running an XFS filesystem.

Before I switched to glusterfs, I would do an NFS export (/etc/exports) of the XFS filesystem mounted in /etc/fstab. I have since cancelled (commented out) the NFS export, but I am not quite sure what to do about the fstab: should I mount the drives using this file, then export the filesystem using glusterfs? Or should it be glusterfs doing the mounting? What role does FUSE have in the mount operation?

The RAID drives are at /dev/sdb and /dev/sdc, and their filesystems are accessible at /dev/sdb1 and /dev/sdc1 - should I be mounting these with glusterfs (instead of mounting them to a folder in the server root as I am doing presently)?

With my present configuration, all works correctly if I mount the raid drives individually, yet when I mirror two drives across two servers using AFR things get wonky - I can upload files to a folder (and see that they have indeed been replicated to both drives), yet I am unable to create a new folder (it becomes an inaccessible icon).

Thank you for any advice.

Best,
J.M. Schomburg.
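[To make the above concrete, here is a minimal sketch of the workflow Jake describes, assuming a typical glusterfs 1.x-era layout; the /etc/glusterfs/ volfile paths are illustrative, not confirmed anywhere in this thread:]

    # /etc/fstab on each Debian brick: mount the XFS volumes as usual
    /dev/sdb1  /raid01a  xfs  defaults  0  0
    /dev/sdc1  /raid01b  xfs  defaults  0  0

    # Then, instead of listing /raid01a and /raid01b in /etc/exports,
    # serve them with glusterfsd and its server.vol (no FUSE needed here):
    glusterfsd -f /etc/glusterfs/server.vol

    # FUSE enters only on the OS X client, when mounting the assembled cluster:
    glusterfs -f /etc/glusterfs/client.vol /Volumes/raid0102a

[In other words: fstab handles the local XFS mounts, glusterfsd replaces the NFS export, and the client-side glusterfs mount is the only place a FUSE layer is involved.]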
_______________________________________________
Gluster-devel mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/gluster-devel

--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!
