Hi,

Attached please find the server and client configuration files using the
distribute and AFR+distribute translators.
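
For reference, the AFR+distribute variant stacks replicate (AFR) pairs
beneath distribute on the client side. A minimal sketch, assuming the same
brick names as in the attached client volfile, with HDD1/HDD1-remote and
HDD2/HDD2-remote mirroring each other (the afr0/afr1 names are only
illustrative):

volume afr0
  type cluster/replicate                # AFR: mirror the local and remote brick
  subvolumes HDD1 HDD1-remote
end-volume

volume afr1
  type cluster/replicate
  subvolumes HDD2 HDD2-remote
end-volume

volume distribute
  type cluster/distribute               # hash files across the mirrored pairs
  subvolumes afr0 afr1
end-volume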

The distribute translator still returns a different inode number for the
same file across the storage cluster.

The output below shows that the GlusterFS version in use is 2.0.0rc4:

FSSERVER1:/var/log/glusterfs # glusterfs -V
glusterfs 2.0.0rc4 built on Mar 20 2009 12:30:00
Repository revision: cb602a1d7d41587c24379cb2636961ab91446f86 +
Copyright (c) 2006-2009 Z RESEARCH Inc. <http://www.zresearch.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
FSSERVER1:/var/log/glusterfs #


BR
Woon Chian


-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of
Anand Avati
Sent: Tuesday, March 03, 2009 7:42 PM
To: Hoe Woon Chian
Cc: [email protected]
Subject: Re: [Gluster-users] Clustering Samba with GlusterFS with Backend

> I understand that Samba's share locks and byte-range locks depend heavily
> on the on-disk inode, so I need to make sure the inode number is the
> same across the cluster disks.
> Scenario 1: with the AFR translator, the stat command shows me a
> different inode number for the same file on different clients.
> Scenario 2: with distribute, the result is the same, with different
> inode numbers.
>
> The only workable translator is Unify, where the device id and inode
> number are the same on client 1, client 2, client .....
> But I also understand that Unify has some performance issues. Will the
> Distribute translator replace Unify? Or is there any method to get a
> consistent device id and inode number across the cluster disks?

Recent versions of Distribute return the same inode number consistently on
all clients. Please upgrade.
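
After upgrading, one quick check is to stat the same file from every client
and compare the Device and Inode fields; a sketch, using the mount point
from the logs below:

FSSERVER1:~ # stat -c 'dev=%d inode=%i' /mnt/GFS/Recoverylock/lock
FSSERVER2:~ # stat -c 'dev=%d inode=%i' /mnt/GFS/Recoverylock/lock

Both commands should print the same pair once Distribute hands out
consistent inode numbers.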

> Another question is how can I get Posix ACL support with GlusterFS?

Currently, POSIX ACLs are not supported.

Avati
glusterfs-server.vol

### Export volume "brick" with the contents of "/home/export" directory.
#volume brick-NameSpace
#  type storage/posix                   # POSIX FS translator
#  option directory /HDD/sda3/Nspace    # Export this directory
#end-volume

#volume brick-NS
#  type features/posix-locks
#  option mandatory-locks on
#  subvolumes brick-NameSpace
#end-volume

volume brick0-lock
  type storage/posix
  option directory /HDD/1
end-volume

volume brick0
  type features/posix-locks
  option mandatory-locks on
  subvolumes brick0-lock
end-volume

volume brick1-lock
  type storage/posix
  option directory /HDD/2
end-volume

volume brick1
  type features/posix-locks
  option mandatory-locks on
  subvolumes brick1-lock
end-volume

### Add network serving capability to the above bricks.
volume server
  type protocol/server
  option transport-type tcp
# option bind-address 192.168.1.10     # Default is to listen on all interfaces
# option listen-port 6996              # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  subvolumes brick0 brick1
  option auth.addr.brick0.allow 127.0.0.*,10.10.10.*   # Allow access to "brick0"
  option auth.addr.brick1.allow 127.0.0.*,10.10.10.*   # and "brick1"
#  option auth.addr.brick-NS.allow 127.0.0.*,10.10.10.*
end-volume
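
On each storage node the server side would be started against this volfile;
a minimal sketch, assuming it is installed at the conventional
/etc/glusterfs path:

glusterfsd -f /etc/glusterfs/glusterfs-server.vol

Note that features/posix-locks is layered over each storage/posix brick
above, so the byte-range lock calls that Samba issues are enforced on the
bricks themselves.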

glusterfs-client.vol

volume HDD1
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume brick0
end-volume

volume HDD1-remote
  type protocol/client
  option transport-type tcp
  option remote-host 10.10.10.2         # IP address of the remote brick
  option remote-subvolume brick0        # name of the remote volume
end-volume

volume HDD2
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume brick1
end-volume

volume HDD2-remote
  type protocol/client
  option transport-type tcp
  option remote-host 10.10.10.2
  option remote-subvolume brick1
end-volume

volume distribute
  type cluster/distribute
  subvolumes HDD1 HDD1-remote HDD2 HDD2-remote
end-volume
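
The mount side simply hands this volfile to the glusterfs binary, exactly
as the log headers below record:

glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/GFS

================================================================================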
Version      : glusterfs 2.0.0rc4 built on Mar 20 2009 12:31:19
TLA Revision : cb602a1d7d41587c24379cb2636961ab91446f86 +
Starting Time: 2009-03-20 14:20:05
Command line : /sbin/glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/GFS
PID          : 4481
System name  : Linux
Nodename     : FSSERVER2
Kernel Release : 2.6.27.7-9-pae
Hardware Identifier: i686

Given volfile:
+------------------------------------------------------------------------------+
  1: volume HDD1
  2:   type protocol/client
  3:   option transport-type tcp
  4:   option remote-host 127.0.0.1         # IP address of the remote brick
  5:   option remote-subvolume brick0        # name of the remote volume
  6: end-volume
  7:
  8: volume HDD1-remote
  9:   type protocol/client
 10:   option transport-type tcp
 11:   option remote-host 10.10.10.1
 12:   option remote-subvolume brick0
 13: end-volume
 14:
 15: volume HDD2
 16:   type protocol/client
 17:   option transport-type tcp
 18:   option remote-host 127.0.0.1
 19:   option remote-subvolume brick1
 20: end-volume
 21:
 22: volume HDD2-remote
 23:   type protocol/client
 24:   option transport-type tcp
 25:   option remote-host 10.10.10.1
 26:   option remote-subvolume brick1
 27: end-volume
 28:
 29: volume distribute
 30:   type cluster/distribute
 31:   subvolumes HDD1 HDD1-remote HDD2 HDD2-remote
 32: end-volume
 33:

+------------------------------------------------------------------------------+
2009-03-20 14:20:05 N [glusterfsd.c:1134:main] glusterfs: Successfully started
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD1: connection and handshake succeeded
2009-03-20 14:20:05 W [dht-common.c:114:dht_lookup_dir_cbk] distribute: lookup of / on HDD1 returned error (Transport endpoint is not connected)
2009-03-20 14:20:05 W [dht-common.c:114:dht_lookup_dir_cbk] distribute: lookup of / on HDD1-remote returned error (Transport endpoint is not connected)
2009-03-20 14:20:05 W [dht-common.c:114:dht_lookup_dir_cbk] distribute: lookup of / on HDD2 returned error (Transport endpoint is not connected)
2009-03-20 14:20:05 W [dht-common.c:114:dht_lookup_dir_cbk] distribute: lookup of / on HDD2-remote returned error (Transport endpoint is not connected)
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD1: connection and handshake succeeded
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD1-remote: connection and handshake succeeded
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD1-remote: connection and handshake succeeded
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD2: connection and handshake succeeded
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD2: connection and handshake succeeded
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD2-remote: connection and handshake succeeded
2009-03-20 14:20:05 N [client-protocol.c:5941:client_setvolume_cbk] HDD2-remote: connection and handshake succeeded
FSSERVER2:/var/log/glusterfs # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              50G   16G   32G  33% /
udev                  999M  108K  998M   1% /dev
/dev/sda3              99G  188M   94G   1% /HDD/1
/dev/sda4             142G  188M  135G   1% /HDD/2
glusterfs             417G  747M  395G   1% /mnt/GFS
FSSERVER2:/var/log/glusterfs # mkdir -p /mnt/GFS/Recoverylock
FSSERVER2:/var/log/glusterfs # touch /mnt/GFS/Recoverylock/lock
FSSERVER2:/var/log/glusterfs # stat /mnt/GFS/Recoverylock/lock
  File: `/mnt/GFS/Recoverylock/lock'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: 12h/18d Inode: 24608776    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2009-03-20 14:20:35.000000000 +0800
Modify: 2009-03-20 14:20:35.000000000 +0800
Change: 2009-03-20 14:20:35.000000000 +0800
FSSERVER2:/var/log/glusterfs #
================================================================================
Version      : glusterfs 2.0.0rc4 built on Mar 20 2009 12:30:00
TLA Revision : cb602a1d7d41587c24379cb2636961ab91446f86 +
Starting Time: 2009-03-20 14:18:17
Command line : /sbin/glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/GFS
PID          : 4246
System name  : Linux
Nodename     : FSSERVER1
Kernel Release : 2.6.27.7-9-pae
Hardware Identifier: i686

Given volfile:
+------------------------------------------------------------------------------+
  1: volume HDD1
  2:   type protocol/client
  3:   option transport-type tcp
  4:   option remote-host 127.0.0.1
  5:   option remote-subvolume brick0
  6: end-volume
  7:
  8: volume HDD1-remote
  9:   type protocol/client
 10:   option transport-type tcp
 11:   option remote-host 10.10.10.2         # IP address of the remote brick
 12:   option remote-subvolume brick0        # name of the remote volume
 13: end-volume
 14:
 15: volume HDD2
 16:   type protocol/client
 17:   option transport-type tcp
 18:   option remote-host 127.0.0.1
 19:   option remote-subvolume brick1
 20: end-volume
 21:
 22: volume HDD2-remote
 23:   type protocol/client
 24:   option transport-type tcp
 25:   option remote-host 10.10.10.2
 26:   option remote-subvolume brick1
 27: end-volume
 28:
 29: volume distribute
 30:   type cluster/distribute
 31:   subvolumes HDD1 HDD1-remote HDD2 HDD2-remote
 32: end-volume
 33:

+------------------------------------------------------------------------------+
2009-03-20 14:18:17 N [glusterfsd.c:1134:main] glusterfs: Successfully started
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD1: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD1: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD1-remote: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD2: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD2: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD1-remote: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD2-remote: connection and handshake succeeded
2009-03-20 14:18:17 N [client-protocol.c:5941:client_setvolume_cbk] HDD2-remote: connection and handshake succeeded
2009-03-20 14:18:17 W [dht-layout.c:482:dht_layout_normalize] distribute: directory / looked up first time
2009-03-20 14:18:17 W [dht-common.c:152:dht_lookup_dir_cbk] distribute: fixing assignment on /
FSSERVER1:/var/log/glusterfs # stat /mnt/GFS/Recoverylock/lock
  File: `/mnt/GFS/Recoverylock/lock'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: 12h/18d Inode: 24608777    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2009-03-20 14:20:35.000000000 +0800
Modify: 2009-03-20 14:20:35.000000000 +0800
Change: 2009-03-20 14:20:35.000000000 +0800
FSSERVER1:/var/log/glusterfs #

glusterfs-client.vol

### Add client feature and attach to remote subvolume

volume HDD1
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume brick0
end-volume

volume HDD1-remote
  type protocol/client
  option transport-type tcp
  option remote-host 10.10.10.2         # IP address of the remote brick
  option remote-subvolume brick0        # name of the remote volume
end-volume

volume HDD2
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume brick1
end-volume

volume HDD2-remote
  type protocol/client
  option transport-type tcp
  option remote-host 10.10.10.2
  option remote-subvolume brick1
end-volume

#volume distribute
#  type cluster/distribute
#  subvolumes client1 client2-remote
#end-volume

### Add readahead feature
#volume readahead
#  type performance/read-ahead
#  option page-size 1MB     # unit in bytes
#  option page-count 2       # cache per file  = (page-count x page-size)
#  subvolumes client
#end-volume

### Add IO-Cache feature
#volume iocache
#  type performance/io-cache
#  option page-size 256KB
#  option page-count 2
#  subvolumes readahead
#end-volume

### Add writeback feature
#volume writeback
#  type performance/write-behind
#  subvolumes iocache
#end-volume