Re: [Gluster-devel] Website down?

2007-08-21 Thread Sebastien LELIEVRE
Hi,

Guido Smit wrote:
 For the last 2 days I'm trying to connect to the website, but no
 success. Is it down?
 

It was, but I've managed to connect to it this morning, and I still can.

What was the problem ?

Sebastien.

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] recommended library versions for glusterfs 1.3

2007-08-17 Thread Sebastien LELIEVRE
Hi everyone,

I would like to add some more information about this situation. Here is the
current behaviour we're experiencing:

Note: for those who prefer it, here is a pastebin of what follows:
http://glusterfs.pastebin.com/m4266582c

We're using FUSE 2.6.5, GlusterFS 1.3 patch 457, libattr 2.4.7 and glibc
2.3.2. The kernel is 2.6.16.52.

All servers export a '/export/data' directory

Client mounts the glusterfs volume to /mnt/glusterfs with an afr *:3
specification

## Client ##

glusterfs 217G  7.1G  207G   4% /mnt/glusterfs

[EMAIL PROTECTED]:/mnt# find /mnt/glusterfs/var-www/obmbackup -exec
file {} \;
/mnt/glusterfs/var-www/obmbackup: directory
/mnt/glusterfs/var-www/obmbackup/obmdb-20060104:202315-1.0.dump: ASCII text

[EMAIL PROTECTED]:/mnt# cat /var/log/glusterfs/glusterfs.log
2007-08-17 07:27:26 E [afr.c:1442:afr_selfheal_getxattr_cbk]
tbs-clust-data-afr:
(path=/var-www/obmbackup/obmdb-20060104:202315-1.0.dump
child=tbs-clust-or2-data) op_ret=-1 op_errno=38

[EMAIL PROTECTED]:/mnt# touch /mnt/glusterfs/test

## server 1 ##

Server 1 is the consistent brick, so it provides the files that should
be self-healed onto the other volumes.
For instance, matching the client output above:

[EMAIL PROTECTED]:~# du -sh /export/data/var-www/obmbackup/*
16M /export/data/var-www/obmbackup/obmdb-20060104:202315-1.0.dump


We check if the extended attributes are enabled:

[EMAIL PROTECTED]:~# setfattr -n user.foo -v bar /export/data/test

[EMAIL PROTECTED]:~# getfattr -n user.foo /export/data/test
getfattr: Removing leading '/' from absolute path names
# file: export/data/test
user.foo=bar

## server 2 ##

This server's exported data directory is inconsistent. It should have been
self-healed by the client 'find' command above, but:

[EMAIL PROTECTED]:~# du -sh /export/data/var-www/obmbackup/*
du: `/export/data/var-www/obmbackup/*': No such file or directory

[EMAIL PROTECTED]:~# du -sh /export/data/var-www/*
4.0K    /export/data/var-www/obmbackup


We check if the extended attributes are enabled:

[EMAIL PROTECTED]:~# setfattr -n user.foo2 -v bar2 /export/data/test

[EMAIL PROTECTED]:~# getfattr -n user.foo2 /export/data/test
getfattr: Removing leading '/' from absolute path names
# file: export/data/test
user.foo2=bar2


## server 3 ##

This server's exported data directory is also inconsistent. It should have
been self-healed by the client 'find' command above, as brick2 should have
been too, but the song remains the same:

[EMAIL PROTECTED]:~# du -sh /export/data/var-www/obmbackup/*
du: `/export/data/var-www/obmbackup/*': No such file or directory

[EMAIL PROTECTED]:~# du -sh /export/data/var-www/
8.0K    /export/data/var-www

And again, we can check that the extended attributes are enabled:

[EMAIL PROTECTED]:~# setfattr -n user.foo3 -v bar3 /export/data/test

[EMAIL PROTECTED]:~# getfattr -n user.foo3 /export/data/test
getfattr: Removing leading '/' from absolute path names
# file: export/data/test
user.foo3=bar3

Note that after editing this 'test' file (with 'vi' for instance), the
extended attributes disappear, presumably because vi writes a new file and
renames it over the original, so the attributes set on the old inode are lost.
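
A quick way to see whether anything survives, and to inspect what GlusterFS
itself stores (typically under the trusted. namespace, readable only by root;
the exact attribute names depend on the version), is to dump every attribute:

# dump all extended attributes, in all namespaces (run as root)
getfattr -d -m - /export/data/test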

If you need any more info, please ask !

I hope the code Amar is working on will solve the issues above.

Best Regards,

Sebastien.

Amar S. Tumballi wrote:
 Hi Vincent,
  The versions you are using are perfectly fine. I am investigating the
 problems you have, with Sebastien's help. Let me get back to you once
 these issues are solved.
 
 -amar
 
 
 On 8/14/07, Vincent Régnard [EMAIL PROTECTED] wrote:
 Hi all,

 My present gluster configuration is sometimes freezing. I am wondering
 if this has to do with my library versions or the build process. My
 glibc is pretty old (2.3.2) and I had to make a small patch to avoid
 an epoll problem, as discussed here earlier:

 --- configure.ac.orig   Thu Jul  5 15:49:08 2007
 +++ configure.acThu Jul  5 15:50:18 2007
 @@ -183,7 +183,7 @@
   AC_DEFINE(HAVE_BACKTRACE, 1, [define if found backtrace])
fi

 -AC_CHECK_HEADERS([sys/epoll.h])
 +dnl AC_CHECK_HEADERS([sys/epoll.h])
AC_CHECK_HEADERS([sys/xattr.h])

AC_SUBST(HAVE_IBVERBS)

 With the latest (1.3.0 patch 457) I now have an extended attribute problem
 (the source code does not compile) and I will certainly have to patch again
 to work around it, which is certainly not a good thing.

 I would like to know the recommended library versions for
 gluster 1.3 (glibc, but also others such as libattr, fuse...).

 What would be the corresponding recommendations for the Linux kernel version
 and the gcc compiler?

 Thanks in advance for your comments.

 Vincent

 PS: I run
 glusterfs 1.5patch403
 fuse 2.6.5
 libattr 2.4.7
 on linux kernel 2.6.16.52
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] Yet another GlusterFS HA specification

2007-08-14 Thread Sebastien LELIEVRE
Hi everyone,

Here is a template for an HA storage configuration with GlusterFS.
The configuration is:

2 nodes on a network A (or2 and or3)
1 node on a network B (op1)

We want to provide HA services for both networks A & B

In this example, we also provide a backup node for network A

All clients from networks A & B should see the same volume at all times

 Server 1 
volume tbs-clust-op1-local-data
   type storage/posix
   option directory /export/GLUSTER/data
end-volume

volume tbs-clust-op1-local-data-locks
  type features/posix-locks
  subvolumes tbs-clust-op1-local-data
end-volume

volume tbs-clust-op1-local-ns
   type storage/posix
   option directory /export/GLUSTER/namespace
end-volume

volume tbs-clust-or2-remote-data
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.28.6
   option remote-subvolume tbs-clust-or2-local-data-locks
end-volume

volume tbs-clust-or3-remote-data
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.28.7
   option remote-subvolume tbs-clust-or3-local-data-locks
end-volume

### Performance xlators
volume tbs-clust-or2-iot
   type performance/io-threads
   option thread-count 2
   subvolumes tbs-clust-or2-remote-data
end-volume

volume tbs-clust-or2-wb
   type performance/write-behind
   subvolumes tbs-clust-or2-iot
end-volume

volume tbs-clust-or3-iot
   type performance/io-threads
   option thread-count 2
   subvolumes tbs-clust-or3-remote-data
end-volume

volume tbs-clust-or3-wb
   type performance/write-behind
   subvolumes tbs-clust-or3-iot
end-volume

### Replication on server-side
volume tbs-clust-data-afr
   type cluster/afr
   subvolumes tbs-clust-op1-local-data-locks tbs-clust-or3-wb tbs-clust-or2-wb
   option replicate *:3
   option self-heal on
end-volume

### Unification on server-side
volume tbs-clust-unify
   type cluster/unify
   subvolumes tbs-clust-data-afr
   option namespace tbs-clust-op1-local-ns
   option scheduler rr
end-volume

### Global Performance xlators
volume tbs-clust-server
   type performance/io-threads
   option thread-count 8
   option cache-size 32MB
   subvolumes tbs-clust-unify
end-volume

### Volumes exportation
volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes tbs-clust-server
   option auth.ip.tbs-clust-op1-local-data-locks.allow 127.0.0.1,10.24.133.178,192.168.28.6,192.168.28.7,192.168.28.5
   option auth.ip.tbs-clust-op1-local-ns.allow 127.0.0.1,10.24.133.178
   option auth.ip.tbs-clust-server.allow 127.0.0.1,10.24.133.178,192.168.28.6,192.168.28.7,192.168.28.5
end-volume


 Server 2 
volume tbs-clust-or2-local-data
   type storage/posix
   option directory /export/GLUSTER/data
end-volume

volume tbs-clust-or2-local-data-locks
  type features/posix-locks
  subvolumes tbs-clust-or2-local-data
end-volume

volume tbs-clust-or2-local-ns
   type storage/posix
   option directory /export/GLUSTER/namespace
end-volume

volume tbs-clust-or3-remote-data
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.28.7
   option remote-subvolume tbs-clust-or3-local-data-locks
end-volume

volume tbs-clust-op1-remote-data
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.24.133.178
   option remote-subvolume tbs-clust-op1-local-data-locks
end-volume

### Performance xlators
volume tbs-clust-op1-iot
   type performance/io-threads
   option thread-count 4
   subvolumes tbs-clust-op1-remote-data
end-volume

volume tbs-clust-op1-wb
   type performance/write-behind
   subvolumes tbs-clust-op1-iot
end-volume

### Replication on server-side
volume tbs-clust-data-afr
   type cluster/afr
   subvolumes tbs-clust-or2-local-data-locks tbs-clust-or3-remote-data tbs-clust-op1-wb
   option replicate *:3
   option self-heal on
end-volume

### Unification on server-side
volume tbs-clust-unify
   type cluster/unify
   subvolumes tbs-clust-data-afr
   option namespace tbs-clust-or2-local-ns
   option scheduler rr
end-volume

### Global Performance xlators
volume tbs-clust-server
   type performance/io-threads
   option thread-count 8
   option cache-size 32MB
   subvolumes tbs-clust-unify
end-volume

### Volumes exportation
volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes tbs-clust-server
   option auth.ip.tbs-clust-or2-local-data-locks.allow 127.0.0.1,192.168.28.6,192.168.28.7,10.24.133.178,192.168.28.5
   option auth.ip.tbs-clust-or2-local-ns.allow 127.0.0.1,192.168.28.6
   option auth.ip.tbs-clust-server.allow 127.0.0.1,192.168.28.6,192.168.28.7,10.24.133.178,192.168.28.5
end-volume
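
The spec for the third node (or3) is not included in the message. By symmetry
with the two specs above it would presumably look like the sketch below; every
volume name and address used here is inferred from Server 1 and Server 2, not
taken from an actual file. As on Server 2, only the WAN link to op1 is wrapped
in io-threads and write-behind; or2 sits on the same LAN and is used directly.

 Server 3 (sketch) 
volume tbs-clust-or3-local-data
   type storage/posix
   option directory /export/GLUSTER/data
end-volume

volume tbs-clust-or3-local-data-locks
  type features/posix-locks
  subvolumes tbs-clust-or3-local-data
end-volume

volume tbs-clust-or3-local-ns
   type storage/posix
   option directory /export/GLUSTER/namespace
end-volume

volume tbs-clust-or2-remote-data
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.28.6
   option remote-subvolume tbs-clust-or2-local-data-locks
end-volume

volume tbs-clust-op1-remote-data
   type protocol/client
   option transport-type tcp/client
   option remote-host 10.24.133.178
   option remote-subvolume tbs-clust-op1-local-data-locks
end-volume

### Performance xlators
volume tbs-clust-op1-iot
   type performance/io-threads
   option thread-count 4
   subvolumes tbs-clust-op1-remote-data
end-volume

volume tbs-clust-op1-wb
   type performance/write-behind
   subvolumes tbs-clust-op1-iot
end-volume

### Replication on server-side
volume tbs-clust-data-afr
   type cluster/afr
   subvolumes tbs-clust-or3-local-data-locks tbs-clust-or2-remote-data tbs-clust-op1-wb
   option replicate *:3
   option self-heal on
end-volume

### Unification on server-side
volume tbs-clust-unify
   type cluster/unify
   subvolumes tbs-clust-data-afr
   option namespace tbs-clust-or3-local-ns
   option scheduler rr
end-volume

### Global Performance xlators
volume tbs-clust-server
   type performance/io-threads
   option thread-count 8
   option cache-size 32MB
   subvolumes tbs-clust-unify
end-volume

### Volumes exportation
volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes tbs-clust-server
   option auth.ip.tbs-clust-or3-local-data-locks.allow 127.0.0.1,192.168.28.6,192.168.28.7,10.24.133.178,192.168.28.5
   option auth.ip.tbs-clust-or3-local-ns.allow 127.0.0.1,192.168.28.7
   option auth.ip.tbs-clust-server.allow 127.0.0.1,192.168.28.6,192.168.28.7,10.24.133.178,192.168.28.5
end-volume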



Re: [Gluster-devel] writebehind slowness

2007-08-03 Thread Sebastien LELIEVRE
Nathan Allen Stratton wrote:
 You mind sharing your configs?
 

Sorry, I did not mention it, but they are still on the glusterfs
pastebin:

http://glusterfs.pastebin.com/d77e99d7f

It's something like this, but with real IPs (2 distinct datacenters)
instead of loopback

we have or1, or2, or3 (1 datacenter) and op1 (another datacenter)

100Mbits connection between each


 
 
 Nathan Stratton CTO, Voila IP Communications
 nathan at robotics.net  nathan at voilaip.com
 http://www.robotics.net http://www.voilaip.com
 

Sorry for the delay,

Sebastien.
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS High Availability setup and DNS round-robin

2007-08-01 Thread Sebastien LELIEVRE
Anand Avati wrote:

 2. the protocol/client handles round-robin DNS gracefully. you can have just
 one protocol/client section in the client specification which has a hostname
 which resolves to 3 different server IPs (either via bind configuration, or
 multiple /etc/hosts entries for different IPs and the same hostname).

So something like :

   volume santa
 type protocol/client
 option transport-type tcp/client
 option remote-host servers.mygluster.lan
 option remote-subvolume mailspool
   end-volume

Where servers.mygluster.lan resolves to 10.64.98.147, 10.64.98.144,
or 10.64.98.146 (in this example).
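
One way to get such a name without touching DNS (a sketch, reusing the example
addresses above) is to list it several times in /etc/hosts and enable
multi-address lookups in /etc/host.conf; a round-robin A record in BIND
achieves the same thing, and which addresses are actually returned still
depends on the resolver library in use:

# /etc/hosts
10.64.98.147  servers.mygluster.lan
10.64.98.144  servers.mygluster.lan
10.64.98.146  servers.mygluster.lan

# /etc/host.conf -- 'multi on' makes the resolver return all matching
# /etc/hosts entries instead of only the first one
multi on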

 When one server disconnects, glusterfs now automatically tries the
 second IP from the dns reply. (loops over to the next IP per
 reconnection and then does a fresh DNS lookup after one iteration) -
 This option can work as an intermediate solution until the HA xlator
 comes in.



Sebastien.
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] block unserialize transport error with tla 383

2007-08-01 Thread Sebastien LELIEVRE
Hi everyone,

Here is an error that was reported to me :

2007-08-01 13:34:19 E [protocol.c:251:gf_block_unserialize_transport]
libglusterfs/protocol: EOF from
  peer (192.168.28.5:1023)

My config files are in my latest pastebin on gluster.pastebin.

Also, there seems to be a big latency problem:

~# time ls /mnt/gluster/home
http2
real    0m10.863s
user    0m0.000s
sys     0m0.000s

Do you need any more info? Please ask!

Regards,

Sebastien.
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] core bt libc

2007-07-31 Thread Sebastien LELIEVRE
Hi all

Anand Avati wrote:
 I still dont see an attachment :(
 
 avati
 

For the record, I'm afraid the Gluster ML strips attachments when posting. I
can't post to the glusterfs list with my certificate attached. Not allowing
attachments seems to prevent huge postings & spam, IIRC.

Regards,

Sebastien.
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS Example

2007-07-31 Thread Sebastien LELIEVRE
Hello all,

Nathan Allen Stratton wrote:
 
 Which version of glusterfs are you using? tla, pre ?

 (that issue has been fixed for a little while now, so if you're using pre6
 you shouldn't have come across it)
 
 Yep, that was it, I was using 1.2; the latest 1.3 fixes the problem.
 

Indeed, AFR is not available in the 1.2 release.

I'm really interested in your configuration and the issues you may run into
(or may not!).

Please keep us posted!

Regards,

Sébastien.
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] Unify namespace-size convention

2007-07-16 Thread Sebastien LELIEVRE
Hi everyone.

I just have a little question:

Is there a way to determine the namespace volume size with regard to the
bricks in use?

To put it simply: is there any rule that would say:

I have X bricks of Y GB with Z thousand files on each,
so I need a namespace volume of *how to define it* MB?

Does anyone have a clue on this ?
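
For what it's worth, a rough back-of-the-envelope sketch, assuming the unify
namespace brick only stores the directory tree plus zero-length files (no file
data): the space needed is driven by the number of entries rather than by the
data size, so the inode count of the namespace filesystem matters more than
its size in GB. Something along these lines (paths assumed) gives the order
of magnitude:

# count the entries one brick contributes to the namespace
find /export/data | wc -l
# budget roughly one inode + one directory entry per file; on ext3 that
# is a few hundred bytes to a few KB per entry, so even millions of
# entries fit comfortably in a namespace volume of a few GB -- as long
# as the filesystem was created with enough inodes (see mke2fs -N).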

Cheers,

Sebastien.


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Connect taking a loooong time on loopback

2007-07-16 Thread Sebastien LELIEVRE
Hi everyone,

I'm still experiencing the same problem with patch 313

Does anyone have a clue as to what could cause such behaviour?

Regards,

Sebastien.

Sebastien LELIEVRE wrote:
 Hi everyone,
 
 I'm beginning glusterfs integration on our new datacenter, and my very
 first tests are showing me really odd things, just see:
 
 Client:
 
 volume brickc
   type protocol/client
   option transport-type tcp/client
   option remote-host 127.0.0.1
   option remote-port 6996
   option remote-subvolume brick
 end-volume
 
 
 Server:
 
 volume brick
   type storage/posix
   option directory /home/slelievre/gluster-test/server1
 end-volume
 
 volume server
   type protocol/server
   option transport-type tcp/server
   option bind-address 127.0.0.1
   option listen-port 6996
   subvolumes brick
   option auth.ip.brick.allow *
 end-volume
 
 
 As you can see in the log below, the client took nearly 10 minutes to
 connect to the server, and the connection was broken shortly after.
 
 We are running mainline-2.5 patch 266 with fuse 2.6.5 and kernel
 2.6.16 'customized' to our needs.
 
 Is this a bug? If so, is it fixed in later versions?
 
 Cheers,
 
 Sebastien.
 
 logs are :
 
 Server-side :
 
 2007-07-13 12:46:37 D [spec.y:116:new_section] libglusterfs/parser: New
 node for 'brick'
 2007-07-13 12:46:37 D [spec.y:132:section_type] libglusterfs/parser:
 Type:brick:storage/posix
 2007-07-13 12:46:37 D [xlator.c:100:xlator_set_type]
 libglusterfs/xlator: attempt to load type storage/posix
 2007-07-13 12:46:37 D [xlator.c:107:xlator_set_type]
 libglusterfs/xlator: attempt to load file
 /usr/lib/glusterfs/1.3.0-pre5.2/xlator/storage/posix.so
 2007-07-13 12:46:37 D [spec.y:152:section_option] libglusterfs/parser:
 Option:brick:directory:/home/slelievre/gluster-test/server1
 2007-07-13 12:46:37 D [spec.y:209:section_end] libglusterfs/parser:
 end:brick
 2007-07-13 12:46:37 D [spec.y:116:new_section] libglusterfs/parser: New
 node for 'server'
 2007-07-13 12:46:37 D [spec.y:132:section_type] libglusterfs/parser:
 Type:server:protocol/server
 2007-07-13 12:46:37 D [xlator.c:100:xlator_set_type]
 libglusterfs/xlator: attempt to load type protocol/server
 2007-07-13 12:46:37 D [xlator.c:107:xlator_set_type]
 libglusterfs/xlator: attempt to load file
 /usr/lib/glusterfs/1.3.0-pre5.2/xlator/protocol/server.so
 2007-07-13 12:46:37 D [spec.y:152:section_option] libglusterfs/parser:
 Option:server:transport-type:tcp/server
 2007-07-13 12:46:37 D [spec.y:152:section_option] libglusterfs/parser:
 Option:server:bind-address:127.0.0.1
 2007-07-13 12:46:37 D [spec.y:152:section_option] libglusterfs/parser:
 Option:server:listen-port:6996
 2007-07-13 12:46:37 D [spec.y:194:section_sub] liglusterfs/parser:
 child:server-brick
 2007-07-13 12:46:37 D [spec.y:152:section_option] libglusterfs/parser:
 Option:server:auth.ip.brick.allow:*
 2007-07-13 12:46:37 D [spec.y:209:section_end] libglusterfs/parser:
 end:server
 2007-07-13 12:46:37 D [posix.c:1776:init] brick: missing
 'inode-lru-limit'. defaulting to 1000
 2007-07-13 12:46:37 D [inode.c:481:__create_inode] brick/inode: create
 inode(1)
 2007-07-13 12:46:37 D [inode.c:279:__active_inode] brick/inode:
 activating inode(1), lru=0/1000
 2007-07-13 12:46:37 D [server-protocol.c:5670:init] protocol/server:
 protocol/server xlator loaded
 2007-07-13 12:46:37 D [transport.c:81:transport_load]
 libglusterfs/transport: attempt to load type tcp/server
 2007-07-13 12:46:37 D [transport.c:86:transport_load]
 libglusterfs/transport: attempt to load file
 /usr/lib/glusterfs/1.3.0-pre5.2/transport/tcp/server.so
 2007-07-13 12:46:50 D [tcp-server.c:136:tcp_server_notify] server:
 Registering socket (8) for new transport object of 127.0.0.1
 2007-07-13 12:46:50 D [server-protocol.c:5024:mop_setvolume]
 server-protocol: mop_setvolume: received port = 1023
 2007-07-13 12:46:50 D [server-protocol.c:5040:mop_setvolume]
 server-protocol: mop_setvolume: IP addr = *, received ip addr = 127.0.0.1
 2007-07-13 12:46:50 D [server-protocol.c:5051:mop_setvolume]
 server-protocol: mop_setvolume: accepted client from 127.0.0.1
 2007-07-13 12:49:07 D [server-protocol.c:5649:server_protocol_cleanup]
 protocol/server: cleaned up xl_private of 0x8054330
 2007-07-13 12:49:07 C [tcp.c:81:tcp_disconnect] server: connection
 disconnected
 2007-07-13 12:49:07 D [tcp-server.c:230:gf_transport_fini] server:
 destroying transport object for 127.0.0.1:1023 (fd=8)
 2007-07-13 12:52:15 D [tcp-server.c:136:tcp_server_notify] server:
 Registering socket (5) for new transport object of 127.0.0.1
 2007-07-13 12:52:15 D [server-protocol.c:5024:mop_setvolume]
 server-protocol: mop_setvolume: received port = 1023
 2007-07-13 12:52:15 D [server-protocol.c:5040:mop_setvolume]
 server-protocol: mop_setvolume: IP addr = *, received ip addr = 127.0.0.1
 2007-07-13 12:52:15 D [server-protocol.c:5051:mop_setvolume]
 server-protocol: mop_setvolume: accepted client from 127.0.0.1
 
 Client-side:
 
 2007-07-13 12:52:15 D [glusterfs.c:118:get_spec_fp

Re: [Gluster-devel] mainline--2.5--patch-258: still no AFR replication

2007-07-03 Thread Sebastien LELIEVRE
Hi Gerry,

Does your underlying FS (let's suppose it is ext3) have the extended
attributes mount option enabled?

I'm afraid it may not be as simple as that, but who knows?

So, the question is: are your servers' brick filesystems mounted with
extended attributes enabled?
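
For reference, a quick way to check and, if necessary, enable user extended
attributes on an ext3 brick (the /export mount point and device used below
are assumptions) is:

# check the current mount options of the brick filesystem
mount | grep ' /export '
# enable user xattrs without a reboot
mount -o remount,user_xattr /export
# and make it permanent in /etc/fstab, e.g.:
# /dev/sda2  /export  ext3  defaults,user_xattr  0 2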

Regards,

Sebastien.

Gerry Reno wrote:
 Gerry Reno wrote:
 Krishna,
  Still no luck:

 [EMAIL PROTECTED] glusterfs]# find /mnt/glusterfs*
 /mnt/glusterfs0
 /mnt/glusterfs0/file0
 /mnt/glusterfs0/file1
 /mnt/glusterfs1
 /mnt/glusterfs1/file0
 /mnt/glusterfs1/file2
 /mnt/glusterfs1/file1
 /mnt/glusterfs2
 /mnt/glusterfs2/file0
 /mnt/glusterfs2/file2
 /mnt/glusterfs2/file1
 /mnt/glusterfs3
 /mnt/glusterfs3/file0
 /mnt/glusterfs3/file2
 /mnt/glusterfs3/file1
 [EMAIL PROTECTED] glusterfs]# rm /mnt/glusterfs*/*
 rm: remove regular empty file `/mnt/glusterfs0/file0'? y
 rm: remove regular empty file `/mnt/glusterfs0/file1'? y
 rm: remove regular empty file `/mnt/glusterfs1/file0'? y
 rm: remove regular empty file `/mnt/glusterfs1/file1'? y
 rm: remove regular empty file `/mnt/glusterfs1/file2'? y
 rm: remove regular empty file `/mnt/glusterfs2/file0'? y
 rm: remove regular empty file `/mnt/glusterfs2/file1'? y
 rm: remove regular empty file `/mnt/glusterfs2/file2'? y
 rm: remove regular empty file `/mnt/glusterfs3/file0'? y
 rm: remove regular empty file `/mnt/glusterfs3/file1'? y
 rm: remove regular empty file `/mnt/glusterfs3/file2'? y
 [EMAIL PROTECTED] glusterfs]# find /mnt/glusterfs*
 /mnt/glusterfs0
 /mnt/glusterfs1
 /mnt/glusterfs2
 /mnt/glusterfs3
 [EMAIL PROTECTED] glusterfs]# touch /mnt/glusterfs0/file0
 [EMAIL PROTECTED] glusterfs]# find /mnt/glusterfs*
 /mnt/glusterfs0
 /mnt/glusterfs0/file0
 /mnt/glusterfs1
 /mnt/glusterfs2
 /mnt/glusterfs3
 [EMAIL PROTECTED] glusterfs]#



 
 Here is the log from this sequence:
 
 *snip*
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] replicate with 5 nodes - and adding more nodes in the future

2007-06-26 Thread Sebastien LELIEVRE
Hi,

I just wanted to stress this :

Shai wrote:
 Hello, we are testing glusterfs 1.2 and I have few questions - 

1.2 doesn't include self-heal, so keep in mind that if a drive crashes,
you will have to sync the new drive manually with the others.

BUT, 1.3 is going to correct this, and this is good :)

That's all I had to add

Cheers,

Sebastien LELIEVRE
[EMAIL PROTECTED]   Services to ISP
TBS-internet   http://www.TBS-internet.com

Krishna Srinivas wrote:
 As of now you need to restart glusterfs if there is any change
 in the config spec file. However in future versions you wont need
 to remount (This is in our road map)
 
 On 6/25/07, Shai DB [EMAIL PROTECTED] wrote:
 thanks for the answer
 this seems easy and neat to setup

 another question is, if i add 2 more nodes to the gang
 how can i setup all the clients with the new configuration, without
 need to
 'remount' the glusterfs ?

 Thanks


 On 6/25/07, Krishna Srinivas [EMAIL PROTECTED] wrote:
 
  On 6/25/07, Shai DB [EMAIL PROTECTED] wrote:
   Hello, we are testing glusterfs 1.2 and I have few questions -
  
  
   1. we are going to store millions of small jpg files that will be
 read
  by
   webserver - is glusterfs good solution for this ?
 
  Yes, definitely.
 
   2. we are going to run both server+clients on each node together with
  apache
  
   3. replicate *:2
  
   the way i think doing replicate is defining on each server 2
 volumes and
   using AFR:
  
   server1: a1, a2
   server2: b1, b2
   server3: c1, c2
   server4: d1, d2
   server5: e1, e2
  
   afr1: a1+b2
   afr2: b1+c2
   afr3: c1+d2
   afr4: d1+e2
   afr5: e1+a2
  
   and then unify = afr1+afr2+afr3+afr4+afr5 with replicate option
  
   is this correct way ?
   and what to do on the future when we add more nodes ? when
 changing the
  afr
   (adding and changing the couples) making glusterfs
   redistribute the files the new way ?
 
  Yes this is the right way. If you add one more server f, the one
 solution
  is to move contents of a2 to f2 and clean up a2 and have it as
 following:
 
  afr5: e1 + f2
  afr6: f1 + a2
 
  Can't think of an easier solution.
 
  But if we assume that you will always add 2 servers when you want to
 add,
  we can have the setup in following way:
  afr1: a1 + b2
  afr2: b1 + a2
  afr3: c1 + d2
  afr4: d1 + c2
  afr5: e1 + f2
  afr6: f1 + e2
 
  Now when you add a pair of servers to this (g, h):
  afr7: f1 + h2
  afr8: h1 +f2
 
  Which is very easy. But you will have to add 2 servers every time.
  The advantage is that it is easier to visualize the setup and add
  new nodes.
 
  Thinking further, if we assume that you will replicate all the files
  twice (option replicate *:2) you can have the following setup:
  afr1: a + b
  afr2: c + d
  afr3: e + f
 
  This is a very easy setup. It is simple to add a fresh pair (afr4: g
 +h)
 
  You can have whatever setup you want depending on your
  convenience and requirements.
 
  
   4. what happens when a hard drive goes down and replaces, the cluster
  also
   redistribute the files ?
 
  When a hard drive is replaced, missing files will be replicated from
 the
  AFR's other child.
 
  Regards
  Krishna
 
  ---
 
  The best quote ever : '
 
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel

 
 

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] self-heal issues with AFR in 2.5

2007-06-26 Thread Sebastien LELIEVRE
Hi everyone,

I just wanted to give you a little feedback on mainline--2.5. I am using
my 3 test servers, and will soon add 2 more.

The main problem I am experiencing is: self-heal repairs directory
inconsistencies but not file inconsistencies.

Here is the current state:

tbs-lab1 (client)
tbs-lab2 (brick1)
tbs-lab3 (brick2)

On the client, afr = brick1 + brick2.

For the test, brick1 has the whole data set and brick2 is blank (for now).

brick1 and brick2 are both on an ext3 FS with extended attributes:

/dev/sda2   /glusterfs  ext3defaults,user_xattr 0 2

I am currently using glusterfs--mainline--2.5--patch-220 on each machine.

client : CFLAGS=-O3 ./configure --prefix=/usr/local --sysconfdir=/etc
--disable-server --disable-ibverbs

servers : CFLAGS=-O3 ./configure --prefix=/usr/local --sysconfdir=/etc
--disable-fuse-client --disable-ibverbs

Here is the servers' spec file (X is the number of the brick, so 1 or 2,
here):

volume brickX
  type storage/posix
  option directory /glusterfs
end-volume

volume locksX
  type features/posix-locks
  subvolumes brickX
end-volume

volume serverX
  type protocol/server
  option transport-type tcp/server
  subvolumes locksX
  option auth.ip.locksX.allow 192.168.0.5 # client IP
end-volume

volume traceX
  type debug/trace
  subvolumes serverX
  option debug on
end-volume

I launch both bricks with this command:

glusterfsd -f /etc/glusterfs/glusterfs-server.vol
--log-file=/var/log/glusterfs/glusterfsd.log --log-level=DEBUG

Now here is the client spec file :

volume brick1c
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.6# brick1 IP
  option remote-subvolume locks1
end-volume

volume brick2c
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.7# brick2 IP
  option remote-subvolume locks2
end-volume

volume afr
  type cluster/afr
  subvolumes brick1c brick2c
  option replicate *:2
end-volume

volume writebehind
 type performance/write-behind
 option aggregate-size 131072
 subvolumes afr
end-volume

And I mount the volume like this:

glusterfs -f /etc/glusterfs/glusterfs-client.vol -l
/var/log/glusterfs/glusterfs.log /mnt/glusterfs --log-level=DEBUG

DEBUG is here to give as much information as needed to correct the issue.

So, at this stage, we have:
brick1:~# ls -l /glusterfs
total 4
drwxr-xr-x 2 www-data www-data 4096 2007-05-11 11:27 apache2-default

and :
brick2:~# ls -l /glusterfs
total 0

If I do an 'ls -l' on the client, it instantly creates the apache2-default
directory on brick2 (but not its subdirectories, which haven't been
accessed yet).


The issue is here: let's try to access a file from the client, say:

cp /mnt/glusterfs/apache2-default/apache_pb2_ani.gif /dev/null

brick1:~# ls -l /glusterfs
total 4
drwxr-xr-x 2 www-data www-data 4096 2007-05-11 11:27 apache2-default

brick1:~# ls -l /glusterfs/apache2-default/
total 151
-rw-r--r-- 1 www-data www-data 2160 2007-05-11 11:27 apache_pb2_ani.gif
-rw-r--r-- 1 www-data www-data 2414 2007-05-11 11:27 apache_pb2.gif
 bla bla bla *snip* ...
-rw-r--r-- 1 www-data www-data   26 2007-05-11 11:27 robots.txt


brick2:~# ls -l /glusterfs
total 4
drwxr-xr-x 2 root root 4096 2007-06-26 10:03 apache2-default

brick2:~# ls -l /glusterfs/apache2-default/
total 0
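
A useful extra data point when chasing this kind of problem is to compare the
extended attributes GlusterFS keeps on the file and on its parent directory on
both bricks (a sketch; the exact trusted.* attribute names depend on the
version in use):

brick1:~# getfattr -d -m - /glusterfs/apache2-default/apache_pb2_ani.gif
brick2:~# getfattr -d -m - /glusterfs/apache2-default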

Can you see where I might have forgotten something, or done something
wrong ?

The log files are too big to be sent as attachments (the limit is 40 kB).

I can provide them to anyone who wants them.

Cheers,

Sebastien LELIEVRE
[EMAIL PROTECTED]   Services to ISP
TBS-internet   http://www.TBS-internet.com


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Question about disk space

2007-06-04 Thread Sebastien LELIEVRE
Amar S. Tumballi wrote:
 Yep,
 I agree with Krishna, AFR is not visible to the user. It should happen in
 the background, so why should the user be aware of all the 3 or 2 replicated
 copies? For a user, if he writes 1MB, he wants to see 1MB as used space.
 So,
 showing the size of the first server is a better idea (or even the size of
 the smallest is good too).
 
 -bulde

I'm not convinced by this idea of showing the first server. Some users may
be interested in how files are replicated over smaller disks.

The gluster client should be aware of the status of all bricks.

But you're right, maybe this should not be information shown in the df
output; still, it would be useful to have it on the client, so the user does
not have to connect to every brick to retrieve the status of each of them.
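
In the meantime, a crude workaround (the brick hostnames below are assumed)
is simply to query each brick's backend filesystem directly:

# ask every brick for the free space on its exported directory
for h in brick1 brick2 brick3; do
    ssh "$h" df -h /export/data
done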

Cheers,

Sebastien.

 
 On 6/4/07, Krishna Srinivas [EMAIL PROTECTED] wrote:

 On 6/4/07, Anand Avati [EMAIL PROTECTED] wrote:
  (fwd'ing to gluster-devel)
 
  
   Maybe it could give a sum-up of each replica-bricks ?
 
  this makes sense too, where you show the total storage as X, which
  gets 'consumed' much faster (as in, when you add a 1MB file, 2 or 3 MB
  gets counted for 'used' according to the replication count)

 No, I think this might confuse the users.
 I think showing the info of the first server will be good because
 of the reasons mentioned by you in the previous mail. Lets see
 if we can come up with an ideal solution.

 Krishna

 
  any opinions?
 
 
  --
  Anand V. Avati


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] exporting glusterfs with samba

2007-05-16 Thread Sebastien LELIEVRE
Hi Miklos,

Just to be sure: are you using the write-behind translator?
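
For reference, a write-behind section like the one used elsewhere in these
threads looks like the following (the subvolume name is a placeholder):

volume writebehind
  type performance/write-behind
  option aggregate-size 131072
  subvolumes afr
end-volume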

Regards,

Sebastien.

Miklos Balazs wrote:
 Dear Gluster Hackers,
 
 I have configured a Gluster cluster with 2 nodes mirroring each other
 (with afr), and I have experienced very good performance with the fuse
 client under Linux. However, if I export this filesystem with Samba (I
 have tried exporting from a separate gateway node, and also mounting
 the filesystem on the cluster nodes and exporting from there), I see
 very bad write performance: 5 Mbytes/s aggregate bandwidth. It doesn't
 matter if I write with multiple threads, or even write from multiple
 client machines at the same time, the aggregate bandwidth never gets
 more than 5-6M/s.
 
 Is this a known issue, or has anyone managed to export glusterfs with
 samba with good performance? If so, can You give me some hints on
 optimizing samba?
 
 Thanks! And BTW, You have done great work with Gluster. I was using
 Lustre before, but building and managing a HA cluster with Lustre was
 a pain.. Gluster's architecture is much simpler and cleaner.
 
 Miklos Balazs
 

Sebastien LELIEVRE
[EMAIL PROTECTED]   Services to ISP
TBS-internet   http://www.TBS-internet.com/



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] exporting glusterfs with samba

2007-05-16 Thread Sebastien LELIEVRE
OK,

As a start, can you paste your config files here ?

Welcome to the GlusterFS community!

Cheers,

Sebastien

Miklos Balazs wrote:
 I have tried with or without, but it didn't make any difference.
 
 On 5/16/07, Sebastien LELIEVRE [EMAIL PROTECTED] wrote:
 Hi Miklos,

 Just to be sure : Are you using the write-behind translator?

 Regards,

 Sebastien.

 Miklos Balazs wrote:
  Dear Gluster Hackers,
 
  I have configured a Gluster cluster with 2 nodes mirroring each other
  (with afr), and I have experienced very good performance with the fuse
  client under Linux. However, if I export this filesystem with Samba (I
  have tried exporting from a separate gateway node, and also mounting
  the filesystem on the cluster nodes and exporting from there), I see
  very bad write performance: 5 Mbytes/s aggregate bandwidth. It doesn't
  matter if I write with multiple threads, or even write from multiple
  client machines at the same time, the aggregate bandwidth never gets
  more than 5-6M/s.
 
  Is this a known issue, or has anyone managed to export glusterfs with
  samba with good performance? If so, can You give me some hints on
  optimizing samba?
 
  Thanks! And BTW, You have done great work with Gluster. I was using
  Lustre before, but building and managing a HA cluster with Lustre was
  a pain.. Gluster's architecture is much simpler and cleaner.
 
  Miklos Balazs
 

 Sebastien LELIEVRE
 [EMAIL PROTECTED]   Services to ISP
 TBS-internet   http://www.TBS-internet.com/




___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] auth.brick.ip.allow restrictions

2007-04-30 Thread Sebastien LELIEVRE
Hello everyone

I'm posting this here because I need some advice from you.

In all the examples presented on the website, and in the user configs posted
here, a wide range of IP addresses is always allowed (something like
option auth.ip.brick.allow 192.168.28.*).

My question is: how do I configure the servers so that they accept just
a list of IP addresses?

I've tried this config :

volume server
  type protocol/server
  option transport-type tcp/server
interfaces
  subvolumes brick
  option auth.ip.brick.allow 192.168.28.6
  option auth.ip.brick.allow 192.168.28.7
end-volume

and client 192.168.28.6 wasn't able to connect.
The error message on the client side is: Transport endpoint is not connected

and on the server side, the log gives us this:

[Apr 30 17:16:57] [DEBUG/tcp-server.c:134/tcp_server_notify()]
tcp/server:Registering socket (5) for new transport object of 192.168.28.6
[Apr 30 17:17:08] [DEBUG/proto-srv.c:2418/mop_setvolume()]
server-protocol:mop_setvolume: received port = 1022
[Apr 30 17:17:08] [DEBUG/proto-srv.c:2434/mop_setvolume()]
server-protocol:mop_setvolume: IP addr = 192.168.28.7, received ip addr
= 192.168.28.6
[Apr 30 17:17:08] [ERROR/common-utils.c:55/full_rw()]
libglusterfs:full_rw: 0 bytes r/w instead of 113 (errno=17)
[Apr 30 17:17:08]
[DEBUG/protocol.c:244/gf_block_unserialize_transport()]
libglusterfs/protocol:gf_block_unserialize_transport: full_read of
header failed
[Apr 30 17:17:08] [DEBUG/proto-srv.c:2868/proto_srv_cleanup()]
protocol/server:cleaned up xl_private of 0x8050998
[Apr 30 17:17:08] [CRITICAL/tcp.c:82/tcp_disconnect()]
transport/tcp:closing socket: 5 priv-connected = 1
[Apr 30 17:17:08] [DEBUG/tcp-server.c:229/gf_transport_fini()]
tcp/server:destroying transport object for 192.168.28.6:1022 (fd=5)
[Apr 30 17:17:19] [DEBUG/tcp-server.c:134/tcp_server_notify()]
tcp/server:Registering socket (5) for new transport object of 192.168.28.6
[Apr 30 17:17:28] [DEBUG/proto-srv.c:2418/mop_setvolume()]
server-protocol:mop_setvolume: received port = 1021
[Apr 30 17:17:28] [DEBUG/proto-srv.c:2434/mop_setvolume()]
server-protocol:mop_setvolume: IP addr = 192.168.28.7, received ip addr
= 192.168.28.6
[Apr 30 17:17:28] [ERROR/common-utils.c:55/full_rw()]
libglusterfs:full_rw: 0 bytes r/w instead of 113 (errno=9)
[Apr 30 17:17:28]
[DEBUG/protocol.c:244/gf_block_unserialize_transport()]
libglusterfs/protocol:gf_block_unserialize_transport: full_read of
header failed
[Apr 30 17:17:28] [DEBUG/proto-srv.c:2868/proto_srv_cleanup()]
protocol/server:cleaned up xl_private of 0x804b1e8
[Apr 30 17:17:28] [CRITICAL/tcp.c:82/tcp_disconnect()]
transport/tcp:closing socket: 5 priv-connected = 1
[Apr 30 17:17:28] [DEBUG/tcp-server.c:229/gf_transport_fini()]
tcp/server:destroying transport object for 192.168.28.6:1021 (fd=5)

On the other hand, client 192.168.28.7 connects successfully.

Both server and client versions are 1.3.0-pre3 from the latest tla.

Regards,

Enkahel

Sebastien LELIEVRE
[EMAIL PROTECTED]   Services to ISP
TBS-internet   http://www.TBS-internet.com:


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] auth.brick.ip.allow restrictions

2007-04-30 Thread Sebastien LELIEVRE
Majied Najjar wrote:
 Hi,
 
 You can create a list of IP addresses which is comma delimited.  For 
 example:
 
 option auth.ip.brick.allow 192.168.28.6,192.168.28.7
 

I've tried this and it didn't seem to work. I will try again and look at
the logs.
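
For reference, the corrected server section for this setup would then be
something like the following sketch (combining the spec from my first mail
with the comma-delimited list):

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.ip.brick.allow 192.168.28.6,192.168.28.7
end-volume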

Cheers !

 Hope that helps.
 
 Majied
 
 
 On Mon, 30 Apr 2007 17:38:07 +0200
 Sebastien LELIEVRE [EMAIL PROTECTED] wrote:
 
 Hello everyone

 I'm posting this here because I need some advice from you.

 In all the examples presented on the website, and in the user configs posted
 here, a wide range of IP addresses is always allowed (something like
 option auth.ip.brick.allow 192.168.28.*).

 My question is: how do I configure the servers so that they accept just
 a list of IP addresses?

 I've tried this config :

 volume server
   type protocol/server
   option transport-type tcp/server
 interfaces
   subvolumes brick
   option auth.ip.brick.allow 192.168.28.6
   option auth.ip.brick.allow 192.168.28.7
 end-volume

 and client 192.168.28.6 wasn't able to connect.
 The error message on the client side is: Transport endpoint is not connected

 and on the server side, the log gives us this:

 [Apr 30 17:16:57] [DEBUG/tcp-server.c:134/tcp_server_notify()]
 tcp/server:Registering socket (5) for new transport object of 192.168.28.6
 [Apr 30 17:17:08] [DEBUG/proto-srv.c:2418/mop_setvolume()]
 server-protocol:mop_setvolume: received port = 1022
 [Apr 30 17:17:08] [DEBUG/proto-srv.c:2434/mop_setvolume()]
 server-protocol:mop_setvolume: IP addr = 192.168.28.7, received ip addr
 = 192.168.28.6
 [Apr 30 17:17:08] [ERROR/common-utils.c:55/full_rw()]
 libglusterfs:full_rw: 0 bytes r/w instead of 113 (errno=17)
 [Apr 30 17:17:08]
 [DEBUG/protocol.c:244/gf_block_unserialize_transport()]
 libglusterfs/protocol:gf_block_unserialize_transport: full_read of
 header failed
 [Apr 30 17:17:08] [DEBUG/proto-srv.c:2868/proto_srv_cleanup()]
 protocol/server:cleaned up xl_private of 0x8050998
 [Apr 30 17:17:08] [CRITICAL/tcp.c:82/tcp_disconnect()]
 transport/tcp:closing socket: 5 priv-connected = 1
 [Apr 30 17:17:08] [DEBUG/tcp-server.c:229/gf_transport_fini()]
 tcp/server:destroying transport object for 192.168.28.6:1022 (fd=5)
 [Apr 30 17:17:19] [DEBUG/tcp-server.c:134/tcp_server_notify()]
 tcp/server:Registering socket (5) for new transport object of 192.168.28.6
 [Apr 30 17:17:28] [DEBUG/proto-srv.c:2418/mop_setvolume()]
 server-protocol:mop_setvolume: received port = 1021
 [Apr 30 17:17:28] [DEBUG/proto-srv.c:2434/mop_setvolume()]
 server-protocol:mop_setvolume: IP addr = 192.168.28.7, received ip addr
 = 192.168.28.6
 [Apr 30 17:17:28] [ERROR/common-utils.c:55/full_rw()]
 libglusterfs:full_rw: 0 bytes r/w instead of 113 (errno=9)
 [Apr 30 17:17:28]
 [DEBUG/protocol.c:244/gf_block_unserialize_transport()]
 libglusterfs/protocol:gf_block_unserialize_transport: full_read of
 header failed
 [Apr 30 17:17:28] [DEBUG/proto-srv.c:2868/proto_srv_cleanup()]
 protocol/server:cleaned up xl_private of 0x804b1e8
 [Apr 30 17:17:28] [CRITICAL/tcp.c:82/tcp_disconnect()]
 transport/tcp:closing socket: 5 priv-connected = 1
 [Apr 30 17:17:28] [DEBUG/tcp-server.c:229/gf_transport_fini()]
 tcp/server:destroying transport object for 192.168.28.6:1021 (fd=5)

 On the other hand, client 192.168.28.7 is successfully connected.

 Both server and client versions are 1.3.0-pre3 from the latest tla.

 Regards,

 Enkahel

 Sebastien LELIEVRE
 [EMAIL PROTECTED]   Services to ISP
 TBS-internet   http://www.TBS-internet.com:


 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel
 
 
 ___
 Gluster-devel mailing list
 Gluster-devel@nongnu.org
 http://lists.nongnu.org/mailman/listinfo/gluster-devel
 



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Recovery

2007-04-26 Thread Sebastien LELIEVRE
. Will there not be a
problem?
For the time being, I'm confident in saying that any file created by
client2 will not be seen by client1, because it will not be replicated.
I'm aware that client2 will see every file client1 has ever accessed twice.

And finally, here is a question I asked on IRC; I will try to expand on it:

In AFR mode, let's say that there is a client on every brick. Will the
AFR translator make the clients write locally and then replicate, or
will there be only one write node which replicates to the others?

The replication is parallel: it writes to all bricks at the same time.
Remember, then, the write performance I pasted at the beginning. This would
mean that a client writing something to the volume will see its writes slow
down even if one of the bricks is on the same machine as the client. Am I
correct?

You asked to shoot... ;)

More to come, I'm afraid

Enkahel

Sebastien LELIEVRE
[EMAIL PROTECTED]   Services to ISP
TBS-internet   http://www.TBS-internet.com/


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel