Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-12 Thread Gordan Bobic
Liam Slusser wrote: Even with manually fixing (adding or removing) the extended attributes, I was never able to get Gluster to see the missing files. So I ended up writing a quick program that searched the raw bricks' filesystem and then checked to make sure the file existed in the Gluster

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-12 Thread Liam Slusser
On Tue, May 12, 2009 at 12:07 AM, Gordan Bobic gor...@bobich.net wrote: Liam Slusser wrote: Even with manually fixing (adding or removing) the extended attributes, I was never able to get Gluster to see the missing files. So I ended up writing a quick program that searched the raw bricks

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-12 Thread Gordan Bobic
On Tue, 12 May 2009 00:53:06 -0700, Liam Slusser lslus...@gmail.com wrote: Even with manually fixing (adding or removing) the extended attributes, I was never able to get Gluster to see the missing files. So I ended up writing a quick program that searched the raw bricks' filesystem and then

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-12 Thread Alexey Filin
Hi, I'm not sure, but where is the namespace volume in the spec files? http://gluster.org/docs/index.php/User_Guide#Namespace A namespace volume is needed because: - persistent inode numbers; - a file exists even when a node is down. Cheers, Alexey. On Tue, May 12, 2009 at 3:29 AM, Liam Slusser
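
For reference, a namespace volume in a 1.3-style unify spec file looked roughly like the sketch below; the volume names and backend directory here are illustrative, not taken from this thread:

  volume brick-ns
    type storage/posix
    option directory /export/namespace
  end-volume

  volume unify0
    type cluster/unify
    option namespace brick-ns
    option scheduler rr
    subvolumes brick1 brick2
  end-volume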

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-12 Thread Liam Slusser
I was under the impression that the namespace volume is not needed with 2.0 replication. liam On Tue, May 12, 2009 at 12:54 PM, Alexey Filin alexey.fi...@gmail.com wrote: Hi, I'm not sure, but where is the namespace volume in the spec files? http://gluster.org/docs/index.php/User_Guide#Namespace

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-12 Thread Gordan Bobic
Liam Slusser wrote: I was under the impression that the namespace volume is not needed with 2.0 replication. It isn't required for AFR. Gordan

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-11 Thread Liam Slusser
Even with manually fixing (adding or removing) the extended attributes, I was never able to get Gluster to see the missing files. So I ended up writing a quick program that searched the raw bricks' filesystem and then checked to make sure the file existed in the Gluster cluster, and if it didn't it
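
A minimal sketch of the kind of check described above - walk a raw brick and test whether each file is visible through the client mount. The brick and mount paths are assumptions, and the repair action is left as a stub:

  #!/bin/sh
  # Walk the raw brick backend; for every regular file, verify that
  # the same relative path is visible on the Gluster client mount.
  BRICK=/data/brick   # raw backend filesystem (assumed path)
  MOUNT=/gtank        # Gluster client mount (assumed path)

  cd "$BRICK" || exit 1
  find . -type f | while read -r f; do
      if [ ! -e "$MOUNT/$f" ]; then
          echo "missing from Gluster view: $f"
          # repair step goes here, e.g. copy the file back in via the mount
      fi
  done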

replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-06 Thread Liam Slusser
Big thanks to the devel group for fixing all the memory leak issues with the earlier RC releases. 2.0.0 has been great so far, without any memory issues whatsoever. I am seeing some oddities with the replication/distribute translators, however. I have three partitions on each gluster server

Re: replicate/distribute oddities in 2.0.0 Was Re: [Gluster-devel] rc8

2009-05-06 Thread Liam Slusser
To answer some of my own questions: it looks like those files were copied using gluster 1.3.12, which is why they have the different extended attributes. gluster 1.3.12:
Attribute glusterfs.createtime has a 10 byte value for file
Attribute glusterfs.version has a 1 byte value for file
Attribute
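
For anyone inspecting attributes the same way, a sketch using the stock attr tools; the brick path is a placeholder, and whether these names live under the trusted.* namespace (requiring root) is an assumption:

  # dump all extended attributes of a file on the raw brick
  getfattr -d -m . -e hex /data/brick/some/file

  # drop a stale 1.3-era attribute (name assumed from the listing above)
  setfattr -x trusted.glusterfs.createtime /data/brick/some/file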

Re: [Gluster-devel] rc8

2009-04-23 Thread ender
Newest git: WORKS!! This is the order of commands:
git log -1
commit 99351618cd15da15ee875f143154ea7f8e28eaf4
Author: krishna kris...@gluster.com
Date: Wed Apr 22 23:43:58 2009 -0700
all nodes: umount /gtank
killall glusterfsd ; killall glusterfs
rm -rf /usr/local/lib/gluster*
rm
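
The tail of ender's command list is truncated above; for context, a clean rebuild from a git checkout at the time looked roughly like this, assuming the stock autotools build (paths and volfile names follow the ones quoted in this thread):

  # on every node, after stopping the daemons and removing the old install:
  cd glusterfs            # git checkout (path assumed)
  git pull
  ./autogen.sh && ./configure && make && make install

  # restart the server and remount the client
  glusterfsd -f /usr/local/etc/glusterfs/glusterfsd.vol
  mount -t glusterfs /usr/local/etc/glusterfs/glusterfs.vol /gtank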

Re: [Gluster-devel] rc8

2009-04-23 Thread Gordan Bobic
Great to hear! :) When can we expect rc9 with these fixes? Gordan ender wrote: Newest git: WORKS!! This is the order of commands: git log -1 commit 99351618cd15da15ee875f143154ea7f8e28eaf4 Author: krishna kris...@gluster.com Date: Wed Apr 22 23:43:58 2009 -0700 all nodes: umount /gtank

Re: [Gluster-devel] rc8

2009-04-23 Thread Brent A Nelson
On Fri, 24 Apr 2009, Anand Avati wrote: On Fri, Apr 24, 2009 at 12:05 AM, Brent A Nelson br...@phys.ufl.edu wrote: Ditto (and the fix for the first-access glitch, where you need to ls the mount before working with it). ;-) Hmm, a fix has been in the code base for a few weeks at least, so
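
The workaround Brent refers to is a single command; assuming /gtank is the client mount, force one lookup on the fresh mount before doing real work:

  # workaround for the first-access glitch: touch the mount once
  ls /gtank > /dev/null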

Re: [Gluster-devel] rc8

2009-04-23 Thread Gordan Bobic
On 23/04/2009 20:05, Brent A Nelson wrote: On Fri, 24 Apr 2009, Anand Avati wrote: On Fri, Apr 24, 2009 at 12:05 AM, Brent A Nelson br...@phys.ufl.edu wrote: Ditto (and the fix for the first-access glitch, where you need to ls the mount before working with it). ;-) Hmm, a fix has been in

Re: [Gluster-devel] rc8

2009-04-22 Thread ender
I was just wondering if the self-heal bug is planned to be fixed, or if the developers are just ignoring it in hopes it will go away. Every time I ask someone privately if they can reproduce the problem on their own end, they go silent (which leads me to believe that they in fact can

Re: [Gluster-devel] rc8

2009-04-22 Thread Anand Avati
Ender, There was a bug fix which went into git today that fixes a similar bug: a case where only a subset of the files would be recreated, when there are a lot of files (~1000 or more) and the node which was down was the first subvolume in the list. Please pull the latest patches and see if it

Re: [Gluster-devel] rc8

2009-04-22 Thread Anand Avati
Liam, An fd leak and a lock structure leak have been fixed in the git repository, which explains the leak in the first subvolume's server. Please pull the latest patches and let us know if it does not fix your issues. Thanks! Avati On Tue, Apr 21, 2009 at 3:41 PM, Liam Slusser lslus...@gmail.com

Re: [Gluster-devel] rc8

2009-04-22 Thread ender
Closer, but still no cigar...
all nodes: killall glusterfsd; killall glusterfs
all nodes: rm -rf /tank/*
all nodes: glusterfsd -f /usr/local/etc/glusterfs/glusterfsd.vol
all nodes: mount -t glusterfs /usr/local/etc/glusterfs/glusterfs.vol /gtank
node3:~# cp -R gluster /gtank/gluster1
*simulating

Re: [Gluster-devel] rc8

2009-04-22 Thread Anand Avati
Closer, but still no cigar...
*Adding hardware back into the network after replacing bad hard drive(s)
node1:~# glusterfsd -f /usr/local/etc/glusterfs/glusterfsd.vol
node1:~# mount -t glusterfs /usr/local/etc/glusterfs/glusterfs.vol /gtank
node3:~# ls -lR /gtank/ | wc -l
1802
node3:~# ls -lR

Re: [Gluster-devel] rc8

2009-04-22 Thread Serial Thrilla
We should also check the OS and architecture. I never got a response as to what system, OS, and architecture the developers test on. Could you please let me know? Thanks. Anand Avati wrote: Closer, but still no cigar... *Adding hardware back into the network after replacing bad hard drive(s)

Re: [Gluster-devel] rc8

2009-04-22 Thread Anand Avati
We should also check the OS and architecture. I never got a response as to what system, OS, and architecture the developers test on. Could you please let me know?
OS - CentOS 5
Server (x 8) - dual-proc quad-core Xeon, 8GB RAM, 16TB SAS RAID
Client (x 16) - quad-core Xeon, 2GB RAM
Gig/E and

Re: [Gluster-devel] rc8

2009-04-22 Thread Serial Thrilla
Thanks. I'm assuming that would be x86-64 architecture. I think Ender is running x86-32 based on a previous posting. Perhaps this is somehow involved. I only bring this up because I think that's the root cause of another bug that's in the process of being resolved. Anand Avati wrote: We

Re: [Gluster-devel] rc8

2009-04-22 Thread Liam Slusser
Avati, Big thanks. Looks like that did the trick. I'll report back in the morning if anything has changed, but it's looking MUCH better. Thanks again! liam On Wed, Apr 22, 2009 at 2:32 PM, Anand Avati av...@gluster.com wrote: Liam, An fd leak and a lock structure leak have been fixed in the

Re: [Gluster-devel] rc8

2009-04-22 Thread Anand Avati
Ender, Please try the latest git. We did find an issue with subdirs getting skipped while syncing. Avati On Thu, Apr 23, 2009 at 3:24 AM, ender en...@enderzone.com wrote: Closer, but still no cigar.. all nodes: killall glusterfsd; killall glusterfs; all nodes: rm -rf /tank/* all nodes:

Re: [Gluster-devel] rc8

2009-04-21 Thread Gordan Bobic
Firefox and Thunderbird still have problems when home directories are running on AFR/Replicate. For whatever reason, sometimes they don't clean up properly on exit (a .parentlock file remains), so they often don't want to start up again until this is cleared manually. Firefox's implementation
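
A sketch of the manual clearing step, assuming a default Mozilla profile layout under the user's home directory; run it only while Firefox/Thunderbird are not actually running:

  # remove stale profile lock files left behind after an unclean exit
  find ~/.mozilla ~/.thunderbird -name .parentlock -delete 2>/dev/null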

Re: [Gluster-devel] rc8

2009-04-21 Thread Geoff Kassel
Hi all, Has anyone seen signs of the data corruption or self-heal bugs for this release candidate yet? It'd be really good if this release candidate finally puts the nails in the coffins of those worrisome bugs. (Still no regression tests that I can see, though...) Cheers, Geoff Kassel.

Re: [Gluster-devel] rc8

2009-04-21 Thread Gordan Bobic
On Tue, 21 Apr 2009 17:33:06 +1000, Geoff Kassel gkas...@users.sourceforge.net wrote: Has anyone seen signs of the data corruption or self-heal bugs for this release candidate yet? It'd be really good if this release candidate finally puts the nails in the coffins of those worrisome bugs.

Re: [Gluster-devel] rc8

2009-04-21 Thread Liam Slusser
There is still a memory leak with rc8 on my setup. The first server in a cluster of two servers starts out using 18M and just slowly increases. After 30 minutes it has doubled in size to over 30M and just keeps growing - the more memory it uses, the worse the performance. Funny that the second
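
A minimal sketch of how to track that growth; glusterfsd as the process name comes from this thread, and the one-minute interval is arbitrary:

  # log the resident set size of every glusterfsd process once a minute
  while true; do
      date
      ps -C glusterfsd -o pid=,rss=,cmd=
      sleep 60
  done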

Re: Fwd: [Gluster-devel] rc8

2009-04-21 Thread Gordan Bobic
On Tue, 21 Apr 2009 00:46:20 -0700, Liam Slusser lslus...@gmail.com wrote: First off, I apologize to both of you for emailing you directly. I don't have access to post to the gluster-devel mailing list, but I am still having memory leak issues with the rc8 release. I've tried emailing the

Re: [Gluster-devel] rc8

2009-04-21 Thread Vikas Gorur
2009/4/21 Gordan Bobic gor...@bobich.net: Firefox and Thunderbird still have problems when home directories are running on AFR/Replicate. For whatever reason, sometimes they don't clean up properly on exit (a .parentlock file remains), so they often don't want to start up again until this is

[Gluster-devel] rc8

2009-04-20 Thread Gordan Bobic
First-access failing bug still seems to be present. But other than that, it seems to be distinctly better than rc4. :) Good work! :) Gordan

Re: [Gluster-devel] rc8

2009-04-20 Thread Gordan Bobic
Gordan Bobic wrote: First-access failing bug still seems to be present. But other than that, it seems to be distinctly better than rc4. :) Good work! :) And that massive memory leak is gone, too! The process hasn't grown by a KB after a kernel compile! :D s/Good work/Awesome work/ :)