Pooya,
This bug is fixed in patch 295. Can you please confirm whether some operations
were performed while one (or more) of the servers was down, to reproduce this bug?
thanks,
avati
2007/7/11, Pooya Woodcock [EMAIL PROTECTED]:
This is on patch 293 in 2.5 mainline.
-Pooya
On Jul 10, 2007, at 7:40 PM, Amar
Hi Avati,
Confirmed fixed. Thanks, running great so far.
Pooya
On Jul 10, 2007, at 11:50 PM, Anand Avati wrote:
Pooya,
This bug is fixed in patch 295. Can you please confirm whether some
operations were performed while one (or more) of the servers was down, to
reproduce this bug?
thanks,
avati
DeeDee,
Can you please mail us the server/client specs, as well as what you were
doing when you encountered the segfault?
--
Gowda (benki)
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
If all the bricks are not up at the time of the gluster client startup,
I get the above error message. If all bricks are up, things are fine.
If a brick goes down after a client is up, things are also fine -- the problem
occurs only at startup.
I'm still seeing this in the latest patch-299.
Also on another node, I got this segfault. This one came first; then I
got the other one on node4 that I emailed about earlier.
[EMAIL PROTECTED] ~]# gdb glusterfs -c /core.13358
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by
On 7/11/07, DeeDee Park [EMAIL PROTECTED] wrote:
If all the bricks are not up at the time of the gluster client startup,
I get the above error message. If all bricks are up, things are fine.
If a brick goes down after a client is up, things are also fine -- the problem
occurs only at startup.
I'm still seeing
I have 6 bricks totalling 2.3T of space right now in my test setup. (6GB,
40GB, 250GB, 500GB, 750GB, 750GB)
I run the 'df' command and it currently shows 967M of space available at
the client.
It used to show the correct amount a while back. How can I trace this
so I can find out how much each
Does/will glusterfs work with the new fuse 2.7.0? I haven't tried it on
a glusterfs test box yet, but I like the new fixes in fuse that I use
for other things.
Regards,
Dale
If you are serving up, let's say, IMAP directories and websites from
GlusterFS, is it more beneficial to have many smaller 1U servers with
big drives and perhaps a replica count of 3, or 3 large storage boxes?
Does GlusterFS benefit from this approach at all?
Also, what are the considerations for
client config attached.
Script started on Wed 11 Jul 2007 04:22:18 PM PDT
[EMAIL PROTECTED]:/tmp# gdb -c core3 glusterfs
GNU gdb 6.4.90-debian
Copyright (C) 2006 Free Software Foundation,
DeeDee,
I'm not a Gluster developer, but I think I can help.
First, it is easier if you send your volume files :-).
When you mount a Gluster brick using the glusterfs command, you can use
the '-n' option to mount just a 'protocol/client' brick by its name and see
how everything is at the
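As a rough illustration of the kind of 'protocol/client' volume the '-n' option can mount by name, a minimal client spec might look like this (the volume name, server address, and remote subvolume name are assumptions for illustration, not taken from the thread):

```
# hypothetical single-brick client volume (names/addresses assumed)
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.10
  option remote-subvolume brick1
end-volume
```

With a spec like this saved as client.vol, something along the lines of 'glusterfs -f client.vol -n client1 /mnt/test' would mount only that one brick, which makes it easier to check each server individually.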
DeeDee,
I read your spec file from another e-mail, and I think I know the answer.
Please, any developer, correct me if I'm wrong.
According to
http://lists.gnu.org/archive/html/gluster-devel/2007-03/msg00106.html, the
AFR translator will return the minimum size among all its volumes when
On 7/11/07, DeeDee Park [EMAIL PROTECTED] wrote:
Thanks, that has been very helpful. I'll look into the -n option to check out
each brick.
I worked with a developer before, and they said my config was all good when
I was having problems. They probably have like 3 copies of my configs.
assume it is
Dale Dude writes:
Does/will glusterfs work with the new fuse 2.7.0? I haven't tried it on
a glusterfs test box yet, but I like the new fixes in fuse that I use
for other things
Compiled and ran a few tests. Fuse 2.7.0 works fine without any
modifications. Extensive community testing will help.
On Wed, Jul 11, 2007 at 09:53:58PM -0300, Daniel van Ham Colchete wrote:
People,
what's the current design of locks in GlusterFS? I couldn't find the answer
looking at the sources.
Being more specific: how do cluster/unify and cluster/afr handle flock()
and fcntl byte-range advisory
DeeDee,
Thanks for reporting the bug, but actually this config has afr enabled. A few
doubts after seeing your config file:
Did you write this config just to test afr/unify, or did you have something
in mind while writing it? There are two things I can notice.
* afr has 5 child nodes, but it creates all
Hi DeeDee,
Can you confirm this bug is from patches after 292 (patch-292,
mainline--2.5)? The code where you got the segfault is removed from
that version (removed as in '#if 0'). (Or are you sure you did a 'make
install'?)
This should not be happening in any patches after that