Update of bug #17282 (project gluster):
Status: None => Fixed
Open/Closed: Open => Closed
___
Reply to this item at:
Update of bug #17289 (project gluster):
Status: Fixed => In Progress
___
Reply to this item at:
http://savannah.nongnu.org/bugs/?func=detailitem&item_id=17289
Update of bug #17312 (project gluster):
Priority: 5 - Normal => 3 - Low
___
Reply to this item at:
http://savannah.nongnu.org/bugs/?func=detailitem&item_id=17312
Update of bug #17285 (project gluster):
Open/Closed: Open => Closed
___
Reply to this item at:
http://savannah.nongnu.org/bugs/?func=detailitem&item_id=17285
Update of bug #17324 (project gluster):
Status: In Progress => Fixed
Open/Closed: Open => Closed
___
Reply to this item at:
Update of bug #17330 (project gluster):
Status: None => Fixed
Open/Closed: Open => Closed
___
Reply to this item at:
Update of bug #17290 (project gluster):
Status: None => In Progress
___
Follow-up Comment #2:
A major leak was found in brick.c and fixed. One more leak still remains.
URL:
http://savannah.nongnu.org/bugs/?func=detailitem&item_id=17406
Summary: glusterfsd conf.l fails to build with flex 2.5.31
Project: Gluster
Submitted by: vikasgp
Submitted on: Friday 08/11/2006 at 23:42
Category:
Follow-up Comment #1, bug #17353 (project gluster):
``dircache'' renamed to ``location hints'', module committed.
___
Reply to this item at:
http://savannah.nongnu.org/bugs/?func=detailitem&item_id=17353
Update of bug #17301 (project gluster):
Status: None => Invalid
Open/Closed: Open => Closed
___
Reply to this item at:
Update of bug #17308 (project gluster):
Open/Closed: Open => Closed
___
Reply to this item at:
http://savannah.nongnu.org/bugs/?17308
___
Krishna Srinivas wrote:
Hi All,
Here are the gcov output files:
http://freeshell.in/~krishna/gcov/
You might find lcov (http://ltp.sourceforge.net/coverage/lcov.php)
useful too. It generates pretty HTML from the coverage data.
Vikas
___
Pooya Woodcock wrote:
Sorry, I spoke too soon; I just ran test.php and it hangs again using
patch 122. I thought I was doing well, since apache stopped and started
cleanly from config files on the gluster volume.
session_create();
The hanging was due to a bug in the posix xlator, which has
On Wed, May 09, 2007 at 09:08:58AM +0200, François Thiebolt wrote:
Dear,
After searching through the wiki as well as zresearch.com ... I'm still unable
to find the 'GlusterFS architecture' docs.
They don't exist yet :)
I'm working on a GlusterFS User Guide as well as a GlusterFS Hackers
Guide,
On Wed, Jun 13, 2007 at 09:56:50AM +0200, Steffen Grunewald wrote:
BTW, is it just me feeling that GlusterFS is approaching ZFS in some way?
For small-ish storage needs (what can be reasonably achieved by disks
attached to the same physical machine), I guess GlusterFS provides
much of the
On Thu, Jun 28, 2007 at 02:00:43PM +0200, Steffen Grunewald wrote:
Me again :-(
While experimenting with additional translators like read-ahead and
write-behind,
I found that each time I want to activate/deactivate a translator layer I have
to change the reference in the upper layer of the
Hi,
I've written a small document outlining the procedure to report a bug.
Hopefully this will be a useful pointer for new users, who won't have
to ask the same questions on the mailing list.
http://gluster.org/docs/index.php/Howto_report_a_bug
Vikas
On Wed, Jul 11, 2007 at 09:53:58PM -0300, Daniel van Ham Colchete wrote:
People,
what's the current design of locks in GlusterFS? I couldn't find the answer
looking the sources.
Being more specific: how do cluster/unify and cluster/afr handle flock()
and fcntl byte-ranged advisory
Forgot to CC: list.
-- Forwarded message --
From: Vikas Gorur [EMAIL PROTECTED]
Date: 05-Aug-2007 12:19
Subject: Re: [Gluster-devel] posix-locks problem
To: Kevan Benson [EMAIL PROTECTED]
On 04/08/07, Kevan Benson [EMAIL PROTECTED] wrote:
I'm having a problem getting locking
On 01/11/2007, Harris Landgarten [EMAIL PROTECTED] wrote:
BTW,
Is there any doc on booster?
Here is some info about booster:
The booster translator gives applications a faster path to communicate
read and write requests to GlusterFS. Normally, all requests to GlusterFS from
Jeff:
Sorry for the inordinate delay in responding to this issue. Vacation
time for everyone :)
The MySQL+GlusterFS issue appears to have been fixed. The problem was
FUSE was sending us the Thread ID of the thread holding the lock,
which confused posix-locks when locking/unlocking was done from
On 16/01/2008, Angel [EMAIL PROTECTED] wrote:
Hey!
I'm currently testing my beloved tla636 rpms that I built myself.
I've just discovered that a client-only setup works.
No server needed, just plain local dirs, and you still keep all the fancy
features.
That's the beauty of translators :)
Vikas
--
On Sat, 19 Jan 2008 18:03:45 +0530, Angel [EMAIL PROTECTED] wrote:
Hi all
What IDE are you using for development of GlusterFS? KDevelop, Eclipse,
Turbo C++ 2.0 :-P
All of us use the one and only Emacs :)
PS: Are you going to expand documentation about internals, on the next
release?? Hackers
On 17/01/2008, Jeff Humes [EMAIL PROTECTED] wrote:
I finally tested this install:
I could not find it on the link listed in the previous email, so I
retrieved it from:
http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.7.2glfs8.tar.gz
I upgraded fuse on the client, rebooted: now
Excerpts from Rodney McDuff's message of Fri Feb 29 10:14:06 +0530 2008:
I see on the Glusterfs wiki FAQ that Glusterfs has been tested on Mac OS
X. Can I get a bit more info on this, ie which gluster revision was
used, which OS version, config flags etc.
The Mac OS X port is not yet
Excerpts from billy cokalopolous's message of Sat Mar 01 02:16:24 +0530 2008:
Thanks for the response Hannes. The db will be read only.
Still looking for a response as to why cpu is so high when computations are
performed on the fs. Even when single node where traffic is sent back via
Excerpts from LI Daobing's message of Tue Mar 04 15:19:29 +0530 2008:
Hello,
there is a check in pl_open_cbk, it's if (op_ret == 0); I think
it should be if (op_ret >= 0). What do you think:
We don't pass the fd as the return value in open_cbk, that goes in the
fd_t. storage/posix returns
Excerpts from Martin Fick's message of Thu Mar 06 03:14:08 +0530 2008:
Is it supposed to be possible to run multiple
independent glusterfsd daemons on the same host but
listening to different ports or ips? I tried it with
different ports and the second instance of the daemon
refused to start
Excerpts from Luke Schierer's message of Fri Mar 28 20:16:00 +0530 2008:
I still get the same failure. I am wondering, given that others have
succeeded, whether there might be something else in my environment that is
causing this error. Any ideas?
gcc -fPIC -D_FILE_OFFSET_BITS=64
Excerpts from Daniel Maher's message of Wed Apr 09 16:40:20 +0530 2008:
Hello all,
After upgrading to 1.3.8pre5, I performed a simple failover test of my
two-node HA Gluster cluster (wherein one of the nodes is unplugged from
the network). Unfortunately, the results were - once again -
Excerpts from Brent A Nelson's message of Wed Jul 30 00:05:13 +0530 2008:
I did a few tla replay --reverse operations and found that patch level 258
works fine (except for previously reported fchmod and acl issues). replay
to 259, and it breaks as below. The posix cleanup patch breaks in my
2008/11/11 [EMAIL PROTECTED] [EMAIL PROTECTED]:
Hi Guys,
I am currently having an issue with my namespace directory having files
with a filesize of 0, even though in the export directories they have the
correct filesizes.
This is the normal behavior. The namespace directory exists only to
2008/11/14 Onyx [EMAIL PROTECTED]:
Hi,
I just had a problem with a disk failure. It was an ext3 partition, and
suddenly, because of a problem, it automatically remounted as read-only.
It was not in a glusterfs setup yet, but it got me thinking...
What would have happened if that partition was in
2008/11/14 Onyx [EMAIL PROTECTED]:
Ok, nice!
But wouldn't that have a huge impact on the write performance?
How about Unify and DHT? Will those setups survive a write attempt to a
read-only volume?
Unify and DHT would both return an error (Read-only filesystem) in your
situation.
Vikas
--
2008/11/21 Mickey Mazarick [EMAIL PROTECTED]:
This only occurs on about 10% of the files on an afr/unify over ibverbs
mount
I'm seeing the following:
# cat /scripts/setupMachine.sh
cat: /scripts/setupMachine.sh: Input/output error
glusterfs log says:
2008-11-20 20:03:12 W
Hi everyone,
The posix-locks translator shall henceforth be known as the locks
translator, since its responsibilities have widened to include locks
other than those of the POSIX variety.
{applause}
(There will still be a posix-locks.so symlink for backward
compatibility for a while).
Vikas
--
Hi everyone,
There is a file doc/afr.pdf in the TLA repository and the 1.4.0rc
releases which documents AFR for the 1.4 series. The design has
changed a bit from 1.3 and new options have been introduced. Please
refer to it and give us your comments/suggestions.
Vikas
--
Engineer - Z Research
2008/12/18 Keith Freedman freed...@freeformit.com:
I've unpacked the rc4 distribution from
http://ftp.gluster.com/pub/gluster/glusterfs/1.4/ , there is no afr.pdf in
the doc folder
Thanks for spotting. I had forgotten to include it in the distribution
tarball. I've made the necessary changes and
2008/12/18 Martin Fick mogul...@yahoo.com:
On Thu, 12/18/08, Martin Fick wrote:
I have some questions about things that were perhaps implied in the
document, but not really discussed. It sounds like the 5 step write
process lays the foundations for transaction based writes and
efficient self
Here are the _general_ guidelines for using performance translators:
* write-behind: Almost always helps on the client side, because it
decreases write latency for applications. Using it is a good idea.
* io-threads: Almost always helps on the server side. We recommend
using it just below the
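Applied to a client-side volume spec, the guidelines above might look like the following sketch (volume names, the remote host, and the option value are placeholders in standard volfile syntax, not tuned recommendations):

```
# client volfile sketch: write-behind stacked on top of the transport
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1        # placeholder hostname
  option remote-subvolume brick
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB       # illustrative value
  subvolumes remote
end-volume
```

Because translators are stacked bottom-up, writes reach write-behind first and are acknowledged to the application before being flushed down to the remote brick.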
Corin,
I think you are referring to STACK_WIND/UNWIND when you say
stack-based design. The STACK_WIND/UNWIND macros in effect implement
a continuation-like mechanism in C.
The original reason we introduced this was to make file operations
asynchronous, in that we didn't have to wait for the
2009/2/5 Mickey Mazarick m...@digitaltadpole.com:
I haven't done any full regression testing to see where the problem is, but
the later TLA versions are causing our storage servers to spike to 100% cpu
usage and the clients never see any files. Our initial tests are with
ibverbs/HA but no
Gordon,
(Replying to both your mails)
Implementing multicast would be difficult since multicasting only
works for UDP, and GlusterFS relies on a TCP connection. Moving GlusterFS
to use UDP would require extensive changes and would bring many new problems
of its own.
Regarding the configuration
2009/2/9 Gordan Bobic gor...@bobich.net:
My gluster root daemon seems to have grown to 280MB, with no performance
translators. I'm hazarding a guess that this could be down to its log
filesystem having been (re)moved. Is there a way to disable the logging
completely using an fstab parameter?
2009/2/25 Filipe Maia fil...@xray.bmc.uu.se:
Hi,
I would just like to point out to
http://www.ntfs-3g.org/pjd-fstest.html which includes a very nice
posix test suite. Setting the filesystem to ntfs-3g in the
configuration file should make glusterfs pass all tests (at least
using the stable
2009/3/12 Matthias Luft ml...@informatik.uni-mannheim.de:
Hi,
I get the following compile error when trying to compile a fresh git
checkout on FreeBSD 8.0:
Thanks for pointing this out. We are working on fixing this.
Vikas
--
Engineer - Z Research
http://gluster.com/
2009/3/19 Gordan Bobic gor...@bobich.net:
That's unavoidable to some extent, since the first server is the one that
is authoritative for locking. That means that all reads have to make a hit
on the 1st server, even if the data then gets retrieved from another server
in the cluster. Whether
2009/3/19 Gordan Bobic gor...@bobich.net:
On Thu, 19 Mar 2009 16:14:18 +0530, Vikas Gorur vi...@zresearch.com
wrote:
2009/3/19 Gordan Bobic gor...@bobich.net:
How does this affect adding new servers into an existing cluster?
Adding a new server will work --- as and when files are accessed
2009/3/19 nicolas prochazka prochazka.nico...@gmail.com:
hello,
Can somebody help me with attrs:
in the docs I can read:
The extended attributes can be seen in the backend filesystem using
the getfattr command. (getfattr -n trusted.afr.version file)
If i do that :
getfattr -n
2009/3/23 nicolas prochazka prochazka.nico...@gmail.com:
Hello,
when both channels in the client protocol are disconnected, fds are
marked as bad in the git tree; this
seems to be more problematic than before.
Before :
Two server , client open a file - Server 1 down - client ok -
Server 1
2009/4/2 nicolas prochazka prochazka.nico...@gmail.com:
Hello,
in afr.c I can read the
afr_first_up_child function, which is always called; but if I understand,
there's no load balancing in afr, just the first child that is up is taken.
Can I just implement a Round Robin function to alternate between
2009/4/3 nicolas prochazka prochazka.nico...@gmail.com:
hello
the latest git version does not correct the bug related to afr self-healing.
- Now if I create a file in backend server 1 (first defined in
subvolumes), the size on the gluster mount point is ok (the file is correct),
and the file is correct on server 1
2009/4/3 Geoff Kassel gkas...@users.sourceforge.net:
Hi,
I'd also like to put my vote in for a public specification document
containing the details of how AFR handles various failure and recovery
scenarios.
I'm working on updating the design document.
Vikas
--
Engineer - Z Research
2009/4/8 nicolas prochazka prochazka.nico...@gmail.com:
hello,
with last git version :
Thanks for pointing this out. This has been fixed in the repository.
Vikas
--
Engineer - Z Research
http://gluster.com/
___
Gluster-devel mailing list
2009/4/21 Gordan Bobic gor...@bobich.net:
Firefox and Thunderbird still have problems when home directories are
running on (AFR/Replicate). For whatever reason, sometimes they don't clean
up properly on exit (.parentlock file remains), so they often don't want to
start up again until this is
Gordan Bobic wrote:
Doesn't this completely kill the concept of self-heal? Surely, the primary
way to detect that a file needs healing is to check its mtime?
We don't rely on the mtime to do self-heal. Replicate writes a sort of
journal entry on each write (in the extended attributes) and
- Steve stev...@gmx.net wrote:
However... the problem with the crash is still there if using NUFA.
Maybe I just messed up and tried too many different (and obscure)
combinations, because NUFA never failed on me in the past?
If the crash is reproducible, can you run GlusterFS under
Steve,
I just noticed that the version you are using is one month old. A
fix for a crash has gone in since then and I think it might have fixed
this crash. Could you try with the latest repository version and see
if the crash is still there?
Vikas
--
Engineer - Z Research
http://gluster.com/
- Steve stev...@gmx.net wrote:
Is this error known? Maybe fixed in 2.0.2?
As mentioned in the other thread, this might be related to an
issue that has already been fixed. Please try with the latest repository
code and see if it still crashes.
Vikas
--
Engineer - Z Research
- Gordan Bobic gor...@bobich.net wrote:
Interestingly, this problem doesn't arise with Firefox 3.0.11. It does
arise with versions up to and including 3.0.10, and _only_ when /home is
on GlusterFS, for all versions of GlusterFS up to and including 2.0.2.
But not with Firefox 3.0.11.
- nicolas prochazka prochazka.nico...@gmail.com wrote:
Hello guys,
I'm trying some tests with glusterfs 2.0.2 and postgresql, and I obtain:
org.postgresql.util.PSQLException: ERROR: could not seek to end of
segment 0 of relation 1663/16384/1259: Input/output error
Configuration of
- Vidar Hokstad vi...@hokstad.com wrote:
What is your client configuration? Do you have a way to reproduce this?
Vikas
--
Engineer - http://gluster.com/
A: Because it messes up the way people read text.
Q: Why is top-posting such a bad thing?
--
- Vidar Hokstad vi...@hokstad.com wrote:
See below for what I'm currently running. I'm testing without readahead
at the moment - I haven't seen it fail with it commented out so far, but
it's too early to tell.
Before I added the trace translator the error rate was much higher. After
- Ate Poorthuis atepoorth...@gmail.com wrote:
Hi,
Gluster clients running on PPC on Mac OS X 10.5 are not able to connect
to the server. The exact same configuration on an OS X Intel machine works
perfectly. I have the following errors in my logs.
It would be very helpful if you can
- Gordan Bobic gor...@bobich.net wrote:
Gordan,
I'd like to clarify the situation with the mount --bind issue you're facing.
When GlusterFS is started, it does two things in the following order:
1) daemonize itself (by calling daemon(3)).
2) Initialize the translator graph (and thus FUSE
- Gordan Bobic gor...@bobich.net wrote:
Does the client periodically re-scan for presence of servers that dropped
out? Or does the client have to be restarted to notice that a server has
returned?
The client periodically tries to reconnect. There should be no
need for a client restart.
- Anton anton.va...@gmail.com wrote:
Hello friends,
I have successfully installed Gluster 2.0.4 in the
configuration of 3 nodes with 2x bricks per node with
replication and distribution. While adding an extra brick,
which had to become a replicated copy of another brick I
have
Mark,
We're working on this and progress can be tracked at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=250
Vikas
--
Engineer - http://gluster.com/
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
- Mark Mielke m...@mark.mielke.cc wrote:
Hi Anand:
Understood that it is unstable - do you want us to help in testing it,
though?
Testing is very valuable and welcome, of course :) The statement was simply
meant as a caveat.
Vikas
--
Engineer - http://gluster.com/
Corentin Chary wrote:
Hi,
I'm trying to use glusterfs with afr.
My setup has 2 servers and 2 clients. / is mounted with user_xattr.
It seems that if you shut down a server, remove a directory with one or
more children, then restart the server, the changes won't be replicated
because rmdir is not
Roland,
Thanks for reporting your experience. We are looking into this.
I've logged this as a bug and you can track the progress at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=457
Vikas
___
Gluster-devel mailing list
Roland Fischer wrote:
Hi,
I think the algorithm doesn't work yet. I read the pdf -
http://ftp.gluster.com/pub/gluster/glusterfs/3.0/LATEST/GlusterFS-3.0.0-Release-Notes.pdf
and this article is very interesting:
The “Diff” algorithm is especially beneficial for situations such as
running
Gordan Bobic wrote:
Hi,
I'm noticing something that could be considered weird. I have a 3-node
rootfs-on-gluster cluster, and I'm doing a big yum update on one of
the nodes. On that node and one of the other nodes, glusterfsd is
using about 10-15% CPU. But on the 3rd node, glusterfs is using
Gordan Bobic wrote:
That is plausible since I am using single-process client and server.
Is there a way to tell on a running glfs cluster which node is the
current lock master? The process creating the load was running on the
first listed volume, so I would have expected this to be the
Gordan Bobic wrote:
I thought there were always at least 2 lock servers, no? And they
rotate around depending on which servers are up (when the current lock
server dies, it fails over to the next one). Is that not the case?
Also if the servers leave and re-join, does the lock server
Alex Attarian wrote:
I'll give that a try Gordan! Thanks for the info!
My other question is does gluster perform a read for the extended
attributes on all servers regardless read-subvolume?
This is what my tcpdump shows:
[snip]
Does it have to read the trusted.afr.* attributes from every
Alex Attarian wrote:
Thanks Vikas, but how does it know that it's the first access for the
file? Also what happens when somehow a file got deleted or corrupted
on one of the servers? At what point will gluster replicate that file
again?
Before any operation can be performed on a file by an
Fredrik Widlund wrote:
Can you reproduce this behavior on your end?
We've been able to reproduce this, and are looking into it.
Vikas
--
Engineer - Gluster, Inc.
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
On Mar 25, 2010, at 12:25 PM, Ian Rogers wrote:
1. We wish Vikas would be given more time to finish
http://www.gluster.com/community/documentation/index.php/Internals_of_Replicate
:-)
I'll take that as a hint and speed up work on that :)
--
Vikas Gorur