We use GlusterFS on top of ZFS (zfsforlinux.com) with de-duplication and
compression.
There are a few caveats with regard to ensuring you have your memory
settings configured just right, but otherwise it's a pain-free experience.
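For reference, the memory tuning on a setup like that usually comes down to capping the ARC, since the dedup tables live in ARC and an undersized cap hurts badly. A minimal sketch of the module option; the 4 GiB value is purely illustrative, not a recommendation:

```
# /etc/modprobe.d/zfs.conf -- zfs_arc_max is in bytes; size it to your RAM
# (example value only: 4 GiB)
options zfs zfs_arc_max=4294967296
```

The setting takes effect on module load, so it needs a reboot or a module reload to apply.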
On Mon, Apr 8, 2013 at 6:21 PM, Justin Clift jcl...@redhat.com
We have had great luck installing GlusterFS on CentOS 4.5 like this:
get the io-patched FUSE module from
http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.7.0-glfs8.tar.gz
get glusterfs itself from
.
The more time I spend in the code, the more I appreciate the design. Keep up
the good work!
:august
On Jan 20, 2008 7:17 PM, Anand Avati [EMAIL PROTECTED] wrote:
August,
2008/1/20, August R. Wohlt [EMAIL PROTECTED]:
Hi glusterfs hackers,
I am trying to write a simple xlator similar to the fixed-id one that comes
with the source code.
The fixed-id xlator passes all calls through to the underlying volume and
then mangles the uid and gid of the stat structures on the way back to the
client, so what you end up
Hi,
I have a setup with two servers mirroring each others' files using
cluster/afr:2. The two servers are on a high-latency connection, so I would
like to ensure that the local server is given preference for I/O over the
distant server. I do not need aggregate performance, only file replication.
So, if the cluster/afr volume is in the server specification instead of the
client specification, and the 1st server in the afr array goes offline, the
whole afr volume is unavailable for reads?
I ask, because I noticed that if I had afr in server.vol it would hang when
the other one went offline
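For what it's worth, the usual way to keep things working when one server drops is to put the afr on the client side, so each client holds its own connection to both bricks and can keep reading from the surviving one. A sketch in the 1.3-era spec syntax; the volume names, addresses, and remote subvolume name are assumptions:

```
# client.vol -- afr done client-side (assumed names/addresses)
volume server1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1      # assumed address
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2      # assumed address
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/afr
  subvolumes server1 server2
end-volume
```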
strace / gdb output you'd like to see, just let me know. I'd really like to
use glusterfs in an HA setup, but I don't see how with this behavior.
Thanks in advance!!
:august
On 9/7/07, August R. Wohlt [EMAIL PROTECTED] wrote:
Hi all -
I have a setup based on this:
http://www.gluster.org -options.txt
but this file does not appear in my glusterfs-1.3.1 tarball.
In any case, for those who have similar issues: making the transport
timeout much smaller is your friend :-)
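Concretely, that option goes in each protocol/client volume in the spec file; the address and the 10-second value here are just examples:

```
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2      # assumed address
  option remote-subvolume brick
  option transport-timeout 10         # seconds; example value
end-volume
```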
Many Thanks!!
:august
On 9/10/07, August R. Wohlt [EMAIL PROTECTED] wrote:
Hi devs et al,
After many hours of sublimation, I
Hi folks,
I notice that a few people have posted spec files with self-heal in
cluster/afr as well as in cluster/unify. I can't seem to demonstrate
this working, however. Is there a way to get healing functionality
with straight afr?
Take a simple 2 machine afr mirror for example (spec file
That does the trick. And now I recall reading that in the archives as well.
I notice that the only explanation of self-heal in the wiki docs is in the
unify translator page: (
http://www.gluster.org/docs/index.php/Understanding_Unify_Translator).
Any suggestions for a good place to start an
Hi Guido -
It looks like one of their nameservers is corrupt:
Name Server: NS3.DR-PARKINGSERVICES.COM
Name Server: NS1.ZRESEARCH.COM
Name Server: NS2.ZRESEARCH.COM
so depending on which nameserver you get when you try to fetch the website,
you'll either get the site or the spam parking page. Looks
Howdy devs -
I have a pre6 server compiled with fuse-2.7.0-glfs2 on CentOS. The server
has posix locks and io-threads on top of ext3. There are two pre6 clients,
they both have write-behind enabled. The server has been crashing for me all
day today with this stack trace. Let me know if you want
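A server spec matching that description (posix-locks and io-threads stacked over an ext3-backed posix brick, exported by protocol/server) would look roughly like this in the 1.3-era syntax; the names, the export path, and the auth line are assumptions:

```
volume brick
  type storage/posix
  option directory /data/export       # assumed path (on the ext3 mount)
end-volume

volume locks
  type features/posix-locks
  subvolumes brick
end-volume

volume iothreads
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.iothreads.allow *    # assumed auth setting
  subvolumes iothreads
end-volume
```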
while the client was trying to reconnect? (This can produce such a log.)
Apart from that, the client logs seem normal.
avati
2007/7/30, August R. Wohlt [EMAIL PROTECTED]:
Hi -
So still the same problem with pre6 hanging regularly with or without
gdb:
To summarize:
After
bricks, and I get the same
thing. Also, I have an ssh connection between the two servers open all the
time which never drops, so I'm ruling out network problems across the
switch.
Help!
:august
On 7/26/07, August R. Wohlt [EMAIL PROTECTED] wrote:
Hi avati,
When I run it without gdb, it still has
-client.c:178:tcp_connect] brick_2: connection on 7 still in progress - try later
2007-07-30 03:08:15 W [client-protocol.c:344:client_protocol_xfer] brick_2:
not connected at the moment to submit frame type(0) op(34)
thanks,
:august
On 7/30/07, August R. Wohlt [EMAIL PROTECTED] wrote:
Hi -
So still
have attached is
NOT a crash; you just had to 'c' (continue) at the gdb prompt. And most likely,
this is what has given the 'hung' effect as well.
Is this reproducible for you?
thanks,
avati
2007/7/26, August R. Wohlt [EMAIL PROTECTED]:
Hi all -
I have client and server set up with the pre6 version
I noticed something very similar this morning with a similar setup. For me
it only shows up on hard-linked files that I do not have permissions to
view:
[EMAIL PROTECTED] ls -ail /backups/20070726/root/ | head -10
total 0
??- ? ? ? ? ? .
??- ? ? ? ? ? ..
Hello,
I downloaded pre6 today and compiled it. glusterfsd starts up successfully,
but if I connect to the socket then disconnect, it segfaults. It does this
every time. The server never segfaulted with pre5 on the same configuration,
though my clients did at random times after heavy load inside
confirm.
thanks,
avati
2007/7/24, August R. Wohlt [EMAIL PROTECTED]:
Hello,
I downloaded pre6 today and compiled it. glusterfsd starts up successfully,
but if I connect to the socket then disconnect, it segfaults. It does this
every time. The server never segfaulted with pre5 on the same
Most excellent, thanks.
On 7/24/07, Amar S. Tumballi [EMAIL PROTECTED] wrote:
Hi August,
Hope this link helps you.
http://www.gluster.org/docs/index.php/GlusterFS_Building_RPMs
-amar
On 7/24/07, August R. Wohlt [EMAIL PROTECTED] wrote:
Hi avati -
Indeed, I had the pre5_3 rpms