Jon,
Stripe should be used only when the data consists of very few files, each of
them very large (many GBs in size).
Everything else can use distribute.
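To make the contrast concrete, here is a minimal legacy-style volfile sketch of the two translators Tejas contrasts; the brick names and the block size are illustrative assumptions, not taken from the thread:

```
# Few, very large files: each file is chunked across the bricks
volume striped
  type cluster/stripe
  option block-size 1MB
  subvolumes brick1 brick2
end-volume

# General-purpose: each whole file lands on exactly one brick
volume distributed
  type cluster/distribute
  subvolumes brick1 brick2
end-volume
```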
Regards,
Tejas.
- Original Message -
From: Jon Tegner teg...@foi.se
To: Joshua Baker-LePain jl...@duke.edu
Cc: gluster-users
Hello, we're now using glusterfs 3.0.3, and we have over 100 nodes in
the gfs cluster. We have encountered one problem: we plan to sync some files
to gfs every 5 minutes, but sometimes we find that a few files cannot be
transferred to gfs correctly, so we want to locate the server on which the
Steinmetz, Ian wrote:
I've turned on debug logging for the server and client of GlusterFS and
appended the logs below. I've replaced the IP addresses with x.x.x and
left the last octet for security. It appears I'm able to lock the file
when I run the program directly on the gluster mount point,
On 05/03/2010 09:50 PM, Joshua Baker-LePain wrote:
For purpose 1, clearly I'm looking at a replicated volume. For purpose
2, I'm assuming that distributed is the way to go (rather than striped),
although for reliability reasons I'd likely go replicated then
distributed. For storage bricks, I'm
I don't recall the numbers, but when I did that, reads were about as fast as
regular local reads. Also, if the post below refers to an older version of
glusterfs, then things might have changed a lot since then. The quick-read
translator combined with a 3.x version was supposed to help a lot for
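For reference, a sketch of how the quick-read translator is typically layered on top of a client-side volume; the option values and the subvolume name are assumptions for illustration:

```
volume quickread
  type performance/quick-read
  option cache-timeout 1       # seconds a cached file remains valid
  option max-file-size 64KB    # only files up to this size are cached
  subvolumes distribute-0
end-volume
```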
Anyone have any idea on how to upgrade from 2.0.2 to 2.0.8-9? When I do it, the
namespace (metadata?) does not seem to be read... for example, when I upgrade,
a lot of files are missing. If I downgrade, it works. Copying the data to
another box and re-copying it back is not really an option.
Hi,
I am evaluating GlusterFS for the first time, and I have a problem with
a pretty simple setup:
I have 1 server and 9 clients with Debian Lenny, glusterfs installed
from backports (2.0.9-3~bpo50+1);
GlusterFS is used to export /home directory to clients.
What I see is that, randomly, people
That is basically correct. This is a production server, and so there
is a small amount of fear in running that script...
Thanks.
- Todd Pfaff pf...@rhpcs.mcmaster.ca wrote:
I already suggested that to Robert in response to his first post about
this problem. Understandably though, since that script is not well
documented anywhere, he'd rather not try it until one of the gluster
developers speaks up and tells him it's completely safe and not a complete
waste of time.
I am running glusterfs 3.0.4 on a pair of machines in a replicated native setup.
Ubuntu 8.04.4, Linux ubuntu 2.6.24-27-server #1 SMP x86_64
Two client machines running the same OS and software are running
apache and sharing their cookie repository over glusterfs.
I am experiencing cookie
We are running our webapp on a gluster mount. We are finding that performance
is a lot slower than on local disk. We expected it to be slower, but not this much
slower. So I am looking to you for some guidance on what to do, i.e. not run
off the gluster mount, or change config settings, etc.
Yesterday I was trying to make that work; I had to use the
Gluster custom version of unfsd to get Xen working on a re-exported
mount point. My lab is not the best for benchmarking, but I did copy
some virtual disks from local storage to NFS, and that had bad
performance.
2010/5/4 Paulo
Hi Jenn,
You may not have seen the posts, but small files do not, as a general rule, do
well on parallel file systems. There are numerous posts on this list
concerning this subject, and the Gluster developers have devoted a good
bit of energy to trying to address it, but
Not sure if this is helpful, but just in case:
http://evolvingweb.ca/story/drupal-cloud-deploying-rackspace-nginx-and-boost
I also found this reference, which doesn't involve Drupal:
http://nowlab.cse.ohio-state.edu/publications/conf-papers/2008/noronha-icpp08.pdf
Supposedly some of the
On May 4, 2010, at 10:31 AM, Jenn Fountain wrote:
#volume writebehind
#  type performance/write-behind
#  option cache-size 4MB
#  subvolumes mirror-0
#end-volume
volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes mirror-0
#subvolumes
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hm... DRBD is:
a) not scalable at all
b) not really a parallel filesystem
IBM GPFS perhaps, but the GPL Linux kernel module versions have quite
limited functionality compared to their closed modules. Alternatively, an
unsupported Sun Lustre setup with
Shehjar,
Good catch on the typo; I fixed it and restarted. Still had the same
issue. I also was able to get Samba working, and file locking seems to
work fine; however, with Samba the file owner/group and permissions aren't
transferred to clients. This at least points to an issue with how
I have some machines over WAN, in a Replicate cluster, with around 50ms ~ 60ms
between them.
However, I have read-subvolume specified so that it will always use the local
brick instead of the WAN.
I just need this in order to replicate files as easily and as quickly as
possible between various
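For anyone searching for the same setup, a minimal replicate-volume sketch with reads pinned to the LAN-side brick via read-subvolume; the brick names are assumptions for illustration:

```
volume mirror-0
  type cluster/replicate
  subvolumes local-brick wan-brick
  # Serve reads from the local brick; writes still go to every subvolume
  option read-subvolume local-brick
end-volume
```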
Hi,
On Wed, May 5, 2010 at 3:14 AM, Hassan Jafri hj2...@caa.columbia.edu wrote:
The latest patched fuse available at
http://ftp.gluster.com/pub/gluster/glusterfs/fuse/ does not work with the
2.6.30 kernel. Am I then right to assume that the stock fuse packaged with
the distro kernel is good
Hi,
For kernels above 2.6.18 you need not install the custom patched fuse. You
should be able to compile and use glusterfs without any problems.
Regards,
Sachidananda.
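The cutoff Sachidananda mentions can be checked mechanically. A small sketch follows; the 2.6.18 threshold comes from his reply, but the script itself is just an illustration, not a Gluster tool:

```shell
#!/bin/sh
# Succeed (exit 0) when the supplied kernel version is newer than 2.6.18,
# i.e. the in-kernel fuse module suffices and the patched fuse is unneeded.
kernel_newer_than_2_6_18() {
    kver=$1
    # sort -V orders version strings numerically; if 2.6.18 sorts first
    # and the two strings differ, the supplied kernel is the newer one.
    lowest=$(printf '%s\n2.6.18\n' "$kver" | sort -V | head -n1)
    [ "$lowest" = "2.6.18" ] && [ "$kver" != "2.6.18" ]
}

if kernel_newer_than_2_6_18 "$(uname -r | cut -d- -f1)"; then
    echo "stock in-kernel fuse is sufficient"
else
    echo "install the patched fuse from ftp.gluster.com"
fi
```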