Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Sudipto Mukhopadhyay
Makes sense. Yes, I am running the same program. I will be running a couple more tests to verify this. BTW, two more questions on related topics: 1. How much of a performance boost does booster provide? 2. Does the following

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Shehjar Tikoo
Sudipto Mukhopadhyay wrote: Makes sense. Yes, I am running the same program. I will be running a couple more tests to verify this. BTW, two more questions on related topics: 1. How much of a performance boost does booster provide? That it provides a performance boost is evident from

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Sudipto Mukhopadhyay
This could be tricky, as you don't want to look up too many alternatives! But, since you are doing LD_PRELOAD, can you not ask the application to specify the paths? (I know it's going to be a little error-prone, depending on what the application supplies.) For example: /mnt/glusterfs If the application run
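This is not booster's actual implementation, just a minimal sketch of the LD_PRELOAD mechanism being discussed: an interposed open() that checks an application-supplied prefix before deciding whether to treat a path specially. The BOOST_PREFIX environment variable and the log message are hypothetical, introduced only for this illustration.

/* preload_open.c -- hedged illustration of LD_PRELOAD path filtering.
 * Build: gcc -shared -fPIC preload_open.c -o preload_open.so -ldl
 * Run:   LD_PRELOAD=./preload_open.so BOOST_PREFIX=/mnt/glusterfs ./app
 * (BOOST_PREFIX is a made-up variable for this sketch, not a booster option.)
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef int (*open_fn)(const char *, int, ...);

int open(const char *path, int flags, ...)
{
    static open_fn real_open = NULL;
    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    /* Only paths under the application-supplied prefix would be handled
     * specially (e.g. routed through the boosted client); everything else
     * falls straight through to the real open(). */
    const char *prefix = getenv("BOOST_PREFIX");
    if (prefix && strncmp(path, prefix, strlen(prefix)) == 0)
        fprintf(stderr, "[preload] intercepted: %s\n", path);

    return real_open(path, flags, mode);
}

The point of the prefix check is the trade-off raised above: the interposer only has to look up one caller-specified location instead of probing many alternatives, at the cost of trusting whatever path the application supplies.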

Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Shehjar Tikoo
Sudipto Mukhopadhyay wrote: This could be tricky, as you don't want to look up too many alternatives! But, since you are doing LD_PRELOAD, can you not ask the application to specify the paths? (I know it's going to be a little error-prone, depending on what the application supplies.) For example:

[Gluster-users] GlusterFS Failing to write Maya/Renderman Studio Pointcloud files

2009-07-21 Thread Todd Daugherty
I have a 4-node cluster in test production and this is quite the problem. Client: Linux Fedora 10/11, FUSE 2.7.4, Gluster 2.0.3. Server: Gentoo 2.6.27-gentoo-r8, Gluster 2.0.3. When mounted natively, the filesystem does not complete the writing of Point Cloud files. When mounted via CIFS (glusterfs

[Gluster-users] DHT and AFR question

2009-07-21 Thread Matt M
Hi, I'm setting up gluster to share /usr/local among 24 compute nodes. The basic goal is to be able to change files in /usr/local in one place, and have it replicate out to all the other nodes. What I'd like to avoid is having a single point of failure where one (or several) nodes go down
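For the replication goal, a client-side volfile in the GlusterFS 2.0.x style can stack cluster/replicate (AFR) over protocol/client subvolumes, so /usr/local keeps working when a server goes down. This is only a two-server sketch under assumed names (node1, node2, local-brick are placeholders, not the poster's setup); the server-side volfiles and any DHT layering on top are not shown.

# client.vol -- illustrative sketch only; hostnames and volume names
# (node1, node2, local-brick) are placeholders, not a tested config.
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host node1
  option remote-subvolume local-brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host node2
  option remote-subvolume local-brick
end-volume

# cluster/replicate (AFR) keeps a full copy on each subvolume,
# so the mount stays usable if one of the two servers is down.
volume usr-local-replica
  type cluster/replicate
  subvolumes remote1 remote2
end-volume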