You're not the only one with this issue. I've been working with the developers to resolve this very same issue. We're dealing with ~10k JavaScript files, each under 2 KB, that we're trying to distribute for caching, and we've hit the same problems.

They've provided me with a patch which helped significantly, but the performance was still weak. As I understand it, they are continuing to work on the issue, and as of this morning they told me a potential fix should be released soon.

I do have one additional suggestion left to try, but so far I haven't been able to test anything beyond the initial patch they provided.

Benjamin Krein
www.superk.org

On Jun 18, 2009, at 11:29 AM, Martin Reissner wrote:

Hello Stephan,

I did a quick test with the same fileset, only replacing distribute with replicate on the two servers. This was with the GlusterFS-patched FUSE, though. The filesystem cache was flushed on all boxes between the tests.

write: 127s
read: 106s

Compared with the distribute results, replicate seems to perform worse.
Here are the distribute results on the exact same setup again:

write: 90s
read:  60s

Martin


Stephan von Krawczynski wrote:
On Thu, 18 Jun 2009 15:02:34 +0200
Martin Reissner <[email protected]> wrote:

[...]
NFS:
write: 74s
read:  36s

GlusterFS 1 Server:
write: 332s
read:   59s

GlusterFS 2 Servers with Distribute:
write: 331s
read:   60s

Can you produce the same test for replicate, too?
A really interesting setup for people who want to get rid of NFS...

In theory, the minimum time (as bounded by an FE network) should be below 23s for read and write, or maybe 46s for the replicate write case. I know you have GBit Ethernet, but your disks won't cope with that anyway, so one would be content with a factor of 2 in real life. Nevertheless, your test really shows that NFS is not that bad.

A local-disk FUSE fs would be an interesting comparison, too.
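(The bound Stephan alludes to can be sketched as follows. This is a hedged back-of-the-envelope calculation, not from the thread itself: it assumes "FE" means Fast Ethernet at 100 Mbit/s, and that the test dataset is roughly 280 MB, which is the size implied by his ~23s figure. Replicate must write each byte to both servers over the client's link, hence the doubling for the write case.)

```python
# Back-of-the-envelope transfer-time lower bound (assumptions noted above:
# dataset size and the Fast Ethernet reading of "FE" are NOT stated in the thread).
DATASET_MB = 280            # assumed total data size
FE_MBIT_PER_S = 100         # Fast Ethernet line rate

fe_mb_per_s = FE_MBIT_PER_S / 8          # 12.5 MB/s of payload at best
t_single = DATASET_MB / fe_mb_per_s      # one full copy over the wire
t_replicate_write = 2 * t_single         # replicate pushes the data to both servers

print(f"single transfer: {t_single:.1f}s")          # ~22.4s, i.e. "below 23s"
print(f"replicate write: {t_replicate_write:.1f}s") # ~44.8s, i.e. "maybe 46s"
```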


_______________________________________________
Gluster-users mailing list
[email protected]
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users








