On Fri, 31 Oct 2008, Jerker Nyberg wrote:
> Hi all,
> 
> I ran a simple write throughput benchmark for Ceph which might be of 
> interest.
> 
> http://www.update.uu.se/~jerker/ceph/
> 
> Adding one more drive and running two cosd servers on each node does not
> seem to improve performance.

This may be a problem with the switch, or a bottleneck at the host's 
network interface.  We've found that most inexpensive gigabit switches 
scale very poorly once you start using lots of ports at once.
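
For reference, a quick way to sanity-check write throughput is a sketch 
like the one below (plain Python; the mount point, block size, and total 
size are only examples, not taken from your setup).  Running it from one 
node, and then from all nodes at once, should show whether the aggregate 
flattens out at the switch or NIC limit rather than at the OSDs:

    import os
    import time

    MOUNT = "/mnt/ceph"          # assumed Ceph client mount point
    BLOCK = 4 * 1024 * 1024      # 4 MB per write
    COUNT = 256                  # 256 * 4 MB = 1 GB total

    path = os.path.join(MOUNT, "throughput-test.dat")
    buf = b"\0" * BLOCK

    start = time.time()
    f = open(path, "wb")
    for i in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())         # make sure the data really reaches the OSDs
    f.close()
    elapsed = time.time() - start

    mb = BLOCK * COUNT / (1024.0 * 1024.0)
    print("wrote %.0f MB in %.1f s -> %.1f MB/s" % (mb, elapsed, mb / elapsed))

If the per-node numbers stay flat as you add nodes, the network is a more 
likely culprit than the extra cosd per node.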

> However, Ceph seems to crash when I write
> to it configured this way from all the nodes.  Should this way of adding
> more drives and cosd servers work, or have I stumbled into some other bug?
> 
> Do you want more debug information regarding this? Follow link above for 
> specifics about the setup.

It clearly shouldn't crash.  We're doing a lot of testing of the OSD 
cluster right now and have been working these bugs out.  If the problem is 
reproducible, the OSD debug output when cosd is started with '--debug_osd 
20 --debug_ms 1' would help us out!

Thanks-
sage


> 
> Regards,
> Jerker Nyberg.
> 
