Hi Brock-

Do you mean 7 cmds (MDSes) or cmons (monitors)?  Multiple cmons help 
with durability, but that's it--they're only used for cluster 
configuration state (and I wouldn't really recommend more than 3 unless 
you have a really huge cluster).  And I wouldn't run more than one cmds 
until you see it saturating your CPU.
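
For reference, a layout along those lines would look something like this 
in an ini-style ceph.conf (the section names, hostnames, and addresses 
below are illustrative placeholders, not from your setup, and the exact 
config format may differ between versions):

```ini
; hypothetical 3-monitor, 1-MDS, 1-OSD layout (sketch only)
[mon.a]
    mon addr = 192.168.0.10:6789
[mon.b]
    mon addr = 192.168.0.11:6789
[mon.c]
    mon addr = 192.168.0.12:6789
[mds.a]
    host = node0
[osd.0]
    host = node1
```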

In general, metadata writes/updates will be slower than a local fs, 
similar in performance to NFS.  Most reads will be faster due to the 
coherent cache and metadata prefetching.
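
If you want a quick number for per-operation metadata latency outside of 
fileop, a minimal sketch is just timing a loop of stat() calls on each 
mount (the paths in the comment are placeholders for your local fs and 
Ceph mount):

```python
import os
import tempfile
import time

def stat_latency_us(path, iterations=10000):
    """Average latency of os.stat() on `path`, in microseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        os.stat(path)
    return (time.perf_counter() - start) / iterations * 1e6

# Compare a file on the local fs against one on the Ceph mount, e.g.:
#   stat_latency_us("/tmp/testfile")  vs  stat_latency_us("/mnt/ceph/testfile")
if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        print("%.2f us/stat" % stat_latency_us(f.name))
```

Note that repeated stats of the same path mostly exercise the cache, so 
for cold-cache numbers you'd want to stat many distinct files instead.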

I did see a slowdown when btrfs is used on the OSDs, which I haven't 
looked into yet...

sage


On Mon, 11 Aug 2008, Erwin, Brock A wrote:

> I am running fileop, a program within iozone that tests file operation
> performance.  I also set ceph up with 7 monitors and 1 osd, to see if I
> could get a fileop performance increase.  However, it seems that
> operations like stat, access, open, create, mkdir, etc. are much slower
> than when I run the test on the local file system.  I noticed each of
> the monitors had about 30% cpu utilization or so.  Where is the
> bottleneck in this case?  Can this performance be increased?
> 
> -Brock 
> 
> P.S. I compiled ceph with O2 optimizations and disabled all debugging
> output.
> 
> -------------------------------------------------------------------------
> This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
> Build the coolest Linux based applications with Moblin SDK & win great prizes
> Grand prize is a trip for two to an Open Source event anywhere in the world
> http://moblin-contest.org/redirect.php?banner_id=100&url=/
> _______________________________________________
> Ceph-devel mailing list
> Ceph-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ceph-devel
> 
> 
