On Friday, March 16, 2012 at 2:24 PM, Andrey Stepachev wrote:
> 2012/3/16 Noah Watkins <[email protected]>:
> > 
> > On Mar 16, 2012, at 8:37 AM, Sage Weil wrote:
> > 
> > > Hi Andrey,
> > > 
> > > On Fri, 16 Mar 2012, Andrey Stepachev wrote:
> > > 
> > > possible). I take it TestDFSIO is a standard hadoop benchmark?
> > 
> > Yes, it is. There are a number of benchmarks that ship with Hadoop.
> > Although this is untested, one reason you might be seeing throughput issues 
> > is with the standard read/write interface that copies bytes across the JNI 
> > interface. On the short list of stuff for the next Java wrapper set is to 
> > use the ByteBuffer interface (NIO) to avoid this copying.
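The copy-avoidance Noah describes can be sketched roughly as follows. This is a hypothetical illustration, not the actual Ceph Java bindings: `readWithCopy` and `readDirect` are made-up names, and `System.arraycopy`/`ByteBuffer.put` stand in for what native JNI code would do (`GetByteArrayRegion` forcing a copy into the Java heap versus writing in place at the address from `GetDirectBufferAddress`).

```java
import java.nio.ByteBuffer;

// Hypothetical sketch contrasting the two read paths across JNI.
public class ReadStyles {
    // byte[] path: the native side must copy bytes into the Java heap
    // array (e.g. via GetByteArrayRegion) -- one extra copy per read.
    static int readWithCopy(byte[] src, byte[] dst) {
        int n = Math.min(src.length, dst.length);
        System.arraycopy(src, 0, dst, 0, n); // stands in for the JNI copy
        return n;
    }

    // Direct-buffer path: native code can obtain the buffer's off-heap
    // address (GetDirectBufferAddress) and write in place, no extra copy.
    static int readDirect(byte[] src, ByteBuffer dst) {
        int n = Math.min(src.length, dst.remaining());
        dst.put(src, 0, n); // stands in for the native in-place write
        return n;
    }

    public static void main(String[] args) {
        byte[] data = "hello ceph".getBytes();
        ByteBuffer direct = ByteBuffer.allocateDirect(64);
        int n = readDirect(data, direct);
        direct.flip();
        System.out.println(n + " " + direct.isDirect()); // 10 true
    }
}
```

For large sequential reads like TestDFSIO's, eliminating that per-read copy is the whole point of moving the wrapper to the NIO ByteBuffer interface.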
> 
> I'm not sure the problem is on the Java side. All disks are loaded at
> 100%, so I think the problem is clearly in the OSD part. But I want to
> test your new integration and see if anything changes. You may be
> right, but I'm not sure.


Those are some awfully slow disks. I don't know exactly what this test 
measures, but if you're write-constrained on the HDFS side then Ceph will 
definitely be slower due to little things like the journaling that it does. And 
that is a data safety issue where Ceph is paying much higher costs than HDFS 
does.
But that doesn't mean Ceph is necessarily slower on good hardware. :)
-Greg


--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html