Yeah :-) ... We are also waiting for wip-rocksdb to merge into mainline.

Thanks & Regards
Somnath

-----Original Message-----
From: Mark Nelson [mailto:[email protected]] 
Sent: Thursday, June 05, 2014 11:33 AM
To: Somnath Roy; Samuel Just; [email protected]
Subject: Re: Mon backing store

Hi Somnath,

Sorry to get your hopes up, no ceph+rocksdb benchmarks (yet!).  I was referring 
to the benchmarks that the rocksdb developers published here:

https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks

Sounds like we need to get some performance testing on wip-rocksdb going, 
though! :)

Mark

On 06/05/2014 01:26 PM, Somnath Roy wrote:
> Mark,
> Could you please share the performance benchmark results for RocksDBStore + 
> ceph and LevelDBStore + ceph, as you mentioned below?
> BTW, have you measured the write amplification (WA) induced by RocksDBStore 
> and LevelDBStore in the process, since that is also a very important factor 
> when the backend is flash?
>
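Write amplification itself is just a ratio. A minimal sketch of the arithmetic, assuming you can read both the logical bytes the store was asked to write and the physical bytes the device reported (the counter names here are placeholders, not ceph or rocksdb tooling):

```python
# Hypothetical sketch: estimating write amplification (WA) for a KV backend.
# "logical_bytes" is what the application asked to write; "device_bytes" is
# what actually hit the flash (e.g. read from /proc/diskstats or SMART
# counters). Neither name comes from ceph tooling -- both are assumptions.

def write_amplification(logical_bytes: int, device_bytes: int) -> float:
    """WA = physical bytes written / logical bytes written."""
    if logical_bytes <= 0:
        raise ValueError("logical_bytes must be positive")
    return device_bytes / logical_bytes

# Example: the app writes 1 GiB, but compaction rewrites data until the
# device has seen 3.5 GiB of writes -> WA of 3.5.
print(write_amplification(1 * 2**30, int(3.5 * 2**30)))  # -> 3.5
```

On flash, WA matters twice: it eats write bandwidth and it burns endurance, which is why a compaction-heavy store can look fine on throughput yet wear the device several times faster than the workload suggests.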
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Mark Nelson
> Sent: Thursday, June 05, 2014 11:18 AM
> To: Samuel Just; [email protected]
> Subject: Re: Mon backing store
>
> On 06/05/2014 12:42 PM, Samuel Just wrote:
>> I am starting to wonder whether using leveldb for the mon is actually 
>> introducing an excessive amount of unnecessary complexity and 
>> non-determinism.  Given that the monitor workload is mostly reads, 
>> except under failure conditions when it becomes write-latency 
>> sensitive, might we do better with a strict B-tree-style backing DB 
>> such as Berkeley DB, even at the cost of some performance?  It seems 
>> like something like that might provide more reliable latency properties.
>
> I'm not against trying it, but I'm not convinced it's the right solution.  If 
> the 99th percentile latency is significantly better, that's obviously a win, 
> but I think we would take a big performance hit overall.  I'm more in favor 
> of trying rocksdb first.  I'm certainly not as well versed in the leveldb 
> interface as you or Joao are, but it appears much of our code in LevelDBStore 
> would be reusable.  I don't know that rocksdb won't have the same issues that 
> leveldb does, but the rocksdb developers specifically mention leveldb's bad 
> 99th percentile latencies as a driver for its development:
>
> "By contrast, we’ve published the RocksDB benchmark results for server side 
> workloads on Flash. We also measured the performance of LevelDB on these 
> server-workload benchmarks and found that RocksDB solidly outperforms LevelDB 
> for these IO bound workloads. We found that LevelDB’s single-threaded 
> compaction process was insufficient to drive server workloads. We saw 
> frequent write-stalls with LevelDB that caused 99-percentile latency to be 
> tremendously large. We found that mmap-ing a file into the OS cache 
> introduced performance bottlenecks for reads. We could not make LevelDB 
> consume all the IOs offered by the underlying Flash storage."
>
> Compaction performance and high mmap/page fault/kswapd utilization during 
> reads are two big issues we've hit in leveldb, so I'm inclined to think that 
> rocksdb is at least worthy of some attention.
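The write-stall effect described above shows up directly in tail-latency numbers: a handful of stalled operations can dominate the 99th percentile even when almost every operation is fast. A minimal sketch of the nearest-rank percentile arithmetic (no ceph or rocksdb code, just an illustration):

```python
# Hypothetical sketch: computing tail latency from a list of per-operation
# latencies, e.g. collected while replaying a mon-like workload.  Pure
# percentile arithmetic -- nothing here is a ceph or rocksdb API.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value >= pct percent of samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# 100 ops: 98 fast ones, plus 2 ops that hit a 250 ms compaction stall.
latencies_ms = [1.0] * 98 + [250.0] * 2
print(percentile(latencies_ms, 50))  # -> 1.0   (median looks great)
print(percentile(latencies_ms, 99))  # -> 250.0 (p99 exposes the stalls)
```

This is why the median can look healthy while p99 is "tremendously large": just 2% of operations stalling behind compaction is enough to set the 99th percentile.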
>
> Here's the benchmark results on their wiki:
>
> https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks
>
>>
>> Thoughts?
>> -Sam
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to [email protected] More majordomo 
>> info at  http://vger.kernel.org/majordomo-info.html
>>
>
