> All the crashed OSDs had the same rocksdb corruption error?  What kind
> of hardware (or vm?) are you using?

Yes, all the crashed OSDs had the same rocksdb corruption error.
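For reference, a quick way to see which OSDs hit the error is to grep the OSD logs for the checksum message. The log path and OSD id below are illustrative only (a sample log line is written first so the grep can be demonstrated); adjust for your own setup:

```shell
# Illustrative setup: write a sample OSD log line so the grep below
# has something to match (real logs live under /var/log/ceph/).
mkdir -p /tmp/ceph-logs
cat > /tmp/ceph-logs/ceph-osd.3.log <<'EOF'
rocksdb: Compaction error: Corruption: block checksum mismatch
EOF

# List the OSD logs that contain the rocksdb checksum error:
grep -l "Corruption: block checksum mismatch" /tmp/ceph-logs/ceph-osd.*.log
```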

Our cluster has 3 nodes, and each node has:

2 * Intel(R) Xeon(R) E5-2620 v3 @ 2.40GHz CPU

4 * 16G DDR4-2133 memory

10 * OSD (Hitachi 1 TB 7200 RPM SATA HDD for data, Intel S3500 800 GB SATA SSD
for WAL and DB; WAL size is 576 MB, DB size is 1 GB)

> Also,

> rocksdb:[/clove/vm/clove/ceph/rpmbuild/BUILD/ceph-12.2.0/src/rocksdb/db/compaction_job.cc:1403]
> ....
> it looks like this is a custom build?  Are there any changes to the
> source code?

Yes, we built it from source ourselves, but we made no changes to the
source code.



Original Mail



Sender:  <[email protected]>
To: WeiQiaoMiao00105316
CC:  <[email protected]>
Date: 2017/09/17 01:56
Subject: Re: [ceph-users] osd crash because rocksdb report  ‘Compaction error: 
Corruption: block checksum mismatch’





On Fri, 15 Sep 2017, [email protected] wrote:
> 
> Hi,all   
> 
>    My cluster is running 12.2.0 with bluestore. We ran an io test
> yesterday using the fio tool with the librbd ioengine, and several osds
> crashed one after another.
> 
>    3 * node, 30 OSD, 1TB SATA HDD for OSD data, 1GB SATA SSD  partition for
> db, 576 MB SATA SSD partition for wal.
> 
>    ceph options:
> 
>    bluestore_shard_finishers = true
>    mon_osd_prime_pg_temp = false
>    mon_allow_pool_delete = true
>    mgr_op_latency_sample_interval = 300

All of the crashed OSDs had the same rocksdb corruption error?  What kind 
of hardware (or vm?) are you using?

Also,

rocksdb:[/clove/vm/clove/ceph/rpmbuild/BUILD/ceph-12.2.0/src/rocksdb/db/compaction_job.cc:1403]
  
....

it looks like this is a custom build?  Are there any changes to the 
source code?

Thanks!
sage
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
