[ceph-users] ceph-mon cpu 100%

2015-11-23 Thread Yujian Peng
The mons in my production cluster have very high CPU usage (100%). I think it may be caused by leveldb compression. How do I disable leveldb compression? Just add leveldb_compression = false to ceph.conf and restart the mons? Thanks a lot!
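For reference, a minimal sketch of the change being asked about. The option name leveldb_compression is taken from the question itself, so verify it exists on your release before applying, and use whatever restart command your init system provides (upstart shown here for a 2015-era Ubuntu node):

# Confirm the option is known to the running mon first
ceph daemon mon.$(hostname -s) config show | grep leveldb

# /etc/ceph/ceph.conf
[mon]
leveldb_compression = false

# Restart one monitor at a time so quorum is never lost
restart ceph-mon id=$(hostname -s)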

[ceph-users] xfs corruption, data disaster!

2015-05-04 Thread Yujian Peng
Hi, I'm encountering a data disaster. I have a ceph cluster with 145 OSDs. The data center had a power problem yesterday, and all of the ceph nodes were down. But now I find that 6 disks (xfs) on 4 nodes have data corruption. Some disks cannot be mounted, and some disks show IO errors in syslog.
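Not part of the original mail, but a typical first-pass triage on a corrupted xfs OSD disk looks roughly like the sketch below; the device name is a placeholder and the -n dry run never writes to the disk:

# Check kernel messages for the failing device (placeholder /dev/sdX1)
dmesg | grep -i -e xfs -e 'I/O error'

# Dry run: -n only reports problems, it does not modify the filesystem
xfs_repair -n /dev/sdX1

# If the log is dirty, xfs_repair refuses to run; try mounting once to
# replay the log, and only as a last resort use -L (zeroes the log and
# may lose the most recent writes):
# xfs_repair -L /dev/sdX1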

Re: [ceph-users] xfs corruption, data disaster!

2015-05-04 Thread Yujian Peng
Alexandre DERUMIER aderumier@... writes: maybe this could help to repair pgs? http://www.sebastien-han.fr/blog/2015/04/27/ceph-manually-repair-object/ (6 disks failing at the same time seems pretty strange. Do you have some kind of writeback cache enabled on these disks?) The only writeback
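For context, the high-level part of what the linked post walks through looks roughly like this; the PG id is a placeholder, and the post's point is to identify and remove the bad replica manually before asking Ceph to repair:

# Find the inconsistent placement groups
ceph health detail | grep inconsistent

# Re-check and then repair a specific PG (placeholder id 17.1c1)
ceph pg deep-scrub 17.1c1
ceph pg repair 17.1c1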

Re: [ceph-users] xfs corruption, data disaster!

2015-05-04 Thread Yujian Peng
Emmanuel Florac eflorac@... writes: On Mon, 4 May 2015 07:00:32 +0000 (UTC), Yujian Peng pengyujian5201314 at 126.com wrote: I'm encountering a data disaster. I have a ceph cluster with 145 OSDs. The data center had a power problem yesterday, and all of the ceph nodes were down

[ceph-users] ceph data not well distributed.

2015-04-14 Thread Yujian Peng
I have a ceph cluster with 125 OSDs with the same weight. But I found that data is not well distributed. df output:
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1       47929224 2066208  43405264   5% /
udev            16434372       4  16434368   1% /dev
tmpfs
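Not in the original mail, but the usual way to quantify the imbalance before changing anything is roughly the following; note that ceph osd df needs hammer or later, so on a firefly cluster you would compare plain df across the mounted OSD data dirs instead:

# Per-OSD utilisation and PG counts
ceph osd df
ceph osd tree

# One-shot reweight of the most-loaded OSDs
# (use with care: this triggers data movement; 120 = 120% of average utilisation)
ceph osd reweight-by-utilization 120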

Re: [ceph-users] Ceph OSD Log INFO Learning

2015-04-14 Thread Yujian Peng
Star Guo starg@... writes: There is an image in the attachment with ceph osd log information. It prints “fault with nothing to send, going to standby”. What does it mean? Thanks.
Log messages like this are normal: the OSD is just closing an idle connection and will reopen it when there is traffic again. It is not an error.

Re: [ceph-users] ceph data not well distributed.

2015-04-14 Thread Yujian Peng
Thanks for your advice! I'll increase the number of PGs to improve the balance.
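A sketch of that step; the pool name and target count are placeholders, and on these releases pg_num can only be increased, never decreased:

# Check the current values for the pool (placeholder name "rbd")
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num

# Raise pg_num first, then pgp_num to the same value so the new PGs
# are actually rebalanced across the OSDs (expect data movement)
ceph osd pool set rbd pg_num 4096
ceph osd pool set rbd pgp_num 4096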

[ceph-users] Ceph cache tier

2015-03-23 Thread Yujian Peng
Hi all, I have a ceph cluster (0.80.7) in production. Now I have hit an IOPS bottleneck, so I want to add a cache tier with SSDs to provide better I/O performance. Here is the procedure:
1. Create a cache pool
2. Set up a cache tier: ceph osd tier add cold-storage hot-storage
3. Set
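For reference, the full sequence usually looks roughly like this; the pool names cold-storage and hot-storage are taken from the mail, and the cache-mode, overlay and sizing steps are the part the preview cuts off (values are placeholders):

# Attach the SSD pool as a cache tier in front of the base pool
ceph osd tier add cold-storage hot-storage
ceph osd tier cache-mode hot-storage writeback

# Redirect client traffic for cold-storage through the cache pool
ceph osd tier set-overlay cold-storage hot-storage

# The cache pool also needs hit-set and eviction targets, e.g.
ceph osd pool set hot-storage hit_set_type bloom
ceph osd pool set hot-storage target_max_bytes 1000000000000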

Re: [ceph-users] Is cache tiering production ready?

2014-12-18 Thread Yujian Peng
Gregory Farnum greg@... writes: Cache tiering is a stable, functioning system. Those particular commands are for testing and development purposes, not something you should run (although they ought to be safe). -Greg
Thanks for your reply! I'll put cache tiering into my production cluster!

[ceph-users] Is cache tiering production ready?

2014-12-17 Thread Yujian Peng
Hi, Since firefly, ceph supports cache tiering. Cache tiering: support for creating ‘cache pools’ that store hot, recently accessed objects, with automatic demotion of colder data to a base tier. Typically the cache pool is backed by faster storage devices like SSDs. I'm testing cache tiering,

Re: [ceph-users] Placing Different Pools on Different OSDS

2014-12-16 Thread Yujian Peng
I've found the problem. The command "ceph osd crush rule create-simple ssd_ruleset ssd root" should be "ceph osd crush rule create-simple ssd_ruleset ssd host".
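To complete the picture (not in the preview): after creating the rule with host as the failure domain, it still has to be assigned to the pool. A sketch, assuming the pool is called ssdpool as in the original post and that this firefly-era release uses the crush_ruleset pool setting rather than the newer crush_rule:

# Recreate the rule with host (not root) as the failure-domain type
ceph osd crush rule create-simple ssd_ruleset ssd host

# Look up the rule id and point the pool at it
ceph osd crush rule dump
ceph osd pool set ssdpool crush_ruleset <rule-id>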

[ceph-users] Placing Different Pools on Different OSDS

2014-12-15 Thread Yujian Peng
Hi, I want to test a ceph cache tier. The test cluster has three machines, each with an SSD and a SATA disk. I've created a crush rule ssd_ruleset to place ssdpool on the SSD OSDs, but no PGs are assigned to the SSDs.
root@ceph10:~# ceph osd crush rule list
[ replicated_ruleset, ssd_ruleset]
root@ceph10:~#
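Not shown in the truncated post, but for a rule like this to place data, the crush map also needs an ssd root with its own host buckets holding the SSD OSDs. A rough sketch under that assumption; bucket names, weights and osd ids are placeholders:

# Build a separate hierarchy for the SSDs: one root, one bucket per host
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket ceph10-ssd host
ceph osd crush move ceph10-ssd root=ssd

# Put this host's SSD OSD under its ssd host bucket (osd.3 is a placeholder)
ceph osd crush set osd.3 1.0 host=ceph10-ssd

# Repeat for the other two hosts, then create the rule with host as the
# failure domain, as in the follow-up above:
ceph osd crush rule create-simple ssd_ruleset ssd host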

Re: [ceph-users] Questions about osd journal configuration

2014-11-27 Thread Yujian Peng
I would like to extend my sincerest thanks and appreciation to the patient souls who helped me! Following your tips, I will try tuning ceph and the kernel vm dirty buffer settings to see the effect.
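A minimal sketch of the kernel vm dirty buffer knobs being referred to; the values shown are placeholders to experiment with, not recommendations:

# Current values
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Try a lower background threshold so writeback starts earlier and
# flushes are smaller and more frequent (placeholder values)
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10

# Persist the chosen values in /etc/sysctl.conf once settled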

[ceph-users] Questions about osd journal configuration

2014-11-26 Thread Yujian Peng
Hi all, I have a ceph cluster in production. Most of the write requests are small, and I found that IOPS is a bottleneck. I want to move all of the journal data to partitions on SSDs. Here is the procedure:
1. Set the noout flag: ceph osd set noout
2. Stop osd 0
3. Copy the journal data to new
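The preview cuts the procedure off; a hedged sketch of the usual sequence for one OSD (osd.0 here, the SSD partition path is a placeholder, and the journal location must match what the osd data dir points at):

ceph osd set noout
stop ceph-osd id=0                      # or: service ceph stop osd.0

# Flush everything still in the old journal to the object store
ceph-osd -i 0 --flush-journal

# Point the OSD at the new SSD partition (placeholder device), e.g. by
# replacing the journal symlink in the osd data dir
ln -sf /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal

# Create a fresh journal on the SSD and bring the OSD back
ceph-osd -i 0 --mkjournal
start ceph-osd id=0
ceph osd unset noout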

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Yujian Peng
Mark Nelson, thanks for your help! I will set filestore max sync interval to a couple of different values and observe the effects.

Re: [ceph-users] Questions about osd journal configuration

2014-11-26 Thread Yujian Peng
Thanks a lot! IOPS is a bottleneck in my cluster and the data disks are much slower than the SSDs. I don't know whether the SSD journals will effectively act as caches if filestore_max_sync_interval is set to a big value. I will set filestore_max_sync_interval to a couple of values to observe the effect. If
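A sketch of how such an experiment is typically run; 30 is just an example value, and a change made with injectargs does not survive an OSD restart unless it is also written to ceph.conf:

# Check the current setting on a running OSD (run on the OSD's host)
ceph daemon osd.0 config show | grep filestore_max_sync_interval

# Change it at runtime on all OSDs for the experiment (example value: 30 s)
ceph tell osd.* injectargs '--filestore_max_sync_interval 30'

# To keep it across restarts, also add to ceph.conf:
# [osd]
# filestore max sync interval = 30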