The mons in my production cluster have very high CPU usage, around 100%.
I think it may be caused by leveldb compression.
How do I disable leveldb compression?
Is it enough to add leveldb_compression = false to ceph.conf and restart the mons?
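A minimal ceph.conf sketch of that change, assuming the option name in the question is valid for this release (some releases expose a mon-prefixed variant instead, so it is worth checking the config reference for your version first):

  [mon]
  # option name taken from the question above; verify it for your release,
  # which may use a "mon leveldb compression" variant instead
  leveldb_compression = false

Restart the mons one at a time afterwards so the monitor quorum stays up.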
Thanks a lot!
___
Hi,
I'm encountering a data disaster. I have a ceph cluster with 145 OSDs. The
data center had a power problem yesterday, and all of the ceph nodes went down.
Now I find that 6 disks (XFS) across 4 nodes have data corruption. Some disks
cannot be mounted, and some disks show I/O errors in syslog.
Alexandre DERUMIER aderumier@... writes:
Maybe this could help to repair the PGs:
http://www.sebastien-han.fr/blog/2015/04/27/ceph-manually-repair-object/
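Very roughly, the inconsistent-PG repair flow that kind of post describes looks like this (a sketch only; <pgid> is a placeholder, and it assumes the affected OSDs can still start):

  ceph health detail       # shows which PGs are flagged inconsistent
  ceph pg repair <pgid>    # ask the primary OSD to repair that PG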
(6 disks failing at the same time seems pretty strange. Do you have some kind of
writeback cache enabled on these disks?)
The only writeback
Emmanuel Florac eflorac@... writes:
On Mon, 4 May 2015 07:00:32 +0000 (UTC),
Yujian Peng pengyujian5201314 at 126.com wrote:
I'm encountering a data disaster. I have a ceph cluster with 145 osd.
The data center had a power problem yesterday, and all of the ceph
nodes were down
I have a ceph cluster with 125 OSDs, all with the same weight.
But I found that the data is not well distributed.
df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1       47929224 2066208  43405264   5% /
udev            16434372       4  16434368   1% /dev
tmpfs
Star Guo starg@... writes:
There is an image attached with ceph osd log information. It prints
“fault with nothing to send, going to standby”. What does it mean? Thanks.
Logs like this are OK.
___
Thanks for your advice!
I'll increase the number of PGs to improve the balance.
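For anyone repeating this, the PG count is raised per pool; a minimal sketch, where the pool name and target count are placeholders, and pgp_num has to follow pg_num before data actually rebalances:

  # raise the PG count for one pool; on a busy cluster increase it gradually
  ceph osd pool set <poolname> pg_num 2048
  ceph osd pool set <poolname> pgp_num 2048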
___
Hi all,
I have a ceph cluster (0.80.7) in production.
Now I'm hitting an IOPS bottleneck, so I want to add a cache
tier with SSDs to provide better I/O performance. Here is the procedure (a fuller command sketch follows the list):
1. Create a cache pool
2. Set up a cache tier
ceph osd tier add cold-storage hot-storage
3. Set
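For readers following the thread, the usual sequence for a writeback cache tier looks roughly like this. It is a sketch, not the poster's exact procedure; the pool names are the ones from the post and the PG count is only illustrative:

  # create the SSD-backed cache pool (PG count illustrative)
  ceph osd pool create hot-storage 128 128
  # attach the cache pool to the base pool and enable writeback caching
  ceph osd tier add cold-storage hot-storage
  ceph osd tier cache-mode hot-storage writeback
  ceph osd tier set-overlay cold-storage hot-storage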
Gregory Farnum greg@... writes:
Cache tiering is a stable, functioning system. Those particular commands
are for testing and development purposes, not something you should run
(although they ought to be safe). -Greg
Thanks for your reply!
I'll put cache tiering into my production cluster!
Hi,
Since firefly, ceph has supported cache tiering.
Cache tiering: support for creating ‘cache pools’ that store hot, recently
accessed objects with automatic demotion of colder data to a base tier.
Typically the cache pool is backed by faster storage devices like SSDs.
I'm testing cache tiering,
I've found the problem.
The command ceph osd crush rule create-simple ssd_ruleset ssd root should
be ceph osd crush rule create-simple ssd_ruleset ssd host
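For context: the last argument to create-simple is the failure-domain bucket type. With root there is only one bucket of that type to choose from, so CRUSH cannot place a PG's replicas on distinct failure domains and the PGs stay unmapped; with host the replicas can spread across the three hosts. A rough sketch of applying the fix and checking it (ssdpool is the pool from the question below, <rule_id> is a placeholder read from the rule dump, and newer releases call the pool setting crush_rule instead of crush_ruleset):

  # recreate the rule with host as the failure domain under the ssd root
  ceph osd crush rule create-simple ssd_ruleset ssd host
  # look up the new rule's id, then point the SSD pool at it
  ceph osd crush rule dump
  ceph osd pool set ssdpool crush_ruleset <rule_id>
  # sanity check: see which OSDs an object in the pool would map to
  ceph osd map ssdpool test-object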
___
Hi,
I want to test the ceph cache tier. The test cluster has three machines, each
with an SSD and a SATA disk. I've created a crush rule ssd_ruleset to place ssdpool
on the SSD OSDs, but PGs are not being assigned to the SSDs.
root@ceph10:~# ceph osd crush rule list
[
    "replicated_ruleset",
    "ssd_ruleset"]
root@ceph10:~#
I would like to extend my sincerest thanks and appreciation to those patient
souls who helped me!
I will try tuning ceph and the kernel VM dirty buffer settings to see the effect,
following your tips.
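For anyone wanting to try the same, the dirty-buffer knobs live in sysctl; a minimal sketch with illustrative values only, since the right numbers depend on RAM and workload:

  # /etc/sysctl.conf -- illustrative starting points, not recommendations
  vm.dirty_background_ratio = 5    # start background writeback sooner
  vm.dirty_ratio = 10              # throttle writers once dirty pages exceed this share of RAM

Apply with sysctl -p, or test a single value on the fly with sysctl -w.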
___
Hi all,
I have a ceph cluster in production. Most of the write requests are small, and I
found that IOPS is a bottleneck. I want to move all of the journal data to
partitions on SSDs. Here is the procedure (a fuller command sketch follows the list):
1. Set the noout flag: ceph osd set noout
2. Stop osd.0
3. Copy the journal data to the new SSD partition
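For completeness, the sequence usually recommended for relocating a FileStore journal looks roughly like this. It is a sketch rather than the poster's exact steps; osd.0 is just the example OSD and the by-partuuid path is a placeholder for the new SSD partition:

  ceph osd set noout
  service ceph stop osd.0                  # stop the OSD with your init system
  ceph-osd -i 0 --flush-journal            # drain the old journal into the data store
  ln -sf /dev/disk/by-partuuid/<ssd-part> /var/lib/ceph/osd/ceph-0/journal
  ceph-osd -i 0 --mkjournal                # initialize the new journal on the SSD partition
  service ceph start osd.0
  ceph osd unset noout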
Mark Nelson, thanks for your help! I will set filestore max sync interval to a
couple of values to observe the effects.
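If it helps anyone reading along, that setting lives in the [osd] section of ceph.conf; a minimal sketch with an illustrative value only:

  [osd]
  # illustrative value -- a larger interval lets more small writes coalesce in the
  # journal between filestore syncs, at the cost of a bigger burst at sync time
  filestore max sync interval = 10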
___
Thanks a lot!
IOPS is a bottleneck in my cluster and the object disks are much slower than
SSDs. I don't know whether SSDs will be used as caches if
filestore_max_sync_interval is set to a big value. I will set
filestore_max_sync_interval to a couple of values to observe the effect.
If