Re: [ceph-users] cache-tier do not evict

2015-04-09 Thread Chu Duc Minh
What Ceph version do you use? Regards. On 9 Apr 2015 18:58, Patrik Plank pat...@plank.me wrote: Hi, I have built a cache-tier pool (replica 2) with 3 x 512 GB SSDs for my kvm pool. These are my settings:
# ceph osd tier add kvm cache-pool
# ceph osd tier cache-mode cache-pool writeback
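For reference, a writeback cache tier will not flush or evict on its own unless its thresholds are set; without target_max_bytes (or target_max_objects) the cache agent has no notion of "full", which matches the symptom in the subject. A minimal sketch of the usual remaining settings — pool names follow the thread, the size and ratio values are placeholders:
# ceph osd tier set-overlay kvm cache-pool
# ceph osd pool set cache-pool hit_set_type bloom
# ceph osd pool set cache-pool hit_set_count 1
# ceph osd pool set cache-pool hit_set_period 3600
# ceph osd pool set cache-pool target_max_bytes 1000000000000
# ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
# ceph osd pool set cache-pool cache_target_full_ratio 0.8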

Re: [ceph-users] ceph -s slow return result

2015-03-29 Thread Chu Duc Minh
:45 PM, Chu Duc Minh chu.ducm...@gmail.com wrote: @Kobi Laredo: thank you! It's exactly my problem.
# du -sh /var/lib/ceph/mon/
2.6G /var/lib/ceph/mon/
# ceph tell mon.a compact
compacted leveldb in 10.197506
# du -sh /var/lib/ceph/mon/
461M /var/lib/ceph/mon/
Now my ceph -s
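To avoid having to compact by hand, the monitor store can also be compacted automatically whenever the daemon starts; a hedged ceph.conf sketch (option as documented in releases of that era):
[mon]
    mon compact on start = true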

Re: [ceph-users] ceph -s slow return result

2015-03-27 Thread Chu Duc Minh
, then times out and contacts a different one. I have also seen it just be slow if the monitors are processing so many updates that they're behind, but that's usually on a very unhappy cluster. -Greg On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh chu.ducm...@gmail.com wrote: On my CEPH cluster
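One way to test Greg's explanation is to point the client at each monitor explicitly and time the call; a diagnostic sketch, with placeholder addresses:
# time ceph -s -m 10.0.0.1:6789
# time ceph -s -m 10.0.0.2:6789
# time ceph -s -m 10.0.0.3:6789
If only one monitor is consistently slow, the random-selection-then-timeout theory fits.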

[ceph-users] ceph -s slow return result

2015-03-27 Thread Chu Duc Minh
On my Ceph cluster, ceph -s returns its result quite slowly. Sometimes it returns immediately; sometimes it hangs for a few seconds before returning. Do you think this problem (ceph -s slow to return) relates only to the ceph-mon processes, or could it relate to the ceph-osds too? (I am deleting a big bucket,

Re: [ceph-users] ceph -s slow return result

2015-03-27 Thread Chu Duc Minh
at a time. Kobi Laredo, Cloud Systems Engineer | (408) 409-KOBI. On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh chu.ducm...@gmail.com wrote: All my monitors are running. But I am deleting the pool .rgw.buckets, which currently holds 13 million objects (just test data). The reason that I must delete

Re: [ceph-users] [SPAM] Changing pg_num = RBD VM down !

2015-03-16 Thread Chu Duc Minh
of 2. So I'd go from 2048 to 4096. I'm not sure if this is the safest way, but it's worked for me. Michael Kuriger, Sr. Unix Systems Engineer, mk7...@yp.com | 818-649-7235. From: Chu Duc Minh chu.ducm...@gmail.com Date: Monday, March 16, 2015 at 7:49 AM To: Florent B

Re: [ceph-users] [SPAM] Changing pg_num = RBD VM down !

2015-03-16 Thread Chu Duc Minh
I'm using the latest Giant and have the same issue. When I increase the pg_num of a pool from 2048 to 2148, my VMs are still OK. When I increase it from 2148 to 2400, some VMs die (their qemu-kvm processes die). My physical servers (which host the VMs) run kernel 3.13 and use librbd. I think it's a bug in librbd with
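The cautious pattern this thread converges on is raising pg_num in small increments and letting the cluster settle in between; a sketch assuming the pool name kvm from the thread, with an illustrative step size:
# ceph osd pool set kvm pg_num 2176
# ceph osd pool set kvm pgp_num 2176
Wait for ceph -s to report HEALTH_OK, then repeat with the next increment.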

[ceph-users] [URGENT] My CEPH cluster is dying (due to incomplete PG)

2014-11-08 Thread Chu Duc Minh
My Ceph cluster has a PG in state incomplete and I cannot query it any more.
# ceph pg 6.9d8 query
(hangs forever)
All my volumes may lose data because of this PG.
# ceph pg dump_stuck inactive
ok
pg_stat objects mip degr misp unf bytes log disklog state

Re: [ceph-users] [URGENT] My CEPH cluster is dying (due to incomplete PG)

2014-11-08 Thread Chu Duc Minh
[] -1 [] -1 0'0 0.000'0 0.00 Do you have any suggestions? Thank you very much indeed! On Sun, Nov 9, 2014 at 12:52 AM, Chu Duc Minh chu.ducm...@gmail.com wrote: My Ceph cluster has a PG in state incomplete and I cannot query it any more. # ceph pg
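Since the query itself hangs, some diagnostics that do not go through the PG may still answer; a sketch using the PG id from the thread:
# ceph health detail | grep 6.9d8
# ceph pg map 6.9d8
# ceph osd tree
The map and the OSD tree together show which OSDs the PG is pinned to and whether any of them are down or out.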

[ceph-users] rbd import so slow

2014-01-09 Thread Chu Duc Minh
When using the RBD backend for OpenStack volumes, I can easily surpass 200 MB/s. But when using the rbd import command, e.g.:
# rbd import --pool test Debian.raw volume-Debian-1 --new-format --id volumes
I can only import at ~30 MB/s. I don't know why rbd import is so slow. What can I do to improve import
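One knob worth trying is the object size: rbd import accepts --order, and a larger order means fewer, bigger writes per object. A hedged sketch based on the command in the thread (order 25 = 32 MiB objects; whether it helps depends on the Ceph version):
# rbd import --new-format --order 25 --pool test Debian.raw volume-Debian-1 --id volumes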

Re: [ceph-users] Speed limit on RadosGW?

2013-10-14 Thread Chu Duc Minh
, that already have millions of files? On Wed, Sep 25, 2013 at 7:24 PM, Mark Nelson mark.nel...@inktank.com wrote: On 09/25/2013 02:49 AM, Chu Duc Minh wrote: I have a Ceph cluster with 9 nodes (6 data nodes, 3 mon/mds nodes) and I set up 4 separate nodes to test the performance of RadosGW: - 2 nodes run

[ceph-users] Speed limit on RadosGW?

2013-09-25 Thread Chu Duc Minh
I have a Ceph cluster with 9 nodes (6 data nodes, 3 mon/mds nodes), and I set up 4 separate nodes to test the performance of RadosGW:
- 2 nodes run RadosGW
- 2 nodes run multi-process file uploads to the [multiple] RadosGW instances
Result: a) When I use 1 RadosGW node and 1 upload node, upload speed = 50 MB/s per upload node,
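For context, a minimal sketch of this kind of multi-process put test from one upload node, assuming s3cmd is already configured against the gateway; the bucket and file names are placeholders:
for i in $(seq 1 8); do
    s3cmd put ./testfile s3://bench-bucket/obj-$i &
done
wait
Scaling the process count per node (and the node count) shows whether the ~50 MB/s ceiling sits in the client, the gateway, or the cluster behind it.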