What Ceph version do you use?
Regards,
On 9 Apr 2015 18:58, Patrik Plank pat...@plank.me wrote:
Hi,
I have built a cache-tier pool (replica 2) with 3 x 512GB SSDs for my KVM pool.
These are my settings:
ceph osd tier add kvm cache-pool
ceph osd tier cache-mode cache-pool writeback
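For a writeback tier you normally also need an overlay and hit-set parameters; a minimal sketch, with illustrative values:
ceph osd tier set-overlay kvm cache-pool
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool hit_set_count 1
ceph osd pool set cache-pool hit_set_period 3600
ceph osd pool set cache-pool target_max_bytes 500000000000
Without the overlay, client I/O is not redirected through the cache pool at all.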
:45 PM, Chu Duc Minh chu.ducm...@gmail.com wrote:
@Kobi Laredo: thank you! It's exactly my problem.
# du -sh /var/lib/ceph/mon/
2.6G    /var/lib/ceph/mon/
# ceph tell mon.a compact
compacted leveldb in 10.197506
# du -sh /var/lib/ceph/mon/
461M    /var/lib/ceph/mon/
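If the store grows again, compaction can also be forced at daemon startup; a minimal ceph.conf sketch (verify the option exists on your release):
[mon]
mon compact on start = true
This trades a slower monitor start for a freshly compacted leveldb store.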
Now my ceph -s
The client contacts a random monitor first; if that monitor is down or unresponsive, it then times out and
contacts a different one.
I have also seen it just be slow if the monitors are processing so many
updates that they're behind, but that's usually on a very unhappy cluster.
-Greg
On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh chu.ducm...@gmail.com
wrote:
On my Ceph cluster, ceph -s returns results quite slowly.
Sometimes it returns immediately; sometimes it hangs for a few seconds before
returning.
Do you think this problem (slow ceph -s) relates only to the ceph-mon
processes, or could it relate to the ceph-osds too?
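One way to narrow this down is to time the status call against each monitor individually; the -m flag pins the command to a single monitor (addresses illustrative):
# time ceph -s -m 192.168.0.1:6789
# time ceph -s -m 192.168.0.2:6789
# time ceph -s -m 192.168.0.3:6789
If one monitor is consistently slow, the problem is on the mon side rather than the OSDs.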
(I am deleting a big bucket,
at a time.
Kobi Laredo
Cloud Systems Engineer | (408) 409-KOBI
On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh chu.ducm...@gmail.com
wrote:
All my monitors are running.
But I am deleting the pool .rgw.buckets, which currently has 13 million objects (just
test data).
The reason that I must delete
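For reference, a large RGW bucket is usually removed through radosgw-admin rather than by deleting the backing pool; a hedged sketch, bucket name illustrative:
# radosgw-admin bucket rm --bucket=test-bucket --purge-objects
The --purge-objects flag deletes the bucket contents as part of the removal, which can take a long time at millions of objects.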
of 2. So I’d go from 2048 to 4096.
I’m not sure if this is the safest way, but it’s worked for me.
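A sketch of that power-of-two bump (pool name illustrative); pgp_num has to follow pg_num before any data actually rebalances:
# ceph osd pool set kvm pg_num 4096
# ceph osd pool set kvm pgp_num 4096
Some operators raise pg_num in smaller steps and wait for the cluster to settle in between, to limit the peering load.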
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
From: Chu Duc Minh chu.ducm...@gmail.com
Date: Monday, March 16, 2015 at 7:49 AM
To: Florent B
I'm using the latest Giant and have the same issue. When I increase the pg_num
of a pool from 2048 to 2148, my VMs are still OK. When I increase from 2148
to 2400, some VMs die (the qemu-kvm process dies).
My physical servers (hosting the VMs) run kernel 3.13 and use librbd.
I think it's a bug in librbd with
My Ceph cluster has a PG in state incomplete and I cannot query it
any more.
# ceph pg 6.9d8 query (hangs forever)
All my volumes may lose data because of this PG.
# ceph pg dump_stuck inactive
ok
pg_stat objects mip degr misp unf bytes log disklog state
[] -1 [] -1 0'0 0.00 0'0 0.00
Do you have any suggestion?
Thank you very much indeed!
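When pg query hangs, it can help to first see which OSDs the PG maps to and what the cluster itself reports about it; two standard read-only commands, using the PG id from above:
# ceph pg map 6.9d8
# ceph health detail | grep 6.9d8
If the acting set is empty (primary -1, as in the dump above), the query has no OSD to talk to, which would explain the hang.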
When using the RBD backend for OpenStack volumes, I can easily surpass 200MB/s.
But when using the rbd import command, e.g.:
# rbd import --pool test Debian.raw volume-Debian-1 --new-format --id volumes
I can only import at ~30MB/s.
I don't know why rbd import is slow. What can I do to improve import speed?
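rbd import writes the image largely sequentially, so per-object latency dominates; one knob that is sometimes suggested is a larger object size at import time, since import accepts the usual image-creation options (assuming your release supports it):
# rbd import --pool test --order 25 Debian.raw volume-Debian-1 --id volumes
--order 25 gives 32MB objects instead of the default 4MB (order 22), cutting the number of round trips per gigabyte.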
that already has millions of files?
On Wed, Sep 25, 2013 at 7:24 PM, Mark Nelson mark.nel...@inktank.comwrote:
On 09/25/2013 02:49 AM, Chu Duc Minh wrote:
I have a Ceph cluster with 9 nodes (6 data nodes, 3 mon/mds nodes).
And I set up 4 separate nodes to test the performance of Rados-GW:
- 2 nodes run Rados-GW
- 2 nodes run multi-process puts of files to [multi] Rados-GW
Result:
a) When I use 1 RadosGW node and 1 upload node, upload speed = 50MB/s
per upload node,
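For the upload side, a simple multi-process PUT driver can be sketched with xargs and s3cmd (bucket and paths illustrative, s3cmd already configured for the RGW endpoint):
# ls /data/files | xargs -P 8 -I{} s3cmd put /data/files/{} s3://testbucket/{}
The -P 8 runs eight concurrent PUTs; scaling that number up and down shows whether the gateway or the client side is the bottleneck.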