Re: [ceph-users] Ceph pool resize

2017-02-07 Thread Vikhyat Umrao
On Tue, Feb 7, 2017 at 12:15 PM, Patrick McGarry wrote: > Moving this to ceph-users > On Mon, Feb 6, 2017 at 3:51 PM, nigel davies wrote: > > Hey > > I am helping to run a small two-node Ceph cluster. > > We have recently bought a 3rd storage node and the management want to i

Re: [ceph-users] rgw geo-replication

2015-04-24 Thread Vikhyat Umrao
On 04/24/2015 05:17 PM, GuangYang wrote: Hi cephers, recently I have been investigating the geo-replication of rgw. From the example at [1], it looks like if we want to do data geo-replication between US East and US West, we will need to build *one* (super) RADOS cluster which crosses US East and West

Re: [ceph-users] Ceph site is very slow

2015-04-16 Thread Vikhyat Umrao
I hope this will help you: http://docs.ceph.com/docs/master/ Regards, Vikhyat On 04/16/2015 02:39 PM, unixkeeper wrote: Is it still under DDoS attack? Is there a mirror site where we could get the docs and guides? Thanks a lot. On Wed, Apr 15, 2015 at 11:32 PM, Gregory Farnum wrote:

Re: [ceph-users] OSD replacement

2015-04-14 Thread Vikhyat Umrao
Hi, I hope you are following this: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual After removing the OSD successfully, run the following command: # ceph-deploy --overwrite-conf osd create <host>:<disk> --zap-disk It will assign the new OSD the same osd id as the old one
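A dry-run sketch of the redeploy step above; the host name `node1` and device `/dev/sdb` are placeholder assumptions, and the command is only printed here rather than executed:

```shell
# Placeholder values -- substitute your own node and disk.
OSD_HOST=node1
OSD_DISK=/dev/sdb

# Dry-run: print the ceph-deploy command from the thread instead of running it.
echo "ceph-deploy --overwrite-conf osd create ${OSD_HOST}:${OSD_DISK} --zap-disk"
```

Dropping the `echo` would actually zap the disk and create the OSD, so double-check the host:disk pair first.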

Re: [ceph-users] ceph Performance vs PG counts

2015-02-10 Thread Vikhyat Umrao
Hi, just a heads-up: I hope you are aware of this tool: http://ceph.com/pgcalc/ Regards, Vikhyat On 02/11/2015 09:11 AM, Sumit Gaur wrote: Hi, I am not sure why PG numbers have not been given much importance in the Ceph documents; I am seeing huge variation in performance numbers by changin
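The PGcalc tool linked above encodes the common rule of thumb for choosing a PG count. A minimal sketch of that rule, assuming the widely documented ~100 target PGs per OSD and power-of-two rounding (the function name is ours, not from the tool):

```python
def suggested_pg_count(num_osds: int, replicas: int, target_per_osd: int = 100) -> int:
    """Rule-of-thumb PG count: (OSDs * target PGs per OSD) / replica size,
    rounded up to the next power of two."""
    raw = num_osds * target_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. 9 OSDs with 3x replication -> 300 raw -> 512
```

Changing `pg_num` after pool creation triggers data movement, which is one reason the PG count has such a visible effect on benchmark numbers.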

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao
doesn't represent a float. osd crush reweight <name> <weight> : change <name>'s weight to <weight> in crush map. Error EINVAL: invalid command. What do you think? On Feb 10, 2015, at 3:18 PM, Vikhyat Umrao <vum...@redhat.com> wrote: sudo ceph osd crush reweight 0.

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao
Hello, your OSDs do not have weights; please assign some weight to your Ceph cluster OSDs, as Udo said in his last comment. osd crush reweight <name> <weight> : change <name>'s weight to <weight> in crush map. For example: sudo ceph osd crush reweight osd.<N> 0.0095, for osd.0 to osd.5. Regards, Vikhyat
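The per-OSD reweight above can be scripted; a minimal sketch that prints the command for osd.0 through osd.5 (dry-run via `echo` — remove the `echo` to actually apply it; the 0.0095 weight is the value from the thread):

```shell
# Dry-run: print one crush reweight command per OSD (osd.0 .. osd.5).
for i in 0 1 2 3 4 5; do
  echo "ceph osd crush reweight osd.${i} 0.0095"
done
```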

[ceph-users] [rbd] Ceph RBD kernel client using with cephx

2015-02-09 Thread Vikhyat Umrao
Hi, While using the rbd kernel client with cephx, an admin user without the admin keyring was not able to map an rbd image to a block device, and this should be the workflow. But the issue is that once I unmap an rbd image without the admin keyring, it allows unmapping the image, and as per my understanding it should