Re: [ceph-users] Firefly to Hammer Upgrade -- HEALTH_WARN; too many PGs per OSD (480 > max 300)

2015-09-01 Thread 10 minus
and 50 OSDs set pg_num to 4096 * If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pg_num value by yourself --snip-- On Mon, Aug 31, 2015 at 10:31 AM, Gregory Farnum <gfar...@redhat.com> wrote: > On Mon, Aug 31, 2015 at 8:30 AM, 10 min
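The rule of thumb behind those thresholds can be sketched in shell arithmetic: aim for roughly 100 PGs per OSD divided by the pool's replica count, rounded up to a power of two. This is a sketch of the common guideline, not an official Ceph tool; the OSD count and size below are examples.

```shell
#!/bin/sh
# pg_num sizing sketch: target ~100 PGs per OSD, divided by the
# replica count, rounded up to the next power of two.
osds=12
size=3                              # pool replica count
target=$(( osds * 100 / size ))     # 400
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"                      # prints 512 for 12 OSDs, size 3
```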

[ceph-users] Firefly to Hammer Upgrade -- HEALTH_WARN; too many PGs per OSD (480 > max 300)

2015-08-31 Thread 10 minus
Hi, I'm in the process of upgrading my ceph cluster from Firefly to Hammer. The ceph cluster has 12 OSDs spread across 4 nodes. The mons have been upgraded to Hammer. Since I created the pools with pg_num values of 512 and 256, I am a bit confused by the warning message. --snip-- ceph -s cluster
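The warning counts PG *copies* per OSD: the sum over all pools of pg_num times replica size, divided by the OSD count. A sketch with the two user-created pools from the message (512 and 256, assuming size 3 and 12 OSDs) shows they account for only part of the reported 480, so the remaining default pools must contribute the rest:

```shell
#!/bin/sh
# PGs-per-OSD sketch: sum(pg_num * size) / num_osds.
# Only the two pools named in the message are included; replica
# size 3 is an assumption.
osds=12
size=3
user_pgs=$(( 512 + 256 ))
per_osd=$(( user_pgs * size / osds ))
echo "$per_osd"     # 192 from the two user pools alone, vs. 480 reported
```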

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-08-26 Thread 10 minus
Hi, We got a good deal on the 843T and we are using them in our OpenStack setup as journals. They have been running for the last six months, no issues. When we compared them with the Intel SSDs (I think it was the S3700) they were a shade slower for our workload and considerably cheaper. We did not run any

Re: [ceph-users] Is Ceph the right tool for me?

2015-06-26 Thread 10 minus
Hi, As Christian has mentioned, a bit more detailed information will do us good. We had explored CephFS, but performance was an issue vis-a-vis ZFS when we tested (more than a year back), so we did not get into details. I will let the CephFS experts chip in here on the present state of CephFS

[ceph-users] radosgw : Cannot set a new region as default

2015-04-30 Thread 10 minus
Hi, I am in the process of setting up radosgw for a Firefly ceph cluster. I have followed the docs for creating an alternate region, region map, and zones. Now I want to delete the default region. Is it possible to do that? Also, I'm not able to promote my new region region1 as the default region.
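For the promotion part, the Firefly/Hammer-era federated-config docs use radosgw-admin's region-default command. A sketch, using the poster's region name; verify the flags against your radosgw-admin version before running:

```shell
# Mark region1 as the default region (region name from the message).
radosgw-admin region default --rgw-region=region1
# Then refresh the region map so it reflects the change; the old
# default region cannot be dropped while the region map references it.
radosgw-admin regionmap update
```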

[ceph-users] Powering down a ceph cluster

2015-04-22 Thread 10 minus
Hi, Is there a recommended way of powering down a ceph cluster and bringing it back up? I have looked through the docs and cannot find anything with regard to it. Thanks in advance ___ ceph-users mailing list ceph-users@lists.ceph.com
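The commonly recommended sequence is to fence recovery before stopping anything. A sketch against a live cluster (service names and init commands vary by release and init system):

```shell
# Before shutdown: stop CRUSH from marking down OSDs out and rebalancing.
ceph osd set noout
# Then stop clients first, OSD nodes next, monitors last, and power off.
# On power-up: start monitors first, then OSDs, then clear the flag.
ceph osd unset noout
ceph -s    # confirm HEALTH_OK before resuming client I/O
```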

[ceph-users] Is it possible to reinitialize the cluster

2015-04-20 Thread 10 minus
Hi, I have an issue with my ceph cluster where two nodes were lost by accident and have been recreated. ceph osd tree # id weight type name up/down reweight -1 14.56 root default -6 14.56 datacenter dc1 -7 14.56 row row1 -9 14.56 rack
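Rather than reinitializing the whole cluster, the usual route is to purge the stale OSD entries left behind by the recreated nodes and then re-add the OSDs fresh. A sketch for one stale OSD (osd id 3 is a placeholder; repeat per stale id):

```shell
# Remove a stale OSD entry from the cluster maps (id 3 is hypothetical).
ceph osd out 3            # mark it out so data is remapped
ceph osd crush remove osd.3
ceph auth del osd.3       # drop its cephx key
ceph osd rm 3             # remove it from the OSD map
```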

[ceph-users] Hammer release data and a Design question

2015-03-26 Thread 10 minus
Hi, I'm just starting on a small Ceph implementation and wanted to know the release date for Hammer. Will it coincide with the release of OpenStack? My conf (using 10G and jumbo frames on CentOS 7 / RHEL 7): 3x mons (VMs): CPU - 2, Memory - 4G, Storage - 20 GB; 4x OSDs: CPU - Haswell Xeon, Memory - 8

Re: [ceph-users] Performance data collection for Ceph

2014-11-17 Thread 10 minus
[mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *10 minus *Sent:* Friday, November 14, 2014 10:26 AM *To:* ceph-users *Subject:* [ceph-users] Performance data collection for Ceph Hi, I'm trying to collect performance data for Ceph. I'm looking to run some commands on a regular

[ceph-users] firefly osds stuck in state booting

2014-07-26 Thread 10 minus
Hi, I just set up a test ceph installation on three CentOS 6.5 nodes. Two of the nodes are used for hosting OSDs and the third acts as the mon. Please note I'm using LVM, so I had to set up the OSDs using the manual install guide. --snip-- ceph -s cluster 2929fa80-0841-4cb6-a133-90b2098fc802

[ceph-users] openstack volume to image

2014-05-29 Thread 10 minus
Hi, My cinder backend storage is ceph. Is there a mechanism to convert a booted instance (volume) into an image? Cheers
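One documented route for a boot-from-volume instance is cinder's upload-to-image, which pushes the volume's contents to glance. A sketch (the volume id and image name are placeholders; the volume should be detached or the instance stopped first):

```shell
# Upload a cinder volume to glance as a new image.
cinder upload-to-image <volume-id> my-instance-image
# Poll until the resulting glance image leaves the "queued"/"saving" state.
glance image-list
```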

[ceph-users] ceph cinder compute-nodes

2014-05-29 Thread 10 minus
Hi, Thanks Travis. I was following the RDO documentation on how to deploy Ceph instead of the Ceph documentation. Once I read the Ceph documentation on it, it was clear. Cheers

Re: [ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-24 Thread 10 minus
v121: 492 pgs, 6 pools, 0 bytes data, 0 objects 80636 kB used, 928 GB / 928 GB avail 492 active+clean --snip-- Can I pass these values via ceph.conf? On Wed, May 21, 2014 at 4:05 PM, 10 minus t10te...@gmail.com wrote: Hi, I have just started to dabble
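Pool-creation defaults can indeed be set in ceph.conf; they apply to pools created after the setting is in place, not retroactively. A sketch (the values are examples, not recommendations):

```ini
[global]
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default size = 2
```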

[ceph-users] ceph cinder compute-nodes

2014-05-24 Thread 10 minus
Hi, I went through the docs for setting up cinder with ceph. From the docs, I have to perform virsh secret-define --file secret.xml on every compute node. The issue I see is that I have to perform this on 5 compute nodes, while cinder expects to have only one rbd_secret_uuid= uuid as
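The resolution is that the uuid in secret.xml can, and should, be the same on every compute node, so a single rbd_secret_uuid in cinder.conf matches all five. A sketch using the example uuid from the Ceph/OpenStack docs (the keyring filename is a placeholder):

```shell
# Same fixed uuid on every compute node.
UUID=457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# Run on each of the 5 compute nodes:
virsh secret-define --file secret.xml
virsh secret-set-value --secret "$UUID" --base64 "$(cat client.cinder.key)"
```

With the uuid pinned like this, cinder.conf needs only one rbd_secret_uuid entry regardless of how many compute nodes there are.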

[ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-21 Thread 10 minus
Hi, I have just started to dabble with ceph - went through the docs http://ceph.com/howto/deploying-ceph-with-ceph-deploy/ I have a 3-node setup with 2 nodes for OSDs. I use the ceph-deploy mechanism. The ceph init scripts expect the cluster conf to be ceph.conf. If I give any other name, the init