* Between 10 and 50 OSDs set pg_num to 4096
* If you have more than 50 OSDs, you need to understand the tradeoffs
and how to calculate the pg_num value by yourself
--snip--
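For what it's worth, the back-of-the-envelope formula behind those numbers (a sketch, assuming 3x replication and the commonly quoted target of roughly 100 PGs per OSD) is:

# total PGs = (number of OSDs * 100) / replica count, rounded up to a power of two
# e.g. 12 OSDs with size=3: (12 * 100) / 3 = 400, so use 512
ceph osd pool create mypool 512 512    # pg_num then pgp_num; 'mypool' is just an example name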
On Mon, Aug 31, 2015 at 10:31 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Mon, Aug 31, 2015 at 8:30 AM, 10 minus wrote:
Hi,
I'm in the process of upgrading my ceph cluster from Firefly to Hammer.
The ceph cluster has 12 OSDs spread across 4 nodes.
The mons have been upgraded to Hammer. Since I created the pools with pg_num
values of 512 and 256, I am a bit confused by the warning message.
--snip--
ceph -s
cluster
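Assuming the warning is the Hammer-era check on the number of PGs per OSD (a guess on my part, given the pool sizes), a quick way to see where the numbers come from is to list pg_num per pool and compare against the OSD count:

ceph osd dump | grep pg_num         # shows pg_num / pgp_num for every pool
ceph osd pool get rbd pg_num        # or query a single pool; 'rbd' is just an example
# rough PGs per OSD = sum(pg_num * pool size) / number of OSDs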
Hi,
We got a good deal on the 843T and we are using it in our OpenStack setup as
journals.
They have been running for the last six months with no issues.
When we compared them with Intel SSDs (I think it was the 3700), they were a
shade slower for our workload and considerably cheaper.
We did not run any
Hi,
As Christian has mentioned, a bit more detailed information would do us
good.
We had explored CephFS, but performance was an issue vis-a-vis ZFS when we
tested (more than a year back), so we did not get into the details.
I will let the CephFS experts chip in here on the present state of CephFS.
Hi,
I am in the process of setting up radosgw for a Firefly ceph cluster.
I have followed the docs for creating an alternate region, region map and
zones.
Now I want to delete the default region.
Is it possible to do that?
Also, I'm not able to promote my new region region1 as the default region.
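For the archives, the promotion step that should cover the second part (a sketch based on the federated-config docs; region1 and client.radosgw.gateway are example names, adjust --name to your gateway instance):

radosgw-admin region default --rgw-region=region1 --name client.radosgw.gateway
radosgw-admin regionmap update --name client.radosgw.gateway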
Hi,
Is there a recommended way of powering down a Ceph cluster and bringing it
back up?
I have looked through the docs and cannot find anything about it.
Thanks in advance
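For the record, the procedure that is usually suggested (a sketch; it assumes client I/O has been stopped first) is to flag the cluster so it does not start recovering while nodes are down, power off OSD nodes then monitors, and reverse the order on the way back up:

ceph osd set noout       # stop the cluster from rebalancing while OSDs are down
# power off the OSD nodes, then the monitor nodes
# on power-up: start the mons first, then the OSD nodes, then
ceph osd unset noout
ceph -s                  # wait until all PGs are active+clean again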
Hi,
I have an issue with my ceph cluster where, by accident, two nodes have been
recreated.
ceph osd tree
# id   weight  type name       up/down reweight
-1 14.56 root default
-6 14.56 datacenter dc1
-7 14.56 row row1
-9 14.56
rack
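When nodes get recreated like this, the stale OSD entries normally have to be removed from the CRUSH map and the auth database before the rebuilt OSDs can take their place. A sketch of the cleanup for one stale entry (osd.3 is just an example id):

ceph osd out osd.3            # mark it out if it is still marked in
ceph osd crush remove osd.3   # drop it from the CRUSH map
ceph auth del osd.3           # remove its cephx key
ceph osd rm 3                 # remove the OSD id itself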
Hi,
I'm just starting on a small Ceph implementation and wanted to know the
release date for Hammer.
Will it coincide with the release of OpenStack?
My config (using 10G and jumbo frames on CentOS 7 / RHEL 7):
3x Mons (VMs) :
CPU - 2
Memory - 4G
Storage - 20 GB
4x OSDs :
CPU - Haswell Xeon
Memory - 8
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 10 minus
Sent: Friday, November 14, 2014 10:26 AM
To: ceph-users
Subject: [ceph-users] Performance data collection for Ceph
Hi,
I'm trying to collect performance data for Ceph.
I'm looking to run some commands on a regular
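The commands that are typically polled for this kind of collection (a sketch; osd.0 and the pool name are just examples):

ceph -s                          # overall cluster state
ceph osd perf                    # per-OSD commit and apply latency
ceph daemon osd.0 perf dump      # detailed counters, run locally on the OSD host
rados bench -p rbd 10 write      # simple 10-second synthetic write test against the 'rbd' pool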
Hi,
I just set up a test ceph installation on 3 CentOS 6.5 nodes.
Two of the nodes are used for hosting OSDs and the third acts as a mon.
Please note I'm using LVM, so I had to set up the OSDs using the manual
install guide.
--snip--
ceph -s
cluster 2929fa80-0841-4cb6-a133-90b2098fc802
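For anyone else on LVM, the manual deployment guide boils down to roughly the following per OSD (a sketch, not the exact commands used here; /dev/vg0/osd0 and host=node1 are example values, and the host bucket must already exist in the CRUSH map):

OSD_ID=$(ceph osd create)                          # allocate an OSD id
mkfs -t xfs /dev/vg0/osd0                          # the LVM logical volume backing this OSD
mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
mount /dev/vg0/osd0 /var/lib/ceph/osd/ceph-$OSD_ID
ceph-osd -i $OSD_ID --mkfs --mkkey                 # initialise the data dir and cephx key
ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
ceph osd crush add osd.$OSD_ID 1.0 host=node1      # weight 1.0 under host node1
service ceph start osd.$OSD_ID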
Hi,
My Cinder backend storage is Ceph. Is there a mechanism to convert a
booted instance (volume) into an image?
Cheers
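For the archives: the usual Cinder-level answer (a sketch; the volume and image names are examples, and the instance should be shut down first unless you force it) is to upload the volume to Glance:

cinder upload-to-image myvolume myvolume-image                 # creates a Glance image from the volume
cinder upload-to-image --force True myvolume myvolume-image    # variant if the volume is still attached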
Hi,
Thanks Travis. I was following the RDO documentation on how to deploy Ceph
instead of the Ceph documentation. Once I read the Ceph documentation on it,
it was clear.
Cheers
v121: 492 pgs, 6 pools, 0 bytes data, 0 objects
80636 kB used, 928 GB / 928 GB avail
492 active+clean
--snip--
Can I pass these values via ceph.conf?
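They can be set as cluster-wide defaults in ceph.conf, so that pools created afterwards pick them up (a sketch; 512 is just the example value). Existing pools still have to be changed with ceph osd pool set <pool> pg_num <n>.

[global]
osd pool default pg num = 512
osd pool default pgp num = 512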
On Wed, May 21, 2014 at 4:05 PM, 10 minus <t10te...@gmail.com> wrote:
Hi,
I have just started to dabble
Hi,
I went through the docs for setting up Cinder with Ceph.
From the docs, I have to perform on every compute node:
virsh secret-define --file secret.xml
The issue I see is that I have to perform this on 5 compute nodes, while
Cinder expects to have only one
rbd_secret_uuid = uuid
as
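The way this is usually handled (a sketch following the Ceph/OpenStack integration docs; the UUID below is only an example, generate one with uuidgen and reuse the same value everywhere) is to define the secret with the same fixed UUID on every compute node, so a single rbd_secret_uuid in cinder.conf matches them all:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
# then in cinder.conf: rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337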
Hi,
I have just started to dabble with ceph and went through the docs:
http://ceph.com/howto/deploying-ceph-with-ceph-deploy/
I have a 3-node setup with 2 nodes for OSDs.
I use the ceph-deploy mechanism.
The ceph init scripts expect the cluster conf to be ceph.conf. If I
give any other name, the init
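ceph-deploy itself does take a non-default cluster name (a sketch; 'mycluster' and the hostnames are examples), the catch being that the stock init scripts assume /etc/ceph/ceph.conf, so the conf file has to be pointed at explicitly:

ceph-deploy --cluster mycluster new node1 node2 node3    # writes mycluster.conf instead of ceph.conf
/etc/init.d/ceph -c /etc/ceph/mycluster.conf start       # if I recall correctly, the sysvinit script accepts -c
# sticking with the default cluster name 'ceph' avoids the issue entirely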