Hi,
I'm trying to reach 4096 PGs as suggested in the docs, but I
can't get more than 32 PGs per OSD. I suspect this is caused by the
default of 6 pg bits (2^5 = 32, the first bit being for 2^0). Is there a
command to increase it once the OSDs have been linked to the cluster and
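For an existing pool, the PG count can be raised at runtime regardless
of the pg-bits default. A minimal sketch, assuming a pool named "rbd"
and a target of 4096 PGs:

    ceph osd pool set rbd pg_num 4096
    # raise pgp_num as well, or data will not rebalance onto the new PGs
    ceph osd pool set rbd pgp_num 4096

Note that pg_num can only ever be increased, never decreased.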
Hi,
I've been looking into increasing the performance of my ceph cluster for
openstack, which will be moved into production soon. It's a full 1 TB SSD
cluster with 16 OSDs per node across 6 nodes.
As I searched for possible tweaks to implement, I stumbled upon
unitedstack's presentation at the
with fiemap set to true?
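The fiemap tweak referenced above is a filestore option. A hedged
sketch of how it would be enabled in ceph.conf (the option exists, but
fiemap has a history of kernel-level bugs, so test it carefully before
production use):

    [osd]
    filestore fiemap = true

OSDs need a restart to pick this up; injectargs alone may not be
sufficient for filestore options.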
On 2/2/2015 10:47 AM, Haomai Wang wrote:
There is a more recent discussion in this
PR (https://github.com/ceph/ceph/pull/1665).
On Mon, Feb 2, 2015 at 11:05 PM, J-P Methot jpmet...@gtcomm.net wrote:
Hi,
I've been looking into increasing the performance of my ceph cluster
Hi,
I tried to add a caching pool in front of openstack vms and volumes
pools. I believed that the process would be transparent, but as soon as I set
the caching for both of these pools, the VMs could not find their
volumes anymore. Obviously when I undid my changes, everything went back
to
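For context, the usual writeback cache-tier wiring looks like the
sketch below, with hypothetical pool names "volumes" (backing) and
"volumes-cache" (cache); on hammer the cache pool also needs a hit_set
configured:

    ceph osd tier add volumes volumes-cache
    ceph osd tier cache-mode volumes-cache writeback
    ceph osd tier set-overlay volumes volumes-cache
    # hammer wants a hit_set on the cache pool
    ceph osd pool set volumes-cache hit_set_type bloom
    ceph osd pool set volumes-cache hit_set_count 1
    ceph osd pool set volumes-cache hit_set_period 3600

One plausible cause of the symptom described above, worth checking: the
cinder/nova cephx keys must grant rwx on the cache pool as well,
otherwise clients lose access to their volumes the moment the overlay
is set.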
Hi,
I'm having an issue quite similar to this old bug:
http://tracker.ceph.com/issues/5194, except that I'm using centos 6.
Basically, I setup a cluster using ceph-deploy to save some time (this
is a 90+ OSD cluster). I rebooted a node earlier today and now all the
drives are unmounted and
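Assuming the OSDs were prepared with ceph-deploy (i.e. ceph-disk GPT
partitions), re-triggering activation after the reboot is usually
enough; a sketch:

    # re-activate every prepared-but-inactive OSD partition
    ceph-disk activate-all
    # or, if that subcommand is unavailable, per partition:
    ceph-disk activate /dev/sdb1

/dev/sdb1 is a placeholder for whichever data partitions the node holds.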
Hi,
We've set up ceph and openstack on a fairly peculiar network
configuration (or at least I think it is) and I'm looking for
information on how to make it work properly.
Basically, we have 3 networks, a management network, a storage network
and a cluster network. The management network is
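Ceph itself only distinguishes two networks: a public network (where
monitors and clients talk to OSDs) and a cluster network (OSD
replication and heartbeat traffic). The management network does not
need to appear in ceph.conf at all. A sketch with hypothetical subnets:

    [global]
    public network = 10.0.1.0/24
    cluster network = 10.0.2.0/24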
I had to go through the same experience of changing the public network
address and it's not easy. Ceph seems to keep a record of which IP
address and port are associated with each OSD process. I was never
able to find out where this record is kept or how to change it
manually.
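The record being described lives in the OSDMap: every OSD registers its
current IP and port with the monitors when it boots, so it is not meant
to be edited by hand; after fixing ceph.conf, restarting the OSDs makes
them re-register with their new addresses. To inspect what is
registered:

    # each line shows the addresses an OSD registered with the monitors
    ceph osd dump | grep "^osd\."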
let me do that. Additionally, since the compute component also
has its own ceph pool, I'm pretty sure it won't let me migrate the data
through openstack.
On 3/26/2015 3:54 PM, Steffen W Sørensen wrote:
On 26/03/2015, at 20.38, J-P Methot jpmet...@gtcomm.net wrote:
Lately I've been going back
Hi,
Lately I've been going back to work on one of my first ceph setups and
now I see that I have created way too many placement groups for the
pools on that setup (about 10 000 too many). I believe this may impact
performance negatively, as the performance of this ceph cluster is
abysmal.
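Since pg_num can only be increased, the usual way out is to copy the
data into a new pool created with a sane PG count, then swap the names.
A rough sketch with a hypothetical pool "volumes"; note that rados
cppool does not preserve snapshots, so clients should be stopped first:

    ceph osd pool create volumes-new 512 512
    rados cppool volumes volumes-new
    ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
    ceph osd pool rename volumes-new volumes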
*From: *Eneko Lacunza elacu...@binovo.es
*To: *J-P Methot jpmet...@gtcomm.net, Christian Balzer
ch...@gol.com, ceph-users@lists.ceph.com
*Sent: *Tuesday, 21 April, 2015 8:18:20 AM
*Subject: *Re: [ceph-users] Possible improvements for a slow write
speed (excluding
ratio 1:1.
The replication size is 3, yes. The pools are replicated.
On 4/20/2015 10:43 AM, Barclay Jameson wrote:
Are your journals on separate disks? What is your ratio of journal
disks to data disks? Are you doing replication size 3?
On Mon, Apr 20, 2015 at 9:30 AM, J-P Methot jpmet
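Both questions can be answered quickly from the cluster itself;
assuming the default pool name "rbd" and OSD id 0:

    # replication size of a pool
    ceph osd pool get rbd size
    # if the journal is on a separate disk, this is a symlink to that device
    ls -l /var/lib/ceph/osd/ceph-0/journal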
Hi,
This is similar to another thread running right now, but since our
current setup is completely different from the one described in the
other thread, I thought it may be better to start a new one.
We are running Ceph Firefly 0.80.8 (soon to be upgraded to 0.80.9). We
have 6 OSD hosts
Case in point, here's a little story as to why backup outside ceph is
necessary:
I was working on modifying journal locations for a running test ceph
cluster when, after bringing back a few OSD nodes, two PGs started being
marked as incomplete. That made all operations on the pool hang as,
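For diagnosing that state, assuming a hypothetical PG id of 3.1f:

    ceph health detail        # lists which PGs are incomplete and why
    ceph pg 3.1f query        # detailed peering state of one PG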
op priority = 1
If you are concerned about *recovery performance*, you may want to bump this
up, but I doubt it will help much over the default settings.
Thanks & Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of J-P
Methot
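The option referred to above is osd_recovery_op_priority. It can be
bumped at runtime across all OSDs; a sketch (values range from 1 to 63,
higher means recovery ops get more priority relative to client ops):

    ceph tell osd.* injectargs '--osd_recovery_op_priority 10'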
Hi,
Our setup is currently comprised of 5 OSD nodes with 12 OSDs each, for a
total of 60 OSDs. All of these are SSDs, with 4 SSD journals on each node.
The ceph version is hammer v0.94.1. There is a performance overhead because
we're using SSDs (I've heard it gets better in infernalis, but we're not
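When sizing SSD journals it is worth measuring the drive's synchronous
write behavior directly, since ceph journals write with O_DSYNC. A
hedged fio sketch; /dev/sdX is a placeholder and this WILL overwrite
data on it, so only run it against an unused device:

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --runtime=60 --time_based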
if scrubbing started in the cluster or not. That may
considerably slow down the cluster.
-Original Message-
From: Somnath Roy
Sent: Wednesday, August 19, 2015 1:35 PM
To: 'J-P Methot'; ceph-us...@ceph.com
Subject: RE: [ceph-users] Bad performances in recovery
All the writes will go through the journal.
It may happen your
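To check for scrubbing and keep it out of the way while recovery
finishes:

    ceph -s                      # shows active scrubs in the status output
    ceph osd set noscrub         # pause scrubbing...
    ceph osd set nodeep-scrub
    ceph osd unset noscrub       # ...and re-enable it afterwards
    ceph osd unset nodeep-scrub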
Hi,
We're using Ceph Hammer 0.94.1 on centOS 7. On the monitor, when we set
log_to_syslog = true
Ceph starts shooting logs at stdout. At first I thought rsyslog might be
misconfigured, but I did not find a rule that could explain this
behavior.
Can anybody else replicate this? If
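One workaround sometimes suggested, and worth testing on Hammer since
it is an assumption rather than a confirmed fix: explicitly null the
file loggers so syslog is the only destination:

    [global]
    log_to_syslog = true
    err_to_syslog = true
    log file = /dev/null
    mon cluster log file = /dev/null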
Hi,
We've been considering periodically backing up rbds from ceph to a
different storage backend, just in case. I've thought of a few ways this
could be possible, but I am curious if anybody on this list is currently
doing that.
Are you currently backing up data that is contained in ceph? What
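One approach that fits "a different storage backend" is snapshot-based
export with incremental diffs; a sketch with hypothetical pool and
image names:

    # initial full export
    rbd snap create volumes/myimage@base
    rbd export volumes/myimage@base /backup/myimage-base.img
    # later, ship only the delta since the base snapshot
    rbd snap create volumes/myimage@daily1
    rbd export-diff --from-snap base volumes/myimage@daily1 /backup/myimage-daily1.diff

The diff files can be replayed elsewhere with rbd import-diff.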
The Ceph package is 0.94.5, which is hammer. So yes, it could very well be
this bug. Must I assume, then, that it only affects rbd bench and not the
general functionality of the client?
On 2016-01-25 1:59 PM, Jason Dillaman wrote:
> What release are you testing? You might be hitting this issue [1]
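For reference, the benchmark in question is invoked on hammer roughly
like this (image name hypothetical):

    rbd bench-write testimage --io-size 4096 --io-threads 16 --io-pattern rand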
Hi,
We've run into a weird issue on our current test setup. We're currently
testing a small low-cost Ceph setup, with SATA drives, 1 Gbps ethernet
and one Intel SSD for journaling per host. We've linked this to an
openstack setup. Ceph is the latest Hammer release.
We notice that when we do rbd