Hello.
By default, Ceph moves an OSD node into the down status after receiving 3
reports about failed nodes. Reports are sent every osd heartbeat grace
seconds, but the settings mon_osd_adjust_heartbeat_grace = true and
mon_osd_adjust_down_out_interval = true adjust the timeout for moving nodes into down
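(For reference, and only as a hedged sketch: the option names below are what I
believe these knobs are called in firefly/hammer-era Ceph, and the values are
illustrative, not recommendations. Please double-check them against your
version's documentation.)

  [global]
      # failure reports the monitor wants before marking an OSD down (assumed name)
      mon osd min down reports = 3
      # let the monitor scale the heartbeat grace for OSDs with a laggy history
      mon osd adjust heartbeat grace = true
      # let the monitor scale the down -> out interval the same way
      mon osd adjust down out interval = true
      # seconds without a heartbeat before a peer reports an OSD as failed
      osd heartbeat grace = 20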
Hi Loic,
Nope, only the versions from 0.81-trusty to 0.93-1trusty are available
in http://ceph.com/debian-testing/pool/main/c/ceph/
But the firefly deb source package for 0.80.9-1trusty is not available :(
Cheers,
Valery
On 11/03/15 14:11, Loic Dachary wrote:
Hi Valery,
They should be
Kevin,
This is a known issue and should be fixed in the latest krbd. The problem is,
it is not backported to the 14.04 krbd yet. You need to build it from the latest
krbd source if you want to stick with 14.04.
The workaround is that you need to unmap your clients before restarting the OSDs.
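(If it helps, a minimal sketch of that workaround on the client side; the
device path and image name are placeholders, not taken from your setup:)

  rbd showmapped              # list the currently mapped RBD devices
  rbd unmap /dev/rbd0         # unmap each device before the OSDs are restarted
  # ... restart the OSDs on the cluster side ...
  rbd map <pool>/<image>      # map the image again afterwards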
Thanks & Regards
I'm not sure if it's something I'm doing wrong or just an oddity I'm
experiencing, but when my cache tier flushes dirty blocks out to the base tier,
the writes seem to hit the OSDs straight away instead of coalescing in the
journals. Is this correct?
For example, if I create an RBD on a standard 3-way
On 11/03/2015, at 15.31, Wido den Hollander w...@42on.com wrote:
On 03/11/2015 03:23 PM, Jimmy Goffaux wrote:
Hello All,
I have been using Ceph in production for several months, but I have errors with
the Ceph Rados Gateway for multiple users.
I am faced with the following error:
Error trying to
On 11/03/2015, at 08.19, Steffen W Sørensen ste...@me.com wrote:
On 10/03/2015, at 23.31, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
What kind of application is that?
Commercial Email platform from Openwave.com
Maybe it could be worked around using an apache rewrite rule. In any case,
Hi,
I am trying Hammer 0.93 on Ubuntu 14.04.
An RBD is mapped on the client, which is also Ubuntu 14.04.
When I did a stop ceph-osd-all and then a start, the client machine crashed and
the attached picture was on the console. Not sure if it's related to Ceph.
Thanks
Thanks, I will follow this workaround.
On Thu, Mar 12, 2015 at 12:18 AM, Somnath Roy somnath@sandisk.com
wrote:
Kevin,
This is a known issue and should be fixed in the latest krbd. The problem
is, it is not backported to the 14.04 krbd yet. You need to build it from
the latest krbd source if
For each of those pgs, you'll need to identify the pg copy you want to
be the winner and either
1) Remove all of the other ones using ceph-objectstore-tool and
hopefully the winner you left alone will allow the pg to recover and go
active.
2) Export the winner using ceph-objectstore-tool, use
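(Roughly, and only as a sketch since the exact invocation depends on your
layout: with the OSD daemon stopped, the export/remove steps look something
like the following; the OSD data paths and the pgid 1.28 are placeholders.)

  # export the copy you want to keep (run with the OSD daemon stopped)
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 \
      --journal-path /var/lib/ceph/osd/ceph-2/journal \
      --pgid 1.28 --op export --file /tmp/pg.1.28.export
  # remove an unwanted copy of the same pg from another OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5 \
      --journal-path /var/lib/ceph/osd/ceph-5/journal \
      --pgid 1.28 --op remove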
Hi,
I was always in the same situation: I couldn't remove an OSD without
having some PGs permanently stuck in the active+remapped state.
But I remembered reading on IRC that, before marking an OSD out, it
can sometimes be a good idea to reweight it to 0. So, instead of
doing [1]:
ceph osd out
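(the idea, as I understand it, is to do something like the following instead;
<id> is a placeholder:)

  # drain the OSD first by dropping its CRUSH weight to 0
  ceph osd crush reweight osd.<id> 0
  # wait for the cluster to return to all PGs active+clean
  ceph -s
  # only then mark it out and proceed with removal
  ceph osd out <id>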
On Wed, Feb 18, 2015 at 9:19 PM, Florian Haas wrote:
Hey everyone,
I must confess I still don't fully understand this problem and
don't exactly know where to start digging deeper, but perhaps other
users have seen this and/or it rings a bell.
System info: Ceph giant on CentOS 7; approx.
On 12/03/2015, at 00.55, Jesus Chavez (jeschave) jesch...@cisco.com wrote:
Can anybody tell me a good blog link that explains how to add a monitor? I have
tried manually and also with ceph-deploy, without success =(
Dunno if these might help U:
Can anybody tell me a good blog link that explains how to add a monitor? I have
tried manually and also with ceph-deploy, without success =(
Help
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile:
I don’t know what is going on =( the system hangs with the message below after
the command "ceph-deploy osd --zap-disk create tauro:sdb"
[tauro][WARNING] No data was received after 300 seconds, disconnecting...
[ceph_deploy.osd][DEBUG ] Host tauro is now ready for osd use.
[ceph_deploy.conf][DEBUG
Can anybody tell me a good blog link that explains how to add a monitor? I have
tried manually and also with ceph-deploy, without success =(
Help
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile:
On 11/03/2015 05:44, Francois Lafont wrote:
PS: here is my conf.
[...]
I have this too:
~# ceph osd crush show-tunables
{ choose_local_tries: 0,
choose_local_fallback_tries: 0,
choose_total_tries: 50,
chooseleaf_descend_once: 1,
chooseleaf_vary_r: 0,
straw_calc_version: 1,
On 11/03/2015, at 08.19, Steffen W Sørensen ste...@me.com wrote:
On 10/03/2015, at 23.31, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
What kind of application is that?
Commercial Email platform from Openwave.com
Maybe it could be worked around using an apache rewrite rule. In any
On Wed, Mar 11, 2015 at 1:21 PM, LOPEZ Jean-Charles jelo...@redhat.com wrote:
Hi Florent
What are the « rules » for stripe_unit / object_size? -> stripe_unit *
stripe_count = object_size
So in your case set stripe_unit = 2
JC
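(To make the arithmetic concrete: with a 4 MB object size and stripe_count = 2,
stripe_unit comes out to 2 MB. The numbers and the CephFS layout xattr names
below are an illustrative sketch, not Florent's actual values.)

  # stripe_unit * stripe_count = object_size
  # 2097152 (2 MB) * 2 = 4194304 (4 MB)
  setfattr -n ceph.dir.layout.stripe_unit  -v 2097152 /mnt/cephfs/mydir
  setfattr -n ceph.dir.layout.stripe_count -v 2       /mnt/cephfs/mydir
  setfattr -n ceph.dir.layout.object_size  -v 4194304 /mnt/cephfs/mydir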
On 11 Mar 2015, at 19:59, Florent B flor...@coppint.com wrote:
Hi Stefan,
If the majority of your hot data fits on the cache tier you will see quite a
marked improvement in read performance and similar write performance
(assuming you would have had your hdds backed by SSD journals).
However for data that is not in the cache tier you will get 10-20% less
Hi Florent
What are the « rules » for stripe_unit / object_size? -> stripe_unit *
stripe_count = object_size
So in your case set stripe_unit = 2
JC
On 11 Mar 2015, at 19:59, Florent B flor...@coppint.com wrote:
Hi all,
I'm testing CephFS with Giant and I have a problem when I set these
For clarity too, I've tried dropping the min_size before as suggested;
it doesn't make a difference, unfortunately.
On Wed, Mar 11, 2015 at 9:50 AM, joel.merr...@gmail.com wrote:
Sure thing, n.b. I increased pg count to see if it would help. Alas not. :)
Thanks again!
Am 11.03.2015 um 11:17 schrieb Nick Fisk:
Hi Nick,
Am 11.03.2015 um 10:52 schrieb Nick Fisk:
Hi Stefan,
If the majority of your hot data fits on the cache tier you will see
quite a marked improvement in read performance
I don't have reads ;-) just around 5%. 95% are writes.
Check out ceph youtube page.
- Karan -
On 11 Mar 2015, at 00:45, Tom Deneau tom.den...@amd.com wrote:
Are the slides or videos from Ceph Day presentations made available
somewhere? I noticed some links for the Frankfurt Ceph Day, but not for the
other Ceph Days.
-- Tom
Hi Nick,
Am 11.03.2015 um 10:52 schrieb Nick Fisk:
Hi Stefan,
If the majority of your hot data fits on the cache tier you will see quite a
marked improvement in read performance
I don't have reads ;-) just around 5%. 95% are writes.
and similar write performance
(assuming you would have
Where can I find the debian trusty source package for v0.80.9?
Cheers,
Valery
On 10/03/15 20:34, Sage Weil wrote:
This is a bugfix release for firefly. It fixes a performance regression
in librbd, an important CRUSH misbehavior (see below), and several RGW
bugs. We have also backported
Thanks Sage
I will create a “new feature” request on tracker.ceph.com
(http://tracker.ceph.com/) so that this discussion does not get buried in the
mailing list.
Developers can implement this at their convenience.
Karan
Am 11.03.2015 um 11:17 schrieb Nick Fisk:
Hi Nick,
Am 11.03.2015 um 10:52 schrieb Nick Fisk:
Hi Stefan,
If the majority of your hot data fits on the cache tier you will see
quite a marked improvement in read performance
I don't have reads ;-) just around 5%. 95% are writes.
and
On 10/03/2015, at 23.31, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
What kind of application is that?
Commercial Email platform from Openwave.com
Maybe it could be worked around using an apache rewrite rule. In any case, I
opened issue #11091.
Okay, how, by rewriting the response?
Hi Sage,
On Tue, Mar 10, 2015 at 8:34 PM, Sage Weil sw...@redhat.com wrote:
Adjusting CRUSH maps
* This point release fixes several issues with CRUSH that trigger
excessive data migration when adjusting OSD weights. These are most
obvious when a very small weight
Hi Valery,
They should be here http://ceph.com/debian-testing/
Cheers
On 11/03/2015 10:07, Valery Tschopp wrote:
Where can I find the debian trusty source package for v0.80.9?
Cheers,
Valery
On 10/03/15 20:34, Sage Weil wrote:
This is a bugfix release for firefly. It fixes a
Ok, you lost all copies from an interval where the pgs went active. The
recovery from this is going to be complicated and fragile. Are the
pools valuable?
-Sam
On 03/11/2015 03:35 AM, joel.merr...@gmail.com wrote:
For clarity too, I've tried to drop the min_size before as suggested,
doesn't
On Wed, 11 Mar 2015, Stefan Priebe - Profihost AG wrote:
Hi Sage,
Am 11.03.2015 um 04:14 schrieb Sage Weil:
On Wed, 11 Mar 2015, Christian Balzer wrote:
On Tue, 10 Mar 2015 12:34:14 -0700 (PDT) Sage Weil wrote:
Adjusting CRUSH maps
* This point release fixes
On Wed, 11 Mar 2015, Gabri Mate wrote:
May I assume this fix will be in Hammer? So can I use this to fix my
cluster after upgrading Giant to Hammer?
Yes, the fix is also in Hammer, but the same procedure should be followed
to opt-in to the new behavior.
sage
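(For reference, my reading of the 0.80.9 release notes is that the opt-in is
roughly the following; please verify against the notes for your version before
running it, since it triggers data movement:)

  # opt in to the corrected straw bucket weight calculation
  ceph osd crush set-tunable straw_calc_version 1
  # then recompute all bucket weights so the fix actually takes effect
  ceph osd crush reweight-all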
On Wed, 11 Mar 2015, Dan van der Ster wrote:
Hi Sage,
On Tue, Mar 10, 2015 at 8:34 PM, Sage Weil sw...@redhat.com wrote:
Adjusting CRUSH maps
* This point release fixes several issues with CRUSH that trigger
excessive data migration when adjusting OSD weights.
On 03/11/2015 03:23 PM, Jimmy Goffaux wrote:
Hello All,
I have been using Ceph in production for several months, but I have errors with
the Ceph Rados Gateway for multiple users.
I am faced with the following error:
Error trying to create container 'xs02': 409 Conflict: BucketAlreadyExists
Which
I am really stuck adding a second monitor =(. ceph-deploy mon create seems to
finish with an error like "monitors may not be able to form quorum" and "they are
not defined in mon initial…"
I have found there is a way to get it working, which is running the following commands:
ceph mon add tauro
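(If it helps, the manual sequence I believe the docs describe around that
ceph mon add is roughly the following; the hostname tauro is reused from above
and the monitor IP is a placeholder:)

  # on the new monitor host
  mkdir -p /var/lib/ceph/mon/ceph-tauro
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i tauro --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  ceph mon add tauro <mon-ip>
  start ceph-mon id=tauro      # upstart on Ubuntu 14.04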
Hi,
May I assume this fix will be in Hammer? So can I use this to fix my
cluster after upgrading Giant to Hammer?
Best regards,
Mate
On 12:34 Tue 10 Mar , Sage Weil wrote:
This is a bugfix release for firefly. It fixes a performance regression
in librbd, an important CRUSH misbehavior
Sorry about all the unrelated grep issues..
So I've rebuilt and reinstalled and it's still broken.
On the working node, even with the new packages, everything works.
On the new broken node, I've added a mon and it works. But I still cannot start
an OSD on the new node.
What else do you need
Thanks Steffen, I have followed everything but I'm not sure what is going on. Are the mon
keyring and client admin keyring individual per mon host? Or do I need to copy them
from the first initial mon node?
Thanks again!
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52
I have a single radosgw user with 2 S3 keys and 1 Swift key. I have created a
few buckets and I can list all of the contents of buckets A and C, but not B, with
either S3 (boto) or python-swiftclient. I am able to list the first 1000
entries using radosgw-admin 'bucket list --bucket=bucketB'.
Anyone got any info on this?
Is it safe to delete shadow files?
On 2015-03-11 10:03, Ben wrote:
We have a large number of shadow files in our cluster that aren't
being deleted automatically as data is deleted.
Is it safe to delete these files?
Is there something we need to be aware of when