On 07.06.2013, at 16:57, Stefan Priebe wrote:
> On 07.06.2013 16:31, Sage Weil wrote:
>> On Fri, 7 Jun 2013, Oliver Schulz wrote:
>
>> Btrfs is the longer-term plan, but we haven't done as much testing there
>> yet, and in particular, there is a bug in 3.9 that is triggered by a
>> power-cycle
Yes, I have changed them on all monitors (3).
Is reading the nearfull value from 'ceph pg dump' the correct way to view it?
- WP
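As an illustration of pulling those values out of the dump header, here is a minimal Python sketch (not part of Ceph; the two-field `full_ratio` / `nearfull_ratio` lines are taken from the `ceph pg dump | head` output quoted elsewhere in this thread):

```python
def parse_ratios(pg_dump_header: str) -> dict:
    """Extract full_ratio / nearfull_ratio from a `ceph pg dump` header."""
    ratios = {}
    for line in pg_dump_header.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] in ("full_ratio", "nearfull_ratio"):
            ratios[parts[0]] = float(parts[1])
    return ratios

# Sample header lines as printed by `ceph pg dump | head`:
sample = "full_ratio 0.95\nnearfull_ratio 0.85"
print(parse_ratios(sample))  # {'full_ratio': 0.95, 'nearfull_ratio': 0.85}
```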
On Fri, Aug 2, 2013 at 12:04 AM, Joao Eduardo Luis wrote:
> On 08/01/2013 12:53 PM, YIP Wai Peng wrote:
>
>> Hi all,
>>
>> I am trying to change the mon osd nearfull / fu
All software has lots of teeny tiny usability issues. The ones that never
really get in your way but make everything a bit more difficult to do. We
call them “paper cuts”, and Ceph has a few of them.
We’d like to start collecting a list of the “paper cuts” that make
Ceph hard to learn and use.
Doh! I left out the most useful information... :)
The session will be Monday at 4:30pm California time. See the schedule
here: http://ceph.com/cds
Ross
On Aug 2, 2013, at 1:04 AM, Ross Turk wrote:
> Hello! I wanted to tell you a bit about a session that we’ve put on the
> schedule for CDS n
Hello! I wanted to tell you a bit about a session that we’ve put on the
schedule for CDS next week: ceph-deploy.
During this session, we’ll be gathering to talk about the recent work on
ceph-deploy and discuss what happens next. If you’re a user of
ceph-deploy and have thoughts about how it cou
Hi everyone! We have finalized the schedule for next week's Ceph Developer
Summit. The schedule is available on the CDS page, along with links to the
blueprints we'll be discussing:
http://ceph.com/cds
The Summit is happening next Monday and Tuesday. On Monday, the sessions
will run from 3pm
Thank you Joao,
I'll get you any information you need.
I can tell you that I've restarted the mon a few times and it does seem to change
disk usage.
I just ran 'ceph-mon -i 2 --compact' on my monitor; I'll see how that looks in the
morning.
On 08/02/2013 12:15 AM, Jeppesen, Nelson wrote:
Thanks for the reply, but how can I fix this without an outage?
I tried adding 'mon compact on start = true' but the monitor just hung.
Unfortunately this is a production cluster and can't take the outages (I'm
assuming the cluster will fail without a monitor). I had three monitors I was
hit wi
220GB is way, way too big. I suspect your monitors need to go through a
successful leveldb compaction. The early releases of Cuttlefish suffered
several issues with store.db growing unbounded. Most were fixed by
0.61.5, I believe.
You may have luck stopping all Ceph daemons, then starting the m
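For reference, the compaction option referred to in this thread is set in ceph.conf; a minimal sketch (placing it in the [mon] section is my assumption):

```ini
[mon]
    ; Compact the monitor's leveldb store each time the daemon starts.
    ; Note: compaction of a large store can take a long time, so the
    ; monitor may appear to hang during startup while it runs.
    mon compact on start = true
```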
My Mon store.db has been at 220GB for a few months now. Why is this and how can
I fix it? I have one monitor in this cluster and I suspect that I can't add
monitors to the cluster because it is too big. Thank you.
Hi,
I think the high CPU usage was due to the system time not being right. I
activated ntp and it had to make quite a big adjustment, and after that the
high CPU usage was gone.
Anyway, I immediately ran into another issue. I ran a simple benchmark:
# rados bench --pool benchmark 300 write --no-clean
Yes, use the async one. We will be getting rid of the non-async one soon.
The default Qemu packages in 6.4 don't link to librbd, whereas these
custom packages have been built explicitly with the link.
Hoping to announce some positive news soon about why this should become
much simpler in 6.5 and
We've tagged and pushed out packages for another release candidate for
Dumpling. At this point things are looking very good. There are a few
odds and ends with the CLI changes but the core ceph functionality is
looking quite stable. Please test!
Packages are available in the -testing repos:
Greg,
Thanks for the hints. I looked through the logs and found OSD's
with RETRY's. I marked those "out" (marked in orange) and let ceph
rebalance. Then I ran the bench command.
I now have many more errors than before :-(.
health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 151
On 07/26/2013 08:59 PM, John Wilkins wrote:
(d) If you have three monitors, Paxos will still work. 2 out of 3
monitors is a majority. A failure of a monitor means it's down, but
not out. If it were out of the cluster, then the cluster would assume
only two monitors, which wouldn't work with Paxos
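The majority arithmetic described above can be sketched as follows (an illustrative calculation, not Ceph code):

```python
def has_quorum(total_monitors: int, up_monitors: int) -> bool:
    """Paxos requires a strict majority of the full monitor set."""
    return up_monitors > total_monitors // 2

# 2 of 3 monitors form a majority, so the cluster keeps running.
assert has_quorum(3, 2)
# 1 of 3 is not a majority; the monitors cannot reach agreement.
assert not has_quorum(3, 1)
# With an even count of 4, losing two loses quorum (2 > 2 is false).
assert not has_quorum(4, 2)
```

This is also why an even number of monitors buys no extra failure tolerance over the next-lower odd number.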
Neil,
What's the difference between your custom qemu packages and the official ones?
There are two kinds of packages in it:
qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64.rpm
qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.x86_64.rpm
What's the difference between them? Does the "async" versi
Hi Mathias,
Have you tried these steps while sticking to the slow interfaces? I would be
curious to see if this is just a problem of how those interfaces are able
to talk to each other.
> *From:* Mathias Lindberg
> *Date:* August 1, 2013, 4:01:38 MDT
> *To:* "ceph-users@lists.ceph.com"
> *Subj
Hi,
I've opened a pull request with some additional fixes for this issue:
https://github.com/ceph/ceph/pull/478
Danny
On 30.07.2013 09:53, Erik Logtenberg wrote:
> Hi,
>
> This patch adds two BuildRequires to the ceph.spec file that are needed
> to build the RPMs under Fedora. Danny Al-Gaaf
Thanks! Neil.
If cephfs and krbd are not used, does the default kernel work well with only
QEMU/KVM/librbd? AFAIK, librbd doesn't have a dependency on the kernel version,
right?
-- Original --
From: "Neil Levine";
Date: Thu, Aug 1, 2013 11:13 AM
To: "Da Chun"
Hi all,
I am trying to change the mon osd nearfull / full ratio. Currently, my
settings are these:
# ceph pg dump | head
full_ratio 0.95
nearfull_ratio 0.85
I edited the ceph.conf file and added the configuration options, following
instructions at
http://ceph.com/docs/master/rados/configuration/mon
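For reference, the options being edited would look like this in ceph.conf (section placement is my assumption, and whether a running cluster picks them up after a restart is exactly the open question in this thread):

```ini
[global]
    ; These values mirror the full_ratio / nearfull_ratio defaults shown
    ; by `ceph pg dump` above; adjust as needed.
    mon osd full ratio = .95
    mon osd nearfull ratio = .85
```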
Hi
Having previously had problems during startup with "creating keys" (otherwise a
working setup) on one node when using mkcephfs, I have given ceph-deploy a try
and get stuck on what feels like the same step.
Ceph version is 0.61.7 and OS is CentOS 6.4.
The steps I have done are:
#ceph-deploy new