Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread Florian Haas
Hi Sage, sorry to be late to this thread; I just caught this one as I was reviewing the Giant release notes. A few questions below: On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote: [...] * ACLs: implemented, tested for kernel client. not implemented for ceph-fuse. [...]
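A quick way to exercise the kernel-client ACL support mentioned above, as a rough sketch (mount point, file and user names are made up; setfacl/getfacl are the standard Linux ACL tools):
  # grant an additional user read/write access on a file in a kernel-mounted CephFS
  setfacl -m u:alice:rw /mnt/cephfs/testfile
  # read the ACL back to confirm it was stored
  getfacl /mnt/cephfs/testfile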

Re: v0.86 released (Giant release candidate)

2014-10-10 Thread Florian Haas
Hi Sage, On Tue, Oct 7, 2014 at 9:20 PM, Sage Weil s...@inktank.com wrote: This is a release candidate for Giant, which will hopefully be out in another week or two (s v0.86). We did a feature freeze about a month ago and since then have been doing only stabilization and bug fixing (and a

Re: [ceph-users] Status of snapshots in CephFS

2014-09-24 Thread Florian Haas
On Fri, Sep 19, 2014 at 5:25 PM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Florian Haas wrote: Hello everyone, Just thought I'd circle back on some discussions I've had with people earlier in the year: Shortly before firefly, snapshot support for CephFS clients was effectively

Re: [ceph-users] Status of snapshots in CephFS

2014-09-24 Thread Florian Haas
On Fri, Sep 19, 2014 at 5:25 PM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Florian Haas wrote: Hello everyone, Just thought I'd circle back on some discussions I've had with people earlier in the year: Shortly before firefly, snapshot support for CephFS clients was effectively

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-24 Thread Florian Haas
On Wed, Sep 24, 2014 at 1:05 AM, Sage Weil sw...@redhat.com wrote: Sam and I discussed this on IRC and have, we think, two simpler patches that solve the problem more directly. See wip-9487. So I understand this makes Dan's patch (and the config parameter that it introduces) unnecessary, but is

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-23 Thread Florian Haas
On Mon, Sep 22, 2014 at 7:06 PM, Florian Haas flor...@hastexo.com wrote: On Sun, Sep 21, 2014 at 9:52 PM, Sage Weil sw...@redhat.com wrote: On Sun, 21 Sep 2014, Florian Haas wrote: So yes, I think your patch absolutely still has merit, as would any means of reducing the number of snapshots

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-22 Thread Florian Haas
On Sun, Sep 21, 2014 at 9:52 PM, Sage Weil sw...@redhat.com wrote: On Sun, 21 Sep 2014, Florian Haas wrote: So yes, I think your patch absolutely still has merit, as would any means of reducing the number of snapshots an OSD will trim in one go. As it is, the situation looks really really bad

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-21 Thread Florian Haas
On Sat, Sep 20, 2014 at 9:08 PM, Alphe Salas asa...@kepler.cl wrote: Real field testings and proof workout are better than any unit testing ... I would follow Dan's notice of resolution because it based on real problem and not fony style test ground. That statement is almost an insult to the

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-21 Thread Florian Haas
On Sun, Sep 21, 2014 at 4:26 PM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Florian, September 21 2014 3:33 PM, Florian Haas flor...@hastexo.com wrote: That said, I'm not sure that wip-9487-dumpling is the final fix to the issue. On the system where I am seeing the issue, even

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-19 Thread Florian Haas
On Fri, Sep 19, 2014 at 12:27 AM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Florian Haas wrote: Hi Sage, was the off-list reply intentional? Whoops! Nope :) On Thu, Sep 18, 2014 at 11:47 PM, Sage Weil sw...@redhat.com wrote: So, disaster is a pretty good description. Would

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-18 Thread Florian Haas
Hi Dan, saw the pull request, and can confirm your observations, at least partially. Comments inline. On Thu, Sep 18, 2014 at 2:50 PM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: Do I understand your issue report correctly in that you have found setting osd_snap_trim_sleep to be
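For reference, the tunable under discussion can be changed at runtime; a minimal sketch, assuming an admin keyring is available (the 0.05 value is purely illustrative, not a recommendation from this thread):
  # insert a small sleep between snapshot trim operations on every OSD
  ceph tell 'osd.*' injectargs '--osd_snap_trim_sleep 0.05'
  # or persist it in ceph.conf under [osd]:
  #   osd snap trim sleep = 0.05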

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-18 Thread Florian Haas
On Thu, Sep 18, 2014 at 8:56 PM, Mango Thirtyfour daniel.vanders...@cern.ch wrote: Hi Florian, On Sep 18, 2014 7:03 PM, Florian Haas flor...@hastexo.com wrote: Hi Dan, saw the pull request, and can confirm your observations, at least partially. Comments inline. On Thu, Sep 18, 2014 at 2

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-18 Thread Florian Haas
On Thu, Sep 18, 2014 at 9:12 PM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi, September 18 2014 9:03 PM, Florian Haas flor...@hastexo.com wrote: On Thu, Sep 18, 2014 at 8:56 PM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Florian, On Sep 18, 2014 7:03 PM, Florian Haas

Ceph Puppet modules (again)

2014-03-10 Thread Florian Haas
Hi, Somehow I'm thinking I'm opening a can of worms, but here it goes anyway. I saw some discussion about this here on this list last (Northern Hemisphere) autumn, but not much since. I'd like to ask for some clarification on the current state of the Ceph Puppet modules. Currently there are

Re: Ceph Puppet modules (again)

2014-03-10 Thread Florian Haas
On Mon, Mar 10, 2014 at 7:27 PM, Loic Dachary l...@dachary.org wrote: Hi Florian, New efforts should be directed to https://github.com/stackforge/puppet-ceph (mirrored at https://github.com/ceph/puppet-ceph) and evolving at

Re: github pull requests

2013-03-22 Thread Florian Haas
On Fri, Mar 22, 2013 at 12:15 AM, Gregory Farnum g...@inktank.com wrote: I'm not sure that we handle enough incoming yet that the extra process weight of something like Gerrit or Launchpad is necessary over Github. What are you looking for in that system which Github doesn't provide? -Greg

OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact

2013-01-14 Thread Florian Haas
Hi everyone, we ran into an interesting performance issue on Friday that we were able to troubleshoot with some help from Greg and Sam (thanks guys), and in the process realized that there's little guidance around for how to optimize performance in OSD nodes with lots of spinning disks (and
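For readers who have not set this up before, the basic shape of an SSD-backed journal configuration in ceph.conf looks roughly like this (device path and size are hypothetical, not figures from the thread):
  [osd.0]
      ; point the journal at a dedicated SSD partition instead of the spinner
      osd journal = /dev/sdb1
      ; journal size in MB
      osd journal size = 10240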

Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact

2013-01-14 Thread Florian Haas
Hi Tom, On Mon, Jan 14, 2013 at 2:28 PM, Tom Lanyon t...@netspot.com.au wrote: On 14/01/2013, at 10:47 PM, Florian Haas flor...@hastexo.com wrote: snip http://www.hastexo.com/resources/hints-and-kinks/solid-state-drives-and-ceph-osd-journals It's probably easiest to comment directly

Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact

2013-01-14 Thread Florian Haas
Hi Mark, thanks for the comments. On Mon, Jan 14, 2013 at 2:46 PM, Mark Nelson mark.nel...@inktank.com wrote: Hi Florian, Couple of comments: OSDs use a write-ahead mode for local operations: a write hits the journal first, and from there is then being copied into the backing filestore.

Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact

2013-01-14 Thread Florian Haas
On 01/14/2013 06:34 PM, Gregory Farnum wrote: On Mon, Jan 14, 2013 at 6:09 AM, Florian Haas flor...@hastexo.com wrote: Hi Mark, thanks for the comments. On Mon, Jan 14, 2013 at 2:46 PM, Mark Nelson mark.nel...@inktank.com wrote: Hi Florian, Couple of comments: OSDs use a write-ahead

Re: Windows port

2013-01-09 Thread Florian Haas
On Tue, Jan 8, 2013 at 3:00 PM, Dino Yancey dino2...@gmail.com wrote: Hi, I am also curious if a Windows port, specifically the client-side, is on the roadmap. This is somewhat OT from the original post, but if all you're interested in is using RBD block storage from Windows, you can already do

Re: Integration work

2012-08-28 Thread Florian Haas
On 08/28/2012 11:32 AM, Plaetinck, Dieter wrote: On Tue, 28 Aug 2012 11:12:16 -0700 Ross Turk r...@inktank.com wrote: Hi, ceph-devel! It's me, your friendly community guy. Inktank has an engineering team dedicated to Ceph, and we want to work on the right stuff. From time to time, I'd

Re: wip-crush

2012-08-22 Thread Florian Haas
On 08/22/2012 03:10 AM, Sage Weil wrote: I pushed a branch that changes some of the crush terminology. Instead of having a crush type called pool that requires you to say things like pool=default in the ceph osd crush set ... command, it uses root instead. That hopefully reinforces that
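To make the rename concrete, a hypothetical invocation using the new terminology (OSD id, weight and bucket names are invented, and the argument order follows the crush set syntax of that era):
  ceph osd crush set 0 osd.0 1.0 root=default rack=rack1 host=node1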

Re: Ceph Benchmark HowTo

2012-07-25 Thread Florian Haas
On Tue, Jul 24, 2012 at 6:19 PM, Tommi Virtanen t...@inktank.com wrote: On Tue, Jul 24, 2012 at 8:55 AM, Mark Nelson mark.nel...@inktank.com wrote: personally I think it's fine to have it on the wiki. I do want to stress that performance is going to be (hopefully!) improving over the next

Re: Ceph Benchmark HowTo

2012-07-25 Thread Florian Haas
Hi Mehdi, great work! A few questions (for you, Mark, and anyone else watching this thread) regarding the content of that wiki page: For the OSD tests, which OSD filesystem are you testing on? Are you using a separate journal device? If yes, what type? For the RADOS benchmarks: # rados bench
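For anyone wanting to reproduce such numbers, the basic invocation looks roughly like this (pool name, block size, runtime and concurrency are arbitrary examples):
  # 60-second 4 KB write benchmark against pool "rbd" with 16 concurrent ops
  rados bench -p rbd 60 write -b 4096 -t 16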

Re: Tuning placement group

2012-07-20 Thread Florian Haas
On Fri, Jul 20, 2012 at 9:33 AM, François Charlier francois.charl...@enovance.com wrote: Hello, Reading http://ceph.com/docs/master/ops/manage/grow/placement-groups/ and thinking to build a ceph cluster with potentially 1000 OSDs. Using the recommendations on the previously cited link, it
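As a back-of-the-envelope sketch of the commonly cited rule of thumb (figures purely illustrative): roughly 100 PGs per OSD, divided by the replica count, rounded to a power of two.
  # 1000 OSDs, 3 replicas: 1000 * 100 / 3 = ~33333, so 32768 or 65536 PGs
  echo $(( 1000 * 100 / 3 ))
  # hypothetical pool created with that PG count
  ceph osd pool create mypool 32768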

Ceph BoF at OSCON

2012-07-13 Thread Florian Haas
Hi everyone, For those of you attending OSCON in Portland next week, there will be a birds-of-a-feather session on Ceph Monday night. All OSCON attendees interested in Ceph are very welcome. Details about the BoF are in this blog post:

Re: specifying secret in rbd map command

2012-07-09 Thread Florian Haas
On Mon, Jul 9, 2012 at 4:57 PM, Travis Rhoden trho...@gmail.com wrote: Hey folks, I had a bit of unexpected trouble today using the rbd map command to map an RBD to a kernel object. I had previously been using the echo ... /sys/bus/rbd... method of manipulating RBDs. I was looking at the
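For context, a sketch of the two ways of handing the secret to the kernel client (monitor address, user, key file and image name are placeholders, and option availability depends on the rbd CLI version):
  # let the rbd tool do the sysfs write, reading the key from a file
  rbd map myimage --pool rbd --id admin --keyfile /etc/ceph/client.admin.secret
  # the raw sysfs interface mentioned above, for comparison
  echo "192.168.0.1:6789 name=admin,secret=$SECRET rbd myimage" > /sys/bus/rbd/add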

Re: librbd: error finding header

2012-07-09 Thread Florian Haas
On 07/09/12 12:29, Vladimir Bashkirtsev wrote: On 09/07/12 18:33, Dan Mick wrote: Vladimir: you can do some investigation with the rados command. What does rados -p rbd ls show you? Rather long list of: rb.0.11.2786 rb.0.d.54a2 rb.0.6.2eb5 rb.0.d.8294

Setting a big maxosd kills all mons

2012-07-05 Thread Florian Haas
Hi guys, Someone I worked with today pointed me to a quick and easy way to bring down an entire cluster, by making all mons kill themselves in mass suicide:
ceph osd setmaxosd 2147483647
2012-07-05 16:29:41.893862 b5962b70 0 monclient: hunting for new mon
I don't know what the actual threshold
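For reference, the command normally takes a small value just above the highest OSD id; a sane invocation for a hypothetical 12-OSD cluster would look like this, whereas values near 2^31, as above, took down every monitor:
  ceph osd setmaxosd 16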

Writes to mounted Ceph FS fail silently if client has no write capability on data pool

2012-07-05 Thread Florian Haas
Hi everyone, please enlighten me if I'm misinterpreting something, but I think the Ceph FS layer could handle the following situation better. How to reproduce (this is on a 3.2.0 kernel): 1. Create a client, mine is named test, with the following capabilities: client.test key: key
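A minimal reproduction sketch under the same assumptions (client name test, the standard data and metadata pools; paths are made up and the exact cap syntax varies between releases):
  # keyring whose OSD cap deliberately lacks write access to the data pool
  ceph-authtool --create-keyring /etc/ceph/keyring.test --gen-key -n client.test \
      --cap mon 'allow r' --cap mds 'allow' --cap osd 'allow r pool=data'
  ceph auth add client.test -i /etc/ceph/keyring.test
  # mount and write: the write appears to succeed, but no data reaches the data pool
  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=test,secretfile=/etc/ceph/test.secret
  dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=1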

cephfs show_location produces kernel divide error: 0000 [#1] when run against a directory that is not the filesystem root

2012-07-05 Thread Florian Haas
And one more issue report for today... :) Really easy to reproduce on my 3.2.0 Debian squeeze-backports kernel: mount a Ceph FS, create a directory in it. Then run cephfs dir show_location. dmesg stacktrace: [ 7153.714260] libceph: mon2 192.168.42.116:6789 session established [ 7308.584193]

Re: cephfs show_location produces kernel divide error: 0000 [#1] when run against a directory that is not the filesystem root

2012-07-05 Thread Florian Haas
On Thu, Jul 5, 2012 at 10:04 PM, Gregory Farnum g...@inktank.com wrote: But I have a few more queries while this is fresh. If you create a directory, unmount and remount, and get the location, does that work? Nope, same error. (actually, just flushing caches would probably do it.) Idem. If

Re: Writes to mounted Ceph FS fail silently if client has no write capability on data pool

2012-07-05 Thread Florian Haas
On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum g...@inktank.com wrote: Also, going down the rabbit hole, how would this behavior change if I used cephfs to set the default layout on some directory to use a different pool? I'm not sure what you're asking here — if you have access to the

Re: URL-safe base64 encoding for keys

2012-07-03 Thread Florian Haas
On Tue, Jul 3, 2012 at 2:22 PM, Wido den Hollander w...@widodh.nl wrote: Hi, With my CloudStack integration I'm running into a problem with the cephx keys due to '/' being possible in the cephx keys. CloudStack's API expects a URI to be passed when adding a storage pool, e.g.:
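For what it's worth, URL-safe base64 differs from the standard alphabet only in two characters ('-' and '_' instead of '+' and '/'), so the mapping itself is trivial; a sketch with a placeholder key, just to illustrate the encoding difference:
  echo 'AQexample+key/string==' | tr '+/' '-_'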

Re: URL-safe base64 encoding for keys

2012-07-03 Thread Florian Haas
On Tue, Jul 3, 2012 at 5:04 PM, Yehuda Sadeh yeh...@inktank.com wrote: FWIW (only semi-related), some S3 clients -- s3cmd from s3tools, for example -- seem to choke on the forward slash in radosgw auto-generated secret keys, as well. With radosgw we actually switched a while back to use the

rbd rm allows removal of mapped device, nukes data, then returns -EBUSY

2012-07-02 Thread Florian Haas
Hi everyone, just wanted to check if this was the expected behavior -- it doesn't look like it would be, to me. What I do is create a 1G RBD, and just for the heck of it, make an XFS on it:
root@alice:~# rbd create xfsdev --size 1024
root@alice:~# rbd map xfsdev
root@alice:~# rbd showmapped
id
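For completeness, the sequence one would expect to have to run instead, using the same hypothetical image:
  # check whether the image is still mapped on this host
  rbd showmapped
  # unmap first, then remove the image
  rbd unmap /dev/rbd0
  rbd rm xfsdev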

Re: Radosgw installation and administration docs

2012-07-02 Thread Florian Haas
On Sun, Jul 1, 2012 at 10:22 PM, Chuanyu chua...@cs.nctu.edu.tw wrote: Hi Yehuda, Florian, I follow the wiki, and steps which you discussed, construct my ceph system with rados gateway, and I can use libs3 to upload file via radosgw, (thanks a lot!) but got 405 Method Not Allowed when I use

Does radosgw really need to talk to an MDS?

2012-07-02 Thread Florian Haas
Hi everyone, radosgw(8) states that the following capabilities must be granted to the user that radosgw uses to connect to RADOS.
ceph-authtool -n client.radosgw.gateway --cap mon 'allow r' --cap osd 'allow rwx' --cap mds 'allow' /etc/ceph/keyring.radosgw.gateway
Could someone explain why we

Assertion failure when radosgw can't authenticate

2012-07-02 Thread Florian Haas
Hi, in cephx enabled clusters (0.47.x), authentication failures from radosgw seem to lead to an uncaught assertion failure: 2012-07-02 11:26:46.559830 b69c5730 0 librados: client.radosgw.charlie authentication error (1) Operation not permitted 2012-07-02 11:26:46.560093 b69c5730 -1 Couldn't

Re: Does radosgw really need to talk to an MDS?

2012-07-02 Thread Florian Haas
On Mon, Jul 2, 2012 at 1:44 PM, Wido den Hollander w...@widodh.nl wrote: You are not allowing the RADOS Gateway to do anything on the MDS. There is no 'r', 'w' or 'x' permission which you are allowing. So there is nothing the rgw has access to on the MDS. Yep, so we might as well leave off
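In that case the keyring could presumably be generated without the MDS cap at all, e.g. (entity name and path as in the original command):
  ceph-authtool -n client.radosgw.gateway --cap mon 'allow r' --cap osd 'allow rwx' /etc/ceph/keyring.radosgw.gateway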

radosgw forgetting subuser permissions when creating a fresh key

2012-06-25 Thread Florian Haas
Hi everyone, I wonder if this is intentional: when I create a new Swift key for an existing subuser, which has previously been assigned full control permissions, those permissions appear to get lost upon key creation. # radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift
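The sequence that triggers it, spelled out a bit more fully (user and subuser names as in the report; the final re-grant is the workaround one would presumably need, not something confirmed in this thread):
  radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
  # generating a fresh Swift key appears to reset the subuser's permissions
  radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret
  # workaround: re-apply the permission afterwards
  radosgw-admin subuser modify --uid=johndoe --subuser=johndoe:swift --access=full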

Re: Ceph as a NOVA-INST-DIR/instances/ storage backend

2012-06-25 Thread Florian Haas
On Mon, Jun 25, 2012 at 6:03 PM, Tommi Virtanen t...@inktank.com wrote: On Sat, Jun 23, 2012 at 11:42 AM, Igor Laskovy igor.lask...@gmail.com wrote: Hi all from hot Kiev)) Does anybody use Ceph as a backend storage for NOVA-INST-DIR/instances/ ? Yes.

Re: [Openstack] Ceph/OpenStack integration on Ubuntu precise: horribly broken, or am I doing something wrong?

2012-06-21 Thread Florian Haas
On Fri, Jun 22, 2012 at 7:43 AM, James Page james.p...@ubuntu.com wrote: You can type faster than I can... I'm working on getting this resolved in the current dev release of Ubuntu in the next few days after which it will go through the normal SRU process for Ubuntu 12.04. Sweet, thanks!

Re: rbd locking and handling broken clients

2012-06-14 Thread Florian Haas
On Thu, Jun 14, 2012 at 1:41 AM, Greg Farnum g...@inktank.com wrote: On Wednesday, June 13, 2012 at 1:37 PM, Florian Haas wrote: Greg, My understanding of Ceph code internals is far too limited to comment on your specific points, but allow me to ask a naive question. Couldn't you

Building documentation offline?

2012-06-14 Thread Florian Haas
Hi everyone, it occurred to me this afternoon that admin/build-doc unconditionally tries to fetch some updates from GitHub, which breaks building docs when you don't have a network connection. Would there be any reasonably simple way to make it support offline build, provided the various pip bits

Re: rbd locking and handling broken clients

2012-06-13 Thread Florian Haas
Greg, My understanding of Ceph code internals is far too limited to comment on your specific points, but allow me to ask a naive question. Couldn't you be stealing a lot of ideas from SCSI-3 Persistent Reservations? If you had server-side (OSD) persistence of information of the this device is in

Radosgw installation and administration docs

2012-06-12 Thread Florian Haas
Hi everyone, I have a long flight ahead of me later this week and plan to be spending some time on http://ceph.com/docs/master/ops/radosgw/ -- which currently happens to be a bit, ahem, sparse. There's currently not a lot of documentation on radosgw, and some of it is inconsistent, so if one of

Re: Radosgw installation and administration docs

2012-06-12 Thread Florian Haas
Hi Yehuda, thanks, that resolved a lot of questions for me. A few follow-up comments below: On 06/12/12 18:47, Yehuda Sadeh wrote: On Tue, Jun 12, 2012 at 3:44 AM, Florian Haas flor...@hastexo.com wrote: Hi everyone, I have a long flight ahead of me later this week and plan to be spending

radosgw-admin: mildly confusing man page and usage message

2012-06-11 Thread Florian Haas
Hi, just noticed that radosgw-admin comes with a bit of confusing content in its man page and usage message: EXAMPLES Generate a new user: $ radosgw-admin user gen --display-name=johnny rotten --email=joh...@rotten.com As far as I remember user gen is gone, and it's now user

Re: radosgw-admin: mildly confusing man page and usage message

2012-06-11 Thread Florian Haas
On 06/11/12 23:39, Yehuda Sadeh wrote: If one of the Ceph guys could provide a quick comment on this, I can send a patch to the man page RST. Thanks. Minimum required to create a user: radosgw-admin user create --uid=user id --display-name=display name The user id is actually a user
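A concrete example of the corrected invocation (uid, display name and email are obviously made up):
  radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com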

Re: [Openstack] Ceph + OpenStack [HOW-TO]

2012-06-10 Thread Florian Haas
On 06/10/12 23:32, Sébastien Han wrote: Hello everyone, I recently posted on my website an introduction to ceph and the integration of Ceph in OpenStack. It could be really helpful since the OpenStack documentation has not dealt with it so far. Feel free to comment, express your opinions

Ceph/OpenStack integration on Ubuntu precise: horribly broken, or am I doing something wrong?

2012-06-08 Thread Florian Haas
Hi everyone, apologies for the cross-post, and not sure if this is new information. I did do a cursory check of both list archives and didn't find anything pertinent, so here goes. Feel free to point me to an existing thread if I'm merely regurgitating something that's already known. Either I'm

Re: ceph rbd crashes/stalls while random write 4k blocks

2012-05-25 Thread Florian Haas
On Fri, May 25, 2012 at 8:47 AM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: On 24.05.2012 16:19, Florian Haas wrote: On Thu, May 24, 2012 at 4:09 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: Take a look at these to see if anything looks familiar: http

Re: ceph rbd crashes/stalls while random write 4k blocks

2012-05-24 Thread Florian Haas
Stefan, On 05/24/12 13:07, Stefan Priebe - Profihost AG wrote: Hi list, i'm still testing ceph rbd with kvm. Right now i'm testing a rbd block device within a network booted kvm. Sequential write/reads and random reads are fine. No problems so far. But when i trigger lots of 4k random
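For anyone trying to reproduce the workload inside the guest, a rough fio sketch (device path, run time and queue depth are arbitrary; note that writing to a raw device destroys its contents):
  fio --name=rbd-randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based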

Re: ceph rbd crashes/stalls while random write 4k blocks

2012-05-24 Thread Florian Haas
On Thu, May 24, 2012 at 4:09 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: Take a look at these to see if anything looks familiar: http://oss.sgi.com/bugzilla/show_bug.cgi?id=922 https://bugs.launchpad.net/ubuntu/+source/linux/+bug/979498

[PATCH] doc: fix snapshot creation/deletion syntax in rbd man page (trivial)

2012-02-17 Thread Florian Haas
Creating a snapshot requires using rbd snap create, as opposed to just rbd create. Also for purposes of clarification, add note that removing a snapshot similarly requires rbd snap rm. Thanks to Josh Durgin for the explanation on IRC. --- man/rbd.8 | 10 +- 1 files changed, 9
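The corrected syntax the patch documents, with a hypothetical image and snapshot name:
  rbd snap create --snap=snap1 myimage
  rbd snap rm --snap=snap1 myimage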

Re: Nova on RBD Device

2012-02-07 Thread Florian Haas
On Tue, Feb 7, 2012 at 8:01 PM, Mandell Degerness mand...@pistoncloud.com wrote: Can anyone point me in the right direction for setting up Nova so that it allocates disk space on RBD device(s) rather than on local disk as defined in the --instances_path flag? I've already got nova-volume

Re: interesting point on btrfs, xfs, ext4

2012-01-25 Thread Florian Haas
On Wed, Jan 25, 2012 at 10:15 AM, Tomasz Paszkowski ss7...@gmail.com wrote: http://www.youtube.com/watch?v=FegjLbCnoBw I sat in that talk at LCA and can highly recommend it. Jon Corbet wrote a piece on LWN about it too (currently subscribers only): https://lwn.net/Articles/476263/ Cheers,

[PATCH 0/2] Add resource agents to debian build, trivial CP error

2012-01-05 Thread Florian Haas
Hi, please consider two follow-up patches to the OCF resource agents: the first adds them to the Debian build, as a separate package ceph-resource-agents that depends on resource-agents, the second fixes a trivial (and embarrassing, however harmless) cut and paste error. Thanks! Cheers, Florian

[PATCH 1/2] debian: build ceph-resource-agents

2012-01-05 Thread Florian Haas
---
 debian/ceph-resource-agents.install |  1 +
 debian/control                      | 13 +
 debian/rules                        |  2 ++
 3 files changed, 16 insertions(+), 0 deletions(-)
 create mode 100644 debian/ceph-resource-agents.install
diff --git

[PATCH 0/2] Add Ceph integration with OCF-compliant HA resource managers

2011-12-29 Thread Florian Haas
-ra
Florian Haas (3):
  init script: be LSB compliant for exit code on status
  Add OCF-compliant resource agent for Ceph daemons
  Spec: conditionally build ceph-resource-agents package
 ceph.spec.in    | 22 ++
 configure.ac    |  8 ++
 src/Makefile.am |  4

[PATCH 2/2] Spec: conditionally build ceph-resource-agents package

2011-12-29 Thread Florian Haas
Put OCF resource agents in a separate subpackage, to be enabled with a separate build conditional (--with ocf). Make the subpackage depend on the resource-agents package, which provides the ocf-shellfuncs library that the Ceph RAs use. Signed-off-by: Florian Haas flor...@hastexo.com

[PATCH 1/2] Add OCF-compliant resource agent for Ceph daemons

2011-12-29 Thread Florian Haas
, configure --with-ocf to enable.
Signed-off-by: Florian Haas flor...@hastexo.com
---
 configure.ac        |   8 ++
 src/Makefile.am     |   4 +-
 src/ocf/Makefile.am |  23 +++
 src/ocf/ceph.in     | 177 +++
 4 files changed, 210 insertions(+), 2

Trivial patch to fix init script LSB compliance

2011-12-27 Thread Florian Haas
Hi everyone, please consider merging the following trivial patch that makes the ceph init script return the proper LSB exit code (3) for the status action if the service is gracefully stopped, and only return 1 (as before) if the service has died and left its PID file hanging around. Who cares
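The behavior the patch implements can be checked with something like this (a sketch, assuming the sysvinit script is installed as /etc/init.d/ceph):
  /etc/init.d/ceph stop
  /etc/init.d/ceph status; echo $?   # LSB expects 3 for a cleanly stopped service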