Re: [ceph-users] Ceph 9.2 fails to install in COS 7.1.1503: Report and Fix

2015-12-09 Thread Ben Hines
FYI - same issue when installing Hammer, 0.94.5. I also fixed it by enabling the cr repo. -Ben On Tue, Dec 8, 2015 at 5:13 PM, Goncalo Borges wrote: > Hi Cephers > > This is just to report an issue (and a workaround) regarding dependencies > in Centos 7.1.1503 > >
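
For reference, the CR-repo workaround boils down to something like this (a sketch; the repo id "cr" and file name come from the stock CentOS-CR.repo and may differ on customized systems):

    # enable the CentOS Continuous Release repo (yum-config-manager is in yum-utils)
    yum install -y yum-utils
    yum-config-manager --enable cr
    # alternatively, set enabled=1 in the [cr] section of /etc/yum.repos.d/CentOS-CR.repo
    yum clean all && yum install ceph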

Re: [ceph-users] Blocked requests after "osd in"

2015-12-09 Thread Jan Schermer
Are you seeing "peering" PGs when the blocked requests are happening? That's what we see regularly when starting OSDs. I'm not sure this can be solved completely (and whether there are major improvements in newer Ceph versions), but it can be sped up by 1) making sure you have free (and not

[ceph-users] Blocked requests after "osd in"

2015-12-09 Thread Christian Kauhaus
Hi, I'm getting blocked requests (>30s) every time an OSD is set to "in" in our clusters. Once this has happened, backfills run smoothly. I currently have no idea where to start debugging. Does anyone have a hint about what to examine first in order to narrow down this issue? TIA Christian -- Dipl-Inf.
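
A possible first step, assuming Hammer-era tooling (osd.N is a placeholder for an OSD named in the health output, and the daemon command has to run on the node hosting that OSD):

    # shows which OSDs/PGs the slow requests are attributed to
    ceph health detail
    # inspect the operations currently stuck on a suspect OSD via its admin socket
    ceph daemon osd.N dump_ops_in_flight
    ceph daemon osd.N dump_historic_ops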

Re: [ceph-users] CephFS: number of PGs for metadata pool

2015-12-09 Thread Jan Schermer
Number of PGs doesn't affect the number of replicas, so don't worry about it. Jan > On 09 Dec 2015, at 13:03, Mykola Dvornik wrote: > > Hi guys, > > I am creating a 4-node/16OSD/32TB CephFS from scratch. > > According to the ceph documentation the metadata pool
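
To illustrate: pg_num and the replica count are independent per-pool settings, e.g. (names and numbers are placeholders only):

    # pg_num only controls how the data is split across placement groups
    ceph osd pool create cephfs_metadata 64
    # replication is set separately and is unaffected by pg_num
    ceph osd pool set cephfs_metadata size 3
    ceph osd pool set cephfs_metadata min_size 2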

[ceph-users] problem after reinstalling system

2015-12-09 Thread Jacek Jarosiewicz
Hi, I have a working ceph cluster with storage nodes running Ubuntu 14.04 and ceph hammer 0.94.5. Now I want to switch to CentOS 7.1 (forget about the reasons for now, I can explain, but it would be a long story and irrelevant to my question). I've set the osd noout flag and
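
For context, the intended procedure is roughly the following (a sketch, assuming the OSD data disks are left untouched across the reinstall and the same Ceph release is installed afterwards; device names are examples):

    # before the reinstall: stop CRUSH from marking the stopped OSDs out
    ceph osd set noout
    # after reinstalling the OS and ceph 0.94.5, re-activate the existing OSD partitions
    ceph-disk activate /dev/sdb1     # repeat per OSD data partition
    ceph osd unset noout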

[ceph-users] CephFS: number of PGs for metadata pool

2015-12-09 Thread Mykola Dvornik
Hi guys, I am creating a 4-node/16OSD/32TB CephFS from scratch. According to the ceph documentation the metadata pool should have a small number of PGs, since it holds a negligible amount of data compared to the data pool. This makes me feel it might not be safe. So I was wondering how to
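
For reference, a minimal sketch of creating both pools and the filesystem (pool names and PG counts are placeholders, not a sizing recommendation):

    ceph osd pool create cephfs_data 512
    ceph osd pool create cephfs_metadata 64    # deliberately few PGs, per the docs
    ceph fs new cephfs cephfs_metadata cephfs_data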

Re: [ceph-users] Ceph 9.2 fails to install in COS 7.1.1503: Report and Fix

2015-12-09 Thread Loic Dachary
Hi, It also had to be fixed for the development environment (see http://tracker.ceph.com/issues/14019). Cheers On 09/12/2015 09:37, Ben Hines wrote: > FYI - same issue when installing Hammer, 94.5. I also fixed it by enabling > the cr repo. > > -Ben > > On Tue, Dec 8, 2015 at 5:13 PM,

Re: [ceph-users] CephFS: number of PGs for metadata pool

2015-12-09 Thread John Spray
On Wed, Dec 9, 2015 at 1:25 PM, Mykola Dvornik wrote: > Hi Jan, > > Thanks for the reply. I see your point about replicas. However my motivation > was a bit different. > > Consider some given amount of objects that are stored in the metadata pool. > If I understood

Re: [ceph-users] CephFS: number of PGs for metadata pool

2015-12-09 Thread Mykola Dvornik
Good point. Thanks! Triple-failure is essentially what I faced about a month ago. So now I want to make sure that the new cephfs setup I am deploying at the moment will handle this kind of thing better. On Wed, Dec 9, 2015 at 2:41 PM, John Spray wrote: On Wed, Dec 9,

Re: [ceph-users] Blocked requests after "osd in"

2015-12-09 Thread Christian Kauhaus
Am 09.12.2015 um 11:21 schrieb Jan Schermer: > Are you seeing "peering" PGs when the blocked requests are happening? That's > what we see regularly when starting OSDs. Mostly "peering" and "activating". > I'm not sure this can be solved completely (and whether there are major > improvements in

Re: [ceph-users] High disk utilisation

2015-12-09 Thread MATHIAS, Bryn (Bryn)
To update this: the error looks like it comes from updatedb scanning the ceph disks. When we make sure it doesn't, by putting the ceph mount points in the exclusion file, the problem goes away. Thanks for the help and time. On 30 Nov 2015, at 09:53, MATHIAS, Bryn (Bryn)
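
The exclusion described above is an edit to /etc/updatedb.conf, roughly like this (a sketch; keep the paths already present on the existing PRUNEPATHS line and add the ceph mount points):

    # /etc/updatedb.conf
    PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"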

Re: [ceph-users] New cluster performance analysis

2015-12-09 Thread Kris Gillespie
One thing I noticed in all my testing: since the speed difference between the SSDs and the spinning rust can be quite high, and since your journal needs to flush every X bytes (configurable), the impact of this flush can be hard, as IO to the journal will stop until the flush is finished (I believe).
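
The flush behaviour being referred to is governed by the filestore/journal settings in ceph.conf; a sketch of the relevant knobs (values are examples, not recommendations):

    [osd]
    # how often the filestore syncs journaled writes to the data disk
    filestore min sync interval = 0.01
    filestore max sync interval = 5
    # caps on a single journal write
    journal max write bytes = 10485760
    journal max write entries = 1000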

[ceph-users] building ceph rpms, "ceph --version" returns no version

2015-12-09 Thread bruno.canning
Hi All, Long story short: I have built ceph hammer RPMs and everything seems to work OK, but running "ceph --version" does not report the version number. I don't get a version number returned from "service ceph status", either. I'm concerned that other components in our system may rely on ceph

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Josh Durgin
This is the problem: http://tracker.ceph.com/issues/14030 As a workaround, you can pass the first diff in via stdin, e.g.: cat snap1.diff | rbd merge-diff - snap2.diff combined.diff Josh On 12/08/2015 11:11 PM, Josh Durgin wrote: On 12/08/2015 10:44 PM, Alex Gorbachev wrote: Hi Josh, On

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-09 Thread Loic Dachary
Hi Felix, It would be great if you could try the fix from https://github.com/dachary/ceph/commit/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb . It's only changing the ceph-disk file so you could just get it from

[ceph-users] OS Liberty + Ceph Hammer: Block Device Mapping is Invalid.

2015-12-09 Thread c...@dolphin-it.de
Can someone help me? Help would be highly appreciated ;-) Last message on the OpenStack mailing list: Dear OpenStack-users, I just installed my first multi-node OpenStack setup with Ceph as my storage backend. After configuring cinder, nova and glance as described in the Ceph-HowTo
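
For comparison, the usual Cinder RBD backend configuration looks roughly like this (a sketch following the upstream Ceph/OpenStack guide; pool, user and the libvirt secret UUID are placeholders):

    # /etc/cinder/cinder.conf
    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>
    rbd_flatten_volume_from_snapshot = false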

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Alex Gorbachev
Great, thanks Josh! Using stdin/stdout merge-diff is working. Thank you for looking into this. -- Alex Gorbachev Storcium On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote: > This is the problem: > > http://tracker.ceph.com/issues/14030 > > As a workaround, you can pass

Re: [ceph-users] High disk utilisation

2015-12-09 Thread Christian Balzer
Hello, On Wed, 9 Dec 2015 15:57:36 + MATHIAS, Bryn (Bryn) wrote: > to update this, the error looks like it comes from updatedb scanning the > ceph disks. > > When we make sure it doesn’t, by putting the ceph mount points in the > exclusion file, the problem goes away. > Ah, I didn't even

[ceph-users] Client io blocked when removing snapshot

2015-12-09 Thread Wukongming
Hi all, I used an rbd command to create a 6TB image, and then created a snapshot of this image. After that, I kept writing (e.g. modifying files) so the snapshots would be cloned one by one. At this point, I did the following 2 ops simultaneously. 1. keep client io to this image. 2.
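
For reference, the sequence being described corresponds roughly to (pool, image and snapshot names are placeholders; --size is in MB on Hammer-era rbd):

    rbd create rbd/bigimage --size 6291456        # ~6 TB
    rbd snap create rbd/bigimage@snap1
    # ... client keeps writing to the image ...
    rbd snap rm rbd/bigimage@snap1                # the operation reported to block client IO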

Re: [ceph-users] problem after reinstalling system

2015-12-09 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I had this problem because CentOS and Debian have different versions of leveldb (Debian's was newer) and the old version would not read the new version. I just had to blow away the OSDs and let them backfill. Going from CentOS to Debian didn't

Re: [ceph-users] building ceph rpms, "ceph --version" returns no version

2015-12-09 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 You actually have to walk through part of the make process before you can build the tarball, so that the version is added to the source files. I believe the steps are: ./autogen.sh, then ./configure, then make dist-[gzip|bzip2|lzip|xz]. Then you can copy the
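
Spelled out, the suggested sequence is roughly (a sketch; the rpmbuild step is only an example of what follows and depends on your spec file and build environment):

    ./autogen.sh
    ./configure
    make dist-bzip2                  # or dist-gzip / dist-lzip / dist-xz
    # the tarball now carries the version string; feed it to your RPM build, e.g.:
    cp ceph-*.tar.bz2 ~/rpmbuild/SOURCES/
    rpmbuild -ba ceph.spec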

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Alex Gorbachev
Hi Josh, looks like I celebrated too soon: On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote: > This is the problem: > > http://tracker.ceph.com/issues/14030 > > As a workaround, you can pass the first diff in via stdin, e.g.: > > cat snap1.diff | rbd merge-diff - snap2.diff

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Alex Gorbachev
More oddity: retrying several times, the merge-diff sometimes works and sometimes does not, using the same source files. On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev wrote: > Hi Josh, looks like I celebrated too soon: > > On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin

Re: [ceph-users] Blocked requests after "osd in"

2015-12-09 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I noticed this a while back and did some tracing. As soon as the PGs are read in by the OSD (very limited amount of housekeeping done), the OSD is set to the "in" state so that peering with other OSDs can happen and the recovery process can begin.

Re: [ceph-users] problem after reinstalling system

2015-12-09 Thread Christian Balzer
Hello, I seem to vaguely remember a Ceph leveldb package, which might help in this case, or something from the CentOS equivalent to backports maybe. Christian On Wed, 9 Dec 2015 22:18:56 -0700 Robert LeBlanc wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > I had this problem

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Josh Durgin
Hmm, perhaps there's a secondary bug. Can you send the output from strace, i.e. strace.log after running: cat snap1.diff | strace -f -o strace.log rbd merge-diff - snap2.diff combined.diff for a case where it fails? Josh On 12/09/2015 08:38 PM, Alex Gorbachev wrote: More oddity: retrying

[ceph-users] Monitor rename / recreate issue -- probing state

2015-12-09 Thread deeepdish
Hello, I encountered a strange issue when rebuilding monitors reusing the same hostnames but different IPs. Steps to reproduce: - Build monitor using ceph-deploy mon create - Remove monitor via http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ (remove monitor) — I didn’t
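
When a monitor hostname is reused with a new IP, the stale monmap entry usually has to be removed before the rebuilt monitor can get past the probing state; a sketch (mon name is a placeholder):

    # from a surviving monitor / admin node
    ceph mon remove mon01
    ceph mon dump                 # confirm the old entry (old IP) is gone
    # then redeploy the monitor on its new IP
    ceph-deploy mon create mon01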

Re: [ceph-users] http://gitbuilder.ceph.com/

2015-12-09 Thread Xav Paice
To get us around the immediate problem, I copied the deb I needed from a cache to a private repo - I'm sorry that's not going to help you at all, but if you need a copy, let me know. The documentation upstream shows that the mod_fastcgi is for older apache only, and 2.4 onwards can use