Re: [ceph-users] ceph status showing wrong osd

2018-06-05 Thread Muneendra Kumar M
Hi Paul, Thanks for your reply. It looks like it is contacting the monitor properly, as ceph status shows the output below. Correct me if I am wrong: monmap e1: 1 mons at {0=10.38.32.245:16789/0} election epoch 1, quorum 0 0. The reason could be that the OSDs are created

Re: [ceph-users] whiteouts mismatch

2018-06-05 Thread Brad Hubbard
On Tue, Jun 5, 2018 at 4:46 PM, shrey chauhan wrote: > I am consistently getting whiteout mismatches, due to which PGs are going into > an inconsistent state, and I am not able to figure out why this is happening. > Though, as was explained before, whiteouts don't exist and it's nothing, it's

[ceph-users] Stop scrubbing

2018-06-05 Thread Marc Roos
Is it possible to stop the currently running scrubs/deep-scrubs? http://tracker.ceph.com/issues/11202
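For anyone finding this in the archive: as far as I know there is no command in this release to abort a scrub that is already in progress (which is what the tracker issue above asks for), but you can stop new ones from being scheduled with the cluster-wide flags:

ceph osd set noscrub
ceph osd set nodeep-scrub
# running scrubs finish; no new ones start. Re-enable later with:
ceph osd unset noscrub
ceph osd unset nodeep-scrub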

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-05 Thread Paul Emmerich
Hi, If anyone wants to play around with Ceph on Debian: I just made our mirror for our dev/test image builds public:
wget -q -O- 'https://static.croit.io/keys/release.asc' | apt-key add -
echo 'deb https://static.croit.io/debian-mimic/ stretch main' >> /etc/apt/sources.list
apt update
apt

Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-05 Thread Sage Weil
On Tue, 5 Jun 2018, Paul Emmerich wrote: > 2018-06-05 17:42 GMT+02:00 Nick Fisk : > > > Hi, > > > > After a RBD snapshot was removed, I seem to be having OSDs assert when > > they try to recover pg 1.2ca. The issue seems to follow the > > PG around as OSDs fail. I've seen this bug tracker and

Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-05 Thread Nick Fisk
So, from what I can see, I believe this issue is being caused by one of the remaining OSDs acting for this PG containing a snapshot file of the object /var/lib/ceph/osd/ceph-46/current/1.2ca_head/DIR_A/DIR_C/DIR_2/DIR_D/DIR_0/rbd\udata.0c4c14238e1f29.000bf479__head_F930D2CA__1
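If it comes down to clearing the leftover clone metadata by hand, a rough sketch of the usual ceph-objectstore-tool route follows; this is an assumption on my part, not something confirmed in the thread, and the object JSON and clone id are placeholders you would take from the tool's own output:

systemctl stop ceph-osd@46
# list objects in the PG as the OSD sees them (emits one JSON spec per object)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-46 --pgid 1.2ca --op list
# then drop the stale clone metadata for the affected object (placeholders below):
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-46 '<object-json-from-list>' remove-clone-metadata <cloneid>
systemctl start ceph-osd@46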

Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-05 Thread Nick Fisk
From: ceph-users On Behalf Of Paul Emmerich Sent: 05 June 2018 17:02 To: n...@fisk.me.uk Cc: ceph-users Subject: Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end()) 2018-06-05 17:42 GMT+02:00 Nick Fisk <n...@fisk.me.uk>: Hi, After a RBD snapshot was

Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-05 Thread Paul Emmerich
2018-06-05 17:42 GMT+02:00 Nick Fisk : > Hi, > > After a RBD snapshot was removed, I seem to be having OSDs assert when > they try to recover pg 1.2ca. The issue seems to follow the > PG around as OSDs fail. I've seen this bug tracker and associated mailing > list post, but would appreciate it if

[ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())

2018-06-05 Thread Nick Fisk
Hi, After a RBD snapshot was removed, I seem to be having OSDs assert when they try to recover pg 1.2ca. The issue seems to follow the PG around as OSDs fail. I've seen this bug tracker and associated mailing list post, but would appreciate it if anyone can give any pointers.
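For readers working through the same failure, two read-only commands to see which OSDs currently host the PG and what state it reports (the pg id is the one from this thread):

# which OSDs are up/acting for the PG right now?
ceph pg map 1.2ca
# full peering/recovery detail for the PG
ceph pg 1.2ca query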

Re: [ceph-users] Open-sourcing GRNET's Ceph-related tooling

2018-06-05 Thread Robert Sander
Hi, I just saw this announcement and wanted to "advertise" our Check_MK plugin for Ceph: https://github.com/HeinleinSupport/check_mk/tree/master/ceph Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-05 Thread Ronny Aasen
On 04.06.2018 21:08, Joao Eduardo Luis wrote: On 06/04/2018 07:39 PM, Sage Weil wrote: [1] http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000603.html [2] http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000611.html Just a heads up, seems the

Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-06-05 Thread Robert Sander
Hi, On 27.05.2018 01:48, c...@elchaka.de wrote: > Very interested in the slides/vids. Slides are now available: https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/ Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de

[ceph-users] Ceph on ARM meeting canceled

2018-06-05 Thread Leonardo Vaz
Hey Cephers, Sorry for the short notice, but the Ceph on ARM meeting scheduled for today (Jun 5) has been canceled. Kindest regards, Leo -- Leonardo Vaz Ceph Community Manager Open Source and Standards Team

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-05 Thread Paul Emmerich
2018-06-05 6:58 GMT+02:00 kefu chai : > > thanks for sharing this, Paul! does the built binary require any > runtime dependency offered by the testing repo? if the answer is no, i > think we should offer the pre-built package for debian stable then. > It will by default produce binaries linking

Re: [ceph-users] Bluestore : Where is my WAL device ?

2018-06-05 Thread Richard Hesketh
On 05/06/18 14:49, rafael.diazmau...@univ-rennes1.fr wrote: > Hello, > > I run proxmox 5.2 with ceph 12.2 (bluestore). > > I've created an OSD on a hard drive (/dev/sda) and tried to put both WAL and > journal on an SSD partition (/dev/sde1) like this: > pveceph createosd /dev/sda --wal_dev
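To check where a BlueStore OSD's WAL/DB actually ended up, two read-only commands that should work on a 12.2 cluster (the osd id is illustrative):

# paths the OSD itself reports for its bluefs devices
ceph osd metadata 0 | grep -E 'bluefs|path'
# or inspect the device label directly on the OSD host
ceph-bluestore-tool show-label --dev /dev/sde1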

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-05 Thread Vik Tara
On 05/06/18 05:58, kefu chai wrote: > On Tue, Jun 5, 2018 at 6:13 AM, Paul Emmerich wrote: >> Hi, >> >> 2018-06-04 20:39 GMT+02:00 Sage Weil : >>> We'd love to build for stretch, but until there is a newer gcc for that >>> distro it's not possible. We could build packages for 'testing', but

Re: [ceph-users] ceph status showing wrong osd

2018-06-05 Thread Paul Emmerich
It was either created incorrectly (no auth key?) or it can't contact the monitor for some reason. The log file should tell you more. Paul 2018-06-05 13:20 GMT+02:00 Muneendra Kumar M : > Hi, > > I have created a cluster and when I run ceph status it is showing me the > wrong number of osds. >
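A short checklist along the lines Paul suggests, using standard commands (osd.0 and the log path are illustrative):

# does the cluster know about the OSDs at all?
ceph osd tree
# does the OSD have an auth entry?
ceph auth get osd.0
# what does the daemon itself say?
tail -n 50 /var/log/ceph/ceph-osd.0.log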

[ceph-users] ceph status showing wrong osd

2018-06-05 Thread Muneendra Kumar M
Hi, I have created a cluster and when I run ceph status it is showing me the wrong number of OSDs.
cluster 6571de66-75e1-4da7-b1ed-15a8bfed0944
 health HEALTH_WARN
        2112 pgs stuck inactive
        2112 pgs stuck unclean
 monmap e1: 1 mons at {0=10.38.32.245:16789/0}

Re: [ceph-users] ghost PG : "i don't have pgid xx"

2018-06-05 Thread Olivier Bonvalet
Hi, Good point! Changing this value, *and* restarting ceph-mgr, fixed this issue. Now we have to find a way to reduce the PG count. Thanks Paul! Olivier On Tuesday, 5 June 2018 at 10:39 +0200, Paul Emmerich wrote: > Hi, > > looks like you are running into the PG overdose protection of > Luminous

Re: [ceph-users] How to run MySQL (or other database ) on Ceph using KRBD ?

2018-06-05 Thread Ilya Dryomov
On Tue, Jun 5, 2018 at 4:07 AM, 李昊华 wrote: > Thanks for reading my questions! > > I want to run MySQL on Ceph using KRBD because KRBD is faster than librbd. > And I know KRBD is a kernel module and we can use KRBD to mount the RBD > device on the operating system. > > It is easy to use command
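For the archive, a minimal sketch of the KRBD path under discussion; the pool name, image name, size, and mount point are made up for illustration, and the image is created with only the layering feature since the kernel client refuses to map images with features it does not support:

rbd create rbd/mysql-data --size 100G --image-feature layering
rbd map rbd/mysql-data          # prints the device, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /var/lib/mysql  # point MySQL's datadir at the mount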

Re: [ceph-users] ghost PG : "i don't have pgid xx"

2018-06-05 Thread Paul Emmerich
Hi, looks like you are running into the PG overdose protection of Luminous (you got > 200 PGs per OSD): try to increase mon_max_pg_per_osd on the monitors to 300 or so to temporarily resolve this. Paul 2018-06-05 9:40 GMT+02:00 Olivier Bonvalet : > Some more informations : the cluster was just
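A hedged sketch of applying that, with Luminous-era syntax (the persistent-config step plus the ceph-mgr restart is what Olivier reports working in his follow-up):

# temporary, on the fly:
ceph tell 'mon.*' injectargs '--mon_max_pg_per_osd 300'
# persistent: add to [global] in /etc/ceph/ceph.conf ...
#   mon_max_pg_per_osd = 300
# ... then restart the mgr:
systemctl restart ceph-mgr.target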

Re: [ceph-users] ghost PG : "i don't have pgid xx"

2018-06-05 Thread Olivier Bonvalet
Some more information: the cluster was just upgraded from Jewel to Luminous.
# ceph pg dump | egrep '(stale|creating)'
dumped all
15.32  10947  0  0  0  0  45870301184  3067  3067  stale+active+clean  2018-06-04

[ceph-users] ghost PG : "i don't have pgid xx"

2018-06-05 Thread Olivier Bonvalet
Hi, I have a cluster in "stale" state: lots of RBDs have been blocked for ~10 hours. In the status I see PGs in stale or down state, but those PGs don't seem to exist anymore: root!stor00-sbg:~# ceph health detail | egrep '(stale|down)' HEALTH_ERR noout,noscrub,nodeep-scrub flag(s) set; 1

[ceph-users] whiteouts mismatch

2018-06-05 Thread shrey chauhan
I am consistently getting whiteout mismatches, due to which PGs are going into an inconsistent state, and I am not able to figure out why this is happening. Though, as was explained before, whiteouts don't exist and it's nothing, it's still painful to see my PGs in an inconsistent state. Can anyone
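A minimal sketch of how to inspect what scrub actually flagged, assuming a Jewel-or-later cluster; the pg id is a placeholder:

# which PGs are inconsistent?
ceph health detail | grep inconsistent
# dump the recorded scrub errors for one of them
rados list-inconsistent-obj 2.1a --format=json-pretty
# if the errors look repairable:
ceph pg repair 2.1a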