Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-04 Thread Wladimir Mutel
On Fri, Jun 01, 2018 at 08:20:12PM +0300, Wladimir Mutel wrote: > > And still, when I do '/disks create ...' in gwcli, it says > that it wants 2 existing gateways. Probably this is related > to the created 2-TPG structure and I should look for more ways > to 'improve' that
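For reference, a minimal gwcli sketch of defining a target with two gateways before creating disks; the IQN, hostnames and IP addresses are placeholders and the exact paths may differ between ceph-iscsi versions:

    /> cd /iscsi-target
    /iscsi-target> create iqn.2003-01.org.example.igw:iscsi-igw
    /iscsi-target> cd iqn.2003-01.org.example.igw:iscsi-igw/gateways
    /iscsi-target...igw/gateways> create gw1.example.com 192.168.1.11 skipchecks=true
    /iscsi-target...igw/gateways> create gw2.example.com 192.168.1.12 skipchecks=true
    /> cd /disks
    /disks> create pool=rbd image=win2016-3tb-1 size=2861589M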

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Ronny Aasen
On 04. juni 2018 06:41, Charles Alva wrote: Hi Guys, When will the Ceph Mimic packages for Debian Stretch be released? I could not find the packages even after changing the sources.list. I am also eager to test mimic on my ceph [...] debian-mimic only contains ceph-deploy at the moment. Kind regards, Ronny

Re: [ceph-users] SSD Bluestore Backfills Slow

2018-06-04 Thread David Turner
I don't believe this really applies to you. The problem here was with an SSD OSD that was incorrectly labeled as an HDD OSD by Ceph. The fix was to inject a sleep setting of 0 for those OSDs to speed up recovery. The sleep is needed on HDDs to avoid thrashing, but the bug was that SSDs were
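As a rough sketch (not taken verbatim from the thread), this is how such a sleep value can be checked and injected at runtime on a Luminous-era cluster; the OSD id is only an example, and injectargs changes do not survive a daemon restart:

    # on the host running osd.12, check the current value via the admin socket
    ceph daemon osd.12 config get osd_recovery_sleep_hdd
    # inject a 0s sleep into all running OSDs
    ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0'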

Re: [ceph-users] Should ceph-volume lvm prepare not be backwards compatible with ceph-disk?

2018-06-04 Thread Alfredo Deza
On Sat, Jun 2, 2018 at 12:31 PM, Oliver Freyermuth wrote: > Am 02.06.2018 um 11:44 schrieb Marc Roos: >> >> >> ceph-disk does not require bootstrap-osd/ceph.keyring and ceph-volume >> does > > I believe that's expected when you use "prepare". > For ceph-volume, "prepare" already bootstraps the
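For context, a hedged sketch of the difference being described, assuming default keyring locations; because ceph-volume's "prepare" already registers the OSD with the cluster, it expects the bootstrap-osd keyring to be present:

    # ceph-volume looks for the bootstrap-osd keyring during "prepare"
    ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring
    # device name is only an example
    ceph-volume lvm prepare --bluestore --data /dev/sdb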

Re: [ceph-users] Bug? if ceph-volume fails, it does not clean up created osd auth id

2018-06-04 Thread Alfredo Deza
ceph-volume has 'rollback' functionality: if it was able to create an OSD id and the creation of the OSD then fails, it will remove that id. In this case it failed to create the id itself, so the tool can't be sure it has anything to 'clean up'. On Sat, Jun 2, 2018 at 5:52 AM, Marc Roos wrote: > > > [@
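If an id or auth entry does get left behind, a manual clean-up along these lines is usually enough; osd.7 is only a placeholder and should be cross-checked against the cluster first:

    # look for the stray entry
    ceph osd tree
    ceph auth ls | grep '^osd\.'
    # remove it by hand
    ceph auth del osd.7
    ceph osd rm osd.7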

Re: [ceph-users] SSD Bluestore Backfills Slow

2018-06-04 Thread Caspar Smit
Hi Reed, "Changing/injecting osd_recovery_sleep_hdd into the running SSD OSDs on bluestore opened the floodgates." What exactly did you change/inject here? We have a cluster with 10TB SATA HDDs which each have a 100GB SSD-based block.db. Looking at ceph osd metadata for each of those:
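A minimal sketch of the metadata check being referred to; OSD id 0 is only an example:

    # shows what the OSD recorded for its data, DB and journal devices
    ceph osd metadata 0 | grep -E 'rotational|osd_objectstore'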

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-04 Thread Wladimir Mutel
On Mon, Jun 04, 2018 at 11:12:58AM +0300, Wladimir Mutel wrote: > /disks> create pool=rbd image=win2016-3tb-1 size=2861589M > CMD: /disks/ create pool=rbd image=win2016-3tb-1 size=2861589M count=1 > max_data_area_mb=None > pool 'rbd' is ok to use > Creating/mapping disk rbd/win2016-3tb-1 >

Re: [ceph-users] Show and Tell: Grafana cluster dashboard

2018-06-04 Thread Lenz Grimmer
On 05/08/2018 07:21 AM, Kai Wagner wrote: > Looks very good. Is it somehow possible to display the reason why a > cluster is in an error or warning state? Thinking about the output from > ceph -s, if this could be shown in case there's a failure. I think this > will not be provided by default but

Re: [ceph-users] SSD-primary crush rule doesn't work as intended

2018-06-04 Thread Horace
I won't run out of write IOPS when I have an SSD journal in place. I know that I can use the dual-root method from Sebastien's web site, but I thought the 'storage class' feature was the way to solve this kind of problem.
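For reference, the Luminous device-class ('storage class') mechanism looks roughly like the sketch below; it places all replicas of a pool on one class, so an SSD-primary/HDD-replica layout still needs a hand-written rule. Pool and rule names are placeholders:

    # create a replicated rule restricted to OSDs of class "ssd"
    ceph osd crush rule create-replicated ssd-only default host ssd
    # assign a pool to it
    ceph osd pool set rbd crush_rule ssd-only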

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Alfredo Deza
There aren't any builds for Debian because the distro does not have compiler backports required for building Mimic On Mon, Jun 4, 2018 at 8:55 AM, Ronny Aasen wrote: > On 04. juni 2018 06:41, Charles Alva wrote: >> >> Hi Guys, >> >> When will the Ceph Mimic packages for Debian Stretch released?

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Sage Weil
[adding ceph-maintainers] On Mon, 4 Jun 2018, Charles Alva wrote: > Hi Guys, > > When will the Ceph Mimic packages for Debian Stretch released? I could not > find the packages even after changing the sources.list. The problem is that we're now using c++17, which requires a newer gcc than

Re: [ceph-users] SSD Bluestore Backfills Slow

2018-06-04 Thread Reed Dier
Appreciate the input. Wasn’t sure if ceph-volume was the one setting these bits of metadata or something else. Appreciate the help guys. Thanks, Reed > The fix is in core Ceph (the OSD/BlueStore code), not ceph-volume. :) > journal_rotational is still a thing in BlueStore; it represents the

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Jack
My reaction when I read that there will be no Mimic soon on Stretch: https://pix.milkywan.fr/JDjOJWnx.png Anyway, thank you for the kind explanation, as well as for getting in touch with the Debian team about this issue On 06/04/2018 08:39 PM, Sage Weil wrote: > [adding ceph-maintainers] > >

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Joao Eduardo Luis
On 06/04/2018 07:39 PM, Sage Weil wrote: > [1] > http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000603.html > [2] > http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000611.html Just a heads up, seems the ceph-maintainers archives are not public.

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Paul Emmerich
Hi, 2018-06-04 20:39 GMT+02:00 Sage Weil : > We'd love to build for stretch, but until there is a newer gcc for that > distro it's not possible. We could build packages for 'testing', but I'm > not sure if those will be usable on stretch. > you can install gcc (and only gcc) from testing on
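A hedged sketch of pulling gcc from testing on stretch via apt pinning; the mirror URL and pin priority are assumptions and should be adapted locally:

    # /etc/apt/sources.list.d/testing.list
    deb http://deb.debian.org/debian testing main

    # /etc/apt/preferences.d/testing  (keep testing packages opt-in only)
    Package: *
    Pin: release a=testing
    Pin-Priority: 100

    apt update
    apt -t testing install gcc g++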

[ceph-users] mimic: failed to load OSD map for epoch X, got 0 bytes

2018-06-04 Thread Sergey Malinin
Hello, Freshly created OSD won't start after upgrading to mimic: 2018-06-04 17:00:23.135 7f48cbecb240 0 osd.3 0 done with init, starting boot process 2018-06-04 17:00:23.135 7f48cbecb240 1 osd.3 0 start_boot 2018-06-04 17:00:23.135 7f48cbecb240 10 osd.3 0 start_boot - have maps 0..0

Re: [ceph-users] ceph-osd@ service keeps restarting after removing osd

2018-06-04 Thread Michael Burk
On Thu, May 31, 2018 at 4:40 PM Gregory Farnum wrote: > On Thu, May 24, 2018 at 9:15 AM Michael Burk > wrote: > >> Hello, >> >> I'm trying to replace my OSDs with higher capacity drives. I went through >> the steps to remove the OSD on the OSD node: >> # ceph osd out osd.2 >> # ceph osd down
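For comparison, the commonly documented removal sequence looks roughly like this; osd.2 is only an example, and the cluster should be allowed to rebalance after the 'out' before the daemon is stopped:

    ceph osd out osd.2
    # wait for recovery to finish, then on the OSD node:
    systemctl stop ceph-osd@2
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm osd.2
    # on Luminous and newer the last three steps can be replaced by:
    ceph osd purge osd.2 --yes-i-really-mean-it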

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Charles Alva
I see, thanks for the detailed information, Sage! Kind regards, Charles Alva Sent from Gmail Mobile On Tue, Jun 5, 2018 at 1:39 AM Sage Weil wrote: > [adding ceph-maintainers] > > On Mon, 4 Jun 2018, Charles Alva wrote: > > Hi Guys, > > > > When will the Ceph Mimic packages for Debian

[ceph-users] How to run MySQL (or other database ) on Ceph using KRBD ?

2018-06-04 Thread 李昊华
Thanks for reading my questions! I want to run MySQL on Ceph using KRBD because KRBD is faster than librbd. I know KRBD is a kernel module, and we can use it to mount the RBD device on the operating system. It is easy to use the command line tool to mount the RBD device on the operating
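A minimal sketch of the krbd path being asked about; the image name, size, filesystem and MySQL data directory are assumptions, and older kernels only support the 'layering' image feature:

    rbd create rbd/mysql-data --size 102400 --image-feature layering
    rbd map rbd/mysql-data            # prints e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /var/lib/mysql
    chown -R mysql:mysql /var/lib/mysql
    systemctl start mysql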

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread kefu chai
On Tue, Jun 5, 2018 at 6:13 AM, Paul Emmerich wrote: > Hi, > > 2018-06-04 20:39 GMT+02:00 Sage Weil : >> >> We'd love to build for stretch, but until there is a newer gcc for that >> distro it's not possible. We could build packages for 'testing', but I'm >> not sure if those will be usable on

[ceph-users] pg inconsistent, scrub stat mismatch on bytes

2018-06-04 Thread Adrian
Hi Cephers, We recently upgraded one of our clusters from hammer to jewel and then to luminous (12.2.5, 5 mons/mgr, 21 storage nodes * 9 OSDs). After some deep-scrubs we have an inconsistent pg with a log message we've not seen before: HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg
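The usual way to inspect and repair such a PG, sketched with a placeholder PG id taken from 'ceph health detail':

    ceph health detail
    # inspect the inconsistency before touching anything
    rados list-inconsistent-obj 1.2f3 --format=json-pretty
    # a pure stat mismatch is normally fixed by a repair
    ceph pg repair 1.2f3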

Re: [ceph-users] Jewel/Luminous Filestore/Bluestore for a new cluster

2018-06-04 Thread Subhachandra Chandra
We have been running Luminous .04 + BlueStore for about 3 months in production. All the daemons run as docker containers and were installed using ceph-ansible. 540 spinning drives with journal/wal/db on the same drive, spread across 9 hosts. Using the librados object interface directly with steady

Re: [ceph-users] SSD Bluestore Backfills Slow

2018-06-04 Thread Gregory Farnum
On Mon, Jun 4, 2018 at 9:38 AM Reed Dier wrote: > Copying Alfredo, as I’m not sure if something changed with respect to > ceph-volume in 12.2.2 (when this originally happened) to 12.2.5 (I’m sure > plenty did), because I recently had an NVMe drive fail on me unexpectedly > (curse you Micron),

Re: [ceph-users] SSD Bluestore Backfills Slow

2018-06-04 Thread Alfredo Deza
On Mon, Jun 4, 2018 at 12:37 PM, Reed Dier wrote: > Hi Caspar, > > David is correct, in that the issue I was having with SSD OSD’s having NVMe > bluefs_db reporting as HDD creating an artificial throttle based on what > David was mentioning, a prevention to keep spinning rust from thrashing. Not

[ceph-users] Unexpected data

2018-06-04 Thread Marc-Antoine Desrochers
Hi, I'm not sure if it's normal or not, but each time I add a new OSD with 'ceph-deploy osd create --data /dev/sdg ceph-n1' it adds 1GB to my global data, but I just formatted the drive, so it's supposed to be at 0, right? So I have 6 OSDs in my Ceph cluster and it took 6 GiB. [root@ceph-n1 ~]# ceph -s

Re: [ceph-users] SSD Bluestore Backfills Slow

2018-06-04 Thread Reed Dier
Hi Caspar, David is correct: the issue I was having was with SSD OSDs whose NVMe bluefs_db was reporting as HDD, creating an artificial throttle based on what David was mentioning, a safeguard to keep spinning rust from thrashing. Not sure if the journal_rotational bit should be 1, but

Re: [ceph-users] Unexpected data

2018-06-04 Thread Paul Emmerich
There's some metadata on BlueStore OSDs (the RocksDB database); it's usually ~1% of your data. The DB will start out at a size of around 1GB, so that's expected. Paul 2018-06-04 15:55 GMT+02:00 Marc-Antoine Desrochers < marc-antoine.desroch...@sogetel.com>: > Hi, > > > > Im not sure if it’s
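A quick way to see that overhead per OSD, sketched with an example OSD id; the second command has to run on the host carrying that OSD:

    # per-OSD raw usage; a fresh BlueStore OSD already shows roughly 1 GiB used
    ceph osd df
    # BlueFS/RocksDB usage for one OSD via its admin socket
    ceph daemon osd.0 perf dump | grep -A 8 '"bluefs"'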