Re: [ceph-users] No announce for 12.2.8 / available in repositories

2018-09-03 Thread Dan van der Ster
I don't think those issues are known... Could you elaborate on your librbd issues with v12.2.8? -- dan On Tue, Sep 4, 2018 at 7:30 AM Linh Vu wrote: > > Version 12.2.8 seems broken. Someone earlier on the ML had an MDS issue. We accidentally upgraded an openstack compute node from 12.2.7 to

Re: [ceph-users] No announce for 12.2.8 / available in repositories

2018-09-03 Thread Linh Vu
Version 12.2.8 seems broken. Someone earlier on the ML had a MDS issue. We accidentally upgraded an openstack compute node from 12.2.7 to 12.2.8 (librbd) and it caused all kinds of issues writing to the VM disks. From: ceph-users on behalf of Nicolas Huillard

[ceph-users] data_extra_pool for RGW Luminous still needed?

2018-09-03 Thread Nhat Ngo
Hi all, I am new to Ceph and we are setting up a new RadosGW and Ceph storage cluster on Luminous. We are using only EC for our `buckets.data` pool at the moment. However, I just read the Red Hat Ceph Object Gateway for Production article and it mentions an extra duplicated `buckets.non-ec`
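For context, the non-EC pool the article refers to is the placement target's data_extra_pool, which RGW uses for multipart-upload bookkeeping and therefore still wants a replicated pool even when the bucket data pool is erasure coded. A rough sketch of pointing it at a replicated pool, assuming the default zone/placement and example pool names (none of these names come from this thread):

    $ ceph osd pool create default.rgw.buckets.non-ec 32 32 replicated
    $ radosgw-admin zone placement modify --rgw-zone=default \
          --placement-id=default-placement \
          --data-extra-pool=default.rgw.buckets.non-ec
    $ radosgw-admin period update --commit    # multisite setups; otherwise just restart the RGWs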

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-03 Thread Marc Roos
Yes, you are right. I had moved the fs_meta pool (16 PGs) to the SSDs. I had to check the crush rules, but that pool is only 200MB. It still puzzles me why Ceph 'out of the box' is not distributing data more evenly. I will try the balancer first thing, when remapping of the newly added node has
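For reference, turning on the Luminous balancer looks roughly like this (upmap mode needs all clients to be Luminous-capable, hence the first command; treat this as a sketch rather than a recipe):

    $ ceph osd set-require-min-compat-client luminous
    $ ceph mgr module enable balancer
    $ ceph balancer mode upmap
    $ ceph balancer on
    $ ceph balancer status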

Re: [ceph-users] how to swap osds between servers

2018-09-03 Thread Ronny Aasen
On 03.09.2018 17:42, Andrei Mikhailovsky wrote: Hello everyone, I am in the process of adding an additional OSD server to my small Ceph cluster as well as migrating from filestore to bluestore. Here is my setup at the moment: Ceph 12.2.5, running on Ubuntu 16.04 with latest updates; 3 x OSD servers with 10 x 3TB SAS drives, 2 x Intel
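For what it's worth, physically moving an OSD disk to another host normally works because the OSD re-registers its CRUSH location on startup; a sketch with hypothetical OSD id, weight and host names:

    # default is true, so the OSD files itself under the host it boots on
    $ ceph daemon osd.12 config get osd_crush_update_on_start    # run on the OSD's host
    # or relocate it by hand if that option has been disabled
    $ ceph osd crush create-or-move osd.12 3.63689 root=default host=newhost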

Re: [ceph-users] Luminous new OSD being over filled

2018-09-03 Thread David C
Hi Marc, I like that approach, although I think I'd go in smaller weight increments. I'm still a bit confused by the behaviour I'm seeing; it looks like I've got things weighted correctly. Red Hat's docs recommend doing one OSD at a time and I'm sure that's how I've done it on other clusters in the past
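A sketch of the incremental approach, using a hypothetical osd.40 with a target weight of 3.63689 (the numbers here are examples only):

    # optional: have brand-new OSDs join with CRUSH weight 0 by setting
    # "osd crush initial weight = 0" in ceph.conf on the new host
    $ ceph osd crush reweight osd.40 0.5   # then 1.0, 2.0, ... up to the full 3.63689
    $ ceph -s                              # let backfill settle between steps
    $ ceph osd df                          # watch %USE on the new OSD and its peers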

[ceph-users] how to swap osds between servers

2018-09-03 Thread Andrei Mikhailovsky
Hello everyone, I am in the process of adding an additional OSD server to my small Ceph cluster as well as migrating from filestore to bluestore. Here is my setup at the moment: Ceph 12.2.5, running on Ubuntu 16.04 with latest updates; 3 x OSD servers with 10 x 3TB SAS drives, 2 x Intel
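For the bluestore part of the migration, the Luminous-era tool is ceph-volume; a rough sketch with hypothetical device names (the separate block.db device is optional and would typically live on SSD/NVMe):

    $ ceph-volume lvm zap /dev/sdk
    $ ceph-volume lvm create --bluestore --data /dev/sdk --block.db /dev/nvme0n1p1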

Re: [ceph-users] Luminous new OSD being over filled

2018-09-03 Thread Marc Roos
I am adding a node like this; I think it is more efficient, because in your case you will have data being moved within the added node (between the newly added OSDs there). So far no problems with this. Maybe limit your backfills with ceph tell osd.* injectargs --osd_max_backfills=X, because PGs being
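The throttle Marc mentions can be injected at runtime; a sketch with deliberately conservative example values:

    $ ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # raise them again once the remapping has settled, e.g.
    $ ceph tell osd.* injectargs '--osd_max_backfills 3'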

[ceph-users] Luminous new OSD being over filled

2018-09-03 Thread David C
Hi all, I'm trying to add a new host to a Luminous cluster, doing one OSD at a time. I've only added one so far but it's getting too full. The drive is the same size (4TB) as all the others in the cluster, and all OSDs have a crush weight of 3.63689. Average usage on the drives is 81.70%. With the new OSD
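For anyone following along, the per-OSD fill levels and PG counts relevant here can be checked with (Luminous syntax):

    $ ceph osd df tree    # %USE, variance and PG count per OSD, grouped by host
    $ ceph df             # per-pool usage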

Re: [ceph-users] Luminous missing osd_backfill_full_ratio

2018-09-03 Thread David C
In the end it was because I hadn't completed the upgrade with "ceph osd require-osd-release luminous"; after setting that I had the default backfillfull ratio (0.9 I think) and was able to change it with ceph osd set-backfillfull-ratio. Potential gotcha for a Jewel -> Luminous upgrade if you delay the
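A sketch of the sequence David describes, with example (not default) ratio values:

    $ ceph osd require-osd-release luminous
    $ ceph osd dump | grep ratio             # shows full_ratio / backfillfull_ratio / nearfull_ratio
    $ ceph osd set-backfillfull-ratio 0.92
    $ ceph osd set-nearfull-ratio 0.88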

Re: [ceph-users] Can't upgrade to MDS version 12.2.8

2018-09-03 Thread Marlin Cremers
So we now have a different error. I ran `ceph fs reset k8s` because of the map that was in a strange state. Now I'm getting the following error in the MDS log when it tries to 'join' the cluster (even though it's the only one): https://gist.github.com/Marlinc/59d0a9fe3c34fed86c3aba2ebff850fb

Re: [ceph-users] Luminous RGW errors at start

2018-09-03 Thread Janne Johansson
Did you change the default pg_num or pgp_num, so that the pools that did show up pushed you past mon_max_pg_per_osd? On Fri, 31 Aug 2018 at 17:20, Robert Stanford wrote: > > I installed a new Luminous cluster. Everything is fine so far. Then I tried to start RGW and got this error: > >
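For reference, the limit Janne is referring to is mon_max_pg_per_osd (200 PGs per OSD by default in Luminous, if I remember correctly). A sketch of checking and raising it; depending on the exact 12.2.x release the mons may need a restart for the change to take effect:

    $ ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd    # run on a mon host
    $ ceph tell mon.* injectargs '--mon_max_pg_per_osd 300'
    # or persistently in ceph.conf on the mons:
    # [global]
    # mon_max_pg_per_osd = 300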

Re: [ceph-users] Packages for debian in Ceph repo

2018-09-03 Thread Abhishek Lekshmanan
arad...@tma-0.net writes: > Can anyone confirm if the Ceph repos for Debian/Ubuntu contain packages for Debian? I'm not seeing any, but maybe I'm missing something... > > I'm seeing ceph-deploy install an older version of ceph on the nodes (from the Debian repo) and then failing when I
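For anyone hitting the same thing, a quick way to compare what download.ceph.com offers for your distribution against what Debian itself ships (the codename below is just an example):

    # /etc/apt/sources.list.d/ceph.list
    deb https://download.ceph.com/debian-luminous/ stretch main

    $ apt-get update && apt-cache policy ceph    # shows the candidate version and which repo it comes from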

[ceph-users] Ceph Luminous - journal setting

2018-09-03 Thread M Ranga Swami Reddy
Hi - I am using the Ceph Luminous release. What OSD journal settings are needed here? NOTE: I used SSDs for the journal up to the Jewel release. Thanks, Swami
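Short version, as far as I understand it: BlueStore has no FileStore-style journal, so the old journal settings only matter if you stay on FileStore; with BlueStore the SSDs are typically given to the DB/WAL instead. Example ceph.conf snippet (sizes are illustrative, not recommendations):

    [osd]
    # FileStore only: journal size in MB
    osd journal size = 10240
    # BlueStore: explicit DB/WAL sizes in bytes, only needed when pre-provisioning partitions
    bluestore block db size = 32212254720
    bluestore block wal size = 1073741824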

[ceph-users] luminous 12.2.6 -> 12.2.7 active+clean+inconsistent PGs workaround (or wait for 12.2.8+ ?)

2018-09-03 Thread SCHAER Frederic
Hi, For those facing (lots of) active+clean+inconsistent PGs after the Luminous 12.2.6 metadata corruption and the 12.2.7 upgrade, I'd like to explain how I finally got rid of those. Disclaimer: my cluster doesn't contain highly valuable data, and I can sort of recreate what it actually contains
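For readers who only need the standard tooling (Frederic's message is truncated here, so this is not necessarily what he ended up doing), inconsistent PGs are usually inspected and repaired like this, with <pgid> being e.g. 1.2f:

    $ ceph health detail | grep inconsistent           # lists the affected PGs
    $ rados list-inconsistent-obj <pgid> --format=json-pretty
    $ ceph pg repair <pgid>

Whether a plain repair is enough after the 12.2.6 digest corruption is exactly what this thread is about.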

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-03 Thread Eugen Block
Hi Jones, I still don't think creating an OSD on a partition will work. The reason is that SES creates an additional partition per OSD, resulting in something like this:
vdb      253:16   0    5G  0 disk
├─vdb1   253:17   0  100M  0 part /var/lib/ceph/osd/ceph-1
└─vdb2

Re: [ceph-users] MDS does not always failover to hot standby on reboot

2018-09-03 Thread William Lawton
Which configuration option determines the MDS timeout period? William Lawton From: Gregory Farnum Sent: Thursday, August 30, 2018 5:46 PM To: William Lawton Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] MDS does not always failover to hot standby on reboot Yes, this is a consequence
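As far as I know (not confirmed in this thread), the failover delay is governed by mds_beacon_grace - how long the mons tolerate missing MDS beacons, 15 seconds by default - together with mds_beacon_interval. For example:

    # ceph.conf, read by the mons and MDSs
    [global]
    mds beacon interval = 4
    mds beacon grace = 15

    $ ceph daemon mon.$(hostname -s) config get mds_beacon_grace    # check the running value on a mon host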

[ceph-users] Slow requests from bluestore osds

2018-09-03 Thread Marc Schöchlin
Hi, we have also been experiencing this type of behaviour for some weeks on our not-so-performance-critical HDD pools. We haven't spent much time on this problem because there are currently more important tasks - but here are a few details: Running the following loop results in the following
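For anyone wanting to correlate these, the usual way to see which OSDs and which operations are slow (the admin socket commands must be run on the OSD's host):

    $ ceph health detail                          # names the OSDs with blocked/slow requests
    $ ceph daemon osd.<id> dump_ops_in_flight     # currently outstanding ops
    $ ceph daemon osd.<id> dump_historic_ops      # recently completed slow ops with per-stage timings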