Re: [ceph-users] iSCSI on Ubuntu and HA / Multipathing

2019-07-10 Thread Frédéric Nass
Hi Edward, What "Red Hat Enterprise Linux/CentOS 7.5 (or newer); Linux kernel v4.16 (or newer)" means is that you either need to use the RHEL/CentOS 7.5 distribution with a 3.10.0-852+ kernel, or any other distribution with a 4.16+ upstream kernel. Regards, Frédéric. - Le 10 Juil 19, à

Re: [ceph-users] iSCSI on Ubuntu and HA / Multipathing

2019-07-10 Thread Michael Christie
On 07/11/2019 05:34 AM, Edward Kalk wrote: > The Docs say : http://docs.ceph.com/docs/nautilus/rbd/iscsi-targets/ > > * Red Hat Enterprise Linux/CentOS 7.5 (or newer); Linux kernel v4.16 > (or newer) > > ^^Is there a version combination of CEPH and Ubuntu that works? Is > anyone running

Re: [ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Harry G. Coin
If you use the dashboard module and it's working now -- don't change, because it's totally broken in Disco and later (ceph v13 as well). On 7/10/19 2:51 PM, Reed Dier wrote: It would appear by looking at the Ceph repo: https://download.ceph.com/ That Nautilus and Mimic are being built for

[ceph-users] iSCSI on Ubuntu and HA / Multipathing

2019-07-10 Thread Edward Kalk
The Docs say : http://docs.ceph.com/docs/nautilus/rbd/iscsi-targets/ Red Hat Enterprise Linux/CentOS 7.5 (or newer); Linux kernel v4.16 (or newer) ^^Is there a version combination of CEPH and Ubuntu that works? Is anyone running iSCSI on

Re: [ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Kai Wagner
On 10.07.19 20:46, Reed Dier wrote: > It does not appear that that page has been updated in a while. Addressed that already - someone needs to merge it https://github.com/ceph/ceph/pull/28643 -- GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg)

[ceph-users] Any news on dashboard regression / cython fix?

2019-07-10 Thread Harry G. Coin
Anyone know when the dashboard module will be working on any of the current and newer distros? The dashboard module won't load / is totally broken on all distros shipping binaries with cython>0.29. The dashboard module provides a valuable 'at a glance' view. The 'workaround' that I saw posted

Re: [ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Reed Dier
It would appear by looking at the Ceph repo: https://download.ceph.com/ That Nautilus and Mimic are being built for Xenial and Bionic, where Luminous and Jewel are being built for Trusty and Xenial. Then from Ubuntu in their main repos, they are publishing Jewel for

Re: [ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Edward Kalk
Interesting. So is it not good that I am running Ubuntu 16.04 and 14.2.1? -Ed > On Jul 10, 2019, at 1:46 PM, Reed Dier wrote: > > It does not appear that that page has been updated in a while. > > The official Ceph deb repos only include Mimic and Nautilus packages for > 18.04, > While the

Re: [ceph-users] 3 OSDs stopped and unable to restart

2019-07-10 Thread ifedo...@suse.de
You might want to try manual rocksdb compaction using ceph-kvstore-tool. Sent from my Huawei tablet Original Message Subject: Re: [ceph-users] 3 OSDs stopped and unable to restart From: Brett Chancellor To: Igor Fedotov CC: Ceph Users Once backfilling finished, the cluster was
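For reference, an offline compaction on a stopped OSD looks roughly like this — a sketch only, assuming a BlueStore OSD; the OSD id (69) and data path are examples, and the daemon must not be running while ceph-kvstore-tool holds the store:

```shell
# Stop the OSD first; ceph-kvstore-tool needs exclusive access to the store.
systemctl stop ceph-osd@69

# Compact the RocksDB instance embedded in the BlueStore OSD.
# "bluestore-kv" selects the BlueFS-backed store; the argument is the OSD data dir.
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-69 compact

# Bring the OSD back once compaction completes.
systemctl start ceph-osd@69
```

These commands require a live cluster node, so they are shown as a fragment rather than a runnable script.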

Re: [ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Reed Dier
It does not appear that that page has been updated in a while. The official Ceph deb repos only include Mimic and Nautilus packages for 18.04, While the Ubuntu-bionic repos include a Luminous build. Hope that helps. Reed > On Jul 10, 2019, at 1:20 PM, Edward Kalk wrote: > > When reviewing:
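As a sketch of what "official Ceph deb repos" means in practice, pulling Nautilus for 18.04 from download.ceph.com (rather than Ubuntu's own Luminous build) looks something like the following; release names and URLs are as of the time of these mails:

```shell
# Add the Ceph release signing key.
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

# Point apt at the Nautilus repo built for Bionic (18.04).
echo deb https://download.ceph.com/debian-nautilus/ bionic main | \
    sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt update && sudo apt install ceph
```

This is a command fragment (needs root and network access), not a tested script.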

Re: [ceph-users] 3 OSDs stopped and unable to restart

2019-07-10 Thread Brett Chancellor
Once backfilling finished, the cluster was super slow, and most OSDs were filled with heartbeat_map errors. When an OSD restarts, it causes a cascade of other OSDs to follow suit and restart, with logs like: -3> 2019-07-10 18:34:50.046 7f34abf5b700 -1 osd.69 1348581 get_health_metrics reporting 21

[ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Edward Kalk
When reviewing: http://docs.ceph.com/docs/master/start/os-recommendations/ I see there is no mention of “mimic” or “nautilus”. What are the OS recommendations for them, specifically nautilus which is the one I’m running? Is 18.04

Re: [ceph-users] Few OSDs crash on partner nodes when a node is rebooted

2019-07-10 Thread Edward Kalk
Wondering if the fix for http://tracker.ceph.com/issues/38724 will be included in : https://tracker.ceph.com/projects/ceph/roadmap#v14.2.2 ? > On Jul 10, 2019, at 7:55 AM, Edward Kalk wrote: > >

[ceph-users] Few OSDs crash on partner nodes when a node is rebooted

2019-07-10 Thread Edward Kalk
Hello, Has anyone else seen this? Basically I reboot a node and 2-3 OSDs on other hosts crash. Then they fail to start back up and have seem to hit a startup bug. http://tracker.ceph.com/issues/38724 What’s weird is that it seemed to be the same OSDs that

Re: [ceph-users] How does monitor know OSD is dead?

2019-07-10 Thread Nathan Cutler
I don't know if it's relevant here, but I saw similar behavior while implementing a Luminous->Nautilus automated upgrade test. When I used a single-node cluster with 4 OSDs, the Nautilus cluster would not function properly after the reboot. IIRC some OSDs were reported by "ceph -s" as up, even

Re: [ceph-users] writable snapshots in cephfs? GDPR/DSGVO

2019-07-10 Thread Yan, Zheng
On Wed, Jul 10, 2019 at 4:16 PM Lars Täuber wrote: > > Hi everybody! > > Is it possible to make snapshots in cephfs writable? > We need to remove files because of this General Data Protection Regulation > also from snapshots. > It's possible (only delete data), but need to modify both mds and

Re: [ceph-users] writable snapshots in cephfs? GDPR/DSGVO

2019-07-10 Thread Wido den Hollander
On 7/10/19 9:59 AM, Lars Täuber wrote: > Hi everybody! > > Is it possible to make snapshots in cephfs writable? As far as I'm aware: No You would need to remove the complete snapshot and create a new one. > We need to remove files because of this General Data Protection Regulation > also
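CephFS snapshots are managed through the hidden `.snap` directory, so "remove the complete snapshot and create a new one" amounts to something like the sketch below. Directory and snapshot names are illustrative; the fresh snapshot captures the current state of the tree, i.e. without the purged file:

```shell
# On a mounted CephFS, snapshots live under .snap of the snapped directory.
cd /mnt/cephfs/mydata

rm sensitive-file.txt          # remove the file from the live tree first
rmdir .snap/daily-2019-07-01   # drop the snapshot that still contains it
mkdir .snap/daily-2019-07-10   # take a fresh snapshot without the file
```

This requires a mounted CephFS with snapshots enabled, so it is shown as a fragment rather than a runnable script.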

Re: [ceph-users] BlueStore bitmap allocator under Luminous and Mimic

2019-07-10 Thread Alexandre DERUMIER
> Can't say anything about latency. >> Anybody else? Wido? I've been running it on mimic for a month, no problem until now, and it's definitely fixing the latency increase over time (aka need to restart each OSD weekly). Memory usage is almost the same as before. - Mail original - De:
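For the archives, switching the allocator on Luminous/Mimic is a ceph.conf change plus an OSD restart — a sketch, assuming the option was backported to the point release in use (it landed in late Luminous and Mimic point releases):

```ini
[osd]
# Switch BlueStore from the default allocator to bitmap.
bluestore_allocator = bitmap
# The BlueFS allocator can be switched as well.
bluefs_allocator = bitmap
```

Config fragment only; each OSD must be restarted for the change to take effect.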

Re: [ceph-users] Ceph features and linux kernel version for upmap

2019-07-10 Thread Lars Marowsky-Bree
On 2019-07-09T16:32:22, Mattia Belluco wrote: > That of course in my case fails with: > Error EPERM: cannot set require_min_compat_client to luminous: 29 > connected client(s) look like jewel (missing 0x800); 1 > connected client(s) look like jewel (missing 0x800); add >
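The usual sequence here — inspect which feature bits connected clients advertise, then raise the floor so upmap can be used — is roughly the following. The override flag is only for when you are sure the "jewel" entries are misreported (e.g. older userspace feature bits on a kernel that does support upmap):

```shell
# See what feature bits each connected client advertises.
ceph features

# Refuse pre-luminous clients; fails with EPERM if jewel clients are seen.
ceph osd set-require-min-compat-client luminous

# If clients are misdetected (as in the error above), it can be forced:
ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it
```

These commands need a live cluster, so they are shown as a fragment rather than a tested script.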

[ceph-users] writable snapshots in cephfs? GDPR/DSGVO

2019-07-10 Thread Lars Täuber
Hi everybody! Is it possible to make snapshots in cephfs writable? We need to remove files because of this General Data Protection Regulation also from snapshots. Thanks and best regards, Lars

Re: [ceph-users] BlueStore bitmap allocator under Luminous and Mimic

2019-07-10 Thread Wido den Hollander
On 7/10/19 5:56 AM, Konstantin Shalygin wrote: > On 5/28/19 5:16 PM, Marc Roos wrote: >> I switched first of may, and did not notice to much difference in memory >> usage. After the restart of the osd's on the node I see the memory >> consumption gradually getting back to as before. >> Can't