[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Dietmar Rieder
On 2020-03-24 23:37, Sage Weil wrote: > On Tue, 24 Mar 2020, konstantin.ilya...@mediascope.net wrote: >> Is it possible to provide instructions about upgrading from CentOS 7 + >> ceph 14.2.8 to CentOS 8 + ceph 15.2.0? > > You have ~2 options: > > - First, upgrade Ceph packages to 15.2.0. Note that
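
A minimal sketch of that package-first path, assuming the download.ceph.com el7 repo and the standard restart order; adapt before use:

    # switch the repo from nautilus to octopus, then pull the new packages
    sed -i 's/nautilus/octopus/' /etc/yum.repos.d/ceph.repo
    yum clean metadata && yum update 'ceph*'
    # restart daemons in the usual order: mons first, then mgrs, osds, mdss
    systemctl restart ceph-mon.target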

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread konstantin . ilyasov
That is why I am asking that question about upgrade instructions. I really don't understand how to upgrade/reinstall CentOS 7 to 8 without affecting the work of the cluster. As far as I know, this process is easier on Debian, but we deployed our Nautilus cluster on CentOS because there weren't any packages

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Simon Oosthoek
On 25/03/2020 10:10, konstantin.ilya...@mediascope.net wrote: > That is why I am asking that question about upgrade instructions. > I really don't understand how to upgrade/reinstall CentOS 7 to 8 without > affecting the work of the cluster. > As far as I know, this process is easier on Debian, but we deploy

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Wido den Hollander
On 3/25/20 10:24 AM, Simon Oosthoek wrote: > On 25/03/2020 10:10, konstantin.ilya...@mediascope.net wrote: >> That is why I am asking that question about upgrade instructions. >> I really don't understand how to upgrade/reinstall CentOS 7 to 8 without >> affecting the work of the cluster. >> As I k

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Sasha Litvak
I assume upgrading a cluster running in docker / podman containers should be a non-issue, right? Just making sure. Also wondering if anything is different in this case from the normal container upgrade scenario, i.e. monitors -> mgrs -> osds -> mdss -> clients. Thank you, On Wed, Mar 25, 2020, 5:32 AM W
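
For what it's worth, a hedged sketch of that scenario on a docker-based cluster with systemd-managed containers (image tag and unit names are assumptions for illustration):

    # fetch the new release image, then restart containers in daemon order
    docker pull ceph/ceph:v15.2.0
    for svc in mon mgr osd mds; do
        systemctl restart "ceph-$svc@*"   # systemctl accepts glob patterns on loaded units
    done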

[ceph-users] Using sendfile on Ceph FS results in data stuck in client cache

2020-03-25 Thread Mikael Öhman
Hi all, Using the sendfile function to write data to cephfs, the data doesn't end up being written. From the client that writes the file, it looks correct at first, but from all other ceph clients, the size is 0 bytes. After re-mounting the filesystem, the data is lost. I didn't see any errors, the da

[ceph-users] Help: corrupt pg

2020-03-25 Thread Jake Grimmett
Dear All, We are "in a bit of a pickle"... No reply to my message (23/03/2020), subject "OSD: FAILED ceph_assert(clone_size.count(clone))", so I'm presuming it's not possible to recover the crashed OSD. This is bad news, as one pg may be lost (we are using EC 8+2, pg dump shows [NONE,NONE,
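
A few commands for inspecting a PG in this state, as a sketch; the pgid 2.1ff is hypothetical:

    # show which PGs are unhealthy and why
    ceph health detail
    # query the acting set and recovery state of the damaged PG
    ceph pg 2.1ff query
    # find shards that currently map to no OSD at all
    ceph pg dump | grep NONE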

[ceph-users] Re: How can I recover PGs in state 'unknown', where OSD location seems to be lost?

2020-03-25 Thread Mark S. Holliman
So I've managed to use ceph-objectstore-tool to locate the pgs in 'unknown' state on the OSDs, but how do I tell the rest of the system where to find them? Is there a command for setting the OSDs associated with a PG? Or, less ideally, is there a table somewhere I can hack to do this by hand?
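
One hedged possibility is exporting the shard from the old OSD and importing it where the cluster now expects it, along these lines (osd ids, pgid and file path are made up for illustration; both OSDs must be stopped):

    systemctl stop ceph-osd@12
    # list the PG shards present on this OSD's store
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs
    # export the shard, then import it into the OSD that should hold it
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 2.1ff --op export --file /root/pg.2.1ff.export
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --op import --file /root/pg.2.1ff.export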

[ceph-users] Re: Help: corrupt pg

2020-03-25 Thread Eugen Block
Hi, is there any chance to recover the other failing OSDs that seem to have one chunk of this PG? Do the other OSDs fail with the same error? Quoting Jake Grimmett: Dear All, We are "in a bit of a pickle"... No reply to my message (23/03/2020), subject "OSD: FAILED ceph_assert(clo

[ceph-users] Re: March Ceph Science User Group Virtual Meeting

2020-03-25 Thread Kevin Hrpcek
I made a mistake: 9am US Central is no longer equal to 4pm Central European. The actual time is now 10am US Central, so in 20 minutes, if people are interested. I'll include UTC from now on. Kevin On 3/18/20 7:43 AM, Kevin Hrpcek wrote: Hello, We will be having a Ceph science/research/big

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
I had something similar. My OSDs were disabled; maybe this nautilus installer does that. Check: systemctl is-enabled ceph-osd@0 https://tracker.ceph.com/issues/44102
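
A quick check and fix along those lines (the osd id is an example):

    # reports "disabled" on units hit by the issue above
    systemctl is-enabled ceph-osd@0
    # re-enable and start the unit in one go
    systemctl enable --now ceph-osd@0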

[ceph-users] OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Ml Ml
Hello list, I upgraded to Debian 10, and after that I upgraded from Luminous to Nautilus. I restarted the mons, then the OSDs. Everything was up and healthy. After rebooting a node, only 3/10 OSDs start up: -4 20.07686 host ceph03 4 hdd 2.67020 osd.4 down 1.0 1.0 5
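
Some first diagnostic steps for a case like this, as a sketch (the osd id is an example):

    # which OSDs does the cluster see as down?
    ceph osd tree down
    # did the unit start at all, and what did it log?
    systemctl status ceph-osd@4
    journalctl -u ceph-osd@4 --since today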

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
Try this: chown ceph.ceph /dev/sdc2 chown ceph.ceph /dev/sdd2 chown ceph.ceph /dev/sde2 chown ceph.ceph /dev/sdf2 chown ceph.ceph /dev/sdg2 chown ceph.ceph /dev/sdh2
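
The same fix as a loop, assuming the device names from this thread:

    # journal partitions must be owned by the ceph user for the OSDs to start
    for dev in /dev/sd{c..h}2; do
        chown ceph:ceph "$dev"
    done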

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
What does the osd error log say? I already have bluestore; if you have filestore, maybe you should inspect the mounted fs of /dev/sdd2, e.g. maybe the permissions there need to be changed. But first check the errors of one osd. (Did you reset the failed service with something like this: systemctl
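
A sketch of that check-and-retry sequence (osd id and path are examples):

    # filestore data dirs should be owned by the ceph user
    ls -ln /var/lib/ceph/osd/ceph-4
    # clear the failed unit state, then retry
    systemctl reset-failed ceph-osd@4
    systemctl start ceph-osd@4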

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Ml Ml
Still no luck. But the working OSDs have no partition: OSD.1 => /dev/sdj OSD.5 => /dev/sdb OSD.6 => /dev/sdbc OSD.10 => /dev/sdl Whereas the rest has: root@ceph03:~# ls -l /dev/sd* brw-rw---- 1 root disk 8, 0 Mar 25 16:23 /dev/sda brw-rw---- 1 root disk 8, 1 Mar 25 16:23 /dev/sda1 brw-rw--

[ceph-users] Re: Help: corrupt pg

2020-03-25 Thread Jake Grimmett
Hi Eugen, Many thanks for your reply. The other two OSDs are up and running, and being used by other pgs with no problem; for some reason this pg refuses to use these OSDs. The other two OSDs that are missing from this pg crashed at different times last month; each OSD crashed when we trie

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Bryan Stillwell
On Mar 24, 2020, at 5:38 AM, Abhishek Lekshmanan wrote: > #. Upgrade monitors by installing the new packages and restarting the > monitor daemons. For example, on each monitor host: > > # systemctl restart ceph-mon.target > > Once all monitors are up, verify that the monitor upgrade i
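
The verification step the release notes give for this stage:

    # once all mons are restarted, this should report "min_mon_release 15 (octopus)"
    ceph mon dump | grep min_mon_release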

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
Still down?

[ceph-users] Re: Using sendfile on Ceph FS results in data stuck in client cache

2020-03-25 Thread Jeff Layton
On Wed, 2020-03-25 at 12:14 +0000, Mikael Öhman wrote: > Hi all, > > Using the sendfile function to write data to cephfs, the data doesn't end up > being written. > From the client that writes the file, it looks correct at first, but from all > other ceph clients, the size is 0 bytes. Re-mounting th

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Tecnologia Charne.Net
Yes, I was going to suggest the same on this page: https://docs.ceph.com/docs/master/releases/octopus/ -Javier On 25/3/20 at 14:20, Bryan Stillwell wrote: On Mar 24, 2020, at 5:38 AM, Abhishek Lekshmanan wrote: #. Upgrade monitors by installing the new packages and restarting the

[ceph-users] Luminous upgrade question

2020-03-25 Thread Shain Miley
Hi, We are thinking about upgrading our cluster, currently running ceph version 12.2.12. I am wondering if we should be looking at upgrading to the latest version of Mimic or the latest version of Nautilus. Can anyone here please provide a suggestion… I continue to be a little bit confused about th

[ceph-users] Re: Luminous upgrade question

2020-03-25 Thread cassiano
I've upgraded from Luminous to Nautilus a few days ago and have only one issue: slow ops on the monitors, which I struggled to trace to a malfunctioning client. This issue was causing the monitors to keep crashing constantly. When I figured out that the client was ac
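
For anyone hitting the same thing, a hedged sketch for tracking down such a client (run on the affected mon host; the mon id "mon1" is an example):

    # name the mon reporting slow ops
    ceph health detail
    # inspect in-flight ops and connected client sessions on that mon
    ceph daemon mon.mon1 ops
    ceph daemon mon.mon1 sessions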

[ceph-users] Re: Luminous upgrade question

2020-03-25 Thread Marc Roos
I upgraded from Luminous to Nautilus without any problems. Maybe check if you are currently using cephfs snapshots; I think those are disabled by default. Can't remember.
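
A hedged way to check and, if wanted, re-enable them after the upgrade ("cephfs" is a placeholder filesystem name):

    # dump the fs map entry; the flags line shows whether snapshots are allowed
    ceph fs get cephfs
    # explicitly allow new snapshots again
    ceph fs set cephfs allow_new_snaps true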

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Marc Roos
You have to be careful with upgrading like this; sometimes upgrades between versions require a scrub of all OSDs. Good luck! :)

[ceph-users] Re: OSDs wont mount on Debian 10 (Buster) with Nautilus

2020-03-25 Thread Ml Ml
I brought them up manually. Decided to upgrade to Octopus, but I am stuck there now. I will open a new thread for it. On Wed, Mar 25, 2020 at 6:23 PM Marc Roos wrote: > Still down?

[ceph-users] octopus upgrade stuck: Assertion `map->require_osd_release >= ceph_release_t::mimic' failed.

2020-03-25 Thread Ml Ml
Hello List, I followed: https://ceph.io/releases/v15-2-0-octopus-released/ I came from a healthy Nautilus cluster and I am stuck at: 5.) Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts. When I try to start an osd like this, I get: /usr/bin/ceph-osd
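
A plausible check for this assertion, as a sketch: Octopus OSDs refuse to start unless the osdmap records a require_osd_release of mimic or later, which has to be set while still on the old release:

    # what release is currently recorded in the osdmap?
    ceph osd dump | grep require_osd_release
    # record it (run this before upgrading the OSDs past nautilus)
    ceph osd require-osd-release nautilus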

[ceph-users] Re: octopus upgrade stuck: Assertion `map->require_osd_release >= ceph_release_t::mimic' failed.

2020-03-25 Thread Ml Ml
In the logs it says: 2020-03-25T22:10:00.823+0100 7f0bd5320e00 0 /build/ceph-15.2.0/src/cls/hello/cls_hello.cc:312: loading cls_hello 2020-03-25T22:10:00.823+0100 7f0bd5320e00 0 osd.32 57223 crush map has features 288232576282525696, adjusting msgr requires for clients 2020-03-25T22:10:00.823+0

[ceph-users] Re: Using sendfile on Ceph FS results in data stuck in client cache

2020-03-25 Thread Mikael Öhman
Hi Jeff! (Also, I'm sorry for a resend; I did exactly the same with my message as well!) Unfortunately, the answer wasn't that simple, as I am on the latest C7 kernel as well: uname -r gives 3.10.0-1062.1.2.el7.x86_64. I did some more testing, and it's a bit difficult to trigger this reliably when

[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread vitalif
I have a question regarding this problem - is it possible to rebuild bluestore allocation metadata? I could try it to test if it's an allocator problem... Hi. I'm experiencing some kind of a space leak in Bluestore. I use EC, compression and snapshots. First I thought that the leak was caused

[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread Igor Fedotov
Bluestore fsck/repair detects and fixes leaks at the Bluestore level, but I doubt your issue is there. To be honest, I don't understand from the overview why you think that there are any leaks at all. Not sure whether this is relevant, but from my experience space "leaks" are sometimes caused by
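
For reference, the fsck/repair mentioned above, as a sketch (run with the OSD stopped; the osd id is an example):

    systemctl stop ceph-osd@3
    # detect, then fix, inconsistencies at the bluestore level
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-3
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-3
    systemctl start ceph-osd@3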

[ceph-users] Re: Space leak in Bluestore

2020-03-25 Thread Виталий Филиппов
Hi Igor, I think so because 1) space usage increases after each rebalance, even when the same pg is moved twice (!), and 2) I use 4k min_alloc_size from the beginning. One crazy hypothesis is that maybe ceph allocates space for uncompressed objects, then compresses them and leaks (uncompressed-compre
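
A hedged sketch for checking that hypothesis from the accounting side (the osd id is an example):

    # per-pool compression accounting (USED COMPR / UNDER COMPR columns)
    ceph df detail
    # bluestore compression perf counters on a given OSD
    ceph daemon osd.0 perf dump | grep -i compress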