Re: [ceph-users] problems after upgrade to 14.2.1

2019-06-20 Thread ST Wong (ITSC)
Thanks. I also didn't encounter the spillover issue on another cluster upgraded from 13.2.6 -> 14.2.1. On that cluster, the dashboard also didn't work, but reconfiguring it similarly to what you did worked. Yes, nice new look. :) I tried commands like yours but it keeps prompting "all mgr daemons do not

Re: [ceph-users] problems after upgrade to 14.2.1

2019-06-20 Thread Brent Kennedy
Not sure about the spillover stuff, didn't happen to me when I upgraded from Luminous to 14.2.1. The dashboard thing did happen to me. Seems you have to disable the dashboard and then re-enable it after installing the separate dashboard rpm. Also, make sure to restart the mgr services on each
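
A minimal sketch of the disable/re-enable cycle described above, assuming the distro's separate dashboard package is already installed; the mgr id (mgr1) is a placeholder:

```
# bounce the module so the mgr picks up the newly installed dashboard package
ceph mgr module disable dashboard
ceph mgr module enable dashboard

# restart the mgr daemon on each mgr host (id is a placeholder)
systemctl restart ceph-mgr@mgr1

# confirm the module is enabled and find the dashboard URL
ceph mgr module ls
ceph mgr services
```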

[ceph-users] problems after upgrade to 14.2.1

2019-06-20 Thread ST Wong (ITSC)
Hi all, We recently upgraded a testing cluster from 13.2.4 to 14.2.1. We encountered 2 problems: 1. Got a warning of BlueFS spillover even though usage is low, as it's a testing cluster without much activity/data: # ceph -s cluster: id: cc795498-5d16-4b84-9584-1788d0458be9
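
Not part of the quoted message, but a hedged way to see which OSDs raise the spillover warning and how much DB data has spilled to the slow device (osd.0 is a placeholder; the perf dump has to be run on the host carrying that OSD):

```
# list the OSDs behind the BLUEFS_SPILLOVER health warning
ceph health detail

# inspect BlueFS counters such as db_used_bytes / slow_used_bytes
ceph daemon osd.0 perf dump bluefs
```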

[ceph-users] libcrush

2019-06-20 Thread Luk
Hi, do you know if libcrush.org will be available? crush analyze isn't working; the site libcrush.org seems to be offline... crush optimize --crushmap report.json --out-path optimized.crush --pool 10 Traceback (most recent call last): File "/usr/local/bin/crush", line 25, in

[ceph-users] Invalid metric type, prometheus module with rbd mirroring

2019-06-20 Thread Brett Chancellor
Has anybody else encountered this issue? Prometheus is failing to scrape the prometheus module, returning invalid metric type "cef431ab_b67a_43f9_9b87_ebe992dec94e_replay_bytes counter" Ceph version: 14.2.1 Prometheus version: 2.10.0-rc.0 This started happening when I set up one-way rbd mirroring
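
A hedged way to look at the offending sample outside of Prometheus, by fetching the mgr prometheus module's exposition endpoint directly (9283 is the module's default port; mgr-host is a placeholder):

```
# make sure the module is enabled
ceph mgr module enable prometheus

# dump the raw exposition text and search for the rbd-mirror replay metrics
curl -s http://mgr-host:9283/metrics | grep -i replay
```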

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Dan van der Ster
I will try to reproduce with logs and create a tracker once I find the smoking gun... It's very strange -- I had the osd mode set to 'passive' and the pool option set to 'force', and the osd was compressing objects for around 15 minutes. Then suddenly it just stopped compressing, until I did 'ceph

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Frank Schilder
Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..."

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Igor Fedotov
On 6/20/2019 8:55 PM, Dan van der Ster wrote: On Thu, Jun 20, 2019 at 6:55 PM Igor Fedotov wrote: Hi Dan, bluestore_compression_max_blob_size is applied for objects marked with some additional hints only: if ((alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_SEQUENTIAL_READ) &&

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Dan van der Ster
On Thu, Jun 20, 2019 at 6:55 PM Igor Fedotov wrote: > > Hi Dan, > > bluestore_compression_max_blob_size is applied for objects marked with > some additional hints only: > >if ((alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_SEQUENTIAL_READ) && >(alloc_hints &

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Frank Schilder
Hi Dan, this older thread (https://www.mail-archive.com/ceph-users@lists.ceph.com/msg49339.html) contains details about: - how to get bluestore compression working (must be enabled on pool as well as OSD) - what the best compression ratio is depending on the application (if applications do
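
A minimal sketch of the pool-plus-OSD combination the linked thread describes, assuming a pool named rbdpool and Nautilus-style centralized config; the values are illustrative, not recommendations:

```
# pool side: enable compression on the pool
ceph osd pool set rbdpool compression_mode aggressive
ceph osd pool set rbdpool compression_algorithm snappy

# OSD side: enable bluestore compression and tune the blob size
ceph config set osd bluestore_compression_mode aggressive
ceph config set osd bluestore_compression_min_blob_size_hdd 131072

# verify what a given OSD actually sees
ceph config get osd.0 bluestore_compression_mode
```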

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Igor Fedotov
I'd like to see more details (preferably backed with logs) on this... On 6/20/2019 6:23 PM, Dan van der Ster wrote: P.S. I know this has been discussed before, but the compression_(mode|algorithm) pool options [1] seem completely broken -- With the pool mode set to force, we see that sometimes
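
For collecting the kind of logs being asked for here, a hedged sketch of raising bluestore verbosity on the OSD under test (osd.130 is the OSD from Dan's test elsewhere in the thread; 20/20 is simply a high debug level):

```
# turn bluestore debug logging up while reproducing the issue
ceph tell osd.130 injectargs '--debug_bluestore 20/20'

# ...reproduce, gather /var/log/ceph/ceph-osd.130.log, then turn it back down
ceph tell osd.130 injectargs '--debug_bluestore 1/5'
```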

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Igor Fedotov
Hi Dan, bluestore_compression_max_blob_size is applied for objects marked with some additional hints only:   if ((alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_SEQUENTIAL_READ) &&   (alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_RANDOM_READ) == 0 &&   (alloc_hints &

Re: [ceph-users] Monitor stuck at "probing"

2019-06-20 Thread Gregory Farnum
Just nuke the monitor's store, remove it from the existing quorum, and start over again. Injecting maps correctly is non-trivial and obviously something went wrong, and re-syncing a monitor is pretty cheap. On Thu, Jun 20, 2019 at 6:46 AM ☣Adam wrote: > Anyone have any suggestions for how to
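
A hedged outline of the remove-and-re-sync cycle described above, assuming the stuck monitor's id is mon3 and default paths (adapt the data directory and unit names to your deployment):

```
# on the stuck monitor host: stop the daemon and wipe its store
systemctl stop ceph-mon@mon3
rm -rf /var/lib/ceph/mon/ceph-mon3

# from a host with a working quorum: drop it from the monmap
ceph mon remove mon3

# rebuild the store from the current monmap and mon keyring, then rejoin
ceph mon getmap -o /tmp/monmap
ceph auth get mon. -o /tmp/mon.keyring
ceph-mon -i mon3 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
systemctl start ceph-mon@mon3
```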

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Dan van der Ster
P.S. I know this has been discussed before, but the compression_(mode|algorithm) pool options [1] seem completely broken -- With the pool mode set to force, we see that sometimes the compression is invoked and sometimes it isn't. AFAICT, the only way to compress every object is to set

[ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-20 Thread Dan van der Ster
Hi all, I'm trying to compress an rbd pool via backfilling the existing data, and the allocated space doesn't match what I expect. Here is the test: I marked osd.130 out and waited for it to erase all its data. Then I set (on the pool) compression_mode=force and compression_algorithm=zstd. Then
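
Not in the quoted message, but a hedged reconstruction of the pool settings being described, with <pool> as a placeholder, plus one way to compare stored vs compressed usage afterwards:

```
# force compression of all new writes into the pool, using zstd
ceph osd pool set <pool> compression_mode force
ceph osd pool set <pool> compression_algorithm zstd

# after the backfill, check per-pool compressed usage
ceph df detail
```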

Re: [ceph-users] Monitor stuck at "probing"

2019-06-20 Thread ☣Adam
Anyone have any suggestions for how to troubleshoot this issue? Forwarded Message (14 Jun 2019): I have a monitor which I just can't seem to get to join the quorum, even after

Re: [ceph-users] osd daemon cluster_fsid not reflecting actual cluster_fsid

2019-06-20 Thread Eugen Block
Hi, I don't have an answer for you, but could you elaborate on what exactly you are trying to do and what has worked so far? Which ceph version are you running? I understand that you want to clone your whole cluster; how exactly are you trying to do that? Is this the first OSD you're

Re: [ceph-users] Ceph Upgrades - sanity check - MDS steps

2019-06-20 Thread Stefan Kooman
Quoting James Wilkins (james.wilk...@fasthosts.com): > Hi all, > > Just want to (double) check something – we’re in the process of > luminous -> mimic upgrades for all of our clusters – particularly this > section regarding MDS steps > > • Confirm that only one MDS is online and is rank 0
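
A hedged sketch of checking the "only one MDS, rank 0" precondition mentioned in that step, assuming a filesystem named cephfs:

```
# reduce to a single active MDS before the upgrade
ceph fs set cephfs max_mds 1

# on pre-Nautilus releases the extra ranks may also need an explicit
#   ceph mds deactivate cephfs:1
# then confirm only rank 0 is active
ceph fs status cephfs
ceph mds stat
```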

Re: [ceph-users] Possible to move RBD volumes between pools?

2019-06-20 Thread Konstantin Shalygin
Both pools are in the same Ceph cluster. Do you have any documentation on the live migration process? I'm running 14.2.1

Something like:

```
rbd migration prepare test1 rbd2/test2
rbd migration execute test1
rbd migration commit test1 --force
```

k