Thanks. I also didn't encounter the spillover issue on another cluster going from
13.2.6 -> 14.2.1. On that cluster the dashboard also didn't work, but
reconfiguring it similarly to what you did fixed it. Yes, nice new look. :)
I tried commands like yours, but it keeps prompting "all mgr daemons do not
Not sure about the spillover stuff, it didn't happen to me when I upgraded from
Luminous to 14.2.1. The dashboard thing did happen to me. It seems you have
to disable the dashboard and then re-enable it after installing the separate
dashboard rpm. Also, make sure to restart the mgr services on each
Hi all,
We recently upgraded a testing cluster from 13.2.4 to 14.2.1. We encountered 2
problems:
1. We got a BlueFS spillover warning even though usage is low, as it's a
testing cluster without much activity/data:
# ceph -s
cluster:
id: cc795498-5d16-4b84-9584-1788d0458be9
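To see which OSDs are spilling over and how much BlueFS data has landed on the
slow device, something like this can help (a sketch; osd.0 is a placeholder id,
run the daemon command on that OSD's host):

```shell
# List the affected OSDs
ceph health detail | grep -i spillover

# Per-OSD BlueFS perf counters; slow_used_bytes shows how much
# RocksDB data has spilled onto the slow (data) device
ceph daemon osd.0 perf dump bluefs
```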
Hi,
do you know if libcrush.org will be available again?
crush analyze isn't working; the site libcrush.org seems to be offline...
crush optimize --crushmap report.json --out-path optimized.crush --pool 10
Traceback (most recent call last):
File "/usr/local/bin/crush", line 25, in
Has anybody else encountered this issue? Prometheus is failing to scrape
the prometheus module, returning invalid metric type
"cef431ab_b67a_43f9_9b87_ebe992dec94e_replay_bytes counter"
Ceph version: 14.2.1
Prometheus version: 2.10.0-rc.0
This started happening when I set up one-way RBD mirroring.
I will try to reproduce with logs and create a tracker once I find the
smoking gun...
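For reference, the Prometheus text exposition format only accepts metric names
matching `[a-zA-Z_:][a-zA-Z0-9_:]*` and a fixed set of types on `# TYPE` lines;
a stray token (e.g. a space inside a generated name) makes the scraper read
"&lt;name&gt; &lt;type&gt;" as the type, which matches the error above. A quick local
check of the mgr's scrape payload can pinpoint the offending lines (a sketch;
the helper and sample names are mine, not from the Ceph exporter):

```python
import re

# Metric-name rule and allowed types from the Prometheus text exposition format
NAME_RE = re.compile(r'^[a-zA-Z_:][a-zA-Z0-9_:]*$')
TYPES = {'counter', 'gauge', 'histogram', 'summary', 'untyped'}

def bad_type_lines(payload):
    """Return the '# TYPE' lines Prometheus would reject: a stray extra
    token, an illegal metric name, or an unknown metric type."""
    bad = []
    for line in payload.splitlines():
        if not line.startswith('# TYPE '):
            continue
        parts = line.split()  # expect ['#', 'TYPE', '<name>', '<type>']
        if len(parts) != 4 or not NAME_RE.match(parts[2]) or parts[3] not in TYPES:
            bad.append(line)
    return bad
```

Feeding it the body of `curl http://<mgr-host>:9283/metrics` should show exactly
which exported lines trip the parser.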
It's very strange -- I had the osd mode set to 'passive', and pool
option set to 'force', and the osd was compressing objects for around
15 minutes. Then suddenly it just stopped compressing, until I did
'ceph
Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..."
From: Frank Schilder
Sent: 20 June 2019 19:02
To: Dan van der Ster; ceph-users
Subject: Re: [ceph-users] understanding the bluestore blob, chunk and
compression params
Hi Dan,
On 6/20/2019 8:55 PM, Dan van der Ster wrote:
On Thu, Jun 20, 2019 at 6:55 PM Igor Fedotov wrote:
Hi Dan,
bluestore_compression_max_blob_size is applied for objects marked with
some additional hints only:
if ((alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_SEQUENTIAL_READ) &&
Hi Dan,
this older thread
(https://www.mail-archive.com/ceph-users@lists.ceph.com/msg49339.html) contains
details about:
- how to get bluestore compression working (must be enabled on pool as well as
OSD)
- what the best compression ratio is depending on the application (if
applications do
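The "enabled on pool as well as OSD" point above boils down to something like
this (a sketch; the pool name is a placeholder):

```shell
# Per-pool: request compression for data written to this pool
ceph osd pool set mypool compression_mode aggressive
ceph osd pool set mypool compression_algorithm snappy

# Per-OSD: let the OSDs honour (or force) compression globally
ceph config set osd bluestore_compression_mode aggressive
```

With mode `aggressive` on both sides, BlueStore compresses unless a client hint
says otherwise; `force` compresses regardless of hints.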
I'd like to see more details (preferably backed with logs) on this...
On 6/20/2019 6:23 PM, Dan van der Ster wrote:
P.S. I know this has been discussed before, but the
compression_(mode|algorithm) pool options [1] seem completely broken
-- With the pool mode set to force, we see that sometimes
Hi Dan,
bluestore_compression_max_blob_size is applied for objects marked with
some additional hints only:
if ((alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_SEQUENTIAL_READ) &&
(alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_RANDOM_READ) == 0 &&
(alloc_hints &
Just nuke the monitor's store, remove it from the existing quorum, and
start over again. Injecting maps correctly is non-trivial and obviously
something went wrong, and re-syncing a monitor is pretty cheap.
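The resync flow is roughly the following (a sketch; `mon.b`, paths, and the
keyring location are placeholders for your deployment):

```shell
# Remove the broken monitor from the monmap
ceph mon remove b

# On the mon host: stop it and set its old store aside
systemctl stop ceph-mon@b
mv /var/lib/ceph/mon/ceph-b /var/lib/ceph/mon/ceph-b.bak

# Recreate an empty store seeded with the current monmap and keyring
ceph mon getmap -o /tmp/monmap
ceph-mon --mkfs -i b --monmap /tmp/monmap --keyring /path/to/mon-keyring

# Start it again; it syncs the store from the existing quorum
systemctl start ceph-mon@b
```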
On Thu, Jun 20, 2019 at 6:46 AM ☣Adam wrote:
> Anyone have any suggestions for how to
P.S. I know this has been discussed before, but the
compression_(mode|algorithm) pool options [1] seem completely broken
-- With the pool mode set to force, we see that sometimes the
compression is invoked and sometimes it isn't. AFAICT,
the only way to compress every object is to set
Hi all,
I'm trying to compress an rbd pool via backfilling the existing data,
and the allocated space doesn't match what I expect.
Here is the test: I marked osd.130 out and waited for it to erase all its data.
Then I set (on the pool) compression_mode=force and compression_algorithm=zstd.
Then
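The test described can be reproduced with something like this (a sketch;
assuming the pool is named `rbd`, as the osd id 130 comes from the message):

```shell
ceph osd out 130     # drain the OSD; wait for backfill to empty it
ceph osd pool set rbd compression_mode force
ceph osd pool set rbd compression_algorithm zstd
ceph osd in 130      # refill; backfilled data should now be compressed
```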
Anyone have any suggestions for how to troubleshoot this issue?
Forwarded Message
Subject: Monitor stuck at "probing"
Date: Fri, 14 Jun 2019 21:40:39 -0500
From: ☣Adam
To: ceph-users@lists.ceph.com
I have a monitor which I just can't seem to get to join the quorum, even
after
Hi,
I don't have an answer for you, but could you elaborate on what
exactly you are trying to do and what has worked so far? Which Ceph
version are you running? I understand that you want to clone your
whole cluster, how exactly are you trying to do that? Is this the
first OSD you're
Quoting James Wilkins (james.wilk...@fasthosts.com):
> Hi all,
>
> Just want to (double) check something – we’re in the process of
> luminous -> mimic upgrades for all of our clusters – particularly this
> section regarding MDS steps
>
> • Confirm that only one MDS is online and is rank 0
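That MDS step usually comes down to the following (a sketch; `cephfs` is a
placeholder filesystem name):

```shell
# Reduce to a single active MDS before upgrading
ceph fs set cephfs max_mds 1

# Confirm only rank 0 is active (other daemons should be standby)
ceph fs status cephfs
```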
Both pools are in the same Ceph cluster. Do you have any documentation on
the live migration process? I'm running 14.2.1
Something like:
```
rbd migration prepare test1 rbd2/test2
rbd migration execute test1
rbd migration commit test1 --force
```
k