Glad to help!
You don’t need the -m (unless I’m misunderstanding your intent).
I used “cephadm shell --name mgr.” to get a shell in an environment
that mimics the daemon’s container, and it does appear to physically share the
mounts. That’s how I was able to figure out what parts of the
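(For reference, a minimal sketch of that invocation; the mgr id "myhost" is made up, and an explicit --mount is only needed for host paths outside the daemon's own directories:)

  # enter a container that mimics the running mgr daemon's environment
  cephadm shell --name mgr.myhost
  # optionally bind an extra host path into the shell container
  cephadm shell --name mgr.myhost --mount /var/log/ceph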
Zakhar,
When adding nodes I usually set the following:
noin (OSDs register as up, but stay out)
norebalance (new placements shouldn't be calculated when the cluster layout
changes; I've been bitten by this not working as expected, so I also set the flag below)
nobackfill (PGs don't move)
I then remove noin,
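For reference, a minimal sketch of the corresponding commands (standard ceph osd set/unset cluster flags, applied in the order described above):

  ceph osd set noin           # OSDs register as up, but stay out
  ceph osd set norebalance    # no new placements while the layout changes
  ceph osd set nobackfill     # PGs don't move
  # ... add the new nodes/OSDs ...
  ceph osd unset noin         # mark the new OSDs in
  ceph osd unset norebalance  # allow the new placements to be calculated
  ceph osd unset nobackfill   # finally let backfill move the data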
On 02.11.21 at 15:02, Sage Weil wrote:
On Tue, Nov 2, 2021 at 8:29 AM Manuel Lausch wrote:
Hi Sage,
The "osd_fast_shutdown" option is set to "false".
When we upgraded to Luminous, I also had blocked I/O issues with this
enabled.
Some weeks ago I tried out the options "osd_fast_shutdown" and
"osd_fast_shutdown_notify_mon" and also got slow ops while
stopping/starting OSDs. But I didn't check if
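(For reference, a sketch of how these options can be checked and changed with the standard config commands; applying them in the "osd" section is an assumption:)

  ceph config get osd osd_fast_shutdown
  ceph config get osd osd_fast_shutdown_notify_mon
  ceph config set osd osd_fast_shutdown false
  ceph config set osd osd_fast_shutdown_notify_mon true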
Hello,
yes, you can use a single server to operate multiple clusters.
I have such a configuration running, with two independent Ceph clusters
on the same node (each of the two clusters, of course, spans multiple nodes).
The trick is to work with multiple ceph.conf files; I use two
separate ceph.conf files
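A minimal sketch of how that looks from the client side (the file names are made up; each cluster gets its own conf and keyring, selected per command):

  ceph -c /etc/ceph/cluster-a.conf -k /etc/ceph/cluster-a.client.admin.keyring -s
  ceph -c /etc/ceph/cluster-b.conf -k /etc/ceph/cluster-b.client.admin.keyring -s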
Hi Artur,
we did write a script (in fact a series of scripts) that we use to
manage our users and their quotas. Our script adds a new user to our
LDAP and sets the default quotas for various storage areas. Quota
information is kept in the LDAP. Another script periodically scans the
LDAP for
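A rough sketch of what such a periodic pass could look like (purely illustrative: the base DN, the quota attribute name "cephQuotaBytes", and the /mnt/cephfs/home layout are assumptions, not our actual script):

  # read uid/quota pairs from LDAP and apply them as CephFS quotas
  ldapsearch -LLL -x -b "ou=People,dc=example,dc=org" uid cephQuotaBytes |
    awk '/^uid:/ {u=$2} /^cephQuotaBytes:/ {print u, $2}' |
    while read user bytes; do
      setfattr -n ceph.quota.max_bytes -v "$bytes" "/mnt/cephfs/home/$user"
    done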
Hi Users
We have two Ceph clusters in our lab. We are experimenting with using a single
server as a client for both clusters. Can we use the same client server to store
the keyrings for the different clusters in the ceph.conf file?
Regards
Mosharaf Hossain
I suggest continuing with manual PG sizing for now. With 16.2.6 we have
seen the autoscaler scale the device health metrics pool up to 16000+ PGs on
brand new clusters, which we know is incorrect. It's on our company backlog
to investigate, but far down the backlog. It's bitten us enough times in
the
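(For reference, the per-pool toggle for manual sizing; the pool name and pg_num below are placeholders:)

  ceph osd pool set <pool> pg_autoscale_mode off
  ceph osd pool set <pool> pg_num 128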
Hello!
As I understand it, CephFS user max-file and max-byte quotas
(ceph.quota.max_{files,bytes}) can be set on an MDS (or a CephFS client) via
the setfattr command (https://docs.ceph.com/en/octopus/cephfs/quota/).
My question is, how can the quotas be set automatically for every new
user's home directory?
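For reference, the manual calls from that doc page look like this (the path and values are just examples):

  setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/home/newuser   # ~100 GB
  setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/home/newuser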
Thanks a lot, Edward, for sharing this thorough description! I filed a
tracker to record your findings and improve this set-up process (
https://tracker.ceph.com/issues/53127).
Additionally, did you try with "cephadm shell -n mgr. -m
"? If I'm not wrong, that should give you a shell where
No issue at all, this is the advice I was looking for :-) Seems that
'norebalance' will do the trick. Thanks!
/Z
On Tue, Nov 2, 2021 at 11:24 AM Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> wrote:
> What's the issue with adding all OSDs with noout and norebalance and, once
> all of them are up,
Hi,
It is quite an old cluster (hopefully not the production one); it was created
under Luminous, if I remember correctly.
Regards,
Lionel BEARD
CLS - IT & Operations
11 rue Hermès, Parc Technologique du Canal
31520 Ramonville Saint-Agne – France
Tél : +33 (0)5 61 39 39 19
Hi,
I see two ways:
Add your OSDs with weight 0 and slowly increase their weight, or add the OSDs
one by one (commands for the weight-based approach are sketched below the link).
It's easy but “stupid”, as some PGs will move many times.
Check
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/OKCWC5KNQF2FD3V4WI2IGMQBGOYY2LL2/
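A sketch of the weight-based approach (the OSD id, the weight steps, and using osd_crush_initial_weight to start at 0 are illustrative assumptions):

  ceph config set global osd_crush_initial_weight 0   # new OSDs join with CRUSH weight 0
  # ... create the new OSDs, then raise their weight in steps:
  ceph osd crush reweight osd.42 0.5
  ceph osd crush reweight osd.42 1.0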
Hi!
I have a 3-node 16.2.6 cluster with 33 OSDs, and plan to add another 3
nodes of the same configuration to it. What is the best way to add the new
nodes and OSDs so that I can avoid a massive rebalance and performance hit
until all new nodes and OSDs are in place and operational?
I would very