On 1/27/22 16:41, Alwin Antreich wrote:
> January 27, 2022 12:27 PM, "Aaron Lauterer" <a.laute...@proxmox.com> wrote:
>> Thanks for the hint, as I wasn't aware of it. It will not be considered
>> for PVE-managed Ceph though, so not really an option here. [0]
>>
>> [0] https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c08276376dda0fba2c6c;hb=HEAD#l192
> That's where the db config would work.
>> What these approaches have in common is that we spread the config over
>> multiple places and cannot set different data pools for different
>> storages.
> Yes indeed, it adds to the fragmentation. But since this conf file
> exists per storage, a data pool per storage is already possible.
Yeah, you are right: for external Ceph clusters, this would already be
possible to configure per storage with the extra config file.
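For example, something like the following in that file should already
select a separate data pool (the storage ID `ext-rbd` and the pool name
`ec_data` are made-up placeholders):

    # /etc/pve/priv/ceph/ext-rbd.conf
    [client]
        rbd default data pool = ec_data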
>> I'd rather keep the data pool stored in our storage.cfg and apply the
>> parameter where needed. From what I can tell, I missed the image clone
>> in this patch, where the data-pool also needs to be applied.
>>
>> But this way we have the settings for that storage in one place we
>> control and are also able to have different EC pools for different
>> storages. Not that I expect it to happen a lot in practice, but you
>> never know.
> This sure is a good place. But to argue in favor of a separate config
> file. :)
>
> Wouldn't it make sense to have a parameter for a `client.conf` in the
> storage definition? Or maybe an inherent place like it already exists.
> This would allow not only setting the data pool but also adjusting
> client caching, timeouts, debug levels, ...? [0] The benefit is mostly
> for users who do not have administrative access to their Ceph cluster.
Correct me if I got something wrong, but adding custom config settings
for an external Ceph cluster, which "I" as the PVE admin might only have
limited access to, is already possible via the previously mentioned
`/etc/pve/priv/ceph/<storage>.conf`. And since I have to create it
manually, I am aware of it.
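As a rough sketch, such a manually maintained per-storage file could
carry exactly the kind of client options mentioned above (the values
here are arbitrary examples, not recommendations):

    # /etc/pve/priv/ceph/ext-rbd.conf
    [client]
        rbd cache = true
        rbd cache size = 67108864
        client mount timeout = 60
        debug rbd = 5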
In the case of hyperconverged setups, I can add anything I want to
`/etc/pve/ceph.conf`, so there is no immediate need for a custom config
file per storage for things like changing debug levels and so on.
Anything that touches the storage setup should rather be stored in
storage.cfg, and the option for where the data objects are stored falls
into that category IMO.
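To illustrate, an RBD entry in storage.cfg could then look roughly like
this (`data-pool` being the option this patch adds; the storage ID and
pool names are invented):

    rbd: ec-storage
        content images,rootdir
        krbd 0
        pool rbd_metadata
        data-pool rbd_ec_data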
Of course, the downside is that if I run some commands manually (for
example `rbd create`), I will have to provide the `--data-pool`
parameter myself. But even with a custom config file, I would have to
make sure to pass it via the `-c` parameter for it to have any effect.
And since the default ceph.conf is not used anymore in that case, I
would also need to add the mon list and auth parameters myself. So not
much is gained there AFAICS versus adding it to /etc/pve/ceph.conf
directly.
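To sketch the difference (pool, image, and monitor values are
placeholders):

    # data pool in storage.cfg: a manual call only needs the extra parameter
    rbd create --size 4G --data-pool rbd_ec_data rbd_metadata/vm-100-disk-0

    # custom config file: -c is needed, plus mon list and auth, as the
    # default ceph.conf is bypassed
    rbd -c /etc/pve/priv/ceph/ec-storage.conf -m 10.0.0.1 \
        --id admin --keyring /etc/pve/priv/ceph/ec-storage.keyring \
        create --size 4G rbd_metadata/vm-100-disk-0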
Besides the whole "where to store the data-pool parameter" issue, having
custom client configs per storage would most likely be its own feature
request, basically extending the current mechanism to hyperconverged
storages. Though that would mean some kind of config merging, as the
hyperconverged situation relies heavily on the default Ceph config file.

I still see the custom config file as a place for the admin to add
custom options, not as a way to spread the PVE-managed settings around
when that can be avoided.
> Hyper-converged setups can store these settings in the config db. Each
> storage would need its own user to separate the settings.
Now that would open a whole different box of changing how hyperconverged
Ceph clusters are set up. ;)
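For reference, if each storage had its own Ceph user, that config-db
variant would presumably boil down to something like this (user and pool
names invented):

    # per-storage client section in the cluster's config db
    ceph config set client.storage1 rbd_default_data_pool rbd_ec_data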
> Thanks for listening to my 2 cents. ;)
It's always good to hear opinions from a different perspective, to check
whether one is missing something, or at least to think it through even
more. :)
> Cheers,
> Alwin
>
> [0] https://docs.ceph.com/en/latest/cephfs/client-config-ref
>     https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#