Thanks Hector. So many things going through my head and I totally forgot to
explore if just turning off the warnings (if only until I get more disks) was
an option.
This is 1000% more sensible for sure.
> On Feb 8, 2019, at 7:19 PM, Hector Martin wrote:
My practical suggestion would be to do nothing for now (perhaps tweaking
the config settings to shut up the warnings about PGs per OSD). Ceph
will gain the ability to downsize pools soon, and in the meantime,
anecdotally, I have a production cluster where we overshot the current
recommendation by 1
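For reference, the knob involved would be something like the following
(assuming a Luminous/Mimic-era cluster; older releases used
mon_pg_warn_max_per_osd instead, so check your version):

  # Raise the threshold behind the TOO_MANY_PGS health warning
  ceph config set global mon_max_pg_per_osd 400

  # or equivalently in ceph.conf under [global]:
  #   mon max pg per osd = 400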
Thanks again to Jan, Burkhard, Marc and Hector for responses on this. To
review, I am removing OSDs from a small cluster and running up against the “too
many PGs per OSD” problem due to lack of clarity. Here’s a summary of what I
have collected on it:
The CephFS data pool can’t be changed, only
On 08/02/2019 19.29, Marc Roos wrote:
>
>
> Yes that is thus a partial move, not the behaviour you expect from a mv
> command. (I think this should be changed)
CephFS lets you put *data* in separate pools, but not *metadata*. Also,
I think you can't remove the original/default data pool.
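For reference, adding an extra data pool looks something like this (the pool
and filesystem names here are just examples):

  # Create a pool and attach it as an additional CephFS data pool
  ceph osd pool create cephfs_data2 8
  ceph fs add_data_pool cephfs cephfs_data2

  # Then point a directory at it via a file layout
  setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/somedir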
Thanks Marc and Burkhard. I think what I am learning is that it’s best to copy
between filesystems with cpio, if it’s not outright impossible to do it any
other way, due to the “fs metadata in first pool” problem.
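A rough sketch of the cpio pass-through copy (the mount points are examples,
and this assumes GNU cpio):

  # Copy the tree, preserving directories, modes and mtimes
  cd /mnt/oldfs
  find . -depth -print0 | cpio -0pdmv /mnt/newfs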
FWIW, the Mimic docs still describe how to create a differently named cluster
on the same hardware.
Subject: Re: [ceph-users] Downsizing a cephfs pool
Hi,
you can move the data off to another pool, but you need to keep your
_first_ data pool, since part of the filesystem metadata is stored in
that pool. You cannot remove the first pool.
Regards,
Burkhard
--
Dr. rer. nat. Burkhard Linke
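You can check which pool that is with ceph fs ls; as I understand it, the
first entry in the data pools list is the default one that cannot be removed
(the names below are examples):

  ceph fs ls
  # name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data2 ]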
… a different pool:
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 folder
getfattr -n ceph.dir.layout.pool folder
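One caveat (standard CephFS layout behaviour): the new layout only applies to
files created after the change. Data already in the folder stays in the old
pool until the files are rewritten, for example (file name is a placeholder):

  # Rewriting a file makes its data land in the folder's new pool
  cp folder/somefile folder/somefile.new
  mv folder/somefile.new folder/somefile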
-Original Message-
From: Brian Topping [mailto:brian.topp...@gmail.com]
Sent: 08 February 2019 10:02
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Downsizing a cephfs pool
I am always creating pools starting 8 pg's and when I know I am at
what I want in production I can always increase the pg count.
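Something like this, for example (the pool name is a placeholder; note that
before Nautilus, pg_num can only be increased, never decreased):

  # Start small...
  ceph osd pool create mypool 8 8

  # ...and grow once the sizing is settled
  ceph osd pool set mypool pg_num 16
  ceph osd pool set mypool pgp_num 16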
-Original Message-
From: Brian Topping [mailto:brian.topp...@gmail.com]
Sent: 08 February 2019 05:30
To: Ceph Users
Subject: [ceph-users] Downsizing a cephfs pool
Hi all, I created a problem when moving data to Ceph and
Hello,
Brian Topping wrote:
: Hi all, I created a problem when moving data to Ceph and I would be grateful
for some guidance before I do something dumb.
[...]
: Do I need to create new pools and copy again using cpio? Is there a better
way?
I think I will be facing the same problem
Hi all, I created a problem when moving data to Ceph and I would be grateful
for some guidance before I do something dumb.
I started with the 4x 6TB source disks that came together as a single XFS
filesystem via software RAID. The goal is to have the same data on a cephfs
volume, but with these