Delta:
fs1_data: +2M raw space, as expected
fs1_metadata: -22M raw space, for reasons I can't explain
RAW USED: +12252M
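For what it's worth, the data-pool delta alone doesn't come close to explaining the RAW USED jump. A quick back-of-the-envelope check (a sketch; replication size 3 is my assumption, it is not stated in the thread):

```shell
# Sanity check: +2M of pool data at 3x replication should cost ~6M raw.
expected_mb=$((2 * 3))   # assumed replication size 3
observed_mb=12252        # RAW USED delta reported above
echo "expected ~${expected_mb}M raw, observed ${observed_mb}M raw"
```

Even at 3x replication the data pool accounts for only a few megabytes, so the remaining ~12G presumably comes from somewhere else (BlueStore allocation overhead, metadata, or accounting elsewhere in the cluster).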
--
Regards Flemming Frandsen - Stibo Systems - DK - STEP Release Manager
Please use rele...@stibo.com for all Release Management
less than the
global raw bytes used.
On Mon, Feb 19, 2018 at 2:09 AM, Flemming Frandsen
<flemming.frand...@stibosystems.com> wrote:
Each OSD lives on a separate HDD in bluestore with the journals on 2GB
partitions on a shared SSD.
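As a side note, a layout like that could be created with ceph-volume along these lines (a sketch only, assuming a Luminous-era ceph-volume; the device names are examples, and note that in BlueStore the SSD partition holds the RocksDB/WAL rather than a FileStore-style journal):

```shell
# Sketch -- requires a live cluster; /dev/sdb and /dev/sda1 are example devices.
# One BlueStore OSD per HDD, with its DB/WAL on a 2GB partition of the shared SSD.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sda1
```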
On 16/02/18 21:08, Gregory Farnum wrote:
What does the cl
On Fri, Feb 16, 2018 at 4:02 AM Flemming Frandsen
<flemming.frand...@stibosystems.com> wrote:
I'm trying out cephfs and I'm in the process of copying over some
real-world data to see what happens.
I have created a number of
NAME               ID  USED    %USED  MAX AVAIL  OBJECTS
...ata             10  52178k  0      258G       2285493
fs_nexus_data      11  0       0      258G       0
fs_nexus_metadata  12  4181    0      258G       21
map you need to edit it directly (either
by dumping it from the cluster, editing with the crush tool, and
importing; or via the ceph cli commands), rather than by updating
config settings. I believe doing so is explained in the ceph docs.
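For the archive, the dump/edit/import cycle described above is roughly the following (a sketch of the standard commands; the /tmp paths are examples, and a running cluster is required):

```shell
ceph osd getcrushmap -o /tmp/crushmap.bin            # dump the binary CRUSH map
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt  # decompile to editable text
# ... edit /tmp/crushmap.txt ...
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new  # recompile to binary
ceph osd setcrushmap -i /tmp/crushmap.new            # inject the new map
```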
On Fri, Feb 2, 2018 at 4:47 AM Flemming Frandsen wrote:
the crushmap tree a bit, but I did not see how "osd crush
chooseleaf type" relates to that in any way.