Re: [ceph-users] Mimic - EC and crush rules - clarification

2018-11-01 Thread Wladimir Mutel
David Turner wrote: Yes, when creating an EC profile, it automatically creates a CRUSH rule specific for that EC profile. You are also correct that 2+1 doesn't really have any resiliency built in. 2+2 would allow 1 node to go down while still having your data accessible. It will use 2x data

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-01 Thread Hayashida, Mami
Thank you, both of you. I will try this out very soon. On Wed, Oct 31, 2018 at 8:48 AM, Alfredo Deza wrote: > On Wed, Oct 31, 2018 at 8:28 AM Hayashida, Mami > wrote: > > > > Thank you for your replies. So, if I use the method Hector suggested (by > creating PVs, VGs etc. first), can I add

Re: [ceph-users] Mimic - EC and crush rules - clarification

2018-11-01 Thread David Turner
Yes, when creating an EC profile, it automatically creates a CRUSH rule specific for that EC profile. You are also correct that 2+1 doesn't really have any resiliency built in. 2+2 would allow 1 node to go down while still having your data accessible. It will use 2x data to raw as opposed to the
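The "2x data to raw" point can be sanity-checked with a small helper: the raw-to-usable overhead of an erasure-coded pool is (k+m)/k. This is my own illustration, not part of any Ceph tooling:

```python
def ec_raw_overhead(k: int, m: int) -> float:
    """Raw bytes stored per usable byte for an erasure-coded pool
    with k data chunks and m coding chunks."""
    if k < 1 or m < 0:
        raise ValueError("k must be >= 1 and m must be >= 0")
    return (k + m) / k

# A 2+2 profile writes 2.0x raw data and survives the loss of any 2 chunks;
# 2+1 writes only 1.5x raw data but cannot survive a second failure.
print(ec_raw_overhead(2, 2))  # 2.0
print(ec_raw_overhead(2, 1))  # 1.5
```

So 2+2 matches the raw usage of 2x replication while tolerating two lost chunks instead of one.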

[ceph-users] Mimic - EC and crush rules - clarification

2018-11-01 Thread Steven Vacaroaia
Hi, I am trying to create an EC pool on my SSD-based OSDs and would appreciate it if someone could clarify / provide advice about the following: the best K+M combination for 4 hosts, one OSD per host. My understanding is that K+M < number of OSDs, but using K=2, M=1 does not provide any redundancy ( as soon as 1 OSD i
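For the 4-host case (failure domain = host, one chunk per host), the feasible profiles can be enumerated. This sketch is my own illustration of the constraint, not Ceph code; it assumes you want at least m=2 so a single host failure leaves a safety margin:

```python
def feasible_profiles(hosts: int, min_m: int = 2):
    """List (k, m) EC profiles that place one chunk per host
    and still tolerate at least min_m host failures."""
    profiles = []
    for k in range(2, hosts):          # k=1 would just be replication
        for m in range(min_m, hosts):
            if k + m <= hosts:         # one chunk per host max
                profiles.append((k, m))
    return profiles

print(feasible_profiles(4))  # [(2, 2)] -- the only option with real redundancy
```

With only 4 hosts and host-level failure domains, 2+2 is effectively the only EC profile that survives a host loss with margin; 2+1 fits but loses data on the next failure.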

Re: [ceph-users] Removing MDS

2018-11-01 Thread Rhian Resnick
Morning all, this has been a rough couple of days. We thought we had resolved all our performance issues by moving the Ceph metadata to some high-intensity-write disks from Intel, but what we didn't notice was that Ceph labeled them as HDDs (thanks, Dell RAID controller). We believe this caused

Re: [ceph-users] EC Metadata Pool Storage

2018-11-01 Thread Jason Dillaman
On Thu, Nov 1, 2018 at 12:46 AM Ashley Merrick wrote: > > Hello, > > I have a small EC Pool I am using with RBD to store a bunch of large files > attached to some VM's for personal storage use. > > Currently I have the EC Meta Data Pool on some SSD's, I have noticed even > though the EC Pool has

Re: [ceph-users] Priority for backfilling misplaced and degraded objects

2018-11-01 Thread Jonas Jelten
Hm, I'm not so sure, because we did have a disk outage indeed. When we added many new disks, 50% of objects were misplaced. Then the disk failed and ~2% of objects were degraded. The recovery went on fine, but I would expect that fixing the degraded objects should have priority over data migrati
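The expectation here — degraded objects repaired before merely misplaced ones — can be modeled as a simple priority sort. This is a toy illustration of the expected policy, not Ceph's actual recovery scheduler:

```python
# Toy model of a recovery queue. Degraded PGs (fewer copies than the pool
# requires) should be serviced before misplaced PGs (full copy count, but
# sitting on the wrong OSDs after a CRUSH change).
PRIORITY = {"degraded": 0, "misplaced": 1}

def order_recovery(queue):
    """Sort recovery work so degraded PGs come first; sorted() is stable,
    so PGs with equal priority keep their original order."""
    return sorted(queue, key=lambda pg: PRIORITY[pg["state"]])

queue = [
    {"pg": "1.a", "state": "misplaced"},
    {"pg": "1.b", "state": "degraded"},
    {"pg": "1.c", "state": "misplaced"},
]
print([pg["pg"] for pg in order_recovery(queue)])  # ['1.b', '1.a', '1.c']
```

Under such a policy the ~2% degraded objects would be repaired before the bulk data migration continues, which is the behavior the poster expected.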

Re: [ceph-users] add monitors - not working

2018-11-01 Thread Steven Vacaroaia
I have redeployed the cluster, adding all 3 monitors from the beginning. It seems that the proper procedure is to have the correct ceph.conf (with the correct fsid and all monitors specified in mon_initial_members and mon_host) deployed on the future monitors, and THEN run ceph-deploy mon add. It migh
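A minimal ceph.conf along the lines described might look like the fragment below. All values here are placeholders (fsid, host names, and IPs are not from this thread); substitute your cluster's own:

```
[global]
fsid = 00000000-0000-0000-0000-000000000000   ; placeholder: your cluster's fsid
mon_initial_members = mon1, mon2, mon3        ; placeholder monitor host names
mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3       ; placeholder monitor IPs
```

With this file in place on each future monitor host, ceph-deploy mon add is run afterwards, as the poster describes.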

[ceph-users] 订阅 (Subscribe)

2018-11-01 Thread ma.xiangxi...@eisoo.com
--- maxiangxiang (马祥祥) Tel: 15221944636 Email: ma.xiangxi...@eisoo.com Department: AS Platform Development, Infrastructure Services (AS平台开发 基础服务). Be innovative EISOO. Be global EISOO. --

Re: [ceph-users] ceph-bluestore-tool failed

2018-11-01 Thread ST Wong (ITSC)
Hi, thanks. Tried with the path option but get the same error: # ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-2 inferring bluefs devices from bluestore path 2018-11-01 15:57:53.395 7f9f91adca00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyr

Re: [ceph-users] Priority for backfilling misplaced and degraded objects

2018-11-01 Thread Janne Johansson
I think that all the misplaced PGs in the queue that get writes _while_ waiting for backfill will get the "degraded" status, meaning that before they were just in the wrong place; now they are in the wrong place, AND the newly made PG they should backfill into will get an old dump made fir