Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps

2019-06-03 Thread Yan, Zheng
On Mon, Jun 3, 2019 at 3:06 PM James Wilkins wrote:
> Hi all,
>
> After a bit of advice to ensure we’re approaching this the right way.
>
> (version: 12.2.12, multi-mds, dirfrag is enabled)
>
> We have corrupt metadata as identified by ceph
>
>     health: HEALTH_ERR
>     2 MDSs
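For reference, the inspection and repair tooling discussed here is driven from the CLI. A minimal sketch, assuming a placeholder daemon name mds.a and a placeholder path; this is not the specific fix prescribed in this thread:

    # ask the MDS to scrub a subtree and attempt online repair
    ceph daemon mds.a scrub_path /path/to/damaged/dir recursive repair
    # once a damage-table entry is confirmed resolved, clear it by id
    ceph tell mds.a damage rm <damage_id>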

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-06-03 Thread Shawn Iverson
The cephfs_metadata pool makes sense on ssd, but it won't need a lot of space. Chances are that you'll have plenty of ssd storage to spare for other uses. Personally, I'm migrating away from a cache tier and rebuilding my OSDs. I am finding that performance with Bluestore OSDs with the block.db
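A rough illustration of the OSD rebuild mentioned above (device names are examples only): a Bluestore OSD with its data on an HDD and its block.db on an SSD partition can be created with ceph-volume.

    # data on an HDD, RocksDB (block.db) on an SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2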

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-06-03 Thread Daniele Riccucci
Hello, sorry to jump in. I'm looking to expand an HDD cluster with SSDs. I'm thinking about moving cephfs_metadata to the SSDs (maybe with a device class?) or using them as a cache layer in front of the cluster. Any tips on how to do it with ceph-ansible? I can share the config I currently have
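The device-class part can also be done outside ceph-ansible. A sketch, assuming the OSDs already report an "ssd" device class and the metadata pool is named cephfs_metadata:

    # create a replicated CRUSH rule restricted to SSD OSDs
    ceph osd crush rule create-replicated ssd-only default host ssd
    # point the CephFS metadata pool at it; its PGs will migrate to the SSDs
    ceph osd pool set cephfs_metadata crush_rule ssd-only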

Re: [ceph-users] Meaning of Ceph MDS / Rank in "Stopped" state.

2019-06-03 Thread Patrick Donnelly
Hello Wesley,

On Wed, May 29, 2019 at 8:35 AM Wesley Dillingham wrote:
> On further thought, I'm now thinking this is telling me which rank is
> stopped (2), not that two ranks are stopped.

Correct!

> I guess I am still curious about why this information is retained here

Time has claimed that
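To check this on a live cluster, the stopped ranks are recorded in the MDSMap. A sketch, with "cephfs" as a placeholder filesystem name:

    # dump the MDSMap for one filesystem and look at the "stopped" field
    ceph fs get cephfs | grep -i stopped
    # or dump the maps for all filesystems
    ceph fs dump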

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-06-03 Thread Patrick Donnelly
On Mon, May 27, 2019 at 2:36 AM Oliver Freyermuth wrote:
> Dear Cephalopodians,
>
> in the process of migrating a cluster from Luminous (12.2.12) to Mimic
> (13.2.5), we have upgraded the FUSE clients first (we took the chance
> during a time of low activity),
> thinking that this should not
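For context, CephFS quotas are set and read as virtual extended attributes on directories, so the client version matters for enforcement. A sketch of the usual commands (mount point and values are examples):

    # set a 100 GiB byte quota and a file-count quota on a directory
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/project
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/project
    # read the current quota back
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/project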

[ceph-users] ceph master - src/common/options.cc - size_t / uint64_t incompatibility on ARM 32bit

2019-06-03 Thread Dyweni - Ceph-Users
Hi List / James,

In Ceph master (and also Ceph 14.2.1), file src/common/options.cc, line 192:

    Option::size_t sz{strict_iecstrtoll(val.c_str(), error_message)};

On 32-bit ARM, compiling with Clang 7.1.0, compilation fails hard at this line. The reason is because

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-06-03 Thread Mattia Belluco
Hi Jake,

I would definitely go for the "leave the rest unused" solution.

Regards,
Mattia

On 5/29/19 4:25 PM, Jake wrote:
> Thank you for a lot of detailed and useful information :)
>
> I'm tempted to ask a related question on SSD endurance...
>
> If 60GB is the sweet spot for each DB/WAL
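A sketch of the "leave the rest unused" approach via LVM (VG/LV names, sizes and devices are examples): carve one fixed-size DB logical volume per OSD on the shared SSD and simply leave the remainder unallocated.

    # one 60G DB/WAL logical volume per OSD; the rest of the SSD stays unallocated
    vgcreate ceph-db /dev/nvme0n1
    lvcreate -L 60G -n db-osd0 ceph-db
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-osd0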

Re: [ceph-users] obj_size_info_mismatch error handling

2019-06-03 Thread Dan van der Ster
Hi Reed and Brad, Did you ever learn more about this problem? We currently have a few inconsistencies appearing with the same environment (CephFS, v13.2.5) and symptoms. PG repair doesn't fix the inconsistency, nor does Brad's omap workaround earlier in the thread. In our case, we can fix it by cp'ing the
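As I read the workaround being described (the PG id and paths below are placeholders, and this is only a sketch of the approach, not a verified procedure): locate the inconsistent object, rewrite the corresponding CephFS file so its backing object gets consistent size info, then repair the PG.

    # identify the object(s) reported as inconsistent
    rados list-inconsistent-obj 2.1ab --format=json-pretty
    # rewrite the affected file so the backing object is rewritten
    cp /mnt/cephfs/path/file /mnt/cephfs/path/file.tmp
    mv /mnt/cephfs/path/file.tmp /mnt/cephfs/path/file
    # then re-run the repair on the PG
    ceph pg repair 2.1ab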

Re: [ceph-users] getting pg inconsistent periodly

2019-06-03 Thread Hervé Ballans
Hi all,

For information, I updated my Luminous cluster to the latest version (12.2.12) two weeks ago and, since then, I no longer encounter any problems with inconsistent PGs :)

Regards,
rv

On 03/05/2019 at 11:54, Hervé Ballans wrote:
On 24/04/2019 at 10:06, Janne Johansson wrote:
On Wed

[ceph-users] CEPH MDS Damaged Metadata - recovery steps

2019-06-03 Thread James Wilkins
Hi all,

After a bit of advice to ensure we’re approaching this the right way.

(version: 12.2.12, multi-mds, dirfrag is enabled)

We have corrupt metadata as identified by ceph:

    health: HEALTH_ERR
        2 MDSs report damaged metadata

Asking the MDS via damage ls:

    {
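For reference, the damage table is queried per MDS daemon; a sketch, with mds.a as a placeholder daemon name:

    # confirm which daemons are reporting damage
    ceph health detail
    # list the recorded metadata damage entries
    ceph tell mds.a damage ls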

Re: [ceph-users] bluestore block.db on SSD, where block.wal?

2019-06-03 Thread Martin Verges
Hello,

please see
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg54607.html and
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030740.html .

--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
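As generally documented for Bluestore: if no separate block.wal is specified, the WAL is placed on the block.db device, so a dedicated WAL partition only pays off when it can sit on faster media than the DB. Where an existing OSD keeps its block, DB and WAL can be checked like this (osd.0 is an example):

    # show the block/DB/WAL device paths recorded for an OSD
    ceph osd metadata 0 | grep -E 'bluefs|bluestore_bdev'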