We had to update the OS / kernel, chown all the data to ceph:ceph, and update
the partition type codes on both the OSDs and journals. After this, udev and
systemd brought them up automatically.
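Roughly the commands involved, as a sketch (device names and partition numbers
here are examples only, and the GPT type GUIDs should be double-checked against
the 95-ceph-osd.rules file shipped with your Ceph version):

  chown -R ceph:ceph /var/lib/ceph                                       # Jewel runs the daemons as ceph, not root
  sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb      # mark partition 1 as a Ceph OSD data partition
  sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1  # mark the journal partition
  partprobe /dev/sdb && udevadm trigger                                  # let udev / ceph-disk activate them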
From: ceph-users on behalf of
We did this with RBD, pacemaker, and corosync without issue - not sure about
CephFS though. You might have to use something like sanlock?
From: ceph-users on behalf of nigel davies
Sent: Wednesday,
https://techcrunch.com/2018/11/12/the-ceph-storage-project-gets-a-dedicated-open-source-foundation/
What does this mean for:
1. Governance
2. Development
3. Community
Forgive me if I’ve missed the discussion previously on this list.
___
We were upgrading from Ceph Hammer to Ceph Jewel; we had updated our OS from
CentOS 7.1 to CentOS 7.3 prior to this without issue. We ran into 2 issues:
1. FAILED assert(0 == "Missing map in load_pgs")
* We found the following article fixed this issue:
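(For anyone who hits the same assert: the usual direction is
ceph-objectstore-tool against the affected OSD while it is stopped. A rough
sketch with a made-up OSD id and pgid - not necessarily the exact procedure
from that article, and the tool's options vary a bit between releases:

  systemctl stop ceph-osd@12
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs                                       # find the PG named in the assert
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 3.1f --op export --file /root/3.1f.export    # keep a copy first
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 3.1f --op remove                             # then let the PG recover from its peers
)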
There are a couple of things I would look into (rough check commands for each
are sketched after the list):
* Any packet loss whatsoever – especially on your cluster / private
replication network
* Test against an R3 pool to see if EC on RBD with overwrites is the culprit
* Check to see what processes are in the “R” state during high iowait
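Rough examples of how to check each of those (interface and pool names are
placeholders):

  ip -s link show eth2                              # RX/TX errors or drops on the cluster/replication NIC
  ethtool -S eth2 | grep -iE 'drop|err'             # per-queue drop counters from the driver
  ceph osd pool create r3test 64 64 replicated      # throwaway 3x replicated pool
  rados bench -p r3test 30 write                    # compare against the EC-backed RBD pool
  ps -eo state,pid,wchan:32,cmd | awk '$1 ~ /R|D/'  # what is running (R) or blocked on IO (D) during the iowait spikes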
Hey folks - I recently deployed Luminous / BlueStore on SSDs to back an
OpenStack cluster that supports our build / deployment infrastructure and I'm
getting 40% slower build times. Any thoughts on what I may need to do with Ceph
to speed things up? I have 30 SSDs backing an 11 compute node OpenStack cluster.
40% slower performance compared to Ceph Jewel / OpenStack Mitaka backed by the
same SSDs ☹. I have 30 OSDs on SSDs (Samsung 860 EVO 1TB each).
From: Sinan Polat
Sent: Thursday, February 21, 2019 8:43 AM
To: ceph-users@lists.ceph.com; Smith, Eric
Subject: Re: [ceph-users] BlueStore / OpenStack
Yes, stand-alone OSDs (WAL/DB/Data all on the same disk); this is the same as it
was for Jewel / filestore. Even if they are consumer SSDs, why would they be 40%
faster with an older version of Ceph?
From: Mohamad Gebai
Date: Thursday, February 21, 2019 at 9:44 AM
To: "Smith, Eric" , S
Date: Thursday, February 21, 2019 at 11:50 AM
To: "Smith, Eric" , "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] BlueStore / OpenStack Rocky performance issues
I didn't mean that the fact that they are consumer SSDs is the reason for this
performance impact. I was just pointing it out.
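A quick way to see whether the 860 EVOs themselves are the bottleneck is a
single-job O_DSYNC 4k write test with fio, since that is roughly the kind of
small synchronous write the journal/WAL generates. A sketch only, and
destructive if pointed at a raw device, so use a spare partition or a file:

  fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

Consumer drives without power-loss protection often drop to a few hundred IOPS
on this test, which would explain a large gap that has little to do with the
Ceph version.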
This will cause data migration.
-Original Message-
From: ceph-users On Behalf Of Paul Emmerich
Sent: Monday, March 4, 2019 2:32 PM
To: Kees Meijs
Cc: Ceph Users
Subject: Re: [ceph-users] Altering crush-failure-domain
Yes, these parts of the profile are just used to create a crush rule.
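So to change the failure domain of an existing EC pool, the usual route is to
generate a new rule and point the pool at it, e.g. (profile, rule and pool
names below are placeholders):

  ceph osd erasure-code-profile set ec-profile-host k=4 m=2 crush-failure-domain=host   # k/m should match your existing pool's profile
  ceph osd crush rule create-erasure ec-by-host ec-profile-host                         # crush rule generated from that profile
  ceph osd pool set mypool crush_rule ec-by-host                                        # switching the pool triggers the data migration mentioned above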
Hey folks - I'm using Luminous (12.2.10) and I was wondering if there's
anything out of the box I need to change performance wise to get the most out
of OpenStack on Ceph. I'm running Rocky (Deployed with Kolla) and running Ceph
deployed via ceph-deploy.
Any tips / tricks / gotchas are greatly appreciated!
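Not much is needed out of the box, but the client-side settings people usually
check first for OpenStack on RBD are the librbd cache options in the [client]
section of ceph.conf on the compute nodes. Example values only:

  [client]
  rbd cache = true
  rbd cache writethrough until flush = true
  rbd cache size = 67108864          # 64 MB, example value

Beyond that, having Glance, Cinder and Nova all on RBD so image clones stay
copy-on-write is usually the biggest single win.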
From: Manuel Rios Fernandez
Sent: Thursday, August 1, 2019 4:04 PM
To: Smith, Eric ; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Balancer in HEALTH_ERR
Hi Eric,
CEPH006 is the node that we're evacuating; for that task we added CEPH005.
Thanks
From: Smith, Eric mailto:eric.sm...@ccur.com
From your pastebin data – it appears you need to change the crush weight of the
OSDs on CEPH006? They all have a crush weight of 0, while the other OSDs seem
to have a crush weight of 10.91309. You might look into the ceph osd crush
reweight-subtree command.
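i.e. roughly (the weight value just mirrors what your other hosts show):

  ceph osd crush reweight-subtree CEPH006 10.91309   # sets every OSD under that host bucket to the given crush weight
  ceph osd df tree                                    # verify the weights, and expect backfill to start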
Eric
From: ceph-users on behalf of EDH -