[ceph-users] poor cephFS performance on Nautilus 14.2.9 deployed by ceph_ansible

2020-06-02 Thread Derrick Lin
Hi guys, we just deployed a Ceph 14.2.9 cluster with the following hardware: MDSS x 1: Xeon Gold 5122 3.6 GHz, 192 GB, Mellanox ConnectX-4 Lx 25GbE; MON x 3: Xeon Bronze 3103 1.7 GHz, 48 GB, Mellanox ConnectX-4 Lx 25GbE, 6 x 600 GB 10K SAS; OSD x 5: Xeon Silver 4110 2.1 GHz x 2, 192 GB, Mellanox ConnectX-4 Lx

[ceph-users] upgrade ceph and use cephadm - rgw issue

2020-06-02 Thread Andy Goldschmidt
Hi, I am trying to upgrade from Mimic (13.2.10) to Octopus (15.x). I'm also trying to use cephadm and am following this guide: https://docs.ceph.com/docs/master/cephadm/adoption/ It was all going fine until step 11 and deploying the new RGWs. I don't have any realms set for my cluster, so how
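For a cluster that has never used multisite, one commonly suggested workaround is to create a default realm/zonegroup/zone and commit a period before redeploying the gateways. This is a rough sketch only, not from the original thread; the names and placement below are placeholders, and an existing default zone/zonegroup may need to be attached to the realm rather than re-created:
  radosgw-admin realm create --rgw-realm=default --default
  radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default --master --default
  radosgw-admin period update --commit
  ceph orch apply rgw default default --placement="1 rgw-host1"   # realm name, zone name, placement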

[ceph-users] Re: professional services and support for newest Ceph

2020-06-02 Thread Patrick Calhoun
Kevin, Ignazio, Marc, Thanks for the information. I now consider myself well-advised. -Patrick On Tue, Jun 2, 2020 at 1:21 PM Marc Roos wrote: > > Ceph is from redhat and redhat is owned by IBM. I think the best > training you could get would be from RedHat. > > I would not advise to learn

[ceph-users] Re: professional services and support for newest Ceph

2020-06-02 Thread Patrick Calhoun
My higher-ed organization is considering multi-day instructor-led training similar to Red Hat's Ceph Storage "CEPH125" course, but perhaps based only on the community codebase instead of the Red Hat Ceph Storage product. Likewise if we were to plan or deploy a Ceph cluster using the community

[ceph-users] Re: Deploy nfs from cephadm

2020-06-02 Thread Michael Fritch
Hi, Do you have a running rpcbind service? $ systemctl status rpcbind NFSv3 requires rpcbind, but this dependency will be removed in a later release of Octopus. I've updated the tracker with more detail. Hope this helps, -Mike John Zachary Dover wrote: > I've created a docs tracker ticket
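A quick way to confirm this on a systemd-based host (standard systemctl usage, nothing Ceph-specific):
  systemctl status rpcbind
  systemctl enable --now rpcbind    # start it immediately and enable it at boot if it was missing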

[ceph-users] Re: crashing OSDs: ceph_assert(h->file->fnode.ino != 1)

2020-06-02 Thread Simon Leinen
Sorry for following up on myself (again), but I had left out an important detail: Simon Leinen writes: > Using the "stupid" allocator, we never had any crashes with this > assert. But the OSDs run more slowly this way. > So what we ended up doing was: When an OSD crashed with this assert, we >

[ceph-users] Re: crashing OSDs: ceph_assert(h->file->fnode.ino != 1)

2020-06-02 Thread Simon Leinen
Igor Fedotov writes: > 2) Main device space is highly fragmented - 0.84012572151981013 where > 1.0 is the maximum. Can't say for sure but I presume it's pretty full > as well. As I said, these disks aren't that full as far as bytes are concerned. But they do have a lot of objects on them! As I
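For reference, the fragmentation rating quoted above can be read from a running OSD's admin socket; a hedged sketch (osd.0 is a placeholder id, and the exact subcommand form may differ between releases):
  ceph daemon osd.0 bluestore allocator score block    # returns a fragmentation rating between 0 and 1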

[ceph-users] Re: crashing OSDs: ceph_assert(h->file->fnode.ino != 1)

2020-06-02 Thread Simon Leinen
Simon Leinen writes: >> I can suggest the following workarounds to start the OSD for now: >> 1) switch allocator to stupid by setting 'bluestore allocator' >> parameter to 'stupid'. Presume you have default setting of 'bitmap' >> now.. This will allow more continuous allocations for bluefs space
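A minimal sketch of that workaround (osd.12 stands in for the crashing OSD; the unit name assumes a package-based install rather than cephadm):
  ceph config set osd bluestore_allocator stupid    # or limit it to one OSD: ceph config set osd.12 bluestore_allocator stupid
  systemctl restart ceph-osd@12                     # restart so the allocator change takes effect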

[ceph-users] Re: OSD upgrades

2020-06-02 Thread Paul Emmerich
Correct, "crush weight" and normal "reweight" are indeed very different. The original post mentions "rebuilding" servers, in this case the correct way is to use "destroy" and then explicitly re-use the OSD afterwards. purge is really only for OSDs that you don't get back (or broken disks that you
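A rough sketch of that destroy-and-reuse flow (OSD id and device path are placeholders):
  ceph osd destroy 12 --yes-i-really-mean-it           # keeps the OSD id and CRUSH entry so it can be re-used
  ceph-volume lvm zap /dev/sdb --destroy               # wipe the old device on the rebuilt server
  ceph-volume lvm create --osd-id 12 --data /dev/sdb   # recreate the OSD under the same id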

[ceph-users] Re: professional services and support for newest Ceph

2020-06-02 Thread Marc Roos
Ceph is from Red Hat and Red Hat is owned by IBM. I think the best training you could get would be from Red Hat. I would not advise learning how to use a mouse with a web interface, nor this Ansible or some other deploy tool. Do it from scratch manually so you know the basics. If you know

[ceph-users] Re: professional services and support for newest Ceph

2020-06-02 Thread Ignazio Cassano
Hello, I am testing Ceph from croit and it works fine: a very easy web interface for installing and managing Ceph, and very clear support pricing. Ignazio On Tue, 2 Jun 2020, 19:36, wrote: > and theres > > https://croit.io/consulting > > best regards > Kevin M > > - Original Message - >

[ceph-users] Re: professional services and support for newest Ceph

2020-06-02 Thread response
and there's https://croit.io/consulting best regards Kevin M - Original Message - From: "Patrick Calhoun" To: ceph-users@ceph.io Sent: Tuesday, June 2, 2020 5:29:11 PM Subject: [ceph-users] professional services and support for newest Ceph Are there reputable training/support options

[ceph-users] Re: professional services and support for newest Ceph

2020-06-02 Thread John Zachary Dover
Well, there's me, the docs guy that Sage hired. What kinds of training would you like to see? Zac Dover Ceph Docs On Wed, Jun 3, 2020 at 2:29 AM Patrick Calhoun wrote: > Are there reputable training/support options for Ceph that are not geared > toward a specific commercial product (e.g. "Red

[ceph-users] Re: Radosgw PubSub Traffic

2020-06-02 Thread Yuval Lifshitz
Hi Dustin, This is an issue that will happen regardless of pubsub configuration. Tracked here: https://tracker.ceph.com/issues/45816 Yuval On Sun, May 31, 2020 at 11:00 AM Yuval Lifshitz wrote: > Hi Dustin, > Did you create a pubsub zone [1] in your cluster? > (note that this is currently not

[ceph-users] professional services and support for newest Ceph

2020-06-02 Thread Patrick Calhoun
Are there reputable training/support options for Ceph that are not geared toward a specific commercial product (e.g. "Red Hat Ceph Storage,") but instead would cover the newest open source stable release? Thanks, Patrick

[ceph-users] Re: Deploy nfs from cephadm

2020-06-02 Thread John Zachary Dover
I've created a docs tracker ticket for this issue: https://tracker.ceph.com/issues/45819 Zac Ceph Docs On Wed, Jun 3, 2020 at 12:34 AM Simon Sutter wrote: > Sorry, allways the wrong button... > > So I ran the command: > ceph orch apply nfs cephnfs cephfs.backuptest.data > > And there is now a

[ceph-users] Re: 15.2.3 Crush Map Viewer problem.

2020-06-02 Thread Marco Pizzolo
Hi Lenz, Hopefully this is the part that was required:
"tree": {
    "nodes": [
        {
            "id": -1,
            "name": "default",
            "type": "root",
            "type_id": 11,
            "children": [ -9, -7, -5, -3 ]

[ceph-users] Re: Deploy nfs from cephadm

2020-06-02 Thread Simon Sutter
Sorry, always the wrong button... So I ran the command: ceph orch apply nfs cephnfs cephfs.backuptest.data And there is now a non-working container: ceph orch ps: nfs.cephnfs.testnode1testnode1 error 6m ago 71m docker.io/ceph/ceph:v15 journalctl
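To see why the daemon keeps failing, the container's logs can be pulled on the host it was scheduled on; a sketch using the daemon name shown in the ceph orch ps output above:
  ceph orch ps | grep nfs                        # confirm the daemon name and its current state
  cephadm logs --name nfs.cephnfs.testnode1      # run on testnode1; shows the ganesha container's journal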

[ceph-users] Deploy nfs from cephadm

2020-06-02 Thread Simon Sutter
Hello Ceph users, I'm trying to deploy nfs-ganesha with cephadm on Octopus. According to the docs, it should be as easy as running the command shown here: https://docs.ceph.com/docs/master/cephadm/install/#deploying-nfs-ganesha
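For reference, the documented flow boils down to creating a RADOS pool for ganesha's configuration and then applying the service; a sketch with placeholder names (mynfs, ganesha_pool and ganesha_ns are examples, not from the post):
  ceph osd pool create ganesha_pool
  ceph orch apply nfs mynfs ganesha_pool ganesha_ns    # service id, config pool, RADOS namespace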

[ceph-users] Re: Cephadm Hangs During OSD Apply

2020-06-02 Thread m
Just following up: is hanging during the provisioning of an encrypted OSD expected behavior with the current tooling?
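For context, cephadm normally drives encrypted OSD creation through an OSD service spec; a minimal illustrative spec (the service id, host pattern and device selection below are assumptions, not the poster's actual file), saved as, say, osd_spec.yml:
  service_type: osd
  service_id: encrypted_osds
  placement:
    host_pattern: '*'
  data_devices:
    all: true
  encrypted: true
and applied with: ceph orch apply osd -i osd_spec.yml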

[ceph-users] Re: 15.2.3 Crush Map Viewer problem.

2020-06-02 Thread Lenz Grimmer
Hi Marco, looks like you have found a bug. It would be helpful to see the output of the /api/health/full REST API call that is used to generate that tree view. You can obtain this via the dashboard's built-in API browser (click Help icon -> API -> Health -> GET /health/full -> Try it out ->
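The same data can also be fetched outside the browser; a hedged curl sketch against the dashboard's REST API (host, port, user and password are placeholders):
  # obtain a token
  curl -k -X POST https://dashboard-host:8443/api/auth \
       -H 'Content-Type: application/json' \
       -d '{"username": "admin", "password": "secret"}'
  # then request the full health report with the returned token
  curl -k -H 'Authorization: Bearer <token>' https://dashboard-host:8443/api/health/full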

[ceph-users] Re: OSD upgrades

2020-06-02 Thread Thomas Byrne - UKRI STFC
As you have noted, 'ceph osd reweight 0' is the same as a 'ceph osd out', but it is not the same as removing the OSD from the crush map (or setting its crush weight to 0). This explains your observation of the double rebalance when you mark an OSD out (or reweight an OSD to 0) and then remove it
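A sketch of draining via CRUSH weight so the data only moves once (osd.12 is a placeholder id):
  ceph osd crush reweight osd.12 0            # data migrates off osd.12 now
  # wait for the rebalance to finish, then:
  ceph osd out 12
  ceph osd purge 12 --yes-i-really-mean-it    # no second rebalance: the CRUSH weight is already 0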

[ceph-users] Re: OSD upgrades

2020-06-02 Thread Paul Emmerich
"reweight 0" and "out" are the exact same thing Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Tue, Jun 2, 2020 at 9:30 AM Wido den Hollander wrote: > > > On

[ceph-users] Re: Thread::try_create(): pthread_create failed

2020-06-02 Thread 展荣臻(信泰)
Thanks for your reply. Our cluster has been running in production for two years without any problems, so we haven't upgraded. I checked the memory on the host; there is very little free memory left. Could the thread-creation failure be related to this? In addition to the KVM virtual machine, there are 22 OSDs on
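A few generic checks that are commonly useful when pthread_create fails (standard Linux commands, not taken from this thread):
  free -h                            # overall memory pressure on the host
  cat /proc/sys/kernel/threads-max   # system-wide thread limit
  cat /proc/sys/kernel/pid_max       # task limit, which also bounds the number of threads
  ulimit -u                          # per-user process/thread limit in the OSD's environment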

[ceph-users] Re: OSD upgrades

2020-06-02 Thread Wido den Hollander
On 6/2/20 5:44 AM, Brent Kennedy wrote: > We are rebuilding servers and before Luminous our process was: > 1. Reweight the OSD to 0 > 2. Wait for rebalance to complete > 3. Out the osd > 4. Crush remove osd > 5. Auth del osd > 6. Ceph
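Written out as commands, that pre-Luminous sequence is roughly the following (a sketch only; osd.12 is a placeholder and the truncated final step is assumed to be the usual 'ceph osd rm'):
  ceph osd reweight 12 0            # drain; equivalent to marking the OSD out
  # wait for the rebalance to complete
  ceph osd out 12
  ceph osd crush remove osd.12      # this triggers the second rebalance discussed in this thread
  ceph auth del osd.12
  ceph osd rm 12                    # assumed final step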