Hi guys,
We just deployed a Ceph 14.2.9 cluster with the following hardware:

MDS x 1
  Xeon Gold 5122 3.6 GHz
  192 GB RAM
  Mellanox ConnectX-4 Lx 25GbE

MON x 3
  Xeon Bronze 3103 1.7 GHz
  48 GB RAM
  Mellanox ConnectX-4 Lx 25GbE
  6 x 600 GB 10K SAS

OSD x 5
  2 x Xeon Silver 4110 2.1 GHz
  192 GB RAM
  Mellanox ConnectX-4 Lx
Hi
I am trying to upgrade from Mimic (13.2.10) to Octopus (15.x). I'm also trying
to use cephadm and am following this guide:
https://docs.ceph.com/docs/master/cephadm/adoption/
It was all going fine until step 11, deploying the new RGWs. I don't have
any realms set for my cluster, so how
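For reference, I assume the realm/zone bootstrap would look roughly like this
(a sketch: the "default" names and the placement count are only placeholders,
not values from the adoption guide):

$ radosgw-admin realm create --rgw-realm=default --default
$ radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
$ radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default \
      --master --default
$ radosgw-admin period update --commit
$ ceph orch apply rgw default default --placement=1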
Kevin, Ignazio, Marc,
Thanks for the information. I now consider myself well-advised.
-Patrick
On Tue, Jun 2, 2020 at 1:21 PM Marc Roos wrote:
>
> Ceph is from redhat and redhat is owned by IBM. I think the best
> training you could get would be from RedHat.
>
> I would not advise to learn
My higher-ed organization is considering multi-day instructor-led training
similar to Red Hat's Ceph Storage "CEPH125" course, but perhaps based only
on the community codebase instead of the Red Hat Ceph Storage product.
Likewise if we were to plan or deploy a Ceph cluster using the community
Hi,
Do you have a running rpcbind service?
$ systemctl status rpcbind
NFSv3 requires rpcbind, but this dependency will be removed in a later release
of Octopus. I've updated the tracker with more detail.
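If it is not running, on a systemd host it can usually be enabled with:

$ sudo systemctl enable --now rpcbind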
Hope this helps,
-Mike
John Zachary Dover wrote:
> I've created a docs tracker ticket
Sorry for following up on myself (again), but I had left out an
important detail:
Simon Leinen writes:
> Using the "stupid" allocator, we never had any crashes with this
> assert. But the OSDs run more slowly this way.
> So what we ended up doing was: When an OSD crashed with this assert, we
>
Igor Fedotov writes:
> 2) Main device space is highly fragmented - 0.84012572151981013 where
> 1.0 is the maximum. Can't say for sure but I presume it's pretty full
> as well.
As I said, these disks aren't that full as far as bytes are concerned.
But they do have a lot of objects on them! As I
Simon Leinen writes:
>> I can suggest the following workarounds to start the OSD for now:
>> 1) switch allocator to stupid by setting 'bluestore allocator'
>> parameter to 'stupid'. Presume you have default setting of 'bitmap'
>> now.. This will allow more continuous allocations for bluefs space
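Concretely, switching one OSD over looks roughly like this (a sketch; "NN" is a
placeholder for the affected OSD id):

$ ceph config set osd.NN bluestore_allocator stupid
$ systemctl restart ceph-osd@NN

or, equivalently, in ceph.conf on the OSD host:

[osd]
bluestore_allocator = stupid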
Correct, "crush weight" and normal "reweight" are indeed very different.
The original post mentions "rebuilding" servers; in this case the correct
way is to use "destroy" and then explicitly re-use the OSD afterwards.
purge is really only for OSDs that you don't get back (or broken disks that
you
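A rough sketch of that flow (the OSD id and device path are placeholders):

$ ceph osd destroy 12 --yes-i-really-mean-it
  ... rebuild the server / replace the disk ...
$ ceph-volume lvm create --osd-id 12 --data /dev/sdX

Because the OSD keeps its id and CRUSH position, you avoid the double
rebalance discussed elsewhere in this thread.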
Ceph is from redhat and redhat is owned by IBM. I think the best
training you could get would be from RedHat.
I would not advise to learn how to use a mouse with a web interface nor
this ansible or some other deploy tool. Do it from scratch manually so
you know the basics. If you know
Hello, I am testing Ceph from croit and it works fine: a very easy web
interface for installing and managing Ceph, and very clear support pricing.
Ignazio
On Tue, Jun 2, 2020 at 19:36, wrote:
> and theres
>
> https://croit.io/consulting
>
> best regards
> Kevin M
>
> - Original Message -
>
and theres
https://croit.io/consulting
best regards
Kevin M
- Original Message -
From: "Patrick Calhoun"
To: ceph-users@ceph.io
Sent: Tuesday, June 2, 2020 5:29:11 PM
Subject: [ceph-users] professional services and support for newest Ceph
Are there reputable training/support options
Well, there's me, the docs guy that Sage hired.
What kinds of training would you like to see?
Zac Dover
Ceph Docs
On Wed, Jun 3, 2020 at 2:29 AM Patrick Calhoun wrote:
> Are there reputable training/support options for Ceph that are not geared
> toward a specific commercial product (e.g. "Red
Hi Dustin,
This is an issue that will happen regardless of pubsub configuration.
Tracked here: https://tracker.ceph.com/issues/45816
Yuval
On Sun, May 31, 2020 at 11:00 AM Yuval Lifshitz wrote:
> Hi Dustin,
> Did you create a pubsub zone [1] in your cluster?
> (note that this is currently not
Are there reputable training/support options for Ceph that are not geared
toward a specific commercial product (e.g. "Red Hat Ceph Storage"), but
instead would cover the newest open source stable release?
Thanks,
Patrick
I've created a docs tracker ticket for this issue:
https://tracker.ceph.com/issues/45819
Zac
Ceph Docs
On Wed, Jun 3, 2020 at 12:34 AM Simon Sutter wrote:
> Sorry, always the wrong button...
>
> So I ran the command:
> ceph orch apply nfs cephnfs cephfs.backuptest.data
>
> And there is now a
Hi Lenz,
Hopefully this is the part that was required:
"tree": {
"nodes": [
{
"id": -1,
"name": "default",
"type": "root",
"type_id": 11,
"children": [
-9,
-7,
-5,
-3
]
Sorry, always the wrong button...
So I ran the command:
ceph orch apply nfs cephnfs cephfs.backuptest.data
And there is now a not working container:
ceph orch ps:
nfs.cephnfs.testnode1  testnode1  error  6m ago  71m  docker.io/ceph/ceph:v15
journalctl
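For completeness, the same logs can be pulled through cephadm (a sketch; the
daemon name is the NAME column from the ceph orch ps output above):

$ cephadm logs --name nfs.cephnfs.testnode1

which is essentially journalctl -u ceph-<fsid>@nfs.cephnfs.testnode1 with the
cluster fsid filled in.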
Hello Ceph users,
I'm trying to deploy nfs-ganesha with cephadm on octopus.
According to the docs, it's as easy as running the command in the docs:
https://docs.ceph.com/docs/master/cephadm/install/#deploying-nfs-ganesha
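For reference, the shape of the command on that page is roughly the following
(a sketch; the service, pool, and namespace names are just examples, and the
pool has to exist first):

$ ceph osd pool create nfs-ganesha 64
$ ceph orch apply nfs mynfs nfs-ganesha nfs-ns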
Just following up. Is hanging during the provisioning of an encrypted OSD
expected behavior with the current tooling?
Hi Marco,
looks like you have found a bug. It would be helpful to see the output
of the /api/health/full REST API call that is used to generate that tree
view.
You can obtain this via the dashboard's built-in API browser (click Help
icon -> API -> Health -> GET /health/full -> Try it out ->
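If the built-in browser is awkward, the same call can be made with curl,
roughly like this (a sketch; host, port, and credentials are placeholders):

$ curl -sk -X POST https://mgr-host:8443/api/auth \
      -H 'Content-Type: application/json' \
      -d '{"username": "admin", "password": "secret"}'
$ curl -sk https://mgr-host:8443/api/health/full \
      -H 'Authorization: Bearer <token-from-the-first-call>'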
As you have noted, 'ceph osd reweight 0' is the same as a 'ceph osd out', but
it is not the same as removing the OSD from the crush map (or setting crush
weight to 0). This explains your observation of the double rebalance when you
mark an OSD out (or reweight an OSD to 0), and then remove it
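To make the distinction concrete (osd.12 is just an example id):

$ ceph osd out 12                     # equivalent to 'ceph osd reweight 12 0'
$ ceph osd reweight 12 0              # override weight; the OSD stays in the CRUSH map
$ ceph osd crush reweight osd.12 0    # changes the CRUSH weight itself

Only the last command changes the CRUSH map, which is why marking an OSD out
first and removing it from CRUSH later moves data twice.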
"reweight 0" and "out" are the exact same thing
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Tue, Jun 2, 2020 at 9:30 AM Wido den Hollander wrote:
>
>
> On
Thanks for your reply.
Our cluster has been running in production for two years with no problems, so
we haven't upgraded.
I checked memory on the host: there is very little free memory left. Could the
thread-creation failure be related to this?
In addition to the KVM virtual machine, there are 22 OSDs on
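A few checks that might narrow this down (a sketch; osd.0 is a placeholder):

$ free -h                                       # overall memory pressure on the host
$ cat /proc/sys/kernel/pid_max                  # thread creation can also fail at the pid/thread limit
$ ceph daemon osd.0 config get osd_memory_target   # 22 OSDs x this target must fit alongside the VMs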
On 6/2/20 5:44 AM, Brent Kennedy wrote:
> We are rebuilding servers and before luminous our process was:
>
>
>
> 1. Reweight the OSD to 0
>
> 2. Wait for rebalance to complete
>
> 3. Out the osd
>
> 4. Crush remove osd
>
> 5. Auth del osd
>
> 6. Ceph