Hi Melanie,
On Mon, Sep 6, 2021 at 10:06 AM Desaive, Melanie wrote:
> When I execute "ceph mon_status --format json-pretty" from our
> ceph-management VM, the correct mon nodes are returned.
>
> But when I execute "ceph daemon osd.xx config show | grep mon_host" on the
> respective storage
Hi,
what is the output of
ceph mon stat
or
ceph mon dump
and
ceph quorum_status
If you see your expected nodes there, then all should be fine...
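For reference, a quick way to compare the cluster's view of the monitors with what an individual OSD has cached (osd.12 is a placeholder ID, not from the thread; the daemon command must run on the host where that OSD lives):

```shell
# Cluster-wide view of the current monmap and quorum:
ceph mon dump
ceph quorum_status --format json-pretty
# Runtime view cached by one OSD (placeholder ID osd.12; run on its host):
ceph daemon osd.12 config show | grep mon_host
```

If the daemon still lists the old monitors, it read mon_host from ceph.conf at start-up, so the ceph.conf on the OSD hosts is worth checking as well.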
Hth
Mehmet
On 6 September 2021 18:23:46 CEST, Josh Baergen wrote:
>Hi Melanie,
>
>On Mon, Sep 6, 2021 at 10:06 AM Desaive, Melanie wrote:
Hi all,
I am quite new to Ceph and could use some advice:
We are running a Ceph cluster with mon services on some of the storage nodes.
Last week it became necessary to change the mon nodes to different hosts. We
already deployed one new mon and deinstalled an old one. We would now like to
On 06.09.21 16:44, Simon Sutter wrote:
|node1|node2|node3|node4|node5|node6|node7|node8|
|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|
|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|
|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|
"ceph osd df tree" should show the data
Hello
> >
> > >> - The one 6TB disk, per node?
> > >
> > > You get bad distribution of data, why not move drives around between
> > these two clusters, so you have more of the same in each.
> > >
> >
> > I would assume that this behaves exactly the other way around. As long
> > as you have the same
Hi Eugen,
Thanks for the idea, but I didn't have anything mounted that I could unmount.
> On 6. Sep 2021, at 09:15, Eugen Block wrote:
>
> Hi,
>
> I just got the same message in my lab environment (octopus) which I had
> redeployed. The client's keyring had changed after redeployment and I
Hi Marc,
thanks for getting back to me.
Fuse debug output is: 7f84c1c9e700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
Any idea what that tells me?
Thanks,
Hendrik
> On 3. Sep 2021, at 19:38, Marc wrote:
>
> Maybe try and mount with
Hi Dan,
unfortunately, setting these parameters crashed the MDS cluster and we now have
severe performance issues. Particularly bad is mds_recall_max_decay_rate: even
just setting it to its default value immediately makes all MDS daemons
unresponsive, and they get failed by the MONs. I already set
>
> >> - The one 6TB disk, per node?
> >
> > You get bad distribution of data, why not move drives around between
> these two clusters, so you have more of the same in each.
> >
>
> I would assume that this behaves exactly the other way around. As long
> as you have the same number of block devices
On 06.09.21 11:54, Marc wrote:
- The one 6TB disk, per node?
You get bad distribution of data, why not move drives around between these two
clusters, so you have more of the same in each.
I would assume that this behaves exactly the other way around. As long
as you have the same number
Hello
Thanks for this first input. I already found that at least one of those 6TB disks
is a WD Blue WD60EZAZ, which according to WD uses SMR.
I will replace everything with SMR in it, but while replacing the hardware,
should I replace all disks with, for example, all 3TB disks?
And what
>
>
> At the moment, the nodes look like this:
> 8 Nodes
> Worst CPU: i7-3930K (up to i7-6850K)
> Worst amount of RAM: 24GB (up to 64GB)
> HDD Layout:
> 1x 1TB
> 4x 2TB
> 1x 6TB
> all SATA, some just 5400rpm
>
> I had to put the OS on the 6TB HDDs, because there are no more sata
> connections
Thanks, I'll check it out.
Christian Wuerdig wrote on Fri, Sep 3, 2021 at 6:16 AM:
> This probably provides a reasonable overview -
> https://ceph.io/en/news/blog/2020/public-telemetry-dashboards/,
> specifically the grafana dashboard is here:
> https://telemetry-public.ceph.com
> Keep in mind not all
Hi Frank,
That's unfortunate! Most of those options relax warnings and relax
when a client is considered to have too many caps.
The option mds_recall_max_caps might be CPU intensive -- the MDS would
be busy recalling caps if indeed you have clients which are hammering
the MDSs with metadata
Hi,
are any of those old disks SMR ones? Because they will absolutely
destroy any kind of performance (Ceph does not use write caches due to
power-loss concerns, so the drives do their whole magic for each
write request).
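A quick heuristic to spot SMR drives, as a sketch (assumes smartmontools is installed; host-managed SMR shows up in the kernel's `zoned` attribute, but drive-managed SMR like the WD Blue line usually reports "none" there, so the model string still has to be checked against the vendor's SMR list by hand):

```shell
# Print model and kernel zoned-status for each SATA disk.
for dev in /dev/sd?; do
    [ -b "$dev" ] || continue
    model=$(smartctl -i "$dev" 2>/dev/null | awk -F': *' '/Device Model/ {print $2}')
    zoned=$(cat "/sys/block/$(basename "$dev")/queue/zoned" 2>/dev/null)
    echo "$dev model=${model:-unknown} zoned=${zoned:-unknown}"
done
```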
Greetings
On 9/6/21 10:47 AM, Simon Sutter wrote:
Hello everyone!
I have built two clusters with old hardware that was lying around; the
possibility to upgrade is there.
The clusters' main use case is hot backup, meaning they are written to 24/7,
where 99% is writing and 1% is reading.
It should be based on hard disks.
At the moment, the
Thanks, Mathew, for the update.
The upgrade failed for some random weird reasons; checking further,
Ceph's status shows that "Ceph health is OK", and at times it gives certain
warnings, but I think that is OK.
But what if we see a version mismatch between the daemons, i.e. a few
services have
Hi,
I just got the same message in my lab environment (octopus) which I
had redeployed. The client's keyring had changed after redeployment
and I think I had a stale mount. After 'umount' and 'mount' with the
proper keyring it worked as expected.
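A minimal sketch of that remount for a kernel CephFS client; the client name, monitor address, and paths below are placeholders, not from the thread:

```shell
# Unmount the stale mount, then remount with the refreshed credentials.
umount /mnt/cephfs
mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
      -o name=backup,secretfile=/etc/ceph/backup.secret
```

For a ceph-fuse client the equivalent would be an unmount followed by `ceph-fuse -n client.backup /mnt/cephfs`, again with placeholder names.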
Quoting Hendrik Peyerl:
Hello All,