Hi Mathias,
Can you provide MGR logs and, if you deployed with cephadm, cephadm
deployment logs?
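Something along these lines should do the trick, assuming a cephadm deployment (the daemon name below is only a placeholder):
ceph log last cephadm                 # recent cephadm log entries
cephadm logs --name mgr.<host>.<id>   # journal of the active MGR daemon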
Regards
On Thu, Jan 20, 2022 at 12:11 PM Kuhring, Mathias <
mathias.kuhr...@bih-charite.de> wrote:
> Dear all,
>
> recently, our dashboard is not able to connect to our RGW anymore:
>
>
Dear Ceph folks,
I am testing the use of an EC pool as the data pool together with a 3-way replicated SSD
pool as the CephFS metadata pool, on Nautilus 14.2.22, and I got the following
message:
[root@horeb34 ceph-bin]# ceph fs new cephfs cephfs-meta ceph-data
Error EINVAL: pool 'ceph-data' (id '24') is
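This EINVAL is presumably the usual complaint that an erasure-coded pool is being used as the default data pool, which CephFS discourages. A rough sketch of two possible ways forward, assuming the pool names from the command above (the replicated pool name is only a placeholder):
ceph osd pool create cephfs-data-rep 32
ceph fs new cephfs cephfs-meta cephfs-data-rep    # replicated default data pool, EC pool attached later
ceph fs new cephfs cephfs-meta ceph-data --force  # or force the EC pool as default, generally discouraged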
Hi,
this is a disk space warning. If the MONs get below 30% free disk
space you'll get a warning, since the MON store can grow if
recovery runs for a longer period of time. Use 'df -h' and you'll probably
see /var/lib/containers/ with less than 30% free space. You can either
decrease
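If "decrease" here refers to the warning threshold (an assumption on my part; the default mon_data_avail_warn is 30%), the check and the tweak might look roughly like this:
df -h /var/lib/ceph /var/lib/containers
ceph config set mon mon_data_avail_warn 15   # hypothetical lower threshold; freeing disk space is the better fix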
Hi Samuel,
You have to use pool affinity.
For example, with 3 pools in ceph pacific:
- pool_fs_meta -> pool replicated, SSD only
- pool_fs_data -> pool replicated, hdd only
- pool_fs_data_ec -> pool EC, hdd
#ceph fs new cephfsvol pool_fs_meta pool_fs_data
#ceph osd pool set pool_fs_data_ec
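The truncated command above most likely continues with allow_ec_overwrites; a sketch of the full sequence under that assumption, including attaching the EC pool and pointing a directory at it (the mount point and directory are placeholders):
ceph osd pool set pool_fs_data_ec allow_ec_overwrites true
ceph fs add_data_pool cephfsvol pool_fs_data_ec
setfattr -n ceph.dir.layout.pool -v pool_fs_data_ec /mnt/cephfsvol/ecdata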
Hi Tony,
What exact version of Ceph are you using?
rgw-api-host and rgw-api-port are deprecated in most recent versions.
Can you provide MGR logs? If you deployed with cephadm, can you provide
the cephadm logs?
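For reference, the dashboard/RGW wiring differs by release; roughly as follows (host and port are placeholders, and the older commands may already be gone in your version):
ceph dashboard set-rgw-api-host <host>   # older releases: explicit RGW endpoint
ceph dashboard set-rgw-api-port <port>
ceph dashboard set-rgw-credentials       # newer releases discover RGW from the service map and only need credentials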
On Thu, Jan 20, 2022 at 1:34 AM Tony Liu wrote:
> Hi,
>
> I have 3 rgw services
On Fri, 21 Jan 2022 at 09:26, Janne Johansson wrote:
>
> Add space to /var/ceph the same way one adds disk space to any other
> kind of server.
/var/lib/ceph of course, my bad.
> On Fri, 21 Jan 2022 at 08:26, Michel Niyoyita wrote:
> >
> > Dear Team ,
> >
> > I have a warning on my cluster
Can you also provide the ceph report?
You can get it by running:
ceph report
# example for only rgw map: ceph report | jq '.servicemap.services.rgw'
Regards
On Fri, Jan 21, 2022 at 11:39 AM Alfonso Martinez Hidalgo <
almar...@redhat.com> wrote:
> Hi Mathias,
>
> Can you provide MGR logs and,
Add space to /var/ceph the same way one adds disk space to any other
kind of server.
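Purely as an illustration, assuming /var/lib/ceph sits on an LVM logical volume (the VG/LV names are placeholders):
lvextend -r -L +20G /dev/vg0/var_lib_ceph   # -r grows the filesystem along with the LV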
On Fri, 21 Jan 2022 at 08:26, Michel Niyoyita wrote:
>
> Dear Team ,
>
> I have a warning on my cluster, which I deployed using Ansible on Ubuntu
> 20.04 with the Pacific Ceph version, which says:
>
>
Thank you very much, team. Now it's all working fine.
On Fri, Jan 21, 2022 at 10:28 AM Eugen Block wrote:
> Hi,
>
> this is a disk space warning. If the MONs get below 30% free disk
> space you'll get a warning, since the MON store can grow if
> recovery runs for a longer period of time. Use 'df
Thank you for sharing the resolution! I'll add that info to the tracker.
And yes, I fully agree that Dashboard should gracefully handle this issue.
Kind Regards,
Ernesto
On Thu, Jan 20, 2022 at 4:59 PM E Taka <0eta...@gmail.com> wrote:
> Hello Ernesto,
>
> I found the reason. One of the users
Hi Yaarit,
Thanks for confirming.
Telemetry is enabled on our clusters, so we are contributing data on ~1270
disks.
Are you able to use data from Backblaze?
Deciding when an OSD is starting to fail is a dark art; we are still
hoping that the Disk Failure Prediction module will take the
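For anyone wanting to experiment with the prediction module, this is the rough shape of it on recent releases (the device id is a placeholder, and command names may vary slightly by version):
ceph mgr module enable diskprediction_local
ceph config set global device_failure_prediction_mode local
ceph device ls
ceph device predict-life-expectancy <devid>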
Hi Igor,
I want to give you a short update, since I have now tried for quite some time to
reproduce the problem as you suggested. I've tried to simulate every imaginable
load that the cluster might have handled before the three OSDs crashed.
I rebooted the servers many times while the cluster was under
Hi Christoph,
I do not have any answer for you, but I find the question very interesting.
I wonder if it is possible to let the HDDs sleep, or if the OSD daemons prevent
the spindle motors from stopping. Or can it even create some problems for the OSD
daemon if the HDD spins down?
However, it
>
> I wonder if it is possible to let the HDDs sleep, or if the OSD daemons
> prevent the spindle motors from stopping. Or can it even create some problems
> for the OSD daemon if the HDD spins down?
> However, it should be easy to check on a cluster without any load and
> optimally on a cluster that
> enough spin-ups of the spindle motor to be concerned about that.
> I have backup storage servers (no Ceph) that have been running for many years
> now. The HDDs in these servers are spinning only for one or two hours per
> day, compared to HDDs in production servers that read and write 24/7,
I would not recommend this on Ceph. There was a project where somebody
tried to make RADOS amenable to spinning down drives, but I don't
think it ever amounted to anything.
The issue is just that the OSDs need to do disk writes whenever they
get new OSDMaps; there's a lot of random stuff that
> On 21.01.2022, at 14:36, Marc wrote:
>
>>
>> I wonder if it is possible to let the HDDs sleep, or if the OSD daemons
>> prevent the spindle motors from stopping. Or can it even create some problems
>> for the OSD daemon if the HDD spins down?
>> However, it should be easy to check on a cluster
On 21.01.22 14:23, Sebastian Mazza wrote:
Or can it even create some problems for the OSD daemon if the HDD spins down?
The OSD daemon would crash, I would assume.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 /
Hello all,
Recently I discovered a problem where, after taking an OSD out of the cluster
and purging it on the OSD node, I found that a systemd unit was left over on the
system until it was rebooted. This poses a problem in particular for things
like Prometheus that track these units and think
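One way such a leftover unit can usually be cleared without a reboot (OSD id 12 is purely a placeholder, and the exact unit name depends on how the OSD was deployed):
systemctl disable --now ceph-osd@12.service
systemctl reset-failed ceph-osd@12.service
systemctl daemon-reload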
Hey Sebastian,
thanks a lot for your help and the update.
On 1/21/2022 4:58 PM, Sebastian Mazza wrote:
Hi Igor,
I want to give you a short update, since I have now tried for quite some time to
reproduce the problem as you suggested. I've tried to simulate every imaginable
load that the cluster
> When having software raid solutions, I was also thinking about spinning them
> down and researching how to do this. I can't exactly remember, but a simple
> hdparm/sdparm command was not sufficient. Now I am bit curious if you solved
> this problem with mdadm / software raid?
>
On the first
There's got to be some obvious way I haven't found for this common Ceph
use case, one that comes up at least once every couple of weeks. I hope
someone on this list knows and can give a link. The scenario goes like
this: on a server with one drive providing boot capability and the rest OSDs:
1. First,
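In case it helps, this is the usual rough shape of bringing OSDs back after reinstalling the OS on the boot drive, assuming LVM-based OSDs created with ceph-volume (restore /etc/ceph/ceph.conf and the relevant keyrings first):
ceph-volume lvm list          # confirm the OSD volumes are still visible
ceph-volume lvm activate --all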
> The OSD daemon would crash, I would assume.
Since I don't understand why the OSDs should crash just because a disk goes
into standby, I just tried it now.
The result is very unspectacular and fits perfectly with Gregory's great
explanation.
The drive goes into standby for around 2 or 3 seconds
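For anyone wanting to repeat the experiment, this is roughly how a drive can be forced into standby and checked (the device name is a placeholder):
hdparm -y /dev/sdX   # put the drive into standby immediately
hdparm -C /dev/sdX   # report the drive's current power state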
Hey Igor,
thank you for your response and your suggestions.
>> I've tried to simulate every imaginable load that the cluster might have
>> handled before the three OSDs crashed.
>> I rebooted the servers many times while the cluster was under load. If more
>> than a single node was rebooted at the