Amudhan,
Have you looked at the logs, and did you try enabling debug logging to see
why the OSDs are marked down? There should be a reason, right? Focus on
the MON first, then take one node/OSD with debug enabled to see what is
happening.
https://docs.ceph.com/en/latest/cephadm/operations/.
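For example, a rough sketch (osd.0 and mon.a are placeholders, and the
systemd unit name depends on how your daemons were deployed):

    # raise debug levels at runtime
    ceph tell osd.0 config set debug_osd 20
    ceph tell mon.a config set debug_mon 10
    # watch the cluster log, then the daemon's own log
    ceph -w
    journalctl -u ceph-osd@0 -f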
Thanks,
Suresh
> dhils...@performair.com
> www.PerformAir.com
>
> -----Original Message-----
> From: Suresh Rama [mailto:sstk...@gmail.com]
> Sent: Thursday, July 8, 2021 7:25 PM
> To: ceph-users
> Subject: [ceph-users] Issue with Nautilus upgrade from Luminous
>
> Dear All,
>
Dear All,
We have 13 Ceph clusters, and we started upgrading them one by one from
Luminous to Nautilus. After the upgrade we started fixing the warning
alerts and hit an issue: running "ceph config set mon
mon_crush_min_required_version firefly" yielded no results. We updated the
mon config and restarted the daemon…
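In case it helps, this is roughly how we checked what actually took effect
(a sketch against a Nautilus cluster):

    # what the mons think the option is set to
    ceph config get mon mon_crush_min_required_version
    # the CRUSH tunables actually in effect on the cluster
    ceph osd crush show-tunables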
Just query the PG to see what it is reporting and take action
accordingly.
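Roughly, for an inconsistent PG (replace <pgid> with the PG id from "ceph
health detail"; repair only once you understand the cause):

    ceph health detail                    # lists the inconsistent PGs
    ceph pg <pgid> query                  # which OSDs are reporting errors
    rados list-inconsistent-obj <pgid>    # inspect the inconsistent objects
    ceph pg repair <pgid>                 # trigger the repair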
On Thu, Jan 28, 2021, 7:13 PM Void Star Nill
wrote:
> Hello all,
>
> One of our clusters running nautilus release 14.2.15 is reporting health
> error. It reports that there are inconsistent PGs. However, when I inspect…
Dear All,
Hope you all had a great Christmas and some much-needed time off with
family!
Have any of you used "device management and failure prediction" in
Nautilus? If so, what is your feedback? Do you use the LOCAL or CLOUD
prediction model?
https://ceph.io/update/new-in-nautilus-device-management
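For reference, the commands we have been looking at are roughly these
(<devid> is a placeholder; "local" is the mode we are weighing against
"cloud"):

    ceph device ls                              # devices and their daemons
    ceph device get-health-metrics <devid>      # SMART data for one device
    ceph config set global device_failure_prediction_mode local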
We had the same issue, and it has been stable since upgrading from 14.2.11
to 14.2.15. Also, the size of the DB is not the same for the mon that
failed to join, since the information it had to sync is huge. The compact
on reboot does the job, but it takes a long time to catch up. You can
force the join by quorum e…
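The compaction knobs involved, roughly (mon.a is a placeholder for the
lagging mon):

    # compact the mon store on demand
    ceph tell mon.a compact
    # or have every mon compact its store at startup
    ceph config set mon mon_compact_on_start true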
Thanks, Stefan. I will review your feedback; Matt suggested the same.
On Wed, Dec 16, 2020, 4:38 AM Stefan Kooman wrote:
> On 12/16/20 10:21 AM, Matthew Vernon wrote:
> > Hi,
> >
> > On 15/12/2020 20:44, Suresh Rama wrote:
> >
> > TL;DR: use a real NTP client,
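For anyone hitting this later, a quick way to spot the skew (chrony being
just one NTP client choice):

    ceph time-sync-status    # per-mon clock skew as the cluster sees it
    chronyc tracking         # local NTP client status on each node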
Dear All,
We have a 38-node HP Apollo cluster with 24 x 3.7 TB spinning disks and 2
NVMe devices for journals. This is one of our 13 clusters, which was
upgraded from Luminous to Nautilus (14.2.11). When one of our OpenStack
customers uses Elasticsearch (they offer Logging as a Service) for their
end users, rep…
Hi All,
We encountered an issue while upgrading our Ceph cluster from Luminous
12.2.12 to Nautilus 14.2.11. We used
https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous
and ceph-ansible to upgrade the cluster. We use HDDs for data and NVMe for
WAL and DB.
Clust…
This should give you the answer to your question
https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
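If it is the public/cluster split you are after, a minimal ceph.conf
sketch (the subnets are placeholders):

    [global]
        public network  = 192.168.1.0/24    # client-facing traffic
        cluster network = 192.168.2.0/24    # OSD replication and heartbeats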
Regards,
Suresh
On Sun, Jun 7, 2020, 5:10 AM wrote:
> I am new to Ceph so I hope this is not a question of me not reading the
> documentation well enough.
>
> I have set up…
As I said, a ping with a 9000-byte payload won't get a response; it should
be 8972. Glad it is working, but you should know what happened so you can
avoid this issue later.
On Sun, May 24, 2020, 3:04 AM Amudhan P wrote:
> No, ping with MTU size 9000 didn't work.
>
> On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar
Hi,
It should be: ping -M do -s 8972 <IP address>.
You can't ping with a 9000-byte payload. If you can't ping with 8972, then
the MTU config is wrong somewhere in the path.
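The arithmetic: 9000 (interface MTU) - 20 (IP header) - 8 (ICMP header) =
8972 bytes of payload. So, with a placeholder address:

    # -M do sets "don't fragment"; -s is the ICMP payload size
    ping -M do -s 8972 192.168.1.10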
Regards,
Suresh
On Sat, May 23, 2020, 1:35 PM apely agamakou wrote:
> Hi,
>
> Please check your MTU limit at the switch level, cand…
Since you deleted the stack, this is meaningless. You can simply delete
the volumes from the pool using rbd.
The proper way is to delete the volumes before destroying the stack.
If the stack is alive and you have issues deleting, you can take two
approaches:
1) Run openstack volume delete with…
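For the rbd route, roughly (pool and volume names are placeholders; make
sure nothing still has the image mapped first):

    rbd ls volumes                     # list images left in the pool
    rbd status volumes/volume-1234     # check for watchers before removing
    rbd rm volumes/volume-1234         # remove the orphaned image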