Not sure how much it would help performance to back the OSDs with SSD DB
and WAL devices. Even if you go this route with one SSD per 10 HDDs, you might
want to set the failure domain to host in the CRUSH rules in case an SSD goes
out of service. But in practice the SSD will not help too much to
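A sketch of what a host failure domain looks like in practice (the rule and
pool names below are illustrative, not from this thread):

# ceph osd crush rule create-replicated replicated-host default host
# ceph osd pool set mypool crush_rule replicated-host

With such a rule every replica lands on a different host, so losing one shared
SSD (and with it all OSDs of one host) still leaves the other replicas intact.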
From my understanding you do not have a separate DB/WAL device per OSD. Since
RocksDB uses BlueFS for OMAP storage, we can check the usage and free size for
BlueFS on the problematic OSDs:

ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSD_ID --command bluefs-bdev-sizes

Probably it can shed some
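One assumption worth stating (it is not in the message above): most
ceph-bluestore-tool commands need exclusive access to the OSD data directory,
so stop the daemon first and restart it afterwards, e.g.:

# systemctl stop ceph-osd@OSD_ID
# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSD_ID --command bluefs-bdev-sizes
# systemctl start ceph-osd@OSD_ID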
Hello Carsten, as an FYI, there is actually a bootstrap flag specifically
for clusters intended to be single-node, called "--single-host-defaults"
(which would make the bootstrap command "cephadm bootstrap --mon-ip
--single-host-defaults"), if you want better settings for single-node
clusters. As for
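As a minimal sketch of the full command (the IP is purely illustrative, taken
from the "ceph orch daemon add" example later in this digest):

# cephadm bootstrap --mon-ip 192.168.72.10 --single-host-defaults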
On 09.11.21 at 15:58, Pierre GINDRAUD wrote:
> I am coming back about the radosgw deployment: I've tested the cephadm
> ingress service, and these are my findings:
>
> The haproxy service is deployed but not "managed" by cephadm; here are
> the sources
>
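For readers following along: a cephadm ingress spec for an RGW backend looks
roughly like the sketch below; the service id, host, addresses and ports are
illustrative values, not Pierre's actual configuration:

service_type: ingress
service_id: rgw.default
placement:
  hosts:
    - host1
spec:
  backend_service: rgw.default
  virtual_ip: 192.168.72.100/24
  frontend_port: 8080
  monitor_port: 1967

It would be applied with "ceph orch apply -i ingress.yaml".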
Hello Josh,
just wanted to confirm that setting bluefs_buffered_io immediately
helped hotfix the problem. I've also updated to 14.2.22 and we'll
discuss adding more NVMe modules to move the OSD databases off the spinners
to prevent further occurrences.
Thanks a lot for your time!
With best regards
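For reference, the hotfix mentioned can be applied and verified at runtime; a
sketch, assuming it is set cluster-wide for all OSDs:

# ceph config set osd bluefs_buffered_io true
# ceph config get osd bluefs_buffered_io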
I'm not sure if I'm misremembering, but somewhere in the back of my
mind I believe there was once a report in this list that you should
have more than one MON to be able to deploy, but I'm really not sure
about this. But it's worth a try, I think.
Quoting "Scharfenberg, Carsten":
I’m trying to set up a single-node Ceph “cluster”. For my test that would be
sufficient. Of course it could be that ceph orch isn’t meant to be used on a
single node only.
So maybe it’s worth trying out three nodes…
root@terraformdemo:~# ceph orch ls
NAME PORTS
> IIRC you get a HEALTH_WARN message that there are OSDs with old metadata
> format. You can suppress that warning, but I guess operators feel like
> they want to deal with the situation and get it fixed rather than ignore it.
Yes, and if suppressing the warning gets forgotten, you run into other
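If someone does decide to suppress it, a sketch, assuming the warning in
question is BLUESTORE_NO_PER_POOL_OMAP:

# ceph config set osd bluestore_warn_on_no_per_pool_omap false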
On Tue, 9 Nov 2021 at 11:08, Dan van der Ster wrote:
>
> Hi Ansgar,
>
> To clarify the messaging or docs, could you say where you learned that
> you should enable the bluestore_fsck_quick_fix_on_mount setting? Is
> that documented somewhere, or did you have it enabled from previously?
>
Thanks Yury,
ceph-volume always listed these devices as available, but ceph orch does not.
They do not seem to exist for ceph orch.
Also, adding them manually does not help (I’ve tried that before and now again):
root@terraformdemo:~# ceph orch daemon add osd 192.168.72.10:/dev/sdc
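One thing worth checking (a suggestion, not from the original exchange) is
what cephadm's own inventory reports, since it is refreshed asynchronously
from ceph-volume:

# ceph orch device ls --wide --refresh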
Hi,
I've had the best zapping experience with ceph-volume. Have you tried this:
# ceph orch device zap host1 /dev/sdc --force
This worked quite well for me, or you can also try:
# ceph-volume lvm zap --destroy /dev/sdc
Using virtual disks should be just fine.
Quoting "Scharfenberg,
Thanks for your support, guys.
Unfortunately I do not know the tool sgdisk. It’s also not available from the
standard Debian package repository.
So I’ve tried Yury’s approach of using dd… without success:
root@terraformdemo:~# dd if=/dev/zero of=/dev/sdc bs=1M count=1024
1024+0 records in
On Tue, Nov 9, 2021 at 11:29 AM Stefan Kooman wrote:
>
> On 11/9/21 11:07, Dan van der Ster wrote:
> > Hi Ansgar,
> >
> > To clarify the messaging or docs, could you say where you learned that
> > you should enable the bluestore_fsck_quick_fix_on_mount setting? Is
> > that documented somewhere,
On 03/11/2021 16:03, Simon Oosthoek wrote:
> On 03/11/2021 15:48, Stefan Kooman wrote:
>> On 11/3/21 15:35, Simon Oosthoek wrote:
>>> Dear list,
>>>
>>> I've recently found it is possible to supply ceph-ansible with
>>> information about a crush location; however, I fail to understand how
>>> this
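Independent of ceph-ansible, the underlying mechanism is a CRUSH location set
per OSD; a sketch in ceph.conf, with an illustrative hierarchy:

[osd]
crush location = root=default datacenter=dc1 rack=rack1 host=node1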
Hi Ansgar,
I've submitted the following PR to recover broken OMAPs:
https://github.com/ceph/ceph/pull/43820
One needs a custom build for now to use it, though. And this might be a
bit risky to apply, since it hasn't passed all the QA procedures...
For now I'm aware of one success and
Hi Ansgar,
To clarify the messaging or docs, could you say where you learned that
you should enable the bluestore_fsck_quick_fix_on_mount setting? Is
that documented somewhere, or did you have it enabled from previously?
The default is false so the corruption only occurs when users actively
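To check whether a given cluster has it enabled, and to pin it off, something
along these lines (it may also be set locally in ceph.conf):

# ceph config get osd bluestore_fsck_quick_fix_on_mount
# ceph config set osd bluestore_fsck_quick_fix_on_mount false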
> > I did an upgrade from 14.2.23 to 16.2.6 not knowing that the current
> > minor version had this nasty bug! [1] [2]
>
> I'm sorry you hit this bug. We tried to warn users through
> documentation. Apparently this is not enough and other ways of informing
> operators about such (rare) incidents
On 08.11.21 at 23:59, Igor Fedotov wrote:
> Hi folks,
>
> having a LTS release cycle could be a great topic for upcoming "Ceph User +
> Dev Monthly meeting".
>
> The first one is scheduled on November 18, 2021, 14:00-15:00 UTC
>
> https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
>
> Any
On Mon, Nov 8, 2021 at 3:01 PM Joost Nieuwenhuijse wrote:
>
> On 08/11/2021 05:52, Venky Shankar wrote:
> > On Sun, Nov 7, 2021 at 3:28 PM Joost Nieuwenhuijse
> > wrote:
> >>
> >> Hi,
> >>
> >> I've enabled a simple CephFS snapshot schedule
> >>
> >> # ceph fs snap-schedule list /
> >> / 8h 6h
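For context, such a schedule would have been created along these lines; a
sketch, assuming "8h 6h" in the listing is an 8-hour schedule with a 6-hourly
retention spec:

# ceph fs snap-schedule add / 8h
# ceph fs snap-schedule retention add / 6h
# ceph fs snap-schedule status /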