[ceph-users] Re: Question if WAL/block.db partition will benefit us

2021-11-09 Thread prosergey07
Not sure how much it would help performance to have OSDs backed by SSD DB and WAL devices. Even if you go this route with one SSD per 10 HDDs, you might want to set the failure domain to host in the CRUSH rules in case an SSD goes out of service. But in practice the SSD will not help too much to
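
For reference, a minimal sketch of the CRUSH change suggested above; the rule and pool names are made-up examples:

  # create a replicated rule whose failure domain is the host
  ceph osd crush rule create-replicated rep-host default host
  # point an existing pool at it (pool name is hypothetical)
  ceph osd pool set rbd-pool crush_rule rep-host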

[ceph-users] Re: allocate_bluefs_freespace failed to allocate

2021-11-09 Thread prosergey07
From my understanding you do not have a separate DB/WAL device per OSD. Since RocksDB uses BlueFS for OMAP storage, we can check the usage and free size for BlueFS on the problematic OSDs: ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-OSD_ID --command bluefs-bdev-sizes. Probably it can shed some
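
A sketch of that check, assuming OSD id 0, the default data path and a package-based deployment (cephadm-managed OSDs use a different unit name); the OSD has to be stopped first since ceph-bluestore-tool works offline:

  systemctl stop ceph-osd@0
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 --command bluefs-bdev-sizes
  systemctl start ceph-osd@0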

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-09 Thread Adam King
Hello Carsten, as an FYI, there is actually a bootstrap flag specifically for clusters intended to be one node, called "--single-host-defaults" (which would make the bootstrap command "cephadm bootstrap --mon-ip <ip> --single-host-defaults"), if you want some better settings for single-node clusters. As for
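
A sketch of that bootstrap call; the monitor IP is only an example value:

  cephadm bootstrap --mon-ip 192.168.72.10 --single-host-defaults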

[ceph-users] Re: Expose rgw using consul or service discovery

2021-11-09 Thread Sebastian Wagner
On 09.11.21 at 15:58, Pierre GINDRAUD wrote: > I'm coming back to the radosgw deployment; I've tested the cephadm ingress > service and these are my findings: > The HAProxy service is deployed but not "managed" by cephadm, here are the > sources >
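
For reference, a sketch of a cephadm ingress specification (a YAML file, here called rgw-ingress.yaml) that puts haproxy/keepalived in front of an RGW service; the service id, hosts, ports and virtual IP are placeholder assumptions:

  service_type: ingress
  service_id: rgw.default
  placement:
    hosts:
      - host1
      - host2
  spec:
    backend_service: rgw.default   # existing RGW service to load-balance
    virtual_ip: 192.168.72.100/24  # VIP handled by keepalived
    frontend_port: 8080            # port haproxy listens on
    monitor_port: 1967             # haproxy status endpoint

Applied with "ceph orch apply -i rgw-ingress.yaml".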

[ceph-users] Re: osd daemons still reading disks at full speed while there is no pool activity

2021-11-09 Thread Nikola Ciprich
Hello Josh, just wanted to confirm that setting bluefs_buffered_io immediately helped hotfix the problem. I've also updated to 14.2.22 and we'll discuss adding more NVMe modules to move OSD databases off the spinners to prevent further occurrences. Thanks a lot for your time! With best regards
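
For reference, a sketch of how that option can be flipped through the config database (the per-OSD id is an example; depending on the release an OSD restart may still be needed):

  # cluster-wide for all OSDs
  ceph config set osd bluefs_buffered_io true
  # or just for one OSD while testing
  ceph config set osd.12 bluefs_buffered_io true
  # verify what is active
  ceph config get osd bluefs_buffered_io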

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-09 Thread Eugen Block
I'm not sure if I'm misremembering, but somewhere in the back of my mind I believe there was once a report on this list that you should have more than one MON to be able to deploy, but I'm really not sure about this. It's worth a try, I think. Quoting "Scharfenberg, Carsten": I'm

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-09 Thread Scharfenberg, Carsten
I'm trying to set up a single-node Ceph "cluster". For my test that would be sufficient. Of course it could be that ceph orch isn't meant to be used on a single node only, so maybe it's worth trying out three nodes… root@terraformdemo:~# ceph orch ls NAME PORTS

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Ansgar Jazdzewski
> IIRC you get a HEALTH_WARN message that there are OSDs with old metadata > format. You can suppress that warning, but I guess operators feel like > they want to deal with the situation and get it fixed rather than ignore it. Yes, and if suppressing the warning gets forgotten you run into other
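
For reference, a hedged sketch of how such a warning is usually silenced; the health code and option name below are assumptions, check "ceph health detail" for the real ones:

  # see which health code is actually raised
  ceph health detail
  # mute it for a limited time instead of forever (code is an example)
  ceph health mute BLUESTORE_NO_PER_PG_OMAP 4w
  # or turn the warning off via the corresponding option (assumed name)
  ceph config set osd bluestore_warn_on_no_per_pg_omap false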

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Ansgar Jazdzewski
On Tue, 9 Nov 2021 at 11:08, Dan van der Ster wrote: > > Hi Ansgar, > > To clarify the messaging or docs, could you say where you learned that > you should enable the bluestore_fsck_quick_fix_on_mount setting? Is > that documented somewhere, or did you have it enabled from previously? >

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-09 Thread Scharfenberg, Carsten
Thanks Yury, ceph-volume always listed these devices as available. But ceph orch does not. They do not seem to exist for ceph orch. Also adding them manually does not help (I’ve tried that before and now again): root@terraformdemo:~# ceph orch daemon add osd 192.168.72.10:/dev/sdc
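
A sketch of how the orchestrator's view of those disks can be inspected first; cephadm normally addresses nodes by the hostname listed in "ceph orch host ls" rather than by IP (the hostname below is taken from the shell prompt in this thread):

  # show devices plus the reason cephadm marks them unavailable
  ceph orch device ls --wide --refresh
  # then add the OSD using that hostname
  ceph orch daemon add osd terraformdemo:/dev/sdc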

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-09 Thread Eugen Block
Hi, I've had the best zapping experience with ceph-volume. Have you tried this: # ceph orch device zap host1 /dev/sdc --force This worked quite well for me, or you can also try: # ceph-volume lvm zap --destroy /dev/sdc Using virtual disks should be just fine. Quoting "Scharfenberg,
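
Spelled out against the host and device from this thread (hostname taken from the shell prompt; re-checking afterwards shows whether the disk became available):

  ceph orch device zap terraformdemo /dev/sdc --force
  # or, locally on the node:
  ceph-volume lvm zap --destroy /dev/sdc
  # verify
  ceph orch device ls --refresh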

[ceph-users] Re: fresh pacific installation does not detect available disks

2021-11-09 Thread Scharfenberg, Carsten
Thanks for your support, guys. Unfortunately I do not know the tool sgdisk. It’s also not available from the standard Debian package repository. So I’ve tried out Yury’s approach to use dd… without success: root@terraformdemo:~# dd if=/dev/zero of=/dev/sdc bs=1M count=1024 1024+0 records in
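
For reference, a sketch of a more thorough wipe when overwriting the first gigabyte is not enough; sgdisk is normally provided by Debian's gdisk package, and this assumes /dev/sdc holds nothing worth keeping:

  apt install gdisk            # provides sgdisk on Debian
  sgdisk --zap-all /dev/sdc    # wipe GPT and MBR structures
  wipefs --all /dev/sdc        # clear leftover filesystem/LVM signatures
  partprobe /dev/sdc           # have the kernel re-read the now-empty table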

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Dan van der Ster
On Tue, Nov 9, 2021 at 11:29 AM Stefan Kooman wrote: > > On 11/9/21 11:07, Dan van der Ster wrote: > > Hi Ansgar, > > > > To clarify the messaging or docs, could you say where you learned that > > you should enable the bluestore_fsck_quick_fix_on_mount setting? Is > > that documented somewhere,

[ceph-users] Re: ceph-ansible and crush location

2021-11-09 Thread Simon Oosthoek
On 03/11/2021 16:03, Simon Oosthoek wrote: > On 03/11/2021 15:48, Stefan Kooman wrote: >> On 11/3/21 15:35, Simon Oosthoek wrote: >>> Dear list, >>> >>> I've recently found it is possible to supply ceph-ansible with >>> information about a crush location, however I fail to understand how >>> this
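
For reference, a hedged sketch of how the resulting placement can be checked or set by hand outside of ceph-ansible (bucket names are examples; ceph-ansible itself exposes this through host_vars such as an osd_crush_location entry, depending on the version in use):

  # inspect the topology the playbook produced
  ceph osd tree
  # move a host bucket under a specific rack/root manually
  ceph osd crush move cephnode01 rack=rack1 root=default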

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Igor Fedotov
Hi Ansgar, I've submitted the following PR to recover broken OMAPs: https://github.com/ceph/ceph/pull/43820 One needs a custom build to use it for now, though, and this might be a bit risky to apply since it hasn't passed all the QA procedures... For now I'm aware of one success and

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Dan van der Ster
Hi Ansgar, To clarify the messaging or docs, could you say where you learned that you should enable the bluestore_fsck_quick_fix_on_mount setting? Is that documented somewhere, or did you have it enabled from previously? The default is false so the corruption only occurs when users actively
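
A sketch of how one can check whether that option is enabled anywhere before upgrading; the grep over /etc/ceph is just one place a per-host override might live:

  # value currently active for OSDs in the config database
  ceph config get osd bluestore_fsck_quick_fix_on_mount
  # keep it disabled centrally while the repair work is pending
  ceph config set osd bluestore_fsck_quick_fix_on_mount false
  # look for local overrides on the OSD hosts
  grep -r bluestore_fsck_quick_fix_on_mount /etc/ceph/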

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Marc
> > I did an upgrade from 14.2.23 to 16.2.6 not knowing that the current > > minor version had this nasty bug! [1] [2] > > I'm sorry you hit this bug. We tried to warn users through > documentation. Apparently this is not enough and other ways of informing > operators about such (rare) incidents

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-11-09 Thread Peter Lieven
On 08.11.21 at 23:59, Igor Fedotov wrote: > Hi folks, > > having an LTS release cycle could be a great topic for the upcoming "Ceph User + > Dev Monthly meeting". > > The first one is scheduled on November 18, 2021, 14:00-15:00 UTC > > https://pad.ceph.com/p/ceph-user-dev-monthly-minutes > > Any

[ceph-users] Re: cephfs snap-schedule stopped working?

2021-11-09 Thread Venky Shankar
On Mon, Nov 8, 2021 at 3:01 PM Joost Nieuwenhuijse wrote: > > On 08/11/2021 05:52, Venky Shankar wrote: > > On Sun, Nov 7, 2021 at 3:28 PM Joost Nieuwenhuijse > > wrote: > >> > >> Hi, > >> > >> I've enabled a simple CephFS snapshot schedule > >> > >> # ceph fs snap-schedule list / > >> / 8h 6h
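
For reference, a sketch of the snap-schedule workflow around the command quoted above; the path, interval and retention values are only examples:

  # enable the manager module if it is not active yet
  ceph mgr module enable snap_schedule
  # snapshot / every 8 hours and keep 6 hourly snapshots
  ceph fs snap-schedule add / 8h
  ceph fs snap-schedule retention add / h 6
  # inspect what is configured and whether it is active
  ceph fs snap-schedule list /
  ceph fs snap-schedule status /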