I’m trying to create bluestore OSDs with separate --block.wal --block.db
devices on a write-intensive SSD
I’ve split the SSD (/dev/sda) into two partitions, sda1 and sda2, for db
and wal
It seems to me the OSD UUID is getting changed and I’m only able to start
the last OSD
Do I need to
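For comparison, a minimal sketch of how I'd expect the separate db/wal layout
to be created with ceph-volume (the data device /dev/sdb here is only a
placeholder; sda1 and sda2 are from your mail):

  # data on its own device, db and wal on the two SSD partitions
  ceph-volume lvm create --bluestore --data /dev/sdb \
      --block.db /dev/sda1 --block.wal /dev/sda2

  # show which devices each OSD actually ended up with
  ceph-volume lvm list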
I'm seconding what Greg is saying. There is no reason to set nobackfill and
norecover just for restarting OSDs. That will only cause the problems
you're seeing without giving you any benefit. There are reasons to use
norecover and nobackfill, but unless you're manually editing the crush map,
having
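If the goal is just a clean restart, a rough sketch of the usual flag to set
instead (osd.13 is only an example id):

  # keep OSDs from being marked out while they are briefly down
  ceph osd set noout
  systemctl restart ceph-osd@13
  # re-enable normal marking-out once the OSD is back up and in
  ceph osd unset noout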
OK, now I understand. Thanks for all these helpful answers!
On Sat, Apr 7, 2018, 15:26 David Turner wrote:
> I'm seconding what Greg is saying. There is no reason to set nobackfill
> and norecover just for restarting OSDs. That will only cause the problems
> you're seeing
Deep scrub doesn't help.
After some steps (I'm not sure of the exact list)
ceph does remap this PG to another OSD, but the PG doesn't move
# ceph pg map 11.206
osdmap e176314 pg 11.206 (11.206) -> up [955,198,801] acting [787,697]
It hangs in this state forever; 'ceph pg 11.206 query' hangs as well
On Sat,
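For anyone debugging the same symptom, a sketch of the commands I'd start
with to narrow down a stuck PG like this (OSD ids taken from the map output
above):

  # list PGs stuck unclean and the overall health detail
  ceph pg dump_stuck unclean
  ceph health detail
  # confirm the up/acting OSDs are actually up and reachable
  ceph osd tree | grep -E 'osd\.(955|198|801|787|697) '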
On Thu, Apr 5, 2018 at 6:33 AM, Ansgar Jazdzewski
wrote:
> hi folks,
>
> I just figured out that my OSDs did not start because the filesystem
> is not mounted.
Would love to see some ceph-volume logs (both ceph-volume.log and
ceph-volume-systemd.log) because we do
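Those logs are plain files on the OSD host; a sketch of where to look and how
to re-trigger the activation, assuming a default install:

  # ceph-volume keeps its own logs outside the main cluster log
  less /var/log/ceph/ceph-volume.log
  less /var/log/ceph/ceph-volume-systemd.log
  # re-discover, mount and start all bluestore OSDs on this host
  ceph-volume lvm activate --all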
How do you resolve these issues?
Apr 7 22:39:21 c03 ceph-osd: 2018-04-07 22:39:21.928484 7f0826524700 -1
osd.13 pg_epoch: 19008 pg[17.13( v 19008'6019891
(19008'6018375,19008'6019891] local-lis/les=18980/18981 n=3825
ec=3636/3636 lis/c 18980/18980 les/c/f 18981/18982/0 18980/18980/18903)
On Sat, Apr 7, 2018 at 11:59 AM, Gary Verhulp wrote:
>
> I’m trying to create bluestore OSDs with separate --block.wal --block.db
> devices on a write-intensive SSD
>
> I’ve split the SSD (/dev/sda) into two partitions, sda1 and sda2, for db and
> wal
>
On Fri, Apr 6, 2018 at 10:27 PM, Jeffrey Zhang
wrote:
> Yes, I am using ceph-volume.
>
> And I found where the keyring comes from.
>
> bluestore will save all the information at the start of the disk
> (BDEV_LABEL_BLOCK_SIZE=4096)
> this area is used for saving
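That on-disk label can be read back directly; a small sketch, assuming
ceph-bluestore-tool is installed and the block device path below is just an
example:

  # dump the bluestore label stored in the first 4 KiB of the device
  ceph-bluestore-tool show-label --dev /dev/ceph-vg/osd-block-lv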
The general recommendation is to target around 100 PG/OSD. Have you tried
the https://ceph.com/pgcalc/ tool?
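Roughly, the rule of thumb the calculator implements (numbers below are made
up for illustration):

  total PGs ~ (number of OSDs x 100) / replica size, rounded to a power of two
  e.g. 40 OSDs with size 3: (40 x 100) / 3 ~ 1333 -> choose 1024 or 2048,
  then split that budget across pools by their expected share of the data.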
On Wed, 4 Apr 2018 at 21:38, Osama Hasebou wrote:
> Hi Everyone,
>
> I would like to know what kind of setup had the Ceph community been using
> for their
I had several kernel-mapped RBDs as well as ceph-fuse-mounted CephFS
clients when I upgraded from Jewel to Luminous. I rolled out the client
upgrades over a few weeks after the upgrade. I had tested that the client
use cases I had would be fine running Jewel connecting to a Luminous
cluster so
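For reference, a sketch of what I'd check on the cluster side during such a
rollout (assuming the mons are already on Luminous):

  # show which releases/features the currently connected clients report
  ceph features
  # once every client is upgraded, the floor can optionally be raised
  ceph osd set-require-min-compat-client luminous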