Hi all,
I am planning for a new Ceph cluster that will provide RBD storage for
OpenStack and Kubernetes. Additionally, there may be a need for a small
amount of RGW storage.
Which option would be better:
1. Defining separate pools for OpenStack images/ephemeral
VMs/volumes/backups (as seen
Hi all,
I am building a new cluster that will be using Luminous and FileStore, with
NVMe journals and 10k SAS drives.
Is there a way to estimate proper values for:
filestore_queue_max_bytes
filestore_queue_max_ops
journal_max_write_bytes
journal_max_write_entries
or is it simply a matter of trial and error?
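For what it's worth, one non-authoritative way to approach it: read the
values a running OSD actually uses, then inject candidates at runtime and
re-run your benchmark before committing anything to ceph.conf. A rough
sketch, where osd.0 is a placeholder for one of your own OSD ids and the
injected number is illustrative only:

# show the current values via the OSD's admin socket (run on the OSD host)
$ sudo ceph daemon osd.0 config show | grep -E 'filestore_queue_max|journal_max_write'
# inject a candidate value on all OSDs at runtime, then benchmark again;
# 209715200 bytes (200 MiB) is an example value, not a recommendation
$ sudo ceph tell osd.* injectargs '--filestore_queue_max_bytes 209715200'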
Hi all,
I have a few questions on using BlueStore.
With FileStore it is not uncommon to see 1 NVMe device being used as the
journal device for up to 12 OSDs.
Can an adequately sized NVMe device also be used as the wal/db device for
up to 12 OSDs?
Are there any rules of thumb for sizing wal/db?
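Not a definitive answer, but the rule of thumb in the Ceph docs is a
block.db of at least 4% of the data device, and the WAL is stored inside
the DB volume unless you split it out explicitly, so a separate
--block.wal is usually unnecessary. A sketch of carving one NVMe into
per-OSD DB volumes with LVM and ceph-volume; the volume group name, LV
size, and device paths below are all placeholders:

# one DB logical volume per OSD on the shared NVMe (60G is a placeholder size)
$ sudo lvcreate -L 60G -n db-0 nvme-vg
# create the OSD with its data on the SAS drive and its DB on the NVMe LV
$ sudo ceph-volume lvm create --bluestore --data /dev/sdb --block.db nvme-vg/db-0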
> > "errors": [
> >     "read_error"
> > ],
> > "errors": [],
> > "errors": [],
> > $ sudo ceph pg repair 9.27
> > instructing pg 9.27 on osd.78 to repair
> > $ sudo ceph pg repair 9.260
Hi all,
I was wondering if anyone out there has increased the value of
bluestore_prefer_deferred_size
to effectively defer all writes.
If so, did you experience any unforeseen side effects?
thx
Frank
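In case anyone wants to reproduce the experiment, raising the threshold at
runtime might look like the line below. The 131072-byte value is purely
illustrative, and on spinning disks it is the _hdd variant that applies;
since deferral only covers writes at or below the threshold, "defer
everything" effectively means setting it above your largest expected write.

$ sudo ceph tell osd.* injectargs '--bluestore_prefer_deferred_size_hdd 131072'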
Hi all,
I have been receiving alerts for:
Possible data damage: 1 pg inconsistent
almost daily for a few weeks now. When I check:
rados list-inconsistent-obj $PG --format=json-pretty
I will always see a read_error. When I run a deep scrub on the PG I will
see:
head candidate had a read error
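Since a persistent read_error usually points at failing media rather than
a logical inconsistency, a rough triage sequence is to find the shard
reporting the error, check its backing disk, and only then repair. The PG
id below echoes the one above, and /dev/sdX is a placeholder for the
offending OSD's data device:

# identify the inconsistent PG and the shard reporting read_error
$ sudo ceph health detail
$ sudo rados list-inconsistent-obj 9.27 --format=json-pretty
# check the backing disk of that OSD for media errors
$ sudo smartctl -a /dev/sdX
# rewrite the bad replica from a healthy copy
$ sudo ceph pg repair 9.27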