hi all,
thanks for all the feedback. it's clear we should stick to the 1GB/TB
rule for the memory.
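(as a worked example under that rule: a box with 12 x 8TB OSDs would
want roughly 12 * 8 = 96GB of RAM, plus some headroom for the OS.)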
any (changes to) recommendation for the CPU? in particular, is it still
the rather vague "1 HT core per OSD" (or was it "1 1GHz HT core per
OSD")? it would be nice if we had some numbers, like the required CPU
speed per OSD.
I haven't set up the mgr service yet, but your daemon folder is missing
its keyring file (/var/lib/ceph/mgr/ceph-0/keyring). It's exactly what the
error message says. When you set it up, did you run a command like ceph auth
add? If you did, then you just need to ask the cluster what the auth key is
and write it into that file.
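something like this should do it (assuming the mgr id is "0", matching
the path above):

    ceph auth get mgr.ceph-0 -o /var/lib/ceph/mgr/ceph-0/keyring

and if the key was never created in the first place, the documented caps
for a mgr daemon are:

    ceph auth get-or-create mgr.ceph-0 mon 'allow profile mgr' \
        osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-0/keyring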
The reason for an entire core per OSD is that it's trying to avoid context
switching your CPU to death. If you have a quad-core processor with HT, I
wouldn't recommend more than 8 OSDs on the box. I probably would go with 7
myself to keep one core available for system operations. This
recommendation is a rule of thumb rather than a hard limit, though.
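to see what you actually have to work with on a box (physical cores vs
HT threads), something like:

    lscpu | grep -E '^CPU\(s\)|Core\(s\) per socket|Thread\(s\) per core'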
hi david,
sure, i understand that. but how bad does it get when you oversubscribe
OSDs? if context switching itself is dominant, then using HT should
allow running double the number of OSDs on the same CPU (one OSD per HT
core); but if the issue is actual cpu cycles, HT won't help that much
either (one OSD per HT core would still be fighting over the same
physical core's execution units).
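one way to check which of the two you're hitting (a sketch, assuming
sysstat is installed and the daemons show up as ceph-osd):

    pidstat -w -p $(pgrep -d, ceph-osd) 5

high cswch/s and nvcswch/s numbers would point at context switching;
OSDs pegging their cores at 100% would point at raw cpu cycles.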
I was under the impression the memory requirements for Bluestore would be
around 2-3GB per OSD regardless of capacity.
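if memory is the concern, the bluestore cache is the main knob; a
ceph.conf sketch (the values here are illustrative, defaults vary by
release):

    [osd]
    bluestore_cache_size_hdd = 1073741824   # 1GB for HDD-backed OSDs
    bluestore_cache_size_ssd = 3221225472   # 3GB for SSD-backed OSDs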
CPU-wise, I would lean towards working out how much total GHz you require
and then getting whatever CPU you need to get there, but with a preference
for GHz over cores. Yes, there will be workloads where more cores win, but
higher clock speed generally means lower latency per operation.
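as a worked example of that sizing (using the rough 1GHz-per-OSD figure
from earlier in the thread): 12 OSDs need ~12GHz total. an 8-core CPU at
2.0GHz gives 16GHz and covers it; a 16-core CPU at 1.7GHz gives more
aggregate GHz, but each individual operation runs slower.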
Did any of that testing involve a degraded cluster, backfilling,
peering, etc? A healthy cluster running normally sometimes uses 4x less
memory and CPU than a cluster that is consistently peering and degraded.
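a simple way to get those numbers yourself (on a test cluster!): mark an
OSD out to force backfill, then watch resource usage while it recovers:

    ceph osd out 0          # "0" is just an example OSD id
    ceph -s                 # watch recovery/backfill progress
    top -p $(pgrep -d, ceph-osd)

then "ceph osd in 0" to bring it back when you're done.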