On 08/14/2017 02:42 PM, Nick Fisk wrote:
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Ronny Aasen
Sent: 14 August 2017 18:55
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] luminous/bluetsore osd memory requirements
On 10.08.2017 17:30, Gregory Farnum wrote:
This has been discussed a lot in the performance meetings so I've
added Mark to discuss. My naive recollection is that the per-terabyte
recommendation will be more realistic than it was in the past (an
effective increase in memory needs), but also that it will be under much
better control than
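That "better control" refers to BlueStore's explicit cache limits introduced in Luminous. As a hedged sketch (the option names and the 1 GiB HDD / 3 GiB SSD defaults are my understanding of the Luminous defaults, not something stated in this thread; verify on your own cluster):

```ini
# ceph.conf sketch: per-OSD BlueStore cache caps (assumed Luminous defaults).
# Verify with: ceph daemon osd.<id> config show | grep bluestore_cache
[osd]
# 0 = defer to the hdd/ssd-specific values below
bluestore_cache_size = 0
# cache per HDD-backed OSD (1 GiB)
bluestore_cache_size_hdd = 1073741824
# cache per SSD-backed OSD (3 GiB)
bluestore_cache_size_ssd = 3221225472
```

Unlike FileStore, which leaned on the kernel page cache, these caps bound the OSD process itself, which is what makes per-OSD memory sizing more predictable.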
Hi there,
can someone share her/his experiences regarding this question? Maybe
differentiated according to the different available algorithms?
On Sat, 12 Aug 2017 14:40:05 +0200, Stijn De Weirdt wrote to Gregory Farnum, Mark Nelson:
Subject: Re: [ceph-users] luminous/bluetsore osd memory requirements
Did you do any of that testing to involve a degraded cluster, backfilling,
peering, etc? A healthy cluster running normally sometimes uses 4x less memory
and CPU resources than a cluster that is consistently peering and degraded,
with Ceph potentials in the mix.
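One way to act on that warning is to budget RAM for recovery rather than for steady state. A minimal sketch, where the 4x multiplier is the rough observation quoted above (not an official figure), and the 1 GB/TB baseline comes from the hardware-recommendations page:

```python
def osd_ram_budget_gb(osds: int, tb_per_osd: float,
                      gb_per_tb: float = 1.0,
                      recovery_factor: float = 4.0) -> float:
    """RAM to provision so OSDs survive peering/backfill, not just steady state.

    recovery_factor is the rough "4x" multiplier mentioned in the thread,
    an observation rather than an official recommendation.
    """
    steady_state = osds * tb_per_osd * gb_per_tb
    return steady_state * recovery_factor

# 12 OSDs of 4 TB each: 48 GB steady state, 192 GB worst-case budget
print(osd_ram_budget_gb(12, 4.0))  # -> 192.0
```

The point is simply that a box sized to exactly the healthy-cluster footprint can OOM precisely when the cluster most needs its OSDs to stay up.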
>
> Hope that helps.
>
> Nick
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > Of Stijn De Weirdt
> > Sent: 12 August 2017 14:41
> > To: David Turner <drakonst...@gmail.com>; ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] luminous/bluetsore osd memory requirements
hi david,
sure i understand that. but how bad does it get when you oversubscribe
OSDs? if context switching itself is dominant, then using HT should
allow running double the number of OSDs on the same CPU (one OSD per HT
core); but if the issue is actual cpu cycles, HT won't help that much either (1
The reason for an entire core per osd is that it's trying to avoid context
switching your CPU to death. If you have a quad-core processor with HT, I
wouldn't recommend more than 8 osds on the box. I probably would go with 7
myself to keep one core available for system operations. This
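David's rule of thumb can be written down directly. In this sketch, counting each HT sibling as a schedulable core is exactly the assumption being debated above, and reserving one core for the system is David's suggestion, not an official requirement:

```python
def max_osds(physical_cores: int, hyperthreading: bool = True,
             reserve_for_system: int = 1) -> int:
    """Upper bound on OSDs per box under the 'one core per OSD' rule.

    Treats each HT sibling as a full core (the contested assumption in
    this thread) and keeps one core free for the OS, per David's advice.
    """
    logical = physical_cores * 2 if hyperthreading else physical_cores
    return max(logical - reserve_for_system, 0)

print(max_osds(4))         # quad-core with HT -> 7
print(max_osds(4, False))  # same chip, HT ignored -> 3
```

The gap between 7 and 3 is the crux of Stijn's question: whether HT siblings count as real capacity depends on whether OSDs are burning cycles or mostly context switching.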
hi all,
thanks for all the feedback. it's clear we should stick to the 1GB/TB
for the memory.
any (changes to) recommendation for the CPU? in particular, is it still
the rather vague "1 HT core per OSD" (or was it "1 1GHz HT core per
OSD"?) it would be nice if we had some numbers like required
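For the memory side, the 1 GB/TB rule the thread settles on is easy to turn into a per-node figure. A minimal sketch; the 4 GB OS baseline is my own assumption for illustration, not something from the thread:

```python
def node_ram_gb(osds: int, tb_per_osd: float,
                gb_per_tb: float = 1.0, os_base_gb: float = 4.0) -> float:
    """Per-node RAM from the 1 GB/TB rule plus an assumed OS baseline."""
    return osds * tb_per_osd * gb_per_tb + os_base_gb

print(node_ram_gb(10, 8.0))  # 10 x 8 TB OSDs -> 84.0 GB
```

Note this is the steady-state figure; as discussed above, a degraded, constantly-peering cluster can need a multiple of it.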
stores each file as a single object, while the rbd is configured
to allocate larger objects.

Marcus Haarmann

From: "Stijn De Weirdt" <stijn.dewei...@ugent.be>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Thursday, 10 August 2017 10:34:48
Subject: [ceph-users] luminous/bluetsore osd memory requirements
hi all,
we are planning to purchase new OSD hardware, and we are wondering if for
upcoming luminous with bluestore OSDs, anything wrt the hardware
recommendations from
http://docs.ceph.com/docs/master/start/hardware-recommendations/
will be different, esp the memory/cpu part. i understand from