Hello,
On Fri, 20 May 2016 10:57:10 -0700 Anthony D'Atri wrote:
> [ too much to quote ]
>
> Dense nodes often work better for object-focused workloads than
> block-focused, the impact of delayed operations is simply speed vs. a
> tenant VM crashing.
>
Especially if they don't have SSD
On Thu, 19 May 2016 10:26:37 -0400 Benjeman Meekhof wrote:
> Hi Christian,
>
> Thanks for your insights. To answer your question the NVMe devices
> appear to be some variety of Samsung:
>
> Model: Dell Express Flash NVMe 400GB
> Manufacturer: SAMSUNG
> Product ID: a820
>
Alright, these
[ too much to quote ]
Dense nodes often work better for object-focused workloads than block-focused,
the impact of delayed operations is simply speed vs. a tenant VM crashing.
Re RAID5 volumes to decrease the number of OSDs: This sort of approach is
getting increasing attention in that it
Hi Christian,
Thanks for your insights. To answer your question the NVMe devices
appear to be some variety of Samsung:
Model: Dell Express Flash NVMe 400GB
Manufacturer: SAMSUNG
Product ID: a820
regards,
Ben
On Wed, May 18, 2016 at 10:01 PM, Christian Balzer wrote:
>
> Hello,
FWIW, we ran tests back in the dumpling era that more or less showed the
same thing. Increasing the merge/split thresholds does help. We
suspect it's primarily due to the PG splitting being spread out over a
longer period of time so the effect lessens. We're looking at some
options to
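The merge/split thresholds being discussed are presumably the filestore directory-splitting options; a minimal ceph.conf sketch, with illustrative values rather than recommendations (the defaults in that era were 10 and 2):

```ini
[osd]
# A PG subdirectory merges when it drops below this many objects.
filestore merge threshold = 40
# A subdirectory splits when it exceeds roughly
# filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects.
filestore split multiple = 8
```

Raising these defers the splitting work to a larger object count rather than eliminating it, which is consistent with the observation that spreading the splitting over a longer period lessens the effect.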
Hello Kris,
On Wed, 18 May 2016 19:31:49 -0700 Kris Jurka wrote:
>
>
> On 5/18/2016 7:15 PM, Christian Balzer wrote:
>
> >> We have hit the following issues:
> >>
> >> - Filestore merge splits occur at ~40 MObjects with default
> >> settings. This is a really, really bad couple of days
On 5/18/2016 7:15 PM, Christian Balzer wrote:
We have hit the following issues:
- Filestore merge splits occur at ~40 MObjects with default settings.
This is a really, really bad couple of days while things settle.
Could you elaborate on that?
As in which settings affect this and what
Hello,
On Wed, 18 May 2016 08:14:51 -0500 Brian Felton wrote:
> At my current gig, we are running five (soon to be six) pure object
> storage clusters in production with the following specs:
>
> - 9 nodes
> - 32 cores, 256 GB RAM per node
> - 72 x 6 TB SAS spinners per node (648 total per
Hello,
On Wed, 18 May 2016 12:32:25 -0400 Benjeman Meekhof wrote:
> Hi Lionel,
>
> These are all very good points we should consider, thanks for the
> analysis. Just a couple clarifications:
>
> - NVMe in this system are actually slotted in hot-plug front bays so a
> failure can be swapped
Hi Blair,
We use 36 OSDs nodes with journals on HDD running in a 90% object storage
cluster.
The servers have 128 GB RAM and 40 cores (HT) for the storage nodes with 4
TB SAS drives, and 256 GB and 48 cores for the storage nodes with 6 TB SAS
drives.
We use 2x10 Gb bonded for the client network,
Hi Lionel,
These are all very good points we should consider, thanks for the
analysis. Just a couple clarifications:
- NVMe in this system are actually slotted in hot-plug front bays so a
failure can be swapped online. However I do see your point about this
otherwise being a non-optimal
At my current gig, we are running five (soon to be six) pure object storage
clusters in production with the following specs:
- 9 nodes
- 32 cores, 256 GB RAM per node
- 72 x 6 TB SAS spinners per node (648 total per cluster)
- 7,2 (k=7, m=2) erasure coded pool for RGW buckets
- ZFS as the filesystem on
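As a quick sanity check, the raw and usable capacity implied by those specs follows from the drive count and the 7,2 (k=7, m=2) erasure-code profile; a back-of-envelope sketch, ignoring ZFS and filestore overhead:

```shell
# 9 nodes x 72 drives = 648 spinners, 6 TB each
raw_tb=$((648 * 6))
# A k=7, m=2 pool stores 7 data chunks for every 9 chunks written.
usable_tb=$((raw_tb * 7 / 9))
echo "raw: ${raw_tb} TB, usable before FS overhead: ${usable_tb} TB"
```

This works out to 3888 TB raw and 3024 TB usable per cluster, i.e. roughly 78% storage efficiency before filesystem overhead.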
On Wed, 18 May 2016 08:56:51 +0000 Van Leeuwen, Robert wrote:
> >We've hit issues (twice now) that seem (have not
> >figured out exactly how to confirm this yet) to be related to kernel
> >dentry slab cache exhaustion - symptoms were a major slow down in
> >performance and slow requests all over
>We've hit issues (twice now) that seem (have not
>figured out exactly how to confirm this yet) to be related to kernel
>dentry slab cache exhaustion - symptoms were a major slow down in
>performance and slow requests all over the place on writes, watching
>OSD iostat would show a single drive
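If the dentry slab is the suspect, it can be watched with `slabtop -o` or `/proc/sys/fs/dentry-state`; one commonly tried mitigation (an assumption here, not a confirmed fix for the symptom described) is to make the kernel reclaim dentry/inode cache more aggressively:

```ini
# /etc/sysctl.d/90-vfs-cache.conf (path illustrative)
# Default is 100; values above 100 make the kernel prefer reclaiming
# dentry/inode cache over page cache.
vm.vfs_cache_pressure = 200
```

Whether this helps depends on whether the slowdown really is reclaim stalling on an exhausted dentry cache, so it is worth confirming with slab statistics before and after.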
Hello,
On Wed, 18 May 2016 15:54:59 +1000 Blair Bethwaite wrote:
> Hi all,
>
> What are the densest node configs out there, and what are your
> experiences with them and tuning required to make them work? If we can
> gather enough info here then I'll volunteer to propose some upstream
> docs
> On 18 May 2016 at 7:54, Blair Bethwaite wrote:
>
>
> Hi all,
>
> What are the densest node configs out there, and what are your
> experiences with them and tuning required to make them work? If we can
> gather enough info here then I'll volunteer to propose some
Hi all,
What are the densest node configs out there, and what are your
experiences with them and tuning required to make them work? If we can
gather enough info here then I'll volunteer to propose some upstream
docs covering this.
At Monash we currently have some 32-OSD nodes (running RHEL7),
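For nodes in this density range, a couple of kernel limits are commonly raised, since each filestore OSD can spawn hundreds of threads and hold many file descriptors; a sketch with illustrative values (an assumption for this thread, not something Monash reported using):

```ini
# /etc/sysctl.d/90-ceph-dense.conf (path illustrative)
# Many OSD daemons per host means many threads and open files.
kernel.pid_max = 4194303
fs.file-max = 1000000
```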