Re: [ceph-users] dense storage nodes

2016-05-23 Thread Christian Balzer
Hello, On Fri, 20 May 2016 10:57:10 -0700 Anthony D'Atri wrote: > [ too much to quote ] > Dense nodes often work better for object-focused workloads than block-focused; the impact of delayed operations is simply speed vs. a tenant VM crashing. Especially if they don't have SSD

Re: [ceph-users] dense storage nodes

2016-05-20 Thread Christian Balzer
On Thu, 19 May 2016 10:26:37 -0400 Benjeman Meekhof wrote: > Hi Christian. Thanks for your insights. To answer your question the NVMe devices appear to be some variety of Samsung: Model: Dell Express Flash NVMe 400GB, Manufacturer: SAMSUNG, Product ID: a820. Alright, these

Re: [ceph-users] dense storage nodes

2016-05-20 Thread Anthony D'Atri
[ too much to quote ] Dense nodes often work better for object-focused workloads than block-focused; the impact of delayed operations is simply speed vs. a tenant VM crashing. Re RAID5 volumes to decrease the number of OSDs: This sort of approach is getting increasing attention in that it
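
To make the RAID5 idea concrete (an illustration only, not a configuration from the thread): presenting a 72-drive chassis as striped parity sets instead of raw disks shrinks the OSD count roughly like this:

    72 drives as 12 x 6-disk RAID5 sets  ->  12 OSDs per node instead of 72
    usable space per set: (6 - 1) x 6 TB = 30 TB, i.e. about 17% of raw capacity goes to parity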

Re: [ceph-users] dense storage nodes

2016-05-19 Thread Benjeman Meekhof
Hi Christian, Thanks for your insights. To answer your question the NVMe devices appear to be some variety of Samsung: Model: Dell Express Flash NVMe 400GB Manufacturer: SAMSUNG Product ID: a820 regards, Ben On Wed, May 18, 2016 at 10:01 PM, Christian Balzer wrote: > > Hello,
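
For anyone trying to confirm from the OS what NVMe hardware a vendor actually shipped, something along these lines works (a sketch; it assumes the nvme-cli and smartmontools packages are installed, and /dev/nvme0 is only an example device name):

    # list NVMe controllers with model and firmware (nvme-cli)
    nvme list
    # wear and health counters for one device (recent smartmontools builds)
    smartctl -a /dev/nvme0
    # PCI-level view when the marketing name is unhelpful
    lspci | grep -i 'non-volatile'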

Re: [ceph-users] dense storage nodes

2016-05-19 Thread Mark Nelson
FWIW, we ran tests back in the dumpling era that more or less showed the same thing. Increasing the merge/split thresholds does help. We suspect it's primarily due to the PG splitting being spread out over a longer period of time so the effect lessens. We're looking at some options to
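
For reference, the thresholds being discussed live under [osd] in ceph.conf; the values below are only an illustration of the kind of increase people report for object-heavy clusters, not a recommendation:

    [osd]
    # defaults at the time: filestore merge threshold = 10, filestore split multiple = 2
    filestore merge threshold = 40
    filestore split multiple = 8
    # the new values take effect for OSD processes started after the change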

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Christian Balzer
Hello Kris, On Wed, 18 May 2016 19:31:49 -0700 Kris Jurka wrote: > On 5/18/2016 7:15 PM, Christian Balzer wrote: >> We have hit the following issues: - Filestore merge splits occur at ~40 MObjects with default settings. This is a really, really bad couple of days

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Kris Jurka
On 5/18/2016 7:15 PM, Christian Balzer wrote: We have hit the following issues: - Filestore merge splits occur at ~40 MObjects with default settings. This is a really, really bad couple of days while things settle. Could you elaborate on that? As in which settings affect this and what
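
For anyone wanting to see how close an OSD is to that point, a rough way to count objects per PG directory on a filestore OSD looks like this (a sketch; the path assumes a default filestore layout and osd.0 is just an example):

    # largest PG directories on one OSD, by object count
    for pg in /var/lib/ceph/osd/ceph-0/current/*_head; do
        echo "$(find "$pg" -type f | wc -l)  $pg"
    done | sort -rn | head
    # with the stock settings a single leaf subdirectory splits once it exceeds
    # filestore_split_multiple * abs(filestore_merge_threshold) * 16 = 2 * 10 * 16 = 320 files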

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Christian Balzer
Hello, On Wed, 18 May 2016 08:14:51 -0500 Brian Felton wrote: > At my current gig, we are running five (soon to be six) pure object storage clusters in production with the following specs: - 9 nodes - 32 cores, 256 GB RAM per node - 72 x 6 TB SAS spinners per node (648 total per

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Christian Balzer
Hello, On Wed, 18 May 2016 12:32:25 -0400 Benjeman Meekhof wrote: > Hi Lionel. These are all very good points we should consider, thanks for the analysis. Just a couple of clarifications: - The NVMe devices in this system are actually slotted in hot-plug front bays so a failed device can be swapped

Re: [ceph-users] dense storage nodes

2016-05-18 Thread George Mihaiescu
Hi Blair, We use 36-OSD nodes with journals on HDD in a cluster that is roughly 90% object storage. The servers have 128 GB RAM and 40 cores (HT) for the storage nodes with 4 TB SAS drives, and 256 GB RAM and 48 cores for the storage nodes with 6 TB SAS drives. We use 2x10 Gb bonded for the client network,
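
As a quick back-of-the-envelope check, those figures land close to the commonly quoted ~1 GB of RAM per TB of raw OSD capacity (the arithmetic below is a rough check, not figures from the post):

    36 x 4 TB = 144 TB raw per node  ->  128 GB RAM / 144 TB  ~ 0.9 GB per TB
    36 x 6 TB = 216 TB raw per node  ->  256 GB RAM / 216 TB  ~ 1.2 GB per TB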

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Benjeman Meekhof
Hi Lionel. These are all very good points we should consider, thanks for the analysis. Just a couple of clarifications: - The NVMe devices in this system are actually slotted in hot-plug front bays so a failed device can be swapped online. However, I do see your point about this otherwise being a non-optimal

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Brian Felton
At my current gig, we are running five (soon to be six) pure object storage clusters in production with the following specs: - 9 nodes - 32 cores, 256 GB RAM per node - 72 x 6 TB SAS spinners per node (648 total per cluster) - 7+2 erasure coded pool for RGW buckets - ZFS as the filesystem on
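
For readers unfamiliar with the notation, a 7+2 erasure-coded RGW data pool on a cluster like that would be created roughly as follows (a sketch using Jewel-era syntax; the profile name and PG count are placeholders):

    # k=7 data chunks + m=2 coding chunks; with failure domain "host" this places
    # exactly one chunk per node on a 9-node cluster
    ceph osd erasure-code-profile set ec-7-2 k=7 m=2 ruleset-failure-domain=host
    ceph osd pool create default.rgw.buckets.data 2048 2048 erasure ec-7-2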

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Christian Balzer
On Wed, 18 May 2016 08:56:51 + Van Leeuwen, Robert wrote: >> We've hit issues (twice now) that seem to be related to kernel dentry slab cache exhaustion (we have not yet figured out exactly how to confirm this) - symptoms were a major slowdown in performance and slow requests all over

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Van Leeuwen, Robert
> We've hit issues (twice now) that seem to be related to kernel dentry slab cache exhaustion (we have not yet figured out exactly how to confirm this) - symptoms were a major slowdown in performance and slow requests all over the place on writes; watching OSD iostat would show a single drive
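
For anyone chasing the same symptom, the kernel exposes the dentry cache through slabinfo, so a starting point for monitoring looks something like this (a sketch; the vfs_cache_pressure knob is a general kernel tunable, not a fix confirmed in this thread):

    # current dentry / inode slab usage (needs root)
    grep -E 'dentry|inode_cache' /proc/slabinfo
    # live view of the largest slab consumers, sorted by cache size
    slabtop -o -s c | head -20
    # make the kernel reclaim dentries/inodes more aggressively (default is 100)
    sysctl -w vm.vfs_cache_pressure=200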

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Christian Balzer
Hello, On Wed, 18 May 2016 15:54:59 +1000 Blair Bethwaite wrote: > Hi all. What are the densest node configs out there, and what are your experiences with them and tuning required to make them work? If we can gather enough info here then I'll volunteer to propose some upstream docs

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Wido den Hollander
> On 18 May 2016 at 7:54, Blair Bethwaite wrote: > Hi all. What are the densest node configs out there, and what are your experiences with them and tuning required to make them work? If we can gather enough info here then I'll volunteer to propose some

[ceph-users] dense storage nodes

2016-05-17 Thread Blair Bethwaite
Hi all. What are the densest node configs out there, and what are your experiences with them and tuning required to make them work? If we can gather enough info here then I'll volunteer to propose some upstream docs covering this. At Monash we currently have some 32-OSD nodes (running RHEL7),