Re: [ceph-users] How to increase the size of requests written to a ceph image

2018-03-20 Thread Russell Glaue
: 0, "num_snap_trimming": 0, "op_queue_age_hist": { "histogram": [], "upper_bound": 1 }, "fs_perf_stat": { "commit_latency_ms": 0, "apply_latency_ms": 49
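The stats in that snippet come from Ceph's OSD performance counters. As a quick sketch (the sample JSON below mirrors the snippet but its exact shape is an assumption, not a verbatim dump), the two filestore latency figures can be pulled out without extra tooling:

```shell
# Write a sample stats document shaped like the snippet above; the values
# mirror the quoted figures but are otherwise invented for illustration.
cat > /tmp/osd-perf.json <<'EOF'
{ "num_snap_trimming": 0,
  "op_queue_age_hist": { "histogram": [], "upper_bound": 1 },
  "fs_perf_stat": { "commit_latency_ms": 0, "apply_latency_ms": 49 } }
EOF

# Extract the journal-commit and filestore-apply latencies.
grep -o '"commit_latency_ms": [0-9]*' /tmp/osd-perf.json
grep -o '"apply_latency_ms": [0-9]*' /tmp/osd-perf.json
```

An apply latency well above the commit latency, as in the quoted figures, usually points at the data disk rather than the journal.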

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-12-08 Thread Russell Glaue
y the second. > > I still think you have a significant latency/iops issue: a 36 all-SSD > cluster should give much higher than 2.5K iops. > > Maged > > > On 2017-12-07 23:57, Russell Glaue wrote: > > I want to provide an update to my interesting situation. > (New sto
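A rough sanity check of why 2.5K iops looks low for a cluster that size. Every number here is an assumption for illustration: 36 OSDs, 3x replication, ~10,000 sustained sync-write iops per SSD, and two disk writes (journal + data) per client write under filestore:

```shell
# Ballpark expected client write iops for an all-SSD filestore pool.
# Assumed figures, not measurements from this cluster.
osds=36
per_ssd_iops=10000
replication=3
writes_per_client_io=2   # journal write + data write per replica

expected=$(( osds * per_ssd_iops / (replication * writes_per_client_io) ))
echo "ballpark cluster write iops: $expected"
```

Even with conservative per-drive numbers the estimate lands more than an order of magnitude above 2.5K, which is the gap being discussed.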

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-12-07 Thread Russell Glaue
27, 2017 at 4:21 PM, Russell Glaue <rgl...@cait.org> wrote: > Yes, several have recommended the fio test now. > I cannot perform a fio test at this time. Because the post referred to > directs us to write the fio test data directly to the disk device, e.g. > /dev/sdj. I'd h
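The objection above is that the usual journal test writes to the raw device (e.g. /dev/sdj) and destroys its contents. A non-destructive variant is to point fio at a scratch file on the drive's filesystem instead; a sketch of such a job file, with assumed paths and sizes:

```ini
; Hypothetical non-destructive variant of the journal write test:
; 4k O_DIRECT sync writes at queue depth 1, against a scratch file
; (/mnt/ssd is a placeholder for a mount point on the drive under test).
[journal-test]
filename=/mnt/ssd/fio-scratch.bin
size=1G
bs=4k
rw=write
direct=1
sync=1
iodepth=1
numjobs=1
runtime=60
time_based=1
```

Filesystem overhead makes this slightly pessimistic versus the raw device, but a drive that is hopeless as a journal will still show it here.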

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Russell Glaue
. > > If the "old" device doesn't test well in fio/dd testing, then the drives > are (as expected) not a great choice for journals, and you might want to > look at hardware/backplane/RAID configuration differences that are somehow > allowing them to perform adequately. > &g
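For the dd half of that fio/dd testing, a hedged, non-destructive stand-in is to aim the synchronous-write pattern at a scratch file rather than the raw device (the /tmp path is a placeholder for a mount point on the drive under test):

```shell
# 4k synchronous writes, the access pattern a Ceph journal produces.
# oflag=dsync forces each block to be flushed before the next is issued,
# so the reported rate reflects the drive's sync-write behaviour.
dd if=/dev/zero of=/tmp/journal-test.bin bs=4k count=1000 oflag=dsync 2>&1
```

Consumer SSDs without power-loss protection often collapse from tens of thousands of buffered iops to a few hundred under this flag, which is exactly what disqualifies them as journals.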

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Russell Glaue
around anymore, but last I remember, this was a hardware issue and could not be resolved with firmware. Paging Kyle Bader... On Fri, Oct 27, 2017 at 9:24 AM, Russell Glaue <rgl...@cait.org> wrote: > We have older crucial M500 disks operating without such problems. So, I > have to believe it i

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Russell Glaue
performance at times (based on unquantified anecdotal personal > experience with other consumer model SSDs). I wouldn't touch these > with a long stick for anything but small toy-test clusters. > > On Fri, Oct 27, 2017 at 3:44 AM, Russell Glaue <rgl...@cait.org> wrote: > > &

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-26 Thread Russell Glaue
t, and the other disks are lower than 80%. So, for whatever reason, shutting down the OSDs and starting them back up allowed many (not all) of the OSDs' performance to improve on the problem host. Maged > > On 2017-10-25 23:44, Russell Glaue wrote: > > Thanks to all. > I took the OS

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-25 Thread Russell Glaue
latency(s): 0.00107473 Cleaning up (deleting benchmark objects) Clean up completed and total clean up time: 16.269393 On Fri, Oct 20, 2017 at 1:35 PM, Russell Glaue <rgl...@cait.org> wrote: > On the machine in question, the 2nd newest, we are using the LSI MegaRAID > SAS-3 3008 [
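The average latency a rados bench run reports ties directly to throughput: at the default 16 concurrent ops, sustained iops is roughly concurrency divided by average latency. Reusing the quoted latency figure for illustration (whether it was the overall average in the original run is an assumption):

```shell
# iops ~= concurrency / avg_latency at a fixed queue depth.
avg_latency=0.00107473   # seconds, figure quoted above
concurrency=16           # rados bench default (-t 16)

iops=$(awk -v c="$concurrency" -v l="$avg_latency" 'BEGIN { printf "%d", c / l }')
echo "~$iops ops/s at queue depth $concurrency"
```

This back-of-envelope check is useful for spotting when a bench's reported bandwidth and latency numbers don't agree with each other.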

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-23 Thread Russell Glaue
<ch...@gol.com> wrote: > > Hello, > > On Fri, 20 Oct 2017 13:35:55 -0500 Russell Glaue wrote: > > > On the machine in question, the 2nd newest, we are using the LSI MegaRAID > > SAS-3 3008 [Fury], which allows us a "Non-RAID" option, and has no > batte

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-20 Thread Russell Glaue
> raid controllers. > > On Thu, Oct 19, 2017, 8:15 PM Christian Balzer <ch...@gol.com> wrote: > >> >> Hello, >> >> On Thu, 19 Oct 2017 17:14:17 -0500 Russell Glaue wrote: >> >> > That is a good idea. >> > However, a previous rebalanc

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-19 Thread Russell Glaue
, just stop all > the OSDs on the second questionable server, mark the OSDs on that server as > out, let the cluster rebalance and when all PGs are active+clean just > replay the test. > > All IOs should then go only to the other 3 servers. > > JC > > On Oct 19, 2017, at 13
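The isolation test JC describes above can be sketched as a short loop. The OSD ids (12-17) are invented for illustration, and DRY_RUN=echo makes this a no-op preview; drop it to actually run the commands on a systemd-managed cluster:

```shell
# Take one suspect host's OSDs out of service so all IO lands on the
# remaining hosts, then rerun the benchmark once PGs are active+clean.
DRY_RUN=echo   # remove to execute for real

for osd in 12 13 14 15 16 17; do
  $DRY_RUN systemctl stop "ceph-osd@$osd"
  $DRY_RUN ceph osd out "$osd"
done
$DRY_RUN ceph -s   # wait for active+clean before replaying the test

# Afterwards, reverse the change:
for osd in 12 13 14 15 16 17; do
  $DRY_RUN ceph osd in "$osd"
  $DRY_RUN systemctl start "ceph-osd@$osd"
done
```

If throughput improves markedly with that host out, the problem is localised to its disks, controller, or backplane rather than the cluster as a whole.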

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-19 Thread Russell Glaue
lways switch hardware between nodes and see if the > problem follows the component. > > On Thu, Oct 19, 2017 at 4:49 PM Russell Glaue <rgl...@cait.org> wrote: > >> No, I have not ruled out the disk controller and backplane making the >> disks slower. >> Is there a w

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-19 Thread Russell Glaue
backplane in the server running > slower? > > On Thu, Oct 19, 2017 at 4:42 PM Russell Glaue <rgl...@cait.org> wrote: > >> I ran the test on the Ceph pool, and ran atop on all 4 storage servers, >> as suggested. >> >> Out of the 4 servers: >> 3 of th

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-19 Thread Russell Glaue
em. > > On 2017-10-18 21:35, Russell Glaue wrote: > > I cannot run the write test reviewed at the ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device blog. The tests write directly to the > raw disk device. > Reading an infile (created with urandom) on one SSD,
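The infile/outfile test described there can be sketched with dd. The /tmp paths are placeholders for mount points on the two SSDs, and the 8 MB size is kept small for illustration:

```shell
# Build a random infile as if on one SSD, then copy it to a second path
# as if writing to another SSD. With real mount points and a larger size
# this measures buffered sequential read/write through the filesystem.
dd if=/dev/urandom of=/tmp/ssd-a-infile.bin bs=1M count=8 2>/dev/null
dd if=/tmp/ssd-a-infile.bin of=/tmp/ssd-b-outfile.bin bs=1M 2>&1
```

Note this exercises buffered sequential IO only; it says little about the 4k sync-write behaviour that matters for journals, which is why the raw-device fio test keeps being recommended in the thread.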

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-18 Thread Russell Glaue
as outlined earlier will show if the drives are > performing well or not. Also how many osds do you have ? > > On 2017-10-18 19:26, Russell Glaue wrote: > > The SSD drives are Crucial M500 > A Ceph user did some benchmarks and found it had good performance > https://forum.

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-18 Thread Russell Glaue
org> wrote: > measuring resource load as outlined earlier will show if the drives are > performing well or not. Also how many osds do you have ? > > On 2017-10-18 19:26, Russell Glaue wrote: > > The SSD drives are Crucial M500 > A Ceph user did some benchmarks and fo

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-18 Thread Russell Glaue
or you have > too few disks (which I doubt is the case). If only 1 disk's %busy is high, > there may be something wrong with that disk and it should be removed. > > Maged > > On 2017-10-18 18:13, Russell Glaue wrote: > > In my previous post, in one of my points I was wondering if the r
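That check (one disk's %busy far above its peers means the disk is suspect) can be sketched as below. The utilisation figures are invented; in practice they would come from atop or iostat -x:

```shell
# Flag any disk whose %busy exceeds a threshold while its peers idle.
suspects=""
for entry in sdb:35.0 sdc:38.1 sdd:97.4 sde:33.9; do
  disk=${entry%%:*}
  pct=${entry##*:}
  over=$(awk -v p="$pct" 'BEGIN { if (p > 80) print 1; else print 0 }')
  [ "$over" -eq 1 ] && suspects="$suspects $disk"
done
echo "suspect disks:$suspects"
```

A single outlier like this is the signature of a failing drive (or its slot/cable) rather than a cluster-wide configuration problem.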

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-18 Thread Russell Glaue
etter chance > in generating larger requests. Depending on your kernel, the io scheduler > may be different for rbd (blk-mq) vs sdx (cfq) but again I would think the > request size is a result, not a cause. > > Maged > > On 2017-10-17 23:12, Russell Glaue wrote: > > I am ru
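To check the blk-mq vs cfq point, the active scheduler for each block device is visible in sysfs; the one currently in use appears in brackets:

```shell
# Print each block device's io scheduler line, e.g. "sdb: noop deadline [cfq]".
# On blk-mq kernels rbd devices typically show "[mq-deadline]" or "none".
for f in /sys/block/*/queue/scheduler; do
  [ -r "$f" ] || continue
  dev=${f#/sys/block/}
  echo "${dev%%/*}: $(cat "$f")"
done
```

Writing a different scheduler name into the same file switches it at runtime, which makes it easy to test whether the scheduler (rather than the workload) is shaping request sizes.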