From: Reed Dier<mailto:reed.d...@focusvq.com>
Sent: Friday, October 21, 2016 10:06 AM
To: Christian Balzer<mailto:ch...@gol.com>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
> On Oct 19, 2016, at 7:54 PM, Christian Balzer wrote:
>
>
> Hello,
>
> On Wed, 19 Oct 2016 12:28:28 +0000 Jim Kilborn wrote:
>
>> I have set up a new linux cluster to allow migration from our old SAN based
>> cluster to a new cluster with ceph.
>> All systems running centos 7.2 with the 3.10.0-327.36.1 kernel.
> 911 IOPS
> 2.0 MB/sec 501 IOPS
>
> 4K Read
> 28 MB/sec 7001 IOPS
> 8 MB/sec 1945 IOPS
> 13 MB/sec 3256 IOPS
>
> 4K Rand Read
> 263 KB/sec
> 5 MB/sec 1246 IOPS
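The quoted 4K read and random-read figures read like fio results. A hypothetical fio invocation along those lines is sketched below; the target file path, size, I/O depth, and runtime are all assumptions for illustration, not taken from the thread:

```shell
# Build a hypothetical fio command line for a 4K random-read test like the
# figures quoted above; every parameter here is an assumption.
FIO_CMD="fio --name=randread4k --filename=/mnt/cephfs/fio.test --size=1G \
--rw=randread --bs=4k --ioengine=libaio --direct=1 --iodepth=32 \
--runtime=60 --time_based --group_reporting"
# Print it for inspection; run it with: eval "$FIO_CMD"
echo "$FIO_CMD"
```

Varying --rw (read, randread) and --bs would reproduce the different rows quoted above.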
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
Thanks Christian for the additional information and comments.
· Upgraded the kernels, but still had poor performance
· Removed all the pools and recreated with just a replica
From: Christian Balzer<mailto:ch...@gol.com>
Sent: Wednesday, October 19, 2016 7:54 PM
To: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Cc: Jim Kilborn<mailto:j...@kilborns.com>
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
Hello,
On Wed, 19 Oct 2016 12:28:28 +0000 Jim Kilborn wrote:
> I have set up a new linux cluster to allow migration from our old SAN based
> cluster to a new cluster with ceph.
> All systems running centos 7.2 with the 3.10.0-327.36.1 kernel.
As others mentioned, not a good choice, but also not
From: John Spray<mailto:jsp...@redhat.com>
Sent: Wednesday, October 19, 2016 9:10 AM
To: Jim Kilborn<mailto:j...@kilborns.com>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel - cache
pressure, capability release, poor iostat await avg queue size
On Wed, Oct 19, 2016 at 1:28 PM, Jim Kilborn <j...@kilborns.com> wrote:
> I have set up a new linux cluster to allow migration from our old SAN based
> cluster to a new cluster with ceph.
> All systems running centos 7.2 with the 3.10.0-327.36.1 kernel.
> I am basically running stock ceph settings, with just turning the write cache
> off via hdparm on the drives, and
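The "write cache off via hdparm" step quoted above could be sketched as below; the device names are assumptions for illustration, not taken from the thread:

```shell
# Dry-run sketch of disabling each drive's volatile write cache, as the
# quoted message describes doing via hdparm. The device list is assumed.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    # -W 0 turns the on-drive write cache off; the command is echoed here
    # as a dry run -- remove 'echo' (and run as root) to actually apply it.
    echo hdparm -W 0 "$dev"
done
```

Note that hdparm settings generally do not survive a power cycle, so this is typically reapplied at boot (e.g. from a udev rule or startup script).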