Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-12 Thread Виталий Филиппов
Hi Felix, better use fio, e.g.: fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 -rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg (for peak parallel random iops), or the same with -iodepth=1 for the latency test.
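For reference, the latency-test variant he describes would be the same command with queue depth 1 (a sketch assuming the same pool and image names as above):

  # single outstanding IO: measures per-operation write latency rather than peak iops
  fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg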

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-11 Thread John Petrini

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-11 Thread Stolte, Felix

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread John Petrini

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
> Did you do tests before and after this change, and do you know what the difference in iops is? And is the advantage more or less when your sata hdd's are slower?

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Sinan Polat
> Did you do tests before and after this change, and do you know what the difference in iops is? And is the advantage more or less when your sata hdd's are slower?

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
-----Original Message----- From: Stolte, Felix [mailto:f.sto...@fz-juelich.de] Sent: Thursday, 6 June 2019 10:47 To: ceph-users Subject: [ceph-users] Expected IO in luminous Ceph Cluster. Hello folks, we are running a ceph cluster …

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Sinan Polat
-----Original Message----- From: Stolte, Felix [mailto:f.sto...@fz-juelich.de] Sent: Thursday, 6 June 2019 10:47 To: ceph-users Subject: [ceph-users] Expected IO in luminous Ceph Cluster. Hello folks, we are running a ceph cluster …

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
Did you do tests before and after this change, and do you know what the difference in iops is? And is the advantage more or less when your sata hdd's are slower? -----Original Message----- From: Stolte, Felix [mailto:f.sto...@fz-juelich.de] Sent: Thursday, 6 June 2019 10:47 To: ceph-users Subject: [ceph-users] Expected IO in luminous Ceph Cluster

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-06 Thread Marc Roos
From: Stolte, Felix [mailto:f.sto...@fz-juelich.de] Sent: Thursday, 6 June 2019 10:47 To: ceph-users Subject: [ceph-users] Expected IO in luminous Ceph Cluster. Hello folks, we are running a ceph cluster on Luminous consisting of 21 OSD nodes with 9x 8TB SATA drives and 3 Intel 3700 SSDs for Bluestore WAL and DB (1:3 ratio). OSDs have 10Gb for public and cluster network …

[ceph-users] Expected IO in luminous Ceph Cluster

2019-06-06 Thread Stolte, Felix
Hello folks, we are running a ceph cluster on Luminous consisting of 21 OSD nodes with 9x 8TB SATA drives and 3 Intel 3700 SSDs for Bluestore WAL and DB (1:3 ratio). OSDs have 10Gb for public and cluster network. The cluster has been running stable for over a year. We hadn't taken a closer look at IO …
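A quick way to sanity-check raw throughput on a cluster like this, before reaching for fio, is rados bench against a scratch pool; a minimal sketch, assuming a pool named testpool (hypothetical name):

  # 60 seconds of 4MB writes with 16 concurrent ops, keeping objects for the read test
  rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
  # sequential reads of the objects written above
  rados bench -p testpool 60 seq
  # remove the benchmark objects afterwards
  rados -p testpool cleanup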