Hi Felix,

Better use fio.

Like: fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k
-iodepth=128 -rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg
(for peak parallel random iops)

Or the same with -iodepth=1 for the latency test. Here you usually get
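Spelled out, the latency variant of the same job would look like the sketch below (pool `rpool_hdd` and image `testimg` are taken from the example above — substitute your own pool and RBD image; the job name `latencytest` is just an illustrative label):

```shell
# Same fio job as above, but with iodepth=1: each 4k random write must
# complete before the next one is issued, so the reported IOPS is the
# reciprocal of the average single-write latency (e.g. 500 iops at
# queue depth 1 corresponds to roughly 2 ms per write).
fio -ioengine=rbd -direct=1 -invalidate=1 -name=latencytest \
    -bs=4k -iodepth=1 -rw=randwrite -pool=rpool_hdd \
    -runtime=60 -rbdname=testimg
```

Note that fio must be built with librbd support for -ioengine=rbd to be available, and the command needs access to a running cluster.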
> Chairman of the Supervisory Board: MinDir Dr. Karl Eugen Huthmacher
> Management Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman),
> Karsten Beneke (Deputy Chairman), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
>
> From: John Petrini
> Date: Friday, 7 June 2019 at 15:49
> To: "Stolte, Felix"
> > Did you do tests before and after this change, and know what the
> > difference is in iops? And is the advantage more or less when your
> > sata hdd's are slower?
> > -----Original Message-----
> > From: Stolte, Felix [mailto:f.sto...@fz-juelich.de]
> > Sent: Thursday, 6 June 2019 10:47
> > To: ceph-users
> > Subject: [ceph-users] Expected IO in luminous Ceph Cluster
> >
> > Hello folks,
> >
> > we are running a ceph cluster on Luminous consisting of 21 OSD Nodes
> > with 9 8TB SATA drives and 3 Intel 3700 SSDs for Bluestore WAL and DB
> > (1:3 Ratio). OSDs have 10Gb for Public and Cluster Network. The
> > cluster is running stable for over a year. We didn't have a closer
> > look on IO