> --
>
> Sent: Wednesday, 03 January 2018 at 16:20
> From: "Steven Vacaroaia" <ste...@gmail.com>
> To: "Brady Deetz" <bde...@gmail.com>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] ceph luminous - performance issue
Thanks for your willingness to help
DELL R620, 1 CPU, 8 cores, 64 GB RAM
cluster network is using 2 bonded 10 Gb NICs (mode=4 / 802.3ad), MTU=9000 (a quick way to verify the bond and MTU is sketched below)
SSD drives are Enterprise grade - 400 GB SSD Toshiba PX04SHB040
HDD drives are - 10k RPM, 600 GB Toshiba AL13SEB600
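
A quick sanity check on the network side, in case it is useful (a sketch only - it assumes the bond interface is called bond0, and <other-cluster-node> is a placeholder for another node on the cluster network):

    # confirm the bond negotiated 802.3ad (mode 4) and that the MTU took effect
    cat /proc/net/bonding/bond0 | grep -i -e "bonding mode" -e lacp
    ip link show bond0 | grep mtu
    # check that jumbo frames pass end-to-end unfragmented (8972 = 9000 minus IP/ICMP headers)
    ping -M do -s 8972 <other-cluster-node>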
Steven
On 3 January 2018 at 09:41, "Brady Deetz" <bde...@gmail.com> wrote:
Can you provide more detail regarding the infrastructure backing this
environment? What hard drives, SSDs, and processors are you using? Also, what
is providing networking?
I'm seeing 4k blocksize tests here. Latency is going to destroy you.
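
For reference, a queue-depth-1 4k run like the one below is what isolates per-op latency (just a sketch - /dev/rbd0 is a placeholder for whatever scratch RBD image is mapped, and fio will write to it destructively):

    fio --name=lat4k --filename=/dev/rbd0 --rw=randwrite --bs=4k --ioengine=libaio \
        --direct=1 --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting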
On Jan 3, 2018 8:11 AM, "Steven Vacaroaia" <ste...@gmail.com> wrote:
Hi,
I am doing a PoC with 3 DELL R620, 12 OSDs and 3 SSD drives (one on each
server), BlueStore.
I configured the OSDs using the following (/dev/sda is my SSD drive):
ceph-disk prepare --zap-disk --cluster ceph --bluestore /dev/sde \
    --block.wal /dev/sda --block.db /dev/sda
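
(As a sketch of how to double-check where the data, DB and WAL partitions actually ended up afterwards - osd.0 is just an example id:)

    ceph-disk list
    ceph osd metadata 0 | grep -E "bluefs_(db|wal)_partition_path|bluestore_bdev_partition_path"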
Unfortunately both fio