Hi, Maxime.
Linux SMR support only starts with kernel version 4.9.
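A quick way to check a node against that minimum (a sketch; the 4.9 figure is taken from the statement above, and `sort -V` assumes GNU coreutils):

```shell
#!/bin/sh
# Compare dotted version strings: kver_ge A B succeeds when A >= B.
kver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Strip any "-generic"/"-default" suffix before comparing.
kernel="$(uname -r | cut -d- -f1)"
if kver_ge "$kernel" "4.9"; then
  echo "kernel $kernel: new enough for SMR (>= 4.9)"
else
  echo "kernel $kernel: too old for SMR (< 4.9)"
fi
```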
Best regards, Irek Fasikhov
Mob.: +79229045757
2017-02-03 10:26 GMT+03:00 Maxime Guyot :
> Hi everyone,
>
>
>
> I’m wondering if anyone on the ML is running a cluster with archive type
> HDDs,
Hi all,
I am still confused about my CephFS sandbox.
When I run a simple FIO test against a single 3 GB file, I see far too many
IOPS:
cephnode:~ # fio payloadrandread64k3G
test: (g=0): rw=randread, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio,
iodepth=2
fio-2.13
Starting 1 process
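For reference, the job file used above was not posted; a recipe along these lines (reconstructed from the fio output, so the directory is a placeholder and direct=1 is my addition, not confirmed by the post) would match that run:

```ini
; sketch of what payloadrandread64k3G may look like -- reconstructed
; from the fio summary line above, not the poster's actual file
[global]
ioengine=libaio
iodepth=2
bs=64k
size=3g
direct=1                ; assumption: bypass the page cache
directory=/mnt/cephfs   ; placeholder CephFS mount point

[test]
rw=randread
```

Without direct=1 (or dropping caches first), repeated reads of a 3 GB file can be served largely from client RAM, which would explain an inflated IOPS number.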
> On 2 February 2017 at 15:35, Ahmed Khuraidah wrote:
>
>
> Hi all,
>
> I am still confused about my CephFS sandbox.
>
> When I run a simple FIO test against a single 3 GB file, I see far too
> many IOPS:
>
> cephnode:~ # fio payloadrandread64k3G
> test:
Hi,
We are testing a multi-site Ceph cluster using the 0.94.5 release.
There are two sites, with two Ceph nodes in each site.
Each node runs a monitor and a number of OSDs.
The CRUSH rules are configured to require a copy of the data in each site.
The sites are connected by a private high-speed link.
In
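The CRUSH layout described above could be sketched roughly like this (the poster's actual rule was not included, so the bucket type names `site` and `host` and the root `default` are assumptions):

```
rule multisite {
    ruleset 1
    type replicated
    min_size 1
    max_size 2
    step take default
    # pick both sites, then one host (and one OSD) within each --
    # "site" and "host" are assumed bucket type names
    step choose firstn 2 type site
    step chooseleaf firstn 1 type host
    step emit
}
```

With pool size 2, a rule like this places exactly one replica per site.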
Hi everyone,
I’m wondering if anyone on the ML is running a cluster with archive type HDDs,
like the HGST Ultrastar Archive (10TB@7.2k RPM) or the Seagate Enterprise
Archive (8TB@5.9k RPM)?
As far as I have read, they both fall into the enterprise HDD class, so they
*might* be suitable for a low
You may want to add this to your FIO recipe:
* exec_prerun=echo 3 > /proc/sys/vm/drop_caches
Regards,
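In recipe form, that prerun line tells the kernel to free the page cache plus dentries and inodes before the run starts (it needs root), so the random reads are actually served by Ceph rather than local memory. A minimal sketch:

```ini
[global]
; drop_caches values: 1 = page cache, 2 = dentries/inodes, 3 = both
exec_prerun=echo 3 > /proc/sys/vm/drop_caches
```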
On Fri, Feb 3, 2017 at 12:36 AM, Wido den Hollander wrote:
>
>> On 2 February 2017 at 15:35, Ahmed Khuraidah wrote:
>>
>>
>> Hi all,
>>
>> I am still
On Thu, Feb 2, 2017 at 8:01 AM, Ilia Sokolinski wrote:
> Hi,
>
> We are testing a multi-site Ceph cluster using the 0.94.5 release.
> There are two sites, with two Ceph nodes in each site.
> Each node runs a monitor and a number of OSDs.
> The CRUSH rules are configured to