The information you're giving sounds a little contradictory, but my
guess is that you're seeing the impact of object promotion and
flushing in the cache tier. You can sample the operations an OSD is
servicing at any given moment by running the dump_ops_in_flight
command on the OSD admin socket. I'm not sure whether "rados df"
reports cache movement activity or not.
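If it helps, here's a sketch of querying the admin socket; the OSD id
(osd.0) and the .asok path are assumptions, so adjust them for your
deployment:

```shell
# Ask osd.0 which operations it is currently servicing.
ceph daemon osd.0 dump_ops_in_flight

# Equivalent form naming the admin socket explicitly; the path below
# is the default location and may differ on your hosts.
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight

# The perf counters can also hint at promote/flush activity on a
# cache-tier OSD.
ceph daemon osd.0 perf dump
```

Run the dump a few times while streaming a file and look for write
ops arriving at OSDs that should only be serving reads.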

Most of that traffic, though, would be written to the SSDs rather than
the hard drives, although the hard drives could still get metadata
updates written when objects are flushed. What data exactly are you
seeing that leads you to believe writes are happening against these
drives? And what is your exact CephFS and cache pool configuration?
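For the configuration, something like the following would show the
relevant bits; "cachepool" is a placeholder name, substitute your own:

```shell
# List all pools with their tiering relationships and cache modes.
ceph osd dump | grep pool

# Show the CephFS filesystems and the metadata/data pools backing them.
ceph fs ls

# Inspect the cache-tier tunables on the cache pool.
ceph osd pool get cachepool hit_set_type
ceph osd pool get cachepool target_max_bytes
```

The "ceph osd dump" output in particular will say which pool is the
tier, its cache_mode, and which pool it fronts.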
-Greg

On Mon, Mar 16, 2015 at 2:36 PM, Erik Logtenberg <e...@logtenberg.eu> wrote:
> Hi,
>
> I forgot to mention: while I am seeing these writes in iotop and
> /proc/diskstats for the hdd's, I am -not- seeing any writes in "rados
> df" for the pool residing on these disks. There is only one pool active
> on the hdd's and according to rados df it is getting zero writes when
> I'm just reading big files from cephfs.
>
> So apparently the osd's are doing some non-trivial amount of writing on
> their own behalf. What could it be?
>
> Thanks,
>
> Erik.
>
>
> On 03/16/2015 10:26 PM, Erik Logtenberg wrote:
>> Hi,
>>
>> I am getting relatively bad performance from cephfs. I use a replicated
>> cache pool on ssd in front of an erasure coded pool on rotating media.
>>
>> When reading big files (streaming video), I see a lot of disk i/o,
>> especially writes. I have no clue what could cause these writes. The
>> writes are going to the hdd's and they stop when I stop reading.
>>
>> I mounted everything with noatime and nodiratime, so it shouldn't be
>> that. On a related note, the CephFS metadata is stored on SSD too, so
>> metadata-related changes shouldn't hit the HDDs anyway, I think.
>>
>> Any thoughts? How can I get more information about what ceph is doing?
>> Using iotop I only see that the osd processes are busy but it doesn't
>> give many hints as to what they are doing.
>>
>> Thanks,
>>
>> Erik.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>