Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-20 Thread Yan, Zheng
On Mon, Mar 19, 2018 at 11:45 PM, Nicolas Huillard wrote: > On Monday, March 19, 2018 at 15:30 +0300, Sergey Malinin wrote: >> The default for mds_log_events_per_segment is 1024; in my setup I ended >> up with 8192. >> I calculated that value as (IOPS / number of log segments) * 5 seconds

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote: > > Then I tried to reduce the number of MDSes from 4 to 1, > On Monday, March 19, 2018 at 19:15 +0300, Sergey Malinin wrote: > Forgot to mention that in my setup the issue went away when I had > reverted back to a single MDS and switched

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Sergey Malinin
Forgot to mention that in my setup the issue went away when I had reverted back to a single MDS and switched dirfrag off. On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote: > Then I tried to reduce the number of MDSes from 4 to 1,
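A minimal sketch of how that revert might look on a Luminous-era cluster; the filesystem name "cephfs" and the rank numbers are placeholders, and since the preview does not say exactly how dirfrag was switched off, the mds_bal_frag line below is an assumption, not necessarily what Sergey did.

    # Sketch only -- "cephfs" and the rank numbers are placeholders.
    ceph fs set cephfs max_mds 1        # go back to a single active MDS
    ceph mds deactivate cephfs:1        # pre-Mimic command; repeat for each extra rank
    # Assumed way to stop further directory fragmentation (ceph.conf, [mds] section):
    #   mds bal frag = false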

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
On Monday, March 19, 2018 at 15:30 +0300, Sergey Malinin wrote: > The default for mds_log_events_per_segment is 1024; in my setup I ended > up with 8192. > I calculated that value as (IOPS / number of log segments) * 5 seconds (AFAIK > the MDS performs journal maintenance once every 5 seconds by default). I tried 4096

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Sergey Malinin
The default for mds_log_events_per_segment is 1024; in my setup I ended up with 8192. I calculated that value as (IOPS / number of log segments) * 5 seconds (AFAIK the MDS performs journal maintenance once every 5 seconds by default). On Monday, March 19, 2018 at 15:20, Nicolas Huillard wrote: > I can't find any
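A hedged sketch of how that tuning might be applied; the daemon name mds.a is a placeholder, the 8192 is Sergey's number, and whether it helps depends on the actual metadata IOPS and mds_log_max_segments on a given cluster.

    # Illustration only -- mds.a is a placeholder for the local MDS daemon name.
    # Rough formula from above: events_per_segment ~= (metadata IOPS / mds_log_max_segments) * 5 s
    ceph daemon mds.a config show | grep mds_log                      # inspect current journal settings
    ceph tell mds.* injectargs '--mds_log_events_per_segment 8192'    # apply at runtime
    # To persist it, set in ceph.conf under [mds]:
    #   mds log events per segment = 8192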

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
…on behalf of > Nicolas Huillard <nhuill...@dolomede.fr> > Sent: Monday, March 19, 2018 12:01:09 PM > To: ceph-users@lists.ceph.com > Subject: [ceph-users] Huge amount of cephfs metadata writes while > only reading data (rsync from storage, to single disk) > > Hi all,

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Sergey Malinin
…@dolomede.fr> Sent: Monday, March 19, 2018 12:01:09 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk) Hi all, I'm experimenting with a new little storage cluster. I wanted to take advantage of

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Gregory Farnum
The MDS has to write to its local journal when clients open files, to cope with certain kinds of failures. I guess it doesn't distinguish between read-only opens (when it could *probably* avoid writing them down, although it's not as simple as it sounds) and writeable file opens. So every file
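A rough way to watch that journaling happen from the outside; a small sketch, assuming the active daemon is called mds.a, and the mds_log perf section is the one to look at (counter names may differ slightly between releases).

    # Sketch -- mds.a is a placeholder for your active MDS.
    ceph daemonperf mds.a                     # live per-second view of MDS perf counters
    ceph daemon mds.a perf dump mds_log       # journal counters (events added, segments)
    # Even with a read-only rsync, the journal event counters keep climbing,
    # which corresponds to the metadata writes seen on the OSDs.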

[ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
Hi all, I'm experimenting with a new little storage cluster. I wanted to take advantage of the weekend to copy all the data (1TB, 10M objects) from the cluster to a single SATA disk. I expected to saturate the SATA disk while writing to it, but the storage cluster actually saturates its network
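One quick way to confirm that it is metadata writes, rather than data reads, that saturate the cluster network is to watch per-pool client I/O; a small sketch, assuming the default pool names cephfs_metadata and cephfs_data.

    # Pool names are assumptions -- substitute the pools backing your CephFS.
    watch -n 5 'ceph osd pool stats cephfs_metadata cephfs_data'
    # During the read-only rsync, the metadata pool shows sustained "wr" client I/O
    # while the data pool shows only reads.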