> -----Original Message-----
> From: Jaegeuk Kim [mailto:jaeg...@kernel.org]
> Sent: Friday, September 25, 2015 1:21 AM
> To: Marc Lehmann
> Cc: Chao Yu; linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: [f2fs-dev] SMR drive test 2; 128GB partition; no obvious corruption,
> much more sane behaviour, weird overprovisioning
> 
> On Thu, Sep 24, 2015 at 01:43:24AM +0200, Marc Lehmann wrote:
> > On Thu, Sep 24, 2015 at 01:30:22AM +0200, Marc Lehmann <schm...@schmorp.de> 
> > wrote:
> > > > One thing I note is that gc_min_sleep_time is not set in your script,
> > > > so in some conditions GC may still sleep for gc_min_sleep_time
> > > > (30 seconds by default) instead of the gc_max_sleep_time we expect.
> > >
> > > Ah, sorry, I actually set gc_min_sleep_time to 100, but forgot to include
> > > it.
> >
> > Sorry, that sounded confusing - I set it to 100 in previous tests, and forgot
> > to include it here, so it was running with 30000. When experimenting, I do
> > now get the GC to run more frequently.
> >
> > Is there any obvious harm in setting it to a very low value (such as 100
> > or 10)?
> >
> > I assume all it does is leave less of a time buffer between the last operation
> > and the GC starting. When I write in batches, or when I know the fs will be
> > idle, there shouldn't be any harm, performance-wise, in letting it work all
> > the time.
> 
> Yeah, I don't think very small time periods matter much, since the timer
> is re-armed after background GC is done.
> But, since we use msecs_to_jiffies(), I'd rather not use something like 10 ms:
> each background GC pass reads the victim blocks into the page cache and then
> just marks them dirty.
> That means we rely on the flusher to write them all to disk after a while,
> before we finally get a free section.
> So, IMO, we need to give some time slots to the flusher as well.
> 
> For example, if the write bandwidth is 30MB/s and the section size is 128MB, it
> takes about 4 seconds to write one section.
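
To make that arithmetic concrete, here is a minimal sketch; the 30MB/s and 128MB
figures are just the numbers from the example above, and the sdb1 device name and
the /sys/fs/f2fs/<device> path for the gc_*_sleep_time knobs are assumptions to
adapt to your setup:

        # Rough time to write out one section at the example bandwidth:
        # 128 MB / 30 MB/s ~= 4.3 seconds.
        echo "scale=1; 128 / 30" | bc    # -> 4.2

        # Check the current background GC sleep intervals (values in ms)
        # before lowering them; replace sdb1 with your f2fs partition.
        grep . /sys/fs/f2fs/sdb1/gc_*_sleep_time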

It's better for us to also take the VM dirty data flush policy into account. IIRC,
Fengguang did the writeback optimization work: if the dirty ratio (dirty bytes?)
is not high, the VM flushes data fairly slowly, but as the dirty ratio increases,
the VM flushes data more aggressively. If we want to use a large share of the
maximum bandwidth, the values of the following interfaces could be considered
when tuning them together with the GC policy of f2fs.

/proc/sys/vm/
        dirty_background_bytes
        dirty_background_ratio
        dirty_expire_centisecs
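
For what it's worth, a minimal sketch of inspecting and tuning these knobs; the
values below are only illustrative, not recommendations:

        # Show the current writeback settings.
        grep . /proc/sys/vm/dirty_background_bytes \
               /proc/sys/vm/dirty_background_ratio \
               /proc/sys/vm/dirty_expire_centisecs

        # Illustrative values only: start background writeback once ~64MB is
        # dirty (setting *_bytes overrides *_ratio), and treat dirty pages as
        # expired after 10 seconds so the flusher pushes GC-dirtied pages out
        # sooner.
        echo $((64 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes
        echo 1000 > /proc/sys/vm/dirty_expire_centisecs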

Thanks,

> So, how about setting
>  - gc_min_time to 1~2 secs,
>  - gc_max_time to 3~4 secs,
>  - gc_idle_time to 10 secs,
>  - reclaim_segments to 64 (sync when 1 section becomes prefree)
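
A sketch of what those suggestions might look like as sysfs writes; the sdb1
device name is an example, and the mapping of "gc_idle_time" above to the
gc_no_gc_sleep_time knob is an assumption to verify against your kernel's
Documentation/ABI/testing/sysfs-fs-f2fs:

        # Suggested tuning (gc_*_sleep_time values are in milliseconds,
        # reclaim_segments is in segments). Replace sdb1 with your partition.
        d=/sys/fs/f2fs/sdb1
        echo 2000  > $d/gc_min_sleep_time     # 1~2 secs
        echo 4000  > $d/gc_max_sleep_time     # 3~4 secs
        echo 10000 > $d/gc_no_gc_sleep_time   # assuming this is the "gc_idle_time" knob
        echo 64    > $d/reclaim_segments      # sync when 1 section becomes prefree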
> 
> Thanks,
> 
> >
> > --
> >                 The choice of a       Deliantra, the free code+content MORPG
> >       -----==-     _GNU_              http://www.deliantra.net
> >       ----==-- _       generation
> >       ---==---(_)__  __ ____  __      Marc Lehmann
> >       --==---/ / _ \/ // /\ \/ /      schm...@schmorp.de
> >       -=====/_/_//_/\_,_/ /_/\_\

