Hi Marc,

> -----Original Message-----
> From: Marc Lehmann [mailto:schm...@schmorp.de]
> Sent: Wednesday, September 23, 2015 2:01 PM
> To: Jaegeuk Kim
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: [f2fs-dev] SMR drive test 2; 128GB partition; no obvious 
> corruption, much more
> sane behaviour, weird overprovisioning
> 
> On Wed, Sep 23, 2015 at 06:15:24AM +0200, Marc Lehmann <schm...@schmorp.de> 
> wrote:
> > > However, when I tried to do mkfs.f2fs with the same options, I got about 
> > > 18GB.
> > > Could you share the mkfs.f2fs messages and fsck.f2fs -d3 as well?
> >
> > When I re-ran the mkfs.f2fs, I got:
> 
> I get the feeling I did something idiotic, but for the life of me, I don't
> know what. I see the mkfs.f2fs in my test log, I see it in my command
> history, and yet I can't reproduce it.
> 
> So let's disregard this and go to the next test - I redid the 128G partition
> test, with 6 active logs, no -o, and -s64:
> 
>    mkfs.f2fs -lTEST -s64 -t0 -a0
> 
> This allowed me to arrive at this state, at which rsync stopped making
> progress:
> 
>    root@shag:/sys/fs/f2fs/dm-1# df -H /mnt
>    Filesystem                Size  Used Avail Use% Mounted on
>    /dev/mapper/vg_test-test  138G  137G  803k 100% /mnt
> 
> This would be about perfect (I even got ENOSPC for the first
> time!). However, when I do my "delete every nth file" pass:
> 
>    /dev/mapper/vg_test-test  138G  135G  1.8G  99% /mnt
> 
> The disk still sits mostly idle. I did verify that "sync" indeed reduces
> Pre-Free to 0, and I do see some activity every ~30s now, though:
> 
> http://ue.tst.eu/ac1ec447de214edc4e007623da2dda72.txt (see the dsk/sde
> columns).
> 
> If I start writing, I guess I trigger the foreground gc:
> 
> http://ue.tst.eu/1dfbac9166552a95551855000d820ce9.txt
> 
> The first few lines there are some background GC activity (I guess), then I
> started an rsync to write data - net/total shows the data rsync transfers.
> After that, there is constant ~40MB/s read/write activity, but very little
> actual write data gets to the disk (rsync makes progress at <100KB/s).
> 
> At some point I stop rsync (the single line with 0/0 for sde read and
> write, after the second header), followed by sync a second later. Sync
> does its job, and then there is no activity for a bit, until I start
> rsync again, which immediately triggers the 40/40 mode and makes little
> progress.
> 
> So there is little to no GC activity, even though the filesystem really
> needs some at this point.
> 
> If I play around with gc_* like this:
> 
>    echo 1 >gc_idle
>    echo 1000 >gc_max_sleep_time
>    echo 5000 >gc_no_gc_sleep_time

One thing I notice is that gc_min_sleep_time is not set in your script,
so under some conditions gc may still sleep for gc_min_sleep_time (30
seconds by default) instead of the gc_max_sleep_time we expect.

So setting gc_min_sleep_time and gc_max_sleep_time as a pair is a better
way of controlling the gc thread's sleep time.
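For instance, with the dm-1 instance from your session above (a sketch;
the sysfs times are in milliseconds, and 1000 is just an illustrative
value):

   echo 1000 >/sys/fs/f2fs/dm-1/gc_min_sleep_time
   echo 1000 >/sys/fs/f2fs/dm-1/gc_max_sleep_time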

> 
> Then I get a lot more activity:
> 
> http://ue.tst.eu/f05ee3ff52dc7814ee8352cc2d67f364.txt
> 
> But still, as you can see, a lot of the time the disk and the cpu are
> idle.
> 
> In any case, I think I am getting somewhere - until now, all my tests ended
> in an unusable filesystem sooner or later; this is the first one that shows
> mostly the expected behaviour.
> 
> Maybe the -s128 (or -s256) settings with which I did my previous tests are
> problematic? Maybe the active_logs=2 caused problems (but I only used that
> option recently)?
> 
> And the previous problems can be explained by using inline_dentry and/or
> extent_cache.
> 
> Anyway, this behaviour is what I would expect, mostly.
> 
> Now, I could go with -s64 (128MB segments still span 4-7 zones with this
> disk). Or maybe something uneven, such as -s90, if that doesn't cause
> problems.
> 
> Also, if it were possible to tune the gc to be more aggressive when idle

In the 4.3-rc1 kernel, we have added a new ioctl to trigger gc in batches;
maybe we can use it as one option:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=c1c1b58359d45e1a9f236ce5a40d50720c07c70e
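
As a rough sketch of how to drive it from userspace - assuming the ioctl
is the F2FS_IOC_GARBAGE_COLLECT one from that commit, with magic 0xf5,
number 6 and a __u32 sync flag; please double-check the definition
against the f2fs headers of the kernel you actually run:

   /* f2fs_gc_now.c - trigger one f2fs GC pass via ioctl (sketch) */
   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>
   #include <sys/ioctl.h>
   #include <linux/ioctl.h>
   #include <linux/types.h>

   /* assumed to match fs/f2fs/f2fs.h in 4.3-rc1; verify before use */
   #define F2FS_IOCTL_MAGIC         0xf5
   #define F2FS_IOC_GARBAGE_COLLECT _IOW(F2FS_IOCTL_MAGIC, 6, __u32)

   int main(int argc, char **argv)
   {
           __u32 sync = 1; /* 1: wait for this GC pass to complete */
           int fd;

           if (argc != 2) {
                   fprintf(stderr, "usage: %s <path on f2fs, e.g. /mnt>\n", argv[0]);
                   return 1;
           }
           fd = open(argv[1], O_RDONLY); /* any file or dir on the fs */
           if (fd < 0) {
                   perror("open");
                   return 1;
           }
           if (ioctl(fd, F2FS_IOC_GARBAGE_COLLECT, &sync) < 0)
                   perror("F2FS_IOC_GARBAGE_COLLECT");
           close(fd);
           return 0;
   }

Running that in a loop while the disk is otherwise idle should give a
crude version of the "aggressive gc when idle" behaviour you describe.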

Thanks,

> (and mostly off if the disk is free), and possibly, if the loss of space
> to metadata could be reduced, I'd risk f2fs in production on one system
> here.
> 
> Greetings,
> 
> --
>                 The choice of a       Deliantra, the free code+content MORPG
>       -----==-     _GNU_              http://www.deliantra.net
>       ----==-- _       generation
>       ---==---(_)__  __ ____  __      Marc Lehmann
>       --==---/ / _ \/ // /\ \/ /      schm...@schmorp.de
>       -=====/_/_//_/\_,_/ /_/\_\
> 