Hi Chao,
On Tue, Apr 16, 2019 at 11:51 AM Chao Yu <[email protected]> wrote:
> /* maximum retry quota flush count */
> #define DEFAULT_RETRY_QUOTA_FLUSH_COUNT 8
>
> I added the above flush count to limit the retry loop, so that we won't be
> stuck for a long time once there is heavy contention between checkpoint()
> and quota updating.
I saw this. I actually tried increasing it to 16 first; that didn't fix it.
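(For context, my reading of the bounded retry in block_operations() is
roughly the sketch below; this is simplified from memory, and the real code
also handles s_umount locking and the later dent/node flush stages:

	int cnt = 0;
retry_flush_quotas:
	if (__need_flush_quota(sbi)) {
		if (++cnt > DEFAULT_RETRY_QUOTA_FLUSH_COUNT) {
			/* give up; fsck can repair the quota file later */
			set_sbi_flag(sbi, SBI_QUOTA_SKIP_FLUSH);
			goto retry_flush_dents;
		}
		f2fs_quota_sync(sbi->sb, -1);
		goto retry_flush_quotas;
	}

so raising the constant only widens the window; it doesn't remove the race.)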
> Once we skip flushing quota in the current checkpoint, the quota sysfile may
> be corrupted, but as long as there is no sudden power-cut, I expect we will
> have a chance to flush all of the quota file's data in the next checkpoint,
> after which the quota file will be consistent again.
>
> So could you track down the root cause of why we set the
> CP_QUOTA_NEED_FSCK_FLAG flag in checkpoint() during umount?
Ok, I'll check this out. Please allow me a day or two for this.
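(As a starting point, the two paths I know of that set the flag in
update_ckpt_flags() look roughly like this; a simplified sketch, and there
may be other paths I'm missing:

	if (is_sbi_flag_set(sbi, SBI_QUOTA_SKIP_FLUSH))
		__set_ckpt_flags(ckpt, CP_QUOTA_NEED_FSCK_FLAG);

	if (is_sbi_flag_set(sbi, SBI_QUOTA_NEED_REPAIR))
		__set_ckpt_flags(ckpt, CP_QUOTA_NEED_FSCK_FLAG);

so I'll check which of SBI_QUOTA_SKIP_FLUSH / SBI_QUOTA_NEED_REPAIR gets
raised during umount.)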
> Do we skip the quota flush because the flush count exceeds
> DEFAULT_RETRY_QUOTA_FLUSH_COUNT?
I'll set this to an unrealistically high number (e.g. 500) and add a log
message to check as well.
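Something along these lines (a hypothetical debug diff, just to confirm
whether the retry cap is what trips the flag; the f2fs_msg() call is written
from memory):

	-#define DEFAULT_RETRY_QUOTA_FLUSH_COUNT	8
	+#define DEFAULT_RETRY_QUOTA_FLUSH_COUNT	500

	 	if (++cnt > DEFAULT_RETRY_QUOTA_FLUSH_COUNT) {
	+		f2fs_msg(sbi->sb, KERN_WARNING,
	+			"quota flush retries exceeded: cnt=%d", cnt);
	 		set_sbi_flag(sbi, SBI_QUOTA_SKIP_FLUSH);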
> Could you check your source code? Did you apply dfede78aa918 ("fsck.f2fs:
> detect and recover corrupted quota file")? This patch enables fsck to repair
> quota file corruption once the kernel sets the CP_QUOTA_NEED_FSCK_FLAG flag.
Oh right. Yeah, my fsck.f2fs was not up-to-date (pie-release branch).
False alarm.
Thanks.