Hi,
2013/9/11 Chao Yu chao2...@samsung.com
Hi Kim,
I did some tests, as you suggested, using random numbers instead of a
spin_lock. The test model is as follows:
eight threads race to grab one of eight locks one thousand times each,
and I used four methods to generate the lock number:
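A minimal userspace sketch of that test model (illustrative only: the
four generators themselves are cut off above, so just a random pick is
shown where the generator plugs in; all names here are mine, not from
the actual test):

#include <pthread.h>
#include <stdlib.h>

#define NR_LOCKS	8
#define NR_THREADS	8
#define NR_ROUNDS	1000

static pthread_mutex_t locks[NR_LOCKS];

static void *worker(void *arg)
{
	unsigned int seed = (unsigned int)(long)arg;

	for (int i = 0; i < NR_ROUNDS; i++) {
		/* lock-number generator under test: random here; a
		 * shared counter, with or without a spin_lock, would
		 * be swapped in at this point */
		int n = rand_r(&seed) % NR_LOCKS;

		pthread_mutex_lock(&locks[n]);
		/* stand-in for the critical section */
		pthread_mutex_unlock(&locks[n]);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];

	for (int i = 0; i < NR_LOCKS; i++)
		pthread_mutex_init(&locks[i], NULL);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)(long)(i + 1));
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}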
Hi Russ,
The fs_locks are used for recovery, so they are not related to
stress-testing.
Actually, my concern is that we should not grab two or more
fs_locks in the same call path.
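To make that concern concrete, the pattern to avoid looks like this (a
hypothetical sketch: do_xattr_update() is invented, while
mutex_lock_op()/mutex_unlock_op() are the f2fs helpers of that era):

/* With a fixed pool of eight fs_locks handed out by mutex_lock_op(),
 * eight threads that each hold one lock while waiting for a second
 * can block forever. */
static void bad_nesting(struct f2fs_sb_info *sbi)
{
	int ilock = mutex_lock_op(sbi);	/* first fs_lock held */

	do_xattr_update(sbi);	/* if this calls mutex_lock_op() again,
				 * the path holds two fs_locks and can
				 * deadlock under contention */
	mutex_unlock_op(sbi, ilock);
}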
Thanks,
2013/9/11 Russ Knize russ.kn...@motorola.com:
Hi Jaegeuk/Gu,
I've removed the lock and have
Hi Gu,
2013/9/11 Gu Zheng guz.f...@cn.fujitsu.com:
Hi Jaegeuk, Chao,
On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
Hi,
First of all, thank you for the report, and please follow the email
writing rules. :)
Anyway, I agree to the below issue.
One thing that I can think of is that we don't need
Jaegeuk,
My tests include forced kernel panics while fsstress is running, which
generates a lot of recovery activity. Sorry I wasn't more clear.
I understand your concern, which is why I first tried to keep the
fs_lock in the xattr_handler->set() path from VFS while removing it
from the call
Hi,
On 11/09/2013 21:19, Kim Jaegeuk wrote:
Hi Russ,
The fs_locks are used for recovery, so they are not related to
stress-testing.
Actually, my concern is that we should not grab two or more
fs_locks in the same call path.
Thanks,
I am wondering why we don't use another kind
Hi Kim
-----Original Message-----
From: Kim Jaegeuk [mailto:jaegeuk@gmail.com]
Sent: Wednesday, September 11, 2013 9:15 PM
To: chao2...@samsung.com
Cc: ???; 谭姝; linux-fsde...@vger.kernel.org;
linux-ker...@vger.kernel.org;
linux-f2fs-devel@lists.sourceforge.net
Subject: Re: Re:
Hi Gu
-----Original Message-----
From: Gu Zheng [mailto:guz.f...@cn.fujitsu.com]
Sent: Wednesday, September 11, 2013 1:38 PM
To: jaegeuk@samsung.com
Cc: chao2...@samsung.com; shu@samsung.com;
linux-fsde...@vger.kernel.org; linux-ker...@vger.kernel.org;
From: Yu Chao chao2...@samsung.com
There is a performance problem: when all of the sbi->fs_lock mutexes
are held, all subsequent threads may get the same next_lock value from
sbi->next_lock_num in mutex_lock_op(), and then wait on the same lock
(fs_lock[next_lock]), which can degrade performance.
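For context, the logic in question looks roughly like this (a
reconstruction from the description above against f2fs of that era,
where NR_GLOBAL_LOCKS is 8; check the actual tree for the exact code):

static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
{
	unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
	int i = 0;

	for (; i < NR_GLOBAL_LOCKS; i++)
		if (mutex_trylock(&sbi->fs_lock[i]))
			return i;

	/* All locks are held: every thread that reaches this point
	 * before next_lock_num is incremented reads the same next_lock
	 * value and queues on the same fs_lock, leaving the other seven
	 * locks under-used once their holders release them. */
	mutex_lock(&sbi->fs_lock[next_lock]);
	sbi->next_lock_num++;
	return next_lock;
}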