From Sachin Punadikar:
Sachin Punadikar has uploaded this change for review. ( https://review.gerrithub.io/372645 )
Change subject: Allow '.' and '..' as target path in symlink creation for NFSv4
It looks like the assert() below and the comment in
mdcache_lru_clean() may not be valid in all cases. For example, if
the cache is getting cleaned in the context of the LRU background thread,
the op_ctx will be NULL and the code may get into the 'else' part
(lru_run() -> lru_run_lane() -> _mdcache_lru
It should be valid. lru_run_lane() sets up op_ctx, so it should be set
correctly even in the LRU thread case.
Daniel
On 08/04/2017 09:54 AM, Pradeep wrote:
Hi Daniel,
I could not find where op_ctx gets populated in lru_run_lane(). I'm using 2.5.1.
https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019
On 8/4/17, Daniel Gryniewicz wrote:
Here:
https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127
On 08/04/2017 10:36 AM, Pradeep wrote:
Thanks Daniel. I see it being initialized. But then it is overwritten
from saved_ctx, right?
https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1161
On 8/4/17, Daniel Gryniewicz wrote:
Yep. We save the old context, run our loop, and then restore the old
context.
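A minimal sketch of the pattern being described, using stand-in names
rather than the actual lru_run_lane() code:

    struct req_op_context;                          /* ganesha's per-request context */
    extern __thread struct req_op_context *op_ctx; /* thread-local, as in ganesha */

    static void lane_work_sketch(struct req_op_context *export_ctx)
    {
        struct req_op_context *saved_ctx = op_ctx; /* in the LRU thread this is usually NULL */

        op_ctx = export_ctx;   /* context for the export whose entry is being processed */

        /* ... per-entry reap/clean work that relies on op_ctx ... */

        op_ctx = saved_ctx;    /* restore whatever the caller had (possibly NULL) */
    }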
On 08/04/2017 10:45 AM, Pradeep wrote:
For the first export, saved_ctx will be NULL, so the assignment at line
1161 sets op_ctx back to NULL. When mdcache_lru_unref() is then called,
op_ctx will be NULL, and the assert in mdcache_lru_clean() will fire.
Perhaps we can move the assignment at line 1161 to after the
mdcache_lru_unref() call?
Yeah, probably.
Daniel
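In terms of the sketch earlier in the thread, the suggested change is just
the ordering of the last two steps (illustrative only, not a literal patch
to mdcache_lru.c):

    /* current reading of 2.5.1: restore first, then unref */
    op_ctx = saved_ctx;          /* NULL for the first export */
    mdcache_lru_unref(entry);    /* mdcache_lru_clean() then sees op_ctx == NULL */

    /* proposed: unref while the per-export context is still installed */
    mdcache_lru_unref(entry);
    op_ctx = saved_ctx;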
I think we need to ensure that the partition lock is taken before the
qlane lock. I have a patch for this, but it introduced a refcount
issue, so I'm debugging.
Daniel
On 08/03/2017 08:52 PM, Pradeep wrote:
Thanks, Frank. I merged your patch and am now hitting another deadlock.
Here are the two
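The rule Daniel is describing is a consistent lock order: every path that
needs both locks takes the partition lock first and the qlane lock second,
so no two threads can end up each holding one lock while waiting on the
other. A generic illustration with made-up lock names (not the real
mdcache locks):

    #include <pthread.h>

    /* Stand-ins for the hash-partition and LRU-qlane locks. */
    static pthread_rwlock_t partition_lock = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_mutex_t qlane_lock = PTHREAD_MUTEX_INITIALIZER;

    static void reap_one_entry(void)
    {
        pthread_rwlock_wrlock(&partition_lock);  /* 1st: partition */
        pthread_mutex_lock(&qlane_lock);         /* 2nd: qlane */

        /* ... unhash the entry and unlink it from its LRU queue ... */

        pthread_mutex_unlock(&qlane_lock);
        pthread_rwlock_unlock(&partition_lock);
    }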
Hello,
I'm hitting a case where mdcache keeps growing well beyond the high
water mark. Here is a snapshot of the lru_state:
1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
10, chunks_used = 16462,
It has grown to 2.3 million entries and each entry is ~1.6K.
I looked at t
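(Back-of-the-envelope from the figures above: 2,306,063 entries at roughly
1.6 KiB each works out to about 3.5 GiB of cached metadata.)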
My mistake. As you both correctly pointed out, refcnt needs to be 1 for reclaim.
It is initialized to 2, so something must be doing an unref()/put() to bring it down to 1.
On 8/4/17, Daniel Gryniewicz wrote:
When you create a new entry, you get back a ref with it that you are
expected to release when you're done. In addition, the hash table has a
ref, so the initial refcount of an entry is 2.
Otherwise, you'd have to create it, and immediately get_ref(), which is
still racy.
Daniel
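A sketch of that initial-refcount-of-2 scheme, with hypothetical types and
helpers rather than the actual mdcache code:

    #include <stdint.h>
    #include <stdlib.h>

    struct entry_sketch {
        int32_t refcnt;
        /* ... cached attributes, hash and LRU linkage ... */
    };

    void hashtable_insert_sketch(struct entry_sketch *e); /* stand-in for the hash insert */

    struct entry_sketch *entry_create_sketch(void)
    {
        struct entry_sketch *e = calloc(1, sizeof(*e));

        if (e == NULL)
            return NULL;

        /* One ref belongs to the hash table, one is handed back to the
         * caller. Returning with the caller's ref already taken avoids a
         * window between create and a separate get_ref() in which another
         * thread could reap the brand-new entry. */
        e->refcnt = 2;
        hashtable_insert_sketch(e);
        return e;              /* caller releases this ref when done */
    }

    /* A reaper may only reclaim an entry once refcnt has dropped back to 1,
     * i.e. only the hash table still holds a reference. */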
Branch next
Tag: V2.6-dev-2
Release Highlights
* Fix 4 pynfs 4.1 test cases
* Fix deadlock in lru_reap_impl and simplify reapers
* Add new Reaper_Work_Per_Lane option
* export: add a parameter to indicate lock type for foreach_gsh_export
* update FATTR4_XATTR_SUPPORT
* GPFS: add sys/sysmac