[Nfs-ganesha-devel] Announce Push of V2.6-dev-2

2017-08-04 Thread Frank Filz
Branch next

Tag:V2.6-dev-2

Release Highlights

* Fix 4 pynfs 4.1 test cases

* Fix deadlock in lru_reap_impl and simplify reapers

* Add new Reaper_Work_Per_Lane option

* export: add a parameter to indicating lock type for foreach_gsh_export

* update FATTR4_XATTR_SUPPORT

* GPFS: add sys/sysmacros.h include to export.c

* FSAL_GLUSTER: Use glfs_xreaddirplus_r for readdir

* FSAL_RGW: adopt new rgw_mount2 with bucket specified

* CMake - Have 'make dist' generate the correct tarball name

* Add dbus interface to recalculate fd limits

Signed-off-by: Frank S. Filz 

Contents:

a1dd47c Frank S. Filz V2.6-dev-2
860368f Malahal Naineni Add dbus interface to recalculate fd limits
c8bc40b Daniel Gryniewicz CMake - Have 'make dist' generate the correct
tarball name
814e9cd Gui Hecheng FSAL_RGW: adopt new rgw_mount2 with bucket specified
39119aa Soumya Koduri FSAL_GLUSTER: Use glfs_xreaddirplus_r for readdir
3441818 Dominique Martinet GPFS: add sys/sysmacros.h include to export.c
d2b7016 Daniel Gryniewicz Add new Reaper_Work_Per_Lane option
18baba1 Kinglong Mee export: add a parameter to indicating lock type for
foreach_gsh_export
ac16ad4 Marc Eshel update FATTR4_XATTR_SUPPORT
f0a6a4e Frank S. Filz Fix CREATE_SESSION replay cache
18ca372 Frank S. Filz Fix EID6h and EID7 pynfs 4.1 test cases
4aec481 Frank S. Filz Fix deadlock in lru_reap_impl and simplify reapers




Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-04 Thread Daniel Gryniewicz
When you create a new entry, you get back a ref with it, that you are 
expected to release when you're done.  In addition, the hash table has a 
ref, so the initial ref of an entry is 2.


Otherwise, you'd have to create it, and immediately get_ref(), which is 
still racy.
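
A minimal sketch of that convention (hypothetical names, not the actual
mdcache API): a freshly inserted entry starts with two refs, one owned by
the hash table and one owned by the creator, so the creator can use the
entry and drop its own ref whenever it is done without racing the reaper.

#include <stdint.h>
#include <stdlib.h>

struct entry {
        int32_t refcnt;         /* 2 at creation: hash-table ref + caller ref */
        /* ... other fields ... */
};

static struct entry *entry_create_and_insert(void)
{
        struct entry *e = calloc(1, sizeof(*e));

        if (e == NULL)
                return NULL;

        e->refcnt = 2;          /* one for the hash table, one for the caller */
        /* hash-table insert would go here */
        return e;
}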


Daniel

On 08/04/2017 02:09 PM, Pradeep wrote:

My mistake. As you both correctly pointed out, refcnt needs to be 1 for reclaim.
It is initialized to 2, so something must be doing an unref()/put() to make it 1.

On 8/4/17, Daniel Gryniewicz  wrote:

On 08/04/2017 01:14 PM, Pradeep wrote:

Hello,

I'm hitting a case where mdcache keeps growing well beyond the high
water mark. Here is a snapshot of the lru_state:

1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
10, chunks_used = 16462,

It has grown to 2.3 million entries and each entry is ~1.6K.

I looked at the first entry in lane 0, L1 queue:

(gdb) p LRU[0].L1
$9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
LRU_ENTRY_L1, size = 254628}
(gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
$10 = (mdcache_entry_t *) 0x7fad64256b00
(gdb) p $10->lru
$11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
(gdb) p $10->fh_hk.inavl
$13 = true

Lane 1:
(gdb) p LRU[1].L1
$18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
LRU_ENTRY_L1, size = 253006}
(gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
$21 = (mdcache_entry_t *) 0x7fad625bff00
(gdb) p $21->lru
$22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}

(gdb) p $21->fh_hk.inavl
$24 = true

As per LRU_ENTRY_RECLAIMABLE(), these entries should be reclaimable. Not
sure why it is not able to reclaim them. Any ideas?



refcnt == 2 is not reclaimable.  Reclaimable is refcnt == 1.  It checks
for 2 because it just took a ref.  Unless you're actually processing
that lane, and so seeing the ref taken during that processing, refcnt
will be 3 when processing, and it won't be reclaimed.

Daniel



Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-04 Thread Pradeep
My mistake. As you both correctly pointed out, refcnt needs to be 1 for reclaim.
It is initialized to 2, so something must be doing an unref()/put() to make it 1.

On 8/4/17, Daniel Gryniewicz  wrote:
> On 08/04/2017 01:14 PM, Pradeep wrote:
>> Hello,
>>
>> I'm hitting a case where mdcache keeps growing well beyond the high
>> water mark. Here is a snapshot of the lru_state:
>>
>> 1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
>> 10, chunks_used = 16462,
>>
>> It has grown to 2.3 million entries and each entry is ~1.6K.
>>
>> I looked at the first entry in lane 0, L1 queue:
>>
>> (gdb) p LRU[0].L1
>> $9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
>> LRU_ENTRY_L1, size = 254628}
>> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
>> $10 = (mdcache_entry_t *) 0x7fad64256b00
>> (gdb) p $10->lru
>> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
>> (gdb) p $10->fh_hk.inavl
>> $13 = true
>>
>> Lane 1:
>> (gdb) p LRU[1].L1
>> $18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
>> LRU_ENTRY_L1, size = 253006}
>> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
>> $21 = (mdcache_entry_t *) 0x7fad625bff00
>> (gdb) p $21->lru
>> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
>>
>> (gdb) p $21->fh_hk.inavl
>> $24 = true
>>
>> As per LRU_ENTRY_RECLAIMABLE(), these entries should be reclaimable. Not
>> sure why it is not able to reclaim them. Any ideas?
>>
>
> refcnt == 2 is not reclaimable.  Reclaimable is refcnt == 1.  It checks
> for 2 because it just took a ref.  Unless you're actually processing
> that lane, and so seeing the ref taken during that processing, refcnt
> will be 3 when processing, and it won't be reclaimed.
>
> Daniel
>


Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-04 Thread Daniel Gryniewicz

On 08/04/2017 01:14 PM, Pradeep wrote:

Hello,

I'm hitting a case where mdcache keeps growing well beyond the high
water mark. Here is a snapshot of the lru_state:

1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
10, chunks_used = 16462,

It has grown to 2.3 million entries and each entry is ~1.6K.

I looked at the first entry in lane 0, L1 queue:

(gdb) p LRU[0].L1
$9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
LRU_ENTRY_L1, size = 254628}
(gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
$10 = (mdcache_entry_t *) 0x7fad64256b00
(gdb) p $10->lru
$11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
(gdb) p $10->fh_hk.inavl
$13 = true

Lane 1:
(gdb) p LRU[1].L1
$18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
LRU_ENTRY_L1, size = 253006}
(gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
$21 = (mdcache_entry_t *) 0x7fad625bff00
(gdb) p $21->lru
$22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}

(gdb) p $21->fh_hk.inavl
$24 = true

As per LRU_ENTRY_RECLAIMABLE(), these entries should be reclaimable. Not
sure why it is not able to reclaim them. Any ideas?



refcnt == 2 is not reclaimable.  Reclaimable is refcnt == 1.  It checks 
for 2 because it just took a ref.  Unless you're actually processing 
that lane, and so seeing the ref taken during that processing, refcnt 
will be 3 when processing, and it won't be reclaimed.
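
A minimal sketch of the check being described (approximate, not a verbatim
copy of LRU_ENTRY_RECLAIMABLE() in mdcache_lru.c): the reaper has already
taken its own ref, so an entry whose only other ref is the sentinel shows
up as refcnt == 2 at this point; anything higher is in use elsewhere and
gets skipped.

#include <stdbool.h>
#include <stdint.h>

#define SENTINEL_REFCOUNT 1     /* the hash table's own ref */

/* refcnt: value observed after the reaper took its ref; in_avl: still hashed */
static bool reclaimable_after_reaper_ref(int32_t refcnt, bool in_avl)
{
        return in_avl && refcnt == (SENTINEL_REFCOUNT + 1);
}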


Daniel



Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-04 Thread Frank Filz
> I'm hitting a case where mdcache keeps growing well beyond the high water
> mark. Here is a snapshot of the lru_state:
> 
> 1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
10,
> chunks_used = 16462,
> 
> It has grown to 2.3 million entries and each entry is ~1.6K.
> 
> I looked at the first entry in lane 0, L1 queue:
> 
> (gdb) p LRU[0].L1
> $9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
> LRU_ENTRY_L1, size = 254628}
> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
> $10 = (mdcache_entry_t *) 0x7fad64256b00
> (gdb) p $10->lru
> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
> (gdb) p $10->fh_hk.inavl
> $13 = true

The refcount 2 prevents reaping.

There could be a refcount leak.

Hmm, though, I thought the entries_hwmark was a hard limit, guess not...
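
A rough sketch of why the high-water mark behaves as a soft target
(hypothetical helper names, not the Ganesha code): if reaping every lane
comes up empty, for instance because a leaked ref pins the LRU heads, a
new entry is allocated anyway, so entries_used keeps climbing past
entries_hiwat.

struct entry;
struct entry *try_reap_any_lane(void);  /* hypothetical: NULL if nothing reclaimable */
struct entry *alloc_new_entry(void);    /* hypothetical: unconditional allocation */

static struct entry *get_entry(void)
{
        struct entry *e = try_reap_any_lane();

        if (e == NULL)
                e = alloc_new_entry(); /* the high-water mark is not enforced here */
        return e;
}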

Frank

> Lane 1:
> (gdb) p LRU[1].L1
> $18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
> LRU_ENTRY_L1, size = 253006}
> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
> $21 = (mdcache_entry_t *) 0x7fad625bff00
> (gdb) p $21->lru
> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
> 
> (gdb) p $21->fh_hk.inavl
> $24 = true
> 
> As per LRU_ENTRY_RECLAIMABLE(), these entries should be reclaimable. Not
> sure why it is not able to reclaim them. Any ideas?
> 
> Thanks,
> Pradeep
> 


[Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-04 Thread Pradeep
Hello,

I'm hitting a case where mdcache keeps growing well beyond the high
water mark. Here is a snapshot of the lru_state:

1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
10, chunks_used = 16462,

It has grown to 2.3 million entries and each entry is ~1.6K.
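
(Back-of-the-envelope, using the ~1.6K figure: 2,306,063 entries x ~1.6 KiB
is roughly 3.5 GiB of mdcache metadata.)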

I looked at the first entry in lane 0, L1 queue:

(gdb) p LRU[0].L1
$9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
LRU_ENTRY_L1, size = 254628}
(gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
$10 = (mdcache_entry_t *) 0x7fad64256b00
(gdb) p $10->lru
$11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
(gdb) p $10->fh_hk.inavl
$13 = true
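
(The "- 1024" above just recovers the containing entry from the embedded
LRU pointer; it is container_of() done by hand, assuming q is the first
field of the embedded lru and that offsetof(mdcache_entry_t, lru) is 1024
in this particular build.)

#include <stddef.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

/* entry = container_of(lru_queue_ptr, mdcache_entry_t, lru); */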

Lane 1:
(gdb) p LRU[1].L1
$18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
LRU_ENTRY_L1, size = 253006}
(gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
$21 = (mdcache_entry_t *) 0x7fad625bff00
(gdb) p $21->lru
$22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}

(gdb) p $21->fh_hk.inavl
$24 = true

As per LRU_ENTRY_RECLAIMABLE(), these entries should be reclaimable. Not
sure why it is not able to reclaim them. Any ideas?

Thanks,
Pradeep



Re: [Nfs-ganesha-devel] deadlock in lru_reap_impl()

2017-08-04 Thread Daniel Gryniewicz
I think we need to ensure that the partition lock is taken before the 
qlane lock.  I have a patch for this, but it introduced a refcount 
issue, so I'm debugging.
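
A sketch of that ordering rule (hypothetical helper names, not the actual
patch): both the reaper path and the lookup path would take the hash
partition lock first and the queue-lane lock second, so the two locks can
no longer be acquired in opposite orders.

struct lane;
struct partition;
void partition_wrlock(struct partition *);
void partition_unlock(struct partition *);
void qlane_lock(struct lane *);
void qlane_unlock(struct lane *);

static void reap_one(struct partition *part, struct lane *qlane)
{
        partition_wrlock(part);         /* cih/partition lock first ... */
        qlane_lock(qlane);              /* ... then the queue-lane lock */

        /* unhash and free the victim entry here */

        qlane_unlock(qlane);
        partition_unlock(part);
}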


Daniel

On 08/03/2017 08:52 PM, Pradeep wrote:
Thanks, Frank. I merged your patch and am now hitting another deadlock.
Here are the two threads:


This thread below holds the partition lock in 'read' mode and is trying to
acquire the queue lock:


Thread 143 (Thread 0x7faf82f72700 (LWP 143573)):
#0  0x7fafd1c371bd in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x7fafd1c32d02 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x7fafd1c32c08 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x005221fd in _mdcache_lru_ref (entry=0x7fae78d19000, 
flags=2, func=0x58ec80 <__func__.23467> "mdcache_find_keyed", line=881) 
at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1813
#4  0x00532686 in mdcache_find_keyed (key=0x7faf82f70760, 
entry=0x7faf82f707e8) at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:881


 874                 *entry = cih_get_by_key_latch(key, &latch,
 875                                               CIH_GET_RLOCK | CIH_GET_UNLOCK_ON_MISS,
 876                                               __func__, __LINE__);
 877                 if (likely(*entry)) {
 878                         fsal_status_t status;
 879
 880                         /* Initial Ref on entry */
 881                         status = mdcache_lru_ref(*entry, LRU_REQ_INITIAL);


This thread is already holding the queue lock and is trying to acquire the
partition lock in write mode:


Thread 188 (Thread 0x7faf9979f700 (LWP 143528)):
#0  0x7fafd1c3403e in pthread_rwlock_wrlock () from 
/lib64/libpthread.so.0
#1  0x0052fc61 in cih_remove_checked (entry=0x7fad62914e00) at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_hash.h:394
#2  0x00530b3e in mdc_clean_entry (entry=0x7fad62914e00) at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:272
#3  0x0051df7e in mdcache_lru_clean (entry=0x7fad62914e00) at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:590
#4  0x00522cca in _mdcache_lru_unref (entry=0x7fad62914e00, 
flags=8, func=0x58b700 <__func__.23710> "lru_reap_impl", line=690) at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1922
#5  0x0051ea38 in lru_reap_impl (qid=LRU_ENTRY_L1) at 
/usr/src/debug/nfs-ganesha-2.5.1-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:690





On Fri, Jul 28, 2017 at 1:34 PM, Frank Filz wrote:

Hmm, well, that’s easy to fix…

Instead of:

mdcache_lru_unref(entry, LRU_UNREF_QLOCKED);
goto next_lane;

It could:

QUNLOCK(qlane);
mdcache_put(entry);
continue;

Fix posted here:

https://review.gerrithub.io/371764

Frank

*From:* Pradeep [mailto:pradeep.tho...@gmail.com]
*Sent:* Friday, July 28, 2017 12:44 PM
*To:* nfs-ganesha-devel@lists.sourceforge.net
*Subject:* [Nfs-ganesha-devel] deadlock in lru_reap_impl()

I'm hitting another deadlock in mdcache with 2.5.1 base.  In this
case two threads are in different places in lru_reap_impl()

Thread 1:

 636                 QLOCK(qlane);
 637                 lru = glist_first_entry(&lq->q, mdcache_lru_t, q);
 638                 if (!lru)
 639                         goto next_lane;
 640                 refcnt = atomic_inc_int32_t(&lru->refcnt);
 641                 entry = container_of(lru, mdcache_entry_t, lru);
 642                 if (unlikely(refcnt != (LRU_SENTINEL_REFCOUNT + 1))) {
 643                         /* cant use it. */
 644                         mdcache_lru_unref(entry, LRU_UNREF_QLOCKED);

mdcache_lru_unref() could lead to the set of calls below:

mdcache_lru_unref() -> mdcache_lru_clean() -> mdc_clean_entry()
-> cih_remove_checked()

This tries to get partition lock which is held by 'Thread 2' which
is trying to acquire queue lane lock.

Thread 2:

 650                 if (cih_latch_entry(&entry->fh_hk.key, &latch, CIH_GET_WLOCK,
 651                                     __func__, __LINE__)) {
 652                         QLOCK(qlane);

St

Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Frank Filz
> Yeah, probably.
> 
> Daniel
> 
> On 08/04/2017 10:55 AM, Pradeep wrote:
> > For the first export, saved_ctx will be NULL. So the assignment at
> > line 1161 makes op_ctx also NULL. So when mdcache_lru_unref() is
> > called, op_ctx will be NULL. This will cause the assert in
> > mdcache_lru_clean() to abort.
> >
> > Perhaps we can move the assignment at line 1161 to after the
> > mdcache_lru_unref() call?

That sounds like a good fix. The put_gsh_export(export); also needs to be
moved so there is a valid export reference for the mdcache_lru_unref() call.
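
Something like this ordering, then (a sketch of the proposal only, with the
rest of the loop body elided; not the committed fix):

        /* per-entry work in lru_run_lane(), proposed order */
        ...                             /* take ref, set op_ctx for the entry's export */
        mdcache_lru_unref(...);         /* may invoke mdcache_lru_clean(), which needs op_ctx */
        put_gsh_export(export);         /* drop the export ref only after the unref */
        op_ctx = saved_ctx;             /* restore the saved context last */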

Frank

> > On 8/4/17, Daniel Gryniewicz  wrote:
> >> Yep.  We save the old context, run our loop, and then restore the old
> >> context.
> >>
> >> On 08/04/2017 10:45 AM, Pradeep wrote:
> >>> Thanks Daniel. I see it being initialized. But then it is
> >>> overwritten from saved_ctx, right?
> >>>
> >>> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1161
> >>>
> >>> On 8/4/17, Daniel Gryniewicz  wrote:
>  Here:
> 
>  https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127
> 
>  On 08/04/2017 10:36 AM, Pradeep wrote:
> > Hi Daniel,
> >
> > I could not find where op_ctx gets populated in lru_run_lane().
> > I'm using 2.5.1.
> >
> > https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019
> >
> > On 8/4/17, Daniel Gryniewicz  wrote:
> >> It should be valid.  lru_run_lane() sets up op_ctx, so it should
> >> be set correctly even in the LRU thread case.
> >>
> >> Daniel
> >>
> >> On 08/04/2017 09:54 AM, Pradeep wrote:
> >>> It looks like the assert() below and the comment in
> >>> mdcache_lru_clean() may not be valid in all cases. For example,
> >>> if cache is getting cleaned in the context of the LRU background
> >>> thread, the op_ctx will be NULL and the code may get into the
> >>> 'else' part
> >>> (lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
> >>> mdcache_lru_clean()):
> >>>
> >>> Do any of the calls after the 'if-else' block use 'op_ctx'? If
> >>> those don't use 'op_ctx', the 'else' part can be safely removed,
right?
> >>>
> >>> if (export_id >= 0 && op_ctx != NULL &&
> >>>     op_ctx->ctx_export != NULL &&
> >>>     op_ctx->ctx_export->export_id != export_id) {
> >>>
> >>> } else {
> >>>         /* We MUST have a valid op_ctx based on the conditions
> >>>          * we could get here. first_export_id coild be -1 or it
> >>>          * could match the current op_ctx export. In either case
> >>>          * we will trust the current op_ctx.
> >>>          */
> >>>         assert(op_ctx);
> >>>         assert(op_ctx->ctx_export);
> >>>         LogFullDebug(COMPONENT_CACHE_INODE,
> >>>                      "Trusting op_ctx export id %"PRIu16,
> >>>                      op_ctx->ctx_export->export_id);
> >>> 
> >>>
> >>> 

Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Daniel Gryniewicz

Yeah, probably.

Daniel

On 08/04/2017 10:55 AM, Pradeep wrote:

For the first export, saved_ctx will be NULL. So the assignment at
line 1161 makes op_ctx also NULL. So when mdcache_lru_unref() is
called, op_ctx will be NULL. This will cause the assert in
mdcache_lru_clean() to abort.

Perhaps we can move the assignment at line 1161 to after the mdcache_lru_unref() call?


On 8/4/17, Daniel Gryniewicz  wrote:

Yep.  We save the old context, run our loop, and then restore the old
context.

On 08/04/2017 10:45 AM, Pradeep wrote:

Thanks Daniel. I see it being initialized. But then it is overwritten
from saved_ctx, right?

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1161

On 8/4/17, Daniel Gryniewicz  wrote:

Here:

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127

On 08/04/2017 10:36 AM, Pradeep wrote:

Hi Daniel,

I could not find where op_ctx gets populated in lru_run_lane(). I'm
using
2.5.1.

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019

On 8/4/17, Daniel Gryniewicz  wrote:

It should be valid.  lru_run_lane() sets up op_ctx, so it should be
set
correctly even in the LRU thread case.

Daniel

On 08/04/2017 09:54 AM, Pradeep wrote:

It looks like the assert() below and the comment in
mdcache_lru_clean() may not be valid in all cases. For example, if
cache is getting cleaned in the context of the LRU background thread,
the op_ctx will be NULL and the code may get into the 'else' part
(lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
mdcache_lru_clean()):

Do any of the calls after the 'if-else' block use 'op_ctx'? If those
don't use 'op_ctx', the 'else' part can be safely removed, right?

if (export_id >= 0 && op_ctx != NULL &&
    op_ctx->ctx_export != NULL &&
    op_ctx->ctx_export->export_id != export_id) {

} else {
        /* We MUST have a valid op_ctx based on the conditions
         * we could get here. first_export_id coild be -1 or it
         * could match the current op_ctx export. In either case
         * we will trust the current op_ctx.
         */
        assert(op_ctx);
        assert(op_ctx->ctx_export);
        LogFullDebug(COMPONENT_CACHE_INODE,
                     "Trusting op_ctx export id %"PRIu16,
                     op_ctx->ctx_export->export_id);




Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Pradeep
For the first export, saved_ctx will be NULL. So the assignment at
line 1161 makes op_ctx also NULL. So when mdcache_lru_unref() is
called, op_ctx will be NULL. This will cause the assert in
mdcache_lru_clean() to abort.

Perhaps we can move the assignment at line 1161 to after the mdcache_lru_unref() call?


On 8/4/17, Daniel Gryniewicz  wrote:
> Yep.  We save the old context, run our loop, and then restore the old
> context.
>
> On 08/04/2017 10:45 AM, Pradeep wrote:
>> Thanks Daniel. I see it being initialized. But then it is overwritten
>> from saved_ctx, right?
>>
>> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1161
>>
>> On 8/4/17, Daniel Gryniewicz  wrote:
>>> Here:
>>>
>>> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127
>>>
>>> On 08/04/2017 10:36 AM, Pradeep wrote:
 Hi Daniel,

 I could not find where op_ctx gets populated in lru_run_lane(). I'm
 using
 2.5.1.

 https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019

 On 8/4/17, Daniel Gryniewicz  wrote:
> It should be valid.  lru_run_lane() sets up op_ctx, so it should be
> set
> correctly even in the LRU thread case.
>
> Daniel
>
> On 08/04/2017 09:54 AM, Pradeep wrote:
>> It looks like the assert() below and the comment in
>> mdcache_lru_clean() may not be valid in all cases. For example, if
>> cache is getting cleaned in the context of the LRU background thread,
>> the op_ctx will be NULL and the code may get into the 'else' part
>> (lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
>> mdcache_lru_clean()):
>>
>> Do any of the calls after the 'if-else' block use 'op_ctx'? If those
>> don't use 'op_ctx', the 'else' part can be safely removed, right?
>>
>> if (export_id >= 0 && op_ctx != NULL &&
>>     op_ctx->ctx_export != NULL &&
>>     op_ctx->ctx_export->export_id != export_id) {
>>
>> } else {
>>         /* We MUST have a valid op_ctx based on the conditions
>>          * we could get here. first_export_id coild be -1 or it
>>          * could match the current op_ctx export. In either case
>>          * we will trust the current op_ctx.
>>          */
>>         assert(op_ctx);
>>         assert(op_ctx->ctx_export);
>>         LogFullDebug(COMPONENT_CACHE_INODE,
>>                      "Trusting op_ctx export id %"PRIu16,
>>                      op_ctx->ctx_export->export_id);
>> 
>>


Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Daniel Gryniewicz
Yep.  We save the old context, run our loop, and then restore the old 
context.
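
In generic terms, the pattern is (a sketch, not the lru_run_lane() code
itself):

struct req_op_context;
extern __thread struct req_op_context *op_ctx;  /* thread-local request context */

static void run_under_temp_ctx(struct req_op_context *temp, void (*body)(void))
{
        struct req_op_context *saved_ctx = op_ctx; /* save the caller's context (may be NULL) */

        op_ctx = temp;          /* the loop body runs under the temporary context */
        body();
        op_ctx = saved_ctx;     /* restore afterwards; NULL again if there was none */
}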


On 08/04/2017 10:45 AM, Pradeep wrote:

Thanks Daniel. I see it being initialized. But then it is overwritten
from saved_ctx, right?

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1161

On 8/4/17, Daniel Gryniewicz  wrote:

Here:

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127

On 08/04/2017 10:36 AM, Pradeep wrote:

Hi Daniel,

I could not find where op_ctx gets populated in lru_run_lane(). I'm using
2.5.1.

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019

On 8/4/17, Daniel Gryniewicz  wrote:

It should be valid.  lru_run_lane() sets up op_ctx, so it should be set
correctly even in the LRU thread case.

Daniel

On 08/04/2017 09:54 AM, Pradeep wrote:

It looks like the assert() below and the comment in
mdcache_lru_clean() may not be valid in all cases. For example, if
cache is getting cleaned in the context of the LRU background thread,
the op_ctx will be NULL and the code may get into the 'else' part
(lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
mdcache_lru_clean()):

Do any of the calls after the 'if-else' block use 'op_ctx'? If those
don't use 'op_ctx', the 'else' part can be safely removed, right?

if (export_id >= 0 && op_ctx != NULL &&
    op_ctx->ctx_export != NULL &&
    op_ctx->ctx_export->export_id != export_id) {

} else {
        /* We MUST have a valid op_ctx based on the conditions
         * we could get here. first_export_id coild be -1 or it
         * could match the current op_ctx export. In either case
         * we will trust the current op_ctx.
         */
        assert(op_ctx);
        assert(op_ctx->ctx_export);
        LogFullDebug(COMPONENT_CACHE_INODE,
                     "Trusting op_ctx export id %"PRIu16,
                     op_ctx->ctx_export->export_id);




Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Pradeep
Thanks Daniel. I see it being initialized. But then it is overwritten
from saved_ctx, right?

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1161

On 8/4/17, Daniel Gryniewicz  wrote:
> Here:
>
> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127
>
> On 08/04/2017 10:36 AM, Pradeep wrote:
>> Hi Daniel,
>>
>> I could not find where op_ctx gets populated in lru_run_lane(). I'm using
>> 2.5.1.
>>
>> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019
>>
>> On 8/4/17, Daniel Gryniewicz  wrote:
>>> It should be valid.  lru_run_lane() sets up op_ctx, so it should be set
>>> correctly even in the LRU thread case.
>>>
>>> Daniel
>>>
>>> On 08/04/2017 09:54 AM, Pradeep wrote:
 It looks like the assert() below and the comment in
 mdcache_lru_clean() may not be valid in all cases. For example, if
 cache is getting cleaned in the context of the LRU background thread,
 the op_ctx will be NULL and the code may get into the 'else' part
 (lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
 mdcache_lru_clean()):

 Do any of the calls after the 'if-else' block use 'op_ctx'? If those
don't use 'op_ctx', the 'else' part can be safely removed, right?

if (export_id >= 0 && op_ctx != NULL &&
    op_ctx->ctx_export != NULL &&
    op_ctx->ctx_export->export_id != export_id) {

} else {
        /* We MUST have a valid op_ctx based on the conditions
         * we could get here. first_export_id coild be -1 or it
         * could match the current op_ctx export. In either case
         * we will trust the current op_ctx.
         */
        assert(op_ctx);
        assert(op_ctx->ctx_export);
        LogFullDebug(COMPONENT_CACHE_INODE,
                     "Trusting op_ctx export id %"PRIu16,
                     op_ctx->ctx_export->export_id);
 



Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Daniel Gryniewicz

Here:

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1127

On 08/04/2017 10:36 AM, Pradeep wrote:

Hi Daniel,

I could not find where op_ctx gets populated in lru_run_lane(). I'm using 2.5.1.

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019

On 8/4/17, Daniel Gryniewicz  wrote:

It should be valid.  lru_run_lane() sets up op_ctx, so it should be set
correctly even in the LRU thread case.

Daniel

On 08/04/2017 09:54 AM, Pradeep wrote:

It looks like the assert() below and the comment in
mdcache_lru_clean() may not be valid in all cases. For example, if
cache is getting cleaned in the context of the LRU background thread,
the op_ctx will be NULL and the code may get into the 'else' part
(lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
mdcache_lru_clean()):

Do any of the calls after the 'if-else' block use 'op_ctx'? If those
don't use 'op_ctx', the 'else' part can be safely removed, right?

if (export_id >= 0 && op_ctx != NULL &&
    op_ctx->ctx_export != NULL &&
    op_ctx->ctx_export->export_id != export_id) {

} else {
        /* We MUST have a valid op_ctx based on the conditions
         * we could get here. first_export_id coild be -1 or it
         * could match the current op_ctx export. In either case
         * we will trust the current op_ctx.
         */
        assert(op_ctx);
        assert(op_ctx->ctx_export);
        LogFullDebug(COMPONENT_CACHE_INODE,
                     "Trusting op_ctx export id %"PRIu16,
                     op_ctx->ctx_export->export_id);




Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Pradeep
Hi Daniel,

I could not find where op_ctx gets populated in lru_run_lane(). I'm using 2.5.1.

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c#L1019

On 8/4/17, Daniel Gryniewicz  wrote:
> It should be valid.  lru_run_lane() sets up op_ctx, so it should be set
> correctly even in the LRU thread case.
>
> Daniel
>
> On 08/04/2017 09:54 AM, Pradeep wrote:
>> It looks like the assert() below and the comment in
>> mdcache_lru_clean() may not be valid in all cases. For example, if
>> cache is getting cleaned in the context of the LRU background thread,
>> the op_ctx will be NULL and the code may get into the 'else' part
>> (lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
>> mdcache_lru_clean()):
>>
>> Do any of the calls after the 'if-else' block use 'op_ctx'? If those
>> don't use 'op_ctx', the 'else' part can be safely removed, right?
>>
>> if (export_id >= 0 && op_ctx != NULL &&
>>     op_ctx->ctx_export != NULL &&
>>     op_ctx->ctx_export->export_id != export_id) {
>>
>> } else {
>>         /* We MUST have a valid op_ctx based on the conditions
>>          * we could get here. first_export_id coild be -1 or it
>>          * could match the current op_ctx export. In either case
>>          * we will trust the current op_ctx.
>>          */
>>         assert(op_ctx);
>>         assert(op_ctx->ctx_export);
>>         LogFullDebug(COMPONENT_CACHE_INODE,
>>                      "Trusting op_ctx export id %"PRIu16,
>>                      op_ctx->ctx_export->export_id);
>> 
>>


Re: [Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Daniel Gryniewicz
It should be valid.  lru_run_lane() sets up op_ctx, so it should be set 
correctly even in the LRU thread case.


Daniel

On 08/04/2017 09:54 AM, Pradeep wrote:

It looks like the assert() below and the comment in
mdcache_lru_clean() may not be valid in all cases. For example, if
cache is getting cleaned in the context of the LRU background thread,
the op_ctx will be NULL and the code may get into the 'else' part
(lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
mdcache_lru_clean()):

Do any of the calls after the 'if-else' block use 'op_ctx'? If those
don't use 'op_ctx', the 'else' part can be safely removed, right?

if (export_id >= 0 && op_ctx != NULL &&
    op_ctx->ctx_export != NULL &&
    op_ctx->ctx_export->export_id != export_id) {

} else {
        /* We MUST have a valid op_ctx based on the conditions
         * we could get here. first_export_id coild be -1 or it
         * could match the current op_ctx export. In either case
         * we will trust the current op_ctx.
         */
        assert(op_ctx);
        assert(op_ctx->ctx_export);
        LogFullDebug(COMPONENT_CACHE_INODE,
                     "Trusting op_ctx export id %"PRIu16,
                     op_ctx->ctx_export->export_id);




[Nfs-ganesha-devel] assert() in mdcache_lru_clean()

2017-08-04 Thread Pradeep
It looks like the assert() below and the comment in
mdcache_lru_clean() may not be valid in all cases. For example, if
cache is getting cleaned in the context of the LRU background thread,
the op_ctx will be NULL and the code may get into the 'else' part
(lru_run() -> lru_run_lane() -> _mdcache_lru_unref() ->
mdcache_lru_clean()):

Do any of the calls after the 'if-else' block use 'op_ctx'? If those
don't use 'op_ctx', the 'else' part can be safely removed, right?

if (export_id >= 0 && op_ctx != NULL &&
    op_ctx->ctx_export != NULL &&
    op_ctx->ctx_export->export_id != export_id) {

} else {
        /* We MUST have a valid op_ctx based on the conditions
         * we could get here. first_export_id coild be -1 or it
         * could match the current op_ctx export. In either case
         * we will trust the current op_ctx.
         */
        assert(op_ctx);
        assert(op_ctx->ctx_export);
        LogFullDebug(COMPONENT_CACHE_INODE,
                     "Trusting op_ctx export id %"PRIu16,
                     op_ctx->ctx_export->export_id);




[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Allow '.' and '..' as target path in symlink creation for NFSv4

2017-08-04 Thread GerritHub
From Sachin Punadikar:

Sachin Punadikar has uploaded this change for review. (
https://review.gerrithub.io/372645 )


Change subject: Allow '.' and '..' as target path in symlink creation for NFSv4
..

Allow '.' and '..' as target path in symlink creation for NFSv4

Symlink creation with target path as '.' and '..' fails with error.
$ ln -s . sym
ln: creating symbolic link `sym': Invalid argument
For symbolic link creation any arbitrary link should be accepted.
Modified the code to accept the target path as valid UTF8 character string.

Change-Id: I4e3e549860fa59c28397e74da5778d8228c17a4e
Signed-off-by: Sachin Punadikar 
---
M src/Protocols/NFS/nfs4_op_create.c
1 file changed, 2 insertions(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/45/372645/1
-- 
To view, visit https://review.gerrithub.io/372645
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4e3e549860fa59c28397e74da5778d8228c17a4e
Gerrit-Change-Number: 372645
Gerrit-PatchSet: 1
Gerrit-Owner: Sachin Punadikar 