I confirmed that the initial mdcache_new_entry() call saves attributes 
adjusted by my NULL derived layer. I also confirmed that the sub_handle 
provided to mdcache_new_entry() comes from the PROXY FSAL rather than 
from the NULL derived FSAL. The PROXY FSAL sub_handle was created by 
pxy_do_readdir()'s call to pxy_lookup_impl(), and the mdcache_new_entry() 
call to mdcache_alloc_handle() saves that PROXY FSAL sub_handle.

I believe the processing above explains why items 1 and 3 (from the 
12/22/2016 10:07 AM message below) skip the stackable layer between 
MDCACHE and PROXY.

Even with an explanation for why this processing order causes later 
operations to skip the stackable layer between MDCACHE and PROXY, I 
don't yet understand how to fix it in the stackable layer's readdir 
callback.
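
My best guess so far is something along these lines: have the readdir 
callback wrap the sub-FSAL's handle before handing it up, so MDCACHE 
caches a NULL-layer handle instead of PROXY's. This is only a sketch 
(assuming the 2.5-dev readdir callback signature; the state struct and 
the nullfs_alloc_handle() helper are names I made up, not existing code):

  struct nullfs_readdir_state {
          fsal_readdir_cb cb;             /* MDCACHE's original callback */
          struct nullfs_fsal_export *exp; /* this layer's export */
          void *dir_state;                /* MDCACHE's original dir_state */
  };

  static bool nullfs_readdir_cb(const char *name,
                                struct fsal_obj_handle *sub_obj,
                                struct attrlist *attrs,
                                void *dir_state, fsal_cookie_t cookie)
  {
          struct nullfs_readdir_state *state = dir_state;
          struct fsal_obj_handle *wrapped;

          /* Allocate a NULL-layer handle around sub_obj so that later
           * obj_ops calls (getattrs, test_access, read) dispatch through
           * this layer first.  nullfs_alloc_handle() is hypothetical. */
          if (FSAL_IS_ERROR(nullfs_alloc_handle(state->exp, sub_obj,
                                                &wrapped)))
                  return false;

          /* Hand the wrapped handle (not sub_obj) up to MDCACHE. */
          return state->cb(name, wrapped, attrs, state->dir_state, cookie);
  }

The layer's readdir() op would then pass nullfs_readdir_cb and a 
nullfs_readdir_state as the callback and dir_state to the sub-FSAL's 
readdir(). Does that sound like the right direction?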


On 12/22/2016 10:19 AM, Daniel Gryniewicz wrote:
> I'll take a look when I get a chance, but it probably won't be until
> Jan, since I'm off starting tomorrow until the new year.
>
> For the record, the simplest way to get your code to us is to fork
> Ganesha on GitHub, commit your code to the fork, and then send us the
> URL for your fork.
>
> Daniel
>
> On 12/22/2016 10:07 AM, Kevin C. wrote:
>> To reduce the size of this email, the attached tgz file contains a patch
>> against the existing FSAL NULL rather than a separate FSAL NULL derived
>> layer.
>>
>> Here is some additional information I believe to be relevant.
>>
>>   1. I believe the adjusted directory entries are correctly cached before
>>      nfs4_readdir_callback() reaches "fsal_status =
>>      obj->obj_ops.test_access()".
>>   2. mdcache_is_attrs_valid() returns false because "flags |=
>>      MDCACHE_TRUST_ATTRS;" is executed and "((entry->mde_flags & flags)
>>      != flags)" then evaluates to true (the trust flag is not set in
>>      mde_flags).
>>   3. mdcache_test_access() calls fsal_test_access(), and the
>>      fsal_test_access() obj_hdl->obj_ops.getattrs() call skips over the
>>      NULL derived layer and goes to the PROXY getattrs() (see the sketch
>>      after this list).
>>   4. I've also attached a gdb debug log that might provide additional
>>      useful info:
>>       1. Near the top of the log, it shows that pxy_readdir() was called
>>          by my NULL derived layer.
>>       2. Below mdcache_test_access() is where the NULL derived layer is
>>          being ignored and unadjusted attributes get saved in the cache.
>>       3. I haven't yet traced why the NULL derived layer is ignored when
>>          reading (if the directory is listed/viewed before the read).
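>>
>> To illustrate item 3, the dispatch as I read it goes roughly like this
>> (paraphrased, not the exact source):
>>
>>     /* obj_ops is a per-handle vtable, so the call resolves to whichever
>>      * FSAL created the handle.  Because MDCACHE stored PROXY's
>>      * sub_handle, this ends up in pxy_getattrs() and the NULL derived
>>      * layer never sees the call. */
>>     fsal_status = obj_hdl->obj_ops.getattrs(obj_hdl);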
>>
>> I believe it would be easy to stack over VFS, but I'm trying to keep my
>> testing as realistic as possible, so I'm stacking my NULL derived layer
>> over PROXY.
>>
>> I suggest you populate your bottom (e.g. PROXY or VFS) file system with
>> the attached 1.txt file so that you don't need to do any writes (and then
>> flush caches or restart). Through the NULL derived layer, the file size
>> is shown as 19 bytes; the NULL derived test layer inserts 32 bytes every
>> 256 bytes. If you "cat" 1.txt before listing/viewing the directory, the
>> NULL derived layer is not bypassed and the read data is as expected. If
>> you "ls" 1.txt after reading the file but before listing the directory or
>> running "cat *", the adjusted size is listed.
>>
>> This is prototype (proof of concept) code, so the processing isn't yet
>> optimized or realistic.
>>
>> If I truly understood the layering, I might already see how to keep the
>> NULL derived layer from being bypassed below the mdcache_test_access()
>> calls. I'll continue to learn as I get more time to work with NFS-Ganesha.
>>
>> Thanks for taking a look.
>>
>> Kevin
>>
>>
>> On 12/21/2016 10:30 AM, Daniel Gryniewicz wrote:
>>> Okay, that call path goes through getattrs(), so it should get back
>>> whatever the modified getattrs() call returns.
>>>
>>> If you'd like to post code, I'll take a look when I can.  I'm off over
>>> the holidays, so there's no hurry from my end.
>>>
>>> Daniel
>>>
>>> On 12/21/2016 09:58 AM, Kevin C. wrote:
>>>> I have meetings starting soon and lasting for several hours so I cannot
>>>> be as detailed as I'd like in this reply.
>>>>
>>>> I believe I see the processing that you describe.
>>>>
>>>> I had modified the NULL derived layer's *_readdir_cb() function.
>>>>
>>>> I enabled full debug logging for all components and started reviewing
>>>> the output. I'm not done reviewing yet, but it looks like
>>>> mdcache_refresh_attributes() is setting the file size back to the
>>>> unadjusted size.
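>>>>
>>>> If I'm reading the code right, the refresh path asks the *stored*
>>>> sub-handle for attributes directly, something like this (paraphrased
>>>> from memory, not the exact MDCACHE source):
>>>>
>>>>     /* MDCACHE refresh, roughly: go straight to the cached sub_handle.
>>>>      * If that sub_handle is PROXY's (because the readdir callback
>>>>      * never wrapped it), the NULL derived layer is bypassed and the
>>>>      * unadjusted size gets cached. */
>>>>     status = entry->sub_handle->obj_ops.getattrs(entry->sub_handle);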
>>>>
>>>> If you'd like, I can provide patches so you can see the NULL derived
>>>> layer. If you have access to a location for very large transfers, I
>>>> could even provide VirtualBox virtual PCs with my test configuration.
>>>>
>>>>
>>>> On 12/21/2016 08:13 AM, Daniel Gryniewicz wrote:
>>>>> Hi, Kevin.
>>>>>
>>>>> Welcome!  Glad someone's getting use out of stacking.
>>>>>
>>>>> First, your stack.  It sounds like you're stacking like this:
>>>>>
>>>>> MDCACHE
>>>>>       |
>>>>>      NULL (modified)
>>>>>       |
>>>>>     PROXY
>>>>>
>>>>> Correct?
>>>>>
>>>>> The readdir() path is somewhat different from the lookup/open path.  In
>>>>> a normal lookup (say), the call comes into the protocol layer and is
>>>>> passed to MDCACHE, and so on down the stack to the bottom FSAL, which
>>>>> creates a handle.  The handle is then passed back up the stack, each
>>>>> layer wrapping it as it goes, until MDCACHE creates its handle and
>>>>> caches it.  Future attempts to get the metadata (getattr() calls) are
>>>>> handled directly by MDCACHE until the cache entry times out, or until an
>>>>> event causes cache invalidation on the entry.
>>>>>
>>>>> For readdir(), however, no lookup is ever made.  The protocol layer
>>>>> calls readdir(), which calls into MDCACHE.  MDCACHE then caches the
>>>>> *entire* directory's worth of dirents (except for large directories,
>>>>> but we'll skip that for now).  This involves calling readdir() down the
>>>>> stack, starting at 0 and going until there are no more entries.
>>>>>
>>>>> However, a readdir() call doesn't just create directory entries; it also
>>>>> materializes handles for each object in the directory and passes them
>>>>> back up.  This includes attributes, which are cached at the MDCACHE
>>>>> layer as stated before.
>>>>>
>>>>> This means that any modification of size or other attributes that you do
>>>>> on create(), lookup(), or getattr() must also be done during readdir(),
>>>>> or the full original attributes will be cached, causing the incorrect
>>>>> values to be returned until the entry times out.
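>>>>>
>>>>> Concretely, the readdir callback wrapper in your stacked layer is the
>>>>> place to apply the same adjustment (a sketch, assuming the 2.5-dev
>>>>> callback receives a struct attrlist *attrs; adjusted_size() stands in
>>>>> for your own size logic):
>>>>>
>>>>>     /* Before passing the entry up to MDCACHE, rewrite the size so the
>>>>>      * cached attributes already reflect the stacked layer's view. */
>>>>>     if (attrs != NULL && FSAL_TEST_MASK(attrs->mask, ATTR_SIZE))
>>>>>             attrs->filesize = adjusted_size(attrs->filesize);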
>>>>>
>>>>> Does this help?
>>>>>
>>>>> Daniel
>>>>>
>>>>> On 12/20/2016 02:49 PM, Kevin C. wrote:
>>>>>> I'm trying to simplify migration away from a legacy non-standard NFS
>>>>>> server by creating a stackable FSAL. For my tests, I’m using a slightly
>>>>>> modified copy of FSAL NULL above FSAL PROXY.
>>>>>>
>>>>>> Reads from the legacy non-standard NFS server include data not
>>>>>> originally provided by user application writes. Writes to the legacy
>>>>>> non-standard NFS server also require insertion of data not provided by
>>>>>> the user application. I am unable to change the legacy system (anytime
>>>>>> soon).
>>>>>>
>>>>>> For all tests I’ve run, my FSAL NULL derived layer is successfully
>>>>>> inserting test data.
>>>>>>
>>>>>> When file system operations are done in a certain order (e.g. reading
>>>>>> files before listing directory contents), my FSAL NULL derived layer
>>>>>> successfully removes the inserted test data from the read data stream
>>>>>> presented to user applications, and the FSAL-adjusted file sizes are
>>>>>> shown by a file manager GUI or "ls".
>>>>>>
>>>>>> If the PROXY NFS server directory contents are listed first (via "ls" or
>>>>>> a file manager GUI) or output via "cat /mnt/nfstst/*", it seems like the
>>>>>> FSAL NULL derived layer is bypassed. The user applications receive the
>>>>>> extra/unwanted data along with the expected user application data, and
>>>>>> directory listings show file sizes that are NOT adjusted to subtract the
>>>>>> extra/unwanted data portions.
>>>>>>
>>>>>> I hope that someone more familiar with NFS-Ganesha's architecture can
>>>>>> help identify the processing path(s) that I have failed to identify
>>>>>> (and thus to adjust via the stackable FSAL) so that I can eliminate
>>>>>> this operation-order dependency.
>>>>>>
>>>>>> My original work was done on the 2.4 stable branch. The development
>>>>>> process indicates all patches should be made to the current development
>>>>>> branch. I've recreated these results with the 2.5 development branch. If
>>>>>> your input helps me find a bug, I'll submit a fix.
>>>>>>
>>>>>> I’m requesting input from more experienced NFS-Ganesha developers.
>>>>>> Please provide any input that helps me eliminate this processing order
>>>>>> dependency.
>>>>>>
>>>>>> Thanks in advance,
>>>>>> Kevin