Okay, that call path goes through getattrs(), so it should get back 
whatever the modified getattrs() returns.
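
For reference, the shape I'd expect on every attribute-reporting path is
"call the lower layer, then adjust what it handed back."  Below is only a
toy model in plain C; the struct, function, and LEGACY_TRAILER_BYTES names
are made-up stand-ins, not the real fsal_obj_handle / attrlist API, but it
shows that pattern:

    /* Toy model of a stacked getattrs(): call the lower layer, then
     * subtract the legacy-inserted data from the reported size.  All
     * names here are illustrative, not Ganesha's. */
    #include <stdint.h>
    #include <stdio.h>

    #define LEGACY_TRAILER_BYTES 64   /* hypothetical inserted-data size */

    struct toy_attrs {
        uint64_t filesize;
    };

    /* Stand-in for what the bottom (PROXY) layer reports. */
    static void proxy_getattrs(struct toy_attrs *out)
    {
        out->filesize = 4096 + LEGACY_TRAILER_BYTES;
    }

    /* The NULL-derived layer's wrapper: call down, then adjust. */
    static void stacked_getattrs(struct toy_attrs *out)
    {
        proxy_getattrs(out);
        if (out->filesize >= LEGACY_TRAILER_BYTES)
            out->filesize -= LEGACY_TRAILER_BYTES;
    }

    int main(void)
    {
        struct toy_attrs attrs;
        stacked_getattrs(&attrs);
        printf("adjusted size: %llu\n",
               (unsigned long long)attrs.filesize);
        return 0;
    }

The same adjustment has to run on every path that reports attributes,
which is what the readdir() discussion below covers.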

If you'd like to post code, I'll take a look when I can.  I'm off over 
the holidays, so there's no hurry from my end.

Daniel

On 12/21/2016 09:58 AM, Kevin C. wrote:
> I have meetings starting soon and lasting for several hours, so I cannot
> be as detailed as I'd like in this reply.
>
> I believe I see the processing that you describe.
>
> I had modified the NULL derived layer's *_readdir_cb() function.
>
> I enabled full debug logging for all components and started reviewing
> that output. I'm not done reviewing it yet, but it looks like
> mdcache_refresh_attributes() processing is setting the file size to the
> unadjusted size.
>
> If you'd like, I can provide patches so you can see the NULL-derived
> layer. If you have access to somewhere that can accept very large
> transfers, I could even provide VirtualBox VMs with my test
> configuration.
>
>
> On 12/21/2016 08:13 AM, Daniel Gryniewicz wrote:
>> Hi, Kevin.
>>
>> Welcome!  Glad someone's getting use out of stacking.
>>
>> First, your stack.  It sounds like you're stacking like this:
>>
>> MDCACHE
>>      |
>>     NULL (modified)
>>      |
>>    PROXY
>>
>> Correct?
>>
>> The readdir() path is somewhat different from the lookup/open path.  In
>> a normal lookup (say), the call comes into the protocol layer, and is
>> passed to MDCACHE, and so down the stack, to the bottom FSAL, which
>> creates a handle.  The handle is then passed back up the stack, each
>> layer wrapping it as it goes, until MDCACHE creates its handle and
>> caches it.  Future attempts to get the metadata (getattr() calls) are
>> handled directly by MDCACHE until the cache entry times out, or until an
>> event causes cache invalidation on the entry.
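>>
>> To make the wrapping concrete, here's a toy model of that path.  The
>> names are made-up stand-ins for the real fsal_obj_handle plumbing, not
>> Ganesha's actual API, but the shape is the same: the bottom layer
>> allocates its handle, each stacked layer wraps the one below it, and
>> MDCACHE keeps the topmost one:
>>
>>     /* Toy model of handle wrapping on lookup; names are illustrative. */
>>     #include <stdio.h>
>>     #include <stdlib.h>
>>
>>     struct proxy_handle {            /* bottom FSAL's handle */
>>         unsigned long long fileid;
>>     };
>>
>>     struct null_handle {             /* stacked layer wraps sub-handle */
>>         struct proxy_handle *sub;
>>     };
>>
>>     static struct proxy_handle *proxy_lookup(const char *name)
>>     {
>>         struct proxy_handle *h = malloc(sizeof(*h));
>>         (void)name;                  /* a real lookup would use this */
>>         h->fileid = 42;              /* pretend the server said so */
>>         return h;
>>     }
>>
>>     static struct null_handle *null_lookup(const char *name)
>>     {
>>         struct null_handle *h = malloc(sizeof(*h));
>>         h->sub = proxy_lookup(name); /* call down, wrap the result */
>>         return h;
>>     }
>>
>>     int main(void)
>>     {
>>         /* MDCACHE would now cache this wrapped handle and its attrs. */
>>         struct null_handle *h = null_lookup("file1");
>>         printf("cached handle wraps fileid %llu\n", h->sub->fileid);
>>         free(h->sub);
>>         free(h);
>>         return 0;
>>     }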
>>
>> For readdir(), however, no lookup is ever made.  The Protocol layer
>> calls readdir(), which calls into MDCACHE.  MDCACHE then caches the
>> *entire* directory's worth of dirents (except for large directories,
>> but we'll set that aside for now).  This involves calling readdir()
>> down the stack, starting at cookie 0 and going until there are no more
>> entries.
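>>
>> In toy form (again with made-up names, not the real readdir callback
>> signature), the population loop is roughly:
>>
>>     /* Toy model of MDCACHE populating a directory: keep calling the
>>      * lower layer's readdir from cookie 0 until it reports eof. */
>>     #include <stdbool.h>
>>     #include <stdio.h>
>>
>>     /* Stand-in for the lower layer: three entries, then no more. */
>>     static bool lower_readdir(unsigned cookie, const char **name_out)
>>     {
>>         static const char *names[] = { "a", "b", "c" };
>>         if (cookie >= 3)
>>             return false;            /* end of directory */
>>         *name_out = names[cookie];
>>         return true;
>>     }
>>
>>     int main(void)
>>     {
>>         const char *name;
>>         unsigned cookie = 0;
>>         while (lower_readdir(cookie, &name)) {
>>             printf("caching dirent %u: %s\n", cookie, name);
>>             cookie++;                /* advance to the next entry */
>>         }
>>         return 0;
>>     }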
>>
>> However, a readdir() call doesn't just create directory entries; it also
>> materializes handles for each object in the directory and passes them
>> back up.  This includes attributes, which are cached at the MDCACHE
>> layer as stated before.
>>
>> This means that any modification of size or other attributes that you do
>> on create(), lookup(), or getattr() must also be done during readdir(),
>> or the full original attributes will be cached, causing the incorrect
>> values to be returned until the entry times out.
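>>
>> As a toy sketch of what that means for your NULL-derived layer (made-up
>> types again, not the real readdir callback plumbing): pass your own
>> callback down the stack, and have it fix up the attributes before
>> forwarding each entry to the callback handed to you from above, so
>> MDCACHE only ever sees the adjusted size:
>>
>>     /* Toy model: the stacked layer interposes its own readdir callback
>>      * so attributes are adjusted before MDCACHE caches them.  All
>>      * names are illustrative, not the real FSAL types. */
>>     #include <stdint.h>
>>     #include <stdio.h>
>>
>>     #define LEGACY_TRAILER_BYTES 64  /* hypothetical inserted-data size */
>>
>>     struct toy_attrs { uint64_t filesize; };
>>
>>     typedef void (*toy_readdir_cb)(const char *name,
>>                                    struct toy_attrs *attrs, void *state);
>>
>>     /* Stand-in lower layer: one entry with the raw, unadjusted size. */
>>     static void lower_readdir(toy_readdir_cb cb, void *state)
>>     {
>>         struct toy_attrs attrs = {
>>             .filesize = 4096 + LEGACY_TRAILER_BYTES
>>         };
>>         cb("file1", &attrs, state);
>>     }
>>
>>     struct interpose_state {
>>         toy_readdir_cb upper_cb;     /* MDCACHE's callback */
>>         void *upper_state;
>>     };
>>
>>     /* The NULL-derived layer's callback: adjust, then forward up. */
>>     static void adjusting_cb(const char *name, struct toy_attrs *attrs,
>>                              void *state)
>>     {
>>         struct interpose_state *s = state;
>>         if (attrs->filesize >= LEGACY_TRAILER_BYTES)
>>             attrs->filesize -= LEGACY_TRAILER_BYTES;
>>         s->upper_cb(name, attrs, s->upper_state);
>>     }
>>
>>     /* What MDCACHE would cache for the dirent. */
>>     static void mdcache_cb(const char *name, struct toy_attrs *attrs,
>>                            void *state)
>>     {
>>         (void)state;
>>         printf("caching %s with size %llu\n", name,
>>                (unsigned long long)attrs->filesize);
>>     }
>>
>>     int main(void)
>>     {
>>         struct interpose_state s = {
>>             .upper_cb = mdcache_cb, .upper_state = NULL
>>         };
>>         lower_readdir(adjusting_cb, &s);
>>         return 0;
>>     }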
>>
>> Does this help?
>>
>> Daniel
>>
>> On 12/20/2016 02:49 PM, Kevin C. wrote:
>>> I'm trying to simplify migration away from a legacy non-standard NFS
>>> server by creating a stackable FSAL. For my tests, I’m using a slightly
>>> modified copy of FSAL NULL above FSAL PROXY.
>>>
>>> Reads from the legacy non-standard NFS server include data not
>>> originally provided by user application writes. Writes to the legacy
>>> non-standard NFS server also require insertion of data not provided by
>>> the user application. I am unable to change the legacy system (anytime
>>> soon).
>>>
>>> For all tests I’ve run, my FSAL NULL derived layer is successfully
>>> inserting test data.
>>>
>>> When file system operations are done in a certain order (e.g. reading
>>> files before listing directory contents), my FSAL NULL derived layer
>>> successfully removes the inserted test data from the read data stream
>>> presented to user applications, and the FSAL-adjusted file sizes are
>>> shown by the file manager GUI or "ls".
>>>
>>> If the PROXY NFS server directory contents are listed first (via "ls" or
>>> a file manager GUI) or output via "cat //mnt/nfstst//*", it seems like
>>> the FSAL NULL derived layer is bypassed. The user applications receive
>>> the extra/unwanted data as well as the expected user application data,
>>> and directory listings show file sizes that are NOT adjusted to subtract
>>> the extra/unwanted data portions.
>>>
>>> I hope that someone more familiar with NFS-Ganesha's architecture can
>>> help identify the processing path(s) that I have failed to identify
>>> (and therefore to adjust via the stackable FSAL) so that I can
>>> eliminate this operation-order dependency.
>>>
>>> My original work was done on the 2.4 stable branch. The development
>>> process indicates all patches should be made to the current development
>>> branch. I've recreated these results with the 2.5 development branch. If
>>> your input helps me find a bug, I'll submit a fix.
>>>
>>> I'm requesting input from more experienced NFS-Ganesha developers;
>>> anything that helps me eliminate this processing-order dependency
>>> would be appreciated.
>>>
>>> Thanks in advance,
>>> Kevin
>>
>

