On 05/28/2018 02:21 AM, Michal Hocko wrote:
> On Sun 27-05-18 15:47:22, Mike Rapoport wrote:
>> On Fri, May 25, 2018 at 10:16:24AM +0200, Michal Hocko wrote:
>>> On Fri 25-05-18 08:17:15, Dave Chinner wrote:
>>>> On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
>>> [...]
>>>>> +FS/IO code then simply calls the appropriate save function right at the
>>>>> +layer where a lock taken from the reclaim context (e.g. shrinker) and
>>>>> +the corresponding restore function when the lock is released. All that
>>>>> +ideally along with an explanation what is the reclaim context for easier
>>>>> +maintenance.
>>>>
>>>> This paragraph doesn't make much sense to me. I think you're trying
>>>> to say that we should call the appropriate save function "before
>>>> locks are taken that a reclaim context (e.g a shrinker) might
>>>> require access to."
>>>>
>>>> I think it's also worth making a note about recursive/nested
>>>> save/restore stacking, because it's not clear from this description
>>>> that this is allowed and will work as long as inner save/restore
>>>> calls are fully nested inside outer save/restore contexts.
>>>
>>> Any better?
>>>
>>> -FS/IO code then simply calls the appropriate save function right at the
>>> -layer where a lock taken from the reclaim context (e.g. shrinker) and
>>> -the corresponding restore function when the lock is released. All that
>>> -ideally along with an explanation what is the reclaim context for easier
>>> -maintenance.
>>> +FS/IO code then simply calls the appropriate save function before any
>>> +lock shared with the reclaim context is taken.  The corresponding
>>> +restore function when the lock is released. All that ideally along with
>>
>> Maybe: "The corresponding restore function is called when the lock is
>> released"
> 
> This will get rewritten some more based on comments from Dave
>  
>>> +an explanation what is the reclaim context for easier maintenance.
>>> +
>>> +Please note that the proper pairing of save/restore function allows nesting
>>> +so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope.
>>  
>> so it is safe to call memalloc_noio_save from an existing NOIO or NOFS
>> scope
> 
> Here is what I have right now on top
> 
> diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
> index c0ec212d6773..0cff411693ab 100644
> --- a/Documentation/core-api/gfp_mask-from-fs-io.rst
> +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
> @@ -34,12 +34,15 @@ scope will inherently drop __GFP_FS respectively __GFP_IO from the given
>  mask so no memory allocation can recurse back in the FS/IO.
>  
>  FS/IO code then simply calls the appropriate save function before any
> -lock shared with the reclaim context is taken.  The corresponding
> -restore function when the lock is released. All that ideally along with
> -an explanation what is the reclaim context for easier maintenance.
> -
> -Please note that the proper pairing of save/restore function allows nesting
> -so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope.
> +critical section wrt. the reclaim is started - e.g. lock shared with the

Please spell out "with respect to".

> +reclaim context or when a transaction context nesting would be possible
> +via reclaim. The corresponding restore function when the critical

"The corresponding restore ... ends."  << That is not a complete sentence.
It's missing something.

> +section ends. All that ideally along with an explanation what is
> +the reclaim context for easier maintenance.
> +
> +Please note that the proper pairing of save/restore function allows
> +nesting so it is safe to call ``memalloc_noio_save`` respectively
> +``memalloc_noio_restore`` from an existing NOIO or NOFS scope.

Please note that the proper pairing of save/restore functions allows
nesting so it is safe to call ``memalloc_noio_save`` or
``memalloc_noio_restore`` respectively from an existing NOIO or NOFS scope.
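
For illustration, the documented pattern boils down to something like the
sketch below (foofs_*, meta_lock and do_some_allocation() are made-up names;
only memalloc_nofs_save()/memalloc_nofs_restore() and their NOIO counterparts
from <linux/sched/mm.h> are the real API):

#include <linux/sched/mm.h>

static void foofs_update_metadata(struct foofs_info *fsi)
{
	unsigned int nofs_flags;

	/* enter the NOFS scope before taking a lock also used by our shrinker */
	nofs_flags = memalloc_nofs_save();
	mutex_lock(&fsi->meta_lock);

	/* GFP_KERNEL allocations done here implicitly lose __GFP_FS */
	do_some_allocation(fsi);

	/*
	 * Nesting is fine: a helper called from here may do its own
	 * memalloc_noio_save()/memalloc_noio_restore() pair as long as it
	 * is fully nested inside this scope.
	 */

	mutex_unlock(&fsi->meta_lock);
	memalloc_nofs_restore(nofs_flags);	/* leave the NOFS scope */
}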


>  
>  What about __vmalloc(GFP_NOFS)
>  ==============================
> 


-- 
~Randy
