On Mon, Aug 10, 2020 at 9:50 AM David Wysochanski <[email protected]> wrote:
>
> On Mon, Aug 10, 2020 at 6:09 AM David Howells <[email protected]> wrote:
> >
> > David Wysochanski <[email protected]> wrote:
> >
> > > Looks like fscache_shape_request() (actually cachefiles_shape_request())
> > > overrides any 'max_pages' value, so it's unclear why the netfs would pass
> > > in a 'max_pages' if it is not honored - seems like a bug, or at least it's
> > > not obvious.
> >
> > I think the problem is that cachefiles_shape_request() is applying the limit
> > too early. It's using it to cut down the number of pages in the original
> > request (only applicable to readpages), but then the shaping to fit cache
> > granules can exceed that, so it needs to be applied later also.
> >
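To make the overshoot concrete, here is a rough userspace sketch - the
granule size, page numbers and the round_down()/round_up() stand-ins below
are made up for illustration, not the real cachefiles values:

/* Illustration only - not kernel code. */
#include <stdio.h>

#define GRAN_PAGES	16U	/* stand-in for CACHEFILES_GRAN_PAGES */
#define round_down(x, y)	((x) / (y) * (y))
#define round_up(x, y)		round_down((x) + (y) - 1, (y))

int main(void)
{
	unsigned int max_pages = 24;		/* limit passed in by the netfs */
	unsigned int start = 10, end = 60;	/* proposed page range */

	/* Old order: apply the netfs limit first ... */
	unsigned int m = round_down(max_pages, GRAN_PAGES);	/* 16 */
	if (end - start > m)
		end = start + m;				/* end = 26 */

	/* ... then round out to granule boundaries, which re-expands the range. */
	start = round_down(start, GRAN_PAGES);			/* 0 */
	end = round_up(end, GRAN_PAGES);			/* 32 */

	printf("shaped to %u pages, but max_pages was %u\n",
	       end - start, max_pages);
	return 0;
}

i.e. the limit holds at first, but rounding the ends out to granule
boundaries pushes the range back over it, hence the need to trim again
after the rounding.
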
> > Does the attached patch help?
> >
> > David
> > ---
> > diff --git a/fs/cachefiles/content-map.c b/fs/cachefiles/content-map.c
> > index 2bfba2e41c39..ce05cf1d9a6e 100644
> > --- a/fs/cachefiles/content-map.c
> > +++ b/fs/cachefiles/content-map.c
> > @@ -134,7 +134,8 @@ void cachefiles_shape_request(struct fscache_object *obj,
> >  	_enter("{%lx,%lx,%x},%llx,%d",
> >  	       start, end, max_pages, i_size, shape->for_write);
> > 
> > -	if (start >= CACHEFILES_SIZE_LIMIT / PAGE_SIZE) {
> > +	if (start >= CACHEFILES_SIZE_LIMIT / PAGE_SIZE ||
> > +	    max_pages < CACHEFILES_GRAN_PAGES) {
> >  		shape->to_be_done = FSCACHE_READ_FROM_SERVER;
> >  		return;
> >  	}
> > @@ -144,10 +145,6 @@ void cachefiles_shape_request(struct fscache_object *obj,
> >  	if (shape->i_size > CACHEFILES_SIZE_LIMIT)
> >  		i_size = CACHEFILES_SIZE_LIMIT;
> > 
> > -	max_pages = round_down(max_pages, CACHEFILES_GRAN_PAGES);
> > -	if (end - start > max_pages)
> > -		end = start + max_pages;
> > -
> >  	granule = start / CACHEFILES_GRAN_PAGES;
> >  	if (granule / 8 >= object->content_map_size) {
> >  		cachefiles_expand_content_map(object, i_size);
> > @@ -185,6 +182,10 @@ void cachefiles_shape_request(struct fscache_object *obj,
> >  	start = round_down(start, CACHEFILES_GRAN_PAGES);
> >  	end = round_up(end, CACHEFILES_GRAN_PAGES);
> > 
> > +	/* Trim to the maximum size the netfs supports */
> > +	if (end - start > max_pages)
> > +		end = round_down(start + max_pages, CACHEFILES_GRAN_PAGES);
> > +
> >  	/* But trim to the end of the file and the starting page */
> >  	eof = (i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
> >  	if (eof <= shape->proposed_start)
> >
>
> I tried this and got the same panic - I think i_size is the culprit
> (it is larger than max_pages). I'll send you a larger trace offline
> with cachefiles/fscache debugging enabled if that helps, but below is
> some custom tracing that may be enough because it shows before / after
> shaping values.
>
FWIW, after testing the aforementioned patch and tracing it, it turns out
i_size is not the culprit after all. I added the small patch below on top
of the cachefiles_shape_request() patch and saw no more panics. This may
not address the full underlying issue, but it at least gets past this
point, and max_pages seems to be honored better.
---
diff --git a/fs/fscache/read_helper.c b/fs/fscache/read_helper.c
index a464c3e3188a..fa67339e7304 100644
--- a/fs/fscache/read_helper.c
+++ b/fs/fscache/read_helper.c
@@ -318,8 +318,8 @@ static int fscache_read_helper(struct fscache_io_request *req,
 	switch (type) {
 	case FSCACHE_READ_PAGE_LIST:
 		shape.proposed_start = lru_to_page(pages)->index;
-		shape.proposed_nr_pages =
-			lru_to_last_page(pages)->index - shape.proposed_start + 1;
+		shape.proposed_nr_pages = min_t(unsigned int, max_pages,
+			lru_to_last_page(pages)->index - shape.proposed_start + 1);
 		break;
 	case FSCACHE_READ_LOCKED_PAGE:
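
As a quick standalone illustration of what that cap does (the numbers and
the local min_t() stand-in are made up for the example): with a readpages
list spanning 256 pages and a netfs that only advertises max_pages = 32,
proposed_nr_pages is now clamped to 32 before it reaches the shaper:

/* Illustration only - not kernel code. */
#include <stdio.h>

#define min_t(type, a, b)	((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	unsigned int max_pages = 32;		/* advertised by the netfs */
	unsigned long first = 0, last = 255;	/* readpages list: 256 pages */

	unsigned int before = last - first + 1;
	unsigned int after = min_t(unsigned int, max_pages, last - first + 1);

	printf("proposed_nr_pages: before=%u, after=%u\n", before, after);
	return 0;
}

which lines up with max_pages being honored again and the panic going away
here.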
--
Linux-cachefs mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-cachefs