Re: [PATCH v2 7/7] ABI: sysfs-kernel-mm-cma: fix two cross-references

2021-04-01 Thread John Hubbard
either way, this improvement is nice to have, so: Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA What: /sys/kernel/mm/cma/<cma-name>/alloc_pages_success Date: Feb 2021

Re: [PATCH v7 3/8] mm/rmap: Split try_to_munlock from try_to_unmap

2021-03-30 Thread John Hubbard
On 3/30/21 8:56 PM, John Hubbard wrote: On 3/30/21 3:56 PM, Alistair Popple wrote: ... +1 for renaming "munlock*" items to "mlock*", where applicable. good grief. At least the situation was weird enough to prompt further investigation :) Renaming to mlock* doesn't

Re: [PATCH v7 3/8] mm/rmap: Split try_to_munlock from try_to_unmap

2021-03-30 Thread John Hubbard
try_to_munlock. - Alistair No objections here. :) thanks, -- John Hubbard NVIDIA

Re: [PATCH v7 3/8] mm/rmap: Split try_to_munlock from try_to_unmap

2021-03-30 Thread John Hubbard
"mlock*", where applicable. good grief. Although, it seems reasonable to tack such renaming patches onto the tail end of this series. But whatever works. thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: gup: remove FOLL_SPLIT

2021-03-30 Thread John Hubbard
ce351c ("s390/gmap: improve THP splitting"), July 29, 2020, removes the above use of FOLL_SPLIT. And "git grep", just to be sure, shows it is not there in today's linux.git. So I guess the https://github.com/0day-ci/linux repo needs a better way to stay in sync? thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: gup: remove FOLL_SPLIT

2021-03-30 Thread John Hubbard
git grep -nw FOLL_SPLIT Documentation/vm/transhuge.rst:57:follow_page, the FOLL_SPLIT bit can be specified as a parameter to Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA diff --git a/include/linux/mm.h b/include/linux/mm.h index 8ba434287387..3568836841f9 100644 --- a/include/linux

Re: [PATCH v3] kernel/resource: Fix locking in request_free_mem_region

2021-03-29 Thread John Hubbard
like a change to me. I do think it's worth mentioning. thanks, -- John Hubbard NVIDIA

Re: [PATCH v3] kernel/resource: Fix locking in request_free_mem_region

2021-03-29 Thread John Hubbard
commit log, and therefore quite surprising. It seems like the right thing to do but it also seems like a different fix! I'm not saying that it should be a separate patch, but it does seem worth loudly mentioning in the commit log, yes? return res; } + write_unlock(&resource_lock); + free_resource(res); + return ERR_PTR(-ERANGE); } thanks, -- John Hubbard NVIDIA

Re: [PATCH v7] mm: cma: support sysfs

2021-03-24 Thread John Hubbard
On 3/24/21 3:11 PM, Dmitry Osipenko wrote: 25.03.2021 01:01, John Hubbard wrote: On 3/24/21 2:31 PM, Dmitry Osipenko wrote: ... +#include + +struct cma_kobject { + struct cma *cma; + struct kobject kobj; If you'll place the kobj as the first member of the struct, then container_of

Re: [PATCH v7] mm: cma: support sysfs

2021-03-24 Thread John Hubbard
such case. thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: cma: fix corruption cma_sysfs_alloc_pages_count

2021-03-24 Thread John Hubbard
-minc...@kernel.org/ Reported-by: Dmitry Osipenko Tested-by: Dmitry Osipenko Suggested-by: Dmitry Osipenko Suggested-by: John Hubbard Suggested-by: Matthew Wilcox Signed-off-by: Minchan Kim --- I believe it's worth having a separate patch rather than replacing the original patch. It will also help

Re: [PATCH v6] mm: cma: support sysfs

2021-03-24 Thread John Hubbard
can't imagine it could grow in cma_sysfs in the future), I don't think it would be a problem. If we really want to make it more clear, maybe? cma->cma_kobj->kobj It would be consistent with other variables in cma_sysfs_init. OK, that's at least better than it was. thanks, -- John Hubbard NVIDIA

Re: [PATCH v6] mm: cma: support sysfs

2021-03-24 Thread John Hubbard
On 3/23/21 10:44 PM, Minchan Kim wrote: On Tue, Mar 23, 2021 at 09:47:27PM -0700, John Hubbard wrote: On 3/23/21 8:27 PM, Minchan Kim wrote: ... +static int __init cma_sysfs_init(void) +{ + unsigned int i; + + cma_kobj_root = kobject_create_and_add("cma"

Re: [PATCH v6] mm: cma: support sysfs

2021-03-23 Thread John Hubbard
everything allocated on previous iterations of the loop. thanks, -- John Hubbard NVIDIA

Re: [PATCH v6] mm: cma: support sysfs

2021-03-23 Thread John Hubbard
cma_kobj_root, "%s", cma->name); + if (err) { + kobject_put(&cma_kobj->kobj); + kobject_put(cma_kobj_root); + return err; Hopefully this little bit of logic could also go into the cleanup routine. + } + } + + return 0; +} +subsys_initcall(cma_sysfs_init); thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: cma: Fix potential null dereference on pointer cma

2021-03-17 Thread John Hubbard
issue. Fix this by only calling As far as I can tell, it's not possible to actually cause that null failure with the existing kernel code paths. *Might* be worth mentioning that here (unless I'm wrong), but either way, looks good, so: Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA

Re: [PATCH v2] mm: vmstat: add cma statistics

2021-03-03 Thread John Hubbard
, 17 insertions(+), 3 deletions(-) Seems reasonable, and the diffs look good. Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index 18e75974d4e3..21d7c7f72f1c 100644 --- a/include/linux/vm_event_item.h +++ b

Re: [PATCH] mm: vmstat: add cma statistics

2021-02-17 Thread John Hubbard
More than one item per line is a weak idea at best, even though it's used here already. Each item is important and needs to be visually compared to its output item later. So one per line might have helped avoid mismatches, and I think we should change to that to encourage that trend. thanks, -- John Hubbard NVIDIA

Re: [PATCH 0/9] Add support for SVM atomics in Nouveau

2021-02-10 Thread John Hubbard
can forcefully break this whenever we feel like by revoking the page, moving it, and then reinstating the gpu pte again and let it continue. Oh yes, that's true. If that's not possible then what we need here instead is an mlock() type of thing I think. No need for that, then. thanks, -- John Hubbard NVIDIA

Re: [PATCH v3 3/4] mm/gup: add a range variant of unpin_user_pages_dirty_lock()

2021-02-10 Thread John Hubbard
I failed to find any logic errors, so: Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA +{ + struct page *next, *page; + unsigned int nr = 1; + + if (i >= npages) + return; + + next = *list + i; + page = compound_head(n

Re: [PATCH v3] mm: cma: support sysfs

2021-02-10 Thread John Hubbard
/cma_alloc_pages_[attempts|fails] /sys/kernel/mm/cma/BLUETOOTH/cma_alloc_pages_[attempts|fails] Signed-off-by: Minchan Kim --- Looks good. Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA From v2 - https://lore.kernel.org/linux-mm/20210208180142.2765456-1-minc...@kernel.org

Re: [PATCH v2] mm: cma: support sysfs

2021-02-09 Thread John Hubbard
Having a kobject that you never free represent this object also is not normal :) OK, thanks for taking the time to explain that, much appreciated! thanks, -- John Hubbard NVIDIA

Re: [PATCH v2] mm: cma: support sysfs

2021-02-09 Thread John Hubbard
not "improper"; it's a reasonable step, given the limitations of the current sysfs design. I just wanted to say that out loud, as my proposal sinks to the bottom of the trench here. haha :) thanks, -- John Hubbard NVIDIA

Re: [PATCH 0/9] Add support for SVM atomics in Nouveau

2021-02-09 Thread John Hubbard
that do a long series of atomic operations. Such a program would be a little weird, but it's hard to rule out. - long term pin: the page cannot be moved, all migration must fail. Also this will have an impact on COW behaviour for fork (but not sure where those patches are, John Hubbard will know

Re: [PATCH v2] mm: cma: support sysfs

2021-02-09 Thread John Hubbard
should just use a static kobject, with a cautionary comment to would-be copy-pasters, that explains the design constraints above. That way, we still get a nice, less-code implementation, a safe design, and it only really costs us a single carefully written comment. thanks, -- John Hubbard NVIDIA

Re: [PATCH v2] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
On 2/8/21 10:27 PM, John Hubbard wrote: On 2/8/21 10:13 PM, Greg KH wrote: On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote: On 2/8/21 3:36 PM, Minchan Kim wrote: ...     char name[CMA_MAX_NAME]; +#ifdef CONFIG_CMA_SYSFS +    struct cma_stat    *stat; This should

Re: [PATCH v2] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
On 2/8/21 10:13 PM, Greg KH wrote: On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote: On 2/8/21 3:36 PM, Minchan Kim wrote: ... char name[CMA_MAX_NAME]; +#ifdef CONFIG_CMA_SYSFS + struct cma_stat *stat; This should not be a pointer. By making it a pointer, you've

Re: [PATCH v2] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
On 2/8/21 9:18 PM, John Hubbard wrote: On 2/8/21 8:19 PM, Minchan Kim wrote: On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote: On 2/8/21 3:36 PM, Minchan Kim wrote: ...     char name[CMA_MAX_NAME]; +#ifdef CONFIG_CMA_SYSFS +    struct cma_stat    *stat; This should

Re: [PATCH v2] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
On 2/8/21 8:19 PM, Minchan Kim wrote: On Mon, Feb 08, 2021 at 05:57:17PM -0800, John Hubbard wrote: On 2/8/21 3:36 PM, Minchan Kim wrote: ... char name[CMA_MAX_NAME]; +#ifdef CONFIG_CMA_SYSFS + struct cma_stat *stat; This should not be a pointer. By making it a pointer, you've

Re: [PATCH v2] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
methods to be used *if* you are dealing with kobjects. That's a narrower point. I can't imagine that he would have insisted on having additional allocations just so that kobj freeing methods could be used. :) thanks, -- John Hubbard NVIDIA

Re: [PATCH v2] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
goto out; + } + } while (++i < cma_area_count) This clearly is not going to compile! Don't forget to build and test the patches. + + return 0; +out: + while (--i >= 0) { + cma = &cma_areas[i]; + kobject_put(&cma->stat->kobj); + } + + kfree(cma_stats); + kobject_put(cma_kobj); + + return -ENOMEM; +} +subsys_initcall(cma_sysfs_init); thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: cma: support sysfs

2021-02-08 Thread John Hubbard
red) Any feedback on point (6) specifically? Well, /proc these days is for process-specific items. And CMA areas are system-wide. So that's an argument against it. However...if there is any process-specific CMA allocation info to report, then /proc is just the right place for it. thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: cma: support sysfs

2021-02-05 Thread John Hubbard
On 2/5/21 1:28 PM, Minchan Kim wrote: On Fri, Feb 05, 2021 at 12:25:52PM -0800, John Hubbard wrote: On 2/5/21 8:15 AM, Minchan Kim wrote: ... OK. But...what *is* your goal, and why is this useless (that's what orthogonal really means here) for your goal? As I mentioned, the goal is to monitor

Re: [PATCH] mm: cma: support sysfs

2021-02-05 Thread John Hubbard
if the problem is caused by pinning/fragmentation or by over-utilization. I agree. That seems about right, now that we've established that cma areas are a must-have. thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: cma: support sysfs

2021-02-05 Thread John Hubbard
useful but I'd like to enable it under CONFIG_CMA_SYSFS_ALLOC_RANGE as separate patchset. I will stop harassing you very soon, just want to bottom out on understanding the real goals first. :) thanks, -- John Hubbard NVIDIA

Re: [PATCH] selftests/vm: rename file run_vmtests to run_vmtests.sh

2021-02-05 Thread John Hubbard
h So I guess this is OK, given that I see "run_vmtests" in both -next and main. Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA

Re: [PATCH v2 1/4] mm/gup: add compound page list iterator

2021-02-05 Thread John Hubbard
a bit easier to verify that it is correct. However, given that the patch is correct and works as-is, the above is really just an optional idea, so please feel free to add: Reviewed-by: John Hubbard Thanks! Hopefully I can retain that if the snippet above is preferred? Joao Yes. Still looks

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
On 2/4/21 10:24 PM, Minchan Kim wrote: On Thu, Feb 04, 2021 at 09:49:54PM -0800, John Hubbard wrote: On 2/4/21 9:17 PM, Minchan Kim wrote: ... # cat vmstat | grep -i cma nr_free_cma 261718 # cat meminfo | grep -i cma CmaTotal:1048576 kB CmaFree: 1046872 kB OK, given that CMA

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
CmaTotal:1048576 kB CmaFree: 1046872 kB OK, given that CMA is already in those two locations, maybe we should put this information in one or both of those, yes? thanks, -- John Hubbard NVIDIA

Re: [PATCH v2 3/4] mm/gup: add a range variant of unpin_user_pages_dirty_lock()

2021-02-04 Thread John Hubbard
ead, ntails, FOLL_PIN); + } +} +EXPORT_SYMBOL(unpin_user_page_range_dirty_lock); + /** * unpin_user_pages() - release an array of gup-pinned pages. * @pages: array of pages to be marked dirty and released. Didn't spot any actual problems with how this works. thanks, -- John Hubbard NVIDIA

Re: [PATCH v2 1/4] mm/gup: add compound page list iterator

2021-02-04 Thread John Hubbard
re of *ntails. However, given that the patch is correct and works as-is, the above is really just an optional idea, so please feel free to add: Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA + page = compound_head(*list); + + for (nr = 1; nr

Re: [PATCH] xfs: fix unused variable build warning in xfs_log.c

2021-02-04 Thread John Hubbard
On 2/4/21 7:30 PM, Darrick J. Wong wrote: On Thu, Feb 04, 2021 at 07:18:14PM -0800, John Hubbard wrote: Delete the unused "log" variable in xfs_log_cover(). Fixes: 303591a0a9473 ("xfs: cover the log during log quiesce") Cc: Brian Foster Cc: Christoph Hellwig Cc: Darrick

[PATCH] xfs: fix unused variable build warning in xfs_log.c

2021-02-04 Thread John Hubbard
Delete the unused "log" variable in xfs_log_cover(). Fixes: 303591a0a9473 ("xfs: cover the log during log quiesce") Cc: Brian Foster Cc: Christoph Hellwig Cc: Darrick J. Wong Cc: Allison Henderson Signed-off-by: John Hubbard --- Hi, I just ran into this on today's linux-

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
On 2/4/21 5:44 PM, Minchan Kim wrote: On Thu, Feb 04, 2021 at 04:24:20PM -0800, John Hubbard wrote: On 2/4/21 4:12 PM, Minchan Kim wrote: ... Then, how to know how often CMA API failed? Why would you even need to know that, *in addition* to knowing specific page allocation numbers

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
On 2/4/21 4:25 PM, John Hubbard wrote: On 2/4/21 3:45 PM, Suren Baghdasaryan wrote: ... 2) The overall CMA allocation attempts/failures (first two items above) seem an odd pair of things to track. Maybe that is what was easy to track, but I'd vote for just omitting them. Then, how to know how

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
couple of items into /proc/vmstat, as I just mentioned in my other response, and calling it good. thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
. It seems to fit right in there, yes? thanks, -- John Hubbard NVIDIA

Re: [PATCH 1/4] mm/gup: add compound page list iterator

2021-02-04 Thread John Hubbard
On 2/4/21 11:53 AM, Jason Gunthorpe wrote: On Wed, Feb 03, 2021 at 03:00:01PM -0800, John Hubbard wrote: +static inline void compound_next(unsigned long i, unsigned long npages, +struct page **list, struct page **head, +unsigned

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
On 2/4/21 12:07 PM, Minchan Kim wrote: On Thu, Feb 04, 2021 at 12:50:58AM -0800, John Hubbard wrote: On 2/3/21 7:50 AM, Minchan Kim wrote: Since CMA is getting used more widely, it's more important to keep monitoring CMA statistics for system health since it's directly related to user

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread John Hubbard
cma, const struct page *pages, unsigned int count); extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data); + A single additional blank line seems to be the only change to this file. :) thanks, -- John Hubbard NVIDIA

Re: [PATCH 4/4] RDMA/umem: batch page unpin in __ib_mem_release()

2021-02-03 Thread John Hubbard
y, the for_each_sg() code and its behavior with sg->length and sg_page(sg) confuses me because I'm new to it, and I don't quite understand how this works. Especially with SG_CHAIN. I'm assuming that you've monitored /proc/vmstat for nr_foll_pin*? sg_free_table(&umem->sg_head); } thanks, -- John Hubbard NVIDIA

Re: [PATCH 3/4] mm/gup: add a range variant of unpin_user_pages_dirty_lock()

2021-02-03 Thread John Hubbard
should rename it to something like: unpin_user_compound_page_dirty_lock()? thanks, -- John Hubbard NVIDIA

Re: [PATCH 3/4] mm/gup: add a range variant of unpin_user_pages_dirty_lock()

2021-02-03 Thread John Hubbard
return 1; return min_t(unsigned int, (head + compound_nr(head) - page), npages); thanks, -- John Hubbard NVIDIA + for (ntails = 1; ntails < npages; ntails++) { if (compound_head(pages[ntails]) != head) break; @@ -229,20 +234,32 @@ stat

Re: [PATCH 2/4] mm/gup: decrement head page once for group of subpages

2021-02-03 Thread John Hubbard
elated one below) finally done! Everything looks correct here. Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA + struct page *head; + unsigned int ntails; if (!make_dirty) { unpin_user_pages(pages, npages); return; } - for

Re: [PATCH 1/4] mm/gup: add compound page list iterator

2021-02-03 Thread John Hubbard
npages; i += ntails, \ +compound_next(i, npages, list, &head, &ntails)) + /** * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages * @pages: array of pages to be maybe marked dirty, and definitely released. thanks, -- John Hubbard NVIDIA

Re: [PATCH v2 net-next 3/4] net: introduce common dev_page_is_reserved()

2021-01-30 Thread John Hubbard
" seems better to me, and especially anything *other* than "reserved" is a good idea, IMHO. thanks, -- John Hubbard NVIDIA

Re: [PATCH v7 14/14] selftests/vm: test faulting in kernel, and verify pinnable pages

2021-01-24 Thread John Hubbard
On 1/24/21 3:18 PM, John Hubbard wrote: On 1/21/21 7:37 PM, Pavel Tatashin wrote: When pages are pinned they can be faulted in userland and migrated, and they can be faulted right in kernel without migration. In either case, the pinned pages must end-up being pinnable (not movable). Add a new

Re: [PATCH v7 14/14] selftests/vm: test faulting in kernel, and verify pinnable pages

2021-01-24 Thread John Hubbard
y* the new -z option. I'll poke around the rest of the patchset and see if that is expected and normal, but either way the test code itself looks correct and seems to be passing my set of "run a bunch of different gup_test options" here, so feel free to add: Reviewed-by: John Hubbard

Re: [PATCH v7 13/14] selftests/vm: test flag is broken

2021-01-24 Thread John Hubbard
flag" That is just a minor documentation point, so either way, feel free to add: Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA With the fix, dump works like this: root@virtme:/# gup_test -c page #0, starting from user virt addr: 0x7f8acb9e4000 page:d3d2e

Re: [PATCH 0/1] mm: restore full accuracy in COW page reuse

2021-01-15 Thread John Hubbard
I proposed this exact idea a few days ago [1]. It's remarkable that we both picked nearly identical values for the layout! :) But as the responses show, security problems prevent pursuing that approach. [1] https://lore.kernel.org/r/45806a5a-65c2-67ce-fc92-dc8c2144d...@nvidia.com thanks, -- John Hubbard NVIDIA

Re: [PATCH 0/1] mm: restore full accuracy in COW page reuse

2021-01-10 Thread John Hubbard
We already have all the unpin_user_pages() calls in place, and so it's "merely" a matter of adding a flag to 74 call sites. The real question is whether we can get away with supporting only a very low count of FOLL_PIN and FOLL_GET pages. Can we? thanks, -- John Hubbard NVIDIA

Re: [PATCH 2/2] mm: soft_dirty: userfaultfd: introduce wrprotect_tlb_flush_pending

2021-01-07 Thread John Hubbard
to the RDMA cases, but still does what we need here. thanks, -- John Hubbard NVIDIA

Re: [PATCH 2/2] mm: soft_dirty: userfaultfd: introduce wrprotect_tlb_flush_pending

2021-01-07 Thread John Hubbard
On 1/7/21 2:00 PM, Linus Torvalds wrote: On Thu, Jan 7, 2021 at 1:53 PM John Hubbard wrote: Now, I do agree that from a QoI standpoint, it would be really lovely if we actually enforced it. I'm not entirely sure we can, but maybe it would be reasonable to use that mm->has_pin

Re: [PATCH 2/2] mm: soft_dirty: userfaultfd: introduce wrprotect_tlb_flush_pending

2021-01-07 Thread John Hubbard
pages that can be waited for, and pages that should not be waited for in the kernel. I hope this helps, but if it's too much of a side-track, please disregard. thanks, -- John Hubbard NVIDIA

Re: [PATCH v4 10/10] selftests/vm: test faulting in kernel, and verify pinnable pages

2020-12-19 Thread John Hubbard
s, "skip faulting pages in from user space". That's a lot clearer! And you can tell it's better, because we don't have to write a chunk of documentation explaining the odd quirks. Ha, it feels better! What do you think? (Again, if you want me to send over some diffs that put this all together, let me know. I'm kind of embarrassed at all the typing required here.) thanks, -- John Hubbard NVIDIA

Re: [PATCH v4 10/10] selftests/vm: test faulting in kernel, and verify pinnable pages

2020-12-18 Thread John Hubbard
- p[0] = 0; + if (touch) { + gup.flags |= FOLL_TOUCH; + } else { + for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE) + p[0] = 0; + } OK. /* Only report timing information on the *_BENCHMARK command

Re: [PATCH v4 09/10] selftests/vm: test flag is broken

2020-12-18 Thread John Hubbard
On 12/18/20 1:06 AM, John Hubbard wrote: Add a new test_flags field, to allow raw gup_flags to work. I think .test_control_flags field would be a good name, to make it very clear that it's not destined for gup_flags. Just .test_flags is not quite as clear a distinction from .gup_flags

Re: [PATCH v4 09/10] selftests/vm: test flag is broken

2020-12-18 Thread John Hubbard
write = 1; break; + case 'W': + write = 0; + break; case 'f': file = optarg; break; thanks, -- John Hubbard NVIDIA

Re: [PATCH 18/25] btrfs: Use readahead_batch_length

2020-12-17 Thread John Hubbard
= contig_start + readahead_batch_length(rac); + u64 contig_end = contig_start + readahead_batch_length(rac) - 1; Yes, confirmed on my end, too. thanks, -- John Hubbard NVIDIA

Re: [PATCH 18/25] btrfs: Use readahead_batch_length

2020-12-17 Thread John Hubbard
it out early. thanks, -- John Hubbard NVIDIA

Re: [PATCH v14 10/10] secretmem: test: add basic selftest for memfd_secret(2)

2020-12-11 Thread John Hubbard
y! Just these: bool vma_is_secretmem(struct vm_area_struct *vma); bool page_is_secretmem(struct page *page); bool secretmem_active(void); ...or am I just Doing It Wrong? :) thanks, -- John Hubbard NVIDIA

Re: [PATCH v3 5/6] mm/gup: migrate pinned pages out of movable zone

2020-12-11 Thread John Hubbard
to let the callers retry? Obviously that would be a separate patch and I'm not sure it's safe to make that change, but the current loop seems buried maybe too far down. Thoughts, anyone? thanks, -- John Hubbard NVIDIA

Re: [PATCH 5/6] mm: honor PF_MEMALLOC_NOMOVABLE for all allocations

2020-12-03 Thread John Hubbard
ags; So, keeping "current" in the function name makes its intent misleading. OK, I see. That sounds OK then. thanks, -- John Hubbard NVIDIA

Re: [PATCH 6/6] mm/gup: migrate pinned pages out of movable zone

2020-12-03 Thread John Hubbard
at this point it's a relief to finally see the nested ifdefs get removed. One naming nit/idea below, but this looks fine as is, so: Reviewed-by: John Hubbard diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 0f8d1583fa8e..00bab23d1ee5 100644 --- a/include/linux/migrate.h +++ b/include

Re: [PATCH 5/6] mm: honor PF_MEMALLOC_NOMOVABLE for all allocations

2020-12-03 Thread John Hubbard
gs, which right now happen to only cover CMA flags, so the original name seems accurate, right? thanks, John Hubbard NVIDIA { #ifdef CONFIG_CMA - unsigned int pflags = current->flags; - - if (!(pflags & PF_MEMALLOC_NOMOVABLE) && - gfp

Re: [PATCH 4/6] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_NOMOVABLE

2020-12-03 Thread John Hubbard
lingering rename candidates after this series is applied. And it's a good rename. Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA --- include/linux/sched.h| 2 +- include/linux/sched/mm.h | 21 + mm/gup.c | 4 ++-- mm/hugetlb.c

Re: [PATCH 3/6] mm/gup: make __gup_longterm_locked common

2020-12-03 Thread John Hubbard
-#endif /* CONFIG_FS_DAX || CONFIG_CMA */ static bool is_valid_gup_flags(unsigned int gup_flags) { At last some simplification here, yea! Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA

Re: [PATCH 2/6] mm/gup: don't pin migrated cma pages in movable zone

2020-12-03 Thread John Hubbard
; struct migration_target_control mtc = { .nid = NUMA_NO_NODE, - .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN, + .gfp_mask = GFP_USER | __GFP_NOWARN, }; check_again: Reviewed-by: John Hubbard ...while I was here, I noticed

Re: [PATCH 1/6] mm/gup: perform check_dax_vmas only when FS_DAX is enabled

2020-12-03 Thread John Hubbard
) +{ + return false; +} +#endif #ifdef CONFIG_CMA static long check_and_migrate_cma_pages(struct mm_struct *mm, Looks obviously correct, and the follow-up simplication is very nice. Reviewed-by: John Hubbard thanks, -- John Hubbard NVIDIA

Re: Pinning ZONE_MOVABLE pages

2020-11-23 Thread John Hubbard
opinion from the community on an appropriate path forward for this problem. If what I described sounds reasonable, or if there are other ideas on how to address the problem that I am seeing. I'm also in favor of avoiding (3) for now and maybe forever, depending on how it goes. Good luck... :) thanks, -- John Hubbard NVIDIA

Re: [mm/gup] 47e29d32af: phoronix-test-suite.npb.FT.A.total_mop_s -45.0% regression

2020-11-18 Thread John Hubbard
On 11/18/20 10:17 AM, Dan Williams wrote: On Wed, Nov 18, 2020 at 5:51 AM Jan Kara wrote: On Mon 16-11-20 19:35:31, John Hubbard wrote: On 11/16/20 6:48 PM, kernel test robot wrote: Greeting, FYI, we noticed a -45.0% regression of phoronix-test-suite.npb.FT.A.total_mop_s due to commit

Re: [mm/gup] 47e29d32af: phoronix-test-suite.npb.FT.A.total_mop_s -45.0% regression

2020-11-16 Thread John Hubbard
n counts for huge pages") https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master ...but that commit happened in April, 2020. Surely if this were a serious issue we would have some other indication...is this worth following up on?? I'm inclined to ignore it, honestly. thanks, -- John Hubbard NVIDIA

Re: [PATCH v3 1/7] compiler-clang: add build check for clang 10.0.1

2020-11-16 Thread John Hubbard
#error Sorry, your version of Clang is too old - please use 10.0.1 or newer. #endif +#endif /* Compiler specific definitions for Clang compiler */ thanks, -- John Hubbard NVIDIA

Re: [PATCH v2] mm/gup_test: GUP_TEST depends on DEBUG_FS

2020-11-08 Thread John Hubbard
On 11/8/20 12:37 AM, Barry Song wrote: Without DEBUG_FS, all the code in gup_test becomes meaningless. For sure kernel provides debugfs stub while DEBUG_FS is disabled, but the point here is that GUP_TEST can do nothing without DEBUG_FS. Cc: John Hubbard Cc: Ralph Campbell Cc: Randy Dunlap

Re: [PATCH 2/2] tomoyo: Fixed typo in documentation

2020-11-08 Thread John Hubbard
On 11/8/20 7:41 PM, Souptick Joarder wrote: On Sat, Nov 7, 2020 at 2:27 PM John Hubbard wrote: On 11/7/20 12:24 AM, Souptick Joarder wrote: Fixed typo s/Poiner/Pointer Fixes: 5b636857fee6 ("TOMOYO: Allow using argv[]/envp[] of execve() as conditions.") Signed-off-by: Souptick Joarder

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
t, having options appear and disappear on me, in this system. If they all had this "comment" behavior by default, to show up as a placeholder, I think it would be a better user experience. thanks, -- John Hubbard NVIDIA

Re: [PATCH 1/2] tomoyo: Convert get_user_pages*() to pin_user_pages*()

2020-11-07 Thread John Hubbard
On 11/7/20 8:12 PM, Tetsuo Handa wrote: On 2020/11/08 11:17, John Hubbard wrote: Excuse me, but Documentation/core-api/pin_user_pages.rst says "CASE 5: Pinning in order to _write_ to the data within the page" while tomoyo_dump_page() is for "_read_ the data within the page"

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
abled" depends on !GUP_TEST && !DEBUG_FS Sweet--I just applied that here, and it does exactly what I wanted: puts a nice clear message on the "make menuconfig" screen. No more hidden item. Brilliant! Let's go with that, shall we? thanks, -- John Hubbard NVIDIA

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
On 11/7/20 7:14 PM, John Hubbard wrote: On 11/7/20 6:58 PM, Song Bao Hua (Barry Song) wrote: On 11/7/20 2:20 PM, Randy Dunlap wrote: On 11/7/20 11:16 AM, John Hubbard wrote: On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote: From: John Hubbard [mailto:jhubb...@nvidia.com] ... But if you

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
On 11/7/20 6:58 PM, Song Bao Hua (Barry Song) wrote: On 11/7/20 2:20 PM, Randy Dunlap wrote: On 11/7/20 11:16 AM, John Hubbard wrote: On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote: From: John Hubbard [mailto:jhubb...@nvidia.com] ... But if you really disagree, then I'd go with, just

Re: [PATCH 1/2] tomoyo: Convert get_user_pages*() to pin_user_pages*()

2020-11-07 Thread John Hubbard
On 11/7/20 5:13 PM, Tetsuo Handa wrote: On 2020/11/08 4:17, John Hubbard wrote: On 11/7/20 1:04 AM, John Hubbard wrote: On 11/7/20 12:24 AM, Souptick Joarder wrote: In 2019, we introduced pin_user_pages*() and now we are converting get_user_pages*() to the new API as appropriate. [1] &

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
On 11/7/20 4:24 PM, Randy Dunlap wrote: On 11/7/20 4:03 PM, John Hubbard wrote: On 11/7/20 2:20 PM, Randy Dunlap wrote: On 11/7/20 11:16 AM, John Hubbard wrote: On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote: ... OK, thanks, I see how you get that list now. JFTR, those are not 42

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
On 11/7/20 2:20 PM, Randy Dunlap wrote: On 11/7/20 11:16 AM, John Hubbard wrote: On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote: -Original Message- From: John Hubbard [mailto:jhubb...@nvidia.com] ...    config GUP_BENCHMARK    bool "Enable infrastru

Re: [PATCH 1/2] tomoyo: Convert get_user_pages*() to pin_user_pages*()

2020-11-07 Thread John Hubbard
On 11/7/20 1:04 AM, John Hubbard wrote: On 11/7/20 12:24 AM, Souptick Joarder wrote: In 2019, we introduced pin_user_pages*() and now we are converting get_user_pages*() to the new API as appropriate. [1] & [2] could be referred for more information. This is case 5 as per documen

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-07 Thread John Hubbard
On 11/7/20 11:05 AM, Song Bao Hua (Barry Song) wrote: -Original Message- From: John Hubbard [mailto:jhubb...@nvidia.com] ... config GUP_BENCHMARK bool "Enable infrastructure for get_user_pages() and related calls benchmarking" + depends on DEBUG_FS I thi

Re: [PATCH 1/2] tomoyo: Convert get_user_pages*() to pin_user_pages*()

2020-11-07 Thread John Hubbard
p us on the straight and narrow, just in case I'm misunderstanding something. [1] https://lore.kernel.org/r/e78fb7af-627b-ce80-275e-51f97f1f3...@nvidia.com thanks, -- John Hubbard NVIDIA [1] Documentation/core-api/pin_user_pages.rst [2] "Explicit pinning of user-space pages": https://lw

Re: [PATCH 2/2] tomoyo: Fixed typo in documentation

2020-11-07 Thread John Hubbard
On 11/7/20 12:24 AM, Souptick Joarder wrote: Fixed typo s/Poiner/Pointer Fixes: 5b636857fee6 ("TOMOYO: Allow using argv[]/envp[] of execve() as conditions.") Signed-off-by: Souptick Joarder Cc: John Hubbard --- security/tomoyo/domain.c | 2 +- 1 file changed, 1 insertion(+),

Re: [PATCH] mm/gup_benchmark: GUP_BENCHMARK depends on DEBUG_FS

2020-11-06 Thread John Hubbard
On 11/4/20 2:05 AM, Barry Song wrote: Without DEBUG_FS, all the code in gup_benchmark becomes meaningless. For sure kernel provides debugfs stub while DEBUG_FS is disabled, but the point here is that GUP_BENCHMARK can do nothing without DEBUG_FS. Cc: John Hubbard Cc: Ralph Campbell Inspired

Re: [PATCH v5 05/15] mm/frame-vector: Use FOLL_LONGTERM

2020-11-05 Thread John Hubbard
not seeing a pud_mkhugespecial anywhere. So not sure this works, but probably just me missing something again. It means ioremap can't create an IO page PUD, it has to be broken up. Does ioremap even create anything larger than PTEs? From my reading, yes. See ioremap_try_huge_pmd(). thanks, -- John Hubbard NVIDIA

Re: [PATCH v5 05/15] mm/frame-vector: Use FOLL_LONGTERM

2020-11-04 Thread John Hubbard
only implementation that can pin pages. Thus it's still * useful to have gup_huge_pmd even if we can't operate on ptes. */ thanks, -- John Hubbard NVIDIA
