that adds some clarity.
thanks,
John Hubbard
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
does not use the
lock prefix, so it is not atomic.
thanks,
John Hubbard

Other than the refinements suggested above, I can't seem to find anything
wrong with this patch, so:
Reviewed-by: John Hubbard jhubb...@nvidia.com
thanks,
John H.
points are extremely minor, so:
Reviewed-by: John Hubbard jhubb...@nvidia.com
thanks,
John H.
than that, looks good.
Reviewed-by: John Hubbard jhubb...@nvidia.com
thanks,
John H.
On Fri, 27 Jun 2014, Jérôme Glisse wrote:
From: Jérôme Glisse jgli...@redhat.com
The event information will be useful for new users of the mmu_notifier API.
The event argument differentiates between a vma disappearing, a page
being write protected, or simply a page being unmapped. This allows new
On Wed, 3 Jun 2015, Jerome Glisse wrote:
On Tue, Jun 02, 2015 at 02:32:01AM -0700, John Hubbard wrote:
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
From: Jérôme Glisse jgli...@redhat.com
The mmu_notifier_invalidate_range_start() and
mmu_notifier_invalidate_range_end()
can
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
From: Jérôme Glisse jgli...@redhat.com
The mmu_notifier_invalidate_range_start() and
mmu_notifier_invalidate_range_end()
can be considered as forming an atomic section from the CPU page table
update's point of view. Between these two functions the
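The pairing described above can be sketched as follows (kernel-style pseudocode, not a literal driver; between _start and _end the CPU page table update is in flight, so a notifier listener must not establish new device mappings for the range):

```
mmu_notifier_invalidate_range_start(mm, start, end);
/* ... update CPU page tables for [start, end) ... */
mmu_notifier_invalidate_range_end(mm, start, end);
```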
On Wed, 3 Jun 2015, Jerome Glisse wrote:
On Mon, Jun 01, 2015 at 04:10:46PM -0700, John Hubbard wrote:
On Mon, 1 Jun 2015, Jerome Glisse wrote:
On Fri, May 29, 2015 at 08:43:59PM -0700, John Hubbard wrote:
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
From: Jérôme Glisse jgli
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
From: Jérôme Glisse jgli...@redhat.com
Listener of mm event might not have easy way to get the struct page
behind and address invalidated with mmu_notifier_invalidate_page()
s/behind and address/behind an address/
function as this happens
On Mon, 1 Jun 2015, Jerome Glisse wrote:
On Fri, May 29, 2015 at 08:43:59PM -0700, John Hubbard wrote:
On Thu, 21 May 2015, j.gli...@gmail.com wrote:
From: Jérôme Glisse jgli...@redhat.com
The event information will be useful for new users of the mmu_notifier API.
The event argument
,
John Hubbard
Why do this?
Mirroring a process address space is mandatory with OpenCL 2.0 and
with other GPU compute APIs. OpenCL 2.0 allows different levels of
implementation, and currently only the lowest 2 are supported on
Linux. To implement the highest level, where CPU and GPU
another one, I haven't
bottomed out on that), if you agree with the above approach of
always sending a precise event, instead of protection changed.
That's all I saw. This is not a complicated patch, even though it's
touching a lot of files, and I think everything else is correct.
thanks,
John
sgu...@nvidia.com>
> Signed-off-by: Mark Hairgrove <mhairgr...@nvidia.com>
> Signed-off-by: John Hubbard <jhubb...@nvidia.com>
> Signed-off-by: Jatin Kumar <jaku...@nvidia.com>
> ---
> include/linux/hmm.h | 83
> mm/hmm.c| 221
On 01/30/2017 05:57 PM, Dave Hansen wrote:
On 01/30/2017 05:36 PM, Anshuman Khandual wrote:
Let's say we had a CDM node with 100x more RAM than the rest of the
system and it was just as fast as the rest of the RAM. Would we still
want it isolated like this? Or would we want a different
n turn provides a safer way to
achieve the mapping.
Therefore, stop EXPORT-ing ioremap_page_range.
---
I may get some heat for this if another out-of-tree driver needs that symbol, but if no one else
pops up and shrieks, you can add:
Reviewed-by: John Hubbard <jhubb...@nvidia.com>
th
On 01/22/2017 05:14 PM, zhong jiang wrote:
On 2017/1/22 20:58, zhongjiang wrote:
From: zhong jiang
Recently, I found that ioremap_page_range has been abused. The improper
address mapping is an issue; it will result in a crash. So, remove
the symbol. It can be replaced
On 01/27/2017 02:52 PM, Jérôme Glisse wrote:
Cliff notes: HMM offers 2 things (each standing on its own). First,
it allows device memory to be used transparently inside any process
without any modifications to the process's program code. Second, it allows
a process address space to be mirrored on a device.
Change
providing a sort of coherent memory. HMM provides software-based
coherence, while NUMA assumes hardware-based memory coherence as a prerequisite.
I hope that helps, and doesn't just further muddy the waters?
--
John Hubbard
NVIDIA
Thanks,
-Bob
w people who tested a small subset of the patches,
I'll get them to report back as well. I think John Hubbard has been
testing iterations as well. CC'ing other interested people as well
Balbir
Yes, Evgeny Baskakov and I have been testing each of the posted versions. We are using both
migration a
Hi Anshuman,
I'd question the need to avoid kernel allocations in device memory.
Maybe we should simply allow these pages to *potentially* participate in
everything that N_MEMORY pages do: huge pages, kernel allocations, for
example.
No, allowing kernel allocations on CDM has two problems.
On 02/10/2017 02:06 AM, Anshuman Khandual wrote:
There are certain devices, like specialized accelerators, GPU cards, network
cards, FPGA cards, etc., which might contain onboard memory that is coherent
with the existing system RAM when accessed either from the CPU
or from the device.
On 01/16/2017 11:51 PM, Michal Hocko wrote:
On Mon 16-01-17 13:57:43, John Hubbard wrote:
On 01/16/2017 01:48 PM, Michal Hocko wrote:
On Mon 16-01-17 13:15:08, John Hubbard wrote:
On 01/16/2017 11:40 AM, Michal Hocko wrote:
On Mon 16-01-17 11:09:37, John Hubbard wrote:
On 01/16/2017
On 01/18/2017 12:21 AM, Michal Hocko wrote:
On Tue 17-01-17 21:59:13, John Hubbard wrote:
On 01/16/2017 11:51 PM, Michal Hocko wrote:
On Mon 16-01-17 13:57:43, John Hubbard wrote:
On 01/16/2017 01:48 PM, Michal Hocko wrote:
On Mon 16-01-17 13:15:08, John Hubbard wrote:
On 01/16/2017
On 01/19/2017 12:45 AM, Michal Hocko wrote:
On Thu 19-01-17 00:37:08, John Hubbard wrote:
On 01/18/2017 12:21 AM, Michal Hocko wrote:
On Tue 17-01-17 21:59:13, John Hubbard wrote:
[...]
* Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL should not be passed in.
* Passing
On 01/16/2017 12:47 AM, Michal Hocko wrote:
On Sun 15-01-17 20:34:13, John Hubbard wrote:
On 01/12/2017 07:37 AM, Michal Hocko wrote:
[...]
diff --git a/mm/util.c b/mm/util.c
index 3cb2164f4099..7e0c240b5760 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -324,6 +324,48 @@ unsigned long vm_mmap
On 01/16/2017 01:48 PM, Michal Hocko wrote:
On Mon 16-01-17 13:15:08, John Hubbard wrote:
On 01/16/2017 11:40 AM, Michal Hocko wrote:
On Mon 16-01-17 11:09:37, John Hubbard wrote:
On 01/16/2017 12:47 AM, Michal Hocko wrote:
On Sun 15-01-17 20:34:13, John Hubbard wrote
On 01/16/2017 11:40 AM, Michal Hocko wrote:
On Mon 16-01-17 11:09:37, John Hubbard wrote:
On 01/16/2017 12:47 AM, Michal Hocko wrote:
On Sun 15-01-17 20:34:13, John Hubbard wrote:
[...]
Is that "Reclaim modifiers" line still true, or is it a leftover from an
earlier approach? I
On 01/19/2017 01:56 AM, Michal Hocko wrote:
On Thu 19-01-17 01:09:35, John Hubbard wrote:
[...]
So that leaves us with maybe this for documentation?
* Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL should not be passed in.
* Passing in __GFP_REPEAT is supported, and will cause
ve found a bug in a corner case that involves invalid GPU
memory (of course, it's always possible that the bug is on our side),
which Jerome is investigating now. If you spot the bug by inspection,
you'll get some major told-you-so points. :)
The performance is looking good on the testing we’ve done
On 01/12/2017 07:37 AM, Michal Hocko wrote:
From: Michal Hocko
Using kmalloc with the vmalloc fallback for larger allocations is a
common pattern in the kernel code. Yet we do not have any common helper
for that and so users have invented their own helpers. Some of them are
On 03/23/2017 07:41 PM, Huang, Ying wrote:
David Rientjes writes:
On Mon, 20 Mar 2017, Huang, Ying wrote:
From: Huang Ying
Now vzalloc() is used in swap code to allocate various data
structures, such as swap cache, swap slots cache, cluster info,
[...]
Hi Ying,
I'm a little surprised to see vmalloc calls replaced with
kmalloc-then-vmalloc calls, because that actually makes fragmentation
worse (contrary to the above claim). That's because you will consume
contiguous memory (even though you don't need it to be contiguous),
whereas before,
On 03/23/2017 09:52 PM, Huang, Ying wrote:
John Hubbard <jhubb...@nvidia.com> writes:
On 03/23/2017 07:41 PM, Huang, Ying wrote:
David Rientjes <rient...@google.com> writes:
On Mon, 20 Mar 2017, Huang, Ying wrote:
From: Huang Ying <ying.hu...@intel.com>
Now vzalloc() is
On 03/24/2017 09:52 AM, Tim Chen wrote:
On Fri, 2017-03-24 at 06:56 -0700, Dave Hansen wrote:
On 03/24/2017 12:33 AM, John Hubbard wrote:
There might be some additional information you are using to come up with
that conclusion, that is not obvious to me. Any thoughts there? These
calls use
brace yourself before saying yes... :)
thanks
John Hubbard
NVIDIA
Signed-off-by: Jérôme Glisse <jgli...@redhat.com>
---
Documentation/vm/hmm.txt | 362 +++
1 file changed, 362 insertions(+)
create mode 100644 Documentation/vm/hmm.txt
diff
own type anyway), so:
Reviewed-by: John Hubbard <jhubb...@nvidia.com>
thanks
John Hubbard
NVIDIA
@@ -145,6 +145,7 @@ static inline unsigned long migrate_pfn_size(unsigned long
mpfn)
{
return mpfn & MIGRATE_PFN_HUGE ? PMD_SIZE : PAGE_SIZE;
}
+#endif
/*
* struct migrate_vma_
, in a 32-bit pfn.
So, given the current HMM design, I think we are going to have to provide a 32-bit version of these
routines (migrate_pfn_to_page, and related) that is a no-op, right?
thanks
John Hubbard
NVIDIA
On 03/16/2017 05:45 PM, Balbir Singh wrote:
On Fri, Mar 17, 2017 at 11:22 AM, John Hubbard <jhubb...@nvidia.com> wrote:
On 03/16/2017 04:05 PM, Andrew Morton wrote:
On Thu, 16 Mar 2017 12:05:26 -0400 Jérôme Glisse <jgli...@redhat.com>
wrote:
+static inline struct page *migrate
On 03/14/2017 06:33 AM, Anshuman Khandual wrote:
On 03/08/2017 04:37 PM, John Hubbard wrote:
[...]
There was a discussion, on an earlier version of this patchset, in which
someone pointed out that a slight over-allocation on a device that has
much more memory than the CPU has, could use up
ory
actually becoming "addressable" someday, is a good argument for using a
different name.
thanks,
--
John Hubbard
NVIDIA
On 03/08/2017 10:37 PM, Minchan Kim wrote:
>[...]
I think it's the matter of taste.
if (try_to_unmap(xxx))
something
else
something
It's perfectly understandable to me. IOW, if try_to_unmap returns true,
it means it did unmap successfully.
, none of the existing callers need to set an error in the
mapping when this fails, so I just added this to make it clear for any
new callers in the future.
Yes, somehow, even in this tiny patchset, I missed those two new comment lines.
arghh. :)
Well, everything looks great, then.
thanks,
John H
re, (maybe in a cover letter).
Because otherwise, it's too easy for earlier, important problems to be forgotten.
And reviewers don't want to have to repeat themselves, of course.
thanks
John Hubbard
NVIDIA
* CDM node's zones are part of its own NOFALLBACK zonelist
These above changes ensure the
On 03/08/2017 01:48 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:25:48AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard <jhubb...@nvidia.com>
Hi,
Say, I'm 99% sure that this was just an oversight, so
I'm sticking my neck out here and floating a patch to
Put Thing
On 03/08/2017 02:12 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:59:33AM -0800, John Hubbard wrote:
On 03/08/2017 01:48 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:25:48AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard <jhubb...@nvidia.com>
Hi,
Sa
From: John Hubbard <jhubb...@nvidia.com>
Originally, kref_get and kref_put were available as
standard routines that even non-GPL device drivers
could use. However, as an unintended side effect of
the recent kref_*() upgrade[1], these calls are now
effectively GPL, because they get
On 03/08/2017 01:50 AM, Greg Kroah-Hartman wrote:
On Wed, Mar 08, 2017 at 01:25:49AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard <jhubb...@nvidia.com>
Originally, kref_get and kref_put were available as
standard routines that even non-GPL device drivers
could use.
As I
switch (ret = try_to_unmap(page,
- ttu_flags | TTU_BATCH_FLUSH)) {
- case SWAP_FAIL:
Again: the SWAP_FAIL makes it crystal clear which case we're in.
I also wonder if UNMAP_FAIL or TTU_RESULT_FAIL is a better name?
thanks,
John Hubbard
NVIDIA
interrupted, maybe?
The code changes look perfect, though. And although I'm not a fs guy, it seems
pretty clear that with all the callers passing in 1 all this time, nobody is likely
to complain about this simplification.
thanks,
John Hubbard
NVIDIA
No existing caller uses this on normal files, so
From: John Hubbard <jhubb...@nvidia.com>
Hi,
Say, I'm 99% sure that this was just an oversight, so
I'm sticking my neck out here and floating a patch to
Put Things Back. I'm hoping that there is not some
firm reason to GPL-protect the basic kref_get and
kref_put routines, because when des
depending
on whether the 32-bit virtual (fake) PCI domain fits within 16 bits. (If not, then we can rush out a
driver update to fix it, but there will be a window of time with some breakage there.)
[1] http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf , section
6.5.6, page 397
thanks,
--
John Hubbard
NVIDIA
Thanks,
- Haiyang
ooks like your patch
was not rejected, but I can't tell if (!rejected == accepted), there. :)
We'll continue testing, but I expect at this point that anything we find
can be patched up after HMM finally gets merged.
thanks,
John Hubbard
NVIDIA
>
> Everything else is the same. Below is
On 06/29/2017 07:25 PM, Mikulas Patocka wrote:
> The __vmalloc function has a parameter gfp_mask with the allocation flags,
> however it doesn't fully respect the GFP_NOIO and GFP_NOFS flags. The
> pages are allocated with the specified gfp flags, but the pagetables are
> always allocated with
On 07/06/2017 02:52 PM, Ross Zwisler wrote:
[...]
>
> The naming collision between Jerome's "Heterogeneous Memory Management
> (HMM)" and this "Heterogeneous Memory (HMEM)" series is unfortunate, but I
> was trying to stick with the word "Heterogeneous" because of the naming of
> the ACPI 6.2
On 07/06/2017 02:52 PM, Ross Zwisler wrote:
[...]
> diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
> index b1aacfc..31e3f20 100644
> --- a/drivers/acpi/Makefile
> +++ b/drivers/acpi/Makefile
> @@ -72,6 +72,7 @@ obj-$(CONFIG_ACPI_PROCESSOR)+= processor.o
> obj-$(CONFIG_ACPI)
On Tue, 25 Apr 2017, Christoph Hellwig wrote:
> Hi John,
>
> please fix your quoting of the previous mails, thanks!
Shoot, sorry about any quoting issues. I'm sufficiently new to conversing
on these lists that I'm not even sure which mistake I made.
>
>
> What ACPI defines does not matter
e. It's complicating the Kconfig choices,
and adding problems. However, if DEVICE_PRIVATE must be kept, then something like this also fixes my
HMM tests:
From: John Hubbard <jhubb...@nvidia.com>
Date: Thu, 8 Jun 2017 20:13:13 -0700
Subject: [PATCH] hmm: select CONFIG_DEVICE_PRIVATE with HMM_DEV
ddr argument.
3. ...and it doesn't add anything that the driver can't trivially do itself.
So, let's just remove it. Less is more this time. :)
thanks,
--
John Hubbard
NVIDIA
ause I
suspect that the document is already good enough. This is based on not seeing any "I am
having trouble understanding HMM" complaints.
If that's not the case, please speak up. Otherwise, I'm assuming that all is well in the
HMM Documentation department.
thanks,
--
John Hubbard
NVIDIA
lobal_pgds(start, end - 1);
This does fix the HMM crash that I was seeing in hmm-next.
thanks,
John Hubbard
NVIDIA
}
#ifdef CONFIG_MEMORY_HOTREMOVE
--
2.4.11
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majord...@kvack.org. For more info on Linux M
On 05/17/2017 01:09 AM, Michal Hocko wrote:
From: Michal Hocko
While converting drm_[cm]alloc* helpers to kvmalloc* variants Chris
Wilson has wondered why we want to try kmalloc before vmalloc fallback
even for larger allocations requests. Let's clarify that one larger
On 06/14/2017 07:09 PM, Jerome Glisse wrote:
On Wed, Jun 14, 2017 at 04:10:32PM -0700, John Hubbard wrote:
On 06/14/2017 01:11 PM, Jérôme Glisse wrote:
[...]
Hi Jerome,
There are still some problems with using this configuration. First and
foremost, it is still possible (and likely, given
On 06/14/2017 01:11 PM, Jérôme Glisse wrote:
This just simplifies Kconfig and allows HMM and DEVICE_PUBLIC to be
selected for ppc64 once ZONE_DEVICE is allowed on ppc64 (a different
patchset).
Signed-off-by: Jérôme Glisse <jgli...@redhat.com>
Signed-off-by: John Hubbard <jhubb...@nvidi
From: John Hubbard <jhubb...@nvidia.com>
Due to commit db3e50f3234b ("device property: Get rid of struct
fwnode_handle type field"), ACPI_HANDLE() inadvertently became
a GPL-only call. The call path that led to that was:
ACPI_HANDLE()
ACPI_COMPANION()
to_
From: John Hubbard <jhubb...@nvidia.com>
Hi everyone,
I really don't know for sure which fix is going to be preferred--the
following patch, or just an obvious one-line fix that changes
DECLARE_ACPI_FWNODE_OPS() so that it invokes EXPORT_SYMBOL, instead of
EXPORT_SYMBOL_GPL. I exp
On 11/28/2017 12:12 AM, Michal Hocko wrote:
> On Mon 27-11-17 15:26:27, John Hubbard wrote:
> [...]
>> Let me add a belated report, then: we ran into this limit while implementing
>> an early version of Unified Memory[1], back in 2013. The implementation
>> at the t
use this option with
+care, keeping in mind that different kernels and C libraries may set up quite
+different mapping ranges.
...because that advice is just wrong (it presumes that "less portable" ==
"must be discouraged").
Should I send out a separate patch for that, or is it better to glom it
together
with this one?
thanks,
John Hubbard
NVIDIA
CH] mmap.2: document new MAP_FIXED_SAFE flag
>
> 4.16+ kernels offer a new MAP_FIXED_SAFE flag which allows the caller to
> atomically probe for a given address range.
>
> [wording heavily updated by John Hubbard <jhubb...@nvidia.com>]
> Signed-off-by: Michal H
later, the design was *completely* changed to use a separate
tracking system altogether).
The existing limit seems rather too low, at least from my perspective. Maybe
it would be better, if expressed as a function of RAM size?
[1] https://devblogs.nvidia.com/parallelforall/unified-memory-in-cuda-6/
This is a way to automatically (via page faulting) migrate memory
between CPUs and devices (GPUs, here). This is before HMM, of course.
thanks,
John Hubbard
From: John Hubbard <jhubb...@nvidia.com>
Previously, MAP_FIXED was "discouraged", due to portability
issues with the fixed address. In fact, there are other, more
serious issues. Also, in some limited cases, this option can
be used safely.
Expand the documentation to discuss
On 12/04/2017 03:31 AM, Mike Rapoport wrote:
> On Sun, Dec 03, 2017 at 06:14:11PM -0800, john.hubb...@gmail.com wrote:
>> From: John Hubbard <jhubb...@nvidia.com>
>>
[...]
>> +.IP
>> +Given the above limitations, one of the very few ways to use this option
>&g
must be a multiple of SHMLBA (), which in turn is either
the system page size (on many architectures) or a multiple of the system
page size (on some architectures)."
What do you think?
thanks,
John Hubbard
NVIDIA
> Which should at least hint the reader that this is architecture specific.
>
On 12/04/2017 11:08 PM, Michal Hocko wrote:
> On Mon 04-12-17 18:52:27, John Hubbard wrote:
>> On 12/04/2017 03:31 AM, Mike Rapoport wrote:
>>> On Sun, Dec 03, 2017 at 06:14:11PM -0800, john.hubb...@gmail.com wrote:
>>>> From: John Hubbard <jhubb...@nvidia.com>
On 12/04/2017 11:05 PM, Michal Hocko wrote:
> On Mon 04-12-17 18:14:18, John Hubbard wrote:
>> On 12/04/2017 02:55 AM, Cyril Hrubis wrote:
>>> Hi!
>>> I know that we are not touching the rest of the existing description for
>>> MAP_FIXED however the sec
On 12/13/2017 06:52 PM, Jann Horn wrote:
> On Wed, Dec 13, 2017 at 10:31 AM, Michal Hocko <mho...@kernel.org> wrote:
>> From: John Hubbard <jhubb...@nvidia.com>
[...]
>> +.IP
>> +Furthermore, this option is extremely hazardous (when used on its own),
>&
On 12/18/2017 11:15 AM, Michael Kerrisk (man-pages) wrote:
> On 12/12/2017 01:23 AM, john.hubb...@gmail.com wrote:
>> From: John Hubbard <jhubb...@nvidia.com>
>>
>> -- Expand the documentation to discuss the hazards in
>>enough detail to allow av
On 12/13/2017 06:52 PM, Jann Horn wrote:
> On Wed, Dec 13, 2017 at 10:31 AM, Michal Hocko <mho...@kernel.org> wrote:
>> From: John Hubbard <jhubb...@nvidia.com>
>>
>> -- Expand the documentation to discuss the hazards in
>>enough detail to al
e way to solve the problem. :)
For the naming and implementation, I see a couple of things that might improve
it slightly:
a) Change MAP_FIXED_SAFE to MAP_NO_CLOBBER (as per Kees' idea), but keep the
new flag independent, by omitting the above two lines. Instead of forcing
MAP_FIXED as a result of
M (speaking loosely there--it's really any user space
code that manages a unified memory address space, across devices)
often ends up using MAP_FIXED, but MAP_FIXED crams several features
into one flag: an exact address, an "atomic" switch to the new mapping,
and unmapping the old mapp
On 11/20/2017 01:05 AM, Michal Hocko wrote:
> On Fri 17-11-17 00:45:49, John Hubbard wrote:
>> On 11/16/2017 04:14 AM, Michal Hocko wrote:
>>> [Ups, managed to screw the subject - fix it]
>>>
>>> On Thu 16-11-17 11:18:58, Michal Hocko wrote:
>>>> Hi,
s space can change in response to
> virtually any library call. This is because almost any library call may be
> implemented by using dlopen(3) to load another shared library, which will be
> mapped into the process's address space. The PAM libraries are an excellent
> example, as well as more obvious examples like brk(2), malloc(3) and even
> pthread_create(3)."
>
> What do you think?
>
I'm working on some updated wording to capture these points. I'm even slower
at writing than I am at coding, so there will be a somewhat-brief pause here...
:)
thanks,
John Hubbard
NVIDIA
g thread).
Newer kernels (Linux 4.16 and later) have a MAP_FIXED_SAFE option that
avoids the corruption problem; if available, MAP_FIXED_SAFE should be
preferred over MAP_FIXED.
thanks,
John Hubbard
NVIDIA
From: John Hubbard <jhubb...@nvidia.com>
Previously, MAP_FIXED was "discouraged", due to portability
issues with the fixed address. In fact, there are other, more
serious issues. Also, alignment requirements were a bit vague.
So:
-- Expand the documentation to dis
On 12/10/2017 02:31 AM, Michal Hocko wrote:
> On Tue 05-12-17 19:14:34, john.hubb...@gmail.com wrote:
>> From: John Hubbard <jhubb...@nvidia.com>
>>
>> Previously, MAP_FIXED was "discouraged", due to portability
>> issues with the fixed address. In fa
et.
>
> I'm not set on MAP_REQUIRED. I came up with some awful names
> (MAP_TODDLER, MAP_TANTRUM, MAP_ULTIMATUM, MAP_BOSS, MAP_PROGRAM_MANAGER,
> etc). But I think we should drop FIXED from the middle of the name.
>
In that case, maybe:
MAP_EXACT
? ...because that's the characteristic behavior. It doesn't clobber, but
you don't need to say that in the name, now that we're not including
_FIXED_ in the middle.
thanks,
John Hubbard
NVIDIA
On 12/05/2017 11:35 PM, Florian Weimer wrote:
> On 12/06/2017 08:33 AM, John Hubbard wrote:
>> In that case, maybe:
>>
>> MAP_EXACT
>>
>> ? ...because that's the characteristic behavior.
>
> Is that true? mmap still silently rounding up the length
you're thinking that since the SHMLBA cannot be put in the man
pages, we could instead provide MapAlignment as sort of a different
way to document the requirement?
--
thanks,
John Hubbard
NVIDIA
On 12/06/2017 04:19 PM, Kees Cook wrote:
> On Wed, Dec 6, 2017 at 1:08 AM, Michal Hocko wrote:
>> On Wed 06-12-17 08:33:37, Rasmus Villemoes wrote:
>>> On 2017-12-06 05:50, Michael Ellerman wrote:
Michal Hocko writes:
> On Wed 29-11-17
From: John Hubbard <jhubb...@nvidia.com>
-- Expand the documentation to discuss the hazards in
enough detail to allow avoiding them.
-- Mention the upcoming MAP_FIXED_SAFE flag.
-- Enhance the alignment requirement slightly.
CC: Michael Ellerman <m...@ellerman.
From: John Hubbard <jhubb...@nvidia.com>
MAP_FIXED has been widely used for a very long time, yet the man
page still claims that "the use of this option is discouraged".
The documentation assumes that "less portable" == "must be discouraged".
Instead of disc
t: place the mapping at exactly that
address. addr must be suitably aligned: for most architectures a
multiple of page size is sufficient; however, some architectures
may impose additional restrictions.
...which is basically what Cyril was asking for, in his ear
On 06/18/2018 10:56 AM, Dan Williams wrote:
> On Mon, Jun 18, 2018 at 10:50 AM, John Hubbard wrote:
>> On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
>>> On Sun, Jun 17, 2018 at 01:28:18PM -0700, John Hubbard wrote:
>>>> Yes. However, my thinking was: get
On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
> On Sun, Jun 17, 2018 at 01:28:18PM -0700, John Hubbard wrote:
>> Yes. However, my thinking was: get_user_pages() can become a way to indicate
>> that
>> these pages are going to be treated specially. In particular, the call
On 06/15/2018 10:22 PM, Dan Williams wrote:
> On Fri, Jun 15, 2018 at 9:43 PM, John Hubbard wrote:
>> On 06/13/2018 12:51 PM, Dan Williams wrote:
>>> [ adding Andrew, Christoph, and linux-mm ]
>>>
>>> On Wed, Jun 13, 2018 at 12:33 PM, Joe Gorse wrote:
[sn
put_devmap_managed_page
devmap_managed_key
__put_devmap_managed_page
So if the goal is to restore put_page() to be effectively EXPORT_SYMBOL
again, then I think there would also need to be either a non-inlined
wrapper for devmap_managed_key (awkward for a static key), or else make
it EXPORT_SYMBOL, or maybe something else that's less obvious to me at the
moment.
thanks,
--
John Hubbard
NVIDIA
Hi Christoph,
Thanks for looking at this...
On 06/18/2018 12:56 AM, Christoph Hellwig wrote:
> On Sat, Jun 16, 2018 at 06:25:10PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> This fixes a few problems that come up when using devices (NICs, GPUs,
>
On 06/18/2018 12:21 PM, Dan Williams wrote:
> On Mon, Jun 18, 2018 at 11:14 AM, John Hubbard wrote:
>> On 06/18/2018 10:56 AM, Dan Williams wrote:
>>> On Mon, Jun 18, 2018 at 10:50 AM, John Hubbard wrote:
>>>> On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
>&
From: John Hubbard
This fixes a few problems that come up when using devices (NICs, GPUs,
for example) that want to have direct access to a chunk of system (CPU)
memory, so that they can DMA to/from that memory. Problems [1] come up
if that memory is backed by persistent storage; for example