On 24.02.21 15:25, David Hildenbrand wrote:
+ tmp_end = min_t(unsigned long, end, vma->vm_end);
+ pages = populate_vma_page_range(vma, start, tmp_end, &locked);
+ if (!locked) {
+ mmap_read_lock(mm);
+ *prev = NULL;
+ vma = NULL;
On 22.02.21 15:02, Michal Hocko wrote:
On Mon 22-02-21 14:22:37, David Hildenbrand wrote:
Exactly. But for hugetlbfs/shmem ("!RAM-backed files") this is not what we
want.
OK, then I must have misread your requirements. Maybe I just got lost in
all the combinations you have listed.
Another special case could be dax/pmem I think. You might want to fault
it in readable/writable but
On Sat 20-02-21 10:12:26, David Hildenbrand wrote:
[...]
> Thinking about MADV_POPULATE vs. MADV_POPULATE_WRITE I wonder if it would be
> more versatile to break with existing MAP_POPULATE semantics and directly go
> with
>
> MADV_POPULATE_READ: simulate user space read access without actually
>
On 22.02.21 13:46, Michal Hocko wrote:
I am slowly catching up with this thread.
On Fri 19-02-21 09:20:16, David Hildenbrand wrote:
[...]
> So if we have zero, we write zero. We'll COW pages, triggering a write fault
> - and that's the only good thing about it. For example, similar to
> MADV_POPULATE, nothing stops KSM from merging
On 17.02.21 16:48, David Hildenbrand wrote:
When we manage sparse memory mappings dynamically in user space - also
sometimes involving MADV_NORESERVE - we want to dynamically populate/
discard memory inside such a sparse memory region. Example users are
hypervisors (especially implementing
Sorry, for jumping in late ... hugetlb keyword just hit my mail filters :)
Sorry for not realizing to cc you before I sent out the man page update :)
Yes, it is true that hugetlb reservations are not numa aware. So, even if
pages are reserved at mmap time one could still SIGBUS if a fault
> On 19.02.2021 20:23, Peter Xu wrote:
>
> On Fri, Feb 19, 2021 at 06:13:47PM +0100, David Hildenbrand wrote:
>>> On 19.02.21 17:31, Peter Xu wrote:
>>> On Fri, Feb 19, 2021 at 09:20:16AM +0100, David Hildenbrand wrote:
On 18.02.21 23:59, Peter Xu wrote:
> Hi, David,
>
>
On 2/19/21 11:14 AM, David Hildenbrand wrote:
>>> It's interesting to know about commit 1e356fc14be ("mem-prealloc: reduce large
>>> guest start-up and migration time.", 2017-03-14). It seems for speeding up VM
>>> boot, but what I can't understand is why it would cause the delay of
It's interesting to know about commit 1e356fc14be ("mem-prealloc: reduce large
guest start-up and migration time.", 2017-03-14). It seems for speeding up VM
boot, but what I can't understand is why it would cause the delay of hugetlb
accounting - I thought we'd fail even earlier at either
On 19.02.21 17:31, Peter Xu wrote:
On Fri, Feb 19, 2021 at 09:20:16AM +0100, David Hildenbrand wrote:
> On 18.02.21 23:59, Peter Xu wrote:
> > Hi, David,
> >
> > On Wed, Feb 17, 2021 at 04:48:44PM +0100, David Hildenbrand wrote:
> > > When we manage sparse memory mappings dynamically in user space - also
> > > sometimes involving
On 19.02.21 12:04, Michal Hocko wrote:
On Fri 19-02-21 11:43:48, David Hildenbrand wrote:
On 19.02.21 11:35, Michal Hocko wrote:
On Wed 17-02-21 16:48:44, David Hildenbrand wrote:
[...]
I only got to the implementation now.
> +static long madvise_populate(struct vm_area_struct *vma,
> +             struct vm_area_struct **prev,
> +             unsigned long start, unsigned long end)
> +{
> +
On 18.02.21 23:59, Peter Xu wrote:
Hi, David,
On Wed, Feb 17, 2021 at 04:48:44PM +0100, David Hildenbrand wrote:
> When we manage sparse memory mappings dynamically in user space - also
> sometimes involving MADV_NORESERVE - we want to dynamically populate/
> discard memory inside such a sparse memory region. Example users are
On Thu 18-02-21 11:54:48, David Hildenbrand wrote:
> If we hit
> hardware errors on pages, ignore them - nothing we really can or
> should do.
> 3. On errors during MADV_POPULATED, some memory might have been
> populated. Callers have to clean up if they care.
How does caller find out? madvise reports 0 on success so how do you
find
> On 18.02.2021 12:15, Rolf Eike Beer wrote:
>
>
>>
>>> Let's introduce MADV_POPULATE with the following semantics
>>> 1. MADV_POPULATED does not work on PROT_NONE and special VMAs. It works
>>> on everything else.
>>> 2. Errors during MADV_POPULATED (especially OOM) are reported. If
Let's introduce MADV_POPULATE with the following semantics
1. MADV_POPULATED does not work on PROT_NONE and special VMAs. It works
on everything else.
2. Errors during MADV_POPULATED (especially OOM) are reported. If we hit
hardware errors on pages, ignore them - nothing we really can
On 18.02.21 11:25, Michal Hocko wrote:
On Wed 17-02-21 16:48:44, David Hildenbrand wrote:
> When we manage sparse memory mappings dynamically in user space - also
> sometimes involving MADV_NORESERVE - we want to dynamically populate/
Just wondering what is MADV_NORESERVE? I do not see anything like that
in the Linus tree. Did you
+CC linux-api, please do on further revisions.
Keeping rest of the e-mail.
On 2/17/21 4:48 PM, David Hildenbrand wrote:
> When we manage sparse memory mappings dynamically in user space - also
> sometimes involving MADV_NORESERVE - we want to dynamically populate/
> discard memory inside such a
On 17.02.21 17:46, Dave Hansen wrote:
On 2/17/21 7:48 AM, David Hildenbrand wrote:
> While MADV_DONTNEED and FALLOC_FL_PUNCH_HOLE provide us ways to reliably
> discard memory, there is no generic approach to populate ("preallocate")
> memory.
>
> Although mmap() supports MAP_POPULATE, it is not applicable to the concept
> of sparse
When we manage sparse memory mappings dynamically in user space - also
sometimes involving MADV_NORESERVE - we want to dynamically populate/
discard memory inside such a sparse memory region. Example users are
hypervisors (especially implementing memory ballooning or similar
technologies like
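The populate/discard lifecycle described in the cover letter can be sketched against a memfd: FALLOC_FL_PUNCH_HOLE reliably discards the backing pages, and MADV_POPULATE_WRITE (value 23 as merged upstream; the memset fallback covers kernels that reject the advice with EINVAL) preallocates them again:

```c
#define _GNU_SOURCE		/* memfd_create(), fallocate() flags */
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_POPULATE_WRITE
#define MADV_POPULATE_WRITE 23	/* assumed value, as merged upstream */
#endif

/* Preallocate one chunk of a sparse mapping. */
static int populate_chunk(void *addr, size_t len)
{
	if (madvise(addr, len, MADV_POPULATE_WRITE) == 0)
		return 0;
	if (errno != EINVAL)
		return -1;
	memset(addr, 0, len);	/* old kernel: fault in via writes */
	return 0;
}

/* Discard a chunk again by punching a hole into the backing file. */
static int discard_chunk(int fd, off_t off, off_t len)
{
	return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 off, len);
}
```

This is the pattern a hypervisor managing a sparse guest-memory region would cycle through: punch holes for unplugged ranges, populate plugged ones.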
37 matches