On 6/20/17 4:35 PM, Prakash Sangappa wrote:
On 6/16/17 6:15 AM, Andrea Arcangeli wrote:
Adding a single if (ctx->features & UFFD_FEATURE_SIGBUS) goto out,
branch for this corner case to handle_userfault() isn't great and the
hugetlbfs mount option is absolutely zero cost to the
On 6/16/17 6:15 AM, Andrea Arcangeli wrote:
Hello Prakash,
Thanks for your response. Comments inline.
On Tue, May 09, 2017 at 01:59:34PM -0700, Prakash Sangappa wrote:
On 5/9/17 1:58 AM, Christoph Hellwig wrote:
On Mon, May 08, 2017 at 03:12:42PM -0700, prakash.sangappa wrote:
Hello Prakash,
On Tue, May 09, 2017 at 01:59:34PM -0700, Prakash Sangappa wrote:
>
>
> On 5/9/17 1:58 AM, Christoph Hellwig wrote:
> > On Mon, May 08, 2017 at 03:12:42PM -0700, prakash.sangappa wrote:
> >> Regarding #3 as a general feature, do we want to
> >> consider this and the complexity
On 5/9/17 1:59 PM, Prakash Sangappa wrote:
On 5/9/17 1:58 AM, Christoph Hellwig wrote:
On Mon, May 08, 2017 at 03:12:42PM -0700, prakash.sangappa wrote:
Regarding #3 as a general feature, do we want to
consider this and the complexity associated with the
implementation?
We have to. Given
On 5/9/17 1:58 AM, Christoph Hellwig wrote:
On Mon, May 08, 2017 at 03:12:42PM -0700, prakash.sangappa wrote:
Regarding #3 as a general feature, do we want to
consider this and the complexity associated with the
implementation?
We have to. Given that no one has exclusive access to hugetlbfs
On Mon, May 08, 2017 at 03:12:42PM -0700, prakash.sangappa wrote:
> Regarding #3 as a general feature, do we want to
> consider this and the complexity associated with the
> implementation?
We have to. Given that no one has exclusive access to hugetlbfs
a mount option is fundamentally the wrong
On 05/08/2017 08:58 AM, Dave Hansen wrote:
It depends on how you define the feature. I think you have three choices:
1. "Error" on page fault. Require all access to be pre-faulted.
2. Allow faults, but "Error" if page cache has to be allocated
3. Allow faults and page cache allocations,
On 05/03/2017 12:02 PM, Prakash Sangappa wrote:
>>> If we do consider a new madvise() option, will it be acceptable
>>> since this will be specifically for hugetlbfs file mappings?
>> Ideally, it would be something that is *not* specifically for
>> hugetlbfs. MADV_NOAUTOFILL, for instance, could be
On 5/3/17 12:02 PM, Prakash Sangappa wrote:
On 5/2/17 4:43 PM, Dave Hansen wrote:
Ideally, it would be something that is *not* specifically for hugetlbfs.
MADV_NOAUTOFILL, for instance, could be defined to SIGSEGV whenever
memory is touched that was not populated with MADV_WILLNEED,
On 5/2/17 4:43 PM, Dave Hansen wrote:
On 05/02/2017 04:34 PM, Prakash Sangappa wrote:
Similarly, a madvise() option also requires an additional system call from every
process mapping the file; this is considered an overhead for the database.
How long-lived are these processes? For a database, I'd
On 05/02/2017 04:34 PM, Prakash Sangappa wrote:
> Similarly, a madvise() option also requires an additional system call from every
> process mapping the file; this is considered an overhead for the database.
How long-lived are these processes? For a database, I'd assume that
this would happen a single
On 5/2/17 2:32 PM, Dave Hansen wrote:
On 05/01/2017 11:00 AM, Prakash Sangappa wrote:
This patch adds a new hugetlbfs mount option 'noautofill', to indicate that
pages should not be allocated at page fault time when accessed through a mmapped
address.
I think the main argument against doing
On 05/01/2017 11:00 AM, Prakash Sangappa wrote:
> This patch adds a new hugetlbfs mount option 'noautofill', to indicate that
> pages should not be allocated at page fault time when accessed through a mmapped
> address.
I think the main argument against doing something like this is further
On 5/2/17 3:53 AM, Anshuman Khandual wrote:
On 05/01/2017 11:30 PM, Prakash Sangappa wrote:
Some applications like a database use hugetlbfs for performance
reasons. Files on hugetlbfs filesystem are created and huge pages
allocated using fallocate() API. Pages are deallocated/freed using
On 05/01/2017 11:30 PM, Prakash Sangappa wrote:
> Some applications like a database use hugetlbfs for performance
> reasons. Files on hugetlbfs filesystem are created and huge pages
> allocated using fallocate() API. Pages are deallocated/freed using
> fallocate() hole punching support that has
Some applications like a database use hugetlbfs for performance
reasons. Files on hugetlbfs filesystem are created and huge pages
allocated using fallocate() API. Pages are deallocated/freed using
fallocate() hole punching support that has been added to hugetlbfs.
These files are mmapped and