> I assume he means MAP_POPULATE
Yes.
>
> which does improve things, from ~3000 cycles to ~219 cycles but that's
> still more overhead than the ~130 or so you get by manually touching the
> page first.
That seems odd. It should be the same.
Can you do a trace-cmd function trace and compare the two?
On Mon, 2 Sep 2013, Stephane Eranian wrote:
> On Mon, Sep 2, 2013 at 4:50 AM, Andi Kleen wrote:
> > Stephane Eranian writes:
> >
> >> I don't see a flag in mmap() to fault it in immediately.
> >
> > MAP_PRESENT
> >
> I could not find this constant defined anywhere in the kernel source tree
> nor in /usr/include. Are you sure of the name?
>
Stephane Eranian writes:
> I don't see a flag in mmap() to fault it in immediately.
MAP_PRESENT
-Andi
--
a...@linux.intel.com -- Speaking for myself only
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
Vince Weaver writes:
> On Fri, 30 Aug 2013, Stephane Eranian wrote:
> > >
> > You mean that the high cost in your first example comes from the fact
> > that you are averaging over all the iterations and not n-1 (where 1 is
> > the first). I don't see a flag in mmap() to fault it in immediately. But
> > why not document that programs
Hello,
I've finally found time to track down why perf_event/rdpmc self-monitoring
overhead was so bad.
To summarize, a test which does:

    perf_event_open()
    ioctl(PERF_EVENT_IOC_ENABLE)
    read() /* either via syscall or the rdpmc code listed in