Re: limit on number of kmapped pages

2001-01-25 Thread David Wragg

"Stephen C. Tweedie" <[EMAIL PROTECTED]> writes:
> On Wed, Jan 24, 2001 at 12:35:12AM +, David Wragg wrote:
> > 
> > > And why do the pages need to be kmapped? 
> > 
> > They only need to be kmapped while data is being copied into them.
> 
> But you only need to kmap one page at a time during the copy.  There
> is absolutely no need to copy the whole chunk at once.

The chunks I'm copying are always smaller than a page.  Usually they
are a few hundred bytes.

Since I'm copying into the pages in a bottom half, though, I'll have
to use kmap_atomic.  After a page is filled, it is put into the page
cache, so the pages have to be allocated with page_cache_alloc() and
hence __GFP_HIGHMEM, which is why I'm bothering with kmap at all.
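
(For illustration, roughly what I have in mind; an untested sketch in
which the helper name, the fill-offset bookkeeping and the km_type
slot are made up:)

  #include <linux/mm.h>
  #include <linux/highmem.h>
  #include <linux/string.h>

  /* Copy a sub-page chunk into a highmem buffer page from BH context.
   * 'page' came from page_cache_alloc() earlier; 'offset' is how far
   * the page has been filled so far. */
  static void buffer_copy_chunk(struct page *page, unsigned int offset,
                                const void *data, unsigned int len)
  {
          char *vaddr = kmap_atomic(page, KM_USER0);  /* slot illustrative */

          memcpy(vaddr + offset, data, len);
          kunmap_atomic(vaddr, KM_USER0);
  }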


David Wragg



Re: limit on number of kmapped pages

2001-01-25 Thread Stephen C. Tweedie

Hi,

On Wed, Jan 24, 2001 at 12:35:12AM +, David Wragg wrote:
> 
> > And why do the pages need to be kmapped? 
> 
> They only need to be kmapped while data is being copied into them.

But you only need to kmap one page at a time during the copy.  There
is absolutely no need to copy the whole chunk at once.

--Stephen



Re: limit on number of kmapped pages

2001-01-24 Thread Eric W. Biederman

David Wragg <[EMAIL PROTECTED]> writes:

> I'd still like to know what the basis for the current kmap limit
> setting is.

Mostly, at one point kmap_atomic was all there was.  It was only the
difficulty of implementing copy_from_user with kmap_atomic that convinced
people we needed something more.  So actually, if we can kmap several
megabytes at once, the kmap limit is quite high.

Eric




Re: limit on number of kmapped pages

2001-01-24 Thread David Wragg

"Benjamin C.R. LaHaise" <[EMAIL PROTECTED]> writes:
> On 24 Jan 2001, David Wragg wrote:
> 
> > [EMAIL PROTECTED] (Eric W. Biederman) writes:
> > > Why do you need such a large buffer? 
> > 
> > ext2 doesn't guarantee sustained write bandwidth (in particular,
> > writing a page to an ext2 file can have a high latency due to reading
> > the block bitmap synchronously).  To deal with this I need at least a
> > 2MB buffer.
> 
> This is the wrong way of going about things -- you should probably insert
> the pages into the page cache and write them into the filesystem via
> writepage. 

I currently use prepare_write/commit_write, but I think writepage
would have the same issue: when ext2 allocates a block and has to
allocate from a new block group, it may do a synchronous read of the
new block group's bitmap.  So before the writepage (or whatever) that
triggers this completes, it has to wait for the read to get picked up
by the elevator, for the seek, and so on.  By the time it gets back to
writing normally, I've buffered a couple of MB of data.
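
(For concreteness, this is roughly the 2.4-style path I mean; a
simplified, untested sketch with error handling trimmed, and with
'file', 'index' and the chunk bookkeeping as placeholders:)

  #include <linux/fs.h>
  #include <linux/pagemap.h>
  #include <linux/highmem.h>
  #include <linux/string.h>

  /* Write one sub-page chunk into a file at page 'index', bytes
   * 'from'..'to', via the address_space operations. */
  static int write_chunk(struct file *file, unsigned long index,
                         unsigned int from, unsigned int to,
                         const void *data)
  {
          struct address_space *mapping = file->f_dentry->d_inode->i_mapping;
          struct page *page;
          char *vaddr;
          int err;

          page = grab_cache_page(mapping, index);   /* returned locked */
          if (!page)
                  return -ENOMEM;

          vaddr = kmap(page);     /* the kmap needed before prepare_write */
          err = mapping->a_ops->prepare_write(file, page, from, to);
          if (!err) {
                  memcpy(vaddr + from, data, to - from);
                  err = mapping->a_ops->commit_write(file, page, from, to);
          }
          kunmap(page);

          UnlockPage(page);
          page_cache_release(page);
          return err;
  }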

But I do have a workaround for the ext2 issue.

> That way the pages don't need to be mapped while being written
> out.

Point taken, though the kmap needed before prepare_write is much less
significant than the kmap I need to do before copying data into the
page.

> For incoming data from a network socket, make use of the data_ready
> callbacks and copy directly from the skbs in one pass, with a kmap of
> only one page at a time.
>
> Maybe I'm guessing incorrectly at what is being attempted, but kmap
> should be used sparingly and as briefly as possible.

I'm going to see if the one-page-kmapped approach makes a measurable
difference.

I'd still like to know what the basis for the current kmap limit
setting is.


David Wragg



Re: limit on number of kmapped pages

2001-01-23 Thread Benjamin C.R. LaHaise

On 24 Jan 2001, David Wragg wrote:

> [EMAIL PROTECTED] (Eric W. Biederman) writes:
> > Why do you need such a large buffer? 
> 
> ext2 doesn't guarantee sustained write bandwidth (in particular,
> writing a page to an ext2 file can have a high latency due to reading
> the block bitmap synchronously).  To deal with this I need at least a
> 2MB buffer.

This is the wrong way of going about things -- you should probably insert
the pages into the page cache and write them into the filesystem via
writepage.  That way the pages don't need to be mapped while being written
out.  For incoming data from a network socket, make use of the
data_ready callbacks and copy directly from the skbs in one pass, with
a kmap of only one page at a time.
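
(Something along these lines; an untested sketch of just the copy step,
with the helper name and the destination bookkeeping made up, and the
surrounding skb handling elided.  From BH/softirq context kmap_atomic
would be needed instead of kmap:)

  #include <linux/skbuff.h>
  #include <linux/mm.h>
  #include <linux/highmem.h>
  #include <linux/string.h>

  /* Copy an skb's linear data into an array of (possibly highmem)
   * destination pages, mapping only one page at any moment. */
  static void copy_skb_to_pages(struct sk_buff *skb, struct page **pages,
                                unsigned long dst_off)
  {
          const unsigned char *src = skb->data;
          unsigned int left = skb->len;

          while (left) {
                  struct page *page = pages[dst_off >> PAGE_SHIFT];
                  unsigned int off = dst_off & ~PAGE_MASK;
                  unsigned int n = PAGE_SIZE - off;
                  char *vaddr;

                  if (n > left)
                          n = left;

                  vaddr = kmap(page);     /* one page mapped at a time */
                  memcpy(vaddr + off, src, n);
                  kunmap(page);

                  src += n;
                  dst_off += n;
                  left -= n;
          }
  }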

Maybe I'm guessing incorrectly at what is being attempted, but kmap
should be used sparingly and as briefly as possible.

-ben




Re: limit on number of kmapped pages

2001-01-23 Thread David Wragg

[EMAIL PROTECTED] (Eric W. Biederman) writes:
> Why do you need such a large buffer? 

ext2 doesn't guarantee sustained write bandwidth (in particular,
writing a page to an ext2 file can have a high latency due to reading
the block bitmap synchronously).  To deal with this I need at least a
2MB buffer.

I've modified ext2 slightly to avoid that problem, but I still expect
to need a 512KB buffer (though the usual requirements are much lower).
While that wouldn't hit the kmap limit, it would bring the system
closer to it.

Perhaps further tuning could reduce the buffer needs of my
application, but it is better to have the buffer too big than too
small.

> And why do the pages need to be kmapped? 

They only need to be kmapped while data is being copied into them.

> If you are doing dma there is no such requirement...  And
> unless you are running on something faster than a PCI bus I can't
> imagine why you need a buffer that big. 

Gigabit ethernet.

> My hunch is that it makes
> sense to do the kmap, and the i/o in the bottom_half.  What is wrong
> with that?

Do you mean kmap_atomic?  The comments around kmap don't mention
avoiding it in BHs, but I don't see what prevents kmap -> kmap_high ->
map_new_virtual -> schedule.

> kmap should be quick and fast because it is for transitory mappings.
> It shouldn't be something whose overhead you are trying to avoid.  If
> kmap is that expensive then kmap needs to be fixed, instead of your
> code working around a perceived problem.
> 
> At least that is what it looks like from here.

When adding the kmap/kunmap calls to my code I arranged them so they
would be used as infrequently as possible.  After working on making
the critical paths in my code fast, I didn't want to add operations
that have an uncertain cost into those paths unless there is a good
reason.  Which is why I'm asking how significant the kmap limit is.



David Wragg



Re: limit on number of kmapped pages

2001-01-23 Thread Eric W. Biederman

David Wragg <[EMAIL PROTECTED]> writes:

> While testing some kernel code of mine on a machine with
> CONFIG_HIGHMEM enabled, I've run into the limit on the number of pages
> that can be kmapped at once.  I was surprised to find it was so low --
> only 2MB/4MB of address space for kmap (according to the value of
> LAST_PKMAP; vmalloc gets a much more generous 128MB!).

kmap is for quick, transitory mappings, not for permanent mappings.
At least that was my impression.  The persistence is just intended to
kill off error-prone cases.

> My code allocates a large number of pages (4MB-worth would be typical)
> to act as a buffer; interrupt handlers/BHs copy data into this buffer,
> then a kernel thread moves filled pages into the page cache and
> replaces them with newly allocated pages.  To avoid overhead on
> IRQs/BHs, all the pages in the buffer are kmapped.  But with
> CONFIG_HIGHMEM if I try to kmap 512 pages or more at once, the kernel
> locks up (fork() starts blocking inside kmap(), etc.).

This may be a reasonable use; I'm not certain.  It wasn't the application
kmap was designed to deal with, though...
 
> There are ways I could work around this (either by using kmap_atomic,
> or by adding another kernel thread that maintains a window of kmapped
> pages within the buffer).  But I'd prefer not to have to add a lot of
> code specific to the CONFIG_HIGHMEM case.

Why do you need such a large buffer?  And why do the pages need to be kmapped?
If you are doing dma there is no such requirement...  And unless you are
running on something faster than a PCI bus I can't imagine why you need
a buffer that big.  My hunch is that it makes sense to do the kmap,
and the i/o in the bottom_half.  What is wrong with that?

kmap should be quick and fast because it is for transitory mappings.
It shouldn't be something whose overhead you are trying to avoid.
If kmap is that expensive then kmap needs to be fixed, instead
of your code working around a perceived problem.

At least that is what it looks like from here.

Eric


