Give users a hint when their locate database is too small.

2012-11-12 Thread Eitan Adler
What do people think of this? Maybe /usr/libexec/locate.updatedb is a
better pointer?

commit fb03b777daf2c69bb9612902e38fdb25b256be72
Author: Eitan Adler 
Date:   Mon Nov 12 22:05:55 2012 -0500

Give users a hint when their locate database is too small.

Reviewed by: ???
Approved by: ???
MFC after:  3 weeks

diff --git a/usr.bin/locate/locate/locate.c b/usr.bin/locate/locate/locate.c
index b0faefb..f0c8c37 100644
--- a/usr.bin/locate/locate/locate.c
+++ b/usr.bin/locate/locate/locate.c
@@ -292,7 +292,7 @@ search_mmap(db, s)
err(1, "`%s'", db);
len = sb.st_size;
if (len < (2*NBG))
-   errx(1, "database too small: %s", db);
+   errx(1, "database too small: %s\nTry running /etc/periodic/weekly/310.locate", db);

if ((p = mmap((caddr_t)0, (size_t)len,
  PROT_READ, MAP_SHARED,


-- 
Eitan Adler
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"


Re: Memory reserves or lack thereof

2012-11-12 Thread Julian Elischer

On 11/12/12 3:49 PM, Adrian Chadd wrote:

On 12 November 2012 15:26, Alan Cox  wrote:

On 11/12/2012 5:24 PM, Adrian Chadd wrote:

.. wait, so what exactly would the difference be between M_NOWAIT and
M_WAITOK?


Whether or not the allocation can sleep until memory becomes available.

Ok, so we're still maintaining that particular behaviour. Cool.

         | no mem             | mem avail |
--------------------------------------------
M_WAITOK | wait, then success | success   |
--------------------------------------------
M_NOWAIT | returns failure    | success   |
--------------------------------------------

The question is whether the top-left case can ever fail for any other reason.




Adrian


Re: Memory reserves or lack thereof

2012-11-12 Thread Sushanth Rai


--- On Mon, 11/12/12, Alan Cox wrote:

> From: Alan Cox
> Subject: Re: Memory reserves or lack thereof
> To: "Konstantin Belousov"
> Cc: "Sushanth Rai", a...@freebsd.org, p...@freebsd.org, "Sears, Steven",
> "freebsd-hackers@freebsd.org"
> Date: Monday, November 12, 2012, 3:10 PM
> On 11/12/2012 3:48 PM, Konstantin Belousov wrote:
> > On Mon, Nov 12, 2012 at 01:28:02PM -0800, Sushanth Rai wrote:
> >> This patch still doesn't address the issue of M_NOWAIT calls driving
> >> the memory all the way down to 2 pages, right? It would be nice to
> >> have M_NOWAIT just do a non-sleep version of M_WAITOK and an
> >> M_USE_RESERVE flag to dig deep.
> > This is out of scope of the change. But it is required for any further
> > adjustments.
>
> I would suggest a somewhat different response:
>
> The patch does make M_NOWAIT into a "non-sleep version of M_WAITOK" and
> does reintroduce M_USE_RESERVE as a way to specify "dig deep".
>
> Currently, both M_NOWAIT and M_WAITOK can drive the cache/free memory
> down to two pages.  The effect of the patch is to stop M_NOWAIT at two
> pages rather than allowing it to continue to zero pages.


Thanks for the correction. I was associating VM_ALLOC_SYSTEM with just M_NOWAIT,
as it seemed to be in the first version of the patch.

Sushanth


Re: Memory reserves or lack thereof

2012-11-12 Thread Adrian Chadd
On 12 November 2012 15:26, Alan Cox  wrote:
> On 11/12/2012 5:24 PM, Adrian Chadd wrote:
>>
>> .. wait, so what exactly would the difference be between M_NOWAIT and
>> M_WAITOK?
>
>
> Whether or not the allocation can sleep until memory becomes available.

Ok, so we're still maintaining that particular behaviour. Cool.



Adrian


Re: Memory reserves or lack thereof

2012-11-12 Thread Alan Cox

On 11/12/2012 5:24 PM, Adrian Chadd wrote:

.. wait, so what exactly would the difference be between M_NOWAIT and M_WAITOK?


Whether or not the allocation can sleep until memory becomes available.



Re: Memory reserves or lack thereof

2012-11-12 Thread Adrian Chadd
.. wait, so what exactly would the difference be between M_NOWAIT and M_WAITOK?



adrian


Re: Memory reserves or lack thereof

2012-11-12 Thread Alan Cox

On 11/12/2012 3:48 PM, Konstantin Belousov wrote:

On Mon, Nov 12, 2012 at 01:28:02PM -0800, Sushanth Rai wrote:

This patch still doesn't address the issue of M_NOWAIT calls driving
the memory all the way down to 2 pages, right? It would be nice to
have M_NOWAIT just do a non-sleep version of M_WAITOK and an
M_USE_RESERVE flag to dig deep.

This is out of scope of the change. But it is required for any further
adjustments.


I would suggest a somewhat different response:

The patch does make M_NOWAIT into a "non-sleep version of M_WAITOK" and 
does reintroduce M_USE_RESERVE as a way to specify "dig deep".


Currently, both M_NOWAIT and M_WAITOK can drive the cache/free memory 
down to two pages.  The effect of the patch is to stop M_NOWAIT at two 
pages rather than allowing it to continue to zero pages.


When you say, "This is out of scope ...", I believe that you are 
referring to changing two pages into something larger.  I agree that 
this is out of scope for the current change.


Alan



Re: Memory reserves or lack thereof

2012-11-12 Thread Konstantin Belousov
On Mon, Nov 12, 2012 at 01:28:02PM -0800, Sushanth Rai wrote:
> This patch still doesn't address the issue of M_NOWAIT calls driving
> the memory all the way down to 2 pages, right? It would be nice to
> have M_NOWAIT just do a non-sleep version of M_WAITOK and an
> M_USE_RESERVE flag to dig deep.

This is out of scope of the change. But it is required for any further
adjustments.




Re: Memory reserves or lack thereof

2012-11-12 Thread Sushanth Rai
This patch still doesn't address the issue of M_NOWAIT calls driving the memory
all the way down to 2 pages, right? It would be nice to have M_NOWAIT just do
a non-sleep version of M_WAITOK and an M_USE_RESERVE flag to dig deep.

Sushanth 

--- On Mon, 11/12/12, Konstantin Belousov wrote:

> From: Konstantin Belousov
> Subject: Re: Memory reserves or lack thereof
> To: a...@freebsd.org
> Cc: p...@freebsd.org, "Sears, Steven", "freebsd-hackers@freebsd.org"
> Date: Monday, November 12, 2012, 5:36 AM
> On Sun, Nov 11, 2012 at 03:40:24PM -0600, Alan Cox wrote:
> > On Sat, Nov 10, 2012 at 7:20 AM, Konstantin Belousov wrote:
> > > On Fri, Nov 09, 2012 at 07:10:04PM +, Sears, Steven wrote:
> > > > I have a memory subsystem design question that I'm hoping someone
> > > > can answer.
> > > >
> > > > I've been looking at a machine that is completely out of memory, as in
> > > >
> > > >  v_free_count = 0,
> > > >  v_cache_count = 0,
> > > >
> > > > I wondered how a machine could completely run out of memory like
> > > > this, especially after finding a lack of interrupt storms or other
> > > > pathologies that would tend to overcommit memory. So I started
> > > > investigating.
> > > >
> > > > Most allocators come down to vm_page_alloc(), which has this guard:
> > > >
> > > >   if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
> > > >   page_req = VM_ALLOC_SYSTEM;
> > > >   };
> > > >
> > > >   if (cnt.v_free_count + cnt.v_cache_count > cnt.v_free_reserved ||
> > > >   (page_req == VM_ALLOC_SYSTEM &&
> > > >   cnt.v_free_count + cnt.v_cache_count > cnt.v_interrupt_free_min) ||
> > > >   (page_req == VM_ALLOC_INTERRUPT &&
> > > >   cnt.v_free_count + cnt.v_cache_count > 0)) {
> > > >
> > > > The key observation is if VM_ALLOC_INTERRUPT is set, it will
> > > > allocate every last page.
> > > >
> > > > From the name one might expect VM_ALLOC_INTERRUPT to be somewhat
> > > > rare, perhaps only used from interrupt threads. Not so, see
> > > > kmem_malloc() or uma_small_alloc() which both contain this mapping:
> > > >
> > > >   if ((flags & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> > > >   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> > > >   else
> > > >   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> > > >
> > > > Note that M_USE_RESERVE has been deprecated and is used in just a
> > > > handful of places. Also note that lots of code paths come through
> > > > these routines.
> > > >
> > > > What this means is essentially _any_ allocation using M_NOWAIT will
> > > > bypass whatever reserves have been held back and will take every
> > > > last page available.
> > > >
> > > > There is no documentation stating M_NOWAIT has this side effect of
> > > > essentially being privileged, so any innocuous piece of code that
> > > > can't block will use it. And of course M_NOWAIT is literally used
> > > > all over.
> > > >
> > > > It looks to me like the design goal of the BSD allocators is on
> > > > recovery; it will give all pages away knowing it can recover.
> > > >
> > > > Am I missing anything? I would have expected some small number of
> > > > pages to be held in reserve just in case. And I didn't expect
> > > > M_NOWAIT to be a sort of back door for grabbing memory.
> > >
> > > Your analysis is right, there is nothing to add or correct.
> > > This is the reason to strongly prefer M_WAITOK.
> >
> > Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It
> > was well understood that it should only be used by interrupt handlers.
> >
> > The trouble is that M_NOWAIT conflates two orthogonal things.  The
> > obvious being that the allocation shouldn't sleep.  The other being
> > how far we're willing to deplete the cache/free page queues.
> >
> > When fine-grained locking got sprinkled throughout the kernel, we all
> > too often found ourselves wanting to do allocations without the
> > possibility of blocking.  So, M_NOWAIT became commonplace, where it
> > wasn't before.
> >
> > This had the unintended consequence of introducing a lot of memory
> > allocations in the top-half of the kernel, i.e., non-interrupt
> > handling code, that were digging deep into the cache/free page queues.
> >
> > Also, ironically, in today's kernel an "M_NOWAIT | M_USE_RESERVE"
> > allocation is less likely to succeed than an "M_NOWAIT" allocation.
> > However, prior to FreeBSD 7.x, M_NOWAIT couldn't allocate a cached
> > page; it could only allocate a free page.  M_USE_RESERVE said that it
> > is ok to allocate a cached page even though M_NOWAIT was specified.
> > Consequently, the system wouldn't dig as far into the free page queue
> > if M_USE_RESERVE was specified, because it was allowed to reclaim a
> > cached page.
> >
> > In conclusion, I think it's time that we change M_NOWAIT so that it
> > doesn't dig any deeper into the cache

Re: Memory reserves or lack thereof

2012-11-12 Thread Konstantin Belousov
On Mon, Nov 12, 2012 at 11:35:42AM -0600, Alan Cox wrote:
> Agreed.  Most recently I eliminated several uses from the arm pmap
> implementations.  There is, however, one other use:
> 
> ofed/include/linux/gfp.h:#defineGFP_ATOMIC  (M_NOWAIT |
> M_USE_RESERVE)
Yes, I forgot to mention this. I have no idea about the semantics of the
GFP_ATOMIC compat flag.

Below is the updated patch with your two notes applied.

diff --git a/sys/amd64/amd64/uma_machdep.c b/sys/amd64/amd64/uma_machdep.c
index dc9c307..ab1e869 100644
--- a/sys/amd64/amd64/uma_machdep.c
+++ b/sys/amd64/amd64/uma_machdep.c
@@ -29,6 +29,7 @@ __FBSDID("$FreeBSD$");
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -48,12 +49,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
int pflags;
 
*flags = UMA_SLAB_PRIV;
-   if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
-   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
-   else
-   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
-   if (wait & M_ZERO)
-   pflags |= VM_ALLOC_ZERO;
+   pflags = m2vm_flags(wait, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
for (;;) {
m = vm_page_alloc(NULL, 0, pflags);
if (m == NULL) {
diff --git a/sys/arm/arm/vm_machdep.c b/sys/arm/arm/vm_machdep.c
index f60cdb1..75366e3 100644
--- a/sys/arm/arm/vm_machdep.c
+++ b/sys/arm/arm/vm_machdep.c
@@ -651,12 +651,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
ret = ((void *)kmem_malloc(kmem_map, bytes, M_NOWAIT));
return (ret);
}
-   if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
-   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
-   else
-   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
-   if (wait & M_ZERO)
-   pflags |= VM_ALLOC_ZERO;
+   pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
for (;;) {
m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
if (m == NULL) {
diff --git a/sys/fs/devfs/devfs_devs.c b/sys/fs/devfs/devfs_devs.c
index 71caa29..2ce1ca6 100644
--- a/sys/fs/devfs/devfs_devs.c
+++ b/sys/fs/devfs/devfs_devs.c
@@ -121,7 +121,7 @@ devfs_alloc(int flags)
struct cdev *cdev;
struct timespec ts;
 
-   cdp = malloc(sizeof *cdp, M_CDEVP, M_USE_RESERVE | M_ZERO |
+   cdp = malloc(sizeof *cdp, M_CDEVP, M_ZERO |
((flags & MAKEDEV_NOWAIT) ? M_NOWAIT : M_WAITOK));
if (cdp == NULL)
return (NULL);
diff --git a/sys/ia64/ia64/uma_machdep.c b/sys/ia64/ia64/uma_machdep.c
index 37353ff..9f77762 100644
--- a/sys/ia64/ia64/uma_machdep.c
+++ b/sys/ia64/ia64/uma_machdep.c
@@ -46,12 +46,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
int pflags;
 
*flags = UMA_SLAB_PRIV;
-   if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
-   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
-   else
-   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
-   if (wait & M_ZERO)
-   pflags |= VM_ALLOC_ZERO;
+   pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
 
for (;;) {
m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
diff --git a/sys/mips/mips/uma_machdep.c b/sys/mips/mips/uma_machdep.c
index 798e632..24baef0 100644
--- a/sys/mips/mips/uma_machdep.c
+++ b/sys/mips/mips/uma_machdep.c
@@ -48,11 +48,7 @@ uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
void *va;
 
*flags = UMA_SLAB_PRIV;
-
-   if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
-   pflags = VM_ALLOC_INTERRUPT;
-   else
-   pflags = VM_ALLOC_SYSTEM;
+   pflags = m2vm_flags(wait, 0);
 
for (;;) {
m = pmap_alloc_direct_page(0, pflags);
diff --git a/sys/powerpc/aim/mmu_oea64.c b/sys/powerpc/aim/mmu_oea64.c
index a491680..3e320b9 100644
--- a/sys/powerpc/aim/mmu_oea64.c
+++ b/sys/powerpc/aim/mmu_oea64.c
@@ -1369,12 +1369,7 @@ moea64_uma_page_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
*flags = UMA_SLAB_PRIV;
needed_lock = !PMAP_LOCKED(kernel_pmap);
 
-if ((wait & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
-pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
-else
-pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
-if (wait & M_ZERO)
-pflags |= VM_ALLOC_ZERO;
+   pflags = m2vm_flags(wait, VM_ALLOC_WIRED);
 
 for (;;) {
 m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ);
diff --git a/sys/powerpc/aim/slb.c b/sys/powerpc/aim/slb.c
index 162c7fb..3882bfa 100644
--- a/sys/powerpc/aim/slb.c
+++ b/sys/powerpc/aim/slb.c
@@ -483,12 +483,7 @@ slb_uma_real_alloc(uma_zone_t zone, int bytes, u_int

Re: Memory reserves or lack thereof

2012-11-12 Thread Alfred Perlstein


On Nov 12, 2012, at 4:11 AM, Andre Oppermann  wrote:
> 
> 
> I don't think many places depend on M_NOWAIT digging deep.  I'm
> perfectly happy with having M_NOWAIT give up on first try.  Only
> together with M_TRY_REALLY_HARD it would dig into reserves.
> 
> PS: We have a really nasty namespace collision with the mbuf flags
> which use the M_* prefix as well.

Agreed. 

> 




Re: Memory reserves or lack thereof

2012-11-12 Thread Alan Cox
On 11/12/2012 07:36, Konstantin Belousov wrote:
> On Sun, Nov 11, 2012 at 03:40:24PM -0600, Alan Cox wrote:
>> On Sat, Nov 10, 2012 at 7:20 AM, Konstantin Belousov 
>> wrote:
>>
>>> On Fri, Nov 09, 2012 at 07:10:04PM +, Sears, Steven wrote:
 I have a memory subsystem design question that I'm hoping someone can
>>> answer.
 I've been looking at a machine that is completely out of memory, as in

  v_free_count = 0,
  v_cache_count = 0,

 I wondered how a machine could completely run out of memory like this,
>>> especially after finding a lack of interrupt storms or other pathologies
>>> that would tend to overcommit memory. So I started investigating.
 Most allocators come down to vm_page_alloc(), which has this guard:

   if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
   page_req = VM_ALLOC_SYSTEM;
   };

   if (cnt.v_free_count + cnt.v_cache_count > cnt.v_free_reserved ||
   (page_req == VM_ALLOC_SYSTEM &&
   cnt.v_free_count + cnt.v_cache_count >
>>> cnt.v_interrupt_free_min) ||
   (page_req == VM_ALLOC_INTERRUPT &&
   cnt.v_free_count + cnt.v_cache_count > 0)) {

 The key observation is if VM_ALLOC_INTERRUPT is set, it will allocate
>>> every last page.
 From the name one might expect VM_ALLOC_INTERRUPT to be somewhat rare,
>>> perhaps only used from interrupt threads. Not so, see kmem_malloc() or
>>> uma_small_alloc() which both contain this mapping:
   if ((flags & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
   else
   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;

 Note that M_USE_RESERVE has been deprecated and is used in just a
>>> handful of places. Also note that lots of code paths come through these
>>> routines.
 What this means is essentially _any_ allocation using M_NOWAIT will
>>> bypass whatever reserves have been held back and will take every last page
>>> available.
 There is no documentation stating M_NOWAIT has this side effect of
>>> essentially being privileged, so any innocuous piece of code that can't
>>> block will use it. And of course M_NOWAIT is literally used all over.
 It looks to me like the design goal of the BSD allocators is on
>>> recovery; it will give all pages away knowing it can recover.
 Am I missing anything? I would have expected some small number of pages
>>> to be held in reserve just in case. And I didn't expect M_NOWAIT to be a
>>> sort of back door for grabbing memory.
>>> Your analysis is right, there is nothing to add or correct.
>>> This is the reason to strongly prefer M_WAITOK.
>>>
>> Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It was
>> well understood that it should only be used by interrupt handlers.
>>
>> The trouble is that M_NOWAIT conflates two orthogonal things.  The obvious
>> being that the allocation shouldn't sleep.  The other being how far we're
>> willing to deplete the cache/free page queues.
>>
>> When fine-grained locking got sprinkled throughout the kernel, we all too
>> often found ourselves wanting to do allocations without the possibility of
>> blocking.  So, M_NOWAIT became commonplace, where it wasn't before.
>>
>> This had the unintended consequence of introducing a lot of memory
>> allocations in the top-half of the kernel, i.e., non-interrupt handling
>> code, that were digging deep into the cache/free page queues.
>>
>> Also, ironically, in today's kernel an "M_NOWAIT | M_USE_RESERVE"
>> allocation is less likely to succeed than an "M_NOWAIT" allocation.
>> However, prior to FreeBSD 7.x, M_NOWAIT couldn't allocate a cached page; it
>> could only allocate a free page.  M_USE_RESERVE said that it is ok to allocate
>> a cached page even though M_NOWAIT was specified.  Consequently, the system
>> wouldn't dig as far into the free page queue if M_USE_RESERVE was
>> specified, because it was allowed to reclaim a cached page.
>>
>> In conclusion, I think it's time that we change M_NOWAIT so that it doesn't
>> dig any deeper into the cache/free page queues than M_WAITOK does and
>> reintroduce a M_USE_RESERVE-like flag that says dig deep into the
>> cache/free page queues.  The trouble is that we then need to identify all
>> of those places that are implicitly depending on the current behavior of
>> M_NOWAIT also digging deep into the cache/free page queues so that we can
>> add an explicit M_USE_RESERVE.
>>
>> Alan
>>
>> P.S. I suspect that we should also increase the size of the "page reserve"
>> that is kept for VM_ALLOC_INTERRUPT allocations in vm_page_alloc*().  How
>> many legitimate users of a new M_USE_RESERVE-like flag in today's kernel
>> could actually be satisfied by two pages?
> I am almost sure that most of the people who use the M_NOWAIT flag do not
> know about the 'allow the deeper drain of free queue' effect. As such, I believe
> we should 

Re: Memory reserves or lack thereof

2012-11-12 Thread Andre Oppermann

On 12.11.2012 15:47, Ian Lepore wrote:

On Mon, 2012-11-12 at 13:18 +0100, Andre Oppermann wrote:

Well, what's the current set of best practices for allocating mbufs?


If an allocation is driven by user space then you can use M_WAITOK.

If an allocation is driven by the driver or kernel (callout and so on)
you do M_NOWAIT and handle a failure by trying again later either
directly by rescheduling the callout or by the upper layer retransmit
logic.

On top of that individual mbuf allocation or stitching mbufs and
clusters together manually is deprecated.  If at all possible you
should use m_getm2().


root@pico:/root # man m_getm2
No manual entry for m_getm2


Oops... Have to fix that.


So when you say manually stitching mbufs together is deprecated, I take
you mean in the case where you're letting the mbuf routines allocate the
actual buffer space for you?


I mean allocating an mbuf, a cluster and then stitching them together.
You can do it in one with m_getcl().


I've got an ethernet driver on an ARM SoC in which the hardware receives
into a series of buffers fixed at 128 bytes.  Right now the code is
allocating a cluster and then looping using m_append() to reassemble
these buffers back into a full contiguous frame in a cluster.  I was
going to have a shot at using MEXTADD() to manually string the series of
hardware/dma buffers together without copying the data.  Is that sort of
usage still a good idea?  (And would it actually be a performance win?


That really depends on the particular usage.  Attaching the 128 byte
buffers to mbufs probably isn't much of a win considering an mbuf is
256 bytes in size.  You could just as well copy each 128-byte buffer into the
data section.  Allocating a 2K cluster and copying into it is more
efficient on the overall system.


If I hand it off to the net stack and an m_pullup() or similar is going
to happen along the way anyway, I might as well do it at driver level.)


If you properly m_align() the mbuf cluster before you copy into it
there shouldn't be any m_pullup's happening.

--
Andre



Re: Memory reserves or lack thereof

2012-11-12 Thread Ian Lepore
On Mon, 2012-11-12 at 13:18 +0100, Andre Oppermann wrote:
> > Well, what's the current set of best practices for allocating mbufs?
> 
> If an allocation is driven by user space then you can use M_WAITOK.
> 
> If an allocation is driven by the driver or kernel (callout and so on)
> you do M_NOWAIT and handle a failure by trying again later either
> directly by rescheduling the callout or by the upper layer retransmit
> logic.
> 
> On top of that individual mbuf allocation or stitching mbufs and
> clusters together manually is deprecated.  If at all possible you
> should use m_getm2().

root@pico:/root # man m_getm2
No manual entry for m_getm2

So when you say manually stitching mbufs together is deprecated, I take
you mean in the case where you're letting the mbuf routines allocate the
actual buffer space for you?

I've got an ethernet driver on an ARM SoC in which the hardware receives
into a series of buffers fixed at 128 bytes.  Right now the code is
allocating a cluster and then looping using m_append() to reassemble
these buffers back into a full contiguous frame in a cluster.  I was
going to have a shot at using MEXTADD() to manually string the series of
hardware/dma buffers together without copying the data.  Is that sort of
usage still a good idea?  (And would it actually be a performance win?
If I hand it off to the net stack and an m_pullup() or similar is going
to happen along the way anyway, I might as well do it at driver level.)

-- Ian




Re: Memory reserves or lack thereof

2012-11-12 Thread Peter Holm
On Mon, Nov 12, 2012 at 03:36:38PM +0200, Konstantin Belousov wrote:
> On Sun, Nov 11, 2012 at 03:40:24PM -0600, Alan Cox wrote:
> > On Sat, Nov 10, 2012 at 7:20 AM, Konstantin Belousov 
> > wrote:
> > 
> > > On Fri, Nov 09, 2012 at 07:10:04PM +, Sears, Steven wrote:
> > > > I have a memory subsystem design question that I'm hoping someone can
> > > answer.
> > > >
> > > > I've been looking at a machine that is completely out of memory, as in
> > > >
> > > >  v_free_count = 0,
> > > >  v_cache_count = 0,
> > > >
> > > > I wondered how a machine could completely run out of memory like this,
> > > especially after finding a lack of interrupt storms or other pathologies
> > > that would tend to overcommit memory. So I started investigating.
> > > >
> > > > Most allocators come down to vm_page_alloc(), which has this guard:
> > > >
> > > >   if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
> > > >   page_req = VM_ALLOC_SYSTEM;
> > > >   };
> > > >
> > > >   if (cnt.v_free_count + cnt.v_cache_count > cnt.v_free_reserved ||
> > > >   (page_req == VM_ALLOC_SYSTEM &&
> > > >   cnt.v_free_count + cnt.v_cache_count >
> > > cnt.v_interrupt_free_min) ||
> > > >   (page_req == VM_ALLOC_INTERRUPT &&
> > > >   cnt.v_free_count + cnt.v_cache_count > 0)) {
> > > >
> > > > The key observation is if VM_ALLOC_INTERRUPT is set, it will allocate
> > > every last page.
> > > >
> > > > From the name one might expect VM_ALLOC_INTERRUPT to be somewhat rare,
> > > perhaps only used from interrupt threads. Not so, see kmem_malloc() or
> > > uma_small_alloc() which both contain this mapping:
> > > >
> > > >   if ((flags & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> > > >   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> > > >   else
> > > >   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> > > >
> > > > Note that M_USE_RESERVE has been deprecated and is used in just a
> > > handful of places. Also note that lots of code paths come through these
> > > routines.
> > > >
> > > > What this means is essentially _any_ allocation using M_NOWAIT will
> > > bypass whatever reserves have been held back and will take every last page
> > > available.
> > > >
> > > > There is no documentation stating M_NOWAIT has this side effect of
> > > essentially being privileged, so any innocuous piece of code that can't
> > > block will use it. And of course M_NOWAIT is literally used all over.
> > > >
> > > > It looks to me like the design goal of the BSD allocators is on
> > > recovery; it will give all pages away knowing it can recover.
> > > >
> > > > Am I missing anything? I would have expected some small number of pages
> > > to be held in reserve just in case. And I didn't expect M_NOWAIT to be a
> > > sort of back door for grabbing memory.
> > > >
> > >
> > > Your analysis is right, there is nothing to add or correct.
> > > This is the reason to strongly prefer M_WAITOK.
> > >
> > 
> > Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It was
> > well understood that it should only be used by interrupt handlers.
> > 
> > The trouble is that M_NOWAIT conflates two orthogonal things.  The obvious
> > being that the allocation shouldn't sleep.  The other being how far we're
> > willing to deplete the cache/free page queues.
> > 
> > When fine-grained locking got sprinkled throughout the kernel, we all too
> > often found ourselves wanting to do allocations without the possibility of
> > blocking.  So, M_NOWAIT became commonplace, where it wasn't before.
> > 
> > This had the unintended consequence of introducing a lot of memory
> > allocations in the top-half of the kernel, i.e., non-interrupt handling
> > code, that were digging deep into the cache/free page queues.
> > 
> > Also, ironically, in today's kernel an "M_NOWAIT | M_USE_RESERVE"
> > allocation is less likely to succeed than an "M_NOWAIT" allocation.
> > However, prior to FreeBSD 7.x, M_NOWAIT couldn't allocate a cached page; it
> > could only allocate a free page.  M_USE_RESERVE said that it is ok to allocate
> > a cached page even though M_NOWAIT was specified.  Consequently, the system
> > wouldn't dig as far into the free page queue if M_USE_RESERVE was
> > specified, because it was allowed to reclaim a cached page.
> > 
> > In conclusion, I think it's time that we change M_NOWAIT so that it doesn't
> > dig any deeper into the cache/free page queues than M_WAITOK does and
> > reintroduce a M_USE_RESERVE-like flag that says dig deep into the
> > cache/free page queues.  The trouble is that we then need to identify all
> > of those places that are implicitly depending on the current behavior of
> > M_NOWAIT also digging deep into the cache/free page queues so that we can
> > add an explicit M_USE_RESERVE.
> > 
> > Alan
> > 
> > P.S. I suspect that we should also increase the size of the "page reserve"
> > that is kept for VM_ALLOC_INTERRUPT allocations in vm_page_alloc*().  How

Re: Memory reserves or lack thereof

2012-11-12 Thread Konstantin Belousov
On Sun, Nov 11, 2012 at 03:40:24PM -0600, Alan Cox wrote:
> On Sat, Nov 10, 2012 at 7:20 AM, Konstantin Belousov 
> wrote:
> 
> > On Fri, Nov 09, 2012 at 07:10:04PM +, Sears, Steven wrote:
> > > I have a memory subsystem design question that I'm hoping someone can
> > answer.
> > >
> > > I've been looking at a machine that is completely out of memory, as in
> > >
> > >  v_free_count = 0,
> > >  v_cache_count = 0,
> > >
> > > I wondered how a machine could completely run out of memory like this,
> > especially after finding a lack of interrupt storms or other pathologies
> > that would tend to overcommit memory. So I started investigating.
> > >
> > > Most allocators come down to vm_page_alloc(), which has this guard:
> > >
> > >   if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
> > >   page_req = VM_ALLOC_SYSTEM;
> > >   };
> > >
> > >   if (cnt.v_free_count + cnt.v_cache_count > cnt.v_free_reserved ||
> > >   (page_req == VM_ALLOC_SYSTEM &&
> > >   cnt.v_free_count + cnt.v_cache_count >
> > cnt.v_interrupt_free_min) ||
> > >   (page_req == VM_ALLOC_INTERRUPT &&
> > >   cnt.v_free_count + cnt.v_cache_count > 0)) {
> > >
> > > The key observation is if VM_ALLOC_INTERRUPT is set, it will allocate
> > every last page.
> > >
> > > From the name one might expect VM_ALLOC_INTERRUPT to be somewhat rare,
> > perhaps only used from interrupt threads. Not so, see kmem_malloc() or
> > uma_small_alloc() which both contain this mapping:
> > >
> > >   if ((flags & (M_NOWAIT|M_USE_RESERVE)) == M_NOWAIT)
> > >   pflags = VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED;
> > >   else
> > >   pflags = VM_ALLOC_SYSTEM | VM_ALLOC_WIRED;
> > >
> > > Note that M_USE_RESERVE has been deprecated and is used in just a
> > handful of places. Also note that lots of code paths come through these
> > routines.
> > >
> > > What this means is essentially _any_ allocation using M_NOWAIT will
> > bypass whatever reserves have been held back and will take every last page
> > available.
> > >
> > > There is no documentation stating M_NOWAIT has this side effect of
> > essentially being privileged, so any innocuous piece of code that can't
> > block will use it. And of course M_NOWAIT is literally used all over.
> > >
> > > It looks to me like the design goal of the BSD allocators is on
> > recovery; it will give all pages away knowing it can recover.
> > >
> > > Am I missing anything? I would have expected some small number of pages
> > to be held in reserve just in case. And I didn't expect M_NOWAIT to be a
> > sort of back door for grabbing memory.
> > >
> >
> > Your analysis is right, there is nothing to add or correct.
> > This is the reason to strongly prefer M_WAITOK.
> >
> 
> Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It was
> well understood that it should only be used by interrupt handlers.
> 
> The trouble is that M_NOWAIT conflates two orthogonal things.  The obvious
> being that the allocation shouldn't sleep.  The other being how far we're
> willing to deplete the cache/free page queues.
> 
> When fine-grained locking got sprinkled throughout the kernel, we all too
> often found ourselves wanting to do allocations without the possibility of
> blocking.  So, M_NOWAIT became commonplace, where it wasn't before.
> 
> This had the unintended consequence of introducing a lot of memory
> allocations in the top-half of the kernel, i.e., non-interrupt handling
> code, that were digging deep into the cache/free page queues.
> 
> Also, ironically, in today's kernel an "M_NOWAIT | M_USE_RESERVE"
> allocation is less likely to succeed than an "M_NOWAIT" allocation.
> However, prior to FreeBSD 7.x, M_NOWAIT couldn't allocate a cached page; it
> could only allocate a free page.  M_USE_RESERVE said that it was ok to allocate
> a cached page even though M_NOWAIT was specified.  Consequently, the system
> wouldn't dig as far into the free page queue if M_USE_RESERVE was
> specified, because it was allowed to reclaim a cached page.
> 
> In conclusion, I think it's time that we change M_NOWAIT so that it doesn't
> dig any deeper into the cache/free page queues than M_WAITOK does and
> reintroduce a M_USE_RESERVE-like flag that says dig deep into the
> cache/free page queues.  The trouble is that we then need to identify all
> of those places that are implicitly depending on the current behavior of
> M_NOWAIT also digging deep into the cache/free page queues so that we can
> add an explicit M_USE_RESERVE.
> 
> Alan
> 
> P.S. I suspect that we should also increase the size of the "page reserve"
> that is kept for VM_ALLOC_INTERRUPT allocations in vm_page_alloc*().  How
> many legitimate users of a new M_USE_RESERVE-like flag in today's kernel
> could actually be satisfied by two pages?

I am almost sure that most people who use the M_NOWAIT flag do not
know about the 'allow a deeper drain of the free queue' effect. As such,
I believe we sh

Re: Memory reserves or lack thereof

2012-11-12 Thread Andre Oppermann

On 12.11.2012 03:02, Adrian Chadd wrote:

On 11 November 2012 13:40, Alan Cox  wrote:



Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It was
well understood that it should only be used by interrupt handlers.

The trouble is that M_NOWAIT conflates two orthogonal things.  The obvious
being that the allocation shouldn't sleep.  The other being how far we're
willing to deplete the cache/free page queues.

When fine-grained locking got sprinkled throughout the kernel, we all too
often found ourselves wanting to do allocations without the possibility of
blocking.  So, M_NOWAIT became commonplace, where it wasn't before.


Well, what's the current set of best practices for allocating mbufs?


If an allocation is driven by user space then you can use M_WAITOK.

If an allocation is driven by the driver or kernel (callout and so on)
you do M_NOWAIT and handle a failure by trying again later either
directly by rescheduling the callout or by the upper layer retransmit
logic.

On top of that, individual mbuf allocation or stitching mbufs and
clusters together manually is deprecated.  If at all possible you
should use m_getm2().


I don't mind going through ath(4) and net80211(4), looking to make it
behave better with mbuf allocations. There's 49 M_NOWAIT's in net80211
and 10 in ath(4). I wonder how many of them are synonyms with "don't
fail allocating", too. Hm.


Mbuf allocations are normally allowed to fail without serious
after-effects other than retransmits and some overall recovery
pain.

Only non-mbuf memory allocations for important structures or
state that can't be recreated on retransmit should dig into
reserves.  Normally this is a very rare case in network related
code.

--
Andre

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"


Re: Memory reserves or lack thereof

2012-11-12 Thread Andre Oppermann

On 11.11.2012 22:40, Alan Cox wrote:

On Sat, Nov 10, 2012 at 7:20 AM, Konstantin Belousov wrote:

Your analysis is right, there is nothing to add or correct.
This is the reason to strongly prefer M_WAITOK.



Agreed.  Once upon a time, before SMPng, M_NOWAIT was rarely used.  It was
well understood that it should only be used by interrupt handlers.

The trouble is that M_NOWAIT conflates two orthogonal things.  The obvious
being that the allocation shouldn't sleep.  The other being how far we're
willing to deplete the cache/free page queues.

When fine-grained locking got sprinkled throughout the kernel, we all too
often found ourselves wanting to do allocations without the possibility of
blocking.  So, M_NOWAIT became commonplace, where it wasn't before.


Yes, we have many places where we don't want to sleep for example in
the network code.  There we simply want to be told that we've run out
of memory and handle the failure.  It's expected to happen from time
to time.  We don't need or want to dig deep or into reserves.  Packets
are expected to get lost from time to time and upper layer protocols
will handle retransmits just fine.  What we *don't* want normally is to
get blocked on a failing memory allocation.  We'd rather drop this one
and go on with the next packet to avoid the head of line blocking
problem where everything cascades to a total halt.

As a side note we don't do many, if any, true interrupt time allocations
anymore.  Usually the interrupt is just acknowledged in interrupt
context and a taskqueue or ithread is scheduled to do all the hard work.
Neither runs in interrupt context.


This had the unintended consequence of introducing a lot of memory
allocations in the top-half of the kernel, i.e., non-interrupt handling
code, that were digging deep into the cache/free page queues.

Also, ironically, in today's kernel an "M_NOWAIT | M_USE_RESERVE"
allocation is less likely to succeed than an "M_NOWAIT" allocation.
However, prior to FreeBSD 7.x, M_NOWAIT couldn't allocate a cached page; it
could only allocate a free page.  M_USE_RESERVE said that it was ok to allocate
a cached page even though M_NOWAIT was specified.  Consequently, the system
wouldn't dig as far into the free page queue if M_USE_RESERVE was
specified, because it was allowed to reclaim a cached page.

In conclusion, I think it's time that we change M_NOWAIT so that it doesn't
dig any deeper into the cache/free page queues than M_WAITOK does and
reintroduce a M_USE_RESERVE-like flag that says dig deep into the
cache/free page queues.  The trouble is that we then need to identify all
of those places that are implicitly depending on the current behavior of
M_NOWAIT also digging deep into the cache/free page queues so that we can
add an explicit M_USE_RESERVE.


I don't think many places depend on M_NOWAIT digging deep.  I'm
perfectly happy with having M_NOWAIT give up on the first try.  Only
together with M_TRY_REALLY_HARD would it dig into reserves.

PS: We have a really nasty namespace collision with the mbuf flags
which use the M_* prefix as well.

--
Andre



help make sense of gdb backtrace

2012-11-12 Thread Anton Shterenlikht
I'm trying to debug firefox on ia64.
It segfaults on startup.
The output of "thread apply all bt" is at:

http://seis.bris.ac.uk/~mexas/ff17.gdb.log

or below.

Thanks for any hints on where to dig next.

Anton

Core was generated by `firefox'.
Program terminated with signal 11, Segmentation fault.
#0  0x000120476a80 in thr_kill () from /lib/libc.so.7
[New Thread 1357a7800 (LWP 110621/Media Decode)]
[New Thread 132b1f400 (LWP 110620/Media State)]
[New Thread 132b1f800 (LWP 110619/firefox)]
[New Thread 134fa0c00 (LWP 110618/mozStorage #4)]
[New Thread 134f9a800 (LWP 110617/mozStorage #3)]
[New Thread 133c4ec00 (LWP 110616/Cache Deleter)]
[New Thread 1323cc400 (LWP 110615/URL Classifier)]
[New Thread 1323cc800 (LWP 110614/Cache I/O)]
[New Thread 1323ce000 (LWP 110613/mozStorage #2)]
[New Thread 1323cec00 (LWP 107295/DNS Resolver #1)]
[New Thread 132717c00 (LWP 110611/StreamTrans #2)]
[New Thread 120803000 (LWP 110610/HTML5 Parser)]
[New Thread 132754c00 (LWP 110609/mozStorage #1)]
[New Thread 1304c6400 (LWP 110607/Cert Verify)]
[New Thread 1304c2c00 (LWP 110606/Timer)]
[New Thread 130190400 (LWP 110605/JS Watchdog)]
[New Thread 1303ca800 (LWP 110604/firefox)]
[New Thread 1303c9800 (LWP 109945/JS GC Helper)]
[New Thread 1303c3800 (LWP 109944/Hang Monitor)]
[New Thread 1303c5400 (LWP 109943/Socket Thread)]
[New Thread 120810400 (LWP 108325/XPCOM CC)]
[New Thread 120804000 (LWP 108320/Gecko_IOThread)]
[New Thread 120802400 (LWP 109371/firefox)]

Thread 23 (Thread 120802400 (LWP 109371/firefox)):
#0  0x0001291dfe21 in js::frontend::BytecodeEmitter::notes 
(this=0x7fffb360)
at BytecodeEmitter.h:174
#1  0x0001291fc830 in SetSrcNoteOffset (cx=0x132fe8200, 
bce=0x7fffb360, index=86, 
which=0, offset=25)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6838
#2  0x0001291fd3c0 in js::frontend::NewSrcNote2 (cx=0x132fe8200, 
bce=0x7fffb360, 
type=js::SRC_PCBASE, offset=25)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6756
#3  0x000129235ea0 in EmitCallOrNew (cx=0x132fe8200, 
bce=0x7fffb360, pn=0x131683d58, 
top=229)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:5457
#4  0x000129210830 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131683d58)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6484
#5  0x000129213a00 in EmitUnary (cx=0x132fe8200, bce=0x7fffb360, 
pn=0x131683848)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:5972
#6  0x0001292100f0 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131683848)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6420
#7  0x000129217c60 in EmitLogical (cx=0x132fe8200, bce=0x7fffb360, 
pn=0x131683d10)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:5528
#8  0x00012920f550 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131683d10)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6317
#9  0x00012921b770 in EmitIf (cx=0x132fe8200, bce=0x7fffb360, 
pn=0x131683ec0)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:4177
#10 0x00012920e380 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131683ec0)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6173
#11 0x000129218320 in EmitStatementList (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131686218, top=0)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:5189
#12 0x00012920ee90 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131686218)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6252
#13 0x00012920e2c0 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffb360, 
pn=0x131685e78)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:6168
#14 0x00012921e910 in js::frontend::EmitFunctionScript (cx=0x132fe8200, 
bce=0x7fffb360, body=0x131685e78)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:2661
#15 0x00012920b960 in EmitFunc (cx=0x132fe8200, bce=0x7fffbf30, 
pn=0x131685da0)
at 
/usr/ports/freebsd-gecko/www/firefox/work/mozilla-beta/js/src/frontend/BytecodeEmitter.cpp:4905
#16 0x00012920ce90 in js::frontend::EmitTree (cx=0x132fe8200, 
bce=0x7fffbf30, 
pn=0x131685da0)
at 
/usr/ports/freebsd-gecko/www/firef

Re: Memory reserves or lack thereof

2012-11-12 Thread Adrian Chadd
On 11 November 2012 20:24, Alfred Perlstein  wrote:
> I think very few of the m_nowaits actually need the reserve behavior. We 
> should probably switch away from it digging that deep by default and 
> introduce a flag and/or a per thread flag to set the behavior.

There's already a perfectly fine flag - M_WAITOK. Just don't hold any
locks, right? :)


Adrian