Re: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage)

2014-06-03 Thread Maxim Dounin
Hello!

On Mon, Jun 02, 2014 at 08:42:50PM +0400, Maxim Dounin wrote:

 Hello!
 
 On Sat, May 31, 2014 at 11:46:28PM -0300, Wandenberg Peixoto wrote:
 
  Hello Maxim,
  
  I executed my tests again and it seems that your improved patch version is
  working fine too.
 
 Good, thanks for testing.
 
  Do you plan to merge it into the nginx core soon?
 
 It's currently waiting for Igor's review, and will be committed 
 once it's passed.

Committed with minor changes after Igor's review:

http://hg.nginx.org/nginx/rev/c46657e391a3

Thanks for prodding this.

-- 
Maxim Dounin
http://nginx.org/



Re: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage)

2014-06-03 Thread Wandenberg Peixoto
Thanks agentzh, Maxim and Igor


On Tue, Jun 3, 2014 at 11:00 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Mon, Jun 02, 2014 at 08:42:50PM +0400, Maxim Dounin wrote:

  Hello!
 
  On Sat, May 31, 2014 at 11:46:28PM -0300, Wandenberg Peixoto wrote:
 
   Hello Maxim,
  
   I executed my tests again and it seems that your improved patch version is
   working fine too.
 
  Good, thanks for testing.
 
   Do you plan to merge it into the nginx core soon?
 
  It's currently waiting for Igor's review, and will be committed
  once it's passed.

 Committed with minor changes after Igor's review:

 http://hg.nginx.org/nginx/rev/c46657e391a3

 Thanks for prodding this.

 --
 Maxim Dounin
 http://nginx.org/


Re: Help with shared memory usage

2014-05-31 Thread Wandenberg Peixoto
Hello Maxim,

I executed my tests again and it seems that your improved patch version is
working fine too.

Do you plan to merge it into the nginx core soon?

Regards


On Wed, May 28, 2014 at 3:41 PM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Wed, Jan 22, 2014 at 08:51:50PM +0400, Maxim Dounin wrote:

  Hello!
 
  On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote:
 
   Hello Maxim,
  
   Did you have an opportunity to take a look at this last patch?
 
  It looks more or less correct, though I'm not happy with the
  checks done, and there are various style issues.  I'm planning to
  look into it and build a better version as time permits.

 Sorry for the long delay; see here for an improved patch:

 http://mailman.nginx.org/pipermail/nginx-devel/2014-May/005406.html

 --
 Maxim Dounin
 http://nginx.org/


Re: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage)

2014-05-31 Thread Wandenberg Peixoto
Hello Maxim,

I executed my tests again and it seems that your improved patch version is
working fine too.

Do you plan to merge it into the nginx core soon?


Regards



On Wed, May 28, 2014 at 3:38 PM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Sun, May 11, 2014 at 10:13:52PM -0700, Yichun Zhang (agentzh) wrote:

  Hello!
 
  On Mon, Jul 29, 2013 at 10:11 AM, Maxim Dounin wrote:
   Additionally, doing a full merge of all free blocks on a free
   operation looks too much.  It might be something we want to do on
   allocation failure, but not on a normal path in
   ngx_slab_free_pages().  And/or something lightweight may be done
   in ngx_slab_free_pages(), e.g., checking if pages following pages
   we are freeing are free too, and merging them in this case.
  
 
  I'd propose an alternative patch taking the second approach, that is,
  merging adjacent free pages (for both the previous and next blocks) in
  ngx_slab_free_pages(). This approach has the following advantages:
 
  1. It can effectively distribute the merging computations across all
  the page free operations, which can prevent potential frequent and
  long stalls when actually running out of large enough free blocks
  along the free list that is already very long for large zones (which
  usually consist of tons of one-page blocks upon allocation
  failures).
 
  2. It can also make multi-page allocations generally faster because
  we're merging pages immediately when we can and thus it's more likely
  to find large enough free blocks along the (relatively short) free
  list for ngx_slab_alloc_pages().
 
   The only downside is that I have to introduce an extra field
   prev_slab (8 bytes on x86_64) in ngx_slab_page_t in my patch, which
   makes the slab page metadata a bit larger.

 Below is a patch which does mostly the same without introducing
 any additional per-page fields.  Please take a look if it works
 for you.

 # HG changeset patch
 # User Maxim Dounin mdou...@mdounin.ru
 # Date 1401302011 -14400
 #  Wed May 28 22:33:31 2014 +0400
 # Node ID 7fb45c6042324e6cd92b0fb230c67a9c8c75681c
 # Parent  80bd391c90d11de707a05fcd0c9aa2a09c62877f
 Core: slab allocator defragmentation.

 Large allocations from a slab pool result in free page blocks being
 fragmented, eventually leading to a situation when no further allocations
 larger than a page size are possible from the pool.  While this isn't a
 problem for nginx itself, it is known to be bad for various 3rd party
 modules.  Fix is to merge adjacent blocks of free pages in the
 ngx_slab_free_pages() function.

 Prodded by Wandenberg Peixoto and Yichun Zhang.

 diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c
 --- a/src/core/ngx_slab.c
 +++ b/src/core/ngx_slab.c
 @@ -129,6 +129,8 @@ ngx_slab_init(ngx_slab_pool_t *pool)
          pool->pages->slab = pages;
      }

 +    pool->last = pool->pages + pages;
 +
      pool->log_nomem = 1;
      pool->log_ctx = &pool->zero;
      pool->zero = '\0';
 @@ -626,6 +628,8 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
          if (page->slab >= pages) {

              if (page->slab > pages) {
 +                page[page->slab - 1].prev = (uintptr_t) &page[pages];
 +
                  page[pages].slab = page->slab - pages;
                  page[pages].next = page->next;
                  page[pages].prev = page->prev;
 @@ -672,7 +676,8 @@ static void
  ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page,
      ngx_uint_t pages)
  {
 -    ngx_slab_page_t  *prev;
 +    ngx_uint_t        type;
 +    ngx_slab_page_t  *prev, *join;

      page->slab = pages--;

 @@ -686,6 +691,53 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
          page->next->prev = page->prev;
      }

 +    join = page + page->slab;
 +
 +    if (join < pool->last) {
 +        type = join->prev & NGX_SLAB_PAGE_MASK;
 +
 +        if (type == NGX_SLAB_PAGE && join->next != NULL) {
 +            pages += join->slab;
 +            page->slab += join->slab;
 +
 +            prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK);
 +            prev->next = join->next;
 +            join->next->prev = join->prev;
 +
 +            join->slab = NGX_SLAB_PAGE_FREE;
 +            join->next = NULL;
 +            join->prev = NGX_SLAB_PAGE;
 +        }
 +    }
 +
 +    if (page > pool->pages) {
 +        join = page - 1;
 +        type = join->prev & NGX_SLAB_PAGE_MASK;
 +
 +        if (type == NGX_SLAB_PAGE && join->slab == NGX_SLAB_PAGE_FREE) {
 +            join = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK);
 +        }
 +
 +        if (type == NGX_SLAB_PAGE && join->next != NULL) {
 +            pages += join->slab;
 +            join->slab += page->slab;
 +
 +            prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK);
 +            prev->next = join->next;
 +            join->next->prev = join->prev;
 +
 +            page->slab = NGX_SLAB_PAGE_FREE;
 +            page->next = NULL;
 +            page->prev = NGX_SLAB_PAGE;
 +
 +            page = join;

Re: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage)

2014-05-31 Thread Yichun Zhang (agentzh)
Hi Maxim!

On Wed, May 28, 2014 at 11:38 AM, Maxim Dounin wrote:

 Below is a patch which does mostly the same without introducing
 any additional per-page fields.  Please take a look if it works
 for you.


Thank you for looking into this!

I've run my local test suite for this issue against an nginx build with
your patch and it is passing completely. We haven't tried it out in
production though :)

Best regards,
-agentzh



Re: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage)

2014-05-28 Thread Maxim Dounin
Hello!

On Sun, May 11, 2014 at 10:13:52PM -0700, Yichun Zhang (agentzh) wrote:

 Hello!
 
 On Mon, Jul 29, 2013 at 10:11 AM, Maxim Dounin wrote:
  Additionally, doing a full merge of all free blocks on a free
  operation looks too much.  It might be something we want to do on
  allocation failure, but not on a normal path in
  ngx_slab_free_pages().  And/or something lightweight may be done
  in ngx_slab_free_pages(), e.g., checking if pages following pages
  we are freeing are free too, and merging them in this case.
 
 
 I'd propose an alternative patch taking the second approach, that is,
 merging adjacent free pages (for both the previous and next blocks) in
 ngx_slab_free_pages(). This approach has the following advantages:
 
 1. It can effectively distribute the merging computations across all
 the page free operations, which can prevent potential frequent and
 long stalls when actually running out of large enough free blocks
 along the free list that is already very long for large zones (which
 usually consist of tons of one-page blocks upon allocation
 failures).
 
 2. It can also make multi-page allocations generally faster because
 we're merging pages immediately when we can and thus it's more likely
 to find large enough free blocks along the (relatively short) free
 list for ngx_slab_alloc_pages().
 
 The only downside is that I have to introduce an extra field
 prev_slab (8 bytes on x86_64) in ngx_slab_page_t in my patch, which
 makes the slab page metadata a bit larger.

Below is a patch which does mostly the same without introducing 
any additional per-page fields.  Please take a look if it works 
for you.

# HG changeset patch
# User Maxim Dounin mdou...@mdounin.ru
# Date 1401302011 -14400
#  Wed May 28 22:33:31 2014 +0400
# Node ID 7fb45c6042324e6cd92b0fb230c67a9c8c75681c
# Parent  80bd391c90d11de707a05fcd0c9aa2a09c62877f
Core: slab allocator defragmentation.

Large allocations from a slab pool result in free page blocks being fragmented,
eventually leading to a situation when no further allocations larger than a page
size are possible from the pool.  While this isn't a problem for nginx itself,
it is known to be bad for various 3rd party modules.  Fix is to merge adjacent
blocks of free pages in the ngx_slab_free_pages() function.

Prodded by Wandenberg Peixoto and Yichun Zhang.

diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c
--- a/src/core/ngx_slab.c
+++ b/src/core/ngx_slab.c
@@ -129,6 +129,8 @@ ngx_slab_init(ngx_slab_pool_t *pool)
         pool->pages->slab = pages;
     }

+    pool->last = pool->pages + pages;
+
     pool->log_nomem = 1;
     pool->log_ctx = &pool->zero;
     pool->zero = '\0';
@@ -626,6 +628,8 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
         if (page->slab >= pages) {

             if (page->slab > pages) {
+                page[page->slab - 1].prev = (uintptr_t) &page[pages];
+
                 page[pages].slab = page->slab - pages;
                 page[pages].next = page->next;
                 page[pages].prev = page->prev;
@@ -672,7 +676,8 @@ static void
 ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page,
     ngx_uint_t pages)
 {
-    ngx_slab_page_t  *prev;
+    ngx_uint_t        type;
+    ngx_slab_page_t  *prev, *join;

     page->slab = pages--;

@@ -686,6 +691,53 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
         page->next->prev = page->prev;
     }

+    join = page + page->slab;
+
+    if (join < pool->last) {
+        type = join->prev & NGX_SLAB_PAGE_MASK;
+
+        if (type == NGX_SLAB_PAGE && join->next != NULL) {
+            pages += join->slab;
+            page->slab += join->slab;
+
+            prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK);
+            prev->next = join->next;
+            join->next->prev = join->prev;
+
+            join->slab = NGX_SLAB_PAGE_FREE;
+            join->next = NULL;
+            join->prev = NGX_SLAB_PAGE;
+        }
+    }
+
+    if (page > pool->pages) {
+        join = page - 1;
+        type = join->prev & NGX_SLAB_PAGE_MASK;
+
+        if (type == NGX_SLAB_PAGE && join->slab == NGX_SLAB_PAGE_FREE) {
+            join = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK);
+        }
+
+        if (type == NGX_SLAB_PAGE && join->next != NULL) {
+            pages += join->slab;
+            join->slab += page->slab;
+
+            prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK);
+            prev->next = join->next;
+            join->next->prev = join->prev;
+
+            page->slab = NGX_SLAB_PAGE_FREE;
+            page->next = NULL;
+            page->prev = NGX_SLAB_PAGE;
+
+            page = join;
+        }
+    }
+
+    if (pages) {
+        page[pages].prev = (uintptr_t) page;
+    }
+
     page->prev = (uintptr_t) &pool->free;
     page->next = pool->free.next;

diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h
--- a/src/core/ngx_slab.h
+++ b/src/core/ngx_slab.h
@@ -29,6 +29,7 @@ typedef struct {
     size_t            min_shift;

     ngx_slab_page_t  *pages;
+    ngx_slab_page_t  *last;

Re: Help with shared memory usage

2014-05-28 Thread Maxim Dounin
Hello!

On Wed, Jan 22, 2014 at 08:51:50PM +0400, Maxim Dounin wrote:

 Hello!
 
 On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote:
 
  Hello Maxim,
  
  Did you have an opportunity to take a look at this last patch?
 
It looks more or less correct, though I'm not happy with the
checks done, and there are various style issues.  I'm planning to
look into it and build a better version as time permits.

Sorry for the long delay; see here for an improved patch:

http://mailman.nginx.org/pipermail/nginx-devel/2014-May/005406.html

-- 
Maxim Dounin
http://nginx.org/



[PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage)

2014-05-11 Thread Yichun Zhang (agentzh)
Hello!

On Mon, Jul 29, 2013 at 10:11 AM, Maxim Dounin wrote:
 Additionally, doing a full merge of all free blocks on a free
 operation looks too much.  It might be something we want to do on
 allocation failure, but not on a normal path in
 ngx_slab_free_pages().  And/or something lightweight may be done
 in ngx_slab_free_pages(), e.g., checking if pages following pages
 we are freeing are free too, and merging them in this case.


I'd propose an alternative patch taking the second approach, that is,
merging adjacent free pages (for both the previous and next blocks) in
ngx_slab_free_pages(). This approach has the following advantages:

1. It can effectively distribute the merging computations across all
the page free operations, which can prevent potential frequent and
long stalls when actually running out of large enough free blocks
along the free list that is already very long for large zones (which
usually consist of tons of one-page blocks upon allocation
failures).

2. It can also make multi-page allocations generally faster because
we're merging pages immediately when we can and thus it's more likely
to find large enough free blocks along the (relatively short) free
list for ngx_slab_alloc_pages().

The only downside is that I have to introduce an extra field
prev_slab (8 bytes on x86_64) in ngx_slab_page_t in my patch, which
makes the slab page metadata a bit larger.
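
To make that concrete, here is a minimal sketch (an illustration only, not
the patch itself) of the extended page descriptor; upstream ngx_slab_page_t
has only the first three members, and prev_slab is the field this patch adds:

#include <stdint.h>

typedef struct ngx_slab_page_s  ngx_slab_page_t;

struct ngx_slab_page_s {
    uintptr_t         slab;       /* for page blocks: size in pages */
    ngx_slab_page_t  *next;       /* free list linkage */
    uintptr_t         prev;       /* linkage plus low type bits */
    uintptr_t         prev_slab;  /* NEW: size of the preceding block, so
                                     the previous neighbour is found as
                                     page - page->prev_slab */
};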

Feedback welcome!

Thanks!
-agentzh

# HG changeset patch
# User Yichun Zhang agen...@gmail.com
# Date 1399870567 25200
#  Sun May 11 21:56:07 2014 -0700
# Node ID 93614769dd4b6df8844c3c43c6a0b3f83bfa6746
# Parent  48c97d83ab7f0a3f641987fb32ace8af7720aefc
Core: merge adjacent free slab pages to ameliorate fragmentation from
multi-page blocks.

diff -r 48c97d83ab7f -r 93614769dd4b src/core/ngx_slab.c
--- a/src/core/ngx_slab.c	Tue Apr 29 22:22:38 2014 +0200
+++ b/src/core/ngx_slab.c	Sun May 11 21:56:07 2014 -0700
@@ -111,6 +111,7 @@
     ngx_memzero(p, pages * sizeof(ngx_slab_page_t));

     pool->pages = (ngx_slab_page_t *) p;
+    pool->npages = pages;

     pool->free.prev = 0;
     pool->free.next = (ngx_slab_page_t *) p;
@@ -118,6 +119,7 @@
     pool->pages->slab = pages;
     pool->pages->next = &pool->free;
     pool->pages->prev = (uintptr_t) &pool->free;
+    pool->pages->prev_slab = 0;

     pool->start = (u_char *)
                   ngx_align_ptr((uintptr_t) p + pages * sizeof(ngx_slab_page_t),
                                 ngx_pagesize);
@@ -626,9 +628,16 @@
     if (page->slab >= pages) {

         if (page->slab > pages) {
+            /* adjust the next adjacent block's prev_slab field */
+            p = &page[page->slab];
+            if (p < pool->pages + pool->npages) {
+                p->prev_slab = page->slab - pages;
+            }
+
             page[pages].slab = page->slab - pages;
             page[pages].next = page->next;
             page[pages].prev = page->prev;
+            page[pages].prev_slab = pages;

             p = (ngx_slab_page_t *) page->prev;
             p->next = &page[pages];
@@ -652,6 +661,7 @@
             p->slab = NGX_SLAB_PAGE_BUSY;
             p->next = NULL;
             p->prev = NGX_SLAB_PAGE;
+            p->prev_slab = 0;
             p++;
         }

@@ -672,7 +682,7 @@
 ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page,
     ngx_uint_t pages)
 {
-    ngx_slab_page_t  *prev;
+    ngx_slab_page_t  *prev, *p;

     page->slab = pages--;

@@ -686,6 +696,53 @@
         page->next->prev = page->prev;
     }

+    /* merge the next adjacent block if it is free */
+
+    p = &page[page->slab];
+
+    if (p < pool->pages + pool->npages
+        && !(p->slab & NGX_SLAB_PAGE_START)
+        && p->next != NULL
+        && (p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)
+    {
+        page->slab += p->slab;
+
+        /* remove the next adjacent block from the free list */
+
+        prev = (ngx_slab_page_t *) p->prev;
+        prev->next = p->next;
+        p->next->prev = p->prev;
+
+        /* adjust the prev_slab field in the next next adjacent block */
+        if (p + p->slab < pool->pages + pool->npages) {
+            p[p->slab].prev_slab = page->slab;
+        }
+
+        ngx_memzero(p, sizeof(ngx_slab_page_t));
+    }
+
+    if (page->prev_slab) {
+        /* merge the previous adjacent block if it is free */
+
+        p = page - page->prev_slab;
+        if (!(p->slab & NGX_SLAB_PAGE_START)
+            && p->next != NULL
+            && (p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)
+        {
+            /* p->slab == page->prev_slab */
+
+            p->slab += page->slab;
+            ngx_memzero(page, sizeof(ngx_slab_page_t));
+
+            /* adjust the prev_slab field in the next adjacent block */
+            if (p + p->slab < pool->pages + pool->npages) {
+                p[p->slab].prev_slab = p->slab;
+            }
+
+            /* skip adding page to the free list */
+            return;
+        }
+    }
+
     page->prev = (uintptr_t) &pool->free;
     page->next = pool->free.next;

diff -r 

Re: Help with shared memory usage

2014-04-26 Thread Filipe Da Silva
Hello,

AFAIK, the patch attached here:
 http://mailman.nginx.org/pipermail/nginx-devel/2013-December/004721.html
compiles, and should work on 1.6 too.

Maybe the merge is a bit tricky with the last changes made to ngx_slab.c.
I have an up-to-date version in my patch queue.

As Maxim stated, it still needs a functional rework:
http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004337.html

Best,
Filipe

2014-04-25 21:47 GMT+02:00 John Watson j...@disqus.com:
 Hi Maxim,

 Any chance you can fix up this patch and get it into 1.7?

 This patch is not compatible with 1.6.

 Best,

 John

 On Wed, Jan 22, 2014 at 8:51 AM, Maxim Dounin mdou...@mdounin.ru wrote:
 Hello!

 On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote:

 Hello Maxim,

 Did you have an opportunity to take a look at this last patch?

 It looks more or less correct, though I'm not happy with the
 checks done, and there are various style issues.  I'm planning to
 look into it and build a better version as time permits.



 Regards,
 Wandenberg


 On Thu, Dec 26, 2013 at 10:12 PM, Wandenberg Peixoto
 wandenb...@gmail.comwrote:

  Hello Maxim,
 
  I changed the patch to check only the p->next pointer.
  And checking if the page is at an address less than the (pool->pages +
  pages).
 
  +    ngx_slab_page_t *prev, *p;
  +    ngx_uint_t       pages;
  +    size_t           size;
  +
  +    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
  +    pages = (ngx_uint_t) (size / (ngx_pagesize +
  sizeof(ngx_slab_page_t)));
  +
  +    p = &page[page->slab];
  +
  +    if ((p < pool->pages + pages) &&
  +        (p->next != NULL) &&
  +        ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
 
 
  I hope that now I did the right checks.
 
  Regards,
  Wandenberg
 
 
  On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin mdou...@mdounin.ru wrote:
 
  Hello!
 
  On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote:
 
   Hi Maxim,
  
   sorry for the long delay. I hope you remember my issue.
   Attached is the new patch with the changes you suggested.
   Can you check it again? I hope it can be applied to the nginx code now.
  
   About point 2, "There is probably no need to check both prev and next.":
   I check both pointers to avoid a segmentation fault, since in some
   situations the next can be NULL and the prev may point to pool->free.
   As far as I could follow the code, it seems that either of these
   pointers, next or prev, could be NULL.
   I just made a double check to protect.
 
  As far as I see, all pages in the pool->free list are expected to
  have both p->prev and p->next set.  And all pages with type
  NGX_SLAB_PAGE will be either on the pool->free list, or will have
  p->next set to NULL.
 
  [...]
 
 +{
 +    ngx_slab_page_t *neighbour = &page[page->slab];

Here "neighbour" may point outside of the allocated page
structures if we are freeing the last page in the pool.
 
  It looks like you've tried to address this problem with the
  following check:
 
   +static ngx_int_t
   +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
   +{
   +    ngx_slab_page_t *prev, *p;
   +
   +    p = &page[page->slab];
   +    if ((u_char *) p >= pool->end) {
   +        return NGX_DECLINED;
   +    }

  This looks wrong, as pool->end points to the end of the pool, not
  the end of allocated page structures.
 
  --
  Maxim Dounin
  http://nginx.org/
 


 --
 Maxim Dounin
 http://nginx.org/



Re: Help with shared memory usage

2014-04-25 Thread John Watson
Hi Maxim,

Any chance you can fix up this patch and get it into 1.7?

This patch is not compatible with 1.6.

Best,

John

On Wed, Jan 22, 2014 at 8:51 AM, Maxim Dounin mdou...@mdounin.ru wrote:
 Hello!

 On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote:

 Hello Maxim,

 Did you have an opportunity to take a look at this last patch?

 It looks more or less correct, though I'm not happy with the
 checks done, and there are various style issues.  I'm planning to
 look into it and build a better version as time permits.



 Regards,
 Wandenberg


 On Thu, Dec 26, 2013 at 10:12 PM, Wandenberg Peixoto
 wandenb...@gmail.comwrote:

  Hello Maxim,
 
  I changed the patch to check only the p->next pointer.
  And checking if the page is at an address less than the (pool->pages +
  pages).
 
  +    ngx_slab_page_t *prev, *p;
  +    ngx_uint_t       pages;
  +    size_t           size;
  +
  +    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
  +    pages = (ngx_uint_t) (size / (ngx_pagesize +
  sizeof(ngx_slab_page_t)));
  +
  +    p = &page[page->slab];
  +
  +    if ((p < pool->pages + pages) &&
  +        (p->next != NULL) &&
  +        ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
 
 
  I hope that now I did the right checks.
 
  Regards,
  Wandenberg
 
 
  On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin mdou...@mdounin.ru wrote:
 
  Hello!
 
  On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote:
 
   Hi Maxim,
  
   sorry for the long delay. I hope you remember my issue.
   Attached is the new patch with the changes you suggested.
   Can you check it again? I hope it can be applied to the nginx code now.
  
   About point 2, "There is probably no need to check both prev and next.":
   I check both pointers to avoid a segmentation fault, since in some
   situations the next can be NULL and the prev may point to pool->free.
   As far as I could follow the code, it seems that either of these
   pointers, next or prev, could be NULL.
   I just made a double check to protect.
 
  As far as I see, all pages in the pool->free list are expected to
  have both p->prev and p->next set.  And all pages with type
  NGX_SLAB_PAGE will be either on the pool->free list, or will have
  p->next set to NULL.
 
  [...]
 
 +{
 +    ngx_slab_page_t *neighbour = &page[page->slab];

Here "neighbour" may point outside of the allocated page
structures if we are freeing the last page in the pool.
 
  It looks like you've tried to address this problem with the
  following check:
 
   +static ngx_int_t
   +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
   +{
   +    ngx_slab_page_t *prev, *p;
   +
   +    p = &page[page->slab];
   +    if ((u_char *) p >= pool->end) {
   +        return NGX_DECLINED;
   +    }

  This looks wrong, as pool->end points to the end of the pool, not
  the end of allocated page structures.
 
  --
  Maxim Dounin
  http://nginx.org/
 


 --
 Maxim Dounin
 http://nginx.org/



Re: Help with shared memory usage

2014-01-22 Thread Maxim Dounin
Hello!

On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote:

 Hello Maxim,
 
 Did you have an opportunity to take a look at this last patch?

It looks more or less correct, though I'm not happy with the
checks done, and there are various style issues.  I'm planning to
look into it and build a better version as time permits.

 
 
 Regards,
 Wandenberg
 
 
 On Thu, Dec 26, 2013 at 10:12 PM, Wandenberg Peixoto
 wandenb...@gmail.comwrote:
 
  Hello Maxim,
 
  I changed the patch to check only the p->next pointer.
  And checking if the page is at an address less than the (pool->pages +
  pages).
 
  +    ngx_slab_page_t *prev, *p;
  +    ngx_uint_t       pages;
  +    size_t           size;
  +
  +    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
  +    pages = (ngx_uint_t) (size / (ngx_pagesize +
  sizeof(ngx_slab_page_t)));
  +
  +    p = &page[page->slab];
  +
  +    if ((p < pool->pages + pages) &&
  +        (p->next != NULL) &&
  +        ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
 
 
  I hope that now I did the right checks.
 
  Regards,
  Wandenberg
 
 
  On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin mdou...@mdounin.ru wrote:
 
  Hello!
 
  On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote:
 
   Hi Maxim,
  
   sorry for the long delay. I hope you remember my issue.
   Attached is the new patch with the changes you suggested.
   Can you check it again? I hope it can be applied to the nginx code now.
  
   About point 2, "There is probably no need to check both prev and next.":
   I check both pointers to avoid a segmentation fault, since in some
   situations the next can be NULL and the prev may point to pool->free.
   As far as I could follow the code, it seems that either of these
   pointers, next or prev, could be NULL.
   I just made a double check to protect.
 
  As far as I see, all pages in the pool->free list are expected to
  have both p->prev and p->next set.  And all pages with type
  NGX_SLAB_PAGE will be either on the pool->free list, or will have
  p->next set to NULL.
 
  [...]
 
 +{
 +    ngx_slab_page_t *neighbour = &page[page->slab];

Here "neighbour" may point outside of the allocated page
structures if we are freeing the last page in the pool.
 
  It looks like you've tried to address this problem with the
  following check:
 
   +static ngx_int_t
   +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
   +{
   +    ngx_slab_page_t *prev, *p;
   +
   +    p = &page[page->slab];
   +    if ((u_char *) p >= pool->end) {
   +        return NGX_DECLINED;
   +    }

  This looks wrong, as pool->end points to the end of the pool, not
  the end of allocated page structures.
 
  --
  Maxim Dounin
  http://nginx.org/
 
 
 
 



-- 
Maxim Dounin
http://nginx.org/



Re: Help with shared memory usage

2014-01-22 Thread Yichun Zhang (agentzh)
Hello Maxim!

On Wed, Jan 22, 2014 at 8:51 AM, Maxim Dounin wrote:

 It looks more or less correct, though I'm not happy with the
 checks done, and there are various style issues.  I'm planning to
 look into it and build a better version as time permits.


We're also having this issue. Hopefully this patch can get updated and
merged soon :)

Thanks for your time!

Best regards,
-agentzh



Re: Help with shared memory usage

2014-01-21 Thread Wandenberg Peixoto
Hello Maxim,

Did you have an opportunity to take a look at this last patch?


Regards,
Wandenberg


On Thu, Dec 26, 2013 at 10:12 PM, Wandenberg Peixoto
wandenb...@gmail.comwrote:

 Hello Maxim,

 I changed the patch to check only the p->next pointer.
 And checking if the page is at an address less than the (pool->pages +
 pages).

 +    ngx_slab_page_t *prev, *p;
 +    ngx_uint_t       pages;
 +    size_t           size;
 +
 +    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
 +    pages = (ngx_uint_t) (size / (ngx_pagesize +
 sizeof(ngx_slab_page_t)));
 +
 +    p = &page[page->slab];
 +
 +    if ((p < pool->pages + pages) &&
 +        (p->next != NULL) &&
 +        ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {


 I hope that now I did the right checks.

 Regards,
 Wandenberg


 On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote:

  Hi Maxim,
 
  sorry for the long delay. I hope you remember my issue.
  Attached is the new patch with the changes you suggested.
  Can you check it again? I hope it can be applied to the nginx code now.
 
  About point 2, "There is probably no need to check both prev and next.":
  I check both pointers to avoid a segmentation fault, since in some
  situations the next can be NULL and the prev may point to pool->free.
  As far as I could follow the code, it seems that either of these
  pointers, next or prev, could be NULL.
  I just made a double check to protect.

 As far as I see, all pages in the pool->free list are expected to
 have both p->prev and p->next set.  And all pages with type
 NGX_SLAB_PAGE will be either on the pool->free list, or will have
 p->next set to NULL.

 [...]

+{
+    ngx_slab_page_t *neighbour = &page[page->slab];

   Here "neighbour" may point outside of the allocated page
   structures if we are freeing the last page in the pool.

 It looks like you've tried to address this problem with the
 following check:

  +static ngx_int_t
  +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
  +{
  +    ngx_slab_page_t *prev, *p;
  +
  +    p = &page[page->slab];
  +    if ((u_char *) p >= pool->end) {
  +        return NGX_DECLINED;
  +    }

 This looks wrong, as pool->end points to the end of the pool, not
 the end of allocated page structures.

 --
 Maxim Dounin
 http://nginx.org/





Re: Help with shared memory usage

2013-12-26 Thread Wandenberg Peixoto
Hello Maxim,

I changed the patch to check only the p->next pointer.
And checking if the page is at an address less than the (pool->pages +
pages).

+    ngx_slab_page_t *prev, *p;
+    ngx_uint_t       pages;
+    size_t           size;
+
+    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
+    pages = (ngx_uint_t) (size / (ngx_pagesize + sizeof(ngx_slab_page_t)));
+
+    p = &page[page->slab];
+
+    if ((p < pool->pages + pages) &&
+        (p->next != NULL) &&
+        ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {


I hope that now I did the right checks.

Regards,
Wandenberg


On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote:

  Hi Maxim,
 
  sorry for the long delay. I hope you remember my issue.
  Attached is the new patch with the changes you suggested.
  Can you check it again? I hope it can be applied to the nginx code now.
 
  About point 2, "There is probably no need to check both prev and next.":
  I check both pointers to avoid a segmentation fault, since in some
  situations the next can be NULL and the prev may point to pool->free.
  As far as I could follow the code, it seems that either of these
  pointers, next or prev, could be NULL.
  I just made a double check to protect.

 As far as I see, all pages in the pool->free list are expected to
 have both p->prev and p->next set.  And all pages with type
 NGX_SLAB_PAGE will be either on the pool->free list, or will have
 p->next set to NULL.

 [...]

+{
+    ngx_slab_page_t *neighbour = &page[page->slab];

   Here "neighbour" may point outside of the allocated page
   structures if we are freeing the last page in the pool.

 It looks like you've tried to address this problem with the
 following check:

  +static ngx_int_t
  +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
  +{
  +    ngx_slab_page_t *prev, *p;
  +
  +    p = &page[page->slab];
  +    if ((u_char *) p >= pool->end) {
  +        return NGX_DECLINED;
  +    }

 This looks wrong, as pool->end points to the end of the pool, not
 the end of allocated page structures.

 --
 Maxim Dounin
 http://nginx.org/


--- src/core/ngx_slab.c	2013-12-15 22:10:51.079133826 -0200
+++ src/core/ngx_slab.c	2013-12-26 22:02:37.103802929 -0200
@@ -62,7 +62,8 @@
     ngx_uint_t pages);
 static void ngx_slab_error(ngx_slab_pool_t *pool, ngx_uint_t level,
     char *text);
-
+static ngx_int_t ngx_slab_merge_pages(ngx_slab_pool_t *pool,
+    ngx_slab_page_t *page);

 static ngx_uint_t  ngx_slab_max_size;
 static ngx_uint_t  ngx_slab_exact_size;
@@ -658,12 +659,58 @@
         }
     }

+    ngx_flag_t retry = 0;
+    for (page = pool->free.next; page != &pool->free;) {
+        if (ngx_slab_merge_pages(pool, page) == NGX_OK) {
+            retry = 1;
+        } else {
+            page = page->next;
+        }
+    }
+
+    if (retry) {
+        return ngx_slab_alloc_pages(pool, pages);
+    }
+
     ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no memory");

     return NULL;
 }


+static ngx_int_t
+ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
+{
+    ngx_slab_page_t *prev, *p;
+    ngx_uint_t       pages;
+    size_t           size;
+
+    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
+    pages = (ngx_uint_t) (size / (ngx_pagesize + sizeof(ngx_slab_page_t)));
+
+    p = &page[page->slab];
+
+    if ((p < pool->pages + pages) &&
+        (p->next != NULL) &&
+        ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
+
+        page->slab += p->slab;
+
+        prev = (ngx_slab_page_t *) (p->prev & ~NGX_SLAB_PAGE_MASK);
+        prev->next = p->next;
+        p->next->prev = p->prev;
+
+        p->slab = NGX_SLAB_PAGE_FREE;
+        p->next = NULL;
+        p->prev = NGX_SLAB_PAGE;
+
+        return NGX_OK;
+    }
+
+    return NGX_DECLINED;
+}
+
+
 static void
 ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page,
     ngx_uint_t pages)

Re: Help with shared memory usage

2013-12-20 Thread Maxim Dounin
Hello!

On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote:

 Hi Maxim,
 
 sorry for the long delay. I hope you remember my issue.
 Attached is the new patch with the changes you suggested.
 Can you check it again? I hope it can be applied to the nginx code now.
 
 About point 2, "There is probably no need to check both prev and next.":
 I check both pointers to avoid a segmentation fault, since in some
 situations the next can be NULL and the prev may point to pool->free.
 As far as I could follow the code, it seems that either of these
 pointers, next or prev, could be NULL.
 I just made a double check to protect.

As far as I see, all pages in the pool->free list are expected to
have both p->prev and p->next set.  And all pages with type
NGX_SLAB_PAGE will be either on the pool->free list, or will have
p->next set to NULL.
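
For illustration, the invariant can be expressed as a predicate (a sketch
using a hypothetical helper name, assuming nginx's slab definitions are in
scope; this is not nginx source):

static int
is_free_page_block(ngx_slab_page_t *p)
{
    /* an NGX_SLAB_PAGE page struct with a non-NULL next sits on the
     * pool->free list; a NULL next means the block is allocated */
    return (p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE
           && p->next != NULL;
}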

[...]

   +{
   +    ngx_slab_page_t *neighbour = &page[page->slab];

  Here "neighbour" may point outside of the allocated page
  structures if we are freeing the last page in the pool.

It looks like you've tried to address this problem with the 
following check:

 +static ngx_int_t
 +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
 +{
 +    ngx_slab_page_t *prev, *p;
 +
 +    p = &page[page->slab];
 +    if ((u_char *) p >= pool->end) {
 +        return NGX_DECLINED;
 +    }

This looks wrong, as pool->end points to the end of the pool, not
the end of allocated page structures.
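
For illustration, the correct bound can be derived the same way
ngx_slab_init() sizes the page array (a sketch with a hypothetical helper
name, assuming nginx's slab definitions; each tracked page costs
ngx_pagesize bytes of data plus one ngx_slab_page_t struct):

static ngx_slab_page_t *
slab_pages_end(ngx_slab_pool_t *pool)
{
    size_t      size;
    ngx_uint_t  npages;

    size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t);
    npages = (ngx_uint_t) (size / (ngx_pagesize + sizeof(ngx_slab_page_t)));

    return pool->pages + npages;  /* first struct past the page array */
}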

-- 
Maxim Dounin
http://nginx.org/



Re: Help with shared memory usage

2013-12-17 Thread Wandenberg Peixoto
Hi Maxim,

sorry for the long delay. I hope you remember my issue.
Attached is the new patch with the changes you suggested.
Can you check it again? I hope it can be applied to the nginx code now.

About point 2, "There is probably no need to check both prev and next.":
I check both pointers to avoid a segmentation fault, since in some
situations the next can be NULL and the prev may point to pool->free.
As far as I could follow the code, it seems that either of these pointers,
next or prev, could be NULL.
I just made a double check to protect.

If I'm wrong, just tell me what I can check to be sure the next and the
previous free pages can be accessed without problems, and I will make the
changes.

Regards,
Wandenberg



On Sun, Oct 6, 2013 at 6:37 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Wed, Jul 31, 2013 at 12:28:02AM -0300, Wandenberg Peixoto wrote:

  Hello!
 
  Thanks for your help. I hope that the patch is OK now.
  I don't know if the function and variable names follow the nginx pattern.
  Feel free to change the patch.
  If you have any other points before accepting it, it will be a pleasure to
  fix them.

 Sorry for the long delay.  As promised, I've looked into this.  Apart
 from multiple style problems, I see at least one case of possible use of
 uninitialized memory.  See below for comments.

 
  --- src/core/ngx_slab.c	2013-05-06 07:27:10.000000000 -0300
  +++ src/core/ngx_slab.c	2013-07-31 00:21:08.043034442 -0300
  @@ -615,6 +615,26 @@ fail:


   static ngx_slab_page_t *
  +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page)

 A side note: there are multiple issues with style in this patch.
 In no particular order:

 1. There is no reason to return ngx_slab_page_t * here.  Just
 ngx_int_t should be enough (or, even void would be good enough,
 see below).

 2. It is recommended to place a function's implementation below its
 use (and add a prototype).  This allows reading the code linearly,
 top-down.

  +{
  +    ngx_slab_page_t *neighbour = &page[page->slab];

 Here "neighbour" may point outside of the allocated page
 structures if we are freeing the last page in the pool.

  +    if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next
  != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {

 In no particular order:

 1. Style problems - more than 80 chars, extra parens.

 2. There is probably no need to check both prev and next.

 3. See below about casting neighbour->prev to (ngx_slab_page_t *).

  +        page->slab += neighbour->slab;
  +
  +        ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
  +        neighbour->next->prev = neighbour->prev;

 Casting neighbour->prev to (ngx_slab_page_t *) without removing
 last NGX_SLAB_PAGE_MASK bits isn't really a good idea.  It works
 here as NGX_SLAB_PAGE == 0, but may fail in other cases.

 It would be much better to use a snippet already used in other
 places:

 prev = (ngx_slab_page_t *) (p->prev & ~NGX_SLAB_PAGE_MASK);
 prev->next = p->next;
 p->next->prev = p->prev;

  +        neighbour->slab = NGX_SLAB_PAGE_FREE;
  +        neighbour->prev = (uintptr_t) &pool->free;
  +        neighbour->next = &pool->free;

 Unused page structures used to be zeroed.  Any reason for
 prev/next pointing to &pool->free?

 It looks like something like

 p->slab = NGX_SLAB_PAGE_FREE;
 p->next = NULL;
 p->prev = NGX_SLAB_PAGE;

 would be reasonable here.

  +
  +        return page;
  +    }
  +    return NULL;
  +}

 See above, there is no need to return page as it's unused.
 Something like return NGX_OK / return NGX_DECLINED would be better
 here - if we are going to check if any pages were actually merged.

 Additionally, an empty line should be placed before the last return.

  +
  +
  +static ngx_slab_page_t *
   ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages)
   {
       ngx_slab_page_t  *page, *p;
  @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
           }
       }
 
  +    ngx_flag_t retry = 0;
  +    for (page = pool->free.next; page != &pool->free;) {
  +        if (ngx_slab_merge_with_neighbour(pool, page)) {
  +            retry = 1;
  +        } else {
  +            page = page->next;
  +        }
  +    }
  +
  +    if (retry) {
  +        return ngx_slab_alloc_pages(pool, pages);
  +    }
  +

 Apart from multiple style issues here, I think we should choose
 one approach: either loop over free pages and merge them here, or
 maintain all free pages merged in ngx_slab_free_pages() (that is,
 merge in both directions there).

       ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no
  memory");

       return NULL;
  @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
       page->next->prev = (uintptr_t) page;

       pool->free.next = page;
  +
  +    ngx_slab_merge_with_neighbour(pool, page);
   }

 See above.

 Overall it looks much better than the previous patch, but still needs
 more work.

 --
 Maxim Dounin
 

Re: Help with shared memory usage

2013-10-06 Thread Maxim Dounin
Hello!

On Wed, Jul 31, 2013 at 12:28:02AM -0300, Wandenberg Peixoto wrote:

 Hello!
 
 Thanks for your help. I hope that the patch is OK now.
 I don't know if the function and variable names follow the nginx pattern.
 Feel free to change the patch.
 If you have any other points before accepting it, it will be a pleasure to fix them.

Sorry for the long delay.  As promised, I've looked into this.  Apart
from multiple style problems, I see at least one case of possible use of 
uninitialized memory.  See below for comments.

 
 --- src/core/ngx_slab.c	2013-05-06 07:27:10.000000000 -0300
 +++ src/core/ngx_slab.c	2013-07-31 00:21:08.043034442 -0300
 @@ -615,6 +615,26 @@ fail:


  static ngx_slab_page_t *
 +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page)

A side note: there are multiple issues with style in this patch.  
In no particular order:

1. There is no reason to return ngx_slab_page_t * here.  Just 
ngx_int_t should be enough (or, even void would be good enough, 
see below).

2. It is recommended to place a function's implementation below its
use (and add a prototype).  This allows reading the code linearly,
top-down.
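
For example (a generic sketch of the convention, not nginx code):

static void helper(int n);     /* prototype near the top of the file */

void
caller(void)
{
    helper(42);                /* the use reads first, top-down */
}

static void
helper(int n)                  /* implementation placed below its use */
{
    (void) n;
}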

 +{
 +    ngx_slab_page_t *neighbour = &page[page->slab];

Here "neighbour" may point outside of the allocated page
structures if we are freeing the last page in the pool.

 +    if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next
 != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {

In no particular order:

1. Style problems - more than 80 chars, extra parens.

2. There is probably no need to check both prev and next.

3. See below about casting neighbour->prev to (ngx_slab_page_t *).

 +        page->slab += neighbour->slab;
 +
 +        ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
 +        neighbour->next->prev = neighbour->prev;

Casting neighbour->prev to (ngx_slab_page_t *) without removing
last NGX_SLAB_PAGE_MASK bits isn't really a good idea.  It works
here as NGX_SLAB_PAGE == 0, but may fail in other cases.

It would be much better to use a snippet already used in other
places:

prev = (ngx_slab_page_t *) (p->prev & ~NGX_SLAB_PAGE_MASK);
prev->next = p->next;
p->next->prev = p->prev;

 +        neighbour->slab = NGX_SLAB_PAGE_FREE;
 +        neighbour->prev = (uintptr_t) &pool->free;
 +        neighbour->next = &pool->free;

Unused page structures used to be zeroed.  Any reason for
prev/next pointing to &pool->free?

It looks like something like

p->slab = NGX_SLAB_PAGE_FREE;
p->next = NULL;
p->prev = NGX_SLAB_PAGE;

would be reasonable here.

 +
 +        return page;
 +    }
 +    return NULL;
 +}

See above, there is no need to return page as it's unused.
Something like return NGX_OK / return NGX_DECLINED would be better
here - if we are going to check if any pages were actually merged.

Additionally, an empty line should be placed before the last return.

 +
 +
 +static ngx_slab_page_t *
  ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages)
  {
      ngx_slab_page_t  *page, *p;
 @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
          }
      }

 +    ngx_flag_t retry = 0;
 +    for (page = pool->free.next; page != &pool->free;) {
 +        if (ngx_slab_merge_with_neighbour(pool, page)) {
 +            retry = 1;
 +        } else {
 +            page = page->next;
 +        }
 +    }
 +
 +    if (retry) {
 +        return ngx_slab_alloc_pages(pool, pages);
 +    }
 +

Apart from multiple style issues here, I think we should choose
one approach: either loop over free pages and merge them here, or
maintain all free pages merged in ngx_slab_free_pages() (that is,
merge in both directions there).

      ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no
 memory");

      return NULL;
 @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
      page->next->prev = (uintptr_t) page;

      pool->free.next = page;
 +
 +    ngx_slab_merge_with_neighbour(pool, page);
  }

See above.

Overall it looks much better than the previous patch, but still needs
more work.

-- 
Maxim Dounin
http://nginx.org/en/donation.html



Re: Help with shared memory usage

2013-09-02 Thread Wandenberg Peixoto
Hi Maxim,

Did you have an opportunity to take a look at this patch?

Regards,
Wandenberg


On Wed, Jul 31, 2013 at 12:28 AM, Wandenberg Peixoto
wandenb...@gmail.comwrote:

 Hello!

 Thanks for your help. I hope that the patch is OK now.
 I don't know if the function and variable names follow the nginx pattern.
 Feel free to change the patch.
 If you have any other points before accepting it, it will be a pleasure to fix them.


 --- src/core/ngx_slab.c	2013-05-06 07:27:10.000000000 -0300
 +++ src/core/ngx_slab.c	2013-07-31 00:21:08.043034442 -0300
 @@ -615,6 +615,26 @@ fail:


  static ngx_slab_page_t *
 +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
 +{
 +    ngx_slab_page_t *neighbour = &page[page->slab];
 +    if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next
 +        != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
 +        page->slab += neighbour->slab;
 +
 +        ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
 +        neighbour->next->prev = neighbour->prev;
 +
 +        neighbour->slab = NGX_SLAB_PAGE_FREE;
 +        neighbour->prev = (uintptr_t) &pool->free;
 +        neighbour->next = &pool->free;
 +
 +        return page;
 +    }
 +    return NULL;
 +}
 +
 +
 +static ngx_slab_page_t *
  ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages)
  {
      ngx_slab_page_t  *page, *p;
 @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
          }
      }

 +    ngx_flag_t retry = 0;
 +    for (page = pool->free.next; page != &pool->free;) {
 +        if (ngx_slab_merge_with_neighbour(pool, page)) {
 +            retry = 1;
 +        } else {
 +            page = page->next;
 +        }
 +    }
 +
 +    if (retry) {
 +        return ngx_slab_alloc_pages(pool, pages);
 +    }
 +
      ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no
 memory");

      return NULL;
 @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo

      page->next->prev = (uintptr_t) page;

      pool->free.next = page;
 +
 +    ngx_slab_merge_with_neighbour(pool, page);
  }






 On Tue, Jul 30, 2013 at 7:09 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote:

 [...]

  What would be an alternative to not loop on pool->pages?

 Free memory blocks are linked in the pool->free list, it should be
 enough to look there.

 [...]

 --
 Maxim Dounin
 http://nginx.org/en/donation.html





Re: Help with shared memory usage

2013-09-02 Thread Maxim Dounin
Hello!

On Mon, Sep 02, 2013 at 11:24:33AM -0300, Wandenberg Peixoto wrote:

 Did you have an opportunity to take a look at this patch?

Not yet, sorry.  It's in my TODO and I'll try to look at it this 
week.  Overall it seems good enough, but it certainly needs 
style/cosmetic cleanup before it can be committed.

[...]

-- 
Maxim Dounin
http://nginx.org/en/donation.html



Re: Help with shared memory usage

2013-08-07 Thread John Watson
I've been running a node with this patch on a production machine for 5 days and
am seeing marked improvements. The instance hasn't needed to be restarted
due to "ngx_slab_alloc() failed: no memory". The shared memory usage has been
growing at a far slower rate compared to a node without the patch.
Also, not seeing any significant increase in CPU usage.

nginx/1.2.9
Mixture of HTTP/HTTPS traffic
400,000 to 700,000 concurrent connections
Linux 3.2.0-40-generic x86_64

It's a dedicated instance for Wandenberg's
https://github.com/wandenberg/nginx-push-stream-module

On Tue, Aug 6, 2013 at 1:19 PM, Wandenberg Peixoto wandenb...@gmail.com wrote:

 Hello!

 Thanks for your help. I hope that the patch is OK now.
 I don't know if the function and variable names follow the nginx pattern.
 Feel free to change the patch.
 If you have any other points before accepting it, it will be a pleasure to fix them.


 --- src/core/ngx_slab.c	2013-05-06 07:27:10.000000000 -0300
 +++ src/core/ngx_slab.c	2013-07-31 00:21:08.043034442 -0300
 @@ -615,6 +615,26 @@ fail:


  static ngx_slab_page_t *
 +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
 +{
 +    ngx_slab_page_t *neighbour = &page[page->slab];
 +    if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next
 +        != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
 +        page->slab += neighbour->slab;
 +
 +        ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
 +        neighbour->next->prev = neighbour->prev;
 +
 +        neighbour->slab = NGX_SLAB_PAGE_FREE;
 +        neighbour->prev = (uintptr_t) &pool->free;
 +        neighbour->next = &pool->free;
 +
 +        return page;
 +    }
 +    return NULL;
 +}
 +
 +
 +static ngx_slab_page_t *
  ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages)
  {
      ngx_slab_page_t  *page, *p;
 @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
          }
      }

 +    ngx_flag_t retry = 0;
 +    for (page = pool->free.next; page != &pool->free;) {
 +        if (ngx_slab_merge_with_neighbour(pool, page)) {
 +            retry = 1;
 +        } else {
 +            page = page->next;
 +        }
 +    }
 +
 +    if (retry) {
 +        return ngx_slab_alloc_pages(pool, pages);
 +    }
 +
      ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no memory");

      return NULL;
 @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo

      page->next->prev = (uintptr_t) page;

      pool->free.next = page;
 +
 +    ngx_slab_merge_with_neighbour(pool, page);
  }






 On Tue, Jul 30, 2013 at 7:09 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote:

 [...]

  What would be an alternative to not loop on pool->pages?

 Free memory blocks are linked in the pool->free list, it should be
 enough to look there.

 [...]

 --
 Maxim Dounin
 http://nginx.org/en/donation.html







Re: Help with shared memory usage

2013-07-30 Thread Wandenberg Peixoto
Hello!

Thanks for your help. I hope that the patch is OK now.
I don't know if the function and variable names follow the nginx pattern.
Feel free to change the patch.
If you have any other points before accepting it, it will be a pleasure to fix them.

--- src/core/ngx_slab.c	2013-05-06 07:27:10.000000000 -0300
+++ src/core/ngx_slab.c	2013-07-31 00:21:08.043034442 -0300
@@ -615,6 +615,26 @@ fail:


 static ngx_slab_page_t *
+ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
+{
+    ngx_slab_page_t *neighbour = &page[page->slab];
+    if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next
+        != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
+        page->slab += neighbour->slab;
+
+        ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
+        neighbour->next->prev = neighbour->prev;
+
+        neighbour->slab = NGX_SLAB_PAGE_FREE;
+        neighbour->prev = (uintptr_t) &pool->free;
+        neighbour->next = &pool->free;
+
+        return page;
+    }
+    return NULL;
+}
+
+
+static ngx_slab_page_t *
 ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages)
 {
     ngx_slab_page_t  *page, *p;
@@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po
         }
     }

+    ngx_flag_t retry = 0;
+    for (page = pool->free.next; page != &pool->free;) {
+        if (ngx_slab_merge_with_neighbour(pool, page)) {
+            retry = 1;
+        } else {
+            page = page->next;
+        }
+    }
+
+    if (retry) {
+        return ngx_slab_alloc_pages(pool, pages);
+    }
+
     ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no memory");

     return NULL;
@@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
     page->next->prev = (uintptr_t) page;

     pool->free.next = page;
+
+    ngx_slab_merge_with_neighbour(pool, page);
 }






On Tue, Jul 30, 2013 at 7:09 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote:

 [...]

  What would be an alternative to not loop on pool->pages?

 Free memory blocks are linked in the pool->free list, it should be
 enough to look there.

 [...]

 --
 Maxim Dounin
 http://nginx.org/en/donation.html




unfrag_slab_memory2.patch
Description: Binary data

Re: Help with shared memory usage

2013-07-29 Thread Maxim Dounin
Hello!

On Sat, Jul 27, 2013 at 04:10:51PM -0300, Wandenberg Peixoto wrote:

 Hello Maxim.
 
  I've been looking into those functions and, guided by your comments,
  made the following patch to merge continuous blocks of memory.
  Can you check if it is ok?
  Comments are welcome.
 
 --- src/core/ngx_slab.c	2013-05-06 07:27:10.000000000 -0300
 +++ src/core/ngx_slab.c	2013-07-27 15:54:55.316995223 -0300
 @@ -687,6 +687,25 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
      page->next->prev = (uintptr_t) page;

      pool->free.next = page;
 +
 +    for (page = pool->pages; ((page->slab > 0) && (&page[page->slab] <
 (ngx_slab_page_t *) (pool->start - sizeof(ngx_slab_page_t))));) {
 +        ngx_slab_page_t *neighbour = &page[page->slab];
 +        if (((ngx_slab_page_t *) page->prev != NULL) && (page->next !=
 NULL) && ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) &&
 +            ((ngx_slab_page_t *) neighbour->prev != NULL) &&
 (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) ==
 NGX_SLAB_PAGE)) {
 +
 +            page->slab += neighbour->slab;
 +
 +            ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
 +            neighbour->next->prev = neighbour->prev;
 +
 +            neighbour->slab = NGX_SLAB_PAGE_FREE;
 +            neighbour->prev = (uintptr_t) &pool->free;
 +            neighbour->next = &pool->free;
 +            continue;
 +        }
 +
 +        page += ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) ?
 page->slab : 1;
 +    }
  }

The patch doesn't look right (well, maybe it works - but at least it's
not something I would like to see committed).

The pool->pages isn't something you should iterate through, it's
just a preallocated storage space for ngx_slab_page_t structures.

Additionally, doing a full merge of all free blocks on a free 
operation looks too much.  It might be something we want to do on 
allocation failure, but not on a normal path in 
ngx_slab_free_pages().  And/or something lightweight may be done 
in ngx_slab_free_pages(), e.g., checking if pages following pages 
we are freeing are free too, and merging them in this case.
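
A sketch of that lightweight check, purely illustrative (the helper
name is hypothetical, and a real version would also need to bounds-check
the neighbour against the end of the page array):

    /* Illustrative only: after ngx_slab_free_pages() has linked the
     * freed run "page" (spanning page->slab page entries) back into
     * pool->free, try to swallow the run that physically follows it.
     * O(1) on the normal free path, unlike a full merge pass. */
    static void
    ngx_slab_try_merge_next(ngx_slab_pool_t *pool, ngx_slab_page_t *page)
    {
        ngx_slab_page_t  *next = &page[page->slab];

        /* merge only if the following entry heads a free whole-page
         * run that is currently linked into the free list */
        if ((ngx_slab_page_t *) next->prev != NULL
            && next->next != NULL
            && (next->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)
        {
            page->slab += next->slab;            /* extend the freed run */

            /* unlink the swallowed run from the doubly-linked free list */
            ((ngx_slab_page_t *) next->prev)->next = next->next;
            next->next->prev = next->prev;

            next->slab = NGX_SLAB_PAGE_FREE;     /* no longer a run head */
            next->prev = (uintptr_t) &pool->free;
            next->next = &pool->free;
        }
    }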

 
 
 
 Regards,
 Wandenberg
 
 
 On Mon, Jul 1, 2013 at 8:36 AM, Maxim Dounin mdou...@mdounin.ru wrote:
 
  Hello!
 
  On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote:
 
   Hi,
  
   I'm trying to understand how the shared memory pool works inside Nginx.
   To do that, I made a very small module which creates a shared memory
   zone with 2097152 bytes, and allocates and frees blocks of memory,
   starting from 0 and increasing by 1kb until the allocation fails.
  
   The strange parts to me were:
   - the maximum block I could allocate was 128000 bytes
   - each time the allocation failed, I started again from 0, but the maximum
   allocated block changed with the following profile:
   128000
   87040
   70656
   62464
   58368
   54272
   50176
   46080
   41984
   37888
   33792
   29696
  
   Is this the expected behavior?
   Can anyone help me by explaining how shared memory works?
   I have another module which makes intensive use of shared memory, and
   understanding this can help me improve it, solving some "no memory"
   messages.
  
   I put the code in the attachment.
 
  I've looked into this, and the behaviour is expected as per
  nginx slab allocator code and the way you do allocations in your
  test.
 
  Increasing allocations of large blocks immediately followed by
  freeing them result in free memory blocks being split into smaller
  blocks, eventually leaving at most page-size allocations
  possible.  Take a look at ngx_slab_alloc_pages() and
  ngx_slab_free_pages() for details.
 
  Note that the slab allocator nginx uses for allocations in shared
  memory is designed mostly for small allocations.  It works well
  for allocations smaller than the page size, but support for large
  allocations is very simple.  Probably it should be improved, but as of now
  nothing in nginx uses large allocations in shared memory.
 
  --
  Maxim Dounin
  http://nginx.org/en/donation.html
 
 




-- 
Maxim Dounin
http://nginx.org/en/donation.html



Re: Help with shared memory usage

2013-07-27 Thread Wandenberg Peixoto
Hello Maxim.

I've been looking into those functions and, guided by your comments,
made the following patch to merge contiguous blocks of memory.
Can you check if it is ok?
Comments are welcome.

--- src/core/ngx_slab.c    2013-05-06 07:27:10.0 -0300
+++ src/core/ngx_slab.c    2013-07-27 15:54:55.316995223 -0300
@@ -687,6 +687,25 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
     page->next->prev = (uintptr_t) page;
 
     pool->free.next = page;
+
+    for (page = pool->pages; ((page->slab > 0) && (&page[page->slab] <
+        (ngx_slab_page_t *) (pool->start - sizeof(ngx_slab_page_t))));) {
+        ngx_slab_page_t *neighbour = &page[page->slab];
+        if (((ngx_slab_page_t *) page->prev != NULL) && (page->next !=
+            NULL) && ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) &&
+            ((ngx_slab_page_t *) neighbour->prev != NULL) &&
+            (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) ==
+            NGX_SLAB_PAGE)) {
+
+            page->slab += neighbour->slab;
+
+            ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
+            neighbour->next->prev = neighbour->prev;
+
+            neighbour->slab = NGX_SLAB_PAGE_FREE;
+            neighbour->prev = (uintptr_t) &pool->free;
+            neighbour->next = &pool->free;
+            continue;
+        }
+
+        page += ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) ?
+            page->slab : 1;
+    }
 }



Regards,
Wandenberg


On Mon, Jul 1, 2013 at 8:36 AM, Maxim Dounin mdou...@mdounin.ru wrote:

 Hello!

 On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote:

  Hi,
 
  I'm trying to understand how the shared memory pool works inside Nginx.
  To do that, I made a very small module which creates a shared memory
  zone with 2097152 bytes, and allocates and frees blocks of memory,
  starting from 0 and increasing by 1kb until the allocation fails.
 
  The strange parts to me were:
  - the maximum block I could allocate was 128000 bytes
  - each time the allocation failed, I started again from 0, but the maximum
  allocated block changed with the following profile:
  128000
  87040
  70656
  62464
  58368
  54272
  50176
  46080
  41984
  37888
  33792
  29696
 
  Is this the expected behavior?
  Can anyone help me by explaining how shared memory works?
  I have another module which makes intensive use of shared memory, and
  understanding this can help me improve it, solving some "no memory"
  messages.
 
  I put the code in the attachment.

 I've looked into this, and the behaviour is expected as per
 nginx slab allocator code and the way you do allocations in your
 test.

 Increasing allocations of large blocks immediately followed by
 freeing them result in free memory blocks being split into smaller
 blocks, eventually leaving at most page-size allocations
 possible.  Take a look at ngx_slab_alloc_pages() and
 ngx_slab_free_pages() for details.

 Note that the slab allocator nginx uses for allocations in shared
 memory is designed mostly for small allocations.  It works well
 for allocations smaller than the page size, but support for large
 allocations is very simple.  Probably it should be improved, but as of now
 nothing in nginx uses large allocations in shared memory.

 --
 Maxim Dounin
 http://nginx.org/en/donation.html




unfrag_slab_memory.patch
Description: Binary data

Re: Help with shared memory usage

2013-07-01 Thread Maxim Dounin
Hello!

On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote:

 Hi,
 
 I'm trying to understand how the shared memory pool works inside Nginx.
 To do that, I made a very small module which creates a shared memory
 zone with 2097152 bytes, and allocates and frees blocks of memory,
 starting from 0 and increasing by 1kb until the allocation fails.
 
 The strange parts to me were:
 - the maximum block I could allocate was 128000 bytes
 - each time the allocation failed, I started again from 0, but the maximum
 allocated block changed with the following profile:
 128000
 87040
 70656
 62464
 58368
 54272
 50176
 46080
 41984
 37888
 33792
 29696
 
 Is this the expected behavior?
 Can anyone help me by explaining how shared memory works?
 I have another module which makes intensive use of shared memory, and
 understanding this can help me improve it, solving some "no memory" messages.
 
 I put the code in the attachment.

I've looked into this, and the behaviour is expected as per 
nginx slab allocator code and the way you do allocations in your 
test.

Increasing allocations of large blocks immediately followed by 
freeing them result in free memory blocks being split into smaller 
blocks, eventually leaving at most page-size allocations 
possible.  Take a look at ngx_slab_alloc_pages() and 
ngx_slab_free_pages() for details.

Note that the slab allocator nginx uses for allocations in shared 
memory is designed mostly for small allocations.  It works well 
for allocations smaller than the page size, but support for large 
allocations is very simple.  Probably it should be improved, but as of now 
nothing in nginx uses large allocations in shared memory.
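
The effect is easy to reproduce; a minimal sketch of such a test loop 
(hypothetical, assuming shpool points at the initialized 2097152-byte 
zone from the question):

    /* Sketch of the kind of test described above: allocate ever-larger
     * blocks, freeing each one immediately, until allocation fails.
     * Each cycle splits a large free run and returns the pieces to
     * pool->free separately, so without merging the largest satisfiable
     * request shrinks from round to round. */
    size_t   size;
    u_char  *p;

    for (size = 1024; ; size += 1024) {
        p = ngx_slab_alloc_locked(shpool, size);

        if (p == NULL) {
            break;      /* largest block this round: size - 1024 */
        }

        ngx_slab_free_locked(shpool, p);
    }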

-- 
Maxim Dounin
http://nginx.org/en/donation.html



Help with shared memory usage

2013-06-28 Thread Wandenberg Peixoto
Hi,

I'm trying to understand how the shared memory pool works inside Nginx.
To do that, I made a very small module which creates a shared memory
zone with 2097152 bytes, and allocates and frees blocks of memory,
starting from 0 and increasing by 1kb until the allocation fails.

The strange parts to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation failed, I started again from 0, but the maximum
allocated block changed with the following profile:
128000
87040
70656
62464
58368
54272
50176
46080
41984
37888
33792
29696

Is this the expected behavior?
Can anyone help me by explaining how shared memory works?
I have another module which makes intensive use of shared memory, and
understanding this can help me improve it, solving some "no memory" messages.

I put the code in the attachment.

Thanks,
Wandenberg


nginx-shm-fragmentation-module.tar.gz
Description: GNU Zip compressed data