Here are some pprof results, using Go 1.7.1 first and tip later:

go tool pprof -alloc_space lushan-server https://localhost:8083/debug/pprof/heap
Fetching profile from https://localhost:8083/debug/pprof/heap
Saved profile in /Users/rdifazio/pprof/pprof.lushan-server.localhost:8083.alloc_objects.alloc_space.078.pb.gz
Entering interactive mode (type "help" for commands)
(pprof) top
1294.76MB of 1549.02MB total (83.59%)
Dropped 117 nodes (cum <= 7.75MB)
Showing top 10 nodes out of 89 (cum >= 20.50MB)
      flat  flat%   sum%        cum   cum%
  865.53MB 55.88% 55.88%   866.03MB 55.91%  math/big.putNat
   99.04MB  6.39% 62.27%   965.57MB 62.33%  math/big.nat.divLarge
   89.05MB  5.75% 68.02%    91.05MB  5.88%  math/big.nat.mul
   51.51MB  3.33% 71.34%    51.51MB  3.33%  math/big.nat.montgomery
   51.01MB  3.29% 74.64%   928.54MB 59.94%  math/big.(*Int).GCD
   38.04MB  2.46% 77.09%    38.04MB  2.46%  crypto/tls.(*block).reserve
   35.05MB  2.26% 79.36%    35.05MB  2.26%  crypto/tls.(*Conn).write
   23.01MB  1.49% 80.84%   218.09MB 14.08%  math/big.nat.expNN
   22.03MB  1.42% 82.26%    22.03MB  1.42%  crypto/elliptic.(*p256Point).p256ScalarMult
   20.50MB  1.32% 83.59%    20.50MB  1.32%  crypto/sha256.New
(pprof)
                                                                            
                                                                            
lushan-server master % go tool pprof -alloc_space lushan-server https://localhost:8083/debug/pprof/heap
Fetching profile from https://localhost:8083/debug/pprof/heap
http fetch https://localhost:8083/debug/pprof/heap: Get https://localhost:8083/debug/pprof/heap: dial tcp [::1]:8083: getsockopt: connection refused
lushan-server master % go tool pprof -alloc_space lushan-server https://localhost:8083/debug/pprof/heap
Fetching profile from https://localhost:8083/debug/pprof/heap
Saved profile in /Users/rdifazio/pprof/pprof.lushan-server.localhost:8083.alloc_objects.alloc_space.079.pb.gz
Entering interactive mode (type "help" for commands)
(pprof) top
366.69MB of 536.79MB total (68.31%)
Dropped 80 nodes (cum <= 2.68MB)
Showing top 10 nodes out of 106 (cum >= 170.57MB)
      flat  flat%   sum%        cum   cum%
   76.54MB 14.26% 14.26%    82.04MB 15.28%  math/big.nat.mul
   73.53MB 13.70% 27.96%    74.03MB 13.79%  math/big.nat.divLarge
   50.51MB  9.41% 37.37%    50.51MB  9.41%  math/big.nat.montgomery
   32.51MB  6.06% 43.42%    70.01MB 13.04%  math/big.(*Int).GCD
   30.53MB  5.69% 49.11%    30.53MB  5.69%  crypto/tls.(*block).reserve
   27.01MB  5.03% 54.14%    27.01MB  5.03%  math/big.nat.add
   26.53MB  4.94% 59.08%    26.53MB  4.94%  crypto/tls.(*Conn).write
   21.53MB  4.01% 63.09%    21.53MB  4.01%  crypto/elliptic.(*p256Point).p256ScalarMult
   14.50MB  2.70% 65.80%    14.50MB  2.70%  crypto/sha256.New
   13.50MB  2.52% 68.31%   170.57MB 31.78%  math/big.nat.expNN
(pprof)
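
For reference, the /debug/pprof/heap endpoint fetched above is presumably exposed by importing net/http/pprof on the same mux the server uses; a minimal sketch under that assumption (the handler, port, and certificate file names are placeholders, not the actual lushan-server code):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	// cert.pem and key.pem stand in for the real certificate files.
	log.Fatal(http.ListenAndServeTLS(":8083", "cert.pem", "key.pem", nil))
}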




The improvement is definitely noticeable. The questions from before are 
still open though :-) 
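
One way to check whether the remaining RSS is simply heap the runtime has not yet handed back to the OS (the scavenger only releases idle spans after several minutes) is to log the runtime's own counters and compare them with what htop reports. A rough sketch, assuming it is dropped into the server's main package (imports: log, runtime, runtime/debug, time) and started with go logMemStats(); none of this is taken from the actual code:

func logMemStats() {
	for range time.Tick(30 * time.Second) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		// HeapIdle - HeapReleased is memory the GC has freed but not yet
		// returned to the OS; it still counts toward RSS in htop.
		log.Printf("alloc=%dMB sys=%dMB idle=%dMB released=%dMB",
			m.HeapAlloc>>20, m.HeapSys>>20, m.HeapIdle>>20, m.HeapReleased>>20)
		// Optional experiment: ask the runtime to return idle memory now.
		debug.FreeOSMemory()
	}
}

If sys stays flat while alloc drops and released grows, the memory is accounted for by the runtime and will be reused; if sys itself keeps growing, something is still holding references.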



On Sunday, October 16, 2016 at 11:39:02 AM UTC+2, Raffaele Di Fazio wrote:
>
> I haven't tried tip yet, but I saw this patch yesterday. It looks like 
> this might help as I'm not using these pools directly. Over time though, I 
> would expect the GC to free the memory allocated. It's true that I see much 
> lower values in the profiler for the in_use memory, but I wonder why my 
> machine keeps saying that the application is using quite some memory (400 
> MB for a simple web app seems quite a lot)... maybe the GC collects the 
> memory but does not return it to the system? 
>
> @Andrey: I might create a gist; unfortunately, I can't share the full 
> codebase. The initialization, though, looks mostly like this: 
> https://github.com/zalando/chimp/blob/master/api/server.go#L124 . I 
> wonder if setting proper read and write timeouts and disabling keepalives 
> might help. 
>
>
>
> On Saturday, October 15, 2016 at 8:43:44 PM UTC+2, alb.do...@gmail.com 
> wrote:
>>
>> Does this happen on tip too? There was a recent CL that
>> modified the code of the nat pool; see
>>
>> https://go-review.googlesource.com/#/c/30613/
>>
>> esp. the "Eliminate allocation in divLarge nat pool" part.
>>
>>
>> On Saturday, October 15, 2016 at 4:28:01 PM UTC+2, Raffaele Di Fazio 
>> wrote:
>>>
>>> Hi, 
>>> I have a web application that over time uses more and more memory. This 
>>> is the output of pprof of the heap: 
>>>
>>> go tool pprof -alloc_space lushan-server https://localhost:8083/debug/pprof/heap
>>> Fetching profile from https://localhost:8083/debug/pprof/heap
>>> Saved profile in /Users/rdifazio/pprof/pprof.lushan-server.localhost:8083.alloc_objects.alloc_space.022.pb.gz
>>> Entering interactive mode (type "help" for commands)
>>> (pprof) top
>>> 43.06MB of 67.07MB total (64.20%)
>>> Dropped 4 nodes (cum <= 0.34MB)
>>> Showing top 10 nodes out of 188 (cum >= 1.50MB)
>>>       flat  flat%   sum%        cum   cum%
>>>       26MB 38.76% 38.76%       26MB 38.76%  math/big.putNat
>>>        3MB  4.48% 43.24%     3.50MB  5.22%  encoding/json.(*decodeState).objectInterface
>>>     2.50MB  3.73% 46.97%     2.50MB  3.73%  crypto/tls.(*Conn).write
>>>        2MB  2.98% 49.96%        2MB  2.98%  crypto/tls.(*block).reserve
>>>        2MB  2.98% 52.94%    10.50MB 15.66%  encoding/json.Unmarshal
>>>     1.55MB  2.31% 55.25%     1.55MB  2.31%  regexp.(*bitState).reset
>>>     1.50MB  2.24% 57.49%     9.50MB 14.17%  github.com/go-openapi/spec.(*Schema).UnmarshalJSON
>>>     1.50MB  2.24% 59.72%    27.50MB 41.00%  math/big.nat.divLarge
>>>     1.50MB  2.24% 61.96%     5.50MB  8.20%  math/big.nat.expNN
>>>     1.50MB  2.24% 64.20%     1.50MB  2.24%  crypto/sha512.New384
>>> (pprof)
>>>
>>>
>>> ROUTINE ======================== math/big.putNat in /usr/local/Cellar/go/1.7.1/libexec/src/math/big/nat.go
>>>       26MB       26MB (flat, cum) 38.76% of Total
>>>          .          .    550: }
>>>          .          .    551: return z.make(n)
>>>          .          .    552:}
>>>          .          .    553:
>>>          .          .    554:func putNat(x nat) {
>>>       26MB       26MB    555: natPool.Put(x)
>>>          .          .    556:}
>>>          .          .    557:
>>>          .          .    558:var natPool sync.Pool
>>>          .          .    559:
>>>          .          .    560:// q = (uIn-r)/v, with 0 <= r < y
>>>
>>>
>>>
>>> The memory allocated in math/big.putNat seems to increase over time, 
>>> generating very high memory usage for a web application that is 
>>> executing very few requests per second. I wonder why, and how I can 
>>> better analyze this issue. Please note that this happens only when 
>>> serving HTTPS. 
>>>
>>> I'm currently using Go 1.7 and the app itself uses the gin web 
>>> framework. 
>>>
>>> Thanks in advance! 
>>>
>>> Raffaele 
>>>
>>
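
Regarding the question in the quoted message about read/write timeouts and keep-alives: a minimal sketch of such a server configuration (the values and file names are only illustrative, not taken from the chimp code):

package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux() // stand-in for the gin engine used by the app
	srv := &http.Server{
		Addr:         ":8083",
		Handler:      mux,
		ReadTimeout:  10 * time.Second, // limit how long a client may take to send a request
		WriteTimeout: 10 * time.Second, // limit how long writing a response may take
	}
	// Note: disabling keep-alives forces a fresh TLS handshake per request,
	// which would increase, not reduce, the math/big and crypto allocations
	// seen in the profiles above.
	srv.SetKeepAlivesEnabled(false)
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}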
