Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-30 Thread Tamás Gulácsi

On Monday, March 30, 2020 at 19:44:05 UTC+2, Nitish Saboo wrote:
>
> Hi,
>
> Requesting valuable inputs on this.
>
> Thanks,
> Nitish
>
> On Thu, Mar 26, 2020 at 5:08 PM Nitish Saboo  > wrote:
>
>> Hi Tamas,
>>
>> 1) There is no such option '--heap_inuse'. We have an --inuse_space 
>> option. Is this what you are talking about?
>>
>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof --inuse_space main 
>> mem3.prof 
>> Fetched 1 source profiles out of 2
>> File: main
>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>> Type: inuse_space
>> Time: Mar 22, 2020 at 6:32am (PDT)
>> Entering interactive mode (type "help" for commands, "o" for options)
>> (pprof) o
>>   call_tree = false
>>   compact_labels= true 
>>   cumulative= flat //: [cum | flat]
>>   divide_by = 1
>>   drop_negative = false
>>   edgefraction  = 0.001
>>   focus = ""   
>>   granularity   = filefunctions//: [addresses | 
>> filefunctions | files | functions | lines]
>>   hide  = ""   
>>   ignore= ""   
>>   mean  = false
>>   nodecount = -1   //: default
>>   nodefraction  = 0.005
>>   noinlines = false
>>   normalize = false
>>   output= ""   
>>   prune_from= ""   
>>   relative_percentages  = false
>>   sample_index  = inuse_space  //: [alloc_objects | 
>> alloc_space | inuse_objects | inuse_space]
>>   show  = ""   
>>   show_from = ""   
>>   tagfocus  = ""   
>>   taghide   = ""   
>>   tagignore = ""   
>>   tagshow   = ""   
>>   trim  = true 
>>   trim_path = ""   
>>   unit  = minimum   
>>
>>  2) When I don't pass the flag it defaults to --inuse_space:
>>
>> File: main
>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>> Type: inuse_space
>> Time: Mar 22, 2020 at 6:32am (PDT)
>> Entering interactive mode (type "help" for commands, "o" for options)
>> (pprof) top
>> Showing nodes accounting for 3303.21kB, 100% of 3303.21kB total
>> Showing top 10 nodes out of 28
>>   flat  flat%   sum%cum   cum%
>>  1762.94kB 53.37% 53.37%  1762.94kB 53.37%  runtime/pprof.StartCPUProfile
>>   516.01kB 15.62% 68.99%   516.01kB 15.62% 
>>  runtime/pprof.(*profMap).lookup
>>   512.19kB 15.51% 84.50%   512.19kB 15.51%  runtime.malg
>>   512.08kB 15.50%   100%   512.08kB 15.50% 
>>  crypto/x509/pkix.(*Name).FillFromRDNSequence
>>  0 0%   100%   512.08kB 15.50%  crypto/tls.(*Conn).Handshake
>>  0 0%   100%   512.08kB 15.50% 
>>  crypto/tls.(*Conn).clientHandshake
>>  0 0%   100%   512.08kB 15.50% 
>>  crypto/tls.(*Conn).verifyServerCertificate
>>  0 0%   100%   512.08kB 15.50% 
>>  crypto/tls.(*clientHandshakeState).doFullHandshake
>>  0 0%   100%   512.08kB 15.50% 
>>  crypto/tls.(*clientHandshakeState).handshake
>>  0 0%   100%   512.08kB 15.50% 
>>  crypto/x509.(*CertPool).AppendCertsFromPEM
>>
>> Please correct me if I am missing something here.
>>
>> Thanks,
>> Nitish
>>
>>
>>>
" Showing nodes accounting for 3303.21kB, 100% of 3303.21kB total"

means Go has occupies only about 3.3MB of memory - if you see waay more 
RSS, than its somewhere else.
If you use cgo, than that's the principal suspect. I mean your C code.

Tamás
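
(To see that split concretely, here is a minimal sketch - assuming Linux; the /proc/self/status path and the VmRSS field are standard procfs, not something from this thread - that compares what the Go runtime owns with the RSS the OS reports. C-side allocations show up only in the latter.)

package main

import (
    "bufio"
    "fmt"
    "os"
    "runtime"
    "strings"
)

func main() {
    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    // Memory the Go runtime knows about.
    fmt.Printf("Go heap in use: %d kB, obtained from OS by Go: %d kB\n",
        ms.HeapInuse/1024, ms.Sys/1024)

    // Memory the OS has resident for the whole process (Linux only);
    // a large gap between VmRSS and ms.Sys points at C-side usage.
    f, err := os.Open("/proc/self/status")
    if err != nil {
        fmt.Println("could not read /proc/self/status: ", err)
        return
    }
    defer f.Close()
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        if strings.HasPrefix(sc.Text(), "VmRSS:") {
            fmt.Println(sc.Text())
        }
    }
}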



Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-30 Thread Robert Engels
It has already been answered. The alloc size is not the in-use heap size. 
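
(A small sketch of that distinction using runtime.MemStats - my own illustration, not the OP's code: TotalAlloc is cumulative, like pprof's alloc_space, while HeapAlloc tracks live memory, like inuse_space.)

package main

import (
    "fmt"
    "runtime"
)

var sink []byte // global sink so the compiler cannot elide the allocation

func main() {
    var ms runtime.MemStats
    for i := 0; i < 3; i++ {
        sink = make([]byte, 100<<20) // allocate 100MB...
        sink = nil                   // ...and drop it immediately
        runtime.GC()
        runtime.ReadMemStats(&ms)
        // TotalAlloc keeps climbing; HeapAlloc stays small.
        fmt.Printf("TotalAlloc=%dMB HeapAlloc=%dMB\n",
            ms.TotalAlloc>>20, ms.HeapAlloc>>20)
    }
}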

> On Mar 30, 2020, at 12:43 PM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> Requesting valuable inputs on this.
> 
> Thanks,
> Nitish
> 
>> On Thu, Mar 26, 2020 at 5:08 PM Nitish Saboo  
>> wrote:
>> Hi Tamas,
>> 
>> 1) There is no such option '--heap_inuse'. We have an --inuse_space option. 
>> Is this what you are talking about?
>> 
>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof --inuse_space main mem3.prof 
>> Fetched 1 source profiles out of 2
>> File: main
>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>> Type: inuse_space
>> Time: Mar 22, 2020 at 6:32am (PDT)
>> Entering interactive mode (type "help" for commands, "o" for options)
>> (pprof) o
>>   call_tree = false
>>   compact_labels= true 
>>   cumulative= flat //: [cum | flat]
>>   divide_by = 1
>>   drop_negative = false
>>   edgefraction  = 0.001
>>   focus = ""   
>>   granularity   = filefunctions//: [addresses | 
>> filefunctions | files | functions | lines]
>>   hide  = ""   
>>   ignore= ""   
>>   mean  = false
>>   nodecount = -1   //: default
>>   nodefraction  = 0.005
>>   noinlines = false
>>   normalize = false
>>   output= ""   
>>   prune_from= ""   
>>   relative_percentages  = false
>>   sample_index  = inuse_space  //: [alloc_objects | 
>> alloc_space | inuse_objects | inuse_space]
>>   show  = ""   
>>   show_from = ""   
>>   tagfocus  = ""   
>>   taghide   = ""   
>>   tagignore = ""   
>>   tagshow   = ""   
>>   trim  = true 
>>   trim_path = ""   
>>   unit  = minimum   
>> 
>>  2) When I don't pass the flag it defaults to --inuse_space:
>> 
>> File: main
>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>> Type: inuse_space
>> Time: Mar 22, 2020 at 6:32am (PDT)
>> Entering interactive mode (type "help" for commands, "o" for options)
>> (pprof) top
>> Showing nodes accounting for 3303.21kB, 100% of 3303.21kB total
>> Showing top 10 nodes out of 28
>>   flat  flat%   sum%cum   cum%
>>  1762.94kB 53.37% 53.37%  1762.94kB 53.37%  runtime/pprof.StartCPUProfile
>>   516.01kB 15.62% 68.99%   516.01kB 15.62%  runtime/pprof.(*profMap).lookup
>>   512.19kB 15.51% 84.50%   512.19kB 15.51%  runtime.malg
>>   512.08kB 15.50%   100%   512.08kB 15.50%  
>> crypto/x509/pkix.(*Name).FillFromRDNSequence
>>  0 0%   100%   512.08kB 15.50%  crypto/tls.(*Conn).Handshake
>>  0 0%   100%   512.08kB 15.50%  
>> crypto/tls.(*Conn).clientHandshake
>>  0 0%   100%   512.08kB 15.50%  
>> crypto/tls.(*Conn).verifyServerCertificate
>>  0 0%   100%   512.08kB 15.50%  
>> crypto/tls.(*clientHandshakeState).doFullHandshake
>>  0 0%   100%   512.08kB 15.50%  
>> crypto/tls.(*clientHandshakeState).handshake
>>  0 0%   100%   512.08kB 15.50%  
>> crypto/x509.(*CertPool).AppendCertsFromPEM
>> 
>> Please correct me if I am missing something here.
>> 
>> Thanks,
>> Nitish
>> 
>> 
>>> On Wed, Mar 25, 2020 at 12:16 AM Tamás Gulácsi  wrote:
>>> You've requested the total allocated space (--alloc_space), not only the 
>>> heap used (--heap_inuse, or no flag).
>>> So that 17GiB is the total allocated size - it does NOT exclude what has already been released!
>>> 
>>> On Tuesday, March 24, 2020 at 15:16:46 UTC+1, Nitish Saboo wrote:
 
 Hi,
 
 I have already gone through those links. They helped me to gather the mem 
 profile and while analyzing the data(as given in those links) I have come 
 across the following issue:
 
 While I was running the service for 100 minutes the 'top' command output 
 was showing Mem% as 11.1. There was no increase in mem usage since I had 
 not called 'LoadPatternDB()' method. I have 8GB of memory on the node 
 where I am running the service. My issue is :
 
 Why is it showing memory accounting for around 17GB?  11.1 % of 8GB is 
 .88GB and my node is only of 8GB. I feel the way I gathered the mem 
 profiling was not correct ..is it ?
 Please let me know where am I going wrong?
 
 Thanks,
 Nitish
 
> On Tue, Mar 24, 2020 at 5:32 PM Nitish 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-30 Thread Nitish Saboo
Hi,

Requesting valuable inputs on this.

Thanks,
Nitish

On Thu, Mar 26, 2020 at 5:08 PM Nitish Saboo 
wrote:

> Hi Tamas,
>
> 1) There is no such option '--heap_inuse'. We have an --inuse_space
> option. Is this what you are talking about?
>
> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof --inuse_space main
> mem3.prof
> Fetched 1 source profiles out of 2
> File: main
> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
> Type: inuse_space
> Time: Mar 22, 2020 at 6:32am (PDT)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) o
>   call_tree = false
>   compact_labels= true
>   cumulative= flat //: [cum | flat]
>   divide_by = 1
>   drop_negative = false
>   edgefraction  = 0.001
>   focus = ""
>   granularity   = filefunctions//: [addresses |
> filefunctions | files | functions | lines]
>   hide  = ""
>   ignore= ""
>   mean  = false
>   nodecount = -1   //: default
>   nodefraction  = 0.005
>   noinlines = false
>   normalize = false
>   output= ""
>   prune_from= ""
>   relative_percentages  = false
>   sample_index  = inuse_space  //: [alloc_objects |
> alloc_space | inuse_objects | inuse_space]
>   show  = ""
>   show_from = ""
>   tagfocus  = ""
>   taghide   = ""
>   tagignore = ""
>   tagshow   = ""
>   trim  = true
>   trim_path = ""
>   unit  = minimum
>
>  2) When I don't pass the flag it defaults to --inuse_space:
>
> File: main
> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
> Type: inuse_space
> Time: Mar 22, 2020 at 6:32am (PDT)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top
> Showing nodes accounting for 3303.21kB, 100% of 3303.21kB total
> Showing top 10 nodes out of 28
>   flat  flat%   sum%cum   cum%
>  1762.94kB 53.37% 53.37%  1762.94kB 53.37%  runtime/pprof.StartCPUProfile
>   516.01kB 15.62% 68.99%   516.01kB 15.62%  runtime/pprof.(*profMap).lookup
>   512.19kB 15.51% 84.50%   512.19kB 15.51%  runtime.malg
>   512.08kB 15.50%   100%   512.08kB 15.50%
>  crypto/x509/pkix.(*Name).FillFromRDNSequence
>  0 0%   100%   512.08kB 15.50%  crypto/tls.(*Conn).Handshake
>  0 0%   100%   512.08kB 15.50%
>  crypto/tls.(*Conn).clientHandshake
>  0 0%   100%   512.08kB 15.50%
>  crypto/tls.(*Conn).verifyServerCertificate
>  0 0%   100%   512.08kB 15.50%
>  crypto/tls.(*clientHandshakeState).doFullHandshake
>  0 0%   100%   512.08kB 15.50%
>  crypto/tls.(*clientHandshakeState).handshake
>  0 0%   100%   512.08kB 15.50%
>  crypto/x509.(*CertPool).AppendCertsFromPEM
>
> Please correct me if I am missing something here.
>
> Thanks,
> Nitish
>
>
> On Wed, Mar 25, 2020 at 12:16 AM Tamás Gulácsi 
> wrote:
>
>> You've requested the total allocated space (--alloc_space), not only the
>> heap used (--heap_inuse, or no flag).
>> So that 17GiB is the total allocated size - it does NOT exclude what has already been released!
>>
>> On Tuesday, March 24, 2020 at 15:16:46 UTC+1, Nitish Saboo wrote:
>>>
>>> Hi,
>>>
>>> I have already gone through those links. They helped me to gather the
>>> mem profile and while analyzing the data(as given in those links) I have
>>> come across the following issue:
>>>
>>> While I was running the service for 100 minutes the 'top' command output
>>> was showing Mem% as 11.1. There was no increase in mem usage since I had
>>> not called 'LoadPatternDB()' method. I have 8GB of memory on the node where
>>> I am running the service. My issue is :
>>>
>>>
>>>- Why is it showing memory accounting for around 17GB?  11.1 % of
>>>8GB is .88GB and my node is only of 8GB. I feel the way I gathered the 
>>> mem
>>>profiling was not correct ..is it ?
>>>
>>> Please let me know where am I going wrong?
>>>
>>> Thanks,
>>> Nitish
>>>
>>> On Tue, Mar 24, 2020 at 5:32 PM Nitish Saboo 
>>> wrote:
>>>
 Hi,

 >>There is no root analysis available in Go. Read the paper I linked
 to.

 Sorry I did not get you. Which paper are you referring to?

 While I was running the service for 100 minutes the 'top' command
 output was showing Mem% as 11.1. There was no increase in mem usage since I
 had not called 'LoadPatternDB()' method.I have 8GB of memory on the node
 where I am running the service. My issue is :


- Why is it showing memory accounting for around 17GB?  11.1 % of
8GB is .88GB and my node is only of 8GB. I feel the way I gathered the 
 mem
profiling was not correct 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-26 Thread Nitish Saboo
Hi Tamas,

1) There is no such option '--heap_inuse'. We have an --inuse_space
option. Is this what you are talking about?

nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof --inuse_space main mem3.prof
Fetched 1 source profiles out of 2
File: main
Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
Type: inuse_space
Time: Mar 22, 2020 at 6:32am (PDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) o
  call_tree = false
  compact_labels= true
  cumulative= flat //: [cum | flat]
  divide_by = 1
  drop_negative = false
  edgefraction  = 0.001
  focus = ""
  granularity   = filefunctions//: [addresses |
filefunctions | files | functions | lines]
  hide  = ""
  ignore= ""
  mean  = false
  nodecount = -1   //: default
  nodefraction  = 0.005
  noinlines = false
  normalize = false
  output= ""
  prune_from= ""
  relative_percentages  = false
  sample_index  = inuse_space  //: [alloc_objects |
alloc_space | inuse_objects | inuse_space]
  show  = ""
  show_from = ""
  tagfocus  = ""
  taghide   = ""
  tagignore = ""
  tagshow   = ""
  trim  = true
  trim_path = ""
  unit  = minimum

 2) When I don't pass the flag it defaults to --inuse_space:

File: main
Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
Type: inuse_space
Time: Mar 22, 2020 at 6:32am (PDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 3303.21kB, 100% of 3303.21kB total
Showing top 10 nodes out of 28
  flat  flat%   sum%cum   cum%
 1762.94kB 53.37% 53.37%  1762.94kB 53.37%  runtime/pprof.StartCPUProfile
  516.01kB 15.62% 68.99%   516.01kB 15.62%  runtime/pprof.(*profMap).lookup
  512.19kB 15.51% 84.50%   512.19kB 15.51%  runtime.malg
  512.08kB 15.50%   100%   512.08kB 15.50%
 crypto/x509/pkix.(*Name).FillFromRDNSequence
 0 0%   100%   512.08kB 15.50%  crypto/tls.(*Conn).Handshake
 0 0%   100%   512.08kB 15.50%
 crypto/tls.(*Conn).clientHandshake
 0 0%   100%   512.08kB 15.50%
 crypto/tls.(*Conn).verifyServerCertificate
 0 0%   100%   512.08kB 15.50%
 crypto/tls.(*clientHandshakeState).doFullHandshake
 0 0%   100%   512.08kB 15.50%
 crypto/tls.(*clientHandshakeState).handshake
 0 0%   100%   512.08kB 15.50%
 crypto/x509.(*CertPool).AppendCertsFromPEM

Please correct me if I am missing something here.

Thanks,
Nitish
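
(Side note: a single heap profile file carries all four sample types, so - going by the option syntax the 'o' listing above uses - it should be possible to switch views inside one session instead of restarting pprof:)

(pprof) sample_index = alloc_space
(pprof) top
(pprof) sample_index = inuse_space
(pprof) top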


On Wed, Mar 25, 2020 at 12:16 AM Tamás Gulácsi  wrote:

> You've requested the total allocated space (--alloc_space), not only the
> heap used (--heap_inuse, or no flag).
> So that 17GiB is the total allocated size - it does NOT exclude what has already been released!
>
> On Tuesday, March 24, 2020 at 15:16:46 UTC+1, Nitish Saboo wrote:
>>
>> Hi,
>>
>> I have already gone through those links. They helped me to gather the mem
>> profile and while analyzing the data(as given in those links) I have come
>> across the following issue:
>>
>> While I was running the service for 100 minutes the 'top' command output
>> was showing Mem% as 11.1. There was no increase in mem usage since I had
>> not called 'LoadPatternDB()' method. I have 8GB of memory on the node where
>> I am running the service. My issue is :
>>
>>
>>- Why is it showing memory accounting for around 17GB?  11.1 % of 8GB
>>is .88GB and my node is only of 8GB. I feel the way I gathered the mem
>>profiling was not correct ..is it ?
>>
>> Please let me know where am I going wrong?
>>
>> Thanks,
>> Nitish
>>
>> On Tue, Mar 24, 2020 at 5:32 PM Nitish Saboo  wrote:
>>
>>> Hi,
>>>
>>> >>There is no root analysis available in Go. Read the paper I linked to.
>>>
>>> Sorry I did not get you. Which paper are you referring to?
>>>
>>> While I was running the service for 100 minutes the 'top' command output
>>> was showing Mem% as 11.1. There was no increase in mem usage since I had
>>> not called 'LoadPatternDB()' method.I have 8GB of memory on the node where
>>> I am running the service. My issue is :
>>>
>>>
>>>- Why is it showing memory accounting for around 17GB?  11.1 % of
>>>8GB is .88GB and my node is only of 8GB. I feel the way I gathered the 
>>> mem
>>>profiling was not correct ..is it ?
>>>
>>> Please advise me what am I missing?
>>>
>>> Thanks,
>>> Nitish
>>>
>>> On Tue, Mar 24, 2020 at 1:28 AM Robert Engels 
>>> wrote:
>>>
 Yes. You have a leak in your Go code. It shows you the object types
 that are taking up all of the space. There is no root analysis available in
 Go. Read the 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-24 Thread Robert Engels
Correct. I didn’t read the output... :) I just assumed the OP was doing that... 
my bad. 

> On Mar 24, 2020, at 1:46 PM, Tamás Gulácsi  wrote:
> 
> 
> You've requested the total allocated space (--alloc_space), not only the heap 
> used (--heap_inuse, or no flag).
> So that 17GiB is the total allocated size - it does NOT exclude what has already been released!
> 
> On Tuesday, March 24, 2020 at 15:16:46 UTC+1, Nitish Saboo wrote:
>> 
>> Hi,
>> 
>> I have already gone through those links. They helped me to gather the mem 
>> profile and while analyzing the data(as given in those links) I have come 
>> across the following issue:
>> 
>> While I was running the service for 100 minutes the 'top' command output was 
>> showing Mem% as 11.1. There was no increase in mem usage since I had not 
>> called 'LoadPatternDB()' method. I have 8GB of memory on the node where I am 
>> running the service. My issue is :
>> 
>> Why is it showing memory accounting for around 17GB?  11.1 % of 8GB is .88GB 
>> and my node is only of 8GB. I feel the way I gathered the mem profiling was 
>> not correct ..is it ?
>> Please let me know where am I going wrong?
>> 
>> Thanks,
>> Nitish
>> 
>>> On Tue, Mar 24, 2020 at 5:32 PM Nitish Saboo  wrote:
>>> Hi,
>>> 
>>> >>There is no root analysis available in Go. Read the paper I linked to. 
>>> 
>>> Sorry I did not get you. Which paper are you referring to?
>>> 
>>> While I was running the service for 100 minutes the 'top' command output 
>>> was showing Mem% as 11.1. There was no increase in mem usage since I had 
>>> not called 'LoadPatternDB()' method.I have 8GB of memory on the node where 
>>> I am running the service. My issue is :
>>> 
>>> Why is it showing memory accounting for around 17GB?  11.1 % of 8GB is 
>>> .88GB and my node is only of 8GB. I feel the way I gathered the mem 
>>> profiling was not correct ..is it ?
>>> Please advise me what am I missing?
>>> 
>>> Thanks,
>>> Nitish
>>> 
 On Tue, Mar 24, 2020 at 1:28 AM Robert Engels  wrote:
 Yes. You have a leak in your Go code. It shows you the object types that 
 are taking up all of the space. There is no root analysis available in Go. 
 Read the paper I linked to. 
 
>> On Mar 23, 2020, at 9:12 AM, Nitish Saboo  wrote:
>> 
> 
> Hi,
> 
> I used something like the following to generate a memprof for 100 minutes
> 
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
> timeout := time.After(100 * time.Minute)
> A_chan := make(chan bool)
> B_chan := make(chan bool)
> go util.A(A_chan)
> go util.B(B_chan)
> (..Rest of the code..)
> 
> for {
> select {
> case <-A_chan:
> continue
> case <-B_chan:
> continue
> case <-timeout:
> break
> }
> break
> }
> 
> if *memprofile != "" {
> count = count + 1
> fmt.Println("Generating Mem Profile:")
> fmt.Print(count)
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC()// get up-to-date statistics
> if err := pprof.WriteHeapProfile(f); err != nil {
> fmt.Println("could not write memory profile: ", err)
> }
> 
> }
> }
> 
> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
> Fetched 1 source profiles out of 2
> File: main
> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
> Type: alloc_space
> Time: Mar 22, 2020 at 7:02pm (IST)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top
> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total   
> <<<
> Dropped 445 nodes (cum <= 101.65MB)
> Showing top 10 nodes out of 58
>   flat  flat%   sum%cum   cum%
> 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
>  1675.69MB  8.24% 67.27%  1911.69MB  9.40%  
> home/nsaboo/project/utils.Unflatten
>   627.21MB  3.09% 70.35%  1475.10MB  7.26%  
> encoding/json.mapEncoder.encode
>   626.76MB  3.08% 73.43%   626.76MB  3.08%  
> encoding/json.(*Decoder).refill
>   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>   344.88MB  1.70% 87.65%   

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-24 Thread Tamás Gulácsi
You've requested the total allocated space (--alloc_space), not only the 
heap used (--heap_inuse, or no flag).
So that 17GiB is the total allocated size - it does NOT exclude what has already been released!
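
(A tiny sketch of why the two numbers diverge - illustrative only, with a made-up 1GiB buffer: after the buffer is released and collected, it still counts toward alloc_space but no longer toward inuse_space.)

package main

import (
    "os"
    "runtime"
    "runtime/pprof"
)

var buf []byte

func main() {
    buf = make([]byte, 1<<30) // allocate 1GiB
    buf[0] = 1
    buf = nil    // release the only reference
    runtime.GC() // let the GC reclaim it before profiling

    f, err := os.Create("mem.prof")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    pprof.WriteHeapProfile(f)
    // --alloc_space shows the 1GiB; --inuse_space shows (almost) nothing.
}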

On Tuesday, March 24, 2020 at 15:16:46 UTC+1, Nitish Saboo wrote:
>
> Hi,
>
> I have already gone through those links. They helped me to gather the mem 
> profile and while analyzing the data(as given in those links) I have come 
> across the following issue:
>
> While I was running the service for 100 minutes the 'top' command output 
> was showing Mem% as 11.1. There was no increase in mem usage since I had 
> not called 'LoadPatternDB()' method. I have 8GB of memory on the node where 
> I am running the service. My issue is :
>
>
>- Why is it showing memory accounting for around 17GB?  11.1 % of 8GB 
>is .88GB and my node is only of 8GB. I feel the way I gathered the mem 
>profiling was not correct ..is it ?
>
> Please let me know where am I going wrong?
>
> Thanks,
> Nitish
>
> On Tue, Mar 24, 2020 at 5:32 PM Nitish Saboo  > wrote:
>
>> Hi,
>>
>> >>There is no root analysis available in Go. Read the paper I linked to. 
>>
>> Sorry I did not get you. Which paper are you referring to?
>>
>> While I was running the service for 100 minutes the 'top' command output 
>> was showing Mem% as 11.1. There was no increase in mem usage since I had 
>> not called 'LoadPatternDB()' method.I have 8GB of memory on the node where 
>> I am running the service. My issue is :
>>
>>
>>- Why is it showing memory accounting for around 17GB?  11.1 % of 8GB 
>>is .88GB and my node is only of 8GB. I feel the way I gathered the mem 
>>profiling was not correct ..is it ?
>>
>> Please advise me what am I missing?
>>
>> Thanks,
>> Nitish
>>
>> On Tue, Mar 24, 2020 at 1:28 AM Robert Engels > > wrote:
>>
>>> Yes. You have a leak in your Go code. It shows you the object types that 
>>> are taking up all of the space. There is no root analysis available in Go. 
>>> Read the paper I linked to. 
>>>
>>> On Mar 23, 2020, at 9:12 AM, Nitish Saboo >> > wrote:
>>>
>>> 
>>> Hi,
>>>
>>> I used something like the following to generate a memprof for 100 minutes
>>>
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>> timeout := time.After(100 * time.Minute)
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>>
>>> for {
>>> select {
>>> case <-A_chan:
>>> continue
>>> case <-B_chan:
>>> continue
>>> case <-timeout:
>>> break
>>> }
>>> break
>>> }
>>>
>>> if *memprofile != "" {
>>> count = count + 1
>>> fmt.Println("Generating Mem Profile:")
>>> fmt.Print(count)
>>> f, err := os.Create(*memprofile)
>>> if err != nil {
>>> fmt.Println("could not create memory profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> runtime.GC()// get up-to-date statistics
>>> if err := pprof.WriteHeapProfile(f); err != nil {
>>> fmt.Println("could not write memory profile: ", err)
>>> }
>>>
>>> }
>>> }
>>>
>>> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
>>> Fetched 1 source profiles out of 2
>>> File: main
>>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>>> Type: alloc_space
>>> Time: Mar 22, 2020 at 7:02pm (IST)
>>> Entering interactive mode (type "help" for commands, "o" for options)
>>> (pprof) top
>>> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total   
>>> <<<
>>> Dropped 445 nodes (cum <= 101.65MB)
>>> Showing top 10 nodes out of 58
>>>   flat  flat%   sum%cum   cum%
>>> 11999.09MB 59.02% 59.02% 19800.37MB 97.40% 
>>>  home/nsaboo/project/aws.Events
>>>  1675.69MB  8.24% 67.27%  1911.69MB  9.40% 
>>>  home/nsaboo/project/utils.Unflatten
>>>   627.21MB  3.09% 70.35%  1475.10MB  7.26% 
>>>  encoding/json.mapEncoder.encode
>>>   626.76MB  3.08% 73.43%   626.76MB  3.08% 
>>>  encoding/json.(*Decoder).refill
>>>   611.95MB  3.01% 76.44%  4557.69MB 22.42% 
>>>  home/nsaboo/project/lib.format
>>>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>>>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>>>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>>>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>>>   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
>>>
>>> 1)Is this the correct way of getting a memory profile?
>>>
>>> 2)I ran the service for 100 minutes on a machine with 8GB memory. How am 
>>> I seeing memory accounting for around 17GB?
>>>
>>> 3)I understand 'flat' means memory occupied within that method, but how 
>>> come it shot up 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-24 Thread Robert Engels
You have virtual memory, most likely. The in-use is clearly 17GB. 

> On Mar 24, 2020, at 9:16 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> I have already gone through those links. They helped me to gather the mem 
> profile and while analyzing the data(as given in those links) I have come 
> across the following issue:
> 
> While I was running the service for 100 minutes the 'top' command output was 
> showing Mem% as 11.1. There was no increase in mem usage since I had not 
> called 'LoadPatternDB()' method. I have 8GB of memory on the node where I am 
> running the service. My issue is :
> 
> Why is it showing memory accounting for around 17GB?  11.1 % of 8GB is .88GB 
> and my node is only of 8GB. I feel the way I gathered the mem profiling was 
> not correct ..is it ?
> Please let me know where am I going wrong?
> 
> Thanks,
> Nitish
> 
>> On Tue, Mar 24, 2020 at 5:32 PM Nitish Saboo  
>> wrote:
>> Hi,
>> 
>> >>There is no root analysis available in Go. Read the paper I linked to. 
>> 
>> Sorry I did not get you. Which paper are you referring to?
>> 
>> While I was running the service for 100 minutes the 'top' command output was 
>> showing Mem% as 11.1. There was no increase in mem usage since I had not 
>> called 'LoadPatternDB()' method.I have 8GB of memory on the node where I am 
>> running the service. My issue is :
>> 
>> Why is it showing memory accounting for around 17GB?  11.1 % of 8GB is .88GB 
>> and my node is only of 8GB. I feel the way I gathered the mem profiling was 
>> not correct ..is it ?
>> Please advise me what am I missing?
>> 
>> Thanks,
>> Nitish
>> 
>>> On Tue, Mar 24, 2020 at 1:28 AM Robert Engels  wrote:
>>> Yes. You have a leak in your Go code. It shows you the object types that 
>>> are taking up all of the space. There is no root analysis available in Go. 
>>> Read the paper I linked to. 
>>> 
> On Mar 23, 2020, at 9:12 AM, Nitish Saboo  
> wrote:
> 
 
 Hi,
 
 I used something like the following to generate a memprof for 100 minutes
 
 func main() {
 flag.Parse()
 if *cpuprofile != "" {
 f, err := os.Create(*cpuprofile)
 if err != nil {
 fmt.Println("could not create CPU profile: ", err)
 }
 defer f.Close() // error handling omitted for example
 if err := pprof.StartCPUProfile(f); err != nil {
 fmt.Print("could not start CPU profile: ", err)
 }
 defer pprof.StopCPUProfile()
 }
 timeout := time.After(100 * time.Minute)
 A_chan := make(chan bool)
 B_chan := make(chan bool)
 go util.A(A_chan)
 go util.B(B_chan)
 (..Rest of the code..)
 
 for {
 select {
 case <-A_chan:
 continue
 case <-B_chan:
 continue
 case <-timeout:
 break
 }
 break
 }
 
 if *memprofile != "" {
 count = count + 1
 fmt.Println("Generating Mem Profile:")
 fmt.Print(count)
 f, err := os.Create(*memprofile)
 if err != nil {
 fmt.Println("could not create memory profile: ", err)
 }
 defer f.Close() // error handling omitted for example
 runtime.GC()// get up-to-date statistics
 if err := pprof.WriteHeapProfile(f); err != nil {
 fmt.Println("could not write memory profile: ", err)
 }
 
 }
 }
 
 /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
 Fetched 1 source profiles out of 2
 File: main
 Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
 Type: alloc_space
 Time: Mar 22, 2020 at 7:02pm (IST)
 Entering interactive mode (type "help" for commands, "o" for options)
 (pprof) top
 Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total   
 <<<
 Dropped 445 nodes (cum <= 101.65MB)
 Showing top 10 nodes out of 58
   flat  flat%   sum%cum   cum%
 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
  1675.69MB  8.24% 67.27%  1911.69MB  9.40%  
 home/nsaboo/project/utils.Unflatten
   627.21MB  3.09% 70.35%  1475.10MB  7.26%  encoding/json.mapEncoder.encode
   626.76MB  3.08% 73.43%   626.76MB  3.08%  encoding/json.(*Decoder).refill
   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
 
 1)Is this the correct way of getting a memory profile?
 
 2)I ran the service for 100 minutes on a machine with 8GB memory. How am I 
 seeing memory accounting for around 17GB?
 
 3)I understand 'flat' means memory occupied within that method, but how 
 come it shot up more than the available memory? Hence, asking if the above 
 process is the correct way of 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-24 Thread Nitish Saboo
Hi,

I have already gone through those links. They helped me to gather the mem
profile and while analyzing the data(as given in those links) I have come
across the following issue:

While I was running the service for 100 minutes, the 'top' command output
showed Mem% as 11.1. There was no increase in memory usage since I had
not called the 'LoadPatternDB()' method. I have 8GB of memory on the node where
I am running the service. My issue is:

   - Why is the profile showing memory accounting for around 17GB? 11.1% of 8GB is
   .88GB, and my node has only 8GB. I feel the way I gathered the memory
   profile was not correct - is it?

Please let me know where I am going wrong.

Thanks,
Nitish

On Tue, Mar 24, 2020 at 5:32 PM Nitish Saboo 
wrote:

> Hi,
>
> >>There is no root analysis available in Go. Read the paper I linked to.
>
> Sorry I did not get you. Which paper are you referring to?
>
> While I was running the service for 100 minutes the 'top' command output
> was showing Mem% as 11.1. There was no increase in mem usage since I had
> not called 'LoadPatternDB()' method.I have 8GB of memory on the node where
> I am running the service. My issue is :
>
>
>- Why is it showing memory accounting for around 17GB?  11.1 % of 8GB
>is .88GB and my node is only of 8GB. I feel the way I gathered the mem
>profiling was not correct ..is it ?
>
> Please advise me what am I missing?
>
> Thanks,
> Nitish
>
> On Tue, Mar 24, 2020 at 1:28 AM Robert Engels 
> wrote:
>
>> Yes. You have a leak in your Go code. It shows you the object types that
>> are taking up all of the space. There is no root analysis available in Go.
>> Read the paper I linked to.
>>
>> On Mar 23, 2020, at 9:12 AM, Nitish Saboo 
>> wrote:
>>
>> 
>> Hi,
>>
>> I used something like the following to generate a memprof for 100 minutes
>>
>> func main() {
>> flag.Parse()
>> if *cpuprofile != "" {
>> f, err := os.Create(*cpuprofile)
>> if err != nil {
>> fmt.Println("could not create CPU profile: ", err)
>> }
>> defer f.Close() // error handling omitted for example
>> if err := pprof.StartCPUProfile(f); err != nil {
>> fmt.Print("could not start CPU profile: ", err)
>> }
>> defer pprof.StopCPUProfile()
>> }
>> timeout := time.After(100 * time.Minute)
>> A_chan := make(chan bool)
>> B_chan := make(chan bool)
>> go util.A(A_chan)
>> go util.B(B_chan)
>> (..Rest of the code..)
>>
>> for {
>> select {
>> case <-A_chan:
>> continue
>> case <-B_chan:
>> continue
>> case <-timeout:
>> break
>> }
>> break
>> }
>>
>> if *memprofile != "" {
>> count = count + 1
>> fmt.Println("Generating Mem Profile:")
>> fmt.Print(count)
>> f, err := os.Create(*memprofile)
>> if err != nil {
>> fmt.Println("could not create memory profile: ", err)
>> }
>> defer f.Close() // error handling omitted for example
>> runtime.GC()// get up-to-date statistics
>> if err := pprof.WriteHeapProfile(f); err != nil {
>> fmt.Println("could not write memory profile: ", err)
>> }
>>
>> }
>> }
>>
>> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
>> Fetched 1 source profiles out of 2
>> File: main
>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>> Type: alloc_space
>> Time: Mar 22, 2020 at 7:02pm (IST)
>> Entering interactive mode (type "help" for commands, "o" for options)
>> (pprof) top
>> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total
>> <<<
>> Dropped 445 nodes (cum <= 101.65MB)
>> Showing top 10 nodes out of 58
>>   flat  flat%   sum%cum   cum%
>> 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
>>  1675.69MB  8.24% 67.27%  1911.69MB  9.40%
>>  home/nsaboo/project/utils.Unflatten
>>   627.21MB  3.09% 70.35%  1475.10MB  7.26%
>>  encoding/json.mapEncoder.encode
>>   626.76MB  3.08% 73.43%   626.76MB  3.08%
>>  encoding/json.(*Decoder).refill
>>   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
>>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>>   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
>>
>> 1)Is this the correct way of getting a memory profile?
>>
>> 2)I ran the service for 100 minutes on a machine with 8GB memory. How am
>> I seeing memory accounting for around 17GB?
>>
>> 3)I understand 'flat' means memory occupied within that method, but how
>> come it shot up more than the available memory? Hence, asking if the above
>> process is the correct way of gathering the memory profile.
>>
>> Thanks,
>> Nitish
>>
>> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones 
>> wrote:
>>
>>> hi. get the time at the start, check the elapsed time in your infinite
>>> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
>>> ...
>>>
>>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
>>> wrote:
>>>
 Hi Michael,

 Thanks 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-24 Thread Robert Engels
Sorry, not a paper - the two websites I gave you previously. 


> On Mar 24, 2020, at 7:02 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> >>There is no root analysis available in Go. Read the paper I linked to. 
> 
> Sorry I did not get you. Which paper are you referring to?
> 
> While I was running the service for 100 minutes the 'top' command output was 
> showing Mem% as 11.1. There was no increase in mem usage since I had not 
> called 'LoadPatternDB()' method.I have 8GB of memory on the node where I am 
> running the service. My issue is :
> 
> Why is it showing memory accounting for around 17GB?  11.1 % of 8GB is .88GB 
> and my node is only of 8GB. I feel the way I gathered the mem profiling was 
> not correct ..is it ?
> Please advise me what am I missing?
> 
> Thanks,
> Nitish
> 
>> On Tue, Mar 24, 2020 at 1:28 AM Robert Engels  wrote:
>> Yes. You have a leak in your Go code. It shows you the object types that are 
>> taking up all of the space. There is no root analysis available in Go. Read 
>> the paper I linked to. 
>> 
 On Mar 23, 2020, at 9:12 AM, Nitish Saboo  wrote:
 
>>> 
>>> Hi,
>>> 
>>> I used something like the following to generate a memprof for 100 minutes
>>> 
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>> timeout := time.After(100 * time.Minute)
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>> 
>>> for {
>>> select {
>>> case <-A_chan:
>>> continue
>>> case <-B_chan:
>>> continue
>>> case <-timeout:
>>> break
>>> }
>>> break
>>> }
>>> 
>>> if *memprofile != "" {
>>> count = count + 1
>>> fmt.Println("Generating Mem Profile:")
>>> fmt.Print(count)
>>> f, err := os.Create(*memprofile)
>>> if err != nil {
>>> fmt.Println("could not create memory profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> runtime.GC()// get up-to-date statistics
>>> if err := pprof.WriteHeapProfile(f); err != nil {
>>> fmt.Println("could not write memory profile: ", err)
>>> }
>>> 
>>> }
>>> }
>>> 
>>> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
>>> Fetched 1 source profiles out of 2
>>> File: main
>>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>>> Type: alloc_space
>>> Time: Mar 22, 2020 at 7:02pm (IST)
>>> Entering interactive mode (type "help" for commands, "o" for options)
>>> (pprof) top
>>> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total   
>>> <<<
>>> Dropped 445 nodes (cum <= 101.65MB)
>>> Showing top 10 nodes out of 58
>>>   flat  flat%   sum%cum   cum%
>>> 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
>>>  1675.69MB  8.24% 67.27%  1911.69MB  9.40%  
>>> home/nsaboo/project/utils.Unflatten
>>>   627.21MB  3.09% 70.35%  1475.10MB  7.26%  encoding/json.mapEncoder.encode
>>>   626.76MB  3.08% 73.43%   626.76MB  3.08%  encoding/json.(*Decoder).refill
>>>   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
>>>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>>>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>>>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>>>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>>>   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
>>> 
>>> 1)Is this the correct way of getting a memory profile?
>>> 
>>> 2)I ran the service for 100 minutes on a machine with 8GB memory. How am I 
>>> seeing memory accounting for around 17GB?
>>> 
>>> 3)I understand 'flat' means memory occupied within that method, but how 
>>> come it shot up more than the available memory? Hence, asking if the above 
>>> process is the correct way of gathering the memory profile.
>>> 
>>> Thanks,
>>> Nitish
>>> 
 On Fri, Mar 13, 2020 at 6:22 PM Michael Jones  
 wrote:
 hi. get the time at the start, check the elapsed time in your infinite 
 loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes, 
 ...
 
> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo  
> wrote:
> Hi Michael,
> 
> Thanks for your response.
> 
> That code looks wrong. I see the end but not the start. Look here and 
> copy carefully:
> 
> >>Since I did not want cpu profiling I omitted the start of the code and 
> >>just added memory profiling part.
> 
> Call at end, on way out.
> 
> >>Oh yes, I missed that.I have to call memory profiling code at the end 
> >>on the way out.But the thing is that it runs as a service in infinite 
> >>for loop.
> 
> func main() {
> 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-24 Thread Nitish Saboo
Hi,

>>There is no root analysis available in Go. Read the paper I linked to.

Sorry I did not get you. Which paper are you referring to?

While I was running the service for 100 minutes, the 'top' command output
showed Mem% as 11.1. There was no increase in memory usage since I had
not called the 'LoadPatternDB()' method. I have 8GB of memory on the node where
I am running the service. My issue is:

   - Why is the profile showing memory accounting for around 17GB? 11.1% of 8GB is
   .88GB, and my node has only 8GB. I feel the way I gathered the memory
   profile was not correct - is it?

Please advise: what am I missing?

Thanks,
Nitish

On Tue, Mar 24, 2020 at 1:28 AM Robert Engels  wrote:

> Yes. You have a leak in your Go code. It shows you the object types that
> are taking up all of the space. There is no root analysis available in Go.
> Read the paper I linked to.
>
> On Mar 23, 2020, at 9:12 AM, Nitish Saboo 
> wrote:
>
> 
> Hi,
>
> I used something like the following to generate a memprof for 100 minutes
>
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
> timeout := time.After(100 * time.Minute)
> A_chan := make(chan bool)
> B_chan := make(chan bool)
> go util.A(A_chan)
> go util.B(B_chan)
> (..Rest of the code..)
>
> for {
> select {
> case <-A_chan:
> continue
> case <-B_chan:
> continue
> case <-timeout:
> break
> }
> break
> }
>
> if *memprofile != "" {
> count = count + 1
> fmt.Println("Generating Mem Profile:")
> fmt.Print(count)
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC()// get up-to-date statistics
> if err := pprof.WriteHeapProfile(f); err != nil {
> fmt.Println("could not write memory profile: ", err)
> }
>
> }
> }
>
> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
> Fetched 1 source profiles out of 2
> File: main
> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
> Type: alloc_space
> Time: Mar 22, 2020 at 7:02pm (IST)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top
> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total
> <<<
> Dropped 445 nodes (cum <= 101.65MB)
> Showing top 10 nodes out of 58
>   flat  flat%   sum%cum   cum%
> 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
>  1675.69MB  8.24% 67.27%  1911.69MB  9.40%
>  home/nsaboo/project/utils.Unflatten
>   627.21MB  3.09% 70.35%  1475.10MB  7.26%  encoding/json.mapEncoder.encode
>   626.76MB  3.08% 73.43%   626.76MB  3.08%  encoding/json.(*Decoder).refill
>   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
>
> 1)Is this the correct way of getting a memory profile?
>
> 2)I ran the service for 100 minutes on a machine with 8GB memory. How am I
> seeing memory accounting for around 17GB?
>
> 3)I understand 'flat' means memory occupied within that method, but how
> come it shot up more than the available memory? Hence, asking if the above
> process is the correct way of gathering the memory profile.
>
> Thanks,
> Nitish
>
> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones 
> wrote:
>
>> hi. get the time at the start, check the elapsed time in your infinite
>> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
>> ...
>>
>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
>> wrote:
>>
>>> Hi Michael,
>>>
>>> Thanks for your response.
>>>
>>> That code looks wrong. I see the end but not the start. Look here and
>>> copy carefully:
>>>
>>> >>Since I did not want cpu profiling I omitted the start of the code and
>>> just added memory profiling part.
>>>
>>> Call at end, on way out.
>>>
>>> >>Oh yes, I missed that.I have to call memory profiling code at the end
>>> on the way out.But the thing is that it runs as a service in infinite for
>>> loop.
>>>
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>>
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-23 Thread Robert Engels
Yes. You have a leak in your Go code. It shows you the object types that are 
taking up all of the space. There is no root analysis available in Go. Read the 
paper I linked to. 
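
(To get from those per-function totals to the call paths that reach them, pprof's interactive commands can help; a sketch against the profile from this thread - 'list' additionally needs the source files on disk:)

go tool pprof --alloc_space main mem3.prof
(pprof) top -cum
(pprof) peek aws.Events
(pprof) list home/nsaboo/project/aws.Events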

> On Mar 23, 2020, at 9:12 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> I used something like the following to generate a memprof for 100 minutes
> 
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
> timeout := time.After(100 * time.Minute)
> A_chan := make(chan bool)
> B_chan := make(chan bool)
> go util.A(A_chan)
> go util.B(B_chan)
> (..Rest of the code..)
> 
> for {
> select {
> case <-A_chan:
> continue
> case <-B_chan:
> continue
> case <-timeout:
> break
> }
> break
> }
> 
> if *memprofile != "" {
> count = count + 1
> fmt.Println("Generating Mem Profile:")
> fmt.Print(count)
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC()// get up-to-date statistics
> if err := pprof.WriteHeapProfile(f); err != nil {
> fmt.Println("could not write memory profile: ", err)
> }
> 
> }
> }
> 
> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
> Fetched 1 source profiles out of 2
> File: main
> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
> Type: alloc_space
> Time: Mar 22, 2020 at 7:02pm (IST)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top
> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total   
> <<<
> Dropped 445 nodes (cum <= 101.65MB)
> Showing top 10 nodes out of 58
>   flat  flat%   sum%cum   cum%
> 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
>  1675.69MB  8.24% 67.27%  1911.69MB  9.40%  
> home/nsaboo/project/utils.Unflatten
>   627.21MB  3.09% 70.35%  1475.10MB  7.26%  encoding/json.mapEncoder.encode
>   626.76MB  3.08% 73.43%   626.76MB  3.08%  encoding/json.(*Decoder).refill
>   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
> 
> 1)Is this the correct way of getting a memory profile?
> 
> 2)I ran the service for 100 minutes on a machine with 8GB memory. How am I 
> seeing memory accounting for around 17GB?
> 
> 3)I understand 'flat' means memory occupied within that method, but how come 
> it shot up more than the available memory? Hence, asking if the above process 
> is the correct way of gathering the memory profile.
> 
> Thanks,
> Nitish
> 
>> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones  
>> wrote:
>> hi. get the time at the start, check the elapsed time in your infinite loop, 
>> and trigger the write/exit after a minute, 10 minutes, 100 minutes, ...
>> 
>>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo  
>>> wrote:
>>> Hi Michael,
>>> 
>>> Thanks for your response.
>>> 
>>> That code looks wrong. I see the end but not the start. Look here and copy 
>>> carefully:
>>> 
>>> >>Since I did not want cpu profiling I omitted the start of the code and 
>>> >>just added memory profiling part.
>>> 
>>> Call at end, on way out.
>>> 
>>> >>Oh yes, I missed that.I have to call memory profiling code at the end on 
>>> >>the way out.But the thing is that it runs as a service in infinite for 
>>> >>loop.
>>> 
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>> 
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>> 
>>> for {
>>> select {
>>> case <-A_chan: 
>>> continue
>>> case <-B_chan: 
>>> continue
>>> 
>>> }
>>> }
>>> 
>>> }
>>> 
>>> What would be the correct way to add the memprofile code changes, since it 
>>> is running in an infinite for loop ?
>>> 
>>> Also, as shared by others above, there are no promises about how soon the 
>>> dead allocations go away, The speed gets faster and faster version to 
>>> version, and is impressive indeed now, so old versions are not the best to 
>>> use, ubt even so, if the allocation feels small to th GC the urgency to 
>>> free it will be low. You need to loop 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-23 Thread Nitish Saboo
Hi,

I used something like the following to generate a mem profile after running for 100 minutes:

package main

import (
    "flag"
    "fmt"
    "os"
    "runtime"
    "runtime/pprof"
    "time"
)

var (
    cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")
    memprofile = flag.String("memprofile", "", "write memory profile to file")
    count      int
)

func main() {
    flag.Parse()
    if *cpuprofile != "" {
        f, err := os.Create(*cpuprofile)
        if err != nil {
            fmt.Println("could not create CPU profile: ", err)
        }
        defer f.Close() // error handling omitted for example
        if err := pprof.StartCPUProfile(f); err != nil {
            fmt.Print("could not start CPU profile: ", err)
        }
        defer pprof.StopCPUProfile()
    }
    timeout := time.After(100 * time.Minute)
    A_chan := make(chan bool)
    B_chan := make(chan bool)
    go util.A(A_chan) // util is the service's own package (elided here)
    go util.B(B_chan)
    // (..Rest of the code..)

    for {
        select {
        case <-A_chan:
            continue
        case <-B_chan:
            continue
        case <-timeout:
            break // only breaks out of the select
        }
        break // reached only after the timeout fires
    }

    if *memprofile != "" {
        count = count + 1
        fmt.Println("Generating Mem Profile:")
        fmt.Print(count)
        f, err := os.Create(*memprofile)
        if err != nil {
            fmt.Println("could not create memory profile: ", err)
        }
        defer f.Close() // error handling omitted for example
        runtime.GC()    // get up-to-date statistics
        if err := pprof.WriteHeapProfile(f); err != nil {
            fmt.Println("could not write memory profile: ", err)
        }
    }
}

/Desktop/memprof:go tool pprof --alloc_space main mem3.prof
Fetched 1 source profiles out of 2
File: main
Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
Type: alloc_space
Time: Mar 22, 2020 at 7:02pm (IST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total
<<<
Dropped 445 nodes (cum <= 101.65MB)
Showing top 10 nodes out of 58
  flat  flat%   sum%cum   cum%
11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
 1675.69MB  8.24% 67.27%  1911.69MB  9.40%
 home/nsaboo/project/utils.Unflatten
  627.21MB  3.09% 70.35%  1475.10MB  7.26%  encoding/json.mapEncoder.encode
  626.76MB  3.08% 73.43%   626.76MB  3.08%  encoding/json.(*Decoder).refill
  611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
  569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
  558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
  447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
  356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
  344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys

1) Is this the correct way of getting a memory profile?

2) I ran the service for 100 minutes on a machine with 8GB memory. How am I
seeing memory accounting for around 17GB?

3) I understand 'flat' means memory allocated directly within that function, but how
come it shot up to more than the available memory? Hence, I am asking if the above
process is the correct way of gathering the memory profile.

Thanks,
Nitish
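
(Since the process runs as a service, one alternative to the one-shot timeout above is a ticker goroutine that keeps writing numbered profiles while the service runs - a minimal sketch with made-up filenames, reusing the imports from the snippet above:)

// A sketch: call this from main() instead of the timeout-driven one-shot
// profile; it writes mem-1.prof, mem-2.prof, ... while the service runs.
func profilePeriodically(every time.Duration) {
    tick := time.NewTicker(every)
    defer tick.Stop()
    for n := 1; ; n++ {
        <-tick.C
        f, err := os.Create(fmt.Sprintf("mem-%d.prof", n))
        if err != nil {
            fmt.Println("could not create memory profile: ", err)
            continue
        }
        runtime.GC() // get up-to-date statistics
        if err := pprof.WriteHeapProfile(f); err != nil {
            fmt.Println("could not write memory profile: ", err)
        }
        f.Close()
    }
}

Started with `go profilePeriodically(10 * time.Minute)` from main().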

On Fri, Mar 13, 2020 at 6:22 PM Michael Jones 
wrote:

> hi. get the time at the start, check the elapsed time in your infinite
> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
> ...
>
> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
> wrote:
>
>> Hi Michael,
>>
>> Thanks for your response.
>>
>> That code looks wrong. I see the end but not the start. Look here and
>> copy carefully:
>>
>> >>Since I did not want cpu profiling I omitted the start of the code and
>> just added memory profiling part.
>>
>> Call at end, on way out.
>>
>> >>Oh yes, I missed that.I have to call memory profiling code at the end
>> on the way out.But the thing is that it runs as a service in infinite for
>> loop.
>>
>> func main() {
>> flag.Parse()
>> if *cpuprofile != "" {
>> f, err := os.Create(*cpuprofile)
>> if err != nil {
>> fmt.Println("could not create CPU profile: ", err)
>> }
>> defer f.Close() // error handling omitted for example
>> if err := pprof.StartCPUProfile(f); err != nil {
>> fmt.Print("could not start CPU profile: ", err)
>> }
>> defer pprof.StopCPUProfile()
>> }
>>
>> A_chan := make(chan bool)
>> B_chan := make(chan bool)
>> go util.A(A_chan)
>> go util.B(B_chan)
>> (..Rest of the code..)
>>
>> for {
>> select {
>> case <-A_chan:
>> continue
>> case <-B_chan:
>> continue
>>
>> }
>> }
>>
>> }
>>
>> What would be the correct way to add the memprofile code changes, since
>> it is running in an infinite for loop ?
>>
>> Also, as shared by others above, there are no promises about how soon the
>> dead allocations go away, The speed gets faster and faster version to
>> version, and is impressive indeed now, so old versions are not the best to
>> use, ubt even so, if the allocation feels small to th GC the urgency to
>> free it will be low. You need to loop in allocating and see if the memory
>> grows and grows.
>>
>> >> Yes, got it.I will try using the latest version of Go and check the
>> behavior.
>>
>> Thanks,
>> Nitish
>>
>> On Fri, Mar 13, 2020 at 6:20 AM Michael Jones 
>> wrote:
>>
>>> That code looks wrong. I see the end but not the start. Look here and
>>> copy carefully:
>>> https://golang.org/pkg/runtime/pprof/
>>>
>>> Call at end, on way out.
>>>
>>> Also, as shared by others above, there are no promises about how soon
>>> the dead allocations go away, The 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-19 Thread Robert Engels
And

https://blog.golang.org/pprof

> On Mar 19, 2020, at 9:27 AM, Robert Engels  wrote:
> 
> 
> https://www.freecodecamp.org/news/how-i-investigated-memory-leaks-in-go-using-pprof-on-a-large-codebase-4bec4325e192/amp/
> 
>>> On Mar 19, 2020, at 9:24 AM, Nitish Saboo  wrote:
>>> 
>> 
>> Hi,
>> 
>> Are there any other commands that provide an exact allocation of memory for 
>> each of the functions or help to analyze the memory allocation much better?
>> 
>> Thanks,
>> Nitish
>> 
>>> On Thu, Mar 19, 2020 at 7:08 PM Robert Engels  wrote:
>>> You are only using 1.5 mb on the Go side... so if your process is consuming 
>>> lots of memory it’s on the C side. 
>>> 
> On Mar 19, 2020, at 7:55 AM, Nitish Saboo  
> wrote:
> 
 
 Hi Michael,
 
 I used something like this to generate a mem-prof for 60 minutes
 
 func main() {
 flag.Parse()
 if *cpuprofile != "" {
 f, err := os.Create(*cpuprofile)
 if err != nil {
 fmt.Println("could not create CPU profile: ", err)
 }
 defer f.Close() // error handling omitted for example
 if err := pprof.StartCPUProfile(f); err != nil {
 fmt.Print("could not start CPU profile: ", err)
 }
 defer pprof.StopCPUProfile()
 }
 timeout := time.After(60 * time.Minute)
 A_chan := make(chan bool)
 B_chan := make(chan bool)
 go util.A(A_chan)
 go util.B(B_chan)
 (..Rest of the code..)
 
 for {
 select {
 case <-A_chan:
 continue
 case <-B_chan:
 continue
 case <-timeout:
 break
 }
 break
 }
 
 if *memprofile != "" {
 count = count + 1
 fmt.Println("Generating Mem Profile:")
 fmt.Print(count)
 f, err := os.Create(*memprofile)
 if err != nil {
 fmt.Println("could not create memory profile: ", err)
 }
 defer f.Close() // error handling omitted for example
 runtime.GC()// get up-to-date statistics
 if err := pprof.WriteHeapProfile(f); err != nil {
 fmt.Println("could not write memory profile: ", err)
 }
 
 }
 }
 
 I got the following output from the mem.prof:
 
 ~/Desktop/memprof:go tool pprof mem.prof
 File: main
 Build ID: 331d79200cabd2a81713918e51b8c9a63e3f7d29
 Type: inuse_space
 Time: Mar 19, 2020 at 3:57pm (IST)
 Entering interactive mode (type "help" for commands, "o" for options)
 (pprof) top 14
 Showing nodes accounting for 1581.40kB, 100% of 1581.40kB total
   flat  flat%   sum%cum   cum%
  1024.14kB 64.76% 64.76%  1024.14kB 64.76%  
 github.com/aws/aws-sdk-go/aws/endpoints.init.ializers
   557.26kB 35.24%   100%   557.26kB 35.24%  crypto/elliptic.initTable
  0 0%   100%   557.26kB 35.24%  
 crypto/elliptic.(*p256Point).p256BaseMult
  0 0%   100%   557.26kB 35.24%  crypto/elliptic.GenerateKey
  0 0%   100%   557.26kB 35.24%  
 crypto/elliptic.p256Curve.ScalarBaseMult
  0 0%   100%   557.26kB 35.24%  crypto/tls.(*Conn).Handshake
  0 0%   100%   557.26kB 35.24%  
 crypto/tls.(*Conn).clientHandshake
  0 0%   100%   557.26kB 35.24%  
 crypto/tls.(*clientHandshakeState).doFullHandshake
  0 0%   100%   557.26kB 35.24%  
 crypto/tls.(*clientHandshakeState).handshake
  0 0%   100%   557.26kB 35.24%  
 crypto/tls.(*ecdheKeyAgreement).processServerKeyExchange
  0 0%   100%   557.26kB 35.24%  
 crypto/tls.generateECDHEParameters
  0 0%   100%   557.26kB 35.24%  
 net/http.(*persistConn).addTLS.func2
  0 0%   100%  1024.14kB 64.76%  runtime.main
  0 0%   100%   557.26kB 35.24%  sync.(*Once).Do
 
 (pprof)
 
 Can you please share some commands or any link which I can refer to for 
 analyzing the data?
 
 Thanks,
 Nitish
 
 
 
> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones  
> wrote:
> hi. get the time at the start, check the elapsed time in your infinite 
> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes, 
> ...
> 
>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo  
>> wrote:
>> Hi Michael,
>> 
>> Thanks for your response.
>> 
>> That code looks wrong. I see the end but not the start. Look here and 
>> copy carefully:
>> 
>> >>Since I did not want cpu profiling I omitted the start of the code and 
>> >>just added memory profiling part.
>> 
>> Call at end, on way out.
>> 
>> >>Oh yes, I missed that. I have to call the memory profiling code at the end 
>> >>on the way out. But the thing is that it runs as a service in an infinite 
>> >>for loop.
>> 
>> func main() {
>> flag.Parse()
>> if *cpuprofile != "" {
>> f, err := os.Create(*cpuprofile)
>> if err != nil {
>> fmt.Println("could not create CPU profile: ", err)

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-19 Thread Robert Engels
https://www.freecodecamp.org/news/how-i-investigated-memory-leaks-in-go-using-pprof-on-a-large-codebase-4bec4325e192/amp/

> On Mar 19, 2020, at 9:24 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> Are there any other commands that show the exact memory allocation for each 
> of the functions, or that help analyze the memory allocation in more depth?
> 
> Thanks,
> Nitish
> 
>> On Thu, Mar 19, 2020 at 7:08 PM Robert Engels  wrote:
>> You are only using 1.5 mb on the Go side... so if your process is consuming 
>> lots of memory it’s on the C side. 
>> 
 On Mar 19, 2020, at 7:55 AM, Nitish Saboo  wrote:
 
>>> 
>>> Hi Michael,
>>> 
>>> I used something like this to generate a mem-prof for 60 minutes
>>> 
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>> timeout := time.After(60 * time.Minute)
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>> 
>>> for {
>>> select {
>>> case <-A_chan:
>>> continue
>>> case <-B_chan:
>>> continue
>>> case <-timeout:
>>> break
>>> }
>>> break
>>> }
>>> 
>>> if *memprofile != "" {
>>> count = count + 1
>>> fmt.Println("Generating Mem Profile:")
>>> fmt.Print(count)
>>> f, err := os.Create(*memprofile)
>>> if err != nil {
>>> fmt.Println("could not create memory profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> runtime.GC()// get up-to-date statistics
>>> if err := pprof.WriteHeapProfile(f); err != nil {
>>> fmt.Println("could not write memory profile: ", err)
>>> }
>>> 
>>> }
>>> }
>>> 
>>> I got the following output from the mem.prof:
>>> 
>>> ~/Desktop/memprof:go tool pprof mem.prof
>>> File: main
>>> Build ID: 331d79200cabd2a81713918e51b8c9a63e3f7d29
>>> Type: inuse_space
>>> Time: Mar 19, 2020 at 3:57pm (IST)
>>> Entering interactive mode (type "help" for commands, "o" for options)
>>> (pprof) top 14
>>> Showing nodes accounting for 1581.40kB, 100% of 1581.40kB total
>>>   flat  flat%   sum%cum   cum%
>>>  1024.14kB 64.76% 64.76%  1024.14kB 64.76%  
>>> github.com/aws/aws-sdk-go/aws/endpoints.init.ializers
>>>   557.26kB 35.24%   100%   557.26kB 35.24%  crypto/elliptic.initTable
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/elliptic.(*p256Point).p256BaseMult
>>>  0 0%   100%   557.26kB 35.24%  crypto/elliptic.GenerateKey
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/elliptic.p256Curve.ScalarBaseMult
>>>  0 0%   100%   557.26kB 35.24%  crypto/tls.(*Conn).Handshake
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/tls.(*Conn).clientHandshake
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/tls.(*clientHandshakeState).doFullHandshake
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/tls.(*clientHandshakeState).handshake
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/tls.(*ecdheKeyAgreement).processServerKeyExchange
>>>  0 0%   100%   557.26kB 35.24%  
>>> crypto/tls.generateECDHEParameters
>>>  0 0%   100%   557.26kB 35.24%  
>>> net/http.(*persistConn).addTLS.func2
>>>  0 0%   100%  1024.14kB 64.76%  runtime.main
>>>  0 0%   100%   557.26kB 35.24%  sync.(*Once).Do
>>> 
>>> (pprof)
>>> 
>>> Can you please share some commands or any link which I can refer to for 
>>> analyzing the data?
>>> 
>>> Thanks,
>>> Nitish
>>> 
>>> 
>>> 
 On Fri, Mar 13, 2020 at 6:22 PM Michael Jones  
 wrote:
 hi. get the time at the start, check the elapsed time in your infinite 
 loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes, 
 ...
 
> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo  
> wrote:
> Hi Michael,
> 
> Thanks for your response.
> 
> That code looks wrong. I see the end but not the start. Look here and 
> copy carefully:
> 
> >>Since I did not want cpu profiling I omitted the start of the code and 
> >>just added memory profiling part.
> 
> Call at end, on way out.
> 
> >>Oh yes, I missed that. I have to call the memory profiling code at the end 
> >>on the way out. But the thing is that it runs as a service in an infinite 
> >>for loop.
> 
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
> 
> A_chan := 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-19 Thread Nitish Saboo
Hi,

Are there any other commands that show the exact memory allocation for each
of the functions, or that help analyze the memory allocation in more depth?

Thanks,
Nitish

On Thu, Mar 19, 2020 at 7:08 PM Robert Engels  wrote:

> You are only using 1.5 mb on the Go side... so if your process is
> consuming lots of memory it’s on the C side.
>
> On Mar 19, 2020, at 7:55 AM, Nitish Saboo 
> wrote:
>
> 
> Hi Michael,
>
> I used something like this to generate a mem-prof for 60 minutes
>
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
> timeout := time.After(60 * time.Minute)
> A_chan := make(chan bool)
> B_chan := make(chan bool)
> go util.A(A_chan)
> go util.B(B_chan)
> (..Rest of the code..)
>
> for {
> select {
> case <-A_chan:
> continue
> case <-B_chan:
> continue
> case <-timeout:
> break
> }
> break
> }
>
> if *memprofile != "" {
> count = count + 1
> fmt.Println("Generating Mem Profile:")
> fmt.Print(count)
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC()// get up-to-date statistics
> if err := pprof.WriteHeapProfile(f); err != nil {
> fmt.Println("could not write memory profile: ", err)
> }
>
> }
> }
>
> I got the following output from the mem.prof:
>
> ~/Desktop/memprof:go tool pprof mem.prof
> File: main
> Build ID: 331d79200cabd2a81713918e51b8c9a63e3f7d29
> Type: inuse_space
> Time: Mar 19, 2020 at 3:57pm (IST)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top 14
> Showing nodes accounting for 1581.40kB, 100% of 1581.40kB total
>   flat  flat%   sum%cum   cum%
>  1024.14kB 64.76% 64.76%  1024.14kB 64.76%
> github.com/aws/aws-sdk-go/aws/endpoints.init.ializers
>   557.26kB 35.24%   100%   557.26kB 35.24%  crypto/elliptic.initTable
>  0 0%   100%   557.26kB 35.24%
>  crypto/elliptic.(*p256Point).p256BaseMult
>  0 0%   100%   557.26kB 35.24%  crypto/elliptic.GenerateKey
>  0 0%   100%   557.26kB 35.24%
>  crypto/elliptic.p256Curve.ScalarBaseMult
>  0 0%   100%   557.26kB 35.24%  crypto/tls.(*Conn).Handshake
>  0 0%   100%   557.26kB 35.24%
>  crypto/tls.(*Conn).clientHandshake
>  0 0%   100%   557.26kB 35.24%
>  crypto/tls.(*clientHandshakeState).doFullHandshake
>  0 0%   100%   557.26kB 35.24%
>  crypto/tls.(*clientHandshakeState).handshake
>  0 0%   100%   557.26kB 35.24%
>  crypto/tls.(*ecdheKeyAgreement).processServerKeyExchange
>  0 0%   100%   557.26kB 35.24%
>  crypto/tls.generateECDHEParameters
>  0 0%   100%   557.26kB 35.24%
>  net/http.(*persistConn).addTLS.func2
>  0 0%   100%  1024.14kB 64.76%  runtime.main
>  0 0%   100%   557.26kB 35.24%  sync.(*Once).Do
>
> (pprof)
>
> Can you please share some commands or any link which I can refer to for
> analyzing the data?
>
> Thanks,
> Nitish
>
>
>
> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones 
> wrote:
>
>> hi. get the time at the start, check the elapsed time in your infinite
>> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
>> ...
>>
>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
>> wrote:
>>
>>> Hi Michael,
>>>
>>> Thanks for your response.
>>>
>>> That code looks wrong. I see the end but not the start. Look here and
>>> copy carefully:
>>>
>>> >>Since I did not want cpu profiling I omitted the start of the code and
>>> just added memory profiling part.
>>>
>>> Call at end, on way out.
>>>
>>> >>Oh yes, I missed that. I have to call the memory profiling code at the end
>>> on the way out. But the thing is that it runs as a service in an infinite for
>>> loop.
>>>
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>>
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>>
>>> for {
>>> select {
>>> case <-A_chan:
>>> continue
>>> case <-B_chan:
>>> continue
>>>
>>> }
>>> }
>>>
>>> }
>>>
>>> What would be the correct way to add the memprofile code changes, since
>>> it is running in an infinite for loop ?
>>>
>>> Also, as shared by others above, there are no promises about how soon
>>> the dead allocations go away. The speed gets faster and faster version to
>>> version, and is 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-19 Thread Robert Engels
You are only using 1.5 MB on the Go side... so if your process is consuming 
lots of memory it’s on the C side. 
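
One quick way to see that split (a sketch, not from this thread's code;
Linux-only, since it reads RSS from /proc/self/statm):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// rssBytes returns the resident set size the OS currently reports for this process.
func rssBytes() uint64 {
	data, err := ioutil.ReadFile("/proc/self/statm")
	if err != nil {
		return 0
	}
	fields := strings.Fields(string(data))
	pages, _ := strconv.ParseUint(fields[1], 10, 64) // second field: resident pages
	return pages * uint64(os.Getpagesize())
}

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// A large, growing gap between the process RSS and m.Sys is memory the Go
	// runtime never sees - e.g. malloc done inside the C library behind the cgo calls.
	fmt.Printf("Go heap in use: %d kB, Go runtime total: %d kB, process RSS: %d kB\n",
		m.HeapInuse/1024, m.Sys/1024, rssBytes()/1024)
}

Logging those three numbers around every load_pattern_db() call makes it obvious
whether the growth is inside or outside the Go heap.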

> On Mar 19, 2020, at 7:55 AM, Nitish Saboo  wrote:
> 
> 
> Hi Michael,
> 
> I used something like this to generate a mem-prof for 60 minutes
> 
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
> timeout := time.After(60 * time.Minute)
> A_chan := make(chan bool)
> B_chan := make(chan bool)
> go util.A(A_chan)
> go util.B(B_chan)
> (..Rest of the code..)
> 
> for {
> select {
> case <-A_chan:
> continue
> case <-B_chan:
> continue
> case <-timeout:
> break
> }
> break
> }
> 
> if *memprofile != "" {
> count = count + 1
> fmt.Println("Generating Mem Profile:")
> fmt.Print(count)
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC()// get up-to-date statistics
> if err := pprof.WriteHeapProfile(f); err != nil {
> fmt.Println("could not write memory profile: ", err)
> }
> 
> }
> }
> 
> I got the following output from the mem.prof:
> 
> ~/Desktop/memprof:go tool pprof mem.prof
> File: main
> Build ID: 331d79200cabd2a81713918e51b8c9a63e3f7d29
> Type: inuse_space
> Time: Mar 19, 2020 at 3:57pm (IST)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top 14
> Showing nodes accounting for 1581.40kB, 100% of 1581.40kB total
>   flat  flat%   sum%cum   cum%
>  1024.14kB 64.76% 64.76%  1024.14kB 64.76%  
> github.com/aws/aws-sdk-go/aws/endpoints.init.ializers
>   557.26kB 35.24%   100%   557.26kB 35.24%  crypto/elliptic.initTable
>  0 0%   100%   557.26kB 35.24%  
> crypto/elliptic.(*p256Point).p256BaseMult
>  0 0%   100%   557.26kB 35.24%  crypto/elliptic.GenerateKey
>  0 0%   100%   557.26kB 35.24%  
> crypto/elliptic.p256Curve.ScalarBaseMult
>  0 0%   100%   557.26kB 35.24%  crypto/tls.(*Conn).Handshake
>  0 0%   100%   557.26kB 35.24%  crypto/tls.(*Conn).clientHandshake
>  0 0%   100%   557.26kB 35.24%  
> crypto/tls.(*clientHandshakeState).doFullHandshake
>  0 0%   100%   557.26kB 35.24%  
> crypto/tls.(*clientHandshakeState).handshake
>  0 0%   100%   557.26kB 35.24%  
> crypto/tls.(*ecdheKeyAgreement).processServerKeyExchange
>  0 0%   100%   557.26kB 35.24%  crypto/tls.generateECDHEParameters
>  0 0%   100%   557.26kB 35.24%  
> net/http.(*persistConn).addTLS.func2
>  0 0%   100%  1024.14kB 64.76%  runtime.main
>  0 0%   100%   557.26kB 35.24%  sync.(*Once).Do
> 
> (pprof)
> 
> Can you please share some commands or any link which I can refer to for 
> analyzing the data?
> 
> Thanks,
> Nitish
> 
> 
> 
>> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones  
>> wrote:
>> hi. get the time at the start, check the elapsed time in your infinite loop, 
>> and trigger the write/exit after a minute, 10 minutes, 100 minutes, ...
>> 
>>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo  
>>> wrote:
>>> Hi Michael,
>>> 
>>> Thanks for your response.
>>> 
>>> That code looks wrong. I see the end but not the start. Look here and copy 
>>> carefully:
>>> 
>>> >>Since I did not want cpu profiling I omitted the start of the code and 
>>> >>just added memory profiling part.
>>> 
>>> Call at end, on way out.
>>> 
>>> >>Oh yes, I missed that. I have to call the memory profiling code at the end on 
>>> >>the way out. But the thing is that it runs as a service in an infinite 
>>> >>for loop.
>>> 
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>> 
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>> 
>>> for {
>>> select {
>>> case <-A_chan: 
>>> continue
>>> case <-B_chan: 
>>> continue
>>> 
>>> }
>>> }
>>> 
>>> }
>>> 
>>> What would be the correct way to add the memprofile code changes, since it 
>>> is running in an infinite for loop ?
>>> 
>>> Also, as shared by others above, there are no promises about how soon the 
>>> dead allocations go away. The speed gets faster and faster version to 
>>> version, and is impressive indeed now, so old versions are not the best to 
>>> use, but even so, if the allocation feels small to the GC the urgency to 
>>> free it will be low. You need to loop in 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-19 Thread Nitish Saboo
Hi Michael,

I used something like this to generate a mem-prof for 60 minutes

func main() {
flag.Parse()
if *cpuprofile != "" {
f, err := os.Create(*cpuprofile)
if err != nil {
fmt.Println("could not create CPU profile: ", err)
}
defer f.Close() // error handling omitted for example
if err := pprof.StartCPUProfile(f); err != nil {
fmt.Print("could not start CPU profile: ", err)
}
defer pprof.StopCPUProfile()
}
timeout := time.After(60 * time.Minute)
A_chan := make(chan bool)
B_chan := make(chan bool)
go util.A(A_chan)
go util.B(B_chan)
(..Rest of the code..)

for {
select {
case <-A_chan:
continue
case <-B_chan:
continue
case <-timeout:
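// note: this break exits only the select; the unlabeled break just below it ends the for loop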
break
}
break
}

if *memprofile != "" {
count = count + 1
fmt.Println("Generating Mem Profile:")
fmt.Print(count)
f, err := os.Create(*memprofile)
if err != nil {
fmt.Println("could not create memory profile: ", err)
}
defer f.Close() // error handling omitted for example
runtime.GC() // get up-to-date statistics
if err := pprof.WriteHeapProfile(f); err != nil {
fmt.Println("could not write memory profile: ", err)
}

}
}

I got the following output from the mem.prof:

~/Desktop/memprof:go tool pprof mem.prof
File: main
Build ID: 331d79200cabd2a81713918e51b8c9a63e3f7d29
Type: inuse_space
Time: Mar 19, 2020 at 3:57pm (IST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top 14
Showing nodes accounting for 1581.40kB, 100% of 1581.40kB total
  flat  flat%   sum%cum   cum%
 1024.14kB 64.76% 64.76%  1024.14kB 64.76%
github.com/aws/aws-sdk-go/aws/endpoints.init.ializers
  557.26kB 35.24%   100%   557.26kB 35.24%  crypto/elliptic.initTable
 0 0%   100%   557.26kB 35.24%
 crypto/elliptic.(*p256Point).p256BaseMult
 0 0%   100%   557.26kB 35.24%  crypto/elliptic.GenerateKey
 0 0%   100%   557.26kB 35.24%
 crypto/elliptic.p256Curve.ScalarBaseMult
 0 0%   100%   557.26kB 35.24%  crypto/tls.(*Conn).Handshake
 0 0%   100%   557.26kB 35.24%
 crypto/tls.(*Conn).clientHandshake
 0 0%   100%   557.26kB 35.24%
 crypto/tls.(*clientHandshakeState).doFullHandshake
 0 0%   100%   557.26kB 35.24%
 crypto/tls.(*clientHandshakeState).handshake
 0 0%   100%   557.26kB 35.24%
 crypto/tls.(*ecdheKeyAgreement).processServerKeyExchange
 0 0%   100%   557.26kB 35.24%
 crypto/tls.generateECDHEParameters
 0 0%   100%   557.26kB 35.24%
 net/http.(*persistConn).addTLS.func2
 0 0%   100%  1024.14kB 64.76%  runtime.main
 0 0%   100%   557.26kB 35.24%  sync.(*Once).Do

(pprof)

Can you please share some commands or any link which I can refer to for
analyzing the data?

Thanks,
Nitish



On Fri, Mar 13, 2020 at 6:22 PM Michael Jones 
wrote:

> hi. get the time at the start, check the elapsed time in your infinite
> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
> ...
>
> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
> wrote:
>
>> Hi Michael,
>>
>> Thanks for your response.
>>
>> That code looks wrong. I see the end but not the start. Look here and
>> copy carefully:
>>
>> >>Since I did not want cpu profiling I omitted the start of the code and
>> just added memory profiling part.
>>
>> Call at end, on way out.
>>
>> >>Oh yes, I missed that. I have to call the memory profiling code at the end
>> on the way out. But the thing is that it runs as a service in an infinite for
>> loop.
>>
>> func main() {
>> flag.Parse()
>> if *cpuprofile != "" {
>> f, err := os.Create(*cpuprofile)
>> if err != nil {
>> fmt.Println("could not create CPU profile: ", err)
>> }
>> defer f.Close() // error handling omitted for example
>> if err := pprof.StartCPUProfile(f); err != nil {
>> fmt.Print("could not start CPU profile: ", err)
>> }
>> defer pprof.StopCPUProfile()
>> }
>>
>> A_chan := make(chan bool)
>> B_chan := make(chan bool)
>> go util.A(A_chan)
>> go util.B(B_chan)
>> (..Rest of the code..)
>>
>> for {
>> select {
>> case <-A_chan:
>> continue
>> case <-B_chan:
>> continue
>>
>> }
>> }
>>
>> }
>>
>> What would be the correct way to add the memprofile code changes, since
>> it is running in an infinite for loop ?
>>
>> Also, as shared by others above, there are no promises about how soon the
>> dead allocations go away. The speed gets faster and faster version to
>> version, and is impressive indeed now, so old versions are not the best to
>> use, but even so, if the allocation feels small to the GC the urgency to
>> free it will be low. You need to keep allocating in a loop and see if the
>> memory grows and grows.
>>
>> >> Yes, got it. I will try using the latest version of Go and check the
>> behavior.
>>
>> Thanks,
>> Nitish
>>
>> On Fri, Mar 13, 2020 at 6:20 AM Michael Jones 
>> wrote:
>>
>>> That code looks wrong. I see the end but not the start. Look here and
>>> copy carefully:
>>> https://golang.org/pkg/runtime/pprof/
>>>
>>> Call at end, on way out.
>>>
>>> Also, as shared by others above, there are no promises about how soon

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-18 Thread Nitish Saboo
Hi,

Yeah, even I did not expect the program to have 29% memory usage. Not sure
if this is how the Go GC works.

Thanks,
Nitish
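
One Go 1.12 detail that is easy to mistake for a leak (general runtime behavior,
not something established in this thread): on Linux the runtime now releases
freed memory with MADV_FREE, so the RSS reported by top/ps can stay high long
after the GC has freed everything, until the kernel is under memory pressure.
A quick check, as a sketch:

	// FreeOSMemory forces a GC and then returns as much memory
	// to the operating system as possible.
	debug.FreeOSMemory() // import "runtime/debug"

If RSS drops after that, the memory was free all along. Alternatively, running
the service with GODEBUG=madvdontneed=1 makes the runtime use MADV_DONTNEED
again, so RSS drops promptly after a collection.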

On Wed, Mar 18, 2020 at 12:48 AM Robert Engels 
wrote:

> My only thought was that maybe you had more Go routines accessing it than
> you thought.
>
> If it remains constant after a while it is most likely not a memory leak.
> It is somewhat surprising, though, that the memory consumption in a steady
> state would be 4x the equivalent C program.
>
> On Mar 17, 2020, at 9:21 AM, Nitish Saboo 
> wrote:
>
> 
> Hi Robert,
>
> Thanks for your response.
> Since patterndb is a global variable (not a thread-local variable) and I
> have a single goroutine that calls the load_pattern_db() method, pinning a
> goroutine to a thread did not seem right to me.
> I tested the code flow once again. Apologies for the confusion in my
> last mail. When I called load_pattern_db() about 6-7 times, the following
> lines were printed every time. It looks like the patterndb instance
> is getting freed and the memory becomes constant at around 29%.
>
> Patterndb Free Entered
> Patterndb Free called
> Patterndb New called
>
> node.c
> -
>
> PatternDB *patterndb;
>
> int load_pattern_db(const gchar* file, key_value_cb cb)
> {
>  printf("Patterndb Free Entered\n")
>  if(patterndb != NULL){
>   printf("Patterndb Free called\n"); <<< It is getting printed
> pattern_db_free(patterndb);
>   }
>   printf("Patterndb New called\n")
>   patterndb = pattern_db_new();
>   pattern_db_reload_ruleset(patterndb, configuration, file);
>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>   return 0;
> }
>
> But what made you think that a Go global variable would report a race
> condition? Since it is a single goroutine, what would cause a race condition
> here?
>
> Thanks,
> Nitish
>
> On Tue, Mar 17, 2020 at 6:31 PM Robert Engels 
> wrote:
>
>> I’ve been thinking about this some more, and I think that LockOSThread()
>> should not be needed - that the Go thread multiplexing must perform memory
>> fences otherwise the simplest of Go apps would have concurrency issues.
>>
>> So, that leads me to believe that your “single routine” is not correct. I
>> would add code on the Go side that does similar Go global variable handling
>> at the call site for the C call. Then run under the race detector - I’m
>> guessing that it will report a race on the Go global.
>>
>> On Mar 16, 2020, at 2:46 PM, Robert Engels  wrote:
>>
>> In the single Go routine, use LockOSThread(). Then it will always be 
>> accessed on the same thread, removing the memory synchronization problems. 
>>
>> On Mar 16, 2020, at 11:28 AM, Nitish Saboo 
>> wrote:
>>
>> 
>> Hi,
>>
>> So finally I got a little hint of the problem from what Robert described
>> earlier in the mail. Thank you so much Robert.
>> Looks like patterndb instance is not getting freed.
>>
>> node.c
>> -
>>
>> PatternDB *patterndb;
>>
>> int load_pattern_db(const gchar* file, key_value_cb cb)
>> {
>>  if(patterndb != NULL){
>>   printf("Patterndb Free called\n"); <<< Not getting printed
>> pattern_db_free(patterndb);
>>   }
>>   patterndb = pattern_db_new();
>>   pattern_db_reload_ruleset(patterndb, configuration, file);
>>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>>   return 0;
>> }
>>
>>
>> patterndb is a global variable in the C wrapper code that internally calls 
>> some syslog-ng library APIs. Since the load_pattern_db() method is getting 
>> called from a single goroutine every 3 mins, the patterndb instance is not 
>> getting freed, because the statement inside the if clause ('if(patterndb != 
>> NULL)') is not getting printed when I call 'load_pattern_db()'. Looks 
>> like that is the leak here.
>>
>>
>> 1) Can someone please help me understand the problem in detail, as to why 
>> I am facing this issue?
>> 
>> 2) Though the patterndb instance is a global variable in the C wrapper code, 
>> why is it not getting freed?
>> 
>> 3) How can I fix this issue?
>>
>> Thanks,
>> Nitish
>>
>> On Mon, Mar 16, 2020 at 8:17 PM Nitish Saboo 
>> wrote:
>>
>>> Hi Robert,
>>>
>>> Sorry I did not understand your point completely.
>>> I have a global variable patterndb on the C side and it is getting called 
>>> from a single goroutine every 3 mins. Why do I need to synchronize it?
>>> Even though the goroutine gets pinned to different threads, it can 
>>> access the same global variable every time and free it... right?
>>>
>>> Thanks,
>>> Nitish
>>>
>>>
>>> On Mon, Mar 16, 2020 at 8:10 PM Robert Engels 
>>> wrote:
>>>
 Yes, you have a shared global variable you need to synchronize.

 On Mar 16, 2020, at 9:35 AM, Nitish Saboo 
 wrote:

 
 Hi,

 Are you saying it is working as expected?

 Thanks,
 Nitish

 On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler <
 dr.volker.dob...@gmail.com> wrote:

> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>>
>> Hi,
>>
>> 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-17 Thread Robert Engels
My only thought was that maybe you had more Go routines accessing it than you 
thought. 

If it remains constant after a while it is most likely not a memory leak. It is 
somewhat surprising, though, that the memory consumption in a steady state would 
be 4x the equivalent C program. 

> On Mar 17, 2020, at 9:21 AM, Nitish Saboo  wrote:
> 
> 
> Hi Robert,
> 
> Thanks for your response.
> Since patterndb is a global variable (not a thread-local variable) and I have 
> a single goroutine that calls the load_pattern_db() method, pinning a goroutine 
> to a thread did not seem right to me.
> I tested the code flow once again. Apologies for the confusion in my last 
> mail. When I called load_pattern_db() about 6-7 times, the following lines 
> were printed every time. It looks like the patterndb instance is 
> getting freed and the memory becomes constant at around 29%.
> 
> Patterndb Free Entered
> Patterndb Free called
> Patterndb New called
> 
> node.c
> -
> 
> PatternDB *patterndb;
> 
> int load_pattern_db(const gchar* file, key_value_cb cb)
> {
>  printf("Patterndb Free Entered\n")
>  if(patterndb != NULL){
>   printf("Patterndb Free called\n"); <<< It is getting printed
> pattern_db_free(patterndb);
>   }
>   printf("Patterndb New called\n")
>   patterndb = pattern_db_new();
>   pattern_db_reload_ruleset(patterndb, configuration, file);
>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>   return 0;
> }
> 
> But what made you think that a Go global variable would report a race 
> condition? Since it is a single goroutine, what would cause a race condition 
> here?
> 
> Thanks,
> Nitish 
> 
>> On Tue, Mar 17, 2020 at 6:31 PM Robert Engels  wrote:
>> I’ve been thinking about this some more, and I think that LockOSThread() 
>> should not be needed - that the Go thread multiplexing must perform memory 
>> fences otherwise the simplest of Go apps would have concurrency issues. 
>> 
>> So, that leads me to believe that your “single routine” is not correct. I 
>> would add code on the Go side that does similar Go global variable handling 
>> at the call site for the C call. Then run under the race detector - I’m 
>> guessing that it will report a race on the Go global. 
>> 
>>> On Mar 16, 2020, at 2:46 PM, Robert Engels  wrote:
>>> 
>>> In the single Go routine, use LockOSThread(). Then it will always be 
>>> accessed on the same thread, removing the memory synchronization problems. 
>>> 
> On Mar 16, 2020, at 11:28 AM, Nitish Saboo  
> wrote:
> 
 
 Hi,
 
 So finally I got a little hint of the problem from what Robert described 
 earlier in the mail. Thank you so much Robert.
 Looks like patterndb instance is not getting freed.
 
 node.c
 -
 
 PatternDB *patterndb;
 
 int load_pattern_db(const gchar* file, key_value_cb cb)
 {
  if(patterndb != NULL){
   printf("Patterndb Free called\n"); <<< Not getting printed
 pattern_db_free(patterndb);
   }
   patterndb = pattern_db_new();
   pattern_db_reload_ruleset(patterndb, configuration, file);
   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
   return 0;
 }
 
 
 patterndb is a global variable in the C wrapper code that internally calls 
 some syslog-ng library APIs. Since the load_pattern_db() method is getting 
 called from a single goroutine every 3 mins, the patterndb instance is not 
 getting freed, because the statement inside the if clause ('if(patterndb != 
 NULL)') is not getting printed when I call 'load_pattern_db()'. 
 Looks like that is the leak here.
 
 
 1) Can someone please help me understand the problem in detail, as to why 
 I am facing this issue?
 
 2) Though the patterndb instance is a global variable in the C wrapper code, 
 why is it not getting freed?
 
 3) How can I fix this issue?
 
 Thanks,
 Nitish
 
> On Mon, Mar 16, 2020 at 8:17 PM Nitish Saboo  
> wrote:
> Hi Robert,
> 
> Sorry I did not understand your point completely.
> I have a global variable patterndb on the C side and it is getting called 
> from a single goroutine every 3 mins. Why do I need to synchronize it?
> Even though the goroutine gets pinned to different threads, it can access 
> the same global variable every time and free it... right?
> 
> Thanks,
> Nitish
> 
> 
>> On Mon, Mar 16, 2020 at 8:10 PM Robert Engels  
>> wrote:
>> Yes, you have a shared global variable you need to synchronize. 
>> 
 On Mar 16, 2020, at 9:35 AM, Nitish Saboo  
 wrote:
 
>>> 
>>> Hi,
>>> 
>>> Are you saying it is working as expected?
>>> 
>>> Thanks,
>>> Nitish
>>> 
 On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler 
  wrote:
> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
> 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-17 Thread Nitish Saboo
Hi Robert,

Thanks for your response.
Since patterndb is a global variable (not a thread-local variable) and I
have a single goroutine that calls the load_pattern_db() method, pinning a
goroutine to a thread did not seem right to me.
I tested the code flow once again. Apologies for the confusion in my
last mail. When I called load_pattern_db() about 6-7 times, the following
lines were printed every time. It looks like the patterndb instance
is getting freed and the memory becomes constant at around 29%.

Patterndb Free Entered
Patterndb Free called
Patterndb New called

node.c
-

PatternDB *patterndb;

int load_pattern_db(const gchar* file, key_value_cb cb)
{
 printf("Patterndb Free Entered\n")
 if(patterndb != NULL){
  printf("Patterndb Free called\n"); <<< It is getting printed
pattern_db_free(patterndb);
  }
  printf("Patterndb New called\n")
  patterndb = pattern_db_new();
  pattern_db_reload_ruleset(patterndb, configuration, file);
  pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
  return 0;
}

But what made you think that a Go global variable would report a race
condition? Since it is a single goroutine, what would cause a race condition
here?

Thanks,
Nitish

On Tue, Mar 17, 2020 at 6:31 PM Robert Engels  wrote:

> I’ve been thinking about this some more, and I think that LockOSThread()
> should not be needed - that the Go thread multiplexing must perform memory
> fences otherwise the simplest of Go apps would have concurrency issues.
>
> So, that leads me to believe that your “single routine” is not correct. I
> would add code on the Go side that does similar Go global variable handling
> at the call site for the C call. Then run under the race detector - I’m
> guessing that it will report a race on the Go global.
>
> On Mar 16, 2020, at 2:46 PM, Robert Engels  wrote:
>
> In the single Go routine, use LockOSThread(). Then it will always be
> accessed on the same thread, removing the memory synchronization problems.
>
> On Mar 16, 2020, at 11:28 AM, Nitish Saboo 
> wrote:
>
> 
> Hi,
>
> So finally I got a little hint of the problem from what Robert described
> earlier in the mail. Thank you so much Robert.
> Looks like patterndb instance is not getting freed.
>
> node.c
> -
>
> PatternDB *patterndb;
>
> int load_pattern_db(const gchar* file, key_value_cb cb)
> {
>  if(patterndb != NULL){
>   printf("Patterndb Free called\n"); <<< Not getting printed
> pattern_db_free(patterndb);
>   }
>   patterndb = pattern_db_new();
>   pattern_db_reload_ruleset(patterndb, configuration, file);
>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>   return 0;
> }
>
>
> patterndb is a global variable in the C wrapper code that internally calls
> some syslog-ng library APIs. Since the load_pattern_db() method is getting
> called from a single goroutine every 3 mins, the patterndb instance is not
> getting freed, because the statement inside the if clause ('if(patterndb !=
> NULL)') is not getting printed when I call 'load_pattern_db()'. Looks
> like that is the leak here.
>
>
> 1) Can someone please help me understand the problem in detail, as to why
> I am facing this issue?
>
> 2) Though the patterndb instance is a global variable in the C wrapper code,
> why is it not getting freed?
>
> 3) How can I fix this issue?
>
> Thanks,
> Nitish
>
> On Mon, Mar 16, 2020 at 8:17 PM Nitish Saboo 
> wrote:
>
>> Hi Robert,
>>
>> Sorry I did not understand your point completely.
>> I have a global variable patterndb on the C side and it is getting called
>> from a single goroutine every 3 mins. Why do I need to synchronize it?
>> Even though the goroutine gets pinned to different threads, it can access
>> the same global variable every time and free it... right?
>>
>> Thanks,
>> Nitish
>>
>>
>> On Mon, Mar 16, 2020 at 8:10 PM Robert Engels 
>> wrote:
>>
>>> Yes, you have a shared global variable you need to synchronize.
>>>
>>> On Mar 16, 2020, at 9:35 AM, Nitish Saboo 
>>> wrote:
>>>
>>> 
>>> Hi,
>>>
>>> Are you saying it is working as expected?
>>>
>>> Thanks,
>>> Nitish
>>>
>>> On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler <
>>> dr.volker.dob...@gmail.com> wrote:
>>>
 On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>
> Hi,
>
> I upgraded the go version and compiled the binary against go version
> 'go version go1.12.4 linux/amd64'.
> I ran the program for some time. I made almost 30-40 calls to the
> method Load_Pattern_Db().
> The program starts with 6% Mem Usage. The memory usage increases only
> when I call 'LoadPatternDb()' method and LoadPatternDb() method is called
> by a goroutine at regular intervals of 3 minutes(making use of ticker here
> ).
>
> What I observed is:
>
> 1)After almost 16-17 calls to the method 'LoadPatternDb(), the memory
> usage got almost constant at 29%. But I did not expect the program to take
> this much memory.
>When I restart the 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-17 Thread Robert Engels
I’ve been thinking about this some more, and I think that LockOSThread() should 
not be needed - the Go thread multiplexing must perform memory fences, otherwise 
the simplest of Go apps would have concurrency issues. 

So, that leads me to believe that your “single routine” is not correct. I would 
add code on the Go side that does similar Go global variable handling at the 
call site for the C call. Then run under the race detector - I’m guessing that 
it will report a race on the Go global. 
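
As a concrete sketch of that suggestion (reloadCount and reloadPatternDB are
names invented here for illustration, not from the original code):

	// A Go-side global touched at the C call site. An unsynchronized
	// write like this is exactly what the race detector will flag if
	// more than one goroutine really reaches this point.
	var reloadCount int

	func reloadPatternDB(file string) {
		reloadCount++
		// ... cgo call into load_pattern_db() as before ...
	}

Then build and run with the -race flag (go build -race / go run -race) and
exercise the reload path; if the detector stays quiet, the single-goroutine
assumption holds.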

> On Mar 16, 2020, at 2:46 PM, Robert Engels  wrote:
> 
> In the single Go routine, use LockOSThread(). Then it will always be accessed 
> on the same thread, removing the memory synchronization problems. 
> 
>> On Mar 16, 2020, at 11:28 AM, Nitish Saboo  wrote:
>> 
>> 
>> Hi,
>> 
>> So finally I got a little hint of the problem from what Robert described 
>> earlier in the mail. Thank you so much Robert.
>> Looks like patterndb instance is not getting freed.
>> 
>> node.c
>> -
>> 
>> PatternDB *patterndb;
>> 
>> int load_pattern_db(const gchar* file, key_value_cb cb)
>> {
>>  if(patterndb != NULL){
>>  printf("Patterndb Free called\n"); <<< Not getting printed
>> pattern_db_free(patterndb);
>>   }
>>   patterndb = pattern_db_new();
>>   pattern_db_reload_ruleset(patterndb, configuration, file);
>>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>>   return 0;
>> }
>> 
>> 
>> patterndb is a global variable in the C wrapper code that internally calls some 
>> syslog-ng library APIs. Since the load_pattern_db() method is getting called 
>> from a single goroutine every 3 mins, the patterndb instance is not getting 
>> freed, because the statement inside the if clause ('if(patterndb != NULL)') is 
>> not getting printed when I call 'load_pattern_db()'. Looks like that is 
>> the leak here.
>> 
>> 
>> 1) Can someone please help me understand the problem in detail, as to why I am 
>> facing this issue?
>> 
>> 2) Though the patterndb instance is a global variable in the C wrapper code, why 
>> is it not getting freed?
>> 
>> 3) How can I fix this issue?
>> 
>> Thanks,
>> Nitish
>> 
>>> On Mon, Mar 16, 2020 at 8:17 PM Nitish Saboo  
>>> wrote:
>>> Hi Robert,
>>> 
>>> Sorry I did not understand your point completely.
>>> I have a global variable patterndb on the C side and it is getting called from 
>>> a single goroutine every 3 mins. Why do I need to synchronize it?
>>> Even though the goroutine gets pinned to different threads, it can access 
>>> the same global variable every time and free it... right?
>>> 
>>> Thanks,
>>> Nitish
>>> 
>>> 
 On Mon, Mar 16, 2020 at 8:10 PM Robert Engels  
 wrote:
 Yes, you have a shared global variable you need to synchronize. 
 
> On Mar 16, 2020, at 9:35 AM, Nitish Saboo  
> wrote:
> 
> 
> Hi,
> 
> Are you saying it is working as expected?
> 
> Thanks,
> Nitish
> 
>> On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler 
>>  wrote:
>>> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>>> Hi,
>>> 
>>> I upgraded the go version and compiled the binary against go version 
>>> 'go version go1.12.4 linux/amd64'.
>>> I ran the program for some time. I made almost 30-40 calls to the 
>>> method Load_Pattern_Db().
>>> The program starts with 6% Mem Usage. The memory usage increases only 
>>> when I call 'LoadPatternDb()' method and LoadPatternDb() method is 
>>> called by a goroutine at regular intervals of 3 minutes(making use of 
>>> ticker here ).
>>> 
>>> What I observed is:
>>> 
>>> 1)After almost 16-17 calls to the method 'LoadPatternDb(), the memory 
>>> usage got almost constant at 29%. But I did not expect the program to 
>>> take this much memory.
>>>When I restart the service the Mem Usage again starts with 6%.
>>> 
>>> a) Is this the sign of memory leaking?
>> 
>> No, as explained above.
>>  
>>> 
>>> b) Till this moment I did not see memory getting reclaimed or going 
>>> down but it did become constant.
>>> As mentioned by experts above, the same sort of behavior is seen here. 
>>> But I did not expect the memory usage to grow this much. Is this 
>>> expected? 
>> Yes. (Well, no. But your gut feeling of how much memory
>> should grow is not a suitable benchmark to compare
>> actual growth to.)
>>  
>>> 
>>> 2)I will run mem-profiling at intervals(10 minutes, 100 minutes..etc) 
>>> as mentioned in the earlier email.
>>> 
>>> a) Which all mem-stats variables should I look into for debugging this 
>>> kind of behavior?
>> Alloc/HeapAlloc 
>> But probably this is plain useless as nothing here indicates
>> that you do have any memory issues.
>> 
>> V.
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google 
>> Groups "golang-nuts" group.
>> To unsubscribe from 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Robert Engels
In the single Go routine, use LockOSThread(). Then it will always be accessed on 
the same thread, removing the memory synchronization problems. 
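
A minimal sketch of that pattern (assuming the 3-minute reload tick described
earlier in this thread; reloadPatternDB and patternFile are invented names for
the wrapper around the cgo call and its argument):

	go func() {
		// Pin this goroutine to one OS thread for its whole lifetime,
		// so every cgo call below runs on the same thread.
		runtime.LockOSThread()
		defer runtime.UnlockOSThread()
		for range time.Tick(3 * time.Minute) {
			reloadPatternDB(patternFile)
		}
	}()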

> On Mar 16, 2020, at 11:28 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> So finally I got a little hint of the problem from what Robert described 
> earlier in the mail. Thank you so much Robert.
> Looks like patterndb instance is not getting freed.
> 
> node.c
> -
> 
> PatternDB *patterndb;
> 
> int load_pattern_db(const gchar* file, key_value_cb cb)
> {
>  if(patterndb != NULL){
>   printf("Patterndb Free called\n"); <<< Not getting printed
> pattern_db_free(patterndb);
>   }
>   patterndb = pattern_db_new();
>   pattern_db_reload_ruleset(patterndb, configuration, file);
>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>   return 0;
> }
> 
> 
> patterndb is a global variable in the C wrapper code that internally calls some 
> syslog-ng library APIs. Since the load_pattern_db() method is getting called from 
> a single goroutine every 3 mins, the patterndb instance is not getting freed, 
> because the statement inside the if clause ('if(patterndb != NULL)') is not 
> getting printed when I call 'load_pattern_db()'. Looks like that is the 
> leak here.
> 
> 
> 1) Can someone please help me understand the problem in detail, as to why I am 
> facing this issue?
> 
> 2) Though the patterndb instance is a global variable in the C wrapper code, why 
> is it not getting freed?
> 
> 3) How can I fix this issue?
> 
> Thanks,
> Nitish
> 
>> On Mon, Mar 16, 2020 at 8:17 PM Nitish Saboo  
>> wrote:
>> Hi Robert,
>> 
>> Sorry I did not understand your point completely.
>> I have a global variable patterndb on the C side and it is getting called from a 
>> single goroutine every 3 mins. Why do I need to synchronize it?
>> Even though the goroutine gets pinned to different threads, it can access 
>> the same global variable every time and free it... right?
>> 
>> Thanks,
>> Nitish
>> 
>> 
>>> On Mon, Mar 16, 2020 at 8:10 PM Robert Engels  wrote:
>>> Yes, you have a shared global variable you need to synchronize. 
>>> 
> On Mar 16, 2020, at 9:35 AM, Nitish Saboo  
> wrote:
> 
 
 Hi,
 
 Are you saying it is working as expected?
 
 Thanks,
 Nitish
 
> On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler 
>  wrote:
>> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>> Hi,
>> 
>> I upgraded the go version and compiled the binary against go version 'go 
>> version go1.12.4 linux/amd64'.
>> I ran the program for some time. I made almost 30-40 calls to the method 
>> Load_Pattern_Db().
>> The program starts with 6% Mem Usage. The memory usage increases only 
>> when I call 'LoadPatternDb()' method and LoadPatternDb() method is 
>> called by a goroutine at regular intervals of 3 minutes(making use of 
>> ticker here ).
>> 
>> What I observed is:
>> 
>> 1)After almost 16-17 calls to the method 'LoadPatternDb(), the memory 
>> usage got almost constant at 29%. But I did not expect the program to 
>> take this much memory.
>>When I restart the service the Mem Usage again starts with 6%.
>> 
>> a) Is this the sign of memory leaking?
> 
> No, as explained above.
>  
>> 
>> b) Till this moment I did not see memory getting reclaimed or going down 
>> but it did become constant.
>> As mentioned by experts above, the same sort of behavior is seen here. 
>> But I did not expect the memory usage to grow this much. Is this 
>> expected? 
> Yes. (Well, no. But your gut feeling of how much memory
> should grow is not a suitable benchmark to compare
> actual growth to.)
>  
>> 
>> 2)I will run mem-profiling at intervals(10 minutes, 100 minutes..etc) as 
>> mentioned in the earlier email.
>> 
>> a) Which all mem-stats variables should I look into for debugging this 
>> kind of behavior?
> Alloc/HeapAlloc 
> But probably this is plain useless as nothing here indicates
> that you do have any memory issues.
> 
> V.
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/golang-nuts/e664151d-474d-4c1d-ae1d-979dc6975469%40googlegroups.com.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 "golang-nuts" group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to golang-nuts+unsubscr...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/golang-nuts/CALjMrq7EuvpFBaAQCJfO_QhkW8ceac8oEv-oFq9GPsik%3D5GNkw%40mail.gmail.com.

-- 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Nitish Saboo
Hi,

So finally I got a little hint of the problem from what Robert described
earlier in the mail. Thank you so much Robert.
Looks like patterndb instance is not getting freed.

node.c
-

PatternDB *patterndb;

int load_pattern_db(const gchar* file, key_value_cb cb)
{
  if(patterndb != NULL){
    printf("Patterndb Free called\n"); <<< Not getting printed
    pattern_db_free(patterndb);
  }
  patterndb = pattern_db_new();
  pattern_db_reload_ruleset(patterndb, configuration, file);
  pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
  return 0;
}


patterndb is a global variable in the C wrapper code that internally calls some
syslog-ng library APIs. Since the load_pattern_db() method is getting called
from a single goroutine every 3 mins, the patterndb instance is not getting
freed, because the statement inside the if clause ('if(patterndb != NULL)') is
not getting printed when I call 'load_pattern_db()'. Looks like that
is the leak here.


1) Can someone please help me understand the problem in detail, as to why I am
facing this issue?

2) Though the patterndb instance is a global variable in the C wrapper code, why
is it not getting freed?

3) How can I fix this issue?

Thanks,
Nitish

On Mon, Mar 16, 2020 at 8:17 PM Nitish Saboo 
wrote:

> Hi Robert,
>
> Sorry I did not understand your point completely.
> I have a global variable patterndb on the C side and it is getting called from
> a single goroutine every 3 mins. Why do I need to synchronize it?
> Even though the goroutine gets pinned to different threads, it can access
> the same global variable every time and free it... right?
>
> Thanks,
> Nitish
>
>
> On Mon, Mar 16, 2020 at 8:10 PM Robert Engels 
> wrote:
>
>> Yes, you have a shared global variable you need to synchronize.
>>
>> On Mar 16, 2020, at 9:35 AM, Nitish Saboo 
>> wrote:
>>
>> 
>> Hi,
>>
>> Are you saying it is working as expected?
>>
>> Thanks,
>> Nitish
>>
>> On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler 
>> wrote:
>>
>>> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:

 Hi,

 I upgraded the go version and compiled the binary against go version
 'go version go1.12.4 linux/amd64'.
 I ran the program for some time. I made almost 30-40 calls to the
 method Load_Pattern_Db().
 The program starts with 6% Mem Usage. The memory usage increases only
 when I call 'LoadPatternDb()' method and LoadPatternDb() method is called
 by a goroutine at regular intervals of 3 minutes(making use of ticker here
 ).

 What I observed is:

 1)After almost 16-17 calls to the method 'LoadPatternDb(), the memory
 usage got almost constant at 29%. But I did not expect the program to take
 this much memory.
When I restart the service the Mem Usage again starts with 6%.

 a) Is this the sign of memory leaking?

>>>
>>> No, as explained above.
>>>
>>>

 b) Till this moment I did not see memory getting reclaimed or going
 down but it did become constant.
 As mentioned by experts above, the same sort of behavior is seen here.
 But I did not expect the memory usage to grow this much. Is this expected?

>>> Yes. (Well, no. But your gut feeling of how much memory
>>> should grow is not a suitable benchmark to compare
>>> actual growth to.)
>>>
>>>

 2)I will run mem-profiling at intervals(10 minutes, 100 minutes..etc)
 as mentioned in the earlier email.

 a) Which all mem-stats variables should I look into for debugging this
 kind of behavior?

>>> Alloc/HeapAlloc
>>> But probably this is plain useless as nothing here indicates
>>> that you do have any memory issues.
>>>
>>> V.
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "golang-nuts" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to golang-nuts+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/golang-nuts/e664151d-474d-4c1d-ae1d-979dc6975469%40googlegroups.com
>>> 
>>> .
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to golang-nuts+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/golang-nuts/CALjMrq7EuvpFBaAQCJfO_QhkW8ceac8oEv-oFq9GPsik%3D5GNkw%40mail.gmail.com
>> 
>> .
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Nitish Saboo
Hi Robert,

Sorry I did not understand your point completely.
I have a global variable patterndb on the C side and it is getting called from
a single goroutine every 3 mins. Why do I need to synchronize it?
Even though the goroutine gets pinned to different threads, it can access
the same global variable every time and free it... right?

Thanks,
Nitish


On Mon, Mar 16, 2020 at 8:10 PM Robert Engels  wrote:

> Yes, you have a shared global variable you need to synchronize.
>
> On Mar 16, 2020, at 9:35 AM, Nitish Saboo 
> wrote:
>
> 
> Hi,
>
> Are you saying it is working as expected?
>
> Thanks,
> Nitish
>
> On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler 
> wrote:
>
>> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>>>
>>> Hi,
>>>
>>> I upgraded the go version and compiled the binary against go version 'go
>>> version go1.12.4 linux/amd64'.
>>> I ran the program for some time. I made almost 30-40 calls to the method
>>> Load_Pattern_Db().
>>> The program starts with 6% Mem Usage. The memory usage increases only
>>> when I call 'LoadPatternDb()' method and LoadPatternDb() method is called
>>> by a goroutine at regular intervals of 3 minutes(making use of ticker here
>>> ).
>>>
>>> What I observed is:
>>>
>>> 1)After almost 16-17 calls to the method 'LoadPatternDb(), the memory
>>> usage got almost constant at 29%. But I did not expect the program to take
>>> this much memory.
>>>When I restart the service the Mem Usage again starts with 6%.
>>>
>>> a) Is this the sign of memory leaking?
>>>
>>
>> No, as explained above.
>>
>>
>>>
>>> b) Till this moment I did not see memory getting reclaimed or going down
>>> but it did become constant.
>>> As mentioned by experts above, the same sort of behavior is seen here.
>>> But I did not expect the memory usage to grow this much. Is this expected?
>>>
>> Yes. (Well, no. But your gut feeling of how much memory
>> should grow is not a suitable benchmark to compare
>> actual growth to.)
>>
>>
>>>
>>> 2)I will run mem-profiling at intervals(10 minutes, 100 minutes..etc) as
>>> mentioned in the earlier email.
>>>
>>> a) Which all mem-stats variables should I look into for debugging this
>>> kind of behavior?
>>>
>> Alloc/HeapAlloc
>> But probably this is plain useless as nothing here indicates
>> that you do have any memory issues.
>>
>> V.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to golang-nuts+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/golang-nuts/e664151d-474d-4c1d-ae1d-979dc6975469%40googlegroups.com
>> 
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/golang-nuts/CALjMrq7EuvpFBaAQCJfO_QhkW8ceac8oEv-oFq9GPsik%3D5GNkw%40mail.gmail.com
> 
> .
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/CALjMrq4qOC1zBDSdwnFRMSQZB_R%2Bf%2B6fcwwFPOMy9ArFO-4uoQ%40mail.gmail.com.


Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Robert Engels
Yes, you have a shared global variable you need to synchronize. 
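
In Go terms that usually looks like the following (a sketch; pdbMu and
reloadPatternDB are invented names):

	var pdbMu sync.Mutex // guards every call into the C pattern-db functions

	func reloadPatternDB(file string) {
		pdbMu.Lock()
		defer pdbMu.Unlock()
		// cgo call into load_pattern_db() goes here
	}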

> On Mar 16, 2020, at 9:35 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> Are you saying it is working as expected?
> 
> Thanks,
> Nitish
> 
>> On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler  
>> wrote:
>>> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>>> Hi,
>>> 
>>> I upgraded the go version and compiled the binary against go version 'go 
>>> version go1.12.4 linux/amd64'.
>>> I ran the program for some time. I made almost 30-40 calls to the method 
>>> Load_Pattern_Db().
>>> The program starts with 6% Mem Usage. The memory usage increases only when 
>>> I call 'LoadPatternDb()' method and LoadPatternDb() method is called by a 
>>> goroutine at regular intervals of 3 minutes(making use of ticker here ).
>>> 
>>> What I observed is:
>>> 
>>> 1)After almost 16-17 calls to the method 'LoadPatternDb(), the memory usage 
>>> got almost constant at 29%. But I did not expect the program to take this 
>>> much memory.
>>>When I restart the service the Mem Usage again starts with 6%.
>>> 
>>> a) Is this the sign of memory leaking?
>> 
>> No, as explained above.
>>  
>>> 
>>> b) Till this moment I did not see memory getting reclaimed or going down 
>>> but it did become constant.
>>> As mentioned by experts above, the same sort of behavior is seen here. But 
>>> I did not expect the memory usage to grow this much. Is this expected? 
>> Yes. (Well, no. But your gut feeling of how much memory
>> should grow is not a suitable benchmark to compare
>> actual growth to.)
>>  
>>> 
>>> 2)I will run mem-profiling at intervals (10 minutes, 100 minutes, etc.) as 
>>> mentioned in the earlier email.
>>> 
>>> a) Which mem-stats variables should I look into for debugging this kind 
>>> of behavior?
>> Alloc/HeapAlloc 
>> But this is probably useless, as nothing here indicates
>> that you do have any memory issues.
>> 
>> V.
>> 
>> 


Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Nitish Saboo
Hi,

Are you saying it is working as expected?

Thanks,
Nitish

On Mon, Mar 16, 2020 at 7:42 PM Volker Dobler 
wrote:

> On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>>
>> Hi,
>>
>> I upgraded the go version and compiled the binary against go version 'go
>> version go1.12.4 linux/amd64'.
>> I ran the program for some time. I made almost 30-40 calls to the method
>> LoadPatternDB().
>> The program starts with 6% Mem Usage. The memory usage increases only
>> when I call the 'LoadPatternDB()' method, which is called by a goroutine
>> at regular intervals of 3 minutes (using a ticker).
>>
>> What I observed is:
>>
>> 1)After almost 16-17 calls to the method 'LoadPatternDB()', the memory
>> usage became almost constant at 29%. But I did not expect the program to
>> take this much memory.
>>    When I restart the service, the Mem Usage again starts at 6%.
>>
>> a) Is this the sign of memory leaking?
>>
>
> No, as explained above.
>
>
>>
>> b) Till this moment I did not see memory getting reclaimed or going down
>> but it did become constant.
>> As mentioned by experts above, the same sort of behavior is seen here.
>> But I did not expect the memory usage to grow this much. Is this expected?
>>
> Yes. (Well, no. But your gut feeling of how much memory
> should grow is not a suitable benchmark to compare
> actual growth to.)
>
>
>>
>> 2)I will run mem-profiling at intervals (10 minutes, 100 minutes, etc.) as
>> mentioned in the earlier email.
>>
>> a) Which mem-stats variables should I look into for debugging this
>> kind of behavior?
>>
> Alloc/HeapAlloc
> But this is probably useless, as nothing here indicates
> that you do have any memory issues.
>
> V.
>


Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Volker Dobler
On Monday, 16 March 2020 14:25:52 UTC+1, Nitish Saboo wrote:
>
> Hi,
>
> I upgraded the go version and compiled the binary against go version 'go 
> version go1.12.4 linux/amd64'.
> I ran the program for some time. I made almost 30-40 calls to the method 
> LoadPatternDB().
> The program starts with 6% Mem Usage. The memory usage increases only when 
> I call the 'LoadPatternDB()' method, which is called by a goroutine at 
> regular intervals of 3 minutes (using a ticker).
>
> What I observed is:
>
> 1)After almost 16-17 calls to the method 'LoadPatternDB()', the memory 
> usage became almost constant at 29%. But I did not expect the program to 
> take this much memory.
>    When I restart the service, the Mem Usage again starts at 6%.
>
> a) Is this the sign of memory leaking?
>

No, as explained above.
 

>
> b) Till this moment I did not see memory getting reclaimed or going down 
> but it did become constant.
> As mentioned by experts above, the same sort of behavior is seen here. But 
> I did not expect the memory usage to grow this much. Is this expected? 
>
Yes. (Well, no. But your gut feeling of how much memory
should grow is not a suitable benchmark to compare
actual growth to.)
 

>
> 2)I will run mem-profiling at intervals (10 minutes, 100 minutes, etc.) as 
> mentioned in the earlier email.
>
> a) Which mem-stats variables should I look into for debugging this 
> kind of behavior?
>
Alloc/HeapAlloc 
But this is probably useless, as nothing here indicates
that you do have any memory issues.

V.
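
A minimal sketch of logging those fields over time (an illustration, not the
poster's service; all field names come from runtime.MemStats):

package main

import (
    "log"
    "runtime"
    "time"
)

func main() {
    for range time.Tick(time.Minute) {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        log.Printf("Alloc=%d HeapAlloc=%d HeapSys=%d HeapReleased=%d NumGC=%d",
            m.Alloc, m.HeapAlloc, m.HeapSys, m.HeapReleased, m.NumGC)
    }
}

If Alloc/HeapAlloc level off while the process as a whole keeps growing, the
growth is happening outside the Go heap.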



Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Robert Engels
Sounds like it. Probably in the C code. You need to check that your 
release/free code is correct. 

You can try a similar C program that instantiates and frees the structure to 
check for similar behavior. 

> On Mar 16, 2020, at 8:25 AM, Nitish Saboo  wrote:
> 
> 
> Hi,
> 
> I upgraded the go version and compiled the binary against go version 'go 
> version go1.12.4 linux/amd64'.
> I ran the program for some time. I made almost 30-40 calls to the method 
> LoadPatternDB().
> The program starts with 6% Mem Usage. The memory usage increases only when I 
> call the 'LoadPatternDB()' method, which is called by a goroutine at regular 
> intervals of 3 minutes (using a ticker).
> 
> What I observed is:
> 
> 1)After almost 16-17 calls to the method 'LoadPatternDB()', the memory usage 
> became almost constant at 29%. But I did not expect the program to take this 
> much memory.
>    When I restart the service, the Mem Usage again starts at 6%.
> 
> a) Is this the sign of memory leaking?
> 
> b) Till this moment I did not see memory getting reclaimed or going down but 
> it did become constant.
> As mentioned by experts above, the same sort of behavior is seen here. But I 
> did not expect the memory usage to grow this much. Is this expected?
> 
> 2)I will run mem-profiling at intervals (10 minutes, 100 minutes, etc.) as 
> mentioned in the earlier email.
> 
> a) Which mem-stats variables should I look into for debugging this kind 
> of behavior?
> 
> Thanks,
> Nitish
> 
>> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones  
>> wrote:
>> Hi. Get the time at the start, check the elapsed time in your infinite loop, 
>> and trigger the write/exit after a minute, 10 minutes, 100 minutes, ...
>> 
>>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo  
>>> wrote:
>>> Hi Michael,
>>> 
>>> Thanks for your response.
>>> 
>>> That code looks wrong. I see the end but not the start. Look here and copy 
>>> carefully:
>>> 
>>> >>Since I did not want CPU profiling, I omitted the start of the code and 
>>> >>just added the memory-profiling part.
>>> 
>>> Call at end, on way out.
>>> 
>>> >>Oh yes, I missed that. I have to call the memory-profiling code at the end 
>>> >>on the way out. But the thing is that it runs as a service in an infinite 
>>> >>for loop.
>>> 
>>> func main() {
>>> flag.Parse()
>>> if *cpuprofile != "" {
>>> f, err := os.Create(*cpuprofile)
>>> if err != nil {
>>> fmt.Println("could not create CPU profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> if err := pprof.StartCPUProfile(f); err != nil {
>>> fmt.Print("could not start CPU profile: ", err)
>>> }
>>> defer pprof.StopCPUProfile()
>>> }
>>> 
>>> A_chan := make(chan bool)
>>> B_chan := make(chan bool)
>>> go util.A(A_chan)
>>> go util.B(B_chan)
>>> (..Rest of the code..)
>>> 
>>> for {
>>> select {
>>> case <-A_chan: 
>>> continue
>>> case <-B_chan: 
>>> continue
>>> 
>>> }
>>> }
>>> 
>>> }
>>> 
>>> What would be the correct way to add the memprofile code changes, since it 
>>> is running in an infinite for loop?
>>> 
>>> Also, as shared by others above, there are no promises about how soon the 
>>> dead allocations go away. The speed gets faster and faster version to 
>>> version, and is impressive indeed now, so old versions are not the best to 
>>> use; but even so, if the allocation feels small to the GC, the urgency to 
>>> free it will be low. You need to allocate in a loop and see if the memory 
>>> grows and grows.
>>> 
>>> >> Yes, got it. I will try using the latest version of Go and check the 
>>> >> behavior.
>>> 
>>> Thanks,
>>> Nitish
>>> 
 On Fri, Mar 13, 2020 at 6:20 AM Michael Jones  
 wrote:
 That code looks wrong. I see the end but not the start. Look here and copy 
 carefully:
 https://golang.org/pkg/runtime/pprof/
 
 Call at end, on way out.
 
 Also, as shared by others above, there are no promises about how soon the 
 dead allocations go away. The speed gets faster and faster version to 
 version, and is impressive indeed now, so old versions are not the best to 
 use; but even so, if the allocation feels small to the GC, the urgency to 
 free it will be low. You need to allocate in a loop and see if the memory 
 grows and grows.
 
> On Thu, Mar 12, 2020 at 9:22 AM Nitish Saboo  
> wrote:
> Hi,
> 
> I have compiled my Go binary against go version 'go1.7 linux/amd64'.
> I added the following code change in the main function to get the memory 
> profiling of my service 
> 
> var memprofile = flag.String("memprofile", "", "write memory profile to 
> `file`")
> 
> func main() {
> flag.Parse()
> if *memprofile != "" {
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC() // get up-to-date statistics
> if err := 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-16 Thread Nitish Saboo
Hi,

I upgraded the go version and compiled the binary against go version 'go
version go1.12.4 linux/amd64'.
I ran the program for some time. I made almost 30-40 calls to the method
LoadPatternDB().
The program starts with 6% Mem Usage. The memory usage increases only when
I call the 'LoadPatternDB()' method, which is called by a goroutine at
regular intervals of 3 minutes (using a ticker).

What I observed is:

1)After almost 16-17 calls to the method 'LoadPatternDB()', the memory usage
became almost constant at 29%. But I did not expect the program to take this
much memory.
   When I restart the service, the Mem Usage again starts at 6%.

a) Is this the sign of memory leaking?

b) Till this moment I did not see memory getting reclaimed or going down
but it did become constant.
As mentioned by experts above, the same sort of behavior is seen here. But
I did not expect the memory usage to grow this much. Is this expected?

2)I will run mem-profiling at intervals (10 minutes, 100 minutes, etc.) as
mentioned in the earlier email.

a) Which mem-stats variables should I look into for debugging this kind
of behavior?

Thanks,
Nitish

On Fri, Mar 13, 2020 at 6:22 PM Michael Jones 
wrote:

> Hi. Get the time at the start, check the elapsed time in your infinite
> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
> ...
>
> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
> wrote:
>
>> Hi Michael,
>>
>> Thanks for your response.
>>
>> That code looks wrong. I see the end but not the start. Look here and
>> copy carefully:
>>
>> >>Since I did not want CPU profiling, I omitted the start of the code and
>> just added the memory-profiling part.
>>
>> Call at end, on way out.
>>
>> >>Oh yes, I missed that. I have to call the memory-profiling code at the end
>> on the way out. But the thing is that it runs as a service in an infinite
>> for loop.
>>
>> func main() {
>> flag.Parse()
>> if *cpuprofile != "" {
>> f, err := os.Create(*cpuprofile)
>> if err != nil {
>> fmt.Println("could not create CPU profile: ", err)
>> }
>> defer f.Close() // error handling omitted for example
>> if err := pprof.StartCPUProfile(f); err != nil {
>> fmt.Print("could not start CPU profile: ", err)
>> }
>> defer pprof.StopCPUProfile()
>> }
>>
>> A_chan := make(chan bool)
>> B_chan := make(chan bool)
>> go util.A(A_chan)
>> go util.B(B_chan)
>> (..Rest of the code..)
>>
>> for {
>> select {
>> case <-A_chan:
>> continue
>> case <-B_chan:
>> continue
>>
>> }
>> }
>>
>> }
>>
>> What would be the correct way to add the memprofile code changes, since
>> it is running in an infinite for loop?
>>
>> Also, as shared by others above, there are no promises about how soon the
>> dead allocations go away. The speed gets faster and faster version to
>> version, and is impressive indeed now, so old versions are not the best to
>> use; but even so, if the allocation feels small to the GC, the urgency to
>> free it will be low. You need to allocate in a loop and see if the memory
>> grows and grows.
>>
>> >> Yes, got it. I will try using the latest version of Go and check the
>> behavior.
>>
>> Thanks,
>> Nitish
>>
>> On Fri, Mar 13, 2020 at 6:20 AM Michael Jones 
>> wrote:
>>
>>> That code looks wrong. I see the end but not the start. Look here and
>>> copy carefully:
>>> https://golang.org/pkg/runtime/pprof/
>>>
>>> Call at end, on way out.
>>>
>>> Also, as shared by others above, there are no promises about how soon
>>> the dead allocations go away. The speed gets faster and faster version to
>>> version, and is impressive indeed now, so old versions are not the best to
>>> use; but even so, if the allocation feels small to the GC, the urgency to
>>> free it will be low. You need to allocate in a loop and see if the memory
>>> grows and grows.
>>>
>>> On Thu, Mar 12, 2020 at 9:22 AM Nitish Saboo 
>>> wrote:
>>>
 Hi,

 I have compiled my Go binary against go version 'go1.7 linux/amd64'.
 I added the following code change in the main function to get the
 memory profiling of my service

 var memprofile = flag.String("memprofile", "", "write memory profile to
 `file`")

 func main() {
 flag.Parse()
 if *memprofile != "" {
 f, err := os.Create(*memprofile)
 if err != nil {
 fmt.Println("could not create memory profile: ", err)
 }
 defer f.Close() // error handling omitted for example
 runtime.GC() // get up-to-date statistics
 if err := pprof.WriteHeapProfile(f); err != nil {
 fmt.Println("could not write memory profile: ", err)
 }
 }
 ..
 ..
 (Rest code to follow)

 I ran the binary with the following command:

 nsaboo@ubuntu:./main -memprofile=mem.prof

 After running the service for a couple of minutes, I stopped it and got
 the file 'mem.prof'

 1)mem.prof contains the following:

 nsaboo@ubuntu:~/Desktop/memprof$ vim mem.prof

 heap profile: 0: 0 [0: 0] @ heap/1048576

 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-13 Thread Michael Jones
Hi. Get the time at the start, check the elapsed time in your infinite
loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes,
...
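
A minimal sketch of that idea, assuming the ticker-driven select loop posted
earlier in the thread; the file names and the 10-minute interval are
placeholders:

package main

import (
    "fmt"
    "os"
    "runtime"
    "runtime/pprof"
    "time"
)

// writeHeapProfile dumps the current heap to a numbered profile file.
func writeHeapProfile(n int) {
    f, err := os.Create(fmt.Sprintf("mem-%d.prof", n))
    if err != nil {
        fmt.Println("could not create memory profile: ", err)
        return
    }
    defer f.Close()
    runtime.GC() // get up-to-date statistics
    if err := pprof.WriteHeapProfile(f); err != nil {
        fmt.Println("could not write memory profile: ", err)
    }
}

func main() {
    profileTick := time.Tick(10 * time.Minute)
    n := 0
    for {
        select {
        // case <-A_chan, case <-B_chan: the service's real work goes here
        case <-profileTick:
            n++
            writeHeapProfile(n)
        }
    }
}

Comparing successive profiles with go tool pprof then shows which
allocations, if any, keep growing between snapshots.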

On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo 
wrote:

> Hi Michael,
>
> Thanks for your response.
>
> That code looks wrong. I see the end but not the start. Look here and copy
> carefully:
>
> >>Since I did not want CPU profiling, I omitted the start of the code and
> just added the memory-profiling part.
>
> Call at end, on way out.
>
> >>Oh yes, I missed that. I have to call the memory-profiling code at the end on
> the way out. But the thing is that it runs as a service in an infinite for loop.
>
> func main() {
> flag.Parse()
> if *cpuprofile != "" {
> f, err := os.Create(*cpuprofile)
> if err != nil {
> fmt.Println("could not create CPU profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> if err := pprof.StartCPUProfile(f); err != nil {
> fmt.Print("could not start CPU profile: ", err)
> }
> defer pprof.StopCPUProfile()
> }
>
> A_chan := make(chan bool)
> B_chan := make(chan bool)
> go util.A(A_chan)
> go util.B(B_chan)
> (..Rest of the code..)
>
> for {
> select {
> case <-A_chan:
> continue
> case <-B_chan:
> continue
>
> }
> }
>
> }
>
> What would be the correct way to add the memprofile code changes, since it
> is running in an infinite for loop?
>
> Also, as shared by others above, there are no promises about how soon the
> dead allocations go away. The speed gets faster and faster version to
> version, and is impressive indeed now, so old versions are not the best to
> use; but even so, if the allocation feels small to the GC, the urgency to
> free it will be low. You need to allocate in a loop and see if the memory
> grows and grows.
>
> >> Yes, got it. I will try using the latest version of Go and check the
> behavior.
>
> Thanks,
> Nitish
>
> On Fri, Mar 13, 2020 at 6:20 AM Michael Jones 
> wrote:
>
>> That code looks wrong. I see the end but not the start. Look here and
>> copy carefully:
>> https://golang.org/pkg/runtime/pprof/
>>
>> Call at end, on way out.
>>
>> Also, as shared by others above, there are no promises about how soon the
>> dead allocations go away. The speed gets faster and faster version to
>> version, and is impressive indeed now, so old versions are not the best to
>> use; but even so, if the allocation feels small to the GC, the urgency to
>> free it will be low. You need to allocate in a loop and see if the memory
>> grows and grows.
>>
>> On Thu, Mar 12, 2020 at 9:22 AM Nitish Saboo 
>> wrote:
>>
>>> Hi,
>>>
>>> I have compiled my Go binary against go version 'go1.7 linux/amd64'.
>>> I added the following code change in the main function to get the memory
>>> profiling of my service
>>>
>>> var memprofile = flag.String("memprofile", "", "write memory profile to
>>> `file`")
>>>
>>> func main() {
>>> flag.Parse()
>>> if *memprofile != "" {
>>> f, err := os.Create(*memprofile)
>>> if err != nil {
>>> fmt.Println("could not create memory profile: ", err)
>>> }
>>> defer f.Close() // error handling omitted for example
>>> runtime.GC() // get up-to-date statistics
>>> if err := pprof.WriteHeapProfile(f); err != nil {
>>> fmt.Println("could not write memory profile: ", err)
>>> }
>>> }
>>> ..
>>> ..
>>> (Rest code to follow)
>>>
>>> I ran the binary with the following command:
>>>
>>> nsaboo@ubuntu:./main -memprofile=mem.prof
>>>
>>> After running the service for a couple of minutes, I stopped it and got
>>> the file 'mem.prof'
>>>
>>> 1)mem.prof contains the following:
>>>
>>> nsaboo@ubuntu:~/Desktop/memprof$ vim mem.prof
>>>
>>> heap profile: 0: 0 [0: 0] @ heap/1048576
>>>
>>> # runtime.MemStats
>>> # Alloc = 761184
>>> # TotalAlloc = 1160960
>>> # Sys = 3149824
>>> # Lookups = 10
>>> # Mallocs = 8358
>>> # Frees = 1981
>>> # HeapAlloc = 761184
>>> # HeapSys = 1802240
>>> # HeapIdle = 499712
>>> # HeapInuse = 1302528
>>> # HeapReleased = 0
>>> # HeapObjects = 6377
>>> # Stack = 294912 / 294912
>>> # MSpan = 22560 / 32768
>>> # MCache = 2400 / 16384
>>> # BuckHashSys = 2727
>>> # NextGC = 4194304
>>> # PauseNs = [752083 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>>> 0]
>>> # NumGC = 1
>>> # DebugGC = false
>>>
>>> 2)When I tried to open the file using the following command, it just
>>> goes into interactive mode and shows nothing
>>>
>>> a)Output from go version go1.7 linux/amd64 for mem.prof
>>>
>>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
>>> Entering interactive mode (type "help" for commands)
>>> (pprof) top
>>> profile 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-13 Thread Nitish Saboo
Hi Michael,

Thanks for your response.

That code looks wrong. I see the end but not the start. Look here and copy
carefully:

>>Since I did not want CPU profiling, I omitted the start of the code and
just added the memory-profiling part.

Call at end, on way out.

>>Oh yes, I missed that. I have to call the memory-profiling code at the end on
the way out. But the thing is that it runs as a service in an infinite for loop.

func main() {
flag.Parse()
if *cpuprofile != "" {
f, err := os.Create(*cpuprofile)
if err != nil {
fmt.Println("could not create CPU profile: ", err)
}
defer f.Close() // error handling omitted for example
if err := pprof.StartCPUProfile(f); err != nil {
fmt.Print("could not start CPU profile: ", err)
}
defer pprof.StopCPUProfile()
}

A_chan := make(chan bool)
B_chan := make(chan bool)
go util.A(A_chan)
go util.B(B_chan)
(..Rest of the code..)

for {
select {
case <-A_chan:
continue
case <-B_chan:
continue

}
}

}

What would be the correct way to add the memprofile code changes, since it
is running in an infinite for loop?

Also, as shared by others above, there are no promises about how soon the
dead allocations go away. The speed gets faster and faster version to
version, and is impressive indeed now, so old versions are not the best to
use; but even so, if the allocation feels small to the GC, the urgency to
free it will be low. You need to allocate in a loop and see if the memory
grows and grows.

>> Yes, got it. I will try using the latest version of Go and check the
behavior.

Thanks,
Nitish

On Fri, Mar 13, 2020 at 6:20 AM Michael Jones 
wrote:

> That code looks wrong. I see the end but not the start. Look here and copy
> carefully:
> https://golang.org/pkg/runtime/pprof/
>
> Call at end, on way out.
>
> Also, as shared by others above, there are no promises about how soon the
> dead allocations go away. The speed gets faster and faster version to
> version, and is impressive indeed now, so old versions are not the best to
> use; but even so, if the allocation feels small to the GC, the urgency to
> free it will be low. You need to allocate in a loop and see if the memory
> grows and grows.
>
> On Thu, Mar 12, 2020 at 9:22 AM Nitish Saboo 
> wrote:
>
>> Hi,
>>
>> I have compiled my Go binary against go version 'go1.7 linux/amd64'.
>> I added the following code change in the main function to get the memory
>> profiling of my service
>>
>> var memprofile = flag.String("memprofile", "", "write memory profile to
>> `file`")
>>
>> func main() {
>> flag.Parse()
>> if *memprofile != "" {
>> f, err := os.Create(*memprofile)
>> if err != nil {
>> fmt.Println("could not create memory profile: ", err)
>> }
>> defer f.Close() // error handling omitted for example
>> runtime.GC() // get up-to-date statistics
>> if err := pprof.WriteHeapProfile(f); err != nil {
>> fmt.Println("could not write memory profile: ", err)
>> }
>> }
>> ..
>> ..
>> (Rest code to follow)
>>
>> I ran the binary with the following command:
>>
>> nsaboo@ubuntu:./main -memprofile=mem.prof
>>
>> After running the service for a couple of minutes, I stopped it and got the
>> file 'mem.prof'
>>
>> 1)mem.prof contains the following:
>>
>> nsaboo@ubuntu:~/Desktop/memprof$ vim mem.prof
>>
>> heap profile: 0: 0 [0: 0] @ heap/1048576
>>
>> # runtime.MemStats
>> # Alloc = 761184
>> # TotalAlloc = 1160960
>> # Sys = 3149824
>> # Lookups = 10
>> # Mallocs = 8358
>> # Frees = 1981
>> # HeapAlloc = 761184
>> # HeapSys = 1802240
>> # HeapIdle = 499712
>> # HeapInuse = 1302528
>> # HeapReleased = 0
>> # HeapObjects = 6377
>> # Stack = 294912 / 294912
>> # MSpan = 22560 / 32768
>> # MCache = 2400 / 16384
>> # BuckHashSys = 2727
>> # NextGC = 4194304
>> # PauseNs = [752083 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
>> # NumGC = 1
>> # DebugGC = false
>>
>> 2)When I tried to open the file using the following command, it just goes
>> into interactive mode and shows nothing
>>
>> a)Output from go version go1.7 linux/amd64 for mem.prof
>>
>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
>> Entering interactive mode (type "help" for commands)
>> (pprof) top
>> profile is empty
>> (pprof)
>>
>> b)Output from go version go1.12.4 linux/amd64 for mem.prof
>>
>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
>> Type: space
>> No samples were found with the default sample value type.
>> Try "sample_index" command to analyze different sample values.
>> Entering interactive mode (type "help" for commands, "o" for options)
>> (pprof) o
>>   call_tree = false

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-12 Thread Michael Jones
That code looks wrong. I see the end but not the start. Look here and copy
carefully:
https://golang.org/pkg/runtime/pprof/

Call at end, on way out.

Also, as shared by others above, there are no promises about how soon the
dead allocations go away. The speed gets faster and faster version to
version, and is impressive indeed now, so old versions are not the best to
use; but even so, if the allocation feels small to the GC, the urgency to
free it will be low. You need to allocate in a loop and see if the memory
grows and grows.
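
A minimal sketch of the "call at end, on way out" shape for a long-running
service, assuming shutdown arrives as SIGINT/SIGTERM (the signal handling is
an illustration, not code from the thread):

package main

import (
    "fmt"
    "os"
    "os/signal"
    "runtime"
    "runtime/pprof"
    "syscall"
)

func main() {
    // ... start the service's goroutines here ...

    // Block until the service is told to stop, then write the heap
    // profile on the way out, after the allocations have happened.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
    <-stop

    f, err := os.Create("mem.prof")
    if err != nil {
        fmt.Println("could not create memory profile: ", err)
        return
    }
    defer f.Close()
    runtime.GC() // get up-to-date statistics
    if err := pprof.WriteHeapProfile(f); err != nil {
        fmt.Println("could not write memory profile: ", err)
    }
}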

On Thu, Mar 12, 2020 at 9:22 AM Nitish Saboo 
wrote:

> Hi,
>
> I have compiled my Go binary against go version 'go1.7 linux/amd64'.
> I added the following code change in the main function to get the memory
> profiling of my service
>
> var memprofile = flag.String("memprofile", "", "write memory profile to
> `file`")
>
> func main() {
> flag.Parse()
> if *memprofile != "" {
> f, err := os.Create(*memprofile)
> if err != nil {
> fmt.Println("could not create memory profile: ", err)
> }
> defer f.Close() // error handling omitted for example
> runtime.GC() // get up-to-date statistics
> if err := pprof.WriteHeapProfile(f); err != nil {
> fmt.Println("could not write memory profile: ", err)
> }
> }
> ..
> ..
> (Rest code to follow)
>
> I ran the binary with the following command:
>
> nsaboo@ubuntu:./main -memprofile=mem.prof
>
> After running the service for a couple of minutes, I stopped it and got the
> file 'mem.prof'
>
> 1)mem.prof contains the following:
>
> nsaboo@ubuntu:~/Desktop/memprof$ vim mem.prof
>
> heap profile: 0: 0 [0: 0] @ heap/1048576
>
> # runtime.MemStats
> # Alloc = 761184
> # TotalAlloc = 1160960
> # Sys = 3149824
> # Lookups = 10
> # Mallocs = 8358
> # Frees = 1981
> # HeapAlloc = 761184
> # HeapSys = 1802240
> # HeapIdle = 499712
> # HeapInuse = 1302528
> # HeapReleased = 0
> # HeapObjects = 6377
> # Stack = 294912 / 294912
> # MSpan = 22560 / 32768
> # MCache = 2400 / 16384
> # BuckHashSys = 2727
> # NextGC = 4194304
> # PauseNs = [752083 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> # NumGC = 1
> # DebugGC = false
>
> 2)When I tried to open the file using the following command, it just goes
> into interactive mode and shows nothing
>
> a)Output from go version go1.7 linux/amd64 for mem.prof
>
> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
> Entering interactive mode (type "help" for commands)
> (pprof) top
> profile is empty
> (pprof)
>
> b)Output from go version go1.12.4 linux/amd64 for mem.prof
>
> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
> Type: space
> No samples were found with the default sample value type.
> Try "sample_index" command to analyze different sample values.
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) o
>   call_tree = false
>   compact_labels= true
>   cumulative= flat //: [cum | flat]
>   divide_by = 1
>   drop_negative = false
>   edgefraction  = 0.001
>   focus = ""
>   granularity   = functions//: [addresses |
> filefunctions | files | functions | lines]
>   hide  = ""
>   ignore= ""
>   mean  = false
>   nodecount = -1   //: default
>   nodefraction  = 0.005
>   noinlines = false
>   normalize = false
>   output= ""
>   prune_from= ""
>   relative_percentages  = false
>   sample_index  = space//: [objects | space]
>   show  = ""
>   show_from = ""
>   tagfocus  = ""
>   taghide   = ""
>   tagignore = ""
>   tagshow   = ""
>   trim  = true
>   trim_path = ""
>   unit  = minimum
> (pprof) space
> (pprof) sample_index
> (pprof) top
> Showing nodes accounting for 0, 0% of 0 total
>   flat  flat%   sum%cum   cum%
>
>
> 3)Please let me know if this is the correct way of getting the memory
> profile?
>
> 4)Can we deduce something from these memory stats that points us to the
> increase in memory usage?
>
> 5)I am just thinking out loud: since I am using go1.7, could that be the
> reason for the increase in memory usage, and might it be fixed in later
> Go versions?
>
> Thanks,
> Nitish
>
> On 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-12 Thread Nitish Saboo
Hi,

I have compiled my Go binary against go version 'go1.7 linux/amd64'.
I added the following code change in the main function to get the memory
profiling of my service

var memprofile = flag.String("memprofile", "", "write memory profile to
`file`")

func main() {
flag.Parse()
if *memprofile != "" {
f, err := os.Create(*memprofile)
if err != nil {
fmt.Println("could not create memory profile: ", err)
}
defer f.Close() // error handling omitted for example
runtime.GC() // get up-to-date statistics
if err := pprof.WriteHeapProfile(f); err != nil {
fmt.Println("could not write memory profile: ", err)
}
}
..
..
(Rest code to follow)

I ran the binary with the following command:

nsaboo@ubuntu:./main -memprofile=mem.prof

After running the service for a couple of minutes, I stopped it and got the
file 'mem.prof'

1)mem.prof contains the following:

nsaboo@ubuntu:~/Desktop/memprof$ vim mem.prof

heap profile: 0: 0 [0: 0] @ heap/1048576

# runtime.MemStats
# Alloc = 761184
# TotalAlloc = 1160960
# Sys = 3149824
# Lookups = 10
# Mallocs = 8358
# Frees = 1981
# HeapAlloc = 761184
# HeapSys = 1802240
# HeapIdle = 499712
# HeapInuse = 1302528
# HeapReleased = 0
# HeapObjects = 6377
# Stack = 294912 / 294912
# MSpan = 22560 / 32768
# MCache = 2400 / 16384
# BuckHashSys = 2727
# NextGC = 4194304
# PauseNs = [752083 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
# NumGC = 1
# DebugGC = false

2)When I tried to open the file using the following command, it just goes
into interactive mode and shows nothing

a)Output from go version go1.7 linux/amd64 for mem.prof

nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
Entering interactive mode (type "help" for commands)
(pprof) top
profile is empty
(pprof)

b)Output from go version go1.12.4 linux/amd64 for mem.prof

nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof
Type: space
No samples were found with the default sample value type.
Try "sample_index" command to analyze different sample values.
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) o
  call_tree = false
  compact_labels= true
  cumulative= flat //: [cum | flat]
  divide_by = 1
  drop_negative = false
  edgefraction  = 0.001
  focus = ""
  granularity   = functions//: [addresses |
filefunctions | files | functions | lines]
  hide  = ""
  ignore= ""
  mean  = false
  nodecount = -1   //: default
  nodefraction  = 0.005
  noinlines = false
  normalize = false
  output= ""
  prune_from= ""
  relative_percentages  = false
  sample_index  = space//: [objects | space]
  show  = ""
  show_from = ""
  tagfocus  = ""
  taghide   = ""
  tagignore = ""
  tagshow   = ""
  trim  = true
  trim_path = ""
  unit  = minimum
(pprof) space
(pprof) sample_index
(pprof) top
Showing nodes accounting for 0, 0% of 0 total
  flat  flat%   sum%cum   cum%


3)Please let me know if this is the correct way of getting the memory
profile?

4)Can we deduce something from these memory stats that points us to the increase
in memory usage?

5)I am just thinking out loud: since I am using go1.7, could that be the
reason for the increase in memory usage, and might it be fixed in later
Go versions?

Thanks,
Nitish

On Tue, Mar 10, 2020 at 6:56 AM Jake Montgomery  wrote:

>
> On Monday, March 9, 2020 at 1:37:00 PM UTC-4, Nitish Saboo wrote:
>>
>> Hi Jake,
>>
>> The memory usage remains constant when the rest of the service is
>> running.Only when LoadPatternDB() method is called within the service,
>> Memory Consumption increases which actually should not happen.
>>  I am assuming if there is a memory leak while calling this method
>> because the memory usage then becomes constant after getting increased and
>> then further increases on next call.
>>
>
> It's possible that I am not fully understanding, perhaps a language
> problem. But from what you have written above I still don't see that this
> means you definitely have a memory leak. To test for that you would need to 
> *continuously
> *call LoadPatternDB() and monitor memory for a 

Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-09 Thread Jake Montgomery

On Monday, March 9, 2020 at 1:37:00 PM UTC-4, Nitish Saboo wrote:
>
> Hi Jake,
>
> The memory usage remains constant when the rest of the service is 
> running. Only when the LoadPatternDB() method is called within the service 
> does memory consumption increase, which actually should not happen.
>  I am assuming there is a memory leak while calling this method, because 
> the memory usage becomes constant after increasing and then 
> increases further on the next call.
>

It's possible that I am not fully understanding, perhaps a language problem. 
But from what you have written above I still don't see that this means you 
definitely have a memory leak. To test for that you would need to *continuously* 
call LoadPatternDB() and monitor memory for a *considerable time*. If it 
eventually stabilizes to a constant range then there is no leak, just 
normal Go-GC variation. If it never stops climbing, and eventually consumes 
all the memory, then it would probably be a leak. Just because it goes up 
after one call, or a few calls, does not mean there is a leak. 
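
A minimal sketch of such a soak test, assuming the LoadPatternDB wrapper and
the pattern-db path from the code posted earlier in the thread:

package main

import (
    "log"
    "runtime"
    "time"
)

func main() {
    for i := 1; ; i++ {
        // LoadPatternDB is the cgo wrapper from parser.go in this thread.
        LoadPatternDB("/home/nitish/default.xml")
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        log.Printf("call %d: HeapAlloc=%d HeapSys=%d", i, m.HeapAlloc, m.HeapSys)
        time.Sleep(10 * time.Second)
    }
}

Left running for hours, a real leak climbs without bound, while normal GC
variation settles into a constant range.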



Re: [go-nuts] Re: Mem-Leak in Go method

2020-03-09 Thread Nitish Saboo
Hi Jake,

The memory usage remains constant when the rest of the service is
running. Only when the LoadPatternDB() method is called within the service
does memory consumption increase, which actually should not happen.
 I am assuming there is a memory leak while calling this method, because
the memory usage becomes constant after increasing and then
increases further on the next call.

Thanks,
Nitish

On Mon, Mar 9, 2020 at 9:43 PM Jake Montgomery  wrote:

> You may indeed have a leak. I did not check your code that carefully. But
> i do wonder about your statement:
>
>>
>> 1)As soon as I call the LoadPatternDB() method in parser.go there is some
>> increase in memory consumption (some memory leak). Ideally that should not
>> have happened.
>>
>
> Go is a garbage collected language. So it is not uncommon for code to
> allocate memory even when it is not obvious why. Just because memory is
> allocated, or even seems to increase over time, does not mean there is a
> leak. In Go, you have a leak if the memory increases *unbounded *over
> time. You need to run your code for a long, long time, and see if the
> memory usage eventually stabilizes. If it does, then you do not have a
> leak.
>
> There are many discussions on this topic, but one from just a few days
> might be helpful:
> https://groups.google.com/d/topic/golang-nuts/QoKBlrrq3Ww/discussion
>
> Of course, you may have an actual leak. But "some increase in memory" is
> not enough to say that you do.
>
> On Monday, March 9, 2020 at 7:34:31 AM UTC-4, Nitish Saboo wrote:
>>
>> Hi
>>
>> Following are my Go code, C header file, and C wrapper code
>>
>> parser.go
>> ==
>> var f *os.File
>>
>> func LoadPatternDB(patterndb string) {
>> path := C.CString(patterndb)
>> defer C.free(unsafe.Pointer(path))
>> C.load_pattern_db(path,
>> (C.key_value_cb)(unsafe.Pointer(C.callOnMeGo_cgo)))
>> }
>>
>> //export ParsedData
>> func ParsedData(k *C.char, val *C.char, val_len C.size_t) {
>> f.WriteString(C.GoString(k))
>> f.WriteString("\n")
>> }
>>
>> cfunc.go
>> 
>> /*
>> #include 
>> // The gateway function
>> void callOnMeGo_cgo(char *k, char *val, size_t val_len)
>> {
>> void ParsedData(const char *k, const char *val, size_t val_len);
>> ParsedData(k, val, val_len);
>> }
>> */
>> import "C"
>>
>> node.h
>> ===
>>
>> #ifndef TEST_H_INCLUDED
>> #define TEST_H_INCLUDED
>>
>> #include 
>>
>> typedef void (*key_value_cb)(const char* k, const char* val, size_t
>> val_len);
>> int load_pattern_db(const char* file, key_value_cb cb);
>>
>> #endif
>>
>> node.c
>> -
>> int load_pattern_db(const gchar* file, key_value_cb cb)
>> {
>>   patterndb = pattern_db_new();
>>   pattern_db_reload_ruleset(patterndb, configuration, file);
>>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>>   return 0;
>> }
>>
>>
>> I am calling the '*LoadPatternDB*' method in my parser.go file, which makes a
>> cgo call, '*C.load_pattern_db*', where I am passing a callback function to
>> the C code.
>> The C code is a wrapper code that internally calls some syslog-ng library
>> APIs.
>>
>> What I observed is:
>>
>> 1)As soon as I call the LoadPatternDB() method in parser.go there is some
>> increase in memory consumption (some memory leak). Ideally that should not
>> have happened.
>>
>> 2)To verify if the C code had some issue, I called the C wrapper code
>> method '*load_pattern_db*' from my main.c in the following manner to
>> completely eliminate Go code here. What I found is that there is no increase in
>> memory consumption after every call ('load_pattern_db' was called 5
>> times). Hence there is no memory leak from the C code. So the issue lies in the
>> Go code, in the '*LoadPatternDB*' method in parser.go.
>>
>> main.c
>> ===
>>
>> void check(char *key, char *value, size_t value_len)
>> {
>>   printf("I am in function check\n");
>> }
>>
>> int main(void){
>> char* filename = "/home/nitish/default.xml";
>> key_value_cb s = check;
>> int i;
>> for (i=1; i<=5; i++)
>> {
>> load_pattern_db(filename, s);
>> printf("Sleeping for 5 second.\n");
>> sleep(5);
>> }
>> printf("Loading done 5 times.\n");
>> return 0;
>> }
>>
>> 3)Can someone please guide me and help me figure out the mem-leak in
>> the 'LoadPatternDB' method in parser.go at first glance? Is the callback
>> function pointer an issue here?
>>
>> 4)What tool can I use to check this mem-leak?
>>
>> Thanks,
>> Nitish
>>

[go-nuts] Re: Mem-Leak in Go method

2020-03-09 Thread Jake Montgomery
You may indeed have a leak. I did not check your code that carefully. But I 
do wonder about your statement:

>
> 1)As soon as I call the LoadPatternDB() method in parser.go there is some 
> increase in memory consumption (some memory leak). Ideally that should not 
> have happened.
>

Go is a garbage collected language. So it is not uncommon for code to 
allocate memory even when it is not obvious why. Just because memory is 
allocated, or even seems to increase over time, does not mean there is a 
leak. In Go, you have a leak if the memory increases *unbounded *over time. 
You need to run your code for a long, long time, and see if the memory 
usage eventually stabilizes. If it does, then you do not have a leak. 

There are many discussions on this topic, but one from just a few days ago 
might be helpful: 
https://groups.google.com/d/topic/golang-nuts/QoKBlrrq3Ww/discussion

Of course, you may have an actual leak. But "some increase in memory" is 
not enough to say that you do. 

On Monday, March 9, 2020 at 7:34:31 AM UTC-4, Nitish Saboo wrote:
>
> Hi
>
> Following are my Go code, C header file, and C wrapper code
>
> parser.go
> ==
> var f *os.File
>
> func LoadPatternDB(patterndb string) {
> path := C.CString(patterndb)
> defer C.free(unsafe.Pointer(path))
> C.load_pattern_db(path, (C.key_value_cb)(unsafe.Pointer(C.callOnMeGo_cgo)))
> }
>
> //export ParsedData
> func ParsedData(k *C.char, val *C.char, val_len C.size_t) {
> f.WriteString(C.GoString(k))
> f.WriteString("\n")
> }
>
> cfunc.go
> 
> /*
> #include 
> // The gateway function
> void callOnMeGo_cgo(char *k, char *val, size_t val_len)
> {
> void ParsedData(const char *k, const char *val, size_t val_len);
> ParsedData(k, val, val_len);
> }
> */
> import "C"
>
> node.h
> ===
>
> #ifndef TEST_H_INCLUDED
> #define TEST_H_INCLUDED
>
> #include 
>
> typedef void (*key_value_cb)(const char* k, const char* val, size_t 
> val_len);
> int load_pattern_db(const char* file, key_value_cb cb);
>
> #endif
>
> node.c
> -
> int load_pattern_db(const gchar* file, key_value_cb cb)
> {
>   patterndb = pattern_db_new();
>   pattern_db_reload_ruleset(patterndb, configuration, file);
>   pattern_db_set_emit_func(patterndb, pdbtool_pdb_emit_accumulate, cb);
>   return 0;
> }
>
>
> I am calling the '*LoadPatternDB*' method in my parser.go file, which makes a 
> cgo call, '*C.load_pattern_db*', where I am passing a callback function to 
> the C code.
> The C code is a wrapper code that internally calls some syslog-ng library 
> APIs.
>
> What I observed is:
>
> 1)As soon as I call the LoadPatternDB() method in parser.go there is some 
> increase in memory consumption (some memory leak). Ideally that should not 
> have happened.
>
> 2)To verify if the C code had some issue, I called the C wrapper code 
> method '*load_pattern_db*' from my main.c in the following manner to 
> completely eliminate Go code here. What I found is that there is no increase in 
> memory consumption after every call ('load_pattern_db' was called 5 
> times). Hence there is no memory leak from the C code. So the issue lies in the 
> Go code, in the '*LoadPatternDB*' method in parser.go.
>
> main.c
> ===
>
> void check(char *key, char *value, size_t value_len)
> {
>   printf("I am in function check\n");
> }
>
> int main(void){
> char* filename = "/home/nitish/default.xml";
> key_value_cb s = check;
> int i;
> for (i=1; i<=5; i++)
> {
> load_pattern_db(filename, s);
> printf("Sleeping for 5 second.\n");
> sleep(5);
> }
> printf("Loading done 5 times.\n");
> return 0;
> }
>
> 3)Can someone please guide me and help me figure out the mem-leak in 
> the 'LoadPatternDB' method in parser.go at first glance? Is the callback 
> function pointer an issue here?
>
> 4)What tool can I use to check this mem-leak?
>
> Thanks,
> Nitish
>
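
A minimal sketch of one way to narrow down question 4), assuming Linux
(/proc/self/status): log Go's own numbers next to the process RSS. If RSS
keeps climbing while Go's Sys stays flat, the growth is outside the Go heap,
e.g. in the C library:

package main

import (
    "bufio"
    "log"
    "os"
    "runtime"
    "strings"
    "time"
)

// vmRSS returns the VmRSS value from /proc/self/status (Linux only).
func vmRSS() string {
    f, err := os.Open("/proc/self/status")
    if err != nil {
        return "unknown"
    }
    defer f.Close()
    s := bufio.NewScanner(f)
    for s.Scan() {
        if strings.HasPrefix(s.Text(), "VmRSS:") {
            return strings.TrimSpace(strings.TrimPrefix(s.Text(), "VmRSS:"))
        }
    }
    return "unknown"
}

func main() {
    for range time.Tick(30 * time.Second) {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // Sys is all memory the Go runtime has obtained from the OS.
        log.Printf("Go Sys=%d HeapAlloc=%d RSS=%s", m.Sys, m.HeapAlloc, vmRSS())
    }
}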
