Sorry, not a paper: the two websites I gave you previously. 

> On Mar 24, 2020, at 7:02 AM, Nitish Saboo <nitish.sabo...@gmail.com> wrote:
> 
> 
> Hi,
> 
> >> There is no root analysis available in Go. Read the paper I linked to. 
> 
> Sorry, I did not get you. Which paper are you referring to?
> 
> While I was running the service for 100 minutes, the 'top' command was showing 
> Mem% as 11.1. There was no increase in memory usage, since I had not called 
> the 'LoadPatternDB()' method. The node I am running the service on has 8GB of 
> memory. My issue is:
> 
> Why is the profile showing memory accounting for around 17GB? 11.1% of 8GB is 
> about 0.88GB, and my node has only 8GB. I feel the way I gathered the memory 
> profile was not correct; is that the case?
> Please advise me on what I am missing.
> 
> Thanks,
> Nitish
> 
>> On Tue, Mar 24, 2020 at 1:28 AM Robert Engels <reng...@ix.netcom.com> wrote:
>> Yes. You have a leak in your Go code. It shows you the object types that are 
>> taking up all of the space. There is no root analysis available in Go. Read 
>> the paper I linked to. 
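>> 
>> For example, a minimal pprof invocation (a sketch only, assuming the 'main' 
>> binary and 'mem3.prof' heap profile that appear elsewhere in this thread) 
>> that selects the in-use view, so the report shows live objects rather than 
>> everything ever allocated:
>> 
>> go tool pprof --inuse_space main mem3.prof
>> (pprof) top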
>> 
>>>> On Mar 23, 2020, at 9:12 AM, Nitish Saboo <nitish.sabo...@gmail.com> wrote:
>>>> 
>>> 
>>> Hi,
>>> 
>>> I used something like the following to generate a memory profile after 
>>> running for 100 minutes:
>>> 
>>> func main() {
>>>     flag.Parse()
>>>     if *cpuprofile != "" {
>>>         f, err := os.Create(*cpuprofile)
>>>         if err != nil {
>>>             fmt.Println("could not create CPU profile: ", err)
>>>         }
>>>         defer f.Close() // error handling omitted for example
>>>         if err := pprof.StartCPUProfile(f); err != nil {
>>>             fmt.Print("could not start CPU profile: ", err)
>>>         }
>>>         defer pprof.StopCPUProfile()
>>>     }
>>> 
>>>     timeout := time.After(100 * time.Minute)
>>>     A_chan := make(chan bool)
>>>     B_chan := make(chan bool)
>>>     go util.A(A_chan)
>>>     go util.B(B_chan)
>>>     (..Rest of the code..)
>>> 
>>>     for {
>>>         select {
>>>         case <-A_chan:
>>>             continue
>>>         case <-B_chan:
>>>             continue
>>>         case <-timeout:
>>>             break // exits only the select; the break below exits the loop
>>>         }
>>>         break
>>>     }
>>> 
>>>     if *memprofile != "" {
>>>         count = count + 1
>>>         fmt.Println("Generating Mem Profile:")
>>>         fmt.Print(count)
>>>         f, err := os.Create(*memprofile)
>>>         if err != nil {
>>>             fmt.Println("could not create memory profile: ", err)
>>>         }
>>>         defer f.Close() // error handling omitted for example
>>>         runtime.GC()    // get up-to-date statistics
>>>         if err := pprof.WriteHeapProfile(f); err != nil {
>>>             fmt.Println("could not write memory profile: ", err)
>>>         }
>>>     }
>>> }
>>> 
>>> /Desktop/memprof:go tool pprof --alloc_space main mem3.prof
>>> Fetched 1 source profiles out of 2
>>> File: main
>>> Build ID: 99b8f2b91a4e037cf4a622aa32f2c1866764e7eb
>>> Type: alloc_space
>>> Time: Mar 22, 2020 at 7:02pm (IST)
>>> Entering interactive mode (type "help" for commands, "o" for options)
>>> (pprof) top
>>> Showing nodes accounting for 17818.11MB, 87.65% of 20329.62MB total   
>>> <<<<<<<<<<<<<<<
>>> Dropped 445 nodes (cum <= 101.65MB)
>>> Showing top 10 nodes out of 58
>>>       flat  flat%   sum%        cum   cum%
>>> 11999.09MB 59.02% 59.02% 19800.37MB 97.40%  home/nsaboo/project/aws.Events
>>>  1675.69MB  8.24% 67.27%  1911.69MB  9.40%  home/nsaboo/project/utils.Unflatten
>>>   627.21MB  3.09% 70.35%  1475.10MB  7.26%  encoding/json.mapEncoder.encode
>>>   626.76MB  3.08% 73.43%   626.76MB  3.08%  encoding/json.(*Decoder).refill
>>>   611.95MB  3.01% 76.44%  4557.69MB 22.42%  home/nsaboo/project/lib.format
>>>   569.97MB  2.80% 79.25%   569.97MB  2.80%  os.(*File).WriteString
>>>   558.95MB  2.75% 82.00%  2034.05MB 10.01%  encoding/json.Marshal
>>>   447.51MB  2.20% 84.20%   447.51MB  2.20%  reflect.copyVal
>>>   356.10MB  1.75% 85.95%   432.28MB  2.13%  compress/flate.NewWriter
>>>   344.88MB  1.70% 87.65%   590.38MB  2.90%  reflect.Value.MapKeys
>>> 
>>> 1) Is this the correct way of getting a memory profile?
>>> 
>>> 2) I ran the service for 100 minutes on a machine with 8GB of memory. How am I 
>>> seeing memory accounting for around 17GB?
>>> 
>>> 3) I understand 'flat' means memory allocated within that function itself, but 
>>> how did it climb above the available memory? Hence my question about whether 
>>> the above process is the correct way of gathering the memory profile.
>>> 
>>> Thanks,
>>> Nitish
>>> 
>>>> On Fri, Mar 13, 2020 at 6:22 PM Michael Jones <michael.jo...@gmail.com> 
>>>> wrote:
>>>> Hi. Get the time at the start, check the elapsed time in your infinite 
>>>> loop, and trigger the write/exit after a minute, 10 minutes, 100 minutes, 
>>>> ...
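>>>> 
>>>> A minimal sketch of that idea, as a fragment that would replace the 
>>>> existing for/select loop (A_chan and B_chan are the service channels from 
>>>> the code elsewhere in this thread; the heap-profile write stays at the end 
>>>> of main):
>>>> 
>>>>     start := time.Now()
>>>>     for {
>>>>         select {
>>>>         case <-A_chan:
>>>>         case <-B_chan:
>>>>         }
>>>>         if time.Since(start) >= 100*time.Minute {
>>>>             break // fall through to the WriteHeapProfile block
>>>>         }
>>>>     }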
>>>> 
>>>>> On Fri, Mar 13, 2020 at 5:45 AM Nitish Saboo <nitish.sabo...@gmail.com> 
>>>>> wrote:
>>>>> Hi Michael,
>>>>> 
>>>>> Thanks for your response.
>>>>> 
>>>>> That code looks wrong. I see the end but not the start. Look here and 
>>>>> copy carefully:
>>>>> 
>>>>> >> Since I did not want CPU profiling, I omitted the start of the code and 
>>>>> >> just added the memory profiling part.
>>>>> 
>>>>> Call at end, on way out.
>>>>> 
>>>>> >> Oh yes, I missed that. I have to call the memory profiling code at the 
>>>>> >> end, on the way out. But the thing is that it runs as a service in an 
>>>>> >> infinite for loop.
>>>>> 
>>>>> func main() {
>>>>>     flag.Parse()
>>>>>     if *cpuprofile != "" {
>>>>>         f, err := os.Create(*cpuprofile)
>>>>>         if err != nil {
>>>>>             fmt.Println("could not create CPU profile: ", err)
>>>>>         }
>>>>>         defer f.Close() // error handling omitted for example
>>>>>         if err := pprof.StartCPUProfile(f); err != nil {
>>>>>             fmt.Print("could not start CPU profile: ", err)
>>>>>         }
>>>>>         defer pprof.StopCPUProfile()
>>>>>     }
>>>>> 
>>>>>     A_chan := make(chan bool)
>>>>>     B_chan := make(chan bool)
>>>>>     go util.A(A_chan)
>>>>>     go util.B(B_chan)
>>>>>     (..Rest of the code..)
>>>>> 
>>>>>     for {
>>>>>         select {
>>>>>         case <-A_chan:
>>>>>             continue
>>>>>         case <-B_chan:
>>>>>             continue
>>>>>         }
>>>>>     }
>>>>> }
>>>>> 
>>>>> What would be the correct way to add the memprofile code changes, since 
>>>>> it is running in an infinite for loop?
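>>>>> 
>>>>> One possible shape for that, as a rough sketch only (the 'work' channel 
>>>>> stands in for A_chan/B_chan, and the snapshot file names and the 
>>>>> 10-minute interval are arbitrary choices): write a numbered heap profile 
>>>>> from inside the loop on a ticker, so the service never has to exit:
>>>>> 
>>>>> package main
>>>>> 
>>>>> import (
>>>>>     "fmt"
>>>>>     "os"
>>>>>     "runtime"
>>>>>     "runtime/pprof"
>>>>>     "time"
>>>>> )
>>>>> 
>>>>> // writeHeapProfile forces a GC and writes a numbered heap snapshot.
>>>>> func writeHeapProfile(n int) {
>>>>>     f, err := os.Create(fmt.Sprintf("mem-%d.prof", n))
>>>>>     if err != nil {
>>>>>         fmt.Println("could not create memory profile: ", err)
>>>>>         return
>>>>>     }
>>>>>     defer f.Close()
>>>>>     runtime.GC() // get up-to-date statistics
>>>>>     if err := pprof.WriteHeapProfile(f); err != nil {
>>>>>         fmt.Println("could not write memory profile: ", err)
>>>>>     }
>>>>> }
>>>>> 
>>>>> func main() {
>>>>>     work := make(chan bool) // stand-in for the real service channels
>>>>>     go func() {
>>>>>         for {
>>>>>             work <- true
>>>>>         }
>>>>>     }()
>>>>> 
>>>>>     tick := time.NewTicker(10 * time.Minute)
>>>>>     defer tick.Stop()
>>>>> 
>>>>>     count := 0
>>>>>     for {
>>>>>         select {
>>>>>         case <-work:
>>>>>             // handle service work here
>>>>>         case <-tick.C:
>>>>>             count++
>>>>>             writeHeapProfile(count)
>>>>>         }
>>>>>     }
>>>>> }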
>>>>> 
>>>>> Also, as shared by others above, there are no promises about how soon the 
>>>>> dead allocations go away. The speed gets faster and faster from version to 
>>>>> version, and is impressive indeed now, so old versions are not the best 
>>>>> to use, but even so, if the allocation feels small to the GC the urgency 
>>>>> to free it will be low. You need to keep allocating in a loop and see if 
>>>>> the memory grows and grows.
>>>>> 
>>>>> >> Yes, got it. I will try using the latest version of Go and check the 
>>>>> >> behavior.
>>>>> 
>>>>> Thanks,
>>>>> Nitish
>>>>> 
>>>>>> On Fri, Mar 13, 2020 at 6:20 AM Michael Jones <michael.jo...@gmail.com> 
>>>>>> wrote:
>>>>>> That code looks wrong. I see the end but not the start. Look here and 
>>>>>> copy carefully:
>>>>>> https://golang.org/pkg/runtime/pprof/
>>>>>> 
>>>>>> Call at end, on way out.
>>>>>> 
>>>>>> Also, as shared by others above, there are no promises about how soon 
>>>>>> the dead allocations go away. The speed gets faster and faster from 
>>>>>> version to version, and is impressive indeed now, so old versions are 
>>>>>> not the best to use, but even so, if the allocation feels small to the 
>>>>>> GC the urgency to free it will be low. You need to keep allocating in a 
>>>>>> loop and see if the memory grows and grows.
>>>>>> 
>>>>>>> On Thu, Mar 12, 2020 at 9:22 AM Nitish Saboo <nitish.sabo...@gmail.com> 
>>>>>>> wrote:
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I have compiled my Go binary against go version 'go1.7 linux/amd64'.
>>>>>>> I added the following code change in the main function to get memory 
>>>>>>> profiling for my service:
>>>>>>> 
>>>>>>> var memprofile = flag.String("memprofile", "", "write memory profile to `file`")
>>>>>>> 
>>>>>>> func main() {
>>>>>>>     flag.Parse()
>>>>>>>     if *memprofile != "" {
>>>>>>>         f, err := os.Create(*memprofile)
>>>>>>>         if err != nil {
>>>>>>>             fmt.Println("could not create memory profile: ", err)
>>>>>>>         }
>>>>>>>         defer f.Close() // error handling omitted for example
>>>>>>>         runtime.GC() // get up-to-date statistics
>>>>>>>         if err := pprof.WriteHeapProfile(f); err != nil {
>>>>>>>             fmt.Println("could not write memory profile: ", err)
>>>>>>>         }
>>>>>>>     }
>>>>>>>     ..
>>>>>>>     ..
>>>>>>>     (Rest of the code follows)
>>>>>>> 
>>>>>>> I ran the binary with the following command:
>>>>>>> 
>>>>>>> nsaboo@ubuntu:./main -memprofile=mem.prof
>>>>>>> 
>>>>>>> After running the service for a couple of minutes, I stopped it and got 
>>>>>>> the file 'mem.prof'.
>>>>>>> 
>>>>>>> 1) mem.prof contains the following:
>>>>>>> 
>>>>>>> nsaboo@ubuntu:~/Desktop/memprof$ vim mem.prof 
>>>>>>> 
>>>>>>> heap profile: 0: 0 [0: 0] @ heap/1048576
>>>>>>> 
>>>>>>> # runtime.MemStats
>>>>>>> # Alloc = 761184
>>>>>>> # TotalAlloc = 1160960
>>>>>>> # Sys = 3149824
>>>>>>> # Lookups = 10
>>>>>>> # Mallocs = 8358
>>>>>>> # Frees = 1981
>>>>>>> # HeapAlloc = 761184
>>>>>>> # HeapSys = 1802240
>>>>>>> # HeapIdle = 499712
>>>>>>> # HeapInuse = 1302528
>>>>>>> # HeapReleased = 0
>>>>>>> # HeapObjects = 6377
>>>>>>> # Stack = 294912 / 294912
>>>>>>> # MSpan = 22560 / 32768
>>>>>>> # MCache = 2400 / 16384
>>>>>>> # BuckHashSys = 2727
>>>>>>> # NextGC = 4194304
>>>>>>> # PauseNs = [752083 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
>>>>>>> 0 0 0 0 0 0 0 0 0 0 0 0 0]
>>>>>>> # NumGC = 1
>>>>>>> # DebugGC = false
>>>>>>> 
>>>>>>> 2) When I tried to open the file using the following command, pprof just 
>>>>>>> went into interactive mode and showed nothing:
>>>>>>> 
>>>>>>> a) Output from go version go1.7 linux/amd64 for mem.prof
>>>>>>> 
>>>>>>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof 
>>>>>>> Entering interactive mode (type "help" for commands)
>>>>>>> (pprof) top
>>>>>>> profile is empty
>>>>>>> (pprof)
>>>>>>> 
>>>>>>> b) Output from go version go1.12.4 linux/amd64 for mem.prof
>>>>>>> 
>>>>>>> nsaboo@ubuntu:~/Desktop/memprof$ go tool pprof mem.prof 
>>>>>>> Type: space
>>>>>>> No samples were found with the default sample value type.
>>>>>>> Try "sample_index" command to analyze different sample values.
>>>>>>> Entering interactive mode (type "help" for commands, "o" for options)
>>>>>>> (pprof) o
>>>>>>>   call_tree                 = false                
>>>>>>>   compact_labels            = true                 
>>>>>>>   cumulative                = flat                 //: [cum | flat]
>>>>>>>   divide_by                 = 1                    
>>>>>>>   drop_negative             = false                
>>>>>>>   edgefraction              = 0.001                
>>>>>>>   focus                     = ""                   
>>>>>>>   granularity               = functions            //: [addresses | 
>>>>>>> filefunctions | files | functions | lines]
>>>>>>>   hide                      = ""                   
>>>>>>>   ignore                    = ""                   
>>>>>>>   mean                      = false                
>>>>>>>   nodecount                 = -1                   //: default
>>>>>>>   nodefraction              = 0.005                
>>>>>>>   noinlines                 = false                
>>>>>>>   normalize                 = false                
>>>>>>>   output                    = ""                   
>>>>>>>   prune_from                = ""                   
>>>>>>>   relative_percentages      = false                
>>>>>>>   sample_index              = space                //: [objects | space]
>>>>>>>   show                      = ""                   
>>>>>>>   show_from                 = ""                   
>>>>>>>   tagfocus                  = ""                   
>>>>>>>   taghide                   = ""                   
>>>>>>>   tagignore                 = ""                   
>>>>>>>   tagshow                   = ""                   
>>>>>>>   trim                      = true                 
>>>>>>>   trim_path                 = ""                   
>>>>>>>   unit                      = minimum              
>>>>>>> (pprof) space
>>>>>>> (pprof) sample_index
>>>>>>> (pprof) top
>>>>>>> Showing nodes accounting for 0, 0% of 0 total
>>>>>>>       flat  flat%   sum%        cum   cum%
>>>>>>> 
>>>>>>> 
>>>>>>> 3) Please let me know whether this is the correct way of getting the 
>>>>>>> memory profile.
>>>>>>> 
>>>>>>> 4) Can we deduce something from these memory stats that points us to the 
>>>>>>> increase in memory usage?
>>>>>>> 
>>>>>>> 5) I am just thinking out loud: since I am using go1.7, could that be the 
>>>>>>> reason for the increase in memory usage, and might it be fixed in the 
>>>>>>> latest Go versions?
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Nitish
>>>>>>> 
>>>>>>>> On Tue, Mar 10, 2020 at 6:56 AM Jake Montgomery <jake6...@gmail.com> 
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> On Monday, March 9, 2020 at 1:37:00 PM UTC-4, Nitish Saboo wrote:
>>>>>>>>> Hi Jake,
>>>>>>>>> 
>>>>>>>>> The memory usage remains constant when the rest of the service is 
>>>>>>>>> running. Only when the LoadPatternDB() method is called within the 
>>>>>>>>> service does memory consumption increase, which actually should not 
>>>>>>>>> happen.
>>>>>>>>> I am assuming there is a memory leak while calling this method, because 
>>>>>>>>> the memory usage becomes constant after the increase and then increases 
>>>>>>>>> further on the next call.
>>>>>>>> 
>>>>>>>> It's possible that I am not fully understanding, perhaps a language 
>>>>>>>> problem. But from what you have written above, I still don't see that 
>>>>>>>> this means you definitely have a memory leak. To test for that you 
>>>>>>>> would need to continuously call LoadPatternDB() and monitor memory for 
>>>>>>>> a considerable time. If it eventually stabilizes to a constant range, 
>>>>>>>> then there is no leak, just normal Go GC variation. If it never stops 
>>>>>>>> climbing, and eventually consumes all the memory, then it would 
>>>>>>>> probably be a leak. Just because it goes up after one call, or a few 
>>>>>>>> calls, does not mean there is a leak. 
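>>>>>>>> 
>>>>>>>> A small sketch of that test (the no-argument LoadPatternDB() below is a 
>>>>>>>> placeholder; adapt it to the real method): call the suspect method in a 
>>>>>>>> loop and watch the post-GC heap numbers.
>>>>>>>> 
>>>>>>>> package main
>>>>>>>> 
>>>>>>>> import (
>>>>>>>>     "fmt"
>>>>>>>>     "runtime"
>>>>>>>> )
>>>>>>>> 
>>>>>>>> func LoadPatternDB() {} // placeholder for the real implementation
>>>>>>>> 
>>>>>>>> func main() {
>>>>>>>>     var m runtime.MemStats
>>>>>>>>     for i := 1; i <= 1000; i++ {
>>>>>>>>         LoadPatternDB() // the call suspected of leaking
>>>>>>>>         if i%50 == 0 {
>>>>>>>>             runtime.GC() // drop dead objects so HeapInuse reflects live data
>>>>>>>>             runtime.ReadMemStats(&m)
>>>>>>>>             fmt.Printf("iter %d: HeapInuse=%d HeapObjects=%d\n",
>>>>>>>>                 i, m.HeapInuse, m.HeapObjects)
>>>>>>>>         }
>>>>>>>>     }
>>>>>>>> }
>>>>>>>> 
>>>>>>>> If HeapInuse keeps climbing call after call, that points at a leak; if 
>>>>>>>> it settles into a steady range, the growth is just normal GC behaviour.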
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> -- 
>>>>>> Michael T. Jones
>>>>>> michael.jo...@gmail.com
>>>> 
>>>> 
>>>> -- 
>>>> Michael T. Jones
>>>> michael.jo...@gmail.com
>>> 