Hi, I just wrote a Thrift RPC service and ran a performance test against it. After stopping the test clients (2000 clients), RES (in top) drops to 84M, versus 240M while the 2000 clients are running. But after a few minutes RES grows back to 240M, while the in-use memory reported by pprof stays low.
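For reference, the heap profiles below were taken with go tool pprof. The exact wiring in the service isn't shown here, so the following is only a minimal sketch of one way to expose the profiling endpoint, assuming the standard net/http/pprof handler on a side port:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
    go func() {
        // Serve the pprof endpoints so "go tool pprof http://localhost:6060/debug/pprof/heap"
        // can take the in-use heap snapshots quoted below.
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()
    // ... start the Thrift RPC server and worker pool here ...
    select {}
}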
We run the program on CentOS release 6.5 (Final); the Thrift version is 0.9.3 and the Go version is 1.8.1.

Here is the pprof output just after the server starts:

(pprof) top
5121.71kB of 5121.71kB total (  100%)
Dropped 9 nodes (cum <= 25.61kB)
Showing top 10 nodes out of 14 (cum >= 4097.62kB)
      flat  flat%   sum%        cum   cum%
 4097.62kB 80.01% 80.01%  4097.62kB 80.01%  runtime.malg
  512.05kB 10.00% 90.00%   512.05kB 10.00%  util/workerpool.New
  512.04kB 10.00%   100%   512.04kB 10.00%  runtime.acquireSudog
         0     0%   100%   512.05kB 10.00%  main.main
         0     0%   100%   512.05kB 10.00%  qlyregister.InitRegisterWorkerPool
         0     0%   100%  1024.09kB 19.99%  runtime.goexit
         0     0%   100%   512.05kB 10.00%  runtime.main
         0     0%   100%  4097.62kB 80.01%  runtime.mstart
         0     0%   100%  4097.62kB 80.01%  runtime.newproc.func1
         0     0%   100%  4097.62kB 80.01%  runtime.newproc1

and the corresponding top output:

  PID USER PR NI VIRT RES SHR  S %CPU %MEM TIME+   COMMAND
20553 root 20  0 245m 54m 3576 S  0.0  0.7 0:00.48 rpcregisterserv

After the 2000 clients are started, the pprof output becomes:

(pprof) top
66094.31kB of 66094.31kB total (  100%)
Dropped 137 nodes (cum <= 330.47kB)
Showing top 10 nodes out of 43 (cum >= 6270.55kB)
      flat  flat%   sum%        cum   cum%
23553.62kB 35.64% 35.64% 23553.62kB 35.64%  runtime.makechan
   16385kB 24.79% 60.43% 41845.44kB 63.31%  time.NewTimer
12848.11kB 19.44% 79.87% 12848.11kB 19.44%  git.apache.org/thrift.git/lib/go/thrift.(*tFramedTransportFactory).GetTransport
 4609.83kB  6.97% 86.84%  4609.83kB  6.97%  runtime.malg
 2570.01kB  3.89% 90.73%  2570.01kB  3.89%  github.com/garyburd/redigo/redis.Dial
 2048.16kB  3.10% 93.83%  2048.16kB  3.10%  runtime.acquireSudog
 1906.81kB  2.88% 96.71%  1906.81kB  2.88%  runtime.addtimerLocked
 1148.68kB  1.74% 98.45%  1148.68kB  1.74%  runtime.allgadd
  512.05kB  0.77% 99.23%   512.05kB  0.77%  util/workerpool.New
  512.04kB  0.77%   100%  6270.55kB  9.49%  runtime.systemstack

and top after the 2000 clients have started:

  PID USER PR NI VIRT RES  SHR  S %CPU  %MEM TIME+    COMMAND
20553 root 20  0 668m 241m 4272 S 150.2  3.1 21:43.41 rpcregisterserv

After the 2000 clients stop, the GC has already released memory back to the system:

GC forced
gc 675 @1732.098s 3%: 0.072+19+0.17 ms clock, 0.28+0/19/50+0.69 ms cpu, 10->10->10 MB, 20 MB goal, 4 P
scvg10: 155 MB released
scvg10: inuse: 61, idle: 158, sys: 220, released: 158, consumed: 61 (MB)

(pprof) top
9713.49kB of 9713.49kB total (  100%)
Dropped 169 nodes (cum <= 48.57kB)
Showing top 10 nodes out of 21 (cum >= 3954.98kB)
      flat  flat%   sum%        cum   cum%
 4609.83kB 47.46% 47.46%  4609.83kB 47.46%  runtime.malg
 1906.81kB 19.63% 67.09%  1906.81kB 19.63%  runtime.addtimerLocked
 1536.12kB 15.81% 82.90%  1536.12kB 15.81%  runtime.acquireSudog
 1148.68kB 11.83% 94.73%  1148.68kB 11.83%  runtime.allgadd
  512.05kB  5.27%   100%   512.05kB  5.27%  util/workerpool.New
         0     0%   100%  1906.81kB 19.63%  github.com/yosssi/gmq/mqtt/client.(*Client).sendPackets
         0     0%   100%   512.05kB  5.27%  main.main
         0     0%   100%   512.05kB  5.27%  qlyregister.InitRegisterWorkerPool
         0     0%   100%  1906.81kB 19.63%  runtime.addtimer
         0     0%   100%  3954.98kB 40.72%  runtime.goexit

But after 3 minutes, RES in top has grown back:

  PID USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+    COMMAND
20553 root 20  0 668m 241m 4276 S  0.0  3.1 22:27.34 rpcregisterserv

Why does RES become high again? Thanks.
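In case it helps to narrow this down, here is a minimal sketch (a hypothetical helper I use for illustration, not code from the service) of how one could log the runtime's own accounting next to top's RES, and force a scavenge with debug.FreeOSMemory. HeapSys is what the runtime has taken from the OS for the heap, and HeapReleased is how much of that it has already given back via madvise(MADV_DONTNEED) on Linux:

package main

import (
    "log"
    "runtime"
    "runtime/debug"
    "time"
)

// logMemStats prints the runtime's view of heap memory in MB,
// to compare against the RES column from top.
func logMemStats() {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    log.Printf("HeapInuse=%dMB HeapIdle=%dMB HeapSys=%dMB HeapReleased=%dMB",
        m.HeapInuse>>20, m.HeapIdle>>20, m.HeapSys>>20, m.HeapReleased>>20)
}

func main() {
    for range time.Tick(30 * time.Second) {
        logMemStats()
        // Forces a GC plus scavenge, returning as much idle heap to the OS
        // as the runtime can; RES should drop right after this call.
        debug.FreeOSMemory()
    }
}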