Two other replies have mentioned sync.Pool. I agree that sync.Pool is a 
valuable tool.

However, for the benefit of any beginning gophers who may read this thread, 
I wanted to point out that, in a situation like yours, I would first try to 
reduce heap allocation in other ways. Not that sync.Pool is a last resort 
exactly, but it does have non-trivial overhead, and it is not a substitute 
for thinking about other ways to cut down heap allocations: for example, 
pulling allocations out of loops, or manual object re-use where it fits 
naturally into the code. A small sketch of the loop pattern follows, and 
after the quoted message there is a sketch of using the allocation profile 
to find which call sites to target.
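
To make that concrete, here is a minimal sketch of the loop pattern (the 
record type and the Fprintf call are just stand-ins for whatever your hot 
loop actually does):

package main

import (
    "bytes"
    "fmt"
)

// record is a placeholder for whatever you build or encode in the hot loop.
type record struct{ ID int }

// processAll allocates one bytes.Buffer outside the loop and resets it on
// each iteration, instead of allocating a fresh buffer (a heap allocation)
// for every record.
func processAll(records []record) int {
    var buf bytes.Buffer // allocated once, reused for every record
    total := 0
    for _, r := range records {
        buf.Reset() // keeps the underlying capacity; no new allocation
        fmt.Fprintf(&buf, "record-%d", r.ID)
        total += buf.Len()
    }
    return total
}

func main() {
    fmt.Println(processAll([]record{{1}, {2}, {3}}))
}

The same idea applies to slices and maps: allocate them once with enough 
capacity and clear or re-slice them between iterations rather than making 
new ones.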

On Friday, July 16, 2021 at 8:27:03 AM UTC-4 rmfr wrote:

> I run it on an 8-core, 16GB machine and it occupies all the CPU cores it can.
>
> 1. It is ~95% CPU-intensive, with ~5% network communication.
>
> 2. The codebase is huge, with more than 300 thousand lines of code (not 
> even counting third-party libraries).
>
> 3. The pprof tool shows that nearly 50% of the time is spent in the 
> runtime, on GC-related work: mallocgc, memclrNoHeapPointers, and so on.
>
> 4. It has ~100 million dynamic objects.
>
> Do you guys have any good advice for optimizing the performance?
>
> One idea that occurs to me is to use something like sync.Pool to buffer 
> the most frequently allocated and freed objects. But the problem is that I 
> haven't managed to find a Go tool to identify such objects. The runtime 
> provides an API to get the number of objects, but it doesn't provide an 
> API to get detailed statistics on all objects. Please correct me if I'm wrong. 
> Thanks a lot :-)
>
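
To the question at the end of the quoted message: the allocation profile 
already tracks this, sampled by call site, which in practice is what you 
need in order to decide where re-use or sync.Pool would pay off. Here is a 
minimal sketch, assuming you can add a hook at a convenient point in the 
program (the file name is just illustrative); if the process already serves 
net/http/pprof, fetching /debug/pprof/allocs gives you the same data:

package main

import (
    "log"
    "os"
    "runtime/pprof"
)

func main() {
    // ... run the workload you want to measure ...

    // Write the cumulative allocation profile. The "allocs" profile
    // records sampled allocation counts and bytes per call site since
    // program start.
    f, err := os.Create("allocs.pprof") // illustrative file name
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    if err := pprof.Lookup("allocs").WriteTo(f, 0); err != nil {
        log.Fatal(err)
    }
}

Then inspect it sorted by object count rather than bytes:

    go tool pprof -sample_index=alloc_objects allocs.pprof

and use "top" and "list" inside pprof to see which call sites allocate the 
most objects; those are the candidates for hoisting, re-use, or sync.Pool.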
