Try a naive solution, and see if it is good enough, before optimising.
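
A minimal sketch of what "naive" can mean here (type and names invented for illustration): Go's garbage collector does not scan heap objects that contain no pointers, so millions of fixed-size, pointer-free records in one large slice already cost the GC almost nothing.

```go
package main

import "fmt"

// Record is a hypothetical fixed-size, pointer-free element.
// Because it contains no pointers, the GC never scans the backing
// array of a []Record, however large it is.
type Record struct {
	ID    uint64
	Value [4]float64
}

func main() {
	// One allocation holds millions of elements; the GC sees a single
	// object and, since it is pointer-free, skips scanning its contents.
	records := make([]Record, 5_000_000)
	records[42] = Record{ID: 42}
	fmt.Println(len(records), records[42].ID)
}
```

If the records must hold pointers (strings, slices, maps), this shortcut no longer applies and the scan cost returns, which is when the block-allocator idea below becomes interesting.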

On Monday, 20 July 2020 18:35:14 UTC+1, netconn...@gmail.com wrote:
>
> I have an application where I will be allocating millions of data 
> structures, all of the same size. My program will need to run 
> continuously and be pretty responsive to its network peers.
>
> The data is fairly static; once allocated, it will rarely need to be 
> modified or deleted.
>
> To minimize the garbage collection scanning overhead, I was thinking 
> of allocating large fixed-size blocks on the heap, each holding 20K or 
> so elements, and then writing a simple allocator to hand out pieces of 
> those blocks when needed. Instead of having to scan millions of items 
> on the heap, the GC would only be scanning 100 or so items.
>
> Sound reasonable? Or does this 'go' against the golang way of doing 
> things?
>
> F
>
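
For concreteness, the block allocator described in the quoted message might be sketched as follows (element type and all names are invented; 20K elements per block as suggested). Note that handing out pointers into a block keeps the whole block reachable, so this fits the "rarely deleted" workload but does not support freeing individual elements.

```go
package main

import "fmt"

const blockSize = 20_000 // elements per block, per the suggestion above

// Elem stands in for the application's fixed-size structure.
type Elem struct {
	Key  uint64
	Data [8]byte
}

// Allocator hands out pointers into large pre-allocated blocks, so the
// heap holds roughly N/20K block objects instead of N small ones.
type Allocator struct {
	blocks [][]Elem // each block is a single heap object
	used   int      // elements handed out from the newest block
}

// New returns a pointer to the next free element, growing by one
// 20K-element block whenever the current block is exhausted.
func (a *Allocator) New() *Elem {
	if len(a.blocks) == 0 || a.used == blockSize {
		a.blocks = append(a.blocks, make([]Elem, blockSize))
		a.used = 0
	}
	e := &a.blocks[len(a.blocks)-1][a.used]
	a.used++
	return e
}

func main() {
	var a Allocator
	for i := 0; i < 50_001; i++ {
		a.New().Key = uint64(i)
	}
	// 50,001 elements fit in three 20K blocks (20,000 + 20,000 + 10,001).
	fmt.Println(len(a.blocks))
}
```

As the reply above says, though: measure the naive version first; if Elem is pointer-free, the GC never scans element contents either way, and the extra allocator only reduces object count.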

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.