Thank you for your input!

> Throwing more cores at the problem is a sensible approach on modern 
> hardware.  Since the cores are no longer getting faster, making your code 
> run faster on a single core is not the optimal design for the hardware of 
> the near future.

I had not considered increasing core counts as something that could drive 
such important decisions about GC design, but it makes sense. Now that I 
think of it, it falls right in line with Go's general goal of being useful 
on newer, multi-core machines.

Here is Rick Hudson's comment, for those who are 
interested: 
https://medium.com/@rlh_21830/it-is-not-true-that-without-compaction-fragmentation-is-inevitable-e622227d111e#.fuuajmdoz
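
Ian's point below about the compiler keeping variables on the stack can be made concrete: Go's escape analysis leaves a value on the stack when its address does not outlive the function, and stack values cost the GC nothing. Here is a minimal sketch (the `point` type and the `sink` variable are my own illustration, not from the thread), using `testing.AllocsPerRun` to count heap allocations:

```go
package main

import (
	"fmt"
	"testing"
)

type point struct{ x, y int }

// sumValues uses points only as local values, so escape analysis
// keeps them on the stack: no heap allocation, no GC work.
func sumValues() int {
	p := point{1, 2}
	q := point{3, 4}
	return p.x + q.x + p.y + q.y
}

var sink *point

// escape stores a point's address in a global, so the value must
// outlive the call and is forced onto the heap.
func escape() {
	p := point{5, 6}
	sink = &p
}

func main() {
	// Typically prints 0 then 1: zero heap allocations per run for
	// the value-only version, one per run for the escaping version.
	fmt.Println(testing.AllocsPerRun(100, func() { _ = sumValues() }))
	fmt.Println(testing.AllocsPerRun(100, escape))
}
```

Running the compiler with `-gcflags=-m` shows the same decisions as diagnostics ("moved to heap" for `p` in `escape`).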

On Monday, December 19, 2016 at 10:31:52 PM UTC-7, Ian Lance Taylor wrote:
>
> On Mon, Dec 19, 2016 at 8:56 PM, Tyler Compton <xav...@gmail.com> wrote: 
> > 
> > https://medium.com/@octskyward/modern-garbage-collection-911ef4f8bd8e#.c48w4ifa7
> > 
> > Thoughts? How accurate is this article? If it is, why, historically, is 
> > the work being done on the GC so concentrated on pause times? 
>
> If you click on the comments link near the bottom you will see that 
> Rick has commented on part of the essay. 
>
> I do not work on the GC myself, but these are my personal observations. 
>
> I think the key observation in Go's GC is that modern systems have 
> multiple cores, and are rapidly getting more cores.  Throwing more 
> cores at the problem is a sensible approach on modern hardware.  Since 
> the cores are no longer getting faster, making your code run faster on 
> a single core is not the optimal design for the hardware of the near 
> future. 
>
> For a language like Go, stop-the-world pause times really are the most 
> important thing, because during a pause your server isn't doing 
> anything useful.  The essay suggests that the Go runtime is "willing 
> to slow down your program by almost any amount" in order to reduce 
> pause times; that is clearly hyperbole, and it isn't literally true. 
> The slowdowns in compiled Go code occur when writing pointers into the 
> heap and when allocating new memory.  The compiler works hard to let 
> you store variables on the stack rather than the heap, where no 
> slowdown occurs.  The language is designed to let you control when 
> and how memory is allocated.  The effect is that in Go you can adjust 
> your program to reduce GC overhead, rather than tuning the GC.  This is far more true 
> in Go than in Java, since Java has no stack variables or structs.  I'm 
> not familiar enough with C# to say anything serious about it. 
>
> That said, it should be obvious that nobody thinks that work on Go's 
> garbage collector has finished. 
>
> The essay mentions the request-oriented collector, which is indeed 
> similar to a generational garbage collector.  If everything works, the 
> request-oriented collector can be cheaper than a generational 
> collector because no copying is required.  The essay suggests that it 
> can be simulated by a generational GC "by ensuring the young 
> generation is large enough that all garbage generated by handling a 
> request fits within it" but that is somewhat meaningless if any of 
> your requests can use a lot of memory, since you have to waste a lot 
> of space by allocating that much memory for each goroutine. 
>
> Once the request-oriented collector is working or abandoned, I believe 
> the GC team plans to focus on increasing throughput, since latency is 
> now low enough that it hopefully doesn't matter for non-realtime 
> programs. 
>
> Ian 
>
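
One concrete way to "adjust your program to reduce GC overhead," as Ian puts it, is to reuse allocations across requests with `sync.Pool` instead of handing the collector fresh garbage on every request. A small sketch (the pool and handler names are mine, just for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable byte buffers. Reusing a buffer across
// requests means the GC sees far fewer short-lived allocations.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// handle builds a response using a pooled buffer, returning the
// buffer to the pool when done.
func handle(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset() // clear any contents left by a previous user
	buf.WriteString("hello, ")
	buf.WriteString(name)
	return buf.String()
}

func main() {
	fmt.Println(handle("gopher")) // prints "hello, gopher"
}
```

The same idea applies to any per-request scratch memory; the pool trades a small amount of retained memory for less allocation and less GC work, which is exactly the kind of program-side tuning Java's all-heap object model makes harder.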
