We’re using a typical producer-consumer pattern: goroutines send messages 
to a channel, and a worker processes them. A colleague asked me why we even 
bother with a buffered channel (say, size 1000) if we’re waiting for the 
result anyway.

I tried to explain it like this: there are two kinds of waiting.

“Bad” waiting – when a goroutine is blocked trying to send to a full 
channel:
requestChan <- req // goroutine just hangs here, blocking the system
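
Here's a minimal runnable sketch of the "bad" case (the slow consumer and 
the 2-second delay are invented just to make the blocking visible):

package main

import (
    "fmt"
    "time"
)

func main() {
    requestChan := make(chan int) // unbuffered: a send blocks until someone receives

    // a deliberately slow consumer
    go func() {
        time.Sleep(2 * time.Second)
        fmt.Println("got", <-requestChan)
    }()

    start := time.Now()
    requestChan <- 1 // "bad" waiting: the sender hangs here for ~2 seconds
    fmt.Println("send blocked for", time.Since(start))
}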

“Good” waiting – when the send succeeds quickly, and you wait for the 
result afterwards:
requestChan <- req // quickly enqueued
result := <-resultChan // wait for result without holding up others
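
And the "good" case as a runnable sketch, assuming each request carries its 
own result channel (a common way to wire this up, though not the only one; 
the names and the doubling "work" are made up):

package main

import "fmt"

type request struct {
    payload int
    result  chan int // per-request result channel
}

func main() {
    requestChan := make(chan request, 1000) // big buffer: sends rarely block

    // a single worker draining the queue
    go func() {
        for req := range requestChan {
            req.result <- req.payload * 2 // stand-in for real work
        }
    }()

    req := request{payload: 21, result: make(chan int, 1)}
    requestChan <- req        // quickly enqueued: the buffer has room
    fmt.Println(<-req.result) // "good" waiting: blocked only on our own result
}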

The point: a big buffer lets goroutines hand off tasks fast and free 
themselves for new work. Under burst load, this is crucial — it lets the 
system absorb spikes without slowing everything down.
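
To make the burst argument concrete, here's a toy demonstration (all 
numbers invented): 100 producers hand off into a size-1000 buffer and are 
free almost immediately, even though the single worker drains slowly in the 
background:

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    requestChan := make(chan int, 1000)

    // one slow worker draining in the background
    go func() {
        for range requestChan {
            time.Sleep(10 * time.Millisecond) // simulated work
        }
    }()

    var wg sync.WaitGroup
    start := time.Now()
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            requestChan <- i // the buffer absorbs the spike
        }(i)
    }
    wg.Wait()
    fmt.Println("all 100 producers handed off in", time.Since(start))
    // (a real program would wait for the worker to drain the queue; the
    // point here is only that hand-off completed almost instantly)
}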

But here’s the twist: my colleague tested it with 2000 goroutines and got 
roughly the same processing time. His argument: “waiting to enqueue or 
dequeue seems to perform the same no matter how many goroutines are 
waiting.”
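
For reference, this is roughly how I imagine his test looked (my 
reconstruction, not his actual code; the trivial worker body is an 
assumption). With a single consumer and negligible per-item work, the 
end-to-end time comes out about the same with or without a buffer:

package main

import (
    "fmt"
    "sync"
    "time"
)

// run measures end-to-end time for n producers feeding one worker
// through a channel with the given buffer size.
func run(n, buf int) time.Duration {
    ch := make(chan int, buf)
    done := make(chan struct{})

    go func() {
        for range ch {
            // trivial "work": with a single worker, total time is
            // bounded by how fast it can drain the channel either way
        }
        close(done)
    }()

    var wg sync.WaitGroup
    start := time.Now()
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            ch <- 1
        }()
    }
    wg.Wait()
    close(ch)
    <-done
    return time.Since(start)
}

func main() {
    fmt.Println("unbuffered:", run(2000, 0))
    fmt.Println("buffered:  ", run(2000, 1000))
}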

So my question is: is there any official Go documentation that describes 
this idea? *Effective Go* shows buffered channels used as a semaphore, but 
it doesn't really spell out this difference between the two kinds of 
blocking.

Am I misunderstanding something, or is this just one of those “implicit Go 
concurrency truths” that everyone sort of knows but isn’t officially 
documented?
