There is not enough info to give a full recommendation, but I suspect you are misunderstanding how it works.
Buffered channels allow the producers to continue while waiting for the consumer to finish. If the producer can't continue until the consumer runs and provides a value via a callback or another channel, then yes, the buffered channel might not seem to provide any value, except that in a highly concurrent environment goroutines are usually not in a pure 'reading the channel' mode; they are finishing up a previous request, so the buffering allows some level of additional concurrency in that state. When requests are extremely short in duration this can matter a lot.

Usually, though, a better solution is to simply have N+1 consumers for N producers and use an unbuffered handoff channel. But if the workload is CPU bound you will expend extra resources context switching (i.e. thrashing), because these goroutines will be timesliced. Better to cap the consumers and use a buffered channel.

> On Sep 1, 2025, at 08:37, Egor Ponomarev <egorvponoma...@gmail.com> wrote:
>
> We're using a typical producer-consumer pattern: goroutines send messages to a channel, and a worker processes them. A colleague asked me why we even bother with a buffered channel (say, size 1000) if we're waiting for the result anyway.
>
> I tried to explain it like this: there are two kinds of waiting.
>
> "Bad" waiting, when a goroutine is blocked trying to send to a full channel:
>
>     requestChan <- req // goroutine just hangs here, blocking the system
>
> "Good" waiting, when the send succeeds quickly, and you wait for the result afterwards:
>
>     requestChan <- req       // quickly enqueued
>     result := <-resultChan   // wait for result without holding up others
>
> The point: a big buffer lets goroutines hand off tasks fast and free themselves for new work. Under burst load, this is crucial; it lets the system absorb spikes without slowing everything down.
>
> But here's the twist: my colleague tested it with 2000 goroutines and got roughly the same processing time.
> His argument: "waiting to enqueue or dequeue seems to perform the same no matter how many goroutines are waiting."
>
> So my question is: does Go have any official docs that describe this idea? Effective Go shows semaphores, but it doesn't really spell out this difference in blocking types.
>
> Am I misunderstanding something, or is this just one of those "implicit Go concurrency truths" that everyone sort of knows but isn't officially documented?
>
> --
> You received this message because you are subscribed to the Google Groups "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/golang-nuts/b4194b6b-51ea-42ff-af34-b7aa6093c15fn%40googlegroups.com.