Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Ivan Bertona
You are totally right on that, sorry. It's just > 4k.

On Thu, Nov 21, 2019 at 7:13 PM burak serdar  wrote:

> On Thu, Nov 21, 2019 at 4:59 PM Ivan Bertona  wrote:
> >
> > 1) Yes if you set NumberOfWorkers high enough (> 4k / num CPUs), and
> your machine is actually capable of handling this workload. Based on
> experience I'd say you shouldn't expect significant overhead for job
> scheduling.
>
> Not divided by nCPUs though, right?
>
> If there are w workers, and with w workers each goroutine takes 2 secs,
> and if you're getting work at a rate of 2k/sec, you need at least 4k
> goroutines to keep up, regardless of the cpu count. After the 2nd
> second, you'll have all 4k goroutines busy. Am I missing something?
>
> The important thing is: does it take 2 secs for each goroutine to
> complete when w is > 4k
>
>
> > 2) Not sure this is a question
> > 3) No
> > 4) What you are doing is totally fine at 2k/s
> >
> > I'll add that you shouldn't trust me, you can easily measure the
> overhead yourself by making the consumer work be a 2s sleep, setting
> NumberOfWorkers to 4k / num CPUs, pushing 2k/s jobs, and looking at how
> your system load looks like when it's running. As for whether this would
> work with your actual workload, again the only way is to try and measure.
> >
> > Best,
> > Ivan
> >
> > On Thursday, November 21, 2019 at 3:24:26 PM UTC-5, Michael Jones wrote:
> >>
> >> Agree. Essentially I'm saying the "channel aspect" is not an issue.
> >>
> >> On Thu, Nov 21, 2019 at 12:12 PM Robert Engels 
> wrote:
> >>>
> >>> He stated "each request takes 2 secs to process" - what's involved in
> that is the important aspect imo.
> >>>
> >>> -Original Message-
> >>> From: Michael Jones
> >>> Sent: Nov 21, 2019 2:06 PM
> >>> To: Robert Engels
> >>> Cc: Sankar , golang-nuts
> >>> Subject: Re: [go-nuts] golang multiple go routines reading from a
> channel and performance implications
> >>>
> >>> In my (past) benchmarking, I got ~3M channel send/receive operations
> per second on my MacBook Pro. It is faster on faster computers. 2k
> requests/sec is much less than 3M, clearly, and the 1/1000 duty cycle
> suggests that you'll have 99.9% non-overhead to do your processing. This is
> back of the envelope thinking, but what I go through for every path
> explored. What should it be? How is it in fact? What explains the
> difference? ... that kind of thing.
> >>>
> >>> On Thu, Nov 21, 2019 at 11:25 AM Robert Engels 
> wrote:
> >>>>
> >>>> You need to determine how well they parallelize and what the resource
> consumption of a request is. For example, if every request can run
> concurrently at 100% (not possible btw because of switching overhead), and
> each request takes 0.5 secs of CPU, and 1.5 secs of IO, for a total wall
> time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs.
> IO can run concurrently on most modern setups, so you can essentially
> factor this out, less so if most of the operations are writes.
> >>>>
> >>>> Your local CPU requirements may be less if the request is then
> handled by a cluster (over the network, database etc), but you will still
> need 1000 cpus in the cluster (probably a lot more due to the network
> overhead).
> >>>>
> >>>> You can look at github.com/robaho/go-trader for an example of very
> high CPU based processing using Go and channels (and other concurrency
> structs).
> >>>>
> >>>>
> >>>>
> >>>> -Original Message-
> >>>> From: Sankar
> >>>> Sent: Nov 21, 2019 12:30 PM
> >>>> To: golang-nuts
> >>>> Subject: [go-nuts] golang multiple go routines reading from a channel
> and performance implications
> >>>>
> >>>> We have a setup where we have a producer goroutine pumping in a few
> thousand objects into a channel (Approximately 2k requests per second).
> There are a configurable number of goroutines that work as consumers,
> consuming from this single channel. If none of the consumer threads could
> receive the message, the message gets discarded. Each consumer go routine
> takes about 2 seconds for each work to be completed, by which they will
> come back to read the next item in the channel. The channel is sized such
> that it can hold up to 10,000 messages.
> >>>>
> >>>> The code is roughly something like:
> >>>>

Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread burak serdar
On Thu, Nov 21, 2019 at 4:59 PM Ivan Bertona  wrote:
>
> 1) Yes if you set NumberOfWorkers high enough (> 4k / num CPUs), and your 
> machine is actually capable of handling this workload. Based on experience 
> I'd say you shouldn't expect significant overhead for job scheduling.

Not divided by nCPUs though, right?

If there are w workers, and with w workers each goroutine takes 2 secs,
and if you're getting work at a rate of 2k/sec, you need at least 4k
goroutines to keep up, regardless of the cpu count. After the 2nd
second, you'll have all 4k goroutines busy. Am I missing something?

The important thing is: does it take 2 secs for each goroutine to
complete when w is > 4k


> 2) Not sure this is a question
> 3) No
> 4) What you are doing is totally fine at 2k/s
>
> I'll add that you shouldn't trust me, you can easily measure the overhead 
> yourself by making the consumer work be a 2s sleep, setting NumberOfWorkers 
> to 4k / num CPUs, pushing 2k/s jobs, and looking at how your system load 
> looks like when it's running. As for whether this would work with your actual 
> workload, again the only way is to try and measure.
>
> Best,
> Ivan
>
> On Thursday, November 21, 2019 at 3:24:26 PM UTC-5, Michael Jones wrote:
>>
>> Agree. Essentially I'm saying the "channel aspect" is not an issue.
>>
>> On Thu, Nov 21, 2019 at 12:12 PM Robert Engels  wrote:
>>>
>>> He stated "each request takes 2 secs to process" - what's involved in that 
>>> is the important aspect imo.
>>>
>>> -----Original Message-
>>> From: Michael Jones
>>> Sent: Nov 21, 2019 2:06 PM
>>> To: Robert Engels
>>> Cc: Sankar , golang-nuts
>>> Subject: Re: [go-nuts] golang multiple go routines reading from a channel 
>>> and performance implications
>>>
>>> In my (past) benchmarking, I got ~3M channel send/receive operations per 
>>> second on my MacBook Pro. It is faster on faster computers. 2k requests/sec 
>>> is much less than 3M, clearly, and the 1/1000 duty cycle suggests that 
>>> you'll have 99.9% non-overhead to do your processing. This is back of the 
>>> envelope thinking, but what I go through for every path explored. What 
>>> should it be? How is it in fact? What explains the difference? ... that 
>>> kind of thing.
>>>
>>> On Thu, Nov 21, 2019 at 11:25 AM Robert Engels  wrote:
>>>>
>>>> You need to determine how well they parallelize and what the resource 
>>>> consumption of a request is. For example, if every request can run 
>>>> concurrently at 100% (not possible btw because of switching overhead), and 
>>>> each request takes 0.5 secs of CPU, and 1.5 secs of IO, for a total wall 
>>>> time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs. 
>>>> IO can run concurrently on most modern setups, so you can essentially 
>>>> factor this out, less so if most of the operations are writes.
>>>>
>>>> Your local CPU requirements may be less if the request is then handled by 
>>>> a cluster (over the network, database etc), but you will still need 1000 
>>>> cpus in the cluster (probably a lot more due to the network overhead).
>>>>
>>>> You can look at github.com/robaho/go-trader for an example of very high 
>>>> CPU based processing using Go and channels (and other concurrency structs).
>>>>
>>>>
>>>>
>>>> -Original Message-
>>>> From: Sankar
>>>> Sent: Nov 21, 2019 12:30 PM
>>>> To: golang-nuts
>>>> Subject: [go-nuts] golang multiple go routines reading from a channel and 
>>>> performance implications
>>>>
>>>> We have a setup where we have a producer goroutine pumping in a few 
>>>> thousand objects into a channel (Approximately 2k requests per second). 
>>>> There are a configurable number of goroutines that work as consumers, 
>>>> consuming from this single channel. If none of the consumer threads could 
>>>> receive the message, the message gets discarded. Each consumer go routine 
>>>> takes about 2 seconds for each work to be completed, by which they will 
>>>> come back to read the next item in the channel. The channel is sized such 
>>>> that it can hold up to 10,000 messages.
>>>>
>>>> The code is roughly something like:
>>>>
>>>> producer.go:
>>>> func produce() {
>>>>  ch <- item
>>>> }
>>>>

Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Ivan Bertona
1) Yes if you set NumberOfWorkers high enough (> 4k / num CPUs), and your 
machine is actually capable of handling this workload. Based on experience 
I'd say you shouldn't expect significant overhead for job scheduling.
2) Not sure this is a question
3) No
4) What you are doing is totally fine at 2k/s

I'll add that you shouldn't trust me, you can easily measure the overhead 
yourself by making the consumer work be a 2s sleep, setting NumberOfWorkers 
to 4k / num CPUs, pushing 2k/s jobs, and looking at how your system load 
looks like when it's running. As for whether this would work with your 
actual workload, again the only way is to try and to measure.

Best,
Ivan

On Thursday, November 21, 2019 at 3:24:26 PM UTC-5, Michael Jones wrote:
>
> Agree. Essentially I'm saying the "channel aspect" is not an issue.
>
> On Thu, Nov 21, 2019 at 12:12 PM Robert Engels wrote:
>
>> He stated "each request takes 2 secs to process" - what's involved in 
>> that is the important aspect imo.
>>
>> -Original Message- 
>> From: Michael Jones 
>> Sent: Nov 21, 2019 2:06 PM 
>> To: Robert Engels 
>> Cc: Sankar , golang-nuts 
>> Subject: Re: [go-nuts] golang multiple go routines reading from a channel 
>> and performance implications 
>>
>> In my (past) benchmarking, I got ~3M channel send/receive operations per 
>> second on my MacBook Pro. It is faster on faster computers. 2k requests/sec 
>> is much less than 3M, clearly, and the 1/1000 duty cycle suggests that 
>> you'll have 99.9% non-overhead to do your processing. This is back of the 
>> envelope thinking, but what I go through for every path explored. What 
>> should it be? How is it in fact? What explains the difference? ... that 
>> kind of thing.
>>
>> On Thu, Nov 21, 2019 at 11:25 AM Robert Engels wrote:
>>
>>> You need to determine how well they parallelize and what the resource 
>>> consumption of a request is. For example, if every request can run 
>>> concurrently at 100% (not possible btw because of switching overhead), and 
>>> each request takes 0.5 secs of CPU, and 1.5 secs of IO, for a total wall 
>>> time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs. 
>>> IO can run concurrently on most modern setups, so you can essentially 
>>> factor this out, less so if most of the operations are writes.
>>>
>>> Your local CPU requirements may be less if the request is then handled 
>>> by a cluster (over the network, database etc), but you will still need 1000 
>>> cpus in the cluster (probably a lot more due to the network overhead).
>>>
>>> You can look at github.com/robaho/go-trader for an example of very high 
>>> CPU based processing using Go and channels (and other concurrency structs).
>>>
>>>
>>>
>>> -Original Message- 
>>> From: Sankar 
>>> Sent: Nov 21, 2019 12:30 PM 
>>> To: golang-nuts 
>>> Subject: [go-nuts] golang multiple go routines reading from a channel 
>>> and performance implications 
>>>
>>> We have a setup where we have a producer goroutine pumping in a few 
>>> thousand objects into a channel (Approximately 2k requests per second). 
>>> There are a configurable number of goroutines that work as consumers, 
>>> consuming from this single channel. If none of the consumer threads could 
>>> receive the message, the message gets discarded. Each consumer go routine 
>>> takes about 2 seconds for each work to be completed, by which they will 
>>> come back to read the next item in the channel. The channel is sized such 
>>> that it can hold up to 10,000 messages.
>>>
>>> The code is roughly something like:
>>>
>>> producer.go:
>>> func produce() {
>>>  ch <- item
>>> }
>>>
>>> func consumer() {
>>>  for i := 0; i < NumberOfWorkers; i++ {
>>>go func() {
>>>   for item := range ch {
>>>  // process item
>>>   }
>>>} ()
>>>  }
>>> }
>>>
>>> With this above setup, we are seeing about 40% of our messages getting 
>>> dropped.
>>>
>>> So my questions are:
>>>
>>> 1) In such a high velocity incoming data, will this above design work ? 
>>> (Producer, Consumer Worker Threads)
>>> 2) We did not go for an external middleware for saving the message and 
>>> processing data later, as we are concerned about latency for now.
>>> 3) Are channels bad for such an approach ? Are there any other alternate 
>>> performant mechanism to achieve this in the go way ?

Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Michael Jones
Agree. Essentially I'm saying the "channel aspect" is not an issue.

On Thu, Nov 21, 2019 at 12:12 PM Robert Engels 
wrote:

> He stated "each request takes 2 secs to process" - what's involved in that
> is the important aspect imo.
>
> -Original Message-
> From: Michael Jones
> Sent: Nov 21, 2019 2:06 PM
> To: Robert Engels
> Cc: Sankar , golang-nuts
> Subject: Re: [go-nuts] golang multiple go routines reading from a channel
> and performance implications
>
> In my (past) benchmarking, I got ~3M channel send/receive operations per
> second on my MacBook Pro. It is faster on faster computers. 2k requests/sec
> is much less than 3M, clearly, and the 1/1000 duty cycle suggests that
> you'll have 99.9% non-overhead to do your processing. This is back of the
> envelope thinking, but what I go through for every path explored. What
> should it be? How is it in fact? What explains the difference? ... that
> kind of thing.
>
> On Thu, Nov 21, 2019 at 11:25 AM Robert Engels 
> wrote:
>
>> You need to determine how well they parallelize and what the resource
>> consumption of a request is. For example, if every request can run
>> concurrently at 100% (not possible btw because of switching overhead), and
>> each request takes 0.5 secs of CPU, and 1.5 secs of IO, for a total wall
>> time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs.
>> IO can run concurrently on most modern setups, so you can essentially
>> factor this out, less so if most of the operations are writes.
>>
>> Your local CPU requirements may be less if the request is then handled by
>> a cluster (over the network, database etc), but you will still need 1000
>> cpus in the cluster (probably a lot more due to the network overhead).
>>
>> You can look at github.com/robaho/go-trader for an example of very high
>> CPU based processing using Go and channels (and other concurrency structs).
>>
>>
>>
>> -----Original Message-
>> From: Sankar
>> Sent: Nov 21, 2019 12:30 PM
>> To: golang-nuts
>> Subject: [go-nuts] golang multiple go routines reading from a channel and
>> performance implications
>>
>> We have a setup where we have a producer goroutine pumping in a few
>> thousand objects into a channel (Approximately 2k requests per second).
>> There are a configurable number of goroutines that work as consumers,
>> consuming from this single channel. If none of the consumer threads could
>> receive the message, the message gets discarded. Each consumer go routine
>> takes about 2 seconds for each work to be completed, by which they will
>> come back to read the next item in the channel. The channel is sized such
>> that it can hold up to 10,000 messages.
>>
>> The code is roughly something like:
>>
>> producer.go:
>> func produce() {
>>  ch <- item
>> }
>>
>> func consumer() {
>>  for i := 0; i < NumberOfWorkers; i++ {
>>go func() {
>>   for item := range ch {
>>  // process item
>>   }
>>} ()
>>  }
>> }
>>
>> With this above setup, we are seeing about 40% of our messages getting
>> dropped.
>>
>> So my questions are:
>>
>> 1) In such a high velocity incoming data, will this above design work ?
>> (Producer, Consumer Worker Threads)
>> 2) We did not go for an external middleware for saving the message and
>> processing data later, as we are concerned about latency for now.
>> 3) Are channels bad for such an approach ? Are there any other alternate
>> performant mechanism to achieve this in the go way ?
>> 4) Are there any sample FOSS projects that we can refer to see such
>> performant code ? Any other book, tutorial, video or some such for these
>> high performance Golang application development guidelines ?
>>
>> I am planning to do some profiling of our system to see where the
>> performance is getting dropped, but before that I wanted to ask here, in
>> case there are any best-known-methods that I am missing out on. Thanks.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to golang-nuts+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/golang-nuts/f8b5d9fb-d9b7-44b8-bc50-70dcb5f10cd0%40googlegroups.com

Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Robert Engels
He stated "each request takes 2 secs to process" - what's involved in that is the important aspect imo.

-Original Message-
From: Michael Jones 
Sent: Nov 21, 2019 2:06 PM
To: Robert Engels 
Cc: Sankar , golang-nuts 
Subject: Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

In my (past) benchmarking, I got ~3M channel send/receive operations per second on my MacBook Pro. It is faster on faster computers. 2k requests/sec is much less than 3M, clearly, and the 1/1000 duty cycle suggests that you'll have 99.9% non-overhead to do your processing. This is back of the envelope thinking, but what I go through for every path explored. What should it be? How is it in fact? What explains the difference? ... that kind of thing.

On Thu, Nov 21, 2019 at 11:25 AM Robert Engels <reng...@ix.netcom.com> wrote:

You need to determine how well they parallelize and what the resource consumption of a request is. For example, if every request can run concurrently at 100% (not possible btw because of switching overhead), and each request takes 0.5 secs of CPU and 1.5 secs of IO, for a total wall time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs. IO can run concurrently on most modern setups, so you can essentially factor this out, less so if most of the operations are writes.

Your local CPU requirements may be less if the request is then handled by a cluster (over the network, database etc), but you will still need 1000 cpus in the cluster (probably a lot more due to the network overhead).

You can look at github.com/robaho/go-trader for an example of very high CPU based processing using Go and channels (and other concurrency structs).

-Original Message-
From: Sankar 
Sent: Nov 21, 2019 12:30 PM
To: golang-nuts 
Subject: [go-nuts] golang multiple go routines reading from a channel and performance implications

We have a setup where we have a producer goroutine pumping in a few thousand objects into a channel (approximately 2k requests per second). There are a configurable number of goroutines that work as consumers, consuming from this single channel. If none of the consumer goroutines could receive the message, the message gets discarded. Each consumer goroutine takes about 2 seconds for each piece of work to be completed, after which it comes back to read the next item from the channel. The channel is sized such that it can hold up to 10,000 messages.

The code is roughly something like:

producer.go:
func produce() {
 ch <- item
}

func consumer() {
 for i := 0; i < NumberOfWorkers; i++ {
   go func() {
  for item := range ch {
 // process item
  }
   } ()
 }
}

With this above setup, we are seeing about 40% of our messages getting dropped.

So my questions are:

1) With such high-velocity incoming data, will this above design work? (Producer, Consumer Worker Threads)
2) We did not go for an external middleware for saving the message and processing the data later, as we are concerned about latency for now.
3) Are channels bad for such an approach? Is there any other, more performant mechanism to achieve this in the Go way?
4) Are there any sample FOSS projects that we can refer to for such performant code? Any book, tutorial, video or some such for these high-performance Golang application development guidelines?

I am planning to do some profiling of our system to see where the performance is getting dropped, but before that I wanted to ask here, in case there are any best-known methods that I am missing out on. Thanks.



-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/f8b5d9fb-d9b7-44b8-bc50-70dcb5f10cd0%40googlegroups.com.




-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/225583457.2024.1574364299085%40wamui-scooby.atl.sa.earthlink.net.
-- 
Michael T. Jones
michael.jo...@gmail.com




-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/1791680454.2277.1574367136562%40wamui-scooby.atl.sa.earthlink.net.


Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Michael Jones
In my (past) benchmarking, I got ~3M channel send/receive operations per
second on my MacBook Pro. It is faster on faster computers. 2k requests/sec
is much less than 3M, clearly, and the 1/1000 duty cycle suggests that
you'll have 99.9% non-overhead to do your processing. This is back of the
envelope thinking, but what I go through for every path explored. What
should it be? How is it in fact? What explains the difference? ... that
kind of thing.
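[Editor's note: the figure above can be probed with a quick timing loop like this; it is a rough, machine-dependent sketch, not a rigorous benchmark (use testing.B for that):]

```go
// Time n send/receive pairs over an unbuffered channel between two
// goroutines and report the rate. Absolute numbers vary widely by
// machine and Go version.
package main

import (
	"fmt"
	"time"
)

func channelOpsPerSec(n int) float64 {
	ch := make(chan int)
	start := time.Now()
	go func() {
		for i := 0; i < n; i++ {
			ch <- i // blocks until the receiver is ready
		}
		close(ch)
	}()
	count := 0
	for range ch {
		count++
	}
	if count != n {
		panic("receiver saw fewer messages than were sent")
	}
	return float64(n) / time.Since(start).Seconds()
}

func main() {
	fmt.Printf("~%.0f channel send/receive ops per second\n", channelOpsPerSec(1000000))
}
```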

On Thu, Nov 21, 2019 at 11:25 AM Robert Engels 
wrote:

> You need to determine how well they parallelize and what the resource
> consumption of a request is. For example, if every request can run
> concurrently at 100% (not possible btw because of switching overhead), and
> each request takes 0.5 secs of CPU, and 1.5 secs of IO, for a total wall
> time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs.
> IO can run concurrently on most modern setups, so you can essentially
> factor this out, less so if most of the operations are writes.
>
> Your local CPU requirements may be less if the request is then handled by
> a cluster (over the network, database etc), but you will still need 1000
> cpus in the cluster (probably a lot more due to the network overhead).
>
> You can look at github.com/robaho/go-trader for an example of very high
> CPU based processing using Go and channels (and other concurrency structs).
>
>
>
> -Original Message-
> From: Sankar
> Sent: Nov 21, 2019 12:30 PM
> To: golang-nuts
> Subject: [go-nuts] golang multiple go routines reading from a channel and
> performance implications
>
> We have a setup where we have a producer goroutine pumping in a few
> thousand objects into a channel (Approximately 2k requests per second).
> There are a configurable number of goroutines that work as consumers,
> consuming from this single channel. If none of the consumer threads could
> receive the message, the message gets discarded. Each consumer go routine
> takes about 2 seconds for each work to be completed, by which they will
> come back to read the next item in the channel. The channel is sized such
> that it can hold up to 10,000 messages.
>
> The code is roughly something like:
>
> producer.go:
> func produce() {
>  ch <- item
> }
>
> func consumer() {
>  for i := 0; i < NumberOfWorkers; i++ {
>go func() {
>   for item := range ch {
>  // process item
>   }
>} ()
>  }
> }
>
> With this above setup, we are seeing about 40% of our messages getting
> dropped.
>
> So my questions are:
>
> 1) In such a high velocity incoming data, will this above design work ?
> (Producer, Consumer Worker Threads)
> 2) We did not go for an external middleware for saving the message and
> processing data later, as we are concerned about latency for now.
> 3) Are channels bad for such an approach ? Are there any other alternate
> performant mechanism to achieve this in the go way ?
> 4) Are there any sample FOSS projects that we can refer to see such
> performant code ? Any other book, tutorial, video or some such for these
> high performance Golang application development guidelines ?
>
> I am planning to do some profiling of our system to see where the
> performance is getting dropped, but before that I wanted to ask here, in
> case there are any best-known-methods that I am missing out on. Thanks.
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/golang-nuts/f8b5d9fb-d9b7-44b8-bc50-70dcb5f10cd0%40googlegroups.com
> .
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/golang-nuts/225583457.2024.1574364299085%40wamui-scooby.atl.sa.earthlink.net
> .
>


-- 

*Michael T. Jones*
michael.jo...@gmail.com

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/CALoEmQwNmToO6SBUCem7pWQ24nMkra7z8gLr_Mn0QV6fwbPHwA%40mail.gmail.com.


Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Robert Engels
You need to determine how well they parallelize and what the resource consumption of a request is. For example, if every request can run concurrently at 100% (not possible btw because of switching overhead), and each request takes 0.5 secs of CPU and 1.5 secs of IO, for a total wall time of 2 secs. At 2k requests per sec, you need a machine with 1000 CPUs. IO can run concurrently on most modern setups, so you can essentially factor this out, less so if most of the operations are writes.

Your local CPU requirements may be less if the request is then handled by a cluster (over the network, database etc), but you will still need 1000 cpus in the cluster (probably a lot more due to the network overhead).

You can look at github.com/robaho/go-trader for an example of very high CPU based processing using Go and channels (and other concurrency structs).

-Original Message-
From: Sankar 
Sent: Nov 21, 2019 12:30 PM
To: golang-nuts 
Subject: [go-nuts] golang multiple go routines reading from a channel and performance implications

We have a setup where we have a producer goroutine pumping in a few thousand objects into a channel (approximately 2k requests per second). There are a configurable number of goroutines that work as consumers, consuming from this single channel. If none of the consumer goroutines could receive the message, the message gets discarded. Each consumer goroutine takes about 2 seconds for each piece of work to be completed, after which it comes back to read the next item from the channel. The channel is sized such that it can hold up to 10,000 messages.

The code is roughly something like:

producer.go:
func produce() {
 ch <- item
}

func consumer() {
 for i := 0; i < NumberOfWorkers; i++ {
   go func() {
  for item := range ch {
 // process item
  }
   } ()
 }
}

With this above setup, we are seeing about 40% of our messages getting dropped.

So my questions are:

1) With such high-velocity incoming data, will this above design work? (Producer, Consumer Worker Threads)
2) We did not go for an external middleware for saving the message and processing the data later, as we are concerned about latency for now.
3) Are channels bad for such an approach? Is there any other, more performant mechanism to achieve this in the Go way?
4) Are there any sample FOSS projects that we can refer to for such performant code? Any book, tutorial, video or some such for these high-performance Golang application development guidelines?

I am planning to do some profiling of our system to see where the performance is getting dropped, but before that I wanted to ask here, in case there are any best-known methods that I am missing out on. Thanks.



-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/f8b5d9fb-d9b7-44b8-bc50-70dcb5f10cd0%40googlegroups.com.




-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/225583457.2024.1574364299085%40wamui-scooby.atl.sa.earthlink.net.


Re: [go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread burak serdar
On Thu, Nov 21, 2019 at 11:30 AM Sankar  wrote:
>
> We have a setup where we have a producer goroutine pumping in a few thousand 
> objects into a channel (Approximately 2k requests per second). There are a 
> configurable number of goroutines that work as consumers, consuming from this 
> single channel. If none of the consumer threads could receive the message, 
> the message gets discarded. Each consumer go routine takes about 2 seconds 
> for each work to be completed, by which they will come back to read the next 
> item in the channel. The channel is sized such that it can hold up to 10,000 
> messages.

If each goroutine takes 2 secs, you'd need > 4k worker goroutines to
keep up with the inflow. How many do you have?

Have you tried with a separate channel per goroutine, with a smaller
channel size?


>
> The code is roughly something like:
>
> producer.go:
> func produce() {
>  ch <- item
> }
>
> func consumer() {
>  for i := 0; i < NumberOfWorkers; i++ {
>go func() {
>   for item := range ch {
>  // process item
>   }
>} ()
>  }
> }
>
> With this above setup, we are seeing about 40% of our messages getting 
> dropped.
>
> So my questions are:
>
> 1) In such a high velocity incoming data, will this above design work ? 
> (Producer, Consumer Worker Threads)
> 2) We did not go for an external middleware for saving the message and 
> processing data later, as we are concerned about latency for now.
> 3) Are channels bad for such an approach ? Are there any other alternate 
> performant mechanism to achieve this in the go way ?
> 4) Are there any sample FOSS projects that we can refer to see such 
> performant code ? Any other book, tutorial, video or some such for these high 
> performance Golang application development guidelines ?
>
> I am planning to do some profiling of our system to see where the performance 
> is getting dropped, but before that I wanted to ask here, in case there are 
> any best-known-methods that I am missing out on. Thanks.
>
> --
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/golang-nuts/f8b5d9fb-d9b7-44b8-bc50-70dcb5f10cd0%40googlegroups.com.

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/CAMV2RqojyQZ84Ajxye8O6O9zO7WUiDN8jPbbndhYQVNuM%2BT_4A%40mail.gmail.com.


[go-nuts] golang multiple go routines reading from a channel and performance implications

2019-11-21 Thread Sankar
We have a setup where we have a producer goroutine pumping in a few 
thousand objects into a channel (Approximately 2k requests per second). 
There are a configurable number of goroutines that work as consumers, 
consuming from this single channel. If none of the consumer threads could 
receive the message, the message gets discarded. Each consumer go routine 
takes about 2 seconds for each work to be completed, by which they will 
come back to read the next item in the channel. The channel is sized such 
that it can hold up to 10,000 messages.

The code is roughly something like:

producer.go:
func produce() {
 ch <- item
}

func consumer() {
 for i := 0; i < NumberOfWorkers; i++ {
   go func() {
  for item := range ch {
 // process item
  }
   } ()
 }
}

With this above setup, we are seeing about 40% of our messages getting 
dropped.

So my questions are:

1) With such high-velocity incoming data, will this above design work? 
(Producer, Consumer Worker Threads)
2) We did not go for an external middleware for saving the message and 
processing the data later, as we are concerned about latency for now.
3) Are channels bad for such an approach? Is there any other, more 
performant mechanism to achieve this in the Go way?
4) Are there any sample FOSS projects that we can refer to for such 
performant code? Any book, tutorial, video or some such for these 
high-performance Golang application development guidelines?

I am planning to do some profiling of our system to see where the 
performance is getting dropped, but before that I wanted to ask here, in 
case there are any best-known methods that I am missing out on. Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/f8b5d9fb-d9b7-44b8-bc50-70dcb5f10cd0%40googlegroups.com.