Another option is to profile only a percentage of your traffic. In the past I've 
done that by enabling profiling on a fixed percentage of application servers and 
extrapolating from there; a rough sketch of that setup is below.
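
As a minimal sketch (not the exact setup I used), assume the deployment system 
sets an environment variable on the chosen percentage of instances; the 
ENABLE_PPROF name and the port are made up for illustration. Only those 
instances expose the net/http/pprof endpoints:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
	"os"
)

func main() {
	// Only the sampled instances serve the profiling endpoints.
	if os.Getenv("ENABLE_PPROF") == "1" {
		go func() {
			// Serve pprof on a separate, non-public port so profiling
			// traffic stays off the main request path.
			log.Println(http.ListenAndServe("localhost:6060", nil))
		}()
	}

	// ... the rest of the application's setup and serving logic ...
	select {}
}

A CPU profile can then be pulled from one of those instances with something like 
"go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30" and the 
results extrapolated to the rest of the fleet.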

On Tuesday, 25 July 2017 10:55:42 UTC+10, Jaana Burcu Dogan wrote:
>
> It would be very speculative to provide reference numbers without actually 
> seeing the specific program. You can benchmark the latency/throughput with 
> the CPU profiler on to get a realistic estimate. FWIW, the memory, goroutine, 
> and thread-creation profiles are always on. At Google, we continuously 
> profile Go production services and it is safe to do so.
>
>
> On Monday, July 24, 2017 at 5:44:10 PM UTC-7, nat...@honeycomb.io wrote:
>>
>> Hello,
>>
>> I am curious what the performance impact of running pprof to collect 
>> information about CPU or memory usage is. Is it like strace where there 
>> could be a massive slowdown (up to 100x) or is it lower overhead, i.e., 
>> safe to use in production? The article here - 
>> http://artem.krylysov.com/blog/2017/03/13/profiling-and-optimizing-go-web-applications/
>>  
>> - suggests that "one of the biggest pprof advantages is that it has low 
>> overhead and can be used in a production environment on a live traffic 
>> without any noticeable performance penalties". Is that accurate?
>>
>> Thanks!
>>
>> Nathan
>>
>
