I have just added GC tracing and it looks like this more or less all of
the time:

gc 78 @253.095s 0%: 0.032+3.3+0.46 ms clock, 0.26+0.24/2.6/2.4+3.6 ms cpu, 11->12->4 MB, 12 MB goal, 8 P
gc 79 @253.109s 0%: 0.021+2.1+0.17 ms clock, 0.16+0.19/3.6/1.2+1.3 ms cpu, 9->9->4 MB, 10 MB goal, 8 P
gc 80 @253.120s 0%: 0.022+2.8+2.2 ms clock, 0.17+0.27/4.8/0.006+18 ms cpu, 8->8->4 MB, 9 MB goal, 8 P
gc 81 @253.138s 0%: 0.019+2.3+0.10 ms clock, 0.15+0.73/3.9/3.1+0.81 ms cpu, 9->9->5 MB, 10 MB goal, 8 P
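
For my own notes while decoding these: per the runtime package docs, each line has the format

gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P

where the three clock numbers are STW sweep termination + concurrent mark/scan + STW mark termination, the cpu numbers break the mark phase into assist/background/idle work, and the MB numbers are heap size at GC start -> at GC end -> live heap. So for gc 78 above, the total stop-the-world time is only about 0.032 + 0.46 ≈ 0.5 ms.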

Heroku already reports a stop-the-world (STW) pause of 343 ms, but I can't
find it by manual inspection of the trace. I will download the logs later
today and try to generate realistic load.
What is the overhead of running like this, aside from the obvious extra
logging?
Are there any automatic tools to analyze these logs?
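
If there are no ready-made tools, I might hack something small myself. A minimal sketch of what I have in mind (assuming bare gctrace lines in the format above; Heroku's log line prefixes would need to be stripped first):

// gcstw.go — a rough sketch, not a polished tool: scan gctrace output
// piped on stdin for the worst stop-the-world pause. Assumes the line
// format above, where the first and third "ms clock" numbers are the
// STW sweep-termination and mark-termination phases.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	var worst float64
	var worstLine string
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "gc ") {
			continue
		}
		// The clock phases sit between the first ": " and " ms clock".
		i := strings.Index(line, ": ")
		j := strings.Index(line, " ms clock")
		if i < 0 || j < i+2 {
			continue
		}
		phases := strings.Split(line[i+2:j], "+")
		if len(phases) != 3 {
			continue
		}
		for _, k := range []int{0, 2} { // 0 and 2 are the STW phases
			if v, err := strconv.ParseFloat(phases[k], 64); err == nil && v > worst {
				worst, worstLine = v, line
			}
		}
	}
	fmt.Printf("worst STW phase: %.3f ms\n%s\n", worst, worstLine)
}

Something like GODEBUG=gctrace=1 ./app 2>&1 | go run gcstw.go while I generate load.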

Sat, Dec 2, 2017 at 22:36, Henrik Johansson <dahankz...@gmail.com> wrote:

> I am sorry, I was unclear. The app uses very little RAM, but the
> provisioned available memory is 512 MB.
>
> I will try to experiment with GC toggles as you suggest and report back.
>
> Thx!
>
> On Sat, Dec 2, 2017, 22:18 rlh via golang-nuts
> <golang-nuts@googlegroups.com> wrote:
>
>> Hard to tell what is going on. 35 MB, even for 1 CPU, seems very small.
>> Most modern systems provision more than 1 GB per HW thread, though I've
>> seen some provision as little as 512 MB. GOGC (SetGCPercent) can be
>> adjusted so that the application uses more of the available RAM. Running
>> with GODEBUG=gctrace=1 will give you a sense of the GC's view of the
>> application.
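>>
>> For instance, a minimal sketch (300 is only an illustrative value; the
>> default, GOGC=100, lets the heap reach roughly 2x the live set between
>> collections):
>>
>> package main
>>
>> import "runtime/debug"
>>
>> func main() {
>>     // 300 lets the heap grow to ~4x the live set before the next
>>     // cycle (the default, 100, allows ~2x), trading RAM for fewer GCs.
>>     debug.SetGCPercent(300)
>>     // ... rest of the app ...
>> }
>>
>> The same effect without a rebuild: set GOGC=300 in the environment, and
>> add GODEBUG=gctrace=1 to watch the result.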
>>
>> In any case, these kinds of numbers, seen on real systems and
>> reproducible on tip, are worth filing an issue.
>>
>> On Saturday, December 2, 2017 at 3:02:30 AM UTC-5, Henrik Johansson wrote:
>>>
>>> Hi,
>>>
>>> I am befuddled by GC stop-the-world (STW) times of several seconds
>>> (I've seen 20 s once) on the metrics page for our app. There are
>>> several things that seem strange, but perhaps I am misreading it. The
>>> same metrics page reports Max Total 35 MB, of which 1 MB is swap and
>>> the rest RSS. The service's response times have a 99th percentile of
>>> ~400 ms, which is not good, but the 95th percentile is usually ~120 ms.
>>> The app reloads an in-memory cache as needed, using atomic.Value as a
>>> holder, and the cache holds no more than a few thousand entries at any
>>> given time. Basically a map with pointers to simple structs, plus
>>> lists with pointers to the same structs to allow for some simple
>>> access scenarios.
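>>>
>>> (For the curious, the holder is essentially the standard atomic.Value
>>> snapshot pattern; roughly this, with made-up names:)
>>>
>>> package cache
>>>
>>> import "sync/atomic"
>>>
>>> type entry struct{ ID string } // stand-in for our simple structs
>>>
>>> type cache struct {
>>>     byID map[string]*entry // the map of pointers
>>>     all  []*entry          // lists sharing the same pointers
>>> }
>>>
>>> var current atomic.Value // always holds a *cache
>>>
>>> func reload(entries []*entry) {
>>>     c := &cache{byID: make(map[string]*entry, len(entries)), all: entries}
>>>     for _, e := range entries {
>>>         c.byID[e.ID] = e
>>>     }
>>>     current.Store(c) // readers switch to the new snapshot atomically
>>> }
>>>
>>> func lookup(id string) *entry {
>>>     return current.Load().(*cache).byID[id] // assumes one reload at startup
>>> }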
>>>
>>> Now, I haven't profiled the app yet, but even in a very pathological
>>> case it seems the GC should easily keep up with so little memory in
>>> use. Granted, this is a Standard-1X dyno, but even so, once the world
>>> is stopped the GC should be able to complete its work in a very short
>>> time given the low memory usage.
>>>
>>> Has anyone else seen this? Could the Go metrics on Heroku simply be
>>> reported erroneously? Perhaps by a couple of orders of magnitude?
>>>
>>> Cheers,
>>>
>>> --
>> You received this message because you are subscribed to the Google Groups
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to golang-nuts+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
