[...] helping with the debugging!
On Wed, Jul 3, 2019 at 11:34 AM Tom Mitchell wrote:
>
> On Mon, Jul 1, 2019 at 12:42 PM 'Yunchi Luo' via golang-nuts <
> golang-nuts@googlegroups.com> wrote:
>
>> Hello, I'd like to solicit some help with a weird GC issue we are seeing.
>>
>> [...] before the OOM crash.
> >
> > Note that the GC logs let you see some aspects of the GC's behavior
> > better, but if you don't understand them well enough, they may seem
> > mysterious (compared to your mental model) and later, when you run out
> > of other hypotheses, the GC may even seem suspect.
[...]vice/dynamodb.(*DynamoDB).GetItemWithContext
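For reference, the GC logs mentioned above are enabled by running the binary
with GODEBUG=gctrace=1; the runtime then prints one line per collection to
stderr. The numbers below are illustrative, not from this service:

    $ GODEBUG=gctrace=1 ./service
    gc 18 @1.234s 2%: 0.026+4.5+0.033 ms clock, 0.21+0.4/3.1/7.6+0.26 ms cpu, 4->5->2 MB, 6 MB goal, 8 P

The "4->5->2 MB" fields are the heap size at GC start, heap size at GC end,
and live heap after marking; "6 MB goal" is the target heap size derived from
GOGC. A small live heap paired with OOM kills points at transient allocation
spikes between collections.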
>
> On Tue, Jul 2, 2019 at 2:08 PM andrey mirtchovski
> wrote:
>
>> What I have found useful in the past is pprof's ability to diff
>> profiles. That means that if you capture heap profiles at regular
>> intervals, you can see a much smaller subset of changes and compare
>> allocation patterns.
>
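A minimal sketch of the interval capture Andrey describes, assuming a
goroutine can be added to the service (the file names and the 30-second
interval are made up). Snapshots taken this way can be diffed with:
go tool pprof -base heap-0.pprof heap-5.pprof

    package main

    import (
    	"fmt"
    	"os"
    	"runtime"
    	"runtime/pprof"
    	"time"
    )

    // dumpHeapProfiles writes heap-0.pprof, heap-1.pprof, ... every interval.
    func dumpHeapProfiles(interval time.Duration) {
    	for i := 0; ; i++ {
    		f, err := os.Create(fmt.Sprintf("heap-%d.pprof", i))
    		if err != nil {
    			return
    		}
    		runtime.GC() // force a collection so the profile reflects current live data
    		pprof.WriteHeapProfile(f)
    		f.Close()
    		time.Sleep(interval)
    	}
    }

    func main() {
    	go dumpHeapProfiles(30 * time.Second)
    	select {} // stand-in for the real service's request loop
    }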
> On Tue, Jul 2, 2019, 10:53 AM 'Yunchi Luo' via golang-nuts <
> golang-nuts@googlegroups.com> wrote:
>
>> I'm not so much pointing my finger at GC as I am hoping GC [...]
>
> [...] your
> own code and look at what patterns emerge. [Not to mention any time you
> spend on understanding your code will help improve your service; but better
> understanding of and debugging the GC won't necessarily help you!]
>
On Jul 1, 2019, at 12:14 PM, 'Yunchi Luo' via golang-nuts <
golang-nuts@googlegroups.com> wrote:
>
> [...] show in pprof either.
>
> -----Original Message-----
> From: 'Yunchi Luo' via golang-nuts
> Sent: Jul 1, 2019 4:26 PM
> To: Robert Engels
> Cc: golang-nuts@googlegroups.com, Alec Thomas
> Subject: Re: [go-nuts] OOM occurring with a small heap
>
> I actually have a heap [...]
>
> [...] probably much smaller than the buffer allocation).
>
> That would be my guess - but just a guess.
>
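One way to test a guess like that, assuming the service can expose the
standard net/http/pprof handlers on a side port (localhost:6060 here is
arbitrary): compare the inuse_space and alloc_space views of the heap
profile. Large, short-lived buffers dominate alloc_space but barely register
in inuse_space.

    package main

    import (
    	"log"
    	"net/http"
    	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
    )

    func main() {
    	// Inspect live vs. cumulative allocations with:
    	//   go tool pprof -sample_index=inuse_space http://localhost:6060/debug/pprof/heap
    	//   go tool pprof -sample_index=alloc_space http://localhost:6060/debug/pprof/heap
    	log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }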
> -----Original Message-----
> From: 'Yunchi Luo' via golang-nuts
> Sent: Jul 1, 2019 2:14 PM
> To: golang-nuts@googlegroups.com
> Cc: Alec Thomas
> Subject: [go-nuts] OOM occurring with a small heap
Hello, I'd like to solicit some help with a weird GC issue we are seeing.
I'm trying to debug an OOM on a service we are running in k8s. The service is
just a CRUD server hitting a database (DynamoDB). Each replica serves about
300 qps of traffic. There are no memory leaks. On occasion (seemingly