We deployed changes Thursday evening that fixed the HTTP endpoint the
wiretap sends payloads to, and it's been running without incident since.

Feels like something may be wrong with error handling there.
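
For anyone not familiar with the pattern, here is a minimal sketch of a
wire-tapped HTTP call with explicit error handling. The endpoint URIs, route
ids and the exception handler are illustrative placeholders, not our actual
configuration:

    import org.apache.camel.builder.RouteBuilder;

    // Minimal sketch only: endpoint URIs and route ids are placeholders.
    public class WireTapRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Handle failures from the tapped HTTP call explicitly instead of
            // letting them disappear inside the wire tap's background thread.
            onException(Exception.class)
                .handled(true)
                .log("Wire tap delivery failed: ${exception.message}");

            from("activemq:queue:inbound")
                .routeId("main-route")
                // wireTap hands a copy of the exchange to a separate thread
                // pool and carries on; errors in the tapped route are easy
                // to miss.
                .wireTap("direct:audit")
                .to("direct:persist"); // placeholder for the Mongo/SMTP work

            from("direct:audit")
                .routeId("audit-route")
                .to("http://audit-service/payloads"); // placeholder endpoint
        }
    }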

On 27 July 2017 at 10:34, James Green <james.mk.gr...@gmail.com> wrote:

> I have a thread dump taken while it was hung - uploaded it to fastthread.io
> and received a smiley face - apparently there's nothing wrong!
>
> Here's the gist if anyone can spot something obvious:
> https://gist.github.com/jmkgreen/a293ace71678a1ae2a5aac7b6408876e
>
> The notable thing is that a wiretap is being used and features in the dump -
> could there be a starvation problem due to this not completing?
>
> James
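
On the starvation question: the wire tap runs its copies on a thread pool, so
if the tapped HTTP endpoint stops answering, those threads can all end up
blocked and tap work backs up behind them. A rough sketch of bounding that
pool and putting timeouts on the HTTP call - the names, pool size and timeout
values below are made up for illustration:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.apache.camel.builder.RouteBuilder;

    // Sketch only: names, pool size and timeouts are illustrative.
    public class BoundedWireTapRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Give the wire tap its own small pool so a wedged tap cannot
            // consume threads needed elsewhere.
            ExecutorService tapPool = Executors.newFixedThreadPool(5);

            from("activemq:queue:inbound")
                .wireTap("direct:audit").executorService(tapPool)
                .to("log:main?level=INFO");

            from("direct:audit")
                // Socket/connect timeouts stop a dead endpoint from pinning
                // the tap threads indefinitely (camel-http4 style options).
                .to("http4://audit-service/payloads"
                    + "?httpClient.socketTimeout=10000"
                    + "&httpClient.connectTimeout=5000");
        }
    }
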
>
>
> On 27 July 2017 at 09:24, Zoran Regvart <zo...@regvart.com> wrote:
>
>> Hi James,
>> besides the obvious garbage collector pauses, one thing that might reveal
>> the problem is to take periodic thread dumps and look for any locking
>> issues. And perhaps increasing the verbosity of the logs can give you more
>> clues,
>>
>> zoran
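
Picking up Zoran's suggestion about periodic thread dumps: we can script
jstack against the container, but an in-process alternative is to schedule
dumps via the JMX thread bean. A minimal sketch - the interval and the plain
stdout output are placeholders:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.Arrays;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch only: dump interval and output target are illustrative.
    public class PeriodicThreadDump {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

            scheduler.scheduleAtFixedRate(() -> {
                // Dump every thread with its locked monitors/synchronizers.
                for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                    // Note: ThreadInfo.toString() abbreviates deep stacks.
                    System.out.print(info);
                }
                // Also report any Java-level deadlocks the JVM can detect.
                long[] deadlocked = threads.findDeadlockedThreads();
                if (deadlocked != null) {
                    System.out.println("Deadlocked thread ids: "
                        + Arrays.toString(deadlocked));
                }
            }, 0, 60, TimeUnit.SECONDS);
        }
    }
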
>>
>> On Thu, Jul 27, 2017 at 9:58 AM, James Green <james.mk.gr...@gmail.com>
>> wrote:
>>
>> > This is a Spring Boot 1.3 application that was recently re-imaged
>> > (Docker) and rebooted after several months of loyal service. We are
>> > unaware of any major changes beyond going from java:8 to openjdk:8.
>> >
>> > It receives payloads from ActiveMQ, works against Mongo, and spits out
>> > payloads to HTTP and SMTP endpoints. Since deployment, each morning
>> > during "rush hour" the application has ceased operating within seconds -
>> > the JVM is alive yet the logs and data processing have all halted. A
>> > restart resumes things.
>> >
>> > So what's in the logs? It seems we have a load problem contacting the
>> > HTTP endpoint (NoHttpResponseException), which we will fix. There are
>> > also warnings that the ActiveMQ Inactivity Monitor exceeded 30s and will
>> > reconnect, which we don't have any obvious clue about.
>> >
>> > But I'm surprised either of these should (eventually) completely halt
>> > the system!
>> >
>> > The very last log lines show absolutely normal behaviour - the message
>> > was processed to completion - so at least one thread has finished.
>> >
>> > Can anyone suggest what could cause what we're seeing? My next step is
>> > to capture a heap dump, but we're down to dark magic levels.
>> >
>> > We're trying hawt.io to get some visibility, but it's a real pain within
>> > a docker-compose launched set of containers due to some recently
>> > introduced security restrictions and apparent bugs within it.
>> >
>> > Appreciate the help in advance.
>> >
>> > Thanks,
>> >
>> > James
>> >
>>
>>
>>
>> --
>> Zoran Regvart
>>
>
>
