We have a collection of 'micro services' running with hypnotoad, which all 
share the same framework code. We've been running this for about two years 
now, periodically adding new services. Outside of local development all our 
services are behind an nginx proxy, which routes to the appropriate service 
in the collection. This has worked fantastically well so far.

But in our newest service, while testing in our beta environment, we 
started noticing an odd behavior. We would occasionally get a 405 error 
response on calls, including calls exercised by shared code that should 
never fail unexpectedly under test. Watching the log, we observed that the 
following entries *always* corresponded with the error response:
[Thu Jun 30 12:21:47 2016] [debug] Worker 34303 stopped
[Thu Jun 30 12:21:47 2016] [debug] Worker 34442 started 

Playing with the configuration value for 'accepts' increases or decreases 
the frequency with which we see this error. Using hypnotoad locally with a 
low 'accepts' value I am able to replicate this easily, and tests with 
curl suggest the worker is hanging up unexpectedly after accepting the 
connection:
$ curl -v http://127.0.0.1:3000/account
* Connected to 127.0.0.1 (127.0.0.1) port 3000 (#0)
> GET /account HTTP/1.1
> Host: 127.0.0.1:3000
> User-Agent: curl/7.43.0
>
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
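
For reference, the hypnotoad configuration I use when reproducing this 
locally looks roughly like the following (the values here are illustrative, 
not our production settings):

```perl
# moniker.conf -- illustrative values only, not our production settings
{
  hypnotoad => {
    listen  => ['http://*:3000'],
    workers => 2,
    accepts => 5,    # a low value like this makes the dropped
                     # connections much easier to trigger locally
  },
};
```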

We haven't done anything obviously different with this service than with 
any of the others where we do not appear to be having this issue.

   1. Why does hypnotoad seem to accept the connection, but then kill the 
   worker handling the request immediately (ungracefully), before it finishes 
   responding?
   2. Is there any additional debugging I could or should do to gain more 
   information?
      1. We thought at first our service was dying in some new and unusual 
      way, but we see nothing in the application log to indicate it ever even 
      started handling the request. 
      2. If something is happening to generate STDERR output, would that 
      end up in the mojo log where we see the worker start/stop entries? I 
      notice the STDERR output we see in the terminal when using morbo, as we 
      usually do for local development, appears to be missing from the mojo 
      service log; where does it go?
   3. Tailing the service log for other instances, I do see the worker 
   stopped/started entries, but no empty replies from the service (the 
   restart appears graceful, as expected); what might influence this 
   behavior difference in our new service?
      1. FYI, when replicating locally all my services are symlinked to the 
      same moniker.conf file. They also all inherit from the same base class 
      for their startup code.
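
On the STDERR question above: would something along these lines be the 
sanctioned way to make sure stray STDERR output lands in the service log? 
(A hypothetical, untested sketch; the log path is made up, not what we run.)

```perl
# Hypothetical sketch (untested): funnel warnings and STDERR noise into
# the same Mojo::Log file where we see the worker start/stop entries.
use Mojo::Log;

my $log = Mojo::Log->new(path => 'log/service.log');   # made-up path
$SIG{__WARN__} = sub { $log->warn(@_) };               # capture warn()s
open STDERR, '>&', $log->handle
  or die "Can't redirect STDERR: $!";                  # dup onto the log
```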
   
Any suggestions for how to resolve this issue would be most welcome! While 
I can replicate it, I cannot see anything different in this one service 
that might cause this behavior difference.

-Regards,
Sean

-- 
You received this message because you are subscribed to the Google Groups 
"Mojolicious" group.