unsubscribe

On 2012-11-30, at 3:05 PM, [email protected] wrote:

> Send mongrel-unicorn mailing list submissions to
>       [email protected]
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>       http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> or, via email, send a message with subject or body 'help' to
>       [email protected]
> 
> You can reach the person managing the list at
>       [email protected]
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of mongrel-unicorn digest..."
> 
> 
> Today's Topics:
> 
>   1. Re: Maintaining capacity during deploys (Tony Arcieri)
>   2. Re: Maintaining capacity during deploys ([email protected])
>   3. Re: When a client terminates a connection (Andrew Stewart)
>   4. using unicorn with logging-rails gem (Yoav Aner)
>   5. Re: using unicorn with logging-rails gem (Eric Wong)
>   6. Re: When a client terminates a connection (Eric Wong)
>   7. Re: using unicorn with logging-rails gem (Yoav Aner)
>   8. Re: using unicorn with logging-rails gem (Eric Wong)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Thu, 29 Nov 2012 17:40:22 -0800
> From: Tony Arcieri <[email protected]>
> To: unicorn list <[email protected]>
> Subject: Re: Maintaining capacity during deploys
> Message-ID:
>       <CAHOTMV+=XXHVe-SafCN6BZQPikZpY+FTVTBt528BD=fWpw=7...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Thu, Nov 29, 2012 at 5:28 PM, Devin Ben-Hur <[email protected]> wrote:
>> A better solution is to use a profiler to identify what extra work is being
>> done when an unwarm worker gets its first request and move that work into an
>> initialization step which occurs before fork when run with app preload
>> enabled.
> 
> I've done that; unfortunately, that work is connection setup, which
> must happen after forking, or file descriptors would wind up shared
> between processes.
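> 
> A minimal sketch of that constraint in a unicorn config, assuming
> ActiveRecord is one of the per-worker connections (other clients follow
> the same pattern):
> 
> ```ruby
> # unicorn.conf.rb (sketch, not a drop-in config)
> preload_app true
> 
> before_fork do |server, worker|
>   # drop the master's connection so its file descriptor is not
>   # inherited by every worker
>   ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
> end
> 
> after_fork do |server, worker|
>   # each worker opens its own connection, i.e. its own file descriptor
>   ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
> end
> ```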
> 
> --
> Tony Arcieri
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Thu, 29 Nov 2012 20:48:16 -0800
> From: [email protected]
> To: unicorn list <[email protected]>
> Subject: Re: Maintaining capacity during deploys
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=us-ascii
> 
> In my experience high loads and contention are a common issue when restarting 
> the unicorn master process. In a previous project we dealt with this by 1) 
> performing some warmup requests in the master before starting to fork 
> workers; 2) replacing old workers slowly by having each new worker send a 
> TTOU to the old master in after_fork and having the new master sleep for a 
> couple of seconds between spawning workers.
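> 
> Step 2 above can be sketched in a unicorn config roughly as follows
> (hypothetical: the pid-file path and the sleep interval are assumptions,
> and error handling is minimal):
> 
> ```ruby
> # unicorn.conf.rb (sketch of a slow rolling restart during a USR2 upgrade)
> preload_app true
> 
> after_fork do |server, worker|
>   # each freshly forked worker asks the *old* master to retire one old
>   # worker (SIGTTOU), so capacity is swapped gradually rather than all
>   # at once
>   old_pid = "#{server.config[:pid]}.oldbin"
>   if File.exist?(old_pid) && old_pid != server.pid
>     begin
>       Process.kill(:TTOU, File.read(old_pid).to_i)
>     rescue Errno::ENOENT, Errno::ESRCH
>     end
>   end
>   sleep 2  # let this worker warm up before the next one is forked
> end
> ```
> 
> Step 1 (warmup requests in the master) would run before any fork, e.g.
> by driving the preloaded Rack app directly with Rack::MockRequest.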
> 
> It was a couple of years ago so the details are not fresh, but IIRC 
> before tuning a restart took 5-10 seconds, followed by load climbing to 
> 10-20 (on a 4-processor machine) with a 2-5 minute slow recovery of long 
> request times. In particularly pathological cases requests started 
> timing out, which resulted in workers being killed and new workers 
> needing to warm up on an already overloaded system. After tuning, the 
> rolling restart took 30-40 seconds, but the load barely budged and 
> request processing times stayed constant.
> 
> .seth
> 
> On Nov 29, 2012, at 5:24 PM, Eric Wong <[email protected]> wrote:
> 
>> Tony Arcieri <[email protected]> wrote:
>>> On Thu, Nov 29, 2012 at 3:34 PM, Lawrence Pit <[email protected]> 
>>> wrote:
>>>> 
>>>> Perhaps it's possible to warm up the workers in the unicorn after_fork 
>>>> block?
>>> 
>>> Are people doing this in production (i.e. moving the termination of
>>> the old master from before_fork to after_fork)? My worry is that
>>> during this warming process you will have 2X the normal number of
>>> Unicorn workers active at the same time, which could potentially lead
>>> to exhaustion of system resources (i.e. RAM).
>> 
>> I haven't done any terminations in the *_fork hooks for a long time.
>> I just let 2x the normal workers run for a bit before sending SIGQUIT.
>> 
>> That said, I usually have plenty of RAM (and DB connections) to spare.
>> Excessive CPU-bound loads are handled very well nowadays.
>> _______________________________________________
>> Unicorn mailing list - [email protected]
>> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
>> Do not quote signatures (like this one) or top post when replying
> 
> 
> ------------------------------
> 
> Message: 3
> Date: Fri, 30 Nov 2012 10:15:17 +0100
> From: Andrew Stewart <[email protected]>
> To: unicorn list <[email protected]>
> Subject: Re: When a client terminates a connection
> Message-ID:
>       <[email protected]>
> Content-Type: text/plain; charset=utf-8
> 
> On 30 Nov 2012, at 00:04, Eric Wong wrote:
>> Iñaki Baz Castillo <[email protected]> wrote:
>>> I don't understand why the application on top of the HTTP/TCP layer
>>> (so the Rails app) should be interrupted because the TCP connection is
>>> closed prematurely. The HTTP layer (which involves a single TCP
>>> connection for each HTTP request/response pair) is a layer below the
>>> application layer, should not disturb the Rails app IMHO.
>>> 
>>> Is this a design issue of Rails?
>> 
>> I suspect the only possibility is when Rails is reading the multipart
>> request body in a largish upload (any upload request which can't fit
>> in socket buffers).
> 
> On the three occasions I've had this problem it's been a tiny request.
> 
> The user clicks an ordinary Rails delete link.  This POSTs to a URL with two 
> parameters: an authenticity token and the _method.
> 
> The corresponding destroy action looks like:
> 
>    def destroy
>      client = Client.find params[:id]  # 1  (ignores soft-deleted records)
>      client.soft_delete                # 2  (sets a deleted_at timestamp)
>      client.brands.each(&:destroy)     # 3  (a client has many brands)
>      redirect_to clients_path
>    end
> 
> Nginx logged:
> 
>    x.xxx.xx.xx - - [27/Nov/2012:14:40:28 +0000] "POST /clients/2248 HTTP/1.1" 
> 499 0 "https://example.com/clients/2248/edit" "Mozilla/5.0 (compatible; MSIE 
> 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
>    x.xxx.xx.xx - - [27/Nov/2012:14:40:29 +0000] "POST /clients/2248 HTTP/1.1" 
> 404 592 "https://example.com/companies/2248/edit" "Mozilla/5.0 (compatible; 
> MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
> 
> You can see the first request was terminated by the user.  Hot on its heels 
> the user tried again and got a 404.
> 
> The 404 was triggered by step 1 above: the first request had successfully 
> soft-deleted the client.  However the client's brands were still in the 
> database.  The first request had executed steps 1 and 2 but not step 3.
> 
> Unicorn didn't log anything.  Rails didn't log the first request but did log 
> the second.
> 
> I understand the steps to mitigate the problem...but I remain confused about 
> why the Rails app was interrupted because the TCP connection was closed 
> prematurely.
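> 
> (One mitigation sketch, for the archives: wrap steps 2 and 3 in a
> database transaction so an interrupted worker can never leave a client
> soft-deleted while its brands survive. Hypothetical, assuming the
> ActiveRecord models above.)
> 
> ```ruby
> def destroy
>   client = Client.find(params[:id])
>   ActiveRecord::Base.transaction do
>     client.soft_delete                # steps 2 and 3 now commit
>     client.brands.each(&:destroy)     # together, or not at all
>   end
>   redirect_to clients_path
> end
> ```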
> 
> Yours,
> Andrew Stewart
> 
> ------------------------------
> 
> Message: 4
> Date: Fri, 30 Nov 2012 18:20:51 +0100
> From: Yoav Aner <[email protected]>
> To: [email protected]
> Subject: using unicorn with logging-rails gem
> Message-ID:
>       <ca+ma6phvkcwudvedsadj5ott7c3xftgnowoexudfgg4oj2k...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Hope this is the right place to ask questions about unicorn. I'm not
> on the mailing list though, so please CC my email address.
> 
> Since moving from phusion passenger to Unicorn, it seems like one
> functionality of the logging-rails gem
> (https://github.com/TwP/logging-rails) is (half) broken.
> 
> Logging-rails supports an email appender, which aggregates log
> messages (typically ERROR and above) and then emails them. However,
> since moving to Unicorn we stopped getting those emails.
> 
> The interesting thing, however, is that when we USR2 signal our unicorn
> and it restarts itself, those emails somehow get flushed out and sent
> all at once... I tried sending a USR1 signal to see if it has something
> to do with log flushing, but this does not seem to make any difference.
> 
> Any ideas how to investigate this / make it work again??
> 
> Cheers
> Yoav
> 
> 
> ------------------------------
> 
> Message: 5
> Date: Fri, 30 Nov 2012 19:15:55 +0000
> From: Eric Wong <[email protected]>
> To: unicorn list <[email protected]>
> Cc: Yoav Aner <[email protected]>
> Subject: Re: using unicorn with logging-rails gem
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=us-ascii
> 
> Yoav Aner <[email protected]> wrote:
>> Hope this is the right place to ask questions about unicorn. I'm not
>> on the mailing list though, so please CC my email address.
> 
> No problem!
> 
>> Since moving from phusion passenger to Unicorn, it seems like one
>> functionality of the logging-rails gem
>> (https://github.com/TwP/logging-rails) is (half) broken.
>> 
>> Logging-rails supports an email appender, which aggregates log
>> messages (typically ERROR and above) and then emails them. However,
>> since moving to Unicorn we stopped getting those emails.
>> 
>> The interesting thing, however, is that when we USR2 signal our unicorn
>> and it restarts itself, those emails somehow get flushed out and sent
>> all at once... I tried sending a USR1 signal to see if it has something
>> to do with log flushing, but this does not seem to make any difference.
> 
> USR1 only affects logging to regular files, so is unlikely to help
> in your case.
> 
> Are you using preload_app true or false?  If true, perhaps some shared
> resource (socket or temporary file) isn't being handled correctly?
> 
>> Any ideas how to investigate this / make it work again??
> 
> I'm not familiar with the "logging" or "logging-rails" gems.
> I took a brief look at them but couldn't figure anything out;
> can you ask the author of those gems?
> 
> If you do figure something out, please let the list know so it can
> possibly help others in the future.
> 
> 
> ------------------------------
> 
> Message: 6
> Date: Fri, 30 Nov 2012 19:26:20 +0000
> From: Eric Wong <[email protected]>
> To: unicorn list <[email protected]>
> Subject: Re: When a client terminates a connection
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=utf-8
> 
> Andrew Stewart <[email protected]> wrote:
>> On 30 Nov 2012, at 00:04, Eric Wong wrote:
>>> Iñaki Baz Castillo <[email protected]> wrote:
>>>> I don't understand why the application on top of the HTTP/TCP layer
>>>> (so the Rails app) should be interrupted because the TCP connection is
>>>> closed prematurely. The HTTP layer (which involves a single TCP
>>>> connection for each HTTP request/response pair) is a layer below the
>>>> application layer, should not disturb the Rails app IMHO.
>>>> 
>>>> Is this a design issue of Rails?
>>> 
>>> I suspect the only possibility is when Rails is reading the multipart
>>> request body in a largish upload (any upload request which can't fit
>>> in socket buffers).
>> 
>> On the three occasions I've had this problem it's been a tiny request.
>> 
>> The user clicks an ordinary Rails delete link.  This POSTs to a URL
>> with two parameters: an authenticity token and the _method.
>> 
>> The corresponding destroy action looks like:
>> 
>>    def destroy
>>      client = Client.find params[:id]  # 1  (ignores soft-deleted records)
>>      client.soft_delete                # 2  (sets a deleted_at timestamp)
>>      client.brands.each(&:destroy)     # 3  (a client has many brands)
>>      redirect_to clients_path
>>    end
>> 
>> Nginx logged:
>> 
>>    x.xxx.xx.xx - - [27/Nov/2012:14:40:28 +0000] "POST /clients/2248 
>> HTTP/1.1" 499 0 "https://example.com/clients/2248/edit" "Mozilla/5.0 
>> (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
>>    x.xxx.xx.xx - - [27/Nov/2012:14:40:29 +0000] "POST /clients/2248 
>> HTTP/1.1" 404 592 "https://example.com/companies/2248/edit" "Mozilla/5.0 
>> (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"
>> 
>> You can see the first request was terminated by the user.  Hot on its
>> heels the user tried again and got a 404.
> 
> Weird, and this is the only nginx instance that saw this request?
> 
> Also, according to your logs above, the two requests above came from
> different Referer pages, so perhaps there were more requests involved
> that hit a different nginx?
> 
>> The 404 was triggered by step 1 above: the first request had
>> successfully soft-deleted the client.  However the client's brands
>> were still in the database.  The first request had executed steps 1
>> and 2 but not step 3.
>> 
>> Unicorn didn't log anything.  Rails didn't log the first request but
>> did log the second.
> 
> Odd.  It's been a long time since I looked at Rails; but doesn't Rails
> log when a request _starts_?  Also, doesn't Rails log all DB queries?
> 
> Or, by any chance, do you have query logging enabled in your DB to track
> this down?
> 
> 
> ------------------------------
> 
> Message: 7
> Date: Fri, 30 Nov 2012 20:59:24 +0100
> From: Yoav Aner <[email protected]>
> To: Eric Wong <[email protected]>
> Cc: unicorn list <[email protected]>
> Subject: Re: using unicorn with logging-rails gem
> Message-ID:
>       <CA+mA6PhPvZm80bzarQhGgvQ4goOwFwbQREhbBDM==m7so6o...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Hi Eric,
> 
> Thanks a bunch for getting back so quickly on this.
> 
> I followed your suggestion and tried with `preload_app = false`, and it
> looks like this fixes the problem! Any idea what can go wrong when it's
> set to true, or how I can go about fixing this?
> 
> I haven't yet contacted the logging / logging-rails project. Perhaps
> that's a good idea. Considering the gem did/does work fine on my dev
> environment and with phusion passenger (and now it seems also with
> Unicorn, albeit with preload_app = false), I wasn't sure whether the
> problem is with this gem or elsewhere.
> 
> Any tips on how to investigate this further or resolve this, or what
> information I can give the gem maintainer(s) would be much
> appreciated.
> 
> Cheers
> Yoav
> 
> 
> 
> ------------------------------
> 
> Message: 8
> Date: Fri, 30 Nov 2012 20:05:35 +0000
> From: Eric Wong <[email protected]>
> To: unicorn list <[email protected]>
> Cc: Yoav Aner <[email protected]>
> Subject: Re: using unicorn with logging-rails gem
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=us-ascii
> 
> Yoav Aner <[email protected]> wrote:
>> Hi Eric,
>> 
>> Thanks a bunch for getting back so quickly on this.
>> 
>> I followed your suggestion and tried with `preload_app = false`, and it
>> looks like this fixes the problem! Any idea what can go wrong when it's
>> set to true, or how I can go about fixing this?
> 
> preload_app is false by default because it's the most likely to work
> for all gems.
> 
> I suspect there's a shared resource (file/socket) or some cached value
> that's initialized before fork and loses state after forking.
> 
> It's a fairly common issue with database libraries.
> 
>> I haven't yet contacted the logging / logging-rails project. Perhaps that's
>> a good idea. Considering the gem did/does work fine on my dev environment
>> and with phusion passenger (and now it seems also with Unicorn, albeit with
>> preload_app = false), I wasn't sure whether the problem is with this gem or
>> elsewhere.
>> 
>> Any tips on how to investigate this further or resolve this, or what
>> information I can give the gem maintainer(s) would be much appreciated.
> 
> I would definitely contact the maintainer of logging/logging-rails
> on how to reinitialize any state after forking.
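> 
> A hedged sketch of such reinitialization in the unicorn config; the
> Logging.reopen call is illustrative (check with the gem author for the
> real per-fork reset, if any):
> 
> ```ruby
> # unicorn.conf.rb (sketch)
> preload_app true
> 
> after_fork do |server, worker|
>   # reset any appender state (sockets, buffers, threads) the worker
>   # inherited from the preloaded master
>   Logging.reopen if defined?(Logging) && Logging.respond_to?(:reopen)
> end
> ```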
> 
> 
> ------------------------------
> 
> _______________________________________________
> mongrel-unicorn mailing list
> [email protected]
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> 
> 
> End of mongrel-unicorn Digest, Vol 41, Issue 10
> ***********************************************

_______________________________________________
Unicorn mailing list - [email protected]
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying
