Thanks for the info. Good to know I'm not the only one facing this problem
and that there's a well-documented solution. Great.

On Thursday, June 13, 2013 2:57:20 PM UTC+2, dB. wrote:
>
> Having gone down that path, we eventually gave up. High-resolution
> images can't be processed with ImageMagick without 2-4 GB of RAM. We now
> run our workers on EC2, as documented in
> http://artsy.github.io/blog/2012/01/31/beyond-heroku-satellite-delayed-job-workers-on-ec2/.
>
> On Thu, Jun 13, 2013 at 5:20 AM, Daniel Farina <[email protected]> wrote:
>
>> On Thu, Jun 13, 2013 at 2:03 AM, Josal <[email protected]> wrote:
>> > What I plan is to stop the process and flag it as "pending" in my
>> > back office, then run it manually later on my own machine as an
>> > exception.
>> >
>> > I've used 2X dynos, but all the available memory gets used and R14
>> > errors are still thrown. ImageMagick simply works this way, with a lot
>> > of memory. I've tried limiting memory use by delegating to the disk
>> > cache instead of the memory cache
>> > (http://www.imagemagick.org/script/command-line-options.php#limit,
>> > suggested in some places, example here:
>> > http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=23090),
>> > but no change.
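>> >
>> > (For concreteness, that kind of limit looks roughly like this when
>> > shelling out from a worker; the values and file names here are only
>> > illustrative:)
>> >
>> >     import subprocess
>> >
>> >     # Cap ImageMagick's pixel-cache heap and memory-mapped usage so
>> >     # large images spill to the disk cache instead of RAM.
>> >     subprocess.run(
>> >         ["convert",
>> >          "-limit", "memory", "64MiB",   # pixel-cache heap ceiling
>> >          "-limit", "map", "128MiB",     # memory-mapped file ceiling
>> >          "input.tif", "output.jpg"],
>> >         check=True)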
>>
>> Tricky.  Nominally this is what Linux cgroups was invented to help
>> with, but I think the amount of power delegated to non-root users (and
>> thus, on Heroku) is minimal.
>>
>> Here are two options that come to mind:
>>
>> 1) Somehow delegate these possibly expensive processes to their own 2X
>> dyno, taking the computation out-of-band: e.g. a queuing strategy
>> (sketched below), multi-app delegation (one app posting to another), or
>> the Heroku API. Be careful with that last one, as it's basically
>> automated "spend money": one can hit API limits, and the control plane
>> has worse availability than standing processes...but it has upsides,
>> like burst capacity and zero cost when there's nothing to do.
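>>
>> For the queuing flavor, a minimal sketch using RQ (just one option
>> among many; the task name and bucket URL below are hypothetical):
>>
>>     from redis import Redis
>>     from rq import Queue
>>
>>     # Web dyno: enqueue and return immediately. A dedicated 2X worker
>>     # dyno (e.g. running `rq worker images`) does the heavy lifting
>>     # out-of-band.
>>     q = Queue("images", connection=Redis())
>>     q.enqueue("tasks.process_image", "s3://bucket/huge-scan.tif")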
>>
>> 2) Use /proc/[self|pid]/statm or /proc/[self|pid]/status (which has
>> similar information, but is more human-targeted) to inspect the RSS of
>> a process over time and rein in over-bloated processes. A process could
>> watch itself grow this way, which is useful if it can check itself
>> frequently enough to write a "too big, can't do" record and give up.
>>
>> If one uses /proc/pid/statm, be aware one will have to multiply by the
>> system page size, which has been 4096 bytes for years but *could*
>> change any time, so consider 'getconf PAGESIZE' to pick up the
>> multiplier.
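>>
>> Roughly, such a self-watchdog could look like this (a sketch: the
>> 1 GiB budget and the give-up behavior are placeholders to tune):
>>
>>     import os
>>     import sys
>>
>>     PAGE = os.sysconf("SC_PAGE_SIZE")  # the 'getconf PAGESIZE' multiplier
>>     BUDGET = 1 * 1024 ** 3             # e.g. 1 GiB; tune to the dyno size
>>
>>     def rss_bytes(pid="self"):
>>         # /proc/<pid>/statm fields are counts of pages:
>>         # size resident shared text lib data dt
>>         with open("/proc/%s/statm" % pid) as f:
>>             return int(f.read().split()[1]) * PAGE
>>
>>     def check_or_give_up():
>>         if rss_bytes() > BUDGET:
>>             # write the "too big, can't do" record here, then bail
>>             sys.exit("image too large to process on this dyno")
>>
>> Calling check_or_give_up() between processing steps may be frequent
>> enough to bail out before the platform kills the dyno.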
>>
>
>
> -- 
>
> dB. | Moscow - Geneva - Seattle - New York
> code.dblock.org - @dblockdotorg <http://twitter.com/#!/dblockdotorg> - 
> artsy.net - github/dblock <https://github.com/dblock>
>
