Yes, it hadn't occurred to me yet that Heroku's read-only "slug" 
filesystem was a problem, so I didn't mention it (I haven't worked on a 
project with caching yet). But then Heroku provides the answer with memcached:

http://docs.heroku.com/memcached
...
CACHE = MemCache.new(servers, :namespace => namespace)
...
The CACHE constant can now be used anywhere in the application to 
access the memcached cluster:
 >> CACHE.set("hello:world:1234", "Hello World!")
 >> CACHE.get("hello:world:1234")
 => "Hello World!"

Unfortunately, memcached on Heroku is still in Beta and unavailable. 
So I'll follow Jarin's suggestion:

Why not just store the data in a separate table as a serialized object
or CSV in a blob and run a job to clear out old ones? Then you can
just store the id of the stored data in the session.

Except the volume will be very low, so I'll just clear out the old 
blobs when new blobs are added. Actually, since there will only be a 
few users, I could just store one blob per user and page and keep 
replacing them. No cleanup required at all!
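
Roughly what I have in mind, as a sketch (the CachedResult model and its 
user_id/page/data columns are names I'm making up here):

  # One blob per user and page; overwrite on save, so no cleanup job.
  class CachedResult < ActiveRecord::Base
    def self.store(user_id, page, csv_string)
      record = find_or_initialize_by_user_id_and_page(user_id, page)
      record.update_attributes!(:data => csv_string)
      record
    end

    def self.fetch(user_id, page)
      record = find_by_user_id_and_page(user_id, page)
      record && record.data
    end
  end

The 'Download' action then just pulls the blob back out:

  # controller download action (current_user and params[:page] assumed)
  def download
    send_data CachedResult.fetch(current_user.id, params[:page]),
              :type => 'text/csv', :filename => 'results.csv'
  end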

Thanks Cynthia, Jarin, and Jason,

Scott


At 09:31 AM 10/8/2009, you wrote:
>Funny how we often leave out the most critical piece of information 
>in our initial question.  You're on Heroku, use the built-in 
>memcache - http://docs.heroku.com/memcached
>
>It's precisely what you want (or I've totally misunderstood).
>
>On Oct 7, 2009, at 10:44 PM, Scott Olmsted wrote:
>
>>
>>Cynthia, yes, that's a good idea: just create the file and have it ready.
>>
>>I think that would work on a server running multiple mongrels, 
>>since they all see the same file system, but the app is on Heroku. 
>>Their documentation at http://docs.heroku.com/constraints says:
>>
>>Your app is compiled into a slug for fast distribution across the 
>>dyno grid (http://heroku.com/how/dyno_grid). The filesystem for the 
>>slug is read-only. This means you cannot dynamically write to the 
>>filesystem for semi-permanent storage. The following types of 
>>behaviors are not supported:
>>    * Caching pages in the public directory
>>    * Saving uploaded assets to local disk (e.g. with attachment_fu 
>> or paperclip)
>>    * Writing full-text indexes with Ferret
>>    * Writing to a filesystem database like SQLite or GDBM
>>    * Accessing a git repo for an app like git-wiki
>>There are two directories that are writeable: ./tmp and ./log 
>>(under your application root). If you wish to drop a file 
>>temporarily for the duration of the request, you can write to a 
>>filename like #{RAILS_ROOT}/tmp/myfile_#{Process.pid}. There is no 
>>guarantee that this file will be there on subsequent requests 
>>(although it might be), so this should not be used for any kind of 
>>permanent storage.
>>
>>So I can't write out the file and expect it to be there when they 
>>click 'Download'. The right answer would likely be to write it to 
>>S3, but I don't have the time to do that now; I need something 
>>quick and dirty. I guess I'll write it to the database.
>>
>>BTW, other than this puzzle I like Heroku: no Capistrano script, no 
>>deployment problems, just 'git push heroku' and it's running in 
>>less than 30 seconds.
>>
>>Thanks.
>>
>>Scott
>>
>>
>>At 07:17 PM 10/7/2009, you wrote:
>>
>>>Quoting Scott Olmsted <[email protected]>:
>>> >
>>> > I'm creating an application that displays a table of information
>>> > based on user input. It is time-expensive to create the data 
>>> for the table.
>>> >
>>> > The user can, after seeing the HTML table, click on a 'Download' link
>>> > and receive the data in CSV form.
>>> >
>>> > I don't want to compute the data again. Is there a way to persist the
>>> > data without putting it in the database? It could be too large to fit
>>> > in the session, so that's out.
>>>
>>>How easy would it be to pull the data and then create both sets of
>>>output at the same time? If the table is one giant data structure (not
>>>a bunch of separate chunks), you might be able to hold the data in
>>>memory, build the html, and then write the csv file to the file
>>>system. You'll end up creating some csv files that are never
>>>downloaded - but in the cases where the output is approved and the
>>>user wants the download, it is precomputed. You could have a cron job
>>>clear the cache after an hour so you don't accumulate unwanted files.
>>>
>
>
>