Cynthia, yes, that's a good idea: just create the file and have it ready.

I think that would work on a server running multiple Mongrels, since 
they all see the same filesystem, but the app is on Heroku. Their 
documentation at http://docs.heroku.com/constraints says:

Your app is compiled into a slug for fast distribution across the 
dyno grid (http://heroku.com/how/dyno_grid). The filesystem for the 
slug is read-only. This means you cannot dynamically write to the 
filesystem for semi-permanent storage. The following types of 
behaviors are not supported:
    * Caching pages in the public directory
    * Saving uploaded assets to local disk (e.g. with attachment_fu 
or paperclip)
    * Writing full-text indexes with Ferret
    * Writing to a filesystem database like SQLite or GDBM
    * Accessing a git repo for an app like git-wiki
There are two directories that are writeable: ./tmp and ./log (under 
your application root). If you wish to drop a file temporarily for 
the duration of the request, you can write to a filename like 
#{RAILS_ROOT}/tmp/myfile_#{Process.pid}. There is no guarantee that 
this file will be there on subsequent requests (although it might 
be), so this should not be used for any kind of permanent storage.
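A minimal standalone sketch of that per-request tmp-file pattern (using Dir.tmpdir in place of #{RAILS_ROOT}/tmp so it runs anywhere; the row data is made up):

```ruby
require "tmpdir"

# Sketch of the tmp-file pattern from the Heroku docs: write a scratch
# file keyed by process id, valid only within the current request.
# Dir.tmpdir stands in here for "#{RAILS_ROOT}/tmp"; the rows are made up.
rows = [["name", "count"], ["widgets", "3"]]
path = File.join(Dir.tmpdir, "myfile_#{Process.pid}.csv")
File.open(path, "w") do |f|
  rows.each { |row| f.puts(row.join(",")) }
end
# There is no guarantee this file survives to a subsequent request,
# so it only helps for work done within a single request cycle.
```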

So I can't write out the file and expect it to be there when they 
click 'Download'. The right answer would likely be to write it to S3, 
but I don't have time to do that now; I need something quick and 
dirty. I guess I'll write it to the database.
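The database plan could be as simple as stashing the generated CSV in a text column and keeping only its id in the session. A hedged sketch of that flow (FakeTable stands in for an ActiveRecord model with a :body text column; all names here are assumptions, not the real app's):

```ruby
require "csv"

# FakeTable mimics an ActiveRecord model with a :body text column so the
# sketch runs standalone; the real app would use create!/find against
# its database instead.
class FakeTable
  @@rows = {}
  def self.create!(body)
    id = @@rows.size + 1
    @@rows[id] = body
    id
  end
  def self.find(id)
    @@rows.fetch(id)
  end
end

# After rendering the HTML table, serialize the same data once...
data = [["name", "count"], ["widgets", "3"]]
csv  = CSV.generate { |out| data.each { |row| out << row } }
id   = FakeTable.create!(csv)   # store the blob; keep only the small id in the session

# ...then the 'Download' action can fetch it without recomputing:
stored = FakeTable.find(id)
```

In Rails the download action would hand `stored` to send_data with a text/csv content type.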

BTW, other than this puzzle I like Heroku: no Capistrano script, no 
deployment problems, just 'git push heroku' and it's running in less 
than 30 seconds.

Thanks.

Scott


At 07:17 PM 10/7/2009, you wrote:

>Quoting Scott Olmsted <[email protected]>:
> >
> > I'm creating an application that displays a table of information
> > based on user input. It is time-expensive to create the data for the table.
> >
> > The user can, after seeing the HTML table, click on a 'Download' link
> > and receive the data in CSV form.
> >
> > I don't want to compute the data again. Is there a way to persist the
> > data without putting it in the database? It could be too large to fit
> > in the session, so that's out.
>
>How easy would it be to pull the data and then create both sets of
>output at the same time? If the table is one giant data structure (not
>a bunch of separate chunks), you might be able to hold the data in
>memory, build the HTML, and then write the CSV file to the file
>system. You'll end up creating some CSV files that are never
>downloaded - but in the cases where the output is approved and the
>user wants the download, it is precomputed. You could have a cron job
>clear the cache after an hour so you don't accumulate unwanted files.
>
>
--~--~---------~--~----~------------~-------~--~----~
SD Ruby mailing list
[email protected]
http://groups.google.com/group/sdruby
-~----------~----~----~----~------~----~------~--~---
