I see.
Thanks for the answer.

How well can Causes handle load spikes? I assume it's probably dependent on
the amount of RAM the beanstalkd server had, the pattern of usage, and the
way jobs were used at Causes.

I think the flow-to-disk scenario is very important for real fault tolerance
under peak load.
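For illustration, here is a minimal sketch of the flow-to-disk idea I have in
mind (all names here are hypothetical, not beanstalkd code; a real
implementation would spill to an append-only file rather than an in-memory
list):

```python
from collections import deque

class FlowToDiskQueue:
    """Hypothetical sketch: keep up to mem_limit jobs in RAM;
    overflow jobs are spilled to "disk" and reloaded in chunks
    once the in-memory queue drains. FIFO order is preserved."""

    def __init__(self, mem_limit=3, chunk=2):
        self.mem = deque()        # jobs held in RAM
        self.mem_limit = mem_limit
        self.chunk = chunk        # how many jobs to reload at a time
        self.disk = []            # stand-in for an append-only spill file

    def put(self, job):
        if len(self.mem) < self.mem_limit and not self.disk:
            self.mem.append(job)
        else:
            # RAM is full (or disk already holds newer jobs): flow to disk
            self.disk.append(job)

    def get(self):
        if not self.mem and self.disk:
            # RAM drained: pull the next chunk back from disk
            self.mem.extend(self.disk[:self.chunk])
            self.disk = self.disk[self.chunk:]
        return self.mem.popleft() if self.mem else None
```

With mem_limit=3, putting five jobs leaves three in RAM and two on "disk";
once the three in-RAM jobs are consumed, the next get() reloads a chunk of
two, so all five come back in order.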

Eran

On Sat, Oct 24, 2009 at 9:31 AM, Keith Rarick <[email protected]> wrote:

>
> On Fri, Oct 23, 2009 at 11:01 PM, Eran Sandler <[email protected]>
> wrote:
> > Hi all,
> > I wanted to know how the persistency in beanstalk works.
> > Does it always put the job in both RAM and disk and just uses the disk as
> a
> > persistent storage or does it also feature a flow-to-disk scenario in
> which
> > when RAM fills up it will always write new jobs to the disk and when all
> > jobs in RAM are done, it will read a chunk from the disk to continue the
> > work.
>
> Everything is stored in memory. There's been some talk of using disk
> to allow storing more jobs than would fit in memory, but nothing more
> than rough ideas.
>
> kr
>
> >
>
