On 28 Feb 2012, at 19:15, Yanni Chiu wrote:

> On 28/02/12 11:03 AM, Norbert Hartl wrote:
>> 
>> I first used image persistence, but the image grew too large. Then I added
>> Fuel as a journal, written before each image save, so that I could recover
>> from an emergency.
>> Now we are redoing the persistence part. An account object graph is dissected
>> into a configuration part and a payload part. The payload is written
>> directly to MongoDB in JSON format. The configuration part will be
>> written as a Fuel blob to Mongo. The plan is that at startup a configured
>> number of account Fuel blobs are loaded and started. Those
>> configurations have their own processes that write the payload back to
>> Mongo. Etc.
> 
> That's similar to what I do, except no image save and no mongo db.
> 
As soon as the Fuel blobs are in Mongo (or on disk), no image save is needed
any more, and I can start fresh every time, which is more reliable. Mongo is
quite nice because I have a few hundred megabytes of data. Via cron it runs a
map/reduce task to create statistics about the data that has been fetched.
The statistical data is then displayed in charts on the web site.
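For anyone following along without Fuel at hand, the blob part of this scheme can be sketched roughly like so in Pharo. The Mongo insert itself depends on which driver the image has loaded (Voyage, MongoTalk, ...), so it is only indicated as a comment, and `accountConfiguration` is a hypothetical stand-in for the real object:

```smalltalk
"Sketch: serialize the configuration part of an account into a Fuel
 blob (a ByteArray) suitable for storing as a binary field in MongoDB.
 accountConfiguration is a placeholder, not a real class in this thread."
| blob restored |
blob := FLSerializer serializeToByteArray: accountConfiguration.

"...here the blob would be stored in Mongo as a binary field,
 using whatever Mongo driver is loaded in the image..."

"At startup, fetch the blob back from Mongo and materialize it,
 then start the account's own process from the restored configuration."
restored := FLMaterializer materializeFromByteArray: blob.
```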

Norbert

> Here's how it goes:
> - on restart the image reconstructs a Pier kernel saved via Fuel to a file
> - included in the Pier kernel is what you're calling the "configuration part" 
> - there are custom (non-std Pier) components here.
> - the components in the reconstructed image can read/write a "payload" part, 
> which is serialized to files using Fuel and SandstoneDb.
> - additionally, a "sub-tree" of the Pier kernel can be exported to a file 
> using Fuel serialization, and later imported using Fuel deserialization.
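The export/import of a sub-tree described above maps onto Fuel's file-based convenience API. A minimal sketch, where `subtree` and the file name are stand-ins for the actual Pier kernel fragment and path:

```smalltalk
"Sketch: export a sub-tree of the Pier kernel to a file with Fuel.
 subtree is a placeholder for the kernel fragment being exported."
| imported |
FLSerializer serialize: subtree toFileNamed: 'kernel-subtree.fuel'.

"Later — possibly in a fresh image — import it again:"
imported := FLMaterializer materializeFromFileNamed: 'kernel-subtree.fuel'.
```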
> 
> No URL available at the moment.
> 
> Aside - before Fuel/SandstoneDb, I was using Glorp/PostgreSQL. I've gained a 
> lot of simplicity, but lost sequence numbers and object caching (and maybe 
> more things, that have not been a problem so far). I've written a seqno 
> replacement, but the lack of object caching is not a problem at the moment 
> (i.e. small data sets, so far).