What I am looking for is a way to store my data as XML using Zope and
run map/reduce (or something very much like it) on live data.

1. Should I try to see if LocalFS will read/write XML files on the
Hadoop filesystem?

2. Or should I look for Python equivalents to Hadoop?

3. Or should I just use Java for this part of my application?

Which approach (or something else) would you take?
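For option 2, the map/reduce pattern itself needs nothing Hadoop-specific: it is just a per-file map step followed by a merge step. Here is a minimal sketch in plain Python, assuming the job is something like aggregating element-tag counts across a set of XML files (the tag-counting task is only an illustration, not something from this thread; on Hadoop the map calls would be farmed out to worker nodes instead of running serially):

```python
import xml.etree.ElementTree as ET
from collections import Counter
from functools import reduce

def map_xml(path):
    """Map step: parse one XML file, emit a Counter of tag -> occurrence count."""
    tree = ET.parse(path)
    return Counter(elem.tag for elem in tree.iter())

def reduce_counts(acc, part):
    """Reduce step: merge one partial result into the accumulator."""
    acc.update(part)
    return acc

def count_tags(paths):
    # Hadoop would distribute the map() over many nodes; here it runs serially.
    return reduce(reduce_counts, map(map_xml, paths), Counter())
```

The same mapper/reducer pair could also be exposed as stdin/stdout scripts and run under Hadoop Streaming, which lets you keep the job in Python while Hadoop handles distribution.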


On 3/26/08, Andrew Milton <[EMAIL PROTECTED]> wrote:
> +-------[ Tim Nash ]----------------------
> | Does localfs work with virtual file systems?
> If it can be "mounted" and looks like a file system and smells like a
> file system, then LocalFS, or in fact anything else, shouldn't know any
> different.
> | Is there a zope mapping product that maps zope to a distributed file system?
> You don't really explain in what way you want it distributed. Zope is an
> application server, so what you're asking for doesn't make any sense.
> You can certainly "distribute" your ZODB across as many file systems as you
> want right now. You can certainly just plonk your Data.fs ZODB on any
> filesystem you want distributed or otherwise.
> If you want a "smarter" ZODB or a different STORAGE layer that's a different
> kettle of fish, but, also NOT what you previously asked for.
> | What is the best way to run map/reduce on xml files that are stored in the
> | ZODB?
> The same way you run map/reduce on xml files that are stored anywhere,
> although one could contend that having XML files in a ZODB might be at
> least one too many levels of abstraction.
> --
> Andrew Milton
Zope maillist  -  Zope@zope.org