+-------[ Tim Nash ]----------------------
| > And if your data is large enough to warrant using hadoop you're never
| > going to store them in Zope.
| and off-load the majority of the indexing, why not?
Because the default HDFS block size is 64 MB, which means you really
want each object you store to be at least 64 MB in size (or close
to it). Files of that size are not something Zope is good at serving
up out of the ZODB.
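For reference, that block size is a per-cluster setting in hdfs-site.xml; a sketch of the relevant stanza, assuming the pre-Hadoop-2 property name dfs.block.size and the old 64 MB default:

```xml
<!-- hdfs-site.xml: the block size HDFS splits files into.
     67108864 bytes = 64 MB, the historical default. -->
<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
</property>
```

Anything much smaller than one block still occupies a NameNode entry of its own, which is why lots of small ZODB-sized objects are a poor fit.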
| > Procfs is a virtual filesystem, devfs is a virtual filesystem. smb
| OK, hold on while I write a distributed map/reduce system that runs on devfs..
They ARE working on exposing it via DAV... so there's hope for you
| > http://www.stat.purdue.edu/~sguha/code.html#hadoopy
| Thanks for this link (really). I hope this library develops more. It
| looks interesting. I was only thinking along these lines:
| > Although it would probably be a lot easier to use ctypes on the c lib
| > and make a nicer interface using that.
| Please explain. Would your idea work better with localfs?
No, it just wouldn't be as ugly to assemble as trying to use swig and
hand-patching Makefiles, and you can build a pythonic layer around it
that you can put your own logic into. But hey, you have SOMETHING to
get started with.
Zope maillist - Zope@zope.org
** No cross posts or HTML encoding! **