Mikko Ohtamaa wrote:
Hi,
We are facing a problem where we need to store 270 fields per item. The
fields are laboratory measurements of a patient: 40 measurement values for
each of 7 timepoints. The fields need to be accessed per timepoint, per
measurement, and all at once for a single patient. There will be over
10,000 patients, distributed under different hospital items (tree-like, for
permission reasons). Data is never accessed for two patients at once, so we
don't need to scale the catalog.
So I am curious about how we make Plone scale well for this scenario.
- The overhead of a field in an AT schema? Should we use the normal storage
backend (one Python attribute per value), or can we pack our field values
into a list/dict via a custom storage backend to make it faster? (A storage
sketch follows this list.)
- The wake-up overhead of an AT object? Should we distribute our fields
across several ZODB objects, e.g. one per timepoint, or put all values into
a single ZODB object? All fields for a patient are needed at once on some
views. (A persistence sketch also follows below.)
- One big Zope object vs. a few smaller Zope objects?
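
To make the custom-storage option concrete, this is roughly what we have in
mind - a minimal sketch following the pattern of Archetypes'
AttributeStorage; the class name and the _measurements attribute are
invented for illustration:

from Products.Archetypes.Storage import Storage

class TimepointDictStorage(Storage):
    """Keep all measurement values in a single dict attribute on the
    instance, instead of one persistent attribute per AT field."""

    def get(self, name, instance, **kwargs):
        data = getattr(instance, '_measurements', None)
        if data is None or name not in data:
            raise AttributeError(name)
        return data[name]

    def set(self, name, instance, value, **kwargs):
        data = getattr(instance, '_measurements', None) or {}
        data[name] = value
        # Reassign the plain dict so the persistent instance registers
        # the change.
        instance._measurements = data
        instance._p_changed = 1

    def unset(self, name, instance, **kwargs):
        data = getattr(instance, '_measurements', {})
        if name in data:
            del data[name]
            instance._measurements = data
            instance._p_changed = 1

Fields would then be declared with storage=TimepointDictStorage() in the
schema.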
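
And this is the per-timepoint split we are considering, in plain ZODB terms
(independent of AT; class names invented):

from persistent import Persistent
from persistent.mapping import PersistentMapping

class TimepointRecord(Persistent):
    """One ZODB object per timepoint: the ~40 values that are read
    and written together."""
    def __init__(self, values=None):
        # A plain dict, always replaced wholesale, so we don't rely on
        # _p_changed tracking for in-place mutation.
        self.values = dict(values or {})

class PatientData(Persistent):
    """One root object per patient holding the 7 timepoint records.
    Reading one timepoint wakes only that record; the all-fields view
    wakes all 7 records at once."""
    def __init__(self):
        self.timepoints = PersistentMapping()  # timepoint index -> record

    def set_timepoint(self, index, values):
        self.timepoints[index] = TimepointRecord(values)

    def all_values(self):
        return dict((i, rec.values) for i, rec in self.timepoints.items())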
I wouldn't store this in the ZODB, at least not only in the ZODB. Values
like this are better stored in an RDBMS, modelled e.g. as a 40-column table
(ick) with one row per timepoint, i.e. seven rows per patient.
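
Roughly, in SQLAlchemy terms - a minimal sketch, where the column names and
the SQLite URL are placeholders and only three of the 40 measurement
columns are shown:

from sqlalchemy import Column, Integer, Float, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Measurements(Base):
    # One row per patient per timepoint; one column per measurement.
    __tablename__ = 'measurements'

    id = Column(Integer, primary_key=True)
    patient_id = Column(String(32), index=True)  # e.g. the Plone object's UID
    timepoint = Column(Integer, index=True)      # 1..7

    # ... 40 measurement columns in all; three shown as placeholders:
    haemoglobin = Column(Float)
    creatinine = Column(Float)
    glucose = Column(Float)

engine = create_engine('sqlite:///labdata.db')  # placeholder URL
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)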
You may want to look at collective.tin, or at least at using SQLAlchemy
with custom forms. No auto-generated AT form is going to provide a decent
UI anyway, so if you're looking at custom forms, then AT is going to give
you very little.
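
For illustration, a hypothetical read pattern against the sketch above; in
a real Plone site you would tie the session into Zope's transaction
machinery (e.g. via zope.sqlalchemy) rather than use a bare sessionmaker:

session = Session()
patient_uid = 'abc123'  # placeholder patient identifier

# All seven timepoints for one patient in a single query.
rows = (session.query(Measurements)
               .filter_by(patient_id=patient_uid)
               .order_by(Measurements.timepoint)
               .all())

# A single timepoint for one patient.
row = (session.query(Measurements)
              .filter_by(patient_id=patient_uid, timepoint=3)
              .first())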
Martin
--
Author of `Professional Plone Development`, a book for developers who
want to work with Plone. See http://martinaspeli.net/plone-book