Mikko Ohtamaa wrote:
Hi,

Hi Mikko,


We are facing a problem where we need to store 270 fields per item. The
fields are laboratory measurements of a patient - 40 measurement values for
7 timepoints. The fields need to be accessed per timepoint, per measurement,
and all fields for one patient at once. There will be over 10000 patients,
distributed under different hospital items (tree-like, for permission
reasons). Data is not accessed for two patients at once, so we don't need to
scale the catalog.

As others have pointed out, don't make 270 individual fields
on your type.

One further alternative not mentioned yet would be
to use a plain Python dictionary or a list of dictionaries.
If you go that route, the Record(s)Field/Widget from
ATExtensions could be of help.
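For illustration, such a list-of-dicts layout might look like the sketch below. This is plain Python only, not ATExtensions API; the measurement name "crp" and the helper names are made up for the example. The point is that one structure serves all three access patterns from the question (per timepoint, per measurement, everything for one patient).

```python
def make_patient_record(timepoints=7):
    """One patient: a list with one dict of measurement values per timepoint."""
    return [{} for _ in range(timepoints)]

def set_value(record, timepoint, measurement, value):
    record[timepoint][measurement] = value

def values_for_timepoint(record, timepoint):
    """All ~40 measurements taken at one timepoint."""
    return record[timepoint]

def values_for_measurement(record, measurement):
    """One measurement across all timepoints (None where missing)."""
    return [tp.get(measurement) for tp in record]

record = make_patient_record()
set_value(record, 0, "crp", 12.5)
set_value(record, 1, "crp", 9.8)
values_for_measurement(record, "crp")
# -> [12.5, 9.8, None, None, None, None, None]
```

Stored as a single attribute value, the whole structure is one pickle, so it loads and saves in one go.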

On top of this, ATExtensions also demonstrates how to handle a
custom data type that can be mapped to the ones mentioned above
(look for the FormattableName(s) datatype/field/widget).

Good luck,

        Raphael


So I am curious about how we make Plone scale well for this scenario.

- The overhead of a field in an AT schema? Should we use the normal storage
backend (Python object value), or can we compress our field values into a
list/dict to make it faster using a custom storage backend?

- The wake-up overhead of an AT object? Should we distribute our fields
across several ZODB objects, e.g. one per timepoint, or stick all values
into one ZODB object? Some views need all fields for one patient at once.

- One big Zope object vs. a few smaller Zope objects?
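For what it's worth, the tradeoff behind the last two questions can be sketched without any Zope code. In ZODB, each persistent object is loaded ("woken up") from the database as a whole, so one big object means one load for the all-fields view, while per-timepoint sub-objects let a single-timepoint view wake only a fraction of the data at the cost of more object loads for the full view. The classes below are plain-Python stand-ins, not Archetypes or ZODB API; all names are illustrative.

```python
class FlatPatient:
    """One big object: all 7 x 40 values in a single dict.
    The all-fields view costs one object load; so does a
    single-timepoint view, since everything comes along anyway."""
    def __init__(self):
        self.values = {}  # keys like (timepoint, measurement)

class SplitPatient:
    """Several smaller objects: one child per timepoint.
    With ZODB each child would be its own persistent record, so a
    per-timepoint view wakes only ~1/7 of the data, but the
    all-fields view touches the parent plus all 7 children."""
    def __init__(self, timepoints=7):
        self.timepoints = [dict() for _ in range(timepoints)]

flat = FlatPatient()
flat.values[(0, "crp")] = 12.5

split = SplitPatient()
split.timepoints[0]["crp"] = 12.5
```

Which layout wins depends on which view dominates: if the all-fields-per-patient view is the common case, fewer, bigger objects tend to mean fewer loads.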

Cheers,
Mikko Ohtamaa
Oulu, Finland


_______________________________________________
Product-Developers mailing list
[email protected]
http://lists.plone.org/mailman/listinfo/product-developers