Hi,

I was thinking of a specific scenario involving CompositeDataContext in
MetaModel.

I understand that MetaModel performs a number of operations in memory after
querying the respective data sources. However, if the intermediate
data sets are large, this operation could be memory-intensive and slow. Is
there any plan to tackle such a scenario through a clustered approach in
some future release?

If that is not on the roadmap, which classes should one look at in order to
work on this?

Regards,
Ashish
