Michael Hunger wrote:
Would it be sensible to add something like a ManyAssociation concern for
ValueComposites, so that handling of those collection-like members would
be a no-brainer?
Well, the main thing to be done there, I think, is to allow
ValueComposite base types to be generic, which is not possible today.
Then you could do things like:
interface ValueList<T>
    extends ValueComposite
{
    Property<List<T>> values();
}
and then subclass it with:
interface MyValueList
    extends ValueList<MyValue>
{}
That would be useful. Right now there is some generics trickery involved
that I haven't figured out yet, mainly to do with extracting the
ValueCompositeType from the above. But it should be doable.
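For illustration, once that works, building such a value might look
roughly like this (a sketch only; valueBuilderFactory, myValue1 and
myValue2 are assumed to come from the surrounding module):

// Hypothetical usage once generic base types work, using the
// standard ValueBuilder API:
ValueBuilder<MyValueList> builder =
    valueBuilderFactory.newValueBuilder( MyValueList.class );
builder.prototype().values().set( Arrays.asList( myValue1, myValue2 ) );
MyValueList list = builder.newInstance();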
Is this what you are referring to, or something else?
Another issue I have thought about in the last few days is: how to
separate state enough from consumers so that they get real types (not
just primitives; see Stephan's blog post and Dan's "power of values").
The problem is: if you have something like

interface Describable extends Property<String> {}

and push this type to a consumer, the consumer always gets the mutable
value (describable.set(String)).
What I try to avoid is passing properties straight to a client; instead
I would like to have
interface ReadOnlyProperty<T> {
    T get();
}

interface Property<T> extends ReadOnlyProperty<T> {
    void set( T value );
}
and then be able to push the read-only version to clients.
Well, it sounds like you would just take the Describable and put it into
a ValueComposite, and you're done.
interface SomeValue
    extends ValueComposite
{
    Describable description();
}
description() is then immutable. Use .buildWith() to create a modified copy.
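A sketch of how that could look, assuming buildWith() returns a
ValueBuilder seeded with the current state (the helper method here is
just illustrative):

SomeValue withDescription( SomeValue value, String newDescription )
{
    ValueBuilder<SomeValue> builder = value.buildWith();
    builder.prototype().description().set( newDescription );
    return builder.newInstance(); // new copy; the original stays untouched
}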
Another thing I noticed while looking at the StreamFlow codebase is that
in different places the internals of the entities (the state) are exposed
completely, and business logic/infrastructure works on the raw
properties. This doesn't seem any better than the getter/setter approach
of JavaBeans?
The difference is that commands acting on the domain model always call
command methods in the domain, rather than doing get calls and then
acting on the result. This ensures that all logic that mutates the model
stays within the domain model. The state interfaces, with the raw
properties, are exposed for use by the query side, i.e. only to extract
information. Access to those should be hidden behind other mixin
interfaces, though, so that the REST layer in StreamFlow only accesses
the "DTO" layer of the domain model rather than the raw properties.
Perhaps it would be more useful to use something like Holub's
importer/exporter pattern, which declares interfaces for getting
information from an object, or for pushing state information to it, in a
controlled fashion?
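For example, the exporter half might be declared like this (names are
just illustrative):

interface TaskExporter // implemented by the information sink
{
    void exportDescription( String description );
    void exportCompleted( boolean completed );
}

interface Exportable
{
    // the object decides what to push; its internals stay hidden
    void exportTo( TaskExporter exporter );
}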
The exporter idea is ok, i.e. similar to the Query/DTO idea above. But
changes to the domain model must ALWAYS be done through commands rather
than through raw imported data. That's the only way to ensure that the
domain model stays consistent.
Then implementing the exporter interface in an information sink (like a
statistics service) and getting the (immutable) information pushed to me
is safer, more coherent and more maintainable than working on the
objects'/entities' internal state (which is always subject to change).
Right now the statistics service in StreamFlow listens for events, and
when trigger events come in (e.g. "task completed") the service reads
state from the model and persists it in a denormalized database for
reporting. If I understand you correctly, you want to disallow the
statistics service from accessing the domain model. But then something
else needs to know what data the service needs and package it up, right?
What is the gain from that? Or am I missing something?
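For reference, the flow described above is roughly this (DomainEvent,
its accessors and StatisticsStore are illustrative stand-ins, not the
actual StreamFlow types):

class StatisticsService
{
    private final StatisticsStore store;

    StatisticsService( StatisticsStore store )
    {
        this.store = store;
    }

    void notifyEvent( DomainEvent event )
    {
        if( "taskCompleted".equals( event.name() ) )
        {
            // read state from the model and persist it denormalized
            store.recordCompletedTask( event.entityId() );
        }
    }
}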
/Rickard