On Friday, October 28, 2016 at 6:26:46 PM UTC+1, Kasey Speakman wrote:
>
> As best I can, I try to look at the system as use cases and events.
>
> With legacy systems, we are tied to a normalized data model (which is 
> really designed for writing and not reading), so for queries we have to 
> "project" from that normalized data model into another model.
>

My understanding of normalization is that its purpose is to avoid 
duplicating data, and by avoiding duplication to reduce the chance of 
errors and help ensure the quality of stored data.

> But the place I'd really like to get to is storing the system's events, and 
> creating whatever models are necessary to answer queries and fulfill use 
> cases from those events. AKA Event Sourcing 
> <http://docs.geteventstore.com/introduction/3.9.0/event-sourcing-basics/>. 
> I am finally getting to do this on a new project. Our existing systems will 
> stay with a normalized data model for the foreseeable future as the cost of 
> change is too high.
>
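The event-sourcing idea quoted above can be sketched roughly like this: store the events as the source of truth, then fold over them to build whatever read model a query needs. This is only an illustrative sketch; the event names and the shape of the read model are my own assumptions, not anyone's real schema.

```python
# Minimal event-sourcing sketch: events are the source of truth, and a
# read model is rebuilt by folding over the event stream. All names here
# (AccountOpened, Deposited, project_balances) are hypothetical.
from dataclasses import dataclass


@dataclass
class AccountOpened:
    account_id: str


@dataclass
class Deposited:
    account_id: str
    amount: int


def project_balances(events):
    """Fold the event stream into a read model that answers balance queries."""
    balances = {}
    for event in events:
        if isinstance(event, AccountOpened):
            balances[event.account_id] = 0
        elif isinstance(event, Deposited):
            balances[event.account_id] += event.amount
    return balances


events = [AccountOpened("a1"), Deposited("a1", 100), Deposited("a1", 50)]
print(project_balances(events))  # {'a1': 150}
```

The nice property is that new query models can be added later by writing a new fold over the same stored events, without touching the write side.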
> But I still try to take the principles of using business-specific events 
> (like StudentRegistered or PaymentDeclined) in my business logic, then 
> translate those into database calls when they are sent for persistence. 
> That also allows me to use those events to update secondary models or 
> trigger other logic. The common alternative, just updating state and doing 
> `repository.Save()` makes it harder to hook into specific business 
> happenings.
>
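The hook-into-business-happenings point above might look roughly like this: the business logic emits a named event, and a dispatcher routes it both to persistence and to any secondary models or triggers. A hedged sketch only; the handler names and the stand-in "database" are assumptions for illustration.

```python
# Sketch: dispatching a business-specific event to persistence and to
# secondary handlers, instead of a bare repository.Save(). All names
# (publish, apply_to_database, update_audit_log) are hypothetical.
from dataclasses import dataclass


@dataclass
class StudentRegistered:
    student_id: str
    name: str


database_rows = []  # stand-in for the real database call
audit_log = []      # a secondary model updated from the same event


def apply_to_database(event):
    # Translate the business event into a persistence operation.
    database_rows.append(("insert_student", event.student_id, event.name))


def update_audit_log(event):
    # Secondary logic hooked onto the same business happening.
    audit_log.append(f"registered {event.name}")


handlers = [apply_to_database, update_audit_log]


def publish(event):
    for handle in handlers:
        handle(event)


publish(StudentRegistered("s1", "Ada"))
```

With a plain `repository.Save()` there is no named event to subscribe to, which is exactly the hook this pattern preserves.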

I only advocate the approach of updating state and saving it when that 
makes sense, and I would say those situations are:

- Where a user completely owns the data and is free to modify it as they 
see fit.
- Where, due to the nature of the data, it is hard to automate use cases 
over it - textual data requiring processing by a human expert, for example.
- When prototyping a system while the use cases are still being developed.

I find this approach to prototyping beneficial compared with going straight 
to use cases, as it leads to a more declarative and extensible data model - 
in the sense that the database isn't hidden from the outset behind a rigid 
set of use cases. It also tends to lead to systems that are less layered; 
many times I have seen heavily layered services where something simpler 
would have sufficed.

Also, if a data model can be built in such a way that illegal states are 
not possible (which is the purpose of normalization), there is less need 
to restrict modifications to a fixed set of use cases, since only legal 
changes can be made to it. For example, suppose a user account must have 
an email address associated with it. If the email address is validated 
for format and cannot be null, then there is no need to write a specific 
transactional end-point for updating the email address; you can just let 
users modify and save the account record, and they can still only perform 
that operation in a way that produces correct data.
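The email example could be sketched like this: if the record itself rejects illegal states, a generic modify-and-save is safe without a dedicated use-case endpoint. The `Account` class and the validation regex below are illustrative assumptions, not a recommended production email validator.

```python
# Sketch: a record that cannot hold an illegal state, so saving it
# wholesale is safe. Account and EMAIL_RE are hypothetical names.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


class Account:
    def __init__(self, email):
        self.email = None
        self.set_email(email)

    def set_email(self, email):
        # Reject null or malformed addresses, so the record can never
        # be saved in an illegal state.
        if email is None or not EMAIL_RE.match(email):
            raise ValueError(f"invalid email: {email!r}")
        self.email = email


account = Account("user@example.com")
account.set_email("new@example.com")   # legal change succeeds
try:
    account.set_email("not-an-email")  # illegal change is rejected
except ValueError:
    pass
```

Because every mutation path runs the same check, a single generic save operation is as safe as a dedicated "change email" endpoint would be.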

I take your point though about being able to hook into changes relating to 
specific events.
 

-- 
You received this message because you are subscribed to the Google Groups "Elm 
Discuss" group.
