On Sep 11, 2008, at 4:06 AM, Federico Fanton wrote:

> - Does Dabo handle concurrent modification of data? (Pessimistic  
> locking)

        That is implemented differently on each backend server. So far no one  
has requested this capability, but we would certainly add it if there  
is enough interest. We abstract each backend into its own class, so it  
would be possible to customize for each server's syntax.
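To illustrate the idea, here is a minimal sketch (not Dabo's actual API; all class and method names are hypothetical) of how per-backend classes could each supply their own pessimistic-locking syntax:

```python
# Hypothetical sketch of per-backend SQL customization; not Dabo's real
# backend classes. Each subclass supplies its server's locking syntax.

class Backend:
    """Base class: build a SELECT, optionally locking the rows."""
    lock_clause = ""  # backends override this

    def select(self, table, where, lock=False):
        sql = f"SELECT * FROM {table} WHERE {where}"
        if lock and self.lock_clause:
            sql += " " + self.lock_clause
        return sql


class MySQLBackend(Backend):
    lock_clause = "FOR UPDATE"


class PostgresBackend(Backend):
    lock_clause = "FOR UPDATE"


class MsSQLBackend(Backend):
    # SQL Server uses a table hint (WITH (UPDLOCK)) rather than a
    # trailing clause, so it would override select() itself; shown
    # here only as a stub.
    lock_clause = ""


print(MySQLBackend().select("customers", "id = 42", lock=True))
# SELECT * FROM customers WHERE id = 42 FOR UPDATE
```

A bizobj could then ask its backend object for the locked form of the query without knowing which server it is talking to.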

> - I see that Dabo favors the "classic" development style of "first
> design the database, then the business objects from that", do you  
> think
> it would be overkill to make Dabo work the other way round? That is,
> design the bizobjs first and then (automatically) generate the  
> database
> from them.. If I decide to go down that path, would I hit hard  
> problems
> due to Dabo's design?

        The tradeoff in the latter approach is that it makes it much, much  
harder to work with existing databases. I remember testing TurboGears  
a couple of years back, and its model implementation almost *required*  
that the database didn't exist; I had to add all sorts of different  
settings in order to make the underlying SQLObject-based model use the  
existing database information.

        If you're doing brand-new development in which no data exists when  
you start, then yeah, the ORM approach is pretty nice. If you're  
creating apps that work with existing data, it's much less attractive.

> - I need to add a few patterns to Dabo data handling, such as logical
> deletion and data logging/archiving (that is, upon updating a record
> generate a new record instead and mark the old one as such)..
> Would subclassing dBizobj be enough?

        Logical deletion is more a matter of database design than of bizobj
design: the bizobj is concerned with the business logic, and from that
perspective a logical deletion *is* a deletion. To implement it, you'd
add a flag field to the table, and then subclass dBizobj to always add
'delflag = 0' (or the equivalent) to every query.
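A sketch of that subclass, using a stand-in for dabo.biz.dBizobj (the method names here are hypothetical, chosen just to show the pattern):

```python
# Hypothetical sketch: the base class below stands in for
# dabo.biz.dBizobj, and the method names are illustrative only.

class dBizobj:
    """Minimal stand-in for dabo.biz.dBizobj."""
    def __init__(self):
        self.record = {}

    def getWhereClause(self):
        return ""

    def setFieldVal(self, field, value):
        self.record[field] = value


class LogicalDeleteBizobj(dBizobj):
    """Hide logically deleted rows and turn deletes into flag updates."""

    def getWhereClause(self):
        # AND the flag test into whatever WHERE clause is already set.
        base = super().getWhereClause()
        flag = "delflag = 0"
        return f"({base}) AND {flag}" if base else flag

    def delete(self):
        # Instead of issuing a SQL DELETE, mark the record; a real
        # bizobj would then save this change back to the database.
        self.setFieldVal("delflag", 1)


biz = LogicalDeleteBizobj()
print(biz.getWhereClause())   # delflag = 0
biz.delete()
print(biz.record)             # {'delflag': 1}
```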

        I'm much more wary of your read-only update system. That would  
require a lot of code to maintain data integrity, since changing a  
single column would result in a new PK for that record, and you would  
somehow have to track all relationships that may involve that PK and  
update those records, which would then require that any table that  
references those PKs be updated, etc. When archiving is required, I've  
typically created shadow tables in the database that hold the original  
records, along with timestamp/user info on who made the change. Then  
an update trigger is created to archive the original values before the
update happens. This way relational integrity is not an issue, and you  
have a complete trail of changes to the data.
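Here is a concrete sketch of the shadow-table/trigger approach, using SQLite for portability; the table and column names are made up, and a real server would use its own trigger dialect (and something like CURRENT_USER for the who-changed-it column):

```python
# Illustrative sketch of a shadow table plus a before-update trigger,
# using SQLite in memory. Table/column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    pk      INTEGER PRIMARY KEY,
    name    TEXT,
    balance REAL
);

-- Shadow table: same columns plus audit info.
CREATE TABLE customers_archive (
    pk       INTEGER,
    name     TEXT,
    balance  REAL,
    archived TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Archive the original values before every update.
CREATE TRIGGER customers_before_update
BEFORE UPDATE ON customers
BEGIN
    INSERT INTO customers_archive (pk, name, balance)
    VALUES (OLD.pk, OLD.name, OLD.balance);
END;
""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme', 100.0)")
conn.execute("UPDATE customers SET balance = 250.0 WHERE pk = 1")

# The live row holds the new value; the archive holds the original,
# so the PK never changes and relational integrity is untouched.
print(conn.execute("SELECT balance FROM customers").fetchone())
print(conn.execute("SELECT balance FROM customers_archive").fetchone())
```

Because the live row keeps its PK, no foreign keys ever need fixing up, and the archive accumulates the full change history.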

-- Ed Leafe





_______________________________________________
Post Messages to: [email protected]
Subscription Maintenance: http://leafe.com/mailman/listinfo/dabo-users
Searchable Archives: http://leafe.com/archives/search/dabo-users