On Tuesday 08 August 2006 15:24, Markus Schiltknecht wrote:
> > An API is always limiting.

Which is a good thing when you are not the one using it but the one
committing to support it. :-)

> I still feel that I would need way too many hooks. Especially when you
> consider advanced replication features such as data partitioning and
> remote query execution.

Indeed. We have not prototyped those features. Nonetheless, look at the
"silly" query cache example in the distribution (search for
QueryCache.java). It shows how the proposed hooks might be used to
intercept a query and fake a result set, while at the same time executing
some stuff locally. (Warning: this runs on Apache Derby only, as in
PostgreSQL we'd need something like PL/J for server-side JDBC.)
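To give a feel for the control flow, here is a rough sketch of what such a
hook can look like. StatementHook and ExecutionContext are invented for the
example (the real reflector interfaces differ); the point is simply that the
hook sees the statement before execution and may either answer from a cache
or execute locally and remember the rows:

import java.sql.ResultSet;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

// StatementHook and ExecutionContext are hypothetical stand-ins for the
// statement-execution hook discussed above; they are not the actual API.
interface ExecutionContext {
    String getSql();                              // statement about to run
    ResultSet executeLocally() throws Exception;  // run it on the local engine
    void answerWith(ResultSet rows);              // fake the answer to the client
}

interface StatementHook {
    void beforeExecute(ExecutionContext ctx) throws Exception;
}

public class QueryCacheSketch implements StatementHook {
    private final Map<String, CachedRowSet> cache = new ConcurrentHashMap<>();

    @Override
    public void beforeExecute(ExecutionContext ctx) throws Exception {
        CachedRowSet hit = cache.get(ctx.getSql());
        if (hit != null) {
            hit.beforeFirst();     // rewind the cached rows
            ctx.answerWith(hit);   // the engine never executes the query
            return;
        }
        // Cache miss: execute "some stuff locally" and keep a disconnected
        // copy of the rows for the next time the same statement shows up.
        try (ResultSet rs = ctx.executeLocally()) {
            CachedRowSet copy = RowSetProvider.newFactory().createCachedRowSet();
            copy.populate(rs);
            cache.put(ctx.getSql(), copy);
            copy.beforeFirst();
            ctx.answerWith(copy);
        }
    }
}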
> What also worries me is the use of triggers. ISTM that using triggers is
> not deep enough in the database. In the above example, do I really want
> to fire a trigger every time the database needs to wake up a process? In
> PostgreSQL a trigger normally runs within a transaction. How do you work
> around that?

As Alfranio has pointed out in another message in this thread, these
triggers are high level. We never considered something like a "trigger on
lock acquire" (also because it would hardly be portable). They are
certainly more coarse-grained than the standard on-update stuff.

Furthermore, having on-commit triggers run within transactional boundaries
is very useful. Think about recording the global commit order or global
timestamps at the originating site after propagation; see the sketch after
the signature.

-- 
Jose Orlando Pereira
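PS: For concreteness, a minimal sketch of such an on-commit trigger body,
written as a server-side Java procedure in the PL/J / Derby style the query
cache example already relies on. The procedure, table and column names are
invented for the illustration; the point is that, because the trigger runs
inside the committing transaction, the bookkeeping row commits or aborts
together with the user's work:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical on-commit trigger body as a server-side Java procedure.
// Table and column names (commit_log, global_seq, origin, committed_at)
// are made up for the example.
public class CommitOrderTrigger {
    public static void recordCommit(long globalSeq, String originSite)
            throws Exception {
        // "jdbc:default:connection" is the conventional way for server-side
        // Java code (Derby procedures, PL/Java) to join the session's own
        // transaction, so this INSERT is atomic with the commit being recorded.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:default:connection");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO commit_log (global_seq, origin, committed_at) "
               + "VALUES (?, ?, CURRENT_TIMESTAMP)")) {
            ps.setLong(1, globalSeq);
            ps.setString(2, originSite);
            ps.executeUpdate();
        }
    }
}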