On Mon, 2009-07-06 at 18:27 +0100, Simon Riggs wrote:
> In many cases, people add unique indexes solely to allow replication to
> work correctly. The index itself may never be used, especially in high
> volume applications.
Interesting. Maybe we should at least try to leave room for this feature
to be added later.

I agree that, from a theoretical perspective, requiring a UNIQUE
constraint to use an index is wrong. For one thing, you can't ensure
uniqueness without defining some total order (although you can define an
arbitrary total order for cases with no meaningful one).

> How do you handle uniqueness within a stream? Presumably it is possible
> and useful to have a stream of data that can be guaranteed unique, yet a
> stream would never be uniquely targeted for lookups because of the
> volume of data involved.

[ Simon is asking me because I work for Truviso, but my response is not
officially from Truviso ]

There are a few cases worth mentioning here.

First, if you have a stream that's backed by a table, you can use a
table constraint.

Second, you might choose to declare an "in-order" constraint (it's not
required; the system can fix out-of-order data), in which case a unique
constraint becomes very cheap to test.

Third, there are downstream operators, like COUNT(DISTINCT ...), that
are not strictly constraints but can be seen as similar to one. These
will often be over a limited span of time, say a minute or an hour, and
we can keep the necessary state for that span. If there are a huge
number of distinct values in the span, then it's a challenge to avoid
keeping a lot of state.

There are a few other specialized methods that we can use for specific
use-cases.

Regards,
	Jeff Davis

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
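The two stream-side checks described in the email can be sketched in
Python. This is only an illustrative sketch, not Truviso's
implementation: the function names and data shapes here are invented
for the example. It shows why an "in-order" unique constraint is cheap
(one comparison per row, no index), and why a time-windowed
COUNT(DISTINCT ...) needs state proportional only to the distinct
values inside the window.

```python
from collections import deque


def check_inorder_unique(stream):
    """Verify uniqueness of an in-order stream (hypothetical helper).

    Because values arrive in sorted order, any duplicate must be the
    immediately preceding value, so one comparison per row suffices --
    no index or hash table is needed.
    """
    prev = object()  # sentinel distinct from any stream value
    for value in stream:
        if value == prev:
            raise ValueError("duplicate value: %r" % (value,))
        prev = value
        yield value


def windowed_distinct_count(events, window):
    """COUNT(DISTINCT value) over a sliding time window (sketch).

    `events` is an iterable of (timestamp, value) pairs in timestamp
    order. State is bounded by the number of distinct values currently
    inside the window, not by the total length of the stream.
    """
    counts = {}          # value -> occurrences inside the window
    in_window = deque()  # (timestamp, value) pairs, oldest first
    for ts, value in events:
        # Expire rows that have fallen out of the window.
        while in_window and in_window[0][0] <= ts - window:
            _, old_val = in_window.popleft()
            counts[old_val] -= 1
            if counts[old_val] == 0:
                del counts[old_val]
        in_window.append((ts, value))
        counts[value] = counts.get(value, 0) + 1
        yield ts, len(counts)
```

For example, with a 60-second window, `windowed_distinct_count([(0,
'a'), (1, 'b'), (2, 'a'), (65, 'c')], 60)` yields a distinct count of
1, 2, 2, and then 1, since everything before t=5 has expired by t=65.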