Eric Davies <[EMAIL PROTECTED]> writes:
> A recent project of ours involved storing/fetching some reasonably large
> datasets in a home-brew datatype. The datasets tended to range from a few
> megabytes to several gigabytes. We were seeing some nonlinear slowness
> when using native large objects with larger datasets, presumably due to
> the increasing depth of the btree index used to track all the little
> pieces of the blobs.
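[For context: each native large object is stored as a series of small chunks
(2 kB each with the default LOBLKSIZE) in the pg_largeobject catalog, under a
single btree index on (loid, pageno).  A rough sketch of how one might see how
many chunks -- and hence index entries -- each blob occupies; the byte figure
assumes the default chunk size:

    -- one row per chunk; approx_bytes assumes the default 2 kB LOBLKSIZE
    SELECT loid,
           count(*)        AS chunks,
           count(*) * 2048 AS approx_bytes
    FROM pg_largeobject
    GROUP BY loid
    ORDER BY chunks DESC;

By that arithmetic, a multi-gigabyte object amounts to on the order of a
million entries under that one index.]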
Did you do any profiling to back up that "presumably"?  It seems at least as
likely to me that this was caused by some easily-fixed inefficiency somewhere.
There are still a lot of O(N^2) algorithms in the backend that no one has run
up against yet ...

			regards, tom lane
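[One cheap way to gather such evidence before blaming the index, sketched here
with made-up file paths and a placeholder OID: time the server-side
lo_import/lo_export functions at a few dataset sizes and check whether the
per-megabyte cost actually grows.

    \timing
    -- paths and the OID below are hypothetical; compare elapsed time per MB
    SELECT lo_import('/tmp/sample_100mb.bin');    -- returns the new blob's OID
    SELECT lo_import('/tmp/sample_2gb.bin');
    SELECT lo_export(123456, '/tmp/roundtrip.bin');  -- use an OID returned above

If the per-megabyte cost stays roughly flat as the objects grow, the
nonlinearity is probably somewhere else, e.g. in the home-brew datatype's own
handling, rather than in the large-object btree.]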