2007/10/9, Gregory Stark <[EMAIL PROTECTED]>:
> "Nikolay Samokhvalov" <[EMAIL PROTECTED]> writes:
> > Hubert recently posted his thoughts on this topic:
> > http://www.depesz.com/index.php/2007/09/16/my-thoughts-on-getting-random-row/
> >
> > I've encountered this problem several times in web development, and
> > every time found that the best-performing solution is some pseudo-random
> > approach (such as ">= random() limit 1" or "limit 1 offset random()*N",
> > or even pre-caching rows on the app side).
> "ORDER BY random() LIMIT 1" should be faster in 8.3 due to the bounded-sort
> optimization. It should be basically the same as the two options above as far
> as how many comparisons are done and how much memory is used. It does have to
> call random() for every record whereas the solutions above only call random()
> once.
> But I think all of these are basically the same to a first degree
> approximation. They all have to do a scan of all the records being considered.
> If you want something faster you need a solution which can use an index to
> scan only the target record. There are ways of doing that but they require
> some application knowledge.
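
For concreteness, the scan-based variants under discussion can be sketched against any SQL engine; here is a minimal illustration using Python's stdlib sqlite3 (SQLite spells the function RANDOM() where PostgreSQL uses random(); the table and column names are made up for the example):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products (name) VALUES (?)",
                 [("product-%d" % i,) for i in range(10000)])

# Full-scan approach: every row gets a random sort key, one survives.
# This is the "ORDER BY random() LIMIT 1" form from the thread.
row = conn.execute(
    "SELECT id, name FROM products ORDER BY RANDOM() LIMIT 1").fetchone()

# OFFSET approach: random() is called only once on the client side, but
# the executor still has to step over OFFSET rows before returning one.
n = conn.execute("SELECT count(*) FROM products").fetchone()[0]
row2 = conn.execute(
    "SELECT id, name FROM products LIMIT 1 OFFSET ?",
    (random.randrange(n),)).fetchone()

print(row, row2)
```

Both forms still read all (or on average half) of the candidate rows, which is the "first degree approximation" point above.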

It always needs a seq scan :(, and it takes space in the buffer cache. A
solution based on randomly generated PKs is much faster. I work with one
customer who shows random products (out of 10K) on every page of his
eShop, and he could not understand why ORDER BY random() LIMIT is a bad
trick, because that trick is recommended in the PostgreSQL FAQ.
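
The index-based scheme alluded to above can be sketched as follows, again with stdlib sqlite3 for illustration: pick a random point in the key space, then let the primary-key index find the first row at or after it. The gap-tolerant `>=` lookup and the schema are assumptions about how such a scheme is typically written, not code from the thread:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
# Sparse ids simulate deletions: the scheme must tolerate gaps in the PK.
conn.executemany("INSERT INTO products (id, name) VALUES (?, ?)",
                 [(i * 3, "product-%d" % i) for i in range(1, 10001)])

lo, hi = conn.execute("SELECT min(id), max(id) FROM products").fetchone()

# One random() call, then a single-row index scan -- no seq scan.
target = random.randint(lo, hi)
row = conn.execute(
    "SELECT id, name FROM products WHERE id >= ? ORDER BY id LIMIT 1",
    (target,)).fetchone()
print(row)
```

The application knowledge this requires is exactly the caveat from the quoted text: you must know the key range, and rows that sit after large gaps are picked more often than others, so the result is fast but not uniformly distributed.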

