The only way I can see for a separate lookup table to help would be to prepopulate it with the records that match your desired filters, plus an arbitrary autoincrement field. So you would do something like:
INSERT INTO {mytemp} SELECT nid FROM {node} WHERE type='quote';

And then rebuild that (incrementally or in full) every time you add or remove
a quote node. Then to select, you would pick one int at random between 1 and
COUNT(*) in mytemp, then pull that record.

The above makes the actual "fetch one at random" operation almost O(1), but
you need to create and maintain that table.

Actually, I just found this in the MySQL manual in user comments:

SELECT FLOOR(RAND() * COUNT(*)) AS rand_row FROM {mytemp};
SELECT nid FROM {mytemp} LIMIT $rand_row, 1;

That could actually be faster, at the cost of being two queries, so there are
technically potential race conditions. (Unlikely to be a problem in practice,
but still there.) I've not tried the above; I just saw it and thought it was
cool. :-) (It's also MySQL-specific. I don't know if the same thing would work
well on other databases.)

The MySQL manual has a couple of other suggestions in user comments, too:

http://dev.mysql.com/doc/refman/5.0/en/select.html

--Larry Garfield

On Sunday, January 23, 2011 8:12:08 pm nan wich wrote:
> Thanks, Larry, that makes me feel a bit better.
>
> The suggestion is to keep a separate table of applicable nodes, generate a
> random number from the number of records, and get that record. Your
> explanation seems to indicate that one is simply spreading the "overhead"
> out over time - and probably increasing it in total. Given that the block
> can be refreshed on every page load, and a new node is created relatively
> infrequently, I can also see that there may be some merit when the number
> of nodes gets fairly large. Maybe something like:
>
> SELECT n.* FROM {quotes_random} r INNER JOIN {node} n ON n.nid = r.nid
> WHERE r.row_num = 1 + RAND() * (SELECT MAX(row_num) FROM {quotes_random})
>
> (Sorry for the D6 version; it would take me a much longer time to
> construct in D7.)
>
> Nancy
>
> Injustice anywhere is a threat to justice everywhere. -- Dr. Martin L.
> King, Jr.
>
> ________________________________
>
> From: Larry Garfield
>
> I don't know that I've ever tried it with 12k nodes so I can't say from
> experience.
>
> Randomization is a tricky subject. The common method, which Views uses, as
> does the ->orderRandom() method of DBTNG, boils down to adding a random
> number as a new column for each record, ordering by that, and then (in this
> case) discarding all but the first record. That involves a linear-growth
> cost to create the new column (a fixed additional cost per record) and then
> an integer ordering. Assuming the sorting function in the SQL database is
> sane (which if it isn't, you need a new database), that should be an
> n*log n algorithm. (That's as fast as a sorting algorithm can get.)
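
For what it's worth, wiring the two-query version above into Drupal 6 code
would look roughly like the untested sketch below. It assumes the hypothetical
{mytemp} lookup table from earlier in this message and uses only stock
db_query()/db_result() calls; a D7 version would go through db_query() with
named placeholders instead.

// Untested sketch: grab one nid at random from the {mytemp} lookup table
// using the two-query trick above (Drupal 6 API, MySQL-specific SQL).

// Pick a random offset between 0 and COUNT(*) - 1.
$rand_row = db_result(db_query("SELECT FLOOR(RAND() * COUNT(*)) FROM {mytemp}"));

// Pull the nid at that offset; the %d placeholder keeps the offset an integer.
$nid = db_result(db_query("SELECT nid FROM {mytemp} LIMIT %d, 1", $rand_row));

// The race condition mentioned above: if a row disappears between the two
// queries the offset can fall past the end of the table and db_result()
// returns FALSE, so fall back to any row rather than showing nothing.
if ($nid === FALSE) {
  $nid = db_result(db_query("SELECT nid FROM {mytemp} LIMIT 1"));
}

The fallback query is just one way of papering over the race condition; since
the window is a single page load it is unlikely to matter in practice, as
noted above.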