On 10/24/2011 11:55 AM, Jorge Gonzalez wrote:
B. Create a temp table in the SMALL database with the contents of the HUGE table I want to filter. This would mean transferring a copy of a table with _millions_ of rows to local, just to discard it afterwards. Seems not very reasonable.
The reasonable solutions are the ones you aren't allowed to do; what is left are the unreasonable ones: move the SMALL one to where the HUGE one is, or move the HUGE one to where the SMALL one is (not reasonable).
Or cache PART of the HUGE one on your side, which may or may not be reasonable, depending on the exact details of your problem.
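A minimal sketch of that partial caching, assuming the HUGE table sits on a remote MySQL server and the SMALL one in a local SQLite file (every DSN, table and column name below is hypothetical):

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical connections; substitute your own DSNs and credentials.
    my $remote = DBI->connect('dbi:mysql:database=huge_db;host=remote.example.com',
                              'user', 'secret', { RaiseError => 1 });
    my $local  = DBI->connect('dbi:SQLite:dbname=small.db',
                              '', '', { RaiseError => 1 });

    # Collect only the keys the local data actually references.
    my $ids = $local->selectcol_arrayref(
        'SELECT DISTINCT huge_id FROM small_table');

    # Cache just those rows of the huge table in a local temp table.
    $local->do('CREATE TEMP TABLE huge_cache (id INTEGER PRIMARY KEY, payload TEXT)');
    my $insert = $local->prepare(
        'INSERT INTO huge_cache (id, payload) VALUES (?, ?)');

    # Fetch in chunks so the IN () list stays a sane size.
    while (my @chunk = splice @$ids, 0, 500) {
        my $in    = join ',', ('?') x @chunk;
        my $fetch = $remote->prepare(
            "SELECT id, payload FROM huge_table WHERE id IN ($in)");
        $fetch->execute(@chunk);
        while (my @row = $fetch->fetchrow_array) {
            $insert->execute(@row);
        }
    }

Once the cache exists you can join against it locally, and since SQLite TEMP tables disappear on disconnect, there is nothing to clean up afterwards.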

The question is, my server has enough RAM to slurp the resultset and then search it (which is what I'm doing now). If enough RAM is available, no disk-based SQL server can beat that, provided that efficient search algorithms are used (I'm using the resultset to create several RAM-based indexes, Perl hashes, before doing any searching).
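For reference, building those in-memory indexes looks roughly like this (a minimal sketch; the sample rows and key names are invented):

    use strict;
    use warnings;

    # Stand-in for a fetched resultset, e.g. from DBI's
    # fetchall_arrayref({}) or DBIx::Class with HashRefInflator.
    my @rows = (
        { id => 1, email => '[email protected]', name => 'Alice'  },
        { id => 2, email => '[email protected]',   name => 'Bob'    },
        { id => 3, email => '[email protected]', name => 'Alice2' },
    );

    # One pass over the data builds as many indexes as needed.
    my (%by_id, %by_email);
    for my $row (@rows) {
        $by_id{ $row->{id} } = $row;                  # unique key
        push @{ $by_email{ $row->{email} } }, $row;   # non-unique key
    }

    # Afterwards every lookup is a constant-time hash access.
    my $rec     = $by_id{2};
    my $matches = $by_email{'[email protected]'} || [];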
You would be surprised at what disk-based SQL servers can do, particularly when compared to programs written in an interpreted language. If your RAM-based searches ran fast enough for your needs, I don't think you'd be writing here; and if they don't, you can implement a quick benchmark to see how well a DISK-based SQL server does.
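Such a benchmark can be a few lines with the core Benchmark module. Here is one possible shape, comparing lookups in a disk-based SQLite table (with its primary-key index) against a plain Perl hash over the same invented data:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use DBI;

    unlink 'bench.db';   # start from a clean file each run

    # Invented data set: the same 100_000 rows on disk and in a hash.
    my $dbh = DBI->connect('dbi:SQLite:dbname=bench.db', '', '',
                           { RaiseError => 1 });
    $dbh->do('CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)');

    my %hash;
    my $ins = $dbh->prepare('INSERT INTO t (id, payload) VALUES (?, ?)');
    $dbh->begin_work;
    for my $id (1 .. 100_000) {
        $ins->execute($id, "row $id");
        $hash{$id} = "row $id";
    }
    $dbh->commit;

    my $sel = $dbh->prepare('SELECT payload FROM t WHERE id = ?');

    # Run each variant for ~2 CPU seconds and print a comparison table.
    cmpthese(-2, {
        sql  => sub {
            $sel->execute(1 + int rand 100_000);
            my ($p) = $sel->fetchrow_array;
        },
        hash => sub {
            my $p = $hash{ 1 + int rand 100_000 };
        },
    });

The numbers depend heavily on the driver, the network round-trip, and whether the query can use an index, so run the same idea against your real server with your real queries before drawing conclusions.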
