Hi John,

Just to follow up on this story. I tried pre-loading the indexes as you 
suggested, and I think that helped a bit. What helped a lot, though, was 
selecting bigger chunks of data.

I have perhaps 10,000 events to sort through, each with its own subset of the 
data. Previously I would pick event X, grab its data, and do the calculations, 
so processing each event meant hitting the disk.

What I do now instead is pick, say, 100 events, grab those 100 events' worth of 
data in one big select, use that data to build a smaller table in memory, and 
finally add a fast bitmap index to the in-memory table using buildIndex. Then, 
in another loop, I go through that table 100 times to do my original 
calculations. This helps a lot: I am seeing roughly a 10x speedup.
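For anyone else on the list curious about the pattern, here is a minimal sketch of the gulp-vs-sip batching idea. This is not Mike's actual code or the FastBit API; it uses Python's sqlite3 as a stand-in for the on-disk store, and the `CREATE INDEX` step plays the role of FastBit's buildIndex. Table and column names (`events`, `event_id`, `value`) are made up for illustration.

```python
import sqlite3

def process_in_batches(disk_db, event_ids, batch_size=100):
    """Fetch events in big batches instead of one at a time.

    disk_db stands in for the on-disk event store; each batch is
    copied into an indexed in-memory table before the per-event
    calculations run against it.
    """
    results = {}
    for i in range(0, len(event_ids), batch_size):
        batch = event_ids[i:i + batch_size]
        # One big select per batch (the "gulp") instead of one per event.
        placeholders = ",".join("?" * len(batch))
        rows = disk_db.execute(
            f"SELECT event_id, value FROM events "
            f"WHERE event_id IN ({placeholders})", batch).fetchall()
        # Build a smaller in-memory table from the batch...
        mem = sqlite3.connect(":memory:")
        mem.execute("CREATE TABLE batch (event_id INTEGER, value REAL)")
        mem.executemany("INSERT INTO batch VALUES (?, ?)", rows)
        # ...and index it -- the analogue of the buildIndex step.
        mem.execute("CREATE INDEX idx_event ON batch (event_id)")
        # Inner loop: the original per-event calculation, now served
        # entirely from memory.
        for eid in batch:
            vals = [v for (v,) in mem.execute(
                "SELECT value FROM batch WHERE event_id = ?", (eid,))]
            results[eid] = sum(vals)  # placeholder for the real calculation
        mem.close()
    return results
```

The disk hits drop from one per event to one per batch, which is where the speedup comes from.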

I guess it is better to gulp your data than sip it. Some params are:

Ubuntu Linux, 24 GB RAM, a 4 x 6-core machine, but for this exercise I am 
running single-threaded on one core.

Thanks again for your kind help John.

Warmest regards, Mike.

_______________________________________________
FastBit-users mailing list
[email protected]
https://hpcrdm.lbl.gov/cgi-bin/mailman/listinfo/fastbit-users