I have to write and maintain this code. Currently I have over 30 million records, and that could easily double. What I want to do is dump it all in memory and run a federator on top of SQLite. The DBs are read-only, and as long as I can do SQL to them, I'm a-ok.
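Roughly what I have in mind, as a sketch (the file names, schema names, and the "records" table are just placeholders):

    import sqlite3

    # One connection acts as the federator: each read-only file is attached
    # under its own schema name, so ordinary SQL can reach all of them.
    # (SQLite's default limit is 10 attached databases; it can be raised at
    # compile time.)
    conn = sqlite3.connect(":memory:")
    for i, path in enumerate(["part1.db", "part2.db"], start=1):
        conn.execute("ATTACH DATABASE ? AS shard{}".format(i), (path,))

    # Example of a query spanning the attached databases.
    total = conn.execute(
        "SELECT (SELECT count(*) FROM shard1.records)"
        " + (SELECT count(*) FROM shard2.records)"
    ).fetchone()[0]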
It sounds like you are trying to do an early optimisation by putting all your data in memory since RAM can be faster than disk. Are you sure there is any benefit to this, instead of just using 'pragma cache_size' and/or letting the operating system do normal filesystem caching?
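For example, something along these lines (the file name and the cache size are only illustrative):

    import sqlite3

    # Open the file read-only and give SQLite a large page cache; a negative
    # cache_size means "approximately this many KiB", so -2000000 is roughly
    # 2 GB. Anything beyond that is served by the OS filesystem cache.
    conn = sqlite3.connect("file:part1.db?mode=ro", uri=True)
    conn.execute("PRAGMA cache_size = -2000000")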
You can also create indexes etc. to speed things up (which takes more space/memory), but having everything in memory is very fragile if you hit the address space limits. And most software does not play well with running out of memory (have you checked that every single library you use will handle it gracefully, and will your code also do so?).
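As an illustration of the index route (the table and column names are placeholders, and the index would have to be built while the files are still writable):

    import sqlite3

    # One-off preparation step, run before the database is made read-only.
    conn = sqlite3.connect("part1.db")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_records_key ON records(key)")
    conn.execute("ANALYZE")  # gather statistics so the planner uses the index
    conn.commit()
    conn.close()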
Roger