Thanks for your answers. They seem encouraging.

A few extra comments and questions:

   * We are running tests on the active/daily file (5M records) with
     BEGIN ... END packages of 1000 inserts (with 2 indexes), and it
     seems to be OK. We also need to do some queries/updates on this
     file (around 100K a day), but it seems that SQLite can cope with
     them. (A batching sketch follows this list.)
   * What about querying/updating the roughly 200 historical data
     files (6 months' worth, 5M records each)? We know of the limit of
     attaching at most 32 files to a single connection. Any advice on
     improving the performance of queries over such a huge database?
   * We are thinking of merging 7 daily files (5M records and 2
     indexes each) into one weekly file. What is the optimal way of
     doing this? (See the merge sketch after this list.)
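
Here is roughly what our batching test looks like, as a minimal Python
sketch using the standard sqlite3 module. The table "records" and its
columns are placeholders for our real schema:

    import sqlite3

    def bulk_insert(db_path, rows, batch_size=1000):
        # isolation_level=None gives autocommit mode, so we control the
        # BEGIN ... COMMIT packages of 1000 inserts ourselves.
        conn = sqlite3.connect(db_path, isolation_level=None)
        cur = conn.cursor()
        for start in range(0, len(rows), batch_size):
            cur.execute("BEGIN")
            cur.executemany(
                "INSERT INTO records (id, ts, value) VALUES (?, ?, ?)",
                rows[start:start + batch_size],
            )
            cur.execute("COMMIT")
        conn.close()

Grouping 1000 inserts per transaction avoids paying the commit/sync
cost on every row, which seems to be what makes the 5M/day rate
feasible for us.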
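
For the weekly merge we are considering an ATTACH-based copy along
these lines (again only a sketch: the file names, the table, and the
index definitions are placeholders). The daily files are attached
before the copy transaction starts, and the two indexes are created
only after the bulk copy so the inserts avoid index maintenance:

    import sqlite3

    def merge_week(daily_paths, weekly_path):
        conn = sqlite3.connect(weekly_path, isolation_level=None)
        cur = conn.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS records "
                    "(id INTEGER, ts TEXT, value REAL)")
        # Attach the seven daily files under the aliases day0 .. day6.
        for i, path in enumerate(daily_paths):
            cur.execute("ATTACH DATABASE ? AS day%d" % i, (path,))
        # Copy all rows in a single transaction.
        cur.execute("BEGIN")
        for i in range(len(daily_paths)):
            cur.execute("INSERT INTO records SELECT * FROM day%d.records" % i)
        cur.execute("COMMIT")
        for i in range(len(daily_paths)):
            cur.execute("DETACH DATABASE day%d" % i)
        # Create the two indexes once, after the bulk copy.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_records_id ON records(id)")
        cur.execute("CREATE INDEX IF NOT EXISTS idx_records_ts ON records(ts)")
        conn.close()

Does this look reasonable, or is there a faster approach we should
prefer?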

Thanks again,
Pedro Pascual

Pedro Pascual wrote:

Hi,

We are evaluating SQLite for a huge database: around 1000 million records split into 200 files. Does anyone have experience with this, or any tricks/how-tos?

The characteristics of our project:

   * The environment is a Unix box (IBM pSeries, 64-bit) with fast
     (USCSI-3) disks.
   * No record deletes
   * Most of the (history) data will have a low update rate, and will
     be used mainly for queries.
   * Heavy inserts into the active file (5,000,000 per day), which is
     closed at the end of each day (no further inserts)
   * Just two indexes on the data.

Regards,
Pedro Pascual
