On Wed, May 17, 2017 at 3:52 AM, Joseph L. Casale <jcas...@activenetwerx.com> wrote:

> I am trying to bulk load about a million records, each with ~20 related
> records, into two tables. I am using WAL journal mode, synchronous is off,
> and temp_store is memory. The source data is static, and the database will
> only be used as a means to generate reporting, so it is not vital. I am
> deferring index creation until after the load.
> The load proceeds quickly until about 150k records, at which point I
> encounter statements that modify previously inserted entries. The incoming
> data is structured this way and has relational dependencies, so these
> modifications, spread throughout the data, affect subsequent inserts.
>
> In a scenario such as this, what is the recommended approach?
>
> Thanks,
> jlc
>
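
A minimal sketch of the kind of load described above, using Python's sqlite3
module. The file name, table layout, and batch size are hypothetical
stand-ins; the PRAGMAs are the settings mentioned in the post, and index
creation is deferred until after the load:

  import sqlite3

  # Hypothetical file name and schema; batch size is illustrative.
  conn = sqlite3.connect("report.db")
  conn.executescript("""
      PRAGMA journal_mode = WAL;
      PRAGMA synchronous = OFF;
      PRAGMA temp_store = MEMORY;
      CREATE TABLE IF NOT EXISTS record  (id INTEGER PRIMARY KEY, payload TEXT);
      CREATE TABLE IF NOT EXISTS related (record_id INTEGER, payload TEXT);
  """)

  def bulk_load(records, related_rows, batch_size=50000):
      """Insert in large batches, one transaction per batch, so
      modifications to earlier rows can still be issued mid-load."""
      cur = conn.cursor()
      for i in range(0, len(records), batch_size):
          with conn:  # one COMMIT per batch instead of per row
              cur.executemany("INSERT INTO record VALUES (?, ?)",
                              records[i:i + batch_size])
      for i in range(0, len(related_rows), batch_size):
          with conn:
              cur.executemany("INSERT INTO related VALUES (?, ?)",
                              related_rows[i:i + batch_size])

  # Indexes are created only after the bulk load, as in the original post.
  def create_indexes():
      with conn:
          conn.execute("CREATE INDEX idx_related_record ON related(record_id)")

Keeping each COMMIT to one large batch avoids per-row transaction overhead,
which is usually the dominant cost in a load like this.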


If the updates pertain just to the 150k rows immediately preceding them,
could you put each 150k chunk into its own table and then do a join when
accessing the data? Or even a merge at that point? It could be a lot faster.

Gerry Snyder
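
A rough sketch of that chunking idea, reusing the hypothetical connection
and schema from the example above: each 150k-row slice gets its own staging
table, so any updates touch only the current chunk, and the chunks are
merged into the final table once loading is done:

  def create_chunk(n):
      """Create a per-chunk staging table for one 150k-row slice."""
      name = f"record_chunk_{n}"  # hypothetical naming scheme
      conn.execute(f"CREATE TABLE IF NOT EXISTS {name} "
                   "(id INTEGER PRIMARY KEY, payload TEXT)")
      return name

  def merge_chunks(chunk_names):
      """Merge the staging tables into the final table after the load;
      INSERT ... SELECT runs entirely inside SQLite, so it is fast."""
      with conn:
          for name in chunk_names:
              conn.execute(f"INSERT INTO record SELECT * FROM {name}")
              conn.execute(f"DROP TABLE {name}")

A UNION ALL view over the chunk tables would avoid the final copy, at the
cost of slower reads when generating the reports.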