> i'm not sure at all it's practical to create an 80MB string with one giant 
> SQL statement in it to send all that data at once.

An 80MB string is not too bad after all (probably even less than 1% of
total memory), so you'd better do it this way. BTW, it won't be one
SQL statement; it will be 40k SQL statements, which will be executed
one by one once the parsing is over.
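[Editor's note: Pavel's suggestion can be sketched as below, using Python's
built-in sqlite3 module for brevity (the thread itself uses the C API). The
table and column names are made up, since the thread never gives the real
schema; wrapping the script in BEGIN/COMMIT gives the rollback-on-error
behaviour David asked for.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical schema -- the thread never gives the real column names.
con.execute("CREATE TABLE songs (id INTEGER PRIMARY KEY, title TEXT, artist TEXT)")

# Build one big script of many INSERT statements, wrapped in a transaction
# so the whole batch can be rolled back if anything fails mid-parse.
script = "BEGIN;\n" + "\n".join(
    f"INSERT INTO songs VALUES ({i}, 'title {i}', 'artist {i}');" for i in range(1000)
) + "\nCOMMIT;"

try:
    con.executescript(script)  # the statements run one by one, as Pavel says
except sqlite3.Error:
    con.execute("ROLLBACK")    # an error mid-script leaves the transaction open
    raise

print(con.execute("SELECT COUNT(*) FROM songs").fetchone()[0])  # 1000
```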


Pavel

On Sun, Feb 6, 2011 at 1:07 PM, David M. Cotter <m...@davecotter.com> wrote:
> bummrz.
>
> here's what i'm trying to do
>
> my software has two music stores
> it downloads an XML for each music store, both at the same time
> it then parses the XML into SQLite
> each song has about 7 to 10 bits of data (columns)
> and there may be 40k songs.
> i need to be able to add all 40k songs to the DB in a way that can be rolled 
> back if a parse error occurs, if the user hits the stop sign, or if the user 
> quits during the parse phase (which takes about 30 seconds)
> see?
>
> and the two stores operate independently, on different threads
>
> i'm not sure at all it's practical to create an 80MB string with one giant 
> SQL statement in it to send all that data at once.
>
> so here's my new idea:
>
> is there a way to merge two databases that have already been created?
> so each store, during the update phase, would write to its own personal 
> database.
> only when it's complete, and there are no errors or aborts, would i then 
> "merge" the store DB with the main DB. but this depends on the ability to run 
> a single "merge" command?
>
> is there such a thing?
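[Editor's note: there is such a thing. SQLite's ATTACH DATABASE statement lets
one connection see a second database file, after which the "merge" is a plain
INSERT ... SELECT that can run inside a transaction. A minimal sketch in
Python's built-in sqlite3 module, with made-up table names:]

```python
import os
import sqlite3
import tempfile

# Per-store database written during the update phase (hypothetical schema).
store_path = os.path.join(tempfile.mkdtemp(), "store.db")
store = sqlite3.connect(store_path)
store.execute("CREATE TABLE songs (id INTEGER PRIMARY KEY, title TEXT)")
store.executemany("INSERT INTO songs VALUES (?, ?)",
                  [(i, f"song {i}") for i in range(5)])
store.commit()
store.close()

# Main database: attach the finished store file and copy its rows over.
main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE songs (id INTEGER PRIMARY KEY, title TEXT)")
main.execute("ATTACH DATABASE ? AS store", (store_path,))
with main:  # one transaction; rolls back automatically on error
    main.execute("INSERT INTO songs SELECT * FROM store.songs")
main.execute("DETACH DATABASE store")

print(main.execute("SELECT COUNT(*) FROM songs").fetchone()[0])  # 5
```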
>
> On Feb 6, 2011, at 9:56 AM, Simon Slavin wrote:
>
>>
>> On 6 Feb 2011, at 5:42pm, David M. Cotter wrote:
>>
>>>> If you don't need this behaviour because you're confident you'll never get 
>>>> a clash, then you could accumulate your INSERTs in memory, then blast 
>>>> through them when you would previously have just done the COMMIT.
>>>
>>>
>>> i will never have a clash because i manage the primary keys myself.
>>> is there an SQL way to do that or do you mean i should build my own struct 
>>> to hold the data temporarily?
>>
>> Sorry, no automated way to do it.  You'll have to do it yourself.
>>
>> If, for example, you're using sqlite3_exec() and just handing it a SQL 
>> string that starts "INSERT INTO ..." then you could make an array, store 
>> these strings in it as you're preparing them, and just flush the array to 
>> your database when you have 100 (or 1000) of them.  This will lock up your 
>> database for less time because you won't be holding it locked while you 
>> prepare the SQL commands, just enough time to retrieve them from an array.
>>
>> Simon.
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
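[Editor's note: Simon's buffer-and-flush idea might look like the sketch below,
again in Python's built-in sqlite3 module for brevity. The batch size, table,
and parse loop are stand-ins, not from the thread.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE songs (id INTEGER PRIMARY KEY, title TEXT)")

pending = []  # finished INSERT statements accumulate here while parsing
BATCH = 100   # flush threshold, per Simon's "100 (or 1000)"

def flush(con, stmts):
    # Hold the database locked only long enough to replay the buffer.
    with con:
        for s in stmts:
            con.execute(s)
    stmts.clear()

for i in range(250):  # stand-in for the 30-second XML parse loop
    pending.append(f"INSERT INTO songs VALUES ({i}, 'song {i}')")
    if len(pending) >= BATCH:
        flush(con, pending)
flush(con, pending)  # whatever is left at the end

print(con.execute("SELECT COUNT(*) FROM songs").fetchone()[0])  # 250
```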
>
>
