On 6 Feb 2018, at 11:52am, Nick <haveagoodtime2...@gmail.com> wrote:

> But I ran a simple test:
> Two processes each call sqlite3_open() to open the same db. Then both
> processes simultaneously insert 10000 records (in a transaction) into the
> db.
> But I find that:
> 
> Process A begin
> Process A insert
>            Process B begin
>            Process B insert
> Process A end
>            Process B end
> 
> Which I guess means that Process B did not sleep at all?
> And the final count of records is less than 20000.

You should not be able to end up with fewer than 20000 rows without either (a) an error 
result of some kind or (b) a corrupt database.

Are your processes using the same database connection or does each one have its 
own?

Are you checking the result codes returned by all the API calls?

Can you reliably get fewer than 20000 rows?

Does the problem go away if you use threadsafe = 2?
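
For what it's worth, here is a minimal sketch (in C, not Nick's actual program) of what each
process could do, assuming a table t(x INTEGER): set a busy timeout so the second writer waits
for the first one's lock instead of failing immediately, and check every return code.

#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

static void fail(sqlite3 *db, const char *where)
{
    fprintf(stderr, "%s: %s\n", where, sqlite3_errmsg(db));
    sqlite3_close(db);
    exit(1);
}

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    int i;

    if (sqlite3_open("test.db", &db) != SQLITE_OK) fail(db, "open");

    /* Without a busy handler the second writer gets SQLITE_BUSY at once
       instead of sleeping; this makes it retry for up to 10 seconds. */
    sqlite3_busy_timeout(db, 10000);

    if (sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x INTEGER)",
                     0, 0, 0) != SQLITE_OK) fail(db, "create");

    /* BEGIN IMMEDIATE takes the write lock up front, so a locking
       failure shows up here rather than at COMMIT time. */
    if (sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0) != SQLITE_OK)
        fail(db, "begin");

    if (sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?)", -1, &stmt, 0)
            != SQLITE_OK) fail(db, "prepare");

    for (i = 0; i < 10000; i++) {
        sqlite3_bind_int(stmt, 1, i);
        if (sqlite3_step(stmt) != SQLITE_DONE) fail(db, "step");
        sqlite3_reset(stmt);
    }

    sqlite3_finalize(stmt);
    if (sqlite3_exec(db, "COMMIT", 0, 0, 0) != SQLITE_OK) fail(db, "commit");
    sqlite3_close(db);
    return 0;
}

If both processes run something like that and every call succeeds, you should end up with
exactly 20000 rows.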

Simon.