Re: [sqlite] inserts, performance, file lock...

2011-11-12 Thread yqpl

Come on, I put the code in my first post; please check it there before asking this
question!


Actually, reopening the database is an option I tried, and it keeps the speed at
a good level. But after the first close, the next commit failed because the
database was busy. It is in the code I posted, but I only used it in later
attempts.




Kees Nuyt wrote:
> 
> On Sat, 12 Nov 2011 08:21:15 -0800 (PST), yqpl
>  wrote:
> 
>>
>>i.e. I'm loading 1000 files.
>>
>>For the first rows it even speeds up, from the initial 25 files/sec to
>>30 files/sec over the first 50 files. Then it starts to slow down
>>steadily until it reaches 2 files/sec at the end, around file 1000.
>>
>>Every run looks exactly the same.
> 
> I hope you don't open and close the database connection for every
> file? It will invalidate the cache and the parsed schema.
> 
> Open the database before the first file, close it after the last
> file, and bundle some 1,000 to 5,000 insert statements per
> transaction.
> 
> -- 
>   (  Kees Nuyt
>   )
> c[_]

-- 
View this message in context: 
http://old.nabble.com/inserts%2C-performance%2C-file-lock...-tp32814772p32831963.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] inserts, performance, file lock...

2011-11-12 Thread Kees Nuyt
On Sat, 12 Nov 2011 08:21:15 -0800 (PST), yqpl
 wrote:

>
>i.e. I'm loading 1000 files.
>
>For the first rows it even speeds up, from the initial 25 files/sec to
>30 files/sec over the first 50 files. Then it starts to slow down
>steadily until it reaches 2 files/sec at the end, around file 1000.
>
>Every run looks exactly the same.

I hope you don't open and close the database connection for every
file? It will invalidate the cache and the parsed schema.

Open the database before the first file, close it after the last
file, and bundle some 1,000 to 5,000 insert statements per
transaction.

-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] inserts, performance, file lock...

2011-11-12 Thread yqpl

i.e. I'm loading 1000 files.

For the first rows it even speeds up, from the initial 25 files/sec to
30 files/sec over the first 50 files. Then it starts to slow down
steadily until it reaches 2 files/sec at the end, around file 1000.

Every run looks exactly the same.



Nico Williams wrote:
> 
> On Fri, Nov 11, 2011 at 2:38 PM, yqpl  wrote:
>> yes still slows down.
> 
> Can you characterize it?  All index inserts should slow down somewhat
> as the index grows since lookup and insertion will be O(logN)
> operations for b-trees, but also as your indexes grow larger than
> available memory you'll notice dramatic slow downs.


Re: [sqlite] inserts, performance, file lock...

2011-11-11 Thread Nico Williams
On Fri, Nov 11, 2011 at 2:38 PM, yqpl  wrote:
> yes still slows down.

Can you characterize it?  All index inserts should slow down somewhat
as the index grows since lookup and insertion will be O(logN)
operations for b-trees, but also as your indexes grow larger than
available memory you'll notice dramatic slow downs.


Re: [sqlite] inserts, performance, file lock...

2011-11-11 Thread yqpl

Yes, it still slows down.


Nico Williams wrote:
> 
> On Thu, Nov 10, 2011 at 3:19 AM, yqpl  wrote:
>> I did some tests to check if the indexes make it slow. Instead of inserting
>> into the disk database, I used a ":memory:" database, into which I copied the
>> tables only (I assume without indexes), and then did the inserts, and it
>> behaves the same.
> 
> UNIQUE constraints on columns imply indexes.
> 
>> Does that prove it isn't because of the indexes?
> 
> I don't understand what you mean by "it works the same".  Do you mean
> that it still slows down?
> 
> Nico
> --


Re: [sqlite] inserts, performance, file lock...

2011-11-11 Thread Nico Williams
On Fri, Nov 11, 2011 at 1:39 AM, yqpl  wrote:
> Nico Williams wrote:
>> What's your page size?
>
> I have no access to those files right now, but I didn't change anything, so
> it's the default.

You really want to set the page size to something decent -- at least
the filesystem's preferred block size (typically 4KB or 8KB).

The page size has a lot to do with performance: a) it affects the
number of meta-data nodes in the b-trees as well as the depth of the
b-tree, which affects the number of reads needed to do random lookups
or insertions, b) page sizes that are not whole multiples of the
filesystem's preferred block size result in larger I/Os than the page
size anyways because the filesystem will likely want to read the whole
thing in.
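For illustration, a sketch of setting the page size from Python's sqlite3 (file
and table names are invented); the key detail is that PRAGMA page_size only
takes effect before the first write to a fresh database, or after a VACUUM:

```python
import os
import sqlite3
import tempfile

def create_db(path, page_size=4096):
    # The pragma must be issued before the database file gets any
    # content; the first write then fixes the page size permanently
    # (until a VACUUM with a different setting).
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA page_size = %d" % page_size)
    conn.execute("CREATE TABLE t(x)")   # first write locks in the size
    conn.commit()
    return conn

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = create_db(db_path, page_size=8192)
```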

Nico
--


Re: [sqlite] inserts, performance, file lock...

2011-11-11 Thread Nico Williams
On Thu, Nov 10, 2011 at 3:19 AM, yqpl  wrote:
> I did some tests to check if the indexes make it slow. Instead of inserting
> into the disk database, I used a ":memory:" database, into which I copied the
> tables only (I assume without indexes), and then did the inserts, and it
> behaves the same.

UNIQUE constraints on columns imply indexes.

> Does that prove it isn't because of the indexes?

I don't understand what you mean by "it works the same".  Do you mean
that it still slows down?

Nico
--


Re: [sqlite] inserts, performance, file lock...

2011-11-11 Thread yqpl

all settings:

Name                       Value                        Modified
auto_vacuum                none                         False
automatic_index            on                           False
cache_size                 2000                         False
case_sensitive_like        off                          False
collation_list             [NOCASE], [RTRIM], [BINARY]  False
count_changes              off                          False
default_cache_size         2000                         False
empty_result_callbacks     off                          False
encoding                   UTF-8                        False
foreign_keys               on                           False
freelist_count             13345                        False
full_column_names          off                          False
fullfsync                  off                          False
journal_mode               delete                       False
journal_size_limit         -1                           False
legacy_file_format         on                           False
locking_mode               normal                       False
max_page_count             1073741823                   False
page_count                 14071                        False
page_size                  1024                         False
read_uncommitted           off                          False
recursive_triggers         off                          False
reverse_unordered_selects  off                          False
schema_version             371                          False
secure_delete              off                          False
short_column_names         on                           False
synchronous                full                         False
temp_store                 default                      False
temp_store_directory       ""                           False
user_version               0                            False
wal_autocheckpoint         1000                         False




Nico Williams wrote:
> 
> What's your page size?


Re: [sqlite] inserts, performance, file lock...

2011-11-10 Thread yqpl

I have no access to those files right now, but I didn't change anything, so
it's the default.

Nico Williams wrote:
> 
> What's your page size?


Re: [sqlite] inserts, performance, file lock...

2011-11-10 Thread yqpl

I did some tests to check if the indexes make it slow. Instead of inserting
into the disk database, I used a ":memory:" database, into which I copied the
tables only (I assume without indexes), and then did the inserts, and it
behaves the same.

Does that prove it isn't because of the indexes?



Richard Hipp-3 wrote:
> 
> On Wed, Nov 9, 2011 at 6:17 PM, yqpl  wrote:
> 
>>
>> No matter how big my database is, inserts start at the same speed and keep
>> getting slower.
>> That's why it is better to split 1000 files into 10x 100 files, because all
>> of those 100-file batches will be imported fast. But I also get this file
>> lock error.
>>
> 
> First create your tables without indices.  Then do your inserts.  After
> all
> the data is in the table, then do your CREATE INDEX statements.
> 
> Also avoid UNIQUE constraints since they generate indices.  Instead add
> CREATE UNIQUE INDEX after all inserts are done.
> 
> Inserting content into a table is fast because it can simply append.
> Inserting content into an index is slower since it has to look up the
> correct place in the middle of the index and insert it there, which
> involves a binary search, etc.
> 
> The latest release of SQLite contains an optimization that makes CREATE
> INDEX go up to 100x faster for very large tables.  So be sure to use
> SQLite
> version 3.7.8 or 3.7.9 to do the CREATE INDEX.
> 
> 
>>
>>
>>
>> Simon Slavin-3 wrote:
>> >
>> >
>> > On 9 Nov 2011, at 10:21pm, yqpl wrote:
>> >
>> >> I'm starting a transaction,
>> >> then making a lot of inserts, and committing.
>> >> I've got about 30 inserts per second, but after a while it drops to
>> >> about 1-2 inserts per second. It takes about ~500 inserts to drop to
>> >> this 1-2 inserts per second.
>> >
>> > Here is a guess, but it is just a guess.
>> >
>> > Your computer has a certain amount of memory, or disk cache free. 
>> While
>> > your whole SQLite database fits in that space, your process is fast.
>>  Once
>> > the database gets bigger than that amount of free, the computer has to
>> > keep swapping bits of file in and out, and this makes the process
>> slower.
>> > So my guess is that the slow down will be related not to the number of
>> > inserts so far, but to the total size of the database file so far.
>> >
>> > Something that will help to clarify this is to get an idea of how big
>> the
>> > files are that you are inserting.  There was a recent post to this list
>> > where the SQLite team had tested the relative speeds of holding entire
>> > image files in a SQLite database compared to holding just the paths of
>> > those files.
>> >
>> > Simon.
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org



Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Richard Hipp
On Wed, Nov 9, 2011 at 6:17 PM, yqpl  wrote:

>
> No matter how big my database is, inserts start at the same speed and keep
> getting slower.
> That's why it is better to split 1000 files into 10x 100 files, because all
> of those 100-file batches will be imported fast. But I also get this file
> lock error.
>

First create your tables without indices.  Then do your inserts.  After all
the data is in the table, then do your CREATE INDEX statements.

Also avoid UNIQUE constraints since they generate indices.  Instead add
CREATE UNIQUE INDEX after all inserts are done.

Inserting content into a table is fast because it can simply append.
Inserting content into an index is slower since it has to look up the
correct place in the middle of the index and insert it there, which
involves a binary search, etc.

The latest release of SQLite contains an optimization that makes CREATE
INDEX go up to 100x faster for very large tables.  So be sure to use SQLite
version 3.7.8 or 3.7.9 to do the CREATE INDEX.
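A sketch of that load-then-index ordering, in Python's sqlite3 for brevity (the
table and index names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files(path TEXT, body TEXT)")  # no indexes yet

rows = [("file%04d" % i, "contents") for i in range(10000)]
with conn:  # one transaction around the whole bulk load
    conn.executemany("INSERT INTO files VALUES (?, ?)", rows)

# Only now build the index; this also enforces uniqueness in one pass
# over sorted data instead of on every single insert.
conn.execute("CREATE UNIQUE INDEX files_path ON files(path)")
```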


>
>
>
> Simon Slavin-3 wrote:
> >
> >
> > On 9 Nov 2011, at 10:21pm, yqpl wrote:
> >
> >> I'm starting a transaction,
> >> then making a lot of inserts, and committing.
> >> I've got about 30 inserts per second, but after a while it drops to
> >> about 1-2 inserts per second. It takes about ~500 inserts to drop to
> >> this 1-2 inserts per second.
> >
> > Here is a guess, but it is just a guess.
> >
> > Your computer has a certain amount of memory, or disk cache free.  While
> > your whole SQLite database fits in that space, your process is fast.
>  Once
> > the database gets bigger than that amount of free, the computer has to
> > keep swapping bits of file in and out, and this makes the process slower.
> > So my guess is that the slow down will be related not to the number of
> > inserts so far, but to the total size of the database file so far.
> >
> > Something that will help to clarify this is to get an idea of how big the
> > files are that you are inserting.  There was a recent post to this list
> > where the SQLite team had tested the relative speeds of holding entire
> > image files in a SQLite database compared to holding just the paths of
> > those files.
> >
> > Simon.



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

And the files are small: maybe 10-50 lines of text. The original lines are
stored, and they are also parsed into different tables.



Simon Slavin-3 wrote:
> 
> 
> On 9 Nov 2011, at 10:21pm, yqpl wrote:
> 
>> I'm starting a transaction,
>> then making a lot of inserts, and committing.
>> I've got about 30 inserts per second, but after a while it drops to
>> about 1-2 inserts per second. It takes about ~500 inserts to drop to this
>> 1-2 inserts per second.
> 
> Here is a guess, but it is just a guess.
> 
> Your computer has a certain amount of memory, or disk cache free.  While
> your whole SQLite database fits in that space, your process is fast.  Once
> the database gets bigger than that amount of free, the computer has to
> keep swapping bits of file in and out, and this makes the process slower. 
> So my guess is that the slow down will be related not to the number of
> inserts so far, but to the total size of the database file so far.
> 
> Something that will help to clarify this is to get an idea of how big the
> files are that you are inserting.  There was a recent post to this list
> where the SQLite team had tested the relative speeds of holding entire
> image files in a SQLite database compared to holding just the paths of
> those files.
> 
> Simon.



Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Nico Williams
What's your page size?


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

No matter how big my database is, inserts start at the same speed and keep
getting slower.
That's why it is better to split 1000 files into 10x 100 files, because all
of those 100-file batches will be imported fast. But I also get this file
lock error.



Simon Slavin-3 wrote:
> 
> 
> On 9 Nov 2011, at 10:21pm, yqpl wrote:
> 
>> I'm starting a transaction,
>> then making a lot of inserts, and committing.
>> I've got about 30 inserts per second, but after a while it drops to
>> about 1-2 inserts per second. It takes about ~500 inserts to drop to this
>> 1-2 inserts per second.
> 
> Here is a guess, but it is just a guess.
> 
> Your computer has a certain amount of memory, or disk cache free.  While
> your whole SQLite database fits in that space, your process is fast.  Once
> the database gets bigger than that amount of free, the computer has to
> keep swapping bits of file in and out, and this makes the process slower. 
> So my guess is that the slow down will be related not to the number of
> inserts so far, but to the total size of the database file so far.
> 
> Something that will help to clarify this is to get an idea of how big the
> files are that you are inserting.  There was a recent post to this list
> where the SQLite team had tested the relative speeds of holding entire
> image files in a SQLite database compared to holding just the paths of
> those files.
> 
> Simon.



Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

Yep, I'm sure.

30 per second is fine.
Parsing time is included, and in reality there are 30 files per second inserted
into the database, but each file produces several inserts. I didn't want to
make the explanation more complicated with that.




Richard Hipp-3 wrote:
> 
> On Wed, Nov 9, 2011 at 5:21 PM, yqpl  wrote:
> 
>>
>> Hi,
>>
>> My task is to parse a lot of files and then insert them into an SQLite
>> database.
>> It could be thousands of files. I use C#.
>>
>> I'm starting a transaction,
>> then making a lot of inserts, and committing.
>> I've got about 30 inserts per second
> 
> 
> I typically get 100,000 rows per second on a modern workstation, from
> inside a transaction.  Are you *sure* you are using a transaction?
> 
> 
> 
>> but after a while it drops to
>> about 1-2 inserts per second. It takes about ~500 inserts to drop to this
>> 1-2 inserts per second.
>>
>> I've got indexes on this database, but that doesn't make a difference, as I
>> checked on a copy without indexes.
>>
>> 1) Why does it get tired and slow down? How do I fix it?
>>
>> I tried doing this in a loop with 100 inserts to keep the speed reasonable,
>> then closing and reopening the database.
>> After the close and reopen, the next commit gets SQLITE_BUSY /* The database
>> file is locked */.
>>
>> 2) What is going on?
>>
>> Please help, I'm stuck.
>>
>> code is like:
>>
>> //done by background worker
>> void import(object sender, DoWorkEventArgs e)
>>{
>> DataTable tab;
>>tab = PST_POSTGRES.Postgres.Query(textBox6.Text,
>> textBox1.Text,
>> textBox3.Text, textBox4.Text, textBox5.Text, textBox2.Text);
>>
>>
>>SQLiteWrapper.SQLite db = new SQLiteWrapper.SQLite();
>>db.OpenDatabase(sqlite_db);
>>db.BeginTransaction();
>>
>>
>>foreach (DataRow r in tab.Rows)
>>{
>>
>>if (bw.CancellationPending == true)
>>{
>>e.Cancel = true;
>>break;
>>}
>>//import here
>>
>>            foreach (object o in imported)
>>            {
>>                doinserts(o);
>>            }
>>
>>            // here is the reopen condition, used when inserts get slower
>>            if ((((long)(imported * 1000)) / stoper.ElapsedMilliseconds) < next)
>>            {
>>                db.CommitTransaction();
>>                db.CloseDatabase();
>>                db = new SQLiteWrapper.SQLite();
>>                db.OpenDatabase(sqlite_db);
>>                db.BeginTransaction();
>>            }
>>next = (((long)(imported * 1000)) /
>> stoper.ElapsedMilliseconds);
>>}
>>
>>db.CommitTransaction();
>>db.CloseDatabase();
>>stoper.Stop();
>>}
>> --
>> View this message in context:
>> http://old.nabble.com/inserts%2C-performance%2C-file-lock...-tp32814772p32814772.html
>> Sent from the SQLite mailing list archive at Nabble.com.
>>
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org



Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 10:21pm, yqpl wrote:

> I'm starting a transaction,
> then making a lot of inserts, and committing.
> I've got about 30 inserts per second, but after a while it drops to
> about 1-2 inserts per second. It takes about ~500 inserts to drop to this
> 1-2 inserts per second.

Here is a guess, but it is just a guess.

Your computer has a certain amount of memory, or disk cache, free.  While your 
whole SQLite database fits in that space, your process is fast.  Once the 
database gets bigger than that amount of free space, the computer has to keep 
swapping bits of the file in and out, and this makes the process slower.  So my 
guess is that the slowdown will be related not to the number of inserts so 
far, but to the total size of the database file so far.

Something that will help to clarify this is to get an idea of how big the files 
are that you are inserting.  There was a recent post to this list where the 
SQLite team had tested the relative speeds of holding entire image files in a 
SQLite database compared to holding just the paths of those files.

Simon.


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Richard Hipp
On Wed, Nov 9, 2011 at 5:21 PM, yqpl  wrote:

>
> Hi,
>
> My task is to parse a lot of files and then insert them into an SQLite
> database.
> It could be thousands of files. I use C#.
>
> I'm starting a transaction,
> then making a lot of inserts, and committing.
> I've got about 30 inserts per second


I typically get 100,000 rows per second on a modern workstation, from
inside a transaction.  Are you *sure* you are using a transaction?
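That difference is easy to demonstrate with a small timing sketch, here in
Python's sqlite3 rather than the poster's C# (table name invented):

```python
import os
import sqlite3
import tempfile
import time

def insert_rows(path, n, use_transaction):
    # Without a transaction, every INSERT is its own commit (one sync
    # to disk per row); with one, the commit cost is paid once at the end.
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("CREATE TABLE t(x)")
    start = time.perf_counter()
    if use_transaction:
        conn.execute("BEGIN")
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?)", (i,))
    if use_transaction:
        conn.execute("COMMIT")
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

base = tempfile.mkdtemp()
no_txn = insert_rows(os.path.join(base, "a.db"), 500, use_transaction=False)
one_txn = insert_rows(os.path.join(base, "b.db"), 500, use_transaction=True)
# On real disks the autocommit run is typically orders of magnitude slower;
# the exact ratio depends on the filesystem and the synchronous setting.
```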



> but after a while it drops to
> about 1-2 inserts per second. It takes about ~500 inserts to drop to this
> 1-2 inserts per second.
>
> I've got indexes on this database, but that doesn't make a difference, as I
> checked on a copy without indexes.
>
> 1) Why does it get tired and slow down? How do I fix it?
>
> I tried doing this in a loop with 100 inserts to keep the speed reasonable,
> then closing and reopening the database.
> After the close and reopen, the next commit gets SQLITE_BUSY /* The database
> file is locked */.
>
> 2) What is going on?
>
> Please help, I'm stuck.
>
> code is like:
>
> //done by background worker
> void import(object sender, DoWorkEventArgs e)
>{
> DataTable tab;
>tab = PST_POSTGRES.Postgres.Query(textBox6.Text, textBox1.Text,
> textBox3.Text, textBox4.Text, textBox5.Text, textBox2.Text);
>
>
>SQLiteWrapper.SQLite db = new SQLiteWrapper.SQLite();
>db.OpenDatabase(sqlite_db);
>db.BeginTransaction();
>
>
>foreach (DataRow r in tab.Rows)
>{
>
>if (bw.CancellationPending == true)
>{
>e.Cancel = true;
>break;
>}
>//import here
>
>            foreach (object o in imported)
>            {
>                doinserts(o);
>            }
>
>            // here is the reopen condition, used when inserts get slower
>            if ((((long)(imported * 1000)) / stoper.ElapsedMilliseconds) < next)
>            {
>                db.CommitTransaction();
>                db.CloseDatabase();
>                db = new SQLiteWrapper.SQLite();
>                db.OpenDatabase(sqlite_db);
>                db.BeginTransaction();
>            }
>next = (((long)(imported * 1000)) /
> stoper.ElapsedMilliseconds);
>}
>
>db.CommitTransaction();
>db.CloseDatabase();
>stoper.Stop();
>}
> --
> View this message in context:
> http://old.nabble.com/inserts%2C-performance%2C-file-lock...-tp32814772p32814772.html
> Sent from the SQLite mailing list archive at Nabble.com.
>



-- 
D. Richard Hipp
d...@sqlite.org