Come on! I put the code in my first post, so you shouldn't have to ask this question!
Actually, reopening the database is an option I tried, and it does keep the speed at a good level. But after the first close, the next commit failed because the database was busy. It is in the code, but I only used it in later attempts.

Kees Nuyt wrote:
> On Sat, 12 Nov 2011 08:21:15 -0800 (PST), yqpl
> <y...@poczta.onet.pl> wrote:
>
>> i.e. I'm loading 1000 files.
>>
>> At first it even speeds up: from the initial 25 files/sec
>> to 30 files/sec within the first 50 files.
>> Then it starts to slow down evenly (a regular slowdown every so many
>> files) until it reaches 2 files/sec at the end - 1000 files.
>>
>> Every subsequent run looks exactly the same.
>
> I hope you don't open and close the database connection for every
> file? It will invalidate the cache and the parsed schema.
>
> Open the database before the first file, close it after the last
> file, and bundle some 10,000 to 50,000 insert statements per
> transaction.
>
> --
> ( Kees Nuyt
> )
> c[_]
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
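For the archive: Kees's suggestion (open the connection once, batch thousands of inserts per transaction) combined with a busy timeout for the "database is busy" error could be sketched like this in Python's sqlite3. This is only an illustration, not the original poster's code - the `files` table, the `load_files` name, and the batch size are assumptions.

```python
import sqlite3

BATCH_SIZE = 10_000  # inserts per transaction, per Kees's 10k-50k suggestion


def load_files(rows, db_path="files.db"):
    # Open once before the first file, close once after the last file;
    # timeout=10.0 makes SQLite wait up to 10 s on a locked database
    # instead of failing immediately with "database is busy".
    con = sqlite3.connect(db_path, timeout=10.0)
    con.execute("CREATE TABLE IF NOT EXISTS files(name TEXT, content BLOB)")
    try:
        pending = 0
        for name, content in rows:
            con.execute("INSERT INTO files VALUES (?, ?)", (name, content))
            pending += 1
            if pending >= BATCH_SIZE:
                con.commit()  # one commit per batch, not per file
                pending = 0
        con.commit()  # flush the final partial batch
    finally:
        con.close()
```

Committing per batch rather than per file avoids paying the fsync cost of a transaction for every single insert, which is usually what makes per-file commits so slow.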