Come on, I put the code in my first post, you shouldn't have to ask this question!
Actually, reopening the database is an option I tried, and it keeps the speed at
the good level.
But after the first close the next commit failed because the database was busy.
It is in the code, but I only used it in later attempts.
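A minimal sketch of the reopen workaround described above (in Python's stdlib `sqlite3` for brevity; the real code is C#, and the schema here is made up). Setting a busy timeout on each fresh connection makes the next commit wait for a lingering lock instead of failing with "database is busy":

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: reopen the database after every batch, with a
# busy timeout so the next connection waits for any lingering lock
# instead of failing immediately with "database is busy".
path = os.path.join(tempfile.mkdtemp(), "demo.db")

def open_db(path):
    con = sqlite3.connect(path)
    con.execute("PRAGMA busy_timeout = 5000")  # wait up to 5 s on a lock
    return con

con = open_db(path)
con.execute("CREATE TABLE files (name TEXT)")
for batch in range(4):
    with con:  # one transaction per batch of 100 "files"
        for i in range(100):
            con.execute("INSERT INTO files VALUES (?)", (f"file-{batch}-{i}",))
    con.close()          # release the handle (and its page cache)
    con = open_db(path)  # fresh connection for the next batch
con.close()
```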
Kees Nuyt wrote:
On Sat, 12 Nov 2011 08:21:15 -0800 (PST), yqpl
wrote:
>
>I.e. I'm loading 1000 files.
>
>For the first rows it even speeds up: from the initial 25 files/sec it
>speeds up to 30 files/sec by file 50.
>Then it starts to slow down evenly (a regular slowdown every so many
>files) until 2 files
I.e. I'm loading 1000 files.
For the first rows it even speeds up: from the initial 25 files/sec it speeds
up to 30 files/sec by file 50.
Then it starts to slow down evenly (a regular slowdown every so many files)
until 2 files/sec at the end, at file 1000.
Every next time it looks the same.
On Fri, Nov 11, 2011 at 2:38 PM, yqpl wrote:
> yes still slows down.
Can you characterize it? All index inserts should slow down somewhat
as the index grows, since lookup and insertion will be O(log N)
operations for B-trees, but also as your indexes grow larger than
available memory you'll notice
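One way to characterize it, sketched with an assumed toy schema (not the poster's): time equal batches of inserts into an indexed table and watch how the rate changes as the B-tree grows.

```python
import sqlite3
import time

# Sketch: time equal batches of inserts into an indexed table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k TEXT)")
con.execute("CREATE INDEX t_k ON t (k)")  # each insert costs O(log N)

rates = []
for batch in range(5):
    start = time.perf_counter()
    with con:  # one transaction per batch
        for i in range(10_000):
            con.execute("INSERT INTO t VALUES (?)",
                        (f"key-{batch:02d}-{i:06d}",))
    rates.append(10_000 / (time.perf_counter() - start))

# A mild, gradual decline is the O(log N) index cost; a sudden cliff once
# the index outgrows the page cache points at cache thrashing instead.
print([round(r) for r in rates])
```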
yes still slows down.
Nico Williams wrote:
>
> On Thu, Nov 10, 2011 at 3:19 AM, yqpl wrote:
>> I did some tests to check if indexes make it slow. Instead of inserting to a
>> disk database I used a ":memory:" database - I copied tables only - I
>> assume without indexes - and then did inserts - an
On Fri, Nov 11, 2011 at 1:39 AM, yqpl wrote:
> Nico Williams wrote:
>> What's your page size?
>
> I have no access to those files now, but I didn't change anything - so
> default.
You really want to set the page size to something decent -- at least
the filesystem's preferred block size (typically
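A quick sketch of this advice, assuming Python's stdlib `sqlite3`: `PRAGMA page_size` only takes effect on a brand-new (or freshly VACUUMed) database, so it has to be issued before the first write.

```python
import os
import sqlite3
import tempfile

# page_size must be set before the database file is created; it only
# takes effect on a new or freshly VACUUMed database.
path = os.path.join(tempfile.mkdtemp(), "pagesize.db")
con = sqlite3.connect(path)
con.execute("PRAGMA page_size = 4096")  # e.g. the filesystem block size
con.execute("CREATE TABLE t (x)")       # first write fixes the page size
print(con.execute("PRAGMA page_size").fetchone()[0])  # → 4096
```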
On Thu, Nov 10, 2011 at 3:19 AM, yqpl wrote:
> I did some tests to check if indexes make it slow. Instead of inserting to a
> disk database I used a ":memory:" database - I copied tables only - I
> assume without indexes - and then did inserts - and it works the same.
UNIQUE constraints on columns im
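The message is cut off, but the likely point is worth a sketch: a UNIQUE constraint is enforced by an implicit index, so "tables only, no indexes" still pays index-maintenance cost on every insert if any column is UNIQUE.

```python
import sqlite3

# A UNIQUE column gets an implicit index even when no CREATE INDEX
# statement was ever run.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (path TEXT UNIQUE, body TEXT)")
autoindexes = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
print(autoindexes)  # → ['sqlite_autoindex_files_1']
```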
all settings:

Name                 Value                        Modified
auto_vacuum          none                         False
automatic_index      on                           False
cache_size           2000                         False
case_sensitive_like  off                          False
collation_list       [NOCASE], [RTRIM], [BINARY]  False
count_changes        off                          False
default_cache_size   2000                         False
empty_re
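One observation on these defaults: cache_size is 2000 pages, which with a 1 KiB page is only about 2 MB of cache. A sketch (Python stdlib `sqlite3`) of raising it so more of the index B-trees stay in memory:

```python
import sqlite3

# cache_size is measured in pages (a negative value means KiB instead);
# the default of 2000 pages is small for bulk-insert workloads.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA cache_size = 100000")
print(con.execute("PRAGMA cache_size").fetchone()[0])  # → 100000
```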
I have no access to those files now, but I didn't change anything - so
default.
Nico Williams wrote:
>
> What's your page size?
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
I did some tests to check if indexes make it slow. Instead of inserting into the
disk database I used a ":memory:" database - I copied the tables only - I
assume without indexes - and then did the inserts - and it works the same.
Does that prove that it isn't because of the indexes?
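A sketch of the test described (assumed toy schema, Python for brevity): copy only the CREATE TABLE statements, not the indexes, into a :memory: database and insert there. Caveat: UNIQUE/PRIMARY KEY columns would still get implicit indexes even in the copy.

```python
import sqlite3

# Copy only the table definitions (no CREATE INDEX) into :memory:.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (a, b)")
src.execute("CREATE INDEX t_a ON t (a)")

dst = sqlite3.connect(":memory:")
for (sql,) in src.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"):
    dst.execute(sql)  # tables only; the index is left behind

dst.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, i * 2) for i in range(1000)])
print(dst.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 1000
```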
Richard Hipp-3 wrote:
>
> On Wed, N
On Wed, Nov 9, 2011 at 6:17 PM, yqpl wrote:
>
> No matter how big my database is, inserts start at the same speed and
> keep
> getting slower.
> That's why it is better to split 1000 files into 10 x 100 files, because all
> of those 100-file packages will be imported fast. But I also get this file
>
And the files are small - could be 10-50 lines of text. The original lines are
stored, and they are also parsed into different tables.
Simon Slavin-3 wrote:
>
>
> On 9 Nov 2011, at 10:21pm, yqpl wrote:
>
>> I'm starting a transaction,
>> then make a lot of inserts and commit.
>> I've got about 30 i
What's your page size?
No matter how big my database is, inserts start at the same speed and keep
getting slower.
That's why it is better to split 1000 files into 10 x 100 files, because all
of those 100-file packages will be imported fast. But I also get this file lock
error /
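A sketch of the splitting workaround described above (Python for brevity; the real code is C#, and the `chunks` helper is made up): import in packages of 100 files, one transaction per package.

```python
import sqlite3

# Hypothetical helper: yield fixed-size slices of a list.
def chunks(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (name TEXT)")
names = [f"file-{i:04d}" for i in range(1000)]
for package in chunks(names, 100):  # 10 packages of 100 files
    with con:                       # BEGIN ... COMMIT per package
        con.executemany("INSERT INTO files VALUES (?)",
                        [(n,) for n in package])
print(con.execute("SELECT COUNT(*) FROM files").fetchone()[0])  # → 1000
```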
Simon Slavin-3 wrote:
>
>
> On 9 Nov 201
Yep, I'm sure.
30 per second is fine.
Parsing time is included, and in reality there are 30 files inserted into the
database, but for each file there are more inserts. I didn't want to make it
more complicated by explaining this.
Richard Hipp-3 wrote:
>
> On Wed, Nov 9, 2011 at 5:21 PM, yqpl wrote:
>
>>
On 9 Nov 2011, at 10:21pm, yqpl wrote:
> I'm starting a transaction,
> then make a lot of inserts and commit.
> I've got about 30 inserts per second, but after a while it drops to
> about 1-2 inserts per second. It takes about ~500 inserts to drop to this
> 1-2 inserts per sec.
Here is a guess,
On Wed, Nov 9, 2011 at 5:21 PM, yqpl wrote:
>
> Hi,
>
> My task is to parse a lot of files and then insert them into an SQLite database.
> It could be thousands of files. I use C#.
>
> I'm starting a transaction,
> then make a lot of inserts and commit.
> I've got about 30 inserts per second
I typicall
Hi,
my task is to parse a lot of files and then insert them into an SQLite database.
It could be thousands of files. I use C#.
I'm starting a transaction,
then make a lot of inserts and commit.
I've got about 30 inserts per second, but after a while it drops to
about 1-2 inserts per second. It tak
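A minimal sketch of the pattern described (the real code is C#; the table and `import_file` helper here are made up for illustration): parse a file, then run all of its inserts inside one explicit transaction.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lines (file TEXT, line TEXT)")

# Hypothetical helper: one transaction per parsed file.
def import_file(name, text):
    rows = [(name, line) for line in text.splitlines()]
    with con:  # BEGIN ... COMMIT around all inserts for this file
        con.executemany("INSERT INTO lines VALUES (?, ?)", rows)

import_file("a.txt", "one\ntwo\nthree")
print(con.execute("SELECT COUNT(*) FROM lines").fetchone()[0])  # → 3
```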