Thank you Igor! Now I understand your OR!
- Original Message
From: Igor Tandetnik
To: sqlite-users@sqlite.org
Sent: Sun, February 14, 2010 2:54:35 PM
Subject: Re: [sqlite] 1 reader 1 writer but sqlite3_step fails with “database
is locked” error in both processes
On 14.02.2010 18:53, Max Vlasov wrote:
>> This is appx. 500MB cache, why not trying with 2,000,000 cache size ? :-)
>>
>>
>>
> Hmm, managed to increase it to only 1,000,000 (x1024) size; larger values
> eventually lead to "Out of memory", and this value (1 GB) allows up to
> 6,000,000 fast records for a 100-byte field per record index. Still good,
Hello,
Please, can you give me advice on how to make the SQL Browser work with Vista
64-bit?
Regards,
Alexandra Kafka
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Then why does sqlite3_step() fail for both the reader and the writer?
- Original Message
From: Igor Tandetnik
To: sqlite-users@sqlite.org
Sent: Sun, February 14, 2010 2:54:35 PM
Subject: Re: [sqlite] 1 reader 1 writer but sqlite3_step fails with “database
is locked”
a1rex wrote:
> I thought that I can have 1 writer and many readers
You thought incorrectly. You can have one writer OR many readers.
Igor Tandetnik
Process A updates a database table in a tight loop. Nothing special:

loop
    sql = "UPDATE table SET blob=:blob WHERE id=?";
    rc = sqlite3_prepare_v2(…)
    rc = sqlite3_bind_int(…)
    sqlite3_bind_blob(…)
    rc = sqlite3_step(…)
    rc = sqlite3_reset(…)
    rc = sqlite3_finalize(…);

Process B just reads.
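For what it's worth, the usual cure for this pattern is a busy timeout, so that a blocked step retries for a while instead of failing immediately with SQLITE_BUSY. A sketch in Python's stdlib sqlite3 (names illustrative; the timeout= argument plays the role of sqlite3_busy_timeout() in the C API): a background reader holds its lock for half a second, and the writer simply waits it out.

```python
import os
import sqlite3
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, blob BLOB)")
conn.execute("INSERT INTO t (blob) VALUES (x'00')")
conn.commit()
conn.close()

def reader_task():
    # Hold a read transaction (SHARED lock) for half a second.
    r = sqlite3.connect(path, isolation_level=None)
    r.execute("BEGIN")
    r.execute("SELECT id FROM t").fetchall()
    time.sleep(0.5)
    r.execute("COMMIT")
    r.close()

t = threading.Thread(target=reader_task)
t.start()
time.sleep(0.1)  # let the reader take its lock first

# timeout=5 installs a busy handler: instead of returning SQLITE_BUSY
# immediately, the commit retries until the reader releases its lock.
writer = sqlite3.connect(path, timeout=5)
writer.execute("UPDATE t SET blob = x'ff' WHERE id = 1")
writer.commit()
t.join()
result = "committed"
print(result)
```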
> Marcus,
>
> although increasing cache size is a good method, it may sometimes give
> unpredictable results (in terms of performance).
Hm... why?
> I looked at the vdbe code (EXPLAIN CREATE INDEX ... ) of the index
> creation
> and it seems like there is no special sorting algorithm (CMIIW
Marcus,
although increasing cache size is a good method, it may sometimes give
unpredictable results (in terms of performance).
I looked at the vdbe code (EXPLAIN CREATE INDEX ... ) of the index creation
and it seems like there is no special sorting algorithm (CMIIW please).
Excluding all "make
Just for my curiosity:
Have you tried to increase the cache as already suggested ?
I ran into a similar problem while playing with an artificial
test database with approx. 10 million records and creating an index.
Without drastically increasing the cache size, SQLite appears
not to be able to create an
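The cache suggestion can be tried like this. A small sketch in Python's stdlib sqlite3 (table, row count, and cache value are illustrative, not a tuning recommendation; note that PRAGMA cache_size counts pages, not bytes, and negative values mean KiB in newer SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Raise the page cache before building the index; 200000 pages is an
# illustrative value only.
conn.execute("PRAGMA cache_size = 200000")

conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, txt TEXT)")
conn.executemany("INSERT INTO t (txt) VALUES (?)",
                 (("x" * 100,) for _ in range(10_000)))
conn.execute("CREATE INDEX idx_t_txt ON t (txt)")

# Read the setting back to confirm it took effect.
effective = conn.execute("PRAGMA cache_size").fetchone()[0]
print(effective)
```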
Jerome,
It's an interesting challenge, thanks for the post.
I tried to research more and did some tests.
My test database contains a table with 10,000,000 records, each holding a
text field 100 chars in length:
CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Text] TEXT
)
I suppose your