Re: [sqlite] associating a ticket in the UI to a check-in

2012-10-18 Thread E. Timothy Uy
Never mind, I just needed brackets.

On Thu, Oct 18, 2012 at 8:07 PM, E. Timothy Uy  wrote:

> Is there a way to associate a ticket in the UI to a check-in or am I too
> late?
>
> Thanks,
> Tim
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] associating a ticket in the UI to a check-in

2012-10-18 Thread E. Timothy Uy
Is there a way to associate a ticket in the UI to a check-in or am I too
late?

Thanks,
Tim
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Remote connection to SQLite db

2012-10-18 Thread Olaf Schmidt

On 18.10.2012 15:41, Abhinav wrote:


Does SQL lite have an option of processes connecting to it from remote hosts 
with a port number?


There's nothing directly built into the SQLite library itself.

In case you need that for Windows (it also depends a bit on your
development environment), my SQLite COM wrapper supports an
"AppServer mode" which can transfer resultsets (serialized
Recordsets) from the server side to the client side, with support
for compression as well as encryption. (There are dedicated RPC
listener classes to implement the server side in a few lines of
code, and dedicated client RPC classes which communicate with the
server end over TCP/IP.)
The write direction (inserts, updates, deletes) is supported too,
also based on serialized Recordsets, which can serialize only
"the diff created on the client side in a user session" back to
the server side; the "accumulated changes" of such a
backtransported diff recordset can then be applied in a single
transaction on the server end.

Though, as said, the wrapper is a COM library, usable only on
Windows (although it is tested and known to work on Linux/Wine
as well), so its usage is restricted to languages which can load
and access COM libs (preferably VB5/VB6 or "Office VBA", but also
Delphi, C++, PowerBasic, etc., if a bit less comfortably there).

There are other free networking implementations as well which are
known to be "alive and kickin'", e.g.:

http://www.sqlitening.com/support/index.php
(also for Windows, implemented in PowerBasic)

Also check out Marco Bambini's efforts here:
http://www.sqlabs.com/blog/
(not sure if these newer products strictly require MacOS hosts -
 the former Real Server was SQLite-based and ran on more systems
 than just MacOS AFAIK; maybe Marco can help out himself here)

There's also a page on the SQLite wiki, though I'm not sure which
of the listed projects are still actively supported or maintained:
http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork

What finally remains are of course webserver (HTTP) based
solutions, where you can transfer back and forth with e.g.
JSON-based RPCs or XMLHttp requests (with HTTP gzip compression
when needed, or SSL for encryption).


Olaf


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lemon: Non-terminal destructors and cleanup

2012-10-18 Thread Ben

On 18 Oct 2012, at 20:07, Richard Hipp  wrote:

> On Thu, Oct 18, 2012 at 3:03 PM, Ben  wrote:
> 
>> Hi list,
>> 
>> I'm having a little trouble getting my head around memory management
>> within a Lemon-generated parser. Specifically the part of the docs stating
>> when a destructor will or will not be called.
>> 
>> For example, this is a portion of a grammar based on the SQLite parse.y
>> file:
>> 
>> 
>> columnName ::= nm(N). {
>>BSSQLiteColumn *col = [[BSSQLiteColumn alloc] init];
>>col.name = N.textValue;
>>[[parsedTable columns] addObject:col];
>> }
>> 
>> nm(A) ::= id(X). { A = X; }
>> nm(A) ::= STRING(X). { A = X; }
>> id(A) ::= ID(X). { A = X; }
>> 
>> Notes:
>> - The token type here is a struct containing an Objective-C string which
>> needs freeing when done with.
>> - Only a %token_destructor is defined, not any others
>> 
>> 
>> I know that the last three assignments are leaking memory, but I don't
>> know when I should be explicitly freeing my allocated memory within a token
>> and when I should be relying on the destructor defined by
>> %token_destructor{}. Or for that matter whether I should be declaring a
>> more specific symbol destructor.
>> 
>> Can anyone shed some light on how this should be done?
>> 
> 
> If the nonterminal payload is passed into an action (as in your example
> where N is processed because of nm(N)) then Lemon assumes that your code
> will free the content, if needed.
> 
> If the rule had been:  columnName ::= nm {...}   (without the (N) argument
> to nm) then the destructor would have been called.

Got it. I've added two release calls for X at the end of the C code blocks for 
these two:
nm(A) ::= STRING(X). { A = X; }
id(A) ::= ID(X). { A = X; }

and now it's working leak-free.

Thank you.

Ben


> The destructor is also called if nm is popped from the stack for any reason
> other than the columnName ::= nm rule, such as when the stack is popped
> during error recovery.
> 
> 
>> 
>> Thanks,
>> 
>> Ben
>> 
>> 
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lemon: Non-terminal destructors and cleanup

2012-10-18 Thread Richard Hipp
On Thu, Oct 18, 2012 at 3:03 PM, Ben  wrote:

> Hi list,
>
> I'm having a little trouble getting my head around memory management
> within a Lemon-generated parser. Specifically the part of the docs stating
> when a destructor will or will not be called.
>
> For example, this is a portion of a grammar based on the SQLite parse.y
> file:
>
>
> columnName ::= nm(N). {
> BSSQLiteColumn *col = [[BSSQLiteColumn alloc] init];
> col.name = N.textValue;
> [[parsedTable columns] addObject:col];
> }
>
> nm(A) ::= id(X). { A = X; }
> nm(A) ::= STRING(X). { A = X; }
> id(A) ::= ID(X). { A = X; }
>
> Notes:
> - The token type here is a struct containing an Objective-C string which
> needs freeing when done with.
> - Only a %token_destructor is defined, not any others
>
>
> I know that the last three assignments are leaking memory, but I don't
> know when I should be explicitly freeing my allocated memory within a token
> and when I should be relying on the destructor defined by
> %token_destructor{}. Or for that matter whether I should be declaring a
> more specific symbol destructor.
>
> Can anyone shed some light on how this should be done?
>

If the nonterminal payload is passed into an action (as in your example
where N is processed because of nm(N)) then Lemon assumes that your code
will free the content, if needed.

If the rule had been:  columnName ::= nm {...}   (without the (N) argument
to nm) then the destructor would have been called.

The destructor is also called if nm is popped from the stack for any reason
other than the columnName ::= nm rule, such as when the stack is popped
during error recovery.
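
To make that concrete, here is a minimal illustration of the two cases.
It is only a sketch: it uses a plain malloc()'d C string as the payload
instead of the Objective-C struct from the original post, and
use_column_name() is just a placeholder:

%token_type { char* }
%token_destructor { free($$); }   /* runs for tokens discarded by the parser */

%type nm { char* }
%destructor nm { free($$); }      /* runs when nm is popped WITHOUT being
                                  ** handed to an action, e.g. during error
                                  ** recovery or for:  columnName ::= nm.   */

columnName ::= nm(N). {
  use_column_name(N);             /* N was passed in, so no destructor runs;  */
  free(N);                        /* this action now owns it and must free it */
}

nm(A) ::= STRING(X). { A = X; }   /* ownership moves from token X to A, so the
                                  ** token destructor will not run for X      */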


>
> Thanks,
>
> Ben
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Lemon: Non-terminal destructors and cleanup

2012-10-18 Thread Ben
Hi list,

I'm having a little trouble getting my head around memory management within a 
Lemon-generated parser. Specifically the part of the docs stating when a 
destructor will or will not be called.

For example, this is a portion of a grammar based on the SQLite parse.y file:


columnName ::= nm(N). {
BSSQLiteColumn *col = [[BSSQLiteColumn alloc] init];
col.name = N.textValue;
[[parsedTable columns] addObject:col];
}

nm(A) ::= id(X). { A = X; }
nm(A) ::= STRING(X). { A = X; }
id(A) ::= ID(X). { A = X; }

Notes:
- The token type here is a struct containing an Objective-C string which needs 
freeing when done with.
- Only a %token_destructor is defined, not any others


I know that the last three assignments are leaking memory, but I don't know 
when I should be explicitly freeing my allocated memory within a token and when 
I should be relying on the destructor defined by %token_destructor{}. Or for 
that matter whether I should be declaring a more specific symbol destructor.

Can anyone shed some light on how this should be done?

Thanks,

Ben


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Processing time of a transaction

2012-10-18 Thread Paxdo Presse

Perfect, thank you M. Hipp!

On 18 Oct 2012, at 20:32, Richard Hipp wrote:

> On Thu, Oct 18, 2012 at 1:57 PM, Paxdo Presse  wrote:
> 
>> 
>> Hello,
>> 
>> in FAQ http://www.sqlite.org/faq.html :
>> 
>> "Actually, SQLite will easily do 50,000 or more INSERT statements per
>> second on an average desktop computer. But it will only do a few dozen
>> transactions per second. Transaction speed is limited by the rotational
>> speed of your disk drive. A transaction normally requires two complete
>> rotations of the disk platter, which on a 7200RPM disk drive limits you to
>> about 60 transactions per second."
>> 
>> When a transaction only *reads* finally, as below, will it need to do two
>> complete rotations of the disk platter, as described above?
>> 
> 
> No.  Only write transactions.
> 
> 
>> 
>> 
>> - begin transaction
>> - select
>> - select...
>> - end transaction
>> 
>> 
>> thank you,
>> olivier
>> 
>> 
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Processing time of a transaction

2012-10-18 Thread Richard Hipp
On Thu, Oct 18, 2012 at 1:57 PM, Paxdo Presse  wrote:

>
> Hello,
>
> in FAQ http://www.sqlite.org/faq.html :
>
> "Actually, SQLite will easily do 50,000 or more INSERT statements per
> second on an average desktop computer. But it will only do a few dozen
> transactions per second. Transaction speed is limited by the rotational
> speed of your disk drive. A transaction normally requires two complete
> rotations of the disk platter, which on a 7200RPM disk drive limits you to
> about 60 transactions per second."
>
> When a transaction only *reads* finally, as below, will it need to do two
> complete rotations of the disk platter, as described above?
>

No.  Only write transactions.
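
(For anyone reading this in the archives: the practical upshot of that FAQ
entry is to batch writes into one transaction. A rough C sketch, with an
invented table "t" and no error handling, just to show the shape:)

#include <sqlite3.h>

static void demo(sqlite3 *db){
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x INTEGER)", 0, 0, 0);

  /* One write transaction: the commit/fsync cost is paid once, at COMMIT. */
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  for(int i=0; i<50000; i++){
    char *zSql = sqlite3_mprintf("INSERT INTO t(x) VALUES(%d)", i);
    sqlite3_exec(db, zSql, 0, 0, 0);
    sqlite3_free(zSql);
  }
  sqlite3_exec(db, "COMMIT", 0, 0, 0);

  /* A read-only transaction modifies no pages, so there is nothing to sync. */
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  sqlite3_exec(db, "SELECT count(*) FROM t", 0, 0, 0);
  sqlite3_exec(db, "SELECT max(x) FROM t", 0, 0, 0);
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
}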


>
>
> - begin transaction
> - select
> - select...
> - end transaction
>
>
> thank you,
> olivier
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Processing time of a transaction

2012-10-18 Thread Paxdo Presse

Hello,

in FAQ http://www.sqlite.org/faq.html :

"Actually, SQLite will easily do 50,000 or more INSERT statements per second on 
an average desktop computer. But it will only do a few dozen transactions per 
second. Transaction speed is limited by the rotational speed of your disk 
drive. A transaction normally requires two complete rotations of the disk 
platter, which on a 7200RPM disk drive limits you to about 60 transactions per 
second."

When a transaction only *reads* finally, as below, will it need to do two 
complete rotations of the disk platter, as described above?


- begin transaction
- select
- select...
- end transaction


thank you,
olivier


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Imanuel
No, I can't - 26s vs 15s (old vs new).

But when I run the test in my Delphi test application, 3.7.14.1 takes
285 seconds (tested again right now).
All the time, CPU usage is 25% (on a quad core).


This is my test code:

sqlite3_open('test.db', handle);
t0 := now();
sqlite3_exec(handle,
  PAnsiChar('CREATE INDEX idx_namen_name ON Namen(name)'),
  nil, nil, nil);
showmessage(floattostr((now() - t0) * 86400));
sqlite3_close(handle);


The DLL is referenced in this unit:
https://raw.github.com/plashenkov/SQLite3-Delphi-FPC/09d8674805c73d1ab0fa05832750cc0f727102f0/Source/SQLite3.pas

I simply replaced the DLL, without changing the linking source code,
to test with 3.7.14.1 - I hope that's ok.
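
In case it helps to rule out wrapper overhead entirely, a bare C harness
along these lines (the file name and CREATE INDEX statement are taken from
the earlier posts, everything else is just a sketch) times the same command
directly against whichever sqlite3 build it is compiled with, optionally
with Dan's temp_store suggestion:

#include <stdio.h>
#include <time.h>
#include <sqlite3.h>

int main(void){
  sqlite3 *db = 0;
  char *zErr = 0;
  if( sqlite3_open("test.db", &db)!=SQLITE_OK ){
    fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
    return 1;
  }
  /* Optional: Dan's suggestion, to take temp-file I/O out of the picture. */
  sqlite3_exec(db, "PRAGMA temp_store = memory", 0, 0, 0);

  time_t t0 = time(0);
  int rc = sqlite3_exec(db, "CREATE INDEX idx_namen_name ON Namen(name)",
                        0, 0, &zErr);
  double seconds = difftime(time(0), t0);

  if( rc!=SQLITE_OK ){
    fprintf(stderr, "CREATE INDEX failed: %s\n", zErr);
    sqlite3_free(zErr);
  }else{
    printf("CREATE INDEX took %.0f seconds\n", seconds);
  }
  sqlite3_close(db);
  return rc==SQLITE_OK ? 0 : 1;
}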

Imanuel




On 18.10.2012 16:49, Dan Kennedy wrote:
> On 10/18/2012 03:32 PM, Imanuel wrote:
>> Ok, here it is (45mb):
>> http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html
> 
> On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
> version. Are you able to reproduce the performance regression with
> these two?
> 
>   http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
>   http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip
> 
> Dan.
> 
> 
> 
> 
> 
>>
>> Imanuel
>>
>>
>> On 18.10.2012 00:37, Imanuel wrote:
>>> No, the performance stays the same.
>>> I have also tried using a big cache_size, but that didn't change
>>> anything, too.
>>>
>>> Yes, I can share the database - it is currently uploading, I will mail
>>> the link tomorrow.
>>>
>>> Imanuel
>>>
>>>
>>> On 17.10.2012 22:08, Dan Kennedy wrote:
 On 10/18/2012 01:32 AM, Imanuel wrote:
> Hello
>
> I tested this on an SSD with a database with one single table with
> 5,553,534 entries:
> CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
> TEXT,
> name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
> INTEGER
> DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)
>
> When running this command:
> CREATE INDEX idx_namen_name ON Namen(name)
>
> Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
> seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
> Indexing the column "geonameid" makes 24 vs. 312 seconds.
> Neither of the both columns are presorted.
 If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
 performance the same as in 3.6.22?

 Are you able to share the database?

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXT :Re: CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Black, Michael (IS)
I should mention I'm running Windows XP-64.  32-bit compile though.

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Black, Michael (IS) [michael.bla...@ngc.com]
Sent: Thursday, October 18, 2012 10:16 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] EXT :Re: CREATE INDEX is 13 times slower with 3.7.14.1 
than with 3.6.22

And using Dan's downloads
3.7.14.1 took 30.4 seconds
3.6.22 took 40.94 (there was a lot of idle time towards the end here...disk I/O 
I assume)
Re-did my compilation again...
3.7.14.1 took 26.8
Recompiled under Visual Studio Express 2010 "cl /O2 sqlite3.c shell.c"
3.7.14.1 took 26.2 seconds

I'm not on an SSD but that shouldn't matter much for this.


Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Dan Kennedy [danielk1...@gmail.com]
Sent: Thursday, October 18, 2012 9:50 AM
To: sqlite-users@sqlite.org
Subject: EXT :Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than 
with 3.6.22

On 10/18/2012 09:49 PM, Dan Kennedy wrote:
> On 10/18/2012 03:32 PM, Imanuel wrote:
>> Ok, here it is (45mb):
>> http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html
>
> On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
> version. Are you able to reproduce the performance regression with
> these two?
>
> http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
> http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip

Second link is incorrect. They should be:

   http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
   http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip

Dan.


>
> Dan.
>
>
>
>
>
>>
>> Imanuel
>>
>>
>>> On 18.10.2012 00:37, Imanuel wrote:
>>> No, the performance stays the same.
>>> I have also tried using a big cache_size, but that didn't change
>>> anything, too.
>>>
>>> Yes, I can share the database - it is currently uploading, I will mail
>>> the link tomorrow.
>>>
>>> Imanuel
>>>
>>>
>>> On 17.10.2012 22:08, Dan Kennedy wrote:
 On 10/18/2012 01:32 AM, Imanuel wrote:
> Hello
>
> I tested this on an SSD with a database with one single table with
> 5,553,534 entries:
> CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
> TEXT,
> name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
> INTEGER
> DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)
>
> When running this command:
> CREATE INDEX idx_namen_name ON Namen(name)
>
> Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
> seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
> Indexing the column "geonameid" makes 24 vs. 312 seconds.
> Neither of the both columns are presorted.
 If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
 performance the same as in 3.6.22?

 Are you able to share the database?

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXT :Re: CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Black, Michael (IS)
And using Dan's downloads
3.7.14.1 took 30.4 seconds
3.6.22 took 40.94 (there was a lot of idle time towards the end here...disk I/O 
I assume)
Re-did my compilation again...
3.7.14.1 took 26.8 
Recompiled under Visual Studio Express 2010 "cl /O2 sqlite3.c shell.c"
3.7.14.1 took 26.2 seconds

I'm not on an SSD but that shouldn't matter much for this.


Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Dan Kennedy [danielk1...@gmail.com]
Sent: Thursday, October 18, 2012 9:50 AM
To: sqlite-users@sqlite.org
Subject: EXT :Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than 
with 3.6.22

On 10/18/2012 09:49 PM, Dan Kennedy wrote:
> On 10/18/2012 03:32 PM, Imanuel wrote:
>> Ok, here it is (45mb):
>> http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html
>
> On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
> version. Are you able to reproduce the performance regression with
> these two?
>
> http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
> http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip

Second link is incorrect. They should be:

   http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
   http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip

Dan.


>
> Dan.
>
>
>
>
>
>>
>> Imanuel
>>
>>
>>> On 18.10.2012 00:37, Imanuel wrote:
>>> No, the performance stays the same.
>>> I have also tried using a big cache_size, but that didn't change
>>> anything, too.
>>>
>>> Yes, I can share the database - it is currently uploading, I will mail
>>> the link tomorrow.
>>>
>>> Imanuel
>>>
>>>
>>> On 17.10.2012 22:08, Dan Kennedy wrote:
 On 10/18/2012 01:32 AM, Imanuel wrote:
> Hello
>
> I tested this on an SSD with a database with one single table with
> 5,553,534 entries:
> CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
> TEXT,
> name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
> INTEGER
> DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)
>
> When running this command:
> CREATE INDEX idx_namen_name ON Namen(name)
>
> Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
> seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
> Indexing the column "geonameid" makes 24 vs. 312 seconds.
> Neither of the both columns are presorted.
 If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
 performance the same as in 3.6.22?

 Are you able to share the database?

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Black, Michael (IS)
I used 3.7.14.1
Compiled thusly with Visual Studio Express 2008
cl /O2 sqlite3.c shell.c

CREATE INDEX idx_namen_name ON Namen(name);

Took 26.6 seconds and one CPU was pegged the whole time.

I'm on a 3Ghz 8-core machine.

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Dan Kennedy [danielk1...@gmail.com]
Sent: Thursday, October 18, 2012 9:50 AM
To: sqlite-users@sqlite.org
Subject: EXT :Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than 
with 3.6.22

On 10/18/2012 09:49 PM, Dan Kennedy wrote:
> On 10/18/2012 03:32 PM, Imanuel wrote:
>> Ok, here it is (45mb):
>> http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html
>
> On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
> version. Are you able to reproduce the performance regression with
> these two?
>
> http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
> http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip

Second link is incorrect. They should be:

   http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
   http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip

Dan.


>
> Dan.
>
>
>
>
>
>>
>> Imanuel
>>
>>
>>> On 18.10.2012 00:37, Imanuel wrote:
>>> No, the performance stays the same.
>>> I have also tried using a big cache_size, but that didn't change
>>> anything, too.
>>>
>>> Yes, I can share the database - it is currently uploading, I will mail
>>> the link tomorrow.
>>>
>>> Imanuel
>>>
>>>
>>> On 17.10.2012 22:08, Dan Kennedy wrote:
 On 10/18/2012 01:32 AM, Imanuel wrote:
> Hello
>
> I tested this on an SSD with a database with one single table with
> 5,553,534 entries:
> CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
> TEXT,
> name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
> INTEGER
> DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)
>
> When running this command:
> CREATE INDEX idx_namen_name ON Namen(name)
>
> Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
> seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
> Indexing the column "geonameid" makes 24 vs. 312 seconds.
> Neither of the both columns are presorted.
 If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
 performance the same as in 3.6.22?

 Are you able to share the database?

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Dan Kennedy

On 10/18/2012 09:49 PM, Dan Kennedy wrote:

On 10/18/2012 03:32 PM, Imanuel wrote:

Ok, here it is (45mb):
http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html


On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
version. Are you able to reproduce the performance regression with
these two?

http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip


Second link is incorrect. They should be:

  http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
  http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip

Dan.




Dan.







Imanuel


On 18.10.2012 00:37, Imanuel wrote:

No, the performance stays the same.
I have also tried using a big cache_size, but that didn't change
anything, too.

Yes, I can share the database - it is currently uploading, I will mail
the link tomorrow.

Imanuel


On 17.10.2012 22:08, Dan Kennedy wrote:

On 10/18/2012 01:32 AM, Imanuel wrote:

Hello

I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)

When running this command:
CREATE INDEX idx_namen_name ON Namen(name)

Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
Indexing the column "geonameid" makes 24 vs. 312 seconds.
Neither of the both columns are presorted.

If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
performance the same as in 3.6.22?

Are you able to share the database?

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Dan Kennedy

On 10/18/2012 03:32 PM, Imanuel wrote:

Ok, here it is (45mb):
http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html


On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
version. Are you able to reproduce the performance regression with
these two?

  http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
  http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip

Dan.







Imanuel


On 18.10.2012 00:37, Imanuel wrote:

No, the performance stays the same.
I have also tried using a big cache_size, but that didn't change
anything, too.

Yes, I can share the database - it is currently uploading, I will mail
the link tomorrow.

Imanuel


On 17.10.2012 22:08, Dan Kennedy wrote:

On 10/18/2012 01:32 AM, Imanuel wrote:

Hello

I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)

When running this command:
CREATE INDEX idx_namen_name ON Namen(name)

Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
Indexing the column "geonameid" makes 24 vs. 312 seconds.
Neither of the both columns are presorted.

If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
performance the same as in 3.6.22?

Are you able to share the database?

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Mike King
I'd missed that - The devil is always in the detail (cue Homer Simpson "DOH!")

Thanks again and sorry to be a nuisance


On 18 October 2012 14:35, Richard Hipp  wrote:
> On Thu, Oct 18, 2012 at 9:03 AM, Mike King  wrote:
>
>> At the moment it's difficult to tell but I envisage 3-4gb being the
>> maximum.
>>
>
> The maximum BLOB size in SQLite is 1GB.  So you would do well to store your
> images in separate files.
>
>
>>
>> Cheers,
>>
>> Mike
>>
>> On 18 October 2012 13:17, Joe Mistachkin  wrote:
>> >
>> > Mike King wrote:
>> >>
>> >> Thanks - sorry to be a pain but is this on the roadmap for the future?
>> >> (For the thing I'm playing with this is the difference between storing
>> >> images in the database or storing them in the filesystem).
>> >>
>> >
>> > Out of curiosity, what size images are you dealing with?  Unless they
>> > are really large, any efficiency gains from using incremental BLOB I/O
>> > would probably be minimal.
>> >
>> > Anyhow, I've created a ticket to track this feature request, here:
>> >
>> > http://system.data.sqlite.org/index.html/info/32d482b38b
>> >
>> > Now, it's on the roadmap; however, I'm not sure about the timeline of
>> > getting this implemented.
>> >
>> > --
>> > Joe Mistachkin
>> >
>> > ___
>> > sqlite-users mailing list
>> > sqlite-users@sqlite.org
>> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-18 Thread Pavel Ivanov
On Thu, Oct 18, 2012 at 6:32 AM, Daniel Polski  wrote:
>> The SELECT statement, including the _prepare() stage and all the _step()s
>> until you've reached the last row, and then the _finalize(), is all one
>> process.  They're all part of the statement and you can assume that the
>> database is still locked until you do a _finalize().
>>
>> If you are using the results of a SELECT to figure out a bunch of other
>> instructions like INSERT or DELETE you can do it two ways:
>>
>> A) Quicky save the results of stepping into an array.  Once the SELECT is
>> finalized, look through the array and figure out what you want to do about
>> it.
>>
>> B) Generate the INSERT/DELETE commands while stepping but keep them
>> somewhere, either as a data array or by accumulating the text of the SQL
>> commands in a string, rather than executing them immediately.  Once you're
>> finished stepping, execute the commands.  (You may choose to use _exec to
>> execute them all in one go.)
>>
>>
>
> Thank you for the clarification, but I still misunderstand the documentation
> some way.
>
> In the documentation about WAL mode it says:
> "Writers merely append new content to the end of the WAL file. Because
> writers do nothing that would interfere with the actions of readers, writers
> and readers can run at the same time. However, since there is only one WAL
> file, there can only be one writer at a time."
>
> Maybe the magic words I don't fully understand are what's written later:
> "A checkpoint operation takes content from the WAL file and transfers it
> back into the original database file. A checkpoint can run concurrently with
> readers, however the checkpoint must stop when it reaches a page in the WAL
> that is past the read mark of any current reader. The checkpoint has to stop
> at that point because otherwise it might overwrite part of the database file
> that the reader is actively using. The checkpoint remembers (in the
> wal-index) how far it got and will resume transferring content from the WAL
> to the database from where it left off on the next invocation."
>
> I logically do understand that there can't be 2 writers updating the
> database at the same time, but I don't understand why the second insert
> statement in the example below won't work without finalizing the SELECT
> query?
>
>
> sqlite3* conn_1;
> sqlite3* conn_2;
> //(..opening db files and so on)
>
> sqlite3_stmt* pVM_1;
> sqlite3_stmt* pVM_2;
>
> sqlite3_prepare(conn_1, "SELECT * FROM test_table_2;", -1, &pVM_1, 0);
> //sets WAL end mark for pVM_1?
> sqlite3_prepare(conn_2, "SELECT * FROM test_table_2;", -1, &pVM_2, 0);
> //sets WAL end mark for pVM_2?
>
> nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set
> new read mark?
> nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set
> new read mark?
>
> //statements below will add content to the end of the WAL file?
> nRet = sqlite3_exec(conn_1, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, &zErrMsg);
> nRet = sqlite3_exec(conn_1, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, &zErrMsg);
> nRet = sqlite3_exec(conn_1, "COMMIT;", 0, 0, &zErrMsg);
>
> nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set
> new read mark?
> nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set
> new read mark?
>
> //sqlite3_finalize(pVM_1);
> //sqlite3_finalize(pVM_2);
>
> //The execution below will fail with SQLITE_BUSY if the SELECT statement
> pVM_2 isn't finalized
> //(Why won't it append new data in the end of the WAL file just like the
> successful insert above?)
> nRet = sqlite3_exec(conn_2, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, &zErrMsg);
> nRet = sqlite3_exec(conn_2, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, &zErrMsg);
> nRet = sqlite3_exec(conn_2, "COMMIT;", 0, 0, &zErrMsg);

When INSERT is executed it doesn't add new row to the WAL file, it
have to add new row to some database page possibly changing some other
pages on the way too. These changed pages are written into WAL file.
Because of this fact when INSERT statement is executed it must be
executed on the latest version of the database pages. Otherwise it can
create an alternative database version that can't be merged with
version created by other INSERT statements. So when you execute INSERT
statement, or start IMMEDIATE transaction, or convert existing
read-only transaction into writing transaction SQLite have to make
sure that this transaction sees the latest version of the database.
But when you start read-only transaction SQLite ensures that all the
way through it sees a consistent database state, and that would be the
state of the database at the beginning of the transaction. So when you
start executing SELECT statement you lock conn_2 into the database
state that was at that moment. Then you do database changes on conn_1,
so database state is changed. Then you want to execute changes in
conn_2, but it can't do that because it's locked into database state
which is not the latest one. And so it returns SQLITE_BUSY.
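
In other words (a sketch of one way out, continuing the snippet quoted
above, not the only possible fix): end the read transaction on conn_2
first, so that its next statement starts on the latest snapshot, and then
the write can proceed:

sqlite3_finalize(pVM_2);  /* or sqlite3_reset(pVM_2) to keep the statement; */
                          /* either way conn_2's read snapshot is released  */

nRet = sqlite3_exec(conn_2, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, &zErrMsg);
nRet = sqlite3_exec(conn_2, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, &zErrMsg);
nRet = sqlite3_exec(conn_2, "COMMIT;", 0, 0, &zErrMsg);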

Re: [sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-18 Thread Ivan P
Hi Richard,

Shouldn't the DELETE statement be able to flush its stored rowids to disk
when it realizes there isn't enough memory to hold them? Otherwise it
doesn't seem scalable enough.
To avoid this we decided to change the database structure.

I would consider this thread not as solving my current problem, but as
addressing SQLite's inability to work with large data sets.
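
For what it's worth, the second workaround Richard suggests below can be
wrapped in a small loop so nobody has to repeat the statement by hand.
Only a sketch (the batch size is arbitrary; names are from the original
report):

#include <sqlite3.h>

/* Delete in batches so SQLite never has to remember 150M rowids at once. */
static int delete_in_batches(sqlite3 *db){
  int rc;
  do{
    rc = sqlite3_exec(db,
        "DELETE FROM differential_parts_temp WHERE rowid IN ("
        " SELECT rowid FROM differential_parts_temp"
        " WHERE plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'"
        " LIMIT 100000)",
        0, 0, 0);
  }while( rc==SQLITE_OK && sqlite3_changes(db)>0 );
  return rc;
}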


On Thu, Oct 18, 2012 at 12:12 AM, Richard Hipp  wrote:

> On Wed, Oct 17, 2012 at 11:58 AM, Ivan P  wrote:
>
> > Hello!
> >
> > I've got Out-Of-Memory error when delete records from a table that has
> > about 150,000,000 records.
> >
> > The table is created as:
> > CREATE TABLE differential_parts_temp (plan_id TEXT, [version_id] INT NOT
> > NULL, [page_index] INT NOT NULL, [md5low] INT64 NOT NULL, [md5high] INT64
> > NOT NULL);
> > CREATE INDEX differentialpartstemp_versionid ON differential_parts_temp
> > (version_id, plan_id);
> >
> > It has approx 150,000,000 records.
> > The VERY MOST (if not all) of those records have
> > plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'
> >
> > The following request leads to "Out of Memory" error (the application
> eats
> > all the free memory up to 3.5 Gb , and then fails)
> > DELETE FROM differential_parts_temp WHERE
> > plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'
> >
>
> In order to do a DELETE, SQLite first identifies every row that needs
> deleting.  It remembers the rowid of each such row in memory.  Normally
> this works fine, but it can give problems when you are trying to delete
> 150M rows, apparently.
>
> One possible work-around:
>
> CREATE TABLE new_dpt AS SELECT * FROM differential_parts_temp WHERE
> plan_id<>'bf43c9ae-d681-4f2a-be19-0e0426db2b43';
> DROP TABLE differential_parts_temp;
> ALTER TABLE new_dpt RENAME TO differential_parts_temp;
>
> Another possible work-around:
>
> DELETE FROM differential_parts_temp WHERE rowid IN (SELECT rowid FROM
> differential_parts_temp WHERE
> plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43' LIMIT 100);
> -- repeat the previous statement 150 times, or until sqlite3_changes()
> returns zero.
>
>
>
> >
> > I tried different SQLite clients including the following precompiled
> > binary:
> > http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip
> >
> > Why the DELETE statement can eat so much memory?
> >
> > The total database size is 20 GBytes.
> > SQLite Expert shows the following DB properties:
> > auto_vacuum=none
> > automatic_index=on
> > cache_size=2000
> > case_sensitive_like=off
> > collation_list=[NOCASE], [RTRIM], [BINARY]
> > count_changes=off
> > default_cache_size=2000
> > empty_result_callbacks=off
> > encoding=UTF-8
> > foreign_keys=on
> > freelist_count=0
> > full_column_names=off
> > fullfsync=off
> > journal_mode=delete
> > journal_size_limit=-1
> > legacy_file_format=off
> > locking_mode=normal
> > max_page_count=1073741823
> > page_count=20719252
> > page_size=1024
> > read_uncommitted=off
> > recursive_triggers=off
> > reverse_unordered_selects=off
> > schema_version=27
> > secure_delete=off
> > short_column_names=on
> > synchronous=full
> > temp_store=default
> > temp_store_directory=""
> > user_version=0
> > wal_autocheckpoint=1000
> >
> > Thanks,
> > IP
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-18 Thread Simon Slavin

On 18 Oct 2012, at 2:32pm, Daniel Polski  wrote:

> I logically do understand that there can't be 2 writers updating the database 
> at the same time, but I don't understand why the second insert statement in 
> the example below won't work without finalizing the SELECT query?

The fact that changes are initially made to a journal file (maybe a WAL file) 
and only moved to the normal database file later is a red herring.  Similarly, 
where in the file changes are made (whether they modify existing pages or 
create new ones) is a red herring.  You should consider all the files that make 
up a database to be one consistent whole.

A SELECT needs to keep the database locked against changes.  Because the 
results of the SELECT must reflect the data as it was when the command was 
executed.  So until the SELECT is finalized (or has produced an error), no 
changes can be made to the data because that might make the results of the 
SELECT inconsistent.

So you can run multiple SELECTs at one time, since there's no way for one 
SELECT to invalidate the data returned by another.  But as soon as someone 
tries a command that would change the file, it has to be blocked, because all 
SELECTs under way must be allowed to finish before anything is allowed to make 
changes.

So once again, consider all the steps involved in a statement: _prepare(), 
_step(), and _finalize() to be part of the same process, and once you've 
started it, finish it as quickly as possible.  Earlier versions didn't even 
split this up into three statements: they provided just _exec() which does all 
three.  It might be best to continue thinking of SQLite in this way.
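
As a concrete illustration of option (A) from earlier in this thread (a
sketch only; the tables and the "remember what we saw" logic are
placeholders):

#include <sqlite3.h>

/* Option (A): read everything first, finalize the SELECT, then do the writes. */
static void read_then_write(sqlite3 *db){
  sqlite3_stmt *pSel = 0;
  sqlite3_int64 ids[1000];
  int n = 0;

  sqlite3_prepare_v2(db, "SELECT id FROM test_table_2", -1, &pSel, 0);
  while( sqlite3_step(pSel)==SQLITE_ROW && n<1000 ){
    ids[n++] = sqlite3_column_int64(pSel, 0);    /* just remember the results */
  }
  sqlite3_finalize(pSel);                        /* the read is over here     */

  sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);  /* now it is safe to write   */
  for(int i=0; i<n; i++){
    char *zSql = sqlite3_mprintf("INSERT INTO test_table_1 VALUES(%lld, 1)",
                                 ids[i]);
    sqlite3_exec(db, zSql, 0, 0, 0);
    sqlite3_free(zSql);
  }
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
}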

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Remote connection to SQLite db

2012-10-18 Thread Igor Tandetnik
Abhinav  wrote:
> Does SQL lite have an option of processes connecting to it from remote hosts 
> with a port number?

No. SQLite is an embedded database, not a client/server one.
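
"Embedded" here means your process links the library and opens the database
file directly; there is no server process or port involved. A minimal
sketch:

#include <stdio.h>
#include <sqlite3.h>

int main(void){
  sqlite3 *db = 0;
  /* The "connection" is just a handle to a local file opened in-process. */
  if( sqlite3_open("example.db", &db)!=SQLITE_OK ){
    fprintf(stderr, "cannot open: %s\n", sqlite3_errmsg(db));
    return 1;
  }
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(42)", 0, 0, 0);
  sqlite3_close(db);
  return 0;
}

Any remote access has to be layered on top by the application, e.g. the
RPC- or HTTP-based approaches mentioned elsewhere in this thread.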
-- 
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Remote connection to SQLite db

2012-10-18 Thread Abhinav
Hi,
Does SQL lite have an option of processes connecting to it from remote hosts 
with a port number? 

Thanks,
Abhinav Varshney
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-18 Thread Ivan P
The operating system is Windows7 x64 Ultimate, 4 Gb RAM
I have not specified any PRAGMAs when run sqlite3.exe. The program was
launched normally (in non-elevated mode).
The database file is located on the local disk (C:) with a 95 GB of free
space

Here's how I launch this:
C:\temp\131418>sqlite3.exe problem.db
SQLite version 3.7.14.1 2012-10-04 19:37:12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> delete from differential_parts_temp where
plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43';
Error: out of memory
sqlite>

It runs for about 10 minutes on my otherwise idle i5-2400 3.1GHz, reaching
about 1.7 GB of memory in Task Manager by that point, and then fails.



On Wed, Oct 17, 2012 at 8:44 PM, Simon Slavin  wrote:

>
> On 17 Oct 2012, at 4:58pm, Ivan P  wrote:
>
> > Why the DELETE statement can eat so much memory?
>
> Because it doesn't delete each one row singly, doing all the file updates
> that are needed to delete that row, then move on to the next row.  If it
> did it would take an extremely long time to operate.  Nevertheless it is
> definitely not meant to ever crash in the way you're seeing.  It is meant
> to realise when its available memory is full, flush changes to disk, then
> carry on.
>
> > SQLite Expert shows the following DB properties:
>
> Some of these (e.g foreign_keys) are not properties of the database, but
> of the connection that SQLite Expert has opened to it, so they are not
> relevant to anything you've been doing in the shell tool.  So instead we
> ask the following questions:
>
> Which version of Windows are you running ?  Did you specify any PRAGMAs in
> the shell tool ?  Are you opening the file on a local disk or across a file
> sharing connection ?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Richard Hipp
On Thu, Oct 18, 2012 at 9:03 AM, Mike King  wrote:

> At the moment it's difficult to tell but I envisage 3-4gb being the
> maximum.
>

The maximum BLOB size in SQLite is 1GB.  So you would do well to store your
images in separate files.


>
> Cheers,
>
> Mike
>
> On 18 October 2012 13:17, Joe Mistachkin  wrote:
> >
> > Mike King wrote:
> >>
> >> Thanks - sorry to be a pain but is this on the roadmap for the future?
> >> (For the thing I'm playing with this is the difference between storing
> >> images in the database or storing them in the filesystem).
> >>
> >
> > Out of curiosity, what size images are you dealing with?  Unless they
> > are really large, any efficiency gains from using incremental BLOB I/O
> > would probably be minimal.
> >
> > Anyhow, I've created a ticket to track this feature request, here:
> >
> > http://system.data.sqlite.org/index.html/info/32d482b38b
> >
> > Now, it's on the roadmap; however, I'm not sure about the timeline of
> > getting this implemented.
> >
> > --
> > Joe Mistachkin
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-18 Thread Daniel Polski

The SELECT statement, including the _prepare() stage and all the _step()s until 
you've reached the last row, and then the _finalize(), is all one process.  
They're all part of the statement and you can assume that the database is still 
locked until you do a _finalize().

If you are using the results of a SELECT to figure out a bunch of other 
instructions like INSERT or DELETE you can do it two ways:

A) Quicky save the results of stepping into an array.  Once the SELECT is 
finalized, look through the array and figure out what you want to do about it.

B) Generate the INSERT/DELETE commands while stepping but keep them somewhere, 
either as a data array or by accumulating the text of the SQL commands in a 
string, rather than executing them immediately.  Once you're finished stepping, 
execute the commands.  (You may choose to use _exec to execute them all in one 
go.)




Thank you for the clarification, but I still misunderstand the 
documentation some way.


In the documentation about WAL mode it says:
"Writers merely append new content to the end of the WAL file. Because 
writers do nothing that would interfere with the actions of readers, 
writers and readers can run at the same time. However, since there is 
only one WAL file, there can only be one writer at a time."


Maybe the magic words I don't fully understand are what's written later:
"A checkpoint operation takes content from the WAL file and transfers it 
back into the original database file. A checkpoint can run concurrently 
with readers, however the checkpoint must stop when it reaches a page in 
the WAL that is past the read mark of any current reader. The checkpoint 
has to stop at that point because otherwise it might overwrite part of 
the database file that the reader is actively using. The checkpoint 
remembers (in the wal-index) how far it got and will resume transferring 
content from the WAL to the database from where it left off on the next 
invocation."


I logically do understand that there can't be 2 writers updating the 
database at the same time, but I don't understand why the second insert 
statement in the example below won't work without finalizing the SELECT 
query?



sqlite3* conn_1;
sqlite3* conn_2;
//(..opening db files and so on)

sqlite3_stmt* pVM_1;
sqlite3_stmt* pVM_2;

sqlite3_prepare(conn_1, "SELECT * FROM test_table_2;", -1, &pVM_1, 0);
//sets WAL end mark for pVM_1?
sqlite3_prepare(conn_2, "SELECT * FROM test_table_2;", -1, &pVM_2, 0);
//sets WAL end mark for pVM_2?

nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set new read mark?
nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set new read mark?

//statements below will add content to the end of the WAL file?
nRet = sqlite3_exec(conn_1, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, &zErrMsg);
nRet = sqlite3_exec(conn_1, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, &zErrMsg);
nRet = sqlite3_exec(conn_1, "COMMIT;", 0, 0, &zErrMsg);

nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set new read mark?
nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set new read mark?

//sqlite3_finalize(pVM_1);
//sqlite3_finalize(pVM_2);

//The execution below will fail with SQLITE_BUSY if the SELECT statement
//pVM_2 isn't finalized.
//(Why won't it append new data to the end of the WAL file just like the
//successful insert above?)
nRet = sqlite3_exec(conn_2, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, &zErrMsg);
nRet = sqlite3_exec(conn_2, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, &zErrMsg);
nRet = sqlite3_exec(conn_2, "COMMIT;", 0, 0, &zErrMsg);


Best Regards,
Daniel
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-18 Thread Ryan Johnson

On 18/10/2012 8:45 AM, Richard Hipp wrote:

On Wed, Oct 17, 2012 at 4:11 PM, Efim Dyadkin  wrote:


Hi,

I am testing loss of data in Sqlite database correlated to auto-mounter
malfunction. I am running Sqlite on Linux and my database file is located
on network disk. For a test I stop the auto-mounter right before
transaction is committed. Surprisingly commit succeeds without any error
although hot journal remains on disk. When I get auto-mounter back and open
my database again the transaction is rolled back.

Apparently Sqlite cannot remove the journal due to unmounted path but it
ignores this error because Linux classifies it as ENOENT and unixDelete
function disregards it:

if( unlink(zPath)==(-1) && errno!=ENOENT ){
  return unixLogError(SQLITE_IOERR_DELETE, "unlink", zPath);
}

Can somebody please explain why "errno!=ENOENT" is required in here?


The purpose of unlink() is to make it so that the file does not exist.
ENOENT indicates that the file does not exist, and so the purpose of the
unlink() call has been fulfilled.

Suppose SQLite did treat ENOENT as an error.  What could it do about it?
It cannot roll the transaction back because the rollback journal (and
indeed the entire database) has disappeared.  I guess we could return
SQLITE_CORRUPT.  Would that somehow be more useful to the application?
Is there some plausible scenario in which an active journal file gone
AWOL does *not* indicate a serious problem? To me it indicates that Bad
Things are going on that sqlite is ill-equipped to deal with, so the
best it can do is stop rather than risk collateral damage by attempting
to continue normally. Especially if the filesystem went down: it's not
like any future transaction would succeed anyway...


If a less heavy-handed approach is desirable, perhaps a failed unlink() 
call should trigger an fstat() or seek() on the offending file 
descriptor; that might distinguish whether the file itself is 
inaccessible (as in OP's case) or whether it's just unreachable at the 
expected path (e.g. due to interference from an external agent).
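
A rough sketch of that idea (purely illustrative, not actual sqlite code;
"fd" stands for whatever descriptor is still open on the journal):

#include <sys/stat.h>

/* After unlink(path) fails with ENOENT, try to tell "the file is really
** gone" apart from "the path is unreachable (e.g. unmounted) but the open
** descriptor still refers to a live, linked file". */
static int path_vanished_but_file_alive(int fd){
  struct stat st;
  if( fstat(fd, &st)==0 && st.st_nlink>0 ){
    return 1;   /* the inode is still linked somewhere: the *path* went away,
                ** which smells like an unmount or external interference     */
  }
  return 0;     /* the file itself is gone (or the descriptor is dead too)   */
}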


I would still favor a fail-fast approach that returns a scary error 
message, though, the same kind that would be returned if a write failed.


$0.02
Ryan

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Mike King
At the moment it's difficult to tell but I envisage 3-4gb being the maximum.

Cheers,

Mike

On 18 October 2012 13:17, Joe Mistachkin  wrote:
>
> Mike King wrote:
>>
>> Thanks - sorry to be a pain but is this on the roadmap for the future?
>> (For the thing I'm playing with this is the difference between storing
>> images in the database or storing them in the filesystem).
>>
>
> Out of curiosity, what size images are you dealing with?  Unless they
> are really large, any efficiency gains from using incremental BLOB I/O
> would probably be minimal.
>
> Anyhow, I've created a ticket to track this feature request, here:
>
> http://system.data.sqlite.org/index.html/info/32d482b38b
>
> Now, it's on the roadmap; however, I'm not sure about the timeline of
> getting this implemented.
>
> --
> Joe Mistachkin
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-18 Thread Richard Hipp
On Wed, Oct 17, 2012 at 4:11 PM, Efim Dyadkin  wrote:

> Hi,
>
> I am testing loss of data in Sqlite database correlated to auto-mounter
> malfunction. I am running Sqlite on Linux and my database file is located
> on network disk. For a test I stop the auto-mounter right before
> transaction is committed. Surprisingly commit succeeds without any error
> although hot journal remains on disk. When I get auto-mounter back and open
> my database again the transaction is rolled back.
>
> Apparently Sqlite cannot remove the journal due to unmounted path but it
> ignores this error because Linux classifies it as ENOENT and unixDelete
> function disregards it:
>
> if( unlink(zPath)==(-1) && errno!=ENOENT ){
>   return unixLogError(SQLITE_IOERR_DELETE, "unlink", zPath);
> }
>
> Can somebody please explain why "errno!=ENOENT" is required in here?
>

The purpose of unlink() is to make it so that the file does not exist.
ENOENT indicates that the file does not exist, and so the purpose of the
unlink() call has been fulfilled.

Suppose SQLite did treat ENOENT as an error.  What could it do about it?
It cannot roll the transaction back because the rollback journal (and
indeed the entire database) has disappeared.  I guess we could return
SQLITE_CORRUPT.  Would that somehow be more useful to the application?

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] transaction commit is successful despite I/O error

2012-10-18 Thread Efim Dyadkin
Hi,

I am testing loss of data in Sqlite database correlated to auto-mounter 
malfunction. I am running Sqlite on Linux and my database file is located on 
network disk. For a test I stop the auto-mounter right before transaction is 
committed. Surprisingly commit succeeds without any error although hot journal 
remains on disk. When I get auto-mounter back and open my database again the 
transaction is rolled back.

Apparently SQLite cannot remove the journal due to the unmounted path, but it
ignores this error because Linux reports it as ENOENT and the unixDelete
function disregards it:

if( unlink(zPath)==(-1) && errno!=ENOENT ){
  return unixLogError(SQLITE_IOERR_DELETE,
                      "unlink", zPath);
}

Can somebody please explain why "errno!=ENOENT" is required in here?

Thanks for any help,
Efim Dyadkin


 This e-mail, including any attached files, may contain confidential and 
privileged information for the sole use of the intended recipient. Any review, 
use, distribution, or disclosure by others is strictly prohibited. If you are 
not the intended recipient (or authorized to receive information for the 
intended recipient), please contact the sender by reply e-mail and delete all 
copies of this message.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] escape sequences for GLOB

2012-10-18 Thread Richard Hipp
On Thu, Oct 18, 2012 at 2:28 AM, Tristan Van Berkom  wrote:

> Hi all,
>I've been around and around the documentation, run a sequence of
> test cases and still haven't figured this out.
>
> What is the proper default escape sequence to be used for GLOB
> pattern matching in SQLite ?
>
> I've already read in this other thread:
>
> http://osdir.com/ml/sqlite-users/2012-06/msg00349.html
>
> That the ESCAPE keyword cannot be used to override the
> default escape sequences used in GLOB, but how do I just
> get by with defaults ?
>

The only characters that need escaping are *, ?, and [, and they can all be
escaped by making them into a one-character character class:  [*], [?], and
[[].
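
A minimal sketch of that in C, with a made-up "files" table and "name"
column, and error handling trimmed:

  #include <stdio.h>
  #include <sqlite3.h>

  /* List rows whose name contains a literal '*'.  In the pattern,
  ** '[*]' matches the '*' character itself; the surrounding '*'s
  ** are ordinary wildcards. */
  static void list_starred(sqlite3 *db){
    sqlite3_stmt *stmt;
    const char *sql = "SELECT name FROM files WHERE name GLOB '*[*]*'";
    if( sqlite3_prepare_v2(db, sql, -1, &stmt, 0)!=SQLITE_OK ) return;
    while( sqlite3_step(stmt)==SQLITE_ROW ){
      printf("%s\n", (const char*)sqlite3_column_text(stmt, 0));
    }
    sqlite3_finalize(stmt);
  }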




>
> If this simply cannot be done, I'd really appreciate if
> someone could point me in the right direction here, must
> I override the 'glob' function in C code ?
>
> If so, that's really not a problem as I'm running this
> from C code anyway... but what should I override it with ?
>
> Best Regards,
> -Tristan
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Joe Mistachkin

Mike King wrote:
> 
> Thanks - sorry to be a pain, but is this on the roadmap for the future?
> (For the thing I'm playing with, this is the difference between storing
> images in the database and storing them in the filesystem.)
> 

Out of curiosity, what size images are you dealing with?  Unless they
are really large, any efficiency gains from using incremental BLOB I/O
would probably be minimal.

Anyhow, I've created a ticket to track this feature request, here:

http://system.data.sqlite.org/index.html/info/32d482b38b

Now, it's on the roadmap; however, I'm not sure about the timeline of
getting this implemented.

--
Joe Mistachkin

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-18 Thread Simon Slavin

On 18 Oct 2012, at 10:55am, Daniel Polski  wrote:

> What if I create the SELECT sqlite3_stmt and want to step through the data to
> evaluate whether an insert is needed?
> If I find a matching row and create another sqlite3_stmt (INSERT), will it
> convert the SELECT statement to a write transaction? ...Which would make all
> other threads unable to continue without restarting "from the top"? (No need to
> continue searching through their SELECT statements if they cannot begin an
> INSERT transaction anyway.)
> 
> From what I understand you mean this is what happens (I don't think I 
> understood correctly though):
> 
> 1. Connection A creates a read statement for table2, table3.
> 2. Connection B creates a read statement for table2, table3.
> 3. Connection A steps through the data and creates a write statement to 
> begin->insert->commit data to table1, success. (This also changes Connection 
> A's read in step 1 to a write statement.)
> 4. Connection B steps through the data and tries to begin, fails since it 
> can't convert the read at step 2 to a write transaction since Connection A's 
> statement in step 1 now is a write transaction?
> 
> Why does it try to convert the read transaction (table2, table3) to a write 
> transaction when another prepared statement inserts data to an unrelated table 
> (table1)?

The SELECT statement, including the _prepare() stage and all the _step()s until 
you've reached the last row, and then the _finalize(), is all one process.  
They're all part of the statement and you can assume that the database is still 
locked until you do a _finalize().

If you are using the results of a SELECT to figure out a bunch of other 
instructions like INSERT or DELETE you can do it two ways:

A) Quickly save the results of stepping into an array.  Once the SELECT is 
finalized, look through the array and figure out what you want to do about it.

B) Generate the INSERT/DELETE commands while stepping but keep them somewhere, 
either as a data array or by accumulating the text of the SQL commands in a 
string, rather than executing them immediately.  Once you're finished stepping, 
execute the commands.  (You may choose to use _exec to execute them all in one 
go.)
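
A rough sketch of option B in C, reusing the table1/table2 names from the
example above; the column names, the fixed-size buffer and the missing error
handling are simplifications for illustration only:

  #include <string.h>
  #include <sqlite3.h>

  /* Collect the INSERTs while stepping the SELECT; run them only after
  ** the SELECT has been finalized, so the read never has to be upgraded
  ** to a write while it is still open. */
  static int copy_matches(sqlite3 *db){
    sqlite3_stmt *sel;
    char batch[4096] = "BEGIN IMMEDIATE;";
    char cmd[128];
    int rc = sqlite3_prepare_v2(db,
        "SELECT id FROM table2 WHERE flag=1", -1, &sel, 0);
    if( rc!=SQLITE_OK ) return rc;
    while( sqlite3_step(sel)==SQLITE_ROW ){
      sqlite3_snprintf(sizeof(cmd), cmd,
          "INSERT INTO table1(ref) VALUES(%d);", sqlite3_column_int(sel, 0));
      strncat(batch, cmd, sizeof(batch)-strlen(batch)-1);
    }
    sqlite3_finalize(sel);     /* the read transaction ends here */
    strncat(batch, "COMMIT;", sizeof(batch)-strlen(batch)-1);
    return sqlite3_exec(db, batch, 0, 0, 0);   /* all writes in one go */
  }

The point is that sqlite3_finalize() ends the read before the BEGIN IMMEDIATE
inside the batch tries to take the write lock.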

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Creating a view

2012-10-18 Thread Gert Van Assche
Thanks,

Igor & Keith.

gert
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Mike King
Thanks - sorry to be a pain, but is this on the roadmap for the future?
(For the thing I'm playing with, this is the difference between storing
images in the database and storing them in the filesystem.)

Cheers,


On 18 October 2012 11:23, Joe Mistachkin  wrote:
>
> Mike King wrote:
>>
>> That's great to know, but is this supported in system.data.sqlite
>> or are there any plans to do so?
>>
>
> It's not being planned yet.
>
> --
> Joe Mistachkin
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Joe Mistachkin

Mike King wrote:
> 
> That's great to know, but is this supported in system.data.sqlite
> or are there any plans to do so?
> 

It's not being planned yet.

--
Joe Mistachkin

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-18 Thread Daniel Polski

On 2012-10-17 16:08, Pavel Ivanov wrote:

The problem is that you are starting a read-only transaction by executing
SELECT and then trying to convert this transaction into a writing one by
executing BEGIN IMMEDIATE. If SQLITE_BUSY is returned in such a situation,
you have to finish the transaction and start it again. In your code the
solution is easy: finalize the SELECT statement before executing
BEGIN IMMEDIATE.


What if I create the SELECT sqlite3_stmt and want to step through the
data to evaluate whether an insert is needed?
If I find a matching row and create another sqlite3_stmt (INSERT), will
it convert the SELECT statement to a write transaction? ...Which would
make all other threads unable to continue without restarting "from the
top"? (No need to continue searching through their SELECT statements if
they cannot begin an INSERT transaction anyway.)


From what I understand you mean this is what happens (I don't think I 
understood correctly though):


1. Connection A creates a read statement for table2, table3.
2. Connection B creates a read statement for table2, table3.
3. Connection A steps through the data and creates a write statement to 
begin->insert->commit data to table1, success. (This also changes 
Connection A's read in step 1 to a write statement.)
4. Connection B steps through the data and tries to begin, fails since 
it can't convert the read at step 2 to a write transaction since 
Connection A's statement in step 1 now is a write transaction?


Why does it try to convert the read transaction (table2, table3) to a 
write transaction when another prepared statement inserts data to an 
unrelated table (table1)?


Best Regards,
Daniel
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] escape sequences for GLOB

2012-10-18 Thread Clemens Ladisch
Tristan Van Berkom wrote:
> What is the proper default escape sequence to be used for GLOB
> pattern matching in SQLite ?

There are different escape mechanisms for different characters.
A comment hidden in the source code explains:
| Globbing rules:
|
|  '*'   Matches any sequence of zero or more characters.
|
|  '?'   Matches exactly one character.
|
| [...]  Matches one character from the enclosed list of
|characters.
|
| [^...] Matches one character not in the enclosed list.
|
| With the [...] and [^...] matching, a ']' character can be included
| in the list by making it the first character after '[' or '^'.  A
| range of characters can be specified using '-'.  Example:
| "[a-z]" matches any single lower-case letter.  To match a '-', make
| it the last character in the list.
| [...]
| Hints: to match '*' or '?', put them in "[]".  Like this:
|
| abc[*]xyz        Matches "abc*xyz" only
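
Building on those rules, a small helper along these lines can escape an
arbitrary string before it is spliced into a GLOB pattern (a sketch, not
taken from the SQLite sources):

  #include <stdlib.h>
  #include <string.h>

  /* Wrap '*', '?' and '[' in a one-character class so GLOB treats them
  ** literally.  Caller frees the result.  No error checking. */
  static char *glob_escape(const char *in){
    char *out = malloc(3*strlen(in) + 1);   /* worst case: all escaped */
    char *p = out;
    for( ; *in; in++ ){
      if( *in=='*' || *in=='?' || *in=='[' ){
        *p++ = '['; *p++ = *in; *p++ = ']';
      }else{
        *p++ = *in;
      }
    }
    *p = '\0';
    return out;
  }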


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Imanuel
Ok, here it is (45 MB):
http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html

Imanuel


On 18.10.2012 00:37, Imanuel wrote:
> No, the performance stays the same.
> I have also tried using a big cache_size, but that didn't change
> anything either.
>
> Yes, I can share the database - it is currently uploading, I will mail
> the link tomorrow.
>
> Imanuel
>
>
> On 17.10.2012 22:08, Dan Kennedy wrote:
>> On 10/18/2012 01:32 AM, Imanuel wrote:
>>> Hello
>>>
>>> I tested this on an SSD with a database with one single table with
>>> 5,553,534 entries:
>>> CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang TEXT,
>>> name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll INTEGER
>>> DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)
>>>
>>> When running this command:
>>> CREATE INDEX idx_namen_name ON Namen(name)
>>>
>>> Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
>>> seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
>>> Indexing the column "geonameid" takes 24 vs. 312 seconds.
>>> Neither of the two columns is presorted.
>> If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
>> performance the same as in 3.6.22?
>>
>> Are you able to share the database?
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
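
For anyone re-running the timing test, the settings suggested upthread can be
applied just before the index build. A sketch in C, where the database file
name and the cache size are assumptions (the table and index names come from
the thread):

  #include <sqlite3.h>

  static int build_index(void){
    sqlite3 *db;
    int rc = sqlite3_open("geonames.db", &db);    /* file name assumed */
    if( rc!=SQLITE_OK ) return rc;
    /* Keep temporary structures in memory and enlarge the page cache
    ** (a negative cache_size is interpreted as kibibytes). */
    sqlite3_exec(db, "PRAGMA temp_store = MEMORY;", 0, 0, 0);
    sqlite3_exec(db, "PRAGMA cache_size = -200000;", 0, 0, 0);
    rc = sqlite3_exec(db, "CREATE INDEX idx_namen_name ON Namen(name);",
                      0, 0, 0);
    sqlite3_close(db);
    return rc;
  }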

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] escape sequences for GLOB

2012-10-18 Thread Tristan Van Berkom

Hi all,
   I've been around and around the documentation, run a sequence of
test cases and still haven't figured this out.

What is the proper default escape sequence to be used for GLOB
pattern matching in SQLite ?

I've already read in this other thread:
   http://osdir.com/ml/sqlite-users/2012-06/msg00349.html

That the ESCAPE keyword cannot be used to override the
default escape sequences used in GLOB, but how do I just
get by with defaults ?

If this simply cannot be done, I'd really appreciate if
someone could point me in the right direction here, must
I override the 'glob' function in C code ?

If so, that's really not a problem as I'm running this
from C code anyway... but what should I override it with ?

Best Regards,
-Tristan
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] System.Data.Sqlite writing chunks to a BLOB

2012-10-18 Thread Mike King
That's great to know, but is this supported in system.data.sqlite or are
there any plans to do so?

Cheers

On Thursday, 18 October 2012, Simon Slavin wrote:

>
> On 17 Oct 2012, at 11:59pm, Mike King wrote:
>
> > I'm using the latest System.Data.Sqlite with c# and .Net 4.
> >
> > Is there any method of writing to a BLOB in byte array chunks rather
> > than in one big lump? (I can see how using SQLiteDataReader GetBytes I
> > can read a blob back in chunks).
>
> SQLite provides the equivalent routines:
>
> http://www.sqlite.org/c3ref/blob_open.html
> http://www.sqlite.org/c3ref/blob_write.html
> http://www.sqlite.org/c3ref/blob_close.html
>
> I don't know whether System.Data.Sqlite can call them but the underlying
> engine supports what you want.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org 
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
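
For reference, a minimal sketch of that incremental BLOB interface in C; the
images(id, data) table is made up, and the target row must already hold a
zeroblob() of the final size:

  #include <sqlite3.h>

  /* Write a large buffer into an existing BLOB in 64 KB chunks. */
  static int write_blob_chunked(sqlite3 *db, sqlite3_int64 rowid,
                                const unsigned char *buf, int total){
    sqlite3_blob *blob;
    int off, n;
    int rc = sqlite3_blob_open(db, "main", "images", "data",
                               rowid, 1 /* read-write */, &blob);
    if( rc!=SQLITE_OK ) return rc;
    for(off=0; off<total && rc==SQLITE_OK; off+=n){
      n = total-off < 65536 ? total-off : 65536;
      rc = sqlite3_blob_write(blob, buf+off, n, off);
    }
    sqlite3_blob_close(blob);
    return rc;
  }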
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users