Re: [sqlite] Using sqlite in multi fork() without threads

2013-01-07 Thread Dan Kennedy

On 01/07/2013 03:22 PM, Eduardo Morras wrote:


Hi, I use sqlite in some of my projects. In one it follows a
parent/multichild model (multifork). The database is managed by the
parent (open, close, backups, etc.), opened before the fork, and after
it the children use the database connection. I don't want to corrupt
the database, but the documentation only talks about use with
threads. I use a version compiled with -DSQLITE_THREADSAFE=1, and
shared-cache mode is turned on by the parent before opening the db.


I'm not sure I follow this exactly.

You don't want to open a connection in the parent, call fork,
then continue to use the same connection in the child process.
That will lead to problems.

On the other hand, if you are calling fork(), then exec() from
within the child process, then opening a new connection to the
same db, that's fine.
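
A minimal sketch of the safe pattern in C (illustrative only: the
database file name is a placeholder and error handling is omitted):

#include <sys/types.h>
#include <unistd.h>
#include <sqlite3.h>

int main(void){
  pid_t pid = fork();
  if( pid==0 ){
    /* Child: do NOT reuse a connection opened by the parent. Open a
    ** fresh connection to the same database instead (or exec() a new
    ** program that does so). */
    sqlite3 *db = 0;
    if( sqlite3_open("app.db", &db)==SQLITE_OK ){
      /* ... child uses db here ... */
    }
    sqlite3_close(db);
    _exit(0);
  }
  /* Parent: opens and uses its own connection here. */
  return 0;
}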




The app works, but is anyone else working with sqlite this way? Any
advice to avoid database corruption, deadlocks, or other problems?
Please don't suggest changing to a multithreaded model; I don't want
to use threads for this project.


Thanks

--- Eduardo Morras





Re: [sqlite] Problems (maybe a bug) when creating a backup using the command line interface '.backup' command

2013-01-02 Thread Dan Kennedy

On 01/02/2013 03:27 PM, Marco ten Thije wrote:

On 12/21/2012 05:18 PM, Dan Kennedy wrote:

Thanks. I think it's this:

http://www.sqlite.org/src/info/0cfd98ee20

Dan.

Thanks. I have looked into the ticket, but we also see this problem
when the backup is written and read by the same SQLite version
(3.7.5).

I have recreated a backup using the 3.7.5 version. These are the
hexdumps of both the original file and
the backup created with SQLite 3.7.5:

The original file:

000 5153 694c 6574 6620 726f 616d 2074 0033
010 0004 0101 4000 2020  f1d0  7600
020  9700  2100  0300  0100
030      0100  
040        
050        0200
060 2d00 1de2 000d  0004 00e3 7202 2002
070 7301 e300      
080        


What series of commands did you pass to 3.7.5 to create
this file?

That the change counters at byte offsets 24 and 92 are
different makes this file look like it was created using
3.7.5 and then populated using an older version.
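
For reference, a small standalone sketch (not from the original
thread) that prints the two header fields in question. The values at
byte offsets 24 (the file change counter) and 92 (the version-valid-for
number) are big-endian 32-bit integers:

#include <stdio.h>

static unsigned be32(const unsigned char *p){
  return ((unsigned)p[0]<<24) | ((unsigned)p[1]<<16)
       | ((unsigned)p[2]<<8)  |  (unsigned)p[3];
}

int main(int argc, char **argv){
  unsigned char hdr[100];        /* the database header is 100 bytes */
  FILE *f;
  if( argc<2 || (f = fopen(argv[1], "rb"))==0 ) return 1;
  if( fread(hdr, 1, sizeof(hdr), f)==sizeof(hdr) ){
    printf("change counter:    %u\n", be32(&hdr[24]));
    printf("version-valid-for: %u\n", be32(&hdr[92]));
  }
  fclose(f);
  return 0;
}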

Perhaps there is a bug in 3.7.5 causing this. What happens
if you use 3.7.15 to create the original and do the backup?

Dan.



Re: [sqlite] SQLite4 Performance

2013-01-01 Thread Dan Kennedy

On 01/31/2013 11:13 PM, Michael Black wrote:

Do we still get to report bugs?
I checked out the fossil repository
fossil clone http://www.sqlite.org/src4 sqlite4.fossil


I tried my insert test and ran into a problem.
I'm running Redhat 5.7 gcc 4.4.4

This program dies (showing inserts/sec)
123536
113110
110154
107018
105489
100335
100165
100382
100086
99336.9
insert4: src/lsm_shared.c:996: lsmReadlock: Assertion
`(((u32)iShmMax-(u32)iShmMin) < (1<<30))' failed.
Aborted


Thanks for this. Fixed here:

  http://www.sqlite.org/src4/info/bc0f61fc3b

Dan.




Re: [sqlite] SQLite4 Performance

2012-12-31 Thread Dan Kennedy

On 12/31/2012 11:03 AM, Cory Isaacson wrote:

We are doing some basic tests of SQLite4, and the performance seems slow. We
also tested LSM directly and got about the same results. The test is using
an INSERT of two integers, or 2 strings in LSM. It starts at around 9000
INSERTs/second, then degrades to around 5000 INSERTs/second.

Both tests were also tested with bigger transaction windows (100, 1000 rows
per transaction), with the same performance.

Are there configuration or build options that could be slowing it down?


I hope so. This is what I get on Linux:

  http://www.sqlite.org/src4/doc/trunk/www/lsmperf.wiki

Can you share the benchmark code you were using?

Dan.



Re: [sqlite] Problems (maybe a bug) when creating a backup using the command line interface '.backup' command

2012-12-21 Thread Dan Kennedy

On 12/21/2012 10:54 PM, Marco ten Thije wrote:



What is the size of the two database files?

The size of both files is 160768 bytes.


Also, can we have the first 6 lines of each hex dump
(i.e. enough to see the first 100 bytes)?

The first bytes of the original database:

000 5153 694c 6574 6620 726f 616d 2074 0033
010 0004 0101 4000 2020  f1d0  7600
020  9700  2100  0300  0100
030      0100  
040        
050        0200
060 2d00 1de2 000d  0004 00e3 7202 2002
070 7301 e300      
080        

The first bytes of the backup database:

000 5153 694c 6574 6620 726f 616d 2074 0033
010 0004 0101 4000 2020  0100  7600
020  9700  2100  0100  0100
030      0100  
040        
050        0100
060 2d00 27e2 000d  0004 00e3 7202 2002
070 7301 e300      
080        


Thanks. I think it's this:

  http://www.sqlite.org/src/info/0cfd98ee20

Dan.


Re: [sqlite] Problems (maybe a bug) when creating a backup using the command line interface '.backup' command

2012-12-21 Thread Dan Kennedy

On 12/21/2012 08:46 PM, Marco ten Thije wrote:

It returns 'ok':

./sqlite3 energy.sqlite
SQLite version 3.7.15.1 2012-12-19 20:39:10
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> pragma integrity_check;
ok
sqlite>

So, the original database looks ok.

I have compared the two databases (original and backup) and they
only differ in a few bytes in the header. The sizes match.

This is a diff between the hexdumps of both databases:

diff energy.hex backup.hex


What is the size of the two database files?

Also, can we have the first 6 lines of each hex dump
(i.e. enough to see the first 100 bytes)?

Thanks,
Dan.


Re: [sqlite] SQLite Version 3.7.15.1

2012-12-20 Thread Dan Kennedy

On 12/20/2012 01:26 PM, Patrik Nilsson wrote:

Thank you for the release!

I can't find sqlite-shell-linux-x86-3071501.zip and sqlite-doc-3071501.zip.


Thanks. They are there now.



Re: [sqlite] BUG: "insert or replace" doesn't work for External Content FTS4 Tables

2012-12-07 Thread Dan Kennedy

On 12/07/2012 11:17 PM, Alexey Pechnikov wrote:

What does the following:

   SELECT * FROM view_address_exists WHERE rowid=64402;



sqlite> SELECT "sys_title:hash" FROM view_address_exists WHERE
rowid=64402;
"sys_title:hash"
e9b4d0bcb5


But what does "SELECT * FROM ..." return? According to the
view_address_exists table, does row 64402 actually contain the token
'e7d4683bb2'?

If not, FTS has no way to delete the entry that maps from token
'e7d4683bb2' -> docid=64402.



Regarding the documented behavior ("When a row is deleted from an
external content FTS4 table, FTS4 needs to retrieve the column values
of the row being deleted from the content table."): I think "insert or
replace" is broken because we can't update the external table in the
middle of the "insert or replace" command's execution (after the old
record has been deleted from the FTS table and before the new record
is inserted into it). Maybe it would be more reasonable for the FTS
extension to keep the content of the FTS table synchronized with the
content of the external table internally?


It would be better that way. But the virtual table interface doesn't
provide any way to do that.
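
A hedged sketch (reusing the table names from this thread) of keeping
the two in sync by hand: delete the FTS row while the old values are
still visible through the content view, update the content, then
re-insert so the new tokens are indexed. The rowid is the one from the
example above:

#include <sqlite3.h>

static int resync_row(sqlite3 *db){
  /* Remove the old tokens while view_address_exists still shows the
  ** old row values. */
  int rc = sqlite3_exec(db,
      "DELETE FROM address_fts0 WHERE rowid=64402;", 0, 0, 0);
  if( rc!=SQLITE_OK ) return rc;

  /* ... update the row in the underlying content table here ... */

  /* Re-add the row so that FTS indexes the new tokens. */
  return sqlite3_exec(db,
      "INSERT INTO address_fts0(rowid, \"sys_title:hash\") "
      "SELECT rowid, \"sys_title:hash\" FROM view_address_exists "
      "WHERE rowid=64402;", 0, 0, 0);
}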






Re: [sqlite] BUG: "insert or replace" doesn't work for External Content FTS4 Tables

2012-12-07 Thread Dan Kennedy

On 12/07/2012 06:17 PM, Alexey Pechnikov wrote:

"insert or replace" doesn't trigger updating of the FTS index but only
'rebuild' do it:

sqlite>  .s address_fts0
CREATE VIRTUAL TABLE "address_fts0" USING
fts4(content="view_address_exists", "sys_title:hash");

sqlite>  select rowid,"sys_title:hash" from address_fts0 where
"sys_title:hash" match 'e7d4683bb2';
rowid|sys_title:hash
64402|e9b4d0bcb5


What does the following:

  SELECT * FROM view_address_exists WHERE rowid=64402;

return at this point?

From http://www.sqlite.org/fts3.html#section_6_2 :

 "When a row is deleted from an external content FTS4 table, FTS4 needs
to retrieve the column values of the row being deleted from the content
table. This is so that FTS4 can update the full-text index entries for
each token that occurs within the deleted row to indicate that that row
has been deleted. If the content table row cannot be found, or if it
contains values inconsistent with the contents of the FTS index, the
results can be difficult to predict."




sqlite>  insert or replace into "address_fts0" (rowid, "sys_title:hash")
select rowid,"sys_title:hash" from "view_address_exists" where sys_id=64402;

sqlite>  select rowid,"sys_title:hash" from address_fts0 where
"sys_title:hash" match 'e7d4683bb2';
rowid|sys_title:hash
64402|e9b4d0bcb5

sqlite>  INSERT INTO address_fts0(address_fts0) VALUES('rebuild');

sqlite>  select count(*) from address_fts0 where "sys_title:hash" match
'e7d4683bb2';
count(*)
0





Re: [sqlite] Custom collation method works for comparison, not for ORDER BY

2012-12-05 Thread Dan Kennedy

On 12/06/2012 06:11 AM, Niall O'Reilly wrote:

On 05/12/12 21:12, Clemens Ladisch wrote:

Do these queries give the correct result?

 select '100A' collate ipaddress < '127.0.0.1';
 select '100A' collate ipaddress < ' ABCD';

I.e., does the collation function actually work?


Thanks for the helpful suggestions.
I wish I had thought of something so simple.

Here's what I get:

sqlite> select '100A' collate ipaddress < '127.0.0.1';
0
sqlite> select '100A' collate ipaddress < ' ABCD';
0
sqlite> select '127.0.0.1' collate ipaddress < '::1';
1
sqlite> select '::1' collate ipaddress = '0::1';
1
sqlite> select 'dead:beef::' collate ipaddress = 'DEAD:BEEF::';
1
sqlite>

These results match what I intended the collation
function to do.  It appears to work, including
recognizing alternative notations for the same IPv6
address as equivalent.

Similar tests using BETWEEN also work as intended.


It still seems likely that the collation returns the wrong result in
some circumstances. Posting the code for it is probably the right
thing to do.






Re: [sqlite] Tracing latencies

2012-12-05 Thread Dan Kennedy

On 12/06/2012 12:47 AM, Black, Michael (IS) wrote:

I'm a bit confused with "always aligned".  None of the lseeks in this
log are 1024 aligned.

And I just ran a test with the latest 3.7.15.1 amalgamation, and most
of these seeks are not aligned.  Once in a while it gets lucky.
Alignment sure isn't deliberate here.  It appears the first page is at
offset 1080, which is already out of alignment.  File #4 here is the
WAL file.


I wasn't clear. Writes to the WAL file are not aligned. However,
they are usually sequential, which means on an OS with a smart
enough cache you don't really gain anything at all trying to align
them.

To the database file, where it does matter, writes are always
aligned.






lseek(4, 0, SEEK_SET)    = 0
lseek(4, 0, SEEK_SET)    = 0
lseek(4, 32, SEEK_SET)   = 32
lseek(4, 56, SEEK_SET)   = 56
lseek(4, 1080, SEEK_SET) = 1080
lseek(4, 1104, SEEK_SET) = 1104
lseek(4, 2128, SEEK_SET) = 2128
lseek(4, 2152, SEEK_SET) = 2152
lseek(4, 3176, SEEK_SET) = 3176
lseek(4, 3200, SEEK_SET) = 3200




Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems

From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on behalf of Dan Kennedy [danielk1...@gmail.com]
Sent: Wednesday, December 05, 2012 10:27 AM
To: sqlite-users@sqlite.org
Subject: EXT :Re: [sqlite] Tracing latencies

On 12/05/2012 09:03 PM, Black, Michael (IS) wrote:

Hmmm...looking at that strace sequencing is there some reason those
24-byte and 1024-byte writes can't be combined?  The 1024-byte
write is occurring at the end boundary of the 24-byte.

That would cut the i/o ops in half and might be a noticeable
improvement. A memory copy would be a lot faster than 2 i/o
requests.

And since disk page size is always a power of 2 would the 1024
buffer be better off as 1000 to align the page i/o better?



Those writes are appending to the WAL file. Writes to the database
file are always aligned page-sized (in this case 1024 byte) blocks.

For the WAL file, we tried it both ways (combining the 24 and 1024
bytes writes into one and leaving them separate) and found that, on
linux, it's faster to call write() twice. In other words, the extra
write() call is cheaper than doing a 1048 byte memcpy(). And you
can't just use 1048 byte buffers everywhere internally, as memory
allocators tend to waste lots of space if you allocate many blocks
that are all just a bit larger than a power-of-two.

Having said that, there has been at least one closed-source VFS
backend that buffers the sequential writes SQLite makes on the
journal and WAL files so that it can make mostly 8KB aligned writes
to the underlying file-system. So on some systems there is a benefit
to writing aligned page blocks even if you are writing sequentially.






Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems

From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on behalf of Keith Chew [keith.c...@gmail.com]
Sent: Wednesday, December 05, 2012 2:11 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Tracing latencies

Hi Dan

On Wed, Dec 5, 2012 at 6:38 PM, Dan Kennedy <danielk1...@gmail.com>
wrote:

If it's not fsync() then IO delays are normally caused by
read(). You could try [strace -T -eread ...] to check.

Are SELECT statements fast on the same database? How large is
the database compared to the machine's memory?


Ah, do you think the read on the OS level is starving the writes?
There are not many DB selects from the app, but there are other
read IO activities happening in the background. Still it doesn't
make sense, because WAL mode ensures from sqlite's point of view,
the reader will not be blocked by the writer. So, sqlite is
subjected to similar levels of read IO activity as Mysql (because
there are very few DB selects).

The tables are small, around 20MB in total, compared to 2GB of
memory available.

I have done more investigation. Using strace, I waited to capture
an insert/update with a long latency, and caught one that is 2s
long, below is the strace to the WAL file. We can see that it is
very bursty, all the seeks and writes span close to over 1s.

I recall doing a strace on Mysql a long time ago, and each insert
only does a single seek and write. Maybe it is sqlite's file format
that requires it to seek/write multiple places causing the
slowness? I am only guessing here, as I do not know how to
interpret the strace logs below.

[pid  4015] 21:01:53.634099 _llseek(98, 499928,
[pid  4015] 21:01:53.634245 write(98,
"\0\0\0\4\0\0HU^=\226\213\23\10<\247+\214\332\260\314Wf ", 24
 [pid  4015] 21:01:53.634546 _llseek(98,
499952, [pid  4015] 21:01:

Re: [sqlite] Tracing latencies

2012-12-05 Thread Dan Kennedy

On 12/05/2012 09:03 PM, Black, Michael (IS) wrote:

Hmmm...looking at that strace sequencing is there some reason those 24-byte and 
1024-byte writes can't be combined?  The 1024-byte write is occurring at the 
end boundary of the 24-byte.

That would cut the i/o ops in half and might be a noticeable improvement. A 
memory copy would be a lot faster than 2 i/o requests.

And since disk page size is always a power of 2 would the 1024 buffer be better 
off as 1000 to align the page i/o better?



Those writes are appending to the WAL file. Writes to the
database file are always aligned page-sized (in this case
1024 byte) blocks.

For the WAL file, we tried it both ways (combining the 24
and 1024 bytes writes into one and leaving them separate) and
found that, on linux, it's faster to call write() twice. In
other words, the extra write() call is cheaper than doing
a 1048 byte memcpy(). And you can't just use 1048 byte buffers
everywhere internally, as memory allocators tend to waste lots
of space if you allocate many blocks that are all just a bit
larger than a power-of-two.

Having said that, there has been at least one closed-source VFS backend
that buffers the sequential writes SQLite makes on the journal
and WAL files so that it can make mostly 8KB aligned writes to
the underlying file-system. So on some systems there is a benefit
to writing aligned page blocks even if you are writing sequentially.






Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Keith Chew [keith.c...@gmail.com]
Sent: Wednesday, December 05, 2012 2:11 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Tracing latencies

Hi Dan

On Wed, Dec 5, 2012 at 6:38 PM, Dan Kennedy <danielk1...@gmail.com> wrote:

If it's not fsync() then IO delays are normally caused by read().
You could try [strace -T -eread ...] to check.

Are SELECT statements fast on the same database? How large is the
database compared to the machine's memory?


Ah, do you think the read on the OS level is starving the writes?
There are not many DB selects from the app, but there are other read
IO activities happening in the background. Still it doesn't make
sense, because WAL mode ensures from sqlite's point of view, the
reader will not be blocked by the writer. So, sqlite is subjected to
similar levels of read IO activity as Mysql (because there are very
few DB selects).

The tables are small, around 20MB in total, compared to 2GB of memory available.

I have done more investigation. Using strace, I waited to capture an
insert/update with a long latency, and caught one that is 2s long,
below is the strace to the WAL file. We can see that it is very
bursty, all the seeks and writes span close to over 1s.

I recall doing a strace on Mysql a long time ago, and each insert only
does a single seek and write. Maybe it is sqlite's file format that
requires it to seek/write multiple places causing the slowness? I am
only guessing here, as I do not know how to interpret the strace logs
below.

[pid  4015] 21:01:53.634099 _llseek(98, 499928,
[pid  4015] 21:01:53.634245 write(98,
"\0\0\0\4\0\0HU^=\226\213\23\10<\247+\214\332\260\314Wf ", 24

[pid  4015] 21:01:53.634546 _llseek(98, 499952,
[pid  4015] 21:01:53.634712 write(98,
"\r\0\0\0\1\0033\0\0033\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
1024
[pid  4015] 21:01:53.740378 _llseek(98, 500976, [500976], SEEK_SET) = 0
[pid  4015] 21:01:53.740449 write(98,
"\0\0\0\3\0\0\0\0^=\226\213\23\10<\247\327\201>\32\227\323\f8", 24) =
24
[pid  4015] 21:01:53.740521 _llseek(98, 501000, [501000], SEEK_SET) = 0
[pid  4015] 21:01:53.740566 write(98,
"\r\3\221\0\t\3G\1\3\243\3\261\3\352\3\325\3\304\3G\3\177\3m\3\\\0\0\0\0\0\0"...,
1024) = 1024
[pid  4015] 21:01:53.740632 _llseek(98, 502024, [502024], SEEK_SET) = 0
[pid  4015] 21:01:53.740677 write(98,
"\0\0\37X\0\0\0\0^=\226\213\23\10<\247dL\17\316\32\30\301\237", 24) =
24
[pid  4015] 21:01:53.740736 _llseek(98, 502048, [502048], SEEK_SET) = 0
[pid  4015] 21:01:53.740781 write(98,
"\r\0\0\0\5\1*\0\3o\2\336\2L\1\273\1*\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
1024) = 1024
[pid  4015] 21:01:53.740844 _llseek(98, 503072, [503072], SEEK_SET) = 0
[pid  4015] 21:01:53.740889 write(98,
"\0\0HR\0\0HU^=\226\213\23\10<\247\276\32g\304j\372Q.", 24) = 24
[pid  4015] 21:01:53.740945 _llseek(98, 503096, [503096], SEEK_SET) = 0
[pid  4015] 21:01:53.740989 write(98,
"\n\0\0\0\25\1\264\0\2@\2\\\2x\2\224\2\260\2\314\2\350\3\4\3
\3<\3X\3t"..., 1024) = 1024
[pid  4015] 21:01:54.636566 _llseek(98, 504120, [504120], SEEK_SET) = 0
[pid  4015] 21:01:54.636636 write(98,
"\0\0\0\4\0\0HU^=\226\213\23\10<\247=]`\3700\351\226n", 24) = 24
[pid  4015] 21:01:54.636812 _llsee

Re: [sqlite] Tracing latencies

2012-12-05 Thread Dan Kennedy

On 12/05/2012 03:11 PM, Keith Chew wrote:

Hi Dan

On Wed, Dec 5, 2012 at 6:38 PM, Dan Kennedy <danielk1...@gmail.com> wrote:

If it's not fsync() then IO delays are normally caused by read().
You could try [strace -T -eread ...] to check.

Are SELECT statements fast on the same database? How large is the
database compared to the machine's memory?


Ah, do you think the read on the OS level is starving the writes?
There are not many DB selects from the app, but there are other read
IO activities happening in the background. Still it doesn't make
sense, because WAL mode ensures from sqlite's point of view, the
reader will not be blocked by the writer. So, sqlite is subjected to
similar levels of read IO activity as Mysql (because there are very
few DB selects).

The tables are small, around 20MB in total, compared to 2GB of memory available.

I have done more investigation. Using strace, I waited to capture an
insert/update with a long latency, and caught one that is 2s long,
below is the strace to the WAL file. We can see that it is very
bursty, all the seeks and writes span close to over 1s.

I recall doing a strace on Mysql a long time ago, and each insert only
does a single seek and write. Maybe it is sqlite's file format that
requires it to seek/write multiple places causing the slowness? I am
only guessing here, as I do not know how to interpret the strace logs
below.


A bunch of sequential writes to the wal file, totaling somewhere
around 4KB. This should be fast. But in the middle you have a 1
second delay:

> [pid  4015] 21:01:53.740989 write(98,
> "\n\0\0\0\25\1\264\0\2@\2\\\2x\2\224\2\260\2\314\2\350\3\4\3
> \3<\3X\3t"..., 1024) = 1024
> [pid  4015] 21:01:54.636566 _llseek(98, 504120, [504120], SEEK_SET) = 0

If you use -T instead of -t with strace, it reports the amount
of wall-clock time spent in each system call.

Dan.




[pid  4015] 21:01:53.634099 _llseek(98, 499928,
[pid  4015] 21:01:53.634245 write(98,
"\0\0\0\4\0\0HU^=\226\213\23\10<\247+\214\332\260\314Wf ", 24

[pid  4015] 21:01:53.634546 _llseek(98, 499952,
[pid  4015] 21:01:53.634712 write(98,
"\r\0\0\0\1\0033\0\0033\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
1024
[pid  4015] 21:01:53.740378 _llseek(98, 500976, [500976], SEEK_SET) = 0
[pid  4015] 21:01:53.740449 write(98,
"\0\0\0\3\0\0\0\0^=\226\213\23\10<\247\327\201>\32\227\323\f8", 24) =
24
[pid  4015] 21:01:53.740521 _llseek(98, 501000, [501000], SEEK_SET) = 0
[pid  4015] 21:01:53.740566 write(98,
"\r\3\221\0\t\3G\1\3\243\3\261\3\352\3\325\3\304\3G\3\177\3m\3\\\0\0\0\0\0\0"...,
1024) = 1024
[pid  4015] 21:01:53.740632 _llseek(98, 502024, [502024], SEEK_SET) = 0
[pid  4015] 21:01:53.740677 write(98,
"\0\0\37X\0\0\0\0^=\226\213\23\10<\247dL\17\316\32\30\301\237", 24) =
24
[pid  4015] 21:01:53.740736 _llseek(98, 502048, [502048], SEEK_SET) = 0
[pid  4015] 21:01:53.740781 write(98,
"\r\0\0\0\5\1*\0\3o\2\336\2L\1\273\1*\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
1024) = 1024
[pid  4015] 21:01:53.740844 _llseek(98, 503072, [503072], SEEK_SET) = 0
[pid  4015] 21:01:53.740889 write(98,
"\0\0HR\0\0HU^=\226\213\23\10<\247\276\32g\304j\372Q.", 24) = 24
[pid  4015] 21:01:53.740945 _llseek(98, 503096, [503096], SEEK_SET) = 0
[pid  4015] 21:01:53.740989 write(98,
"\n\0\0\0\25\1\264\0\2@\2\\\2x\2\224\2\260\2\314\2\350\3\4\3
\3<\3X\3t"..., 1024) = 1024
[pid  4015] 21:01:54.636566 _llseek(98, 504120, [504120], SEEK_SET) = 0
[pid  4015] 21:01:54.636636 write(98,
"\0\0\0\4\0\0HU^=\226\213\23\10<\247=]`\3700\351\226n", 24) = 24
[pid  4015] 21:01:54.636812 _llseek(98, 504144, [504144], SEEK_SET) = 0
[pid  4015] 21:01:54.636860 write(98,
"\r\0\0\0\1\0033\0\0033\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
1024
[pid  6744] 21:01:54.676590 close(98)   = 0


Regards
Keith





Re: [sqlite] Tracing latencies

2012-12-04 Thread Dan Kennedy

On 12/05/2012 09:22 AM, Keith Chew wrote:

On Wed, Dec 5, 2012 at 1:14 PM, Keith Chew  wrote:

Since there is no fsync (I know this from grepping the strace output),
it is hard to tell what to look for in the strace. The output is very
noisy, which makes it hard to go through.

Does anyone know any handy flags for strace to only see all the
activities associated with sqlite file access?


I have confirmed using strace that there are no fsyncs done by sqlite
(using synchronous=0 or synchronous=1), just seeks and writes. fsync
is seen only when synchronous=2.

After conducting more tests (using synchronous=0), I have some results
which are baffling me:
- Mysql average insert time, with write-cache OFF: 1-2ms
- Mysql average insert time, with write-cache ON: 0-1ms
- sqlite average insert time, with write-cache OFF: 160ms (have seen
up to 4000ms)
- sqlite average insert time, with write-cache ON: 50ms (have seen up to 150ms)

Quite a significant difference. I am stumped on how to make sqlite run faster.


If it's not fsync() then IO delays are normally caused by read().
You could try [strace -T -eread ...] to check.

Are SELECT statements fast on the same database? How large is the
database compared to the machine's memory?

Dan.


Re: [sqlite] Persistence of -wal and -shm

2012-11-29 Thread Dan Kennedy

On 11/30/2012 12:04 AM, Simon Slavin wrote:

I have a folder with 17 independent databases in it, each of them
opened for reading and writing occasionally.  Two of them have both
-wal and -shm files, even though they shouldn't have been opened for
reading or writing for days, and the last opening of each one closed
its connections properly without crashing.

Should these journal files exist even when nothing is open or has
crashed ?  I thought that properly-closed connections deleted journal
files for databases in WAL mode.

Is there some official method to delete them ?  I tried VACUUM on one
and when the connection was closed the journal files were deleted,
but I don't want to do that if their existence definitely indicates
that my code is not closing those files correctly.


You should probably assume that.

Usually *-wal and *-shm files are deleted when the number of
connections to a database drops from one to zero.

I think there is a race condition there though - if the last two
connections disconnect more or less at the same time the files
might not be removed. Seems unlikely that you would strike that
twice, but who knows.

To delete them, just run any query on the database and then disconnect.
i.e.

  sqlite3 databasefile.db "SELECT * FROM sqlite_master;" > /dev/null






Re: [sqlite] Replace on fts4 table results in unexpected matchinfo result

2012-11-27 Thread Dan Kennedy

On 11/27/2012 11:42 AM, Eric wrote:

The following SQL results in X'01000600'.
For reference, the "na" option should generate the total number of
documents and the average number of tokens per document.

CREATE VIRTUAL TABLE IF NOT EXISTS fts USING fts4(body);
REPLACE INTO fts ( docid, body ) VALUES (1, "one two three four");
REPLACE INTO fts ( docid, body ) VALUES (2, "one two");
SELECT quote(matchinfo(fts,'na')) FROM fts WHERE fts.body match 'three';

If "REPLACE" is replaced with "INSERT" in the above, the result is
X'02000300', as expected.
In either case, the number of rows in fts is as expected, body column
is correct, and other matchinfo options (pclx at least) seem to
function correctly.

Is REPLACE not allowed for fts4 tables, or is this a bug?


It's a bug. Now fixed here:

  http://www.sqlite.org/src/info/e38fb02d5e

Fix will appear in 3.7.15.

Dan.



Re: [sqlite] FTS substring behavior

2012-11-08 Thread Dan Kennedy

On 11/09/2012 01:49 AM, Paul Vercellotti wrote:



Hi there,

I wanted to clarify if FTS could provide any optimization for substring matches 
like '*ion*' or similar?


No. I think it will actually search for tokens that start with the 4
ASCII characters "*ion" if you try that.

Dan.


Re: [sqlite] Calling "ROLLBACK" outside transaction

2012-10-28 Thread Dan Kennedy

On 10/29/2012 07:35 AM, Igor Korot wrote:

Hi, ALL,
Will I be punished if I call ROLLBACK outside a transaction?


No. You will be rewarded with an error code though.

To check if an SQLite connection has an open write-transaction:

  http://www.sqlite.org/c3ref/get_autocommit.html
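
A minimal sketch of using that API to avoid the error (names are
illustrative):

#include <sqlite3.h>

/* Roll back only if a transaction is actually open. */
static void rollback_if_needed(sqlite3 *db){
  if( sqlite3_get_autocommit(db)==0 ){
    /* autocommit is off, so a BEGIN is outstanding */
    sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);
  }
}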

Dan.


Re: [sqlite] statement prepares OK but step returns SQLITE_NOTADB

2012-10-26 Thread Dan Kennedy

On 10/27/2012 07:06 AM, Simon Slavin wrote:


On 26 Oct 2012, at 11:05pm, Clemens Ladisch
wrote:


Yes; sqlite3_finalize _always_ frees the statement.


And if the statement is already finalized (due to an earlier error,
perhaps) then it is a harmless noop.  So you can do it near the end
of your routine harmlessly.


That's a bit deceptive. Passing the same pointer to sqlite3_finalize()
twice is undefined behavior. You might get an SQLITE_MISUSE error, but
you also might get a segfault.
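
One common defensive pattern (a sketch; the API does not require it)
is to clear the pointer after finalizing, since passing a NULL pointer
to sqlite3_finalize() is documented as a harmless no-op:

#include <sqlite3.h>

static void finish(sqlite3_stmt **ppStmt){
  sqlite3_finalize(*ppStmt); /* finalizing a NULL pointer is a no-op */
  *ppStmt = 0;               /* guards against a second finalize of the
                             ** same (now dangling) pointer */
}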




Re: [sqlite] I/O error on creating index with 3.7.14

2012-10-26 Thread Dan Kennedy

On 10/26/2012 12:14 PM, Jamie Norrish wrote:

Using 3.7.14, when creating an index on a 27G database (on the table
that contains almost all of the data), I consistently (on Windows XP and
Debian GNU/Linux, on three different machines) get a disk I/O error.
This does not happen using 3.7.13 (only tested on Debian GNU/Linux), nor
does it happen when creating the same index on a smaller (~2G) version
of the database.

Is there more information I should provide as part of a proper bug
report, or is this a known issue, or have I missed a trick somewhere?


Thanks for reporting this. Now fixed here:

  http://www.sqlite.org/src/info/e24ba5bee4

Dan.


Re: [sqlite] FTS returns out of memory when use NEAR and OR

2012-10-24 Thread Dan Kennedy

On 10/24/2012 11:07 PM, Vlad Seryakov wrote:

Hello

For some time already I have noticed that when I use NEAR/1 and OR in
one query, like SELECT * FROM search WHERE search MATCH 'tom NEAR/1
hanks or tom hanks',


Are you able to share the database file that you use to reproduce
this? Thanks.

Dan.





I get an out of memory error. Running this on a 16GB laptop it cannot
be a memory issue, and the database only has several thousand records.
Investigating the code, I found the place in fts3EvalNearTest where it
happens:

line 129689 in version 3.7.14.1

nTmp += p->pPhrase->doclist.nList;
aTmp = sqlite3_malloc(nTmp*2);
if( !aTmp ){

Adding this check before the malloc solved the problem:
if( nTmp<=0 ) return res;

Thanks






Re: [sqlite] Transaction inside transaction

2012-10-21 Thread Dan Kennedy

On 10/22/2012 11:34 AM, Igor Korot wrote:

Hi, ALL,
Is it possible to have a transaction inside a transaction? Will it be
handled correctly?

What I mean is: create transaction 1 by issuing "BEGIN", then create
transaction 2 by issuing "BEGIN".
Close transaction 2 by issuing either "COMMIT" or "ROLLBACK".
If "ROLLBACK" is issued, issue "ROLLBACK" on transaction 1.
If "COMMIT" is issued, continue transaction 1.


See here:

  http://www.sqlite.org/lang_savepoint.html

They don't work exactly as you describe above, but might be
close enough.
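
A hedged sketch of the closest savepoint equivalent (do_work() is a
hypothetical helper standing in for "transaction 2"; error handling
omitted):

#include <sqlite3.h>

extern int do_work(sqlite3 *db);  /* hypothetical inner work */

static void nested(sqlite3 *db){
  sqlite3_exec(db, "BEGIN", 0, 0, 0);             /* "transaction 1" */
  sqlite3_exec(db, "SAVEPOINT t2", 0, 0, 0);      /* "transaction 2" */
  if( do_work(db)!=SQLITE_OK ){
    sqlite3_exec(db, "ROLLBACK TO t2", 0, 0, 0);  /* undo inner work only */
  }
  sqlite3_exec(db, "RELEASE t2", 0, 0, 0);        /* fold into outer txn */
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
}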



Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-21 Thread Dan Kennedy

On 10/21/2012 03:48 PM, Imanuel wrote:

is there any way to have a fast SQLite
3.7.14.1 via DLL or at least to tell why it's that slow?


I don't think it's the dll itself that is the problem, just
something about the way it is used in Delphi. And I don't
actually know anything about Delphi..

Is the wrapper or the application issuing any PRAGMA statements
to SQLite? Does the wrapper intercept calls to the VFS interface?

Dan.





Imanuel


Am 18.10.2012 17:49, schrieb Imanuel:

No, I can't - 26s vs 15s (old vs new).

But when I run the test in my Delphi test application, 3.7.14.1 takes
285 seconds (tested again right now).
All the time, CPU usage is 25% (on a quad core).


This is my test code:

sqlite3_open('test.db', handle);
t0:=now();
sqlite3_exec(handle, PAnsiChar('CREATE INDEX idx_namen_name ON
Namen(name)'), nil, nil, nil);
showmessage(floattostr((now()-t0)*86400));
sqlite3_close(handle);


The DLL is referenced in this unit:
https://raw.github.com/plashenkov/SQLite3-Delphi-FPC/09d8674805c73d1ab0fa05832750cc0f727102f0/Source/SQLite3.pas

I simply have replaced the dll without changing the linking source code
to test with 3.7.14.1 - I hope that's ok.

Imanuel




Am 18.10.2012 16:49, schrieb Dan Kennedy:

On 10/18/2012 03:32 PM, Imanuel wrote:

Ok, here it is (45mb):
http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html


On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
version. Are you able to reproduce the performance regression with
these two?

   http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
   http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip

Dan.







Imanuel


Am 18.10.2012 00:37, schrieb Imanuel:

No, the performance stays the same.
I have also tried using a big cache_size, but that didn't change
anything, either.

Yes, I can share the database - it is currently uploading, I will mail
the link tomorrow.

Imanuel


Am 17.10.2012 22:08, schrieb Dan Kennedy:

On 10/18/2012 01:32 AM, Imanuel wrote:

Hello

I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)

When running this command:
CREATE INDEX idx_namen_name ON Namen(name)

Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
Indexing the column "geonameid" makes 24 vs. 312 seconds.
Neither of the two columns is presorted.

If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
performance the same as in 3.6.22?

Are you able to share the database?


















Re: [sqlite] transaction commit is successful despite I/O error

2012-10-20 Thread Dan Kennedy

On 10/20/2012 09:14 PM, Simon Slavin wrote:


On 19 Oct 2012, at 9:40pm, Efim Dyadkin
wrote:


You are right about the purpose of unlink but it is out of context.
There is a transaction in progress and a hot journal on disk. If the
journal can't be deleted by the end of the transaction, the
transaction can't be considered successfully finished.


This is not correct.  SQLite does not close the journal file at the
end of every transaction unless you have only a single connection to
the database and the journal mode set to DELETE, and that is not
common these days because creating and deleting files is so slow.
The times you should see a journal file deleted is when all
connections to the database have been closed: you've done a _close()
for every _open().


In WAL mode, the WAL file is held open by a connection until it
is closed. And only deleted after the number of connections to the
database drops to zero (the last connection usually unlinks it as
part of the sqlite3_close() call).

If you're running with journal_mode=DELETE (the default), the journal
file is always closed and unlinked at the end of each write
transaction.

If using journal_mode=TRUNCATE or journal_mode=PERSIST on unix, then
the journal file is always closed at the end of a write transaction.
However, on win32 (or any other system where it is impossible to
delete a file while the current or some other process has it open),
the journal file descriptor may be held open between transactions.





Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-19 Thread Dan Kennedy

On 10/18/2012 09:05 PM, Pavel Ivanov wrote:

On Thu, Oct 18, 2012 at 6:32 AM, Daniel Polski  wrote:

The SELECT statement, including the _prepare() stage and all the _step()s
until you've reached the last row, and then the _finalize(), is all one
process.  They're all part of the statement and you can assume that the
database is still locked until you do a _finalize().

If you are using the results of a SELECT to figure out a bunch of other
instructions like INSERT or DELETE you can do it two ways:

A) Quickly save the results of stepping into an array.  Once the SELECT is
finalized, look through the array and figure out what you want to do about
it.

B) Generate the INSERT/DELETE commands while stepping but keep them
somewhere, either as a data array or by accumulating the text of the SQL
commands in a string, rather than executing them immediately.  Once you're
finished stepping, execute the commands.  (You may choose to use _exec to
execute them all in one go.)




Thank you for the clarification, but I still misunderstand the documentation
in some way.

In the documentation about WAL mode it says:
"Writers merely append new content to the end of the WAL file. Because
writers do nothing that would interfere with the actions of readers, writers
and readers can run at the same time. However, since there is only one WAL
file, there can only be one writer at a time."

Maybe the magic words I don't fully understand are what's written later:
"A checkpoint operation takes content from the WAL file and transfers it
back into the original database file. A checkpoint can run concurrently with
readers, however the checkpoint must stop when it reaches a page in the WAL
that is past the read mark of any current reader. The checkpoint has to stop
at that point because otherwise it might overwrite part of the database file
that the reader is actively using. The checkpoint remembers (in the
wal-index) how far it got and will resume transferring content from the WAL
to the database from where it left off on the next invocation."

I logically do understand that there can't be 2 writers updating the
database at the same time, but I don't understand why the second insert
statement in the example below won't work without finalizing the SELECT
query?


sqlite3* conn_1;
sqlite3* conn_2;
//(..opening db files and so on)

sqlite3_stmt* pVM_1;
sqlite3_stmt* pVM_2;

sqlite3_prepare(conn_1, "SELECT * FROM test_table_2;", -1, &pVM_1, 0);
//sets WAL end mark for pVM_1?
sqlite3_prepare(conn_2, "SELECT * FROM test_table_2;", -1, &pVM_2, 0);
//sets WAL end mark for pVM_2?

nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set new read mark?
nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set new read mark?

//statements below will add content to the end of the WAL file?
nRet = sqlite3_exec(conn_1, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, 0);
nRet = sqlite3_exec(conn_1, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, 0);
nRet = sqlite3_exec(conn_1, "COMMIT;", 0, 0, 0);

nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set new read mark?
nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set new read mark?

//sqlite3_finalize(pVM_1);
//sqlite3_finalize(pVM_2);

//The execution below will fail with SQLITE_BUSY if the SELECT statement pVM_2 isn't finalized
//(Why won't it append new data at the end of the WAL file just like the successful insert above?)
nRet = sqlite3_exec(conn_2, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, 0);
nRet = sqlite3_exec(conn_2, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0, 0);
nRet = sqlite3_exec(conn_2, "COMMIT;", 0, 0, 0);


When an INSERT is executed it doesn't add a new row to the WAL file;
it has to add a new row to some database page, possibly changing some
other pages along the way too. These changed pages are written into
the WAL file. Because of this, when an INSERT statement is executed it
must be executed against the latest version of the database pages.
Otherwise it could create an alternative version of the database that
can't be merged with versions created by other INSERT statements. So
when you execute an INSERT statement, or start an IMMEDIATE
transaction, or convert an existing read-only transaction into a
writing transaction, SQLite has to make sure that this transaction
sees the latest version of the database. But when you start a
read-only transaction, SQLite ensures that all the way through it sees
a consistent database state, and that is the state of the database at
the beginning of the transaction. So when you start executing a SELECT
statement you lock conn_2 into the database state of that moment. Then
you make database changes on conn_1, so the database state changes.
Then you want to execute changes on conn_2, but it can't do that
because it is locked into a database state which is not the latest
one. And it can't change the visible database state because the SELECT
statement is still in progress.


Exactly. To successfully upgrade a 

Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Dan Kennedy

On 10/18/2012 09:49 PM, Dan Kennedy wrote:

On 10/18/2012 03:32 PM, Imanuel wrote:

Ok, here it is (45mb):
http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html


On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
version. Are you able to reproduce the performance regression with
these two?

http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip


Second link is incorrect. They should be:

  http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
  http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip

Dan.




Dan.







Imanuel


Am 18.10.2012 00:37, schrieb Imanuel:

No, the performance stays the same.
I have also tried using a big cache_size, but that didn't change
anything, either.

Yes, I can share the database - it is currently uploading, I will mail
the link tomorrow.

Imanuel


Am 17.10.2012 22:08, schrieb Dan Kennedy:

On 10/18/2012 01:32 AM, Imanuel wrote:

Hello

I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang
TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll
INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)

When running this command:
CREATE INDEX idx_namen_name ON Namen(name)

Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
Indexing the column "geonameid" makes 24 vs. 312 seconds.
Neither of the two columns is presorted.

If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
performance the same as in 3.6.22?

Are you able to share the database?













Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-18 Thread Dan Kennedy

On 10/18/2012 03:32 PM, Imanuel wrote:

Ok, here it is (45mb):
http://www.file-upload.net/download-6707980/CREATE_INDEX_test.7z.html


On Linux here 3.6.22 takes around 61 seconds. Against 23 for a new
version. Are you able to reproduce the performance regression with
these two?

  http://www.sqlite.org/sqlite-shell-win32-x86-3062200.zip
  http://www.sqlite.org/sqlite-shell-win32-x86-307140100.zip

Dan.







Imanuel


Am 18.10.2012 00:37, schrieb Imanuel:

No, the performance stays the same.
I have also tried using a big cache_size, but that didn't change
anything, either.

Yes, I can share the database - it is currently uploading, I will mail
the link tomorrow.

Imanuel


Am 17.10.2012 22:08, schrieb Dan Kennedy:

On 10/18/2012 01:32 AM, Imanuel wrote:

Hello

I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)

When running this command:
CREATE INDEX idx_namen_name ON Namen(name)

Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
Indexing the column "geonameid" makes 24 vs. 312 seconds.
Neither of the two columns is presorted.

If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
performance the same as in 3.6.22?

Are you able to share the database?











Re: [sqlite] CREATE INDEX is 13 times slower with 3.7.14.1 than with 3.6.22

2012-10-17 Thread Dan Kennedy

On 10/18/2012 01:32 AM, Imanuel wrote:

Hello

I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT 7)

When running this command:
CREATE INDEX idx_namen_name ON Namen(name)

Version 3.6.22 (from here: http://indasoftware.com/sqlite/) takes 36
seconds, while 3.7.14.1 (as DLL, too) takes 279 seconds.
Indexing the column "geonameid" makes 24 vs. 312 seconds.
Neither of the two columns is presorted.


If you set "PRAGMA temp_store = memory" in 3.7.14.1 is the
performance the same as in 3.6.22?

Are you able to share the database?
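
A sketch of the suggested experiment in C (timing and error handling
omitted; the file, table and index names come from the messages
above):

sqlite3 *db = 0;
sqlite3_open("test.db", &db);
sqlite3_exec(db, "PRAGMA temp_store = memory;", 0, 0, 0);
sqlite3_exec(db, "CREATE INDEX idx_namen_name ON Namen(name);", 0, 0, 0);
sqlite3_close(db);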



Re: [sqlite] light weight write barriers

2012-10-12 Thread Dan Kennedy

On 10/11/2012 11:38 PM, Nico Williams wrote:

On Wed, Oct 10, 2012 at 12:48 PM, Richard Hipp  wrote:

Could you list the requirements of such a light weight barrier?
i.e. what would it need to do minimally, what's different from
fsync/fdatasync ?


For SQLite, the write barrier needs to involve two separate inodes.  The
requirement is this:


...


Note also that when fsync() works as advertised, SQLite transactions are
ACID.  But when fsync() is reduced to a write-barrier, we lose the D
(durable) and transactions are only ACI.  In our experience, nobody really
cares very much about durable across a power-loss.  People are mainly
interested in Atomic, Consistent, and Isolated.  If you take a power loss
and then after reboot you find the 10 seconds of work prior to the power
loss is missing, nobody much cares about that as long as all of the prior
work is still present and consistent.


There is something you can do: use a combination of COW on-disk
formats in such a way that it's possible to detect partially-committed
transactions and rollback to the last good known root, and
backgrounded fsync()s (i.e., in a separate thread, without waiting for
the fsync() to complete).


SQLite WAL mode comes close to that if you run your checkpoints
in the background. Following a power failure, those transactions that
have been checkpointed to the database file are assumed to have been
synced. Then SQLite uses checksums to determine the subset of
transactions in the WAL file that are intact.

I say close, because if you keep on writing to the db while the
checkpoint is running you end up with the WAL file growing indefinitely.
So it doesn't quite work.

Omitting the D in ACID changes everything. With the D in, you need to
fsync() after every transaction. Without it, you need to fsync() before
reclaiming space (i.e. when overwriting old data with new - you need
to be sure that the old data will not be required following recovery
from a power failure, which means an fsync()).

Dan.



Re: [sqlite] Combining user-defined functions with aggregate functions

2012-10-10 Thread Dan Kennedy

On 10/10/2012 10:09 PM, Hamish Allan wrote:

On 10 October 2012 16:07, Dan Kennedy <danielk1...@gmail.com> wrote:


On 10/10/2012 10:01 PM, Ryan Johnson wrote:


On 10/10/2012 10:49 AM, Dan Kennedy wrote:


The easiest workaround is probably to use a temp table to store the
unaggregated results of the FTS query.


What about a nested query?


Actually that will cause a problem too, as the optimizer will
flatten the sub-query.


Is there any way to tell the optimiser to use a temporary table for
the sub-select rather than flattening the sub-query?


I don't think there is. It's this optimization causing the trouble:

  http://www.sqlite.org/optoverview.html#flattening

You could change your query so that it doesn't qualify for that
optimization I suppose. Easiest way would be to append
"LIMIT -1 OFFSET 0" to the end of it. Of course, there is no
guarantee that a future version of SQLite will not see through
that, apply the optimization and hit the limitation.
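
For example, a sketch of the rewritten query from this thread with the
suffix appended to the subquery:

const char *zSql =
  "SELECT r, sum(h) FROM ("
  "  SELECT rowid>>32 AS r, hits(matchinfo(text)) AS h"
  "  FROM text WHERE content MATCH ?"
  "  LIMIT -1 OFFSET 0"   /* defeats the flattener, for now */
  ") GROUP BY r";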

Dan.







Thank you both for your replies.

H





Re: [sqlite] Combining user-defined functions with aggregate functions

2012-10-10 Thread Dan Kennedy

On 10/10/2012 10:01 PM, Ryan Johnson wrote:

On 10/10/2012 10:49 AM, Dan Kennedy wrote:

On 10/10/2012 08:14 PM, Hamish Allan wrote:

Short form question:

Working: SELECT a, userfunc(systemfunc) FROM t;
Working: SELECT a, sum(systemfunc) FROM t GROUP BY a;
Not working: SELECT a, sum(userfunc(systemfunc)) FROM t GROUP BY a;

Long form question:

I have a user-defined C function called "hits", loosely based on the
function "rankfunc" from the FTS3 example code, but returning (using
sqlite3_result_int) the total number of hits per row from the
matchinfo (code below).


As things are now, you can't use any of the FTS auxiliary functions
as arguments to an aggregate function.

The easiest workaround is probably to use a temp table to store the
unaggregated results of the FTS query.

What about a nested query?

SELECT r, sum(h) FROM (
SELECT rowid>>32 r, hits(matchinfo(text)) h
FROM text WHERE content MATCH 'a')
GROUP BY r;

Avoids the need to declare a temp table explicitly, and the optimizer
might even be able to avoid materializing the subquery.

... though if it turns out the above mechanical query rewrite works, I
would start to wonder why the original syntax can't be supported...


Actually that will cause a problem too, as the optimizer will
flatten the sub-query.

The error message will be something like "unable to use function
matchinfo() in the requested context".

Dan.



Re: [sqlite] Combining user-defined functions with aggregate functions

2012-10-10 Thread Dan Kennedy

On 10/10/2012 08:14 PM, Hamish Allan wrote:

Short form question:

Working: SELECT a, userfunc(systemfunc) FROM t;
Working: SELECT a, sum(systemfunc) FROM t GROUP BY a;
Not working: SELECT a, sum(userfunc(systemfunc)) FROM t GROUP BY a;

Long form question:

I have a user-defined C function called "hits", loosely based on the
function "rankfunc" from the FTS3 example code, but returning (using
sqlite3_result_int) the total number of hits per row from the
matchinfo (code below).


As things are now, you can't use any of the FTS auxiliary functions
as arguments to an aggregate function.

The easiest workaround is probably to use a temp table to store the
unaggregated results of the FTS query.






The function is working fine; I can query with:

SELECT (rowid >> 32), hits(matchinfo(text)) FROM text WHERE content MATCH 'a';

and get output like:

1|21
1|6
1|6
1|5
1|8
1|10
1|8
1|16
1|48

(In case you're wondering what the shift is for, I'm using the high 32
bits of the rowid as a collection identifier, and the low 32 bits as a
document identifier.)

And for the query:

SELECT (rowid >> 32), SUM(LENGTH(content)) FROM text WHERE content
MATCH 'a' GROUP BY (rowid >> 32);

I get output like:

1|6507

In other words, the user-defined function is working fine, and an
aggregate query is working fine.

But if I combine them, replacing the SQL with:

SELECT (rowid >> 32), sum(hits(matchinfo(text))) FROM text WHERE
content MATCH 'a' GROUP BY (rowid >> 32);

I get a result code of 1, SQLITE_ERROR, "SQL error or missing database".

What am I doing wrong?

Thanks,
Hamish

// 
// Query code

// const char *sql = "SELECT (rowid >> 32), hits(matchinfo(text)) FROM
text WHERE content MATCH ?"; // works
// const char *sql = "SELECT (rowid >> 32), sum(length(content)) FROM
text WHERE content MATCH ? GROUP BY (rowid >> 32)"; // works
const char *sql = "SELECT (rowid >> 32), sum(hits(matchinfo(text)))
FROM text WHERE content MATCH ? GROUP BY (rowid >> 32)"; // doesn't
work

result = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
assert(result == SQLITE_OK);

result = sqlite3_bind_text(stmt, 1, "a", -1, SQLITE_STATIC);
assert(result == SQLITE_OK);

do
{
 result = sqlite3_step(stmt);
 if (result == SQLITE_ROW)
 {
 sqlite_int64 c0 = sqlite3_column_int64(stmt, 0);
 int c1 = sqlite3_column_int(stmt, 1);

 printf("%lld|%d", c0, c1);
 }
}
while (result == SQLITE_ROW);

// 
// User-defined function code

static void OLVSqliteHitsFunc(sqlite3_context *pCtx, int nVal,
sqlite3_value **apVal)
{
 if (nVal != 1)
 {
 sqlite3_result_error(pCtx, "Wrong number of arguments to
PDFXHitsFunc", -1);
 return;
 }

 int hits = 0;
 unsigned int *blob = (unsigned int *)sqlite3_value_blob(apVal[0]);

 unsigned int numberOfPhrases = blob[0];
 unsigned int numberOfColumns = blob[1];

 int phrase;
 for (phrase = 0; phrase < numberOfPhrases; ++phrase)
 {
     unsigned int *phraseBlob = &blob[2 + phrase * numberOfColumns * 3];

     int column;
     for (column = 0; column < numberOfColumns; ++column)
         hits += phraseBlob[3 * column];
 }

 sqlite3_result_int(pCtx, hits);
}


Re: [sqlite] CREATE INDEX on huge DB fails with SQLITE_NOMEM

2012-10-09 Thread Dan Kennedy

On 10/09/2012 03:30 PM, Marcus Grimm wrote:

Hello,

I'm currently investigating how far I can go with my
favorite DB engine. For that purpose I'm testing
my application with an artificial database that is
appx. 50 times bigger that the maximum I have
seen in the field so far.
The test creates a database from the scratch and just fills
the tables with random data. To speed up this operation
(which takes 3 hours) I drop all irrelevant indices
prior running the inserting.
Afterwards I need to create these indices because they are
necessary for the regular database operations.
Now, this (CREATE INDEX) fails after a few minutes with an error code
of 7 (malloc failed). I'm using the native C-Api...
I also specify: "PRAGMA cache_size=500000;" if that matters.


Try changing that to 2000 or something before executing the
CREATE INDEX statement.

Assuming you don't have SQLite configured to stored temp files
in memory, a CREATE INDEX statement starts by sorting the data
using a merge-sort before building the b-tree structure to
represent the index. This is much faster than just inserting keys
into the new b-tree in arbitrary order. But, the merge-sort code
assumes that it can plausibly allocate up to (cache-size * page-size)
bytes of memory using malloc() (not as a single chunk - in total).
And if a malloc() fails, you get this SQLITE_NOMEM error. In this
case, with "PRAGMA cache_size=50", that might end up being too
much.

Dan.
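
For example (a sketch; the table and index names are illustrative):

  #include <sqlite3.h>

  /* Lower the cache limit so the merge-sort behind CREATE INDEX does
  ** not assume it may allocate up to (cache_size * page_size) bytes,
  ** then build the index. Assumes "db" is an open connection. */
  static int build_index(sqlite3 *db){
    int rc = sqlite3_exec(db, "PRAGMA cache_size=2000;", 0, 0, 0);
    if( rc==SQLITE_OK ){
      rc = sqlite3_exec(db, "CREATE INDEX t1_a ON t1(a);", 0, 0, 0);
    }
    return rc;
  }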


Re: [sqlite] interesting deadlock.

2012-10-05 Thread Dan Kennedy

On 10/05/2012 04:29 AM, Jonathan Engle wrote:

Ok, so here's a question (trying an experiment to see if this will
work, but throwing it out here as well).

What if the source db handle for the backup is opened to use private
cache?  Will this have any effect at all or is it the cache mode of
db2 (using your example below)?


That sounds like it will work around the problem. If db1 is
using a private cache the problem cannot occur.

Fix is here:

  http://www.sqlite.org/src/info/89b8c377a6

Should appear in 3.7.15.
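
Until then, a sketch of the workaround (opening the backup source with a
private cache); the file names and error handling are illustrative only:

  #include <sqlite3.h>

  /* Open the backup source with a private cache so that writes made
  ** through other (shared-cache) connections cannot hit the buggy
  ** code path described below. */
  static int backup_db(const char *zSrc, const char *zDst){
    sqlite3 *pSrc = 0, *pDst = 0;
    int rc = sqlite3_open_v2(zSrc, &pSrc,
        SQLITE_OPEN_READWRITE|SQLITE_OPEN_PRIVATECACHE, 0);
    if( rc==SQLITE_OK ) rc = sqlite3_open(zDst, &pDst);
    if( rc==SQLITE_OK ){
      sqlite3_backup *p = sqlite3_backup_init(pDst, "main", pSrc, "main");
      if( p ){
        while( sqlite3_backup_step(p, 100)==SQLITE_OK );
        rc = sqlite3_backup_finish(p);
      }else{
        rc = sqlite3_errcode(pDst);
      }
    }
    sqlite3_close(pSrc);
    sqlite3_close(pDst);
    return rc;
  }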






On Sep 28, 2012, at 1:49 AM, Dan Kennedy wrote:


On 09/28/2012 03:32 AM, Jonathan Engle wrote:

I've been picking away at this for the last few days and have it
narrowed down fairly well.

It looks like if I turn off shared cache, it works fine (same
application code).

If I run with SQLITE_DEBUG enabled, the first issue I run into is
an assertion in sqlite3BtreeEnter: assert(
sqlite3_mutex_held(p->db->mutex) ); The call stack from it is

sqlite3BackupUpdate backupOnePage
sqlite3BtreeGetReserve(p->pSrc) sqlite3BtreeEnter

Look up the stack, it looks like sqlite3BackupUpdate locks the
mutex on the destination database but not the source.


Say you have an active backup operation (one created by
backup_init() but not yet completed) using source database handle
db1. In non-shared-cache mode. The backup is half-way done - 50% of
the source database pages have been copied to the destination.

If the source db is written by another process at this point, or
using a database handle other than db1, the backup operation has to
start over from the beginning on the next call to
sqlite3_backup_step().

However, if the app writes to the source database using handle db1,
SQLite will automatically update the backup database as well. So
that the backup operation doesn't have to restart. That's the call
to sqlite3BackupUpdate() above. As you say, the code assumes that
the mutex on the source database handle (i.e. db1) is already
held.

Turns out that this assumption is only true in non-shared-cache
mode. Because of the way the code is structured, in shared-cache
mode, this call to sqlite3BackupUpdate() will be made even if the
source database is updated using a second database handle - db2.
But the backup code still calls routines that assume the db1 mutex
is held... Bug.

In the deadlock scenario, all the threads are blocked in
lockBtreeMutex(). This routine is supposed to prevent deadlock by
ensuring that mutexes are only obtained in a globally defined
order. But that could malfunction in unpredictable ways if two
threads were running the lockBtreeMutex() code on behalf of the
same database connection simultaneously. The mutex on the database
handle is supposed to prevent that from happening, but since the
bug above allows lockBtreeMutex() to be called without actually
holding the mutex it easily might.

I think the fix will likely be to have shared-cache mode work like
non-shared-cache mode - force the backup to start over if the
source database is written via a second database handle (i.e.
db2).

Dan.






Tried as a test adding locking the source db, bad results.
Altered the definition of asserts to make them not fatal, got a
ton of assertions then deadlocking again.

Haven't tried to make a sample program yet, but the gist of it
would be to have one (or more threads) doing lots of small
transactions updating the database while simultaneously having
another thread continuously making a backup of the db
(unrealistic scenario, just makes the race easier to see).

It may or may not matter whether or not encryption is used, or
more importantly whether SQLITE_HAS_CODEC is defined, since the
portion of code that's asserting is only there when
SQLITE_HAS_CODEC is defined.

At this point, I guess I'll just run without enabling shared
cache, which seems to work just fine (a little better with
regards to backups actually) and just hope this gets fixed in a
future release.

Jon



It looks like it's unhappy that the mutex for the source database
in the

On Aug 25, 2012, at 1:33 PM, Jonathan Engle wrote:


No, the deadlock is deeper than that, it's stuck trying to
lock mutexes.  My current theory is that the thread trying to
update the page in the backup destination database is what's
causing trouble.

I also forgot to mention, each thread is using a different
connection object and that it's using shared cache mode.

Jon On Aug 25, 2012, at 12:57 PM, Patrik Nilsson wrote:


Do you test for the backup errors, i.e. SQLITE_BUSY and
SQLITE_LOCKED?

Do you test for step errors, i.e.  SQLITE_BUSY?

If you get the busy error, you can wait a while and try again
or start over.

/Patrik

On 08/24/2012 05:46 PM, Jonathan Engle wrote:

Ran into this recently, it's happened on one machine
running a beta test of our software.  This is a
multi-threaded application, and I've run into a sequence of
steps that deadlocks hard that as far as I can tell from
the documentation shouldn't. This is using SQLite 3.7.13

Re: [sqlite] interesting deadlock.

2012-09-28 Thread Dan Kennedy

On 09/28/2012 03:32 AM, Jonathan Engle wrote:

I've been picking away at this for the last few days and have it
narrowed down fairly well.

It looks like if I turn off shared cache, it works fine (same
application code).

If I run with SQLITE_DEBUG enabled, the first issue I run into is an
assertion in sqlite3BtreeEnter: assert(
sqlite3_mutex_held(p->db->mutex) ); The call stack from it is

sqlite3BackupUpdate backupOnePage sqlite3BtreeGetReserve(p->pSrc)
sqlite3BtreeEnter

Look up the stack, it looks like sqlite3BackupUpdate locks the mutex
on the destination database but not the source.


Say you have an active backup operation (one created by backup_init()
but not yet completed) using source database handle db1. In
non-shared-cache mode. The backup is half-way done - 50% of the source
database pages have been copied to the destination.

If the source db is written by another process at this point, or using
a database handle other than db1, the backup operation has to start over
from the beginning on the next call to sqlite3_backup_step().

However, if the app writes to the source database using handle
db1, SQLite will automatically update the backup database as well. So
that the backup operation doesn't have to restart. That's the call to
sqlite3BackupUpdate() above. As you say, the code assumes that the
mutex on the source database handle (i.e. db1) is already held.

Turns out that this assumption is only true in non-shared-cache mode.
Because of the way the code is structured, in shared-cache mode, this
call to sqlite3BackupUpdate() will be made even if the source database
is updated using a second database handle - db2. But the backup code
still calls routines that assume the db1 mutex is held... Bug.

In the deadlock scenario, all the threads are blocked in
lockBtreeMutex(). This routine is supposed to prevent deadlock
by ensuring that mutexes are only obtained in a globally defined
order. But that could malfunction in unpredictable ways if two threads
were running the lockBtreeMutex() code on behalf of the same database
connection simultaneously. The mutex on the database handle is
supposed to prevent that from happening, but since the bug above
allows lockBtreeMutex() to be called without actually holding
the mutex it easily might.

I think the fix will likely be to have shared-cache mode work like
non-shared-cache mode - force the backup to start over if the source
database is written via a second database handle (i.e. db2).

Dan.






Tried as a test adding locking the source db, bad results. Altered
the definition of asserts to make them not fatal, got a ton of
assertions then deadlocking again.

Haven't tried to make a sample program yet, but the gist of it would
be to have one (or more threads) doing lots of small transactions
updating the database while simultaneously having another thread
continuously making a backup of the db (unrealistic scenario, just
makes the race easier to see).

It may or may not matter whether or not encryption is used, or more
importantly whether SQLITE_HAS_CODEC is defined, since the portion of
code that's asserting is only there when SQLITE_HAS_CODEC is
defined.

At this point, I guess I'll just run without enabling shared cache,
which seems to work just fine (a little better with regards to
backups actually) and just hope this gets fixed in a future release.

Jon



It looks like it's unhappy that the mutex for the source database in
the

On Aug 25, 2012, at 1:33 PM, Jonathan Engle wrote:


No, the deadlock is deeper than that, it's stuck trying to lock
mutexes.  My current theory is that the thread trying to update the
page in the backup destination database is what's causing trouble.

I also forgot to mention, each thread is using a different
connection object and that it's using shared cache mode.

Jon On Aug 25, 2012, at 12:57 PM, Patrik Nilsson wrote:


Do you test for the backup errors, i.e. SQLITE_BUSY and
SQLITE_LOCKED?

Do you test for step errors, i.e.  SQLITE_BUSY?

If you get the busy error, you can wait a while and try again or
start over.

/Patrik

On 08/24/2012 05:46 PM, Jonathan Engle wrote:

Ran into this recently, it's happened on one machine running a
beta test of our software.  This is a multi-threaded
application, and I've run into a sequence of steps that
deadlocks hard that as far as I can tell from the documentation
shouldn't. This is using SQLite 3.7.13 with SEE. The source
database is using WAL mode, all transactions are done as
IMMEDIATE, synchronous mode is set to 0, and it is encrypted.
The destination database for the backup is not encrypted, and
is default (non-WAL, full synchronous) modes.


There are multiple threads active:

- one performing a write - two performing reads - one closing a
connection - one is in the middle of a backup operation

Here are the call stacks for the threads:


Writing thread:

sqlite3_step sqlite3VdbeExec sqlite3VdbeHalt
sqlite3BtreeCommitPhaseOne sqlite3PagerCommitPhaseOne
pagerWalFrames 

Re: [sqlite] How much disk space is required to checkpoint?

2012-09-25 Thread Dan Kennedy

On 09/26/2012 05:48 AM, Simon Slavin wrote:


On 25 Sep 2012, at 10:54pm, David Barrett wrote:


If my database is X GB, and the WAL file is Y GB, how much total
disk space is required to:

1) Checkpoint the database


I think it's a maximum of X+Y+Y (including the existing X+Y) plus or
minus a page or two.


2) Vacuum the database


I think it's a maximum of 3*(X+Y) (including the existing X+Y) plus
or minus a page or two.  It might be 2*(X+Y).  I'm not certain how of
how checkingpointing is done.  Both of those answers are dependent on
the database not being corrupt, and SQLite being in a proper working
state.


Additionally, if there are any temporary files created, where do
they exist?  We've done some poking around and are having some
crazy theory about a temporary file being created inside /tmp, but
in a *deleted* state.


As far as I know, SQLite does not intentionally play tricks like
that.  Your OS may report the length of a temporary file as zero
because it's still open, but SQLite is just using the normal file
writing calls.


Actually it might. On unix, if you unlink a file while one or more
clients have it open, the clients can keep reading and writing the
file descriptor. No other can open the file though (as it is not
linked into the file-system). So when SQLite requires a temporary
file, it normally creates and opens a file with a randomized name
in /tmp then calls unlink() on it immediately - before it starts
using the file.

Dan.
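
The idiom looks roughly like this (a simplified sketch of the unix
pattern, not the actual SQLite code; the name template is illustrative):

  #include <stdlib.h>
  #include <unistd.h>

  /* Create a file in /tmp with a randomized name and unlink it
  ** immediately. The open descriptor remains fully usable, but the
  ** file has no directory entry, so it shows up as "deleted" and the
  ** OS reclaims the space as soon as the descriptor is closed. */
  static int open_anonymous_tmpfile(void){
    char zPath[] = "/tmp/etilqs-XXXXXX";
    int fd = mkstemp(zPath);      /* create and open */
    if( fd>=0 ) unlink(zPath);    /* remove the name right away */
    return fd;                    /* still readable and writable */
  }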


Re: [sqlite] Index not used in query

2012-09-20 Thread Dan Kennedy

On 09/20/2012 04:51 PM, Scholz Maik (CM-AI/PJ-CF42) wrote:

Hi,

I have a problem with an index not being used.

My Example:
sqlite>  create table tab1 (o INT, a INT, b INT,UNIQUE(o,b));

sqlite>  select * from sqlite_master;
table|tab1|tab1|2|CREATE TABLE tab1 (o INT, a INT, b INT,UNIQUE(o,b))
index|sqlite_autoindex_tab1_1|tab1|3|

=>   I expect that the index "sqlite_autoindex_tab1_1" is on column o and b?

sqlite>  explain query plan SELECT * from tab1 WHERE o=1;
0|0|0|SEARCH TABLE tab1 USING INDEX sqlite_autoindex_tab1_1 (o=?) (~10 rows)

=>   OK:  "sqlite_autoindex_tab1_1" is used

sqlite>  explain query plan SELECT * from tab1 WHERE a=1;
0|0|0|SCAN TABLE tab1 (~10 rows)

=>   OK:  "sqlite_autoindex_tab1_1" is not used because a is not indexed.

sqlite>  explain query plan SELECT * from tab1 WHERE b=1;
0|0|0|SCAN TABLE tab1 (~10 rows)

=>   NOK:  Why is "sqlite_autoindex_tab1_1" not used?


Creating an index basically creates a sorted list on disk. In your
case, the elements of the list are sorted in order of column "o", with
column "b" used as a tie-breaker. In the same way as the entries in
a phone book are sorted by surname with the first name or initial of
the subscriber used as a tie-breaker.

So with a phone book, you can do these easily enough:

  SELECT * FROM subscribers WHERE surname = 'Smith';
  SELECT * FROM subscribers WHERE surname = 'Smith' AND fname = 'Joe';

But this is hard:

  SELECT * FROM subscribers WHERE fname = 'Joe';

To find all subscribers named 'Joe', you would have to search the whole
phone book from beginning to end. The fact that it's in sorted order
would not help.

Perhaps you need a second index on column "b" only.
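
For example (a sketch, reusing the tab1 schema from above):

  #include <sqlite3.h>

  /* A standalone index on "b" lets "WHERE b=?" be answered with an
  ** index search instead of a full-table scan. */
  static int add_index(sqlite3 *db){
    return sqlite3_exec(db, "CREATE INDEX tab1_b ON tab1(b);", 0, 0, 0);
  }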




Re: [sqlite] WAL, synchronous=1 and durability

2012-09-11 Thread Dan Kennedy

On 09/11/2012 11:03 PM, Daniel Frimerman wrote:

In FULL mode, the above is sync'ed, although I don't know whether
individual writes to the WAL file are sync'ed, or whether the WAL is
sync'ed only once all the data belonging to the commit has been written.
In NORMAL mode this is not done and that is why it is much faster (at least
15 times faster in my basic tests).

The steps below are part of a checkpoint, and they apply to both NORMAL
and FULL. This happens when the WAL file reaches its page limit, or when
a checkpoint is invoked manually by the user.  This step might be
redundant for FULL mode, but that's not important.

  2. fsync() the WAL file.
  3. Copy the data from the WAL file into the database file.
  4. fsync() the database file.



Synchronous=FULL mode is different. During step 1 above in FULL
mode, SQLite calls fsync() on the WAL file after writing each
transaction to it - before the users COMMIT command returns.

Since someone else kindly pointed out that COMMIT can require several
writes -- is each and every write required for a COMMIT synchronised to
prevent corruption? Or does SQLite utilise the same idea as with NORMAL
mode using checksums to prevent corruption? Perhaps that is why WAL+FULL is
faster than non-WAL+FULL? The more sequential writes in WAL mode (as
explained in the manual) must help here.  If something like checksums is
used to prevent corruption, maybe that also can be applied to non-WAL
journal modes, as an option to speed up their NORMAL mode?


One fsync() per transaction written to the WAL file.





Re: [sqlite] [Question] Does it necessary to sync directory on persist journal mode?

2012-09-11 Thread Dan Kennedy

On 09/11/2012 02:22 PM, Yongil Jang wrote:

Please, don't forget my questions.

2012/9/10 Yongil Jang


Dear sqlite-users,

I have a question about directory sync.

Question: In the case where the journal file already exists on disk in
persist journal mode, is it necessary to sync the directory on the first
fsync (unixSync) call?

AFAIK, a directory sync should be performed when a file is created or
deleted. Does that mean a directory sync is not necessary for existing
files?


I think you're right, in persistent journal mode once the file
has been created the second and subsequent fsync() calls are
redundant.

Did removing the extra fsync() calls produce any performance
improvement?

In general, SQLite can't use this trick because in a multi-process
environment there is no way to tell if an existing journal file has
already been synced to disk (e.g. the process that created it may
have crashed before calling fsync(), or it may have been created by
a process running with synchronous=off). But if this does produce a
performance improvement, perhaps we could do something in
locking_mode=EXCLUSIVE mode.




When I tested it with 1,000 insert queries, about 1,000 fewer fsync calls
were made in persist journal mode.
To compare this, I just changed "unixOpen" function as like as follows.

   int syncDir = 0;
   if( isCreate && (eType==SQLITE_OPEN_MASTER_JOURNAL ||
       eType==SQLITE_OPEN_MAIN_JOURNAL || eType==SQLITE_OPEN_WAL)
   ){
     int exists = 0;
     unixAccess(pVfs, zPath, SQLITE_ACCESS_EXISTS, &exists);
     if( !exists ){
       syncDir = 1;
     }
   }

Thank you for reading this message.

Best regards,
YongIl Jang





Re: [sqlite] WAL, synchronous=1 and durability

2012-09-11 Thread Dan Kennedy

On 09/11/2012 01:21 PM, Keith Chew wrote:

Hi Dan


In both NORMAL and FULL modes, we do the following:

   1. Write a bunch of transactions into the WAL file.
   2. fsync() the WAL file.
   3. Copy the data from the WAL file into the database file.
   4. fsync() the database file.

If a power failure occurs at any time, the next process to read the
database scans the WAL file and attempts to read as many transactions
as possible. If the checksum fails at any point, it stops reading.

So you can lose data. Say a power failure occurs between steps 1
and 2 above. If your WAL had 5 unsynced transactions in it then
following recovery you may find that none, some or all of them have
survived, depending on how much of the WAL file actually made it
to disk before the power failed.



Thank you very much for the explanation, it is very clear! I have one
last question.

In my ext3 file system, I have set barrier=1, disables write cache,
and also set commit=1 for per second disk sync from the kernel. In
your opinion, if I used NORMAL, would it mean:
(1) The most number of transactions I can lose in the WAL file is 1
second's worth?
(2) If the WAL contained say 10 seconds worth of transactions (1
transaction per second) before the power failure, and the 11th second
the transaction failed to make it, ie stuck between your step 1 and 2,
will the first 10 seconds worth of transactions still make it? I
presume yes, since you have checksums in the WAL file?


That sounds right to me. The first 10 seconds worth of transactions
and their checksums will have made it to disk so SQLite will be able
to recover them the next time the db is opened.


(3) If there is a power failure anywhere between steps 2, 3, and 4, I
presume that will be OK, since the recovery will be done on next
startup.


Right. Once step 2 has taken place your transactions have been synced
to disk. So even if a power failure occurs before they are copied into
the db file, they can't get lost.

Dan.


Re: [sqlite] WAL, synchronous=1 and durability

2012-09-10 Thread Dan Kennedy

On 09/11/2012 11:12 AM, Keith Chew wrote:

Hi Jay


   In WAL mode that's only half incorrect.  Your description of NORMAL
   seems correct, but FULL should be fully durable.  The WAL file may
   need to be checkpointed on startup, but the if the commit happens in
   WAL/FULL, you should have full durability.



This is the reason for my question. As far as I can see, the
difference between NORMAL and FULL in WAL mode is that FULL has one
transaction in WAL to be checkpointed, and NORMAL has multiple. Since
both of them need to checkpoint for durability, how is it that FULL
can guarantee durability for 1 transaction, but not NORMAL for
multiple? From a development point of view, I would imagine that both
will use the same checkpoint routine, yet I cannot understand why
FULL has a guarantee and NORMAL does not.


In WAL mode, when a transaction is written to disk, the modified
pages are appended to the *-wal file. Later on, during a checkpoint, all 
the modified pages in the *-wal file are copied back into the
database file. In both synchronous=NORMAL and synchronous=FULL the
*-wal file may contain more than one transaction.

The WAL file includes a running checksum so that if a power failure
occurs, the next client to read the database can determine the prefix
of the WAL file that can be considered trustworthy (as the power
failure may have corrupted more recently written parts of the file).

In both NORMAL and FULL modes, we do the following:

  1. Write a bunch of transactions into the WAL file.
  2. fsync() the WAL file.
  3. Copy the data from the WAL file into the database file.
  4. fsync() the database file.

If a power failure occurs at any time, the next process to read the
database scans the WAL file and attempts to read as many transactions
as possible. If the checksum fails at any point, it stops reading.

So you can lose data. Say a power failure occurs between steps 1
and 2 above. If your WAL had 5 unsynced transactions in it then
following recovery you may find that none, some or all of them have
survived, depending on how much of the WAL file actually made it
to disk before the power failed.

Synchronous=FULL mode is different. During step 1 above in FULL
mode, SQLite calls fsync() on the WAL file after writing each
transaction to it - before the users COMMIT command returns.

In this case if the same power failure occurs there is no chance
that any successfully committed transactions will be lost, as they
are guaranteed to have made it to disk.

So, in WAL mode, both sync=NORMAL and sync=FULL prevent database
corruption. However if a power failure occurs in NORMAL mode, some
recently committed transactions may have disappeared following
recovery.

Dan.
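
In practice the choice comes down to two PRAGMAs; a sketch:

  #include <sqlite3.h>

  /* Put a connection into WAL mode. synchronous=FULL makes each
  ** COMMIT durable (fsync before returning); synchronous=NORMAL may
  ** lose the most recent transactions after a power failure, but
  ** neither setting can corrupt the database. */
  static int configure_wal(sqlite3 *db, int bDurable){
    int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
    if( rc==SQLITE_OK ){
      rc = sqlite3_exec(db, bDurable
          ? "PRAGMA synchronous=FULL;"
          : "PRAGMA synchronous=NORMAL;", 0, 0, 0);
    }
    return rc;
  }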










Re: [sqlite] Possible bug - journal_mode DELETE/TRUNCATE

2012-09-09 Thread Dan Kennedy

On 09/10/2012 03:15 AM, Daniel Frimerman wrote:

My apologies about the attachment; should have known better.
This should be better:
https://dl.dropbox.com/u/50838941/SQLite3_Test.zip

I only get the problem with DELETE and TRUNCATE journal_mode (synchronous
set to NORMAL or FULL), but not with PERSIST (synchronous set to NORMAL or
FULL) or WAL (synchronous set to FULL).


When using "PRAGMA journal_mode=DELETE", after you reboot the
system is there a *-journal file present in the directory
next to your database file?

If so and you rename it before opening the database, are all
50 records present?





The reason I think there has to be 50 rows is because on FULL mode for
example, the I/O buffers are flushed, and it's consistently missing 1
record as opposed to any other number of records. I insert 50 records, the
sqlite3 command line utility executes the script, and I get to a stage
where I can write commands to the console.  What I mean is that as far as
sqlite is concerned, it has written the data to disk and also instructed
the OS to flush the buffers.  Perhaps it finalises something from the last
insert only when the next insert comes in?
It could be a coincidence of some sort, by sheer difference of
implementation of different journals that the "problem" doesn't show itself
with PERSIST or WAL journals.

I turned off host I/O cache in VirtualBox, so any writes by the guest OS
have to be physically written to the virtual disk on the host.  The guest
has "standard" I/O caching on disk, but FlushFileBuffers() should have done
its job.  There is no reason why any link in the chain should report data
written to disk without actually doing it, unless there is a problem.

I suppose I should just go ahead and test it on the physical PC.  If there
is a problem with that also, then I suppose I could blame the OS for not
flushing stuff to disk properly.

Now I gotta find me a machine

Regards,

Dan



On Sun, Sep 9, 2012 at 8:14 PM, Pavel Ivanov  wrote:


Note: attachments are stripped out of this list. So if you want for
anybody else to see your zip file you need to put it on some website
and post link here.

About the problem you have: I wonder how you can be sure that there
should be 50 rows in the database and not 49? If you are resetting the
OS before it has a chance to properly commit everything then it's okay
for last transaction to be missing. But if you are sure that you are
resetting the OS after everything is settled then maybe you are not
committing your last transaction properly? Or maybe there's some bug
in your virtualization layer and you need to try the same thing on a
real hardware?


Pavel


On Sun, Sep 9, 2012 at 8:50 AM, Daniel Frimerman wrote:

I am fairly new to sqlite and, as a result of not reading the manual and
not doing some performance testing, I got punished somewhat.  I did not
anticipate that on journal_mode=DELETE and synchronous=FULL, I would get
no more than 5 inserts (in auto-commit mode) per second.  It crippled a
certain batch operation on a live system.  That's water under the bridge;
it's the testing afterwards, and a potential minor problem that I found,
that I am now interested in.

I tested all journal mode settings for sqlite, as well as the synchronous
setting.  Some things that I discovered were not so obvious from reading
the docs, such as the WAL journal mode combined with the NORMAL synchronous
setting, which is nowhere near as "durable" as the NORMAL setting for
other journal modes. I.e. NORMAL mode for the DELETE journal in 99% of
cases saves all inserted data - reproducing the slightest of chances that
consistency is compromised was rather hard.  This is reflected in
performance testing: NORMAL is only slightly faster than FULL mode for
non-WAL journal settings (btw, journal_mode=OFF was never tested in any of
my tests). But I understood that WAL+NORMAL mode is equivalent to
non-WAL+FULL mode where consistency/corruption is concerned.  That is, the
database cannot get corrupted in WAL+NORMAL.  The gain in speed for
WAL+NORMAL trades off durability, and in my tests I easily reproduced that.

Okay, that was not really related to the possible bug I found.  I've
attached a ZIP file containing some batch files that create a table,
insert some rows, at which point you hard-reset the OS, log back in and
check if the number of rows in the DB matches what you inserted. Although
the non-WAL journal modes are somewhat similar, the little problem that
I've come to find only happens on DELETE/TRUNCATE, but not on PERSIST or
WAL. The problem is basically as follows: in DELETE and TRUNCATE journal
mode combined with NORMAL/FULL synchronous mode, there is always 1 row
missing during my simulated power-cut.

I used VirtualBox 4.1.22 and Windows XP Pro (SP3) and sqlite3 3.7.14
(command line as well as through my testing application). In VirtualBox,
under storage settings for the VM, I used IDE Controller (afaik it's

Re: [sqlite] FTS4 - reversing the index to plain text

2012-09-04 Thread Dan Kennedy

On 09/04/2012 07:51 PM, Pavel Hlavnicka wrote:

Dear all,

we are using sqlite FTS4 to build a fulltext index on data which
should not be available to the user without a decryption inside the
application. FTS4 matches perfectly - we can use either contentless
database or compress/uncompress parameters to encrypt the plain text
data.

My question is if the advanced user could be able to rebuild the plain
text data just from the full text index.

I did some experiments and it seems that using Fts4aux can give just a
list of tokens, and any time the offsets() function is called it either
fails (for a contentless index) or needs to read the plaintext using the
function defined in the uncompress parameter (which would mean a user
who cannot decrypt the data cannot use the offsets() function).

On the other hand the documentation on the index structure
(http://www.sqlite.org/fts3.html#section_9_4) says it keeps offsets
internally.

Questions:

1) Is it possible to obtain term offsets from contentless FTS4 table
(though possibly accessing the index binary format directly)
2) Is it possible to obtain term offsets from an FTS4 table which defines
an uncompress function, without this function being called?


The offsets stored in the full-text index are measured in tokens,
not bytes or characters. From earlier on the same webpage:

  A list of term offsets, one for each occurrence of the term within
the document. A term offset indicates the number of tokens (words) that
occur before the term in question, not the number of characters or
bytes. For example, the term offset of the term "war" in the phrase
"Ancestral voices prophesying war!" is 3.

Dan.


Re: [sqlite] SQLITE_MISUE returned from sqlite3_step with inconsistent errMsg

2012-08-31 Thread Dan Kennedy

On 08/31/2012 12:42 PM, Daniel Dawson wrote:

Hey Guys,

I am calling sqlite3_step and checking the return value,

Usually I get an SQLITE_DONE, or an SQLITE_BUSY which I handle.

However, sometimes I get an SQLITE_MISUSE return code. If I call sqlite3_errmsg straight 
after receiving the code then I get "Database is Locked" - The documentation 
states that SQLITE_MISUSE only occurs if I call the library routines incorrectly.

I don't understand why this would happen occasionally. I am not using any 
custom compiler options, everything is just set up by default.

Any help on the matter would be great, thanks.


There's a good chance there is something wrong with the sqlite3_stmt*
value passed to sqlite3_step(). Maybe it is NULL, invalid or points
to a statement object that has been finalized.
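
A defensive pattern that rules out the most common causes; a sketch
assuming an open handle "db":

  #include <sqlite3.h>
  #include <stdio.h>

  /* Step a statement only if prepare succeeded and returned a
  ** non-NULL handle (prepare can return SQLITE_OK with a NULL
  ** statement for empty input), and never touch it after finalize. */
  static int run_sql(sqlite3 *db, const char *zSql){
    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
    if( rc!=SQLITE_OK || pStmt==0 ){
      fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
      return rc;
    }
    while( sqlite3_step(pStmt)==SQLITE_ROW ){ /* consume rows */ }
    rc = sqlite3_finalize(pStmt);
    pStmt = 0;  /* do not step or finalize again */
    return rc;
  }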


Re: [sqlite] foreign keys across database boundaries

2012-08-27 Thread Dan Kennedy

On 08/27/2012 03:44 PM, Baruch Burstein wrote:

Can foreign keys or triggers be set between ATTACHed database tables?


No. They can not.


Re: [sqlite] Any plans to provide the sqlite3RunParser in the API

2012-08-14 Thread Dan Kennedy

On 08/14/2012 04:05 PM, Bishwa Shrestha wrote:

On 08/13/2012 05:51 PM, Jay A. Kreibich wrote:

On Mon, Aug 13, 2012 at 03:43:46PM +0200, Bishwa Shrestha scratched on
the wall:

Hi,

I've recently started using sqlite3 c-library. I'm using the
in-memory databases which are pretty fast since I'm loading data
dynamically into sqlite3 from a different backend (had to do this as
we wanted to provide an SQL interface to our users).

   If you want to provide an SQL interface to an existing data source, I
   would strongly recommend you look at Virtual Tables.  They look and
   smell like normal tables, but all of their access and data generation
   is done via application level code.  You can write a series of
functions
   that essentially slip in between the SQL engine and the data access
layer,
   and can redirect requests for table data to an external source.

   Virtual Tables can be read-write or read-only.  Read-only tables are
   pretty fast to implement, while read-write tables would allow
   external applications to modify the backend (no idea if that is
   desirable or not in your case).

   Virtual Tables are an extremely powerful and often overlooked part of
   SQLite.  You can do some pretty amazing and powerful things with them,
   as evidenced by the Full Text Search engine in SQLite, as well as the
   R-Trees extension, both of which are built on top of Virtual Tables.

   SQLite docs:
   http://www.sqlite.org/vtab.html
   http://www.sqlite.org/c3ref/create_module.html

   Virtual Tables are also covered fairly well in "Using SQLite":
   http://shop.oreilly.com/product/9780596521196.do

-j


Hi Jay, thank you for your suggestion. I am looking at sqlite3 virtual
tables and am liking what I see :) .

However, please note that this doesn't completely answer my original
question. It would be nice if someone could respond in that regard.


There are no plans to make sqlite3RunParser() part of the
API.



Re: [sqlite] Why "TRUNCATE TABLE" can't empty a table?

2012-08-14 Thread Dan Kennedy

On 08/14/2012 02:04 PM, daedae11 wrote:

Following is my program:
rc = sqlite3_exec(db, "TRUNCATE TABLE students;", NULL, NULL, &errMsg);


DELETE FROM students;



Re: [sqlite] MATCH multiple

2012-08-10 Thread Dan Kennedy

On 08/10/2012 12:48 PM, E. Timothy Uy wrote:

I recently switched a table to FTS4. Usually I query this table using the
IN operator, e.g.,

SELECT * FROM Table WHERE term IN (a,b,c)

If I want to use MATCH, is my only option

SELECT * FROM Table WHERE term MATCH 'a or b or c'


I think the above is your best option. Don't forget to use
CAPITAL letters for "OR" or it won't work.

Dan.




Re: [sqlite] sqlite3 database unreadable on Mountain Lion

2012-08-03 Thread Dan Kennedy

On 08/03/2012 11:33 PM, Tobias Giesen wrote:

Hello,

thanks for the replies!

A sample Mountain-unreadable file is here:
http://www.superflexible.com/broken.sql

Works fine under Snow Leopard.

I get the same results when using my own application as when using
/usr/bin/sqlite3. So, we can concentrate on the sqlite3 command line tool.

I noticed that with the new sqlite version, I can no longer use the
CONSTRAINT keyword. In other words, in a CREATE TABLE script, I
had to change this:
CONSTRAINT PK_SECTIONS PRIMARY KEY (ID),
to this:
PRIMARY KEY (ID),


The broken.sql file works with 3.7.12 here. And I can do:

  SQLite version 3.7.12 2012-05-14 01:41:23
  Enter ".help" for instructions
  Enter SQL statements terminated with a ";"
  sqlite> CREATE TABLE t1(x, CONSTRAINT pk_cons PRIMARY KEY(x));
  sqlite>

There was a problem similar to your description at one point, but
it should have been fixed before the 3.7.12 release. What do you
get from the shell command "SELECT sqlite_source_id();" on
Mountain Lion?

Dan.


Re: [sqlite] sqlite3 database unreadable on Mountain Lion

2012-08-03 Thread Dan Kennedy

On 08/03/2012 09:33 PM, Tobias Giesen wrote:

Hello,

here's a problem that's puzzling me.

I have one particular type of database that has become unreadable on
the new Mac OS 10.8. It must be related to the SQL structure. The error
I get is "database disk image is malformed". But the same file, on
Snow Leopard, works fine.

The SQLite version on Snow Leopard is 3.6.12, and on Mountain Lion it
is 3.7.12.

The strange thing is, when I attempt to load the sqlite3.dylib from
Snow Leopard under Mountain Lion, it also does not work. But I'm not
totally sure if loading the older sqlite3 library actually worked.

Does anybody have any advice for me?


I'd like to see the database file if possible. Are you able to
post it somewhere or else mail it directly to me?

Dan.



Re: [sqlite] File is encrypted or not a database - checked header which is OK

2012-08-02 Thread Dan Kennedy

On 08/02/2012 02:43 PM, Wim Paulussen wrote:



I have a database giving me the dreaded 'File is encrypted or not a
database' error. I checked for the 'SQLite format 3' part, which is
present. All other databases in the same directory work as expected,
and there are no journal files present, so I am at a loss how to get
this solved. Any suggestions as to where the problem could be?


SQLITE_NOTADB means that the database header is corrupted in
some way. Usually this means the "SQLite format 3" is missing,
but there are other header fields that may be corrupted also.

I think if you try to open a WAL-mode database using a client
that does not support WAL (i.e. anything ealier than 3.7, or
a 3.7.x build that has been compiled with SQLITE_OMIT_WAL) you
will get this error too.

Dan.





Re: [sqlite] error in example rank function

2012-07-21 Thread Dan Kennedy

On 07/22/2012 05:39 AM, AJ ONeal wrote:

Back to looking at http://www.sqlite.org/fts3.html#appendix_a

Notice the line:

 if( nVal!=(1+nCol) ) goto wrong_number_args;

nVal will always be 2 with the given use case:

 rank(matchinfo(documents), documents_data.weight)

or in the previous use case it will be 1

 rank(matchinfo(documents))

Seems that it would be best to assign a default weight of 1 if nVal == 1 or
the double value of apVal[1] otherwise.

Also the line

 double weight = sqlite3_value_double(apVal[iCol+1]);

should be simplified to:

 double weight = sqlite3_value_double(apVal[1]);

AJ ONeal



Ranking of search results is a tough problem. Different people have
different ideas as to how it should be done. So FTS supplies the
matchinfo() primitive to allow people to build their own ranking
functions.

So Appendix A of that document is supposed to be a kind of tutorial
that walks you through your options, in increasing order of complexity,
for building a ranking function. The different blocks of example
code are not meant to work together.





Re: [sqlite] unrecognized parameter: content=""

2012-07-21 Thread Dan Kennedy

On 07/21/2012 02:03 PM, AJ ONeal wrote:

Weird: now that I've reproduced the error (using the script), I can no
longer reproduce the successful execution:

sqlite3 ':memory:' 'CREATE VIRTUAL TABLE t1 USING fts4(content="", a, b,
c);'


Hmm... yet when I open another terminal window it begins to work again.
And when I go back to the previous terminal window it fails again.

What about my terminal session / environment might be contributing to this
behavior?


Using two different SQLite binaries.

content= is supported by version 3.7.11 or newer (or, unofficially,
3.7.9 or newer).


Re: [sqlite] Using the SQLite Online Backup AP

2012-07-09 Thread Dan Kennedy

On 07/10/2012 11:42 AM, Sreekumar TP wrote:

In the description of the backup API, (http://www.sqlite.org/backup.html) ,
it is said


"It cannot be used to copy data to or from in-memory databases."


This statement identifies a shortcoming of the "old" method of
creating backups (taking an exclusive lock and then copying the
file with "cp" or similar). It is a problem addressed by the
backup API.




which is followed by
Example 1: Loading and Saving In-Memory Databases


Isn't this contradictory?  If the sqlite backup API is not meant for use
with in-memory databases, the example should be removed, else it could be
misleading.

Kind Regards,
Sreekumar


Re: [sqlite] Strange issue with sqlite 3.7.9

2012-07-04 Thread Dan Kennedy

On 07/04/2012 08:26 PM, Alan Chandler wrote:

Due to hardware problems with my Debian Stable server, I have just
upgraded to Ubuntu-Server 12.04.

I have installed sqlite3 and when I ask it the version (with .version)
it replies
SQLite 3.7.9 2011-11-01 00:52:41 c7c6050ef060877ebe77b41d959e9df13f8c9b5e

Which exactly complies with the sqlite web site for this release.

Sometime ago I reported a problem with 3.7.10 referenced here
http://www.sqlite.org/src/info/b7c8682cc1


The commit referenced by that page:

   http://www.sqlite.org/src/info/b23ae131874bc5c621f0

went into 3.7.9. So the problem was probably introduced in
3.7.9, not 3.7.10.


Re: [sqlite] Bug in sqlite3_step

2012-07-03 Thread Dan Kennedy

On 07/04/2012 03:30 AM, Jay A. Kreibich wrote:

On Tue, Jul 03, 2012 at 04:26:42PM -0400, Richard Hipp scratched on the wall:


Actually, you can bind on a DDL statement, but bindings are only valid for
the lifetime of the statement itself, not for the whole lifetime of the
object created by the CREATE statement.


   Is that a side-effect of the fact that CREATE statements are copied
   into the sqlite_master table as literals, and not re-written?  (Is
   that even true?)


Pretty much. SQLite messes around with the start of CREATE TABLE
statements to remove the "TEMP" keyword or fully qualified database
name if one is present, but otherwise just copies the users input
into the sqlite_master table. i.e. if you do:

  CREATE TABLE main.x1(a, b);

SQLite rewrites "CREATE TABLE main." as "CREATE TABLE ". The
rest of the CREATE TABLE is stored in sqlite_master as is.




Re: [sqlite] Consequences of lexicographic sorting of keys in SQLite4?

2012-07-02 Thread Dan Kennedy

On 07/02/2012 04:29 PM, Niall O'Reilly wrote:


On 29 Jun 2012, at 23:58, Richard Hipp wrote:


But you know:  How often do people use BLOBs as keys?  What other SQL
engines other than SQLite even allow BLOBs as keys?  Are we trying to
optimize something that is never actually used?


For an IPAM application I have on my back burner, BLOB seems
a natural way to express IPv[46] addresses, ranges, and prefixes.
A bulkier alternative would be hexadecimal encoding as text.


That would be a reasonable use. But the blob in this case will be what,
eight bytes (or 10 in its encoded form)? So the encoding and decoding
(it's actually not clear we will ever want to decode, but anyhow) isn't
going to cost much in the way of CPU. And making the keys memcmp()
compatible allows some other optimizations - prefix compression and so
on. Plus I think memcmp() will be generally faster than any other
type of comparison.

Creating and using an index on larger blobs might be different of
course.


Re: [sqlite] Error indicated through SQLITE_CONFIG_LOG callback but sqlite3_step() does not fail

2012-07-01 Thread Dan Kennedy

On 06/30/2012 04:27 AM, Jeff Archer wrote:

Simon Slavin slavins at bigfraud.org Fri Jun 29 17:16:36 EDT 2012

Do you do the _prepare() first, then make a change to the database
schema?  For instance


Start of app
Prepare the INSERT statement
CREATE TABLE
Bind the INSERT statement
Step the INSERT statement


No.
Create Table

Then in a separate function
Prepare
Bind
Step
Finalize


If the prepare/step/finalize uses a different database handle,
it may prepare the statement using an old version of the
schema still cached in memory. Then, when the statement is
executed and the database file read, SQLite realizes the old
schema is out of date, loads the new schema and re-prepares
the statement.





Re: [sqlite] How to avoid duplicate entries in FTS table?

2012-07-01 Thread Dan Kennedy

On 07/01/2012 05:17 PM, Navaneeth.K.N wrote:

Hello,

On Sun, Jul 1, 2012 at 2:27 PM, Petite Abeille wrote:



On Jul 1, 2012, at 9:11 AM, Navaneeth.K.N wrote:


Now, repeating a "pattern" and "id" combination is an error to me. There
should be always one "pattern" to "id" combination. If this was not a
virtual table, I'd have solved the problem by creating a primary key on
both "pattern" and "id". But this trick is not working on FTS tables.


Perhaps you could try the following setup:

(1) Create a regular table to hold your unique patterns, using an unique
constraint

create table foo
(
   id integer not null constraint foo_pk primary key,
   bar text,
   constraint  foo_uk unique( bar )
)

(2) Create a FTS table with external content [1] to search the above

create virtual table foo_bar using fts4
(
   content = "foo",
   bar text
)



I knew this. I was more worried about the performance. Will there be a
performance difference comparing to data stored directly on the FTS table?


No. Internally FTS just uses your table instead of one that it
creates itself.




Re: [sqlite] Odd insertion error FTS4 + ICU

2012-06-19 Thread Dan Kennedy
On 06/19/2012 04:28 AM, E. Timothy Uy wrote:
> Dear Dan,
> 
> With the change from U8_NEXT to U16_NEXT, I am able to insert 一日耶羅波安出. I
> was also able to insert the rest of the data set (about 31000 more rows
> containing both traditional and simplified Chinese). Is this an ICU error?
> Seems like everything should be using U8_ in the tokenizer.

U16_NEXT is correct, as that buffer contains utf-16 characters. Data
is converted to utf-16 before it is tokenized as ICU does not provide
a break-iterator that operates directly on utf-8.


Re: [sqlite] Odd insertion error FTS4 + ICU

2012-06-18 Thread Dan Kennedy
On 06/19/2012 03:39 AM, E. Timothy Uy wrote:
> If anyone can unravel this mystery, it would be much appreciated. For now,
> I inserted a comma - 一日、耶羅波安出 and it works. I suspect it must be somehow
> that the sequence of bytes encodes another character, which throws the
> tokenizer out of whack or maybe the fts4aux table.

Can you try with this:

  http://www.sqlite.org/src/info/892b74116a

Thanks.


Re: [sqlite] Odd insertion error FTS4 + ICU

2012-06-18 Thread Dan Kennedy
On 06/19/2012 02:11 AM, E. Timothy Uy wrote:
> I recompiled ICU using U_CHARSET_IS_UTF8 and the error persists.
> 
> On Mon, Jun 18, 2012 at 11:45 AM, E. Timothy Uy  wrote:
> 
>> Hopefully someone has some insight on this. I am using FTS4 with
>> tokenize=icu (and PRAGMA encoding="UTF-8"). I'm getting getting an error
>> inserting the following into my virtual table:
>>
>> 一日耶羅波安出

Can you post the list of codepoints in this text? Or the hex
of the utf-16 or utf-8 encoding of the same?


Re: [sqlite] FTS Example Fails (matchinfo arguments)

2012-06-14 Thread Dan Kennedy

On 06/14/2012 01:27 PM, Sergei G wrote:

I am running sqlite3 version 3.7.3 on debian.

I run the following commands from fts3.html documentation page:

CREATE VIRTUAL TABLE t1 USING fts4(a, b);
INSERT INTO t1 VALUES('transaction default models default', 'Non
transaction reads');
INSERT INTO t1 VALUES('the default transaction', 'these semantics present');
INSERT INTO t1 VALUES('single request', 'default data');

SELECT matchinfo(t1) FROM t1 WHERE t1 MATCH 'default transaction
"these semantics"';

SELECT matchinfo(t1, 'ns') FROM t1 WHERE t1 MATCH 'default transaction';

The last line with match info(t1, 'ns') fails with the following error
message:

Error: wrong number of arguments to function matchinfo()

I have originally worked on my own table, but found the same error. So, I
have tried example above and it failed for me in the same way.


I think the two argument version of matchinfo() is only in 3.7.4 and
newer. Prior to that it only accepted one argument.


Re: [sqlite] (no subject)

2012-06-12 Thread Dan Kennedy

On 06/12/2012 05:41 PM, Nicholas Thompson wrote:

Hi, I am new to SQLITE

Would be using Microsoft VS 2008&  2010

Is it possible to safely use multiple threads in a process with each
thread making
its own connection to the same database

rc = sqlite3_open("file::memory:?cache=shared", &db);


Should be safe, yes. Assuming you are using the library in either
"multi-thread" or "serialized" (the default) mode.

  http://www.sqlite.org/threadsafe.html
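
A quick runtime check along these lines can catch a mis-built library
(a sketch):

  #include <sqlite3.h>
  #include <stdio.h>

  /* sqlite3_threadsafe() reports the compile-time SQLITE_THREADSAFE
  ** setting: zero means the library must not be used from more than
  ** one thread at a time. */
  static void check_threading(void){
    if( sqlite3_threadsafe() ){
      printf("threadsafe build: one connection per thread is fine\n");
    }else{
      printf("built with SQLITE_THREADSAFE=0: single thread only\n");
    }
  }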


Re: [sqlite] Segmentation Fault when calling sqlite3_finalize

2012-05-31 Thread Dan Kennedy

On 05/31/2012 10:46 PM, Stephen Wood wrote:

Hello all,
I'm having some memory allocation difficulties while using the SQLite C
API. The goal of the function that is currently giving me issues is just
trying to read the column names from a table.


Tip: Column names are available after sqlite3_prepare_v2() has
returned. You don't need the sqlite3_step() call in your function.
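
So the function can shrink to something like this (a sketch; the caller
supplies the table name):

  #include <sqlite3.h>
  #include <stdio.h>

  /* Print a table's column names without stepping: the names are
  ** available as soon as sqlite3_prepare_v2() returns SQLITE_OK. */
  static int print_column_names(sqlite3 *db, const char *zTab){
    sqlite3_stmt *pStmt = 0;
    char *zSql = sqlite3_mprintf("SELECT * FROM \"%w\"", zTab);
    int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
    sqlite3_free(zSql);
    if( rc==SQLITE_OK ){
      int i, n = sqlite3_column_count(pStmt);
      for(i=0; i<n; i++) printf("%s\n", sqlite3_column_name(pStmt, i));
    }
    sqlite3_finalize(pStmt);
    return rc;
  }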



Re: [sqlite] What does "The use of the amalgamation is recommended for all applications." mean, precisely?

2012-05-24 Thread Dan Kennedy

On 05/24/2012 10:53 PM, Larry Brasfield wrote:

On the support page http://www.sqlite.org/howtocompile.html, it says:

"The use of the amalgamation is recommended for all applications."

Is this a general recommendation, to use the amalgamated source file
as the preferred way of including SQLite functionality in one's
application, rather than using a separately compiled library?

Or should I read this as a recommendation just in case I need to
compile SQLite from source, and need to decide between using the
amalgamation or using the individual source files?


 From the context of the recommendation, one can clearly infer that it
addresses only how SQLite source is presented to the compiler. Whether
the compiler's output is embedded in its own object file, a DLL, or
directly into a calling program is not addressed.



The statement above is accurate.

But at the same time, it is a general recommendation that
you just pull sqlite3.c into your project instead of messing
around with shared libraries and so on.

Why risk ending up with an unexpected (possibly old) version
by linking at runtime just to save users less than 300K of disk
space?





Re: [sqlite] sqlite3_analyzer failing for large files

2012-05-23 Thread Dan Kennedy

On 05/23/2012 05:06 PM, Demper Delic wrote:



When trying to analyze a 3.5 GB database file (which passes PRAGMA
integrity_check) using the Windows analyzer provided in the download
page, it fails immediately with an "Empty or malformed database:
db.sqlite" error message. It works on some smaller databases I've
tested. My guess is that this is caused by a 32-bit signed integer
overflow on line 42 of tool/spaceanal.tcl


Thanks for reporting this. It is as you say.

Please try the new build up now, which uses Tcl 8.5.11 instead of
8.5.9. Make sure you really do get the new one, sha1 should be:

  11294bec98274d6f0bc99c75f537e6c1b94ad71c

Dan.


Re: [sqlite] sqlite3_column_count vs. sqlite3_data_count

2012-05-16 Thread Dan Kennedy

On 05/16/2012 06:04 PM, Stephan Beal wrote:

On Wed, May 16, 2012 at 1:02 PM, Dan Kennedy <danielk1...@gmail.com> wrote:


Not so. It's the number of columns of data currently available.
Which is sometimes the same as sqlite3_column_count() and
sometimes zero.



Can sqlite3_data_count() be used to determine if calling
sqlite3_colum_xxx() is legal, i.e. if sqlite3_step() has succeeded?


Yes.



Re: [sqlite] sqlite3_column_count vs. sqlite3_data_count

2012-05-16 Thread Dan Kennedy

On 05/16/2012 05:57 PM, Petite Abeille wrote:


On May 16, 2012, at 12:49 PM, Baruch Burstein wrote:


sqlite3_column_count


Number of columns.

http://www.sqlite.org/c3ref/column_count.html


sqlite3_data_count


Number of rows.

http://www.sqlite.org/c3ref/data_count.html


Not so. It's the number of columns of data currently available.
Which is sometimes the same as sqlite3_column_count() and
sometimes zero.





Re: [sqlite] sqlite3_column_count vs. sqlite3_data_count

2012-05-16 Thread Dan Kennedy

On 05/16/2012 05:49 PM, Baruch Burstein wrote:

sqlite3_column_count
sqlite3_data_count

What is the difference between the two?


sqlite3_data_count() will return zero if the statement handle
does not currently point to a row of data. For example if it
has just been prepared or reset, or if the last call to sqlite3_step()
returned something other than SQLITE_ROW.
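
Which makes it a convenient guard around the column accessors; a sketch:

  #include <sqlite3.h>
  #include <stdio.h>

  /* sqlite3_column_count() is fixed at prepare time, but
  ** sqlite3_data_count() is non-zero only while the statement points
  ** at a row, so it tells us whether sqlite3_column_xxx() is legal. */
  static void print_first_column(sqlite3_stmt *pStmt){
    if( sqlite3_data_count(pStmt)>0 ){
      printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
    }else{
      printf("(no row available)\n");
    }
  }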




Re: [sqlite] [sqlite-dev] Bad performance with foreign key contraint

2012-05-14 Thread Dan Kennedy

On 05/14/2012 10:39 AM, Jonas Malaco Filho wrote:

I found this on the docs :
If this SELECT returns any rows at all, then SQLite concludes that
deleting the row from the parent table would violate the foreign key
constraint and returns an error. Similar queries may be run if the
content of the parent key is modified *or a new row is inserted into the
parent table*. If these queries cannot use an index, they are forced to
do a linear scan of the entire child table. In a non-trivial database,
this may be prohibitively expensive.

I have to ask: _why would similar queries on the child table be run upon
insertions into the parent table_?


Fair question. Note the "may". They're not always run. Here's an
example:

  CREATE TABLE p(x PRIMARY KEY);
  CREATE TABLE c(y REFERENCES p(x) DEFERRABLE INITIALLY DEFERRED);
  BEGIN;
INSERT INTO c VALUES('abc');
INSERT INTO p VALUES('abc');
  COMMIT;

When the first insert is run, SQLite records the fact that there is
a foreign key violation in the database (since there is no parent
key for 'abc'). When the second insert is run, SQLite checks if it
resolves the existing foreign key violation by checking for rows
in "c" that match the new parent key. i.e. it internally does:

   SELECT count(*) FROM c WHERE y='abc';

However, if you were to do this:

  BEGIN;
INSERT INTO p VALUES('def');
  COMMIT;

Then SQLite would not have to scan table "c" - as it knows that there
are no FK violations in the database and so no reason to check if
any have been resolved.

Quite probably this thread should be on sqlite-users@sqlite.org
instead of this list. Since it is of interest to users, not just
those working on the development of SQLite.

Dan.



Re: [sqlite] Two-line patch to fix memory leak

2012-05-11 Thread Dan Kennedy

On 05/11/2012 11:28 PM, Eric Sink wrote:


In sqlite3Fts3Matchinfo():

Near the end of the function, I added two lines:

if( rc!=SQLITE_OK ){
  sqlite3_result_error_code(pContext, rc);
}else{
  int n = pCsr->nMatchinfo * sizeof(u32);
  sqlite3_result_blob(pContext, pCsr->aMatchinfo, n, SQLITE_TRANSIENT);
  ADDED: sqlite3_free(pCsr->aMatchinfo);
  ADDED: pCsr->aMatchinfo = 0;
}


Thanks for posting this.

Looking at the code, I can't see how the memory leak occurs. Do
you have any idea how to trigger it? Other than this patch, are
you using unmodified 3.7.11 FTS code?

We can't apply the patch as is, even though it is safe (does not
introduce any bugs), because it makes queries that use the 'x'
format specifier with matchinfo() much less efficient.

Dan.


Re: [sqlite] Data race (file race) in pager

2012-05-04 Thread Dan Kennedy

On 05/04/2012 11:21 PM, Paul Thomson wrote:

I am working on a tool that (among other things) can detect data
races, including file access races. I have detected a file race in
SQLite on the database file that appears to be real, although I am not
certain - I have no experience with SQLite. I compiled SQLite with:
#define SQLITE_THREADSAFE 2

I use a simple test case where two threads call opendb on the same
file, write into the same table and then close the database. The file
race appears to occur due to sqlite3PagerReadFileheader (as the
database is opened?) in one thread and pager_write_pagelist in the
other. It looks as though the page that was written was pgno 1. Can
any experts explain whether these two accesses are in fact
synchronised, or if the race is benign?


Is the issue that PagerReadFileheader() may be called to read the
file-header at the same time as the first page of the database
(which contains the file-header being read) is being written by the
second connection?

If so, it's a known issue. Immediately after opening a db file,
SQLite reads the first 100 bytes of it. Since it holds no locks
on the file at this point, some other process may be writing at
the same time. So the data read cannot be trusted.

SQLite knows this. The only field it uses from the file-header
read at this point is the page-size field (as in "PRAGMA page_size",
default 1024 bytes). Later on, when it actually accesses the database,
it obtains a SHARED lock and reads the first page of data from
the file - using the page-size garnered from the unsafe read made
of the file-header. Once the first page of the db is loaded, it takes
another look at the page-size field in the header (part of the first
page). If it turns out that the page-size is not as expected, it
throws away any data in the cache and tries to read the first page
again, this time hopefully with the correct page size.

So the call to sqlite3PagerReadFileheader() is just an optimization
that increases the chances that when the database is first read
SQLite will guess the correct page-size - and not have to load,
discard, and reload the first page of the db.
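
For reference, the page-size field mentioned above is the 2-byte
big-endian value at byte offset 16 of the header. A minimal sketch of
decoding it (the special value 1 is documented to mean 65536):

  unsigned char aHdr[100];        /* first 100 bytes of the db file */
  /* ... read the header into aHdr ... */
  int pgsz = (aHdr[16]<<8) | aHdr[17];
  if( pgsz==1 ) pgsz = 65536;     /* 0x0001 encodes the 64KB page size */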

The proprietary zipvfs extension (implemented as a VFS layer that
intercepts all IO calls) always returns a buffer full of zeroes
when SQLite attempts the initial 100 byte read. And it seems to
work Ok.

The tool you're working on sounds pretty cool. Is it going to
be open-source?


Re: [sqlite] SQLITE Header access

2012-05-03 Thread Dan Kennedy

On 05/04/2012 12:52 AM, Igor Tandetnik wrote:

On 5/3/2012 1:42 PM, msantha wrote:

I am using sqlite in my application only for read access.


But someone else might open and modify the same database. You may know
this doesn't happen, but SQLite doesn't.


The DB gets hit
often by my application and I could see that the header (100 bytes) of the
database is read every time I access the database.


Most likely, checking the schema cookie, to confirm that the database
schema remains unchanged and prepared statements are still valid.


It's checking if the "change-counter", a field updated each time the
database is written, has changed. If the change-counter has changed
since the last time the db was written, all cached pages are discarded
from the cache and SQLite moves on to checking the schema cookie.


can we make it read it only once?


Try starting a transaction at the beginning, and keeping it open the
whole time the application runs.


You could also use "PRAGMA locking_mode=EXCLUSIVE". That way, SQLite
could be as confident as you are that no other process is modifying
the db. And it won't feel the need to inspect the db header each
time you read.
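
A minimal sketch of the second approach (assuming an open handle "db";
the exclusive lock is not actually taken until the db is next accessed):

  sqlite3_exec(db, "PRAGMA locking_mode=EXCLUSIVE;", 0, 0, 0);
  sqlite3_exec(db, "SELECT * FROM sqlite_master LIMIT 1;", 0, 0, 0); /* takes the lock */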



Re: [sqlite] Why are two select statements 2000 times faster than one?

2012-04-13 Thread Dan Kennedy

On 04/14/2012 03:14 AM, Steinar Midtskogen wrote:

Puneet Kishor  writes:


If you want the results in separate columns, you can do something like

SELECT Min(a) minimum, 'none' maximum FROM t UNION ALL
SELECT 'none' minimum, Max(a) maximum FROM t;


Then it does a full scan again.

But Igor's suggestion "SELECT (SELECT min(unix_time) FROM table),
(SELECT max(unix_time) FROM table)" works fine, and means less code.


This:

  http://www.sqlite.org/optoverview.html#minmax

Both the subqueries qualify for the optimization, so the overall
query is fast. With the UNION ALL version, the second column in the
result set disqualifies both sides from using the optimization. So
it is slow.

I think if you were to change the UNION ALL version to the following
it would be just as fast as the sub-selects.

  SELECT Min(a) minimum FROM t
UNION ALL
  SELECT Max(a) maximum FROM t;






Re: [sqlite] Unicode problem when setting PRAGMA journal_mode

2012-04-11 Thread Dan Kennedy

On 04/11/2012 09:50 PM, Nick Shaw wrote:

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: 10 April 2012 19:04
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Unicode problem when setting PRAGMA journal_mode


Realised I made a typo below: should have said "PRAGMA journal_mode = DELETE" 
(though setting it to WAL or OFF causes the same problem).


Are you by any chance having a technical problem with the PRAGMA command itself ?  For 
instance, suppose the command was encoded in UTF-16 and SQLite was expecting ASCII.  Figure 
out what the default value should be (e.g. use the shell tool to do a "PRAGMA 
journal_mode;") then change your normal app to set the mode to that value.  See if it 
gives you the same problem.


I'm not getting the same problem with other PRAGMA commands (quick_check(1) and 
synchronous=NORMAL), so I don't know why journal_mode would be any different.  
If I leave this one PRAGMA out, everything is fine.  If I include it, the DB 
always fails to close.  But only in Unicode.

I'll try your suggestion of setting it to what it currently is (it *should* be 
DELETE) and see what it does.


After sqlite3_close() returns SQLITE_BUSY, you can use
sqlite3_next_stmt() to loop through those statements that
SQLite thinks are unfinalized. Then use sqlite3_sql() to
identify each.

The results might reveal something.
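
A minimal sketch, assuming "db" is the handle that sqlite3_close()
refused to close:

  sqlite3_stmt *p;
  while( (p = sqlite3_next_stmt(db, 0))!=0 ){
    fprintf(stderr, "unfinalized: %s\n", sqlite3_sql(p));
    sqlite3_finalize(p);
  }
  sqlite3_close(db);    /* should return SQLITE_OK now */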


Re: [sqlite] Foreign Key Problems

2012-04-03 Thread Dan Kennedy

On 04/03/2012 11:18 PM, Pete wrote:

Thank you Simon.  I see this is because the version of sqlite3 I have does
not support foreign keys.

I am running OS X 10.6.8 and sqlite3 comes with the OS.  Does anyone know
where I can get a version of sqlite3 for OS X that does support foreign
keys?  It would have to be a compiled binary since I don't have access to
compilers (or the skills to use them).


From here:

  http://www.sqlite.org/download.html

First artifact listed under "Precompiled Binaries For Mac OS X (x86)".


Re: [sqlite] Simultaneous opening of database sometimes fails even with 1 second timeout and only 2 processes

2012-04-03 Thread Dan Kennedy

On 04/03/2012 04:20 PM, Gregory Petrosyan wrote:

On Tue, Apr 3, 2012 at 12:53 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:

As a side note: why does not SQLite automatically retry implicit
transactions after invoking busy handler?


It's a race. That's what SQLITE_BUSY means.

As Jay says, deadlock is not possible for implicit transactions.
SQLite will keep retrying until either your busy-handler returns
zero (if you configured a busy-handler) or the timeout is reached
(if you configured a timeout). It sounds like the latter in this
case.


I don't follow the logic here: if deadlock is not possible with
implicit transactions, SQLite should retry them after specified
timeout, but not return _BUSY, forcing me into implementing the same
retry logic, no?


It continues attempting the operation until the length of time
since the first attempt exceeds the configured timeout. At which
point it returns SQLITE_BUSY.


If you are using WAL mode, check that connections are being closed
properly. If they are not, a *-wal file may be left on the disk even
after all active clients have disconnected. In this case, when the
next client connects it has to read and process the entire *-wal
file. This is called "recovery", and the client holds an exclusive
lock on the db file while it is underway. Any other clients
that attempt to read or write the db see an SQLITE_BUSY. Any
busy-handler (or timeout) will apply here, but if recovery takes a
while then a short timeout could be exceeded.


Am I correct that if e.g. I have 2 processes, A and B, using the same
WAL-enabled database, then following:
1. A creates the database and starts to write data (connection is open
all the time)
2. B tries to connect to the database
can easily result in SQLITE_BUSY returned to either A or B?


No.


Re: [sqlite] Simultaneous opening of database sometimes fails even with 1 second timeout and only 2 processes

2012-04-03 Thread Dan Kennedy

On 04/03/2012 12:48 PM, Gregory Petrosyan wrote:

On Mon, Apr 2, 2012 at 7:51 PM, Jay A. Kreibich  wrote:

On Mon, Apr 02, 2012 at 07:40:58PM +0400, Gregory Petrosyan scratched on the 
wall:

Hello,

I am experiencing a weird problem: sometimes (1 time in 10-100) when
2 processes try to open the same database file (and execute something
like 'create table foo if not exists'), one of them fails with
SQLITE_BUSY, despite a 1 second (or bigger) timeout.

Processes themselves produce almost no DB activity; they merely start
and initialize the database roughly at the same time.
sqlite3_open_v2() is immediately followed by sqlite3_busy_timeout().

Unfortunately I can't write a small program that reproduces this
reliably: my test program never crashes this way (except when the timeout
is really small, say 10ms). Yet, this behaviour is rare but
reproducible, both from Python (apsw) on Windows, Python (apsw) on OS
X and native C API on OS X (in entirely different program).

Can you please point me at what can be the cause of this?


  You may be seeing a deadlock situation.  This is most commonly
  associated with explicit transactions (that are open for a longer
  period of time), but it is possible with implicit transactions.

  If two connections attempt to write to the DB at the same time, it is
  possible for them to deadlock on the file locking.  SQLite recognizes
  this and has one of the connections back off with an SQLITE_BUSY error.
  If this happens in an explicit transaction, the program needs to
  ROLLBACK the current transaction and start over.  In the case of an
  implicit transaction around a statement, you can safely re-run
  the statement.


Thanks a lot for the reply.

Shouldn't it be extremely unlikely for this situation to happen, though?

Can it be diagnosed with more certainty? I feel uncomfortable guessing
(hoping) that the bug exists due to this race/deadlock.

As a side note: why does not SQLite automatically retry implicit
transactions after invoking busy handler?


It's a race. That's what SQLITE_BUSY means.

As Jay says, deadlock is not possible for implicit transactions.
SQLite will keep retrying until either your busy-handler returns
zero (if you configured a busy-handler) or the timeout is reached
(if you configured a timeout). It sounds like the latter in this
case.

If you are using WAL mode, check that connections are being closed
properly. If they are not, a *-wal file may be left on the disk even
after all active clients have disconnected. In this case, when the
next client connects it has to read and process the entire *-wal
file. This is called "recovery", and the client holds an exclusive
lock on the db file while it is underway. Any other clients
that attempt to read or write the db see an SQLITE_BUSY. Any
busy-handler (or timeout) will apply here, but if recovery takes a
while then a short timeout could be exceeded.

Other than that, you probably just want to increase the busy-timeout
some.
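
For example:

  /* Keep retrying for up to 5 seconds before giving up with SQLITE_BUSY. */
  sqlite3_busy_timeout(db, 5000);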













Re: [sqlite] Foreign Key Problems

2012-04-02 Thread Dan Kennedy

On 04/02/2012 07:22 AM, Pete wrote:

I'm running into an issue with foreign keys where no matter what value I
supply for a child key, I get a foreign key mismatch error.  Here are my
test tables.


"foreign key mismatch" indicates a schema problem. Usually a missing
index. See here:

  http://www.sqlite.org/foreignkeys.html#fk_indexes

It's likely you need to create a UNIQUE index on t1.RefColumn.
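
For example, a sketch of the likely fix (the index name is illustrative;
the bare SQL works just as well from the shell):

  sqlite3_exec(db, "CREATE UNIQUE INDEX t1_refcolumn ON t1(RefColumn);", 0, 0, 0);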





CREATE TABLE "t1" ("RefColumn" TEXT ,"Data" TEXT )

CREATE TABLE "t2" ("FKeyColumn" TEXT  REFERENCES "t1"("RefColumn"),"Data"
TEXT )


PRAGMA foreign_keys is set to 1.


Put a couple of entries into t1:


SELECT * FROM t1


RefColumn   Data

--  --

a   aaa

b   bbb


Now insert a row into t2

INSERT INTO t2 VALUES ('a','aaa');

I get a foreign Key mismatch error.  No matter what value I supply for
FKeyColumn, even NULL, I get the same error.

I disabled foreign keys, then the INSERT worked.  Enabled foreign keys
again, it fails again.

And even more concerning "DELETE FROM t2" also produces a foreign key
mismatch error.

What am I doing wrong?






Re: [sqlite] [patch] sqlite-3.7.11: valgrind errors in testsuite

2012-03-31 Thread Dan Kennedy

On 03/31/2012 04:04 PM, Yuriy Kaminskiy wrote:

valgrind ./testfixture test/trigger7.test

Note: line numbers below are off-by-2.

trigger7-2.1... Ok
trigger7-2.2...==11533== Invalid read of size 1

Seems always reproducible.


Thanks for reporting this.

These tests are run with valgrind as part of the release process.
But this error only shows up when you build with SQLITE_OMIT_LOOKASIDE
(or disable the lookaside allocator some other way). I think this is
probably better - perhaps we should disable lookaside when running
valgrind tests during release testing too.


It seems the patch below fixes it (but I'm not sure if it is correct/sufficient;
codepath seems common, why it is only triggered by this test is not clear).


The patch looks correct and safe to me. The only possible problem
is that the modification adds (a tiny amount of) code to one of
the most performance sensitive functions in the library.

I think the reason this hasn't shown up before is that Mem.z is
not valid unless either the MEM_Str or MEM_Blob flag is set on
the memory cell. So accessing it when these flags are both clear
is a bug in sqlite3VdbeMemGrow(). The bug is only triggered when
the 'preserve' argument to sqlite3VdbeMemGrow() is true.

And the only place where sqlite3VdbeMemGrow() is called with
the preserve flag set to true and a memory cell that does not
have either the MEM_Str or MEM_Blob flag set is the code that does
EXPLAIN listings of trigger programs.

This fix changes the EXPLAIN listing code so that it doesn't do
that:

http://www.sqlite.org/src/info/c9342ca581?sbs=0

Dan.


Re: [sqlite] xBestIndex/xFilter Bug(?) when using LIKE "F%"

2012-03-29 Thread Dan Kennedy

On 03/23/2012 03:59 PM, Kurt Keller wrote:

I have a virtual table implementation that implements the
xBestIndex/xFilter functions, and I found the following problem:

This query

   SELECT * FROM vf WHERE field LIKE "F%";

will result in a call to xBestIndex with the following constraint

   field >= ... AND field < ...

when SQLITE calls xFilter, the right hand values are delivered
correctly with

   field>= "F" AND field<  "g"  /* !! */


Thanks for reporting this.

This commit disables the LIKE optimization on virtual tables:

  http://www.sqlite.org/src/info/0bacb879e1

A workaround for earlier versions would be to define 
SQLITE_OMIT_LIKE_OPTIMIZATION when compiling.


Dan.



Re: [sqlite] Possible bug? Primary key not created for CREATE TABLE(id INTEGER primary key);

2012-03-25 Thread Dan Kennedy

On 03/25/2012 02:07 PM, Neo Anderson wrote:


See the following result:

sqlite>  select * from sqlite_master order by type;
type|name|tbl_name|rootpage|sql
index|sqlite_autoindex_t_1|t|4|
index|sqlite_autoindex_t3_1|t3|7|
table|t|t|2|CREATE TABLE t(id int primary key, key, value)
table|t2|t2|5|CREATE TABLE t2(id INTEGER primary key, key, value)
table|t3|t3|6|CREATE TABLE t3(id not null primary key, key, value)
table|t4|t4|8|CREATE TABLE t4(id INTEGER primary key, key, value)

No indexes created for t2 and t4. Or is it just an illusion?


INTEGER primary keys are special:

  http://sqlite.org/lang_createtable.html#rowid


Re: [sqlite] slightly confusing documentation on limits.html page

2012-03-24 Thread Dan Kennedy

On 03/23/2012 01:19 AM, Jeff Robbins wrote:

Item 12 on the limits page (http://www.sqlite.org/limits.html) states
that a maximum SQLite database size is about 140 terabytes.  But item
13 on the same page refers to 14 terabytes as the maximum.

The calculation given in item 12 yields this value:

2147483646 * 65536 = 140,737,488,224,256

I don't see a calculation supporting item 13's maximum of 14
terabytes, so perhaps the two references to "14" in item 13 are a
typo?  Or perhaps they refer to another actual limit?  In either
case, the seeming contradiction is confusing.


I'd say the 14 is a typo.

The point of the statement in item 14 is that an SQLite database
may be at most 2^47 bytes in size. And since each row is at least
one byte, it is not possible to test the theoretical 2^64 row
limit.




Re: [sqlite] Do we need locking with multithread off ?

2012-03-18 Thread Dan Kennedy

On 03/18/2012 01:12 PM, Arnav Aggarwal wrote:

On Sun, Mar 18, 2012 at 11:19 AM, Dan Kennedy<danielk1...@gmail.com>  wrote:


  On 03/18/2012 11:26 AM, Arnav Aggarwal wrote:


On Sat, Mar 17, 2012 at 12:07 PM, Arnav Aggarwal<
arnav.aggarwal.2...@gmail.com>   wrote:




  On Sat, Mar 17, 2012 at 11:51 AM, Roger Binns<rog...@rogerbinns.com> wrote:


On 16/03/12 22:50, Arnav Aggarwal wrote:


I don't have any choice of using a different filesystem..



I'd suggest you write your own VFS then.  You can choose exactly how
locking is done.  It isn't that much work and you can map to the exact
semantics of the filesystem rather than pretending it is unix which it
evidently isn't.


My system behaves quite similar to that of  unix and mostly POSIX

compliant. But, "fcntl" locks are not supported.
Is there any known problem using a "unix-dotfile" vfs ?


  In such a scenario, can I safely delete the lock file and journal file

(if size 0) ?



Depends on why they are size zero.  If your crummy filesystem doesn't
implement barriers correctly then it is quite possible that they shouldn't
be zero length.

(I'm assuming you want your database to survive unexpected power
failures)


Maybe I can leave the journal files as they are. I believe the sqlite code can
take care of them.
But lock directories must be deleted, else the application fails to
start.
Can these be safely deleted at application startup, before opening the
database?



Guys, Sorry for being impatient and posting this again.

Any comments on this issue ?

Just to summarize, I am using "dotfile" locking. Only one instance of the
application will run (it may be multithreaded). The sqlite code is
statically linked with the application. This is the only application that
can access the database file.
On application crash, lockfiles are sometimes not removed, due to which the
application fails to restart.

My question: can I safely remove the lockfile on application startup before
opening the database and before creating any thread?



If, at some point, you are sure that no processes have
the database file open, you can safely delete any old
lock files.

Don't ever delete a journal or wal file though, of course.


Thanks a lot Dan for confirming this. I will remove the code to delete the
0 byte sized journal file.


A zero byte file you could safely delete (provided you can
guarantee that no other database clients exist). Although SQLite
should delete it next time a transaction is committed anyhow
so it may not be worth bothering with.

Deleting any larger journal (that may contain rollback
information from an aborted transaction) may corrupt the db
though.





Re: [sqlite] Do we need locking with multithread off ?

2012-03-17 Thread Dan Kennedy

On 03/18/2012 11:26 AM, Arnav Aggarwal wrote:

On Sat, Mar 17, 2012 at 12:07 PM, Arnav Aggarwal<
arnav.aggarwal.2...@gmail.com>  wrote:




  On Sat, Mar 17, 2012 at 11:51 AM, Roger Binns wrote:



On 16/03/12 22:50, Arnav Aggarwal wrote:

I don't have any choice of using a different filesystem..


I'd suggest you write your own VFS then.  You can choose exactly how
locking is done.  It isn't that much work and you can map to the exact
semantics of the filesystem rather than pretending it is unix which it
evidently isn't.



My system behaves quite similar to that of  unix and mostly POSIX
compliant. But, "fcntl" locks are not supported.
Is there any known problem using a "unix-dotfile" vfs ?



In such a scenario, can I safely delete the lock file and journal file
(if size 0) ?


Depends on why they are size zero.  If your crummy filesystem doesn't
implement barriers correctly then it is quite possible that they shouldn't
be zero length.

(I'm assuming you want your database to survive unexpected power
failures)



Maybe I can leave the journal files as they are. I believe the sqlite code can
take care of them.
But lock directories must be deleted, else the application fails to start.
Can these be safely deleted at application startup, before opening the
database?




Guys, Sorry for being impatient and posting this again.
Any comments on this issue ?

Just to summarize, I am using "dotfile" locking. Only one instance of the
application will run (it may be multithreaded). The sqlite code is
statically linked with the application. This is the only application that
can access the database file.
On application crash, lockfiles are sometimes not removed, due to which the
application fails to restart.

My question: can I safely remove the lockfile on application startup before
opening the database and before creating any thread?


If, at some point, you are sure that no processes have
the database file open, you can safely delete any old
lock files.

Don't ever delete a journal or wal file though of course.


Re: [sqlite] Sqlite compile options for multithreaded application

2012-03-14 Thread Dan Kennedy

On 03/14/2012 06:09 PM, rahul dev wrote:

Guys,

I am using sqlite version 3.7.10. My application is multi-threaded and each 
thread opens a separate connection to the same database file.
I want my database operations to be thread-safe and as concurrent as possible. 
Can you please tell me what sqlite compile flags should I use ?

I am passing THREADSAFE=2. Is that right ?
Or should I use THREADSAFE=1 ? But, in that case even the concurrent reads will 
be serialized. So, THREADSAFE=2 seems to be a better option ?


With THREADSAFE=1, calls made on a single database handle
are serialized. But calls made on separate handles may still
run concurrently. THREADSAFE=2 just allows you to avoid the
overhead imposed by the mutex used to serialize the calls
made on a single database handle.

You can also change the threading model using sqlite3_config()
at runtime.
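
For example, a minimal sketch of selecting multi-thread mode at runtime
(this must happen before the library is initialized):

  sqlite3_config(SQLITE_CONFIG_MULTITHREAD);
  sqlite3_initialize();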



Secondly, I am running the application on a proprietary

filesystem/operating-system that does *NOT* support unix like file
locking semantics. However, flock() is supported on my system. What
option should I pass so that "dotlockLockingStyle" or "unix-flock" may
be used ?




Pass something like "unix-flock" as the fourth argument to
sqlite3_open_v2() when opening the database connection.
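
A minimal sketch (this assumes your build actually contains the
"unix-flock" VFS, so check the return code):

  sqlite3 *db;
  int rc = sqlite3_open_v2("app.db", &db,
      SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, "unix-flock");
  if( rc!=SQLITE_OK ){ /* VFS not available, or open failed */ }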


If I pass 'SQLITE_ENABLE_LOCKING_STYLE', my compilation fails with:

  error: storage size of 'fsInfo' isn't known
  cc1: warnings being treated as errors
  /src/sqlite3.c:30079: error: implicit declaration of function 'fstatfs'

Is this a known problem? What should I do to take care of the above
compilation problem?

SQLITE_ENABLE_LOCKING_STYLE only works on osx.


Re: [sqlite] Backup API handling of corrupt source DB

2012-03-07 Thread Dan Kennedy

On 03/07/2012 06:22 PM, Marcus Grimm wrote:

Dear list,

I'm using the backup api to frequently back up
a running sqlite database.

I'm wondering if the backup API is able to detect a corrupt
database or will it simply also backup a corrupt DB ?


Backup just copies pages. It will not detect corruption.
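
If you want some protection, one option is to run "PRAGMA integrity_check"
on the source handle before starting the backup. A minimal sketch
(assuming <sqlite3.h> and <string.h> are included):

  static int db_is_ok(sqlite3 *db){
    sqlite3_stmt *p;
    int ok = 0;
    if( sqlite3_prepare_v2(db, "PRAGMA integrity_check;", -1, &p, 0)==SQLITE_OK ){
      if( sqlite3_step(p)==SQLITE_ROW ){
        /* the first row is the text "ok" if no problems were found */
        ok = strcmp((const char*)sqlite3_column_text(p, 0), "ok")==0;
      }
      sqlite3_finalize(p);
    }
    return ok;
  }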


Re: [sqlite] Once again about file change counter

2012-03-06 Thread Dan Kennedy

On 03/06/2012 06:45 PM, Max Vlasov wrote:

On Tue, Mar 6, 2012 at 2:37 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:


On 03/06/2012 02:41 PM, Max Vlasov wrote:


 But for correct work this vfs should rely on the
fact that file change counter will stay the same until the final write to
the first sector.




Do you just want some hook called when a transaction is committed
and the file is completely updated? That might be the xSync() on
the db file.



It's an interesting suggestion, thanks. I just thought that using xSync I
could even implement my own db change counter. In any case the question
is: will it be called for rollback?


It can. If a large transaction (one that won't fit in the pager
cache) begins modifying the database before it is committed, then
is rolled back, the database file will be synced after the rollback
is finished. So if you stored a snapshot at that point it would be
a duplicate of the previous snapshot.

If this is a big deal, you probably could filter these states out
by keeping an eye on the change-counter.
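
For reference, the change-counter is the 4-byte big-endian integer at
byte offset 24 of the database header, so a sketch of reading it might be:

  static unsigned int change_counter(const unsigned char *aHdr){
    /* aHdr[] holds the first 100 bytes of the database file */
    return ((unsigned int)aHdr[24]<<24) | (aHdr[25]<<16) | (aHdr[26]<<8) | aHdr[27];
  }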


Re: [sqlite] Once again about file change counter

2012-03-06 Thread Dan Kennedy

On 03/06/2012 02:41 PM, Max Vlasov wrote:

Hi,

there are some ideas for a vfs implementation that would require tracking
the file change counter. As I recall, I asked about a specific api for working
with the file change counter, and it seems there is none. On the other
hand it looks like that's not a problem, since until the db format changes
significantly the location of this value in the file will probably stay unchanged.

One of the recent thought I had was implementing time machine vfs. So on
the developer level he opens the db either in general mode or history
(read-only) mode providing the date (or version) for working with the db as
it was in the past. Technically implementing such vfs should not be a
problem. For any file write (sector write) we save the prior contents of
the location together with offset-size and version information (file change
counter is good for this). The history mode instead of reading the main
data, uses the saved history data and searches for best fit to get
requested sector contents. But for correct work this vfs should rely on the
fact that file change counter will stay the same until the final write to
the first sector.


In rollback mode (not journal_mode=wal), if the locking mode is normal
(not locking_mode=exclusive), the file-change counter will be updated
as each transaction is committed. But it is not the last thing written
to the file.

Do you just want some hook called when a transaction is committed
and the file is completely updated? That might be the xSync() on
the db file.




Re: [sqlite] SELECT COUNT (DISTINCT column_name FROM table)

2012-03-02 Thread Dan Kennedy

On 03/03/2012 10:30 AM, Sreekumar TP wrote:

Could someone throw some light on this issue too?


I can't see from the stack trace why this is crashing.

Does it crash if you run the query from the sqlite shell?

Maybe try building the shell without optimizations, and
then running it under valgrind.

Dan.




Sreekumar
On Mar 2, 2012 10:05 AM, "Sreekumar TP"<sreekumar...@gmail.com>  wrote:


The backtrace
===


Program received signal SIGSEGV, Segmentation fault.
0x2b657288 in sqlite3Parser (yyp=0x2d401e40, yymajor=119, yyminor=...,
pParse=0x2d4035c8) at sqlite3.c:106740
106740  }while( yymajor!=YYNOCODE && yypParser->yyidx>=0 );
#0  0x2b657288 in sqlite3Parser (yyp=0x2d401e40, yymajor=119, yyminor=...,
pParse=0x2d4035c8) at sqlite3.c:106740
 yyminorunion = {
   yyinit = 735636932,
   yy0 = {
 z = 0x2bd8edc4 "FROM dir_table",
 n = 4
   },
   yy4 = 735636932,
   yy90 = {
 a = 735636932,
 b = 0x4
   },
   yy118 = {
 pExpr = 0x2bd8edc4,
 zStart = 0x4,
 zEnd = 0x2b697000 ""
   },
   yy203 = 0x2bd8edc4,
   yy210 = 196 '\304',
   yy215 = {
 value = 735636932,
 mask = 4
   },
   yy259 = 0x2bd8edc4,
   yy292 = {
 pLimit = 0x2bd8edc4,
 pOffset = 0x4
   },
   yy314 = 0x2bd8edc4,
   yy322 = 0x2bd8edc4,
   yy342 = {
 eOperator = {
   z = 0x2bd8edc4 "FROM dir_table",
   n = 4
 },
 not = 728330240
   },
   yy384 = 0x2bd8edc4,
   yy387 = 0x2bd8edc4
 }
 yyact = 21104640
 yyendofinput = 0
 yypParser = 0x2d401e40
#1  0x2b65a210 in sqlite3RunParser (pParse=0x2d4035c8, zSql=0x2bd8eda4
"SELECT COUNT(distinct Alb_Name) FROM dir_table", pzErrMsg=0x2bd8cbf8) at
sqlite3.c:107465
 nErr = 0
 i = 36
 pEngine = 0x2d401e40
 tokenType = 119
 lastTokenParsed = <optimized out>
 enableLookaside = 1 '\001'
 db = 0x6a14b0
 mxSqlLen = 10
#2  0x2b65ac68 in sqlite3Prepare (db=0x6a14b0, zSql=0x2bd8eda4 "SELECT
COUNT(distinct Alb_Name) FROM dir_table", nBytes=-1, saveSqlFlag=0,
pReprepare=0x0, ppStmt=0x2bd8cca8, pzTail=0x0) at sqlite3.c:90212
 pParse = 0x2d4035c8
 zErrMsg = 0x0
 rc = <optimized out>
 i = <optimized out>
#3  0x2b65b468 in sqlite3LockAndPrepare (db=<optimized out>,
zSql=<optimized out>, nBytes=-1, ppStmt=<optimized out>,
pzTail=0x0) at sqlite3.c:90304
 rc = <optimized out>
#4  sqlite3_prepare (db=<optimized out>, zSql=<optimized out>,
nBytes=-1, ppStmt=<optimized out>, pzTail=0x0) at sqlite3.c:24831
 rc = 21104640
#5  0x00466730 in sql_stmt (db=9, stmt=0xffc0) at /localhome/user/a/b/c/d/e/f/g/h/k.c:496
 errmsg = 0x0
 retval = <optimized out>
 __PRETTY_FUNCTION__ = "\000\000\000\000\000\000\000\000"








On Thu, Mar 1, 2012 at 11:42 PM, Dan Kennedy<danielk1...@gmail.com>wrote:


On 03/01/2012 10:54 PM, Sreekumar TP wrote:


version 3.7.7.1
The query works on x86, but fails on MIPS processor!



Are you able to post a stack trace? Ideally generated by
the gdb "where full" command. Thanks.

Dan.





Sreekumar



On Thu, Mar 1, 2012 at 7:00 PM, Dan Kennedy<danielk1...@gmail.com>
  wrote:

  On 03/01/2012 05:48 PM, Sreekumar TP wrote:


  In my system, the statement causes sqlite3parser function to crash.

My compiler is mips, little endian, gcc version is 4.5.2



SQLite version? 3.7.10 seems Ok here.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users<http://sqlite.org:8080/cgi-**bin/mailman/listinfo/sqlite-**users>
<http://sqlite.org:8080/**cgi-bin/mailman/listinfo/**sqlite-users<http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users>




  __**_

sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-**bin/mailman/listinfo/sqlite-**users<http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users>



__**_
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-**bin/mailman/listinfo/sqlite-**users<http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users>





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT COUNT (DISTINCT column_name FROM table)

2012-03-01 Thread Dan Kennedy

On 03/01/2012 10:54 PM, Sreekumar TP wrote:

version 3.7.7.1
The query works on x86, but fails on MIPS processor!


Are you able to post a stack trace? Ideally generated by
the gdb "where full" command. Thanks.

Dan.





Sreekumar



On Thu, Mar 1, 2012 at 7:00 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:


On 03/01/2012 05:48 PM, Sreekumar TP wrote:


In my system, the statement causes sqlite3parser function to crash.
My compiler is mips, little endian, gcc version is 4.5.2



SQLite version? 3.7.10 seems Ok here.

Dan.



Re: [sqlite] SELECT COUNT (DISTINCT column_name FROM table)

2012-03-01 Thread Dan Kennedy

On 03/01/2012 05:48 PM, Sreekumar TP wrote:

In my system, the statement causes sqlite3parser function to crash.
My compiler is mips, little endian, gcc version is 4.5.2


SQLite version? 3.7.10 seems Ok here.

Dan.


Re: [sqlite] Is it possible to reduce the number of times sqlite3BTreeMoveToUnpacked is called with an SQLITE C++ application?

2012-02-28 Thread Dan Kennedy

On 02/28/2012 04:29 PM, Frank Chang wrote:


Good morning. We have a SQLITE C++ application which tries to find the 
intersection between the blobs in separate sqlite database tables (which we call 
subgraphs), containing record numbers. When we profile the code below we find 
that the top profiler user is sqlite3BTreeMoveToUnpacked. I have attached the 
profiler outputs which we obtained using SQLITE 3.7.10.
 We were wondering if it is possible to reduce the number of times 
sqlite3BTreeMoveToUnpacked is called with an SQLITE C++ application? Thank you.


MoveToUnpacked() is the routine that seeks for a specified
rowid within a b-tree structure. In the code below it will
be called once for each call to sqlite3_blob_open() or
sqlite3_blob_reopen().

Are you working through the blobs in a table in rowid order?
If so, you would be better off with a SELECT statement that
iterates through multiple rows. Otherwise, I think you're stuck
with the seeks, and the overhead they impose.
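
If rowid order works for you, a minimal sketch (assuming a handle "db";
the table and column names are taken from the code below):

  sqlite3_stmt *p;
  sqlite3_prepare_v2(db,
      "SELECT rowid, Rows FROM AggregatedData ORDER BY rowid", -1, &p, 0);
  while( sqlite3_step(p)==SQLITE_ROW ){
    const unsigned long *a = (const unsigned long *)sqlite3_column_blob(p, 1);
    int n = sqlite3_column_bytes(p, 1) / sizeof(unsigned long);
    /* ... process the n record numbers in a[] ... */
  }
  sqlite3_finalize(p);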

Dan.






void cIntersectingDedupe::GetSubGraphBlob(sSUBGRAPHINFO *SubGraph_, unsigned long *SubGraphBlob_,
                                          int *Size_) {
    int Size;

    // Grab a BLOB and put it into an unsigned long buffer. As the BLOB contains
    // record numbers, we will never see a value of zero. Thus, we use 0 to mark
    // the end of the array.
    //
    // Note that we can have BLOBs of size 0, though. If we used DistillSubGraph to
    // dedupe the subgraph, some records will be consolidated into others. The
    // donor record's BLOB gets zapped because all of its BLOB was rolled into the
    // donee (All your BLOB are belong to us!)

    // First time, open the BLOB for real, else we can re-open (faster):
    if (SubGraph_->hBlob==0)
        sqlite3_blob_open(SubGraph_->Database, "main", "AggregatedData", "Rows",
                          SubGraph_->IteratorPos+1, 0, &SubGraph_->hBlob);
    else
        sqlite3_blob_reopen(SubGraph_->hBlob, SubGraph_->IteratorPos+1);
    Size = sqlite3_blob_bytes(SubGraph_->hBlob)/sizeof(unsigned long);
    sqlite3_blob_read(SubGraph_->hBlob, SubGraphBlob_, Size*sizeof(unsigned long), 0);
    SubGraphBlob_[Size] = 0;
    if (Size_!=0)
        *Size_ = Size;
}

void cIntersectingDedupe::IntersectBlobs(sSUBGRAPHINFO *SubGraph_, unsigned long *IntersectionBlob_,
                                         unsigned long *SubGraphBlob_) {
    int Pos1, Pos2, PosOut;
    GetSubGraphBlob(SubGraph_, SubGraphBlob_);
    // Perform the intersection. We walk through the two blobs; if the blobs
    // contain the same value, that value is copied to PosOut in
    // IntersectionBlob_, else the blob that is 'behind' is incremented so it
    // can 'catch up' to the other:
    Pos1 = Pos2 = PosOut = 0;
    while (IntersectionBlob_[Pos1]!=0 && SubGraphBlob_[Pos2]!=0) {
        if (IntersectionBlob_[Pos1]==SubGraphBlob_[Pos2]) {
            IntersectionBlob_[PosOut++] = IntersectionBlob_[Pos1++];
            Pos2++;
        } else if (IntersectionBlob_[Pos1] < SubGraphBlob_[Pos2]) {
            Pos1++;
        } else {
            Pos2++;
        }
    }
    // Terminate the (possibly shorter) result array:
    IntersectionBlob_[PosOut] = 0;
}

Re: [sqlite] FTS simple tokenizer

2012-02-28 Thread Dan Kennedy

On 02/28/2012 12:09 AM, Jos Groot Lipman wrote:

It was reported before (and not solved)
http://www.mail-archive.com/sqlite-users@sqlite.org/msg55959.html


The document sources are updated now. So the fix will appear on
the website next time it is regenerated.


Re: [sqlite] FTS simple tokenizer

2012-02-26 Thread Dan Kennedy

On 02/27/2012 05:59 AM, Hamish Allan wrote:

The docs for the simple tokenizer
(http://www.sqlite.org/fts3.html#tokenizer) say:

"A term is a contiguous sequence of eligible characters, where
eligible characters are all alphanumeric characters, the "_"
character, and all characters with UTF codepoints greater than or
equal to 128."

If I do:

CREATE VIRTUAL TABLE test USING fts3();
INSERT INTO test (content) VALUES ('hello_world');

SELECT * FROM test WHERE content MATCH 'orld';
SELECT * FROM test WHERE content MATCH 'world';

I get no match for the first query, because it doesn't match a term,
but I get a match for the second, whereas according to my reading of
the docs "world" shouldn't be a term because the underscore character
shouldn't be considered a term break.

Can anyone please help me understand this behaviour?


Documentation bug. Eligible characters are just alphanumerics and
UTF codepoints greater than 128.

Dan.


Re: [sqlite] SQLite3 FTS4 matchinfo() problem

2012-02-24 Thread Dan Kennedy

On 02/23/2012 10:22 PM, Felix Timm wrote:

Dear Sir or Madam,

I am using the FTS4 module of SQLite3 and have a problem with the Example of 
the matchinfo() auxiliary function.

At the moment I am just trying to run the matchinfo() example from 
http://www.sqlite.org/fts3.html#matchinfo and it does not work.

I tried it on Windows 7 with SQLite 3.5.9 (precompiled version), on Ubuntu 
Linux OS 10.04.4 with SQLite 3.6.22 and on Mac OS 10.6.8 with SQLite 3.7.10 ( 
both manually compiled version).
In all cases the FTS4 module is enabled and everything from the Documentation 
does work (the offsets and snippet function, full-text-search queries, etc.).
On Ubuntu and Mac OS I do not get any result when using the matchinfo() 
example. On Windows the Command Line gives me strange characters.
I tried it in the following ways: using a PHP script, using NaviCat for 
SQLite3, and on the command line itself. In every case it is the same.


I didn't find anything regarding this problem on the web. Consequently I do 
not have any clue how to fix it.
Maybe it is because PHP has problems in dealing with BLOBs?


Probably so. The matchinfo() function returns a blob that should
be interpreted as an array of 32-bit integers in native byte
order. If that goes straight to a terminal the results will be
unpredictable.

Try "SELECT quote(matchinfo(...)) FROM ..." on the command line
to see the contents of the blob in hex format.

Dan.



Re: [sqlite] xFilter or xBestIndex needs to know which columns were selected

2012-02-24 Thread Dan Kennedy

On 02/24/2012 02:02 PM, Steinar Midtskogen wrote:

Hello

Is it possible to find out in xFilter or xBestIndex which columns were
selected?  That is, if I do "SELECT a, b, c FROM t" where t is a
virtual table, I would like to know in xFilter or xBestIndex that the
result will only consist of the rows a, b and c.


No way that I know of to do that at present. I've wanted to do similar
things in the past too.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] synchronization issue: no directory sync after unlink

2012-02-23 Thread Dan Kennedy

On 02/23/2012 06:02 PM, Florent Bayendrian wrote:

Hi,

I have a synchronization issue on an embedded device running Linux : if a
power shutdown is done just after a commit, sqlite will restore the
database to the previous state using the journal file. At the end of a
transaction the unlink of the journal file is not enough, the journal could
physically persist on the file system several seconds after the call to
unlink and only a sync on the directory could guarantee that the journal
file has been physically deleted. You can strace sqlite3 tool to see that
the unlink of the journal file is not followed by any sync.

The fix is simple, sqlite3OsDelete should be called with the last parameter
(dirSync) set to 1. This fix is necessary to be compliant with the last
property of an ACID database.


You're correct. The way things are now, if you commit a transaction
and then the power fails very quickly afterwards, following a reboot
you may find that your transaction has been rolled back.

And the way to fix it would be to sync the directory following the
unlink() as part of the commit.

The downside is, of course, that that extra sync will slow down all
transactions committed using "PRAGMA journal_mode=DELETE". With smallish
transactions, it would be reasonable to assume that the overhead might
be somewhere around 20-30%.

Since there is no risk of database corruption, only transaction
rollback following an unlucky power failure, we figure that the extra
durability is not worth the performance cost.

Easiest workaround would be to use either journal_mode=PERSIST or
journal_mode=WAL. Or to create a VFS shim that makes sure the syncDir
flag is set on all calls to xDelete.
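
A minimal sketch of such a shim (error handling omitted; the VFS name
is illustrative):

  static sqlite3_vfs *pOrig;                 /* the real default VFS */
  static sqlite3_vfs shim;

  static int shimDelete(sqlite3_vfs *pNotUsed, const char *zPath, int dirSync){
    return pOrig->xDelete(pOrig, zPath, 1);  /* always sync the directory */
  }

  int install_syncdel_vfs(void){
    pOrig = sqlite3_vfs_find(0);
    shim = *pOrig;                           /* inherit all other methods */
    shim.zName = "syncdel";
    shim.xDelete = shimDelete;
    return sqlite3_vfs_register(&shim, 1);   /* make it the default */
  }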

Dan.


Re: [sqlite] Problem with LIKE and greek characters

2012-02-22 Thread Dan Kennedy

On 02/22/2012 03:53 PM, Robert Gdula wrote:

Hi,

I have a problem with Greek characters when I'm using SQLite: LIKE is not
working, though for English characters it is ok. The problem is only with
LIKE; when I use WHERE (equality) with Greek characters it works ok. How can
I resolve this problem?


SQLite's built-in LIKE operator only understands upper and lower case
equivalence for the 26 letters used in English.

If you need it to understand the upper/lower case relationships between
any other characters, either use the ICU extension or create your own
implementation of LIKE:

  http://www.sqlite.org/lang_corefunc.html#like
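
To give the flavour of the second option, here is a byte-oriented sketch
of overriding the 2-argument like() function. It is a sketch only:
my_char_eq() is a hypothetical helper that would have to decode UTF-8 and
compare characters using whatever case-folding rules (e.g. for Greek) the
application needs; the matcher below only compares single bytes:

  #include <sqlite3.h>

  /* Hypothetical application-supplied case-insensitive comparison. */
  extern int my_char_eq(unsigned char a, unsigned char b);

  /* Return true if zStr matches the LIKE pattern zPat. '%' matches any
  ** sequence of bytes, '_' matches any single byte. */
  static int like_match(const unsigned char *zPat, const unsigned char *zStr){
    while( *zPat ){
      if( *zPat=='%' ){
        while( *zPat=='%' ) zPat++;
        if( *zPat==0 ) return 1;
        while( *zStr ){
          if( like_match(zPat, zStr) ) return 1;
          zStr++;
        }
        return 0;
      }
      if( *zStr==0 ) return 0;
      if( *zPat!='_' && !my_char_eq(*zPat, *zStr) ) return 0;
      zPat++; zStr++;
    }
    return *zStr==0;
  }

  /* SQLite evaluates "X LIKE Y" as like(Y, X): argv[0] is the pattern. */
  static void like_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    const unsigned char *zPat = sqlite3_value_text(argv[0]);
    const unsigned char *zStr = sqlite3_value_text(argv[1]);
    if( zPat && zStr ) sqlite3_result_int(ctx, like_match(zPat, zStr));
  }

  /* Registering a 2-argument "like" replaces the built-in version: */
  int install_like_override(sqlite3 *db){
    return sqlite3_create_function(db, "like", 2, SQLITE_UTF8, 0,
                                   like_func, 0, 0);
  }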

Dan.

