Re: [sqlite] Permissions issue with SQLite

2011-09-22 Thread Dan Kennedy

On 09/22/2011 09:20 PM, Stephan Beal wrote:

On Thu, Sep 22, 2011 at 4:16 PM, Magnus Thor Torfason<
zulutime@gmail.com>  wrote:


SQLite version 3.3.6



Just to preempt the inevitable request to try it on a current version: this
is reproducible on 3.7.2 (Ubuntu 10.10).



For new versions, new db files are created with the permissions
specified by compilation option SQLITE_DEFAULT_FILE_PERMISSIONS.
Subject to umask of course.

  http://www.sqlite.org/compile.html#default_file_permissions
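A quick way to see the effect described above, as a minimal sketch (the file
name is a placeholder, and the compile-time value is only an example such as
-DSQLITE_DEFAULT_FILE_PERMISSIONS=0640):

  /* Sketch: create a database file and print the permission bits it ends
  ** up with, i.e. SQLITE_DEFAULT_FILE_PERMISSIONS masked by the umask. */
  #include <stdio.h>
  #include <sys/stat.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    struct stat st;
    if( sqlite3_open("perm_test.db", &db)!=SQLITE_OK ) return 1;
    /* Force the database file to actually be created on disk. */
    sqlite3_exec(db, "CREATE TABLE t(x)", 0, 0, 0);
    sqlite3_close(db);
    if( stat("perm_test.db", &st)==0 ){
      printf("file mode: %o\n", (unsigned)(st.st_mode & 0777));
    }
    return 0;
  }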


Re: [sqlite] SQLITE issues - IPAD development

2011-09-21 Thread Dan Kennedy

On 09/22/2011 01:30 AM, ecky wrote:


Howdy

I have a SQLITE database I created using the firefox plugin (SQLITE
manager).   I'm using that database in my IPAD app and I can read and write
to it.

However... I'm trying to debug some issue with my app so I copy the database
file off my IPAD back to my MAC and try to read the database in the firefox
plugin.   I get no errors but the database has no tables in it :-(


This often means that, for whatever reason, the user is not
opening the database file that they think they are opening.




Maybe a compatibility issue between SQLITE on IOS and Firefox?
Thanks in advance
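
One way to rule out the "wrong file" explanation above is to ask the
connection itself which file it has open; a minimal sketch, with the file
name as a placeholder:

  /* Sketch: print the attached database names and their file paths. */
  #include <stdio.h>
  #include <sqlite3.h>

  static int cb(void *pUnused, int nCol, char **azVal, char **azCol){
    /* PRAGMA database_list columns are: seq, name, file */
    (void)pUnused; (void)nCol; (void)azCol;
    printf("db '%s' -> file '%s'\n", azVal[1], azVal[2] ? azVal[2] : "(temp)");
    return 0;
  }

  int main(void){
    sqlite3 *db;
    if( sqlite3_open("your.db", &db)!=SQLITE_OK ) return 1;
    sqlite3_exec(db, "PRAGMA database_list", cb, 0, 0);
    sqlite3_close(db);
    return 0;
  }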




Re: [sqlite] Disk I/O Error on Ext3/write-back/barrier FS.

2011-09-20 Thread Dan Kennedy

On 09/20/2011 08:58 PM, Berthier, Emmanuel wrote:

From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan Kennedy

Can you post the hex of the first 28 bytes of the journal
file?


000 d5d9 f905 a120 d763  0400 a15c eae6
010  6300  0002  0004  
020        


Journal file should be, as you say, 4640 bytes in size. 3608 bytes
is the right size for a journal file that contains 3 pages, not 4.

Are you using synchronous=NORMAL or synchronous=FULL?

What does sqlite3_vfs_find(0)->xDeviceCharacteristics() return?
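
For reference, the sizes quoted above follow from the rollback-journal
layout: a header padded to the sector size (512 bytes here) plus, per page,
a 4-byte page number, the page image and a 4-byte checksum. A rough check,
assuming a 1024-byte page size:

  /* Sketch of the arithmetic only; treat it as an estimate. */
  #include <stdio.h>

  int main(void){
    int sector  = 512;             /* journal header padded to sector size */
    int pgsz    = 1024;            /* database page size in this thread    */
    int perPage = 4 + pgsz + 4;    /* page number + page image + checksum  */
    printf("3 pages: %d bytes\n", sector + 3*perPage);   /* 3608 */
    printf("4 pages: %d bytes\n", sector + 4*perPage);   /* 4640 */
    return 0;
  }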


Re: [sqlite] Disk I/O Error on Ext3/write-back/barrier FS.

2011-09-20 Thread Dan Kennedy

On 09/20/2011 03:43 PM, Berthier, Emmanuel wrote:

Hi,

I'm using Sqlite 3.6.22 Froyo Branch on Android 2.3.4 distribution on a handset 
with eMMC storage memory.
In case of power-cut (battery removal or kernel panic), we encounter some rare 
but critical errors during database rollback sequence: journal file is 
corrupted:
# sqlite3 contacts2.db
sqlite>  PRAGMA integrity_check;
PRAGMA integrity_check;
Error: disk I/O error

After analyzing the journal header, it appears that:
- the header contains the right signature
- the page number is greater than what the file can contain. (ex: page nb=4 so 
journal size should be = 512+4x1024=4608 B but is currently 3608 B).


Can you post the hex of the first 28 bytes of the journal
file?

If you move the journal file out of the way, does the
integrity-check succeed?

Dan.


Re: [sqlite] OMIT_VIRTUALTABLE

2011-09-14 Thread Dan Kennedy

On 09/14/2011 03:38 PM, Baruch Burstein wrote:

When using OMIT_VIRTUALTABLE, I get a warning at line 11688 (3.7.7.1
amalgamation) that sqlite3GetVTable is used but never defined.
If I add

#ifndef SQLITE_OMIT_VIRTUALTABLE

#endif

around it, I get an error at line 81609, because this function is actually
used there!

a) Is this a bug, and when will it be fixed?
b) Can someone suggest a quick workaround?


You need to build from the full sources, not the amalgamation
package, to use OMIT_VIRTUALTABLE. See the first "Important Note"
here:

  http://www.sqlite.org/compile.html#omitfeatures


Re: [sqlite] Strange performance problem

2011-09-05 Thread Dan Kennedy

On 09/06/2011 12:04 AM, Stephan Beal wrote:

On Mon, Sep 5, 2011 at 6:59 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:


You could get the same effect by wrapping your loop in a BEGIN/COMMIT
block.



Out of curiosity: would a BEGIN/ROLLBACK be equivalent for this case (where
only SELECTs are used)?


It would be. A ROLLBACK might use a trivial amount more CPU cycles.
Nothing you would notice though.





Re: [sqlite] Strange performance problem

2011-09-05 Thread Dan Kennedy

On 09/05/2011 10:47 PM, Rado Rado wrote:

I'm running a simple prepared SELECT statement in a loop (about 3000 times).
It is something like "SELECT value FROM t WHERE t_id=? AND name=?". For most
calls the row does not exist, step() returns SQLITE_DONE so I call reset()
after that. The loop takes about 0.25 seconds and the result seems to be
correct.

When I execute any SELECT query (using a different table, like SELECT * FROM
t2) which returns some row and don't call reset() so it stays open, and then
execute the loop described above, it is much faster (0.08 sec.).

Is it because of some lock obtained for my process by the open statement? Or
am I doing something wrong?


It will be the overhead of obtaining a read lock. In the first case,
you are obtaining and releasing a database read lock (a system call)
3000 times. In the second case, you are only doing it once.

You could get the same effect by wrapping your loop in a BEGIN/COMMIT
block.
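
A minimal sketch of that pattern against the table described above (error
handling omitted, file name and bound values are placeholders):

  /* Sketch: run the whole loop of prepared SELECTs inside one explicit
  ** transaction so the database read lock is obtained only once. */
  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    int i;
    if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);          /* take the read lock once */
    sqlite3_prepare_v2(db,
        "SELECT value FROM t WHERE t_id=? AND name=?", -1, &pStmt, 0);
    for(i=0; i<3000; i++){
      sqlite3_bind_int(pStmt, 1, i);
      sqlite3_bind_text(pStmt, 2, "name", -1, SQLITE_STATIC);
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("%s\n", (const char *)sqlite3_column_text(pStmt, 0));
      }
      sqlite3_reset(pStmt);
    }
    sqlite3_finalize(pStmt);
    sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* ROLLBACK would be equivalent here */
    sqlite3_close(db);
    return 0;
  }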


Re: [sqlite] Speeding up index creation

2011-09-02 Thread Dan Kennedy

On 09/02/2011 07:32 PM, Filip Navara wrote:

On Fri, Sep 2, 2011 at 11:04 AM, Filip Navara<filip.nav...@gmail.com>  wrote:
*snip*

The time to create an index on my 266 Mb experimental database is more
than 9 minutes.

*snip*

I erroneously measured the time with DEBUG build, so I've rerun the
experiment with several SQLite versions:

[2869ed2829] Leaf: Avoid using uninitialized variables after failures
in the merge sort code. (user: drh, tags: trunk)
CPU Time: user 107.359888 sys 135.050066

[7769fb988d] Instead of a temporary b-tree, use a linked-list and
merge-sort to sort records in main memory in vdbesort.c. (user: dan,
tags: merge-sort)
CPU Time: user 118.451559 sys 132.117247

[71075673c6] Leaf: If all data being sorted fits in memory, avoid
writing any data out to temporary files in vdbesort.c. (user: dan,
tags: merge-sort)
CPU Time: user 116.813549 sys 132.710051


Thanks for doing this. There is (hopefully) a fix for the performance
regression up now.

Dan.


Re: [sqlite] Speeding up index creation

2011-09-02 Thread Dan Kennedy

On 09/02/2011 09:24 PM, Simon Slavin wrote:


On 2 Sep 2011, at 10:04am, Filip Navara wrote:


The time to create an index on my 266 Mb experimental database is more
than 9 minutes. The database is available at
http://www.emclient.com/temp/mail_index.zip and the command I use to
create the index is

  CREATE INDEX "idx_MailAddresses_address" ON "MailAddresses" ("type",
"address", "parentId");

I had run the shell under profiler


Strangely, on my Mac running the shell tool provided with OS X 10.7.1, SQLite 
3.7.5, there seems to be a problem.  It's still going after more than 2 hours.

Loading your database and running your CREATE INDEX command, the application 
only seems to be using about 1% of one of my CPUs.  I looked to see if it was 
i/o bound instead of CPU bound but it seems only to be reading 320KB/s and my 
computer can handle a lot more than that.  (All above figures from Activity 
Monitor.)


We were just wondering a half hour ago how long this would
take with 3.7.7. Thanks!

Released versions of SQLite build an index by inserting
all values from the indexed column(s) in whatever order
they appear in the table (i.e. unsorted order) into a new
b-tree. This is fine if the index b-tree you are constructing
fits in the cache.

If it doesn't fit in the cache you have a problem. Each
time you go to insert a new entry into the b-tree you have
to find the leaf page that the new entry will be added to.
Since your b-tree doesn't fit in the cache, odds are that
this means reading the page from the file-system. And since
you are inserting in arbitrary order, the page could be
anywhere in the database (or WAL) file. In the worst case,
if your page is not cached in OS memory, you may even have
to shift the disk arm to get at it. Way slow.

The result is that reading data from disk becomes the
bottleneck when writing unsorted values to a b-tree. Hence
your 1% CPU measurement.

The new version uses a merge-sort to sort all the index
entries before it inserts them into the b-tree. This way
it doesn't matter if your b-tree is larger than the cache,
as you are always inserting into the right-most leaf node.
No need to go searching through the file-system/disk for
pages while building the b-tree.

Dan.
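
A corollary of the explanation above, for the released versions: if the
connection's page cache is made large enough to hold the index b-tree being
built, the random re-reads go away. A hedged sketch only (the cache value is
illustrative and would need tuning for the machine):

  /* Sketch: enlarge the page cache before building the index so the new
  ** index b-tree is more likely to stay in memory while it is built. */
  #include <sqlite3.h>

  static int build_index(sqlite3 *db){
    int rc;
    /* ~200000 pages * 1KB page size is roughly 200MB of cache. */
    rc = sqlite3_exec(db, "PRAGMA cache_size=200000", 0, 0, 0);
    if( rc!=SQLITE_OK ) return rc;
    return sqlite3_exec(db,
        "CREATE INDEX idx_MailAddresses_address "
        "ON MailAddresses(type, address, parentId)", 0, 0, 0);
  }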







Re: [sqlite] Clarification about Triggers

2011-08-31 Thread Dan Kennedy

On 08/31/2011 06:34 PM, Black, Michael (IS) wrote:

D'oh... yes, "we" missed that.  But shouldn't new.rowid be undefined then rather than return -1?
Much like old.rowid is undefined?  That might have helped "us" in recognizing "our"
mistake.


Fair enough. Sounded pompous. I say "we" because I only realized
what was happening after trying to debug the script as if it were
an SQLite bug in AFTER triggers.


The docs say
The value of NEW.rowid is undefined in a BEFORE INSERT trigger in which the 
rowid is not explicitly set to an integer.
http://www.sqlite.org/lang_createtrigger.html

And...shouldn't "after" or "before" or "instead" be mandatory?  The docs don't 
declare a default condition either.


I think it's an SQL thing. BEFORE is the default. SQLite
docs don't say that though.


Re: [sqlite] Clarification about Triggers

2011-08-31 Thread Dan Kennedy

On 08/30/2011 10:48 PM, Black, Michael (IS) wrote:

I found that if you used the default rowid it always gave -1 for the value.
That's why I put in its own key.

I don't understand why this doesn't work...perhaps somebody can point out the 
error here...new.rowid contains -1.  I would think that rowid ought to be 
available after the insert.
This is using 3.7.4

create table temp_01(val float);
create table temp_02(val float);
create table total(val float);
create table row(row2 integer);
create trigger after insert on temp_01
begin
insert into total values((select new.val+temp_02.val from temp_02 where 
temp_02.rowid=new.rowid));
insert into row values(new.rowid);
end;


The error is that we are mistaking the above for an AFTER
trigger. It is not. It is a BEFORE trigger named "after".
And the value of new.rowid is not defined in a BEFORE
trigger.

Rewrite as:

  CREATE TRIGGER my_new_trigger AFTER INSERT ...

and it should work.

Dan.
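
Spelled out against the script quoted above, a sketch of that rewrite (the
trigger name is arbitrary):

  /* Sketch: the same trigger body, but explicitly AFTER INSERT, so that
  ** new.rowid is defined when the trigger program runs. */
  #include <sqlite3.h>

  static int create_after_trigger(sqlite3 *db){
    return sqlite3_exec(db,
        "CREATE TRIGGER temp_01_ai AFTER INSERT ON temp_01 "
        "BEGIN "
        "  INSERT INTO total VALUES("
        "    (SELECT new.val+temp_02.val FROM temp_02 "
        "     WHERE temp_02.rowid=new.rowid));"
        "  INSERT INTO row VALUES(new.rowid);"
        "END", 0, 0, 0);
  }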


Re: [sqlite] Strange foreign key constraint failed with DROP TABLE

2011-08-22 Thread Dan Kennedy
On 08/20/2011 12:50 AM, Duquette, William H (318K) wrote:
>
> On 8/19/11 10:44 AM, "Boris Kolpackov"<bo...@codesynthesis.com>  wrote:
>
>> Hi William,
>>
>> "Duquette, William H (318K)"<william.h.duque...@jpl.nasa.gov>  writes:
>>
>>> On 8/19/11 10:18 AM, "Boris Kolpackov"<bo...@codesynthesis.com>  wrote:
>>>
>>> BEGIN TRANSACTION;
>>> DROP TABLE employer;
>>> DROP TABLE employee;
>>> COMMIT;
>>>
>>> According to the sqlite docs, dropping a table when FK constraints are
>>> enabled does an implicit "DELETE FROM" first.
>>> so how can there still be FK constraint violations?
>>> Am I missing something?
>> If so, then that would be the two of us. Though I think this is a bug
>> in SQLite.
>
> I think it might be.

It is. Now fixed in the trunk.

   http://www.sqlite.org/src/info/b1d3a2e531

Dan.




Re: [sqlite] fts3 table name cannot be the same as column name

2011-08-18 Thread Dan Kennedy
On 08/17/2011 10:10 PM, George Brink wrote:
> I think this is a bug... If you are trying to create an FTS3/FTS4 table, make
> sure that the column name in the FTS table is not the same as the table name.
>
> C:\Projects>sqlite3 aaa
> SQLite version 3.7.7.1 2011-06-28 17:39:05
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite>  create virtual table abc using fts3(abc text);
> Error: vtable constructor failed: abc
> sqlite>  create virtual table abc using fts3(abcd text);
> sqlite>  .exit

It's because of the "hidden column". See the first paragraph
under the second example block here:

   http://www.sqlite.org/fts3.html#section_1_4

Docs should be updated to say that you can't create a column
with the same name as the table.

Dan.


Re: [sqlite] tcl incrblob interface

2011-08-11 Thread Dan Kennedy
On 08/10/2011 11:24 PM, Victor Mayevski wrote:
> Hello,
>
> I am trying to learn how to use the "incrblob" command in the Tcl
> interface and I can't get it to work. I create a one column table "t",
> insert one empty value into it, than do "db incrblob t a 1", which
> works fine, I get a file pointer back "incrblob_1".
> Then I do "puts incrblob_1 "hello world"", which also works. However,
> "chan close incrblob_1" fails with "Error: invalid argument" and doing
> "select * from t" shows no new data inserted.
> Any ideas what I am doing wrong?

Note that the incrblob interface cannot grow the size of a blob.
Only modify its contents. Is the initial "empty value" large
enough to hold "hello world\n"?
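
For comparison, the C-level shape of the same operation, as a sketch: reserve
the space with zeroblob() first, because the incremental-blob API can only
overwrite bytes that are already there.

  /* Sketch: make the blob big enough up front, then overwrite it in place. */
  #include <string.h>
  #include <sqlite3.h>

  static int write_blob(sqlite3 *db){
    sqlite3_blob *pBlob;
    const char *zData = "hello world\n";
    int rc;

    rc = sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS t(a);"
        "INSERT INTO t(a) VALUES(zeroblob(12));", 0, 0, 0);
    if( rc!=SQLITE_OK ) return rc;

    rc = sqlite3_blob_open(db, "main", "t", "a",
                           sqlite3_last_insert_rowid(db), 1, &pBlob);
    if( rc!=SQLITE_OK ) return rc;
    rc = sqlite3_blob_write(pBlob, zData, (int)strlen(zData), 0);
    sqlite3_blob_close(pBlob);
    return rc;
  }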


Re: [sqlite] Possible bug in FTS3 or Parametrization?

2011-08-05 Thread Dan Kennedy
On 08/05/2011 05:44 PM, Martin Gill wrote:
> On 5 August 2011 10:55, Dan Kennedy<danielk1...@gmail.com>  wrote:
>>
>> On 08/05/2011 02:58 PM, Martin Gill wrote:
>>>
>>> CREATE VIRTUAL TABLE full_text USING FTS3 (id,text)
>>> INSERT INTO full_text VALUES ('1', 'I have a foréign character.')
>>
>> SQLite does not recognize upper/lower-case mappings for non-ASCII
>> characters. That is why querying for "HAVE" works but "FORÉIGN"
>> does not. If you need upper/lower case mapping for non-ASCII characters,
>> you will need to use the ICU extension.
>>
>
> That may be part of the problem. System.Data.SQLite isn't compiled with ICU,
> so I cannot check that quickly. I do though get exactly the same odd
> behaviour using the porter tokenizer.
>
>>> SELECT id, text FROM full_text WHERE NOT(text = @p1) AND text MATCH @p2
>>>
>>> p1 = "foréign"
>>> p2 = "FORÉIGN"
>>>
>>> Not using parametrized SQL also returns a correct result.
>>
>> I don't see how either the query above or a version that uses SQL
>> literals could be returning any rows. Not with the "CREATE VIRTUAL
>> TABLE" as it stands above. Unless you specify the ICU tokenizer (or
>> some other tokenizer that understands non-ASCII upper/lower case),
>> "FORÉIGN" should not match the row in table "full_text".
>>

>  SQLiteParameter param1 = command.CreateParameter();
>  param1.Value = "FORÉIGN";
>  param1.ParameterName = "p1";
>  param1.DbType = System.Data.DbType.String;
>  command.Parameters.Add(param1);
>
>  SQLiteParameter param2 = command.CreateParameter();
>  param2.Value = "foréign";
>  param2.ParameterName = "p2";
>  param2.DbType = System.Data.DbType.String;
>  command.Parameters.Add(param2);

p2 is lower-case here, so it matches. If the upper-case value were
bound to p2, it would not.




Re: [sqlite] Possible bug in FTS3 or Parametrization?

2011-08-05 Thread Dan Kennedy
On 08/05/2011 02:58 PM, Martin Gill wrote:
> Hi all
>
> I have discovered what may be a bug in either SQLLite or System.Data.SQLite
> (v 1.0.66.0 using SQLite 3.6.23.1). I certainly don't understand what's
> happening and I'd apprecite it if someone could help me out; perhaps I'm
> missing something.
>
> To recreate the issue create a database with the following commands:
>
> CREATE VIRTUAL TABLE full_text USING FTS3 (id,text)
> INSERT INTO full_text VALUES ('1', 'I have a foréign character.')

SQLite does not recognize upper/lower-case mappings for non-ASCII
characters. That is why querying for "HAVE" works but "FORÉIGN"
does not. If you need upper/lower case mapping for non-ASCII characters,
you will need to use the ICU extension.

> SELECT id, text FROM full_text WHERE NOT(text = @p1) AND text MATCH @p2
>
> p1 = "foréign"
> p2 = "FORÉIGN"
>
> Not using parametrized SQL also returns a correct result.

I don't see how either the query above or a version that uses SQL
literals could be returning any rows. Not with the "CREATE VIRTUAL
TABLE" as it stands above. Unless you specify the ICU tokenizer (or
some other tokenizer that understands non-ASCII upper/lower case),
"FORÉIGN" should not match the row in table "full_text".





[sqlite] Fwd: Re: assert() in debug build when running query

2011-08-03 Thread Dan Kennedy


 Original Message 
Subject: Re: assert() in debug build when running query
Date: Wed, 03 Aug 2011 15:00:46 +0200
From: Jens Miltner 
To: General Discussion of SQLite Database 

Sorry, should have mentioned that this happens with version 3.6.7.3 and 
up (at least 3.6.7.3 is the version we discovered it in and I just tried 
with 3.7.7.1 and it still happens there).

-jens


Am 03.08.2011 um 12:02 schrieb Jens Miltner:

> Hi,
>
> I suddenly get an assert() in the debug build of sqlite when running a 
> certain query. The same query was executed many times before and did not 
> assert, so I suspect it must have to do with certain data combinations.
> However, I was able to reduce to a very simple data set & fairly simple 
> query, so it looks like it's something that might happen more often...
>
> I have put the SQL statements to create the schema, add sample data and 
> execute the query in a file at 
> .
>
> When executing this with the debug build of the commandline too, I get the 
> following assertion:
>
>> Gandalf:~ jens$ ./build/Debug/sqlite3 < sqliteassert.sql
>> sqlite3.c:64661: failed assertion `memIsValid([u.aj.p1+u.aj.idx])'
>> Abort trap
>
> I have no idea what this assertion is flagging and whether this is something 
> critical, but since it's asserting we have to take this serious...
> (FWIW, this is on Mac OS X 10.6.7)
>
> Can one of the sqlite team please have a look at this and check whether it's 
> something critical?
>
>
> Thanks,
> -jens
>
>
>



Jens Miltner

--
Thou shalt not follow the NULL pointer, for chaos and madness await thee 
at its end.



Re: [sqlite] Deep copy of 2 sqlite3*?

2011-08-02 Thread Dan Kennedy
On 08/03/2011 09:20 AM, Nikki Tan wrote:
> Hi sqlite-users!
>  It's just that I'm writing a copy constructor of my own mini sqlite
> wrapper, and I ended up having to do a deep copy of a sqlite3* that points to a
> SQLite connection. And I'm just wondering: is it okay to just do it with
> memcpy(), by digging into the code for the definition of struct sqlite3 and
> counting the total bytes of this struct?

Use sqlite3_open() to open a new database handle for the copy
of the wrapper class.
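
A sketch of what that looks like for a plain C wrapper (the structure and
names are invented for illustration): keep the filename around, and give the
"copy" its own connection instead of memcpy()ing the sqlite3 object.

  #include <string.h>
  #include <sqlite3.h>

  /* Hypothetical wrapper: a copy gets its own sqlite3* via sqlite3_open(). */
  typedef struct DbWrapper {
    char *zFilename;
    sqlite3 *db;
  } DbWrapper;

  static int wrapper_copy(const DbWrapper *pSrc, DbWrapper *pDst){
    pDst->zFilename = strdup(pSrc->zFilename);
    if( pDst->zFilename==0 ) return SQLITE_NOMEM;
    return sqlite3_open(pDst->zFilename, &pDst->db);  /* never memcpy a sqlite3* */
  }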


Re: [sqlite] LevelDB benchmark

2011-07-30 Thread Dan Kennedy
On 07/30/2011 12:19 PM, Dan Kennedy wrote:
> On 07/30/2011 04:45 AM, Gabor Cselle wrote:
>> Just a heads-up that we just posted an update to the benchmarks:
>> - No more superfluous index on the primary key
>> - WAL turned on with auto-checkpointing every 4096 pages
>>
>> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>>
>> A diff of the results is here:
>> http://code.google.com/p/leveldb/source/diff?spec=svn45=45=side=/trunk/doc/benchmark.html_path=/trunk/doc/benchmark.html=44
>>
>> A new version of db_bench_sqlite3.cc is also included in the same revision.
>>
>> As predicted by people here, SQLite shows significant performance
>> improvements across the board except for large values (which I attribute to
>> WAL).
>>
>> We're planning to put together a benchmark that uses 64-bit integers as
>> keys, rather than 128-bit/16-byte blobs (My understanding is that SQLite
>> stores 64-bit, not 32-bit integers). I'll post the results to this mailing
>> list.
>
> That's correct. 64-bit integers.
>
>> Thanks everyone for your suggestions.
>
> Another one you could do, if you're making changes, is add a
> ReadSequential() function to db_bench_sqlite3.cc. Leveldb and KC are
> both using an iterator to read the keys sequentially, but SQLite is
> doing a separate lookup of each key.
>
> How about something like this:
>
> void ReadSequential() {
>   int status;
>   sqlite3_stmt *pStmt;
>   int i;
>   std::string read_str = "SELECT * FROM test";

Oops. Should be: "SELECT * FROM test ORDER BY key".

>
>   status = sqlite3_prepare_v2(db_, read_str.c_str(), -1, &pStmt, NULL);
>   ErrorCheck(status);
>   for (i=0; i<  reads_&&  SQLITE_ROW==sqlite3_step(pStmt); i++){
> bytes_ += sqlite3_column_bytes(pStmt, 1) +
> sqlite3_column_bytes(pStmt, 2);
> FinishedSingleOp();
>   }
>
>   status = sqlite3_finalize(pStmt);
>   ErrorCheck(status);
> }
>



Re: [sqlite] LevelDB benchmark

2011-07-29 Thread Dan Kennedy
On 07/30/2011 04:45 AM, Gabor Cselle wrote:
> Just a heads-up that we just posted an update to the benchmarks:
> - No more superfluous index on the primary key
> - WAL turned on with auto-checkpointing every 4096 pages
>
> http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
>
> A diff of the results is here:
> http://code.google.com/p/leveldb/source/diff?spec=svn45=45=side=/trunk/doc/benchmark.html_path=/trunk/doc/benchmark.html=44
>
> A new version of db_bench_sqlite3.cc is also included in the same revision.
>
> As predicted by people here, SQLite shows significant performance
> improvements across the board except for large values (which I attribute to
> WAL).
>
> We're planning to put together a benchmark that uses 64-bit integers as
> keys, rather than 128-bit/16-byte blobs (My understanding is that SQLite
> stores 64-bit, not 32-bit integers). I'll post the results to this mailing
> list.

That's correct. 64-bit integers.

> Thanks everyone for your suggestions.

Another one you could do, if you're making changes, is add a
ReadSequential() function to db_bench_sqlite3.cc. Leveldb and KC are
both using an iterator to read the keys sequentially, but SQLite is
doing a separate lookup of each key.

How about something like this:

   void ReadSequential() {
 int status;
 sqlite3_stmt *pStmt;
 int i;
 std::string read_str = "SELECT * FROM test";

  status = sqlite3_prepare_v2(db_, read_str.c_str(), -1, &pStmt, NULL);
 ErrorCheck(status);
 for (i=0; i < reads_ && SQLITE_ROW==sqlite3_step(pStmt); i++){
   bytes_ += sqlite3_column_bytes(pStmt, 1) + 
sqlite3_column_bytes(pStmt, 2);
   FinishedSingleOp();
 }

 status = sqlite3_finalize(pStmt);
 ErrorCheck(status);
   }



Re: [sqlite] [FTS3] Understanding the Flow of data through the tokenizer

2011-07-24 Thread Dan Kennedy
On 07/24/2011 08:16 PM, Abhinav Upadhyay wrote:
> Hi,
>
> I am trying to write my own custom tokenizer to filter stopwords apart
> from doing normalization and stemming. I have gone through the
> comments in fts3_tokenizer.h and also read the implementation of the
> simple tokenizer. While overall I am able to understand what I need to
> do to implement this tokenizer, but I still cannot visualize how the
> FTS engine calls the tokenizer and what data in what form it passes to
> it.
>
> Does the FTS engine pass the complete document data to the tokenizer
> or it passes some chunks of data, or individual words ? I need to
> understand this part because the next function needs to set the
> offsets accordingly. By just going through the code of the simple
> tokenizer I could not completely comprehend it (it would have been
> better if I could debug it).
>
> By the next function I mean this:
>   int (*xNext)(
>     sqlite3_tokenizer_cursor *pCursor,   /* Tokenizer cursor */
>     const char **ppToken, int *pnBytes,  /* OUT: Normalized text for token */
>     int *piStartOffset,  /* OUT: Byte offset of token in input buffer */
>     int *piEndOffset,    /* OUT: Byte offset of end of token in input buffer */
>     int *piPosition      /* OUT: Number of tokens returned before this one */
>   );
> };
>
> It would be better if you could explain what is the role of these
> parameters: piEndOffset , piStartOffset ?

Each time xNext() returns SQLITE_OK to return a new token, xNext()
should set:

   *piStartOffset to the number of bytes in the input buffer before
   start of the token being returned,

   *piEndOffset to *piStartOffset plus the number of bytes in the
   token text, and

   *piPosition to the number of tokens that occur in the input buffer
   before the token being returned.
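
A simplified, self-contained illustration of that bookkeeping (this is not
the real fts3 cursor structure, just a stand-in that tokenizes
whitespace-separated ASCII text the way a custom xNext would):

  #include <ctype.h>
  #include <stdio.h>
  #include <string.h>

  /* Stand-in for a tokenizer cursor: the input buffer plus the current scan
  ** position and a running token counter. */
  typedef struct DemoCursor {
    const char *zInput;   /* document or query text being tokenized */
    int nInput;           /* size of zInput in bytes */
    int iOffset;          /* current byte offset into zInput */
    int iToken;           /* tokens returned so far */
  } DemoCursor;

  /* Return 0 and set the OUT parameters for the next token, or 1 at EOF.
  ** A stop-word filter would simply loop again here instead of returning. */
  static int demoNext(DemoCursor *p,
                      const char **ppToken, int *pnBytes,
                      int *piStartOffset, int *piEndOffset, int *piPosition){
    int i = p->iOffset;
    while( i<p->nInput && isspace((unsigned char)p->zInput[i]) ) i++;
    if( i>=p->nInput ) return 1;                /* no more tokens */
    *piStartOffset = i;                         /* bytes before the token */
    while( i<p->nInput && !isspace((unsigned char)p->zInput[i]) ) i++;
    *piEndOffset = i;                           /* start offset + token length */
    *ppToken = &p->zInput[*piStartOffset];
    *pnBytes = *piEndOffset - *piStartOffset;
    *piPosition = p->iToken++;                  /* tokens before this one */
    p->iOffset = i;
    return 0;
  }

  int main(void){
    const char *z = "SQLite may be useful";
    DemoCursor c = { z, (int)strlen(z), 0, 0 };
    const char *zTok; int n, iStart, iEnd, iPos;
    while( demoNext(&c, &zTok, &n, &iStart, &iEnd, &iPos)==0 ){
      printf("pos=%d bytes=[%d,%d) token='%.*s'\n", iPos, iStart, iEnd, n, zTok);
    }
    return 0;
  }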


Re: [sqlite] FTS3: synonyms dictionary and tokens length

2011-07-15 Thread Dan Kennedy
On 07/15/2011 01:10 PM, Alexey Pechnikov wrote:
> 2011/7/15 Dan Kennedy<danielk1...@gmail.com>:
>> I think you could just have the tokenizer return "dbms" whenever
>> it sees "sqlite" in the input.
>
> But queries to original text will not work:
> select text from fts where fts match 'sqlite educate';

I think it will. Query strings - like 'sqlite educate' - are
also parsed using the tokenizer. So the query will be transformed
to 'dbms educate' before it is run.





Re: [sqlite] FTS3: synonyms dictionary and tokens length

2011-07-14 Thread Dan Kennedy
On 07/15/2011 03:28 AM, Alexey Pechnikov wrote:
> I want to add the table CREATE TABLE ext_fts_synonyms(word text not
> null unique, synonym text not null); insert into
> ext_fts_synonyms values('sqlite','sqlite dbms');
>
> And replace in tokenizer the term 'sqlite' to 2 terms 'sqlite' and
> 'dbms' for search by queries like to
>> select text from fts where fts match 'dbms educate';
> 'SQLite may be useful for education'
>
> But how to return from tokenizer 2 terms or more instead of single
> term?..

I think you could just have the tokenizer return "dbms" whenever
it sees "sqlite" in the input.

Both documents and tokens that appear in queries are transformed
by tokenizers.

Dan.


Re: [sqlite] caveat on blanket UPDATE triggers

2011-07-14 Thread Dan Kennedy
On 07/14/2011 10:42 PM, Ben Harper wrote:
> This may be specific to RTree indexes - I haven't taken the time to fully 
> understand it.
>
> Dangerous:
> CREATE TRIGGER ON UPDATE ON tablename { SET RTREE MIN/MAX }
>
> Fine:
> CREATE TRIGGER ON UPDATE OF geometry OF tablename { SET RTREE MIN/MAX }
>
> Failure to specify the field name explicitly results in a corrupt RTree.

Is this an SQLite problem? Or a problem with some system built on
top of SQLite? If it is an SQLite problem, do you have an SQL
script or program to demonstrate it?

SQLite is not supposed to allow triggers on r-tree tables.

Thanks,
Dan.



Re: [sqlite] FTS3: synonyms dictionary and tokens length

2011-07-14 Thread Dan Kennedy
On 07/14/2011 07:42 PM, Alexey Pechnikov wrote:
>> No. Don't return anything for a stop word. Just advance to the next
>> non stop-word token and return it.
>
> Thanks, I did and it works.
>
> And another question... Is there any way to use multi-word synonyms? Like:
> sqlite ->  Open Source SQLite DBMS
>
> I think the single token "Open Source SQLite DBMS" will not be useful.

I don't quite follow. Can you rephrase the question?




Re: [sqlite] FTS3: synonyms dictionary and tokens length

2011-07-14 Thread Dan Kennedy
On 07/14/2011 05:29 PM, Alexey Pechnikov wrote:
> With 0-length token in icuNext there is the error:
> Error: SQL logic error or missing database
>
> May xNext return 0 length when the token is a stopword?

No. Don't return anything for a stop word. Just advance to the next
non stop-word token and return it.




Re: [sqlite] Attach file/disk DB to memory DB

2011-07-13 Thread Dan Kennedy
On 07/13/2011 10:18 PM, Alexandre G wrote:
>
> If I was to attach a database located in a file on disk to a database located
> in memory, would the file DB be loaded in memory too, or would its usage
> remain disk-based?

It remains on disk. Whether or not the main database is in-memory does
not affect the way attached databases are handled.




Re: [sqlite] FTS3: synonyms dictionary and tokens length

2011-07-13 Thread Dan Kennedy
On 07/13/2011 05:05 PM, Alexey Pechnikov wrote:
> With a synonyms dictionary the resulting token length can be more than
> the original token length.
> Is it a problem for the current implementation of FTS?

I don't think so. If it is, it's a bug.

Dan.


Re: [sqlite] Data type of the blob returned by matchinfo()

2011-07-13 Thread Dan Kennedy
On 07/13/2011 04:57 AM, Abhinav Upadhyay wrote:
> Hi,
>
> Quoting the ranking function given in the appendix of the FTS3
> documentation page (http://www.sqlite.org/fts3.html#appendix_a)
>
> static void rankfunc(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
>int *aMatchinfo;/* Return value of matchinfo() */
> ...
> ...
> aMatchinfo = (unsigned int *)sqlite3_value_blob(apVal[0]);
> ...
> ...
>
> aMatchinfo is declared as int * and the value obtained from
> sqlite3_value_blob() is being case to unsigned int *. This is causing
> a compiler warning, so I am wondering what is the datatype of the
> matchinfo blob (int * or unsigned int *) ? Although common sense says
> it should be unsigned int *, but just wanted to confirm .

The docs say "...the blob consists of zero or more 32-bit unsigned
integers in machine byte-order...". So I guess all of the "int"
declarations in that function should probably be "unsigned int".
Or, just changing the cast to an (int *) will also work.

   http://www.sqlite.org/fts3.html#matchinfo
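
Concretely, the warning in the quoted snippet goes away once the declaration
matches the documented format; a sketch of the fixed declaration inside a
rank-style function (the use of aMatchinfo[1] is only an example):

  #include <sqlite3.h>

  static void rankfunc_fixed(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
    /* 32-bit unsigned integers in machine byte order, per the matchinfo docs. */
    const unsigned int *aMatchinfo =
        (const unsigned int *)sqlite3_value_blob(apVal[0]);
    (void)nVal;
    sqlite3_result_int(pCtx, aMatchinfo ? (int)aMatchinfo[1] : 0);  /* e.g. nCol */
  }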


Re: [sqlite] possible bug "foreign key mismatch" in TCL

2011-07-12 Thread Dan Kennedy
On 07/13/2011 07:18 AM, Victor Mayevski wrote:
> I am getting a "foreign key mismatch" in TCL in the following situation:
> #The system is Ubuntu 10.10 32bit, using ActiveState 8.6 32 bit TCL binaries
>
> % package require sqlite3
> 3.7.6.3
>
>
> #Two example tables:
> create table users (name primary key unique, pass not null)
> create table users1 (name references users (name), pass references users 
> (pass))

Maybe you need a unique constraint on column 'pass' of table "users".

   http://www.sqlite.org/foreignkeys.html#fk_indexes



Re: [sqlite] FTS4 code from the website

2011-07-01 Thread Dan Kennedy
On 06/30/2011 08:55 PM, Ryan Henrie wrote:
> Even if I use the stock example from the web page, with only 2 columns,
> and the exact schema from the example, I get the same result.
>
> I'm wondering if it is a bug in the example code on the website (ie the
> source code has moved on, invalidating an example based on old code), or
> it's just something in turning their c code example into a full
> extension that I didn't do right.
>
>   From the extension's source code:
>
> nCol = aMatchinfo[1];
> if( nVal!=(1+nCol) ) goto wrong_number_args;
>
> So, it should scale with the number of columns.  (I would hope it's not
> hardcoded to a set number of columns!)

The page is a bit deceptive. The key phrase relating to the C code
example is:

   "Instead of a single weight, it allows a weight to be externally
assigned to each column of each document."

Making the C code function incompatible with the SQL example above
it. The C code function would work with the example in its header
comment:

 CREATE VIRTUAL TABLE documents USING fts3(title, content);

 SELECT docid FROM documents
 WHERE documents MATCH 
 ORDER BY rank(matchinfo(documents), 1.0, 0.5) DESC;

It wouldn't be real hard to adapt the C code so that it accepted
a single weight argument like the hypothetical function in the
SQL example above it.
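
As a hedged sketch of that adaptation (one global weight rather than one per
column), assuming the default matchinfo() format used by the appendix
example, i.e. nPhrase, nCol, then three integers per phrase/column pair:

  #include <sqlite3.h>

  /* Sketch: rank(matchinfo(tbl), weight) with a single weight for all columns. */
  static void ranksingle(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
    const unsigned int *aMi;
    double weight, score = 0.0;
    unsigned int nPhrase, nCol, iPhrase;

    if( nVal!=2 ){
      sqlite3_result_error(pCtx, "wrong number of arguments to ranksingle()", -1);
      return;
    }
    aMi = (const unsigned int *)sqlite3_value_blob(apVal[0]);
    weight = sqlite3_value_double(apVal[1]);
    if( aMi==0 ){ sqlite3_result_double(pCtx, 0.0); return; }

    nPhrase = aMi[0];
    nCol = aMi[1];
    for(iPhrase=0; iPhrase<nPhrase; iPhrase++){
      unsigned int iCol;
      const unsigned int *aPhrase = &aMi[2 + iPhrase*nCol*3];
      for(iCol=0; iCol<nCol; iCol++){
        double nHit = aPhrase[3*iCol];          /* hits in this row/column */
        double nGlobalHit = aPhrase[3*iCol+1];  /* hits in all rows        */
        if( nHit>0.0 ) score += (nHit / nGlobalHit) * weight;
      }
    }
    sqlite3_result_double(pCtx, score);
  }

It would be registered with sqlite3_create_function(db, "ranksingle", 2,
SQLITE_UTF8, 0, ranksingle, 0, 0) and used as
ORDER BY ranksingle(matchinfo(documents), 1.0) DESC.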


Re: [sqlite] FTS4 code from the website

2011-06-29 Thread Dan Kennedy
On 06/30/2011 10:31 AM, Ryan Henrie wrote:
> Reference Page: http://www.sqlite.org/fts3.html#appendix_a
>
> At the bottom of the page, there is a sample c file to calculate the
> rank, and a FTS query to use it.  I can't get it to work.
>
> You can see my files here:
>
> http://coldmist.homeip.net/quotes_sql_test.txt
> http://coldmist.homeip.net/rank.c.txt
>
>   >  gcc -shared -fPIC -I/opt/include -o rank.so rank.c
>   >  rm test.sql; sqlite3 test.sql
> The C file compiles without errors or warnings on my x86 Linux machine
> (and I verified one plugin I found compiled and worked fine, just to
> remove build issues as a cause), but when I execute the import, it
> complains with this:
>
> Error: near line 16: wrong number of arguments to function myrank()

Looks like myrank() is supposed to be passed 5 arguments in this
case. The return value of matchinfo() and a weight for each column.
Your table has 4 columns, hence 5 arguments.


Re: [sqlite] Indexes on columns

2011-06-24 Thread Dan Kennedy
On 06/24/2011 12:26 PM, logan...@gmail.com wrote:
> Hello,
>
> My understanding is that an index is automatically created on any column
> that is used in the primary key (or a composite index is created if the key
> is composed of different columns). If this is correct then why don't I see
> indexes for those in my table (I'm using SQLite Administrator and Firefox
> plugin based SQLite manager). I do see indexes for the columns that I added
> a unique constraint upon.
>
> Is the above just a GUI error in these tools, or does an index need to be created
> separately on the columns used in primary keys?

Maybe your tables have "integer primary keys". Those are an exception.
See here:

   http://www.sqlite.org/lang_createtable.html#rowid



Re: [sqlite] Sqlite3 WAL (file is encrypted or is not a database, database table is locked, database disk image is malformed') at high load

2011-06-23 Thread Dan Kennedy
On 06/23/2011 12:11 PM, Anoop K wrote:
> I am using sqlite3(sqlite-3.7.0.1) with WAL enabled as storage for a
> multiprocessing daemon(python). On doing a BULK insert of .5 million rows
> each of size 230 bytes in batches of 500 where each batch is a transaction,
> following errors happen in other processes which perform (<10) SELECTS and
> INSERTS.
>
> These errors do not always happen and the database does function even after
> these errors. SELECT and INSERT queries after the BULK insert operation do
> succeed after the batch operation is completed.
>
> - file is encrypted or is not a database
> - database table is locked
>
> Sqlite3 configuration
>
> - 'PRAGMA synchronous=OFF'
> - 'PRAGMA journal_mode=wal'
> - 'PRAGMA wal_checkpoint=FULL'. The rest of the configuration parameters
> have default values.
>
> If I continue BULK inserts with a sleep of 30 sec, after 10+ runs I
> see *'database
> disk image is malformed' . *After that the table to which BULK inserts were
> made does not work. They fail with same error '*database disk image is
> malformed'.*. Other tables do respond to queries.
>
> Any clues on the root cause ? Is this a Sqlite3 WAL bug ?

Do you have a program you can share that exhibits this bug?

Dan.


Re: [sqlite] Bug in composite foreign key constraints.

2011-06-10 Thread Dan Kennedy
On 06/11/2011 12:08 AM, Max wrote:
> CREATE TABLE TestTable (
>   id integer primary key,
>   name varchar,
>   source_id integer not null,
>   parent_id integer,
>   foreign key(source_id, parent_id) references TestTable(source_id, id)
> );
>
> CREATE UNIQUE INDEX testindex on TestTable(source_id, id);
>
> PRAGMA foreign_keys=1;
> INSERT INTO TestTable VALUES (1, 'Parent', 1, null);
> INSERT INTO TestTable VALUES (2, 'Child', 1, 1);
>
> UPDATE TestTable SET parent_id=1000 where id=2;
>
> Error: near line 15: database disk image is malformed

Thanks. Should be fixed here:

   http://www.sqlite.org/src/info/2b3d9996a8

Dan.


Re: [sqlite] Bug in composite foreign key constraints.

2011-06-10 Thread Dan Kennedy
On 06/10/2011 10:33 PM, Max wrote:
> I just grabbed a 3.7.6.3 binary from the sqlite page.  The second schema
> still fails to throw a constraint error.

Hopefully fixed here:

   http://www.sqlite.org/src/info/442d8d8bfe

Grab the zip file from the "Zip Archive" link on the above page.

Dan.




Re: [sqlite] What is so slow on dropping tables?

2011-06-01 Thread Dan Kennedy
On 06/01/2011 02:18 AM, Jan Hudec wrote:
> Hellow folks,
>
> At $work we have an application that processes *huge* (tens of millions of
> rows in some of the larger tables, sometimes over 30GiB file size). This
> application changes and when it does, it drops some tables and calculates
> them again. What is somewhat surprising is that dropping the tables itself
> takes quite long (order of minutes) time.
>
>   - What is the reason it might take that long? I didn't expect removing the
> table entry in sqlite_master and adding it's pages to the free list to
> take that long.
>   - Is there any way to speed it up? The application works in big tasks, each
> of which opens a transaction and creates one or few tables, dropping any
> old versions of those tables first. So could perhaps moving the drops out
> of the transaction help? It would be correct, once the table is found
> obsolete, it would be found obsolete after rollback and retry again, but
> it would take quite a bit of refactoring, so I'd only do it if it's likely
> to help significantly.

If you have foreign-keys enabled (and one or more FK's that involve
the table being dropped), that can slow things down. If this is
the case, try using the pragma to disable FKs before running the
DROP TABLE.
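
A sketch of that sequence (the table name is a placeholder; note that
PRAGMA foreign_keys is a no-op inside an open transaction, so it has to be
issued outside one):

  #include <sqlite3.h>

  /* Sketch: switch FK enforcement off around the DROP, then back on. */
  static int drop_big_table(sqlite3 *db){
    int rc;
    rc = sqlite3_exec(db, "PRAGMA foreign_keys=OFF", 0, 0, 0);
    if( rc==SQLITE_OK ){
      rc = sqlite3_exec(db, "DROP TABLE obsolete_table", 0, 0, 0);
    }
    sqlite3_exec(db, "PRAGMA foreign_keys=ON", 0, 0, 0);
    return rc;
  }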


Re: [sqlite] FTS4 reserved characters

2011-06-01 Thread Dan Kennedy
On 06/01/2011 06:30 AM, Mohd Radzi Ibrahim wrote:
> Hi,
> I was searching for any reference to reserved characters used in FTS4, but 
> failed to find any.
>
> I have a problem with the query - ftstable match 'width 5" '
>
> But it's ok with - ftstable match 'width 5'
>
> To fix this, I replace every double-quote in the query with an empty space.
>
> My question: are there any other characters that have special meaning in an FTS
> match query?

Those mentioned in the query syntax here:

   http://www.sqlite.org/fts3.html#section_3

The query parser is probably confusing the unmatched '"' character
for the start of a quoted phrase. I guess you also have to watch
out for '-', '*' and ':'. Also the uppercase keywords AND, OR, NOT
and NEAR.



Re: [sqlite] Addition: Re: SQLite locking question

2011-05-06 Thread Dan Kennedy
On 05/06/2011 05:53 PM, Ole Reinhardt wrote:
> Hi Dan,
>
>>> I have to correct myself a little... the hanging sqlite3_open_v2() does
>>> not happen during the prepare of the first query but during the first
>>> sqlite3_step() of the first query...
>>
>> Each shared-cache has a mutex associated with it. When
>> sqlite3_step is called it grabs the mutexes for all shared-caches
>> it might use. The mutexes are not released until the call
>> to sqlite3_step() returns.
>>
>> So if you have one query that spends a lot of time in sqlite3_step()
>> you are going to block your other threads. Unfortunately, that is
>> the nature of shared-cache mode.
>
> Thanks for the info. But is this even true when enabling read
> uncommitted isolation mode using:
>
> sqlite3_exec(log_database_local, "PRAGMA read_uncommitted=True;", NULL,
> 0,)
>
> ??
>
> I suspect it's the master table lock that is held there, right?

You are correct that your query will hold a shared-lock on the
master table at the shared cache level, but it's the mutex that
is causing you problems. Shared-cache locks are held for the
duration of a transaction to ensure transactions are correctly
isolated. In this case the master table is locked to make sure
that the table your query is accessing is not dropped by another
thread while you are using it.

Mutexes are held for the duration of an sqlite3_step() call to make
sure SQLite doesn't segfault when two threads try to access the same
shared-cache object.

> Any other chance to speed this up (apart from the obvious "optimize the
> query, do not use distinct on large tables")?

Without seeing the query or database schema? Not really... Depending
on the exact query an index on "xyz" might help.

Dan.



Re: [sqlite] Addition: Re: SQLite locking question

2011-05-06 Thread Dan Kennedy
On 05/06/2011 05:17 PM, Ole Reinhardt wrote:
> Hi all,
>
> I have to correct myself a little... the hanging sqlite3_open_v2() does
> not happen during the prepare of the first query but during the first
> sqlite3_step() of the first query...

Each shared-cache has a mutex associated with it. When
sqlite3_step is called it grabs the mutexes for all shared-caches
it might use. The mutexes are not released until the call
to sqlite3_step() returns.

So if you have one query that spends a lot of time in sqlite3_step()
you are going to block your other threads. Unfortunately, that is
the nature of shared-cache mode.

Dan.



>
> So in pseudo code the database access looks like this:
>
> first thread:
>
> sqlite3_open_v2("log.sqlite", SQLITE_OPEN_READONLY |
> SQLITE_OPEN_NOMUTEX, NULL);
> sqlite3_exec(log_database_local, "PRAGMA read_uncommitted=True;", NULL,
> 0,)
> sqlite3_prepare_v2(log_database_local, "select distinct xyz from log
> where ...)
>
> while(sqlite3_step(sql_stmt) == SQLITE_ROW)...
>
> While every step takes a _very_ long time as the log table has quite a lot
> of entries (>  5.000.000)
>
>
>
> second thread:
>
> sqlite3_open_v2("log.sqlite", SQLITE_OPEN_READONLY |
> SQLITE_OPEN_NOMUTEX, NULL);
>
> --->  This sqlite3_open_v2 does not return until the prepare
> sqlite3_step() statement of the first thread has completed...
>
>
> again: Any help or short hint would be very appreciated!
>
> Thanks in advance,
>
> Ole Reinhardt
>



Re: [sqlite] Virtual tables and OR clause

2011-05-04 Thread Dan Kennedy
On 05/04/2011 08:58 PM, Schrum, Allan wrote:
> Hi Folks,
>
> Using virtual tables the WHERE clause is broken up and sent to the "best 
> index" function to determine the best index. Then the "filter" function is 
> called to perform the actual work. I've noticed that the SQLITE engine seems 
> to process OR clauses outside of the virtual table process, while AND clauses 
> are provided to the "filter" function to use. How can we get the OR clauses 
> sent to the "filter" function where we can make use of that information?
>
> Using SQLITE 3.6.18.

If you do this:

   SELECT * FROM vtab WHERE a=1 OR b=2

Then SQLite will invoke xBestIndex once for each of the two
conditions and once for a full-scan (no WHERE conditions at
all).

If it thinks there is advantage in doing so (based on the
estimatedCost values returned by the three xBestIndex calls),
SQLite may implement the query by using xFilter/xNext to get
all the a=1 rows from the virtual table, then again for all of
the b=2 rows. It uses the rowid values to avoid returning
duplicates to the caller.

It is not possible for SQLite to request a (a=1 OR b=2) with
a single xFilter/xNext scan.


Re: [sqlite] select * on empty virtual table

2011-04-29 Thread Dan Kennedy
On 04/29/2011 11:15 PM, Joseph Stockman wrote:
>
> We have tried a null pointer, a zero-length static, a dummy static and 
> strings created on the heap via sqlite3_mprintf.
> All cause the same segmentation fault.

Run your app under valgrind. Post the errors it finds if they
don't make the problem obvious.


Re: [sqlite] No journal vs WAL journal

2011-04-26 Thread Dan Kennedy
On 04/26/2011 08:13 AM, Nikolaus Rath wrote:
> Nikolaus Rath  writes:
> [ WAL Performance ]
>> Really no one an idea of why this is happening?
>
> I tried to investigate this further myself and created a small example
> program (attached). It fills a table with 500,000 entries without using
> explicit transactions. The final db is 24 MB big.
>
> On my system, the required times are (the xxx in WAL(xxx) is the
> wal_autocheckpoint value):
>
> synchronous=OFF, journal_mode=OFF: 13143.2 ms
> synchronous=OFF, journal_mode=WAL(1000):   12130.1 ms
> synchronous=NORMAL, journal_mode=WAL(1000):68430.8 ms
> synchronous=OFF, journal_mode=WAL(1):  12279.0 ms
> synchronous=NORMAL, journal_mode=WAL(1):   33475.0 ms
> synchronous=OFF, journal_mode=WAL(5):  13058.9 ms
> synchronous=NORMAL, journal_mode=WAL(5):   27820.7 ms
>
>
> Unfortunately these results just deepen my confusion:
>
>   - Why is WAL faster than no journal if sync=OFF? Doesn't WAL require
> every page to be written twice?

In a lot of cases, writing sequential data to a file is faster
than writing the same amount of data to a bunch of random offsets.
And if you do have to write to a bunch of offsets within a file, then
it is faster if they are sorted in ascending order before doing so.

This is what SQLite does in WAL mode. Writes data sequentially to
the WAL file, then during a checkpoint writes to a bunch of different
offsets within the database file (in sorted order). Maybe that is
faster than writing each transaction directly to the database file
in this case.

>   - Why is WAL(5) so much slower than no journal with sync=NORM? If
> the WAL file can grow up to 50MB, but the db is only 25MB big, it
> seems to me that only one checkpoint (and therefore only one fsync())
> should be required. Am I missing something again?

Maybe creating the large wal file is slow. Usually overwriting an
existing region of a file is faster than extending one. If you run
the experiment with sufficient inserts (so that the 50MB wal file
is created, checkpointed and then overwritten a few times) the
results might look better.

Guesswork, obviously...



Re: [sqlite] Problems while upgrading to 3.7.3

2011-04-21 Thread Dan Kennedy
On 04/21/2011 05:42 PM, Gaurav Srivastava wrote:
> Hi
>
> I have recently upgraded my sqlite version from 3.5.1 to 3.7.3. With the
> update I have been seeing some issues with sqlite3_prepare_v2() API.
> The workflow here is:
> sqlite3VdbeSetSql(pVDbe, zSql, saveSqlflag)
>  ^
>  |
>  |
> sqlite3prepare(...)
>  ^
>  |
>  |
> sqlite3LockandPrepare(...)
>  ^
>  |
>  |
> sqlite3_prepare_v2(savesqlflag=1).
>
> I have created a statement using sqlite3_prepare_v2. I execute a query and
> then use sqlite3_step(stmt) to step over the results. Firstly, it fails due
> to SQLITE_SCHEMA error and then uses sqlite3_reprepare to retry. In that
> function the isPrepareV2 variable becomes false and so I am getting the
> value SQLITE_OK instead of SQLITE_ROW. The second thing is in
> sqlite3VdbeSwap function, we are not properly swapping the isPrepareV2
> values.

I think the code is Ok there. The important part is that after the
call to sqlite3Reprepare() returns, the isPrepareV2 flag is still set.
Are you finding this is not the case?


Re: [sqlite] sqlite-3.7.6.1 coredump/hang

2011-04-16 Thread Dan Kennedy
On 04/15/2011 05:35 PM, Thomas Klausner wrote:
> On Fri, Apr 15, 2011 at 04:32:08PM +0700, Dan Kennedy wrote:
>> How are you building sqlite3? Using configure/make from the
>> sqlite-autoconf-3070601.tar.gz package on the website?
>
> Yes, using pkgsrc.

What happens if you just try to build sqlite-autoconf-3070601.tar.gz
with "./configure && make"? That seems to work here with
"i386cd-5.1.iso" on a vm.

Dan.





Re: [sqlite] sqlite-3.7.6.1 coredump/hang

2011-04-15 Thread Dan Kennedy
On 04/15/2011 04:08 PM, Thomas Klausner wrote:
> On Fri, Apr 15, 2011 at 01:42:00PM +0700, Dan Kennedy wrote:
>> Are there any other threads in the process?
>
> My program is not using threads.
> sqlite3 is linked against libpthread. I don't think it's using threads
> (you probably know that better) but the c library behaves differently
> when libpthread is linked.
>
>> Try attaching [gdb] to the running process:
>>
>> gdb  
>>
>> Then run the [bt] command again.
>
> I recompiled with -g and ran it again, now sqlite3 is consistenly
> dumping core.
> I stepped through it line by line, but:
> open_db (p=) at shell.c:1389
> 1389sqlite3_enable_load_extension(p->db, 1);
> (gdb)
> 0x00401d20 in sqlite3_enable_load_extension@plt ()
> (gdb)
> Single stepping until exit from function
> sqlite3_enable_load_extension@plt,
> which has no line number information.
> 0x00401ba0 in ?? ()
> (gdb)
> Cannot find bounds of current function
> (gdb) c
> Continuing.
>
> Program received signal SIGSEGV, Segmentation fault.
> __pollts50 (fds=0x4, nfds=4152832323, ts=0x7f7ff7873143,
> sigmask=) at
> /archive/cvs/src/lib/libpthread/pthread_cancelstub.c:424
> 424 /archive/cvs/src/lib/libpthread/pthread_cancelstub.c: No such
> file or directory.
>  in /archive/cvs/src/lib/libpthread/pthread_cancelstub.c
> (gdb) Quit
> (gdb) bt
> #0  __pollts50 (fds=0x4, nfds=4152832323, ts=0x7f7ff7873143,
> sigmask=) at
> /archive/cvs/src/lib/libpthread/pthread_cancelstub.c:424
> #1  0x7f7ff7ffba00 in ?? ()
> #2  0x in ?? ()

How are you building sqlite3? Using configure/make from the
sqlite-autoconf-3070601.tar.gz package on the website? Some
other method? Can you post the output of the build process?

Also, what is the output if you set a breakpoint in
sqlite3_enable_load_extension() in the shell, then execute the
following gdb commands?

   (gdb) p *db
   (gdb) p sqlite3Config

Dan.






>
> I'm not sure how to get more debugging info :(
>
> Suggestions where I shall add printfs? :)
>   Thomas



Re: [sqlite] sqlite-3.7.6.1 coredump/hang

2011-04-15 Thread Dan Kennedy
On 04/15/2011 01:32 PM, Thomas Klausner wrote:
> On Fri, Apr 15, 2011 at 01:20:11PM +0700, Dan Kennedy wrote:
>> Can you post the stack-trace from the core file? With debugging
>> symbols if possible.
>
> Sorry, the backtrace is unusable:
> (gdb) bt
> #0  __nanosleep50 (rqtp=0x7f7fc1b0, rmtp=0x0)
>  at /archive/cvs/src/lib/libpthread/pthread_cancelstub.c:389
> #1  0x7f7ff7708008 in ?? ()
> #2  0x01a4 in ?? ()
> #3  0x0202 in ?? ()
> #4  0x in ?? ()

Are there any other threads in the process?

>
> Is there a way to get more useful output from the sqlite3 executable
> (that hangs?)?

Try attaching [gdb] to the running process:

   gdb  

Then run the [bt] command again.

Thanks,
Dan.


Re: [sqlite] sqlite-3.7.6.1 coredump/hang

2011-04-15 Thread Dan Kennedy
On 04/15/2011 12:27 PM, Thomas Klausner wrote:
> Hi!
>
> With 3.7.6 and 3.7.6.1 on NetBSD-5.99.49/amd64, a program I'm using
> started dumping core.

Can you post the stack-trace from the core file? With debugging
symbols if possible.

> Perhaps non-standard options used during compilation are
> -DUSE_PREAD
> -DSQLITE_ENABLE_UNLOCK_NOTIFY=1

Could easily be the ENABLE_UNLOCK_NOTIFY.

Dan.


Re: [sqlite] FTS4 compress/uncompress functions

2011-04-13 Thread Dan Kennedy
On 04/13/2011 02:49 PM, Alexey Pechnikov wrote:
> What type of value should _uncompress_ return? The original content may be
> blob or text, so uncompress may return blob I think... Or we need to
> store the type in the compressed field.

I guess that is correct. uncompress() should return the same type
and value as was passed to compress().
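
A hedged sketch of such a pair using zlib, handling text values only (error
handling is minimal, and the function names are placeholders): zipfunc()
stores a 4-byte original length followed by the deflated bytes, and
unzipfunc() reverses that.

  #include <stdlib.h>
  #include <sqlite3.h>
  #include <zlib.h>

  static void zipfunc(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
    const unsigned char *zIn = sqlite3_value_text(apVal[0]);
    uLong nIn = (uLong)sqlite3_value_bytes(apVal[0]);
    uLongf nOut = compressBound(nIn);
    unsigned char *aOut;
    (void)nVal;
    if( zIn==0 ){ sqlite3_result_null(pCtx); return; }
    aOut = malloc(4 + nOut);
    if( aOut==0 ){ sqlite3_result_error_nomem(pCtx); return; }
    aOut[0] = (nIn>>24)&0xff; aOut[1] = (nIn>>16)&0xff;
    aOut[2] = (nIn>> 8)&0xff; aOut[3] = nIn&0xff;
    compress(&aOut[4], &nOut, zIn, nIn);
    sqlite3_result_blob(pCtx, aOut, (int)(4+nOut), free);
  }

  static void unzipfunc(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
    const unsigned char *aIn = sqlite3_value_blob(apVal[0]);
    int nInTotal = sqlite3_value_bytes(apVal[0]);
    uLongf nOut;
    unsigned char *aOut;
    (void)nVal;
    if( aIn==0 || nInTotal<4 ){ sqlite3_result_null(pCtx); return; }
    nOut = ((uLongf)aIn[0]<<24) | ((uLongf)aIn[1]<<16)
         | ((uLongf)aIn[2]<< 8) | (uLongf)aIn[3];
    aOut = malloc(nOut ? nOut : 1);
    if( aOut==0 ){ sqlite3_result_error_nomem(pCtx); return; }
    uncompress(aOut, &nOut, &aIn[4], (uLong)(nInTotal-4));
    sqlite3_result_text(pCtx, (char*)aOut, (int)nOut, free);
  }

Both functions would be registered with sqlite3_create_function() under the
names given in the table declaration, e.g.
CREATE VIRTUAL TABLE papers USING fts4(content, compress=zipfunc,
uncompress=unzipfunc). This sketch only round-trips text; preserving blob
vs. text, as discussed above, would need an extra type tag in the stored value.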


Re: [sqlite] FTS4 compress/uncompress functions

2011-04-12 Thread Dan Kennedy
On 04/13/2011 06:22 AM, Alexey Pechnikov wrote:
> Which functions will be correct? For example, the affinity for uncompress
> functions can be either "blob" or "text". Now this is not important but
> some tests would be nice for future compatibility.

You mean what type of value should compress() return?

Any type. So long as uncompress() can transform the value back
to the original text.



Re: [sqlite] User function calculates for every row before applying any where condition

2011-04-11 Thread Dan Kennedy
On 04/11/2011 05:32 PM, Enrico Thierbach wrote:
>
> On 11.04.2011, at 10:18, Maxim V. Shiyanovsky wrote:
>
>> Does SQlite (3.6.23.1 to be precise) perform any optimization when user 
>> function appears in the statement?
>>
>> I defined custom function MY_FUNC(a TEXT, b TEXT) and bound it with 
>> sqlite3_create_function.
>>
>> Suppose, we have 2 tables:
>> Tbl1(id1 INTEGER, id2 INTEGER, str TEXT)  with 2 records
>> Tbl2(id INTEGER, Tbl1_id INTEGER, str TEXT) with 5000 records
>>
>> Using query:
>> SELECT Tbl2.id, MY_FUNC(Tbl1.str, Tbl2.str) FROM Tbl2 JOIN Tbl1 ON 
>> Tbl1.id1=Tbl2.Tbl1_id WHERE Tbl2.id>1000 AND Tbl1.id2=1 LIMIT 50
>> I hope MY_FUNC would be evaluated 50 times at most, but it appeared 1.
>>
>> Using  subquery does not do anything with the problem.
>
> SELECT sq.id, MY_FUNC(sq.str1, sq.str2) FROM
>(SELECT Tbl2.id, Tbl1.str AS str1, Tbl2.str AS str2 FROM Tbl2 JOIN Tbl1 ON 
> Tbl1.id1=Tbl2.Tbl1_id WHERE Tbl2.id>1000 AND Tbl1.id2=1 LIMIT 50) sq
>
> really doesn't help?

This doesn't seem right. What is the whole database schema? Is this
the exact query in use or a modified version?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] test/exists.test 2.$tn.1 fails if both DBs are in WAL mode

2011-04-10 Thread Dan Kennedy

 >> For me, this test from test/exists.test fails if both database
 >> connections are opened in WAL mode, that is if "PRAGMA journal_mode=WAL"
 >> is issued right after each DB is opened.
 >>
 >># TABLE objects.
 >>#
 >>do_test   {
 >>  sql1 { DROP TABLE IF EXISTS t1 }
 >>  sql2 { CREATE TABLE t1(x) }
 >>  sql1 { DROP TABLE IF EXISTS t1 }
 >>  sql2 { SELECT name FROM sqlite_master WHERE type = 'table' }
 >>} {}

On 04/10/2011 04:12 PM, Ralf Junker wrote:
> The test passes well if I recreate the database file anew just prior to
> running it so it acts upon an empty database.

The [do_multiclient_test] is supposed to delete the database file
before running the script passed as the third argument (of which
this [do_test] is a part). So you're right, it is supposed to run
starting with an empty database.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Can't load sqlite 3.75 under tcl 8.4.4.0

2011-04-10 Thread Dan Kennedy
On 04/10/2011 11:43 AM, thoselai...@shaw.ca wrote:
> % package require sqlite
> couldn't load library "C:/Tcl/lib/sqlite3/sqlite3/.dll": this library or a 
> dependent library could not be found in library path
>
> What is sqlite3/.dll ?
> Where did that "/" come from?
>
> The pkgindex file contains -
>
> package ifneeded sqlite 3.7.5 [list load [file join $dir \
>  sqlite3[info sharedlibextension]] sqlite3]

Maybe there is a space or tab or something between "sqlite3" and
"[info".

Does the command "load C:/Tcl/lib/sqlite3/sqlite3.dll" work if
you enter it directly?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Disabling a unique index

2011-04-08 Thread Dan Kennedy
On 04/08/2011 04:42 PM, Jaco Breitenbach wrote:
> Dear experts,
>
> My application makes use of a SQLite table with a unique index.  During
> normal processing, the unique index is used to ensure no duplicate records
> are entered into the table.  However, at regular intervals large numbers of
> records (millions of records) that are already known to be unique, are
> inserted into the SQLite table from another source.  In order to speed up
> these bulk inserts, I first drop the index on the SQLite table, do the bulk
> insert, and then recreate the index.
>
> The problem I'm encountering is that dropping of the index can potentially
> take much longer than recreating it.  In a test I've run this morning, it
> took 53 minutes to complete the "DROP INDEX my_unique_index", but only 9
> minutes to recreate it.

Is your database an auto-vacuum database? If so, it might be faster if
you issue a "PRAGMA auto_vacuum = 2" before dropping the index.

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-05 Thread Dan Kennedy
On 04/05/2011 04:49 PM, Filip Navara wrote:
> Hello,
>
> we are having problem with database that originated on computer of one
> of our customers.
>
> The database is used in WAL mode with auto_vacuum=1 and page_size=1024.
>
> When running the "pragma incremental_vacuum(1);" command the WAL file
> grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
> per page + some overhead). This causes the transaction to run for much
> longer time than expected and eventually the WAL file grows to several
> gigabytes when we try to run incremental_vacuum for 4096 pages.
>
> Additional facts:
> - The database was created with SQLite 3.7.5 running on Windows
> - "pragma integrity_check;" reports ok
> - Some free trunk list pages contain weird values in the "number of
> entries" field
>
> I have attached most of the dumps that I could create with the regular
> SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
> the free list dump). I'm willing to send the whole database file
> (~5Gb) to sqlite developers on request.
>
> My questions are:
> - Is is expected behavior that "pragma incremental_vacuum(1);" can
> create 14Mb WAL file even though the page size is just 1Kb?
> - If not, is it a know bug? If yes, in what circumstances?
> - Is the free list corrupted? And if it is, how could this happen and
> should "pragma integrity_check;" verify it?

Was there a *-wal file in the file-system when you ran the
[showdb] tool? [showdb] just reads the database file, it is
not smart enough to know when it should read pages out of the
wal file instead of the db. So if there is a wal file that
contains content, [showdb] could report corruption.

Also, can you try with this patch?

   http://www.sqlite.org/src/ci/311d0b613d

It might help with the 14MB wal files.

Dan.







___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] BEGIN DEFERRED TRANSACTION causes core dump in a heavily periodic load with BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION in version 3.7.5

2011-04-05 Thread Dan Kennedy
On 04/06/2011 01:39 AM, ChingChang Hsiao wrote:
> I believe it is a bug in 3.7.5. It didn't happen in 3.6.22. It causes core 
> dump when using "BEGIN DEFERRED TRANSACTION" in one of our application to 
> access DB periodically(every 1 second) . There are other applications access 
> the same DB periodically but using "BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION". 
> Only the application with "BEGIN DEFERRED TRANSACTION" went to core dump. It 
> seems that it doesn't get the lock for some reasons and fails in assert. 
> After changing from "BEGIN DEFERRED TRANSACTION" to "BEGIN EXCLUSIVE 
> TRANSACTION" in this application, the problem is gone. The core dump report 
> is shown as below.
>
> ChingChang
>
>

In frame 3, what do you get for "p *pBt"? And "p *pBt->db"?

Dan.


> (gdb) bt
> #0  0x3370bb04 in raise () from /lib/libc.so.6
> #1  0x3370d2f4 in abort () from /lib/libc.so.6
> #2  0x337032a4 in __assert_fail () from /lib/libc.so.6
> #3  0x100dc940 in btreeInvokeBusyHandler (pArg=0x102b3b50) at sqlite3.c:47153
> #4  0x1013f1dc in sqlite3VdbeHalt (p=0x103ae298) at sqlite3.c:38543
> #5  0x1018fda8 in sqlite3VdbeExec (p=) at sqlite3.c:63340
> #6  sqlite3Step (p=0x103ae298) at sqlite3.c:59036
> #7  0x101987e8 in sqlite3_step (pStmt=0x103ae298) at sqlite3.c:59101
> #8  0x1016cb7c in sqlite3_exec (db=0x10856e18, zSql=0x106b3aa4 "COMMIT;",
>  xCallback=0, pArg=0x0, pzErrMsg=0x388a87c0) at sqlite3.c:84523
> #9  0x1003f744 in SqlQuery::execw (this=0x388a8844,
>  sql_stmt=0x106b3aa4 "COMMIT;", context=0x101b91b8 "SlotUtilEvent.cpp",
>  linenum=69, warnings=) at SqlQuery.cpp:281
> #10 0x10089db8 in SlotUtilEvent::run (this=0x10a81e94) at SlotUtilEvent.cpp:94
> #11 0x10003f40 in HwMonListener::run (this=0x106b28a8)
>  at 
> /mnt/local/cch/bugfix_test_11_01_02232011/isg6000/isg6k/mgmt-crd/linuxapps/hwmon/hwmon.cpp:1993
> #12 0x10025c8c in Thread::start_thread (arg=0x106b28a8) at thread.cpp:199
> #13 0x334265cc in ?? () from /lib/libpthread.so.0
> #14 0x337b0b88 in clone () from /lib/libc.so.6
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Full Text Search

2011-03-26 Thread Dan Kennedy
On 03/26/2011 02:18 PM, Sumesh KS wrote:
> Hi,
>
> I am Sumesh, student from india. I currently doing a project using qt
> and sqlite. I want to implement Full Text Search in that project.
> Anyone please tell me, from where i start to learn FTS and it's
> working and how it is implemented.

   http://www.sqlite.org/fts3.html
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "transaction" method of the Tcl bindings to SQLite

2011-03-24 Thread Dan Kennedy
On 03/24/2011 07:23 PM, Richard Hipp wrote:
> On Wed, Mar 23, 2011 at 7:05 AM, Jean-Marie CUAZ  wrote:
>
>> Hello,
>>
>> In a Tcl script, is it safe to call the Tcl "_return_" command inside
>> and before the end of a multi-statement "_transaction_" method (for
>> aborting the Tcl procedure in case of Sql/application error) ?
>>
>> In other words is the transaction handled at the SQLite level "closed"
>> safely (and the statements executed before the "return" invocation be
>> rolled back automatically) ?
>>
>
> Yes.

Clarification: The transaction will be closed correctly in all cases.
But the statements will only be rolled back if the script returns
TCL_ERROR (i.e. [return -code error]).

Any other error code (ok, break or continue) closes the transaction
but commits the statements.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] PRAGMA cache_size problem

2011-03-23 Thread Dan Kennedy
On 03/23/2011 03:26 PM, Jaco Breitenbach wrote:
>   Dear experts,
>
> I have compiled sqlite-autoconf-3070500 on a RedHat Linux machine.  My
> application is written in C, so uses the SQLite C interface.
>
> At startup I connect to an admin datafile, and then attach 20 more datafiles
> to the connection.  After creating the initial connection (before attaching
> to the other datafiles), I issue "PRAGMA cache_size=1048576".  If I query
> the cache size with "PRAGMA cache_size", the correct value is returned.
> However, this appears to have no actual effect on the application.  As I
> proceed to insert data randomly into the 20 attached datafiles, the
> application's memory footprint is capped at 64 MB, even though the combined
> size of the datafiles is well over 400 MB.
>
> As a test I recompiled SQLite with -DSQLITE_DEFAULT_CACHE_SIZE=1048576.
> Without issuing the PRAGMA, the memory (cache) size now continues to grow to
> match the size of the datafiles as expected.

Each attached database has a separate limit. You will need to do:

   ATTACH 'new.db' AS aux;
   PRAGMA aux.cache_size = 1048576;

or something.
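
For instance, something along these lines (the attach names are made up; error checking omitted):

#include <sqlite3.h>

/* cache_size applies per database, so issue the pragma once for each
** attached file ("main" plus whatever names were used in ATTACH). */
static void set_cache_sizes(sqlite3 *db){
  static const char *azDb[] = { "main", "aux1", "aux2" };
  char zSql[64];
  int i;
  for(i=0; i<(int)(sizeof(azDb)/sizeof(azDb[0])); i++){
    sqlite3_snprintf((int)sizeof(zSql), zSql,
        "PRAGMA %s.cache_size = 1048576", azDb[i]);
    sqlite3_exec(db, zSql, 0, 0, 0);
  }
}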

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Best approach for xBestIndex/xFilter effectiveness

2011-03-22 Thread Dan Kennedy
On 03/23/2011 01:07 AM, Jay A. Kreibich wrote:
> On Tue, Mar 22, 2011 at 06:25:04PM +0700, Dan Kennedy scratched on the wall:
>
>> SQLite assumes that the result of each expression in the WHERE
>> clause depends only on its inputs. If the input arguments are
>> the same, the output should be too. Since random() has no inputs,
>> SQLite figures that it must always return the same value.
>
>To what degree?  And expression like "...WHERE 20<= (random()%100)"
>has no "inputs" other than constants, but is still evaluated once per
>row.  Or is it just raw functions and column references, and not the
>expression as a whole?


I think once you are trying to predict how many times or exactly
when a user function will be called for a given SQL statement you
are technically into the realms of undefined behaviour.

And again, technically, SQLite assumes that the value returned by
a user-defined function is a function of its inputs. One instance
where this assumption is used is with virtual tables. If you do:

   SELECT * FROM vtab WHERE col = userfunction();

and the xBestIndex() method says it can handle "col = ?" but does
not set the corresponding "aConstraintUsage[x].omit" flag, SQLite
will evaluate userfunction() once to pass to the xFilter
method, and then again for each row visited by the virtual table
cursor. If the result of userfunction() is not
stable, the query could return difficult to explain results.
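
For reference, an xBestIndex() along those lines might look something like this (a sketch for a hypothetical virtual table whose column 0 is the indexed one); it is the "omit" assignment that tells SQLite not to re-evaluate the constraint expression for each row:

#include <sqlite3.h>

static int vtabBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
  int i;
  (void)pVTab;
  for(i=0; i<pInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *p = &pInfo->aConstraint[i];
    if( p->usable && p->iColumn==0 && p->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      pInfo->aConstraintUsage[i].argvIndex = 1;  /* value passed to xFilter */
      pInfo->aConstraintUsage[i].omit = 1;       /* core skips its own test */
      pInfo->idxNum = 1;                         /* tells xFilter: "col=?"  */
      pInfo->estimatedCost = 1.0;
      return SQLITE_OK;
    }
  }
  pInfo->idxNum = 0;                             /* full scan */
  pInfo->estimatedCost = 1000000.0;
  return SQLITE_OK;
}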

I think there might be other such examples too. Left joins. Where
clauses that include OR operators. That sort of thing.

That said, we're aware of the way random() and user-functions with
side-effects are often used. I don't think it's something that
would get changed capriciously.

Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Best approach for xBestIndex/xFilter effectiveness

2011-03-22 Thread Dan Kennedy
On 03/22/2011 04:26 PM, Max Vlasov wrote:
> Hi,
>
> recently I finally started experimenting with virtual tables and there's at
> least one thing I can not understand.
>
> As I see xBestIndex/xFilter were developed to allow fast searching if the
> implementation is able to do this. But there's also sql language that allows
> very exotic queries. Some of them may be recognized by the implementation,
> some not. If the former, one just can rely on sqlite double checking and
> just do full scan. But there are also cases when it looks like recognition
> is not possible. For example
>
> SELECT * FROM vtest where id>  random()
>
> in this case xBestIndex just assumes some constant as the expression, so the
> one who implements just can't detect probably unresolved query and thinks
> that it can search quickly (binary search, for example). The call to xFilter
> just passes first random value and sqlite will never call it again for the
> same enumeration. So xFilter thinks this is the constant value used in the
> query and jumps to the first correct row row never planning to jump back.
> But this is actually a misleading action since in real world sqlite calls
> random on every row and the rows bypassed are actually important and can be
> evaluated to true. I mentioned random(), but there may be other cases, for
> example when other fields are part of expressions.

SQLite assumes that the result of each expression in the WHERE
clause depends only on its inputs. If the input arguments are
the same, the output should be too. Since random() has no inputs,
SQLite figures that it must always return the same value.

You can see a similar effect with:

   CREATE TABLE t1(a PRIMARY KEY, b);
   SELECT * FROM t1 WHERE a > random();  -- random() evaluated once.
   SELECT * FROM t1 WHERE +a > random(); -- random() evaluated many times
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "Error: disk I/O error" on big databases vacuuming

2011-03-08 Thread Dan Kennedy
On 03/08/2011 11:51 PM, Alexey Pechnikov wrote:
> I try to vacuum database about 11Gb size on debian squeeze host with 1,5 Gb 
> RAM:
>
> sqlite3 test.db 'vacuum;'
> Error: disk I/O error
>
> Note: any new files does not created on vacuuming process (may be
> created journal, does not it?).
>
> But this work correct:
> sqlite3 test.db '.dump'|sqlite3 test2.db

Is the temp file space filling up?


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] long insert statement failing on iPhone

2011-02-28 Thread Dan Kennedy
On 02/26/2011 09:37 AM, Mickey Mestel wrote:
> hi all,
>
>   new to the list.
>
>   we are compiling the sqlite.c amalgamation file into our project, as we 
> want to encrypt the database, and the sqlite that comes with the iOS sdk does 
> not allow for that.
>
>   we are developing on iOS 4.2, and to date have tried 3 different 
> versions of sqlite.c, all of which have the same effect.
>
>   the issue is that when we prepare a statement like:
>
>   insert into tablename (foo, foo1, foo2) values (?,?,?)
>
>   and that statement is longer that roughly 250 characters, the table 
> name and sometimes the column name are getting munged, most often over 
> written with one of the column names in the table, or column list in the 
> statement, wherever it is getting it from.
>
>   i've just started on this problem, so i may not yet have a lot of 
> details, but by the time we are in yy_reduce() and calling sqlite3Insert(), 
> the corruption has already taken place.  so i'm trying to trace this back and 
> see what is going on.
>
>   this only happens on the device, not the simulator, unfortunately.

I've have never tried this. But if I had the problem above, I would.

 
http://landonf.bikemonkey.org/code/iphone/iPhone_Simulator_Valgrind.20081224.html

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Reason for random names for the master journal?

2011-02-25 Thread Dan Kennedy

>> For example, say you have two databases in a transaction - mine.db
>> and yours.db. A crash happens while committing the transaction
>> and the file mine.db-mjMASTER is left in the file-system.
>>
>> Following recovery, if a process tries to read mine.db, the
>> transaction on mine.db will be rolled back. But the master
>> journal file cannot be deleted yet (since the interrupted
>> transaction on yours.db has not yet been reverted).
>>
>> If the process then attaches "ours.db" and then tries to
>> write a multi-file transaction that involves mine.db and
>> ours.db it has a problem - it wants to create and populate
>> mine.db-mjMASTER but it cannot, since that file is still
>> in use.
>>
>> We use the random names to get around this problem.
>> You could probably get away with using "-mjMASTER1",
>> "-mjMASTER2" etc. if you wanted to. See the loop around
>> line 1729 of vdbeaux.c.
>>
 >
> We have 2 databases. We open one and attach the other one to it at the
> very beginning of our single (multi-threaded) database process.
> Subsequently, we starting issuing SQL statements that could affect
> either one or both databases. It seems we could get by with a single
> master-journal filename. Do you agree?

Probably. Sounds Ok in theory.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Reason for random names for the master journal?

2011-02-25 Thread Dan Kennedy
On 02/26/2011 12:30 AM, Johns Daniel wrote:
> What is the reason was for having random filenames for the SQLite
> master journal file (like mine.db-mj501CA440, mine.db-mj1C17,
> mine.db-mj66677495, etc)?
>
> Here is the reason for this strange question. We have discovered that
> the JFFS2 filesystem in Linux is leaking kernel memory each time we
> open/close/delete a file with a different name! I have not figured out
> how to fix the leak in JFFS2 yet. I was wondering whether this
> workaround is safe: Change our SQLite code to always use a fixed
> filename (like mine.db-mjMASTER).

The problem is that if a crash occurs during a multi-file commit,
the master journal cannot be removed from the file-system until
the transactions have been rolled back for all database files.
And this doesn't happen until the first time each of them is
read.

For example, say you have two databases in a transaction - mine.db
and yours.db. A crash happens while committing the transaction
and the file mine.db-mjMASTER is left in the file-system.

Following recovery, if a process tries to read mine.db, the
transaction on mine.db will be rolled back. But the master
journal file cannot be deleted yet (since the interrupted
transaction on yours.db has not yet been reverted).

If the process then attaches "ours.db" and then tries to
write a multi-file transaction that involves mine.db and
ours.db it has a problem - it wants to create and populate
mine.db-mjMASTER but it cannot, since that file is still
in use.

We use the random names to get around this problem.
You could probably get away with using "-mjMASTER1",
"-mjMASTER2" etc. if you wanted to. See the loop around
line 1729 of vdbeaux.c.
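
For reference, the kind of statement sequence that creates a master journal in the first place looks something like this (database and table names made up; error checking omitted):

#include <sqlite3.h>

static void two_file_commit(void){
  sqlite3 *db;
  sqlite3_open("mine.db", &db);
  sqlite3_exec(db, "ATTACH 'yours.db' AS yours", 0, 0, 0);
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  sqlite3_exec(db, "UPDATE main.t1 SET x=x+1", 0, 0, 0);
  sqlite3_exec(db, "UPDATE yours.t1 SET x=x+1", 0, 0, 0);
  /* The master journal (e.g. "mine.db-mjXXXXXXXX") exists only while
  ** this COMMIT is in progress. */
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
  sqlite3_close(db);
}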

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Upgrade from 3.5.8 -> 3.7.5 - increase memory usage

2011-02-20 Thread Dan Kennedy
On 02/21/2011 09:38 AM, Simon Slavin wrote:
>
> On 21 Feb 2011, at 1:47am, Roger Binns wrote:
>
>> On 02/20/2011 02:48 PM, Todd Shutts wrote:
>>> The application
>>> never used more than 10MB and it is currently using 57+MB and continues
>>> to climb.
>>
>> The single most likely explanation is this is WAL in action, the memory is
>> from a memory mapped file and a WAL checkpoint will release it.
>>
>>   http://www.sqlite.org/wal.html
>
> Todd's question, or something like it, has been asked at least seven times on 
> this list in the seven months since WAL was released.  Might I suggest that 
> an explanation of WAL's use of lots of memory be placed higher up on that 
> page and headed something like "Memory usage" ?
>
> Also, something like "May use many megabytes of memory between checkpoints." 
> should be listed in the 'disadvantages' list.  The explanation of 
> checkpointing is fine, but users don't know they need to read it until 
> they've asked here.

The size of the memory mapped file is about 8 bytes per page
in the WAL log, allocated in 32 KB chunks.

So if the poster is using a page size of 1024 bytes, a 47MB
memory mapped file  corresponds to roughly a 6GB WAL log. Or
if he is using the win32 default page size of 4K, a WAL file
four times that large.

WAL file growth has (apparently) been a problem for people with
really busy systems. But so far it has been the actual size of
the file on disk that bothers them, not the memory mapped bit.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS slowdown with matchinfo

2011-02-17 Thread Dan Kennedy
On 02/17/2011 05:41 AM, Iker Arizmendi wrote:
> Dan Kennedy wrote:
>>
>> Can you make the database available for download? And
>> supply the exact query you are using too? I'd like to
>> know why this is. Thanks.
>>
>> Dan.
>>
>
> You can find a tarball of the DB file here:
>
>  http://www.research.att.com/people/Arizmendi_Iker/geo.db.tgz
>
> This query runs in around 1.2 seconds:
>
>  SELECT length(content)
>  FROM locateme
>  WHERE locateme MATCH 'newark OR new OR brunswick';
>
> And this one in around 8.5 minutes:
>
>  SELECT length(matchinfo(locateme, 'x'))
>  FROM locateme
>  WHERE locateme MATCH 'newark OR new OR brunswick';
>

The database uses a custom tokenizer - "stopwords" - so I can't
run the queries directly. If I dump the data into a regular fts3
table using the default tokenizer and then run your queries with
3.7.5 they both run in pretty much the same amount of time. Both
much quicker than 1 second on a Linux PC.

There was a bug causing excessive calls to realloc() fixed a
little while ago, although from memory I don't think it would
have hit this case. The symptoms are similar though, so I could
easily be wrong on that.

Suggest upgrading to 3.7.5 to see if that clears the problem.

If you can get this slowdown with 3.7.5 and one of the built in
tokenizers, please post so I can look again.

Thanks,
Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] foreign key on delete no action

2011-02-16 Thread Dan Kennedy

> foreign key ChildTable (ChildColumn1, ChildColumn2)
>   references ParentTable (ParentColumn1, ParentColumn2)
>   on delete no action
>
> I have a few questions:
>
> 1. What does SQLite do if the action is just left blank? Does it have the 
> same affect as explicitly writing "no action"?

Yes. Defaults to NO ACTION.

> foreign key ChildTable (ChildColumn1, ChildColumn2)
>   references ParentTable (ParentColumn1, ParentColumn2)
>
> 2. I modified the schema to include "no action" on an app on my iPad (running 
> SQLite version 3.6.23.2). But when I move it to my Mac (running SQLite 
> version 3.6.12) and then run:
>
> pragma integrity_check;
>
> I get an error:
>
> SQL error: malformed database schema (ChildTable) - near "no": syntax error
>
> Is this to be expected? I know that foreign key actions are supported in 
> SQLite version 3.6.19 onward, but previous version supported parsing the 
> foreign key schema. Did it not support parsing "no action"?

Right. NO ACTION was not parsed prior to 3.6.19.

> 3. For backwards parsing compatibility, am I better off just leaving the 
> action blank instead of explicitly writing "on delete no action"?

Yes. Good plan.

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS slowdown with matchinfo

2011-02-15 Thread Dan Kennedy
On 02/16/2011 12:50 AM, Iker Arizmendi wrote:
> Hello all,
>
> I'm running into what seems like an abnormally large
> performance drop on on some FTS queries that use
> matchinfo when compared to those that don't.
>
> I created an FTS table using the following:
>
>  CREATE VIRTUAL TABLE test
>  USING FTS4(rowid, content);
>
> and then filled it with ~2 million docs of ~10 tokens
> each which resulted in a file of around 275 MB. After
> running "optimize" on the table I issued a query with
> 3 terms like so:
>
>  SELECT length(content) FROM test
>  WHERE MATCH "w1 OR w2 OR w3"
>
> which returned ~164,000 rows in 1.1 seconds. However,
> if I throw in a call to matchinfo:
>
>  SELECT length(matchinfo(test, 'x')) FROM test
>  WHERE MATCH "w1 OR w2 OR w3"
>
> the query takes 7.5 minutes. It seems FTS is getting
> stuck calculating the 2nd and 3rd part of the "x"
> matchinfo data ("hits all rows" and "docs with hits")
> but it's not clear why this should take so long.
> Any ideas on what might be causing the slowdown?

Can you make the database available for download? And
supply the exact query you are using too? I'd like to
know why this is. Thanks.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR

2011-02-15 Thread Dan Kennedy
On 02/15/2011 06:04 PM, venkat easwar wrote:
> Forgot to mention what error I am getting.
>
> near "order": syntax error
> near "limit": syntax error - if i remove the order by clause

See under the "Optional LIMIT and ORDER BY Clauses" heading
on this page:

   http://www.sqlite.org/lang_update.html

You need to build SQLite with SQLITE_ENABLE_UPDATE_DELETE_LIMIT
defined.

   http://www.sqlite.org/compile.html#enable_update_delete_limit

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] upgrading DB from 3.6.23 to 3.7.5

2011-02-11 Thread Dan Kennedy
On 02/11/2011 08:08 PM, Sam Carleton wrote:
> On Fri, Feb 11, 2011 at 6:54 AM, Philip Graham Willoughby<
> phil.willoug...@strawberrycat.com>  wrote:
>
>> Hi Sam,
>>
>> On 11 Feb 2011, at 05:29, Sam Carleton wrote:
>>> I am sure it is bad form, but attached is one of the 3.6.23 DB, it is
>> only
>>> 12K.
>>
>> The mailing list software strips attachments; can you share it on dropbox
>> (free account here: http://www.dropbox.com/referrals/NTEwMzQxNDc1OQ ) or
>> some similar service?
>>
>
> dropbox is very handy, but I don't like now their public share does NOT have
> a web page front end, it is just a link that start downloading the file.
> Here is a link to the file via yousendit.com, it has a bit nicer
> presentation:)  Thanks for the help!
>
> https://www.yousendit.com/download/MzZHT213TXZtUUh2Wmc9PQ

This database works fine with 3.7.5 here. What error are you
getting?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Outer query returning results not found in subquery

2011-02-10 Thread Dan Kubb
Hi,

I have a query that when executed stand-alone returns different results than 
when it's a subquery. At first I wasn't sure if it was just me, so I submitted 
a question to Stackoverflow with some detail and someone else replied that they 
could reproduce what I was seeing:

  
http://stackoverflow.com/questions/4870293/sqlite-outer-query-is-returning-results-not-found-in-inner-query

I'll post the question I asked below since I think it explains what I'm seeing 
relatively clearly, and provides instructions on how to reproduce:

--

I just wondered if anyone has run into a case in SQLite (3.7.4) where a query 
would return one set of results, and when it becomes a subquery the results are 
completely different? I found the problem in a more complex query, but here's a 
simpler example that demonstrates the same behaviour:

Database setup:

CREATE TABLE "test" ("letter" VARCHAR(1) PRIMARY KEY, "number" INTEGER NOT 
NULL);

INSERT INTO "test" ("letter", "number") VALUES('b', 1);
INSERT INTO "test" ("letter", "number") VALUES('a', 2);
INSERT INTO "test" ("letter", "number") VALUES('c', 2);

Initial query:

SELECT "letter", "number" FROM "test" ORDER BY "letter", "number" LIMIT 1;

This returns "a|2", the second row from the results as you would expect given 
that we're sorting on the letter then the number. However, here's what I did 
not expect:

Initial query as a subquery:

SELECT DISTINCT "number" FROM (SELECT "letter", "number" FROM "test" ORDER 
BY "letter", "number" LIMIT 1) AS "test";

This returns "1", which is not at all what I expected. What I expected to see 
is "2". My understanding of how a subquery works is that it should return the 
same results *as if* the inner query was materialized, and the outer query was 
applied against those results (even though I realize that databases go to 
extreme lengths not to materialize results until necessary).

Is my assumption incorrect? I tested the same query in PostgreSQL and MySQL and 
it worked as I expected (i.e. it returned "2"). What it looks like to me is 
that I've hit a bug in how SQLite collapses subqueries, but I'm not sure.

Just to reiterate, the above example is simplified from what I'm actually 
doing. I'm not just using DISTINCT on a subquery that returns a single row, but 
rather it returns many rows, some of which have the same value for a column 
hence my need for DISTINCT. The above example is the simplest way I could think 
of to demonstrate what's happening.

-- 

Thanks,

Dan Kubb
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] determine SQLITE_MAX_LENGTH or sql_limit/sqlite3_limit

2011-02-09 Thread Dan Kennedy
On 02/10/2011 01:56 AM, Vannus wrote:
> Zeoslib is reading sqlite field lengths incorrectly, as it checks for
> brackets after the field typename ie. CHAR(123)
> presumably this is only affecting me because I haven't defined field lengths
> in my sqlite3 db.
>
> I don't want to hard-code 1,000,000,000 or 2147483647 in as the field length
> - but how do I check what SQLITE_MAX_LENGTH or the limit imposed by
> sql_limit or sqlite3_limit is?

At http://www.sqlite.org/c3ref/limit.html the third paragraph
has:

   Regardless of whether or not the limit was changed, the
   sqlite3_limit() interface returns the prior value of the limit.
   Hence, to find the current value of a limit without changing it,
   simply invoke this interface with the third parameter set to -1.

Maybe you can use that.
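
For example, a minimal sketch:

#include <sqlite3.h>

/* Return the connection's current maximum string/BLOB length without
** changing it: passing -1 as the new value only queries the limit. */
static int current_max_length(sqlite3 *db){
  return sqlite3_limit(db, SQLITE_LIMIT_LENGTH, -1);
}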

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database disk image is malformed 3.7.x

2011-02-09 Thread Dan Kennedy

> I didn't find a way yet to reproduce the issue with a "clean" database.
> Only way I can reproduce it is with some of the database, like the
> test.db3. So I'm running out of ideas.

This is the theory. test.db3 is an auto-vacuum database.

   http://www.sqlite.org/src/info/89b8c9ac54

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database disk image is malformed 3.7.x

2011-02-09 Thread Dan Kennedy
On 02/10/2011 12:10 AM, Dan Kennedy wrote:
> On 02/09/2011 08:17 PM, Dennis Geldhof wrote:
>> I checked some things for the attached database. It is created with
>> sqlite version 3.6.3 or 3.6.23.1 with the help of the system.data.sqlite
>> wrapper. The header displays (with the ./showdb) version 3.7.4, so it is
>> opened and changed with a database viewer tool. Before executing the
>> "query" the database integrity is OK in both 3.6.23.1 and 3.7.4, after
>> executing it is OK for 3.6.23.1 but malformed for 3.7.4. The "query" is
>> executed using the system.data.sqlite wrapper (sqlite v3.6.23.1). It
>> does not matter if the database was encrypted while executing the
>> "query", the result stays the same.
>
> Thanks for doing this. It sounds like it is worth trying to figure
> out what makes this db special.
>
> Are you able to make the test.db3 database available for download or
> mail it to me directly? This mailing list strips out attachments.

You already did that... My mistake.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database disk image is malformed 3.7.x

2011-02-09 Thread Dan Kennedy
On 02/09/2011 08:17 PM, Dennis Geldhof wrote:
> I checked some things for the attached database. It is created with
> sqlite version 3.6.3 or 3.6.23.1 with the help of the system.data.sqlite
> wrapper. The header displays (with the ./showdb) version 3.7.4, so it is
> opened and changed with a database viewer tool. Before executing the
> "query" the database integrity is OK in both 3.6.23.1 and 3.7.4, after
> executing it is OK for 3.6.23.1 but malformed for 3.7.4. The "query" is
> executed using the system.data.sqlite wrapper (sqlite v3.6.23.1). It
> does not matter if the database was encrypted while executing the
> "query", the result stays the same.

Thanks for doing this. It sounds like it is worth trying to figure
out what makes this db special.

Are you able to make the test.db3 database available for download or
mail it to me directly? This mailing list strips out attachments.

Dan.





>
> The "query" is used to change some table structures in the database and
> executes the following actions on the database;
> BEGIN TRANSACTION;
>
> ALTER TABLE [A] RENAME TO [TMP_A]
> ALTER TABLE [B] RENAME TO [TMP_B]
>
> CREATE TABLE [A] (); -- where A is a new table, stripped columns
> CREATE TABLE [B] (); -- where B is a new table, stripped columns
> CREATE TABLE [C] (); -- stripped columns
>
> INSERT INTO [A]
> SELECT
>   (SELECT [ColumnA] FROM [D] WHERE [ColumnB] = [TMP_A].[ColumnB]),
>   (SELECT [ColumnC] FROM [TMP_B] WHERE ([ColumnD] =
> [TMP_A].[ColumnD]) AND (NOT [ColumnC] ISNULL) LIMIT 1),
>   -- stripped more columns
> FROM [TMP_A];
>
> -- stripped some more INSERT INTO [] SELECT
> -- stripped some UPDATE
>
> DROP TABLE [TMP_A];
> DROP TABLE [TMP_B];
> DROP TABLE [D];
> DROP TABLE [E];
> DROP TABLE [F];
>
> COMMIT TRANSACTION;
>
>
> When omitting the DROP TABLE [TMP_B], the 3.7.4 does not detect
> corruption directly, until some more updates are executed on the
> database.
>
> When the database before the "query" is .dump-ed to file, and .read into
> a new 3.6.23.1 database, the corruption does not become visible after
> the "query". So it seems to be in the header of that database.
>
> I didn't find a way yet to reproduce the issue with a "clean" database.
> Only way I can reproduce it is with some of the database, like the
> test.db3. So I'm running out of ideas.
>
> Dennis
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database disk image is malformed 3.7.x

2011-02-08 Thread Dan Kennedy
On 02/08/2011 10:24 PM, Dennis Geldhof wrote:
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
>> boun...@sqlite.org] On Behalf Of Dan Kennedy
>> Sent: dinsdag 8 februari 2011 14:51
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] database disk image is malformed 3.7.x
>>
>
>> Then run the resulting executable with the path to a database
>> file as the first argument and "dbheader" as the second. i.e.
>>
>> ./showdb test.db dbheader
>>
>> where "test.db" is the database file name. The program prints
>> out a short report that, if the database was ever written by
>> 3.7.0 or newer, includes the version number of the most recent
>> version to do so.
>
> Hi Dan,
>
> The tool shows me the SQLite version of the test.db3 is indeed 3007004.
> So the database created with version 3.6.23.1, is once modified with a
> 3.7.4 and then modified with 3.6.23.1 again.
>
> So I guess we run into something similar to this ticket;
> http://www.sqlite.org/src/info/51ae9cad317a1 .
>
> One strange thing though, I also have a database with version 3007002
> which does not experience the database corruption. Can it be that the
> size just matches accidentally for that database?

The only problem we know of was caused by 3.7.0, which was replaced
by 3.7.0.1 when the problem was discovered. Even with 3.7.0, you needed
the right sequence of writes from 3.7.0 and some earlier version.

I don't know how the corruption you're seeing is caused. I would like
to though.

Do you use auto-vacuum mode? Or incremental vacuum?

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database disk image is malformed 3.7.x

2011-02-08 Thread Dan Kennedy
On 02/08/2011 08:26 PM, Dennis Geldhof wrote:
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
>> boun...@sqlite.org] On Behalf Of Dan Kennedy
>> Sent: dinsdag 8 februari 2011 12:33
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] database disk image is malformed 3.7.x
>>
>
>> We're very interested in how this happened. Do you ever write to the
>> db using the 3.7.4 tools? Or write to it with any other 3.7.X version?
>
> I have several machines over here (with developers) and the corruption
> only occurs on some machines. Besides that I am not able to reproduce
> this issue with a brand new database (maybe because my tools are at
> version 3.7.4). I can only reproduce this issue on some of the database
> that are in use more than 2 months. Looking at the description you gave
> and the symptoms we have, maybe this is caused by issue;
> http://www.sqlite.org/src/info/51ae9cad317a1 . Because the
> System.Data.Sqlite wrapper is still at sqlite version 3.6.23.1 and the
> tools update automatically we could have triggered that issue.
> Is there a way to check if the database was ever opened with a sqlite
> version newer than 3.6.23.1, so we can make sure it was ever edited by a
> tool?

Grab the C file from this link and compile it to a standalone
executable.

 
http://www.sqlite.org/src/raw/tool/showdb.c?name=471c0f8fa472e71bb7654500096a5bdb4ea1fb2a

Then run the resulting executable with the path to a database
file as the first argument and "dbheader" as the second. i.e.

   ./showdb test.db dbheader

where "test.db" is the database file name. The program prints
out a short report that, if the database was ever written by
3.7.0 or newer, includes the version number of the most recent
version to do so.

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database disk image is malformed 3.7.x

2011-02-08 Thread Dan Kennedy
On 02/08/2011 06:00 PM, Dennis Geldhof wrote:
> Hi all,
>
>
>
> I experienced some strange behavior between different versions of
> sqlite. Our application uses the System.Data.Sqlite wrapper
> (http://sqlite.phxsoftware.com/) which is on sqlite version 3.6.23.1,
> but the tools we use to view the database are on sqlite version 3.7.4.
> In the application no signs of database corruption are found, but the
> tools cannot open the database (tools; http://osenxpsuite.net/?xp=3  /
> http://www.sqliteexpert.com/).

Version 3.7.0 of SQLite introduced a field in the database file header
containing the logical size of the database. So that a database file
can be arbitrarily extended (AKA preallocated) without corrupting it.
Earlier versions of SQLite just used the size of the file as the size
of the database.

   http://www.sqlite.org/releaselog/3_7_0.html

The symptoms you are reporting come about if the header field indicates
that the database image is *larger* than the file on disk. If this is
the case, version 3.7.0 and newer assume that the file is corrupt and
report the error you are seeing. Earlier versions never read the header
field and never see a problem. Since the integrity check passes, it is
likely that the database is fine except that the header field is set
incorrectly.
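
For anyone curious, a rough standalone sketch of reading that header field directly (page size at byte offset 16, database size in pages at byte offset 28, both big-endian). It is only an illustration; it ignores the 64 KB page-size encoding and the change-counter validity check, and the showdb tool used elsewhere in this thread reports the same thing more reliably.

#include <stdio.h>

int main(int argc, char **argv){
  unsigned char h[32];
  unsigned pgsz, nPage;
  long nFile;
  FILE *f;
  if( argc<2 ) return 1;
  f = fopen(argv[1], "rb");
  if( f==0 || fread(h, 1, sizeof(h), f)!=sizeof(h) ) return 1;
  fseek(f, 0, SEEK_END);
  nFile = ftell(f);
  fclose(f);
  pgsz  = (h[16]<<8) | h[17];                      /* header offset 16 */
  nPage = ((unsigned)h[28]<<24) | (h[29]<<16)      /* header offset 28 */
        | (h[30]<<8) | h[31];
  printf("page size %u, header says %u pages (%lu bytes), file is %ld bytes\n",
         pgsz, nPage, (unsigned long)nPage*pgsz, nFile);
  return 0;
}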

We're very interested in how this happened. Do you ever write to the
db using the 3.7.4 tools? Or write to it with any other 3.7.X version?
Vacuum it?

> Attached you will find a database file which is not encrypted, and all
> tables are dropped. The corruption is still in there and is used to
> generate the output mentioned above.

I think the mailing list stripped your attachment. Can you put somewhere
we can download it from? Or just mail it to me if you like. Thanks.

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multithreading problem

2011-02-04 Thread Dan Kennedy
On 02/04/2011 01:04 AM, Tiberio, Sylvain wrote:
> You're right!
>
> I checked the Solaris documentation the correct flag to compile
> multithread program is -D_REENTRANT.
>
> I have reconfigured and remade sqlite libs:
>./configure --enable-threadsafe CFLAGS=-D_REENTRANT
>make
>
> I have remade my test file ...and it is working well now!
> Thanks you very much!!!
>
> Another question: Why ./configure doesn't set itself this flag when I
> use --enable-threadsafe in a Solaris system?

A bug I suppose. Will fix it for 3.7.6.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trigger for incrementing a column is slow

2011-02-04 Thread Dan Kennedy
On 02/04/2011 03:24 PM, Kevin Wojniak wrote:
>
> On Feb 3, 2011, at 10:35 PM, Dan Kennedy wrote:
>
>> Do you have a test program that we can use to reproduce this phenomenon?
>>
>> Dan.
>
> Here is a complete program:
> http://pastie.org/pastes/1527560
>
> Set USE_TRIGGER to see the trigger version.

Thanks for this.

The problem is that the trigger version is creating a statement journal
for each INSERT statement. It doesn't *really* need to, as there is no
way that this statement/trigger can hit a constraint after modifying
any rows. However at the moment I think SQLite opens a statement
transaction for any statement that will fire one or more triggers.

Statement journals:

   http://www.sqlite.org/tempfiles.html#stmtjrnl

The effect is that when using the no-trigger version, all that most
of your INSERT and UPDATE statements have to do is modify the database
within the cache. However the trigger version has to copy the original
page data into the statement journal before it can modify them. If the
statement journal is stored in a temporary file, this means many calls
to write().

You can improve the situation some by using an in-memory statement
journal:

   PRAGMA temp_store = memory;

But the trigger version is still slower. Because of the statement
transaction SQLite is opening. Sub-optimal, that.
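
For completeness, the PRAGMA mentioned above slots in before the transaction, e.g. (error checking omitted):

#include <sqlite3.h>

static void bulk_insert(sqlite3 *db){
  sqlite3_exec(db, "PRAGMA temp_store = memory", 0, 0, 0);
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  /* ... the INSERT/UPDATE statements that fire the trigger ... */
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
}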

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trigger for incrementing a column is slow

2011-02-03 Thread Dan Kennedy
On 02/04/2011 06:01 AM, Kevin Wojniak wrote:
> On Feb 3, 2011, at 2:27 PM, Jim Wilcoxson wrote:
>
>> On Thu, Feb 3, 2011 at 5:07 PM, Kevin Wojniak<kain...@kainjow.com>  wrote:
>>
>>>
>>> On Feb 3, 2011, at 11:41 AM, Petite Abeille wrote:
>>>
>>>> On Feb 3, 2011, at 6:53 PM, Kevin Wojniak wrote:
>>>>
>>>>> The trigger is ran once via sqlite3_exec();
>>>>
>>>> Hmm... you mean the trigger is run every single time you perform an
>>> insert, no?
>>>
>>> Yes. I should say the trigger is created once via sqlite3_exec().
>>>
>>>>> Any insight as to why the trigger is significantly slower?
>>>
>>
>> Perhaps SQLite is having to recompile the trigger SQL on every use, whereas
>> your update stmt is prepared.  I tried triggers once, and they were slow for
>> me too.
>>
>> Jim
>> --
>> HashBackup: easy onsite and offsite Unix backup
>> http://www.hashbackup.com
>
> If that were the case that'd seem like a major design flaw.
>
>
> I created a timing profile using the trigger and without. On the version 
> without, most of the time is spent all in sqlite, as expected. The one with 
> triggers had a ton more time spent in the various kernel file system 
> functions (hfs, I'm on Mac OS 10.6.6). So it seems like the triggers are 
> creating significant more file access. I have all the inserts surrounded by 
> BEGIN/END TRANSACTION, so I don't see why this would be doing any more 
> necessary work.

Do you have a test program that we can use to reproduce this phenomenon?

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multithreading problem

2011-02-03 Thread Dan Kennedy
On 02/03/2011 11:00 PM, Tiberio, Sylvain wrote:
> Here the modification in sqlite3.c:
>
> if( unlink(zPath)==(-1)&&  errno!=ENOENT ){
>   perror(zPath);
>   return SQLITE_IOERR_DELETE;
> }
>
> And here is the result:
>
> /home/tiberio/perso/source/sql/bug/try.db-wal: No such file or directory

That error message suggests that errno should be set to ENOENT.
And when you used the main thread to do the work it seems like
it was, since you didn't get the error then.

Do you have to do something special in Solaris to get errno
to work in multi-threaded apps? Something like
-D_POSIX_C_SOURCE=199506L or -mt perhaps?

Is SQLite being compiled with the same thread-related switches as
the rest of the app?

Dan.



>
> Sylvain
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Thursday, February 03, 2011 4:51 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Multithreading problem
>
> On 02/03/2011 10:22 PM, Tiberio, Sylvain wrote:
>>
>> Dan,
>>
>> Thanks for your attention.
>>
>> sqlite3_extended_errcode() return 0xA0A that means
> SQLITE_IOERR_DELETE.
>
> Earlier versions of SQLite ignored the return code of unlink(). That
> is probably why you're not seeing a problem with 3.6.22.
>
> Search the code for a function called "unixDelete". In sqlite3.c if
> you are using the amalgamation, os_unix.c otherwise. Near the top
> of that function is this:
>
> if( unlink(zPath)==(-1)&&  errno!=ENOENT ){
>   return SQLITE_IOERR_DELETE;
> }
>
> That's where your error is coming from. If you can put a call to
> perror() or print the value of errno just before SQLITE_IOERR_DELETE,
> it might show why that call to unlink() is failing. Printing out
> "zPath" as well is probably a good idea.
>
> Dan.
>
>
>
>
>>
>> Here are others information:
>> - My problem occurs in Sparc/Solaris 10 system.
>> - After my program error, the file try.db exists and has the correct
>> right -rw-r--r--, correct owner/group and a null size.
>>
>> Regards,
>>
>> Sylvain
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
>> Sent: Thursday, February 03, 2011 3:16 PM
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] Multithreading problem
>>
>> On 02/02/2011 09:31 PM, Tiberio, Sylvain wrote:
>>> Hi!
>>>
>>>
>>>
>>> I have a problem when I try to create a new database in a thread and
>> try
>>> to add a table on it.
>>>
>>>
>>>
>>> The following C code (see in the end of this e-mail) produces:
>>
>> The program is working Ok with 3.7.5 here.
>>
>> After the IO error in sqlite3_exec(), what value does
>> sqlite3_extended_errcode() return?
>>
>> Dan.
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multithreading problem

2011-02-03 Thread Dan Kennedy
On 02/03/2011 10:22 PM, Tiberio, Sylvain wrote:
>
> Dan,
>
> Thanks for your attention.
>
> sqlite3_extended_errcode() return 0xA0A that means SQLITE_IOERR_DELETE.

Earlier versions of SQLite ignored the return code of unlink(). That
is probably why you're not seeing a problem with 3.6.22.

Search the code for a function called "unixDelete". In sqlite3.c if
you are using the amalgamation, os_unix.c otherwise. Near the top
of that function is this:

   if( unlink(zPath)==(-1) && errno!=ENOENT ){
 return SQLITE_IOERR_DELETE;
   }

That's where your error is coming from. If you can put a call to
perror() or print the value of errno just before SQLITE_IOERR_DELETE,
it might show why that call to unlink() is failing. Printing out
"zPath" as well is probably a good idea.

Dan.




>
> Here are others information:
> - My problem occurs in Sparc/Solaris 10 system.
> - After my program error, the file try.db exists and has the correct
> right -rw-r--r--, correct owner/group and a null size.
>
> Regards,
>
> Sylvain
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Thursday, February 03, 2011 3:16 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Multithreading problem
>
> On 02/02/2011 09:31 PM, Tiberio, Sylvain wrote:
>> Hi!
>>
>>
>>
>> I have a problem when I try to create a new database in a thread and
> try
>> to add a table on it.
>>
>>
>>
>> The following C code (see in the end of this e-mail) produces:
>
> The program is working Ok with 3.7.5 here.
>
> After the IO error in sqlite3_exec(), what value does
> sqlite3_extended_errcode() return?
>
> Dan.
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multithreading problem

2011-02-03 Thread Dan Kennedy
On 02/02/2011 09:31 PM, Tiberio, Sylvain wrote:
> Hi!
>
>
>
> I have a problem when I try to create a new database in a thread and try
> to add a table on it.
>
>
>
> The following C code (see in the end of this e-mail) produces:

The program is working Ok with 3.7.5 here.

After the IO error in sqlite3_exec(), what value does
sqlite3_extended_errcode() return?

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL for single user database on NFS and Solaris

2011-01-31 Thread Dan Kennedy
On 01/31/2011 11:53 AM, Shawn Wilsher wrote:
> On Sun, Jan 30, 2011 at 8:26 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:
>> If you set "PRAGMA locking_mode=EXCLUSIVE" before reading or
>> writing the WAL-mode database it might work.
> That would make us only be able to use one database connection though,
> would it not?

Right. At most a single connection at a time.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL for single user database on NFS and Solaris

2011-01-30 Thread Dan Kennedy
On 01/30/2011 10:06 AM, Ginn Chen wrote:
> I think a single process accessing a single Sqlite database at a time over 
> NFS is supposed to be fine.
>
> But it is not working on Solaris.
> On Solaris, man page of mmap() has
>
>   EAGAINThe file to be  mapped  is  already  locked  using
> advisory   or   mandatory   record   locking.  See
> fcntl(2).
>
> I found if the file on NFS has been locked with fcntl(), mmap() may fail.
> It doesn't fail if both l_start and l_end are 0.
>
> Thus, I got "Error: disk I/O error" for any operation after setting 
> journal_mode to WAL.
> See https://bugzilla.mozilla.org/show_bug.cgi?id=629296
>
> The problem is setting journal_mode to WAL is successful, but it cannot be 
> set back.
> User has to move the db file to another filesystem and set journal_mode to 
> delete.

If you set "PRAGMA locking_mode=EXCLUSIVE" before reading or
writing the WAL-mode database it might work.

   http://www.sqlite.org/wal.html#noshm
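
A minimal sketch of that sequence (error checking omitted); the locking_mode pragma has to be issued before the WAL database is first read or written, so that SQLite never needs the shared-memory file:

#include <sqlite3.h>

static void open_wal_exclusive(sqlite3 **pDb, const char *zFile){
  sqlite3_open(zFile, pDb);
  sqlite3_exec(*pDb, "PRAGMA locking_mode = EXCLUSIVE", 0, 0, 0);
  sqlite3_exec(*pDb, "PRAGMA journal_mode = WAL", 0, 0, 0);
}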

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query question

2011-01-25 Thread Dan Kennedy
On 01/25/2011 09:47 PM, Ian Hardingham wrote:
> Hey guys.
>
> I have the following table:
>
> ratingsTable (id INTEGER PRIMARY KEY AUTOINCREMENT, mtId INTEGER, user
> TEXT, rating INTEGER);
>
> mtId links to another table's primary key
>
> I wish to have a query which gives me the mtId which is represented most
> often in the ratingsTable.
>
> Does anyone have any advice?


   SELECT mtId FROM ratingsTable
   GROUP BY mtId
   ORDER BY count(*) DESC
   LIMIT 1;

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXTERNAL:Re: SQLite version 3.7.5 - code freeze

2011-01-25 Thread Dan Kennedy
On 01/25/2011 07:30 PM, Black, Michael (IS) wrote:
> Yes -- MSVC spits out a bunch of constant expression warnings.  Most are 
> bogus (e.g. while (1)). I looked at them all.
>
> In this case if it's a compile-time option (which it is) wouldn't you 
> normally put it in an #if block?

Normally. But in this case we need the C compiler to analyze
and optimize the expression and determine whether it evaluates
to 0 or 1 on the target. The pre-processor can't do it.

> It confused me when looking at the code as one or the other block will never 
> execute ... dead code.
>
> And...while looking at where it's called I noticed
>
> sqlite3Prepare16
>const void *zSql, /* UTF-8 encoded SQL statement. */
>
> sqlite3_prepare16
>const void *zSql, /* UTF-8 encoded SQL statement. */
>
> Should be UTF-16 in the comment?  That would match the prototypes.

Should be. Yes.

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.7.5 - code freeze

2011-01-24 Thread Dan Kennedy
On 01/25/2011 12:20 PM, Roger Binns wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 01/24/2011 06:05 PM, Roger Binns wrote:
>> I'm finding custom VFS code is no longer working at all.
>
> The cause is a new xFileControl operation SQLITE_FCNTL_SYNC which appears to
> needlessly duplicate the existing xSync method.
>
> Additionally unlike other file controls it returns results via a parameter
> passed in instead of via the return code of the xFileControl method itself.
>
> And even more amusingly none of the existing SQLite provided VFS code
> implement it anyway!
>
> Hopefully it can just be removed.
>
> If not I am going to have to build a table in my code for various file
> control operations of when it is normal to have errors, which ones return
> errors via the normal error code mechanism and which do it by passing in
> pointers to where it should be written etc.


Does the aspw xFileControl method modify the value of the output
parameter in this case?

The idea was that existing VFS implementations would return SQLITE_ERROR,
since they do not recognize SQLITE_FCNTL_SYNC, but would leave the value
of the integer that the parameter points to unmodified. Maybe that is too
optimistic an assumption.
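
For a custom VFS, that convention would look something like this (sketch
only; the "my" prefix is a placeholder, not part of any real VFS):

  #include <sqlite3.h>

  static int myFileControl(sqlite3_file *pFile, int op, void *pArg){
    (void)pFile;
    switch( op ){
      /* case SQLITE_FCNTL_XXX: ...opcodes this VFS actually implements... */
      default:
        (void)pArg;            /* deliberately left unmodified */
        return SQLITE_ERROR;   /* "opcode not recognized" */
    }
  }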

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.7.5 - code freeze

2011-01-24 Thread Dan Kennedy
On 01/25/2011 04:10 AM, Black, Michael (IS) wrote:
> Is this if statement meant to be constant?  Or should it be comparing the 
> requested encoding instead of SQLITE_UTF16NATIVE?
>
> SQLITE_PRIVATE int sqlite3Utf16ByteLen(const void *zIn, int nChar){
>   int c;
>   unsigned char const *z = zIn;
>   int n = 0;
>
>   if( SQLITE_UTF16NATIVE==SQLITE_UTF16BE ){
>     while( n<nChar ){
>       READ_UTF16BE(z, 1, c);
>       n++;
>     }
>   }else{
>     while( n<nChar ){
>       READ_UTF16LE(z, 1, c);
>       n++;
>     }
>   }
>   return (int)(z-(unsigned char const *)zIn);
> }

The ( SQLITE_UTF16NATIVE==SQLITE_UTF16BE ) expression is supposed
to be constant (for a given architecture). One branch of the "if"
condition runs on big-endian hosts, the other on little-endian.

Is that the expression that is causing the warning?

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] JOIN bug in 3.7.2, not in 3.6.22

2011-01-17 Thread Dan Kennedy


> -- if endtime is in a different position in the table, the query works
> CREATE TABLE interval (endtime INTEGER, entryid INTEGER, starttime INTEGER);
> CREATE TABLE entry (id INTEGER PRIMARY KEY AUTOINCREMENT);
>
> INSERT INTO entry (id) VALUES ( 42);
>
> INSERT INTO interval (endtime, entryid, starttime) VALUES (2, 42, 1);
> -- if the endtime below is greater than or equal to the endtime above,
> the query works
> INSERT INTO interval (endtime, entryid, starttime) VALUES (1, 42, 3);
> -- if endtime is removed from the projection, the query works
> -- if the LEFT JOIN is changed to INNER JOIN, the query works

> SELECT starttime, endtime from entry LEFT JOIN interval ON
> interval.entryid = entry.id GROUP BY entry.id HAVING starttime =
> MAX(starttime);

Strictly speaking, the result of the HAVING expression is undefined
here, as "starttime" is neither an aggregate nor part of the GROUP BY
clause.
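
If the intent is "the starttime and endtime of the row holding the maximum
starttime for each entry", one deterministic formulation uses a correlated
sub-query. An untested sketch, and only one of several possible rewrites
(the helper name is made up):

  #include <stdio.h>
  #include <sqlite3.h>

  /* The final OR keeps entries that have no interval rows at all,
  ** preserving the LEFT JOIN behaviour. */
  static void print_latest_intervals(sqlite3 *db){
    const char *zSql =
      "SELECT i.starttime, i.endtime "
      "  FROM entry e LEFT JOIN interval i ON i.entryid = e.id "
      " WHERE i.starttime = (SELECT MAX(starttime) "
      "                        FROM interval WHERE entryid = e.id) "
      "    OR i.starttime IS NULL;";
    sqlite3_stmt *pStmt = 0;
    if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("start=%d end=%d\n",
               sqlite3_column_int(pStmt, 0), sqlite3_column_int(pStmt, 1));
      }
    }
    sqlite3_finalize(pStmt);
  }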

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Understanding EXPLAIN QUERY

2011-01-16 Thread Dan Kennedy
On 01/17/2011 06:36 AM, Sam Carleton wrote:
> I am trying to optimize a query by using the EXPLAIN QUERY, but the
> documentation on the web (http://www.sqlite.org/eqp.html) does not match the
> version of SQLite I am using (v3.6.23.1).  The documentation says there are
> three columns, but I am only seeing two columns.  What do the two columns
> mean?

Columns 0 and 1 of the old output are the same as columns 1 and 2 of
the new output.
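
A small helper like the following (untested sketch; the helper name is made
up) prints whatever columns the installed version returns, which makes the
comparison with the documentation easy:

  #include <stdio.h>
  #include <sqlite3.h>

  static void print_query_plan(sqlite3 *db, const char *zSql){
    sqlite3_stmt *pStmt = 0;
    char *zEqp = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", zSql);
    if( zEqp && sqlite3_prepare_v2(db, zEqp, -1, &pStmt, 0)==SQLITE_OK ){
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        int i, n = sqlite3_column_count(pStmt);
        for(i=0; i<n; i++){
          printf("%s%s", i ? " | " : "",
                 (const char*)sqlite3_column_text(pStmt, i));
        }
        printf("\n");
      }
    }
    sqlite3_finalize(pStmt);
    sqlite3_free(zEqp);
  }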

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to track a record if the aliased id is changed

2011-01-13 Thread Dan Kennedy
On 01/13/2011 02:31 AM, Max Vlasov wrote:
> Hi,
> for queries like UPDATE ... WHERE rowid=... one can in most cases reread the
> record (based on the rowid) and keep for example the cursor in the grid at
> the same record. But what if one of the changed fields is aliased to rowid, is
> there a way to find/track the changed record? In other words, how to find
> out the new rowid value for this record?

   CREATE TEMP TRIGGER...
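
For example, an untested sketch (the table name "t1", its aliased primary
key "id", and the "rowid_changes" table are placeholders for your schema):

  #include <sqlite3.h>

  /* Record old/new rowid pairs whenever the INTEGER PRIMARY KEY column is
  ** changed, so the application can re-locate the row afterwards. */
  static int track_rowid_changes(sqlite3 *db){
    return sqlite3_exec(db,
      "CREATE TEMP TABLE IF NOT EXISTS rowid_changes(old_id, new_id);"
      "CREATE TEMP TRIGGER IF NOT EXISTS t1_rowid_change "
      "  AFTER UPDATE OF id ON t1 WHEN old.id<>new.id BEGIN "
      "    INSERT INTO rowid_changes VALUES(old.id, new.id); "
      "  END;",
      0, 0, 0);
  }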

> I thought about a partial solution: to detect that the rowid used in the
> query no longer exists, so there was a change to some new value, but what
> the new value is remains unknown.
>
> There's also pragma table_info returning pk info, and I probably can track
> the aliased fields but there's at least one exception in the docs...
>
> Any thoughts?
>
> Thanks,
>
> Max Vlasov
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] long running INSERT (up to now ~ 11 hours and still working)

2011-01-11 Thread Dan Kennedy
On 01/11/2011 03:00 PM, Oliver Peters wrote:
> Hello,
>
> I'm on WinXP and using sqlite 3.7.4 with the CLI. I try to insert ~ 10,100,000
> records into a schema with different tables (http://pastebin.com/cbsPHNEj).
> The db file is already 1.9 GB when I start the INSERTs via
>
> sqlite3 -bail extra.db3<  inserts.sql
>
> The statements in inserts.sql look like
>
> PRAGMA foreign_keys = ON;
> ...
> PRAGMA foreign_keys = OFF;
>
> Most INSERTS are done into the table Verteilerdaten (>10,000,000). I think the
> time depends heavily on the activated FOREIGN KEYs - is my assumption correct,
> and is this a behaviour I can only avoid by not switching this PRAGMA on?

It could be correct. Once a database gets large enough, the speed of
INSERT statements tends to be limited by seeking around the file to
read data. Particularly if your transactions are also large.

If you insert a row into a table that has a foreign key constraint,
it has to search for the corresponding key in the parent table. So
if the parent table is also large (too large for the cache), those
searches might be slowing you down significantly. So if you can get
away with doing the inserts with foreign keys turned off, it is
worth trying.
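
A bulk-load wrapper along these lines is one way to try it (untested
sketch; the function name and the cache_size value are just examples):

  #include <sqlite3.h>

  static int bulk_load(sqlite3 *db){
    int rc;
    sqlite3_exec(db, "PRAGMA foreign_keys=OFF;", 0, 0, 0);
    sqlite3_exec(db, "PRAGMA cache_size=100000;", 0, 0, 0); /* more page cache */
    rc = sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    /* ...run the INSERT statements here, ideally via prepared statements
    ** that are reset and re-bound for each row... */
    if( rc==SQLITE_OK ) rc = sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    return rc;
  }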

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] BUSY on COMMIT when EXCLUSIVE is held is an unexpected result

2010-12-17 Thread Dan Kennedy
On 12/17/2010 07:54 PM, Ben Harper wrote:
> I have this case:
>
> Thread 1                    Thread 2
> ---------------------------------------------------
> BEGIN EXCLUSIVE
>                             BEGIN EXCLUSIVE ->  BUSY
>                             ... etc ...
>                             BEGIN EXCLUSIVE ->  BUSY
> COMMIT
>                             BEGIN EXCLUSIVE ->  OK
>                             ...
>
>
> The commit statement of Thread 1 will sometimes fail with a BUSY error.
> A simple workaround is to try the commit a couple times if you receive
> a BUSY error. Having done this, I cannot get this test to fail anymore.

I'm not sure how you got this to happen. One way is to try the COMMIT
from within a user-defined function executed by a write statement - but
that's not terribly likely.

Were you using WAL mode? Or regular rollback mode (the default)?
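
For reference, the retry workaround described above might look something
like this (untested sketch; the function name and the 50 ms back-off are
arbitrary):

  #include <sqlite3.h>

  static int commit_with_retry(sqlite3 *db, int nRetry){
    int rc;
    do{
      rc = sqlite3_exec(db, "COMMIT;", 0, 0, 0);
      if( rc!=SQLITE_BUSY ) break;
      sqlite3_sleep(50);    /* back off briefly before trying again */
    }while( nRetry-- > 0 );
    return rc;
  }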
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL index in memory - multiple connections

2010-12-09 Thread Dan Kennedy
On 12/09/2010 04:08 PM, Yoni Londner wrote:
> Hi,
>
> I want to use the feature that enables WAL to use heap memory instead of
> shared memory for the WAL index.
> Using locking_mode=exclusive is not a good enough option, since I want to
> access the DB from two threads, and with locking_mode=exclusive I get a
> "database is locked" error.
>
> How can I use in memory WAL index while using multiple DB connections?

Presumably each thread has its own db handle.

You could enable shared-cache mode (although that gives you table-level
locking, not the MVCC that WAL provides).

The alternative is to create your own VFS that stores the WAL index in
heap memory.
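
A minimal sketch of the shared-cache option (untested; "test.db" and the
function name are placeholders): each thread still opens its own handle,
but with shared-cache enabled the handles share a single cache within the
process.

  #include <sqlite3.h>

  static int open_handle(sqlite3 **pDb){
    sqlite3_enable_shared_cache(1);    /* must be called before sqlite3_open() */
    return sqlite3_open("test.db", pDb);
  }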

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with disableTerm() and virtual tables

2010-12-07 Thread Dan Kennedy

>
> When I attempt to perform a certain join (shown below), the disableTerm 
> function fails in the ALWAYS assertion, because the wtFlags field already has 
> the TERM_CODED bit set. As far as I can tell, it is looking at the first 
> constraint in the ON clause of the LEFT JOIN, possibly for the second time.
>
> The problem goes away on any of the following conditions:
>
> -  native tables are used as opposed to virtual tables
> -  the first constraint of the WHERE clause (one.a = 3) is omitted
> -  the constant from the WHERE clause is repeated in the ON clause 
> (three.a = 3)
>
> I suspect there is a subtle difference in parsing and/or code generation
> between native and virtual tables that leads to this effect.

I couldn't immediately reproduce this using the "echo" virtual
table module. It could have something to do with the scanning
costs your virtual tables are returning to SQLite.

Does this happen in 3.7.4?

Can you provide us with code for virtual tables that cause the bug
to occur?

Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3 bug?

2010-12-07 Thread Dan Kennedy
On 12/08/2010 04:18 AM, Iker Arizmendi wrote:
> The function that opens a cursor for the simple tokenizer,
> simpleOpen, does not set the "pTokenizer" member of the
> returned cursor. Ie, it appears the following line is
> missing:
>
>  c->base.pTokenizer = pTokenizer;
>
> which causes problems in simpleNext . Possible bug?

How do we reproduce the problem?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL file size

2010-12-07 Thread Dan Kennedy
On 12/07/2010 09:49 PM, Yoni Londner wrote:
> Hi,
>
> Yes, in this scheme the checksum is based on salt values and the frame's
> own content.
>
> Note that the current design solve a potential DB corruption bug in
> sqlite. The current WAL design is based on the fact that once sqlite writes
> pages successfully to the WAL, they will never get corrupted. But this
> assumption is not true. Take, for example, the following situation:
>
> H 1 1 1 2 2 2 3 3 3 3
>
> We have here 10 pages in 3 transactions. Let's say that sqlite started a
> checkpoint, successfully checkpointed transactions 1 and 2, and started
> copying transaction 3 to the DB. While copying the first pages of
> transaction 3, pages from transaction 4 are written to the WAL.
> Now, since the pages most likely are not aligned to the sector size, the
> OS might read part of the last page of transaction 3, and write it along
> with the first page of transaction 4.
> If a power failure occurs at this point, then the first pages of
> transaction 3 have already been copied to the DB, while the last page of
> transaction 3 is corrupted, so when recovering, sqlite will not finish
> copying transaction 3 to the DB, and the DB will stay corrupted.

In synchronous=full mode, we add extra copies of the last frame of
each transaction (the one with the commit flag set) to beat this.
So the WAL log would look like this:

H 1 1 1a 1b 2 2 2a 2b 3 3 3...

Frame 1b is a copy of 1a, and 2b is a copy of 2a. So although a power 
failure while writing the first frame of transaction 3 can damage
frame 2b, this doesn't matter as it is just a duplicate.

There may be more than one duplicate inserted if the device has very
large sectors.
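
The mode being described here is full synchronous in WAL; selecting it
looks like this (untested sketch; "test.db" and the function name are
placeholders, and error checking is omitted):

  #include <sqlite3.h>

  static int open_wal_full_sync(sqlite3 **pDb){
    if( sqlite3_open("test.db", pDb)!=SQLITE_OK ) return SQLITE_ERROR;
    sqlite3_exec(*pDb, "PRAGMA journal_mode=WAL;", 0, 0, 0);
    sqlite3_exec(*pDb, "PRAGMA synchronous=FULL;", 0, 0, 0);
    return SQLITE_OK;
  }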

> While this problem can occur on any device, it is more likely to happen
> on devices which use flash memory (mostly mobile devices), since the
> size of a flash memory sector tends to be larger than on non-flash memory.

It's a real problem and it does come up in practice. You are right
to include a solution in your plan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

