[sqlite] [sqlite-dev] SQLite version 3.8.12 enters testing

2015-10-08 Thread Filip Navara
All the extensions have to be explicitly enabled by compile-time flags, so
including FTS5 in the amalgamation would not affect anyone who does not opt
in. This is already the case for FTS3/4, which has been included in the
amalgamation for several years. The RTree extension and the new JSON
extension are included as well. It seems only logical that FTS5 should be
included too, since it is part of the official releases.

F.

On Thu, Oct 8, 2015 at 4:01 PM, Hick Gunter  wrote:

> I expect users running SQLite on embedded devices would be thrilled...
>
> -----Original Message-----
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:
> sqlite-users-bounces at mailinglists.sqlite.org] On Behalf Of Filip Navara
> Sent: Thursday, 08 October 2015 15:55
> To: sqlite-dev at mailinglists.sqlite.org
> Cc: General Discussion of SQLite Database
> Subject: Re: [sqlite] [sqlite-dev] SQLite version 3.8.12 enters testing
>
> Would it be possible to include FTS5 in the amalgamation?
>
> Thanks,
> Filip Navara
>
> On Wed, Oct 7, 2015 at 4:42 PM, Richard Hipp  wrote:
>
> > The release checklist for version 3.8.12
> > (https://www.sqlite.org/checklists/3081200/index) is now active.  The
> > 3.8.12 release will occur when the checklist goes all-green.
> >
> > A preliminary change log for version 3.8.12 can be seen at
> > https://www.sqlite.org/draft/releaselog/3_8_12.html and preliminary
> > documentation can be seen at https://www.sqlite.org/draft/
> >
> > If you have issues or concerns with the current SQLite trunk, please
> > speak up *now*.
> >
> > --
> > D. Richard Hipp
> > drh at sqlite.org
> > ___
> > sqlite-dev mailing list
> > sqlite-dev at mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-dev
> >
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>
> ___
>  Gunter Hick
> Software Engineer
> Scientific Games International GmbH
> FN 157284 a, HG Wien
> Klitschgasse 2-4, A-1130 Vienna, Austria
> Tel: +43 1 80100 0
> E-Mail: hick at scigames.at
>
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] [sqlite-dev] SQLite version 3.8.12 enters testing

2015-10-08 Thread Filip Navara
Would it be possible to include FTS5 in the amalgamation?

Thanks,
Filip Navara

On Wed, Oct 7, 2015 at 4:42 PM, Richard Hipp  wrote:

> The release checklist for version 3.8.12
> (https://www.sqlite.org/checklists/3081200/index) is now active.  The
> 3.8.12 release will occur when the checklist goes all-green.
>
> A preliminary change log for version 3.8.12 can be seen at
> https://www.sqlite.org/draft/releaselog/3_8_12.html and preliminary
> documentation can be seen at https://www.sqlite.org/draft/
>
> If you have issues or concerns with the current SQLite trunk, please
> speak up *now*.
>
> --
> D. Richard Hipp
> drh at sqlite.org
> ___
> sqlite-dev mailing list
> sqlite-dev at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-dev
>


Re: [sqlite] Partial indexes not working for me

2015-01-29 Thread Filip Navara
Actually, running ANALYZE didn't seem to help. I tried other partial
indexes as well, and none of them were used either:

sqlite> create index "i2" on "t" ("id") where "flags" & 1;
sqlite> explain query plan select * from "t" where "flags" & 1;
0|0|0|SCAN TABLE t
sqlite> create index "i3" on "t" ("id") where "syncFolder" <> 0;
sqlite> explain query plan select * from "t" where "syncFolder" <> 0;
0|0|0|SCAN TABLE t

It is an oversimplification of my actual database, where all of these
queries are used together in one condition

("flags" & 1) AND ("uniqueId" IS NULL OR "syncFolder" <> 0)

that I was hoping to cover with a partial index. Currently I use a bunch of
triggers to basically create the index myself, but I was hoping to replace
it with the partial indexes and avoid having the complex triggers. The
table "t" usually has thousands to millions of rows, while the index itself
should cover only few rows in most cases, typically none.
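
For reference, this is roughly what the trigger-based workaround looks like
today. This is a simplified, hypothetical sketch against the table "t" above;
the table and trigger names ("t_flagged" etc.) are made up for illustration
and the real triggers also track the other conditions:

  CREATE TABLE "t_flagged" ("id" INTEGER NOT NULL PRIMARY KEY);

  -- keep the side table in sync so it acts as a manual "partial index"
  CREATE TRIGGER "t_flagged_ins" AFTER INSERT ON "t" WHEN NEW."flags" & 1
  BEGIN
    INSERT INTO "t_flagged" ("id") VALUES (NEW."id");
  END;

  CREATE TRIGGER "t_flagged_upd" AFTER UPDATE OF "flags" ON "t"
  BEGIN
    DELETE FROM "t_flagged" WHERE "id" = OLD."id" AND NOT (NEW."flags" & 1);
    INSERT OR IGNORE INTO "t_flagged"
      SELECT NEW."id" WHERE NEW."flags" & 1;
  END;

  CREATE TRIGGER "t_flagged_del" AFTER DELETE ON "t"
  BEGIN
    DELETE FROM "t_flagged" WHERE "id" = OLD."id";
  END;

Queries then join against "t_flagged" instead of relying on a partial index.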

Best regards,
Filip Navara



On Thu, Jan 29, 2015 at 9:29 PM, Richard Hipp <d...@sqlite.org> wrote:

> On 1/29/15, Filip Navara <filip.nav...@gmail.com> wrote:
> > Hello,
> >
> > I tried really hard to get partial indexes working, but SQLite refuses to
> > use them:
> >
> >> create table "t" ("id" INTEGER NOT NULL PRIMARY KEY, "flags" INTEGER,
> > "uniqueId", "syncFolder" INTEGER);
> >> create index "i1" on "t" ("id") where "uniqueId" IS NULL;
> >> explain query plan select * from "t" where "uniqueId" IS NULL;
> > 0|0|0|SCAN TABLE t
> >> explain query plan select * from "t" indexed by "i1" where "uniqueId" IS
> > NULL;
> > Error: no query solution
> >
> > Any advice what am I doing wrong?
> >
>
> I seem to recall adding a rule to the query planner that refuses to
> use an IS NULL constraint with an index unless you have first run
> ANALYZE.  It might also require compiling with SQLITE_ENABLE_STAT4.
>
> The usual case with partial indexes is WHERE field IS NOT NULL ---
> with a "NOT".  You are taking partial indexes into an area for which
> they were not optimized.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Partial indexes not working for me

2015-01-29 Thread Filip Navara
Hello,

I tried really hard to get partial indexes working, but SQLite refuses to
use them:

> create table "t" ("id" INTEGER NOT NULL PRIMARY KEY, "flags" INTEGER,
"uniqueId", "syncFolder" INTEGER);
> create index "i1" on "t" ("id") where "uniqueId" IS NULL;
> explain query plan select * from "t" where "uniqueId" IS NULL;
0|0|0|SCAN TABLE t
> explain query plan select * from "t" indexed by "i1" where "uniqueId" IS
NULL;
Error: no query solution

Any advice what am I doing wrong?

Thanks,
Filip
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.8.5 beta

2014-05-27 Thread Filip Navara
Visual Studio 2013 Update 1 and newer allow explicitly targeting Windows
XP, but the DLL has to be compiled with the correct runtime library.

Best regards,
Filip Navara


On Tue, May 27, 2014 at 11:33 AM, Jan Nijtmans <jan.nijtm...@gmail.com> wrote:

> 2014-05-27 5:57 GMT+02:00 jose isaias cabrera <cabr...@wrc.xerox.com>:
> > "Richard Hipp" wrote...
> 
> >> Please report any problems to this mailing list and/or directly to me.
>
> > I am having problem using the pre-built DLL:
> > http://www.sqlite.org/snapshot/sqlite-dll-win32-x86-201405262205.zip
> >
> > I am getting this error:
> >
> > ProgName.exe Entry Point Not Found
> > The procedure entry point FlsAlloc cound not be located in the dynamic
> Link
> > library KERNEL32.dll
>
> The entry point FlsAlloc doesn't occur anywhere in the sqlite3.c source
> code, it is generated by Visual Studio. Previous sqlite3.dll versions
> were generated with MinGW (version 3.17, I guess) that's why
> they worked fine on Windows XP.
>
> If you are interested to try an sqlite3.dll built by the latest
> MinGW (version 4.0.3), you can have a look here:
> <https://sourceforge.net/projects/cyqlite/files/>
> This project is still in the startup phase, but I'm trying to fill the
> gap Microsoft is leaving behind..
>
> Thanks!
>
> Regards,
>Jan Nijtmans
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Unused partial index

2013-10-30 Thread Filip Navara
Hi,

I tried to convert our custom tables and triggers into partial indexes.
Unfortunately it seems that the query planner is unable to recognize
identical terms. Could this be fixed or is there any workaround?

Thanks,
Filip Navara

SQLite version 3.8.1 2013-10-17 12:57:35
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> CREATE TABLE "a" ("id" INTEGER PRIMARY KEY, "flags" INTEGER);
sqlite> CREATE INDEX "b" ON "a" ("flags") WHERE "flags" & 16;
sqlite> EXPLAIN QUERY PLAN SELECT * FROM "a" WHERE "flags" & 16;
0|0|0|SCAN TABLE a
sqlite> EXPLAIN QUERY PLAN SELECT * FROM "a" INDEXED BY "b" WHERE "flags" &
16;
Error: no query solution
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite DB's disk image getting malformed

2013-06-20 Thread Filip Navara
This sounds very much like the bug described in section 7.4 of
http://www.sqlite.org/howtocorrupt.html, except for the SQLite version. Are
you sure that it's version 3.6.14.2 and not newer?
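
For what it's worth, the quickest way to confirm which SQLite version the
dylib/dll actually embeds is to ask the library itself through the same
connection the application uses, for example:

  SELECT sqlite_version();  -- reports the version compiled into the library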

Best regards,
Filip Navara


On Wed, Jun 19, 2013 at 9:09 PM, Dhiraj Sadhwani <sadhwan...@gmail.com> wrote:

> Hello,
> I have sqlite code amalgamation(version 3.6.14.2.) as a part of a
> dylib/dll.
> As part of exported apis from this dylib we open a sql session/connection
> and keep it open for a longish duration and share the session handle among
> different threads of the same process for doing transactions on the DB.
> We do not use any other locks for transaction synchronization on the DB
> apart from the default ones being used in the sqlite amalgamation code.
>
> Different versions of this dylib could be used by multiple processes to
> access/perform-transactions on the same DB file at the same time.
>
> The DB generally is a small one with hardly 9-10 entries present at any
> time at max, though it does lot of over-wrties/re-writes.
>
> Off-late I am observing transaction failures(though the frequency is quite
> less) with error as: disk image of the DB being malformed and on running
> integrity_check it reports following (2 of cases mentioned below)
>
> sqlite> pragma integrity_check;
> *** in database main ***
> Main freelist: freelist leaf count too big on page 9
> On tree page 2 cell 0: 2 of 3 pages missing from overflow list starting at
> 10
> Page 11 is never used
> Page 12 is never used
> sqlite> .exit
> .
> .
> sqlite> pragma integrity_check;
> *** in database main ***
> On tree page 11 cell 0: 1 of 2 pages missing from overflow list starting at
> 9
> Page 10 is never used
> sqlite>
>
>
> Any idea if the usage above mentioned has a significant flaw.
> I highly appreciate any help on what could be causing the issue and further
> directions to probe on and avoid the issue.
>
> Thanks,
> Dhiraj
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Improving access speed to fetch indexed column.

2012-07-12 Thread Filip Navara
This roughly resembles an issue I witnessed on our databases about a
year ago (thread "Improving the query optimizer" on this mailing
list). SQLite doesn't use a covering index to fulfill a query unless
the query is filtered/ordered by a column included in the index. In
many cases the covering index is actually smaller than the full table,
and thus there is less data to read.

We use a modified version of SQLite with the attached patch to
alleviate the issue.
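
For illustration, a minimal sketch of the effect, assuming a table "main"
with an ordinary index on its "time" column as in the report below (the
exact schema isn't shown in this thread):

  EXPLAIN QUERY PLAN SELECT time FROM main;                -- SCAN TABLE main
  EXPLAIN QUERY PLAN SELECT time FROM main ORDER BY time;  -- covering index scan

The second form is the ORDER BY trick that nudges the planner onto the
covering index even though the sort is logically unnecessary.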

Best regards,
Filip Navara

On Thu, Jul 12, 2012 at 12:59 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Wed, Jul 11, 2012 at 7:49 PM, Kohji Nakamura <k.nakam...@nao.ac.jp> wrote:
>
>> Hello all,
>>
>> I found that the access to an indexed column without "order by" is slower
>> than the one with "order by" in SQLite 3071300.
>> Using an index rather than an actual column is faster even if there is no
>> need to use the index when the column has index.
>> In general, to fetch column value, there is no need to access actual
>> column when it has a dedicated index or it is a first column of composite
>> index.
>> I hope SQLite would do this optimization which is common to other DBMSs.
>>
>> Followings are the results of the comparison. Time column of main table
>> has an index.
>>
>> After disk cache is cleared,
>> SQL: select time from main order by time;
>> Total : 38.1312 sec
>>
>> SQL: select time from main;
>> Total : 95.395 sec
>>
>
> Can you please send us the output of EXPLAIN QUERY PLAN for these two
> queries on your schema?
>
>
>>
>> When data is cached,
>> SQL: select time from main order by time;
>> Total : 0.497981 sec
>>
>> SQL: select time from main;
>> Total:: 0.925122 sec
>>
>> Thank you for developing a very cool DBMS, SQLite!
>> Kohji Nakamura
>> --
>> k.nakam...@nao.ac.jphttp://www.nao.ac.jp/E/index.html
>> National Astronomical Observatory of Japan
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Inconsistent limit checks

2011-12-06 Thread Filip Navara
Hi,

there seems to be an inconsistency in the limit checks between zeroblob() and
the rest of the code:

sqlite> CREATE TABLE "a" ("c" BLOB);
sqlite> INSERT INTO "a" VALUES (zeroblob(11));
Error: string or blob too big
sqlite> INSERT INTO "a" VALUES (zeroblob(10));
sqlite> SELECT "c" FROM "a";
Error: string or blob too big

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Excessive memory use in SQLite 3.7.9

2011-12-05 Thread Filip Navara
Hello,

we are experiencing a problem with SQLite 3.7.9 where 213 Mb (!) are
allocated for the following query:

SELECT "partBody" IS NULL AS "partBodyIsNull" FROM LocalMailContents;

The database has a table LocalMailContents with several text / integer
columns and two BLOB columns at the end ("partHeader" and "partBody"). Both
of the blob columns either contain NULL or a value created by zeroblob()
function and later filled by the blob API. In our tests the last column is
filled with large data, specifically about 213 Mb. SQLite actually tries to
read the data when evaluating the above query and moreover it reads all the
data into a single memory chunk.

The code that is responsible for the read is in OP_Column:

  if( u.am.aOffset[u.am.p2] ){
    assert( rc==SQLITE_OK );
    if( u.am.zRec ){
      MemReleaseExt(u.am.pDest);
      sqlite3VdbeSerialGet((u8 *)&u.am.zRec[u.am.aOffset[u.am.p2]],
                           u.am.aType[u.am.p2], u.am.pDest);
    }else{
      u.am.len = sqlite3VdbeSerialTypeLen(u.am.aType[u.am.p2]);
      sqlite3VdbeMemMove(&u.am.sMem, u.am.pDest);
      rc = sqlite3VdbeMemFromBtree(u.am.pCrsr, u.am.aOffset[u.am.p2],
                                   u.am.len, u.am.pC->isIndex, &u.am.sMem);
      if( rc!=SQLITE_OK ){
        goto op_column_out;
      }
      u.am.zData = u.am.sMem.z;
      sqlite3VdbeSerialGet((u8*)u.am.zData, u.am.aType[u.am.p2],
                           u.am.pDest);
    }
    u.am.pDest->enc = encoding;
  }else{

A test database that exhibits the behavior is available at
http://www.emclient.com/temp/mail_data.zip.

Is this something that is intended behavior? Is the memory allocation
really necessary?

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL file size

2011-11-30 Thread Filip Navara
On Wed, Nov 30, 2011 at 4:02 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Wed, Nov 30, 2011 at 4:41 AM, Sreekumar TP <sreekumar...@gmail.com> wrote:
>
> > Hello,
> >
> > I have sqlite configure to do manual checkpoint. I do checkpoint every
> few
> > hundred records. I expect the WAL to reduce in size and become zero when
> > there are no more inserts done.
> > However, I see that the WAL file size is static and does not reduce in
> > size.  Why isnt manual checkpoint reducing the size of WAL file.
> >
>
> Because we have experimentally determined that it is faster to overwrite an
> existing file than to append to a file.  So we don't truncate the WAL file
> on a checkpoint, which makes subsequent writes to the WAL file go faster.
>
> SQLite will truncate the WAL file down to the size specified by the "PRAGMA
> journal_size_limit" setting, if you have set the journal_size_limit.
>
>
Would it be possible to enhance SQLite to allow resetting the WAL file down
to zero bytes?

Something like
  pragma journal_size_limit=0;
  pragma wal_checkpoint=restart;
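
For completeness, a sketch of what the existing mechanism described above
already allows (the limit value below is only an example):

  PRAGMA journal_mode=WAL;
  PRAGMA journal_size_limit=1048576;  -- let checkpoints truncate the WAL to ~1 MB
  -- ... bulk inserts ...
  PRAGMA wal_checkpoint;              -- manual checkpoint, as in the setup above

A journal_size_limit of 0 combined with a checkpoint that restarts the log is
what the two proposed pragma lines above are meant to express.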

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3/FTS4 - Finding the term(s) that completes the input

2011-11-21 Thread Filip Navara
On Mon, Nov 21, 2011 at 4:13 AM, Mohit Sindhwani <m...@onghu.com> wrote:

> Hi Abhinav,
>
>
> On 21/11/2011 2:52 AM, Abhinav Upadhyay wrote:
>
>> On Mon, Nov 21, 2011 at 12:17 AM, Mohit Sindhwani<m...@onghu.com>  wrote:
>>
>>> What I'd like to be able to do is something like this:
>>>
>>> - let's say that the FTS4 table has values such as:
>>> * mohit sindhwani, onghu
>>> * john doe, gmail
>>> * james ling, alibaba
>>> * john barn, yahoo
>>> ...and so on
>>>
>>> If the user enters "j", I'd like to suggest that this would complete to
>>> the
>>> words:
>>> john and james in the current set
>>>
>>> If the user enters 'ling j', I'd like to restrict it and say:
>>>
>>>> james is the only word that can be matched now
>>>> james ling, alibaba is the result
>>>>
>>>
>>> I think you might want to look at Token Prefix queries:
>>> http://sqlite.org/fts3.html#**section_3<http://sqlite.org/fts3.html#section_3>
>>>
>>
> I think my examples muddied the waters.  I have looked at Section 3 of the
> FTS documents and that lets me bring back the "full result" that matches -
> so, if I search for 'ling j', it can tell me that the result that matches
> is 'james ling, alibaba'.  What it does not let me do is figure out that
> the partly entered term "j" completes to the word 'james' - that is the
> part I'm trying to figure out.
>
> Best Regards,
> Mohit.


http://sqlite.org/fts3.html#section_4_1
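
For example, here is a hypothetical sketch combining a prefix query with the
offsets() function described in that section (the table and column names are
made up for illustration):

  CREATE VIRTUAL TABLE people USING fts4(entry);
  INSERT INTO people(entry) VALUES ('mohit sindhwani, onghu');
  INSERT INTO people(entry) VALUES ('john doe, gmail');
  INSERT INTO people(entry) VALUES ('james ling, alibaba');

  -- offsets() reports the byte offset and length of each matched token, so
  -- the application can cut the completed word ("james") out of the entry.
  SELECT entry, offsets(people) FROM people WHERE entry MATCH 'ling j*';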

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] disk I/O error

2011-11-20 Thread Filip Navara
On Sun, Nov 20, 2011 at 1:21 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Sun, Nov 20, 2011 at 6:53 AM, Tal Tabakman <tal.tabak...@gmail.com>wrote:
>
>> in a continuation to the below, it seems I get the following extended err
>> code
>> SQLITE_IOERR_WRITE
>> what are the reasons for that one ? is it a symptom of mem blow-up ?
>> cheers
>> Tal
>>
>>
> What operating system did you say you were using?
>

Just a note: Most recent versions of SQLite on Windows are broken
whenever sqlite3_config(SQLITE_CONFIG_LOG, ...) is used. It is fixed
in the repository, but not in any officially released version.

For debugging this kind of problems on Windows it is very helpful to
make a log with Process Monitor
(http://technet.microsoft.com/en-us/sysinternals/bb896645) and filter
the results on the directory containing the database file. I'd be
willing to look at it if such an error is reproduced and the log is
available.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 3.7.7.1 database corruption

2011-11-14 Thread Filip Navara
Hi Alexandr,

the mailing list doesn't allow attachments. Please send us the output
of "pragma integrity_check;" on the corrupted database file. Also, is
the database used in WAL mode?
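
Concretely, from the sqlite3 shell opened on the saved file, the two pieces
of information we need are:

  PRAGMA integrity_check;
  PRAGMA journal_mode;   -- prints 'wal' if the database is in WAL mode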

Best regards,
Filip Navara

2011/11/14 Alexandr Němec <a.ne...@atlas.cz>:
> Dear all,
>
> we are using SQLite in our projects. We are using the 3.7.7.1 version
> compiled on Windows platform from the amalgamation file. One of our servers
> crashed badly (power outage) and after the server restarted, we saw that the
> database was corrupt (all queries we tried return SQLITE_CORRUPT). We had a
> look at http://www.sqlite.org/howtocorrupt.html and verified, that the
> hardware (disk, memory) of the server should be ok - we used some hardware
> tools to check. In the past we already had some crash situations and SQLite
> could recover from the crash nicely, so we were surprised that after this
> crash the database was corrupt. In fact, we can't create a repro, because we
> do not know what the server was exactly doing, so it might be difficult to
> find the reason, but we saved the corrupted database in the hope that
> somebody from the developers (maybe) could shed some light on this problem.
> Because the database is rather small and is used to save some configuration
> data only, I have attached it to this message.
>
> Thanks for any suggestions.
>
> Alex
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Critical bug in Windows VFS retry logic

2011-11-10 Thread Filip Navara
On Thu, Nov 10, 2011 at 7:00 PM, Filip Navara <filip.nav...@gmail.com> wrote:
[snip]

The bug has been present since this check-in on 2011-07-28:

http://sqlite.org/src/info/8a145863d1a8711953ae72d706404fb96f6fec06

F.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Critical bug in Windows VFS retry logic

2011-11-10 Thread Filip Navara
Hi,

we found a rather serious bug in the retry logic in Windows VFS in
SQLite 3.7.9. The function winAccess has roughly the following
structure:

while( !(rc = GetFileAttributesExW((WCHAR*)zConverted,
                                   GetFileExInfoStandard,
                                   &sAttrData)) && retryIoerr(&cnt) ){}
if( rc ){
  ...
}else{
  logIoerr(cnt);
  if( GetLastError()!=ERROR_FILE_NOT_FOUND ){
winLogError(SQLITE_IOERR_ACCESS, "winAccess", zFilename);
free(zConverted);
return SQLITE_IOERR_ACCESS;
  }else{
attr = INVALID_FILE_ATTRIBUTES;
  }
}

Note the logIoerr line. It internally calls sqlite3_log, which in turn
can change the "last error" that is checked on the next line.

This results in this totally useless log:

11/10/2011 17:23:46 10 delayed 25ms for lock/sharing conflict
11/10/2011 17:23:46 3338 os_win.c:34346: (0) winAccess(C:\Users\Filip
Navara\AppData\Roaming\eM Client\event_data.dat-journal) - The
operation completed successfully.
11/10/2011 17:23:46 3338 statement aborts at 5: [ATTACH DATABASE
'C:\Users\Filip Navara\AppData\Roaming\eM Client\event_data.dat' AS
'event_data';] disk I/O error

Also, the library actually fails to behave correctly: even if
ERROR_FILE_NOT_FOUND is returned, it is overwritten by the
sqlite3_log call and the function returns an error.

Solution is to save the value returned by GetLastError() before
calling logIoerr.

Also note that the underlying error is not caused by an antivirus; it
is caused by Windows Search. It can also be worked around in a
better way by renaming the file before deleting it in winClose and/or
winDelete. That ensures that even if the file remains in the
"delete pending" state (i.e. because another application has a handle on
the file) it will not cause further CreateFile calls on the same name
to fail.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Malformed database error when using FTS3/4

2011-10-13 Thread Filip Navara
Reproduced on Windows, SQLite 3.7.8.

F.

On Thu, Oct 13, 2011 at 5:45 PM, Wendland, Michael
 wrote:
> Hello,
>
> I've encountered an error running 3.7.8 and 3.7.7.1 that seems rather 
> strange.  It's reproducible so far as I know.
>
> CREATE VIRTUAL TABLE fts USING fts3( tags TEXT);
> INSERT INTO fts (tags) VALUES ('tag1');
> SELECT * FROM fts WHERE tags MATCH 'tag1';
>
> You can run an UPDATE to change the value in the tags field and rerun that 
> SELECT statement without error.  Do not create a second row, as this error 
> seems to appear only when there is a single row in the virtual table.  Now 
> close the database and reopen it, then try the following statements.
>
>                UPDATE fts SET tags = 'tag1' WHERE rowid = 1;
>                SELECT * FROM fts WHERE tags MATCH 'tag1';
>
> That SELECT returns "Error: database disk image is malformed".  Taking a dump 
> before and after the UPDATE shows that a segdir entry is being assigned a 
> different value despite the tags entry remaining exactly the same.  Changing 
> the value further does not rectify the situation, nor does using a different 
> string.  Running a "PRAGMA integrity_check" returns "ok", so the underlying 
> database is fine (and you can use other tables and any rows you insert after 
> just fine).  Inserting a new row and then changing the value rectifies the 
> problem.  Tests seem to indicate that it doesn't matter what the rowid is (or 
> the  sequence of inserts and deletes have been run on the virtual table 
> beforehand), the problem appears when changing the only row in the virtual 
> table.
>
> I've reproduced this using a CLI compiled from the autoconf source 
> amalgamation on RHEL 5.5 (in a VM) and the precompiled Windows CLI.
>
> Is anyone able to help?  (I haven't accidentally posted this to the wrong 
> list have I?)
>
> - Michael
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with vacuum when count_changes=1

2011-09-30 Thread Filip Navara
On Fri, Sep 30, 2011 at 2:21 PM, Reg Server <reg_ser...@mail.com> wrote:
[snip]
> I don't have Mac but I tried mentioned steps using sqlite3.exe using both
> versions 3.7.8 and 3.7.5. It crashed every time.
>
> I'm don't know where to get older versions to investigate where the crash
> started happening. I just know for sure that ancient version 3.6.6.1 we were
> using before did not crash.

It's fixed now, see http://www.sqlite.org/src/info/c48d99d690

Thanks, Dan!

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with vacuum when count_changes=1

2011-09-30 Thread Filip Navara
Reproduced with the following steps:

> sqlite3 test.db
pragma count_changes=1;
create table a (a);
vacuum;

(SQLite 3.7.8 on Windows)

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Index not used in subquery

2011-09-22 Thread Filip Navara
Hi,

I have the following schema:

CREATE TABLE a (id INT, partName TEXT, content BLOB, PRIMARY KEY(id, partName));
CREATE TABLE b (id, partName, content);
CREATE VIEW aView AS SELECT a.id, a.partName, COALESCE(a.content,
(SELECT b.content FROM b WHERE b.id=a.id AND b.partName=a.partName))
FROM a;
CREATE INDEX bIndex ON b (id, partName);

and the following query has a query plan that uses bIndex:

EXPLAIN QUERY PLAN SELECT b.content FROM b WHERE b.id=? AND b.partName=?
0|0|0|SEARCH TABLE b USING INDEX bIndex (id=? AND partName=?) (~9 rows)

However the subquery in the view yields a different query plan:

EXPLAIN QUERY PLAN SELECT * FROM aView WHERE id=? AND partName=?;
0|0|0|SEARCH TABLE a USING INDEX sqlite_autoindex_a_1 (id=? AND
partName=?) (~1 rows)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SCAN TABLE b (~1 rows)

Is there a reason why the index is not used in the subquery?
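
As a point of comparison, a hypothetical rewrite of the view with an explicit
join (the name aView2 is made up for illustration) does let the planner
consider bIndex for the lookup into "b":

  CREATE VIEW aView2 AS
    SELECT a.id, a.partName,
           COALESCE(a.content, b.content) AS content
    FROM a LEFT JOIN b ON b.id=a.id AND b.partName=a.partName;

  EXPLAIN QUERY PLAN SELECT * FROM aView2 WHERE id=? AND partName=?;

Note that this is only equivalent to the correlated subquery if "b" has at
most one row per (id, partName) pair.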

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Internal v. External BLOBs

2011-09-21 Thread Filip Navara
> We did some experiments to try to answer this question, and the results
> seemed interesting enough to share with the community at large.

Are the test cases available somewhere?

I'd be interested in running them on a Windows setup with and without an SSD.

Thanks,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Verified crash with SQLite 3.7.8 and 3.7.7.1

2011-09-15 Thread Filip Navara
Hi,

there's a bug report filed at
http://system.data.sqlite.org/index.html/info/ce53939214b563cb49ad2375f7c2cb365907898d
for System.Data.SQLite, but the underlying issue is reproducible with
the same database on SQLite 3.7.7.1 and my recent builds of SQLite
3.7.8. Both crash with access violation. "pragma integrity_check;"
returns ok for the database.

The database file is linked in the ticket and the query is

SELECT invItemID, storageLocationID, responsiblePersonID, deviceID,
costCenterID, COALESCE(changeSuggCount, 0) AS changeSuggCount,
containerInvItemID
FROM InvItem
LEFT OUTER JOIN
(
  SELECT iicsID, invItemID AS iiID, COUNT(*) AS changeSuggCount FROM
InvItemChangeSuggestion
  GROUP BY iiID
) AS ChangeSugg ON ChangeSugg.iiID = InvItem.invItemID
LEFT OUTER JOIN
(
  SELECT iicsID AS iicsID2, propID, val FROM InvItemPropValueChangeSuggestion
  WHERE (propID = -4)
  AND (val = 'AB123987')
  ORDER BY apprVersion DESC
  LIMIT 1
) AS PropValChangeSugg ON PropValChangeSugg.iicsID2 = ChangeSugg.iicsID
WHERE
(rowStatus = 1 AND (val = 'AB123987'))
OR ((invVersionID = 1)
AND (rowStatus = 1)
AND invItemID IN
(
  -- ha eredetileg illeszkedik a filter-re
  SELECT invItemID FROM InvItem
  WHERE (barCode = 'AB123987')
  AND (invVersionID = 1)
  AND (rowStatus = 1)

  UNION

  -- if, after the modifications, the barCode matches the filter
  SELECT invItemID FROM
  (
SELECT invItemID, val FROM
(
  SELECT invItemID, propID, val
  FROM InvItemChangeSuggestion, InvItemPropValueChangeSuggestion
  WHERE InvItemChangeSuggestion.iicsID =
InvItemPropValueChangeSuggestion.iicsID
  AND propID = -4
  AND apprVersion <> 0
  ORDER BY registerTime ASC
)
GROUP BY invitemID, propID
  )
  WHERE (val = 'AB123987')

  EXCEPT

  -- the barCode originally matched, but after the modifications it no
  -- longer matches the filter
  SELECT invItemID FROM
  (
SELECT invItemID, val FROM
(
  SELECT invItemID, propID, val
  FROM InvItemChangeSuggestion, InvItemPropValueChangeSuggestion
  WHERE InvItemChangeSuggestion.iicsID =
InvItemPropValueChangeSuggestion.iicsID
  AND propID = -4
  AND apprVersion <> 0
  ORDER BY registerTime ASC
)
GROUP BY invitemID, propID
  )
  WHERE (val <> 'AB123987')
));

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Double Spaced Blob Text

2011-09-07 Thread Filip Navara
I guess the database is encoded in UTF-16 and you are reading the text
using sqlite3_column_blob instead of sqlite3_column_text.
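
A quick way to verify the first guess from the shell:

  PRAGMA encoding;   -- returns UTF-8, UTF-16le or UTF-16be for the database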

F.

On Wed, Sep 7, 2011 at 5:15 PM, Daniel Spain  wrote:
>
>
>
>
> hello all, i am creating a text mud gaming engine and it has been going great
> up until recently. when a user is examining their current room all the text is
> double spaced. i have tried this as both a blob and text field with a max
> length of 4096. here is my function to display long room descriptions:
>
> GBOOL EXPORT look(VOID)   /* examine current room */
> {
>     if(islit(chrptr->loc)) {
>         prfmlt(LOKOTH,chrptr->name);
>         outloc(getloc(usrnum),usrnum,-1);
>         prfmlt(LOOKING,database_blob("Rooms",chrptr->loc,ROOMKEY_LONGDESC));
>         outmlt(usrnum);
>     } else {
>         prfmlt(TOODRK);
>         outmlt(usrnum);
>     }
>     return(TRUE);
> }
>
> here is the database_blob function:
>
> CHAR * EXPORT database_blob(const CHAR *table, const LONG record, const LONG field)
> /* return blob field value from a table */
> {
>     const CHAR *tail;
>     static CHAR ret[4096];
>
>     strcpy(ret,"NULL");
>     sprintf(vdatmp,"SELECT * FROM %s WHERE Record = %ld",table,record);
>     if(sqlite3_prepare_v2(database,vdatmp,strlen(vdatmp),&statement,&tail)==SQLITE_OK) {
>         while(sqlite3_step(statement) == SQLITE_ROW) {
>             strncpy(ret,sqlite3_column_blob(statement,field),4096);
>         }
>     }
>     return(ret);
> }
>
> it all works fine, it just double spaces the returned text. some of my beta
> testers stripped all the periods from the text and said it worked fine, so i
> don't know if the periods are somehow saying move to the next line or what.
> thanks.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up index creation

2011-09-05 Thread Filip Navara
On Mon, Sep 5, 2011 at 1:18 PM, Jaco Breitenbach
<jjbreitenb...@gmail.com> wrote:
> Dear all,
>
> Any speculation on if and how the new code would affect inserts into an
> indexed table?
>
> Jaco

It doesn't affect them at all. The optimization specifically targets
sorting large amounts of data.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up index creation

2011-09-05 Thread Filip Navara
On Mon, Sep 5, 2011 at 9:39 AM, Baruch Burstein <bmburst...@gmail.com> wrote:
> This is unrelated, but I have never done any serious testing, and am
> wondering what the user/sys results are in all these performance tests. Is
> user I/O time and sys CPU time?

User is the time spent in user-mode code, i.e. mostly SQLite code and
operating system libraries.
Sys is the time spent in kernel-mode code, i.e. the kernel, drivers, and I/O operations.

In both cases the values are per-process and thus they should be more
accurate than wall clock time.

Sum of both values should give the total time spent executing a given command.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up index creation

2011-09-04 Thread Filip Navara
Updated results against current trunk:

CPU Time: user 82.009726 sys 129.636831

I re-ran the profiler and looked at the results. Most of them looked
sensible, but one thing still struck me. Once the new index is created
in the VDBE Sorter it gets copied back into the new Btree using a loop
with the following OPCodes:

  sqlite3VdbeAddOp2(v, OP_SorterData, iSorter, regRecord);
  sqlite3VdbeAddOp3(v, OP_IdxInsert, iIdx, regRecord, 1);
  sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT);

The OPFLAG_USESEEKRESULT flag is a no-op in this case since OP_IsUnique is
never used in the merge-sort branch of the code. OP_IdxInsert always seeks
for the new Btree row in the resulting tree. Couldn't we exploit the
fact that the rows are already sorted to construct the Btree faster?

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up index creation

2011-09-03 Thread Filip Navara
On Fri, Sep 2, 2011 at 6:04 PM, Dan Kennedy <danielk1...@gmail.com> wrote:
> On 09/02/2011 07:32 PM, Filip Navara wrote:
>>
>> On Fri, Sep 2, 2011 at 11:04 AM, Filip Navara<filip.nav...@gmail.com>
>>  wrote:
>> *snip*
>>>
>>> The time to create an index on my 266 Mb experimental database is more
>>> than 9 minutes.
>>
>> *snip*
>>
>> I erroneously measured the time with DEBUG build, so I've rerun the
>> experiment with several SQLite versions:
>>
>> [2869ed2829] Leaf: Avoid using uninitialized variables after failures
>> in the merge sort code. (user: drh, tags: trunk)
>> CPU Time: user 107.359888 sys 135.050066
>>
>> [7769fb988d] Instead of a temporary b-tree, use a linked-list and
>> merge-sort to sort records in main memory in vdbesort.c. (user: dan,
>> tags: merge-sort)
>> CPU Time: user 118.451559 sys 132.117247
>>
>> [71075673c6] Leaf: If all data being sorted fits in memory, avoid
>> writing any data out to temporary files in vdbesort.c. (user: dan,
>> tags: merge-sort)
>> CPU Time: user 116.813549 sys 132.710051
>
> Thanks for doing this. There is (hopefully) a fix for the performance
> regression up now.

Thanks, it looks much better now:

[68e26c4487] Leaf: The build works again with
-DSQLITE_OMIT_MERGE_SORT. The merge-sorter now avoids spilling to disk
(letting the in-memory linked list grow without bound) if PRAGMA
temp_store=3. (user: drh, tags: merge-sort)
CPU Time: user 86.486954 sys 132.273248

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up index creation

2011-09-02 Thread Filip Navara
On Fri, Sep 2, 2011 at 5:23 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 2 Sep 2011, at 3:24pm, Simon Slavin wrote:
>
>> On 2 Sep 2011, at 10:04am, Filip Navara wrote:
>>
>>> The time to create an index on my 266 Mb experimental database is more
>>> than 9 minutes. The database is available at
>>> http://www.emclient.com/temp/mail_index.zip and the command I use to
>>> create the index is
>>>
>>> CREATE INDEX "idx_MailAddresses_address" ON "MailAddresses" ("type",
>>> "address", "parentId");
>>>
>>> I had run the shell under profiler
>>
>> Strangely, on my Mac running the shell tool provided with OS X 10.7.1, 
>> SQLite 3.7.5, there seems to be a problem.  It's still going after more than 
>> 2 hours.
>
> I correct myself: it had finished.  It just wasn't showing the next prompt, 
> for some reason.  After force-quitting and restarting the shell it showed the 
> index as existing and the index worked.
>
> Simon.

With SQLite 3.7.7 and older the index creation takes eons since the
file is in WAL mode and the journal file grows uncontrollably. Since I
run it on a small SSD I was never able to let it finish; it always
filled up the disk first.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up index creation

2011-09-02 Thread Filip Navara
On Fri, Sep 2, 2011 at 11:04 AM, Filip Navara <filip.nav...@gmail.com> wrote:
*snip*
> The time to create an index on my 266 Mb experimental database is more
> than 9 minutes.
*snip*

I erroneously measured the time with a DEBUG build, so I've re-run the
experiment with several SQLite versions:

[2869ed2829] Leaf: Avoid using uninitialized variables after failures
in the merge sort code. (user: drh, tags: trunk)
CPU Time: user 107.359888 sys 135.050066

[7769fb988d] Instead of a temporary b-tree, use a linked-list and
merge-sort to sort records in main memory in vdbesort.c. (user: dan,
tags: merge-sort)
CPU Time: user 118.451559 sys 132.117247

[71075673c6] Leaf: If all data being sorted fits in memory, avoid
writing any data out to temporary files in vdbesort.c. (user: dan,
tags: merge-sort)
CPU Time: user 116.813549 sys 132.710051

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Speeding up index creation

2011-09-02 Thread Filip Navara
Hi,

I'm experimenting with the new code in SQLite's trunk and measuring
the improvements in index creation. The I/O required to create index
and also the disk space requirements has reduced significantly, so the
CREATE INDEX is now dominated by CPU time. Almost no memory is used
for cache though, which I find odd.

The time to create an index on my 266 Mb experimental database is more
than 9 minutes. The database is available at
http://www.emclient.com/temp/mail_index.zip and the command I use to
create the index is

  CREATE INDEX "idx_MailAddresses_address" ON "MailAddresses" ("type",
"address", "parentId");

I ran the shell under a profiler and most of the time is spent in
the following code path:

Name / Inclusive % time / Exclusive % time

sqlite3BtreeInsert / 76,68 / 0,40
.. btreeMoveto / 47,75 / 0,18
 sqlite3BtreeMovetoUnpacked / 40,49 / 3,33
.. sqlite3VdbeRecordCompare / 27,04 / 4,69
 sqlite3MemCompare / 15,36 / 7,85

I'm wondering if there's any way to speed up the index creation. Is a
larger cache size supposed to improve the performance? Did I perhaps
hit some "worst case" scenario?

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Improving the query optimizer

2011-08-23 Thread Filip Navara
On Tue, Aug 16, 2011 at 10:39 AM, Filip Navara <filip.nav...@gmail.com> wrote:
> 1) Consider the following schema:
>
> create table a (a, b, c, d);
> create index aIdx on a (a, d);
>
> Now the data in columns "b" and "c" can be pretty large. Let's make
> the following query:
>
> select a, d from a;
>
> Full-table scan is done even if an index exists that covers all the
> data required by the query. This is a very simplified example of a
> general problem where the query contains several conditions in the
> WHERE query, but none of them can be answered by lookup in the index.
> A covering index exists though and using it causes much less data to
> be loaded from the disk.
>
> explain query plan select a, d from a;
> 0|0|0|SCAN TABLE a (~100 rows)
> explain query plan select a, d from a where a >= 0 or a < 0;
> 0|0|0|SEARCH TABLE a USING COVERING INDEX aIdx (a>?) (~25 rows)
> 0|0|0|SEARCH TABLE a USING COVERING INDEX aIdx (a<?) (~25 rows)
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Version 3.7.8

2011-08-16 Thread Filip Navara
On Tue, Aug 16, 2011 at 2:19 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Tue, Aug 16, 2011 at 5:37 AM, Filip Navara <filip.nav...@gmail.com> wrote:
>
>> Is version 3.7.8 scheduled for September 28 (according to changes.in)
>> or August 28 (bi-monthly schedule)?
>>
>> Current target release date is 2011-09-28.  But that might change.
> Meanwhile, the trunk is stable if you want to use it.

I guess we will use that then. We are specifically interested in the
changes from the merge sort branch to speed up our next database
schema update.

Thanks,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Improving the query optimizer

2011-08-16 Thread Filip Navara
On Tue, Aug 16, 2011 at 2:54 PM, Filip Navara <filip.nav...@gmail.com> wrote:
> On Tue, Aug 16, 2011 at 2:24 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>>
>> On 16 Aug 2011, at 9:39am, Filip Navara wrote:
>>
>>> create table a (a, b, c, d);
>>> create index aIdx on a (a, d);
>>>
>>> Now the data in columns "b" and "c" can be pretty large. Let's make
>>> the following query:
>>>
>>> select a, d from a;
>>>
>>> Full-table scan is done even if an index exists that covers all the
>>> data required by the query.
>>
>> That's a little strange.
>>
>> Please execute the SQL command ANALYZE on that database.  Then do your 
>> 'EXPLAIN' tests again.  It won't fix the problem but it'll tell us something 
>> useful.
>>
>> http://www.sqlite.org/lang_analyze.html
>
> It doesn't fix anything. It's not even possible to force the index
> usage using INDEXED BY.
>

I forgot to attach the data, sorry. These are the real queries on the
actual database data.

attach 'mail_index.dat' as mail_index;
attach 'folders.dat' as folders;

-- Schema --

CREATE TABLE mail_index."MailItems" (
"id" INTEGER NOT NULL PRIMARY KEY,
"uniqueId",
"versionId",
"syncFlags" INTEGER,
"folder" INTEGER,
"syncFolder" INTEGER,
"date" TIMESTAMP,
"subject" TEXT,
"inReplyTo" TEXT,
"messageId" TEXT,
"importance" INTEGER,
"account" TEXT,
"flags" INTEGER,
"type" INTEGER,
"size" INTEGER,
"editTime" INTEGER,
"operationsPerformed" INTEGER,
"receivedDate" TIMESTAMP,
"replyDate" TIMESTAMP,
"forwardDate" TIMESTAMP, "references" TEXT);
CREATE INDEX mail_index."idx_MailItems" ON "MailItems" ("folder", "id");
CREATE INDEX mail_index."idx_MailItems_accountAndFlags" ON "MailItems"
("account", "flags");
CREATE INDEX mail_index."idx_MailItems_flags" ON "MailItems"
("folder", "flags", "syncFlags");
CREATE INDEX mail_index."idx_MailItems_folderAndAccountAndFlags" ON
"MailItems" ("folder", "account", "flags");
CREATE INDEX mail_index."idx_MailItems_uniqueId" ON "MailItems"
("folder", "syncFolder", "uniqueId");

CREATE TABLE folders."Folders" (
"id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
"uniqueId",
"versionId",
"flags" INTEGER DEFAULT 0,
"name" TEXT,
"delimiter" CHAR DEFAULT '/',
"path" TEXT COLLATE NOCASE,
"parentFolderId" INTEGER);
CREATE INDEX folders.idx_Folders_idAndFlags ON "Folders" ("id", "flags");
CREATE INDEX folders.idx_Folders_parentAndFlags ON "Folders"
("parentFolderId", "flags");
CREATE UNIQUE INDEX folders.idx_Folders_path ON "Folders" ("path");

-- Before ANALYZE --

SELECT COUNT(*) FROM (SELECT id FROM mail_index.MailItems WHERE NOT
("flags" & 2) AND NOT ("flags" & 65536) AND "folder" not in
(32,301,140,160,187,185,273,270,281,478,481,7,6) AND  "folder" not in
(SELECT "id" FROM folders.Folders WHERE ("flags" & 8)));
155963

-- Plain query
EXPLAIN QUERY PLAN SELECT COUNT(*) FROM (SELECT id FROM
mail_index.MailItems WHERE NOT ("flags" & 2) AND NOT ("flags" & 65536)
AND "folder" not in (32,301,140,160,187,185,273,270,281,478,481,7,6)
AND  "folder" not in (SELECT "id" FROM folders.Folders WHERE ("flags"
& 8)));
0|0|0|SCAN TABLE MailItems (~62500 rows)
0|0|0|EXECUTE LIST SUBQUERY 1
0|0|0|EXECUTE LIST SUBQUERY 1
1|0|0|SCAN TABLE Folders (~50 rows)

-- Added ORDER BY
EXPLAIN QUERY PLAN SELECT COUNT(*) FROM (SELECT id FROM
mail_index.MailItems WHERE NOT ("flags" & 2) AND NOT ("flags" & 65536)
AND "folder" not in (32,301,140,160,187,185,273,270,281,478,481,7,6)
AND  "folder" not in (SELECT "id" FROM folders.Folders WHERE ("flags"
& 8)) ORDER BY "folder");
1|0|0|SCAN TABLE MailItems USING COVERING INDEX
idx_MailItems_folderAndAccountAndFlags (~62500 rows)
1|0|0|EXECUTE LIST SUBQUERY 2
1|0|0|EXECUTE LIST SUBQUERY 2
2|0|0|SCAN TABLE Folders (~50 rows)
0|0|0|SCAN SUBQUERY 1 (~62500 rows)

-- Added "folder" > 0
EXPLAIN QUERY PLAN SELECT COUNT(*) FROM (SELECT id FROM
mail_index.MailItems WHERE NOT ("flags" & 2) AND NOT ("flags" & 65536)
AND "folder" not in (32,301,140,160,187

Re: [sqlite] Improving the query optimizer

2011-08-16 Thread Filip Navara
On Tue, Aug 16, 2011 at 2:56 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Tue, Aug 16, 2011 at 8:54 AM, Filip Navara <filip.nav...@gmail.com> wrote:
>
>> On Tue, Aug 16, 2011 at 2:24 PM, Simon Slavin <slav...@bigfraud.org>
>> wrote:
>> >
>> > On 16 Aug 2011, at 9:39am, Filip Navara wrote:
>> >
>> >> create table a (a, b, c, d);
>> >> create index aIdx on a (a, d);
>> >>
>> >> Now the data in columns "b" and "c" can be pretty large. Let's make
>> >> the following query:
>> >>
>> >> select a, d from a;
>> >>
>> >> Full-table scan is done even if an index exists that covers all the
>> >> data required by the query.
>> >
>> > That's a little strange.
>> >
>> > Please execute the SQL command ANALYZE on that database.  Then do your
>> 'EXPLAIN' tests again.  It won't fix the problem but it'll tell us something
>> useful.
>> >
>> > http://www.sqlite.org/lang_analyze.html
>>
>> It doesn't fix anything. It's not even possible to force the index
>> usage using INDEXED BY.
>>
>
> Force the covering index to be used by adding an ORDER BY clause:
>
>     SELECT a, d FROM a ORDER BY a, d;
>
> Run the experiment.  Does that make the query go any faster?

Yes, for the particular query it cuts down the time from 33 seconds to
0.4 seconds.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Improving the query optimizer

2011-08-16 Thread Filip Navara
On Tue, Aug 16, 2011 at 2:24 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 16 Aug 2011, at 9:39am, Filip Navara wrote:
>
>> create table a (a, b, c, d);
>> create index aIdx on a (a, d);
>>
>> Now the data in columns "b" and "c" can be pretty large. Let's make
>> the following query:
>>
>> select a, d from a;
>>
>> Full-table scan is done even if an index exists that covers all the
>> data required by the query.
>
> That's a little strange.
>
> Please execute the SQL command ANALYZE on that database.  Then do your 
> 'EXPLAIN' tests again.  It won't fix the problem but it'll tell us something 
> useful.
>
> http://www.sqlite.org/lang_analyze.html

It doesn't fix anything. It's not even possible to force the index
usage using INDEXED BY.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Version 3.7.8

2011-08-16 Thread Filip Navara
Is version 3.7.8 scheduled for September 28 (according to changes.in)
or August 28 (bi-monthly schedule)?

Thanks,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Query optimizer and foreign keys

2011-08-16 Thread Filip Navara
Hello!

I'm seeking some advice on improving SQLite's query planning in
relation to foreign keys. Sometimes it may be useful to exploit the
fact that the columns in separate tables refer to the same
information.

Consider the following schema:

create table a (a, b, c);
create table b (a, b REFERENCES a(b), c);
create index bIdx on b(b);

It may be worthwhile to use the foreign key references to append the
"b.b=a.b" clause to queries spanning multiple tables:

explain query plan select a, b, c from a where b in (select b from b
where c='');
0|0|TABLE a
0|0|TABLE b WITH AUTOMATIC INDEX

explain query plan select a, b, c from a where b in (select b from b
where c='' and b.b=a.b);
0|0|TABLE a
0|0|TABLE b WITH INDEX bIdx

The second query plan could prove to be more useful if the number of
rows selected from table "a" is small and the number of rows in table "b"
is big. Each lookup into table "b" is log(n) in this case, where n is
the number of rows in table "b". Obviously adding the clause can prove
to be useful in certain queries and not useful in others depending on
the result set sizes, but I believe it is possible to compare the cost
estimates and choose the better strategy.

Sometimes the foreign key information could be used to skip the lookup
in the original table altogether. Let's look at the following query:

select b from a where b in (select b from b where c='');

As long as a.b = b.b the query could actually be rewritten as:

select b from b where c='';

What use cases can you think of that would benefit from the use of
foreign key information in query planning? What caveats can you
think of?

Thanks,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Improving the query optimizer

2011-08-16 Thread Filip Navara
Hello!

Over the last few weeks we have been profiling our usage of SQLite. We
found three common patterns that are not treated well by the
optimizer. I believe at least two of them can be fixed easily; they
are included below.

1) Consider the following schema:

create table a (a, b, c, d);
create index aIdx on a (a, d);

Now the data in columns "b" and "c" can be pretty large. Let's make
the following query:

select a, d from a;

A full-table scan is done even though an index exists that covers all the
data required by the query. This is a very simplified example of a
general problem where the query contains several conditions in the
WHERE clause, but none of them can be answered by a lookup in the index.
A covering index exists, though, and using it causes much less data to
be loaded from the disk.

explain query plan select a, d from a;
0|0|0|SCAN TABLE a (~100 rows)
explain query plan select a, d from a where a >= 0 or a < 0;
0|0|0|SEARCH TABLE a USING COVERING INDEX aIdx (a>?) (~25 rows)
0|0|0|SEARCH TABLE a USING COVERING INDEX aIdx (a<?) (~25 rows)

[snip]

explain query plan select a from a where a='test' AND a IS NOT NULL;
0|0|TABLE a WITH INDEX aIdx

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug: Database with non-loadable schema can be created

2011-07-01 Thread Filip Navara
Hello,

I have hit a bug that allows creation of a database that cannot be
loaded. Steps to reproduce are listed below.

Best regards,
Filip Navara

>sqlite3.exe
SQLite version 3.7.6.3
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> ATTACH DATABASE 'attached.dat' AS 'attached';
sqlite> CREATE TABLE "attached"."a" ("b");
sqlite> CREATE TRIGGER "attached"."ta" AFTER INSERT ON "attached"."a"
BEGIN SELECT 0; END;

>sqlite3.exe attached.dat
SQLite version 3.7.6.3
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
Error: malformed database schema (ta) - trigger "ta" cannot reference objects in
 database attached
sqlite>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug: ERROR_DISK_FULL is not handled in Win32 OS layer

2011-06-09 Thread Filip Navara
Hi,

the Win32 OS layer function winWrite checks for the
ERROR_HANDLE_DISK_FULL error code to detect the "disk full" condition:

if( pFile->lastErrno==ERROR_HANDLE_DISK_FULL ){

This check is not sufficient since the error code could also be
ERROR_DISK_FULL. See [1] for a description of the error. Apparently we
get the latter error on NTFS-formatted systems when the disk is out of
space. It would greatly help us with diagnostics if the correct error
code (SQLITE_FULL) were returned from SQLite.

Best regards,
Filip Navara

[1] http://msdn.microsoft.com/en-us/library/ms681382(v=vs.85).aspx
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 'integer'

2011-04-16 Thread Filip Navara
http://www.sqlite.org/datatype3.html
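
The short version: SQLite uses type affinity rather than strict column types.
A small illustration (hypothetical table name):

  CREATE TABLE demo (n INTEGER);
  INSERT INTO demo VALUES ('1');
  SELECT n, typeof(n) FROM demo;   -- 1|integer: the text was coerced on insert

Whether a comparison then matches depends on what was actually stored, which
is what the page above explains in detail.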

F.

On Sat, Apr 16, 2011 at 7:40 PM, Tobias Vesterlund
 wrote:
> Hi,
>
> I ran into something I don't understand, maybe someone here can shed some 
> light on it for me.
>
> I have a table named Tg which is created (with tcl) by:
>
> CREATE TABLE Tg (TgConfigId INTEGER PRIMARY KEY AUTOINCREMENT,
>        OtherColumn INTEGER);
>
> If I do:
> INSERT INTO Tg (TgConfigId) VALUES (1);
>
> The following select works:
> SELECT * FROM Tg WHERE TgConfigId = 1;
>
> But if I insert '1' instead I have to select on '1', 1 no longer works. That 
> makes some sense, but not entirely, as what I inserted isn't an integer any 
> longer but a string. Why would I be allowed to insert values with '' in a 
> column that is specified to be an integer?
>
> Regards,
> Tobias
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corruption on many Windows XP hosts

2011-04-14 Thread Filip Navara
Just to be sure. Could you run the application under Application
Verifier with heap checks enabled?

Best regards,
Filip Navara

On Thu, Apr 14, 2011 at 11:40 AM, James Green <james.mk.gr...@gmail.com> wrote:
> Support just flagged another site. This one has a more "interesting"
> corruption issue in that data intended for one table ended up across two
> others.
>
> We have no single component that would open all of those tables during it's
> runtime. Probably of little consequence we also noticed that F-Secure issued
> *something* about sqlite3odbc.dll.
>
> Event Type:    Error
> Event Source:    F-Secure Gatekeeper
> Event Category:    None
> Event ID:    1
> Date:        05/04/2011
> Time:        13:06:40
> User:        N/A
> Computer:    E4886_C_8377
> Description:
> The description for Event ID ( 1 ) in Source ( F-Secure Gatekeeper ) cannot
> be found. The local computer may not have the necessary registry information
> or message DLL files to display messages from a remote computer. You may be
> able to use the /AUXSOURCE= flag to retrieve this description; see Help and
> Support for details. The following information is part of the event: ,
> \Device\HarddiskVolume2...sqlite3odbc.dll.
> Data:
> : 00 00 14 00 02 00 84 00   ..„.
> 0008: 00 00 00 00 01 00 05 c0   ...À
> 0010: 00 00 00 00 02 01 00 00   
> 0018: 00 00 00 00 00 00 00 00   
> 0020: 00 00 00 00 00 00 00 00   
> 0028: 01 00 00 00 08 08 00 00   
> 0030: 01 00 00 00 40 00 00 00   @...
> 0038: 00 00 00 00               
>
> No logs further to that could be found :(
>
> We did however decide to copy the database and run a quick vacuum &
> integrity_check:
> sqlite> vacuum;
> sqlite> pragma integrity_check;
> wrong # of entries in index IXED_EventDetails
> wrong # of entries in index sqlite_autoindex_EventDetails_1
>
> The wrongly-located data was still present afterwards. EventDetails was not
> involved in this mis-placed data issue.
>
> James
>
> On 12 April 2011 15:01, James Green <james.mk.gr...@gmail.com> wrote:
>
>> Incidentally running pragma integrity_check on one of the samples I
>> have been given returns:
>>
>> *** in database main ***
>> On tree page 1824 cell 0: invalid page number 218103808
>> On page 73608 at right child: 2nd reference to page 92497
>> Page 92110 is never used
>> Page 92494 is never used
>> Page 92496 is never used
>> Page 92499 is never used
>> Page 92508 is never used
>> Page 92511 is never used
>> Page 92512 is never used
>> Page 92513 is never used
>> Error: database disk image is malformed
>>
>> James
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-07 Thread Filip Navara
After a thorough analysis of my original problem I have probably found
the culprit.

The problem happens when incremental_vacuum (incrVacuumStep) tries to
remove the last page and the last page is a free-list leaf. In that case
the following code path is taken:

/* Remove the page from the files free-list. This is not required
** if nFin is non-zero. In that case, the free-list will be
** truncated to zero after this function returns, so it doesn't
** matter if it still contains some garbage entries.
*/
Pgno iFreePg;
MemPage *pFreePg;
rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, iLastPg, 1);

The allocateBtreePage function searches the free list for the leaf page. Since
the page happens to be located at the end of the free list, all the trunk
pages have to be walked through. For each of the trunk pages that
contain a pointer to at least one leaf page, the following code is hit:

/* Extract a leaf from the trunk */
u32 closest;
Pgno iPage;
unsigned char *aData = pTrunk->aData;
rc = sqlite3PagerWrite(pTrunk->pDbPage);
if( rc ){
  goto end_allocate_page;
}

Now the problem is that sqlite3PagerWrite is called regardless of
whether the page is modified or not. So most of the free list trunk
pages end up being marked as dirty and get written to the journal / WAL
even though they weren't changed at all. This makes the journal / WAL grow
to several megabytes instead of less than 4 kilobytes in my case.

My proposed solution is to move the sqlite3PagerWrite call just before
the code that actually modifies the page, as in the patch below.

Best regards,
Filip Navara

--- old\btree.c 2011-01-28 08:15:16.0 +0100
+++ btree.c 2011-04-07 10:57:07.052116400 +0200
@@ -4893,10 +4893,6 @@
 u32 closest;
 Pgno iPage;
 unsigned char *aData = pTrunk->aData;
-rc = sqlite3PagerWrite(pTrunk->pDbPage);
-if( rc ){
-  goto end_allocate_page;
-}
 if( nearby>0 ){
   u32 i;
   int dist;
@@ -4928,6 +4924,10 @@
   TRACE(("ALLOCATE: %d was leaf %d of %d on trunk %d"
  ": %d more free pages\n",
  *pPgno, closest+1, k, pTrunk->pgno, n-1));
+  rc = sqlite3PagerWrite(pTrunk->pDbPage);
+  if( rc ){
+goto end_allocate_page;
+  }
    if( closest<k-1 ){
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-07 Thread Filip Navara
On Thu, Apr 7, 2011 at 9:48 AM, Jonathan Allin
<jonat...@jonathanallin.com> wrote:
>
>> After some more diagnosis I discovered that the free list is not
>> corrupted afterall. It's just that the showdb tool fails to work
>> properly with databases larger than 4Gb and since the free list trunk
>> pages are located beyond the 4Gb barrier it caused problems.
>>
>
> What is showdb? I could only find
> http://www.koders.com/c/fid038880E49A3473D4D58E9755F1A3105D8F5C2D21.aspx?s=s
> ubstr
>
> Thanks,
>
> Jonathan

What you found seems to be a very old version of the same utility.
The current version is available in the sqlite3 source distribution (the
full one, not the amalgamation) or at
http://www.sqlite.org/src/dir?name=tool

There are three tools to dump various information from the SQLite3
database files:
- showdb for dumping information from the main database file. It
supports decoding of the header fields, free list, btree pages and raw
dumps of database pages in readable hexadecimal format.
- showwal for dumping contents of the WAL file
- showjournal for dumping contents of the journal file

The tools are very useful when dealing with corrupted databases or
when trying to understand the SQLite3 file format.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-06 Thread Filip Navara
On Wed, Apr 6, 2011 at 9:27 AM, Filip Navara <filip.nav...@gmail.com> wrote:
> On Wed, Apr 6, 2011 at 6:36 AM, Dan Kennedy <danielk1...@gmail.com> wrote:
>> On 04/05/2011 04:49 PM, Filip Navara wrote:
>>> Hello,
>>>
>>> we are having problem with database that originated on computer of one
>>> of our customers.
>>>
>>> The database is used in WAL mode with auto_vacuum=1 and page_size=1024.
>>>
>>> When running the "pragma incremental_vacuum(1);" command the WAL file
>>> grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
>>> per page + some overhead). This causes the transaction to run for much
>>> longer time than expected and eventually the WAL file grows to several
>>> gigabytes when we try to run incremental_vacuum for 4096 pages.
>>>
>>> Additional facts:
>>> - The database was created with SQLite 3.7.5 running on Windows
>>> - "pragma integrity_check;" reports ok
>>> - Some free trunk list pages contain weird values in the "number of
>>> entries" field
>>>
>>> I have attached most of the dumps that I could create with the regular
>>> SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
>>> the free list dump). I'm willing to send the whole database file
>>> (~5Gb) to sqlite developers on request.
>>>
>>> My questions are:
>>> - Is is expected behavior that "pragma incremental_vacuum(1);" can
>>> create 14Mb WAL file even though the page size is just 1Kb?
>>> - If not, is it a know bug? If yes, in what circumstances?
>>> - Is the free list corrupted? And if it is, how could this happen and
>>> should "pragma integrity_check;" verify it?
>>
>> Was there a *-wal file in the file-system when you ran the
>> [showdb] tool? [showdb] just reads the database file, it is
>> not smart enough to know when it should read pages out of the
>> wal file instead of the db. So if there is a wal file that
>> contains content, [showdb] could report corruption.
>>
>> Also, can you try with this patch?
>>
>>   http://www.sqlite.org/src/ci/311d0b613d
>>
>> It might help with the 14MB wal files.
>>
>> Dan.
>
> Hi Dan,
>
> there was no -wal file when I ran the showdb tool.
>
> I recompiled sqlite from the current 3.7.6 snapshot
> (sqlite-amalgamation-201104052208.zip) and verified that it contained
> your patch. Unfortunately it didn't help with the large WAL file for
> this particular database.
>
> Best regards,
> Filip Navara
>

After some more diagnosis I discovered that the free list is not
corrupted after all. It's just that the showdb tool fails to work
properly with databases larger than 4Gb, and since the free list trunk
pages are located beyond the 4Gb barrier it caused problems.

This leaves me with the incremental vacuum problem, where the
journal/WAL grows uncontrollably.

Also I have created an anonymized version of the database that doesn't
contain most of the original data and where all free list leaf pages
are overwritten with zeroes. It can be downloaded from
http://www.emclient.com/temp/mail_data.zip (~ 30Mb).

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-06 Thread Filip Navara
On Wed, Apr 6, 2011 at 6:36 AM, Dan Kennedy <danielk1...@gmail.com> wrote:
> On 04/05/2011 04:49 PM, Filip Navara wrote:
>> Hello,
>>
>> we are having problem with database that originated on computer of one
>> of our customers.
>>
>> The database is used in WAL mode with auto_vacuum=1 and page_size=1024.
>>
>> When running the "pragma incremental_vacuum(1);" command the WAL file
>> grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
>> per page + some overhead). This causes the transaction to run for much
>> longer time than expected and eventually the WAL file grows to several
>> gigabytes when we try to run incremental_vacuum for 4096 pages.
>>
>> Additional facts:
>> - The database was created with SQLite 3.7.5 running on Windows
>> - "pragma integrity_check;" reports ok
>> - Some free trunk list pages contain weird values in the "number of
>> entries" field
>>
>> I have attached most of the dumps that I could create with the regular
>> SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
>> the free list dump). I'm willing to send the whole database file
>> (~5Gb) to sqlite developers on request.
>>
>> My questions are:
>> - Is is expected behavior that "pragma incremental_vacuum(1);" can
>> create 14Mb WAL file even though the page size is just 1Kb?
>> - If not, is it a know bug? If yes, in what circumstances?
>> - Is the free list corrupted? And if it is, how could this happen and
>> should "pragma integrity_check;" verify it?
>
> Was there a *-wal file in the file-system when you ran the
> [showdb] tool? [showdb] just reads the database file, it is
> not smart enough to know when it should read pages out of the
> wal file instead of the db. So if there is a wal file that
> contains content, [showdb] could report corruption.
>
> Also, can you try with this patch?
>
>   http://www.sqlite.org/src/ci/311d0b613d
>
> It might help with the 14MB wal files.
>
> Dan.

Hi Dan,

there was no -wal file when I ran the showdb tool.

I recompiled sqlite from the current 3.7.6 snapshot
(sqlite-amalgamation-201104052208.zip) and verified that it contained
your patch. Unfortunately it didn't help with the large WAL file for
this particular database.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-05 Thread Filip Navara
On Tue, Apr 5, 2011 at 11:49 AM, Filip Navara <filip.nav...@gmail.com> wrote:
> Hello,
>
> we are having problem with database that originated on computer of one
> of our customers.
>
> The database is used in WAL mode with auto_vacuum=1 and page_size=1024.
>
> When running the "pragma incremental_vacuum(1);" command the WAL file
> grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
> per page + some overhead). This causes the transaction to run for much
> longer time than expected and eventually the WAL file grows to several
> gigabytes when we try to run incremental_vacuum for 4096 pages.
>
> Additional facts:
> - The database was created with SQLite 3.7.5 running on Windows
> - "pragma integrity_check;" reports ok
> - Some free trunk list pages contain weird values in the "number of
> entries" field
>
> I have attached most of the dumps that I could create with the regular
> SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
> the free list dump). I'm willing to send the whole database file
> (~5Gb) to sqlite developers on request.
>
> My questions are:
> - Is is expected behavior that "pragma incremental_vacuum(1);" can
> create 14Mb WAL file even though the page size is just 1Kb?
> - If not, is it a know bug? If yes, in what circumstances?
> - Is the free list corrupted? And if it is, how could this happen and
> should "pragma integrity_check;" verify it?
>
> Best regards,
> Filip Navara

Also, the free list trunk pages seem to be filled with 3-byte patterns.
Each page has a different pattern, but they look very similar. I'm not
sure if these patterns come from the original data or not.

02D1E6000  00 0B 47 9A B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6010  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6020  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6030  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6040  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6050  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6060  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6070  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6080  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6090  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E60A0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E60B0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E60C0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E60D0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E60E0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E60F0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6100  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6110  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6120  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6130  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6140  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6150  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6160  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6170  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6180  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6190  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E61A0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E61B0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E61C0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E61D0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E61E0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E61F0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6200  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6210  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6220  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5

[sqlite] Problem with incremental_vacuum and WAL

2011-04-05 Thread Filip Navara
Hello,

we are having a problem with a database that originated on the computer
of one of our customers.

The database is used in WAL mode with auto_vacuum=1 and page_size=1024.

When running the "pragma incremental_vacuum(1);" command the WAL file
grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
per page + some overhead). This causes the transaction to run for a much
longer time than expected, and eventually the WAL file grows to several
gigabytes when we try to run incremental_vacuum for 4096 pages.

Additional facts:
- The database was created with SQLite 3.7.5 running on Windows
- "pragma integrity_check;" reports ok
- Some free-list trunk pages contain weird values in the "number of
entries" field

I have attached most of the dumps that I could create with the regular
SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
the free list dump). I'm willing to send the whole database file
(~5Gb) to sqlite developers on request.

My questions are:
- Is it expected behavior that "pragma incremental_vacuum(1);" can
create a 14Mb WAL file even though the page size is just 1Kb?
- If not, is it a known bug? If yes, under what circumstances?
- Is the free list corrupted? And if it is, how could this happen, and
should "pragma integrity_check;" verify it?

Best regards,
Filip Navara

-- showdb mail_data.dat dbheader --

Pagesize: 1024
Available pages: 1..1389458
 000: 53 51 4c 69 74 65 20 66 6f 72 6d 61 74 20 33 00 SQLite format 3.
 010: 04 00 02 02 00 40 20 20 00 00 00 2f 00 55 33 92 .@  .../.U3.
 020: 00 4b 47 99 00 54 53 34 00 00 00 01 00 00 00 04 .KG..TS4
 030: 00 00 00 00 00 00 00 04 00 00 00 01 00 00 00 00 
 040: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 
 050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2f .../
 060: 00 2d e2 1d 00  .-...
Decoded:
 010: 04 00  1024  Database page size
 012: 02  2  File format write version
 013: 02  2  File format read version
 014: 00  0  Reserved space at end of page
 018: 00 00 00 2f  47  File change counter
 01c: 00 55 33 92 5583762  Size of database in pages
 020: 00 4b 47 99 4933529  Page number of first freelist page
 024: 00 54 53 34 5526324  Number of freelist pages
 028: 00 00 00 01   1  Schema cookie
 02c: 00 00 00 04   4  Schema format version
 030: 00 00 00 00   0  Default page cache size
 034: 00 00 00 04   4  Largest auto-vac root page
 038: 00 00 00 01   1  Text encoding
 03c: 00 00 00 00   0  User version
 040: 00 00 00 01   1  Incremental-vacuum mode
 044: 00 00 00 00   0  meta[7]
 048: 00 00 00 00   0  meta[8]
 04c: 00 00 00 00   0  meta[9]
 050: 00 00 00 00   0  meta[10]
 054: 00 00 00 00   0  meta[11]
 058: 00 00 00 00   0  meta[12]
 05c: 00 00 00 2f  47  Change counter for version number
 060: 00 2d e2 1d 3007005  SQLite version number

-- showdb mail_data.dat 4933529t (truncated) --

Pagesize: 1024
Available pages: 1..1389458
Decode of freelist trunk page 4933529:
 000: 00 0b 47 9a  739226  Next freelist trunk page
 004: b5 b7 bc b5   -1246249803  Number of entries on this page
Decode of freelist trunk page 739226:
 000: 00 0b 47 9b  739227  Next freelist trunk page
 004: 9f 65 86 9f   -1620736353  Number of entries on this page
Decode of freelist trunk page 739227:
 000: 00 0b 47 9c  739228  Next freelist trunk page
 004: d9 ed ea d9   -638719271  Number of entries on this page
Decode of freelist trunk page 739228:
 000: 00 0b 47 9d  739229  Next freelist trunk page
 004: d9 ed ea d9   -638719271  Number of entries on this page
Decode of freelist trunk page 739229:
 000: 00 0b 47 9e  739230  Next freelist trunk page
 004: db f0 ed db   -604967461  Number of entries on this page
Decode of freelist trunk page 739230:
 000: 00 0b 47 9f  739231  Next freelist trunk page
 004: db f0 ed db   -604967461  Number of entries on this page
Decode of freelist trunk page 739231:
 000: 00 0b 47 a1  739233  Next freelist trunk page
 004: db f0 ed db   -604967461  Number of entries on this page
Decode of freelist trunk page 739233:
 000: 00 0b 47 a2  739234  Next freelist trunk page
 004: d8 ec e9 d8   -655562280  Number of entries on this page
Decode of freelist trunk page 739234:
 000: 00 0b 47 a3  739235  Next freelist trunk page
 004: d8 ec e9 d8   -655562280  Number of entries on this page
...
Decode of freelist trunk page 739463:
 000: 00 0b 48 88  739464  Next freelist trunk page
 004: e9 d8 ec e9   -371659543  Number of entries on this page
Decode of freelist trunk page 739464:
 000: 00 0b 47 8d  739213  Next freelist trunk page
 004: 00 00 00 f8 248  Number of entries on this page
Decode of freelist trunk page 739213:
 000: 00 0b 46 93  

Re: [sqlite] Help with database corruption?

2010-08-23 Thread Filip Navara
I have uploaded a slightly different (and smaller) version of the
script. It triggers the bug in both version 3.7.0.1 and 3.7.1 on my
machine.

Thanks for testing,
F.

On Mon, Aug 23, 2010 at 3:00 PM, Black, Michael (IS)
<michael.bla...@ngc.com> wrote:
> I confirmed this shows a malformed result on 3.6.23.1 and 3.7.0.1
>
> But it runs just fine under 3.7.1
>
> So apparently whatever bug you triggered has been fixed now.
>
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
>
>
> 
>
> From: sqlite-users-boun...@sqlite.org on behalf of Filip Navara
> Sent: Mon 8/23/2010 7:43 AM
> To: General Discussion of SQLite Database
> Cc: d...@hwaci.com
> Subject: EXTERNAL:Re: [sqlite] Help with database corruption?
>
>
>
> I can now reliably corrupt the database using standard commands. An
> SQL script can be downloaded at the address below that creates a
> database and then stresses it until a corruption happens.
>
> http://www.emclient.com/temp/sqlite_corrupt_log.zip
>
> Please help fix the problem or at least confirm that others can
> reproduce it using the same script.
>
> Best regards,
> Filip Navara
>
> On Fri, Aug 20, 2010 at 5:25 PM, Filip Navara <filip.nav...@gmail.com> wrote:
>> Hello,
>>
>> is there anybody willing to help analyze corrupted database for
>> possible bug in SQLite?
>>
>> It is a database file taken from one of our test machines and it is
>> only few days old at most. The database file was only ever accessed
>> with SQLite 3.7.0.1. It has page size of 1024, WAL mode,
>> synchronous=FULL and incremental vacuum. No power failure happened on
>> that machine. Apparently somewhere during the course of execution of
>> our application the database free page list become corrupted. This is
>> for the third time this week the same error happened, but only this
>> time I have a copy of the database file. It is 887 Mb big (although
>> the real data consume less than 2 Mb) and so it is rather problematic
>> to provide the file. A copy is available at
>> http://www.emclient.com/temp/mail_data.zip.
>>
>> The database was accessed on Windows machine using SQLite 3.7.0.1
>> 64-bit build. The following statements are the only ones that were
>> executed against the database besides SELECTs and initial schema
>> definition:
>>
>> INSERT INTO LocalMailContents (
>>   "id", "partName", "contentType", "contentId",
>>   "contentDescription", "contentTransferEncoding",
>>   "contentMD5", "contentDisposition", "contentLanguage",
>>   "contentLocation", "partHeader", "partBody",
>>   "synchronizationKey", "contentLength")
>>   VALUES
>>   (@id, @partName, @contentType, @contentId, @contentDescription,
>>   @contentTransferEncoding, @contentMD5, @contentDisposition,
>>   @contentLanguage, @contentLocation, @partHeader, @partBody,
>>   @synchronizationKey, @contentLength)
>> UPDATE LocalMailContents SET synchronizationKey=@synchronizationKey
>> WHERE id=@id AND partName=@partName
>> UPDATE LocalMailContents SET partHeader=@partHeader WHERE id=@id AND
>> partName=@partName
>> UPDATE LocalMailContents SET partBody=@partContent WHERE id=@id AND
>> partName=@partName
>> DELETE FROM LocalMailContents WHERE id IN ();
>> PRAGMA freelist_count;
>> PRAGMA incremental_vacuum();
>>
>> The error messages produced by "pragma integrity_check" are
>>
>> *** in database main ***
>> Main freelist: invalid page number 866828
>> Page 396 is never used
>> Page 473 is never used
>> Page 14780 is never used
>> Page 14915 is never used
>> Page 153649 is never used
>> Page 210894 is never used
>> Page 319247 is never used
>> Page 397006 is never used
>> Page 416545 is never used
>> Page 416636 is never used
>> Page 416704 is never used
>> Page 416705 is never used
>> Page 416706 is never used
>> Page 416707 is never used
>> Page 416708 is never used
>> Page 416710 is never used
>> Page 416711 is never used
>> ...
>>
>> I tried to locate the missing freelist pages in the database file and
>> they definitely were there at some point, but I am not familiar enough
>> with the file format to track the whole freelist and find where the
>> corruption exactly happen. All I know is that page 388 is corrupted
>> and points to a location outside of the database file.
>>
>> Thanks,
>> Filip Navara
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Help with database corruption?

2010-08-23 Thread Filip Navara
I can now reliably corrupt the database using standard commands. An
SQL script can be downloaded at the address below that creates a
database and then stresses it until a corruption happens.

http://www.emclient.com/temp/sqlite_corrupt_log.zip

Please help fix the problem or at least confirm that others can
reproduce it using the same script.

Best regards,
Filip Navara

On Fri, Aug 20, 2010 at 5:25 PM, Filip Navara <filip.nav...@gmail.com> wrote:
> Hello,
>
> is there anybody willing to help analyze corrupted database for
> possible bug in SQLite?
>
> It is a database file taken from one of our test machines and it is
> only few days old at most. The database file was only ever accessed
> with SQLite 3.7.0.1. It has page size of 1024, WAL mode,
> synchronous=FULL and incremental vacuum. No power failure happened on
> that machine. Apparently somewhere during the course of execution of
> our application the database free page list become corrupted. This is
> for the third time this week the same error happened, but only this
> time I have a copy of the database file. It is 887 Mb big (although
> the real data consume less than 2 Mb) and so it is rather problematic
> to provide the file. A copy is available at
> http://www.emclient.com/temp/mail_data.zip.
>
> The database was accessed on Windows machine using SQLite 3.7.0.1
> 64-bit build. The following statements are the only ones that were
> executed against the database besides SELECTs and initial schema
> definition:
>
> INSERT INTO LocalMailContents (
>   "id", "partName", "contentType", "contentId",
>   "contentDescription", "contentTransferEncoding",
>   "contentMD5", "contentDisposition", "contentLanguage",
>   "contentLocation", "partHeader", "partBody",
>   "synchronizationKey", "contentLength")
>   VALUES
>   (@id, @partName, @contentType, @contentId, @contentDescription,
>   @contentTransferEncoding, @contentMD5, @contentDisposition,
>   @contentLanguage, @contentLocation, @partHeader, @partBody,
>   @synchronizationKey, @contentLength)
> UPDATE LocalMailContents SET synchronizationKey=@synchronizationKey
> WHERE id=@id AND partName=@partName
> UPDATE LocalMailContents SET partHeader=@partHeader WHERE id=@id AND
> partName=@partName
> UPDATE LocalMailContents SET partBody=@partContent WHERE id=@id AND
> partName=@partName
> DELETE FROM LocalMailContents WHERE id IN ();
> PRAGMA freelist_count;
> PRAGMA incremental_vacuum();
>
> The error messages produced by "pragma integrity_check" are
>
> *** in database main ***
> Main freelist: invalid page number 866828
> Page 396 is never used
> Page 473 is never used
> Page 14780 is never used
> Page 14915 is never used
> Page 153649 is never used
> Page 210894 is never used
> Page 319247 is never used
> Page 397006 is never used
> Page 416545 is never used
> Page 416636 is never used
> Page 416704 is never used
> Page 416705 is never used
> Page 416706 is never used
> Page 416707 is never used
> Page 416708 is never used
> Page 416710 is never used
> Page 416711 is never used
> ...
>
> I tried to locate the missing freelist pages in the database file and
> they definitely were there at some point, but I am not familiar enough
> with the file format to track the whole freelist and find where the
> corruption exactly happen. All I know is that page 388 is corrupted
> and points to a location outside of the database file.
>
> Thanks,
> Filip Navara
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Help with database corruption?

2010-08-20 Thread Filip Navara
Hello,

is there anybody willing to help analyze a corrupted database for a
possible bug in SQLite?

It is a database file taken from one of our test machines and it is
only a few days old at most. The database file was only ever accessed
with SQLite 3.7.0.1. It has a page size of 1024, WAL mode,
synchronous=FULL and incremental vacuum. No power failure happened on
that machine. Apparently somewhere during the course of execution of
our application the database free page list became corrupted. This is
the third time this week the same error has happened, but only this
time do I have a copy of the database file. It is 887 Mb big (although
the real data consumes less than 2 Mb) and so it is rather problematic
to provide the file. A copy is available at
http://www.emclient.com/temp/mail_data.zip.

The database was accessed on Windows machine using SQLite 3.7.0.1
64-bit build. The following statements are the only ones that were
executed against the database besides SELECTs and initial schema
definition:

INSERT INTO LocalMailContents (
   "id", "partName", "contentType", "contentId",
   "contentDescription", "contentTransferEncoding",
   "contentMD5", "contentDisposition", "contentLanguage",
   "contentLocation", "partHeader", "partBody",
   "synchronizationKey", "contentLength")
   VALUES
   (@id, @partName, @contentType, @contentId, @contentDescription,
@contentTransferEncoding, @contentMD5, @contentDisposition,
@contentLanguage, @contentLocation, @partHeader, @partBody,
@synchronizationKey, @contentLength)
UPDATE LocalMailContents SET synchronizationKey=@synchronizationKey
WHERE id=@id AND partName=@partName
UPDATE LocalMailContents SET partHeader=@partHeader WHERE id=@id AND
partName=@partName
UPDATE LocalMailContents SET partBody=@partContent WHERE id=@id AND
partName=@partName
DELETE FROM LocalMailContents WHERE id IN ();
PRAGMA freelist_count;
PRAGMA incremental_vacuum();

The error messages produced by "pragma integrity_check" are

*** in database main ***
Main freelist: invalid page number 866828
Page 396 is never used
Page 473 is never used
Page 14780 is never used
Page 14915 is never used
Page 153649 is never used
Page 210894 is never used
Page 319247 is never used
Page 397006 is never used
Page 416545 is never used
Page 416636 is never used
Page 416704 is never used
Page 416705 is never used
Page 416706 is never used
Page 416707 is never used
Page 416708 is never used
Page 416710 is never used
Page 416711 is never used
...

I tried to locate the missing freelist pages in the database file and
they definitely were there at some point, but I am not familiar enough
with the file format to track the whole freelist and find where exactly
the corruption happened. All I know is that page 388 is corrupted
and points to a location outside of the database file.

Thanks,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] partial index?

2010-08-20 Thread Filip Navara
On Fri, Aug 20, 2010 at 1:47 PM, Tim Romano  wrote:
> Igor,
> Here's the example where a partial index can "hide" rows.
>
> From the wikipedia article cited by the OP:
>
> 
> It is not necessary that the condition be the same as the index criterion;
> Stonebraker's paper below presents a number of examples with indexes similar
> to the following:
>
>  create index partial_salary on employee(age) where salary > 2100;
>
> 
>
> What would happen if you issued these queries?
>
>                  select max(age) from employee
>                  select avg(age) from employee
>
> Would the ages of employees earning <= 2100 be included?

Yes

> Is the partial-index used under those circumstances?

No, it would change the outcome of the query. The partial index is used
only for optimizing queries that satisfy the index condition.
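
For illustration only: partial indexes were not yet part of SQLite when this
thread was written (they arrived later, in the 3.8.x line), so the sketch
below assumes an engine that supports them; the table and column names
follow the Wikipedia example quoted above.

#include <stdio.h>
#include "sqlite3.h"

/* Print each row returned by EXPLAIN QUERY PLAN. */
static int print_row(void *unused, int nCol, char **azVal, char **azCol){
  (void)unused; (void)azCol;
  for(int i=0; i<nCol; i++) printf("%s ", azVal[i] ? azVal[i] : "NULL");
  printf("\n");
  return 0;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
    "CREATE TABLE employee(name TEXT, age INT, salary INT);"
    "CREATE INDEX partial_salary ON employee(age) WHERE salary > 2100;",
    0, 0, 0);

  /* Aggregate over the whole table: the partial index cannot be used,
  ** because it does not cover rows with salary <= 2100. */
  sqlite3_exec(db, "EXPLAIN QUERY PLAN SELECT max(age) FROM employee;",
               print_row, 0, 0);

  /* The query's WHERE clause implies the index condition, so the planner
  ** may use partial_salary here. */
  sqlite3_exec(db,
    "EXPLAIN QUERY PLAN SELECT max(age) FROM employee WHERE salary > 2100;",
    print_row, 0, 0);

  sqlite3_close(db);
  return 0;
}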

F.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.7.0

2010-07-22 Thread Filip Navara
On Thu, Jul 22, 2010 at 1:31 PM, Dan Kennedy  wrote:
[snip]
> You cannot delete a file while it is open on windows, so
> this doesn't come up on win32.

Sure you can, except:
- The correct sharing rights have to be specified for this to be
allowed (FILE_SHARE_DELETE). As far as I know SQLite opens the files
without this sharing right.
- The file is deleted after the last handle to it is closed. This
means that the same filename couldn't be reused until all the handles
are closed.
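
For illustration, a minimal Win32 sketch of both points; "demo.tmp" is just a
placeholder name and error handling is kept to the bare minimum:

#include <windows.h>
#include <stdio.h>

int main(void){
  /* Open the file with FILE_SHARE_DELETE so that another handle (here,
  ** the one DeleteFileW opens internally) is allowed to delete it. */
  HANDLE h = CreateFileW(L"demo.tmp", GENERIC_READ | GENERIC_WRITE,
                         FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                         NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
  if( h==INVALID_HANDLE_VALUE ) return 1;

  /* Succeeds only because the handle above granted FILE_SHARE_DELETE; the
  ** file is merely marked for deletion and the name stays in use. */
  if( !DeleteFileW(L"demo.tmp") ){
    printf("DeleteFileW failed: %lu\n", GetLastError());
  }

  /* A second CreateFileW on the same name would fail here (delete pending). */
  CloseHandle(h);   /* the file actually disappears at this point */
  return 0;
}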

F.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Regression with sqlite-snapshot-201007091257

2010-07-13 Thread Filip Navara
Hi Shane!

I used two separate builds. One was built with the following command line:

cl /DSQLITE_ENABLE_FTS3=1 /DSQLITE_ENABLE_COLUMN_METADATA=1
/DTHREADSAFE=1 /O2 /LD sqlite3.c sqlite3.def

and the other was built from inside Visual Studio 2010 UI and reports
the following compile options:

sqlite> select sqlite_source_id();
2010-07-09 12:57:54 0c32c4bbdd74297767dcf4ec4295f9cc72875af0
sqlite> PRAGMA compile_options;
DEBUG
ENABLE_COLUMN_METADATA
ENABLE_FTS3
TEMP_STORE=1
THREADSAFE=1

Both suffer from the same problem and I tested it on various different
database files.

The reason you probably can't reproduce it is that the problem
manifests only when another application blocks access to the journal
file. In my case the other application is TortoiseSVN, specifically
the TSVNCache process. In fact that was the original reason why we
started using journal_mode=truncate, which was supposed to avoid
deleting the journal file, but for some reason the new SQLite version
tries to do it anyway while the older one didn't.

Best regards,
Filip Navara

On Tue, Jul 13, 2010 at 11:21 PM, Shane Harrelson <sh...@sqlite.org> wrote:
> I tried to reproduce the issue with the latest version, as well as with the
> referenced snapshot, and could not.
> I ran two instances of the sqlite CLI as you indicated without issue.
>
> C:\work\sqlite\win32\Debug>sqlite_snapshot test.db
> SQLite version 3.7.0
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> pragma encoding="UTF-8";
> sqlite> pragma auto_vacuum=incremental;
> sqlite> pragma journal_mode=truncate;
> truncate
> sqlite> select sqlite_version();
> 3.7.0
> sqlite> select sqlite_source_id();
> 2010-07-09 12:57:54 0c32c4bbdd74297767dcf4ec4295f9cc72875af0
> sqlite> PRAGMA compile_options;
> DEBUG
> OMIT_LOAD_EXTENSION
> TEMP_STORE=1
> TEST
> THREADSAFE=1
> sqlite>
>
> I included the output of version, source_id, and compile_options for
> reference.
>
> What options are you compiling with?  Is there perhaps a journal file
> somewhere that you're unaware of?
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Regression with sqlite-snapshot-201007091257

2010-07-13 Thread Filip Navara
Steps to reproduce: run sqlite3 test.db twice. The first instance:

SQLite version 3.7.0
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> pragma encoding="UTF-8";
sqlite> pragma auto_vacuum=incremental;
sqlite> pragma journal_mode=truncate;
truncate

The second instance:

SQLite version 3.7.0
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> pragma encoding="UTF-8";
sqlite> pragma auto_vacuum=incremental;
Error: disk I/O error

Sorry for the spamming.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Regression with sqlite-snapshot-201007091257

2010-07-13 Thread Filip Navara
The second problem is definitely related to the change in the
implementation of "pragma journal_mode" though I am not completely
sure how. The previous version of our code did

pragma encoding="UTF-8";
pragma auto_vacuum=incremental;
pragma journal_mode=truncate;

after opening the database file. When I change the code to

pragma main.journal_mode=truncate;
pragma encoding="UTF-8";
pragma auto_vacuum=incremental;

the problem doesn't happen anymore. Even if I use this code sequence
SQLite still tries to access the WAL file.
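
For illustration, a minimal sketch of applying the pragmas in this order from
the C API; the wrapper function name is made up and error handling is reduced
to a single check:

#include <stdio.h>
#include "sqlite3.h"

/* Open a database and apply the pragmas in the order that avoids the
** problem: journal_mode first, then encoding and auto_vacuum. */
static int open_with_truncate_journal(const char *zPath, sqlite3 **ppDb){
  static const char *azPragma[] = {
    "PRAGMA main.journal_mode=truncate;",
    "PRAGMA encoding=\"UTF-8\";",
    "PRAGMA auto_vacuum=incremental;"
  };
  int rc = sqlite3_open(zPath, ppDb);
  if( rc!=SQLITE_OK ) return rc;
  for(int i=0; i<3; i++){
    rc = sqlite3_exec(*ppDb, azPragma[i], 0, 0, 0);
    if( rc!=SQLITE_OK ){
      fprintf(stderr, "pragma failed: %s\n", sqlite3_errmsg(*ppDb));
      return rc;
    }
  }
  return SQLITE_OK;
}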

F.

On Tue, Jul 13, 2010 at 5:23 PM, Filip Navara <filip.nav...@gmail.com> wrote:
> Btw, the failing application call is DeleteFileW.
> F.
>
> On Tue, Jul 13, 2010 at 5:02 PM, Filip Navara <filip.nav...@gmail.com>
> wrote:
>>
>> Hi,
>> for a few days we have been running our application with SQLite built from
>> the 201007091257 snapshot. The application runs on Windows 7 systems that
>> are fully updated. We didn't enable the WAL mode and the following two
>> problems occurred:
>> - Every time a database file is opened SQLite tries to access the WAL file
>> even though WAL was never used on the database.
>> - Some commands fail intermittently when two connections are opened to the
>> same database. An SQLITE_IOERR error is returned.
>> Log from Process Monitor are available at
>> http://www.emclient.com/temp/logfile-3.6.21.1.txt
>> http://www.emclient.com/temp/logfile-3.6.21.1.pml
>> http://www.emclient.com/temp/logfile-3.7.txt
>> http://www.emclient.com/temp/logfile-3.7.pml
>> Apparently the second problem is caused by the following call that wasn't
>> present in the 3.6.21.1 logs:
>> 16:53:00,5982911 MailClient.exe 2712 CreateFile C:\Users\Filip
>> Navara\AppData\Roaming\eM Client\main.dat-journal SHARING VIOLATION Desired
>> Access: Read Attributes, Delete, Disposition: Open, Options: Non-Directory
>> File, Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete,
>> AllocationSize: n/a
>> We'd welcome any advice on solving the issue and we will provide any
>> additional information requested.
>> Best regards,
>> Filip Navara
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Regression with sqlite-snapshot-201007091257

2010-07-13 Thread Filip Navara
Btw, the failing application call is DeleteFileW.

F.

On Tue, Jul 13, 2010 at 5:02 PM, Filip Navara <filip.nav...@gmail.com>wrote:

> Hi,
>
> for a few days we have been running our application with SQLite built from
> the 201007091257 snapshot. The application runs on Windows 7 systems that
> are fully updated. We didn't enable the WAL mode and the following two
> problems occurred:
>
> - Every time a database file is opened SQLite tries to access the WAL file
> even though WAL was never used on the database.
> - Some commands fail intermittently when two connections are opened to the
> same database. An SQLITE_IOERR error is returned.
>
> Log from Process Monitor are available at
> http://www.emclient.com/temp/logfile-3.6.21.1.txt
> http://www.emclient.com/temp/logfile-3.6.21.1.pml
> http://www.emclient.com/temp/logfile-3.7.txt
> http://www.emclient.com/temp/logfile-3.7.pml
>
> Apparently the second problem is caused by the following call that wasn't
> present in the 3.6.21.1 logs:
> 16:53:00,5982911 MailClient.exe 2712 CreateFile C:\Users\Filip
> Navara\AppData\Roaming\eM Client\main.dat-journal SHARING VIOLATION Desired
> Access: Read Attributes, Delete, Disposition: Open, Options: Non-Directory
> File, Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete,
> AllocationSize: n/a
>
> We'd welcome any advice on solving the issue and we will provide any
> additional information requested.
>
> Best regards,
> Filip Navara
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Regression with sqlite-snapshot-201007091257

2010-07-13 Thread Filip Navara
Hi,

for a few days we have been running our application with SQLite built from
the 201007091257 snapshot. The application runs on Windows 7 systems that
are fully updated. We didn't enable the WAL mode and the following two
problems occurred:

- Every time a database file is opened SQLite tries to access the WAL file
even though WAL was never used on the database.
- Some commands fail intermittently when two connections are opened to the
same database. An SQLITE_IOERR error is returned.

Log from Process Monitor are available at
http://www.emclient.com/temp/logfile-3.6.21.1.txt
http://www.emclient.com/temp/logfile-3.6.21.1.pml
http://www.emclient.com/temp/logfile-3.7.txt
http://www.emclient.com/temp/logfile-3.7.pml

Apparently the second problem is caused by the following call that wasn't
present in the 3.6.21.1 logs:
16:53:00,5982911 MailClient.exe 2712 CreateFile C:\Users\Filip
Navara\AppData\Roaming\eM Client\main.dat-journal SHARING VIOLATION Desired
Access: Read Attributes, Delete, Disposition: Open, Options: Non-Directory
File, Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete,
AllocationSize: n/a

We'd welcome any advice on solving the issue and we will provide any
additional information requested.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Issue with locked journal file

2010-06-24 Thread Filip Navara
OK, I'll expand on that suggestion: try PRAGMA journal_mode=truncate;

If you are using TortoiseSVN on the machine, then the most likely cause
is the TSVNCache process. We had countless issues with it.

Best regards,
F.N.

On Wed, Jun 23, 2010 at 8:18 PM, Pavel Ivanov  wrote:
> Is it possible in your application to try journal_mode = PERSIST? I
> guess virus checkers wouldn't block writing to the file. But this way
> you should be sure that journal size wouldn't be so big as for you to
> want to truncate it...
>
>
> Pavel
>
> On Wed, Jun 23, 2010 at 7:02 AM, John Wood  wrote:
>> Hi all,
>>
>> I'm having real problems with locked sqlite journal files.
>>
>> (Win32, V 3.6.23.1).
>>
>> Very occasionaly, a commit fails with extended error code
>> SQLITE_IOERR_DELETE | SQLITE_IOERR.
>>
>> It seems that the most likely cause is another application locking the file
>> at that point (e.g. virus checker).
>>
>> Although it's surprising that this would happen (as the journal file is in a
>> hidden system directory (e.g. C:\users\john\AppData\Local\MyApp)) I guess
>> badly behaved apps may well briefly open and lock recently changed files.
>>
>> (As an aside, this is clearly an issue the sqlite team have tried to
>> address, as in winDelete() you can see it retries several times, and there
>> is a comment which refers to virus checkers).
>>
>> So anyway, instead I switched to using TRUNCATE journal mode instead.
>>
>> But, I'm just getting the same issue. This time the extended error code is
>> SQLITE_IOERR_TRUNCATE | SQLITE_IOERR.
>>
>> So, on some user's machines, very occasionaly SOMETHING is locking the
>> journal file.
>>
>> My question is this: is there a way to make sqlite open the journal file in
>> a locked manner, and keep the lock in effect while it's using it?
>>
>> That way, any naughty app would just not be able to open read handles.
>>
>> (This is certainly possible in Win32: you can specify what share mode you
>> require when opening a file handle).
>>
>> Any ideas?
>>
>> John
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite 2.1 -> 3 migration

2010-02-22 Thread Filip Navara
On Wed, Feb 17, 2010 at 11:51 PM, Andreas Weller
<wel...@andreas-weller.de> wrote:
> Hi.
> Unfortunately my Webhosting provider ceased sqlite 2 support. I
> downloaded the database file using FTP and tried to dump it on a debian
> PC with sqlite 2.8.17
>
> This resulted in the following error message:
>> andr...@notebook:~/db$ sqlite ./database.db ".dump"
>> Unable to open database ./database.db
>
> I had a look into it with hexdump:
>
>> 2a 2a 20 54 68 69 73 20  66 69 6c 65 20 63 6f 6e  |** This file con|
>>  74 61 69 6e 73 20 61 6e  20 53 51 4c 69 74 65 20  |tains an SQLite |
>>  32 2e 31 20 64 61 74 61  62 61 73 65 20 2a 2a 00  |2.1 database **.|
>>  28 75 e3 da 0f 08 00 00  01 00 00 00 23 00 00 00  |(u..#...|
>>  04 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
>>  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
>
> Is there any known problem with using sqlite 2.1 files with 2.8 version?
>
> How to convert this version to sqlite 3?
>
> Thank you!
>
>
> Regards,
>  Andreas Weller
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

Hi,

this may be a stupid question, but have you used the BINARY mode when
downloading the file from the FTP?

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Reasons for SQLITE_CANTOPEN

2010-01-10 Thread Filip Navara
Hi!

Does "pragma journal_mode=truncate;" make any difference?
Is this on Windows?
Do you have TortoiseSVN installed on the same system?

If you answer Yes to all these questions then use Google, I have already
explained at least twice why it happens.

Best regards,
Filip Navara

On Sun, Jan 10, 2010 at 4:59 AM, Nikolaus Rath <nikol...@rath.org> wrote:

> Hello,
>
> I am accessing the same database from several threads, each using a
> separate connection. Shared cache is not enabled.
>
> When my program has been running for a while, I suddenly get an
> SQLITE_CANTOPEN error when I'm trying to open a database connection with
> a new thread. The database file, however, is definitively present and
> accessible.
>
> At the time of the error, there are probably around 30 threads holding
> database connections, and with certainty less than 213.
>
> (213 is the total number of threads that the program has created at that
> point, but most of them should have had only a short lifetime, so the
> actual number of active connections should be much smaller. I did not
> investigate the number of active threads closer, because even 213 active
> connection doesn't seem like a critical number to me).
>
> Can someone tell me under which conditions sqlite returns
> SQLITE_CANTOPEN, and how I can figure out what problem it encounters
> exactly?
>
>
> Best,
>
>
>   -Nikolaus
>
> --
>  »Time flies like an arrow, fruit flies like a Banana.«
>
>  PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite bug report - large databases only - 'database or disk is full'

2009-12-10 Thread Filip Navara
On Mon, Dec 7, 2009 at 10:21 PM,  <p...@mail.com> wrote:
>
> SQLite bug report
>
> Summary:
> --
>
> error message:
>
>       Error: near line 2: database or disk is full
>
> It happens with plenty of disk space available and with 'unlimited' database 
> size.
> It does not happen on all systems. It does not happen on small databases.
>

I ran your scripts with SQLite 3.6.21 on Windows 7 64-bit on an NTFS
drive and unfortunately I ran out of disk space before the problem
manifested. Would it be possible for you to set up Process Monitor with
a filter on the database path (just the path, so both the journal and the
main database file are in the log) and the history depth set to 1 million
(the lowest value), and then capture the file accesses during the run
of the reproduction scripts? The data can then be saved and exported
for further analysis, and hopefully it will give a clue as to why it
happens.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite bug report - large databases only - 'database or disk is full'

2009-12-08 Thread Filip Navara
On Mon, Dec 7, 2009 at 10:21 PM,  <p...@mail.com> wrote:
>
> SQLite bug report
>
> Summary:
> --
>
> error message:
>
>       Error: near line 2: database or disk is full
>
> It happens with plenty of disk space available and with 'unlimited' database 
> size.
> It does not happen on all systems. It does not happen on small databases.
>
>
> Details:
> --
>
>
> The error occurs on
> - windows server 2003 SP2 64bit, windows server 2009 64bit
> - windows vista, windows 7 rc1, windows 7 - all 64-bit
>
> The error does _not_ occur on
> - windows server 2003 SP2 32bit, windows xp sp3 32 bit

Just a short note: we experience a very similar problem with our program
and the symptoms match exactly. So far we have been unable to reproduce it
reliably on our systems, but we have plenty of bug reports from
production systems. I will post more details soon.

Our database files have a page size of 4 Kb and all of the affected systems
are running Windows. We use incremental vacuum mode and the file sizes
are commonly over 1 Gb; the file system is NTFS (AFAIK).

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Broken RSS

2009-12-01 Thread Filip Navara
Hello,

the RSS at http://www.sqlite.org/src/timeline.rss is broken. The
contents of several elements are not properly escaped (< and >
characters), so several readers can't parse it - Google Reader for
example.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Filip Navara
On Wed, Oct 14, 2009 at 6:06 PM, Dan Kennedy <danielk1...@gmail.com> wrote:
>
>> The problem I think may be people using TerminateThread();  that's how
>> you hard kill a thread.  It seems that can interrupt an I/O
>> operation --
>> ie an operation that writes more than one cluster at a time.  Meaning,
>> synch = full may have nothing to do with it.  If you have to say write
>> more than one cluster (disk block), TerminateThread looks like it can
>> abort the IO op in the middle of a multi-block op?  I'm trying to run
>> that down but can't yet find anything that verifies this.
>
> Even if it does, which seems quite plausible, the only way
> I can see this causing corruption is if you are in persistent
> journal mode and a (weak) checksum gives you a false positive
> on the last, corrupted, record in the journal file.

This is quite possibly happening in our case. Any way to prove the
theory? What should we look for?

The problem is that journal_mode=persist was the only usable
journaling mode on Windows due to the way file deleting is handled (in
SQLite). Using journal_mode=delete is problematic on any machine with
TortoiseSVN/TortoiseGIT or other programs installed. Now that
journal_mode=truncate exists we can try switching to that.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Filip Navara
Some threads were hard killed (using the .NET
"Thread.IsBackgroundThread" mechanism, which uses TerminateThread
AFAIK) in that version during normal operation, but none of them do
database writes. They could have been doing database reads though.

The whole application has been hard-killed few times when exception
happened during database manipulation though.

Best regards,
Filip Navara

On Wed, Oct 14, 2009 at 5:45 PM, McClellen, Chris
<cmcclel...@weather.com> wrote:
> Do you ever teriminate threads via TerminateThread() or run as a service 
> (where the scm will call terminateThread() once your main service thread 
> exits)?  In other words do you hard-kill your threads?
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Filip Navara
> Sent: Wednesday, October 14, 2009 7:15 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Corrupted database
>
> On Mon, Oct 12, 2009 at 8:56 PM, McClellen, Chris
> <cmcclel...@weather.com> wrote:
>> What is your synchronous set to?  Full?  FYI If you are using .NET data
>> providers, it is set to "Normal" by default.
>
> Normal or Off, but no power failure was involved. (Yes,
> System.Data.SQLite is used)
>
>> If it is not set to full, I have seen corruption when an application
>> crashes, or exits when a thread is in the middle of updating the db
>> (Synchronous = OFF makes corruption even easier in this case).  I have
>> seen apps that do not wait for background threads to finish before
>> termination, and without full sync on, either the db or the log gets
>> corrupted.  A corrupted log can cause problems for you db on next run
>> when recovery happens.
>
> Sounds suspiciously like our case, but still Synchronous=off is
> supposed to work in the event of application crash, hard killed
> threads and so on. Previous version of the application frequently
> forgot to close the databases on exit and did other nasty things that
> and now fixed, but none of them should cause the database to be
> corrupted.
>
> Best regards,
> Filip Navara
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Filip Navara
On Mon, Oct 12, 2009 at 8:56 PM, McClellen, Chris
<cmcclel...@weather.com> wrote:
> What is your synchronous set to?  Full?  FYI If you are using .NET data
> providers, it is set to "Normal" by default.

Normal or Off, but no power failure was involved. (Yes,
System.Data.SQLite is used)

> If it is not set to full, I have seen corruption when an application
> crashes, or exits when a thread is in the middle of updating the db
> (Synchronous = OFF makes corruption even easier in this case).  I have
> seen apps that do not wait for background threads to finish before
> termination, and without full sync on, either the db or the log gets
> corrupted.  A corrupted log can cause problems for you db on next run
> when recovery happens.

Sounds suspiciously like our case, but still Synchronous=off is
supposed to work in the event of an application crash, hard-killed
threads and so on. The previous version of the application frequently
forgot to close the databases on exit and did other nasty things that
are now fixed, but none of them should cause the database to be
corrupted.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database

2009-10-12 Thread Filip Navara
Hello,

for a few months we have been occasionally getting corrupted databases
in the field. So far we had been unable to acquire any of them from our
customers, but this week I finally got hold of one. Output from
"pragma integrity_check" is included below.

The schema is the following:

CREATE VIRTUAL TABLE "LocalMailsIndex3" USING fts3 (id INTEGER,
partName TEXT, content TEXT);
CREATE TABLE LocalMailsIndex3_content(  docid INTEGER PRIMARY
KEY,c0id, c1partName, c2content);
CREATE TABLE LocalMailsIndex3_segdir(  level integer,  idx integer,
start_block integer,  leaves_end_block integer,  end_block integer,
root blob,  primary key(level, idx));
CREATE TABLE LocalMailsIndex3_segments(  blockid INTEGER PRIMARY KEY,
block blob);
CREATE INDEX "LocalMailsIndex3_contentIndex" ON
"LocalMailsIndex3_content" ("c0id", "c1partName");

The database is created using SQLite 3.6.14.2, thread safe, on Windows
with auto_vacuum=incremental. It is always opened as attached database
with journal_mode=persist. Application crashes were most probably
involved, but no operating system / power crashes as far as I know.

One thread in the application is periodically running "pragma
freelist_count" and "pragma incremental_vacuum(...)". Other threads
are running a combination of the following commands and no others:

INSERT INTO mail_fti.LocalMailsIndex3("id", "partName", "content")
VALUES (@id, @partName, @content)
SELECT c2content AS content FROM mail_fti.LocalMailsIndex3_content
WHERE c0id=@id AND c1partName=@partName
SELECT docid FROM mail_fti.LocalMailsIndex3_content WHERE c0id IN (...)
DELETE FROM mail_fti.LocalMailsIndex3 WHERE docid=@docid
SELECT ... WHERE "id" IN (SELECT "id" FROM mail_fti.LocalMailsIndex3
WHERE content MATCH "...")

Has anybody seen something like this?
Is anybody willing to look at it? I can send the database privately.

Best regards,
Filip Navara

Main freelist: Bad ptr map entry key=5143 expected=(2,0) got=(3,4467)
Main freelist: freelist leaf count too big on page 5143
Main freelist: Bad ptr map entry key=5449 expected=(2,0) got=(4,5143)
Main freelist: freelist leaf count too big on page 5449
Main freelist: 904 of 908 pages missing from overflow list starting at 5143
On tree page 3878 cell 26: invalid page number 5737
On tree page 3878 cell 26: Child page depth differs
On tree page 3878 cell 27: Failed to read ptrmap key=5746
On tree page 3878 cell 27: invalid page number 5746
On tree page 3878 cell 28: Failed to read ptrmap key=5748
On tree page 3878 cell 28: invalid page number 5748
On tree page 3878 cell 29: Failed to read ptrmap key=5749
On tree page 3878 cell 29: invalid page number 5749
On tree page 3878 cell 30: Failed to read ptrmap key=5755
On tree page 3878 cell 30: invalid page number 5755
On tree page 3878 cell 31: Failed to read ptrmap key=5757
On tree page 3878 cell 31: invalid page number 5757
On tree page 3878 cell 32: Failed to read ptrmap key=5759
On tree page 3878 cell 32: invalid page number 5759
On tree page 3878 cell 33: Failed to read ptrmap key=5761
On tree page 3878 cell 33: invalid page number 5761
On tree page 3878 cell 34: Failed to read ptrmap key=5763
On tree page 3878 cell 34: invalid page number 5763
On tree page 3878 cell 35: Failed to read ptrmap key=5767
On tree page 3878 cell 35: invalid page number 5767
On tree page 3878 cell 36: Failed to read ptrmap key=5769
On tree page 3878 cell 36: invalid page number 5769
On tree page 3878 cell 37: Failed to read ptrmap key=5771
On tree page 3878 cell 37: invalid page number 5771
On tree page 3878 cell 38: Failed to read ptrmap key=5773
On tree page 3878 cell 38: invalid page number 5773
On tree page 3878 cell 39: Failed to read ptrmap key=5775
On tree page 3878 cell 39: invalid page number 5775
On tree page 3878 cell 40: Failed to read ptrmap key=5777
On tree page 3878 cell 40: invalid page number 5777
On tree page 3878 cell 41: Failed to read ptrmap key=5780
On tree page 3878 cell 41: invalid page number 5780
On tree page 3878 cell 42: Failed to read ptrmap key=5783
On tree page 3878 cell 42: invalid page number 5783
On tree page 3878 cell 43: Failed to read ptrmap key=5787
On tree page 3878 cell 43: invalid page number 5787
On tree page 3878 cell 44: Failed to read ptrmap key=5789
On tree page 3878 cell 44: invalid page number 5789
On tree page 3878 cell 45: Failed to read ptrmap key=5793
On tree page 3878 cell 45: invalid page number 5793
On tree page 3878 cell 46: Failed to read ptrmap key=5795
On tree page 3878 cell 46: invalid page number 5795
On tree page 3878 cell 47: Failed to read ptrmap key=5797
On tree page 3878 cell 47: invalid page number 5797
On tree page 3878 cell 48: Failed to read ptrmap key=5801
On tree page 3878 cell 48: invalid page number 5801
On tree page 3878 cell 49: Failed to read ptrmap key=5805
On tree page 3878 cell 49: invalid page number 5805
On tree page 3878 cell

Re: [sqlite] ANN: SQLite 3.6.16.C#

2009-08-30 Thread Filip Navara
On Sat, Aug 29, 2009 at 9:51 PM, Yves
Goergen wrote:
> On 01.08.2009 16:19 CE(S)T, Noah Hart wrote:
>> This is not a driver, dll, or wrapper.  This is a port of the underlying
>> SQLite software.
>
> Hm, yes, but isn't the other C# SQLite assembly also the entire DB
> engine? I mean, there's no client/server driver; if you can access
> SQLite files, you already have the entire engine in your hands. And the
> other one doesn't need any additional files, it's just one .NET
> assembly. And a huge one. So I cannot imagine that it's just some
> bindings to a native DLL.

It's both bindings and the native code in one DLL. The SQLite code
there is not managed and is compiled only for a single platform.

F.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feeds for sqlite releases

2009-06-27 Thread Filip Navara
Hi,

there already is an RSS feed -
http://www.sqlite.org/cvstrac/timeline.rss - unfortunately it can't be
limited to display only the milestones as the regular web timeline can.

Best regards,
Filip Navara

On Sun, Jun 28, 2009 at 12:49 AM, João Eiras<joao.ei...@gmail.com> wrote:
> Hi !
>
> I really missing having feeds to track sqlite releases. Currently I either 
> have to go through the mailing list, which is not low traffic, or open the 
> website.
> Would you consider having an rss feed for sqlite releases ?
>
> Thanks.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query regarding downloading 3.2.2 version of Sqlite

2009-06-15 Thread Filip Navara
Heya,
search for sqlite-3_2_2.zip or sqlitedll-3_2_2.zip (
http://www.ponx.org/download/CD/Python/sqlitedll-3_2_2.zip) on Google.
They're no longer available from the official site. Alternatively you can
check out the correct version from SQLite CVS and compile it yourself.

Best regards,
Filip Navara

On Fri, Jun 12, 2009 at 8:35 PM, ganesh hegde <
ganesh.shreekanth.he...@gmail.com> wrote:

> Hey PPL,
>i am required to use 3.2.2 version of sqlite as part of my work, because
> that is the version that has been approved. So could anyone please let me
> know if there is any way of downloading the 3.2.2 version of the binaries
> for Windows.(The current download page only points to 3.6.14).
> The situation is that it takes a while to get the approval, but there isnt
> much time to build the prototype. Please let me know if there is a way
> forward from here.
>
> --
> Regards,
> Ganesh Hegde
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corruption of incremental_vacuum databases

2009-05-30 Thread Filip Navara
On Sat, May 30, 2009 at 7:43 AM, John Machin <sjmac...@lexicon.net> wrote:
> On 17/04/2009 1:39 AM, Filip Navara wrote:
>> Hello,
>>
>> I had expected at least some reply. Oh well, now the corruption has happened
>> again (on another machine) and I have saved the database files. One
>> of the corrupted files is available at
>> http://www.emclient.com/temp/folders.zip.
>
> U ... your first message [scroll down to read] is talking about
> *incremental* vacuuming; however the database file that you made
> available has FULL (not incremental) auto-vacuuming set.

I switch to incremental vacuum mode at run-time once the database is opened.
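
That is, something along these lines (the page count is illustrative):

PRAGMA auto_vacuum = incremental;   -- switching from full to incremental works on an existing database
PRAGMA incremental_vacuum(32);      -- later, from the background thread, free up to 32 pages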

> Six orphan pages at the end of the file plus another 3 orphans suggests
> that an auto-vacuum (full or incremental) may have been interrupted --
> or perhaps later given that you are? were? using synchronous=off.

While I was using synchronous=off in this instance, it happened even
with synchronous=FULL. There was never a power failure during the
tests; only the application may have been killed a few times.

> Did you get any resolution on this?

Updating to the latest SQLite, which includes change #6413, fixed it.
Unfortunately it created a lot of other problems that are hopefully
now solved with the 3.6.14.2 release.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question on errors - IOERR and CANTOPEN

2009-05-26 Thread Filip Navara
Hi Dennis,

what SQLite version, and on what platform, are you using? There was a
bug in SQLite 3.6.13 on Windows where SQLITE_CANTOPEN was incorrectly
returned during the journal check when a race condition between two
threads was hit. Also, there could be some other software interfering
with the journal deletion, which could cause SQLITE_IOERR. Typically
TortoiseSVN has the habit of doing this. While there is a mechanism in
SQLite to work around this, it's far from perfect. A known workaround is
to use "pragma journal_mode=persist;". If this solves your problems,
then you most probably hit this bug.

BTW, I don't know how many people are affected by this journal
creation bug, but there's a reliable way to work around it. The
solution would be to detect the "delete pending" state and rename the
journal file in that case. This will allow the new journal file to be
created, and the old one will disappear as soon as the offending
application closes the last handle to it. Anyway... I don't want to
hijack your thread :)

Best regards,
Filip Navara

On Tue, May 26, 2009 at 7:51 AM, Dennis Volodomanov
<dennis.volodoma...@conceiva.com> wrote:
> I sometimes get either a SQLITE_IOERR or a SQLITE_CANTOPEN when issuing
> BEGIN IMMEDIATE TRANSACTION or END TRANSACTION, however the database
> file is there and is being used by another thread. I thought I'd get the
> usual SQLITE_BUSY or SQLITE_LOCKED, but sometimes these file-related
> errors come up.
>
>
>
> Does anyone know why they come up and what should be the correct logic
> to continue? Should (and can it) the operation in question be retried,
> as if a BUSY/LOCKED was encountered?
>
>
>
> Thanks in advance,
>
>
>
>   Dennis
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request

2009-05-23 Thread Filip Navara
PRAGMA schema_version ... for the second case.
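
For example, any schema change bumps it (table name and values are
illustrative):

PRAGMA schema_version;            -- returns e.g. 12
ALTER TABLE t ADD COLUMN extra;   -- any CREATE/ALTER/DROP increments the counter
PRAGMA schema_version;            -- now returns 13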

F.

On Sat, May 23, 2009 at 4:20 PM, Simon Slavin
 wrote:
>
> On 21 May 2009, at 5:59am, Simon Slavin wrote:
>
>> int sqlite3_change_count(sqlite3*)
>>
>> Returns an integer which is incremented whenn a change is made to any
>> table in the database.  May be the value stored in bytes 24..27 of
>> file header or something else if that's not what I really want.
>
> Having thought about it some more, I want two different numbers.  One
> of them changes when a field changes: INSERT/DELETE/UPDATE.  The other
> changes when the schema changes: ALTER/DROP/CREATE.  Rather than crowd
> the function library with multiple functions, it might be possible to
> implement it like so:
>
> int sqlite3_change_count(sqlite3*, int cTypes)
>
> when cTypes = 1, you get the number of changes to the schema
> when cTypes = 2, you get the number of changes to fields
> when cTypes = 3, you get the sum of the above two figures
>
> Or possibly 1 and 2 should be the other way around.
>
> Any ideas, criticism, or reasons it can't be done ?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE_CANTOPEN error / bug?

2009-04-17 Thread Filip Navara
Proposed fix:

--- os_win.old  2009-04-09 20:41:18.0 +0200
+++ os_win.c2009-04-17 16:33:47.904710700 +0200
@@ -1317,7 +1317,7 @@
   }else{
 dwCreationDisposition = OPEN_EXISTING;
   }
-  if( flags & SQLITE_OPEN_MAIN_DB ){
+  if( !(flags & SQLITE_OPEN_EXCLUSIVE) ){
 dwShareMode = FILE_SHARE_READ | FILE_SHARE_WRITE;
   }else{
 dwShareMode = 0;

Best regards,
Filip Navara

On Fri, Apr 17, 2009 at 3:30 PM, Filip Navara <filip.nav...@gmail.com> wrote:
> Hi again,
>
> the bug was introduced in revision 1.573 of pager.c.
>
> Best regards,
> Filip Navara
>
> On Fri, Apr 17, 2009 at 2:26 PM, Filip Navara <filip.nav...@gmail.com> wrote:
>>
>> Hello,
>>
>> today I tried to upgrade our software from SQLite version 3.6.3 to version 
>> 3.6.13 and I am hitting a race condition that I believe is a bug in the 
>> library. The library is compiled as thread-safe (and it's reproducible even 
>> with the precompiled DLL). Sometimes sqlite3_step fails with SQLITE_CANTOPEN 
>> error. I traced it with Process Monitor and each time the error happened, 
>> this could be seen in the log:
>>
>> (time / process / PID / function name / file name / result of the call / 
>> parameters / thread ID)
>> 12:27:55.1443678 PM eM Client.vshost.exe 3308 LockFile folders.dat SUCCESS 
>> Exclusive: True, Offset: 1,073,741,825, Length: 1, Fail Immediately: True 
>> 5052
>> 12:27:55.1443826 PM eM Client.vshost.exe 3308 UnlockFileSingle folders.dat 
>> SUCCESS Offset: 1,073,741,825, Length: 1 5052
>> 12:27:55.1443944 PM eM Client.vshost.exe 3308 QueryStandardInformationFile 
>> folders.dat SUCCESS AllocationSize: 32,768, EndOfFile: 21,504, 
>> NumberOfLinks: 1, DeletePending: False, Directory: False 5052
>> 12:27:55.1445976 PM eM Client.vshost.exe 3308 CreateFile folders.dat-journal 
>> SUCCESS Desired Access: Generic Read, Disposition: Open, Options: 
>> Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: 
>> None, AllocationSize: n/a, OpenResult: Opened 5052
>> 12:27:55.1447616 PM eM Client.vshost.exe 3308 CreateFile folders.dat-journal 
>> SHARING VIOLATION Desired Access: Generic Read, Disposition: Open, Options: 
>> Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: 
>> None, AllocationSize: n/a 5804
>> 12:27:55.1447930 PM eM Client.vshost.exe 3308 ReadFile folders.dat-journal 
>> SUCCESS Offset: 0, Length: 1, Priority: Normal 5052
>> 12:27:55.1448185 PM eM Client.vshost.exe 3308 UnlockFileSingle folders.dat 
>> SUCCESS Offset: 1,073,741,826, Length: 510 5804
>> 12:27:55.1448268 PM eM Client.vshost.exe 3308 CloseFile folders.dat-journal 
>> SUCCESS 5052
>> There are two threads accessing the folders.dat database at the same time, 
>> the first one (5052) succeeds, while the other one (5804) fails when 
>> checking for hot journal. Here are the stack traces of the two threads 
>> during the CreateFile calls (both against the sqlite3.c 3.6.13 amalgamation).
>>
>> Thread 5052:
>> 14 sqlite3.dll winOpen + 0x139, sqlite3.c(27454) 0x100067d6
>> 15 sqlite3.dll sqlite3OsOpen + 0x19, sqlite3.c(11655) 0x100095ed
>> 16 sqlite3.dll hasHotJournal + 0xce, sqlite3.c(33360) 0x1000ac42
>> 17 sqlite3.dll pagerSharedLock + 0x102, sqlite3.c(33499) 0x100093c0
>> 18 sqlite3.dll sqlite3PagerAcquire + 0x41, sqlite3.c(33732) 0x10008c21
>> 19 sqlite3.dll sqlite3BtreeGetPage + 0x19, sqlite3.c(37419) 0x1000c92a
>> 20 sqlite3.dll lockBtree + 0x16, sqlite3.c(38097) 0x1000c6ee
>> 21 sqlite3.dll sqlite3BtreeBeginTrans + 0xe5,sqlite3.c(38366) 0x1000c434
>> 22 sqlite3.dll sqlite3VdbeExec + 0x2e59, sqlite3.c(52191) 0x10019223
>> 23 sqlite3.dll sqlite3Step + 0x161,sqlite3.c(48601) 0x10014d9d
>> 24 sqlite3.dll sqlite3_step + 0x3d, sqlite3.c(48675) 0x10014aaf
>> Thread 5804:
>> 14 sqlite3.dll winOpen + 0x139, sqlite3.c(27454) 0x100067d6
>> 15 sqlite3.dll sqlite3OsOpen + 0x19, sqlite3.c(11655) 0x100095ed
>> 16 sqlite3.dll hasHotJournal + 0xce, sqlite3.c(33360) 0x1000ac42
>> 17 sqlite3.dll pagerSharedLock + 0x102, sqlite3.c(33499) 0x100093c0
>> 18 sqlite3.dll sqlite3PagerAcquire + 0x41, sqlite3.c(33732) 0x10008c21
>> 19 sqlite3.dll sqlite3BtreeGetMeta + 0x6e, sqlite3.c(42972) 0x1000ed7e
>> 20 sqlite3.dll sqlite3VdbeExec + 0x2f05, sqlite3.c(52230) 0x100192cf
>> 21 sqlite3.dll sqlite3Step + 0x161, sqlite3.c(48601) 0x10014d9d
>> 22 sqlite3.dll sqlite3_step + 0x3d, sqlite3.c(48675) 0x10014aaf
>> I believe it's incorrect behavior to return SQLITE_CANTOPEN to the 
>> application in that case. I'm not sure what would be the correct solution 
>> (setting sharing mode in winOpen?). Anyone can help or offer any insight?
>> Thanks,
>> Filip Navara
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE_CANTOPEN error / bug?

2009-04-17 Thread Filip Navara
Hi again,

the bug was introduced in revision 1.573 of pager.c.

Best regards,
Filip Navara

On Fri, Apr 17, 2009 at 2:26 PM, Filip Navara <filip.nav...@gmail.com> wrote:
>
> Hello,
>
> today I tried to upgrade our software from SQLite version 3.6.3 to version 
> 3.6.13 and I am hitting a race condition that I believe is a bug in the 
> library. The library is compiled as thread-safe (and it's reproducible even 
> with the precompiled DLL). Sometimes sqlite3_step fails with SQLITE_CANTOPEN 
> error. I traced it with Process Monitor and each time the error happened, 
> this could be seen in the log:
>
> (time / process / PID / function name / file name / result of the call / 
> parameters / thread ID)
> 12:27:55.1443678 PM eM Client.vshost.exe 3308 LockFile folders.dat SUCCESS 
> Exclusive: True, Offset: 1,073,741,825, Length: 1, Fail Immediately: True 5052
> 12:27:55.1443826 PM eM Client.vshost.exe 3308 UnlockFileSingle folders.dat 
> SUCCESS Offset: 1,073,741,825, Length: 1 5052
> 12:27:55.1443944 PM eM Client.vshost.exe 3308 QueryStandardInformationFile 
> folders.dat SUCCESS AllocationSize: 32,768, EndOfFile: 21,504, NumberOfLinks: 
> 1, DeletePending: False, Directory: False 5052
> 12:27:55.1445976 PM eM Client.vshost.exe 3308 CreateFile folders.dat-journal 
> SUCCESS Desired Access: Generic Read, Disposition: Open, Options: Synchronous 
> IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: None, 
> AllocationSize: n/a, OpenResult: Opened 5052
> 12:27:55.1447616 PM eM Client.vshost.exe 3308 CreateFile folders.dat-journal 
> SHARING VIOLATION Desired Access: Generic Read, Disposition: Open, Options: 
> Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: None, 
> AllocationSize: n/a 5804
> 12:27:55.1447930 PM eM Client.vshost.exe 3308 ReadFile folders.dat-journal 
> SUCCESS Offset: 0, Length: 1, Priority: Normal 5052
> 12:27:55.1448185 PM eM Client.vshost.exe 3308 UnlockFileSingle folders.dat 
> SUCCESS Offset: 1,073,741,826, Length: 510 5804
> 12:27:55.1448268 PM eM Client.vshost.exe 3308 CloseFile folders.dat-journal 
> SUCCESS 5052
> There are two threads accessing the folders.dat database at the same time, 
> the first one (5052) succeeds, while the other one (5804) fails when checking 
> for hot journal. Here are the stack traces of the two threads during the 
> CreateFile calls (both against the sqlite3.c 3.6.13 amalgamation).
>
> Thread 5052:
> 14 sqlite3.dll winOpen + 0x139, sqlite3.c(27454) 0x100067d6
> 15 sqlite3.dll sqlite3OsOpen + 0x19, sqlite3.c(11655) 0x100095ed
> 16 sqlite3.dll hasHotJournal + 0xce, sqlite3.c(33360) 0x1000ac42
> 17 sqlite3.dll pagerSharedLock + 0x102, sqlite3.c(33499) 0x100093c0
> 18 sqlite3.dll sqlite3PagerAcquire + 0x41, sqlite3.c(33732) 0x10008c21
> 19 sqlite3.dll sqlite3BtreeGetPage + 0x19, sqlite3.c(37419) 0x1000c92a
> 20 sqlite3.dll lockBtree + 0x16, sqlite3.c(38097) 0x1000c6ee
> 21 sqlite3.dll sqlite3BtreeBeginTrans + 0xe5,sqlite3.c(38366) 0x1000c434
> 22 sqlite3.dll sqlite3VdbeExec + 0x2e59, sqlite3.c(52191) 0x10019223
> 23 sqlite3.dll sqlite3Step + 0x161,sqlite3.c(48601) 0x10014d9d
> 24 sqlite3.dll sqlite3_step + 0x3d, sqlite3.c(48675) 0x10014aaf
> Thread 5804:
> 14 sqlite3.dll winOpen + 0x139, sqlite3.c(27454) 0x100067d6
> 15 sqlite3.dll sqlite3OsOpen + 0x19, sqlite3.c(11655) 0x100095ed
> 16 sqlite3.dll hasHotJournal + 0xce, sqlite3.c(33360) 0x1000ac42
> 17 sqlite3.dll pagerSharedLock + 0x102, sqlite3.c(33499) 0x100093c0
> 18 sqlite3.dll sqlite3PagerAcquire + 0x41, sqlite3.c(33732) 0x10008c21
> 19 sqlite3.dll sqlite3BtreeGetMeta + 0x6e, sqlite3.c(42972) 0x1000ed7e
> 20 sqlite3.dll sqlite3VdbeExec + 0x2f05, sqlite3.c(52230) 0x100192cf
> 21 sqlite3.dll sqlite3Step + 0x161, sqlite3.c(48601) 0x10014d9d
> 22 sqlite3.dll sqlite3_step + 0x3d, sqlite3.c(48675) 0x10014aaf
> I believe it's incorrect behavior to return SQLITE_CANTOPEN to the 
> application in that case. I'm not sure what would be the correct solution 
> (setting sharing mode in winOpen?). Anyone can help or offer any insight?
> Thanks,
> Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLITE_CANTOPEN error / bug?

2009-04-17 Thread Filip Navara
Hello,

today I tried to upgrade our software from SQLite version 3.6.3 to version
3.6.13 and I am hitting a race condition that I believe is a bug in the
library. The library is compiled as thread-safe (and it's reproducible even
with the precompiled DLL). Sometimes sqlite3_step fails with SQLITE_CANTOPEN
error. I traced it with Process Monitor and each time the error happened,
this could be seen in the log:

(time / process / PID / function name / file name / result of the call /
parameters / thread ID)
12:27:55.1443678 PM eM Client.vshost.exe 3308 LockFile folders.dat
SUCCESS Exclusive:
True, Offset: 1,073,741,825, Length: 1, Fail Immediately: True 5052
12:27:55.1443826 PM eM Client.vshost.exe 3308 UnlockFileSingle folders.dat
SUCCESS Offset: 1,073,741,825, Length: 1 5052
12:27:55.1443944 PM eM Client.vshost.exe 3308 QueryStandardInformationFile
folders.dat SUCCESS AllocationSize: 32,768, EndOfFile: 21,504,
NumberOfLinks: 1, DeletePending: False, Directory: False 5052
12:27:55.1445976 PM eM Client.vshost.exe 3308 CreateFile folders.dat-journal
SUCCESS Desired Access: Generic Read, Disposition: Open, Options:
Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode:
None, AllocationSize: n/a, OpenResult: Opened 5052
12:27:55.1447616 PM eM Client.vshost.exe 3308 CreateFile
folders.dat-journal SHARING
VIOLATION Desired Access: Generic Read, Disposition: Open, Options:
Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode:
None, AllocationSize: n/a 5804
12:27:55.1447930 PM eM Client.vshost.exe 3308 ReadFile folders.dat-journal
SUCCESS Offset: 0, Length: 1, Priority: Normal 5052
12:27:55.1448185 PM eM Client.vshost.exe 3308 UnlockFileSingle folders.dat
SUCCESS Offset: 1,073,741,826, Length: 510 5804
12:27:55.1448268 PM eM Client.vshost.exe 3308 CloseFile folders.dat-journal
SUCCESS 5052

There are two threads accessing the folders.dat database at the same time,
the first one (5052) succeeds, while the other one (5804) fails when
checking for hot journal. Here are the stack traces of the two threads
during the CreateFile calls (both against the sqlite3.c 3.6.13
amalgamation).

Thread 5052:
14 sqlite3.dll winOpen + 0x139, sqlite3.c(27454) 0x100067d6
15 sqlite3.dll sqlite3OsOpen + 0x19, sqlite3.c(11655) 0x100095ed
16 sqlite3.dll hasHotJournal + 0xce, sqlite3.c(33360) 0x1000ac42
17 sqlite3.dll pagerSharedLock + 0x102, sqlite3.c(33499) 0x100093c0
18 sqlite3.dll sqlite3PagerAcquire + 0x41, sqlite3.c(33732) 0x10008c21
19 sqlite3.dll sqlite3BtreeGetPage + 0x19, sqlite3.c(37419) 0x1000c92a
20 sqlite3.dll lockBtree + 0x16, sqlite3.c(38097) 0x1000c6ee
21 sqlite3.dll sqlite3BtreeBeginTrans + 0xe5,sqlite3.c(38366) 0x1000c434
22 sqlite3.dll sqlite3VdbeExec + 0x2e59, sqlite3.c(52191) 0x10019223
23 sqlite3.dll sqlite3Step + 0x161,sqlite3.c(48601) 0x10014d9d
24 sqlite3.dll sqlite3_step + 0x3d, sqlite3.c(48675) 0x10014aaf

Thread 5804:
14 sqlite3.dll winOpen + 0x139, sqlite3.c(27454) 0x100067d6
15 sqlite3.dll sqlite3OsOpen + 0x19, sqlite3.c(11655) 0x100095ed
16 sqlite3.dll hasHotJournal + 0xce, sqlite3.c(33360) 0x1000ac42
17 sqlite3.dll pagerSharedLock + 0x102, sqlite3.c(33499) 0x100093c0
18 sqlite3.dll sqlite3PagerAcquire + 0x41, sqlite3.c(33732) 0x10008c21
19 sqlite3.dll sqlite3BtreeGetMeta + 0x6e, sqlite3.c(42972) 0x1000ed7e
20 sqlite3.dll sqlite3VdbeExec + 0x2f05, sqlite3.c(52230) 0x100192cf
21 sqlite3.dll sqlite3Step + 0x161, sqlite3.c(48601) 0x10014d9d
22 sqlite3.dll sqlite3_step + 0x3d, sqlite3.c(48675) 0x10014aaf

I believe it's incorrect behavior to return SQLITE_CANTOPEN to the
application in that case. I'm not sure what would be the correct solution
(setting the sharing mode in winOpen?). Can anyone help or offer any insight?

Thanks,
Filip Navara
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","TID"
"12:27:55.1394789 PM","eM 
Client.vshost.exe","3308","LockFile","C:\Users\Filip\AppData\Roaming\eM 
Client\folders.dat","SUCCESS","Exclusive: True, Offset: 1,073,741,824, Length: 
1, Fail Immediately: True","5052"
"12:27:55.1395073 PM","eM 
Client.vshost.exe","3308","LockFile","C:\Users\Filip\AppData\Roaming\eM 
Client\folders.dat","SUCCESS","Exclusive: False, Offset: 1,073,741,826, Length: 
510, Fail Immediately: True","5052"
"12:27:55.1395216 PM","eM 
Client.vshost.exe","3308","UnlockFileSingle","C:\Users\Filip\AppData\Roaming\eM 
Client\folders.dat","SUCCESS","Offset: 1,073,741,824, Length: 1","5052"
"12:27:55.1397821 PM","eM 
Client.vshost.exe","3308","QueryOpen","C:\Users\Filip\AppData\Roaming\eM 
Client\folders.dat-journal","FAST IO DISALLOWED","","5052"
"12:27

Re: [sqlite] Corruption of incremental_vacuum databases

2009-04-16 Thread Filip Navara
Hello,

I had expected at least some reply. Oh well, now the corruption has happened
again (on another machine) and I have saved the database files. One
of the corrupted files is available at http://www.emclient.com/temp/folders.zip.
I'd be glad for any help, or at least confirmation that it could be
related to the issues in the tickets listed below.

Thanks,
Filip Navara

On Tue, Mar 31, 2009 at 11:05 AM, Filip Navara <xnav...@volny.cz> wrote:
> Hello,
>
> after seeing the recent change #6413 and ticket #3761 I finally
> decided to write about a corruption issue we have.
>
> This is the environment of our application:
> - We are using SQLite 3.6.3 (with SQLite.NET 1.0.60.0).
> - Several database files. Each file is opened in its own connection
> and never shared across them.
> - Some of these connections have another database attached to it
> (containing mostly BLOB data).
> - In all cases the connections are opened on program start and closed
> on program shutdown.
> - There's a low-priority thread that executes "pragma
> incremental_vacuum" when the application is idle and there are enough
> free pages. Code of the thread is listed below.
> - "journal_mode=persist" is used on all databases in all connections
> (to workaround a bug in the journal deletion logic on Windows, search
> for "TortoiseSVN" in the mailing list archive for details)
> - "synchronous=off" is used on all databases in all connections. This
> setting is likely to change in future, but in no case of the
> corruption a system crash was involved.
>
> Since we started using the incremental_vacuum mode we were getting
> database corruption errors pretty often (sometimes as often as once a
> day among 3 people). Most, if not all, of these corruptions happened
> following a ROLLBACK (caused by a constraint violation). "pragma
> integrity_check;" on the already corrupted databases usually reported
> a few "Page XXX is never used" errors.
>
> Unfortunately I don't have any of the corrupted databases at hand and
> I have no deterministic way to create them. My question is if these
> could be related to the just fixed problem (in ticket 3761) or if it
> could be another issue?
>
> Best regards,
> Filip Navara
>
> 
>
> WaitHandle[] handles = new WaitHandle[] { this.stopEvent, this.wakeupEvent };
> System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
> int timeout = -1;
> int pagesPerIteration = 32;
>
> // Wait for thread shutdown and wakeup event. The shutdown event
> // is used to stop the thread on application exit. The wakeup event is
> // fired on startup if there are free pages in the database or if a DELETE
> // statement was executed.
> while (WaitHandle.WaitAny(handles, timeout, false) != 0)
> {
>        long totalFreePages = 0, freePages;
>        lock (this.repositories)
>        {
>                stopWatch.Reset();
>                stopWatch.Start();
>                foreach (IRepositoryBase repository in this.repositories)
>                {
>                       // wrapper around "pragma freelist_count;"
>                        freePages = repository.GetFreePageCount();
>                        totalFreePages += freePages;
>                        if (freePages > 0)
>                               // wrapper around "pragma incremental_vacuum(x)"
>                                repository.Compact(pagesPerIteration);
>                }
>                stopWatch.Stop();
>        }
>
>        // We start by freeing 32 pages per one iteration of the loop for
>       // each database. After each iteration the number is recalculated
>       // based on the time spent on the operation and then it's
>       // truncated to the <24;4096> range.
>        pagesPerIteration = Math.Min(Math.Max(24, (int)(100.0 *
> pagesPerIteration / stopWatch.ElapsedMilliseconds)), 4096);
>
>        // If there are still free pages in the databases then schedule the
>       // thread to wake up in 200ms and continue vacuuming them.
>        if (totalFreePages > 0)
>                timeout = 200;
>        else
>                timeout = -1;
> }
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corruption of incremental_vacuum databases

2009-03-31 Thread Filip Navara
Hello,

after seeing the recent change #6413 and ticket #3761 I finally
decided to write about a corruption issue we have.

This is the environment of our application:
- We are using SQLite 3.6.3 (with SQLite.NET 1.0.60.0).
- Several database files. Each file is opened in its own connection
and never shared across them.
- Some of these connections have another database attached to it
(containing mostly BLOB data).
- In all cases the connections are opened on program start and closed
on program shutdown.
- There's a low-priority thread that executes "pragma
incremental_vacuum" when the application is idle and there are enough
free pages. Code of the thread is listed below.
- "journal_mode=persist" is used on all databases in all connections
(to workaround a bug in the journal deletion logic on Windows, search
for "TortoiseSVN" in the mailing list archive for details)
- "synchronous=off" is used on all databases in all connections. This
setting is likely to change in future, but in no case of the
corruption a system crash was involved.

Since we started using the incremental_vacuum mode we were getting
database corruption errors pretty often (sometimes as often as once a
day among 3 people). Most, if not all, of these corruptions happened
following a ROLLBACK (caused by a constraint violation). "pragma
integrity_check;" on the already corrupted databases usually reported
a few "Page XXX is never used" errors.

Unfortunately I don't have any of the corrupted databases at hand and
I have no deterministic way to create them. My question is whether these
could be related to the just-fixed problem (in ticket 3761) or whether
it is another issue.

Best regards,
Filip Navara



WaitHandle[] handles = new WaitHandle[] { this.stopEvent, this.wakeupEvent };
System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
int timeout = -1;
int pagesPerIteration = 32;

// Wait for thread shutdown and wakeup event. The shutdown event
// is used to stop the thread on application exit. The wakeup event is
// fired on startup if there are free pages in the database or if a DELETE
// statement was executed.
while (WaitHandle.WaitAny(handles, timeout, false) != 0)
{
       long totalFreePages = 0, freePages;
       lock (this.repositories)
       {
               stopWatch.Reset();
               stopWatch.Start();
               foreach (IRepositoryBase repository in this.repositories)
               {
   // wrapper around "pragma freelist_count;"
                       freePages = repository.GetFreePageCount();
                       totalFreePages += freePages;
                       if (freePages > 0)
   // wrapper around "pragma incremental_vacuum(x)"
                               repository.Compact(pagesPerIteration);
               }
               stopWatch.Stop();
       }

       // We start by freeing 32 pages per one iteration of the loop for
   // each database. After each iteration the number is recalculated
   // based on the time spent on the operation and then it's
   // truncated to the <24;4096> range.
       pagesPerIteration = Math.Min(Math.Max(24, (int)(100.0 *
pagesPerIteration / stopWatch.ElapsedMilliseconds)), 4096);

       // If there are still free pages in the databases then schedule the
   // thread to wake up in 200ms and continue vacuuming them.
       if (totalFreePages > 0)
               timeout = 200;
       else
               timeout = -1;
}
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unexpected interaction between virtual tables and the IN operator.

2008-12-19 Thread Filip Navara
Any update on this? It's seriously affecting performance in our application.

Best regards,
Filip Navara

On Tue, Aug 12, 2008 at 6:33 PM, Scott Hess <sh...@google.com> wrote:
> [Full text of an example later in the email.]
>
> For a query like this:
>
>   SELECT * FROM t_ft WHERE docid = (SELECT rowid FROM t WHERE label = 'y');
>
> fulltextBestIndex() gets a SQLITE_INDEX_CONSTRAINT_EQ on the docid
> column, which can be efficiently accessed, so fulltextFilter() can do
> a very efficient query.  Meanwhile, for this:
>
>   SELECT * FROM t_ft WHERE docid IN (SELECT rowid FROM t WHERE label = 'y');
>
> fulltestBestIndex() gets no constraints, and thus can't do anything,
> so you end up with a full table scan of the fts3 table.
>
> As far as I can tell from sqlite/src/where.c, the IN operator is
> explicitly not optimized for virtual tables.  I can't quite figure out
> why, though.  An oversight?  It makes things very inefficient for
> certain schema arrangements.  You can work around it by making your
> sub-select explicit in your application code, but since IN is
> optimized fine for regular tables, it's a bit unexpected.
>
> Thanks,
> scott
>
>
> An example for feeding to sqlite3 (though you'll need to set
> breakpoints in fts3.c to see it happening):
>
> DROP TABLE IF EXISTS t;
> DROP TABLE IF EXISTS t_ft;
> CREATE TABLE t (
>  id INTEGER PRIMARY KEY,
>  label TEXT
> );
> CREATE INDEX t_idx ON t(label);
> CREATE VIRTUAL TABLE t_ft USING fts3(c);
>
> INSERT INTO t (id, label) VALUES (null, 'x');
> INSERT INTO t_ft (docid, c) VALUES (LAST_INSERT_ROWID(), 'This is a test');
> INSERT INTO t (id, label) VALUES (null, 'y');
> INSERT INTO t_ft (docid, c) VALUES (LAST_INSERT_ROWID(), 'That was a test');
> INSERT INTO t (id, label) VALUES (null, 'x');
> INSERT INTO t_ft (docid, c) VALUES (LAST_INSERT_ROWID(), 'Another test');
> INSERT INTO t (id, label) VALUES (null, 'z');
> INSERT INTO t_ft (docid, c) VALUES (LAST_INSERT_ROWID(), 'And another');
>
> SELECT * FROM t_ft WHERE docid = (SELECT rowid FROM t WHERE label =
> 'y');   -- Efficient
> SELECT * FROM t_ft WHERE docid IN (SELECT rowid FROM t WHERE label =
> 'y');   -- Inefficient, sub-select matches one row
> SELECT * FROM t_ft WHERE docid IN (SELECT rowid FROM t WHERE label =
> 'x');   -- Inefficient, sub-select matches multiple rows
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_open16 fails on Windows Vista 64b

2008-11-26 Thread Filip Navara
That'd happen if you use the 32-bit DLL with a 64-bit application... F.

On Tue, Nov 25, 2008 at 1:59 PM, Ti Ny <[EMAIL PROTECTED]> wrote:
>
> I am getting an exception: An attempt was made to load a program with a 
> incorrect format. (HRESULT: 0x8007000B) when I am trying to use 
> sqlite3_open16 on Windows Vista 64b. On 32b or XP 32bit it works correctly.
>
> _
> Invite your mail contacts to join your friends list with Windows Live Spaces. 
> It's easy!
> http://spaces.live.com/spacesapi.aspx?wx_action=create_url=/friends.aspx=en-us
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Can I use sqlite3_bind for colmun name or ASC/DESC selector?

2008-10-25 Thread Filip Navara
No, it's impossible. The main idea of prepared statements and binding
is that the statement has to be compiled only once into the virtual
machine code that is then executed when you run sqlite3_step. This
works only as long as the compiled form isn't affected by the bound
data. Binding column names, ASC/DESC, etc. would change the meaning of
the query and the compiled code wouldn't be reusable (e.g. using ASC or
DESC could result in the use of a different index, and so on).
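
To illustrate with the statements from your question: the value can stay a
parameter, but each ORDER BY variant has to be prepared as its own statement:

SELECT * FROM records WHERE age = ? ORDER BY weight ASC;
SELECT * FROM records WHERE age = ? ORDER BY weight DESC;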

Best regards,
Filip Navara

On Sat, Oct 25, 2008 at 6:54 PM, mfujisaw <[EMAIL PROTECTED]> wrote:
> Hi, guys,
>
> I am developping allication with SQLite3 functions.
> When I will use sqlite3_bind and wild card for some SQL, it wouldn't work 
> well.
>
> 1. SELECT ? FROM records WHERE age=30
> I want to set some table name into ? every time.
>
> 2. SELECT * FROM records WHERE age=30 ORDER BY ?
> I want to set some order target into ? every time.
>
> 3. SELECT * FROM records WHERE age=30 ORDER BY weight ?
> I want to select ASC/DESC into ? every time.
>
> In every those 3 cases, I cant get correct results.
> Does any one help me?
> Can I make these parameter wildcard?
>
>
> Thanks,
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Intermittent SQLITE_CANTOPEN on Windows

2008-10-15 Thread Filip Navara
Hmm, it didn't help. I can still easily reproduce the TortoiseSVN bug.
The only reliable workaround is to use a different journal_mode (or
to implement renaming of the journal file before deleting it).

Best regards,
F.

On Wed, Oct 15, 2008 at 2:33 PM, Shane Harrelson <[EMAIL PROTECTED]> wrote:
> Version 3.6.1 (IIRC) had changes to improve the error detection and retry
> logic for this condition (typically caused when a 3rd party  application
> opens the SQLite journal file.)
> HTH.
> -Shane
>
>
> On Mon, Oct 13, 2008 at 12:52 PM, Doug <[EMAIL PROTECTED]> wrote:
>
>> I'm using SQLite 3.5.6 on Windows and intermittently get SQLITE_CANTOPEN
>> when doing an insert.  When that fails, I can use the debugger to go back
>> up
>> and step through the same lines again (using the same database handle -
>> nothing opened or closed in between) and it will work.
>>
>> I am using sqlite3_bind_blob with the following: INSERT OR REPLACE INTO
>> BlobTable (BlobKey, BlobVal) Values ('exampleKey', ?)
>> in case that makes any difference (the SQLITE_CANTOPEN code is returned
>> from
>> sqlite3_step).
>>
>> I doubt this has anything to do with SQLite as it's been working perfectly
>> for years, but I also can't figure out what has changed on my system such
>> that this would be happening now.
>>
>> Thanks in advance for any ideas.
>>
>> Doug
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Intermittent SQLITE_CANTOPEN on Windows

2008-10-14 Thread Filip Navara
http://www.mail-archive.com/sqlite-users@sqlite.org/msg34453.html

On Mon, Oct 13, 2008 at 6:52 PM, Doug <[EMAIL PROTECTED]> wrote:
> I'm using SQLite 3.5.6 on Windows and intermittently get SQLITE_CANTOPEN
> when doing an insert.  When that fails, I can use the debugger to go back up
> and step through the same lines again (using the same database handle -
> nothing opened or closed in between) and it will work.
>
> I am using sqlite3_bind_blob with the following: INSERT OR REPLACE INTO
> BlobTable (BlobKey, BlobVal) Values ('exampleKey', ?)
> in case that makes any difference (the SQLITE_CANTOPEN code is returned from
> sqlite3_step).
>
> I doubt this has anything to do with SQLite as it's been working perfectly
> for years, but I also can't figure out what has changed on my system such
> that this would be happening now.
>
> Thanks in advance for any ideas.
>
> Doug
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vista frustrations

2008-09-17 Thread Filip Navara
"(*) Sun's ZFS can even detect strided sequential access - ie reading X
amount of data every Y kilobytes."

... and so can the NT cache manager, which has done this since the very
first Windows NT release ;-) It's good to see that people are finally
adopting these features 15 years later.

F.

On Wed, Sep 17, 2008 at 10:17 AM, Roger Binns <[EMAIL PROTECTED]> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Robert Simpson wrote:
>> To me this seems like an obvious bug in Vista,
>
> Actually I'd argue that it is behaving as designed.  Generally
> filesystem code will try to detect what is going on under the hood.  In
> particular if it looks like you are doing sequential access(*) then they
> will start doing read ahead, whereas read ahead is a waste for random
> access.  By using the sequential or random flags you are explicitly
> telling the filesystem to ignore its heuristics and do as you say only.
>
> Since SQLite cannot tell in advance whether access is almost entirely
> random or almost entirely sequential, it makes far more sense to let the
> operating system use its builtin heuristics and optimise accordingly.
>
> (*) Sun's ZFS can even detect strided sequential access - ie reading X
> amount of data every Y kilobytes.
>
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.6 (GNU/Linux)
>
> iD8DBQFI0L0vmOOfHg372QQRAk+7AKCEloS0d+xB+M2C/Bap38ilZZ8tVACfciC4
> vMfbYMNVV9k6CNR7hpSQo6A=
> =AXGU
> -END PGP SIGNATURE-
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database file size isn't really very small

2008-07-18 Thread Filip Navara
Not really two copies, as the integer could be the primary key ...
something along the lines of:

CREATE TABLE StockName (date INTEGER PRIMARY KEY, price REAL);
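
For example, if the text dates parse as ISO-8601 they could be loaded as
Unix timestamps (values purely illustrative):

INSERT INTO StockName(date, price) VALUES (strftime('%s', '2008-07-18'), 42.5);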

Regards,
F.

On Fri, Jul 18, 2008 at 10:03 PM, Jay A. Kreibich <[EMAIL PROTECTED]> wrote:
> On Fri, Jul 18, 2008 at 11:58:16AM -0700, Corey Nelson scratched on the wall:
>> > sqlite3 Ticks.db ".dump"
>> BEGIN TRANSACTION;
>> CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL);
>
>> I would expect the database file to store a bit of "extra" data but it's
>> 2.17 times bigger than the text file!
>
>  As others have pointed out, the issue is with the index, which is
>  created automatically by the "UNIQUE" constraint.
>
>  In SQLite an index holds a full copy of the data.  Since the vast
>  majority of your data (byte wise) is the date string, slightly more
>  than 2x sounds just about right.
>
>  You can try to convert the date to a large integer.  SQLite supports
>  up to 64 bit ints (signed), which should hold your current
>  representation without problems.  You'll still have two copies, but
>  the data should be much smaller.
>
>   -j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "'People who live in bamboo houses should not throw pandas.' Jesus said that."
>   - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite 3.5.9 and Vista 64

2008-06-11 Thread Filip Navara
http://www.nabble.com/CANTOPEN-error-on-Windows-systems-running-TortoiseSVN-to17656998.html

- Filip

On Thu, Jun 12, 2008 at 12:46 AM, derek walters <[EMAIL PROTECTED]> wrote:
> I have a program that uses sqlite_exec in a loop of about a thousand
> queries. In windows XP (32), the program operates fine, but with Vista (64)
> I get random occurances of SQLITE_CANTOPEN (about 0.5% -- 5 or so -- of the
> 1000 queries have this error) as a return from the function when using the
> same data. Is there any known compatibility issues between SQlite and Vista,
> specifically 64-bit?
>
> Another thing I've noticed, is that even when using the same laptop model,
> the program running under Vista seems to run through the function
> sqlite_exec ~10x faster than under XP.
> Thanks,
> --
> --
> Derek Walters
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CANTOPEN error on Windows systems running TortoiseSVN

2008-06-04 Thread Filip Navara
On Wed, Jun 4, 2008 at 11:41 PM, Eric Minbiole
<[EMAIL PROTECTED]> wrote:
> It looks like you can configure TortoiseSVN to include / exclude
> specific paths during its searches.  Though I've not tried it, I would
> think you could simply exclude any paths that contain SQLite databases.
>  This thread had some good info:
>
> http://www.nabble.com/Disable-TSVNCache.exe-to-minimize-disk-IO--to2385446.html
>
> ~Eric

That would solve the specific problem with TortoiseSVN, but other
reports suggest that a similar error also happens with Windows Desktop
Search and possibly some other applications. I really hesitate to
include workarounds for specific applications in our installer. Thanks
for the hint, though.

- Filip
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] CANTOPEN error on Windows systems running TortoiseSVN

2008-06-04 Thread Filip Navara
Hello!

While testing our application we occasionally got the "Unable to open
database" error on several machines. The same problem is
also discussed at http://sqlite.phxsoftware.com/forums/t/689.aspx. It
was very hard to reproduce since it was timing-dependent, but we
managed to pin down the problem on one of our notebooks with the help
of a stress-test application and Process Monitor. To make a long story
short, here is the exact description of the problem:

- TSVNCache.exe monitors files and for whatever reason it opens them
and performs some queries on them.
- SQLite creates a journal file.
- TSVNCache notices the new file and opens it.
- SQLite deletes the journal file and it now enters the "Delete
Pending" state since TSVNCache still has handle for it.
- SQLite tries to recreate the journal for the next transaction (3 times),
but it always fails with the ERROR_DELETE_PENDING error.
  => CANTOPEN error is returned.
- TSVNCache finally releases the file handle and the file gets deleted.

The key reason for the problem happening only on Windows is the
"delete pending" state, which doesn't exist on *nix systems. While the
file is in this state its name cannot be reused. Any ideas how to fix
this issue? It could potentially affect a lot of people and cause
unexpected errors when transactions are performed in rapid succession.
Personally I can think only of a) renaming the journal file to a unique
name before deleting it, or b) using persistent journals (unfortunately
the version of SQLite we use doesn't support them).

Best regards,
Filip Navara

The relevant part of the log is

TSVNCache.exe   QueryOpen   FAST IO DISALLOWED
SQLiteTest.exe  CloseFile   SUCCESS
TSVNCache.exe   CreateFile  SUCCESS
SQLiteTest.exe  CreateFile  SUCCESS
SQLiteTest.exe  QueryAttributeTagFile   SUCCESS
SQLiteTest.exe  SetDispositionInformationFile   SUCCESS
SQLiteTest.exe  CloseFile   SUCCESS
SQLiteTest.exe  QueryOpen   FAST IO DISALLOWED
SQLiteTest.exe  CreateFile  DELETE PENDING
SQLiteTest.exe  UnlockFileSingleSUCCESS
SQLiteTest.exe  UnlockFileSingleSUCCESS
SQLiteTest.exe  LockFileSUCCESS
SQLiteTest.exe  LockFileSUCCESS
SQLiteTest.exe  UnlockFileSingleSUCCESS
SQLiteTest.exe  QueryOpen   FAST IO DISALLOWED
SQLiteTest.exe  CreateFile  DELETE PENDING
SQLiteTest.exe  QueryStandardInformationFileSUCCESS
SQLiteTest.exe  ReadFileSUCCESS
SQLiteTest.exe  LockFileSUCCESS
SQLiteTest.exe  CreateFile  DELETE PENDING
SQLiteTest.exe  CreateFile  DELETE PENDING
SQLiteTest.exe  UnlockFileSingleSUCCESS
SQLiteTest.exe  UnlockFileSingleSUCCESS
TSVNCache.exe   QueryNetworkOpenInformationFile SUCCESS
TSVNCache.exe   CloseFile   SUCCESS
TSVNCache.exe   QueryOpen   FAST IO DISALLOWED
TSVNCache.exe   CreateFile  NAME NOT FOUND
dw20.exeCreateFile  NAME NOT FOUND
SQLiteTest.exe  CloseFile   SUCCESS
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] nVARCHAR as unique index

2008-05-14 Thread Filip Navara
CREATE INDEX BrandKey2 ON Brand(BrandDescription);
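
A plain (non-unique) index should be enough for the sort; with it in place,
a query like this one can walk the index instead of sorting in memory:

SELECT PcNo, SubPcNo, BrandNo, BrandDescription
FROM Brand ORDER BY BrandDescription;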

On Wed, May 14, 2008 at 7:32 AM, Farzana <[EMAIL PROTECTED]> wrote:
>
>  Thanks for your reply Igor.
>
>  When we checked with the provided query we found that the duplicate values
>  are present in the BrandDescription. We are working with the device where
>  the memory is limited. We are suppose to sort the data by
>  BrandDescription.When we tried with "ORDER BY BrandDescription" in the query
>  to be executed, its taking more time than we expected.
>
>  Is there any work around to sort the data by BrandDescription physically?
>  Thanks in advance.
>
>  Regards,
>  Farzana.
>
>
>
>
>  Igor Tandetnik wrote:
>  >
>  > "Farzana" <[EMAIL PROTECTED]>
>  > wrote in message news:[EMAIL PROTECTED]
>  >> We are using SQLite and we have the table structure as CREATE TABLE
>  >> Brand(PcNo numeric(4) Not Null,SubPcNo numeric(4) Not Null,BrandNo
>  >> numeric(9) Not Null,BrandDescription nVARCHAR(254)Not Null,ST
>  >> numeric(1),TS numeric(14)) where the index is CREATE UNIQUE index
>  >> BrandKey1 on Brand(PcNo,SubPcNo,BrandNo). When we use this table in
>  >> the application we are suppose to sort the datas by
>  >> BrandDescription.So we tried to create a unique index as CREATE
>  >> UNIQUE INDEX BrandKey2 on Brand(BrandDescription).But we couldn't
>  >> able to create the unique index.
>  >
>  > Define "couldn't able". What exactly seems to be the problem? Did you
>  > get an error message? What did the message say?
>  >
>  > Is it possible that the values in BrandDescription column are not in
>  > fact unique? If they are not, then naturally you would get an error
>  > trying to create a unique index. Try this statement:
>  >
>  > select * from Brand
>  > group by BrandDescription
>  > having count(*) > 1;
>  >
>  > If this returns any rows, you have duplicates.
>  >
>  > Igor Tandetnik
>  >
>  >
>  >
>  > ___
>  > sqlite-users mailing list
>  > sqlite-users@sqlite.org
>  > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>  >
>  >
>
>  --
>  View this message in context: 
> http://www.nabble.com/nVARCHAR-as-unique-index-tp17206197p17223665.html
>
> Sent from the SQLite mailing list archive at Nabble.com.
>
>  ___
>
>
> sqlite-users mailing list
>  sqlite-users@sqlite.org
>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] delimiting text with embedded quotes

2008-05-08 Thread Filip Navara
You don't actually want to use double quotes for strings; they can
refer to column names. The correct way is to use single quotes (') and
escape any quote in the actual text by doubling it ('').
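
For your two examples that would be:

INSERT INTO tbl VALUES('This was George''s big day');
INSERT INTO tbl VALUES('The box is 3" wide');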

F.

On Thu, May 8, 2008 at 12:47 PM,  <[EMAIL PROTECTED]> wrote:
> I am converting text data from another database into SQLite. Some text
>  data has embedded apostrophes like this:
>
> This was George's big day
>
>  Other data has embedded double quotes like this:
>
> The box is 3" wide
>
>  I am generating INSERT INTO statements for thousands of records to be used
>  in a Windows script to move the data into SQLite. I am needing to use
>  different statement (with different delmiters) for each of the above
>  2 scenarios:
>
>  INSERT INTO tbl VALUES("This was George's big day");
>  INSERT INTO tbl VALUES('The box is 3" wide');
>
>  Double-quotes wrap the apostrophe in #1, apostrophes wrap the double-quote
>  in #2. But this means I need to parse the data content for every field in
>  order to generate the INSERTs. Is there a single set of delimiters that
>  would handle both scenarios without having to parse the data first?
>
>  Chris
>
>  
>  Christopher F. Martin
>  School of Medicine
>  Center for Digestive Diseases & Nutrition
>  CB# 7555, 4104 Bioinformatics Bldg.
>  University of North Carolina at Chapel Hill
>  Chapel Hill, North Carolina 27599-7555
>  Phone: 919.966.9340   Fax: 919.966.7592
>  ~~~
>
>
>  ___
>  sqlite-users mailing list
>  sqlite-users@sqlite.org
>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re[2]: [sqlite] Regarding aborting a query

2006-10-03 Thread Filip Navara
> At the present time, sqlite hadn't implemented syntax such as LIMIT 0,5. So
> you have to do it manually. It's fairly easy if you use prepared statements.
> The steps are as follows:

Actually it implements such syntax. See LIMIT and OFFSET at
http://www.sqlite.org/lang_select.html
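
For example:

SELECT * FROM records LIMIT 5 OFFSET 0;
SELECT * FROM records LIMIT 0,5;   -- MySQL-style shorthand, also accepted (offset first, then count)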

Filip


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Where to look in the code for this ?

2006-10-02 Thread Filip Navara
[snip]
>   if (sqlite3_prepare(db, "SELECT * FROM tbookings", -1, , 
> ) ==
> SQLITE_OK)
>  {
>  rc = SQLITE_ROW;
>  while (rc == SQLITE_ROW)
>{
>rc = sqlite3_step(rs);
>printf("id = %d, ", sqlite3_column_int(rs, 
> 0));
>}
>  }
<- what about calling sqlite3_reset(rs); here?
>   printf("About to finalize, do UPDATE now, then press Esc, 
> rc =
[snip]

- Filip


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Full-text searching

2006-10-01 Thread Filip Navara
Hi,

I was considering using the SQLite FTS1 engine for a MIME-enabled
mail application. We want to use a different tokenizer for each row of
inserted data (each row corresponds to a MIME part). For example, we want
to use a PDF tokenizer for getting the words out of application/x-pdf parts
and an HTML tokenizer for dissecting text/html documents. Is such a
feature planned to be implemented? (Currently the tokenizer is set
per virtual table, which is unusable for us.)

Alternatively, will there be any flag to not store content in the
fulltext engine tables?

Best regards,
Filip Navara
www.emclient.com



-
To unsubscribe, send email to [EMAIL PROTECTED]
-