[sqlite] Proposed new version numbering scheme for SQLite - Feedback requested

2015-10-08 Thread Bert Huijben
> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Simon Slavin
> Sent: donderdag 8 oktober 2015 16:36
> To: General Discussion of SQLite Database <sqlite-users at mailinglists.sqlite.org>
> Subject: Re: [sqlite] Proposed new version numbering scheme for SQLite -
> Feedback requested
> 
> 
> On 8 Oct 2015, at 2:38pm, Richard Hipp  wrote:
> 
> > If accepted, the new policy will cause the next release to be 3.9.0
> > instead of 3.8.12.  And the second number in the version will be
> > increased much more aggressively in future releases.
> 
> I approve of this particular release changing the Y value (i.e. being
> 3.9.0) since it allows SQLite to create and change databases to a format
> which can't be opened with previous versions.
> 
> "However, the current tarball naming conventions only reserve two digits
> for the Y and so the naming format for downloads will need to be revised
> in about 2030."
> 
> If we're still actively using SQLite3 for new projects in 2030 (i.e. we
> haven't moved on to SQLite4 or something else entirely), they'd better
> award the dev team a solid platinum prize of some sort.

+1

Bert Huijben

BTW: Completed the Subversion testsuite on the 3.8.12 version. No problems
found.



[sqlite] Windows 8.x security requirements / SafeSEHCheck - NXCheck - DBCheck

2015-04-03 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Keith Medcalf
> Sent: vrijdag 3 april 2015 01:51
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Windows 8.x security requirements / SafeSEHCheck -
> NXCheck - DBCheck
> 
> 
> BTW, I have verified that these options all work as described and the
> options are recognized and processed properly by Windows, and that
> BinScope is happy:
> 
> Failed checks
> d:\source\sqlite\sqlite3.dll - SafeSEHCheck ( FAIL )
> 
> Passed checks
> d:\source\sqlite\sqlite3.dll - NXCheck ( PASS )
This makes all allocated memory and the stack non-executable.

> d:\source\sqlite\sqlite3.dll - DBCheck ( PASS )

> (Note, position independent code (PIC) is by definition loadable at any
> base. Microsoft is just several decades behind in generating position
> independent code.)
And this makes it possible to load the DLL above 2 GB, which by default
isn't enabled because too much existing legacy code still casts pointers to
integers and back. It is another opt-in flag in the header, allowing
existing programs to keep working on bigger machines.


SafeSEH is a similar flag for Structured Exception Handling. Without this
flag the stack is unwound on exceptions by using debugger tricks and
information stored on the stack itself. When this flag is set the compiler
promises to provide all this information in a parsable format somewhere in
the binary. This improves performance and reliability on these exceptions.

But as far as I know Sqlite doesn't use structured exception handling at
all. Perhaps MinGW has a flag to simply declare that, if it can't generate
these tables itself.
(On x86_64 Windows these tables are always required, so this flag doesn't
apply there.)


Windows 8.1 and Windows 10 add another feature like this, where all
callback function pointers are stored in a central table so that a passed
callback can be verified to really point at the intended code. (As far as I
know this is currently only supported by internal and pre-release versions
of the MSVC compiler.)

Bert



[sqlite] Weird (slow) TEMP B-TREE ORDER BY

2015-03-14 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Dinu Marina
> Sent: donderdag 12 maart 2015 11:34
> To: sqlite-users at mailinglists.sqlite.org
> Subject: Re: [sqlite] Weird (slow) TEMP B-TREE ORDER BY
> 
> Hi Richard,
> 
> Like said, we've already tried:
> CREATE TEMP TABLE x AS (...query without outer sort...);
> CREATE INDEX xind ON x (price);
> SELECT * FROM x ORDER BY 2
> but it also takes 30 seconds; for some really esoteric reason, CREATE
> TABLE x AS ... takes all of those 30s, even though the execution plan
> looks identical to the simple select (the same 72 steps). This is what I
> would dare call "spooky", any explanation would be appreciated here too.

Did you try running analyze after adding some data and before looking at the
query plans?

With indexes that contain multiple columns, adding the sqlite_stat1 table
(via ANALYZE, or in another way) may make a huge difference in query plans
and performance.

We found that out the hard way in Subversion: most of our indexes have a
pretty much static first column, wc_id, in preparation for some new
features. Even when the right indexes showed up in query plans, Sqlite
sometimes didn't use all the columns that it should.

We now ship an initial sqlite_stat1 table, and we can rely on stable (good)
performance from Sqlite.
https://www.sqlite.org/optoverview.html#manctrl
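The "ship an initial sqlite_stat1 table" step can be sketched with Python's
sqlite3 module; the table, index name, and stat string below are made-up
stand-ins for illustration, not Subversion's real schema or measured numbers:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # a file path in real use
con.executescript("""
    CREATE TABLE nodes (
        wc_id INTEGER, local_relpath TEXT, op_depth INTEGER,
        PRIMARY KEY (wc_id, local_relpath, op_depth)
    );
""")

# First ANALYZE creates the sqlite_stat1 table.
con.execute("ANALYZE sqlite_master;")

# Seed the anticipated "shape": here ~8000 rows sharing one wc_id, so the
# first index column barely narrows a search while later columns do.
con.execute("INSERT INTO sqlite_stat1 VALUES "
            "('nodes', 'sqlite_autoindex_nodes_1', '8000 8000 2 1');")

# Second ANALYZE reloads sqlite_stat1 into the query planner without
# recomputing (and overwriting) the seeded statistics.
con.execute("ANALYZE sqlite_master;")
con.commit()

rows = con.execute("SELECT * FROM sqlite_stat1;").fetchall()
```

The stat string reads "total rows, then average rows matched per
index-column prefix", so '8000 8000 2 1' says the first column alone is
nearly useless while the full key is unique.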


Bert 



Re: [sqlite] group_concat query performance

2014-10-17 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Kraijenbrink - FixHet - Systeembeheer
> Sent: vrijdag 17 oktober 2014 16:46
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] group_concat query performance
> 
> Thanks Bert,
> 
> You are right. Now the C++ example runs equally fast. Looks like I have to
> redesign the Db schema.

Have you tested using a transaction over your combined queries?

Even for simple read operations this may give a huge performance difference.
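A sketch of the suggestion with a made-up table (real gains depend on the
journal mode and on the database living on disk rather than in memory):
many small reads grouped under one explicit transaction share a single read
transaction instead of starting and finishing one per statement.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # a file path in real use
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO t (val) VALUES (?)",
                [("row %d" % i,) for i in range(1000)])
con.commit()

# Each SELECT here implicitly starts and ends its own read transaction:
results = [con.execute("SELECT val FROM t WHERE id = ?", (i,)).fetchone()
           for i in range(1, 101)]

# The same reads inside one explicit transaction share a single read
# lock and one consistent snapshot of the database:
con.execute("BEGIN")
batched = [con.execute("SELECT val FROM t WHERE id = ?", (i,)).fetchone()
           for i in range(1, 101)]
con.commit()
```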


In our use case (where we use SQLite as the Subversion working copy
database) the performance problems are usually in the total number of
transactions for an operation, not in the queries themselves.

Bert
> 
> With regards,
> 
> Peter
> 
> >Where do you perform the query in the C++ code?
> >
> >Your C++ program shows how you prepare the statement 5000 times, but
> not how you execute it.
> >
> >The VB.Net code prepares the statement once, and then executes it 5000
> times.
> >
> >
> >It looks like you are testing completely different things.
> >
> > Bert
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] group_concat query performance

2014-10-17 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Kraijenbrink - FixHet - Systeembeheer
> Sent: vrijdag 17 oktober 2014 12:01
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] group_concat query performance
> 
> 
> Joe Mistachkin wrote:
> >
> >Thanks for the query.  It's difficult to track down performance issues
with
> System.Data.SQLite without seeing the C# (or VB.NET) example code as
> there are a variety of ways to query and process data using it.
> >
> >Is there any chance we could see the code that is using
> System.Data.SQLite?
> 
> Yes, sure. Here they are, the first one is C++, the second one VB.net .Net
> Framework 4

Where do you perform the query in the C++ code?

Your C++ program shows how you prepare the statement 5000 times, but not how
you execute it.

The VB.Net code prepares the statement once, and then executes it 5000
times.


It looks like you are testing completely different things.
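The pattern being contrasted (sqlite3_prepare_v2() once, then
sqlite3_reset() and sqlite3_step() per execution in the C API) can be
approximated in Python, whose sqlite3 module caches compiled statements
keyed by the exact SQL text. A sketch with a made-up table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO t (val) VALUES (?)",
                [("val-%d" % i,) for i in range(100)])
con.commit()

# Interpolating the value yields a different SQL string per id, so every
# iteration pays for parsing and compiling (the analog of calling
# sqlite3_prepare_v2() inside the loop):
interpolated = [con.execute("SELECT val FROM t WHERE id = %d" % i).fetchone()
                for i in range(1, 101)]

# One parameterized statement is compiled once and then only re-bound and
# re-executed, like reset/bind/step on a single prepared statement:
bound = [con.execute("SELECT val FROM t WHERE id = ?", (i,)).fetchone()
         for i in range(1, 101)]
```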

Bert

> 
> With regards,
> 
> Peter
> 
> > C++ example code: <
> 
> #include "stdafx.h"
> #include <iostream>
> #include <ctime>
> #include <string>
> #include "sqlite3.h"
> 
> int main()
> {
>   sqlite3 *db;
>   sqlite3_stmt *res;
>   time_t execStart, execStop;
> 
>   const char *errMSG;
>   const char *tail;
> 
>   std::cout << "Running GROUP_CONCAT function test..." << std::endl;
> 
>   time(&execStart);
> 
>   int error = sqlite3_open("test.db", &db);
> 
>   if (error)
>   {
>   std::cout << "Could not open DB" << std::endl;
>   sqlite3_close(db);
> 
>   system("Pause");
>   return 1;
>   }
> 
>   int cnt;
> 
>   for (cnt = 0; cnt < 5000; cnt++)
>   {
>   std::string query = "SELECT "
>   "GROUP_CONCAT(Parent.fkintFolderID,':') FilePath FROM "
>   "tblFolderNestedSets Node, tblFolderNestedSets Parent "
>   "WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight "
>   "AND Parent.fkintSessionID = Node.fkintSessionID "
>   "AND Node.fkintSessionID = 1824 AND Node.fkintFolderID = 2913318;";
> 
>   error = sqlite3_prepare_v2(db, query.c_str(), query.length(),
>   &res, &tail);
> 
>   if (error != SQLITE_OK)
>   {
> 
>   std::cout << "Could not prepare sql" << std::endl;
>   sqlite3_close(db);
> 
>   system("Pause");
>   return 1;
>   }
>   }
> 
>   sqlite3_finalize(res);
>   sqlite3_close(db);
> 
>   time(&execStop);
> 
>   double timeDiff = difftime(execStop, execStart);
> 
>   printf("Elapsed time is %.2lf seconds. ", timeDiff);
> 
>   system("Pause");
>   return 0;
> 
> }
> 
> > System.Data.SQLite example code:
> <<
> 
> Module Module1
> 
> Sub Main()
> 
> Dim _stopwatch As New Stopwatch()
> 
> Dim _dbConn As New System.Data.SQLite.SQLiteConnection()
> Dim _dbPath As String = "Data Source=test.db"
> 
> _stopwatch.Start()
> 
> Console.WriteLine("Running GROUP_CONCAT function test...")
> 
> _dbConn.ConnectionString = _dbPath
> _dbConn.Open()
> 
> Dim _selCmd As System.Data.SQLite.SQLiteCommand
> _selCmd = New System.Data.SQLite.SQLiteCommand(_dbConn)
> 
> _selCmd.CommandText = "SELECT " & _
>   "GROUP_CONCAT(Parent.fkintFolderID,':') FilePath FROM " & _
>   "tblFolderNestedSets Node, tblFolderNestedSets Parent " & _
>   "WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight " & _
>   "AND Parent.fkintSessionID = Node.fkintSessionID " & _
>   "AND Node.fkintSessionID = 1824 AND Node.fkintFolderID = 2913318;"
> 
> Dim _cnt As Integer
> Dim _result As String
> 
> For _cnt = 1 To 5000
> 
> _result = _selCmd.ExecuteScalar().ToString()
> Next
> 
> _dbConn.Close()
> 
> _stopwatch.Stop()
> 
> Console.WriteLine("Elapsed time is {0} seconds.", _stopwatch.Elapsed)
> Console.WriteLine("Press any key to continue...")
> Console.ReadKey()
> 
> End Sub
> 
> End Module
> 
> 
> 



Re: [sqlite] [sqlite-dev] SQLite version 3.8.6 coming soon

2014-08-06 Thread Bert Huijben

"Richard Hipp" wrote...

> Version 3.8.6 was originally scheduled for September.  But the change
> log (http://www.sqlite.org/draft/releaselog/current.html) is getting
> rather long and there are a number of important bug fixes.  So we might
> try to get 3.8.6 out the door sooner rather than later.
>
> Please test!  Recent amalgamations are available on the download page (
> http://www.sqlite.org/download.html).  Please compile your applications
> using the latest tip of trunk and let us know if you encounter any
> problems.
>
> Please review the change log and speak up if you see anything amiss.
>
> The status board (http://www.sqlite.org/checklists/3080600/index) for the
> 3.8.6 release is now on-line.  The usual rule applies:  When the 
> check-list
> goes all-green, we will cut the release.  No check-list items have been
> marked off yet, but we might start marking them off later this week, or
> early next week.
>
> Your feedback is greatly appreciated.

The new wrapping of InterlockedCompareExchange introduced in 3.8.6 breaks 
compilation in Visual C++ 2013 (and the "14" CTPs) for X86 and X64, as this 
function is implemented as an intrinsic in these compilers.

When linking the amalgamation to a program I get a linker error:
sqlite3.lib(sqlite3.obj) : error LNK2001: unresolved external symbol 
__InterlockedCompareExchange

In the newer SDK versions used by this compiler there is a:
#define InterlockedCompareExchange  _InterlockedCompareExchange
(via winbase.h)

Which triggers the use of the intrinsic. (Older versions of the compiler appear 
to have the intrinsic, but by default their SDKs don't use it)


As I assume this is required for the testing framework, an easy fix might be to 
just undefine 'InterlockedCompareExchange'.
(I haven't tested this myself yet)

Bert



[sqlite] Unexpected temporary B-TREE in 3.8.5-201405271818

2014-06-01 Thread Bert Huijben
Hi,

I just tested Sqlite 3.8.5 on Subversion's schema and found that one of our
queries started using a temporary B-tree while executing, which it didn't
do with any older Sqlite version. I wouldn't expect a temporary table to be
used, as the data is already properly ordered when the primary key index is
used.

The query in its readable form is:
[[
INSERT INTO delete_list(local_relpath)
SELECT ?2
UNION ALL
SELECT local_relpath FROM nodes AS n
WHERE wc_id = ?1
  AND IS_STRICT_DESCENDANT_OF(local_relpath, ?2)
  AND op_depth >= ?3
  AND op_depth = (SELECT MAX(s.op_depth) FROM nodes AS s
  WHERE s.wc_id = ?1
AND s.local_relpath = n.local_relpath)
  AND presence NOT IN (MAP_BASE_DELETED, MAP_NOT_PRESENT, MAP_EXCLUDED,
MAP_SERVER_EXCLUDED)
  AND file_external IS NULL
ORDER by local_relpath
]]

The actual query passed to sqlite is:
 [[
INSERT INTO delete_list(local_relpath) SELECT ?2 UNION ALL SELECT
local_relpath FROM nodes AS n WHERE wc_id = ?1   AND (((local_relpath) >
(CASE (?2) WHEN '' THEN '' ELSE (?2) || '/' END)) AND ((local_relpath) <
CASE (?2) WHEN '' THEN X'' ELSE (?2) || '0' END))   AND op_depth >= ?3
AND op_depth = (SELECT MAX(s.op_depth) FROM nodes AS s
WHERE s.wc_id = ?1 AND s.local_relpath =
n.local_relpath)   AND presence NOT IN ('base-deleted', 'not-present',
'excluded', 'server-excluded')   AND file_external IS NULL ORDER by
local_relpath
]]
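Whether a statement falls back to a temporary B-tree for its ORDER BY can
be read off the EXPLAIN QUERY PLAN output; a minimal sketch on a toy table
(not the schema above) showing how an index on the sort column removes the
sort step:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")

def plan(sql):
    # Join the "detail" column of the EXPLAIN QUERY PLAN rows.
    rows = con.execute("EXPLAIN QUERY PLAN " + sql)
    return " | ".join(row[-1] for row in rows)

# No index on b: SQLite must sort the result with a temporary B-tree.
before = plan("SELECT * FROM t ORDER BY b")

con.execute("CREATE INDEX t_b ON t (b)")

# The index already delivers rows in b-order, so the sort disappears.
after = plan("SELECT * FROM t ORDER BY b")
```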

The query plan in 3.8.5-201405271818 is:
[[
USE TEMP B-TREE FOR ORDER BY
SEARCH TABLE nodes AS n USING INDEX sqlite_autoindex_NODES_1 (wc_id=? AND
local_relpath>? AND local_relpath<?)
]]


Re: [sqlite] Load time performance

2014-05-28 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Stephen Chrzanowski
> Sent: woensdag 28 mei 2014 17:33
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Load time performance
> 
> Caching or not, I don't experience the same thing with opening a read only
> text file in Notepad.  For S'n'Gs, I had a random text file sitting on my
> desktop (A config file for a game, and it is pure text) and decided to use
> it as a subject of abuse.  Its original attribute is set so read only is
> OFF.  I opened it up twice in PSPad just to try and eliminate any kind of
> cache chance.  The second load was instant as the PSPad software was
> actually running the second time I ran.  I closed the file, set the R/O
> attribute, and re-opened.  Again, instant load.

This sounds like it is hitting a retry loop. The Sqlite Windows VFS
implements a few of those to avoid issues with virus scanners, but it
shouldn't hit them in such a case... It would be hard to tell what caused
the access-denied error that triggered the retry.


Bert 



[sqlite] Performance tuning: winDelete() tweak

2014-02-24 Thread Bert Huijben
Hi,

Earlier today I tried to show how to use a profiler to somebody while
running a checkout on a very recent Subversion build on Windows. This build
contained Sqlite 3.8.3.1.

We happened to see that one of the functions that caught about 15% of the
profile samples was the function 'winDelete()' in the Sqlite amalgamation.

That by itself is not that surprising as we know delete is quite slow on
Windows, but what was really surprising was that about 6% of that 15% was
caused by the GetFileAttributes() call that Sqlite performs right before
calling DeleteFileW().


If I look at the relevant source code of winDelete()
[[
  attr = osGetFileAttributesW(zConverted);
#endif
  if ( attr==INVALID_FILE_ATTRIBUTES ){
lastErrno = osGetLastError();
if( lastErrno==ERROR_FILE_NOT_FOUND
 || lastErrno==ERROR_PATH_NOT_FOUND ){
  rc = SQLITE_IOERR_DELETE_NOENT; /* Already gone? */
}else{
  rc = SQLITE_ERROR;
}
break;
  }
  if ( attr&FILE_ATTRIBUTE_DIRECTORY ){
rc = SQLITE_ERROR; /* Files only. */
break;
  }
  if ( osDeleteFileW(zConverted) ){
rc = SQLITE_OK; /* Deleted OK. */
break;
  }
]]

it appears that Sqlite only calls GetFileAttributes() to improve the error
handling, especially for the case of accidentally trying to remove a
directory.


Is it possible to improve this code by calling osDeleteFileW() first, and
only falling back to osGetFileAttributes() to decide what to do next
(error out or retry) if that call fails?


The DeleteFileW() function will just fail when trying to delete a directory
(with a detailed error in GetLastError()), so I don't think this should
introduce a behavior change... but a tweak like this might improve
performance in this very common code path.
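The proposed reordering is the usual "attempt the operation first, consult
the filesystem only on failure" pattern. A cross-platform Python analogy
(not the Win32 code itself; the return labels are hypothetical):

```python
import os
import tempfile

def delete_file(path):
    """Try the delete first; only pay for metadata lookups on failure."""
    try:
        os.remove(path)
        return "ok"
    except FileNotFoundError:
        return "noent"          # winDelete's SQLITE_IOERR_DELETE_NOENT case
    except OSError:
        # Only now do the extra lookup, e.g. to distinguish "tried to
        # delete a directory" from other errors.
        if os.path.isdir(path):
            return "is-a-directory"
        return "error"

# Hypothetical demo in a scratch directory:
scratch = tempfile.mkdtemp()
target = os.path.join(scratch, "victim.txt")
open(target, "w").close()
outcomes = [delete_file(target),   # deletes the file
            delete_file(target),   # already gone
            delete_file(scratch)]  # refuses: it's a directory
```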

Thanks,
Bert



[sqlite] Simple SELECT misbehaving in Sqlite 3.8.3 with ENABLE_STAT3

2014-02-10 Thread Bert Huijben

Hi,

As part of the Subversion 1.8.6 release we tried introducing some data in
the 'sqlite_stat1' table, using the recommended approach for Sqlite 3.8.0+
compatibility, to tell sqlite about our 'bad indexes':
[[
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES 
INSERT INTO "sqlite_stat1" VALUES 
...
ANALYZE sqlite_master;
]]
(this was discussed on this list a few months ago and worked fine in all our
internal testing)

During release testing we found that some distributions decided to enable
SQLITE_ENABLE_STAT3, and with this flag at least one of our queries doesn't
work the way we expect it with Sqlite 3.8.3 after that second ANALYZE call.
(The distribution: OS/X 'Homebrew'
https://github.com/Homebrew/homebrew/commit/c9eca803d676961ead136b07ab145cc1
e826b314 )

Trimmed testcase on
http://b.qqn.nl/f/201402-sqlite-stat3-no-row.sql

Original/full testcase on
http://b.qqn.nl/f/201402-sqlite-stat3-no-row-FULL.sql


The simplified query
[[
SELECT local_relpath, moved_to, op_depth, 1
 FROM nodes n
WHERE wc_id = 1
  AND (local_relpath = 'A/B' OR ((local_relpath > 'A/B/') AND (local_relpath
< 'A/B0')))
  AND moved_to IS NOT NULL
  AND op_depth >= 0;
]]

Returns 1 row in the sqlite versions as we normally compile it, but when
3.8.3 has SQLITE_ENABLE_STAT3 enabled it doesn't return any rows.

In my opinion the missing heuristics in the statistics table shouldn't make
the query return invalid data. (It could make it faster, slower, ...).


I'm guessing that this is a bug that needs some fix.

But for our usage of Sqlite in Subversion it is probably better to block
usage of an Sqlite build that has STAT2 or STAT3 (or higher) enabled.

What is the recommended approach for detecting this scenario?
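One run-time way to detect such a build (a sketch; whether to refuse it is
a policy decision) is SQLite's compile-option introspection: PRAGMA
compile_options in SQL, or sqlite3_compileoption_used() in the C API. Note
that STAT3 was later superseded by STAT4 in SQLite itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
options = {row[0] for row in con.execute("PRAGMA compile_options;")}

# Check for any of the extended-statistics options mentioned above.
uses_extended_stats = bool(options & {"ENABLE_STAT2", "ENABLE_STAT3",
                                      "ENABLE_STAT4"})
```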

Bert

--
The schema of the database and the testcase are part of Subversion and
Apache 2 licensed, so feel free to use any part for future testing.






Re: [sqlite] SQLite and Virtual PC

2013-09-08 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Gintaras Didzgalvis
> Sent: zondag 8 september 2013 06:32
> To: sqlite-users@sqlite.org
> Subject: [sqlite] SQLite and Virtual PC
> 
> Hi,
> 
> SQLite running on Microsoft Virtual PC 2007 cannot read and write
> databases that are on host PC.
> 
> Details:
> sqlite3_open_v2(SQLITE_OPEN_READWRITE) succeeds, but sqlite3_exec
> returns SQLITE_BUSY, even with SELECT.
> Using the official downloaded sqlite3.dll, version 3.8.0.
> OS on guest PC, where SQLite is used: Windows XP, SP3.
> OS on host PC: Windows 7, 32-bit.
> Accessing the database file using the "shared folders" feature of
> Virtual PC.
> Windows API functions can read and write the file.
> No problems on Vmware player (accessing the file through its virtual
> network).

The Virtual PC shared folders feature implements only a very tiny subset of
the filesystem APIs, and I don't think Microsoft will extend it to support
new features.

I would recommend using a shared drive over the virtual network from
Virtual PC too, as that should work just like file shares between machines
(and is a scenario well tested by both Microsoft and many Sqlite-based
application users).

Bert
> 
> --
> --
> 
> Gintaras
> http://www.quickmacros.com
> __
> 



Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-03 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Marc L. Allen
> Sent: dinsdag 3 september 2013 15:55
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> Am I understanding that, in this example, the I_NODES_PARENT is being
> chosen as the search index because... it's smaller and therefore faster
> to find initial qualifying rows that you can then use in some sort of
> ordered lookup in another index/table?
> 
> I'm always in awe of some of the plans a good QA comes up with, and it
> blows me away that there are cases when using a less-covering index
> would be better than a more-covering index.

It also happens to be the first index encountered that has wc_id as its
first component.

I'm not sure which part (being first vs being smaller) drives the
decision, but this is exactly why I expected this to be a simple corner
case bug rather than part of the new query optimizer design.

In Subversion we have queries that apply only to a single directory level
(where I_NODES_PARENT is used) vs queries that apply to an entire tree (in
most cases handled via the primary key index). Using the right index is
critical for end-user performance.


The I_NODES_MOVED index is only used when trees are moved, which is an
uncommon operation, but obtaining information about moves is performance
critical in certain scenarios. I hope we will start using the partial
index support for this with the next release. That should also immediately
rule out this index for these optimizations.

Bert 



Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-03 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: dinsdag 3 september 2013 02:12
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Mon, Sep 2, 2013 at 10:58 AM, Bert Huijben <rhuij...@apache.org>
> wrote:
> 
> > We anticipate that the wc_id column will be used more in future
> > versions and I had hoped that a better index match (matching more
> > usable columns) would always be preferable over one that uses fewer
> > columns.
> >
> > All indexes have the wc_id column first, so I don't see a good reason
> > that Sqlite in this case prefers only this column of the specific
> > (bad) index over any other index that can help with the rest of the
> > query.
> >
> > This optimization added in 3.7.12 made it easier for us to write
> > clear, well-performing queries, without any hassle. We can't really
> > assume our users run a specific version of Sqlite (as most unix
> > distributions compile sqlite themselves for products like ours)...
> >
> 
> As I've alluded to before, the issue here boils down to the "shape" of the
> content of the database file.  By "shape", I mean the distribution of
> values in each table, and in particular how well each index is able to
> narrow down a search.  Knowledge of the shape of the data is critically
> important in choosing the best query plan when there are two or more
> potentially good query plans.  It isn't possible to make good decisions
> about query plans without knowing the shape of the data.

I understand.
A long, long time ago I implemented similar indexing myself for a library
system, where I couldn't even index the whole keys because of the storage
limitations at the time. I'm glad we don't have those 1980s limits any
more.


In Subversion our databases aren't well formed by relational standards,
but everything worked reasonably well with Sqlite 3.7 before.
I then tuned all queries for Sqlite 3.7.12-3.7.18, where I could get all of
our queries to use the right indexes without much trouble.

> When you run ANALYZE, SQLite records information in the sqlite_stat1
> table that gives it a good picture of the shape of the data.  If you
> have not run ANALYZE or if you have dropped or deleted the sqlite_stat1
> table, then SQLite has no information about the true shape of the data,
> so it has to guess.  The guess that SQLite has made for the past decade
> is that the left-most column of each index does a pretty good job of
> narrowing down the search and then subsequent columns merely refine
> that search slightly.  This is just a wild guess, of course.  But it
> seems to have worked remarkably well in a wide variety of databases in
> literally millions of applications over the past ten years.

I don't like that I have to do this, but it appears to be the most
maintainable option for the long term.

> The issue at hand is that the guess that SQLite is making on the shape
> of unanalyzed databases is not working very well for SVN.  The problem
> is that the left-most column of many of the indices does a very poor
> job of narrowing down the search, in violation of the assumption made
> for the standard guess.
> 
> As I've pointed out, one could just run ANALYZE to populate the
> sqlite_stat1 table, then SQLite will know the database shape and will
> pick good plans.  Or one can manually populate the sqlite_stat1 table
> with something close to the anticipated shape of the database.  But
> Bert says he doesn't want to do any of that.
> 
> So I've come up with a new, experimental compile-time option:
> 
> -DSQLITE_DEFAULT_INDEX_SHAPE=1
> 
> Rebuilding with the compile-time option above, using code from the
> index-shape-1 branch (http://www.sqlite.org/src/timeline?r=index-shape-1)
> causes SQLite to make a much more pessimistic guess about the
> effectiveness of unanalyzed indices.  In particular, using the new
> default shape, SQLite assumes that you need to use all columns of an
> index in order to narrow the search significantly.  The number of rows
> that need to be searched increases geometrically based on the number of
> unused columns on the right-hand side of the index.

I can confirm that with this patch all our queries get a query plan at
least as efficient as with Sqlite 3.7.12-3.7.18.

I will spend some time over the next few days to see if I can get more
details on the difference over the different versions by using a standard
install, a _stat1 table and this patch by extending our test infrastructure.

> Leave the SQLITE_DEFAULT_I

Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-02 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: maandag 2 september 2013 16:11
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Mon, Sep 2, 2013 at 9:42 AM, Bert Huijben <rhuij...@apache.org>
> wrote:
> 
> >
> > My question is more like: Should I see this as a light regression that
> > most likely will be resolved in a future version, or as a buggy
> > assumption in our code that I should work around? It should be
> > possible via explicit 'INDEXED BY' clauses and/or breaking the query
> > into separate parts. (Or for future versions by pre-filling the stat
> > table)
> >
> 
> I think rules #3 and #4 (especially #4) of
> http://www.sqlite.org/queryplanner-ng.html#howtofix are applicable here.
> 
> Without the results of ANALYZE, SQLite assumes that the left-most column
> of an index will narrow down a search to about 10 rows.  That is clearly
> not the case with your schema (where the left-most column of several
> indices, wc_id, is always the same value).  Hence, SQLite really needs
> the results of ANALYZE to work efficiently.
> 
> But that does not mean you have to run ANALYZE on each installation.
> Instead, you can install a pre-computed sqlite_stat1 table (the larger
> of the two you emailed above would suffice) whenever you create a new
> database.  To do this, just run:
> 
>  ANALYZE sqlite_master;   -- Create the sqlite_stat1 table
>  INSERT INTO sqlite_stat1 VALUES(...);  -- repeat as necessary to
>  fill in the table.
>  ANALYZE sqlite_master;   -- Load the sqlite_stat1 table into the
>  optimizer.
> 
> The above steps only need to be done once, when the database is first
> created, and can be part of the same script that creates all the other
> tables, indices, triggers, and views in your database.  You should never
> need to run ANALYZE again (assuming the "shape" of your data is always
> roughly the same).  The sqlite_stat1 table created here gives SQLite all
> the information it needs to be able to figure out the best way to handle
> queries in your peculiar usage pattern.

Thanks.

We anticipate that the wc_id column will be used more in future versions, and
I had hoped that a better index match (one matching more usable columns) would
always be preferred over one that matches fewer columns.

All indexes have the wc_id column first, so I don't see a good reason why
SQLite in this case prefers only that column of the specific (bad) index over
any other index that can help with the rest of the query.

This optimization, added in 3.7.12, made it easier for us to write clear,
well-performing queries without any hassle. We can't really expect our users
to run a specific version of SQLite (as most Unix distributions compile SQLite
themselves for products like ours)...



Most Subversion developers are not database experts, so adding this
initialization will add more code that has to be explained over and over again
to developers who don't understand how a query optimizer (or a B-tree) works.

Thanks for the 'call ANALYZE twice' trick. That avoids having to declare the
table ourselves.
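For reference, the double-ANALYZE sequence described above can be exercised from any SQLite binding. Below is a Python sketch, under the assumption of a trimmed-down NODES table (the full schema appears elsewhere in this thread) and using the statistics values from the larger working copy posted in this thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Trimmed-down version of the Subversion NODES table, with its two
# covering indexes (the implicit PRIMARY KEY autoindex and I_NODES_PARENT)
con.executescript("""
CREATE TABLE nodes(
  wc_id          INTEGER NOT NULL,
  local_relpath  TEXT NOT NULL,
  op_depth       INTEGER NOT NULL,
  parent_relpath TEXT,
  PRIMARY KEY (wc_id, local_relpath, op_depth)
);
CREATE INDEX i_nodes_parent ON nodes(wc_id, parent_relpath, local_relpath, op_depth);
""")

con.execute("ANALYZE sqlite_master")  # creates an (empty) sqlite_stat1 table
# Pre-computed statistics, taken from the larger working copy in this thread
con.executemany("INSERT INTO sqlite_stat1 VALUES(?,?,?)", [
    ("nodes", "i_nodes_parent", "18420 18420 9 1 1"),
    ("nodes", "sqlite_autoindex_nodes_1", "18420 18420 1 1"),
])
con.execute("ANALYZE sqlite_master")  # loads sqlite_stat1 into the optimizer

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT local_relpath FROM nodes"
    " WHERE wc_id = ? AND local_relpath = ?", (1, "A/B")).fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)
```

With the hand-written statistics loaded, the planner sees that wc_id alone narrows nothing down and chooses the two-column primary-key index.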

Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-02 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: vrijdag 30 augustus 2013 21:41
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Fri, Aug 30, 2013 at 3:31 PM, Bert Huijben <rhuij...@apache.org> wrote:
> 
> > The analyze on the very small database (which I used for the comparison
> > between 3.7 and 3.8) is:
> >
> 
> Thanks for the data.

I can share the/an actual database if that would help. (The wc.db describes
a checkout of a 100% public subversion repository, so there is nothing
secret in it)

For myself it would be interesting to know how I should look at the OR
optimization, which was available for our use cases between SQLite 3.7.12 and
3.7.18.

My question is more like: Should I see this as a light regression that most
likely will be resolved in a future version, or as a buggy assumption in our
code that I should work around? It should be possible via explicit 'INDEXED
BY' clauses and/or breaking the query in separate parts. (Or for future
versions by pre-filling the stat table)

Bert



[sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-08-31 Thread Bert Huijben
Hi,

 

Using the Subversion 1.7 / 1.8 wc.db schema I get quite different results
from the query planner for several of our queries, where the difference in
performance is quite huge.

 

For this typical example using 3.8.0.1 only one index component is used:

$ EXPLAIN QUERY PLAN DELETE FROM NODES WHERE wc_id = ?1 AND local_relpath =
?2;

0|0|0|SEARCH TABLE NODES USING COVERING INDEX I_NODES_PARENT (wc_id=?)

 

While with 3.7.17 two components are used:

$ EXPLAIN QUERY PLAN DELETE FROM NODES WHERE wc_id = ?1 AND local_relpath =
?2;

0|0|0|SEARCH TABLE NODES USING COVERING INDEX sqlite_autoindex_NODES_1
(wc_id=? AND local_relpath=?) (~9 rows)

 

As currently wc_id is almost always 0 in our database, 3.8.0 will perform a
full table scan while 3.7.17 (and older) just deletes the right record.

 

If I perform a similar 'SELECT * FROM' instead of 'DELETE FROM', the right
index is used (but not as COVERING of course), while a 'SELECT local_relpath
FROM' shows the same problem as the delete.

 

 

This problem appears specific to cases where multiple covering indexes are
found.
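This regression was specific to the 3.8.0 planner; later releases corrected the cost estimates, so on current SQLite builds the two-column match should win for both statements. A hypothetical, trimmed-down reproduction to observe the chosen plans:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Reduced NODES table keeping just the two covering indexes from the report
con.executescript("""
CREATE TABLE nodes(
  wc_id          INTEGER NOT NULL,
  local_relpath  TEXT NOT NULL,
  op_depth       INTEGER NOT NULL,
  parent_relpath TEXT,
  PRIMARY KEY (wc_id, local_relpath, op_depth)   -- sqlite_autoindex_nodes_1
);
CREATE INDEX i_nodes_parent ON nodes(wc_id, parent_relpath, local_relpath, op_depth);
""")

plans = {}
for label, stmt in [
    ("delete", "DELETE FROM nodes WHERE wc_id = ? AND local_relpath = ?"),
    ("select", "SELECT local_relpath FROM nodes WHERE wc_id = ? AND local_relpath = ?"),
]:
    rows = con.execute("EXPLAIN QUERY PLAN " + stmt, (1, "A/B")).fetchall()
    plans[label] = " ".join(row[-1] for row in rows)  # last column is the plan text
    print(label, "->", plans[label])
```

On a fixed version both plans name the primary-key index with both equality constraints, matching the 3.7.17 behavior quoted above.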

 

Bert

 

 

-- 

Complete schema:

[[

CREATE TABLE REPOSITORY (
  id    INTEGER PRIMARY KEY AUTOINCREMENT,
  root  TEXT UNIQUE NOT NULL,
  uuid  TEXT NOT NULL
);

CREATE INDEX I_UUID ON REPOSITORY (uuid);

CREATE INDEX I_ROOT ON REPOSITORY (root);

CREATE TABLE WCROOT (
  id             INTEGER PRIMARY KEY AUTOINCREMENT,
  local_abspath  TEXT UNIQUE
);

CREATE UNIQUE INDEX I_LOCAL_ABSPATH ON WCROOT (local_abspath);

CREATE TABLE PRISTINE (
  checksum      TEXT NOT NULL PRIMARY KEY,
  compression   INTEGER,
  size          INTEGER NOT NULL,
  refcount      INTEGER NOT NULL,
  md5_checksum  TEXT NOT NULL
);

CREATE TABLE ACTUAL_NODE (
  wc_id               INTEGER NOT NULL REFERENCES WCROOT (id),
  local_relpath       TEXT NOT NULL,
  parent_relpath      TEXT,
  properties          BLOB,
  conflict_old        TEXT,
  conflict_new        TEXT,
  conflict_working    TEXT,
  prop_reject         TEXT,
  changelist          TEXT,
  text_mod            TEXT,
  tree_conflict_data  TEXT,
  conflict_data       BLOB,
  older_checksum      TEXT REFERENCES PRISTINE (checksum),
  left_checksum       TEXT REFERENCES PRISTINE (checksum),
  right_checksum      TEXT REFERENCES PRISTINE (checksum),
  PRIMARY KEY (wc_id, local_relpath)
);

CREATE TABLE LOCK (
  repos_id       INTEGER NOT NULL REFERENCES REPOSITORY (id),
  repos_relpath  TEXT NOT NULL,
  lock_token     TEXT NOT NULL,
  lock_owner     TEXT,
  lock_comment   TEXT,
  lock_date      INTEGER,
  PRIMARY KEY (repos_id, repos_relpath)
);

CREATE TABLE WORK_QUEUE (
  id    INTEGER PRIMARY KEY AUTOINCREMENT,
  work  BLOB NOT NULL
);

CREATE TABLE WC_LOCK (
  wc_id              INTEGER NOT NULL REFERENCES WCROOT (id),
  local_dir_relpath  TEXT NOT NULL,
  locked_levels      INTEGER NOT NULL DEFAULT -1,
  PRIMARY KEY (wc_id, local_dir_relpath)
);

CREATE TABLE NODES (
  wc_id             INTEGER NOT NULL REFERENCES WCROOT (id),
  local_relpath     TEXT NOT NULL,
  op_depth          INTEGER NOT NULL,
  parent_relpath    TEXT,
  repos_id          INTEGER REFERENCES REPOSITORY (id),
  repos_path        TEXT,
  revision          INTEGER,
  presence          TEXT NOT NULL,
  moved_here        INTEGER,
  moved_to          TEXT,
  kind              TEXT NOT NULL,
  properties        BLOB,
  depth             TEXT,
  checksum          TEXT REFERENCES PRISTINE (checksum),
  symlink_target    TEXT,
  changed_revision  INTEGER,
  changed_date      INTEGER,
  changed_author    TEXT,
  translated_size   INTEGER,
  last_mod_time     INTEGER,
  dav_cache         BLOB,
  file_external     TEXT,
  inherited_props   BLOB,
  PRIMARY KEY (wc_id, local_relpath, op_depth)
);

CREATE VIEW NODES_CURRENT AS
  SELECT * FROM nodes AS n
  WHERE op_depth = (SELECT MAX(op_depth) FROM nodes AS n2
                    WHERE n2.wc_id = n.wc_id
                      AND n2.local_relpath = n.local_relpath);

CREATE VIEW NODES_BASE AS
  SELECT * FROM nodes WHERE op_depth = 0;

CREATE TRIGGER nodes_insert_trigger AFTER INSERT ON nodes
WHEN NEW.checksum IS NOT NULL
BEGIN
  UPDATE pristine SET refcount = refcount + 1
  WHERE checksum = NEW.checksum;
END;

CREATE TRIGGER nodes_delete_trigger AFTER DELETE ON nodes
WHEN OLD.checksum IS NOT NULL
BEGIN
  UPDATE pristine SET refcount = refcount - 1
  WHERE checksum = OLD.checksum;
END;

CREATE TRIGGER nodes_update_checksum_trigger AFTER UPDATE OF checksum ON nodes
WHEN NEW.checksum IS NOT OLD.checksum
BEGIN
  UPDATE pristine SET refcount = refcount + 1
  WHERE checksum = NEW.checksum;
  UPDATE pristine SET refcount = refcount - 1
  WHERE checksum = OLD.checksum;
END;

CREATE TABLE EXTERNALS (
  wc_id                     INTEGER NOT NULL REFERENCES WCROOT (id),
  local_relpath             TEXT NOT NULL,
  parent_relpath            TEXT NOT NULL,
  repos_id                  INTEGER NOT NULL REFERENCES REPOSITORY (id),
  presence                  TEXT NOT NULL,
  kind                      TEXT NOT NULL,
  def_local_relpath         TEXT NOT NULL,
  def_repos_relpath         TEXT NOT NULL,
  def_operational_revision  TEXT,
  def_revision              TEXT,
  PRIMARY KEY (wc_id, local_relpath)
);

CREATE UNIQUE INDEX I_EXTERNALS_DEFINED ON EXTERNALS (wc_id,
def_local_relpath,
local_relpath);

CREATE UNIQUE INDEX I_NODES_MOVED ON NODES (wc_id, 

Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-08-30 Thread Bert Huijben
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: vrijdag 30 augustus 2013 21:16
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Fri, Aug 30, 2013 at 2:44 PM,  wrote:
> 
> >
> > The query
> > [[
> > DELETE FROM nodes WHERE wc_id = ?1   AND (local_relpath = ?2 OR
> > (((local_relpath) > (CASE (?2) WHEN '' THEN '' ELSE (?2) || '/' END))
AND
> > ((local_relpath) < CASE (?2) WHEN '' THEN X'' ELSE (?2) || '0'
END)))
> > AND op_depth = ?3
> > ]]
> >
> 
> 
> > Is handled by 3.7.17 as:
> > 0|0|0|SEARCH TABLE nodes USING COVERING INDEX
> sqlite_autoindex_NODES_1
> > (wc_id=? AND local_relpath=? AND op_depth=?) (~1 rows)
> > 0|0|0|SEARCH TABLE nodes USING COVERING INDEX
> sqlite_autoindex_NODES_1
> > (wc_id=? AND local_relpath>? AND local_relpath >
> > Which I read as two separate operations, under the 'OR' optimization
> >
> > But 3.8.0.1 does:
> > 0|0|0|SEARCH TABLE nodes USING COVERING INDEX I_NODES_PARENT
> (wc_id=?)
> >
> > Which in our case is far worse than using the primary key on the normal
> > table as wc_id is constant and local_relpath +- our primary key.
> >
> > But the query planner has no way of knowing that wc_id is always the
> same value, unless you run ANALYZE.  Can you do that, please:  Run
> ANALYZE,
> then send in the content of the resulting "sqlite_stat1" table.

The analyze on the very small database (which I used for the comparison
between 3.7 and 3.8) is:
[[
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_ROOT','1 1');
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_UUID','1 1');
INSERT INTO "sqlite_stat1"
VALUES('REPOSITORY','sqlite_autoindex_REPOSITORY_1','1 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','I_LOCAL_ABSPATH','1 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','sqlite_autoindex_WCROOT_1','1
1');
INSERT INTO "sqlite_stat1" VALUES('PRISTINE','I_PRISTINE_MD5','45 1');
INSERT INTO "sqlite_stat1"
VALUES('PRISTINE','sqlite_autoindex_PRISTINE_1','45 1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_PARENT','7 7 4 1 1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_MOVED','7 7 7 7');
INSERT INTO "sqlite_stat1" VALUES('NODES','sqlite_autoindex_NODES_1','7 7 1
1');
COMMIT;
]]

A larger Subversion working copy gets
[[
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_ROOT','2 1');
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_UUID','2 2');
INSERT INTO "sqlite_stat1"
VALUES('REPOSITORY','sqlite_autoindex_REPOSITORY_1','2 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','I_LOCAL_ABSPATH','1 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','sqlite_autoindex_WCROOT_1','1
1');
INSERT INTO "sqlite_stat1" VALUES('PRISTINE','I_PRISTINE_MD5','9451 1');
INSERT INTO "sqlite_stat1"
VALUES('PRISTINE','sqlite_autoindex_PRISTINE_1','9451 1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_PARENT','18420 18420 9 1
1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_MOVED','18420 18420 18420
18420');
INSERT INTO "sqlite_stat1" VALUES('NODES','sqlite_autoindex_NODES_1','18420
18420 1 1');
COMMIT;
]]

But in general we don't run ANALYZE in the working copies. We would most
likely have to insert some initial statistics for our use cases, as our users
always start from an empty database and there is no good moment at which we
can run ANALYZE.

As I guessed after your question, 3.8.0 optimizes this correctly after
ANALYZE has been run; 3.7.17 does so even without it.

Bert



Re: [sqlite] Windows-Specific 2-c-files Amalgamation?

2013-03-12 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Dominique Devienne
> Sent: maandag 11 maart 2013 17:17
> To: General Discussion of SQLite Database
> Subject: [sqlite] Windows-Specific 2-c-files Amalgamation?
> 
> I understand the "deployment" ease and performance advantage of the
> amalgamation. I like it.
> 
> Except I can't debug it... 'Cause I'm primarily on Windows, which has a
> well-known limitation in the way it stores Debug Information (uses 16-bit
> integer for the line number, so limited to ~64K lines, whereas the
> amalgamation is much larger).
> 
> Could there be another amalgamation, perhaps Windows specific, that
> spreads
> the sole .c file into 2 or 3?
> 
> I'm sure it's work, and perhaps even requires another "private" header to
> share non-public data structures between the two or more C files, but
given
> SQLite's clean architecture and layers (Pager, VDBE, etc...), I'm assuming
> it's at least "possible", no? Might even not lose much of that little
> performance advantage the amalgamation brings???
> 
> I for one would be able to dive deeper into the code, without resorting to
> the not-recommended full build (which is more Linux than Windows friendly)
> 
> I don't have much hope for this (any more than MS fixing its
> tool-chain...), but I thought I might as well ask ;). Perhaps someone
knows
> a trick or two to work-around that MS debugging issue?

Since Visual C++ 2005 the line-number limit in PDB files was increased to 24
bits. You might still get a warning for compatibility with older tools, but
the 'well-known limitation' was resolved over 8 years ago, 4 major compiler
versions ago.

Bert 



Re: [sqlite] [sqlite-dev] SQLite 3.7.16 beta

2013-03-06 Thread Bert Huijben
Hi,

 

I compile sqlite with SQLITE_OMIT_UTF16 (and a few other omit flags) on
Windows and when testing with 3.7.16 I get:

 

sqlite3.lib(sqlite3.obj) : error LNK2019: unresolved external symbol
_sqlite3_result_text16le referenced in function _charFunc 

fatal error LNK1120: 1 unresolved externals

 

By just looking at the error message this appears to be related to the
introduction of the new char() function.
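For context, the char() function added in 3.7.16 builds a text value from a list of Unicode code points, and (as the unresolved symbol suggests) its implementation produces the result through the UTF-16 result API, which an SQLITE_OMIT_UTF16 build leaves out. A quick sketch of the function itself, from a full build:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# char() maps each argument, a Unicode code point, to one character
s = con.execute("SELECT char(72, 105, 33)").fetchone()[0]
print(s)  # Hi!
```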

 

Bert

 

From: sqlite-dev-boun...@sqlite.org [mailto:sqlite-dev-boun...@sqlite.org]
On Behalf Of Richard Hipp
Sent: zaterdag 2 maart 2013 17:30
To: General Discussion of SQLite Database; sqlite-dev
Subject: [sqlite-dev] SQLite 3.7.16 beta

 

The anticipated release date for SQLite 3.7.16 is currently 2013-03-12.
Additional information about the forthcoming 3.7.16 release:

   (1)  http://www.sqlite.org/draft/releaselog/3_7_16.html
   (2)  http://www.sqlite.org/draft/download.html
   (3)  http://www.sqlite.org/checklists/3071600

See a summary of changes at (1).  Download a snapshot of the latest code (in
the form of an amalgamation "sqlite3.c" source file) from (2).  The status
board at (3) will show the current state of release testing.  When the
status board goes all green, we will cut the release.  The links above, and
indeed the entire http://www.sqlite.org/draft/ sub-website, will be updated
periodically between now and the official release so check back frequently.

Please report any problems encountered.

-- 
D. Richard Hipp
d...@sqlite.org 



Re: [sqlite] Windows (slow) vs. iOS/OSX (fast) Performance

2012-12-01 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of David de Regt
> Sent: vrijdag 30 november 2012 18:41
> To: General Discussion of SQLite Database
> Subject: [sqlite] Windows (slow) vs. iOS/OSX (fast) Performance
> 
> Hey all.  I've been struggling with a basic perf issue running the same
code on
> Windows vs. iOS and OSX.
> 
> Basic query set:
> CREATE TABLE test (col1 int, col2 text);
> [loop 500 times]: INSERT INTO TEST (col1,col2) VALUES (4,'test4')
> 
> I'm coding this using the default C amalgamation release and using
> prepare/etc. on all platforms in the exact same way (same very simple DB-
> access class I made).  I realize that using a transaction around this
would
> vastly improve perf, but given the atomic nature of the app that this test
is
> simulating, it won't work to wrap it into transactions, so my goal is to
improve
> the atomic performance.  These are all being run on the same Macbook Pro,
> with an SSD, running Windows via boot camp, OSX natively, and iOS via the
> iOS simulator:
> 
> With defaults (pragma sync = on, default journal_mode):
> Windows: 2500ms
> iOS: 300ms
> OSX: 280ms

On Windows, SQLite by default uses the real/full fsyncs to the hardware as
provided by the OS, while on at least OS X it doesn't by default.
See http://sqlite.org/pragma.html#pragma_fullfsync
(I don't know about iOS, but I wouldn't be surprised if it has the same
relaxed level of syncs as OS X.)

But that doesn't answer the question from your follow-up: why the performance
is still so different when the syncs are off.
> 
> With pragma sync = off, journal_mode = memory:
> Windows: 62ms
> iOS: 25ms
> OSX: 25ms
> 
> Turning off sync doesn't make me feel warm and fuzzy about our lost-power
> scenario, so with sync on, it seems like something must be fishy for it to
be

Bert
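The settings compared in this thread can be toggled per connection; a small sketch (against a throwaway in-memory database, purely illustrative) showing the pragmas involved:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA synchronous = OFF")      # skip fsync entirely
con.execute("PRAGMA journal_mode = MEMORY")  # keep the rollback journal in RAM
con.execute("PRAGMA fullfsync = ON")         # only has an effect on Apple builds

sync = con.execute("PRAGMA synchronous").fetchone()[0]
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
print(sync, mode)  # 0 memory
```

Turning synchronous off trades the lost-power guarantee for speed, which is exactly the trade-off being measured above.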



Re: [sqlite] SQLite converts all unicode characters into ANSI

2012-11-20 Thread Bert Huijben
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of ZikO
> Sent: maandag 19 november 2012 02:57
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] SQLite converts all unicode characters into ANSI
> 
> Hello,
> thanks for your answers.
> 
> I downloaded both precompiled binaries and shell for windows from
> www.sqlite.org.
> 
> The script looks like this:
> CREATE TABLE IF NOT EXISTS imiona (
>   id INTEGER PRIMARY KEY,
>   data TEXT,
>   imie1 TEXT,
>   imie2 TEXT,
>   imie3 TEXT);

> 
> D:\Wydarzenia\Events>sqlite3 imieniny.db < test.sql
> Error: near line 1: near "´╗┐CREATE": syntax error
> 
> I then used the conding UTF-8 without BOM and the command was accepted
> but
> unfortunately I get the strange result:
> sqlite> select * from imiona;
> 1   01/01/2012  Masława  Mieczysława  Mieszka
> 2   16/01/2012  Marcelego   Walerii   Włodzimie
> 3   17/09/2012  Franciszka  Lamberty  Narcyza
> 4   01/01/2012  Masława  Mieczysława  Mieszka
> 5   16/01/2012  Marcelego   Walerii   Włodzimie
> 6   17/09/2012  Franciszka  Lamberty  Narcyza
> 
> Now, I don't know how to test if the text is OK because SQLite2009Pro
> displays everything converted to ANSI even ithough I set it to UTF-8 :/
> Perhaps this program is not very good then. What would you recommend,
> instead?
> 
> once again thanks for your effort towards this. I am really really surprised
> this behaviour.

Could you try .read in sqlite3 instead of using a pipe? With the pipe, the
shell is responsible for reading your file and forwarding it to sqlite3,
while .read makes sqlite3 read the file itself, which should be fully
transparent to whatever encoding you use.
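To illustrate the failure mode (a hypothetical Python round-trip, not Subversion or shell code): the garbage before CREATE in the earlier error message appears to be a UTF-8 byte-order mark rendered in the console code page. Writing the script with a BOM and then decoding it with a BOM-aware codec before handing the text to SQLite makes the problem disappear while keeping the non-ASCII text intact:

```python
import io
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.sql")  # hypothetical script name
# "utf-8-sig" writes the EF BB BF byte-order mark, like Notepad's "UTF-8"
with io.open(path, "w", encoding="utf-8-sig") as f:
    f.write("CREATE TABLE imiona(id INTEGER PRIMARY KEY, imie1 TEXT);\n"
            "INSERT INTO imiona(imie1) VALUES('Mieczys\u0142awa');")

# Decoding with the same codec strips the BOM again, so the SQL parses
# and the Polish text survives unchanged
sql = io.open(path, encoding="utf-8-sig").read()
con = sqlite3.connect(":memory:")
con.executescript(sql)
name = con.execute("SELECT imie1 FROM imiona").fetchone()[0]
print(name)  # Mieczysława
```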

Bert 



Re: [sqlite] Visual Studio 2008 Express and sqlite3.c

2012-03-30 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Marco Bambini
> Sent: vrijdag 30 maart 2012 9:52
> To: General Discussion of SQLite Database
> Subject: [sqlite] Visual Studio 2008 Express and sqlite3.c
> 
> I am trying to compile the latest sqlite 3.7.11 amalgamation C file within
a
> Visual Studio 2008 Express C++ project.
> sqlite3.c is correctly set as to be compiled as C code but I am unable to
find
> out a way to fix some compilation errors:
> 
> Compiling...
> sqlite3.c
> ..\Sources\sqlite3.c(107831) : error C2143: syntax error : missing ':'
before '!'


Line 107831 is the 'int not' line in

/*
** An instance of this structure is used to store the LIKE,
** GLOB, NOT LIKE, and NOT GLOB operators.
*/
struct LikeOp {
  Token eOperator;  /* "like" or "glob" or "regexp" */
  int not; /* True if the NOT keyword is present */
};

In my version of sqlite3.c

Can you make sure your file is not somehow corrupted?
(The next error is a reference to this structure).

Maybe you somehow include a file that redefines 'not'?

I can compile sqlite3.c with Visual C++ 2008 Professional (with all SPs)
without problems.

Bert 
