[sqlite] Proposed new version numbering scheme for SQLite - Feedback requested

2015-10-08 Thread Bert Huijben
> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Simon Slavin
> Sent: donderdag 8 oktober 2015 16:36
> To: General Discussion of SQLite Database <sqlite-users at mailinglists.sqlite.org>
> Subject: Re: [sqlite] Proposed new version numbering scheme for SQLite -
> Feedback requested
> 
> 
> On 8 Oct 2015, at 2:38pm, Richard Hipp  wrote:
> 
> > If accepted, the new policy will cause the next release to be 3.9.0
> > instead of 3.8.12.  And the second number in the version will be
> > increased much more aggressively in future releases.
> 
> I approve of this particular release changing the Y value (i.e. being 3.9.0)
> since it allows SQLite to create and change databases to a format which
> can't be opened with previous versions.
> 
> "However, the current tarball naming conventions only reserve two digits for
> the Y and so the naming format for downloads will need to be revised in
> about 2030."
> 
> If we're still actively using SQLite3 for new projects in 2030 (i.e. we
> haven't moved on to SQLite4 or something else entirely), they'd better award
> the dev team a solid platinum prize of some sort.

+1

Bert Huijben

BTW: Completed the Subversion testsuite on the 3.8.12 version. No problems
found.



[sqlite] Windows 8.x security requirements / SafeSEHCheck - NXCheck - DBCheck

2015-04-03 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Keith Medcalf
> Sent: vrijdag 3 april 2015 01:51
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Windows 8.x security requirements / SafeSEHCheck -
> NXCheck - DBCheck
> 
> 
> BTW, I have verified that these options all work as described and the
> options are recognized and processed properly by Windows, and that BinScope
> is happy:
> 
> Failed checks
> d:\source\sqlite\sqlite3.dll - SafeSEHCheck ( FAIL )
> 
> Passed checks
> d:\source\sqlite\sqlite3.dll - NXCheck ( PASS )
This makes all the allocated memory and the stack non-executable.

> d:\source\sqlite\sqlite3.dll - DBCheck ( PASS )

> (Note, position independent code (PIC) is by definition loadable at any
> base. Microsoft is just several decades behind in generating position
> independent code.)
And this makes it possible to load the DLL above 2 GB, which isn't enabled by
default because too much existing legacy code still casts pointers to
integers and back. It is another opt-in flag in the PE header, to allow
existing programs to keep working on bigger machines.
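For illustration only (this is not SQLite code), a sketch of the kind of legacy
pattern that breaks once allocations end up above 2 GB:

[[
/* Illustration only -- not SQLite code.  Legacy code like this appears to
** work as long as every address fits below 2 GB, and silently corrupts
** pointers once the process uses addresses above that boundary. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void){
  void *p = malloc(16);
  unsigned int low = (unsigned int)(uintptr_t)p;  /* drops the high bits */
  void *q = (void *)(uintptr_t)low;               /* wrong above 2 GB    */
  printf("round-trip %s\n", p==q ? "survived" : "corrupted the pointer");
  free(p);
  return 0;
}
]]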


SafeSEH is a similar flag for Structured Exception Handling. Without this
flag the stack is unwound on exceptions by using debugger tricks and
information stored on the stack itself. When this flag is set the compiler
promises to provide all this information in a parsable format somewhere in
the binary. This improves performance and reliability on these exceptions.

But as far as I know Sqlite doesn't use structured exception handling at
all. Perhaps MinGW supports a flag to simply declare that, if it can't
create these tables itself.
(On Windows x86_64 these tables are always required, so this flag doesn't
apply there.)


Windows 8.1 and Windows 10 will add another feature like this, where all
callback function pointers are stored in a central table so it can be
verified that a passed callback pointer really points to the intended code.
(As far as I know this is currently only supported by internal and
pre-release versions of the MSVC compiler.)

Bert



[sqlite] Weird (slow) TEMP B-TREE ORDER BY

2015-03-14 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Dinu Marina
> Sent: donderdag 12 maart 2015 11:34
> To: sqlite-users at mailinglists.sqlite.org
> Subject: Re: [sqlite] Weird (slow) TEMP B-TREE ORDER BY
> 
> Hi Richard,
> 
> Like said, we've already tried:
> CREATE TEMP TABLE x AS (...query without outer sort...); CREATE xind ON
> x (price); SELECT * FROM x ORDER BY 2
> but it also takes 30 seconds; for some really esoteric reason, CREATE
> TABLE x AS ... takes all of those 30s, even though the execution plan
> looks identical to the simple select (the same 72 steps). This is what I
> would dare call "spooky", any explanation would be appreciated here too.

Did you try running analyze after adding some data and before looking at the
query plans?

With indexes that contain multiple columns, adding the sqlite_stat1 table
(via ANALYZE, or in another way) may make a huge difference in query
plans and performance.

We found that out the hard way in Subversion, because most of our indexes
have a pretty much static first column, wc_id, in preparation for some new
features. Even when SQLite used the right indexes in its query plans, it
sometimes didn't use all the columns that it should.

We now add an initial sqlite_stat1 table, and we can rely on stable (good)
performance from Sqlite:
https://www.sqlite.org/optoverview.html#manctrl
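For reference, a minimal sketch of installing such an initial sqlite_stat1
table right after creating the schema (the index name and stat string below
are placeholders, not the values Subversion ships):

[[
/* Sketch: pre-fill sqlite_stat1 when the database is created.  The stat
** values are placeholders; use numbers that match the expected shape of
** your data.  Error handling omitted for brevity. */
#include <sqlite3.h>

static int install_initial_stat1(sqlite3 *db){
  return sqlite3_exec(db,
    "ANALYZE sqlite_master;"                  /* creates sqlite_stat1        */
    "INSERT INTO sqlite_stat1 VALUES"
    " ('NODES','sqlite_autoindex_NODES_1','10000 10000 1 1');"
    "ANALYZE sqlite_master;",                 /* reloads it into the planner */
    0, 0, 0);
}
]]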


Bert 



Re: [sqlite] group_concat query performance

2014-10-17 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Kraijenbrink - FixHet - Systeembeheer
> Sent: vrijdag 17 oktober 2014 16:46
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] group_concat query performance
> 
> Thanks Bert,
> 
> You are right. Now the C++ example runs equally fast. Looks like I have to
> redesign the Db schema.

Have you tested using a transaction over your combined queries?

Even for simple read operations this may make a huge performance difference.


In our use case (where we use SQLite as the Subversion working copy database)
the performance problems are usually in the total number of transactions for
an operation, not in the queries themselves.
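A minimal sketch of what I mean (error handling omitted):

[[
/* Sketch: group the individual statements of one logical operation in a
** single transaction, so SQLite acquires and releases the file lock (and
** validates its cache) once instead of once per statement. */
#include <sqlite3.h>

static void run_operation(sqlite3 *db){
  sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  /* ... run the combined read (and write) queries of the operation ... */
  sqlite3_exec(db, "COMMIT;", 0, 0, 0);
}
]]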

Bert
> 
> With regards,
> 
> Peter
> 
> >Where do you perform the query in the C++ code?
> >
> >Your C++ program shows how you prepare the statement 5000 times, but
> not how you execute it.
> >
> >The VB.Net code prepares the statement once, and then executes it 5000
> times.
> >
> >
> >It looks like you are testing completely different things.
> >
> > Bert
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] group_concat query performance

2014-10-17 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Kraijenbrink - FixHet - Systeembeheer
> Sent: vrijdag 17 oktober 2014 12:01
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] group_concat query performance
> 
> 
> Joe Mistachkin wrote:
> >
> >Thanks for the query.  It's difficult to track down performance issues
> >with System.Data.SQLite without seeing the C# (or VB.NET) example code as
> >there are a variety of ways to query and process data using it.
> >
> >Is there any chance we could see the code that is using
> >System.Data.SQLite?
> 
> Yes, sure. Here they are, the first one is C++, the second one VB.net .Net
> Framework 4

Where do you perform the query in the C++ code?

Your C++ program shows how you prepare the statement 5000 times, but not how
you execute it.

The VB.Net code prepares the statement once, and then executes it 5000
times.


It looks like you are testing completely different things.
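For comparison, a sketch in C of what the VB.Net test effectively does:
prepare once, then execute (step and reset) the same statement many times.
Error handling omitted.

[[
/* Sketch: prepare once, execute many times.  This is the pattern the
** VB.Net example follows and the C++ example does not. */
#include <sqlite3.h>

static void run_many(sqlite3 *db, const char *sql, int n){
  sqlite3_stmt *stmt;
  int i;
  sqlite3_prepare_v2(db, sql, -1, &stmt, 0);      /* prepared exactly once */
  for(i = 0; i < n; i++){
    while( sqlite3_step(stmt)==SQLITE_ROW ){
      /* consume the result, e.g. sqlite3_column_text(stmt, 0) */
    }
    sqlite3_reset(stmt);                          /* rerun the same statement */
  }
  sqlite3_finalize(stmt);
}
]]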

Bert

> 
> With regards,
> 
> Peter
> 
> >>>>>>>>> C++ example code: <<<<<<<<<<<<<<<<<<<<<
> 
> #include "stdafx.h"
> #include <iostream>
> #include <string>
> #include <ctime>
> #include <sqlite3.h>
> 
> int main()
> {
>   sqlite3 *db;
>   sqlite3_stmt *res;
>   time_t execStart, execStop;
> 
>   const char *errMSG;
>   const char *tail;
> 
>   std::cout << "Running GROUP_CONCAT function test..." << std::endl;
> 
>   time(&execStart);
> 
>   int error = sqlite3_open("test.db", &db);
> 
>   if (error)
>   {
>   std::cout << "Could not open DB" << std::endl;
>   sqlite3_close(db);
> 
>   system("Pause");
>   return 1;
>   }
> 
>   int cnt;
> 
>   for (cnt = 0; cnt < 5; cnt++)
>   {
>   std::string query = "SELECT GROUP_CONCAT(Parent.fkintFolderID,':') FilePath "
>   "FROM tblFolderNestedSets Node, tblFolderNestedSets Parent "
>   "WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight "
>   "AND Parent.fkintSessionID = Node.fkintSessionID "
>   "AND Node.fkintSessionID = 1824 AND Node.fkintFolderID = 2913318;";
> 
>   error = sqlite3_prepare_v2(db,query.c_str(), query.length(),
> &res, &tail);
> 
>   if (error != SQLITE_OK)
>   {
> 
>   std::cout << "Could not prepare sql" << std::endl;
>   sqlite3_close(db);
> 
>   system("Pause");
>   return 1;
>   }
>   }
> 
>   sqlite3_finalize(res);
>   sqlite3_close(db);
> 
>   time(&execStop);
> 
>   double timeDiff = difftime(execStart, execStop);
> 
>   printf("Elapsed time is %.2lf seconds. ", timeDiff);
> 
>   system("Pause");
>   return 0;
> 
> }
> 
> >>>>>>>>>>>>> System.Data.SQLite example code:
> <<<<<<<<<<<<<<<<<<<<<<
> 
> Module Module1
> 
> Sub Main()
> 
> Dim _stopwatch As New Stopwatch()
> 
> Dim _dbConn As New System.Data.SQLite.SQLiteConnection()
> Dim _dbPath As String = "Data Source=test.db"
> 
> _stopwatch.Start()
> 
> Console.WriteLine("Running GROUP_CONCAT function test...")
> 
> _dbConn.ConnectionString = _dbPath
> _dbConn.Open()
> 
> Dim _selCmd As System.Data.SQLite.SQLiteCommand
> _selCmd = New System.Data.SQLite.SQLiteCommand(_dbConn)
> 
> _selCmd.CommandText = "SELECT GROUP_CONCAT(Parent.fkintFolderID,':') FilePath " & _
>   "FROM tblFolderNestedSets Node, tblFolderNestedSets Parent " & _
>   "WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight " & _
>   "AND Parent.fkintSessionID = Node.fkintSessionID " & _
>   "AND Node.fkintSessionID = 1824 AND Node.fkintFolderID = 2913318;"
> 
> Dim _cnt As Integer
> Dim _result As String
> 
> For _cnt = 1 To 5
> 
> _result = _selCmd.ExecuteScalar().ToString()
> Next
> 
> _dbConn.Close()
> 
> _stopwatch.Stop()
> 
> Console.WriteLine("Elapsed time is {0} seconds.", _stopwatch.Elapsed)
> Console.WriteLine("Press any key to continue...")
> Console.ReadKey()
> 
> End Sub
> 
> End Module
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [sqlite-dev] SQLite version 3.8.6 coming soon

2014-08-06 Thread Bert Huijben

"Richard Hipp" wrote...

> Version 3.8.6 was originally scheduled for September.  But the change log
> (http://www.sqlite.org/draft/releaselog/current.html) is getting rather long
> and there are a number of important bug fixes.  So we might try to get
> 3.8.6 out the door sooner rather than later.
>
> Please test!  Recent amalgamations are available on the download page
> (http://www.sqlite.org/download.html).  Please compile your applications
> using the latest tip of trunk and let us know if you encounter any problems.
>
> Please review the change log and speak up if you see anything amiss.
>
> The status board (http://www.sqlite.org/checklists/3080600/index) for the
> 3.8.6 release is now on-line.  The usual rule applies:  When the check-list
> goes all-green, we will cut the release.  No check-list items have been
> marked off yet, but we might start marking them off later this week, or
> early next week.
>
> Your feedback is greatly appreciated.

The new wrapping of InterlockedCompareExchange introduced in 3.8.6 breaks 
compilation in Visual C++ 2013 (and the "14" CTPs) for X86 and X64, as this 
function is implemented as an intrinsic in these compilers.

When linking the amalgamation to a program I get a linker error:
sqlite3.lib(sqlite3.obj) : error LNK2001: unresolved external symbol 
__InterlockedCompareExchange

In the newer SDK versions used by this compiler there is a:
#define InterlockedCompareExchange  _InterlockedCompareExchange
(via winbase.h)

Which triggers the use of the intrinsic. (Older versions of the compiler appear 
to have the intrinsic, but by default their SDKs don't use it)


As I assume this is required for the testing framework, an easy fix might be to 
just undefine 'InterlockedCompareExchange'.
(I haven't tested this myself yet)
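A sketch of where such an #undef would go (untested, and only meant to show
the idea):

[[
/* Untested sketch of the suggested workaround.  As described above, newer
** SDKs effectively do
**
**    #define InterlockedCompareExchange _InterlockedCompareExchange
**
** in winbase.h, which is what drags the intrinsic name into the wrapper.
** Dropping that macro before the wrapper is declared avoids the rename. */
#include <windows.h>

#if defined(_MSC_VER) && defined(InterlockedCompareExchange)
#undef InterlockedCompareExchange
#endif
]]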

Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Unexpected temporary B-TREE in 3.8.5-201405271818

2014-06-01 Thread Bert Huijben
Hi,

I just tested Sqlite 3.8.5 on Subversion's schema and found that one of our
queries started using a temporary B-Tree while executing, which it didn't do
in any older Sqlite version. I wouldn't expect a temporary table to be used,
as the data is already properly ordered when the query uses the primary key
index.

The query in its readable form is:
[[
INSERT INTO delete_list(local_relpath)
SELECT ?2
UNION ALL
SELECT local_relpath FROM nodes AS n
WHERE wc_id = ?1
  AND IS_STRICT_DESCENDANT_OF(local_relpath, ?2)
  AND op_depth >= ?3
  AND op_depth = (SELECT MAX(s.op_depth) FROM nodes AS s
  WHERE s.wc_id = ?1
AND s.local_relpath = n.local_relpath)
  AND presence NOT IN (MAP_BASE_DELETED, MAP_NOT_PRESENT, MAP_EXCLUDED,
MAP_SERVER_EXCLUDED)
  AND file_external IS NULL
ORDER by local_relpath
]]

The actual query passed to sqlite is:
 [[
INSERT INTO delete_list(local_relpath) SELECT ?2 UNION ALL SELECT
local_relpath FROM nodes AS n WHERE wc_id = ?1   AND (((local_relpath) >
(CASE (?2) WHEN '' THEN '' ELSE (?2) || '/' END)) AND ((local_relpath) <
CASE (?2) WHEN '' THEN X'' ELSE (?2) || '0' END))   AND op_depth >= ?3
AND op_depth = (SELECT MAX(s.op_depth) FROM nodes AS s
WHERE s.wc_id = ?1 AND s.local_relpath =
n.local_relpath)   AND presence NOT IN ('base-deleted', 'not-present',
'excluded', 'server-excluded')   AND file_external IS NULL ORDER by
local_relpath
]]

The query plan in 3.8.5-201405271818 is:
[[
USE TEMP B-TREE FOR ORDER BY
SEARCH TABLE nodes AS n USING INDEX sqlite_autoindex_NODES_1 (wc_id=? AND
local_relpath>? AND local_relpath<?)
]]


Re: [sqlite] Load time performance

2014-05-28 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Stephen Chrzanowski
> Sent: woensdag 28 mei 2014 17:33
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Load time performance
> 
> Caching or not, I don't experience the same thing with opening a read only
> text file in Notepad.  For S'n'Gs, I had a random text file sitting on my
> desktop (A config file for a game, and it is pure text) and decided to use
> it as a subject of abuse.  Its original attribute is set so read only is
> OFF.  I opened it up twice in PSPad just to try and eliminate any kind of
> cache chance.  The second load was instant as the PSPad software was
> actually running the second time I ran.  I closed the file, set the R/O
> attribute, and re-opened.  Again, instant load.

This sounds like hitting a retry loop. The Sqlite Windows VFS implements a
few of those to avoid issues with virus scanners, but it shouldn't hit those
in such a case... It would be hard to tell what caused the access-denied
error that triggered the retry.


Bert 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Performance tuning: winDelete() tweak

2014-02-24 Thread Bert Huijben
Hi,

Earlier today I tried to show how to use a profiler to somebody while
running a checkout on a very recent Subversion build on Windows. This build
contained Sqlite 3.8.3.1.

We happened to see that one of the functions that caught about 15% of the
profile samples was the function 'winDelete()' in the Sqlite amalgamation.

That by itself is not that surprising as we know delete is quite slow on
Windows, but what was really surprising was that about 6% of that 15% was
caused by the GetFileAttributes() call that Sqlite performs right before
calling DeleteFileW().


If I look at the relevant source code of winDelete()
[[
  attr = osGetFileAttributesW(zConverted);
#endif
  if ( attr==INVALID_FILE_ATTRIBUTES ){
lastErrno = osGetLastError();
if( lastErrno==ERROR_FILE_NOT_FOUND
 || lastErrno==ERROR_PATH_NOT_FOUND ){
  rc = SQLITE_IOERR_DELETE_NOENT; /* Already gone? */
}else{
  rc = SQLITE_ERROR;
}
break;
  }
  if ( attr&FILE_ATTRIBUTE_DIRECTORY ){
rc = SQLITE_ERROR; /* Files only. */
break;
  }
  if ( osDeleteFileW(zConverted) ){
rc = SQLITE_OK; /* Deleted OK. */
break;
  }
]]

it appears that Sqlite only calls GetFileAttributes() to improve the error
handling, especially for the case where a caller accidentally tries to
remove a directory.


Is it possible to improve this code by calling osDeleteFileW() first and,
only if that fails, falling back to osGetFileAttributes() to decide what to
do next? (Read: error out or retry.)


The DeleteFileW() function will just fail when trying to delete a directory
(with a detailed error in GetLastError()), so I don't think this should
introduce a behavior change... but a tweak like this might improve
performance in this very common code path.
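A sketch of the reordering I have in mind (not a tested patch; it mirrors the
excerpt above and keeps the existing retry path):

[[
/* Sketch only, not a tested patch: try the delete first, and query the
** attributes only when the delete fails, to distinguish "already gone",
** "is a directory" and the retry case. */
if( osDeleteFileW(zConverted) ){
  rc = SQLITE_OK;  /* Deleted OK; no extra GetFileAttributes() call needed */
  break;
}
lastErrno = osGetLastError();
if( lastErrno==ERROR_FILE_NOT_FOUND || lastErrno==ERROR_PATH_NOT_FOUND ){
  rc = SQLITE_IOERR_DELETE_NOENT; /* Already gone? */
  break;
}
attr = osGetFileAttributesW(zConverted);
if( attr!=INVALID_FILE_ATTRIBUTES && (attr&FILE_ATTRIBUTE_DIRECTORY) ){
  rc = SQLITE_ERROR; /* Files only. */
  break;
}
/* otherwise fall through to the existing retry logic */
]]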

Thanks,
Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Simple SELECT misbehaving in Sqlite 3.8.3 with ENABLE_STAT3

2014-02-10 Thread Bert Huijben

Hi,

As part of the Subversion 1.8.6 release we tried introducing some data in
the 'sqlite_stat1' table using the recommended approach for Sqlite 3.8.0+
compatibility to tell sqlite about our 'bad indexes':
[[
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES 
INSERT INTO "sqlite_stat1" VALUES 
...
ANALYZE sqlite_master;
]]
(this was discussed on this list a few months ago and worked fine in all our
internal testing)

During release testing we found that some distributions decided to enable
SQLITE_ENABLE_STAT3, and with this flag at least one of our queries doesn't
work the way we expect it with Sqlite 3.8.3 after that second ANALYZE call.
(The distribution: OS/X 'Homebrew'
https://github.com/Homebrew/homebrew/commit/c9eca803d676961ead136b07ab145cc1
e826b314 )

Trimmed testcase on
http://b.qqn.nl/f/201402-sqlite-stat3-no-row.sql

Original/full testcase on
http://b.qqn.nl/f/201402-sqlite-stat3-no-row-FULL.sql


The simplified query
[[
SELECT local_relpath, moved_to, op_depth, 1
 FROM nodes n
WHERE wc_id = 1
  AND (local_relpath = 'A/B' OR ((local_relpath > 'A/B/') AND (local_relpath
< 'A/B0')))
  AND moved_to IS NOT NULL
  AND op_depth >= 0;
]]

Returns 1 row in the sqlite versions as we normally compile it, but when
3.8.3 has SQLITE_ENABLE_STAT3 enabled it doesn't return any rows.

In my opinion the missing heuristics in the statistics table shouldn't make
the query return invalid data. (It could make it faster, slower, ...).


I'm guessing that this is a bug that needs a fix.

But for our usage of Sqlite in Subversion we are probably better off blocking
usage of an sqlite build that has STAT2 or STAT3 (or higher) enabled.

What is the recommended approach for detecting this scenario?
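One possible approach I could think of (a sketch; I don't know whether this
is what the SQLite developers would recommend) is to ask the library for its
compile-time options:

[[
/* Sketch: refuse to run against a build with STAT2/STAT3 enabled.
** The "SQLITE_" prefix may be omitted from the option name. */
#include <stdio.h>
#include <sqlite3.h>

int main(void){
  if( sqlite3_compileoption_used("ENABLE_STAT2")
   || sqlite3_compileoption_used("ENABLE_STAT3") ){
    fprintf(stderr, "This SQLite build has STAT2/STAT3 enabled\n");
    return 1;
  }
  printf("No STAT2/STAT3 in this build\n");
  return 0;
}
]]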

Bert

--
The schema of the database and the testcase are part of Subversion and
Apache 2 licensed, so feel free to use any part for future testing.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite and Virtual PC

2013-09-08 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Gintaras Didzgalvis
> Sent: zondag 8 september 2013 06:32
> To: sqlite-users@sqlite.org
> Subject: [sqlite] SQLite and Virtual PC
> 
> Hi,
> 
> SQLite running on Microsoft Virtual PC 2007 cannot read and write
> databases that are on host PC.
> 
> Details:
> sqlite3_open_v2(SQLITE_OPEN_READWRITE) succeeds, but sqlite3_exec
> returns SQLITE_BUSY, even with SELECT.
> Using the official downloaded sqlite3.dll, version 3.8.0.
> OS on guest PC, where SQLite is used: Windows XP, SP3.
> OS on host PC: Windows 7, 32-bit.
> Accessing the database file using the "shared folders" feature of
> Virtual PC.
> Windows API functions can read and write the file.
> No problems on Vmware player (accessing the file through its virtual
> network).

The Virtual PC shared folder feature implements only a very tiny subset of
the filesystem APIs, and I don't think Microsoft will extend it to support
new features.

I would recommend using a shared drive over the virtual network from
Virtual PC too, as that should work just like file shares between machines
(and is a scenario well tested by both Microsoft and many Sqlite-based
application users).

Bert
> 
> --
> --
> 
> Gintaras
> http://www.quickmacros.com
> __
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-03 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Marc L. Allen
> Sent: dinsdag 3 september 2013 15:55
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> Am I understanding that, in this example, the I_NODES_PARENT is being
> chosen as the search index because... it's smaller and therefore faster to
find
> initial qualifying rows that you can then use in some sort of ordered
lookup in
> another index/table?
> 
> I'm always in awe of some of the plans a good QA comes up with, and it
> blows me away that there are cases when using a less-covering index would
> be better than a more-covering index.

It also happens to be the first index to be encountered that has wc_id as
first component. 

I'm not sure which part (being first vs being smaller) is used to make the
decision, but this is exactly why I expected this to be a simple corner case
bug instead of part of the new query optimizer design.

In Subversion we have queries that only apply on a single directory level
(where the I_NODES_PARENT is used for) vs queries that apply to an entire
tree (in most cases handled via the primary key index). Using the right
index is critical for end-user performance.


The I_NODES_MOVED index is only used when trees are moved, which is an
uncommon operation, but obtaining information about moves is performance
critical in certain scenarios. I hope we will start using the partial index
support for this with the next release. That should also directly rule out
using this index for these optimizations.
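(For illustration, a sketch of the kind of partial index meant here; the
exact column list is an assumption, not necessarily what Subversion will
ship:)

[[
/* Sketch only -- a hypothetical partial-index definition: restrict the
** moved-to index to rows that actually describe a move, so it stays small
** and cannot be picked for unrelated queries.  Requires SQLite 3.8.0+. */
#include <sqlite3.h>

static int create_moved_index(sqlite3 *db){
  return sqlite3_exec(db,
    "CREATE UNIQUE INDEX IF NOT EXISTS I_NODES_MOVED "
    "ON NODES (wc_id, moved_to, op_depth) WHERE moved_to IS NOT NULL;",
    0, 0, 0);
}
]]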

Bert 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-03 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: dinsdag 3 september 2013 02:12
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Mon, Sep 2, 2013 at 10:58 AM, Bert Huijben <rhuij...@apache.org>
> wrote:
> 
> > We anticipate that the wc_id column will be used more in future versions
> > and
> > I had hoped that the assumption that a better index match (matching more
> > usable columns) would always be preferable over one that uses less
> columns.
> >
> > All indexes have the wc_id column first, so I don't see a good reason
that
> > Sqlite in this case prefers only this column of the specific (bad) index
> > over any other index that can help with the rest of the query.
> >
> > This optimization added in 3.7.12 made it easier for us to write clearer
> > well performing queries, without any hassle. . . We can't really assume
our
> > users to run a specific version of Sqlite (as most unix distributions
> > compile sqlite themselves for products like ours)...
> >
> 
> As I've alluded to before, the issue here boils down to the "shape" of the
> content of the database file.  By "shape", I mean the distribution of
> values in each table, and in particular how well each index is able to
> narrow down a search.  Knowledge of the shape of the data is critically
> important in choosing the best query plan when there are two or more
> potentially good query plans.  It isn't possible to make good decisions
> about query plans without knowing the shape of the data.

I understand.
A long long time ago I implemented similar indexing myself for a library
system, where I couldn't even use the whole keys as an index because of the
storage limitations at the time. I'm glad we don't have those 1980s limits
any more.


In Subversion our databases aren't well formed by relational standards,
but everything worked reasonably well with Sqlite 3.7 before.
I then tuned all queries for Sqlite 3.7.12-3.7.18, where I could get all of
our queries to use the right indexes without much trouble.

> When you run ANALYZE, SQLite records information in the sqlite_stat1 table
> that gives it a good picture of the shape of the data.  If you have not
run
> ANALYZE or if you have dropped or deleted the sqlite_stat1 table, then
> SQLite has no information about the true shape of the data, so it has to
> guess.  The guess that SQLite has made for the the past decade is that the
> left-most column of each index does a pretty good job of narrowing down
> the
> search and then subsequent columns merely refine that search slightly.
> This is just a wild guess, of course.  But it seems to have worked
> remarkably well in a wide variety of databases in literally millions of
> applications over the past ten years.

I don't like that I have to do this, but this appears to be the most
maintainable option for the long-term future.

> The issue at hand is that the guess that SQLite is making on the shape of
> unanalyzed databases is not working very well for SVN.  The problem is that
> the left-most column of many of the indices do a very poor job of narrowing
> down the search, in violation of the assumption made for the standard
> guess.
> 
> As I've pointed out, one could just run ANALYZE to populate the
> sqlite_stat1 table, then SQLite will know the database shape and will pick
> good plans.  Or one can manually populate the sqlite_stat1 table with
> something close to the anticipated shape of the database.  But Bert says he
> doesn't want to do any of that.
> 
> So I've come up with a new, experimental compile-time option:
> 
> -DSQLITE_DEFAULT_INDEX_SHAPE=1
> 
> Rebuilding with the compile-time option above, using code from the
> index-shape-1 branch (http://www.sqlite.org/src/timeline?r=index-shape-1)
> causes SQLite to make a much more pessimistic guess about the effectiveness
> of unanalyzed indices.  In particular, using the new default shape, SQLite
> assumes that you need to use all columns of an index in order to narrow the
> search significantly.  The number of rows that need to be searched
> increases geometrically based on the number of unused columns on the
> right-hand side of the index.

I can confirm that with this patch all our queries get a query plan at least
as efficient as with Sqlite 3.7.12-3.7.18.

I will spend some time over the next few days extending our test
infrastructure, to get more details on the differences between the versions
with a standard install, with a pre-filled _stat1 table, and with this patch.

> Leave the SQLITE_DEFAULT_I

Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-02 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: maandag 2 september 2013 16:11
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Mon, Sep 2, 2013 at 9:42 AM, Bert Huijben <rhuij...@apache.org>
> wrote:
> 
> >
> > My question is more like: Should I see this as a light regression that
> > most likely will be resolved in a future version, or as a buggy
> > assumption in our code that I should work around? It should be possible
> > via explicit 'INDEXED BY' clauses and/or breaking the query in separate
> > parts. (Or for future versions by pre-filling the stat table)
> >
> 
> I think rules #3 and #4 (especially #4) of
> http://www.sqlite.org/queryplanner-ng.html#howtofix are applicable here.
> 
> Without the results of ANALYZE, SQLite assumes that the left-most column of
> an index will narrow down a search to about 10 rows.  That is clearly not
> the case with your schema (where the left-most column of several indices,
> wc_id, is always the same value).  Hence, SQLite really needs the results
> of ANALYZE to work efficiently.
> 
> But that does not mean you have to run ANALYZE on each installation.
> Instead, you can install a pre-computed sqlite3_stat1 table (the larger of
> the two you emailed above would suffice) whenever you create a new
> database.  To do this, just run:
> 
>  ANALYZE sqlite_master;   -- Create the sqlite_stat1 table
>  INSERT INTO sqlite_stat1 VALUES(...);  -- repeat as necessary to fill
> in the table.
>  ANALYZE sqlite_master;   -- Load the sqlite_stat1 table into the
> optimizer.
> 
> The above steps only need to be done once, when the database is first
> created, and can be part of the same script that creates all the other
> tables, indices, triggers, and views in your database.  You should never
> need to run ANALYZE again (assuming the "shape" of your data is always
> roughly the same).  The sqlite_stat1 table created here gives SQLite all
> the information it needs to be able to figure out the best way to handle
> queries in your peculiar usage pattern.

Thanks.

We anticipate that the wc_id column will be used more in future versions, and
I had hoped that a better index match (one matching more usable columns)
would always be preferred over one that uses fewer columns.

All indexes have the wc_id column first, so I don't see a good reason that
Sqlite in this case prefers only this column of the specific (bad) index
over any other index that can help with the rest of the query. 

This optimization, added in 3.7.12, made it easier for us to write clear,
well-performing queries without any hassle... We can't really expect our
users to run a specific version of Sqlite (as most Unix distributions
compile sqlite themselves for products like ours)...



Most Subversion developers are not database experts, so adding this
initialization will add more code that will have to be explained over and
over again to developers that don't understand how a query optimizer (or a
B-Tree) works.

Thanks for the call ANALYZE twice trick. That avoids having to declare the
table ourselves.

Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-09-02 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: vrijdag 30 augustus 2013 21:41
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Fri, Aug 30, 2013 at 3:31 PM, Bert Huijben <rhuij...@apache.org> wrote:
> 
> > The analyze on the very small database (which I used for the comparison
> > between 3.7 and 3.8) is:
> >
> 
> Thanks for the data.

I can share the/an actual database if that would help. (The wc.db describes
a checkout of a 100% public subversion repository, so there is nothing
secret in it)

For myself it would be interesting to know how I should look at the OR
optimization which was available for our use cases between Sqlite 3.7.12 and
3.7.18.

My question is more like: Should I see this as a light regression that most
likely will be resolved in a future version, or as a buggy assumption in our
code that I should work around? It should be possible via explicit 'INDEXED
BY' clauses and/or breaking the query in separate parts. (Or for future
versions by pre-filling the stat table)

Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-08-31 Thread bert
Hi,

 

Using the Subversion 1.7 / 1.8 wc.db schema I get quite different results
from the query planner for several of our queries, where the difference in
performance is quite huge.

 

For this typical example using 3.8.0.1 only one index component is used:

$ EXPLAIN QUERY PLAN DELETE FROM NODES WHERE wc_id = ?1 AND local_relpath =
?2;

0|0|0|SEARCH TABLE NODES USING COVERING INDEX I_NODES_PARENT (wc_id=?)

 

While with 3.7.17 two components are used:

$ EXPLAIN QUERY PLAN DELETE FROM NODES WHERE wc_id = ?1 AND local_relpath =
?2;

0|0|0|SEARCH TABLE NODES USING COVERING INDEX sqlite_autoindex_NODES_1
(wc_id=? AND local_relpath=?) (~9 rows)

 

As currently wc_id is almost always 0 in our database, 3.8.0 will perform a
full table scan while 3.7.17 (and older) just deletes the right record.

 

If I perform a similar 'SELECT * FROM' instead of 'DELETE FROM', the right
index is used (but not as COVERING of course), while a 'SELECT local_relpath
FROM' shows the same problem as the delete.

 

 

This problem appears specific to cases where multiple covering indexes are
found.

 

Bert

 

 

-- 

Complete schema:

[[

CREATE TABLE REPOSITORY (   id INTEGER PRIMARY KEY AUTOINCREMENT,   root
TEXT UNIQUE NOT NULL,   uuid  TEXT NOT NULL   );

CREATE INDEX I_UUID ON REPOSITORY (uuid);

CREATE INDEX I_ROOT ON REPOSITORY (root);

CREATE TABLE WCROOT (   id  INTEGER PRIMARY KEY AUTOINCREMENT,
local_abspath  TEXT UNIQUE   );

CREATE UNIQUE INDEX I_LOCAL_ABSPATH ON WCROOT (local_abspath);

CREATE TABLE PRISTINE (   checksum  TEXT NOT NULL PRIMARY KEY,   compression
INTEGER,   size  INTEGER NOT NULL,   refcount  INTEGER NOT NULL,
md5_checksum  TEXT NOT NULL   );

CREATE TABLE ACTUAL_NODE (   wc_id  INTEGER NOT NULL REFERENCES WCROOT (id),
local_relpath  TEXT NOT NULL,   parent_relpath  TEXT,   properties  BLOB,
conflict_old  TEXT,   conflict_new  TEXT,   conflict_working  TEXT,
prop_reject  TEXT,   changelist  TEXT,   text_mod  TEXT,
tree_conflict_data  TEXT,   conflict_data  BLOB,   older_checksum  TEXT
REFERENCES PRISTINE (checksum),   left_checksum  TEXT REFERENCES PRISTINE
(checksum),   right_checksum  TEXT REFERENCES PRISTINE (checksum),   PRIMARY
KEY (wc_id, local_relpath)   );

CREATE TABLE LOCK (   repos_id  INTEGER NOT NULL REFERENCES REPOSITORY (id),
repos_relpath  TEXT NOT NULL,   lock_token  TEXT NOT NULL,   lock_owner
TEXT,   lock_comment  TEXT,   lock_date  INTEGER,   PRIMARY KEY (repos_id,
repos_relpath)   );

CREATE TABLE WORK_QUEUE (   id  INTEGER PRIMARY KEY AUTOINCREMENT,   work
BLOB NOT NULL   );

CREATE TABLE WC_LOCK (   wc_id  INTEGER NOT NULL  REFERENCES WCROOT (id),
local_dir_relpath  TEXT NOT NULL,   locked_levels  INTEGER NOT NULL DEFAULT
-1,   PRIMARY KEY (wc_id, local_dir_relpath)  );

CREATE TABLE NODES (   wc_id  INTEGER NOT NULL REFERENCES WCROOT (id),
local_relpath  TEXT NOT NULL,   op_depth INTEGER NOT NULL,   parent_relpath
TEXT,   repos_id  INTEGER REFERENCES REPOSITORY (id),   repos_path  TEXT,
revision  INTEGER,   presence  TEXT NOT NULL,   moved_here  INTEGER,
moved_to  TEXT,   kind  TEXT NOT NULL,   properties  BLOB,   depth  TEXT,
checksum  TEXT REFERENCES PRISTINE (checksum),   symlink_target  TEXT,
changed_revision  INTEGER,   changed_date  INTEGER,   changed_author
TEXT,   translated_size  INTEGER,   last_mod_time  INTEGER,   dav_cache
BLOB,   file_external  TEXT, inherited_props BLOB,   PRIMARY KEY (wc_id,
local_relpath, op_depth)   );

CREATE VIEW NODES_CURRENT AS   SELECT * FROM nodes AS n WHERE op_depth =
(SELECT MAX(op_depth) FROM nodes AS n2   WHERE n2.wc_id
= n.wc_id AND n2.local_relpath = n.local_relpath);

CREATE VIEW NODES_BASE AS   SELECT * FROM nodes   WHERE op_depth = 0;

CREATE TRIGGER nodes_insert_trigger AFTER INSERT ON nodes WHEN NEW.checksum
IS NOT NULL BEGIN   UPDATE pristine SET refcount = refcount + 1   WHERE
checksum = NEW.checksum; END;

CREATE TRIGGER nodes_delete_trigger AFTER DELETE ON nodes WHEN OLD.checksum
IS NOT NULL BEGIN   UPDATE pristine SET refcount = refcount - 1   WHERE
checksum = OLD.checksum; END;

CREATE TRIGGER nodes_update_checksum_trigger AFTER UPDATE OF checksum ON
nodes WHEN NEW.checksum IS NOT OLD.checksum BEGIN   UPDATE pristine SET
refcount = refcount + 1   WHERE checksum = NEW.checksum;   UPDATE pristine
SET refcount = refcount - 1   WHERE checksum = OLD.checksum; END;

CREATE TABLE EXTERNALS (   wc_id  INTEGER NOT NULL REFERENCES WCROOT (id),
local_relpath  TEXT NOT NULL,   parent_relpath  TEXT NOT NULL,   repos_id
INTEGER NOT NULL REFERENCES REPOSITORY (id),   presence  TEXT NOT NULL,
kind  TEXT NOT NULL,   def_local_relpath TEXT NOT NULL,
def_repos_relpath TEXT NOT NULL,   def_operational_revision  TEXT,
def_revision  TEXT,   PRIMARY KEY (wc_id, local_relpath) );

CREATE UNIQUE INDEX I_EXTERNALS_DEFINED ON EXTERNALS (wc_id,
def_local_relpath,
local_relpath);

CREATE UNIQUE INDEX I_NODES_MOVED ON NODES (wc_id

[sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-08-31 Thread Bert Huijben
Hi,

 

Using the Subversion 1.7 / 1.8 wc.db schema I get quite different results
from the query planner for several of our queries, where the difference in
performance is quite huge.

 

For this typical example using 3.8.0.1 only one index component is used:

$ EXPLAIN QUERY PLAN DELETE FROM NODES WHERE wc_id = ?1 AND local_relpath =
?2;

0|0|0|SEARCH TABLE NODES USING COVERING INDEX I_NODES_PARENT (wc_id=?)

 

While with 3.7.17 two components are used:

$ EXPLAIN QUERY PLAN DELETE FROM NODES WHERE wc_id = ?1 AND local_relpath =
?2;

0|0|0|SEARCH TABLE NODES USING COVERING INDEX sqlite_autoindex_NODES_1
(wc_id=? AND local_relpath=?) (~9 rows)

 

As currently wc_id is almost always 0 in our database, 3.8.0 will perform a
full table scan while 3.7.17 (and older) just deletes the right record.

 

If I perform a similar 'SELECT * FROM' instead of 'DELETE FROM', the right
index is used (but not as COVERING of course), while a 'SELECT local_relpath
FROM' shows the same problem as the delete.

 

 

This problem appears specific to cases where multiple covering indexes are
found.

 

Bert

 

 

-- 

Complete schema:

[[

CREATE TABLE REPOSITORY (   id INTEGER PRIMARY KEY AUTOINCREMENT,   root
TEXT UNIQUE NOT NULL,   uuid  TEXT NOT NULL   );

CREATE INDEX I_UUID ON REPOSITORY (uuid);

CREATE INDEX I_ROOT ON REPOSITORY (root);

CREATE TABLE WCROOT (   id  INTEGER PRIMARY KEY AUTOINCREMENT,
local_abspath  TEXT UNIQUE   );

CREATE UNIQUE INDEX I_LOCAL_ABSPATH ON WCROOT (local_abspath);

CREATE TABLE PRISTINE (   checksum  TEXT NOT NULL PRIMARY KEY,   compression
INTEGER,   size  INTEGER NOT NULL,   refcount  INTEGER NOT NULL,
md5_checksum  TEXT NOT NULL   );

CREATE TABLE ACTUAL_NODE (   wc_id  INTEGER NOT NULL REFERENCES WCROOT (id),
local_relpath  TEXT NOT NULL,   parent_relpath  TEXT,   properties  BLOB,
conflict_old  TEXT,   conflict_new  TEXT,   conflict_working  TEXT,
prop_reject  TEXT,   changelist  TEXT,   text_mod  TEXT,
tree_conflict_data  TEXT,   conflict_data  BLOB,   older_checksum  TEXT
REFERENCES PRISTINE (checksum),   left_checksum  TEXT REFERENCES PRISTINE
(checksum),   right_checksum  TEXT REFERENCES PRISTINE (checksum),   PRIMARY
KEY (wc_id, local_relpath)   );

CREATE TABLE LOCK (   repos_id  INTEGER NOT NULL REFERENCES REPOSITORY (id),
repos_relpath  TEXT NOT NULL,   lock_token  TEXT NOT NULL,   lock_owner
TEXT,   lock_comment  TEXT,   lock_date  INTEGER,   PRIMARY KEY (repos_id,
repos_relpath)   );

CREATE TABLE WORK_QUEUE (   id  INTEGER PRIMARY KEY AUTOINCREMENT,   work
BLOB NOT NULL   );

CREATE TABLE WC_LOCK (   wc_id  INTEGER NOT NULL  REFERENCES WCROOT (id),
local_dir_relpath  TEXT NOT NULL,   locked_levels  INTEGER NOT NULL DEFAULT
-1,   PRIMARY KEY (wc_id, local_dir_relpath)  );

CREATE TABLE NODES (   wc_id  INTEGER NOT NULL REFERENCES WCROOT (id),
local_relpath  TEXT NOT NULL,   op_depth INTEGER NOT NULL,   parent_relpath
TEXT,   repos_id  INTEGER REFERENCES REPOSITORY (id),   repos_path  TEXT,
revision  INTEGER,   presence  TEXT NOT NULL,   moved_here  INTEGER,
moved_to  TEXT,   kind  TEXT NOT NULL,   properties  BLOB,   depth  TEXT,
checksum  TEXT REFERENCES PRISTINE (checksum),   symlink_target  TEXT,
changed_revision  INTEGER,   changed_date  INTEGER,   changed_author
TEXT,   translated_size  INTEGER,   last_mod_time  INTEGER,   dav_cache
BLOB,   file_external  TEXT, inherited_props BLOB,   PRIMARY KEY (wc_id,
local_relpath, op_depth)   );

CREATE VIEW NODES_CURRENT AS   SELECT * FROM nodes AS n WHERE op_depth =
(SELECT MAX(op_depth) FROM nodes AS n2   WHERE n2.wc_id
= n.wc_id AND n2.local_relpath = n.local_relpath);

CREATE VIEW NODES_BASE AS   SELECT * FROM nodes   WHERE op_depth = 0;

CREATE TRIGGER nodes_insert_trigger AFTER INSERT ON nodes WHEN NEW.checksum
IS NOT NULL BEGIN   UPDATE pristine SET refcount = refcount + 1   WHERE
checksum = NEW.checksum; END;

CREATE TRIGGER nodes_delete_trigger AFTER DELETE ON nodes WHEN OLD.checksum
IS NOT NULL BEGIN   UPDATE pristine SET refcount = refcount - 1   WHERE
checksum = OLD.checksum; END;

CREATE TRIGGER nodes_update_checksum_trigger AFTER UPDATE OF checksum ON
nodes WHEN NEW.checksum IS NOT OLD.checksum BEGIN   UPDATE pristine SET
refcount = refcount + 1   WHERE checksum = NEW.checksum;   UPDATE pristine
SET refcount = refcount - 1   WHERE checksum = OLD.checksum; END;

CREATE TABLE EXTERNALS (   wc_id  INTEGER NOT NULL REFERENCES WCROOT (id),
local_relpath  TEXT NOT NULL,   parent_relpath  TEXT NOT NULL,   repos_id
INTEGER NOT NULL REFERENCES REPOSITORY (id),   presence  TEXT NOT NULL,
kind  TEXT NOT NULL,   def_local_relpath TEXT NOT NULL,
def_repos_relpath TEXT NOT NULL,   def_operational_revision  TEXT,
def_revision  TEXT,   PRIMARY KEY (wc_id, local_relpath) );

CREATE UNIQUE INDEX I_EXTERNALS_DEFINED ON EXTERNALS (wc_id,
def_local_relpath,
local_relpath);

CREATE UNIQUE INDEX I_NODES_MOVED ON NODES (wc_id

Re: [sqlite] Inefficient covering index used for Subversion with SQLite 3.8.0

2013-08-30 Thread Bert Huijben
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: vrijdag 30 augustus 2013 21:16
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Inefficient covering index used for Subversion with
> SQLite 3.8.0
> 
> On Fri, Aug 30, 2013 at 2:44 PM, <rhuij...@apache.org> wrote:
> 
> >
> > The query
> > [[
> > DELETE FROM nodes WHERE wc_id = ?1   AND (local_relpath = ?2 OR
> > (((local_relpath) > (CASE (?2) WHEN '' THEN '' ELSE (?2) || '/' END)) AND
> > ((local_relpath) < CASE (?2) WHEN '' THEN X'' ELSE (?2) || '0' END)))
> > AND op_depth = ?3
> > ]]
> >
> 
> 
> > Is handled by 3.7.17 as:
> > 0|0|0|SEARCH TABLE nodes USING COVERING INDEX sqlite_autoindex_NODES_1
> > (wc_id=? AND local_relpath=? AND op_depth=?) (~1 rows)
> > 0|0|0|SEARCH TABLE nodes USING COVERING INDEX sqlite_autoindex_NODES_1
> > (wc_id=? AND local_relpath>? AND local_relpath<?)
> >
> > Which I read as two separate operations, under the 'OR' optimization
> >
> > But 3.8.0.1 does:
> > 0|0|0|SEARCH TABLE nodes USING COVERING INDEX I_NODES_PARENT
> (wc_id=?)
> >
> > Which in our case is far worse than using the primary key on the normal
> > table as wc_id is constant and local_relpath +- our primary key.
> >
> But the query planner has no way of knowing that wc_id is always the
> same value, unless you run ANALYZE.  Can you do that, please:  Run ANALYZE,
> then send in the content of the resulting "sqlite_stat1" table.

The analyze on the very small database (which I used for the comparison
between 3.7 and 3.8) is:
[[
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_ROOT','1 1');
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_UUID','1 1');
INSERT INTO "sqlite_stat1"
VALUES('REPOSITORY','sqlite_autoindex_REPOSITORY_1','1 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','I_LOCAL_ABSPATH','1 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','sqlite_autoindex_WCROOT_1','1
1');
INSERT INTO "sqlite_stat1" VALUES('PRISTINE','I_PRISTINE_MD5','45 1');
INSERT INTO "sqlite_stat1"
VALUES('PRISTINE','sqlite_autoindex_PRISTINE_1','45 1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_PARENT','7 7 4 1 1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_MOVED','7 7 7 7');
INSERT INTO "sqlite_stat1" VALUES('NODES','sqlite_autoindex_NODES_1','7 7 1
1');
COMMIT;
]]

A larger Subversion working copy gets
[[
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_ROOT','2 1');
INSERT INTO "sqlite_stat1" VALUES('REPOSITORY','I_UUID','2 2');
INSERT INTO "sqlite_stat1"
VALUES('REPOSITORY','sqlite_autoindex_REPOSITORY_1','2 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','I_LOCAL_ABSPATH','1 1');
INSERT INTO "sqlite_stat1" VALUES('WCROOT','sqlite_autoindex_WCROOT_1','1
1');
INSERT INTO "sqlite_stat1" VALUES('PRISTINE','I_PRISTINE_MD5','9451 1');
INSERT INTO "sqlite_stat1"
VALUES('PRISTINE','sqlite_autoindex_PRISTINE_1','9451 1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_PARENT','18420 18420 9 1
1');
INSERT INTO "sqlite_stat1" VALUES('NODES','I_NODES_MOVED','18420 18420 18420
18420');
INSERT INTO "sqlite_stat1" VALUES('NODES','sqlite_autoindex_NODES_1','18420
18420 1 1');
COMMIT;
]]

But in general we don't run ANALYZE in the working copies. We would most
likely have to insert some initial analyze data for our use cases, as our
users always start from an empty database and there is no good moment where
we can run ANALYZE.

As I guessed after your question, 3.8.0 optimizes this correctly after
analyzing; 3.7.17 also does so without.

Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows-Specific 2-c-files Amalgamation?

2013-03-12 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Dominique Devienne
> Sent: maandag 11 maart 2013 17:17
> To: General Discussion of SQLite Database
> Subject: [sqlite] Windows-Specific 2-c-files Amalgamation?
> 
> I understand the "deployment" ease and performance advantage of the
> amalgamation. I like it.
> 
> Except I can't debug it... 'Cause I'm primarily on Windows, which has a
> well-known limitation in the way it stores Debug Information (uses 16-bit
> integer for the line number, so limited to ~64K lines, whereas the
> amalgamation is much larger).
> 
> Could there be another amalgamation, perhaps Windows specific, that
> spreads
> the sole .c file into 2 or 3?
> 
> I'm sure it's work, and perhaps even requires another "private" header to
> share non-public data structures between the two or more C files, but given
> SQLite's clean architecture and layers (Pager, VDBE, etc...), I'm assuming
> it's at least "possible", no? Might even not lose much of that little
> performance advantage the amalgamation brings???
> 
> I for one would be able to dive deeper into the code, without resorting to
> the not-recommended full build (which is more Linux than Windows friendly)
> 
> I don't have much hope for this (any more than MS fixing its
> tool-chain...), but I thought I might as well ask ;). Perhaps someone knows
> a trick or two to work-around that MS debugging issue?

Since Visual C++ 2005 the line-number limit in PDB files has been increased to
24 bits. You might still get a warning for compatibility with older tools, but
the 'well-known limitation' was resolved over 8 years ago, 4 major compiler
versions ago.

Bert 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [sqlite-dev] SQLite 3.7.16 beta

2013-03-06 Thread Bert Huijben
Hi,

 

I compile sqlite with SQLITE_OMIT_UTF16 (and a few other omit flags) on
Windows and when testing with 3.7.16 I get:

 

sqlite3.lib(sqlite3.obj) : error LNK2019: unresolved external symbol
_sqlite3_result_text16le referenced in function _charFunc 

fatal error LNK1120: 1 unresolved externals

 

By just looking at the error message this appears to be related to the
introduction of the new char() function.

 

Bert

 

From: sqlite-dev-boun...@sqlite.org [mailto:sqlite-dev-boun...@sqlite.org]
On Behalf Of Richard Hipp
Sent: zaterdag 2 maart 2013 17:30
To: General Discussion of SQLite Database; sqlite-dev
Subject: [sqlite-dev] SQLite 3.7.16 beta

 

The anticipated release date for SQLite 3.7.16 is currently 2013-03-12.
Additional information about the forthcoming 3.7.16 release:

   (1)  http://www.sqlite.org/draft/releaselog/3_7_16.html
   (2)  http://www.sqlite.org/draft/download.html
   (3)  http://www.sqlite.org/checklists/3071600

See a summary of changes at (1).  Download a snapshot of the latest code (in
the form of an amalgamation "sqlite3.c" source file) from (2).  The status
board at (3) will show the current state of release testing.  When the
status board goes all green, we will cut the release.  The links above, and
indeed the entire http://www.sqlite.org/draft/ sub-website, will be updated
periodically between now and the official release so check back frequently.

Please report any problems encountered.

-- 
D. Richard Hipp
d...@sqlite.org 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows (slow) vs. iOS/OSX (fast) Performance

2012-12-01 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of David de Regt
> Sent: vrijdag 30 november 2012 18:41
> To: General Discussion of SQLite Database
> Subject: [sqlite] Windows (slow) vs. iOS/OSX (fast) Performance
> 
> Hey all.  I've been struggling with a basic perf issue running the same
> code on Windows vs. iOS and OSX.
> 
> Basic query set:
> CREATE TABLE test (col1 int, col2 text);
> [loop 500 times]: INSERT INTO TEST (col1,col2) VALUES (4,'test4')
> 
> I'm coding this using the default C amalgamation release and using
> prepare/etc. on all platforms in the exact same way (same very simple DB-
> access class I made).  I realize that using a transaction around this would
> vastly improve perf, but given the atomic nature of the app that this test
> is simulating, it won't work to wrap it into transactions, so my goal is to
> improve the atomic performance.  These are all being run on the same
> Macbook Pro, with an SSD, running Windows via boot camp, OSX natively, and
> iOS via the iOS simulator:
> 
> With defaults (pragma sync = on, default journal_mode):
> Windows: 2500ms
> iOS: 300ms
> OSX: 280ms

On Windows Sqlite by default uses the real/full fsyncs to the hardware as
provided by the OS, while on at least OS/X, by default it doesn't.
See http://sqlite.org/pragma.html#pragma_fullfsync
(I don't know about iOS, but I wouldn't be surprised if it doesn't have the
same level of syncs)
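A sketch of how to level that part of the playing field when benchmarking
(error handling omitted; PRAGMA fullfsync only has an effect on platforms
that support F_FULLFSYNC):

[[
/* Sketch: when comparing durability costs across platforms, turn full
** fsync on so OS X also pays for a real flush to the storage medium. */
#include <sqlite3.h>

static void enable_full_fsync(sqlite3 *db){
  /* Defaults to OFF; a no-op on platforms without F_FULLFSYNC. */
  sqlite3_exec(db, "PRAGMA fullfsync = ON;", 0, 0, 0);
}
]]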

But that doesn't answer your follow-up question of why the performance is so
different when the syncs are off.
> 
> With pragma sync = off, journal_mode = memory:
> Windows: 62ms
> iOS: 25ms
> OSX: 25ms
> 
> Turning off sync doesn't make me feel warm and fuzzy about our lost-power
> scenario, so with sync on, it seems like something must be fishy for it to
> be

Bert

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite converts all unicode characters into ANSI

2012-11-20 Thread Bert Huijben
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of ZikO
> Sent: maandag 19 november 2012 02:57
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] SQLite converts all unicode characters into ANSI
> 
> Hello,
> thanks for your answers.
> 
> I downloaded both precompiled binaries and shell for windows from
> www.sqlite.org.
> 
> The script looks like this:
> CREATE TABLE IF NOT EXISTS imiona (
>   id INTEGER PRIMARY KEY,
>   data TEXT,
>   imie1 TEXT,
>   imie2 TEXT,
>   imie3 TEXT);

> 
> D:\Wydarzenia\Events>sqlite3 imieniny.db < test.sql
> Error: near line 1: near "´╗┐CREATE": syntax error
> 
> I then used the encoding UTF-8 without BOM and the command was accepted
> but
> unfortunately I get the strange result:
> sqlite> select * from imiona;
> 1   01/01/2012  MasławaMieczysława  Mieszka
> 2   16/01/2012  Marcelego   Walerii   Włodzimie
> 3   17/09/2012  Franciszka  Lamberty  Narcyza
> 4   01/01/2012  MasławaMieczysława  Mieszka
> 5   16/01/2012  Marcelego   Walerii   Włodzimie
> 6   17/09/2012  Franciszka  Lamberty  Narcyza
> 
> Now, I don't know how to test if the text is OK because SQLite2009Pro
> displays everything converted to ANSI even though I set it to UTF-8 :/
> Perhaps this program is not very good then. What would you recommend,
> instead?
> 
> once again thanks for your effort towards this. I am really really surprised
> by this behaviour.

Could you try .read in sqlite3, instead of using a pipe? With the pipe the
shell is responsible for reading your file and forwarding it to sqlite3, while
using .read makes sqlite3 read the file itself, which should be fully
transparent to whatever encoding you use.

Bert 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Visual Studio 2008 Express and sqlite3.c

2012-03-30 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Marco Bambini
> Sent: vrijdag 30 maart 2012 9:52
> To: General Discussion of SQLite Database
> Subject: [sqlite] Visual Studio 2008 Express and sqlite3.c
> 
> I am trying to compile the latest sqlite 3.7.11 amalgamation C file within
a
> Visual Studio 2008 Express C++ project.
> sqlite3.c is correctly set as to be compiled as C code but I am unable to
find
> out a way to fix some compilation errors:
> 
> Compiling...
> sqlite3.c
> ..\Sources\sqlite3.c(107831) : error C2143: syntax error : missing ':'
before '!'


Line 107831 is the 'int not' line in

/*
** An instance of this structure is used to store the LIKE,
** GLOB, NOT LIKE, and NOT GLOB operators.
*/
struct LikeOp {
  Token eOperator;  /* "like" or "glob" or "regexp" */
  int not; /* True if the NOT keyword is present */
};

In my version of sqlite3.c

Can you make sure your file is not somehow corrupted?
(The next error is a reference to this structure).

Maybe you somehow include a file that redefines 'not'?

I can compile sqlite3.c with Visual C++ 2008 Professional (with all SPs)
without problems.

Bert 



[sqlite] Question about database design

2011-02-02 Thread Bert Nelsen
Hello!

I have over 100 columns in a table. Most of the columns will stay empty.
Just an example:

customer_lastname
customer_firstname
customer_street
customer_PhonePrivate (will almost always stay empty)
customer_PhoneCompany
customer_PhoneMobile
customer_PhoneWAP (will almost always stay empty)
customer_Phone1 (will almost always stay empty)
customer_Phone2 (will almost always stay empty)

Because I felt so stupid about these mostly empty columns taking so much
space, I tended to replace all the "phone" columns by a single column named
"customerPhone".
I stored the values into customerPhone like that:

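One common alternative to a wide table of mostly empty phone columns is a
separate phone table with one row per existing number; a sketch (not the
poster's actual scheme, and note that NULL columns are stored very compactly
by SQLite anyway):

  CREATE TABLE customer (
    id        INTEGER PRIMARY KEY,
    lastname  TEXT,
    firstname TEXT,
    street    TEXT
  );
  CREATE TABLE customer_phone (
    customer_id INTEGER REFERENCES customer(id),
    kind        TEXT,      -- 'private', 'company', 'mobile', ...
    number      TEXT
  );
  SELECT c.lastname, p.kind, p.number
    FROM customer c JOIN customer_phone p ON p.customer_id = c.id;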

[sqlite] Storing large numeric values

2010-01-02 Thread Bert Nelsen
I am trying to save values like 19.000.000.000 to my database but I haven't
found the appropriate column type yet. Can anybody help please.
I am using the dhRichClient command object, but even Int64 isn't large
enough.
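For reference, SQLite itself stores INTEGER values as 64-bit signed numbers,
so 19.000.000.000 fits comfortably; the limit described above is more likely
in the wrapper than in the database. A quick check in the sqlite3 shell:

  CREATE TABLE t(v INTEGER);
  INSERT INTO t VALUES (19000000000);
  SELECT v, typeof(v) FROM t;     -- 19000000000|integer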


Re: [sqlite] Mozilla's method

2010-01-02 Thread Bert Nelsen
Thanks Olaf.

On Sat, Jan 2, 2010 at 12:49 AM, Olaf Schmidt <s...@online.de> wrote:

>
> "Artur Reilin" <sql...@yuedream.de> schrieb im
> Newsbeitrag news:op.u5vno6hp1pq...@rear...
>
> > If you are using something like an log system it
> > would be better in this way, but in apps like an
> > shop what wouldn't be so great. (thinking about
> > ebay with the bets and such..)
>
> Of course, but I think I made that already clear,
> that the approach should not to be misunderstood as a
> "general recommendation" - it really should be used only within
> smaller Apps, which don't need e.g. "stacked transactions",
> or "complex transactions which could fail" ... Apps which
> also only work singlethreaded within a single process ...
> ...the timer-based transaction-syncing then only an
> "easier applicable workaround" in environments which
> cannot - (or don't want to) make use of the more efficient
> working async-writer-thread implementation of the SQLite-engine).
>
> Olaf Schmidt
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Regards,
Bert


Re: [sqlite] Mozilla's method

2010-01-01 Thread Bert Nelsen
So SQLite looks at both the database on the disk and in memory?
Wouldn't that be difficult???


Re: [sqlite] Mozilla's method

2010-01-01 Thread Bert Nelsen
Hello Olaf!
Thanks again for the message.
By "corrupted" (the meaning of this word seems to differ depending on the
user's perspective) you mean that the db is really destroyed and cannot be
opened anymore?
Currently I am working on an application that tracks something like a GPS
signal, and if a signal is not saved to the db it's not a big problem. But
db corruption would be.
Another question: when I use transactions and I call ".BeginTrans", insert
new records, and then query the records before calling ".CommitTrans", they
already seem to be saved. Can you tell me why this is so? Does a select
command automatically trigger a ".CommitTrans"?
Wishing you a happy new year!
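For illustration, a sketch of what is going on (plain SQL; the table name is
made up): within the connection that opened the transaction, a SELECT sees
the uncommitted changes, while other connections only see them after COMMIT,
so a SELECT does not trigger a commit.

  BEGIN;
  INSERT INTO track(lat, lon) VALUES (52.37, 4.89);
  SELECT count(*) FROM track;   -- same connection: the new row is counted
  -- a second connection still sees the old data at this point
  COMMIT;                       -- only now does the change become visible to others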

On Fri, Jan 1, 2010 at 9:11 AM, Olaf Schmidt <s...@online.de> wrote:

>
> "Bert Nelsen" <bert.nel...@googlemail.com> schrieb
> im Newsbeitrag
> news:a5ffd530912311004p26a7cc5k1f1bf6f671bef...@mail.gmail.com...
>
> > Your .Sychronous = False property does everything
> > as fast as I want, and I am not afraid of losing some
> > user data (it's not a critical application) but
> > I am very much afraid of having a corrupted db.
> > Can anybody please confirm
> > that there is no chance of getting my db corrupted?
>
> Ah, I see now, where the "confusion" came from.
> The wrappers Synchronous-Property has nothing to do
> with the (relative new) async-writer-feature of SQLite -
> instead it maps to SQLites Synchronous PRAGMA
> (as a "convenience property").
>
> You can set all the Pragmas alternatively also per
> Cnn.Execute "PRAGMA pragma_name ..."
>
> or read out a current Pragma-Value with:
> Cnn.OpenRecordset("PRAGMA pragma_name")(0).Value
>
> Please read about SQLite-Pragmas here:
> http://www.sqlite.org/pragma.html
> ... and what's written there about the Synchronous-Pragma-
> Settings. With the Synchronous-Pragma at 'Off' or '0', you're
> risking DB-corruption.
>
> So, I would not touch the Synchronous-Property in your
> case (leaving it at its default FULL(2)) - instead you should
> wrap larger insert- or update-actions within a transaction -
> that works fast as well.
> Also consider using the binding-support of the wrapper
> (the Command-Objects), to achieve faster (and more
> typesafe) operations in "write direction".
>
> Olaf Schmidt
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Regards,
Bert


Re: [sqlite] Mozilla's method

2009-12-31 Thread Bert Nelsen
Hello Olaf!
Thank you for your reply. I am using your wrapperlib (and I love it) but I
thought it was a more general question and had nothing directly to do with
your wrapperlib. But I am not sure anymore what is yours and what is common
Sqlite.
Your .Sychronous = False property does everything as fast as I want, and I
am not afraid of losing some user data (it's not a critical application) but
I am very much afraid of having a corrupted db. Can anybody please confirm
that there is no chance of getting my db corrupted?

On Thu, Dec 31, 2009 at 6:11 PM, Olaf Schmidt <s...@online.de> wrote:

>
> "Bert Nelsen" <bert.nel...@googlemail.com> schrieb
> im Newsbeitrag
> news:a5ffd530912310853t7024d908tefbf1ef40df47...@mail.gmail.com...
>
> > I would like to make writes to my SQLite db faster.
> In what regard (under which circumstances) is it slow
> currently?
>
> What's your typical recordcount per transaction?
> Do you use transactions at all?
> How many indexes are defined on the "slow tables"?
> Are you using Command-Objects (the sqlite-bind API)?
> In what language is the App-Host written (regarding
> easy "thread-control")?
> Do you work through a wrapper yet, or with the SQLite-lib
> directly?
>
> > I was thinking about the Async method, but I think I
> > remember reading somewhere that it may cause database
> > corruption.
> IMO the async-feature was not risky with regards to data-
> corruption, only with regards to durability - an (uncorrupted)
> DB could contain "lesser data" (not the last "version", which
> your successfully reported transactions suggested earlier) -
> in case of e.g. a powerfailure.
> But your "mozilla-comment-snippet" already mentioned that too.
>
> I remember, that at some point in time you were using my
> COM-wrapper, to work with SQLite. And I was thinking
> about "handing out" the async-functionality over an appropriate
> method, as the feature came up - but then stayed away from
> it, for the sake of higher stability. The COM-wrapper is
> mostly used in conjunction with VB5/6 - and there you have
> not that much control over the teardown-process of an App
> like in other, more "bare to the metal" languages - imagine
> the async SQLite-WriterThread, having "yet some stuff
> in the pipe" - and a closing VB-App, which only "knows"
> the COM-Object - and not the SQLite-lib behind it, which
> "owns" the async thread.
>
> So, just in case you're using the dhRichClient3-wrapper-lib,
> there's currently no plan (and no time), to make that feature
> available in a reliable and stable working way (playing well
> with VB5/6) over the COM-interface.
>
> Olaf Schmidt
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Regards,
Bert


Re: [sqlite] Mozilla's method

2009-12-31 Thread Bert Nelsen
It's here:

https://developer.mozilla.org/en/Storage:Performance

On Thu, Dec 31, 2009 at 6:04 PM, Shawn Wilsher <comrade...@gmail.com> wrote:

> Where did you see this?  It's inaccurate, and if I can, I'll remove it.  If
> you want to use a proper asynchronous statement execution which wraps your
> statement[s] in a transaction, you want to use this:
> https://developer.mozilla.org/En/Storage#Asynchronously
>
> Cheers,
>
> Shawn Wilsher
> Mozilla Developer
>
> On Thu, Dec 31, 2009 at 8:53 AM, Bert Nelsen <bert.nel...@googlemail.com
> >wrote:
>
> > Hello,
> >
> > I would like to make writes to my SQLite db faster.
> >
> > I was thinking about the Async method, but I think I remember reading
> > somewhere that it may cause database corruption.
> >
> > Now I read something on the Mozilla pages, and I don't understand what
> > exactely they are doing.
> >
> > Do they bundle everything in transactions only or do they in fact use the
> > Async method?
> > It is not clear to me by reading through their articles.
> >
> > It would be nice if somebody could clear me up on this issue.
> >
> > Here is the article:
> >
> > Lazy writing
> >
> > Mozilla has relaxed the ACID requirements in order to speed up commits.
> In
> > particular, we have dropped durability. This means that when a commit
> > returns, you are not guaranteed that the commit has gone through. If the
> > power goes out right away, that commit may (or may not) be lost. However,
> > we
> > still support the other (ACI) requirements. This means that the database
> > will not get corrupted. If the power goes out immediately after a commit,
> > the transaction will be like it was rolled back: the database will still
> be
> > in a consistent state.
> >
> > Higher commit performance is achieved by writing to the database from a
> > separate thread (see
> > storage/src/mozStorageAsyncIO.cpp<
> >
> http://mxr.mozilla.org/mozilla-central/source/storage/src/mozStorageAsyncIO.cpp
> > >
> > which
> > is associated with the storage service in
> > storage/src/mozStorageService.cpp<
> >
> http://mxr.mozilla.org/mozilla-central/source/storage/src/mozStorageService.cpp
> > >
> > ).
> > The main database thread does everything exactly as it did before.
> However,
> > we have overridden the file operations and everything comes through the
> > AsnycIO module. This file is based on
> > test_async.c<
> http://www.sqlite.org/cvstrac/rlog?f=sqlite/src/test_async.c
> > >from
> > the sqlite distribution.
> >
> > The AsyncIO module packages writes up in messages and puts them on the
> > write
> > thread's message queue. This write thread waits for messages and
> processes
> > them as fast as it can. This means that writes, locking, and most
> > importantly, disk syncs, only block the AsyncIO thread. Reads are done
> > synchronously, taking into account unwritten data still in the buffer.
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Regards,
Bert


[sqlite] Mozilla's method

2009-12-31 Thread Bert Nelsen
Hello,

I would like to make writes to my SQLite db faster.

I was thinking about the Async method, but I think I remember reading
somewhere that it may cause database corruption.

Now I read something on the Mozilla pages, and I don't understand what
exactely they are doing.

Do they bundle everything in transactions only or do they in fact use the
Async method?
It is not clear to me by reading through their articles.

It would be nice if somebody could clear me up on this issue.

Here is the article:

Lazy writing

Mozilla has relaxed the ACID requirements in order to speed up commits. In
particular, we have dropped durability. This means that when a commit
returns, you are not guaranteed that the commit has gone through. If the
power goes out right away, that commit may (or may not) be lost. However, we
still support the other (ACI) requirements. This means that the database
will not get corrupted. If the power goes out immediately after a commit,
the transaction will be like it was rolled back: the database will still be
in a consistent state.

Higher commit performance is achieved by writing to the database from a
separate thread (see
storage/src/mozStorageAsyncIO.cpp
which
is associated with the storage service in
storage/src/mozStorageService.cpp
).
The main database thread does everything exactly as it did before. However,
we have overridden the file operations and everything comes through the
AsnycIO module. This file is based on
test_async.cfrom
the sqlite distribution.

The AsyncIO module packages writes up in messages and puts them on the write
thread's message queue. This write thread waits for messages and processes
them as fast as it can. This means that writes, locking, and most
importantly, disk syncs, only block the AsyncIO thread. Reads are done
synchronously, taking into account unwritten data still in the buffer.


[sqlite] Index is not being used

2009-12-19 Thread Bert Nelsen
Hello!

I have the problem that one index is not taken into account when I execute a
query:

I have 1 table with 2 columns.
Each column has an index, but EXPLAIN QUERY PLAN tells me that the second
index is not being used.

--
First column:
name: mycolumn1
type: text
collation: nocase
unique: yes

First column index:
name: idx_mycolumn1
collation: nocase
unique: yes
---
Second column:
name: mycolumn2
type: text
collation: -
unique: false

Second column index:
name: idx_mycolumn2
collation: -
unique: false
--

Results for "SELECT * FROM mytable WHERE mycolumn1 LIKE 'a%'":
1: 0
2: TABLE mytable WITH INDEX idx_mycolumn1

Results for "SELECT * FROM mytable WHERE mycolumn2 LIKE 'a%'"
0
TABLE mytable

Can anybody tell me why and how I can change this?

Thank you.

-- 
Regards,
Bert
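The LIKE 'a%' prefix optimization only uses an index whose collation matches
the comparison: with SQLite's default case-insensitive LIKE that means a
NOCASE column or index (which mycolumn1 has), while the BINARY index on
mycolumn2 does not qualify. A sketch of two ways around it:

  -- make LIKE case sensitive, so the existing BINARY index qualifies:
  PRAGMA case_sensitive_like = ON;
  EXPLAIN QUERY PLAN SELECT * FROM mytable WHERE mycolumn2 LIKE 'a%';

  -- or index the column with NOCASE collation, matching idx_mycolumn1:
  CREATE INDEX idx_mycolumn2_nc ON mytable(mycolumn2 COLLATE NOCASE);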


Re: [sqlite] Structured or Object-Oriented?

2006-04-15 Thread Bert Verhees

[EMAIL PROTECTED] schreef:

If you're using Visual C# Express 2005 (or any 2005 product) then you may
as well go for broke and learn the ADO.NET 2.0 framework.  I wrote/maintain a
fairly decent SQLite ADO.NET 2.0 Provider at http://sqlite.phxsoftware.com
and make myself available on the forums quite a bit.

Robert



I think Aaron is working in Mono: how would that work out on Linux for
connecting to
SQLite through DotNet (I presume that ADO.Net is Windows only!);
  

ADO.NET is also available for Mono.

I was suggesting that he convert VB code to C# in VisualStudio;
I'm interested in the solution you propose and will have a go at it after
Easter

regards,

Kevin



  




Re: [sqlite] WAKEUP !!! SOMEBODY IS DESTROYING THE WIKI PAGES

2006-03-30 Thread Bert Verhees

A lot of invisible spam has been added:

http://www.xxx-free-porn.info/index2832.html
- http://www.xxx-free-porn.info/index2833.html
- http://www.xxx-free-porn.info/index2845.html
- http://www.xxx-free-porn.info/index2933.html
- http://www.xxx-free-porn.info/index2975.html
- http://www.xxx-free-porn.info/index2990.html
- http://www.xxx-free-porn.info/index3014.html
- http://www.xxx-free-porn.info/index3026.html
- http://www.xxx-free-porn.info/index3054.html
- http://www.xxx-free-porn.info/index3061.html
- http://www.xxx-free-porn.info/index3148.html
- http://www.xxx-free-porn.info/index319.html
- http://www.xxx-free-porn.info/index3195.html
- http://www.xxx-free-porn.info/index3219.html
- http://www.xxx-free-porn.info/index3267.html

Only visible in Diff of the wiki

Bert




Re: [sqlite] WAKEUP !!! SOMEBODY IS DESTROYING THE WIKI PAGES

2006-03-30 Thread Bert Verhees

Lodewijk Duymaer van Twist wrote:





 

Looks strange: many changes today, within a short time. But I cannot 
judge what was changed; someone should take a look at it.


Bert

21:12 	* 	Wiki page SqliteCompetitors 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteCompetitors> edited by 
anonymous
21:11 	* 	Wiki page SqlitePorts 
<http://www.sqlite.org/cvstrac/wiki?p=SqlitePorts> edited by anonymous
21:11 	* 	Wiki page SqliteReporting 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteReporting> edited by anonymous
21:10 	* 	Wiki page SqliteNetwork 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork> edited by anonymous
21:06 	* 	Wiki page DateAndTimeFunctions 
<http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions> edited by 
anonymous
21:06 	* 	Wiki page ReadLine 
<http://www.sqlite.org/cvstrac/wiki?p=ReadLine> edited by anonymous
20:58 	* 	Wiki page SqliteTclPackageWrapper 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteTclPackageWrapper> edited by 
anonymous
20:57 	* 	Wiki page MultiThreading 
<http://www.sqlite.org/cvstrac/wiki?p=MultiThreading> edited by anonymous
20:56 	* 	Wiki page SqliteUsers 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteUsers> edited by anonymous
20:55 	* 	Wiki page SqliteCompetitors 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteCompetitors> edited by 
anonymous
20:54 	* 	Wiki page SqlitePorts 
<http://www.sqlite.org/cvstrac/wiki?p=SqlitePorts> edited by anonymous
20:52 	* 	Wiki page SqliteReporting 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteReporting> edited by anonymous
20:48 	* 	Wiki page SqliteNetwork 
<http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork> edited by anonymous
20:46 	* 	Wiki page DateAndTimeFunctions 
<http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions> edited by 
anonymous





Re: [sqlite] NHibernate

2006-03-27 Thread Bert Verhees

Robert Simpson wrote:


- Original Message - From: "Bert Verhees" <[EMAIL PROTECTED]>
To: <sqlite-users@sqlite.org>
Sent: Monday, March 27, 2006 9:27 AM
Subject: [sqlite] NHibernate


Hi, I am trying about whole day to connect a Delphi.NET app over 
NHibernate to SQLite.


I cannot get it done.

I have following App.config (in the exe-directory)



I'm afraid you'll have to ask about this on the nHibernate forums.  
Most of the people on this list are concerned with the core sqlite 
engine itself and not the wrappers and O/R mappers built on top of it.


Robert


It is free to ask, as my mother always told me, and on a SQLite users list 
there is a chance that people have had to solve the same problem.
But since I got no reaction, I am afraid you could be right, and I will 
also try elsewhere.


regards
Bert Verhees










[sqlite] NHibernate

2006-03-27 Thread Bert Verhees
Hi, I have been trying for about a whole day to connect a Delphi.NET app 
over NHibernate to SQLite.

I cannot get it done.

I have following App.config (in the exe-directory)



 
type="System.Configuration.NameValueSectionHandler, System, 
Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"

   />
 
 
   value="NHibernate.Driver.SQLiteDriver" />
   value="NHibernate.Connection.DriverConnectionProvider" />
   value="NHibernate.Dialect.SQLiteDialect" />
   

   
  


and this code
 cfg := Configuration.Create;
 cfg.AddAssembly('nhibernate');
--->  factory := cfg.BuildSessionFactory; <--
 session := factory.OpenSession;
session.BeginTransaction;

The error message appears on the errors

---
Project4
---
The hibernate.connection.driver_class must be specified in the 
NHibernate configuration section.

---
OK  
---


I am using the Finisar SQLite ADO-driver, which is used to build 
NHibernate (one can see in the code)


Someone have an idea?

Thanks in advance.




Re: [sqlite] Disable index?

2006-02-19 Thread Bert Verhees
Op zondag 19 februari 2006 13:23, schreef Nemanja Corlija:
> On 2/19/06, SanjayK <[EMAIL PROTECTED]> wrote:
> > I am using a virtual tree control to display columns dynamically when
> > needed from the sqlite database. While it works ok for display purposes,
> > in certain other operations, I need to improve the speed. I found that
> > the new sqlite random access (even with prepared/transaction) approach is
> > about 15 times slower than my earlier design where I was using a direct
> > access file with read, seek, etc on Windows.
> >
> > In spite of this, sqlite has several advantages and I am staying with it.
> > I am looking for speed improvement suggestions. Somewhere in a thread I
> > read "disable indexing." I can't find any reference to how to do it in
> > the docs or in this group. How do I disable indexing? I will also
> > appreciate any other suggestions for speed improvement too.
>
> Disabling index might come in handy in some cases. For example, I was
> able to speedup one query a lot by disabling index for sorting. For
> some reason SQLite 2 was going back to disk to sort by index even
> though it had all data needed in result set already. You can disable
> the index like this:
> SELECT * FROM foo ORDER BY +bar;

I looked a lot at the opcodes from 2.8.x (I forget exactly which version).
It did not use an index for sorting, but sorted the result set instead.

>
> where bar is your indexed column.
>
> If that still doesn't help, you'll have much better chance of getting
> some help here on the list if you post more info. Such as SQLite
> version used, schema of tables involved in query and query it self.
> EXPLAIN's output for that query wouldn't hurt also. If you're using
> some wrapper it would be good to run that query from sqlite shell and
> see how fast that goes.
>
> --
> Nemanja Corlija <[EMAIL PROTECTED]>

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Decimal separator

2006-01-31 Thread Bert Verhees

Will Leshner wrote:



On Jan 31, 2006, at 1:42 PM, Bert Verhees wrote:

It is only the English speaking part of the world using Arabic  
numerals is a '.',




And the Japanese speaking world :)


You are right, and the Spanish-speaking part of South America as well.
But the Portuguese-speaking part of South America does not, and neither do
the Russians. The Chinese I could not check on my Windows.

Maybe it is fifty-fifty.

A pragma taking care of this could help, but I have written my own 
functions that do the translation, so for me it is not an issue.
Like Boguslaw, I use Delphi. I use a function like the one below; I have 
been used to that for many years, with many databases.


DecimalSeparator is only a global variable (in the SysUtils unit); 
changing it costs hardly any performance.


{ DecimalSeparator is declared in SysUtils }
function Float2Str(f: Double): string;
var
  OldDec: Char;
begin
  OldDec := DecimalSeparator;
  DecimalSeparator := '.';      // force a '.' regardless of the locale
  Result := FloatToStr(f);
  DecimalSeparator := OldDec;   // restore the locale setting
end;

bert



--
REALbasic news and tips: http://rbgazette.com
KidzMail & KidzBlog: http://haranbanjo.com








Re: [sqlite] Decimal separator

2006-01-31 Thread Bert Verhees

Carl Jacobs wrote:


All would be fine but look at this :

create table test(
price double,
amount double default 0
);

insert into test(price) values("12,0");

amount now = 0.0
 



The world seems to have settled on using Arabic numerals 0, 1, 2 ... 9. I
think we should think about settling on . as the decimal separator, it would
save a bit of confusion if we all used the same notation.
 

It is only the English-speaking part of the world using Arabic numerals 
that uses a '.', and that is a minority.

The rest uses a ','.
I think we should settle there.

;-)

I think a pragma would be a good thing.


I suspect that "12,0" is being stored as a string. Don't forget that for all
intents and purposes sqlite3 is typeless, so it will store your value in
whatever is the most compact form. So, if you want to, you can store a
picture of yourself in field price!

Regards,
Carl.
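A quick way to see this in the sqlite3 shell (a sketch using the table from
the quoted message): '12,0' is not a well-formed number, so the column's REAL
affinity leaves it as text, while '12.0' is converted to a real value.

  CREATE TABLE test(price double, amount double default 0);
  INSERT INTO test(price) VALUES ('12,0');
  INSERT INTO test(price) VALUES ('12.0');
  SELECT price, typeof(price) FROM test;   -- 12,0|text  and  12.0|real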



 





Re: [sqlite] index question

2005-11-21 Thread Bert Verhees

Sorry, my emailer messed things up, try it again

0|OpenVirtual|1|0|keyinfo(1,BINARY)
1|Goto|0|31|
2|Integer|0|0|
3|OpenRead|0|2|
4|SetNumColumns|0|2|
5|Integer|0|0|
6|OpenRead|2|3|keyinfo(1,BINARY)
7|Integer|1|0|
8|NotNull|-1|11|
9|Pop|1|0|
10|Goto|0|28|
11|MakeRecord|1|0|n
12|MemStore|0|0|
13|MoveGe|2|28|
14|MemLoad|0|0|
15|IdxGE|2|28|+
16|RowKey|2|0|
17|IdxIsNull|1|27|
18|IdxRowid|2|0|
19|MoveGe|0|0|
20|Column|0|0|
21|MakeRecord|-1|0|
22|Distinct|1|25|
23|Pop|2|0|
24|Goto|0|27|
25|IdxInsert|1|0|
26|Callback|1|0|
27|Next|2|14|
28|Close|0|0|
29|Close|2|0|
30|Halt|0|0|
31|Transaction|0|0|
32|VerifyCookie|0|3|
33|Goto|0|2|
34|Noop|0|0|




 
CREATE TABLE temp (Name varchar(255), RxDT DateTime)

CREATE INDEX t on temp(rxdt)
 





Wilfried Mestdagh wrote:


Hi Bert,

 


'select distinct Name ' +
'from Rx ' +
'where RxDT >= ' + DT + ' ' +
'order by Name'
 



One thing is not yet clear to me. That is the 'distinct'. To have this
as fast as possible I have to make also a separate index on "RxDt,
Name". Is that correct ?  (or the way around: "Name, RxDt" ?)

---
Rgds, Wilfried
http://www.mestdagh.biz



 





Re: [sqlite] index question

2005-11-20 Thread Bert Verhees

Wilfried Mestdagh wrote:


Hi,

I'm a little confused how to create indexes. (beginners question, but I
could not find or understeand clearly in FAQ). I have on a table
following 3 selects:

'select distinct Name ' +
 'from Rx ' +
 'where RxDT >= ' + DT + ' ' +
 'order by Name'

'select * ' +
 'from Rx ' +
 'where RxDt >= ' + DT + ' ' +
 'order by ID, RxDT desc'

'select * ' +
 'from Rx ' +
 'where RxDt >= ' + DT + ' and ReportType = ' + RT + ' ' +
 'order by ID, RxDT desc'

Can you please teach me which indexes exactly I have to make? RxDT is
everywhere, but I'm not sure whether I have to make a separate index for it
or just 3 separate indexes, as I would do with a common TDataSet, for
example. Do I have to make a separate index for the DESC?

---
Mvg, Wilfried
http://www.mestdagh.biz



 

Because you are comparing on RxDT, that should be indexed for sure; in the 
third query you are also comparing ReportType, so you could consider 
indexing that too.

If the result set from "RxDt >= ..." is small, you will not gain much from an 
index on ReportType.

The ORDER BYs, in my opinion, do not need an index, because it is only 
the result set that is sorted.
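A sketch of the indexes this suggests (table and column names taken from the
quoted queries; which ones pay off depends on how selective the date filter
is):

  CREATE INDEX idx_rx_rxdt ON Rx(RxDT);                  -- helps all three queries
  CREATE INDEX idx_rx_type_rxdt ON Rx(ReportType, RxDT); -- equality column first,
                                                         -- then the range, for query 3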


Bert


Re: [sqlite] Dotnet C# support

2005-10-28 Thread Bert Verhees
I agree: avoid P/Invoke if you can. There is a performance penalty, and you
may lose platform independence (think of Mono).

Bert

> I think you are looking for this http://adodotnetsqlite.sourceforge.net/
>
> I wrote my own driver in C++ (boy, that was a lot of work) then discovered
> the above.
>
> -
> ed
>
> --- Wilfried Mestdagh <[EMAIL PROTECTED]> wrote:
>
>> Hi Darren,
>>
>> > Is there a version which will work for dotnot?
>>
>> If not then you can use every win32 dll in C# using P/Invoke
>>
>> ---
>> Rgds, Wilfried
>> http://www.mestdagh.biz
>>
>>
>
>
>
>
>
> __
> Yahoo! Mail - PC Magazine Editors' Choice 2005
> http://mail.yahoo.com
>




Re: [sqlite] Quoted identifiers

2005-10-18 Thread Bert Verhees
> delimited identifiers.  Alternate output
> behaviour can be accomplished by a connection or
> statement specific pragma, or a wrapper.

IMHO a very good Delphi-wrapper
http://www.aducom.com/sqlite/

>
> -- Darren Duncan

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Speed Test Done !

2005-10-06 Thread Bert Verhees
Panorama does not seem to support SQL (I did not find the term SQL on its 
feature page, http://www.provue.com/panorama5.html), it requires a paid 
runtime version for distribution, and it is restricted to Mac and Windows.
It also seems to need a lot of RAM, because it appears to mirror its database 
into RAM; that is how it gets its speed. A disadvantage is that, for 
transactions to be safe, the data still has to be written to disk in between, 
and that should be taken into account in benchmarks.

see http://www.provue.com/

Bert

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]
> rg] On Behalf Of Richard
> Sent: 06 October 2005 04:57
> To: SQLite Users Mailing List
> Subject: [sqlite] Speed Test Done !
> Importance: High
>
> Well, finally import 9,337,681 records into sqlite3 test2.db
> and ran the test.
>
>
> import sqtest4.txt : 2 min 28 seconds
> select A, '32.0833' From T ; 9 min 20 seconds
> --
>
>
> I use another database for MacOS X
> also works on Windows, Call Panorama
>
> Did another test, comparing database...
>
> import sqtest4.txt : 36 seconds
> select from Field A contains 32.0833 / 55 seconds found 4322 records
> out of 9,337,681
>
> Still have found no sql program yet, that can beat Panorama in speed.
>
> Regards-
> Richard Nagle
> CMS

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] ANN: Sqlite3Explorer version 2.0 released

2005-10-03 Thread Bert Verhees
Mike, a very good, stable and handy tool. I especially enjoy the query 
builder; it saves me a lot of time.

Thank you very much, 

Regards
Bert Verhees.

Op maandag 3 oktober 2005 18:52, schreef Cariotoglou Mike:
> this is a major release, with a lot of changes. please see the readme at
> www.singular.gr/sqlite.
> Pls read the whole page carefully, as support for datatypes is now a lot
> more powerful, but slightly different than
> the previous versions.
> Report users : you will need to download an extra dll, it is bookmarked
> in the above link.
>
> you can see the changes at a glance in
> www.singular.gr/sqlite/changes.htm

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


[sqlite] Richard Hipp Awarded Google-O'Reilly Open Source Award at OSCON 2005

2005-08-05 Thread Bert Verhees
SQLite is a very great engine.

Thanks for the good work, well earned this award, congratulations.

Good luck

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Multi-threading.

2005-07-29 Thread Bert Verhees
Some class libraries make multithreaded programming really easy, a piece 
of cake, even when there are points where you have to synchronize back to a 
single thread (e.g. the user interface). You do not need to be a genius to 
write multithreaded code.


kind regards
Bert Verhees



Re: [sqlite] searching for proper date and time stamps

2005-06-23 Thread Bert Verhees

[EMAIL PROTECTED] wrote:


Hello,

I'm using the TCL binding of sqlite3
One of the columns in my table has a date and time stamp, something like
this:
 



Datetimes stored as text are compared as strings; maybe that explains it.
Bert
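For reference, a sketch of the usual fix: store timestamps year-first
(ISO-8601), so that plain string comparison matches chronological order; the
substr() rewrite shows how an existing MM/DD/YYYY column can be compared on
the fly.

  -- stored as 'YYYY-MM-DD HH:MM:SS', string comparison is chronological:
  SELECT c1 FROM t1 WHERE c1 > '2005-06-20 01:00:00' ORDER BY c1;

  -- rewriting an MM/DD/YYYY value on the fly (positions per the quoted format):
  SELECT c1 FROM t1
   WHERE substr(c1,7,4) || '-' || substr(c1,1,2) || '-' || substr(c1,4,2)
         || ' ' || substr(c1,12,8) > '2005-06-20 01:00:00';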


07/20/2004 01:35:40
06/20/2005 01:37:01
06/20/2005 12:10:07
06/20/2005 12:17:08
06/20/2005 01:35:00

If I want to get all dates that are later than "06/20/2005 01:00:00"  I
might try:

"select c1 from t1 where c1 > '06/20/2005 01:35:00'"

But that also returns All the above including "07/20/2004 01:35:40" because
(I suppose) comparison starts from the left and it sees 07 which is > than
06.  Any ideas or hints in general where I can get the dates that I'm
looking for?

Thanks






 





Re: [sqlite] Sqlite low level and Speed.

2005-06-20 Thread Bert Verhees
use transactions, speeds up a lot
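For illustration, a sketch of the same job in plain SQL (using the table from
the quoted message): a UNIQUE index plus INSERT OR IGNORE avoids the
SELECT-then-INSERT round trip, and the single surrounding transaction is what
makes it fast.

  CREATE TABLE testtable (val TEXT);
  CREATE UNIQUE INDEX index_val ON testtable(val);

  BEGIN;
  INSERT OR IGNORE INTO testtable VALUES ('String1');
  INSERT OR IGNORE INTO testtable VALUES ('String2');
  INSERT OR IGNORE INTO testtable VALUES ('String1');   -- duplicate, silently skipped
  -- ... one INSERT OR IGNORE per input string ...
  COMMIT;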

Op maandag 20 juni 2005 11:35, schreef Yuriy:
> Hello sqlite-users,
>
> Sqlite low level and Speed.
>
> Sorry for my bad English.
>
> I am trying to use SQLite for FAST grouping of strings (deleting
> duplicates).
>
> I have input array of strings
>
> String1
> String2
> String3
> String1
> String2
> ………
> StringN
>
>
> I need to delete duplicates.
>
> Output database
> String1
> String2
> String3
> ……..
> StringN
>
> 1. Example of the decision
>
> CREATE TABLE testtable (val text)
> CREATE INDEX index_val ON [testtable]([val]);
> PRAGMA synchronous = OFF;
>
> for i:=1 to 1000 do
> begin
> select * from testable where  val=StringN
> if val NOT Exist  insert into testtable
> end
>
> Very Slow.
>
> 2. Example of the decision
>
>
> CREATE TABLE testtable (val text,hashval integer)
> CREATE INDEX index_hashval ON [testtable]([ hashval]);
> PRAGMA synchronous = OFF;
>
> for i:=1 to 1000 do
> begin
> select * from testable where  hashval=hash(StringN)and(val= StringN)
> if val NOT Exist  insert into testtable
> end
>
> Very Slow.
>
>
> 3. Example of the decision
>
> I find good example for SQLite Low level functions SQlite VS BDB.
>
> http://rganesan.blogspot.com/
>
> But this example use such functions as
>
> sqlite3BtreeOpen
> sqlite3BtreeInsert
> sqlite3BtreeCursor
>
> I Use Windows and Delphi. In the SQlite3.dll no import this functions.
>
> Please help me. Maybe I need to recompile SQLite; I have Visual C++ and
> C++Builder. If I need to recompile, how do I do that on the Windows platform?
>
> Maybe there is another solution to this problem? I need more speed for
> grouping big arrays of strings.
>
> Thank you.

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Trouble with column names

2005-04-25 Thread Bert Verhees
Op maandag 25 april 2005 14:12, schreef George Ionescu:
> Hello Dr. Hipp,
> Hello dear sqlite users,
>
> while working with sqlite v3.2.1, I've noticed another annoyance in the way
> sqlite returns column names (besides the bug with #1141, duplicated as
> #1218).
>
> Using sqlite command line, enter the following:
>
> sqlite3 test.db
> create table test(Field1 text(20));
> insert into test values ('a value');
> select field1 from test;

did you try

select field1 as field1 from test;

regards
Bert Verhees
>
> Field1
> a value
>
> notice the uppercase F from Field1, although field1 is requested.
> So, it seems that sqlite returns column names as defined in schema, not as
> requested by the user. This may not seem a big problem, but it becomes one
> when working with columns and trying to refer them by name and not wanting
> to do a case-insensitive comparison (I'm using a std::map to store
> columns).
>
> Any ideas? Should I write a ticket?
>
> If this is so hard to resolve, could someone point out the routine in which
> sqlite computes column names? I could try to solve it myself and post a
> patch.
>
> Thanks.

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] OnLine CD - what a method I choose

2005-04-22 Thread Bert Verhees
Op vr, 22-04-2005 te 09:26 +0200, schreef [EMAIL PROTECTED]:
> Dear Bert !
> 
> >This group is not about firebird,
> 
> Ok, I know it.
> 
> 
> But I want info from SQLite.
> 
> I wrote these fbembed question to see, I want to compare the possible 
> solutions.
> I don't know enough about the SQLite, so I'm not sure that is true, or 
> not true, what I say about it.
> 
> If you think that SQLite have a simple way to create the apps, please 
> write me.
> 
> The DataBase and app is must be placed in CD, and I not want write to DB.
> Only search datas, with eng/hun order, and with like "%n%" (eng/hun).
> SQLite is good for it ?

I don't know about the language issue; someone else must answer that.
For the rest, SQLite is a good choice: good connectivity, many wrappers
in many programming languages and, after some experimenting, easy to work
with.
It can run from CD, as long as you do not start transactions (not even
implicit ones).

Have fun with it.

I must say, the SQL feature set of SQLite is a little smaller than
Firebird's. Mostly this is no problem at all. 

It is easy to switch from SQLite to Firebird and back.
You can write a datapump in half an hour.

I even once wrote an generic datapump from sqlite to firebird and back.

http://www.rosa.nl/Bold2/

It does not work very well and it is a bit old again (summer last year; time
flies in ICT), but you can look at the source code to see how easy it is.

So you can try both, and decide later which one is better for you,
Firebird or SQLite

Bert

> 
> Thanx:
>  ft
> 
> Bert Verhees wrote:
> 
> >Op vr, 22-04-2005 te 08:56 +0200, schreef [EMAIL PROTECTED]:
> >  
> >
> >>Hi !
> >>
> >>In the years of 2000 I wrote 3 programs.
> >>These apps are placed in CD, but in this time I don't have good solution 
> >>for online databases.
> >>
> >>Then I placed an Installer in CD, and this Setup prg. is install 
> >>InterBase 6.0 in the computer with my software.
> >>
> >>That is very wrong solution (I think it NOW), but in the year 2000 I 
> >>need fast development...
> >>
> >>The problems are many with this:
> >>- The InterBase installer is english, and the program is multilang
> >>- The IB server may be exists in the dest. computer, then it need sysdba 
> >>password setting.
> >>- If FB or other IB same systems are installed in computer, then the 
> >>program have been crashed on checking.
> >>
> >>So.
> >>
> >>I need to re-develop these apps, and make new CD-s.
> >>But I have a problem, because I don't know, what a method I choose for it !
> >>
> >>The older databases stored in IB6.0 format.
> >>
> >>1. FB (FireBird) Embed 1.5.2:
> >>That is very good if working, and not buggy.
> >>But many questions I have about it.
> >>A.) it is working if I place in CD ? (Online database needed !)
> >>B.) it is working with older db format ?
> >>C.) it have enough stability ?
> >>D.) it is enough to place FBEmbed.dll as gds32.dll, or I need another 
> >>files ?
> >>E.) it is fully supports charset, and ordering ?
> >>F.) I can protect my fdb file with another password technique ?
> >>G.) it is enough speedy ?
> >>H.) can I use it readonly mode ?
> >>I.) it is supports an FB compliant interface to access db with IBX/IBO ?
> >>
> >>
> >
> >This group is not about firebird, but as you say, it is feature-rich,
> >you can use it embedded, and the only file you need is indeed gds32.dll
> >It is said to be slower than SQLite, but in many situations you will not
> >notice. It depends on what your app does, you only do reads, no inserts
> >or updates
> >I do not know about the language issue and firebird.
> >
> >regards
> >bert verhees
> >
> >  
> >
> >>2. XML ?
> >>XML is very flexible, and because it is text based, then everytime 
> >>readable. MS XML Interface is good for it, but not support searches - so 
> >>I need to filter every elements.
> >>Another problem is charset. I need to use win1250 (hun), and english 
> >>charset. How to this program works with another langs/settings ?
> >>Sort problems.
> >>
> >>3. SQLite
> >>SQLite is developed for these types of applications. 1 user, local 
> >>datasets, quick access, etc.
> >>But many problems with it:
> >>- it have only english "order by" lang support
> >>- it is not suppports functions like upper/lower (with hun chars)
> >>So I need to search every records, and I need to compare them with my 
> >>function, and I need to repack the subqueries' result.
> >>Many developing, and I don't know, how to works it in a system with 
> >>another language...
> >>
> >>Please help me. I  want to choose SQLite if possible a 'simpler' 
> >>solution than I write every filters by hand.
> >>
> >>Thanx for it:
> >>  ft
> >>
> >>
> >>
> >>
> >>
> >>
> >
> >
> >  
> >
> 



Re: [sqlite] OnLine CD - what a method I choose

2005-04-22 Thread Bert Verhees
Op vr, 22-04-2005 te 08:56 +0200, schreef [EMAIL PROTECTED]:
> Hi !
> 
> In the years of 2000 I wrote 3 programs.
> These apps are placed in CD, but in this time I don't have good solution 
> for online databases.
> 
> Then I placed an Installer in CD, and this Setup prg. is install 
> InterBase 6.0 in the computer with my software.
> 
> That is very wrong solution (I think it NOW), but in the year 2000 I 
> need fast development...
> 
> The problems are many with this:
> - The InterBase installer is english, and the program is multilang
> - The IB server may be exists in the dest. computer, then it need sysdba 
> password setting.
> - If FB or other IB same systems are installed in computer, then the 
> program have been crashed on checking.
> 
> So.
> 
> I need to re-develop these apps, and make new CD-s.
> But I have a problem, because I don't know, what a method I choose for it !
> 
> The older databases stored in IB6.0 format.
> 
> 1. FB (FireBird) Embed 1.5.2:
> That is very good if working, and not buggy.
> But many questions I have about it.
> A.) it is working if I place in CD ? (Online database needed !)
> B.) it is working with older db format ?
> C.) it have enough stability ?
> D.) it is enough to place FBEmbed.dll as gds32.dll, or I need another 
> files ?
> E.) it is fully supports charset, and ordering ?
> F.) I can protect my fdb file with another password technique ?
> G.) it is enough speedy ?
> H.) can I use it readonly mode ?
> I.) it is supports an FB compliant interface to access db with IBX/IBO ?

This group is not about Firebird, but as you say, it is feature-rich,
you can use it embedded, and the only file you need is indeed gds32.dll.
It is said to be slower than SQLite, but in many situations you will not
notice; it depends on what your app does, and you only do reads, no inserts
or updates.
I do not know about the language issue and Firebird.

regards
bert verhees

> 
> 2. XML ?
> XML is very flexible, and because it is text based, then everytime 
> readable. MS XML Interface is good for it, but not support searches - so 
> I need to filter every elements.
> Another problem is charset. I need to use win1250 (hun), and english 
> charset. How to this program works with another langs/settings ?
> Sort problems.
> 
> 3. SQLite
> SQLite is developed for these types of applications. 1 user, local 
> datasets, quick access, etc.
> But many problems with it:
> - it have only english "order by" lang support
> - it is not suppports functions like upper/lower (with hun chars)
> So I need to search every records, and I need to compare them with my 
> function, and I need to repack the subqueries' result.
> Many developing, and I don't know, how to works it in a system with 
> another language...
> 
> Please help me. I  want to choose SQLite if possible a 'simpler' 
> solution than I write every filters by hand.
> 
> Thanx for it:
>   ft
> 
> 
> 
> 



Re: [sqlite] ANN: Sqlite3Explorer

2005-04-04 Thread Bert Verhees
Serge Liber wrote:
Hi,
When starting sqlite3Explorer an error message appears:
"The procedure entry point GdiGradientFill could not be located in the dll
GDI32.dll"
How can I proceed?
I use NT4+sp6
 

This is a nasty one; I had it too. It has to do with an API that is not 
supported consistently across versions of this DLL (someone told me).
I searched Google for an answer, and someone advised me to overwrite the 
DLL with another one (which is not easy, because you have to restart the 
system and use the Windows API to overwrite it at startup). That solved the 
problem, but I ran into another one, and some other applications stopped 
working.
It took me days to repair it.

So be careful: if you try another version of that DLL, put it in the 
same directory as the application, and be sure that the dot "." is 
found earlier in the path than the Windows directory. This way you don't 
break Windows, and you may find a workaround.

regards
Bert Verhees
Serge Liber
New version is out. Since the problem with size limits was not lifted
from contrib, I created a home site for the program. You will now be
able to get latest version from
http://www.singular.gr/sqlite
Latest is 1.1 , 24/3/2005 (ignore the typo in the date)
Regards,mike

 




Re: [sqlite] Quoestion on Order By ... ?

2005-04-01 Thread Bert Verhees
Jay wrote:
OK, there is no requirement, but there is a wish to return a stable
sorting outcome.
   

Just add an 'order by' clause.
 

Please note that the SQL standard does not prohibid stable sorting
outcome.
   

It does prohibit it. It states the output is not sorted unless
you ask for it to be.
 

May I ask, are you a lawyer?
It does not say that the result set must be randomized when two sort keys 
are the same.

 

Please also note that by adding the rowid an unstable sort-algorithm
can produce a stable sort-outcome.
   

Add an 'order by' clause and a rowid to your select.
 

Here you have a good point: you can always do that, and it saves processor 
cycles by not adding the rowid when it was not asked for.
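For illustration, a sketch of the explicit tie-breaker (table and column
names from later in this thread): adding the rowid to the ORDER BY makes the
ordering fully deterministic without relying on unspecified behaviour.

  SELECT * FROM mytbl01
   WHERE XIVONO = 'X001'
   ORDER BY XIVONO, rowid;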

regards
Bert Verhees


Re: [sqlite] Quoestion on Order By ... ?

2005-03-31 Thread Bert Verhees
Eric Bohlman wrote:
Shum [Ming Yik] wrote:
Actually Sqlite same as other SQL in return order such as sample (1)
( it seems  follow the insert order when no specific order by ... )
But Sqlite break the rule (may be it is not a rule), when part of the 
whole
Order by  string passing into the select statement ... as  sample (2)

Why not both in a same way ?
I meams:
1.) both in (descending order )
or
2.) both in ( ascending order)

I suggest you get hold of an algorithms textbook and read the section 
on sorting, paying particular attention to the distinction between 
"stable" and "unstable" sorting algorithms.  You seem to be 
complaining that SQLite offers an unstable sort, but there's nothing 
in the SQL standards to require a stable sort.

As others have said, you're skating on thin ice if you make *any* 
assumptions about the ordering of rows coming out of a SELECT other 
than those explicitly stated in an ORDER BY clause.  There's no reason 
any implementation should even be required to return rows in the same 
order on two successive identical SELECTs, and I can easily think of 
implementations that wouldn't do so (e.g. due to caching).

OK, there is no requirement, but there is a wish to return a stable sorting 
outcome.
Please note that the SQL standard does not prohibit a stable sorting outcome.
Please also note that by adding the rowid, an unstable sort algorithm can 
produce a stable sort outcome.
regards
Bert Verhees




Re: [sqlite] Quoestion on Order By ... ?

2005-03-31 Thread Bert Verhees
Shum [Ming Yik] wrote:

> Hi Scott Baker,
>
> In other SQL ... mySQL, PostgreSQL, MSSQL, SQLAnywhere ...
> Both statement return same as (1)
>
> Actually (1) is the inserted order
>
> If SQLite show as other SQL ... It is much convenience ...!

If the sort algorithm always ended the constructed sort keys with the rowid,
then rows whose constructed sort keys are otherwise the same would come out
in their table order.

Bert

>
> Shum
> www.mignyik.com
>
>
>
> - Original Message - From: "Scott Baker" <[EMAIL PROTECTED]>
> To: <sqlite-users@sqlite.org>
> Sent: Friday, April 01, 2005 3:13 AM
> Subject: Re: [sqlite] Quoestion on Order By ... ?
>
>
>> Not necessarily. All you're telling it to do is sort on the first
>> column, anything beyond that is just random luck. You could easily
>> sort it on both columns.
>>
>> ORDER BY XIVONO, XICODE;
>>
>> Shum [Ming Yik] wrote:
>>
>>> Hi All,
>>>
>>> I have problem with Order by ...:
>>>
>>> There is a table as follow: mytbl01
>>>
>>> XIVONO XICODE XCNT
>>> X001 001 1
>>> X001 002 2
>>> X001 003 3
>>> X001 004 4
>>>
>>>
>>> 1.) SELECT * from mytbl01 WHERE XIVONO='X001'; it return as expected
>>> ! ... it is fine !
>>> X001 001 1
>>> X001 002 2
>>> X001 003 3
>>> X001 004 4
>>>
>>> 2.) However SELECT * from mytbl01 WHERE XIVONO='X001' ORDER BY
>>> XIVONO; it return as follow:
>>> X001 004 4
>>> X001 003 3
>>> X001 002 2
>>> X001 001 1
>>>
>>>
>>> I expect that the result of the Selectd statement (1) and (2) should
>>> be the same (even in order)
>>> But it does not 
>>>
>>> Any idea ?
>>>
>>> Shum
>>> www.mingyik.com
>>>
>>>
>>>
>>>
>>
>> -- 
>> Scott Baker
>> Canby Telephone - Network Administrator - RHCE
>> Ph: 503.266.8253 
>
>
>
>



Re: [sqlite] Proposal: limit the number of columns in a table to 2000.

2005-03-17 Thread Bert Verhees
I cannot imagine ever needing more than 2000 columns in a table; if I did, 
I could always create a parallel table.

 

As currently implemented, there is no fixed limit to the number of columns
you can put in a table in SQLite.  If the CREATE TABLE statement will fit in
memory, then SQLite will accept it.  Call the number of columns in a table K.
I am proposing to limit the value of K to something like 2000.
Would this cause anyone any grief?

Note that SQLite is optimized for a K that is small - a few dozen at most.
There are algorithms in the parser that run in time O(K*K).  These could be
changed to O(K) but with K small the constant of proportionality is such
that it isn't worthwhile.  So, even though SQLite will work on a table with
a million or more columns, it is not a practical thing to do, in general.
The largest value of K I have seen in the wild is in the low 100s.  I
thought that I was testing with K values in the thousands, but I just
checked and I think the test scripts only go as high as K=1000 in one place.

The reason it would be good to limit K to about 2000 is that if I do so
there are some places where I can increase the run-time performance some.
It would also reduce code complexity in a few spots.
So who out there needs a value of K larger than 2000?
What is the largest K that anybody is using?  Who would object if I
inserted a limit on K that was in the range of 1000 or 2000?
--
D. Richard Hipp <[EMAIL PROTECTED]>









Re: [sqlite] authenticating user before allosw to use sql_lite?

2005-02-15 Thread Bert Verhees
Op dinsdag 15 februari 2005 17:21, schreef teoh:
> hi , Is it possible to prompt user for username and
> password before allow to use database like in mysql?

As far as I know, SQLite has no user authorisation mechanism.

>
>
>
> __
> Do you Yahoo!?
> Take Yahoo! Mail with you! Get it on your mobile phone.
> http://mobile.yahoo.com/maildemo

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] How to return only certain number of rows from a table in a select statement and not all the rows

2005-02-09 Thread Bert Verhees
Steve Frierdich wrote:

Does anyone know what I need to add to a select statement so only 
certain number of rows will return?
What would I add to a select statement query so only 10 rows from a 
table would return and not all the records in the table.
Thanks
Steve



select * from table LIMIT 10


Re: [sqlite] SQLite Advocacy

2005-02-01 Thread Bert Verhees
>Not true at all.  In fact, from experience, the Linux OS is
>>much more full of holes than Windows.  It appears most hate
>>    
>>
The Linux OS is the kernel; the rest is third party. Kernel 2.6.10 is 
now the latest, which means there have been 10 upgrades in the 2.6.x series 
over the last three (four?) years, and not all of them were security issues.

The Windows OS is a huge complex of binaries and libraries; even Internet 
Explorer is part of the OS, as Microsoft stated many times in court. (What 
does that tell us about the OS design, if a web browser is an unremovable 
and essential part of the OS?)

There were many, many security upgrades; even after they promised in 2002 
that they would be secure from then on, they had many new holes.
It makes no sense counting security holes in third party software, because 
that is not the responsibility of the OS-vendor.

Normally don't do Microsoft bashing, but I could not resist this time.
Bert Verhees


[Fwd: Re: [sqlite] How do I create a field with default current_timestamp]

2005-01-24 Thread Bert Verhees
sqlite> Insert into TemperatureData (tempt) values(86.0);
sqlite> select * from temperaturedata;
40.2|
86|CURRENT_TIMESTAMP
--- Begin Message ---
Brass Tilde wrote:
Create table TemperatureData ( Tempt real, TimeStamp default
   

CURRENT_TIMESTAMP);
 

Insert into TempertaureData values( 40.2, NULL);
   

Try specifying the Tempt field and it's value only.  SQLite may be
interpreting the NULL as a supplied value and thus not inserting the default
value.
Insert into TemperatureData (tempt) values(86.0);
Don't have SQLite on the machine here, or I would check it myself.

 

sqlite> Create table TemperatureData ( Tempt real, TimeStamp default 
CURRENT_TIMESTAMP);
sqlite> Insert into TempertaureData values( 40.2, NULL);
SQL error: no such table: TempertaureData
sqlite> Insert into TemperatureData values( 40.2, NULL);
sqlite> select * from temperaturedata;
40.2|

--- End Message ---


Re: [sqlite] How do I create a field with default current_timestamp

2005-01-24 Thread Bert Verhees
Brass Tilde wrote:
Create table TemperatureData ( Tempt real, TimeStamp default
   

CURRENT_TIMESTAMP);
 

Insert into TempertaureData values( 40.2, NULL);
   

Try specifying the Tempt field and it's value only.  SQLite may be
interpreting the NULL as a supplied value and thus not inserting the default
value.
Insert into TemperatureData (tempt) values(86.0);
Don't have SQLite on the machine here, or I would check it myself.

 

sqlite> Create table TemperatureData ( Tempt real, TimeStamp default 
CURRENT_TIMESTAMP);
sqlite> Insert into TempertaureData values( 40.2, NULL);
SQL error: no such table: TempertaureData
sqlite> Insert into TemperatureData values( 40.2, NULL);
sqlite> select * from temperaturedata;
40.2|
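For reference, a sketch of how the default is meant to be used (with a SQLite
version that supports CURRENT_TIMESTAMP defaults): omit the column in the
INSERT; an explicit NULL overrides the default rather than triggering it.

  CREATE TABLE TemperatureData (
    Tempt     REAL,
    TimeStamp DATETIME DEFAULT CURRENT_TIMESTAMP
  );
  INSERT INTO TemperatureData (Tempt) VALUES (40.2);   -- default fills TimeStamp
  INSERT INTO TemperatureData VALUES (86.0, NULL);     -- explicit NULL stays NULL
  SELECT Tempt, TimeStamp FROM TemperatureData;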


Re: [sqlite] add new column to table

2005-01-10 Thread Bert Verhees
Op maandag 10 januari 2005 23:56, schreef Lloyd Thomas:
> Thanks. That is going OK but I am having a problem with updating the new
> column with the info I need. It seems to update with the same entry from my
> users table to all rows.
>
> UPDATE call_data SET caller_name = (SELECT firstname || surname AS
> 'caller_name' FROM users WHERE extn_no = (SELECT extn_no FROM call_data));
>
> Have I missed something?

The FAQ gives a working example; I have used it a lot. Please try that.

Bert Verhees
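For the immediate problem in the quoted UPDATE: the inner SELECT is not
correlated with the row being updated, so every row gets the same value. A
sketch of a correlated version (column names taken from the quoted
statement):

  UPDATE call_data
     SET caller_name = (SELECT u.firstname || ' ' || u.surname
                          FROM users u
                         WHERE u.extn_no = call_data.extn_no);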

>
>
>
> - Original Message -
> From: "Lloyd Thomas" <[EMAIL PROTECTED]>
> To: <sqlite-users@sqlite.org>
> Sent: Monday, January 10, 2005 9:08 PM
> Subject: Re: [sqlite] add new column to table
>
> > Thanks Paul,
> >I have used that example before to recreate a table. Can I
> > use the same thing to recreate a table and populate a new column with
> > data from a select query.
> > The table I wish to recreate has an ID number in it and I wish to use
> > this to get the forename and surname from another table.
> >
> > Lloyd
> > - Original Message -
> > From: "Paul Dixon" <[EMAIL PROTECTED]>
> > To: <sqlite-users@sqlite.org>
> > Sent: Monday, January 10, 2005 5:50 PM
> > Subject: Re: [sqlite] add new column to table
> >
> >> Lloyd Thomas wrote:
> >>>I wish to create a new column in a table and add data, which is queried
> >>>from another table.What is the best way?
> >>
> >> There's no "ALTER TABLE" support in the support language, so you have to
> >> recreate the entire table and re-populate it.
> >>
> >> Check the FAQ: http://www.sqlite.org/faq.html#q13
> >>
> >> Paul

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Error in Aducom components

2005-01-09 Thread Bert Verhees
Op zondag 9 januari 2005 11:46, schreef Ahmet Aksoy:
> Hi Bert,
> Thank you for your answer.
> I found a solution for my problem: setting TypeLess property to true.
> Also, I think I found the reason of my problem. My original database was
> in MsAcces, and I converted it to sqLite using sqlite3Browser of Mike
> Cariotoglou.
> Some columns which are of integer type, contain nulls after conversion.
> That might be the main problem. I'll try to contact Mike for this
> situation. I'm sure he can find a solution.
> See you,
> Ahmet Aksoy

The disadvantage of working typeless in Aducom is that you do not use 
Aducom's type checking. The internal type checking can be useful in an 
application, e.g. you can let your code react to the value of the data. A 
trivial example:

var
  housenumber: string;
begin
  try
    // housenumber is an integer field and may sometimes be NULL
    housenumber := FieldByName('housenumber').AsString;
  except
    // the conversion fails when there is no integer value, so fall back
    housenumber := 'not specified';
  end;
end;

In this case you might get an exception because the field-handling code for
"housenumber" is asked to convert an integer to a string, and if that is not
possible the value is probably NULL. You would then use the internal type
checking of Aducom to change your program flow.

There is a forum on the Aducom website where you can ask Aducom-specific
questions. For example, a good question in this case would be how to handle
NULL values in integer fields with Typeless=false.
It would be very helpful if you posted that question there: I did not find
it, and the forum would become more valuable.

Bert

>
> Bert Verhees wrote:
> >Op zondag 9 januari 2005 01:29, schreef Ahmet Aksoy:
> >>Hi,
> >>I'm trying to open a query using Aducom components, but I get an error
> >>message saying:
> >>'' is not a valid integer value.
> >>I can open and use the database with sqlite3.exe. Also
> >>Sqlite3Explorer.exe can open the database and all the tables inside it.
> >>So, there are no problems in the database.
> >>Have you encountered such a situation?
> >>What can I do?
> >>Ahmet Aksoy
> >
> >I use the Aducom components a lot, I remember there are two ways to solve
> > this problem, one is, give a default value to a integer field, so there
> > always is in integer in it, the other way is surrounding that specified
> > action in a exception clause

-- 
Met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Error in Aducom components

2005-01-09 Thread Bert Verhees
Op zondag 9 januari 2005 01:29, schreef Ahmet Aksoy:
> Hi,
> I'm trying to open a query using Aducom components, but I get an error
> message saying:
> '' is not a valid integer value.
> I can open and use the database with sqlite3.exe. Also
> Sqlite3Explorer.exe can open the database and all the tables inside it.
> So, there are no problems in the database.
> Have you encountered such a situation?
> What can I do?
> Ahmet Aksoy

I use the Aducom components a lot. I remember there are two ways to solve
this problem: one is to give the integer field a default value, so there is
always an integer in it; the other is to surround the offending action with
an exception handler.

-- 
Bert Verhees


Re: [sqlite] Advice needed for a new group member

2004-12-31 Thread Bert Verhees
Op vrijdag 31 december 2004 02:23, schreef mike cariotoglou:
> In the home page of www.sqlite.org there is a "contrib" link in the right
> upper area.
> It points to http://www.sqlite.org/contrib

Thanks
Bert


Re: [sqlite] Advice needed for a new group member

2004-12-30 Thread Bert Verhees
Op donderdag 30 december 2004 13:37, schreef mike:
> When used correctly, sqlite is the fastest SQL engine I have seen  (I am
> also using D7). So, yes, it would be a good choice.
> You need some API wrapper for sqlite, there are a few around including my
> own :) (see contrib)

Excuse, me, can you please point me to the location where I can find your 
wrapper.

Thanks
Bert

>
> -Original Message-
> From: Ahmet Aksoy [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 30, 2004 12:00 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Advice needed for a new group member
>
> Hi,
> I'm both new to the group, and sqlite.
> I want to redesign one of my old projects.
> It is a kind of mapping program, which shows ports on a world  map. Map is
> drawn using geographical coordinates of seashores. Also the coordinates of
> the ports are used in the program.
> You can make searches on port lists, and draw the selected part of the map
> which contains the port. Also, you can see the local details  if you point
> a location on the map. You can change the coordinates and scale of the map
> easily.
> In the project I'm using MSAccess. Now there are about 3000 ports on the
> database. That number will increase in near future.
> There are very few editing on the database. Records are mostly readonly.
> In order to gain some extra speed, and remove licencing  problems while
> distributing my program, I made some search on the internet, and found
> sqlite as it fits best to my needs.
> I'm using delphi7 pro for programming.
> Do you think that sqlite will be  a correct choice for such a project?
> Which delphi components should I use?
> Thanks in advance.
> Ahmet Aksoy


Re: [sqlite] Advice needed for a new group member

2004-12-30 Thread Bert Verhees
Op woensdag 29 december 2004 22:59, schreef Ahmet Aksoy:
> Hi,
> I'm both new to the group, and sqlite.
> I want to redesign one of my old projects.
> It is a kind of mapping program, which shows ports on a world  map. Map
> is drawn using geographical coordinates of seashores. Also the
> coordinates of the ports are used in the program.
> You can make searches on port lists, and draw the selected part of the
> map which contains the port. Also, you can see the local details  if you
> point a location on the map. You can change the coordinates and scale of
> the map easily.
> In the project I'm using MSAccess. Now there are about 3000 ports on the
> database. That number will increase in near future.
> There are very few editing on the database. Records are mostly readonly.
> In order to gain some extra speed, and remove licencing  problems while
> distributing my program, I made some search on the internet, and found
> sqlite as it fits best to my needs.
> I'm using delphi7 pro for programming.
> Do you think that sqlite will be  a correct choice for such a project?

I guess it is: it can store data, also large amounts of data, and it
retrieves data very fast. As I understand it your database is embedded,
which is another reason why SQLite fits. There are some hiccups in the
SQLite SQL parser which you should know about; I don't know whether they are
fixed in the latest versions.

One annoying thing is that when you use OR in the WHERE clause on the same
table, SQLite does not make use of the indexes. I always avoid this by using
UNION, as sketched below. There are some more small things; they are
documented on the SQLite website.
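
A sketch of that rewrite, with a hypothetical ports table just to show the
shape (each branch can use its own index, and UNION removes duplicates):

-- tends to fall back to a table scan in these SQLite versions:
SELECT * FROM ports WHERE name = 'Rotterdam' OR code = 'NLRTM';

-- the UNION form lets each SELECT use its own index:
SELECT * FROM ports WHERE name = 'Rotterdam'
UNION
SELECT * FROM ports WHERE code = 'NLRTM';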

> Which delphi components should I use?


For Delphi I mostly use http://www.aducom.com/sqlite/ - it is easy to use,
works well, and you can connect it to Delphi's data-aware components.

Another good component is the dbXpress driver, but sadly it is only
available for SQLite 2.8.x; I still use it, though. I once planned to
rewrite it for SQLite 3.x, but I have no time. You can find it at:
http://www.bcp-software.nl/artikelen/sqlite.html

It, too, is easy to use.

regards
Bert Verhees

> Thanks in advance.
> Ahmet Aksoy


Re: [sqlite] ODBC ADO

2004-12-21 Thread Bert Verhees
George Ionescu wrote:
Hello Bert,
have you tried using one of the ActiveX wrappers? I know it's not 
ODBC/ADO, but my wrapper (http://www.terrainformatica.com/sqlitedb) 
has at least an ADO-compatible interface, so that might help. I didn't 
want to write a full-blown ADO provider because this is the beauty of 
sqlite: you can have a whole database engine by delivering a single 
file to the client.

Best regards,
George Ionescu

I could not find it on the website, does it support sqlite 3.0x?
Bert


Re: [sqlite] ODBC ADO

2004-12-17 Thread Bert Verhees
Op vrijdag 17 december 2004 16:33, schreef Jonatan Moleiro:
> Try one of these:
>
> Aducom http://www.aducom.com/sqlite/ (Suppor 2.x and 3.x)

This database component is not recognized as a database component by the
Bold MDA layer (model driven architecture), so I cannot use it.
But I have used this component to write a dbXpress driver; I am halfway
through it, but I lack the time to finish.
> Zeos http://www.zeoslib.net/  (Suppor 2.x)

See previous mail.

> Other: http://www.torry.net/pages.php?id=561 (Some components for access

I cannot find anything useful on Torry.net.

I used to use the dbXpress driver from Marco Wobben, but he has not updated
it for a long time; that is why I want to write a new one, which is not that
hard, but I do not have much time. That is the problem.

At this moment there is no Delphi layer to SQLite that is useful to me, so I
have to use another database for the time being.

Thanks for your reaction

regards
Bert Verhees

> SQLite components)
>
> Jonatan
>
> - Original Message -
> From: "Bert Verhees" <[EMAIL PROTECTED]>
> To: "SQLite" <[EMAIL PROTECTED]>
> Sent: Friday, December 17, 2004 10:53 AM
> Subject: [sqlite] ODBC ADO
>
> > Does someone have experience with the ODBC-driver for SQLite3 in
> > combination with the ADO layer in Delphi.
> > I cannot get it to run, it stumbled about the isolation-level for
> > transactions
> >
> > Thanks in advance for an answer
> >
> > Bert Verhees

-- 
met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] ODBC ADO

2004-12-17 Thread Bert Verhees
Op donderdag 16 december 2004 16:17, schreef Clay Dowling:
> Bert Verhees said:
> > Does someone have experience with the ODBC-driver for SQLite3 in
> > combination with the ADO layer in Delphi.
> > I cannot get it to run, it stumbled about the isolation-level for
> > transactions
>
> I don't have experience with the specific situation you're describing, but
> if all you need is Delphi access I highly recommend the Zeos Database
> Objects at http://www.zeoslib.net.  The latest beta is SQLite capable, at
> least for SQLite 2.8.x 

I need support for 3.x, because 2.8.x does not use indices when there is an
OR in the WHERE clause on the same table.
I know UNION could help, but I have to deal with SQL statements that are
generated on the fly.
> 
> Connecting through ODBC via ADO just seems like way too many layers to be
> comfortable.

Me too, thanks for the tip. By the way, did you notice that none of the
links on that site seem to work? A bug, I guess.

Kind regards
Bert Verhees

>
> Clay Dowling

-- 
met vriendelijke groet
Bert Verhees
ROSA Software


[sqlite] ODBC ADO

2004-12-17 Thread Bert Verhees
Does anyone have experience with the ODBC driver for SQLite3 in combination
with the ADO layer in Delphi?
I cannot get it to run; it stumbles over the isolation level for
transactions.

Thanks in advance for an answer
Bert Verhees


[sqlite] any possibility for sqlite3_step to return SQLITE_BUSY while a handler is installed?

2004-12-15 Thread bert hubert
Hi sqlite people!

I haven't yet had the chance to say so but I want to thank Richard and all
other contributors - sqlite has truly changed the way I'm able to write
software. It rocks. Thank you.

Ok, on to my question. I have two programs, one that fills an sqlite
database, one that displays the result to users. These occasionally get into
situations with locks, sometimes prolonged. I've decided that users get to
deal with this, and should have their queries timed out after 10 seconds. 

The program that fills the database can't deal with it, so it has an eternal
timeout. 

My question is: can sqlite3_step return SQLITE_BUSY, without ever calling my
handler? It certainly does so in my application, which is written in single
threaded c++ and runs under linux 2.6.10-rc2, Debian sid, with sqlite
3.0.8. It happens with 3.0.7 as well.

I have only one sqlite instance in the process.

This is my handler:

extern "C" {
  static int eternityHandler(void *, int)
  {
    cerr << "timeout handler called" << endl;
    return 1;   /* non-zero: keep waiting */
  }
}

Re: [sqlite] SQLite Tutorial - [Request For Comments]

2004-11-24 Thread Bert Verhees
Op woensdag 24 november 2004 00:22, schreef Griggs, Donald:
> Re:  SQLite Tutorial
> http://prdownloads.sourceforge.net/souptonuts/README_sqlite_tutorial.html?d
>o wnload
>
> Thanks for providing this.

Very good. I had been saving e-mails from this mailing list all along, and
now I have the information all in one place.
The page works fine in Konqueror, no problems at all.

What I also like is that it puts a lot of information on practically a
single page, with no bitmaps. That makes it easy to save.
If it contained even more information I would have no problem with that;
then again, I have a good internet connection.

Very good job

Thanks
Bert Verhees

>
> I did notice that the document insists on displaying at about 20% wider
> than my screen, causing lots of horizontal scrolling.
>
> My screen res is set to 1024x768.   Workplace PC with IE 6.0 at full
> screen. Printing it also resulted in some right-edge truncation.
>
> Donald Griggs
>
>
> Opinions are not necessarily those of Misys Healthcare Systems nor its
> board of directors.

-- 
met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] Improving text search using LIKE '%string to search%'

2004-10-07 Thread Bert Verhees
Op donderdag 7 oktober 2004 21:50, schreef Patrick Peccatte:
> I use SQLite 2.8.14 for a documentation application which needs a lot of
> text search.
> Text searchs are done using somethink like:
> SELECT * FROM Texts WHERE CurrentText LIKE '%string to search%';
> I use index on every searchable columns.
> Response time are acceptable until about 10 texts stored in database,
> but are very very long with 50 texts in database.
> Is exist any method to improve this kind of search ?
> --
> Patrick Peccatte - Soft Experience

SQLite does not use indices for the LIKE operator.

If you want to do full text searches, you are better off building an
algorithm around something like a word-list index instead of LIKE '%...%'.

You can, however, do a text search on the beginning of a string:

SELECT ... WHERE text >= 'aa' AND text < 'ab';

That finds all values starting with 'aa', and it uses the index. See the
fuller sketch below.
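
A slightly fuller sketch with the table from your message (the upper bound
is simply the prefix with its last character incremented; untested here):

CREATE INDEX idx_texts_current ON Texts(CurrentText);

-- LIKE '%...%' cannot use this index, and LIKE in general is not
-- optimized here (see above), but a plain range comparison is:
SELECT * FROM Texts
 WHERE CurrentText >= 'aa' AND CurrentText < 'ab';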

regards
Bert Verhees



Re: [sqlite] question on speed - or what am I doing wrong

2004-09-16 Thread Bert Verhees
Use BEGIN TRANSACTION and COMMIT TRANSACTION,
and do all the inserts between those, as sketched below.
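
Concretely, for the q.txt script that means one extra statement at the top
and one at the bottom (a sketch; without it every INSERT runs as its own
transaction and forces its own sync to disk):

BEGIN TRANSACTION;
-- ... the 15900+ INSERT INTO tmaster ... statements from q.txt ...
COMMIT TRANSACTION;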

Bert

Op donderdag 16 september 2004 14:47, schreef [EMAIL PROTECTED]:
> I'm having problems with sqlite query speed - in fact it is very slow
>
> Fact 1: I'm using sqlite 2.8.14
> Fact 2: my database file is named te2.db - and has one  it's layout looks
> like this
>
> create table tmaster (
> broj integer primary key,
> datumblok date,
> vrijemeblok time,
> datumdeblok date,
> lokacija varchar(50),
> regbroj varchar(15),
> vlasnik varchar(25),
> adresa varchar (50),
> kompol varchar(25),
> radblok varchar(25),
> raddeblok varchar(25),
> nalog integer,
> zabiljeska integer,
> prekrsaj varchar(50),
> posljedica varchar(50),
> napomena varchar(200),
> foto1 varchar(200),
> foto2 varchar(200),
> foto3 varchar(200),
> foto4 varchar(200),
> foto5 varchar(200),
> foto6 varchar(200),
> unosilac varchar(8),
> datumunosa date,
> vrijemeunosa time);
>
> create table tradnik (
> id integer primary key,
> oznaka varchar(8),
> ime varchar(25));
>
> create table tkompol (
> id integer primary key,
> oznaka varchar(8),
> ime varchar(25));
>
> create table tlokacija (
> id integer primary key,
> oznaka varchar(50));
>
> create table tprekrsaj (
> id integer primary key,
> oznaka varchar(50),
> kazna float);
>
> create table tposljedica (
> id integer primary key,
> oznaka varchar(50));
>
> create table tkorisnici (
> id integer primary key,
> ime varchar(25),
> username varchar(8),
> password varchar(8),
> ovlast varchar(1));
>
> create table tlogovanja (
> id integer primary key,
> ime varchar(25),
> datum date,
> vrijeme time,
> uspjeh varchar(10));
>
> create table topcije (
> podesavanje varchar(100));
>
> Fact 3: I've done command ".read q.txt" which contains over 15900 of:
>
> insert into tmaster values(null,'01.01.2003','12:30','21.11.2004','Probna
> lokacija','232-k-212','Mico Micicevic','Stepe Stepanovica 17, 78000
> Prnjavor','Drasko Ignjatic','Mirko Miric','Biljana
> Jeftic',212132,213213,'Parkirao na nedozvoljeno mjesto na trotoaru','Vozilo
> premjesteno na drugu lokaciju','Bio bezobrazan i prijetio, pokusao da
> osteti
> lisice','c:\zsike\!pika\proba.jpg','c:\mika\!pika\proba.jpg','mika.jpg','',
>'','','admin','01.01.2003','12:30');
>
> Fact 4: It took over a 20 minutes to execute this query and te2.db now
> takes about 21,7MB. I've done it twice, to make sure.
>
> What do I have to do to make this faster? And reading? Please, give me a
> concrete answer with examples if possible...
>
> Thanks in advance,
> b

-- 
met vriendelijke groet
Bert Verhees
ROSA Software


Re: [sqlite] OLE/DB provider for SQLite?

2004-08-31 Thread Bert Verhees

Did you notice there are very few open source OLE/DB providers? MySQL
only has one, and it has not been maintained since 2001. It started as a
commercial project and was only later (L)GPL'ed.
There is one open source provider for Interbase:
http://sourceforge.net/projects/iboledb/
The reason for this, IMHO, is that it isn't easy programming and it
doesn't make sense for most of the major languages, exactly because of the
difficulty of making it work right. It's easy to just use the native
language wrappers for this, but that will not work in other languages
(or even different frameworks, like VCL/MFC).
If you think my "business plan" is not fair, I would be pleased to hear
any alternatives you might have to make it better.
An alternative business plan you could consider is the one Marco Wobben
uses for the DBXpress driver.
You give the binary away for free, but ask money for the source code.
When it works well, I guess you can sell a fair number of source licences.
You could also put restrictions on the binary-only version, for example
that it may not be used in commercial projects.
I would probably buy the source code myself, just to avoid being dependent
on someone else for a critical driver in my software projects.

I am glad I bought the source code from Marco Wobben, because he does not
want to make some small changes I need.

Regards
Bert Verhees
Best regards,
~Nuno Lucas






[sqlite] Firebird-SQLite conversion tool

2004-07-26 Thread Bert Verhees
You can find it and download for free:
http://www.rosa.nl/Bold2/
It makes experimenting with SQLite in Bold/Eco (Borland) easier.
kind regards
Bert Verhees


Re: [sqlite] again on sqlite as a standard, was: SQLite and OOo

2004-07-25 Thread Bert Verhees
> These newly added features were long requested as well, and although 
> ALTER TABLE is indeed an important feature too, there are probably 
> other issues that require more attention right now.
> 
> In any case, there is an entry on the FAQ section that shows you how to 
> do something like ALTER TABLE. It's not ideal because it copies the 
> entire table twice, but it's better than nothing, I suppose:

The workaround for ALTER TABLE is not usable in environments that generate
SQL code.
These kinds of environments are gaining in importance:
not only in Model Driven Architecture, but also, I think, in OpenOffice,
where there may well be GUIs that generate SQL code.

> 
> http://www.sqlite.org/faq.html#q13
> 
> Cheers,
> 
> -- 
> Tito




Re: [sqlite] not supported keywords

2004-07-23 Thread Bert Verhees
D. Richard Hipp wrote:
Bert Verhees wrote:
Hello,
Are there any plans in the near future to implement the keywords
EXISTS and ALTER TABLE?
These two keywords/functions are very important when using SQLite in an
MDA (Model Driven Architecture) environment where SQL code is generated.

There are plans to add EXISTS and ALTER TABLE and a few other things.
Whether or not this occurs in the "near future" depends on your
definition of "near", I suppose.

I hope it will be soon.
I did some tests with Firebird (the embedded version). It runs at about half
the speed in Delphi Architect.
Maybe that also depends on other things; that is possible.

My application does not run smoothly with Firebird, while it does with
SQLite. But Firebird supports these two keywords, so for now I synchronize
with a Firebird database and let the MDA environment evolve and check the
tables. Then I look for a way back from the new database to SQLite; I hope
I will find one.

But as you can see, for me there is a big need for these commands.
It is just those two commands that are missing, for now at least, and I
have been working with this for quite some time.

regards
Bert Verhees


[sqlite] Bold and SQLite

2004-05-30 Thread Bert Verhees
Hi,
I am using Bold in Delphi 7 Architect and SQLite over the DBXpress driver from 
bcp-software.
It works fine: it creates the database as it should. I had to turn off the
timestamp of the records, which was no problem as I do not need that.
Then it comes to inserting data, which works fine too.
But when I stop the application and start it again, other queries suddenly
start to run, because there is now data in the database, and I get weird
errors like:
"Field 'fieldname' not found."
To summarize:
Without data it starts fine and lets me insert data; with data it runs into
errors and the application does not connect the Bold controls to the
database, which makes the application useless.
Does someone have a clue, experience with this, or tips?
Thanks very much in advance
Kind regards
Bert Verhees



[sqlite] Re: Bold and SQLite

2004-05-30 Thread Bert Verhees
> Hi,
>
> I am using Bold in Delphi 7 Architect and SQLite over the DBXpress 
driver from bcp-software.
>
> Works fine, it creates the database as it should, I had to put off 
the timestamp of the records,
> which was no problem as I do not need that.
>
> Than it comes to inserting data, which works fine too.
>
> Then I stop the application and start it again, and suddenly other 
queries start to run, because
> there are data in the database, and I get weird errors, like: "Field 
'fieldname' not found."
>
> Resuming:
> Without data it starts fine and let me insert data, with data it 
comes to errors and the application
> does not connect the Bold-controls to the database, which makes the 
application useless.
>
> I wonder, does someone have a clue, experience with this, or tips.
>
> Thanks very much in advance

Strange that my message never appeared on the sqlite list!
Maybe a system/server problem?
OK, the problem is solved now.
I found the answer; luckily I have the source code of the dbXpress driver.
SQLite returns the table name as part of the column name when a join is
used, as documented below:
-
PRAGMA full_column_names = ON; (1)
PRAGMA full_column_names = OFF; (0)

The column names reported in an SQLite callback are normally just the 
name of the column itself, except for joins when "TABLE.COLUMN" is used. 
But when full_column_names is turned on, column names are always 
reported as "TABLE.COLUMN" even for simple queries.
-

So I changed the dbXpress driver so that it strips off the table name. Bold
always executes joins with the table name, yet it seems to expect column
names without table names. Anyway, this short code change did the trick; a
sketch of the behaviour is below.
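
A small sketch of the behaviour the quoted documentation describes
(hypothetical tables, untested here; in newer versions the interaction with
short_column_names may differ):

PRAGMA full_column_names = ON;

SELECT orders.id, customers.name
  FROM orders, customers
 WHERE customers.id = orders.customer_id;
-- columns come back as "orders.id" and "customers.name", so a layer that
-- looks them up by the bare names "id" and "name" will not find them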

Bert Verhees


Re: [sqlite] Is the DLL kept in RAM constantly?

2004-05-03 Thread Bert Verhees
Op maandag 3 mei 2004 22:31, schreef Darren Duncan:
> At 8:06 PM +0200 5/3/04, Frederic Faure wrote:
> >Hi,
> >Since the performance of SQLite are so great, my dad and I were
> >wondering if SQLite used a Windows API to tell it not to unload
> >SQLite from RAM even if no program was still calling it, so that the
> >next time a program loaded it, it would still be up and running in
> >RAM, with data also loaded?

There is an interface file in Pascal (Delphi) called sqlite.pas, available
somewhere on the internet. I use it a lot. It is very handy: it encapsulates
all the conversions to PChar and so on, returns strings and string lists,
is easy to handle and, importantly, it loads sqlite.dll in its
initialization section and keeps it loaded. So as long as you do not
destroy the sqlite object in this file, the DLL stays loaded.

I guess this kind of thing is also possible in other development
environments.

Bert Verhees





Re: [sqlite] Is using max(rowid) instead of count(*) safe?

2004-03-28 Thread Bert Verhees
Ali Sadik Kumlali wrote:

Hello,

I am working with approximately 3.5 million rows data. As far as I
see count(*) is very slow comparing with max(rowid). 

I wonder if I could use max(rowid) safely instead of count(*). Does
rowid increments by 1 for each new row?
Thanks a lot.

Ali Sadik Kumlali



 

When you delete a row in between, the rowids will not be renumbered, so
max(rowid) is only safe when you never delete a row.
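
A small sketch of where the two diverge (hypothetical table, untested here):

CREATE TABLE t (val TEXT);
INSERT INTO t VALUES ('a');
INSERT INTO t VALUES ('b');
INSERT INTO t VALUES ('c');

SELECT count(*), max(rowid) FROM t;   -- 3|3

DELETE FROM t WHERE val = 'b';

SELECT count(*), max(rowid) FROM t;   -- 2|3: max(rowid) overstates the count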

Bert Verhees



Re: [sqlite] a few questions about indexes

2004-03-19 Thread Bert Verhees
eno wrote:

[EMAIL PROTECTED] wrote:

ok.  As I am sure you realize it would be very useful to be able to 
create case-insensitive indexes with:

create index idx_my_index on my_table(lower(my_column));


How can you be sure which index sqlite will use when there are more 
indices on a field?

SELECT * FROM my_table WHERE lower(my_column) < "cd"

but I wonder if such a thing could be done with triggers.

/eno

Excuse me for the late reaction. I tried what you suggested, but in a
certain project I am bound to SQLite 2.7.3; maybe this has changed since.
When I try your solution, the suggested index is not used; instead a table
scan is done and lower() is called as an ordinary function.

Maybe your suggestion works in newer versions of SQLite, I don't know.
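
Since eno mentioned triggers: one workaround is a shadow column kept in
lower case by triggers, with an ordinary index on it, so the WHERE clause
compares a plain indexed column instead of calling lower(). A sketch
(untested), assuming my_table already has, or is recreated with, an extra
column my_column_lc:

CREATE INDEX idx_my_table_lc ON my_table(my_column_lc);

CREATE TRIGGER my_table_lc_ins AFTER INSERT ON my_table
BEGIN
  UPDATE my_table SET my_column_lc = lower(new.my_column)
   WHERE rowid = new.rowid;
END;

CREATE TRIGGER my_table_lc_upd AFTER UPDATE OF my_column ON my_table
BEGIN
  UPDATE my_table SET my_column_lc = lower(new.my_column)
   WHERE rowid = new.rowid;
END;

-- queries then filter on the indexed shadow column:
SELECT * FROM my_table WHERE my_column_lc < 'cd';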

Kind regards
Bert Verhees






[sqlite] Date Time, and now without make-up codes

2004-01-05 Thread Bert Verhees
First of all: very good that the sqlite list does not accept markup codes,
although some of it sometimes slips through.

Excuse me for my previous mail.

But now, on to what I have to say.


> > CREATE TABLE Allan (DateTimeField DateTime);
> > INSERT INTO Allan (DateTimeField) VALUES ('2004-01-04 12:00:00');
> > SELECT DateTimeField FROM Allan WHERE datetime(DateTimeField) =
> > datetime('2004-01-04 12:00:00');
> > SELECT DateTimeField FROM Allan WHERE datetime(DateTimeField) >
> > datetime('2004-01-03 12:00:00');
> > SELECT DateTimeField FROM Allan WHERE datetime(DateTimeField) <
> > datetime('2004-01-03 12:00:00');

I am using SQLite from Delphi, and Delphi does milliseconds, so I have
standardized on a time notation of YYYY-MM-DD HH:NN:SS:ZZZ in my
applications.

It takes a small trick in Delphi to handle a DateTime with milliseconds: I
use EncodeTime to go from string to a time variable, and
FormatDateTime('hh:mm:ss:zzz', timevar) to go from the time variable back
to a string.

I insert the strings into SQLite, and the conversions to and from time do
not take much time; in fact, I insert 10,000 records in about one and a
half minutes.

I benchmarked that the conversion took 12 seconds and the actual insertion
77 seconds (a somewhat older PC, AMD 900 MHz, Windows NT 4.0).
The records also contain small strings and floats, and are 15 fields wide.
For me that is very satisfying.

> By dropping the transformation you permit the current version of sqlite to
> use any relevant index to do these selections.
>
> I don't know the ANSI standard to which you referred, but XSD defines the
> datetime with a few extra tweaks: A T between the date and time, optional
> sub-second resolution, and an optional time-zone:
> '2004-01-04T12:00:00+10:00'.

So I would suggest also using milliseconds in a standard time definition,
if one has to be chosen.
I would leave out the "T"; I have never seen it before, and it adds nothing
over a space.
A space does the job, and is most often used, as far as I know.

The timezone seems OK at first glance, but I am not sure.

You need the leading 0, as in +09:00.
And there is another problem, I think.

As a string, -05:00 sorts before -06:00.
So indexing for sorting or comparing does not make any sense then.

For example,
2004-01-04T12:00:00:000-02:00 (14:00 Greenwich)
sorts as a string later than
2004-01-04T11:00:00:000-05:00 (16:00 Greenwich),
yet it is actually the earlier moment, so the sort is wrong.

What do you do with opcode IncrKey?

It would need a complex indexing algorithm.

IMHO timezones have to be handled by the application before the time is
added to the database; that keeps the database faster.
And milliseconds should be used; set them to 000 when not needed.
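
A sketch of what that convention buys: with SQLite 3's strftime the %f
specifier yields seconds with milliseconds, and fixed-width strings stored
in a single zone compare correctly as plain text, so an ordinary index can
be used. The log/stamp names below are made up:

SELECT strftime('%Y-%m-%d %H:%M:%f', 'now');
-- e.g. 2004-01-05 12:00:00.123 (UTC)

SELECT * FROM log
 WHERE stamp >= '2004-01-04 00:00:00.000'
   AND stamp <  '2004-01-05 00:00:00.000';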

Kind regards
Bert Verhees







Re: [sqlite] double email

2003-11-23 Thread Bert Verhees
>> It seems to me that all the people send their email twice, and that it
>> is a failure in the list software configuration.
>>
>> This email has only one TO: sqlite-users <[EMAIL PROTECTED]>
>> And no CC or BCC, thus, if this is posted twice, it is the list software
>
> Ever thought that it might be something at your end, or maybe you're
> subscribed twice? Nobody else is complaining about this, and I sure don't
> have any issues with duplicate posts.

It is possible that that is the case; then someone or something subscribed
me a second time, as I have been experiencing this problem very
consistently for two days.

If I knew who does the system management, I would report this issue
directly to him or her, but I guess the system manager is reading this too.
It must be easy for him or her to remove one subscription after checking
whether there are two. Please do not remove the last subscription, because
I really appreciate reading this list.

At the moment I am reading the list over a 9.6 Kbps connection at about a
dollar a minute, so being on the Internet is very expensive; that is why I
address the system manager this way instead of spending expensive bytes on
finding out which e-mail address to use.

Thanks in advance for your consideration.

Kind regards
Bert Verhees




>
> --
> Richard Heyes
> http://www.v-webmail.co.uk
>
>
>
>





Re: [sqlite] double email

2003-11-21 Thread Bert Verhees
> Bert Verhees wrote:
>> It would be nice if people would stop sending their emails twice to this
>> list, and also would put off their read or deliver notification
>
> And learn to quote. And not top-post.
>
> -- Gerhard

OK, I will not top-quote

Will you stop sending your emails twice, this one, I also got twice

Thanks

Kind regards
Bert Verhees

>
>
>
>





[sqlite] double email

2003-11-21 Thread Bert Verhees
It would be nice if people would stop sending their e-mails twice to this
list, and would also turn off their read and delivery notifications.

thanks
Bert Verhees

> [EMAIL PROTECTED] wrote:
>> Hi all,
>>
>> I'm trying SQLite with Zope
>
> You're using PySQLite, right?
>
>> and it works fine, but  when I use the browser's
>> [Stop] button to interrupt the request it  gives me the message:
>>
>> "database is locked" [...]
>
> This happens when you have a connection that is within a transaction (i.
> e. you haven't issued .commit() or .rollback() on the connection object,
> yet) and you're trying to open a second connection to the same database.
>
> The solution is to assure that you always .commit() or .rollback(). I
> don't know where to put that in a ZOPE database adapter, perhaps you
> should check the sources of a different ZOPE DA.
>
> But it's hard to tell without knowing your sources and without sufficent
> knowledge about Zope database adapters ;-)
>
> Or are you using PySQLite not from within a DA, but otherwise?
>
> -- Gerhard
>
>
>
>





Re: [sqlite] Performance tuning question

2003-11-15 Thread Bert Verhees
On 14 Nov 2003 at 16:28, Arthur Hsu wrote:

> Hello,
> 
> I managed to download new CVS versions (by hand ...) and compile them on
> Win32 platform (finally).  Following are my test results:
> 
> Time elapsed   ADO.Net   SQLite 2.8.6   SQLite CVS
>            0         0              0            0
>           30      7419           5729         7920
>           60     14176           8013        10711
>           90     20760           9869        13147
>          120     26623          11033        14944
>          150     32862          12633        16598
>          180     38783          13044        17878
>          210     44472          13098        19609
>          240     49873          14120        20711
> 
> The CVS version is quite linear after the first 30 seconds.  I'm still
> digging the reason for the performance difference between 0-30 and 30-60.
> Any ideas?
> 
An important difference is that ADO.Net writes records to a memory cache
and only writes them to the database in a batch when the update call is
made. Whether that is an advantage is a point of discussion. In a
single-user environment it is mostly not a problem, except when the
computer crashes or the power fails.

But the problem with this specific benchmark is that you are comparing
things you should not compare. Maybe there is a way to collect the SQLite
data in a memory cache too, and write it to the database in one go after
collecting is finished.

Kind regards
Bert Verhees


> -Arthur
> 
> 
> 
> 
> 
> 


