[sqlite] Releasing a read (SHARED) lock

2015-03-12 Thread Barry
Hello everybody,

I have a situation where two processes are accessing the same SQLite
database. One process only reads from the database, one process reads and
writes.

These processes keep a single database connection open for the lifetime of
the process.

It seems to me that once the reader process accesses the database (after it
performs its first SELECT statement), it maintains a lock on the database
until the connection is closed (when the program is exited). This prevents
the writer process from updating the database.

I tried changing to WAL. This made the writer process able to commit its
changes, but now the reader does not see any modifications made to the
database until it is restarted (it seems to see a snapshot of the DB at the
time of its first read).

I am using prepared statements: On opening the DB, I create all my prepared
statements. When I need to execute a statement, I bind to the statement,
call sqlite3_step (possibly multiple times), then call sqlite3_reset. I do
not finalise the statements until the program closes.

In order to simulate 'save' behaviour, the writer process always holds a
transaction open. When the user chooses 'save', the current transaction is
committed and a new transaction is begun. (I understand that the reader
will not see any changes from the uncommitted transaction, but it is not
seeing any committed transactions either.)

I have checked quite thoroughly through my code and cannot find any
instances of statements executed without a sqlite3_reset quickly following.

Is this intended behaviour - that once a connection has performed a read,
it maintains its lock on the database for its lifetime? Is SQLite smart
enough to know that the pages held in the reader's cache are invalid
after the writer has made changes to the DB on disk?

If this is not the intended behaviour - is there a way I can find out which
statements are causing the lock to be held open? Or can I force SQLite to
discard its cache?
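The snapshot behaviour described above can be reproduced with Python's built-in sqlite3 module (a hypothetical sketch, not part of the original message; the file path and table name are invented):

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: in WAL mode, a reader's open transaction pins a
# snapshot of the database; ending the transaction releases the snapshot.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t(x)")
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")  # deferred read transaction
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # snapshot taken here

writer.execute("INSERT INTO t VALUES (2)")  # committed immediately (autocommit)

stale = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # still the old snapshot
reader.execute("COMMIT")  # end the read transaction
fresh = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # sees the new row

print(before, stale, fresh)  # 1 1 2
```

Once the reader ends its read transaction (or resets all open statements), the next SELECT picks up the writer's committed changes.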

Any help would be appreciated.

Regards,

Barry Smith


[sqlite] Releasing a read (SHARED) lock

2015-03-13 Thread Barry
On 13 March 2015 at 01:21, Dinu Marina  wrote:

> You should also be aware of a more common pitfall: unclosed result sets.
> Any lock is held until you read PAST the last row or you call stmt_finalize
> (or the equivalent abstraction in your DBAL). Always close select
> statements.


Hi Dinu,

Am I correct in my understanding then that it is not enough to call
sqlite3_reset, I must call sqlite3_finalize after I am done with a
statement?

Cheers,

 - Barry


> On Mar 12, 2015 11:40 AM, "R.Smith"  wrote:
>
> >
> >
> > On 2015-03-12 04:38 AM, Barry wrote:
> >
> >> Hello everybody,
> >>
> >> I have a situation where two processes are accessing the same SQLite
> >> database. One process only reads from the database, one process reads
> and
> >> writes.
> >>
> >> These processes keep a single database connection open for the lifetime
> of
> >> the process.
> >>
> >> It seems to me that once the reader process accesses the database (after
> >> it
> >> performs its first SELECT statement), it maintains a lock on the
> database
> >> until the connection is closed (when the program is exited). This
> prevents
> >> the writer process from updating the database.
> >>
> >
> > Hi Barry,
> >
> > This is the usual situation when one of your transactions in the "reading
> > only" database does not finalize. i.e. you started a transaction there
> and
> > did not end it with either "END TRANSACTION" or "COMMIT" or "ROLLBACK".
> >
> > In the non-WAL DB it will simply hold the lock preventing changes. In the
> > WAL DB it will hold the lock for its own view of the data but let the
> other
> > writer write.. however it won't see the changes for itself.
> >
> > This is very common actually. Just find every transaction you start
> > (explicitly or implicitly) and make sure you end it and when you end it,
> > see what the return value is from SQLite and whether it reports any
> error.
> >
> > HTH!
> > Ryan
> >
> >
> >
> >> I tried changing to WAL. This made the writer process able to commit its
> >> changes, but now the reader does not see any modifications made to the
> >> database until it is restarted (It seems to see a snapshot of the DB at
> >> the
> >> time of its first read).
> >>
> >> I am using prepared statements: On opening the DB, I create all my
> >> prepared
> >> statements. When I need to execute a statement, I bind to the statement,
> >> call sqlite3_step (possibly multiple times), then call sqlite3_reset. I
> do
> >> not finalise the statements until the program closes.
> >>
> >> In order to simulate 'save' behaviour, the writer process always holds a
> >> transaction open. When the user chooses 'save', the current transaction
> is
> >> committed and a new transaction is begun. (I understand that the reader
> >> will not see any changes in the uncommitted transaction, but is not
> seeing
> >> any committed transactions either).
> >>
> >> I have checked quite thoroughly through my code and cannot find any
> >> instances of statements executed without a sqlite3_reset quickly
> >> following.
> >>
> >> Is this intended behaviour - that once a connection has performed a
> read,
> >> it maintains its lock on the database for its lifetime? Is SQLite smart
> >> enough to know that the pages it holds in cache of the reader are
> invalid
> >> after the writer has made changes to the DB on disk?
> >>
> >> If this is not the intended behaviour - is there a way I can find out
> >> which
> >> statements are causing the lock to be held open? Or can I force SQLite
> to
> >> discard its cache?
> >>
> >> Any help would be appreciated.
> >>
> >> Regards,
> >>
> >> Barry Smith
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users at mailinglists.sqlite.org
> >> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >>
> >


[sqlite] Releasing a read (SHARED) lock

2015-03-13 Thread Barry
Hmmm, I am guilty of not always stepping until I get SQLITE_DONE (I am
usually selecting by the primary key, and therefore know I will only get a
single result), but I make sure to always call sqlite3_reset, so I don't
think that should be the issue.

On further reflection, I suspect it might be the 'obscure side-effect'
mentioned by Dinu. I do have functions that look like:

//bind all parameters
//...
//begin querying statement1:
//Note: stmt1 is only queried once (and hence is still 'open' for now)
sqlite3_step(stmt1);

//Use the results from statement1 to bind statement2
//...
//begin querying statement2:
while (SQLITE_ROW == sqlite3_step(stmt2)) {
  //Do things
}

//reset both statements
sqlite3_reset(stmt2);
sqlite3_reset(stmt1);

So, in this situation I do in fact have two SELECT statements open
simultaneously, which should trigger the unreleasable lock. I will try
changing my code and post the results here.

On 13 March 2015 at 09:02, Dinu Marina  wrote:

> You should also check, as R.Smith and Slavin pointed, that you don't
> accidentally have a BEGIN [DEFERRED] somewhere, as that would trigger
> exactly this behavior: it will lock on the first select (the first step()
> actually)
>
>
> On 13.03.2015 02:44, Simon Slavin wrote:
>
>> On 13 Mar 2015, at 12:17am, Barry  wrote:
>>
>>  On 13 March 2015 at 01:21, Dinu Marina  wrote:
>>>
>>>  You should be also aware of a more common pitfall: unclosed result sets.
>>>> Any lock is held until you read PAST the last row or you call
>>>> stmt_finalize
>>>> (or the equivalent abstraction in your DBAL). Always close select
>>>> statements.
>>>>
>>> Hi Dinu,
>>>
>>> Am I correct in my understanding then that it is not enough to call
>>> sqlite3_reset, I must call sqlite3_finalize after I am done with a
>>> statement?
>>>
>> In order to dispose of a statement you can do either sqlite3_reset() or
>> sqlite3_finalize().  The common mistake is to do something like this:
>>
>> SELECT * FROM myTable WHERE rowid = 36
>>
>> and then not do either _reset() or _finalize() because you know you have
>> asked for only one row so you expect SQLite to have done a _finalize() for
>> you.
>>
>> Simon.


[sqlite] BEGINNER - Transactions in shell script

2015-09-07 Thread Barry
SQL commands do not need to be on separate lines (they only need a
semicolon after each command), but dot commands must each appear on their
own line.

Have you tried the following?
(ECHO .bail on
ECHO %multiple commands%) | sqlite3.exe %dbname%

But, much more importantly (particularly since you included BEGINNER in big
letters in your subject line): are you protecting your DB against SQL
injection attacks? I imagine this would be rather hard to do from a script,
unless you have already sanitized data coming in.
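As a hypothetical illustration of the injection risk (shown here with Python's built-in sqlite3 module rather than the shell; the table and the malicious input are invented):

```python
import sqlite3

# Hypothetical illustration: never splice user input into the SQL text;
# bind it as a parameter instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users(name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

evil = "x' OR '1'='1"

# Unsafe: the input becomes part of the SQL and matches every row.
unsafe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = '%s'" % evil).fetchone()[0]

# Safe: the input is bound as a value and matches nothing.
safe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (evil,)).fetchone()[0]

print(unsafe, safe)  # 2 0
```

The same principle applies in any host language or script: keep user-supplied data out of the SQL text itself.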


On 7 September 2015 at 18:53, Simon Slavin  wrote:

>
> On 7 Sep 2015, at 8:00am, Petr L?z?ovsk?  wrote:
>
> > Ugh... Spent a lot of time making the whole scripts bundle use no
> tempfiles (avoiding I/Os), and now this ;-)
> > OK, few more questions:
> >
> > Does sqlite3 expect exactly one statement per one line on input?
> > Or put it into command line? sqlite3.exe %db% "%multiline_statement%"
> (have seriuos doubts about this solution)
> > Or this? echo "%multiline_statement%"|sqlite3.exe %db%
>
> I can tell you that this works on the Macintosh version, but this may not
> necessarily mean it works on the version for your operating system.
>
> If not, create a text file with the commands in it, then feed it the text
> file.  The command could look like
>
> sqlite3 %db% < theCommands.txt
>
> I think the structure you used above ...
>
> Unix/Linux:
> cat theCommands.txt | sqlite3.exe %db%
> Windows:
> type theCommands.txt | sqlite3.exe %db%
>
> will also work.
>
> You may find it useful to figure out a way of generating a filename which
> will never be repeated.  When I try to do this I usually end up with
> something which relates to the current time.
>
> Simon.


[sqlite] Avoid duplicate sets with join table

2016-04-20 Thread Barry
Would not the following snippet be useful to return the record ID if the
feature set already exists?

SELECT rec.id
FROM records rec
WHERE 1 = (SELECT SUM(id_f IN (21, 22, 23)) = 3 AND COUNT(*) = 3
           FROM records_features rf
           WHERE rf.id_r = rec.id)

You would have to have some logic in your calling language to deal with
things appropriately...

I don't know how this could be coded into a DB constraint since SQLite
lacks FOR EACH STATEMENT triggers, and to insert a superset of an existing
set you may at some point insert the existing set.
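As a sanity check, the snippet above behaves as intended on a toy dataset (sketched here with Python's built-in sqlite3 module; the data is invented, table names as in the thread):

```python
import sqlite3

# Toy check: only the record whose feature set is exactly {21, 22, 23}
# should come back; subsets and supersets should not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT);
CREATE TABLE records_features (id_r INTEGER, id_f INTEGER, ord INTEGER);
INSERT INTO records(id) VALUES (1), (2), (3);
INSERT INTO records_features(id_r, id_f) VALUES
  (1, 21), (1, 22), (1, 23),           -- exact match
  (2, 21), (2, 22),                    -- subset: should not match
  (3, 21), (3, 22), (3, 23), (3, 24);  -- superset: should not match
""")
rows = conn.execute("""
  SELECT rec.id FROM records rec
  WHERE 1 = (SELECT SUM(id_f IN (21, 22, 23)) = 3 AND COUNT(*) = 3
             FROM records_features rf WHERE rf.id_r = rec.id)
""").fetchall()
print(rows)  # [(1,)]
```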

On 17 April 2016 at 17:32, James K. Lowden  wrote:

> On Sat, 16 Apr 2016 01:20:55 +0200
> Ketil Froyn  wrote:
>
> > I have two tables and a join table, in principle like this:
> >
> > CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT);
> > CREATE TABLE features (id INTEGER PRIMARY KEY, data TEXT UNIQUE);
> > CREATE TABLE records_features (id_r INTEGER, id_f INTEGER, ord
> > INTEGER);
>
> (As a style suggestion, consider not reduplicating plurals.  It's a
> one-man band, a 3-star general, and a 5-finger discount.  What you have
>  is a set of record_features.)
>
> > Later, if I come across a new record that has exactly features
> > "feature1", "feature2" and "feature3" (IDs 20,21 and 22), I don't want
> > to add a new record for this. So given a new set of feature IDs, how
> > can I best check if there's another record that has these exact
> > features before I insert it?
> >
> > To be clear, a record with features "feature1", "feature2", "feature4"
> > would be ok. So would "feature1", "feature2". Subsets are ok, but not
> > exact duplicates.
>
> This is an application of relational division: you want to know if two
> sets are equal.  Well, almost.  You really want to know if the "new" set
> is a subset of an existing one.
>
> You're also a little stuck for lack of syntax.  There are two sets in
> question: the extant one and the one to be inserted.  You can insert a
> set in SQLite,
>
> insert into T values (a), (b), (c);
>
> but because that set of rows (a, b, and c) doesn't have a name, you
> can't refer to them again in the same query.  So you have to dance a
> little jig, something like:
>
> begin transaction;
> insert into record_features ...
> delete from record_features
> where id_r = new_id
> and exists (
>     select 1 from record_features as rf
>     where rf.id_r = record_features.id_r
>     and rf.id_f = record_features.id_f
>     group by rf.id_r
>     having count(*) = (
>         select count(*)
>         from record_features where id_r = new_id
>     )
> );
> commit transaction;
>
> That puts the rows into the database -- where they can be examined --
> and deletes them if they turn out to be a duplicate.  You could also
> apply the same logic in an insert & update trigger, and probably should
> if the rule applies to the *data*, and not just the application's
> expectations.
>
> --jkl
>
>


Re: [sqlite] Help with Backup API please

2017-02-08 Thread Barry
Ok. My bad for singling out SQLite; I should rather have mentioned that the
problem exists with all memory allocation routines (malloc/free or
new/delete), at least on Windows.

A workaround is to pass around allocator / deallocator function pointers
with any data structure which contains pointers which might be free'd or
realloc'd in a different module. My assumption was that SQLite does not do
this, since being Lite is a main priority and if one is using multiple
modules then option to dynamically link to SQLite avoids the problem.

My reason for assuming memory corruption (or invalid call to free(...)) was
the fact that Brett is encountering an exception rather than an invalid
return code.

On 8 February 2017 at 14:16, Richard Hipp <d...@sqlite.org> wrote:

> On 2/8/17, Barry Smith <smith.bar...@gmail.com> wrote:
> >
> > I believe SQLite doesn't use the standard memory allocation routines, but
> > instead has its own routines.
>
> SQLite has its own memory allocation routines (if you use the right
> compile-time and start-time options) but it uses system
> malloc()/realloc()/free() by default.
>
> There are internal wrapper routines for the memory allocator that
> dispatch requests to the appropriate implementation.  Again, unless
> you take special actions to change it, memory allocation requests are
> satisfied by malloc().
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] SQLite Options

2017-02-21 Thread Barry
Are you using NuGet or downloading the binaries from System.Data.Sqlite.org?

Ah. You'll have to excuse me, it's been a while since I actually started a
new project and set SQLite as a dependency. I'm not sure if things have
changed, or if my memory is terrible, but what I said may not have been
completely clear or correct.

If you're using NuGet (the easiest way):
 Turns out there's a NuGet package called System.Data.Sqlite.Core* that
includes only the core SQLite wrapper. If you install System.Data.Sqlite
you get the LINQ and EF stuff (which I generally try to stay away from). If
you use this package you will get both the x64 and x86 interop binaries in
their respective folders. If you have already installed the
System.Data.Sqlite package, and got the LINQ and EF dependencies, you
should be able to remove them as a dependency by right-clicking the
unnecessary dependencies to open their context menu and selecting "Remove".

If you're downloading from the website System.Data.Sqlite.org:
 You only get an x86 or an x64 version of the interop binaries, depending on
what you download. The download page has some instructions on how you can
make the wrapper library automatically select the correct interop DLL for
whichever architecture you're targeting. See the section entitled "Native
cannot help you any more than what is there on the site (The NuGet package
comes with this preconfigured). You can choose to download the mixed mode
assemblies which will not have the 'interop' DLLs. I haven't played around
with these because the download page specifically advises to avoid these
packages unless you need to put SQLite in the Global Assembly Cache (which
the authors also recommend against). Note that if you download from the
website, you'll get the LINQ and EF6 dlls included in the zip file. If you
never reference them from your project, they won't be copied to your output
directory and you can ignore them.

Hope this helps. I am by no means an expert, I've only messed with it a few
times. I hope that if I have got things wrong that someone more experienced
can jump in and correct my mistakes.

* This name might be a little confusing since Microsoft has recently
released an unrelated product called Entity Framework Core. It appears
unrelated to System.Data.Sqlite.Core.

On 21 February 2017 at 13:48, Clyde Eisenbeis <cte...@gmail.com> wrote:

> Thanks for the good info!
>
> I can't find SQLite.Interop.dll ... is referenced at
> https://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki,
> but don't see a download.
>
> On Mon, Feb 20, 2017 at 12:30 PM, Barry Smith <smith.bar...@gmail.com>
> wrote:
> > I would use system.data.sqlite in that situation.
> >
> > But I would also say it depends on what you already have written, and
> what your strengths are. I am under the impression from your first email
> that you already have something written using system.data.sqlite. i.e.
> Using the class System.Data.SQLite.SQLiteConnection to create a
> connection to the db, then using the methods of that to manipulate the db
> or extract data from it. Have I assumed wrong?
> >
> > If I am wrong, and you have yet to start writing anything, I would still
> recommend using system.data.sqlite. Only if you particularly like LINQ over
> SQL and you are prepared to learn the caveats of the entity framework would
> I recommend that.
> >
> > Note that if you're using system.data.sqlite you will ultimately produce
> a few dlls that must be distributed together:
> >  - your custom library, which contains the code you've written
> >  - System.Data.Sqlite.dll, which contains the wrapper to make an
> interface to access SQLite in a more dotNet friendly manner
> >  - x64\sqlite.interop.dll
> >  - x86\sqlite.interop.dll
> > The last two contain the 'raw' SQLite library (for either 32 or 64 bit
> systems).
> >
> > You should not need the other libraries for a simple application. If you
> find that visual studio is placing them in your project's output directory,
> check if they are listed as a reference and try to remove them then
> recompile.
> >
> >> On 20 Feb 2017, at 1:05 PM, Clyde Eisenbeis <cte...@gmail.com> wrote:
> >>
> >> Thanks for the clarification.  In my case:
> >>
> >> 1) Speed is not an issue.  Size is not an issue.
> >>
> >> 2) This is a personal use database (genealogy).
> >>
> >> 3) Typically I create .dll's that serve as a library (WPF Custom
> >> Control Library) ... easy to use for different programs.
> >>
> >> 4) For example, I have an Excel .dll library (uses Excel as a
> >> database).  When the program runs the first time using thi

Re: [sqlite] SQLite3 on Windows

2017-10-10 Thread Barry
Perhaps I misunderstand your question, but I think a certain amount of
explanation of how SQLite works may be in order. What are you trying to do?
The SQLite shell tool does not depend on the DLL (the SQLite library is
instead included in the executable file). You can put sqlite3.exe wherever
you like and do not need to copy anything else around with it. Probably in
a PATH folder is best, so you can access it whenever necessary; but I have
also included it in project folders that I have worked on so I can check it
straight into source control to make it easier for other developers to use
(with a certain amount of cringing about what I was doing).

SQLite is not like other database management systems (MSSQL, MySQL,
Postgres, etc.) in that it does not have a dedicated server that runs to
control DB access. Instead, every application that wants to use SQLite
directly accesses the relevant files, using the SQLite library. The library
can be thought of as a component of the particular app that wishes to access
SQLite databases.

As a result, as a user you should not need to put the DLL anywhere. Any
program that relies on SQLite should include it with its distribution,
either as a DLL installed into that program's install directory, or
compiled directly into the executable of the program.

You should only need the SQLite dll if you intend to build applications
(using an unmanaged language, such as C or C++) that use SQLite*. In that
case, it will likely reside with other third party libraries your project
references, and you should ensure that any installer you distribute
includes that file.

*Sometimes it's better to use the DLL file to build applications; sometimes
it's better to just download a copy of the amalgamation C source files and
include them in your project.

On 10 October 2017 at 08:38, Keith Medcalf  wrote:

>
> Hmmm.
>
> Must be rather old since it mentions 16-bit Windows but not 64-bit
> Windows.  So for all the parts that reference the "System" directories you
> need to apply a grain of salt
>
> 64-bit Windows looks for 64-bit DLLs in Windows\System32
> 64-bit Windows looks for 32-bit DLLs in Windows\SysWOW64 (and not in
> Windows\System32)
> 32-bit Windows looks for 32-bit DLLs in Windows\System32
> 32-bit Windows looks for 16-bit DLLs in Windows\System
>
> The "Common Files" search location is automagically, on 64-bit Windows,
> Program Files (x86)\Common Files when looking for at 32-bit DLL and
> Program Files\Common Files when looking for 64-bit DLL
>
> Similarly the names for 32-bit versions of Windows searching for 32-bit
> DLLs is
> Program Files\Common Files
>
> Also, searches that search the "Program Files" tree will automatically use
> "Program Files (x86)" tree if looking for 32-bit DLLs on 64-bit Windows
> Other search locations in the AppData tree also magically redirect to the
> (x86) directories if looking for 32-bit DLLs on 64-bit Windows
>
> Paths are not modified so you better hope that if you put the DLL in a
> directory in the (various) paths that it will attempt to load ONLY THE
> FIRST such DLL found and will crash (or the loader will fail) if the DLL
> first found is the wrong "bitedness".  There are sekret registry hacks that
> you can do to register the "same" module in different locations for
> 16/32/64 loading automagically (more or less) but those hacks are not
> officially supported.
>
> So, if you are using multiple versions of the same-named DLL, you
> have to be careful where you put it, unless you know that you will only
> ever require one model, or the application itself loads the DLL from the
> Application Directory.  Otherwise, if you need multiple models
> simultaneously you need to be careful where you put them, or use the
> default System directories.
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
>
> >-Original Message-
> >From: sqlite-users [mailto:sqlite-users-
> >boun...@mailinglists.sqlite.org] On Behalf Of Warren Young
> >Sent: Tuesday, 10 October, 2017 08:57
> >To: SQLite mailing list
> >Subject: Re: [sqlite] SQLite3 on Windows
> >
> >On Oct 10, 2017, at 3:49 AM, Phoenix 
> >wrote:
> >>
> >> 2) Does the .dll need to be in a specific folder or is it okay if
> >the
> >> folder is listed in my PATH?
> >
> >https://msdn.microsoft.com/en-
> >us/library/windows/desktop/ms682586(v=vs.85).aspx
> >

[sqlite] PRAGMA writable_schema and schema_version: changes are ignored on the same connection

2018-06-04 Thread Barry
This started off as a bug report about the writable schema method
recommended in  https://www.sqlite.org/lang_altertable.html, but I realised
the problem is a bit broader.

It seems SQLite is ignoring manual changes to sqlite_master when combined
with increments to schema_version. The alter table page referenced above
does not mention this, and indeed has advice (to run an integrity_check)
that relies on different behaviour. Restarting SQLite ensures it has
re-read the schema. I did not check the behaviour of simultaneous
connections.

C:\Users\Me>sqlite3
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE TABLE Hi(ID INTEGER PRIMARY KEY);
sqlite> PRAGMA schema_version;
1
sqlite> PRAGMA writable_schema=ON;
sqlite> UPDATE sqlite_master SET SQL='CREATE TABLE Hi(ID INTEGER PRIMARY
KEY, value INTEGER)' WHERE name='Hi' AND type='Table';
sqlite> PRAGMA writable_schema=OFF;
sqlite> PRAGMA schema_version=2;
sqlite> INSERT INTO Hi VALUES (1, 4);
Error: table Hi has 1 columns but 2 values were supplied
sqlite> PRAGMA schema_version;
2
sqlite> PRAGMA schema_version=3;
sqlite> INSERT INTO Hi VALUES (1, 4);
Error: table Hi has 1 columns but 2 values were supplied
sqlite>

The context of this is:

I just tried to follow the second method (writable schema) advised on
https://www.sqlite.org/lang_altertable.html to alter a column and drop a
NOT NULL constraint.

Due to a typo I updated the SQL in sqlite_master to CREAT TABLE... instead
of CREATE TABLE.

This database still passed the PRAGMA integrity_check that I ran before
committing the transaction. However, when I closed the database then
reopened and executed a DDL statement it got all upset and told me that my
database schema is malformed.

I understand that writable_schema is a "You break it you buy it" type
situation, but why does the documentation recommend I run the PRAGMA
integrity_check if it won't pick up something so simple and obviously
erroneous as misspelling CREAT?

Reproduction example (this is 3.22, but as illustrated above 3.24 will give
the same results):

C:\Users\Me>sqlite3 test.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> CREATE Table foo(ID INTEGER PRIMARY KEY);
sqlite> PRAGMA schema_version;
1
sqlite> BEGIN;
sqlite> PRAGMA writable_schema=ON;
sqlite> UPDATE sqlite_master SET SQL = 'CREAT TABLE foo(bar INTEGER PRIMARY
KEY)' WHERE type='table' and name='foo';
sqlite> PRAGMA writable_schema=OFF;
sqlite> PRAGMA schema_version=2;
sqlite> PRAGMA schema_version;
2
sqlite> PRAGMA integrity_check;
ok
sqlite> COMMIT;
sqlite> .exit

C:\Users\Me>sqlite3 test.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> CREAT TABLE bar (ID INTEGER PRIMARY KEY);
Error: near "CREAT": syntax error
sqlite> CREATE TABLE bar (ID INTEGER PRIMARY KEY);
Error: malformed database schema (foo)
sqlite>


Re: [sqlite] LIMIT

2018-06-23 Thread Barry
I checked with a fake dataset: the GROUP BY solution is quicker with no
index on F2 (the query planner can understand that query well enough to
create an automatic covering index). However, with an index on F2, the
rowid solution is considerably faster for a dataset with 2600 random
distinct F2 and one million total rows in T.

The test script is at the end of the email. On my computer, with the rowid
correlation, the query returned in 14 seconds*. With the GROUP BY solution,
the query took 50 seconds.

It was interesting to see that both queries ran quickly spitting out values
until SQLite had returned all possible values, then stalled for a long time
as the DB engine processed all rows that could never return anything.

Cheers,

 - Barry

[*I suspected that most of the time with the rowid query was taken in
writing to the console, so I SELECT'd COUNT(*) instead of * and it returned
in 6 seconds, and I suspect still did most of the work (the query plan was
the same). I tried the same thing on the GROUP BY query, but the GROUP BY
and COUNT don't play so well together and it still spat out a bunch of
numbers to the console.]

CREATE TABLE T(F1 INTEGER PRIMARY KEY, F2 TEXT NOT NULL);

CREATE TABLE Names(id INTEGER PRIMARY KEY, Name TEXT NOT NULL);

INSERT INTO Names(Name) VALUES
('Alex'),('Brett'),('Cindy'),('Dennis'),('Echo'),('Frank'),('Garry'),('Herbert'),('Indigo'),('Jack'),('Karl'),('Lima'),('Melon'),('Nunes'),('Oprah'),('Peter'),('Quincy'),('Robert'),('Sarah'),('Tangerine'),('Unicorn'),('Violet'),('Wilfred'),('Violet'),('Wesley'),('Xavier'),('Yeltzin'),('Zbrudov');

WITH num(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM num WHERE i < 100)
INSERT INTO Names(Name)
SELECT Name || i FROM Names, Num;

WITH num(i, rnd) AS (SELECT 1, RANDOM()%1300 + 1300 UNION ALL SELECT i + 1,
RANDOM()%1300 + 1300 FROM num WHERE i < 100)
INSERT INTO T(F2)
SELECT (SELECT Name FROM Names WHERE id = rnd) FROM num;

CREATE INDEX idx_F2 ON T(F2);

On Sun, 24 Jun 2018 at 07:00, Gert Van Assche  wrote:

> Ryan, my dataset isn't that big. 11K records.
> Your solution is noticeable faster 996 ms vs 13126 ms.
> Interesting!
>
> gert
>
> Op za 23 jun. 2018 om 18:09 schreef R Smith :
>
> >
> > On 2018/06/23 2:47 PM, Gert Van Assche wrote:
> > > Barry, that's even easier indeed. And it works perfect!
> > > Thanks for sharing this.
> >
> > I very much like the method which Barry suggested for the simplicity,
> > but have avoided it since I thought it would be significantly slower on
> > a large data set, however, SQLite is known for some pretty neat
> > optimizations (especially where the row_id is concerned) and I'm now
> > interested to know if indeed such optimizations feature in this case. I
> > have a few such implemented queries that might benefit from changing
> over.
> >
> > May I ask, if you do have a rather large dataset, and perhaps don't mind
> > the time, would you care to compare the two methods and let us know if
> > there is any discernible difference in speed? Also one small important
> > item - how many F1 items are there typically per unique F2 item in
> > total? Does it grow over time?
> >
> > Also, you do not need hashes for either of the two queries - they
> > should work regardless of whether integers or Unicode text are used,
> > though the comparison might need a specific collation. Surrounding both
> > references with a TRIM() function might also help. Essentially, if F1 of
> > record 3 in Table T is '*&^%@#*&^@#' then "... FROM T AS A, T AS B
> > WHERE A.F1 = B.F1" must resolve correctly. If it doesn't, it's a bug.
> > The <, <=, >, >= operators might all produce some Unicode weirdness
> > under inadequate collations.
> >
> >
> > If time doesn't allow, then don't spend effort on this, it's simply a
> > curiosity. :)
> >
> > Thanks!
> > Ryan
> >
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
>


Re: [sqlite] SELECT becomes very slow when converted to UPDATE

2018-06-23 Thread Barry
Do I correctly understand that the intention of the UPDATE is that, for
each my_id in meta_table, it will store the count of all the hashes that
are associated only with that my_id and no other ids?

In that case, have you tried:

UPDATE
  meta_table
SET
  distinct_hashes = (
    SELECT
      COUNT(hash) AS num
    FROM
      temp.t_distinct_hashes_by_id d
    WHERE
      -- No need for any NOT NULL; it'll fail the equality comparison
      1 = (SELECT COUNT(my_id) FROM temp.t_distinct_hashes_by_id sub
           WHERE d.hash = sub.hash)
      AND
      meta_table.my_id = d.my_id
  )
;

Or you could try (although I doubt it'd be any faster):

distinct_hashes = (
    SELECT
      COUNT(*) AS num
    FROM (
      -- Technically not valid SQL, but SQLite allows you to select
      -- non-aggregate columns not included in the GROUP BY, and we're
      -- safe here because of HAVING COUNT(*) = 1.
      SELECT my_id
      FROM temp.t_distinct_hashes_by_id
      GROUP BY Hash
      HAVING COUNT(*) = 1
    ) d
    WHERE d.my_id = meta_table.my_id
  );

If my logic is correct both of these should work, and both avoid a SCAN
TABLE in your innermost loop.

Also, is it possible my_id and hash could be guaranteed NOT NULL in your
temp table? I'm not sure if that would speed up the query (although it
couldn't hurt), but it would certainly make the problem easier to think
about.

Can you share your overall goal? It looks like what you're after is
something like SELECT my_id, COUNT(*) FROM (SELECT my_id FROM  GROUP BY hash
HAVING COUNT(DISTINCT (hash, my_id)) = 1) GROUP BY my_id.
That doesn't work because SQLite doesn't like COUNT(DISTINCT ).

On Sat, 23 Jun 2018 at 23:44, Jonathan Moules 
wrote:

> Hi List,
> I'm trying to find all hashes that are unique to a specific id (my_id),
> and then use a UPDATE-Join to update another table with that number.
>
> After much tweaking, I've simplified the table down to a basic temp
> table (actually created using a CREATE AS SELECT ... GROUP BY my_id, hash):
>
> CREATE TEMP TABLE t_distinct_hashes_by_id (
>     my_id   INTEGER,
>     hash    TEXT
> );
>
> And indexes in both directions because I'm still trying to optimise (and
> see what SQLite wants):
>
> CREATE UNIQUE INDEX temp.idx__1 ON t_distinct_hashes_by_id (
>     hash,
>     my_id
> );
>
> CREATE UNIQUE INDEX temp.idx__2 ON t_distinct_hashes_by_id (
>     my_id,
>     hash
> );
>
> There are only 20 values for my_id, but several hundred thousand hashes.
>
> -
>
> I can do a SELECT query which gets what I want and runs in about 0.5
> seconds:
>
> SELECT
>     *
> FROM
>     temp.t_distinct_hashes_by_id d
> WHERE
>     hash NOT IN (SELECT hash FROM temp.t_distinct_hashes_by_id sub
>                  WHERE 1 != sub.my_id AND hash NOT NULL)
>     AND
>     1 = d.my_id
>
> The EXPLAIN:
> 0|0|0|SEARCH TABLE t_distinct_hashes_by_id AS l USING COVERING
> INDEX idx__2 (my_id=?)
> 0|0|0|EXECUTE LIST SUBQUERY 1
> 1|0|0|SCAN TABLE t_distinct_hashes_by_id AS sub
>
> -
> So in theory, I'd hope that an UPDATE version using this select should
> take around 20 * 0.5 sec = 10 seconds. But it's actually taking... well
> I don't know how long, at least 10 minutes before I gave up waiting.
> This is the UPDATE:
>
> UPDATE
>     meta_table
> SET
>     distinct_hashes = (
>         SELECT
>             COUNT(hash) AS num
>         FROM
>             temp.t_distinct_hashes_by_id d
>         WHERE
>             hash NOT IN (SELECT hash FROM temp.t_distinct_hashes_by_id sub
>                          WHERE meta_table.my_id != sub.my_id AND hash NOT NULL)
>             AND
>             -- This one has to be at the bottom for some reason.
>             meta_table.my_id = d.my_id
>     )
> ;
>
> The query plan for this UPDATE includes two CORRELATED Subqueries, which
> the docs say are reassessed on every run - that seems like the problem.
> I get that it'd need to do that 20 times (once per my_id), but the
> slowdown seems considerably longer than that needs to warrant.
>
> 0|0|0|SCAN TABLE meta_table
> 0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 0
> 0|0|0|SEARCH TABLE t_distinct_hashes_by_id AS l USING COVERING
> INDEX idx__2 (my_id=?)
> 0|0|0|EXECUTE CORRELATED LIST SUBQUERY 1
> 1|0|0|SCAN TABLE t_distinct_hashes_by_id AS sub
>
> Does anyone have any thoughts on how I can speed this up (SQLite 3.15.0
> and confirmed in 3.24.0 (which uses about 3 times the disk IO / sec for
> the same query/data))?
> Thanks,
> Jonathan
>
>
>
>

[sqlite] Fwd: PRAGMA writable_schema and schema_version: changes are ignored on the same connection

2018-06-20 Thread Barry
Trying again since this didn't go through the first time...

This started off as a bug report about the writable schema method
recommended in  https://www.sqlite.org/lang_altertable.html, but I realised
the problem is a bit broader.

It seems SQLite is ignoring manual changes to sqlite_master when combined
with increments to schema_version. The alter table page referenced above
does not mention this, and indeed has advice (to run an integrity_check)
that relies on different behaviour. Restarting SQLite ensures it has
re-read the schema. I did not check the behaviour of simultaneous
connections.

C:\Users\Me>sqlite3
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE TABLE Hi(ID INTEGER PRIMARY KEY);
sqlite> PRAGMA schema_version;
1
sqlite> PRAGMA writable_schema=ON;
sqlite> UPDATE sqlite_master SET SQL='CREATE TABLE Hi(ID INTEGER PRIMARY
KEY, value INTEGER)' WHERE name='Hi' AND type='Table';
sqlite> PRAGMA writable_schema=OFF;
sqlite> PRAGMA schema_version=2;
sqlite> INSERT INTO Hi VALUES (1, 4);
Error: table Hi has 1 columns but 2 values were supplied
sqlite> PRAGMA schema_version;
2
sqlite> PRAGMA schema_version=3;
sqlite> INSERT INTO Hi VALUES (1, 4);
Error: table Hi has 1 columns but 2 values were supplied
sqlite>

The context of this is:

I just tried to follow the second method (writable schema) advised on
https://www.sqlite.org/lang_altertable.html to alter a column and drop a
NOT NULL constraint.

Due to a typo I updated the SQL in sqlite_master to CREAT TABLE... instead
of CREATE TABLE.

This database still passed the PRAGMA integrity_check that I ran before
committing the transaction. However, when I closed the database then
reopened and executed a DDL statement it got all upset and told me that my
database schema is malformed.

I understand that writable_schema is a "You break it you buy it" type
situation, but why does the documentation recommend I run the PRAGMA
integrity_check if it won't pick up something so simple and obviously
erroneous as misspelling CREAT?

Reproduction example (this is 3.22, but as illustrated above 3.24 will give
the same results):

C:\Users\Me>sqlite3 test.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> CREATE Table foo(ID INTEGER PRIMARY KEY);
sqlite> PRAGMA schema_version;
1
sqlite> BEGIN;
sqlite> PRAGMA writable_schema=ON;
sqlite> UPDATE sqlite_master SET SQL = 'CREAT TABLE foo(bar INTEGER PRIMARY
KEY)' WHERE type='table' and name='foo';
sqlite> PRAGMA writable_schema=OFF;
sqlite> PRAGMA schema_version=2;
sqlite> PRAGMA schema_version;
2
sqlite> PRAGMA integrity_check;
ok
sqlite> COMMIT;
sqlite> .exit

C:\Users\Me>sqlite3 test.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> CREAT TABLE bar (ID INTEGER PRIMARY KEY);
Error: near "CREAT": syntax error
sqlite> CREATE TABLE bar (ID INTEGER PRIMARY KEY);
Error: malformed database schema (foo)
sqlite>


Re: [sqlite] An issue with System.Data.SQLite

2017-12-22 Thread Barry
Do you allow the entity framework to create your database, or do you
do it yourself?

You need text affinity on the columns in question. Then any numbers
will be stored as text, not numbers.

See:
https://sqlite.org/datatype3.html

Particularly:

3.1. Determination Of Column Affinity

The affinity of a column is determined by the declared type of the
column, according to the following rules in the order shown:

1) If the declared type contains the string "INT" then it is assigned
INTEGER affinity.

2) If the declared type of the column contains any of the strings
"CHAR", "CLOB", or "TEXT" then that column has TEXT affinity.
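As a quick illustration of rule 2 (a sketch using Python's stdlib sqlite3 module; the table and column names here are invented): a value like "1234" stays text in a TEXT-affinity column, but is converted in a column whose declared type gives it numeric affinity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(a TEXT, b NUMERIC);   -- TEXT affinity vs NUMERIC affinity
INSERT INTO t VALUES ('1234', '1234');
""")

# typeof() reports the storage class actually used for each value.
row = conn.execute("SELECT typeof(a), typeof(b), a FROM t").fetchone()
print(row)  # ('text', 'integer', '1234')
```

So declaring the entity-mapped columns with TEXT affinity should keep "1234" stored (and read back) as a string.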

On 21 December 2017 at 23:06, Алексей Люльченко
 wrote:
> Dear Sir!
>
> I would like to ask you to help me with an issue which appears while I use
> System.Data.SQLite database engine.
>
> I use .net framework 4.0.0, EntityFramework 6.2.0 and SQLite 1.0.106. I
> have an entity with properties of string type. This entity is mapped
> to my database table which has the same structure (columns names and data
> types). When I write and read back data to and from the database, all is OK
> while these columns have strings like "abcdef" or "123abc". But when I try
> to write a string like "1234" it is written to the database table like an
> int data type value. I know that it is a normal behavior and it is not an
> issue (https://sqlite.org/faq.html#q3). But I get the
> "System.InvalidCastException" when I try to read back the row from the
> database to my entity because the datatypes are different (the entity has
> the property of string type but the database table now has the cell of int
> datatype).
>
> Please, give me the clues what is going wrong, if you can. I think it is a
> typical bug in many situations.
>
> Thank you in advance!
>
> Yours Faithfully,
>
> Aleksey Lulchenko, .Net developer at "TDabbat".
>  Saratov, Russia


Re: [sqlite] Connection open to nonexistent database succeeds! WTF?

2018-03-31 Thread Barry
It looks like you're using C# and System.Data.SQLite.

If that's the case, use the connection string parameter FailIfMissing=True
to throw an exception if the file does not exist.
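The C-API behaviour Jens mentions below comes from omitting SQLITE_OPEN_CREATE; for comparison, here is a rough equivalent using Python's stdlib sqlite3 module with a URI filename (the database name here is made up):

```python
import sqlite3

# mode=rw opens read/write but refuses to create a missing file,
# analogous to opening without SQLITE_OPEN_CREATE in the C API.
try:
    sqlite3.connect("file:no_such_database.db?mode=rw", uri=True)
    opened = True
except sqlite3.OperationalError:
    opened = False
print(opened)  # False
```

With the default connect() (no URI), the missing file would silently be created instead.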

On 1 April 2018 at 02:34, Jens Alfke  wrote:

>
> > On Mar 31, 2018, at 8:17 AM, Mike Clark 
> wrote:
> >
> > Is this expected behavior?
>
> Yes. If the database file doesn’t exist, opening it will create it.
> (That’s how you create new databases.) There is a flag to sqlite3_open (in
> the C API) that prevents creating a file.
>
> —Jens
>


Re: [sqlite] Doing math in sqlite

2018-12-20 Thread Barry
The following will give you information pretty much as your example: car,
start date, end date, distance.

SELECT m1.Car, m2.Date AS StartDate, m1.Date AS endDate, m1.od_reading -
m2.od_reading AS distance
FROM mileage AS m1
JOIN mileage AS m2 ON m2.rowid = (SELECT rowid FROM mileage m3 WHERE
m3.Date < m1.Date AND m3.car = m1.car ORDER BY Date DESC LIMIT 1)
WHERE Distance IS NOT NULL;

You can add an additional "AND CAR ='foo'" if you only want data about foo.

Note: this uses the rowid, which is a SQLite specific feature, as a match
on the join because it's slightly more efficient. You could also have the
join condition stated in more general SQL terms as:
JOIN mileage AS m2 ON m2.car = m1.car AND m2.Date = (SELECT Date FROM
mileage m3 WHERE m3.Date < m1.Date AND m3.car = m1.car ORDER BY Date DESC
LIMIT 1)
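To make this concrete, here is the rowid-based version run over the sample data from the original post, sketched with Python's stdlib sqlite3 module (an ORDER BY and a car filter are added for a deterministic result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mileage(Date TEXT, car TEXT, od_reading INTEGER);
INSERT INTO mileage VALUES
  ('2018/04/15','foo',170644), ('2018/04/15','bar',69625),
  ('2018/04/22','foo',170821), ('2018/04/22','bar',69914),
  ('2018/04/29','foo',171006), ('2018/04/29','bar',70123),
  ('2018/05/06','foo',171129), ('2018/05/06','bar',70223),
  ('2018/05/09','foo',171178),
  ('2018/05/20','foo',171304), ('2018/05/20','bar',70517);
""")

# For each reading, join to the most recent earlier reading of the same car.
rows = conn.execute("""
SELECT m1.car, m2.Date AS StartDate, m1.Date AS EndDate,
       m1.od_reading - m2.od_reading AS distance
FROM mileage AS m1
JOIN mileage AS m2 ON m2.rowid = (
    SELECT rowid FROM mileage m3
    WHERE m3.Date < m1.Date AND m3.car = m1.car
    ORDER BY Date DESC LIMIT 1)
WHERE m1.car = 'foo'
ORDER BY m1.Date
""").fetchall()

distances = [r[3] for r in rows]
print(distances)  # [177, 185, 123, 49, 126]
```

Each output row is one interval for 'foo': start date, end date, and the odometer difference.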

On Thu, 20 Dec 2018 at 19:48, Jungle Boogie  wrote:

> On Thu 20 Dec 2018  6:26 PM, Barry Smith wrote:
> >
> >
> > > On 20 Dec 2018, at 4:21 pm, Jungle Boogie 
> wrote:
> > >
> > > Hi All,
> > >
> > > Some sample data:
> > > 2018/04/15,foo,170644
> > > 2018/04/15,bar,69625
> > > 2018/04/22,foo,170821
> > > 2018/04/22,bar,69914
> > > 2018/04/29,foo,171006
> > > 2018/04/29,bar,70123
> > > 2018/05/06,foo,171129
> > > 2018/05/06,bar,70223
> > > 2018/05/09,foo,171178
> > > 2018/05/20,foo,171304
> > > 2018/05/20,bar,70517
> > >
> >
> > I wouldn't call my solution elegant, but if you put a unique constraint
> on the "date" column, and want your query to return null if there is not a
> record exactly on that date:
> >
> > SELECT
> >(SELECT od_reading FROM mileage WHERE date = '2018/05/20' AND
> car='bar')
> >-
> >(SELECT od_reading FROM mileage WHERE date='2018/05/06' AND car='bar')
> > As ExactMileage
> >
> > You can rephrase that as:
> > SELECT end.od_reading - start.od_reading FROM mileage AS start, mileage
> AS end WHERE end.date='2018/05/20' AND start.date='2018/05/06' AND end.car
> ='bar' and start.car='bar'
> >
> > Or even you could use CTEs to repeat yourself less:
> > WITH OnlyBar AS SELECT * FROM mileage WHERE car='bar'
> > SELECT end.od_reading - start.od_reading FROM OnlyBar AS start, OnlyBar
> AS end WHERE end.date='2018/05/20' AND start.date='2018/05/06
> >
> > If you might query between two dates that don't necessarily have a
> record, the best you can do is give a range of what the possible driven
> distance may be:
> > SELECT
> >(SELECT MIN(od_reading) FROM mileage WHERE date >= '2018/05/20' AND
> car='bar')
> >-
> >(SELECT MAX(od_reading) FROM mileage WHERE date <= '2018/05/06' AND
> car='bar')
> > AS MaxPossible,
> >(SELECT MAX(od_reading) FROM mileage WHERE date <= '2018/05/20' AND
> car='bar')
> >-
> >(SELECT MIN(od_reading) FROM mileage WHERE date >= '2018/05/06' AND
> car='bar')
> > AS MinPossible
> >
> > The second query finds the records before and after the dates in
> question (or uses data on exactly that date) to figure out the minimum and
> maximum possible distances that may have been driven between the dates of
> interest.
> >
> > Note that there is a more efficient form of the second query, where one
> selects just od_reading and orders by date, with a limit of 1 instead of
> using the MIN and MAX functions; if your table is going to be large then
> that is a better option (with an index on (car, date), the value can then
> be computed with an index seek instead of a table scan). I used the min/max
> version because it's simpler and easier to read
> >
>
> Thanks all for for the helpful replies!
>
> All of them work, as expected, but I see I wasn't exactly clear with my
> second
> part of the question.
>
> Is it possible to determine how many miles were driven the previous week
> with the current week's data in one query?
>
> For example with foo:
> 2018/05/20 - 2018/05/09 = 126
> 2018/05/09 - 2018/04/29 = 172
> 2018/04/29 - 2018/04/22 = 185
>
> Basically some kind of for loop through all the matches of 'foo'
>
> > >
> > > Thanks,
> > > j.b.
>


Re: [sqlite] Support for System.Data.SQLite: Different API type for int/integer columns

2019-01-28 Thread Barry
I use Convert.ToInt32 (or whatever else is appropriate) instead of direct
casts when reading from System.Data.SQLite... I use a very similar schema
in an SQLServer and SQLite database, and Convert seems to handle many of
the differences without requiring different code for each database engine.

On Mon, 28 Jan 2019 at 03:31, Joe Mistachkin  wrote:

>
> JP wrote:
> >
> > The answers don't explain why there is a bitness difference at run-time
> > between the types retrieved from INT and INTEGER columns, and that's my
> > question. From reading https://sqlite.org/datatype3.html I understand
> > there should be no difference whatsoever between defining a column INT
> > or INTEGER (other than whether a primary key may become a rowid alias).
> >
>
> System.Data.SQLite maps type names to their associated types.  Mappings
> can be overridden on a per-connection basis.  By default, INT maps to a
> 32-bit integer and INTEGER maps to a 64-bit integer, mostly for reasons
> of backward compatibility.
>
> In order to customize the mappings, the flag UseConnectionTypes should
> be used when opening the connection -AND- the AddTypeMapping method is
> used to override a built-in mapping of the same name (e.g. "INT").
>
> --
> Joe Mistachkin
>
>


Re: [sqlite] Is this a bug?

2019-06-18 Thread Barry
Your nested select statement wants to return many rows. Technically I think
this is illegal SQL (?), but it looks like SQLite is being lax about this
as usual and doing what it normally does in this sort of situation: it uses
the first row the inner select happens to return.

Your statement is more or less equivalent to:
SELECT DISTINCT refVolume FROM cosSquared WHERE (refVolume,
CAST(10*cosSquared AS INT)) = (SELECT refVolume, 9 FROM cosSquared LIMIT 1)

The statement that will output the same list of refVolumes is:
SELECT DISTINCT refVolume FROM cosSquared WHERE CAST(10 * cosSquared AS
INT) = 9
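A quick check of that corrected statement on made-up data (Python's stdlib sqlite3 module; the cosSquared values below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cosSquared(refVolume INT, vecVolume INT, cosSquared REAL);
INSERT INTO cosSquared VALUES
  (2, 1, 0.91), (2, 2, 0.95),
  (3, 1, 0.99),
  (4, 1, 0.93);
""")

# CAST truncates toward zero, so every 0.9x value maps to 9.
rows = conn.execute("""
SELECT DISTINCT refVolume FROM cosSquared
WHERE CAST(10 * cosSquared AS INT) = 9
ORDER BY refVolume
""").fetchall()
print([r[0] for r in rows])  # [2, 3, 4]
```

Every refVolume whose cosSquared truncates to 9 is returned, not just the first one.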

On Tue, 18 Jun 2019 at 16:44, Roman Fleysher 
wrote:

> Dear SQLiters,
>
>
> I can not figure out what I am doing wrong. In testing, I simplified to
> the following:
>
> CREATE TABLE cosSquared(refVolume INT, vecVolume INT, cosSquared REAL);
>
> SELECT refVolume, CAST(10*max(cosSquared) AS INT) FROM cosSquared GROUP BY
> refVolume;
>
> refVolume   CAST(10*max(cosSquared) AS INT)
> --  ---
> 2   9
> 3   9
> 4   9
> 5   9
> .
> 31  9
> 32  9
> 33  9
>
> That is, we see that for refVolumes between 2 and 33, the value of the
> CAST() is always 9. Thus, I expect the following statement to output the
> same list of refVolumes. But it does not:
>
> SELECT DISTINCT refVolume FROM cosSquared
>  WHERE (refVolume, CAST(10*cosSquared AS INT)) = (SELECT refVolume, 9 FROM
> cosSquared);
>
> refVolume
> --
> 2
>
> What am I doing wrong? I am using version 3.16.
>
> Thank you for your help,
>
> Roman
>
>


Re: [sqlite] Talos Security Advisory for Sqlite3 (TALOS-2019-0777)

2019-05-08 Thread Barry
Dr. Hipp, you're sending your replies to the mailing list as well as your
intended recipient. Not sure if this is intended?

On Wed, 8 May 2019 at 08:02, Richard Hipp  wrote:

> On 5/8/19, Richard Hipp  wrote:
> > On 5/8/19, Regina Wilson (regiwils)  wrote:
> >>
> >> Here’s a copy of the report.
> >
> >
> > Thanks!  Is the "poc" file available for our inspection too?
>
> If you want to keep the "poc" encrypted, you can log in at
> https://sqlite.org/secure/upload as user "talos" with password
> "6diMu23YpNW" and upload the file to us that way.  If you do this,
> please send us a separate email so that we will know to go retrieve
> the file.
>
> Security note: The password in the previous paragraph only permits
> writing, not reading.  The only purpose of the password is to prevent
> people from spamming us with extraneous uploads.  Knowledge of the
> password does not enable an attacker to access the uploaded file. In
> other words, the password is not a security mechanism - it is an
> anti-spam mechanism.
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Barry
If you use the SQLite backup API to 'backup' (save) the in-memory database
to the disk database, an atomic transaction is used to write the data to
the destination database. This means that all of SQLite's usual guarantees
apply: if the operation completes, the disk database will contain a
complete, consistent copy of the in-memory database; if the operation does
not complete, the next program to open the disk database will see the hot
journal, roll it back, and see the complete, consistent version of the
on-disk database from before you started the backup procedure.
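Sketched with Python's stdlib sqlite3 module, whose Connection.backup wraps that same backup API (the file name and schema here are invented):

```python
import os
import sqlite3
import tempfile

# Build an in-memory database, then save it to disk with the backup API.
mem = sqlite3.connect(":memory:")
mem.executescript("""
CREATE TABLE doc(k TEXT PRIMARY KEY, v TEXT);
INSERT INTO doc VALUES ('title', 'draft');
""")

path = os.path.join(tempfile.mkdtemp(), "saved.db")
disk = sqlite3.connect(path)
mem.backup(disk)   # copies all pages inside a transaction on `disk`
disk.close()

# Re-open the file to confirm a complete, consistent copy was written.
check = sqlite3.connect(path)
restored = check.execute("SELECT v FROM doc WHERE k='title'").fetchone()
print(restored)  # ('draft',)
```

If the process died mid-backup, the destination's journal would roll the file back to its previous consistent state on the next open.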

Note that if the database and its journal are separated, or the journal is
deleted, then this guarantee is lost and the database is corrupt. In this
way the backup API isn't much better than your other idea of copy then save.

If performance is important, this is one of the few cases where you will be
much better off using a rollback journal than a WAL journal: with a
rollback journal the final COMMIT merely consists of deleting the journal,
whereas in WAL mode all changes must be written multiple times.

If performance is critical and you intend to overwrite, you can VACUUM INTO
a new database on disk, then use OS calls to replace the original database.
I'm pretty sure this would be quicker than the backup API, but you'd have
to implement custom rollback logic, and you'd have to check whether an OS
rename-and-overwrite is atomic on your platform. I'm not sure on that last
point; if someone on the list knows I'm wrong, please correct me.

You mentioned wanting to just be able to track the changes to the in-memory
database, and commit them to disk when the user presses save. SQLite has an
extension to do the hard work of this for you:
https://www.sqlite.org/sessionintro.html . You still get transactions with
all of your atomic guarantees of an always consistent database.

--

Off topic: There are many good reasons to use an in-memory SQLite database
for what ardi wants to do. Some of them are:
 - The biggest: You get to query your data using SQL
 - You get constraint enforcement
 - You get indices
 - You don't need to write much code to differentiate between working on
the in-memory db copy or an on-disk database (or indeed another db engine)
 - Client expectations of 'open' and 'save' behaviour prevent you from
writing directly to the on-disk database.

For performance reasons I've had to write custom code where I have changing
data that I need to keep 'indices' on. I found it very difficult to get it
right, and very fragile. If I didn't need the performance, I would much
rather have used an in-memory DB (sadly it just wasn't fast enough).

On Sun, 14 Jul 2019 at 22:26, Jens Alfke  wrote:

>
> > On Jul 14, 2019, at 10:05 AM, ardi  wrote:
> >
> > Do you have any recommendation for saving the inmemory db in a safe
> > way?
>
> If you’re going to keep your data in memory, there’s no good reason to use
> SQLite at all. Just define custom model objects that you can operate on
> efficiently, and write the data out in a standard format like JSON or XML.
> (Most frameworks have serialization features that make this easy.)
>
> The point of using a database is that you _don’t_ have all the data in
> memory, and can still access it quickly. This is attractive if you want to
> scale to large data sets that won’t fit in RAM or which take too long to
> read/write from storage. If that’s not an issue for you, don’t use a
> database; it just adds more complexity to your data model layer.
>
> —Jens
>


Re: [sqlite] System.Data.SQLite.EF6 feature request: Timespan support

2019-08-19 Thread Barry
Hi Simon,

The System.Data.SQLite team (System.Data.SQLite is basically a .NET wrapper
for SQLite, and an official part of the SQLite project) maintains a library
called System.Data.SQLite.EF6, which I understand to be the glue between
Entity Framework and SQLite. If my understanding is correct, this is the
library that should be doing the TimeSpan <-> (number, string, or however
you want to store it in the database) conversion.

Cheers,

 - Barry

On Mon, 19 Aug 2019 at 16:40, Simon Slavin  wrote:

> On 20 Aug 2019, at 12:19am, Barry  wrote:
>
> > Would it be possible for a future version of SQLite to support TimeSpan
> mapping in the entity framework?
>
> This is not an aspect of SQLite itself, but of how Entity Framework 6
> talks to SQLite.  The development team who could affect this change are
> those who maintain Entity Framework.  Perhaps you could contact them.
>
> I'm not an expert on it, but I see no reason why Entity Framework could
> not see a '.Time' and know that SQLite should store it as text or a number,
> making appropriate conversions when needed.
>


[sqlite] System.Data.SQLite.EF6 feature request: Timespan support

2019-08-19 Thread Barry
A project I work on runs mainly on SQL Server, but there are times we want
to be able to create the same data model in SQLite for portability.

We mostly use EF6 to access the database; we have a code-first approach and
write custom scripts to generate the database schema.

We have a custom ConnectionFactory to determine whether a new database
context should use a SQLiteConnection or SQLServerConnection. (Yes it's
deprecated but it works...)

This was working well until we added a TimeSpan field to one of our
entities. Now we get an exception on model creation: There is no store type
corresponding to the EDM type 'Edm.Time' of primitive type 'Time'.

I know it would be possible for us to store the Ticks or TotalSeconds of
the TimeSpan in a SQLite INTEGER or REAL field, but we've already got
deployments using the TimeSpan field on SQLServer and would rather avoid
having to change that schema.

Would it be possible for a future version of SQLite to support TimeSpan
mapping in the entity framework?

Thanks in advance,

 - Barry Smith


Re: [sqlite] [EXTERNAL] sqlite3_exec without ubiqitous text conversions

2019-07-30 Thread Barry
Why do it this way?

Why not write your own custom_sqlite3_exec(...) that uses the standard,
stable, documented interfaces?

custom_sqlite3_exec(...) could call prepare / step / finalize, and use the
standard sqlite3_column_* interfaces to fill a result array. This would be
very little work and could be a drop in replacement of sqlite3_exec. As a
bonus, you can have a proper signature of void* for your callback function
(since it's no longer receiving an array of strings, char** is a lie which
will confuse anyone working on the project in future). Another
maintainability bonus: The name will alert any consumers that this isn't
part of the standard sqlite3 interface.

An assumption you've made may be incorrect: You say that callers know what
data type to expect so that they know how to cast the results (I assume
this is based on the declared column affinity?), but SQLite is not a
strictly typed database. So the actual data might not match what they're
expecting, and with the route you've chosen, such errors may surface quite
far from their source.

A sane improvement to the interface would be for the consumer of
custom_sqlite3_exec(...) to pass in an array of types that they are
expecting; checking that the types match inside of your custom function
would be fairly trivial (and performant) and you could have properly
defined behaviour for what happens if the types from the database don't
match the expected types. Some ideas are: use SQLite type coercion (just
call sqlite3_column_int even if it's a string); fail; or skip the record.
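The suggestion above - an exec variant that validates the caller's expected types - can be sketched outside C as well. Here is a rough Python equivalent using the stdlib sqlite3 module (typed_exec is a made-up name, and "fail on mismatch" is the behaviour chosen here):

```python
import sqlite3

def typed_exec(conn, sql, expected_types, params=()):
    """Run a query and verify each value matches the caller's expected
    type, so mismatches fail right at the source instead of far away."""
    rows = []
    for row in conn.execute(sql, params):
        for value, expected in zip(row, expected_types):
            # NULLs are allowed through; anything else must match.
            if value is not None and not isinstance(value, expected):
                raise TypeError(f"expected {expected.__name__}, got {value!r}")
        rows.append(row)
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b')")
rows = typed_exec(conn, "SELECT id, name FROM t", (int, str))
print(rows)  # [(1, 'a'), (2, 'b')]
```

If a row ever contained, say, a real where an integer was expected, the TypeError is raised during the exec call itself.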

On Tue, 30 Jul 2019 at 07:40, Barone Ashura  wrote:

> I really wish to keep the argument polite and constructive, so please don't
> get me wrong if I reply. I just want to understand what you are referring
> to, and to realize and evaluate which is the best way to go for me, not for
> the sqlite library - that's why I am writing to the sqlite mailing list.
>
> Il giorno mar 30 lug 2019 alle ore 15:50 Hick Gunter  ha
> scritto:
>
> > f) There are exactly 2 documented functions in your code. Did you not
> read
> > their documentation???
> >
> > See https://sqlite.org/c3ref/column_blob.html
> >
> > " After a type conversion, the result of calling sqlite3_column_type() is
> > undefined, though harmless. Future versions of SQLite may change the
> > behavior of sqlite3_column_type() following a type conversion."
> >
>
> which type conversion are you referring to? the second one erroneously
> called in the example code? I do understand the 'type conversion' to be one
> of the six functions listed in the page you linked (which was read).
> The statement before the one you quoted specifically says:  "The return
> value of sqlite3_column_type() can be used to decide which of the first six
> interface should be used to extract the column value". Isn't this exactly
> what I want to be doing? Read the column type from the statement, according
> to the return value call the relevant extraction function. The
> documentation describes 'automatic conversions' being performed if I am
> trying to extract a datatype that is different from the internal datatype.
> But this is exactly what I want to stay away from, unless, of course I am
> missing something, which I would very like understand.
>
>
> > b) Breaking encapsulation includes referencing internal .h files in your
> > own code. You should be using sqlite3.h *only*
> >
>
> I am not referencing ANY internal.h file in my own code. I am just using
> the amalgamation distribution, and I am writing code in sqlite.c, not in my
> own source files, so nobody outside of sqlite.c calls any function or uses
> any datatype that is not declared in sqlite.h.
>
>
>
> > You do realise that declared types are not enforced in SQLite? If somehow
> > a real with a non-integer value got inserted into an integer field, the
> > type of the returned value would still be real, but you would be
> > expecting integer. And silently reading the bits of the double value as
> > an integer would probably cause a practically undetectable error.
> >
>
> True, agreed. This is the reason why the type of each value being inserted
> into, updated in, and read from the database is strictly enforced outside
> of sqlite. To state it clearly: SQLite might not enforce declared types,
> but the application around sqlite has been specifically designed to
> enforce them.
>
>
> >
> > c) There is no guarantee that the internal fields referring to a previous
> > state of a Mem structure are preserved across conversions.
> >
>
> Why do you refer to a "previous state of a Mem structure"? The code is
> being executed inside a custom sqlite3_exec function, which mirrors the
> behaviour of the provided sqlite3_exec function, except for the conversions
> to/from text (which return the pointer to the internal data type, without
> any conversion in the case of integer and float values), WAY before calling
> the statement finalization. The callback itself that must be provided 

Re: [sqlite] Is WAL mode serializable?

2019-07-09 Thread Barry
Serializable isolation requires that 'the database ends up in the state it
would appear in if every transaction were run sequentially'. It doesn't
actually require that all transactions occur sequentially, just that the
database ends up in a state as though they had.

Why do you think that SQLite's system of denying a read transaction the
ability to escalate to write if the data it has read has been modified by
another transaction violates that contract?

On Tue, 9 Jul 2019 at 20:39, Keith Medcalf  wrote:

>
> On Tuesday, 9 July, 2019 20:34, Andy Bennett 
> wrote:
>
> >However, the wording for WAL mode is confusing me.
>
> >isolation.html says 'all transactions in SQLite show "serializable"
> >isolation.' but it also says 'In WAL mode, SQLite exhibits "snapshot
> >isolation"'.
>
> >Snapshot Isolation and Serializable often (in other engines) mean
> >different things at commit time (
> >https://blogs.msdn.microsoft.com/craigfr/2007/05/16/serializable-vs-
> >snapshot-isolation-level/
> >), but SQLite seems to say that the snapshot isolation is upgraded to
> >serializable by forbidding readers to upgrade to writers if another
> >writer got in before them:
>
> >'The attempt by X to escalate its transaction from a read transaction
> >to a write transaction fails with an SQLITE_BUSY_SNAPSHOT error because
> >the snapshot of the database being viewed by X is no longer the latest
> >version of the database.'
>
> >So far, so good.
>
> >However. the definition of SQLITE_BUSY_SNAPSHOT at
> >https://sqlite.org/rescode.html#busy_snapshot says:
>
> >-
> >1. Process A starts a read transaction on the database and does one
> >   or more SELECT statements. Process A keeps the transaction open.
>
> >2. Process B updates the database, changing values previously read by
> >   process A.
>
> >3. Process A now tries to write to the database. But process A's view
> >   of the database content is now obsolete because process B has modified
> >   the database file after process A read from it. Hence process A gets
> >   an SQLITE_BUSY_SNAPSHOT error.
> >-
>
> >In particular 'Process B updates the database, changing values
> >previously read by process A.' seems to suggest that values read
> >by A have to be changed to effect the SQLITE_BUSY_SNAPSHOT
> >error in A.
>
> SQLite does not track individual "rows" or "tables" being changed.  In (2)
> "changing values previously read by process A" means the database has
> changed since process A's view of the database was created.  What exactly
> changed is immaterial, since tracking of changes only occurs at the "whole
> database" level.
>
> >Is that last quote just imprecise writing or is there really a
> >difference between SQLite's Snapshot Isolation in WAL mode and
> >its Serializable isolation in ROLLBACK mode?
>
> Yes, there is a difference.  In rollback mode, a transaction merely places
> a shared lock on the database preventing any changes whatsoever from being
> committed to the database.  All outstanding transactions must be "closed"
> before a commit may proceed (acquire an EXCLUSIVE lock on the entire
> database including all its rows in all tables).  Thus "changes" to the
> database are "serialized" because they can only occur in series and only
> when nothing else is "looking" (since looking requires at least a read
> lock, which will prohibit the update from being committed).
>
> WAL mode however, is different.  When you commence a "read transaction" in
> WAL mode you are creating a *snapshot* of the database at that time
> (actually, the reference to *snapshot* is incorrect, the actual isolation
> level is "repeatable read").  "repeatable read" markers can be thought of
> as "timestamps"  -- that is, a process holding a "repeatable read" lock
> against the database can see the database as it was up to the time its
> "timestamp" was created (when the lock was obtained).  It cannot see
> changes that are made *after* that time.  If another process starts from
> that same "repeatable read" timestamp and then commits changes, the
> original process is still holding a "repeatable read" lock on the database
> as it existed BEFORE the change was committed.  Thus it cannot update the
> database, because it is seeing a "historical" version of the database and
> not the "current" view -- that is, it is looking at the database as it was
> at the time it obtained its "repeatable read" timestamp, and cannot be
> permitted to make changes since it is looking at an "old view" that does
> not represent an updateable state of affairs.
>
> In effect WAL is a "stack of changes" to be applied to the database file.
> When a read lock is obtained, the position in the stack is remembered.  If
> something else adds more data to the stack, then the original locker is no
> longer "top of stack" and cannot write changes to the database because it
> cannot "see" the changes made after it entered repeatable read isolation.
>
> see 

Re: [sqlite] wal mode

2019-12-08 Thread Barry
Even in rollback journal mode, it is not universally safe to use normal
file operations on a SQLite database. See section 1.3 and 1.4 of
https://www.sqlite.org/howtocorrupt.html

If you want to use normal file system operations (or any type of
manipulation not using the SQLite library) on an SQLite database, the most
bombproof method is to take a backup first, then perform whatever file
system operations you want on the backup. This way, you can be sure that
(a) no process is connected to the backup (because it has a different
filename) and (b) there are no hot journals associated with the backup.

You can either use the backup API from an application, or the .backup
command from the SQLite command line interface (CLI). From what you say,
wanting to use scp, it sounds like a manual process or shell script; the
CLI would likely be the easiest choice.
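For a scripted workflow, the online backup API is also exposed by Python's stdlib sqlite3 module (Python 3.7+). A minimal sketch with illustrative file names:

```python
import sqlite3, os, tempfile

workdir = tempfile.mkdtemp()
live_path = os.path.join(workdir, "live.db")      # the database other processes may be using
backup_path = os.path.join(workdir, "backup.db")  # safe to scp/copy: nothing else opens it

# create a small "live" database to back up
live = sqlite3.connect(live_path)
live.execute("CREATE TABLE t(x)")
live.execute("INSERT INTO t VALUES (42)")
live.commit()

# take a consistent snapshot using the online backup API
dest = sqlite3.connect(backup_path)
with dest:
    live.backup(dest)   # copies the whole database, honouring SQLite's locking
dest.close()
live.close()

check = sqlite3.connect(backup_path)
rows = check.execute("SELECT x FROM t").fetchall()
print(rows)  # [(42,)] - the backup contains the committed data
```

From a shell script, `sqlite3 live.db ".backup backup.db"` achieves the same thing with the CLI, and the resulting file can then be copied with scp freely.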

On Sat, 7 Dec 2019 at 04:39, MM  wrote:

> On Fri, 6 Dec 2019 at 19:06, Simon Slavin  wrote:
>
> > On 6 Dec 2019, at 6:39pm, MM  wrote:
> >
> > > So it suffices that I run "PRAGMA journal_mode=WAL;" once from, say,
> > > the sqlite3 CLI, and all future connections from any tool will use WAL
> > > mode for this database file?
> >
> > Correct.
> >
> > > What happens when 2 processes that have had their connection open for a
> > while, attempt a UPDATE or INSERT INTO statement at the same time?
> >
> > It depends on which connections have a timeout set:
> >
> > 
> >
> > It is normal to specify a timeout of 10 seconds (or even 10 minutes) for
> > every connection you open.  This means that a process which finds the
> > database locked will enter a delay/retry loop for up to that amount of
> time
> > before returning SQLITE_BUSY.
> >
> > However, note that the default timeout is zero.  Which means that if you
> > don't set a timeout on a connection, and it encounters a locked database,
> > it will immediately return SQLITE_BUSY without retrying.
> >
> > [The above explanation is simplified for clarity.]
> > ___
> >
>
> Thank you.
> After having set WAL mode, on top of my db file there will be 2 extra
> files? At all times?
> Up until now, I used to scp the db file to another host, and sometimes work
> on the db there and then copy back the db to the main host when no
> processes are running.
> Do I now simply copy the 3 files? The db file, and the 2 others?
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] design problem involving trigger

2015-08-23 Thread Barry Smith
Could this not be achieved by two indexes: one partial and one complete?

CREATE UNIQUE INDEX idx_books1 ON Books(title, author);

CREATE UNIQUE INDEX idx_books2 ON Books(title) WHERE author ISNULL;

To save space and (maybe) time, you could put a 'WHERE author NOTNULL' on the 
first index.
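As a sketch of how this pair of indexes behaves (using the space-saving NOT NULL variant mentioned above, spelled with the equivalent IS NULL / IS NOT NULL operators; column types are assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Books(title TEXT NOT NULL, author TEXT)")
# (title, author) must be unique whenever an author is present...
con.execute("CREATE UNIQUE INDEX idx_books1 ON Books(title, author) "
            "WHERE author IS NOT NULL")
# ...and an authorless title may appear only once
con.execute("CREATE UNIQUE INDEX idx_books2 ON Books(title) WHERE author IS NULL")

con.execute("INSERT INTO Books VALUES ('Dune', 'Herbert')")
con.execute("INSERT INTO Books VALUES ('Dune', NULL)")      # fine: first authorless 'Dune'
try:
    con.execute("INSERT INTO Books VALUES ('Dune', NULL)")  # second authorless 'Dune'
    outcome = "accepted"
except sqlite3.IntegrityError:
    outcome = "rejected"
print(outcome)  # rejected
```

The second NULL-author row is rejected by the partial index, which is exactly the case a plain UNIQUE(title, author) constraint lets through (NULLs compare distinct).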

Of course, I'm just talking about how to code it; the issues mentioned by R 
Smith are a different kettle of fish.

Cheers,

Barry


> On 23 Aug 2015, at 3:05 am, "R.Smith"  wrote:
> 
> 
> 
>> On 2015-08-21 11:23 PM, Will Parsons wrote:
>>> On 21 Aug 2015, R.Smith wrote:
>>> 
>>>> On 2015-08-21 04:47 AM, Will Parsons wrote:
>>>> I'm working on a program that involves a catalogue of books.  Part of
>>>> the database schema looks like this:
>>>> 
>>>> create table Books(id integer primary key,
>>>> title text collate nocase not null,
>>>> author references Authors(id),
>>>> ...
>>>> unique(title, author));
>>>> 
>>>> create table Authors(id integer primary key,
>>>>   name text unique not null check(name <> ''));
>>>> 
>>>> The idea here is that the title+author of a book must be unique, but a
>>>> book may not necessarily have an author associated with it.  But, the
>>>> schema fragment as I have presented it does not disallow entering the
>>>> same title with a null author multiple times, which is clearly
>>>> undesirable.
>>>> 
>>>> In thinking about how to avoid this, one possibility that has occurred
>>>> to me is to add an entry to the Authors table with id=0, name=NULL, and
>>>> to modify the Books table to be:
>>>> 
>>>> create table Books(id integer primary key,
>>>> title text collate nocase not null,
>>>> author references Authors(id) not null,
>>>> ...
>>>> unique(title, author));
>>>> 
>>>> With this, entries in the Books table that presently have the author
>>>> field set to NUll would instead have author=0.
>>>> 
>>>> What I would like to have is a trigger that when an attempt is made to
>>>> enter a new record into the Books table with a NULL author field, is
>>>> to force the author field to 0 instead.  I can't see how to do this
>>>> with a "before insert" trigger, though.  Perhaps I'm approaching this
>>>> the wrong way; in any case I'd appreciate advice.
>>> Nothing wrong with your approach, simply declare the Author to be NOT
>>> NULL and if it is possible to have non-Authors you could do what you
>>> suggest, add a 0-Author to the parent table, or, you could use a partial
>>> Index (available after SQLite 3.7 only I think) which would allow you to
>>> have NULLS but still maintain a strict relation, something like this:
>>> 
>>> create table Books(id integer primary key,
>>> title text collate nocase not null,
>>> author int not null references Authors(id),
>>> ...
>>> );
>>> 
>>> create unique index uBookAuth on Books(title,author) where author is not 
>>> null;
>>> 
>>> create table Authors(id integer primary key,
>>>   name text unique not null check(name <> ''));
>> I guess I don't understand how having a partial index where author is
>> not null prevents adding two entries with the same title and a null
>> author.  How is this different from what I have now with the
>> "unique(title, author)" specification?
> 
> I think I may have misinterpreted slightly...
> 
> To clarify: are the book titles unique or are they not?
> 
> If they are Unique, i.e if no book title can ever appear twice, regardless 
> who authored it, then simply declare a unique index on title.
> If they are NOT Unique, i.e. if a title can appear more than once, then why 
> is it not working for you?
> 
> Essentially, if I interpret correctly, you are asking the DB to NOT limit the 
> number of same-titled books, except when you have supplied an author and the 
> same author already has such a titled book - but then, when you don't supply an 
> author, it should know to now also limit the copies of no-author books?
> 
> I assumed before that you only added NULL for author if you don't know the 
> author yet (which wo

[sqlite] Under what circumstances can a table be locked when the database is first opened?

2015-05-23 Thread Barry Smith
Hi,

Unless you are using shared cache, SQLite does not lock at a per-table level - 
it only locks the entire database.

Under what circumstances are you trying to access the database both times? Are 
these multiple connections within the same process, or are you shutting down 
the process and then restarting? Do you have shared cache enabled?

Process Explorer is a Windows tool that will tell you which process has open 
handles on a particular file. I found it a right pita to use though.

Cheers,

Barry

> On 23 May 2015, at 1:42 pm, "Keith Medcalf"  wrote:
> 
> 
> 1)  Something else has the database open and locked.
> 2)  You are using Shared Cache 
> 3)  Something forgot to finalize a select
> 4)  The database is stored on a non-locally-attached filesystem
> 5)  An issue in the version of SQLite you are using (and you did not say 
> which version you are using)
> 6)  A buggy filesystem driver (you did not say which one you are using)
> 7)  Badly designed antivirus software
> 8)  Badly designed file synchronization software (for example, storing the 
> database in a directory that is being monitored and synced by badly designed 
> software (Dropbox, for example)).
> 
>> -Original Message-
>> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
>> bounces at mailinglists.sqlite.org] On Behalf Of david at andl.org
>> Sent: Friday, 22 May, 2015 21:13
>> To: 'General Discussion of SQLite Database'
>> Subject: [sqlite] Under what circumstances can a table be locked when the
>> database is first opened?
>> 
>> Question: Under what circumstances can a table be locked when the database
>> is first opened?
>> 
>> My program does:
>> 
>> DROP TABLE IF EXISTS
>> CREATE TABLE
>> INSERT INTO (multiple times)
>> SELECT * (for each row)
>> 
>> Run it once and it works perfectly. Run it twice and the DROP TABLE
>> triggers
>> the error:
>> 
>> SQLITE_LOCKED, database table is locked
>> 
>> Delete the database and run it again and it works. Just once.
>> 
>> Nothing in the documentation tells me how a table can be locked when the
>> database is first opened. I'm using the raw C interface on Windows, so
>> what
>> can I be doing wrong?
>> 
>> The code is actually written in C#, but uses Interop to call the C API
>> directly. The database open code looks like this. No open flags are used.
>> 
>> LastResult = (Result)sqlite3_open(path, out _dbhandle);
>> 
>> Regards
>> David M Bennett FACS
>> 
>> Andl - A New Data Language - andl.org
>> 
>> 
> 
> 
> 


[sqlite] I don't understand how to use NOT EXISTS

2015-09-16 Thread Barry Smith
If you have a unique index on name, you could use INSERT OR IGNORE. 
https://www.sqlite.org/lang_conflict.html

INSERT OR IGNORE INTO TAGS (NAME, COUNT) VALUES ('Bleh', 1)

As for your original query: think about just the SELECT clause (you can run it 
independently). It will return ('magnetohydrodynamics', 1) once for each row in 
the table. 

If you did not want to use INSERT OR IGNORE, you could put a LIMIT 1 in there, 
or rephrase your query to not use TAGS in the outer SELECT (note: you never 
actually reference anything from the TAGS named in the outer SELECT).
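To make the failure mode concrete, here is a sketch of both forms against an in-memory database (the table layout is reconstructed from the message; the unique index on NAME is the assumption that INSERT OR IGNORE relies on):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TAGS (id INTEGER PRIMARY KEY, NAME TEXT, COUNT INTEGER)")
con.executemany("INSERT INTO TAGS (NAME, COUNT) VALUES (?, ?)",
                [("black metal", 3), ("occult rock", 3), ("beer", 0)])

# The problematic form: the SELECT produces one ('pizza', 0) row per row
# of TAGS, so the missing tag is inserted once per pre-existing row.
con.execute("INSERT INTO TAGS (NAME, COUNT) SELECT 'pizza', 0 FROM TAGS "
            "WHERE NOT EXISTS (SELECT * FROM TAGS WHERE NAME = 'pizza')")
dup = con.execute("SELECT COUNT(*) FROM TAGS WHERE NAME = 'pizza'").fetchone()[0]

# With a unique index in place, INSERT OR IGNORE inserts at most once.
con.execute("DELETE FROM TAGS WHERE NAME = 'pizza'")
con.execute("CREATE UNIQUE INDEX tags_name ON TAGS(NAME)")
con.execute("INSERT OR IGNORE INTO TAGS (NAME, COUNT) VALUES ('pizza', 0)")
con.execute("INSERT OR IGNORE INTO TAGS (NAME, COUNT) VALUES ('pizza', 0)")
once = con.execute("SELECT COUNT(*) FROM TAGS WHERE NAME = 'pizza'").fetchone()[0]
print(dup, once)  # 3 1
```

With three pre-existing rows the NOT EXISTS form inserts three duplicates, matching the five duplicates seen in the original five-row table.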

> On 15 Sep 2015, at 1:06 pm, Nicolas Jäger  
> wrote:
> 
> hi,
> I have a table TAGS with idkey and two colums (NAME, COUNT):
> 
> id|NAME|COUNT
> 53|black metal|3
> 55|occult rock|3
> 61|satanic smurfs|1
> 62|beer|0
> 63|pizza|0
> 
> I want to check if some tag exist by checking if `NAME` is recorded
> in the table or not. If not, I want to add it;
> 
> INSERT INTO TAGS ( NAME, COUNT ) SELECT 'magnetohydrodynamics', 1
> FROM TAGS WHERE NOT EXISTS (SELECT * FROM TAGS WHERE NAME =
> 'magnetohydrodynamics' );
> 
> then if I look up in the table I see:
> 
> id|NAME|COUNT
> 53|black metal|3
> 55|occult rock|3
> 61|satanic smurfs|1
> 62|beer|0
> 63|pizza|0
> 64|magnetohydrodynamics|1
> 65|magnetohydrodynamics|1
> 66|magnetohydrodynamics|1
> 67|magnetohydrodynamics|1
> 68|magnetohydrodynamics|1
> 
> could you tell me please where I did some mess ?
> 
> regards,
> Nicolas J.


[sqlite] database permanently locked

2006-12-02 Thread Max Barry

My database is permanently locked, and I've spent two fruitless days
trying to unlock it.

The problem:

$ sqlite trac.db
SQLite version 3.3.6
Enter ".help" for instructions
sqlite> .databases
Error: database is locked

I believe this was caused by something bad happening in Trac 0.9.6. More
than that I don't know.

What I've tried so far:

* Killing every process I own
* Getting the machine owner to restart Apache
* Using lsof to verify that no process I own is locking trac.db, or
using any SQL libs
* Moving/renaming the trac.db file

Because it's a shared box I don't have root access and can't reboot it.
What else can I try?

Thanks for any help,

Max.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] database permanently locked

2006-12-03 Thread Max Barry

Christian Smith wrote:
> Max Barry uttered:
> 
> My database is permanently locked, and I've spent two fruitless days
> trying to unlock it.
> 
> You haven't said what sort of box this is. I guess a generic Unix. If Linux, 
> you'll probably have fuser installed. Run fuser against the database file, 
> and it'll tell you the PID of any process that has the file open. I'm not 
> sure if other Unix have fuser installed by default.

Sorry, yes: it's a Linux box with a 2.4.32 kernel.

> If the file is owned by a trac processs, you can kill it using "fuser -k", 
> assuming you have permission.

Unfortunately fuser doesn't show any process holding a lock, and I've
already tried killing every process I have permission to.

I suspect that fuser might not show me the answer because I'm non-root.
Certainly ps, top, etc, only show processes I own. I've asked the box
owner to run 'fuser' for me and see if that shows something.

drh wrote:
> Is the database on an NFS filesystem.  The locking is busted
> on some (many?) implementations of NFS, resulting in behavior
> like shown above.

Yep, it's NFS. The line from /etc/fstab is:

10.3.100.108:/vol/boot/spunky/maitland  /home/.maitland nfs
defaults,intr,vers=3,bg,rsize=8192,wsize=8192   0 0

> Your work-around is to copy the database to a local filesystem
> that actually supports posix advisory locks.

Ah. Except I don't think I can do that, because I don't have root on
this box.

I discovered I could make a copy of the trac.db file and edit my
trac.ini file to make point to that instead ("database =
sqlite:db/trac2.db"). Ta da, no more lock! So at least my site is usable
again.

But it sounds like this will happen again with my current configuration.
Is there anything I can do as non-root, or do I need to just be prepared
for lots of db copying?

Thanks very much to drh and Christian for the fast replies!

Max.





[sqlite] SqLite db ODBC

2008-05-29 Thread Barry Trutor
I am trying to locate an ODBC driver that I can use in the Windows Data
Link.

Is this doable? Does one exist?

Thanks, Barry

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


RE: [sqlite] Threads and locking

2005-12-16 Thread Barry Paul

Yes, but I think that will just lead to the same problem. Essentially that
is what SQLite is doing for me already.

What is happening is that the high priority user interface thread is waiting
for the low priority worker thread to complete its transaction. This
effectively is reducing the priority of the user interface which either
times out or becomes sluggish... 

In the busy handler can you find out what thread has the lock? If so, I
could probably temporarily increase the priority of the locking thread and
speed up the transaction processing/unlocking...

Thanks again,

--
BP
<< www.planet-hood.com >> Welcome to our world <<
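For what it's worth, the underlying SQLITE_BUSY collision between two write transactions is easy to reproduce from Python; this sketch uses a zero busy timeout, so the second writer fails immediately instead of entering the delay/retry loop the busy handler above implements:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "busy.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE t(x)")
init.commit()
init.close()

# timeout=0: return SQLITE_BUSY immediately rather than retrying
writer = sqlite3.connect(path, timeout=0, isolation_level=None)
other = sqlite3.connect(path, timeout=0, isolation_level=None)

writer.execute("BEGIN IMMEDIATE")          # takes the write (RESERVED) lock
writer.execute("INSERT INTO t VALUES (1)")
try:
    other.execute("BEGIN IMMEDIATE")       # second write transaction cannot start
    outcome = "no conflict"
except sqlite3.OperationalError as exc:
    outcome = str(exc)                     # "database is locked"
writer.execute("COMMIT")
print(outcome)
```

A longer timeout (or a busy handler like the one quoted below) only changes how long the second connection waits before giving up; it cannot make the lock release any sooner than the holder's COMMIT.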
 
 

> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED] 
> Sent: Friday, December 16, 2005 1:07 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Threads and locking
> 
> Have you thought of using a lock to synchronise access to the 
> database so that only one thread at a time could change the 
> database, although both could read simultaneously?
> JS
> 
> Barry Paul wrote:
> >  
> > Hi, I am having some unexpected locking issues with SQLite.
> > 
> > I have a desktop application that uses SQLite. The application has a 
> > low priority worker thread that is constantly 
> > analyzing/adding/updating/deleting records in the database. The main 
> > application thread mainly reads from the database but also does some 
> > updating/deleting. Both threads have their own SQLite connection.
> > 
> > My problem is that when I do updates in the main application thread I 
> > quite often fail with a return value of SQLITE_BUSY. I have messed 
> > around with busy_timeouts and busy_handlers without much success. My 
> > current busy handler (culled either from this list or the web) is:
> > 
> > int busyHandler(void *pArg1, int iPriorCalls) {
> > 
> >     // sleep if handler has been called less than threshold value
> >     if (iPriorCalls < 20)
> >     {
> >         // adding a random value here greatly reduces locking
> >         if (pArg1 < 0)
> >             Sleep((rand() % 500) + 400);
> >         else Sleep(500);
> >         return 1;
> >     }
> > 
> >     // have sqlite3_exec immediately return SQLITE_BUSY
> >     return 0;
> > }
> > 
> > If I increase the transaction size on the low priority thread I get 
> > more update failures on the main thread.
> > 
> > My schema is fairly simple and my tables contain < 90,000 rows. It 
> > would seem to me that with just two threads and this busy handler I 
> > should never (or very rarely) get SQLITE_BUSY.
> >  
> > My theory is that the main application thread is getting locked out 
> > because it is waiting for the low priority thread to release the lock 
> > on the database. Meanwhile something else is happening on the machine 
> > at a higher priority and not letting the low priority thread back in 
> > to finish the transaction and release the lock.
> > 
> > Does this sound reasonable and is there a good way of dealing with 
> > this situation?  Should I try to increase the priority of the 
> > background thread when I get a lock? Or is there some way to make sure 
> > that transactions in the low priority thread are executed all at once 
> > without interruption?
> > 
> > Thanks for your time,
> > 
> > --
> > BP
> > << www.planet-hood.com >> Welcome to our world <<
> >  
> > 
> 
> 
> 



Re: [sqlite] Help with Backup API please

2017-02-08 Thread Barry Smith
Hi Brett,

I believe SQLite doesn't use the standard memory allocation routines, but 
instead has its own routines. These (might) use global variables. If each 
module of your application statically links to the SQLite source rather than 
having SQLite in a common DLL, then each module will have its own set of global 
variables. As such, passing raw SQLite pointers across module boundaries can 
result in undefined behaviour. I suspect other issues could also occur with 
other global variables in SQLite.

Note that passing the raw pointer is not the real culprit, but rather having 
different modules attempt to work on the pointer. For example, the following 
should work fine:
sqlite3* pDb = DLL1GetDb();
DLL1DoSomething(pDb);

But either of the following could fail:
sqlite3* pDb = DLL1GetDb();
DLL2DoSomething(pDb);
// or, if the calling module is not DLL1
sqlite3_exec(pDb,...);

Personally, to enforce this I do not expose (either in arguments or return 
values) raw sqlite objects (dbs, statements, or anything else) in exported 
functions.

If you already have SQLite in a common DLL, or you perform all sqlite3_* calls 
from the same module, I apologise for wasting your time.

Also, SQLite is written in C so it uses error return codes rather than throwing 
exceptions. The exception you are catching would have to be thrown by another 
library that SQLite is calling. This would mean that the output of 
sqlite3_errmsg is likely unrelated to the exception since SQLite would never 
have intercepted it. Perhaps your debugging environment could tell you more 
details of the real proximate cause of the error (unless you mean that the 
error message of the exception, by coincidence, was the same as a standard 
SQLite error message).

Cheers,

 - Barry 

> On 8 Feb 2017, at 9:30 AM, Brett Goodman <bre...@eztools-software.com> wrote:
> 
> Hello Sqlite users.  I have a problem I can't solve.  I have a C++ DLL 
> project in which I compile the Sqlite amalgamation code. The purpose is to 
> wrap the key Sqlite functions with exported functions I can call from another 
> C++ COM DLL.   I'm using VS2003 for this because it's part of a legacy 
> product.  I've been using this project for years, updating the Sqlite source 
> code from time to time with no problems.  I can call functions like 
> sqlite3_open_v2, sqlite3_prepare, sqlite3_step, etc. with no problems.  Now 
> I'm trying to integrate the Backup API.  When I call sqlite3_backup_init it 
> throws this error: "library routine called out of sequence".  I want to 
> step into the Sqlite code 
> to find out why but for some reason my debugger won't allow me to step into 
> the C code from my C++ code.  So I'm hoping someone can help me.  Below is my 
> C++ implementation which I modeled from the sample on this page: 
> sqlite.org/backup.html.  My calling code is from a VB6 project which simply 
> creates/opens a memory DB and calls the below function (via my COM DLL).  Any 
> help is much appreciated.
> 
> Thanks,
> BrettG
> 
> 
> SQLITE_DLL_API int backup( /*sqlite3*/ void* pdb, const WCHAR *pFilename, 
> BOOL toFile )
> {
>  int rc;
>  sqlite3 *pFileDb;
>  sqlite3_backup *pBackup;
>  sqlite3 *pTo;
>  sqlite3 *pFrom;
> 
>  CTextConverter c(CP_UTF8);
>  //rc = sqlite3_open_v2( c.U2M(pFilename), &pFileDb, SQLITE_OPEN_READWRITE, NULL);
>  rc = sqlite3_open( c.U2M(pFilename), &pFileDb );
> 
>  if( rc==SQLITE_OK )
>  {
>try
>{
>sqlite3 *pMemoryDb = (sqlite3*) pdb;
>pFrom = (toFile ? pMemoryDb : pFileDb);
>pTo   = (toFile ? pFileDb : pMemoryDb);
> 
>// this line throws exception:  "library routine called out of sequence"
>pBackup = sqlite3_backup_init(pTo, "main", pFrom, "temp");
>if( pBackup )
>{
>sqlite3_backup_step(pBackup, -1);
>sqlite3_backup_finish(pBackup);
>}
>rc = sqlite3_errcode(pTo);
>sqlite3_close(pFileDb);
>}
>catch(...)
>{
>sqlite3_close(pFileDb);
>const char* err = sqlite3_errmsg( (sqlite3*) pdb );
>throw err;
>}
>  }
>  return rc;
> }
> 


Re: [sqlite] SQLite Options

2017-02-20 Thread Barry Smith
I would use system.data.sqlite in that situation.

But I would also say it depends on what you already have written, and what your 
strengths are. I am under the impression from your first email that you already 
have something written using system.data.sqlite. i.e. Using the class 
System.Data.SQLite.SQLiteConnection to create a connection to the db, then 
using the methods of that to manipulate the db or extract data from it. Have I 
assumed wrong?

If I am wrong, and you have yet to start writing anything, I would still 
recommend using system.data.sqlite. Only if you particularly like LINQ over SQL 
and you are prepared to learn the caveats of the entity framework would I 
recommend that.

Note that if you're using system.data.sqlite you will ultimately produce a few 
dlls that must be distributed together:
 - your custom library, which contains the code you've written
 - System.Data.Sqlite.dll, which contains the wrapper to make an interface to 
access SQLite in a more dotNet friendly manner
 - x64\sqlite.interop.dll
 - x86\sqlite.interop.dll
The last two contain the 'raw' SQLite library (for either 32 or 64 bit systems).

You should not need the other libraries for a simple application. If you find 
that visual studio is placing them in your project's output directory, check if 
they are listed as a reference and try to remove them then recompile.

> On 20 Feb 2017, at 1:05 PM, Clyde Eisenbeis <cte...@gmail.com> wrote:
> 
> Thanks for the clarification.  In my case:
> 
> 1) Speed is not an issue.  Size is not an issue.
> 
> 2) This is a personal use database (genealogy).
> 
> 3) Typically I create .dll's that serve as a library (WPF Custom
> Control Library) ... easy to use for different programs.
> 
> 4) For example, I have an Excel .dll library (uses Excel as a
> database).  When the program runs the first time using this .dll
> library, it creates the Excel file along with multiple sheets.
> 
> 5) I'd like to create a similar .dll for an SQLite library.  The
> program that uses this .dll is a simple WPF program that uses the .dll
> class name to access the functions.
> 
> With this info, which option would you recommend?
> 
>> On Sun, Feb 19, 2017 at 9:45 PM, Barry Smith <smith.bar...@gmail.com> wrote:
>> Strange, I replied to this earlier... Perhaps my messages are not getting 
>> through.
>> 
>> You cannot include a .c file for compilation in a C# project. You'd have to 
>> use a separate DLL and do some pinvoke stuff to get to the raw SQLite 
>> interface, but in my opinion you're better off using the system.data.sqlite 
>> wrapper. If you need the speed and power of the raw interface, you probably 
>> need to drop out of an interpreted and managed language (C#) too...
>> 
>> You don't need the entity framework (EF) to run system.data.sqlite. That is 
>> an object relational mapper (ORM) that uses a lot of fancy reflection to 
>> make data access a little easier* (until you get stung by it) and a lot 
>> slower. EF is developed by Microsoft, although SQLite must provide some 
>> input to make it work with its syntax. You should be able to remove the 
>> entity framework dependencies from your project and still compile with no 
>> issues. Try a complete rebuild / clean compile to try to get rid of the 
>> unnecessary dlls.
>> 
>> *whether an ORM actually makes data access easier is debatable; they 
>> basically allow you to write your data access queries in LINQ rather than 
>> SQL, and automatically instantiate C# objects for each row in the results. 
>> I find SQL easier...
>> 
>>> On 19 Feb 2017, at 1:50 PM, Clyde Eisenbeis <cte...@gmail.com> wrote:
>>> 
>>> Sorry for the slow response.
>>> 
>>> My code is in C#.  I don't know if the amalgamation source code in C
>>> can be compiled so it is compatible with C#.
>>> 
>>> If it can, I'd be interested in details.  Thanks!
>>> 
>>>> On Sat, Feb 18, 2017 at 1:29 AM, R Smith <rsm...@rsweb.co.za> wrote:
>>>> 
>>>> 
>>>>>> On 2017/02/18 12:45 AM, Warren Young wrote:
>>>>>> 
>>>>>> On Feb 17, 2017, at 7:32 AM, R Smith <rsm...@rsweb.co.za> wrote:
>>>>>> 
>>>>>> You can even checkout the latest commits via SVN
>>>>> 
>>>>> There’s a Subversion mirror of the official Fossil code repository for
>>>>> SQLite?
>>>> 
>>>> 
>>>> Apologies, force of habit nomenclature. Have fallen to calling any Software
>>>> Versioning system just 'SVN' for short. I did of course mean for it to be
>>>> checked out via Fossil.

Re: [sqlite] SQLite Options

2017-02-17 Thread Barry Smith
System.Data.SQLite is the package you want if you just want a .Net style (i.e. 
using the standard .net db interfaces) wrapper around SQLite. You can find it 
on NuGet.

The entity framework is a library that maps database entries and relations to 
OOP style objects. Look up object relational mapping (ORM). It's a bit of a 
monster that uses a lot of reflection. It can make some tasks easier, but it's 
also very easy to get stung by it. I would not recommend it for any time you 
need performance, or to deal with even moderate record counts. Although the 
entity framework is compatible with SQLite and system.data.sqlite, it is not 
specific to this dbms - it's a data access layer developed by Microsoft for 
general db access. You do not need it to use SQLite.

Have you tried removing the reference to the entity framework and then 
performing a clean build?

Ps this list strips attachments, so I can't see exactly what you've highlighted.

> On 16 Feb 2017, at 5:40 PM, Clyde Eisenbeis  wrote:
> 
> I started writing SQLite code about two years ago (Visual Studio 2013,
> C#, WPF) ... with a significant delay, since then, because of a
> physical move.
> 
> The code is written for a specific use on my computer ... no other users.
> 
> SQLite was chosen so my sons could eventually install this program on
> their computer ... no database needs to be installed ... no other
> installation required.
> 
> I don't recall the actions taken then, but I do see quite a few
> additional files (EntityFramework.dll, EntityFramework.SqlServer.dll,
> etc.) as references ... see attachment.
> 
> Is there an SQLite version that is comprised of fewer dlls, etc.? ...
> Perhaps SQLite3?
> 
> Clyde
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why is this so much more efficient?

2017-02-16 Thread Barry Smith
Aliases in SQL are not the same as variables in most procedural languages, so 
every time you mention 'totaltime' SQLite is probably recalculating that value 
by adding all the columns together. See the various discussions regarding 
non-deterministic (random) functions last year. Fewer references to that alias 
mean fewer calculations: reference it twice and expect it to be slower.

> On 17 Feb 2017, at 12:27 AM, Darko Volaric  wrote:
> 
> You can actually index functions or expression:
> https://www.sqlite.org/expridx.html
> 
> On Thu, Feb 16, 2017 at 9:32 PM, Cecil Westerhof 
> wrote:
> 
>> 2017-02-16 21:10 GMT+01:00 Dominique Pellé :
>> 
>>> Cecil Westerhof  wrote:
>>> 
 I have a table vmstat that I use to store vmstat info. ;-)
 At the moment it has more than 661 thousand records.
 
 In principle the values of usertime, systemtime, idletime, waittime and
 stolentime should add up to 100. I just wanted to check it. Of-course
>>> there
 could be a rounding error, so I wrote the following query:
 SELECT date
 ,  time
 ,  usertime
 ,  systemtime
 ,  idletime
 ,  waittime
 ,  stolentime
 ,  (usertime + systemtime + idletime + waittime + stolentime) AS
 totaltime
 FROM   vmstat
 WHERE  totaltime  < 99 OR totaltime > 101
 
 I did not like that, so I rewrote the WHERE to:
 WHERE  ABS(100 - totaltime) > 1
 
 The funny thing is that the second WHERE is more efficient than the first, where
>> I
 would have expected it to be the other way around.
 The first takes around 1,050 milliseconds.
 The second takes around  950 milliseconds.
 So the second is around 10% more efficient. Why is this?
 
 In case it is important: I did this in sqlitebrowser 3.7.0, which uses
 SQLite 3.8.10.2.
>>> 
>>> 
>>> I suspect  that when you use "WHERE ABS(100 - totaltime) > 1"
>>> SQLite cannot use an index since it's an expression, so it does a
>>> full table scan, whereas the other solution which does
>>> "WHERE  totaltime  < 99 OR totaltime > 101"
>>> may use an index on totaltime (assuming that there is an index).
>>> 
>>> In general using an index is good.  But if most of the records
>>> satisfy the condition "ABS(100 - totaltime) > 1" then an index
>>> can be more harmful than useful.   And that could explain
>>> why using "WHERE ABS(100 - totaltime) > 1" is a bit faster.
>>> You could try "EXPLAIN QUERY PLAN" on your queries to
>>> see if they use an index or if they do a full table scan.
>> 
>> Totaltime is calculated, so it cannot have an index. ;-)
>> Besides, of the almost 700,000 records only two satisfy the condition.
>> 
>> I should look into EXPLAIN QUERY PLAN.
>> 
>> --
>> Cecil Westerhof


Re: [sqlite] SQLite.Interop.dll

2017-02-23 Thread Barry Smith
Oh, I do remember having this issue before.

I think the cause is this: Visual studio attempts to trace which dlls are 
required for each project. Then if project A is dependent on project B, visual 
studio will copy what it thinks are all the required dlls for both projects into 
project A's output directory.

Unfortunately visual studio is only so smart and doesn't realise that the 
SQLite.interop.dll is a required dependency. I believe the NuGet package puts 
instructions to move those dlls to the output directory by means of xcopy. So, 
you have this situation:

Project GlobalSQLite, with reference to SQLite.
Project GeneologyControls, with reference to GlobalSQLite.

Tell visual studio to build and it goes and executes the step where the SQLite 
interop DLL is copied to the output of the GlobalSQLite project. But then it 
builds GeneologyControls and doesn't realise it needs to copy the interop DLLs! 
So finally your program executes in the output directory of the 
GeneologyControls project, but the SQLite.interop.dll files are in the output 
of the other project.

You need to ensure that the SQLite interop dlls are in the final output 
directory (and also included when you distribute your application). You can do 
this manually using windows explorer, or you can put in a pre or post build 
event of your final project to copy the x64 and x86 folders (containing the 
SQLite.interop.dll files) into its output directory. Doing it from Windows 
explorer is easier, but you may forget; then if you do a clean build a few years 
down the line you'll run into the problem again.

Perhaps there's a cleaner way to do this. Those were my solutions, though...

> On 23 Feb 2017, at 1:58 PM, Clyde Eisenbeis  wrote:
> 
> ---
> About two years ago, I downloaded and installed SQLite.  I don't
> recall the details, but it was a program that installed SQLite.
> 
> I ended up with files such as EntityFramework.dll,
> EntityFramework.SqlServer.dll, System.Data.SQLite.dll, etc.  This
> required "using System.Data.SQLite".
> 
> ---
> I then created a WPF C# genealogy program ... and a GlobalSQLite.dll
> library (as a WPF Custom Control Library).
> 
> The GlobalSQLite.dll library contains commonly used functions.  For example:
> 
>   boCreateFileAndTables(string stPathFilename,
> List liststTableNames,
> List liststFieldDefinitions)
>   {...}
> 
> ---
> I am working to improve / clean up the original code for WPF C#
> genealogy program.
> 
> When I have SQLite installed on my genealogy program and on my
> GlobalSQLite.dll library, it works fine.
> 
> When I don't have SQLite installed on my genealogy program, I get an
> exception "Unable to load DLL 'SQLite.Interop.dll'".  It occurs in my
> GlobalSQLite.dll library program:
> 
>sqliteConn = new System.Data.SQLite.SQLiteConnection("Data
> Source=" + stPathFilename + ";").
> 
> I do find SQLite.Interop.dll under ...
> GlobalsSQLite\packages\System.Data.SQLite.Core.1.0.101.0\build\net46\x86
> ... and under ... \x64.
> 
> Is there a better place to put SQLite.Interop.dll so it works?


Re: [sqlite] SQLite Options

2017-02-19 Thread Barry Smith
Strange, I replied to this earlier... Perhaps my messages are not getting 
through.

You cannot include a .c file for compilation in a c# project. You'd have to do 
use a separate DLL and do some pinvoke stuff to get to the raw SQLite 
interface, but in my opinion you're better off using the system.data.sqlite 
wrapper. If you need the speed and power of the raw interface, you probably 
need to drop out of an interpreted and managed language (c#) too...

You don't need the entity framework (EF) to run system.data.sqlite. That is an 
object relational mapper (ORM) that uses a lot of fancy reflection to make data 
access a little easier* (until you get stung by it) and a lot slower. EF is 
developed by Microsoft, although SQLite must provide some input to make it work 
with its syntax. You should be able to remove the entity framework dependencies 
from your project and still compile with no issues. Try a complete rebuild / 
clean compile to try to get rid of the unnecessary dlls.

*whether an ORM actually makes data access easier is debatable, they basically 
allow you to write your data access queries in LINQ rather than SQL, and 
automatically instantiate C# objects for each row in the results. I find SQL 
easier...

> On 19 Feb 2017, at 1:50 PM, Clyde Eisenbeis  wrote:
> 
> Sorry for the slow response.
> 
> My code is in C#.  I don't know if the amalgamation source code in C
> can be compiled so it is compatible with C#.
> 
> If it can, I'd be interested in details.  Thanks!
> 
>> On Sat, Feb 18, 2017 at 1:29 AM, R Smith  wrote:
>> 
>> 
>>> On 2017/02/18 12:45 AM, Warren Young wrote:
>>> 
 On Feb 17, 2017, at 7:32 AM, R Smith  wrote:
 
 You can even checkout the latest commits via SVN
>>> 
>>> There’s a Subversion mirror of the official Fossil code repository for
>>> SQLite?
>> 
>> 
>> Apologies, force of habit nomenclature. Have fallen to calling any Software
>> Versioning system just 'SVN' for short. I did of course mean for it to be
>> checked out via Fossil.
>> 
>>> https://goo.gl/KzLcV8
>>> 
>>> (Excuse the shortener, it’s a raly long URL.)
>>> 
>>> I could give you that Zip file link, but I suspect it’s purposely not
>>> being published to avoid load on the SQLite repository server caused by bots
>>> repeatedly requesting Zip files and tarballs.
>> 
>> 
>> The bots can read goo links nowadays. ;)
>> 
>>> Using Fossil is far more efficient than downloading Zip archives, but as I
>>> keep getting reminded in my own Fossil-hosted public project, some people
>>> just refuse to install and use anything they don’t absolutely have to.  It’s
>>> six easy steps, but apparently that’s too many for some.
>> 
>> 
>> Agreed, and what is more sad is that Fossil is so much better at actual
>> "Version-Control" (as opposed to making sharing code easiest). If we could
>> get the rest of the World to rather use Fossil, everybody wins. (I can already
>> hear Linus clutching his chest and breathing erratically!)
>> 
>> 
>> 


Re: [sqlite] Datatype for prices (1,500)

2016-12-03 Thread Barry Smith
Nice.

Say goodbye to transitive equality though.

> On 3 Dec 2016, at 5:02 AM, Keith Medcalf  wrote:
> 
> 
> feq (and friends) are an extension I wrote that does proper floating point 
> comparisons:
> 
> /*
> ** 2015-11-09
> **
> ** The author disclaims copyright to this source code.  In place of
> ** a legal notice, here is a blessing:
> **
> **May you do good and not evil.
> **May you find forgiveness for yourself and forgive others.
> **May you share freely, never taking more than you give.
> **
> **
> **
> ** This SQLite extension implements functions for comparison of floating point
> ** double precision numbers x and y where x is within a specified number of
> ** ULPs of y.
> **
> ** The default number of ULPs if not specified is 5.
> ** The size of an ULP is based on the scale of x.
> ** If only one value is specified it will be a y value and x will be 0.0
> **
> **   ulp(x) -> ULP precision of x (increment to next representable number)
> **   epsilon(x) -> ULP precision of x (alias of ulp)
> **   ulps(x, y) -> number of ULP of x in (x - y)
> **
> **   flt(x[, y[, u]])   -> x less than y
> **   fle(x[, y[, u]])   -> x less or equal y
> **   feq(x[, y[, u]])   -> x equal y
> **   fge(x[, y[, u]])   -> x greater or equal y
> **   fgt(x[, y[, u]])   -> x greater than y
> **   fne(x[, y[, u]])   -> x not equal y
> **
> */
> #ifdef __cplusplus
> extern "C" {
> #endif
> 
> /*
> ** If the MATH.H is already included, use the math library
> ** otherwise make sure memcpy is defined
> */
> 
> #ifndef _MATH_H_
> #include <string.h>
> #endif
> 
> #ifndef SQLITE_PRIVATE
>#define SQLITE_PRIVATE static
> #endif
> 
> #ifdef SQLITE_CORE
>#include "sqlite3.h"
> #else
>#ifdef _HAVE_SQLITE_CONFIG_H
>#include "config.h"
>#endif
>#include "sqlite3ext.h"
>SQLITE_EXTENSION_INIT1
> #endif
> 
> /*
> ** Common Functions
> */
> 
> #ifdef _MATH_H_
> 
> /*
> ** If we have the math library then use it
> */
> 
> static double epsilon(double value)
> {
>int exponent;
>    double mantissa = frexp(value, &exponent);
>return ldexp(1.0, exponent - 53);
> }
> #define sqlfcmp_fabs fabs
> 
> #else
> 
> /*
> ** If we do not have the math library already, do not force it to be included
> */
> 
> static double epsilon(double value)
> {
>sqlite_int64 a;
>double r = value;
>if (r < 0)
>r = -r;
>    memcpy(&a, &r, sizeof(a));
>if (a == 0)
>a = (970LL << 52);
>else
>a = ((a >> 52) - 52LL) << 52;
>    memcpy(&r, &a, sizeof(a));
>return r;
> }
> 
> static double sqlfcmp_fabs(double v)
> {
>if (v < 0)
>return -v;
>return v;
> }
> #endif
> 
> static double distance(double x, double y)
> {
>return (x - y) / epsilon(x);
> }
> 
> /*
> ** Return the distance to the next representable number
> */
> 
> SQLITE_PRIVATE void _ulp(sqlite3_context *context, int argc, sqlite3_value 
> **argv)
> {
>sqlite3_result_double(context, epsilon(sqlite3_value_double(argv[0])));
> }
> 
> /*
> ** Return the number of representable numbers between x and y based on the 
> ULP of x
> ** if only one parameter, it is y and x is 0.0
> */
> 
> SQLITE_PRIVATE void _ulps(sqlite3_context *context, int argc, sqlite3_value 
> **argv)
> {
>double x = 0.0;
>double y = 0.0;
>uintptr_t flag = (uintptr_t)sqlite3_user_data(context);
>if ((argc == 0) || (argc > 2))
>return;
>if (argc == 1)
>y = sqlite3_value_double(argv[0]);
>else
>{
>x = sqlite3_value_double(argv[0]);
>y = sqlite3_value_double(argv[1]);
>}
>sqlite3_result_double(context, distance(x, y));
> }
> 
> /*
> ** Perform Floating Point (Double) Comparisons using ULP of first parameter
> **   first parameter will be 0.0 if only one parameter specified, which will 
> be y
> ** User Context defines operations
> **  flag == 0   x != y
> **  flag & 1x < y
> **  flag & 2x = y
> **  flag & 4x > y
> ** Flag values are additive
> **  flag & 3x <= y
> **  flag & 6x >= y
> ** optional third parameter is ULPs of x to consider equal, defaults to 5
> ** if only one parameter then test if x within 5 ULPs of 0.0
> */
> 
> SQLITE_PRIVATE void _fpc(sqlite3_context *context, int argc, sqlite3_value 
> **argv)
> {
>double x = 0.0;
>double y = 0.0;
>double ulps = 0.0;
>double delta = 5.0;
>uintptr_t flag = (uintptr_t)sqlite3_user_data(context);
>if ((argc == 0) || (argc > 3))
>return;
>if (argc == 1)
>y = sqlite3_value_double(argv[0]);
>else
>{
>x = sqlite3_value_double(argv[0]);
>y = sqlite3_value_double(argv[1]);
>}
>if (argc > 2)
>delta = sqlfcmp_fabs(sqlite3_value_double(argv[2]));
>ulps = distance(x, y);
>if (   ((flag & 2)  && (sqlfcmp_fabs(ulps) <= delta))   /* flag & 2 means 
> test equal */
>|| ((flag & 1)  && (ulps < -delta))   

Re: [sqlite] confused getting started

2017-03-05 Thread Barry Smith
A less LMGTFY-like response:
First off, using SQLite does require that you become familiar with SQL, and a 
bunch of database ideas: normalization for good schema design, unit of work for 
good use of transactions, how to use indexes, and others I'm sure I don't know 
about. But those ideas are not specific to SQLite, and are good to think about 
anyway. And any part of SQL or db design you learn will be far easier than 
trying to deal with the same problems in a file format you've written yourself.

If you want to use SQLite in a c/c++ app, your easiest option is to simply 
download the amalgamation and include it in your project. That's two files - 
sqlite3.c and sqlite3.h. The basic procedure is:
Open the file: sqlite3_open(...);
prepare your statement: sqlite3_prepare_v2(...);
Bind your parameters: sqlite3_bind(...);
In a loop {
 execute the statement: sqlite3_step(...);
 Retrieve data if it's a select: sqlite3_column(...);
}
Finalize or reset the statement (depending if you want to use it again): 
sqlite3_finalize(...) / sqlite3_reset(...)
Close the file: sqlite3_close(...)

If you're using dotNet, you can use NuGet to install system.data.sqlite.core, 
then using System.Data.SQLite you'll find an ADO.NET database interface which 
you should more or less use like any other ADO.NET db interface (with the 
advantage that if you ever migrate to another dbms, the move will be easier).

I can't comment on Python, I haven't used that.

I never thought SQLite difficult to start using. In fact, possibly what I 
didn't want to accept at first was exactly how easy it was; I thought there 
were things I was missing*.

For all but the simplest use cases I think SQLite is far easier than direct 
file access for the fact that it keeps track of the contents of the file for 
you, you have an extensible self documenting file format (in the form of your 
db schema), and it takes care of most of your robustness concerns by making 
everything ACID.

* there were things I was missing - like getting all the config right, and 
making sure to reset or finalize statements, and making sure to always do any 
file operations on the log file(s) too. But even with these mistakes it still 
worked leagues better than anything I could have written myself, with a 
fraction of the work.

> On 5 Mar 2017, at 9:20 PM, Jens Alfke  wrote:
> 
> 
>> On Mar 5, 2017, at 3:03 AM, NTrewartha T-Online  
>> wrote:
>> 
>> Any examples of a C,C++,C# or Python usage for sqllite.?
> 
> Have you tried searching? I entered “sqlite c example” and “sqlite python 
> example” into Google and got some useful-looking links in the top few hits.
> Also, there are quite a few books about SQLite.
> 
>> I would like sqllite on my raspberry pi 3 after I have gained experience 
>> under Windows 10.
> 
> If it’s not installed already in the Raspbian OS, you should just need to run 
> “sudo apt-get install sqlite3”. (Possibly “sudo apt-get install libsqlite3-dev” since IIRC the 
> development resources like header files are in separate packages under 
> Debian-based distros.)
> 
>> Judging what the replies to questions, the very new beginners are left a bit 
>> out in the cold.
> 
> No offense intended, but SQLite isn’t an especially beginner-friendly tool. 
> It’s a powerful relational database with a ton of configurable options, and a 
> somewhat tricky C API, not to mention a sophisticated query language that you 
> also need to master to make effective use of it. (However, using it from 
> Python should be somewhat easier, since the API is a bit higher level and you 
> don’t have to worry about things like memory management.)
> 
> If your data storage needs aren’t too complex, there are simpler ways to 
> implement it. For example, in the past I’ve just used a simple data 
> serialization library to read and write the entire data set to disk. It works 
> great when the data is small enough that it fits easily in memory and doesn’t 
> take too long to read or write (say, under 100MB.) This is the equivalent of 
> using a regular battery-powered drill to make some holes, instead of learning 
> how to use a router or end mill :)
> 
> (Also, in general if you’re moving from Windows to literally any other 
> platform, you’ll have to re-learn some of your development processes. Windows 
> does things differently from Unix, which is what everything else is based on.)
> 
> —Jens


Re: [sqlite] Seeking advice on improving batch sql update performance on zipvfs

2017-09-08 Thread Barry Smith
Are your updates sorted by DS? If your queries are sorted then sequential 
queries are more likely to hit the same db pages while searching the index, 
resulting in higher cache usage and fewer decompression operations. This would 
have less benefit if your 100k DS values of the updates are randomly 
distributed through the 30m available, and more of an effect if they're tightly 
clustered in some range.

If you mainly query by DS, is that the same as saying it is your primary key? 
If so, have you tried to declare it so and try the table WITHOUT ROWID, and 
don't bother with the index? It may help since you wouldn't have to decompress 
both the index pages and the table pages.

> On 8 Sep 2017, at 12:33 pm, Dominique Pellé  wrote:
> 
> Yue Wu  wrote:
> 
>> Hello,
>> As mentioned in the subject, our goal is to improve performance regarding
>> to batch sql updates.
>> The update sql as follow,
>> 
>>> UPDATE ITEM SET FIELD4 =? WHERE DS=?
>> 
>> We run 100,000 updates in a single transaction. The zipvfs version takes
>> about 20 min while uncompressed version takes about 7 min.
> 
> Which compression algorithm do you use with zipvfs?
> Try LZ4, it's several times faster than zlib at compression and 
> decompression, at the cost of compressing a bit less:
> 
> https://github.com/lz4/lz4
> 
> Or try zstd, which can compress more than zlib and faster (especially
> on 64-bit architectures), but it's not as fast as LZ4:
> 
> http://facebook.github.io/zstd/
> 
> What is also your SQLite page size?
> Reducing the SQLite page size probably helps to speed up
> updates, since zipvfs compresses by pages.
> 
> Dominique


Re: [sqlite] ADO.NET driver upgrade puzzle

2017-09-27 Thread Barry Smith
Quite strange that Prepare() makes a difference. The system.data.sqlite 
documentation states that Prepare() does nothing, and a code inspection of the 
system.data.sqlite source shows that it does nothing but check that the command 
hasn't been disposed, and that the connection is still valid (which is done at 
the start of almost every method in the class, and therefore again when you 
call ExecuteNonQuery.)

> On 27 Sep 2017, at 5:07 am, Roberts, Barry (FINTL) <b.robe...@fugro.com> 
> wrote:
> 
> Hi,
> 
> I am aware of the sqlite FAQ, and especially the comments regarding file 
> systems. We only ever use NTFS file system, and never have any FAT or FAT32 
> file systems. Given that the old 1.0.80.0 driver is totally stable, and the 
> newer one is not, on the same exact hardware, I do not think the issue is 
> hardware related.
> 
> I enforce that the writer to a specific database can only run 1 at a time, 
> this is done via the C# Interlocked mechanism which has always proved to be 
> very robust. There is a separate logger object instance per database, and 
> within each of those the writing is controlled with the interlock. This 
> allows the application to buffer for longer if the writing to disk encounters 
> a short term slow down.
> 
> As per Clemens suggestion I could replace the use of the connection pool and 
> hold a connection open per database. The code can be structured to do that, 
> because object instances are not shared between database logger instances. 
> However I would have preferred to use the connection pool, because that 
> allows me to open late/close early on my connection usage, which is generally 
> better for maintenance and how the production code is currently structured. I 
> could restructure the code, but that is not really desirable in production 
> code, simply due to an ADO.NET driver update.
> 
> I have been running my test application for days now without issues. This is 
> configured with pooling on, statement preparation off. If I enable statement 
> preparation the system will fail after an hour or two. So it looks like our 
> production code needs to run the same, and remove the IDbCommand.Prepare() 
> calls. The old driver works well but has lots of lock contention, the new one 
> resolves that issue, but is not as stable in some configurations. It is a 
> trade-off, I need the new driver to improve performance, but will trade off 
> the command prepare calls, to enable driver stability.
> 
> Kind Regards,
> Barry Roberts.
> b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com
> 
> 


Re: [sqlite] ADO.NET driver upgrade puzzle

2017-09-26 Thread Barry Smith


> On 26 Sep 2017, at 12:14 am, Clemens Ladisch <clem...@ladisch.de> wrote:
> 
> Roberts, Barry (FINTL) wrote:
>> As per my original post, all C# access code is making extensive use of
>> "using" statements. However we do obviously rely on the connection pool
>> being thread safe, because many threads are writing to different
>> databases (connections) concurrently.
>> 
>> There is no direct relationship between threads and databases. However
>> I do enforce that within a process only one thread can be writing to
>> a specific database (because sqlite does not support parallel writing)
>> at a time.
> 
> How exactly are you enforcing that?
> 
>> public IDbConnection CreateConnection()
>> {
>>var connection = new SQLiteConnection
>>{
>>ConnectionString = m_Builder.ConnectionString
>>};
>> 
>>return connection;
>> }
> 
> I would be tempted to replace that ConnectionString variable with the
> actual connection object, and not using the connection pool.  (Assuming
> that the builder objects are not shared, or properly locked.)
> 
> 
> Regards,
> Clemens


Re: [sqlite] LIMIT

2018-06-23 Thread Barry Smith
Ryan's way works well. Here is a second method that expresses the same idea 
differently:

SELECT * FROM T t1 where rowid IN (SELECT rowid FROM T t2 WHERE t1.F2 = t2.F2 
ORDER BY rowid LIMIT 10)

If you have WITHOUT ROWID tables you'd have to replace rowid with your primary 
key.

(The query may still work without the ORDER BY, I didn't test it, but even if 
it does, a future query optimiser might break it: without the ORDER BY, the 
inner select is free to return a different 10 rowids for every row of the 
outer query.)

> On 23 Jun 2018, at 9:50 pm, Gert Van Assche  wrote:
> 
> Hi Ryan, thanks for this. This is working if the F1 field is a numeric
> value. With text (especially Asian & Arabic characters) this does not seem
> to work.
> So I created an MD5 hash from the text fields and it works great! Thank you
> so much.
> 
> gert
> 
> On Fri 22 Jun 2018 at 22:52, R Smith  wrote:
> 
>> 
>>> On 2018/06/22 10:04 PM, Gert Van Assche wrote:
>>>  All,
>>> 
>>> I'm sure it must be possible, I just don't find how.
>>> I have a table T with 2 fields (F1 and F2). The F1 are unique and the F2
>>> are not unique.
>>> I would like to get only 10 F1 fields for each unique F2.
>> 
>> This is not normally done, and windowing functions in other RDBMSes
>> makes for an easier way, but it can still be done in SQLite with some
>> creative grouping of a self-joined query.
>> In this example, I limited it to 3 F1 items per unique F2 for brevity,
>> but you can easily change the "< 4" to "< 11" or "<= 10" according to
>> preference.
>> 
>>   -- SQLite version 3.20.1  [ Release: 2017-08-24 ]  on SQLitespeed
>> version 2.0.2.4.
>>   --
>> 
>> 
>> 
>> CREATE TABLE T(F1 INTEGER PRIMARY KEY, F2 TEXT);
>> 
>> INSERT INTO T(F1,F2) VALUES
>>  (1,  'John')
>> ,(2,  'John')
>> ,(3,  'Jason')
>> ,(4,  'John')
>> ,(5,  'Jason')
>> ,(6,  'John')
>> ,(7,  'John')
>> ,(8,  'Jason')
>> ,(9,  'Jason')
>> ,(10,  'Joan')
>> ,(11,  'Joan')
>> ,(12,  'Joan')
>> ,(13,  'Jimmy')
>> ;
>> 
>> SELECT A.F2, B.F1
>>   FROM T AS A
>>   JOIN T AS B ON B.F2 = A.F2 AND B.F1 >= A.F1
>>  GROUP BY A.F2, B.F1
>>  HAVING COUNT(*) < 4
>> ;
>> 
>>   -- F2|  F1
>>   -- - | ---
>>   -- Jason |  3
>>   -- Jason |  5
>>   -- Jason |  8
>>   -- Jimmy |  13
>>   -- Joan  |  10
>>   -- Joan  |  11
>>   -- Joan  |  12
>>   -- John  |  1
>>   -- John  |  2
>>   -- John  |  4
>> 
>> 
>> 
>> -- Another option to note, in case the 10 limit is not important and
>> simply aimed
>> -- at saving space, is to use group concatenation, like so:
>> 
>> SELECT F2, group_concat(F1)AS F1
>>   FROM T
>>  GROUP BY F2
>> ;
>> 
>>   --   |
>>   -- F2|F1
>>   -- - | -
>>   -- Jason |  3,5,8,9
>>   -- Jimmy | 13
>>   -- Joan  |  10,11,12
>>   -- John  | 1,2,4,6,7
>> 
>> 
>>   --
>> 
>> 
>> 
>> Cheers!
>> Ryan
>> 
>> 


Re: [sqlite] probably recursive?

2018-05-01 Thread Barry Smith
Well those constraints simplify your problem.

In the resultant dataset, the largest X and Y values will be equal, and the 
largest X will have an entry for every coordinate from (X, 1) to (X, X). 
Likewise the largest Y will have an entry for every coordinate from (1, Y) to 
(Y, Y). Basically you'll have two lines from the axes, drawing a square. All 
points outside that square will be culled, all points on and inside the square 
will be kept.

Since you know that, you now have a one dimensional problem to solve. It still 
seems a little recursive to me, but it should be easier because you only need 
to find a single number (which you can then plug into a delete statement).

If my statement about the square is not obvious to prove in your head I can try 
to write a proof for it, but I'm not much good at proofs.

> On 2 May 2018, at 7:27 am, Roman Fleysher <roman.fleys...@einstein.yu.edu> 
> wrote:
> 
> Pairs (x,y) do not repeat.
> 
> Actual x and y are positive integers, but I do not see how being positive can 
> be relevant. Integer is important for sorting/comparison.
> 
> 
> Roman
> 
> 
> From: sqlite-users [sqlite-users-boun...@mailinglists.sqlite.org] on behalf 
> of Barry Smith [smith.bar...@gmail.com]
> Sent: Tuesday, May 01, 2018 5:23 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] probably recursive?
> 
> Is there a uniqueness constraint on your initial data? Can the same 
> coordinate be listed multiple times?
> 
> Is there a requirement that X > 0 and Y > 0?
> 
>>> On 2 May 2018, at 3:35 am, Simon Slavin <slav...@bigfraud.org> wrote:
>>> 
>>> On 1 May 2018, at 6:28pm, Simon Slavin <slav...@bigfraud.org> wrote:
>>> 
>>> I just realised that
>> 
>> That was intended to be personal email.  Apologies, everyone.
>> 
>> Simon.


Re: [sqlite] probably recursive?

2018-05-01 Thread Barry Smith
In your initial email, what is n? Some real number between zero and one?

> On 2 May 2018, at 8:37 am, Abroży Nieprzełoży 
> <abrozynieprzelozy314...@gmail.com> wrote:
> 
> I think Barry mean that you can represent the (x,y) pair as a single
> number like (max(X)-min(X))*(Y-min(Y))+X-min(X) or so, but I don't see
> how it would be helpful.
> 
> 2018-05-02 0:20 GMT+02:00, Roman Fleysher:
>> Dear Barry,
>> 
>> The statement about the square is not obvious to me. The requirements on
>> counts in x and y are different.
>> 
>> I also imagine answer could be two or several non-overlapping  "rectangles".
>> "Rectangles" will not be densely filled with dots, they might have empty
>> spots either because the points were never on the list or were eliminated.
>> 
>> Roman
>> 
>> ____
>> From: sqlite-users [sqlite-users-boun...@mailinglists.sqlite.org] on behalf
>> of Barry Smith [smith.bar...@gmail.com]
>> Sent: Tuesday, May 01, 2018 6:12 PM
>> To: SQLite mailing list
>> Subject: Re: [sqlite] probably recursive?
>> 
>> Well those constraints simplify your problem.
>> 
>> In the resultant dataset, the largest X and Y values will be equal, and the
>> largest X will have an entry for every coordinate from (X, 1) to (X, X).
>> Likewise the largest Y will have an entry for every coordinate from (1, Y)
>> to (Y, Y). Basically you'll have two lines from the axes, drawing a square.
>> All points outside that square will be culled, all points on and inside the
>> square will be kept.
>> 
>> Since you know that, you now have a one dimensional problem to solve. It
>> still seems a little recursive to me, but it should be easier because you
>> only need to find a single number (which you can then plug into a delete
>> statement).
>> 
>> If my statement about the square is not obvious to you, I can try to write a
>> proof for it, though I'm not much good at proofs.
>> 
>>> On 2 May 2018, at 7:27 am, Roman Fleysher
>>> wrote:
>>> 
>>> Pairs (x,y) do not repeat.
>>> 
>>> Actual x and y are positive integers, but I do not see how being positive
>>> can be relevant. Integer is important for sorting/comparison.
>>> 
>>> 
>>> Roman
>>> 
>>> 
>>> From: sqlite-users [sqlite-users-boun...@mailinglists.sqlite.org] on
>>> behalf of Barry Smith [smith.bar...@gmail.com]
>>> Sent: Tuesday, May 01, 2018 5:23 PM
>>> To: SQLite mailing list
>>> Subject: Re: [sqlite] probably recursive?
>>> 
>>> Is there a uniqueness constraint on your initial data? Can the same
>>> coordinate be listed multiple times?
>>> 
>>> Is there a requirement that X > 0 and Y > 0?
>>> 
>>>>> On 2 May 2018, at 3:35 am, Simon Slavin  wrote:
>>>>> 
>>>>> On 1 May 2018, at 6:28pm, Simon Slavin  wrote:
>>>>> 
>>>>> I just realised that
>>>> 
>>>> That was intended to be personal email.  Apologies, everyone.
>>>> 
>>>> Simon.


Re: [sqlite] probably recursive?

2018-05-01 Thread Barry Smith
Is there a uniqueness constraint on your initial data? Can the same coordinate 
be listed multiple times?

Is there a requirement that X > 0 and Y > 0?

> On 2 May 2018, at 3:35 am, Simon Slavin  wrote:
> 
>> On 1 May 2018, at 6:28pm, Simon Slavin  wrote:
>> 
>> I just realised that
> 
> That was intended to be personal email.  Apologies, everyone.
> 
> Simon.


Re: [sqlite] probably recursive?

2018-05-01 Thread Barry Smith
Ah my bad, I misunderstood the initial condition. nX is a function of X. My 
statements were only true if nX=X. Well, sorry about the noise.

> On 2 May 2018, at 8:20 am, Roman Fleysher <roman.fleys...@einstein.yu.edu> 
> wrote:
> 
> Dear Barry,
> 
> The statement about the square is not obvious to me. The requirements on 
> counts in x and y are different.
> 
> I also imagine answer could be two or several non-overlapping  "rectangles". 
> "Rectangles" will not be densely filled with dots, they might have empty 
> spots either because the points were never on the list or were eliminated.
> 
> Roman
> 
> 
> From: sqlite-users [sqlite-users-boun...@mailinglists.sqlite.org] on behalf 
> of Barry Smith [smith.bar...@gmail.com]
> Sent: Tuesday, May 01, 2018 6:12 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] probably recursive?
> 
> Well those constraints simplify your problem.
> 
> In the resultant dataset, the largest X and Y values will be equal, and the 
> largest X will have an entry for every coordinate from (X, 1) to (X, X). 
> Likewise the largest Y will have an entry for every coordinate from (1, Y) to 
> (Y, Y). Basically you'll have two lines from the axes, drawing a square. All 
> points outside that square will be culled, all points on and inside the 
> square will be kept.
> 
> Since you know that, you now have a one dimensional problem to solve. It 
> still seems a little recursive to me, but it should be easier because you 
> only need to find a single number (which you can then plug into a delete 
> statement).
> 
> If my statement about the square is not obvious to you, I can try to write a 
> proof for it, though I'm not much good at proofs.
> 
>> On 2 May 2018, at 7:27 am, Roman Fleysher <roman.fleys...@einstein.yu.edu> 
>> wrote:
>> 
>> Pairs (x,y) do not repeat.
>> 
>> Actual x and y are positive integers, but I do not see how being positive 
>> can be relevant. Integer is important for sorting/comparison.
>> 
>> 
>> Roman
>> 
>> 
>> From: sqlite-users [sqlite-users-boun...@mailinglists.sqlite.org] on behalf 
>> of Barry Smith [smith.bar...@gmail.com]
>> Sent: Tuesday, May 01, 2018 5:23 PM
>> To: SQLite mailing list
>> Subject: Re: [sqlite] probably recursive?
>> 
>> Is there a uniqueness constraint on your initial data? Can the same 
>> coordinate be listed multiple times?
>> 
>> Is there a requirement that X > 0 and Y > 0?
>> 
>>>> On 2 May 2018, at 3:35 am, Simon Slavin <slav...@bigfraud.org> wrote:
>>>> 
>>>> On 1 May 2018, at 6:28pm, Simon Slavin <slav...@bigfraud.org> wrote:
>>>> 
>>>> I just realised that
>>> 
>>> That was intended to be personal email.  Apologies, everyone.
>>> 
>>> Simon.


Re: [sqlite] [SQLite3] [Mono] [Linux] "db is locked"

2018-07-03 Thread Barry Smith
Some ideas:

Sqlite may return that the database is locked immediately if it detects a 
deadlock situation. Something like: a different connection holds a reserved 
lock (waiting for read connections to close so it can promote to exclusive), 
and the current connection tries to promote from a read lock to a reserved or 
exclusive lock. Busy timeout will never resolve this situation so the 
connection attempting to promote just returns that the database is locked.

You can also get guaranteed timeouts if you use multiple connections on the 
same thread, or if your threading logic causes a deadlock. An example might be:

// ...
using (var dataReader = outerCmd.ExecuteReader())
{
    while (dataReader.Read())
    {
        using (SQLiteConnection innerCon = GetNewConnection())
        {
            // attempt to write with innerCon here
        }
    }
}

The above might happen in a called function that obscures what is going on. Such 
a pattern might be valid on a different DBMS that does table- or row-level 
locking, but it can't be used with SQLite's database-level locking. (I got 
bitten by this when I thought 'oh yeah, change from SQL Server to SQLite? I'll 
just update the SQL and change the providers. The logic will translate fine.')

Finally, I have seen inexplicable timeouts if I mix SQLite and 
TransactionScope. In my case the sqlite connections didn't need to participate 
in the transaction so I fixed it by setting enlist=false on the connection 
string and not investigating further.

> On 4 Jul 2018, at 5:14 am, Simon Slavin  wrote:
> 
>> On 3 Jul 2018, at 8:08pm, Phani Rahul Sivalenka 
>>  wrote:
>> 
>> As per our observation, the initial write operations on the sqlite db file
>> throw "db is locked" error. After a certain time (around an hour) write
>> operations start working and we are able to do all the operations as
>> required.  [...]
> 
>> “data
>> source=/var/www/html/ChargerDatabase.db;DateTimeKind=Utc;Version=3;Pooling=True;Synchronous=Off;journal
>> mode=Memory;Busy Timeout=3;Default Timeout=30”
> 
> Test each of your timeouts by removing one, then the other, then both.  See 
> if the removal of one of them changes the described behaviour.
> 
> Simon.


Re: [sqlite] fsync on -wal still happening

2017-12-29 Thread Barry Smith
I believe inotifywait does not actually wait for the fsync operation before 
notifying.

Process A can write to a file, the OS can cache it without flushing to disk, 
and a request by process B will be served directly from the cache. Therefore 
the operating system can notify you of the change before it's written to 
persistent storage.

So, with synchronous=NORMAL, SQLite doesn't buffer its writes itself; it writes 
them out, delays calling fsync, and relies on the OS cache in the meantime.

Perhaps benchmarking before engaging in such an endeavour as you plan would 
indicate whether the OS cache gives you sufficient performance.

> On 29 Dec 2017, at 12:07 pm, Ian Freeman  wrote:
> 
> Thanks, Simon. Indeed I did see that option. But I'm concerned about
> maintaining integrity in the middle of an power-interrupted commit, so
> I was hoping to leverage WAL's inverted behavior of not modifying the
> database file directly until checkpoint time. The only thing left to
> investigate is how WAL handles power loss during the middle of a
> checkpoint. Normally I believe it's able to recover because the next db
> open has the -wal file to look at. But not if I'm storing it in-memory.
> 
> So I'm thinking I will copy the -wal file to disk before the checkpoint
> op for safety, and then load it back onto ramdisk after. I suppose I
> may have to exit WAL mode in order to perform that copy, making
> checkpointing a very expensive operation for my app.
> 
>>> On Fri, 2017-12-29 at 16:48 +, Simon Slavin wrote:
>>> 
>>> On 29 Dec 2017, at 4:34pm, Ian Freeman  wrote:
>>> 
>>> I see, then what I'm seeing is just normal behavior of the writes
>>> being
>>> flushed to disk. I read what I wanted to hear about
>>> synchronous=NORMAL
>>> delaying writes to the -wal file. Instead I'm going to see if I can
>>> move -wal to a ramdisk and see how that will affect db integrity
>>> with
>>> power losses at in-opportune times.
>> 
>> Did you see
>> 
>> PRAGMA journal_mode = MEMORY
>> 
>> ?
>> 
>> Simon.


Re: [sqlite] Doing math in sqlite

2018-12-20 Thread Barry Smith


> On 20 Dec 2018, at 4:21 pm, Jungle Boogie  wrote:
> 
> Hi All,
> 
> This is more of a how do I do this in sql question. I apologize in advance
> for a simple question, but I need to learn somehow, so any pointers are 
> appreciated.
> 
> My very simple schema:
> 
> CREATE TABLE mileage (
> date text,
> "car" text,
> "od_reading" integer
> )
> 
> Some sample data:
> 2018/04/15,foo,170644
> 2018/04/15,bar,69625
> 2018/04/22,foo,170821
> 2018/04/22,bar,69914
> 2018/04/29,foo,171006
> 2018/04/29,bar,70123
> 2018/05/06,foo,171129
> 2018/05/06,bar,70223
> 2018/05/09,foo,171178
> 2018/05/20,foo,171304
> 2018/05/20,bar,70517
> 
> I can select the first and last od_readings pertaining to a particular car 
> this
> way:
> select od_reading from mileage where car='foo' limit 1
> select od_reading from mileage where car='bar' order by od_reading desc limit 
> 1
> 
> How do I subtract the last query from the first query to determine how many
> miles were driven?
> 
> Next, is there an elegant way to see how many miles difference there is 
> between
> two readings of the same car?
> 
> For instance, how many miles difference are there between bar on 2018/05/20 
> and
> 2018/05/06?

I wouldn't call my solution elegant, but if you put a unique constraint on the 
"date" column, and want your query to return null if there is not a record 
exactly on that date:

SELECT
   (SELECT od_reading FROM mileage WHERE date = '2018/05/20' AND car='bar')
   -
   (SELECT od_reading FROM mileage WHERE date='2018/05/06' AND car='bar')
AS ExactMileage

You can rephrase that as:
SELECT end.od_reading - start.od_reading FROM mileage AS start, mileage AS end 
WHERE end.date='2018/05/20' AND start.date='2018/05/06' AND end.car ='bar' and 
start.car='bar'

Or you could even use a CTE to repeat yourself less:
WITH OnlyBar AS (SELECT * FROM mileage WHERE car='bar')
SELECT end.od_reading - start.od_reading FROM OnlyBar AS start, OnlyBar AS end 
WHERE end.date='2018/05/20' AND start.date='2018/05/06'

If you might query between two dates that don't necessarily have a record, the 
best you can do is give a range of what the possible driven distance may be:
SELECT
   (SELECT MIN(od_reading) FROM mileage WHERE date >= '2018/05/20' AND 
car='bar')
   -
   (SELECT MAX(od_reading) FROM mileage WHERE date <= '2018/05/06' AND 
car='bar')
AS MaxPossible,
   (SELECT MAX(od_reading) FROM mileage WHERE date <= '2018/05/20' AND 
car='bar')
   -
   (SELECT MIN(od_reading) FROM mileage WHERE date >= '2018/05/06' AND 
car='bar')
AS MinPossible

The second query finds the records before and after the dates in question (or 
uses data on exactly that date) to figure out the minimum and maximum possible 
distances that may have been driven between the dates of interest.

Note that there is a more efficient form of the second query, where one selects 
just od_reading and orders by date, with a limit of 1 instead of using the MIN 
and MAX functions; if your table is going to be large then that is a better 
option (with an index on (car, date), the value can then be computed with an 
index seek instead of a table scan). I used the min/max version because it's 
simpler and easier to read.
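
To make the exact-date variant concrete, here is a quick check in Python's sqlite3 module, using the OP's schema and the 'bar' readings from the sample data above (the expected difference is 70517 - 70223):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE mileage (date text, "car" text, "od_reading" integer)')
con.executemany(
    "INSERT INTO mileage VALUES (?, ?, ?)",
    [
        ("2018/05/06", "bar", 70223),
        ("2018/05/09", "foo", 171178),
        ("2018/05/20", "bar", 70517),
    ],
)

# The exact-date query from above: returns NULL if either date is missing
(driven,) = con.execute("""
    SELECT
        (SELECT od_reading FROM mileage WHERE date = '2018/05/20' AND car = 'bar')
        -
        (SELECT od_reading FROM mileage WHERE date = '2018/05/06' AND car = 'bar')
        AS ExactMileage
""").fetchone()
print(driven)  # 294
```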

> 
> Thanks,
> j.b.


Re: [sqlite] [EXTERNAL] SQLite error (5): database is locked

2019-01-14 Thread Barry Smith
That's not how you set a busy timeout using a connection string. That's not how 
you set any (pragma) options with a connection string. Check the 
System.Data.SQLite documentation (or google) to find out connection string 
parameters, or play around with the SQLiteConnectionStringBuilder.

The busy timeout pragma will not help if you end up in a deadlock. One way a 
deadlock can occur is as follows:
 1) Connection A starts a transaction and reads the database - it takes out a 
shared lock
 2) Connection B starts a transaction and reads the database - it too takes out 
a shared lock
 3) Connection A attempts to write to the database, so it upgrades to a 
reserved lock and waits for all readers to close (all shared locks to be 
released).
 4) Connection B attempts to write to the database. It keeps its shared lock 
and tries to upgrade to reserved. Because A already owns a reserved lock, 
connection B is denied its lock. No amount of waiting will solve this problem 
because A is waiting for B to release its lock so it can have an exclusive 
lock. SQLite knows this, so it returns SQLITE_BUSY immediately.

https://www.sqlite.org/lockingv3.html

Note that the above is true for databases with an old style (non-WAL) journal. 
I assume similar protections and situations exist in WAL, but can't be certain 
and they may use a different mechanism.
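
The four numbered steps can be reproduced with two ordinary connections; here is a sketch in Python's sqlite3 module (rollback-journal mode, invented table name). Note that connection B gets its error immediately, well before its 5-second busy timeout, which is the deadlock-detection behaviour described above:

```python
import sqlite3
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE t(x)")
init.execute("INSERT INTO t VALUES (1)")
init.commit()
init.close()

# isolation_level=None: we issue BEGIN/COMMIT ourselves
a = sqlite3.connect(path, timeout=5.0, isolation_level=None)
b = sqlite3.connect(path, timeout=5.0, isolation_level=None)

a.execute("BEGIN")
a.execute("SELECT * FROM t").fetchall()  # 1) A takes a SHARED lock
b.execute("BEGIN")
b.execute("SELECT * FROM t").fetchall()  # 2) B takes a SHARED lock
a.execute("UPDATE t SET x = 2")          # 3) A upgrades to RESERVED

start = time.monotonic()
try:
    b.execute("UPDATE t SET x = 3")      # 4) B cannot upgrade: deadlock
    deadlocked = False
except sqlite3.OperationalError:
    deadlocked = True                    # "database is locked"
elapsed = time.monotonic() - start       # returns at once, not after 5 s
```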

You can figure out whether you have a deadlock or simply the wait timed out by 
looking at how quickly the error was returned. You can also look at where other 
threads (or processes if your debugger can attach to multiple processes) are 
when the error occurs.

Are you using explicit transactions? Entity Framework shouldn't cause deadlocks 
unless you are manually taking control of the transactions. EF can also take a 
very long time to SaveChanges if you have a large number of entities...

> On 14 Jan 2019, at 10:14 pm, Urs Wagner  wrote:
> 
> We are using entity framework
> 
> The timeout pragma does not work. I think the timeout is not set, see below
> 
>var esb = new EntityConnectionStringBuilder
>{
>Metadata = 
> "res://*/RadaxModel.csdl|res://*/RadaxModel.ssdl|res://*/RadaxModel.msl",
>Provider = "System.Data.SQLite.EF6",
>ProviderConnectionString = @"data source=" + _dataBase + 
> ";PRAGMA foreign_keys = ON;PRAGMA locking_mode = EXCLUSIVE;PRAGMA 
> schema.synchronous = NORMAL; PRAGMA schema.journal_mode = DELETE; PRAGMA 
> busy_timeout = 10"
>};
> 
> 
> -Original Message-
> From: sqlite-users  On Behalf 
> Of Hick Gunter
> Sent: Monday, January 14, 2019 11:28 AM
> To: 'SQLite mailing list' 
> Subject: Re: [sqlite] [EXTERNAL] SQLite error (5): database is locked
> 
> With journal mode, SQLite supports 1 writer OR n readers; with WAL mode, 
> SQLite supports 1 writer AND N readers.
> 
> In any case, connections need to indicate if or how long they are willing to 
> wait for the db file to be unlocked. Default is NO.
> 
> The easiest way is to specify a timeout on the connection. The value needs to 
> be longer than your longest write transaction is expected to run and shorter 
> than the latency required by your application.
> 
> -Ursprüngliche Nachricht-
> Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
> Auftrag von Urs Wagner
> Gesendet: Montag, 14. Jänner 2019 10:24
> An: SQLite mailing list 
> Betreff: [EXTERNAL] [sqlite] SQLite error (5): database is locked
> 
> Hallo
> 
> I use several tasks in C# to call Sqlite queries.
> Now I get the error SQLite error (5): database is locked.
> Is it not possible to use more than one task with SQLite?
> 
> Regards
> 
> Urs
> 
> 
> ___
> Gunter Hick | Software Engineer | Scientific Games International GmbH | 
> Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 | (O) 
> +43 1 80100 - 0
> 
> May be privileged. May be confidential. Please delete if not the addressee.


Re: [sqlite] The "natural" order of the query results.

2018-09-16 Thread Barry Smith
Without an order by, sqlite can return the rows in any order it pleases. Likely 
whatever consumes the least resources. Although unlikely given your indices, it 
might be possible - for instance if some future micro-optimisation finds that 
it's quicker to read the index in reverse, then sqlite would give things in the 
opposite order. If you leave out a necessary order by you are very much 
exposing yourself to internal changes. So much so that there is a pragma 
reverse_unordered_selects 
(https://www.sqlite.org/pragma.html#pragma_reverse_unordered_selects) you can 
use to specifically find if your app makes invalid assumptions about order.
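
A small harness for that pragma, sketched in Python's sqlite3 module; the assertions only rely on the documented contract (queries with an explicit ORDER BY are unaffected, while unordered queries may come back in any order):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Ask SQLite to reverse the order of unordered SELECTs where it can,
# to flush out code that depends on an accidental row order.
con.execute("PRAGMA reverse_unordered_selects = ON")

unordered = [row[0] for row in con.execute("SELECT x FROM t")]
ordered = [row[0] for row in con.execute("SELECT x FROM t ORDER BY x")]

print(ordered)  # [1, 2, 3] regardless of the pragma
```

Running the test suite once with the pragma on and once with it off is a cheap way to catch missing ORDER BY clauses.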

You might have found a missed optimisation opportunity here (although there's 
always the tradeoff of library size & cycles to optimise vs execution cycles 
saved to consider).

> On 16 Sep 2018, at 6:29 pm, John Found  wrote:
> 
> Is there some relation between the indexes used in the query, the GROUP BY 
> fields used 
> and the order of the result rows, when no "ORDER BY" clause is used?
> 
> I am asking, because I noticed, that on some queries, when I am using "ORDER 
> BY" the query always 
> use temporary b-tree for ordering, but by including the needed fields in the 
> "GROUP BY" clause 
> and removing the ORDER BY clause, the query returns the rows in the proper 
> order without temp b-tree.
> 
> So, is it safe to use this implicit ordering, or this behavior can be changed 
> in the future versions of SQLite?
> 
> Here is an example:
> 
> create table A (
>  id integer primary key autoincrement,
>  o1 integer,
>  o2 integer
> );
> 
> create table B (
>  Aid integer references A(id),
>  data text
> );
> 
> create index idxA on A(o1 desc, o2 desc);
> 
> insert into A(o1, o2) values (1, 100), (2, 50), (3, 200), (5, 300);
> insert into B(Aid, data) values (1, "b"), (1, "a"), (2, "c"), (2, "d"), (2, 
> "e"), (3, "f"), (3, "g");
> 
> -- Always uses temp b-tree for order by
> select 
>  group_concat(B.data), o1, o2
> from 
>  A
> left join 
>  B on A.id = B.Aid
> group by 
>  A.id
> order by 
>  A.o1 desc, A.o2 desc;
> 
> explain query plan:
> id  parent  notused  detail
> 8   0       0        SCAN TABLE A
> 19  0       0        SEARCH TABLE B USING AUTOMATIC COVERING INDEX (Aid=?)
> 58  0       0        USE TEMP B-TREE FOR ORDER BY
> 
> 
> -- This one returns the rows in the needed order without ORDER BY
> select 
>  group_concat(B.data), o1, o2
> from 
>  A indexed by idxA
> left join B on A.id = B.Aid
> group by A.id, A.o1, A.o2;
> 
> explain query plan:
> id  parent  notused  detail
> 7   0       0        SCAN TABLE A USING COVERING INDEX idxA
> 18  0       0        SEARCH TABLE B USING AUTOMATIC COVERING INDEX (Aid=?)
> 
> 
> -- But if I add ORDER BY it still begins to use temp b-tree 
> -- regardless that it does not change the order.
> select 
>  group_concat(B.data), o1, o2
> from 
>  A indexed by idxA
> left join B on A.id = B.Aid
> group by A.id, A.o1, A.o2
> order by A.o1 desc, A.o2 desc;
> 
> explain query plan:
> id  parent  notused  detail
> 8   0       0        SCAN TABLE A
> 19  0       0        SEARCH TABLE B USING AUTOMATIC COVERING INDEX (Aid=?)
> 60  0       0        USE TEMP B-TREE FOR ORDER BY
> 
> 
> All the above queries, returns the same result rows in the same order:
> 
> group_concat(B.data)   o1   o2
> NULL                   5    300
> f,g                    3    200
> c,d,e                  2    50
> a,b                    1    100
> 
> 
> 
>
> -- 
> John Found 


Re: [sqlite] Row locking sqlite3

2019-03-22 Thread Barry Smith
You might be interested in the BEGIN CONCURRENT branch. It does page level 
locking (not quite as granular as row level).

https://www.sqlite.org/cgi/src/doc/begin-concurrent/doc/begin_concurrent.md

> On 22 Mar 2019, at 11:48 am, Thomas Kurz  wrote:
> 
> This sounds interesting. I have some questions about:
> 
>> Row lock information is shared with processes. If a process finished 
>> unexpectedly, unnecessary lock information might be stayed. In order to 
>> unlock them, please use sqlumdash_cleaner.exe which clears all record 
>> information. If there is a process which is in a transaction, 
>> sqlumdash_cleaner.exe should be called after end the transaction.
> 
> - Where is row lock information stored? In database file, in journal file, in 
> WAL file, or in memory?
> - Why should the cleaner be called after the end of a transaction?
> - I don't like the idea of calling an external exe in case of problems (and 
> more than that, after every transaction??). Couldn't you introduce a PRAGMA 
> for unlocking rows?
> 
> 
> 
> - Original Message - 
> From: Peng Yu 
> To: SQLite mailing list 
> Sent: Friday, March 22, 2019, 15:25:24
> Subject: [sqlite] Row locking sqlite3
> 
> Hi,
> 
> I see that sqlite3 still does not support row locking. This package
> tries to resolve this problem. But it does not have a standard build
> process for Linux.
> 
> https://github.com/sqlumdash/sqlumdash/
> 
> Are there other packages similar to sqlite3 but support row locking? Thanks.
> 
> -- 
> Regards,
> Peng


Re: [sqlite] Is it possible to use sqlite online-backup system in an ongoing manner?

2019-03-13 Thread Barry Smith
I think the sessions extension can be used for what you want to do. I haven't 
used it myself, but from its documented behaviour it looks like you could record 
change sets for every three-second interval, then apply them back to your 
database on disk. If your app is multi-threaded it might be a pain (unless 
there's an atomic 'stop this changeset and immediately start another one' 
operation).
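
On the Online Backup API itself: Python's sqlite3 module wraps it as Connection.backup() (Python 3.7+), which makes its whole-database behaviour easy to see in a few lines — each run copies every page, not just the changes. A sketch with invented names:

```python
import sqlite3
import os
import tempfile

# In-memory working database, as in the original question
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t(x)")
mem.execute("INSERT INTO t VALUES (42)")
mem.commit()

# The equivalent of sqlite3_backup_step(p, -1): copy every page in one go
path = os.path.join(tempfile.mkdtemp(), "snapshot.db")
disk = sqlite3.connect(path)
mem.backup(disk, pages=-1)
disk.close()

# The on-disk snapshot is a complete, independent copy
check = sqlite3.connect(path)
value = check.execute("SELECT x FROM t").fetchone()[0]
print(value)  # 42
```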

> On 13 Mar 2019, at 8:40 am, Simon Slavin  wrote:
> 
>> On 13 Mar 2019, at 2:31pm, John Smith  wrote:
>> 
>> I am working with IN-MEMORY database.
>> When my program starts I load data from file-system DB into my IN-MEMORY DB.
>> All other SQL operations are performed directly on my IN-MEMORY database.
>> This is in order to keep performance high.
> 
> First, make sure you really need to do this.  SQLite performance is normally 
> very high, even without taking special measures.  You may be wasting 
> programming time and introducing complexity which will be difficult to debug. 
>  Run some time-trials.  Of course, you may have already run some time-trials.
> 
>>// Save only intermediate changes (?)
>>sqlite3_backup_step(p, -1); // Backup all modifications from last time
> 
> You cannot combine these two things.  The Online Backup API backs up an 
> entire database.  It does it page by page, without understanding individual 
> rows of data.  It cannot select only changes.  So you might want to use it, 
> but if you do you'll create a new copy of the entire database every time.
> 
> You might want to instead use the Resumable Bulk Update extension:
> 
> 
> 
> " An RBU Update is a bulk update of a database file that may include many 
> insert, update and delete operations on one or more tables. "
> 
> Simon.


Re: [sqlite] Odd exception when creating a connection object

2019-05-15 Thread Roberts, Barry
Hi Joe,

We are not using multiple app domains within a process. 

Within the process where the errors are coming from we have

* N databases being accessed, typically between 5-100 separate databases
* Each database has a dedicated handler, to ensure we only ever invoke 1 writer 
at a time to a database.
* All communication to the database is going through the SQLite C# code, 
connections always via the SQLiteConnection instance.

The code follows the form of

using (var connection = connectionFactory.CreateConnection())
{
    connection.Open();
    // do something useful in here.
}

There is 1 connection factory per database. The create connection call above 
does the following

   /// 
public IDbConnection CreateConnection()
{
EnsureExists(); // Create the file if not there.

        return new SQLiteConnection(ConnectionStringBuilder.ConnectionString, true);
}

  [NotNull]
private DbConnectionStringBuilder ConnectionStringBuilder
{
get
{
var builder = new SQLiteConnectionStringBuilder
{
DataSource = m_Factory.DatabasePath(),
Version = 3,
DefaultTimeout = 60,
PageSize = m_ConnectionOptions.PageSize,
JournalMode = SQLiteJournalModeEnum.Persist,
BinaryGUID = true,
FailIfMissing = true,
SyncMode = m_ConnectionOptions.SynchronizationMode,
Pooling = m_ConnectionOptions.UseConnectionPool,
};

return builder;
}
}

Typically connection pooling will be on.

At application start, each of the database handlers will verify once that the 
table schema in the database is ok. In order to do that they first open a 
connection as per above, that is when the exception tends to occur. The 
application internally is multi-threaded, each of the database handlers will 
execute concurrently (they are scheduled onto the thread pool), so many 
connection objects will get instantiated very close to each other time wise.

Unfortunately the problem seems to be very difficult to replicate reliably. 
Generally we are seeing around a 5% failure rate at application start. 
Any further info, ask away.

Hope that helps,
Barry Roberts. 

Date: Tue, 14 May 2019 16:34:16 -0400
From: "Joe Mistachkin" 
To: "'General Discussion of SQLite Database'"

Subject: Re: [sqlite] Odd exception when creating a connection object


Barry Roberts wrote:
> 
> InvalidOperationException: code = Misuse (21), message = 
> System.Data.SQLite.SQLiteException (0x87EF): bad parameter or other API 
> misuse
> 

Are you using multiple AppDomains?  Do you have other threads in the process
accessing SQLite via its native API while the connections are being created?

Any details you could provide in order to reproduce this could be useful.

--
Joe Mistachkin


[sqlite] [Re] Odd exception when creating a connection object

2019-05-14 Thread Roberts, Barry
Hi,

I have just taken a look at the source code between the 1.0.107.0 and 1.0.110.0 
drivers, and there are a lot of changes in the SQLiteLog.Initialize code area, 
and how the SQLiteConnection now calls into it. We are seeing more of the same 
exceptions thrown during testing with 110 (we never had this with 107), as 
shown below, which seems to indicate the initialization code is not operating 
in a thread safe manner within a process. I would rather the maintainers take a 
look at it, as they are aware of why it was altered.


InvalidOperationException: code = Misuse (21), message = 
System.Data.SQLite.SQLiteException (0x87EF): bad parameter or other API 
misuse

Failed to configure managed assembly logging.

   at System.Data.SQLite.SQLiteLog.Initialize(String className)

   at System.Data.SQLite.SQLiteConnection..ctor(String connectionString, 
Boolean parseViaFramework)

Kind Regards,
Barry Roberts.


[sqlite] Odd exception when creating a connection object

2019-05-01 Thread Roberts, Barry
Hi,

We have a large C# application which uses the System.Data.SQLite.Core NuGet 
package. We have been running with version 1.0.107.0 since it was released 
without any issues. Last week we updated to version 1.0.110.0, and now we are 
seeing occasional odd errors appearing (fortunately we are testing and have not 
released). The following error is the most puzzling, this is coming from code 
that has been working fine before upgrade. Internally we use the 
SQLiteConnectionBuilder to ensure the connection string is formatted correctly 
before passing to a new SQLiteConnection instance. Any ideas what can cause the 
issue below, given that the only thing changed is the sqlite assemblies?

InvalidOperationException: code = Misuse (21), message = 
System.Data.SQLite.SQLiteException (0x87EF): bad parameter or other API 
misuse
Failed to configure managed assembly logging.
   at System.Data.SQLite.SQLiteLog.Initialize(String className)
   at System.Data.SQLite.SQLiteConnection..ctor(String connectionString, 
Boolean parseViaFramework)

We have parse via framework set to true.

Kind Regards,
Barry Roberts.



Re: [sqlite] Searching by Guid index in table (.NET)? I can write and read, but not search by the Guid

2020-02-15 Thread Barry Smith
Regardless of whether you decide to store GUIDs as text or binary, things will 
be easier if you set your connection string appropriately. BinaryGUID is the 
parameter you want to change. 

See https://www.connectionstrings.com/sqlite-net-provider/store-guid-as-text/

For performance, binary is better than the text representation. The text form 
requires twice as many bytes to store and twice as much space in cache. If the 
UID is a significant portion of your row, that overhead can matter. If you’re 
running the latest version of SQLite you can make a computed column to get the 
text representation for manual/CLI access if you need it.

Also, ‘randomblob(16)’ will not give you a real GUID. It’d work for most 
purposes, but why not call the column something else? At least that’d give you 
a red flag to expect problems when you pass the value to a library that 
expects a real GUID and might check the format bits.

> On 14 Feb 2020, at 6:09 am, Andy KU7T  wrote:
> 
> Hi,
> 
> I am having trouble searching for a Guid. I think some conversion is missing. 
>  I wonder if someone can point me to the error…
> 
> Column:
> [ID] GUID(16)
> Index:
> CREATE UNIQUE INDEX [ID_INDEX] ON [DXLOG]([ID] COLLATE [BINARY] ASC);
> Trigger:
> CREATE TRIGGER [AUTOGENERATE_ID] AFTER INSERT ON [DXLOG] FOR EACH ROW WHEN 
> ([NEW].[ID] IS NULL)
> BEGIN
>  UPDATE
>[DXLOG]
>  SET
>[ID] = HEX (RANDOMBLOB (16))
>  WHERE
>[NEW].[ID] IS NULL AND [DXLOG].[ROWID] = [NEW].[ROWID];
> END;
> 
> Inserting a row simply will create a new guid for me. Good. Reading it like 
> this:
> 
> SELECT ID FROM [Dxlog] WHERE RowId = @RowID
> 
> Where @RowId is the LastInsertedRowId.
> And getting it in code:
> 
> Dim rdr2 As SQLiteDataReader = Nothing
> rdr2 = theDB.ExecuteQuery("SELECT ID FROM [Dxlog] WHERE RowId = @RowID", 
> sqlParameters)
> rdr2.Read()
> Me.mvarId = rdr2.Item("ID")
> 
> The returned ID is indeed a Guid. In SQLite it looks like this:
> 
> 40FD6722384053ED3BA45CD1C5FDB30D
> And in .NET: {40fd6722-3840-53ed-3ba4-5cd1c5fdb30d}
> 
> Now, when I am trying to search for it like this, I get nothing:
> 
>sqlParameters = New List(Of SQLiteParameter)()
>AddSqlParam(sqlParameters, "@ID", DbType.Guid, Me.Id)
> 
>rdr2 = theDB.ExecuteQuery("SELECT * FROM [Dxlog] WHERE ID 
> = @ID", sqlParameters)
>rdr2.Read()
> 
> 
> Do I have to convert the Guid to binary somehow?
> 
> Thanks
> Andy
> 
> Sent from Mail for Windows 10
> 


Re: [sqlite] Unexplained table bloat

2020-01-13 Thread Barry Smith
On the original topic...

How does one end up with a database in this state? I.e with a binary value that 
contains 0x00 bytes followed by other bytes but a type of TEXT?

If the definition of a text string in SQLite is that it ends at the first 0x00 
byte, then it seems that anything stored as a text string should adhere to that.

So a database with a TEXT value that contains characters after the first 0x00 
should be considered corrupt. Given that to retrieve the actual contents of the 
cell it must be cast to BLOB, why not force the storage of any string that 
contains 0x00 as a BLOB in the first place?

What am I missing here?

> On 13 Jan 2020, at 6:02 am, Simon Slavin  wrote:
> 
> On 13 Jan 2020, at 9:26am, Dominique Devienne  wrote:
> 
>> Which implies length(text_val) is O(N), while
>> length(blob_val) is O(1),
>> something I never quite realized.
> 
> For this reason, and others discussed downthread, some languages which store 
> Unicode strings store the number of graphemes as well as its contents.  So 
> functions which care about the … let's call it "width" … just retrieve that 
> number rather than having to parse the string to figure out the length.
> 
> In a Unicode string 'length' can mean
> 
> 1) octet count (number of 8-bit bytes used to store the string)
> 2) number of code points (basic unicode unit)
> 3) number of code units (how code points get arranged in UTF8, UTF16, etc., 
> not as simple as it looks)
> 4) length in graphemes (space-using units)
> 5) length in glyphs (font-rendering units)
> 
> and probably others I've forgotten.  Not to mention that I simplified the 
> definitions of the above and may have got them wrong.
> 
> An application centred around rendering text (e.g. vector graphics drawing 
> apps) might have each piece of text stored with all five of those numbers, 
> just to save it from having to constantly recalculate them.


Re: [sqlite] FW: Questions about your "Performance Matters" talk re SQLite

2020-01-02 Thread Barry Smith
One thing that really stands out is “creates 64 threads that operate on independent 
tables in the sqlite database, performing operations that should be almost 
entirely independent.”

But that’s not how SQLite works - at least not when writing data. SQLite takes 
a lock on the entire database, there is no fine granularity locking that allows 
you to perform simultaneous writes to different tables.

It seems attempting to do this - use 64 threads to write to a database - is a 
highly inefficient use of the library.

> On 2 Jan 2020, at 1:54 pm, Doug  wrote:
> 
> I asked for some information from Emery Berger about his video talk on 
> performance where he said they got a 25% improvement in SQLite performance. 
> Here is the reply I got back.
> 
> 
> 
> I know there has been a lot of talk about what can and cannot be done with 
> the C calling interface because of compatibility issues and the myriad set of 
> wrappers on various forms. I’m having a hard time letting go of a possible 
> 25% performance improvement.
> 
> 
> 
> I don’t have the slightest idea on how to run a benchmark (but I could 
> learn). I wonder if the current set of benchmarks used by SQLite developers 
> actually measure throughput using wall-clock numbers. It might be a good idea 
> to put a wrapper around all the benchmarks to capture how long they took to 
> run (wall-clock), and include things like number and type of cpu cores, 
> average cpu busy time, and other relevant numbers. If the benchmarks are run 
> on lots of different machines (all over the world?), it would provide an 
> excellent view of what changes in SQLite made a difference in performance.
> 
> 
> 
> Doug
> 
> 
> 
> From: Curtsinger, Charlie  
> Sent: Thursday, January 02, 2020 10:55 AM
> To: dougf@comcast.net
> Cc: Emery D Berger 
> Subject: Re: Questions about your "Performance Matters" talk re SQLite
> 
> 
> 
> Hello Doug,
> 
> 
> 
> I was able to track down the sqlite benchmark I ran for the paper, and I’ve 
> checked it into the github repository at 
> https://github.com/plasma-umass/coz/tree/master/benchmarks/sqlite. This 
> benchmark creates 64 threads that operate on independent tables in the sqlite 
> database, performing operations that should be almost entirely independent. 
> This benchmark exposes contention inside of sqlite, since running it with a 
> larger number of hardware threads will hurt performance. I see a performance 
> improvement of nearly 5x when I run this on a two-core linux VM versus a 
> 64-thread Xeon machine, since there are fewer opportunities for the threads 
> to interfere with each other.
> 
> 
> 
> You can also find the modified version of sqlite with the same benchmark at 
> https://github.com/plasma-umass/coz/tree/master/benchmarks/sqlite-modified. 
> There are just a few changes from indirect to direct calls in the sqlite3.c 
> file.
> 
> 
> 
> I reran the experiment on the same machine we used for the original Coz 
> paper, and saw a performance improvement of around 20% with the modified 
> version of sqlite. That’s slightly less than what we originally found, but I 
> didn’t do many runs (just five) and there’s quite a bit of variability. The 
> compiler has been upgraded on this machine as well, so there could be some 
> effect there as well. On a much-newer 64-thread Xeon machine I see a 
> difference of just 5%, still in favor of the modified version of sqlite. 
> That’s not terribly surprising, since Intel has baked a lot of extra 
> pointer-chasing and branch prediction smarts into processors in the years 
> since we set up the 64-core AMD machine we originally used for the Coz 
> benchmarks.
> 
> 
> 
> As far as measuring performance, I’d encourage you *not* to use cpu cycles as 
> a proxy for runtime. Dynamic frequency scaling can mess up these 
> measurements, especially if the clock frequency is dropped in response to the 
> program’s behavior. Putting many threads to sleep might allow the OS to drop 
> the CPU frequency, thereby reducing the number of CPU cycles. That doesn’t 
> mean the program will actually run in a shorter wall clock time. Some CPUs 
> have a hardware event that counts “clock cycles” at a constant rate even with 
> frequency scaling, but these are really just high-precision timers and would 
> be perfectly fine for measuring runtime. I’m thinking of the “ref-cycles” 
> event from perf here.
> 
> 
> 
> Hope this helps,
> 
> - Charlie
> 


Re: [sqlite] AVG Function HowTo

2019-12-28 Thread Barry Smith
SQL is a declarative language, not a procedural one. Therefore I find it helps 
to think in declarative terms rather than procedural.

What that means practically in this case is don’t think in terms of loops. 
Think about what you want, and think about how your data is related.

I don’t entirely understand what you want. Am I correct in restating it as:
‘You want to know the expenses in each earning period.‘

Expenses and earnings are related by date.

So, I suggest:
 - you want to group your expenses data by the last earning date.
 - you want to run an aggregate function (likely SUM, but I’m not entirely 
sure what you’re trying to do) within each group.
 - SQL supports comparison operators on dates (assuming you store them in an 
appropriate format, like an integer timestamp or ISO 8601 text).

Something like the following might give you what you want:

SELECT
  SUM(expense) AS period_expenses,
  (SELECT MAX(ear.Date) FROM Earnings AS ear
    WHERE ear.Date <= exp.Date) AS last_earning_date
FROM
  Expenses AS exp
GROUP BY
  last_earning_date

This might not be what you want but is perhaps a starting point of how things 
are done in SQL.

You can also achieve that without the sub select by using joins and window 
functions or a variety of other ways, the above is just how I think about it.

Also you might want to use date functions to strip out only the date if you’re 
storing date+time in your date field.

> On 28 Dec 2019, at 8:45 am, Luuk  wrote:
> 
> 
>> On 28-12-2019 13:07, Csanyi Pal wrote:
>> Hi,
>> 
>> I have attached here the exported sql file of my little sqlite database.
>> 
>> I am a newbe in the sqlite language so I ask here only an advice in which 
>> direction to go, eg. which functions of sqlite language to use to achieve my 
>> goal?
>> 
>> 
> SQLite is able to calculate averages. Let's give a simple example:
> 
> select min(a),avg(a),max(a)
> from (select 1 as a union all select 2 union all select 3 union all select 4);
> 
> It will output:
> 
> 1|2.5|4
> 
> because 1 is the minimum value of the selected values 1,2,3,4
> 
> 2.5 is the average of the selected values 1,2,3,4
> 
> and 4 is the max value of those.
> 
> 
> For calculating with date (and/or time) function you should read this page:
> https://www.sqlite.org/lang_datefunc.html
> 
> If you created an example that 'does not work' (I do mean 'that does not do 
> what you expect it to do'), come back here with that example, and I'm sure 
> someone will help you.
> 
> post the database structure, the query and some example data, and if possible 
> the expected output...
> 
> 
>> Do I think well?
>> 
>> Any advices will be appreciated!
>> 


Re: [sqlite] SQL help

2019-12-28 Thread Barry Smith
Is not identifier resolution (search in the current scope first then step out) 
part of the SQL standard? Damn closed standards and their inability to check 
without paying through the nose.

Even if not, and this is in fact undocumented, I would be amazed if it changed, 
purely for the sheer amount of software it would break.

> On 28 Dec 2019, at 8:19 am, Simon Slavin  wrote:
> 
> On 28 Dec 2019, at 3:46pm, x  wrote:
> 
>> In the embedded select it takes c from the outer table t1 but doesn’t flag 
>> ‘ambiguous column’ for b in the embedded select. Is it standard that tables 
>> at the same level are searched first for the column and tables from the 
>> surrounding  levels are only searched if it’s not found?
> 
> Undocumented.  Therefore even if someone told you the what's done now, it 
> might change in some future version of SQLite.  Either using a different 
> level, or deciding to report the ambiguity.
> 
> To make your code dependable, instead of "b" specify "t1.b" or "t2.b", as you 
> did in your WHERE clause.  This will ensure that your code is understood 
> correctly by SQLite, and it will also help anyone who has to read your code 
> in the future.


Re: [sqlite] SQL help

2019-12-28 Thread Barry Smith
Why does section 6.4.3.a.i of that linked standard not apply to his sub select?

In the quote below, CR is Column Reference, CN is Column Name.

3) If CR does not contain a <qualifier>, then CR shall be contained
within the scope of one or more <table name>s or <correlation name>s
whose associated tables include a column whose <column name> is CN.
Let the phrase possible qualifiers denote those <table name>s and
<correlation name>s.

a) Case:

   i) If the most local scope contains exactly one possible
      qualifier, then the qualifier Q equivalent to that unique
      <table name> or <correlation name> is implicit.



> On 28 Dec 2019, at 9:48 am, Simon Slavin  wrote:
> 
> On 28 Dec 2019, at 5:19pm, Barry Smith  wrote:
> 
>> Is not identifier resolution (search in the current scope first then step 
>> out) part of the SQL standard?
> 
> The way the SELECT statements are nested in the question is itself 
> non-standard in any version of SQL I can cite in this message.
> 
>> Damn closed standards and their inability to check without paying through 
>> the nose.
> 
> For that reason, we quote the 180,000 word SQL-92 …
> 
> <https://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt>
> 
> … quite a lot, and the nine parts of SQL:2016 never.  (Hmm.  No, there's no 
> SQL:2019 yet.)
> 
>> Even if not, and this is in fact undocumented, I would be amazed if it 
>> changed, purely for the sheer amount of software it would break.
> 
> 
> Oh, me too.  But I wouldn't intentionally write code that depended on it.  
> Nor would I recommend that to anyone else.  Every time a developer says "We 
> do  for backward compatibility." I die a little.


Re: [sqlite] ADO.NET driver upgrade puzzle

2017-09-25 Thread Roberts, Barry (FINTL)
Hi Clemens,

As per my original post, all C# access code is making extensive use of "using" 
statements. However we do obviously rely on the connection pool being thread 
safe, because many threads are writing to different databases (connections) 
concurrently.

There is no direct relationship between threads and databases. However I do 
enforce that within a process only one thread can be writing to a specific 
database (because sqlite does not support parallel writing) at a time. All 
threads are running on the Pfx thread pool. As data is received from a device 
it is assigned to a specific data stream. If the buffer for the stream is then 
full a write action is assigned to the thread pool. When that runs it will call 
into the ADO.NET code, and send the data to the database for that connection. 
Here is some test code that shows how we access the connections normally. 
Imagine that the "Insert" method is called whenever we have data to flush for a 
specific database, and there are normally 200-500 database connections active. 
Obviously at the end of the using statements the connection is effectively 
handed back to the connection pool, which is outside of my direct control.

public void Insert([NotNull] IEnumerable records)
{
    ArgumentHelper.AssertNotNull(records, "records");
    var sb = new StringBuilder();

    sb.AppendLine("INSERT INTO");
    sb.AppendLine(m_TableName);
    sb.AppendLine("(identity, time, binary)");
    sb.AppendLine("VALUES");
    sb.AppendLine("(@identity, @time, @binary)");

    using (IDbConnection connection = CreateConnection())
    {
        connection.Open();

        using (IDbTransaction transaction = connection.BeginTransactionSerializable())
        {
            using (IDbCommand command = CreateCommand(connection))
            {
                command.Transaction = transaction;
                command.CommandText = sb.ToString();
                command.Parameters.Add(new SQLiteParameter("@identity", DbType.Int32));
                command.Parameters.Add(new SQLiteParameter("@time", DbType.Int64));
                command.Parameters.Add(new SQLiteParameter("@binary", DbType.Binary));

                if (m_Factory.Configuration.UsePreparedStatements)
                {
                    command.Prepare();
                }

                var stream = new ByteStream();

                IDataParameterCollection parameters = command.Parameters;
                IDataParameter parameter01 = (IDbDataParameter)parameters[1];
                IDataParameter parameter02 = (IDbDataParameter)parameters[2];

                foreach (var record in records)
                {
                    stream.Clear();
                    record.Serialize(stream);

                    parameter01.Value = record.TimeOfValidity.TaiTicks;
                    parameter02.Value = stream.GetContents();
                    command.ExecuteNonQuery();
                }
            }

            try
            {
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}

[NotNull]
public IDbConnection CreateConnection()
{
    var connection = new SQLiteConnection
    {
        ConnectionString = m_Builder.ConnectionString
    };

    return connection;
}

[NotNull]
public IDbCommand CreateCommand([NotNull] IDbConnection connection)
{
    IDbCommand command = new SQLiteCommand();
    command.Connection = connection.NotNull("connection");
    return command;
}



Kind Regards,
Barry Roberts.

b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com




Re: [sqlite] When is db size an issue?

2017-10-02 Thread Roberts, Barry (FINTL)
We have logged live data into databases that are over 200GB without any issues. 
I don't think SQLite will be the issue itself, as it is merely limited by the 
file system. The more likely issue is when you need to transfer or backup the 
file, it can get unwieldy if it is too large.

Kind Regards,
Barry Roberts.

b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com




[sqlite] pragma integrity_check throwing exception

2017-10-03 Thread Roberts, Barry (FINTL)
Hi,

Assume I have a database which is "malformed" due to rowid xxx missing from a 
table index. In the System.Data.SQLite.dll 1.0.80.0 C# driver the following 
code would return the reason, I would get a list of the rowid problems allowing 
me to log them.

private static IEnumerable<string> IntegrityCheck(SQLiteConnection connection)
{
    using (var command = new SQLiteCommand(connection))
    {
        command.CommandTimeout = 0;
        command.CommandText = "pragma integrity_check";

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return reader.GetString(0);
            }
        }
    }
}

However using a later driver such as 1.0.105.1 the ExecuteReader() call throws 
an exception saying the database is malformed. That is not very helpful and 
stops me running the integrity check. If I replace the integrity_check with 
quick_check it works and returns ok, so access to the file is ok, just some 
internal indexes are messed up. Is there any way to get the above working in 
the later drivers?

Kind Regards,
Fugro Intersite.

Barry Roberts.
b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com




[sqlite] ADO.NET driver upgrade puzzle

2017-09-25 Thread Roberts, Barry (FINTL)
Hi,

We have a C# application which logs data in real-time to sqlite files. The 
application is multi-threaded, and typically logs 200-500 data streams 
simultaneously. The logging rate per stream is 1Hz to 1000Hz, depending on the 
type of data coming in. Each data stream is logged to its own db3 file. This 
means that there are generally 200-500 connections active, and we run with 
connection pooling on.  Each data stream typically buffers for 2-3 seconds then 
writes the buffer to the respective db3 file. All processing with the 
application is multi-threaded.

I am currently looking at updating the version of System.Data.SQLite we are 
running, but running into some issues. The application currently runs 1.0.80.0, 
and I am testing with 1.0.105.1, the code changes to our app were fairly 
minimal. The reason we were looking to upgrade is that performance profiling 
was showing a high degree of lock contention within the old driver code, 
primarily due to a static lock in the old driver. The newer driver uses locks 
based on the connection handle, which removes the lock contention we were 
seeing, which leads to a significant improvement in CPU usage.

When logging with the old driver there are no issues, other than the lock 
contention.
When logging with the new driver, the system sometimes locks up, or marks one 
or more of the db3 as malformed.

After much testing, this seems to occur when we prepare statements on the 
sqlite command. If I disable the IDbCommand.Prepare() calls, the driver appears 
to be stable. Structurally our code is fairly standard, and makes extensive use 
of "using" blocks. We only hold the connection and transaction open for the 
minimal time possible, releasing the connection back to the pool.

using (connection)
    open connection
    using (begin transaction)
        using (create command)
            prepare insert command
            transfer data
        commit transaction

One of the stability tests I run involves 2 processes logging to the same data 
sets (db3) files, because we do have multi-process access onto the db3 files. 
Connections are configured to use journal mode "persist".

Should we not use prepare calls when running connection pooling, or do you have 
any other recommendations we should take note of.

Kind Regards,
Barry Roberts.

b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com




Re: [sqlite] ADO.NET driver upgrade puzzle

2017-09-27 Thread Roberts, Barry (FINTL)
Hi,

I am aware of the sqlite FAQ, and especially the comments regarding file 
systems. We only ever use NTFS file system, and never have any FAT or FAT32 
file systems. Given that the old 1.0.80.0 driver is totally stable, and the 
newer one is not, on the same exact hardware, I do not think the issue is 
hardware related.

I enforce that the writer to a specific database can only run 1 at a time, this 
is done via the C# Interlocked mechanism which has always proved to be very 
robust. There is a separate logger object instance per database, and within 
each of those the writing is controlled with the interlock. This allows the 
application to buffer for longer if the writing to disk encounters a short term 
slow down.

As per Clemens suggestion I could replace the use of the connection pool and 
hold a connection open per database. The code can be structured to do that, 
because object instances are not shared between database logger instances. 
However I would have preferred to use the connection pool, because that allows 
me to open late/close early on my connection usage, which is generally better 
for maintenance and how the production code is currently structured. I could 
restructure the code, but that is not really desirable in production code, 
simply due to an ADO.NET driver update.

I have been running my test application for days now without issues. This is 
configured with pooling on, statement preparation off. If I enable statement 
preparation the system will fail after an hour or two. So it looks like our 
production code needs to run the same, and remove the IDbCommand.Prepare() 
calls. The old driver works well but has lots of lock contention, the new one 
resolves that issue, but is not as stable in some configurations. It is a 
trade-off, I need the new driver to improve performance, but will trade off the 
command prepare calls, to enable driver stability.

Kind Regards,
Barry Roberts.
b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com




Re: [sqlite] C# pragma integrity_check call throwing exception

2017-10-26 Thread Roberts, Barry (FINTL)
Simon,


> Your text makes it look like you think that that kind of corruption affects 
> only existing rows.  This is not the case.  If you continue to write to a 
> database which shows this problem, you can lose more of the existing rows 
> and/or the new data you're trying to write.  The proper reaction to any such 
> errors is to revert to an uncorrupted backup or, at minimum, to do VACUUM or 
> REINDEX then check that the data is still consistent and plausible.

I am aware that other issues may exist. If the integrity_check did not return 
"ok" we would then try and fix the issues using the command line tooling. 
Generally if/when there is an issue it is reasonably minor, and a VACUUM or 
REINDEX fixes it.  My concern was that the newer C# driver is showing different 
behaviour to the old driver when tested against the exact same sqlite file (I 
have unit tests covering this scenario).  I have to justify changing our source 
code because of it.

Keith,


> Have you run "PRAGMA integrity_check;" from the command line shell against 
> the same database and does it return rows or just throw the same exception?


Thanks for the suggestion, I have just run a test database that is "corrupted" 
using the latest sqlite3 download

SQLite version 3.21.0 2017-10-24 18:55:49
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open entities-corrupt.db3
sqlite> pragma integrity_check;
Error: database disk image is malformed
sqlite> pragma quick_check;
ok
sqlite> REINDEX;
sqlite> pragma quick_check;
ok
sqlite> pragma integrity_check;
ok
sqlite>

So it looks like the actual engine is doing this now. This helps me justify 
altering our code base, to work with the revised behaviour.

Kind Regards,
Fugro Intersite.

Barry Roberts.
b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com



[sqlite] C# pragma integrity_check call throwing exception

2017-10-24 Thread Roberts, Barry (FINTL)
Hi,

I asked the following question a few weeks ago, but did not get any responses, 
hoping someone may have an idea.

We are currently running System.Data.SQLite.dll 1.0.80.0 and would like to 
upgrade to a newer version. However there are a variety of issues we have run 
into, most of which I have now resolved, however one remains. Assume I have a 
database which is "malformed" due to rowid xxx missing from a table index. 
Using the 1.0.80.0 driver the following code would return the results from the 
pragma integrity_check call.

private static IEnumerable<string> IntegrityCheck(SQLiteConnection connection)
{
    using (var command = new SQLiteCommand(connection))
    {
        command.CommandTimeout = 0;
        command.CommandText = "pragma integrity_check";

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return reader.GetString(0);
            }
        }
    }
}

Our system would get a list of the rowid problems allowing it to log them and 
inform the user. I am currently testing using the 1.0.105.1 driver, however the 
ExecuteReader() call (above) throws an exception saying the database is 
malformed. That is not very helpful and effectively stops the system obtaining 
integrity check information. If I replace the integrity_check with quick_check 
it works and returns ok, so access to the file is ok, just some internal 
indexes are messed up. Is there any way to get the above working in the later 
drivers, or can someone tell me which driver update changed this behaviour?

Kind Regards,
Fugro Intersite.

Barry Roberts.
b.robe...@fugro.com<mailto:b.robe...@fugro.com> | www.fugro.com
