[sqlite] extension_functions.c - node_iterate blows stack

2019-10-01 Thread Dave King
Hi all, I ran into an issue with the extension-functions.c file posted
here: iterating through the binary tree representation used for
mode/median/quartile isn't done in a tail-recursive way, which can overflow
the stack:
https://www.sqlite.org/contrib//download/extension-functions.c?get=25

I understand from the page that these are "use at your own risk" files, but
I also see the file in question has had a number of patches. Would it be
appropriate for me to submit a new version of the file correcting this
behavior? Thanks for any guidance you can provide.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] pointer-passing interface, and testing if pointer is correct 'type'....

2019-06-15 Thread dave
Folks;
 
I have a v-table using the pointer-passing interface, and binding 'null' is
a valid use-case.  However, as it stands there is no way to tell on the v-table
implementation side whether sqlite3_value_pointer() returns 'null' because
that is what the user explicitly and validly chose, or because they
specified the wrong 'pointer type string'.  As it is, I can't emit useful
error messages to developers in that case.
 
If there is a method of differentiating 'null because you want it', and
'null because you can't have it', please advise.
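
For illustration, here is roughly the situation on the implementation side
today (a sketch only; myvtabFilter and the "mytype" type string are made-up
names for this example):

#include <sqlite3.h>

static int myvtabFilter(
  sqlite3_vtab_cursor *pCur, int idxNum, const char *idxStr,
  int argc, sqlite3_value **argv
){
  void *p = sqlite3_value_pointer( argv[0], "mytype" );
  if ( NULL == p )
  {
    /* Case 1: caller did sqlite3_bind_pointer( stmt, 1, NULL, "mytype", 0 )
    **         -- an explicit, valid null binding.
    ** Case 2: caller bound a pointer with the wrong type string -- an error.
    ** Both cases arrive here identically, so no useful diagnostic can be
    ** emitted to the developer. */
  }
  /* ... */
  return SQLITE_OK;
}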
 
Outside of that, I would suggest as a future enhancement possibly:
*  int sqlite_isvalid_pointer(sqlite3_value*, const char*)
  returns a 'boolean' indicating it was bound validly or not
or if doing two validations (one for the test, one for the value retrieval)
is unappealing, maybe:
*  void *sqlite3_value_pointer_v2(sqlite3_value*, const char*, int*);
  gets pointer as per usual, and if final parameter is non-null, provide a
'boolean' indicating that it was validly bound.

Thanks!
-dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite segfault on INNER JOIN ON (...) + WHERE field IN (list, of, items)

2019-02-19 Thread dave
...
> To: SQLite mailing list
> Subject: Re: [sqlite] sqlite segfault on INNER JOIN ON (...) 
> + WHERE field IN (list, of, items)
> 
> 
> If you compile with assert() statements enabled (using the
> --enable-debug option with ./configure or otherwise adding the
> -DSQLITE_DEBUG flag) then you will hit an assertion fault earlier.
> The problem is that the same ephemeral table - the in-memory table
> that is constructed to hold the RHS of the IN operator "(1,2,3)" - is
> being used for two incompatible purposes.  The check-in that caused
...

Thanks for the info.  I wonder if it makes sense for me to include
SQLITE_DEBUG in all debug configurations of my product?  Does it have any
untoward effects other than slowdowns, etc.?
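
(For reference, here is roughly how I would wire it in -- a sketch only;
MY_PRODUCT_DEBUG is just a placeholder for my product's own debug flag, and
SQLITE_DEBUG is the real compile-time option that enables the internal
assert()s and self-checks:)

/* sqlite3_wrap.c -- hypothetical wrapper translation unit for a debug build */
#ifdef MY_PRODUCT_DEBUG
#define SQLITE_DEBUG 1
#endif
#include "sqlite3.c"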

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite segfault on INNER JOIN ON (...) + WHERE field IN (list, of, items)

2019-02-19 Thread dave

> Wow; can confirm. I crashed it in my debugger in the 
> amalgamation of 3.27.1
> in the function SQLITE_PRIVATE RecordCompare
> sqlite3VdbeFindCompare(UnpackedRecord *p)
> 
> At line 80720,   if( p->pKeyInfo->nAllField<=13 )
> 
> in that case:
> pKeyInfo is NULL

Lastly, if it helps, converting the query to:

SELECT a.id FROM t AS a INNER JOIN t as b ON a.id = b.id WHERE 
a.id = 1 or a.id = 2 or a.id = 3;

Does /not/ crash.

(and nice work on the bisect! Lol)

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite segfault on INNER JOIN ON (...) + WHERE field IN (list, of, items)

2019-02-19 Thread dave
> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Ignacio Losiggio
> Sent: Tuesday, February 19, 2019 6:27 PM
> To: sqlite-users@mailinglists.sqlite.org
> Subject: [sqlite] sqlite segfault on INNER JOIN ON (...) + 
> WHERE field IN (list, of, items)
> 
> 
> (I'm sorry if my description of the bug is not good. I tried to get as
> much information as possible before sending this email).
> 
> A few days ago I encountered some weird segfaults on my django 
> application.
> After a bit of troubleshooting I realized that the application was 
> crashing on
> the sqlite library.
> 
> I dumped the database and recompiled python with debugging symbols to 
> get the
> segfaulting query. After some digging I managed to get a small 
> reproduction of
> the segfault:
> 
> CREATE TABLE IF NOT EXISTS t (id integer NOT NULL PRIMARY KEY 
> AUTOINCREMENT);
> INSERT INTO t VALUES(1);
> SELECT a.id FROM t AS a INNER JOIN t as b ON a.id = b.id 
> WHERE a.id IN 
> (1, 2, 3);
> 
> Once I got that I installed fossil and bisected the segfault to the
> e130319317e76119 version, below is the output of `fossil chart`:
> 
>   1 BAD 2019-02-19 20:29:05 f16d127c3b4a735a
>   3 BAD 2019-01-14 13:32:15 ddc3697efd61830f
>   5 BAD 2019-01-05 21:09:37 598d7358e7329f0d
>   7 BAD 2019-01-03 15:17:01 bef216dfa1456a78
>   8 BAD 2019-01-01 19:17:42 911342f7512145a8
>   9 BAD 2018-12-31 21:43:55 b57c545a384ab5d6
>  10 BAD 2018-12-31 20:39:37 e130319317e76119
>  11 GOOD 2018-12-31 17:58:05 f856676c8438dbf5 CURRENT
>   6 GOOD 2018-12-31 16:36:42 4678cb1044f0b4dc
>   4 GOOD 2018-12-24 20:00:27 3873941c4fb9aa2d
>   2 GOOD 2018-12-01 12:34:55 bf8c1b2b7a5960c2
> 
> I don't have experience with fossil nor sqlite3 nor SQL in 
> general, so 
> I really
> don't know how to add more information to this report.
> 
> 
> Thanks in advance.


Wow; can confirm. I crashed it in my debugger in the amalgamation of 3.27.1
in the function SQLITE_PRIVATE RecordCompare
sqlite3VdbeFindCompare(UnpackedRecord *p)

At line 80720,   if( p->pKeyInfo->nAllField<=13 )

in that case:
pKeyInfo is NULL

Stack traceback:

sqlite3.exe!sqlite3VdbeFindCompare(UnpackedRecord * p) Line 80720
sqlite3.exe!sqlite3BtreeMovetoUnpacked(BtCursor * pCur,
UnpackedRecord * pIdxKey, __int64 intKey, int biasRight, int * pRes) Line
68434
sqlite3.exe!sqlite3VdbeExec(Vdbe * p) Line 87626
sqlite3.exe!sqlite3Step(Vdbe * p) Line 81718
sqlite3.exe!sqlite3_step(sqlite3_stmt * pStmt) Line 81784
sqlite3.exe!exec_prepared_stmt(ShellState * pArg, sqlite3_stmt *
pStmt) Line 10445
sqlite3.exe!shell_exec(ShellState * pArg, const char * zSql, char *
* pzErrMsg) Line 10752
sqlite3.exe!runOneSqlLine(ShellState * p, char * zSql, _iobuf * in,
int startline) Line 16106
sqlite3.exe!process_input(ShellState * p) Line 16206
sqlite3.exe!wmain(int argc, wchar_t * * wargv) Line 16959

Hth a little.

Cheers,
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] virtual table xBestIndex and SQLITE_CONSTRAINT and zErrMsg

2019-02-19 Thread dave
> On 2/19/19, dave  wrote:
> > addition, but I have lost a capability relative to the 
> prior scheme of using
> > high query cost along with a special flag communicated in 
> pIdxInfo->idxNum,
> > that being the ability to emit contextual info as to why 
> the query failed.
> 
> Yeah.  There is no way to report an error out of xBestIndex.  And, in
> fact, you would not want to do that because one or more xBestIndex
> calls might actually work.  Or, there might be multiple xBestIndex
> calls that all fail for different reasons, in which case it is unclear
> which error should be reported.
> 
> I will ponder your request.  In the meantime, you can continue to use
> the old method, which still works like it always has.
> 
> -- 
> D. Richard Hipp


OK, well the theory was that the message would be emitted only when all
the candidate plans had been tried and still no solution was found (I guess at
the same spot where the current message is emitted).  But maybe that is too
late, and any messages set along the way are already gone.

As for multiple messages, even just emitting an arbitrary one is useful.
These failures happen at design time, and the developer incrementally refines
his/her query until there are no such errors.  I'm not sure the failure can
even happen once a working query has been created.  I would think
that if you had defined a query that was demonstrably solvable once, any
subsequent executions would at worst gravitate to that known working solution
even if the planner tried to do things differently that time (maybe based on
data values).

OK, for now I will revert to the old method.

Cheers!

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] virtual table xBestIndex and SQLITE_CONSTRAINT and zErrMsg

2019-02-19 Thread dave
I noticed that in 3.26 a feature was added whereby a proposed execution plan
can be rejected in vtables by returning SQLITE_CONSTRAINT.  I welcome this
addition, but I have lost a capability relative to the prior scheme of using
high query cost along with a special flag communicated in pIdxInfo->idxNum,
that being the ability to emit contextual info as to why the query failed.

Under the new scheme, a failed query is met with:

  Error: no query solution

But under the old scheme I was able to emit:

  Error: GROUPACCTS: There must be equality constraints on GNAME and ISLOCAL

The context info is handy for developers building the query so they can know
what they are missing, since required constraints like this are non-obvious
from a pure SQL standpoint.  This is especially true in the context of
joins, since you otherwise wouldn't even know which table is
problematic.

Under the old scheme I would have to fail my query in xFilter, and I would
set the error text like this:

 if ( IDXVAL_FAILQUERYPLAN == idxNum )
 {
  sqlite3_free( pThis->pVtab->zErrMsg );
  pThis->pVtab->zErrMsg = sqlite3_mprintf( VTBLA4GNAME
    ": There must be equality constraints on GNAME and ISLOCAL" );
  return SQLITE_CONSTRAINT;
 }

I did try setting the error text in a similar manner in the xBestIndex method,
however it seems this text is ignored in that case, and I only get the 'no
query solution' message.

My suggestion would be to not ignore it in the case of failing xBestIndex
for no query plan, and to emit it if it has been set.
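
To make the suggestion concrete, here is a rough sketch of what I have in mind
(haveRequiredEqConstraints() is a hypothetical stand-in for my real constraint
check; today the zErrMsg text set this way is discarded, per the behaviour
described above):

#include <sqlite3.h>

extern int haveRequiredEqConstraints( sqlite3_index_info* );  /* hypothetical */

static int myvtabBestIndex( sqlite3_vtab *pVTab, sqlite3_index_info *pIdxInfo )
{
  if ( !haveRequiredEqConstraints( pIdxInfo ) )
  {
    sqlite3_free( pVTab->zErrMsg );
    pVTab->zErrMsg = sqlite3_mprintf(
      "GROUPACCTS: There must be equality constraints on GNAME and ISLOCAL" );
    return SQLITE_CONSTRAINT;  /* reject this candidate plan */
  }
  /* ... otherwise fill in idxNum, estimatedCost, etc. ... */
  return SQLITE_OK;
}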

If this is done, I imagine some additional consideration would have to be
made for the case where one proposed query plan is rejected, and another
plan has been accepted.  In that case, maybe the ultimately successful plan would
still have error text left over from the previously rejected plan?  I don't know if
this would cause a problem or not.

Cheers!
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Syntax clarification

2018-12-28 Thread Dave Delage
I've searched without success for this answer. I use SQLite3, Zeos and
Delphi, so maybe this isn't purely an sqlite3 question, but here goes:


 What are the pros/cons of query.sql.text := 'some string'; versus  
query.sql.add('some string');


Dave







___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Regarding CoC

2018-10-19 Thread Dave Waters
As if I needed another reason for SQLite to be my favorite database.  Well
done.

On Fri, Oct 19, 2018 at 10:11 AM Richard Hipp  wrote:

> On 10/19/18, Mantas Gridinas  wrote:
> >
> > I found code of conduct in documentation and I was wondering if it were
> > true. Checking the version history it appears to have been added on
> > 2018-02-22.
> >
>
> Yes.  Clients were encouraging me to have a code of conduct.  (Having
> a CoC seems to be a trendy thing nowadays.)  So I looked around and
> came up with what you found, submitted the idea to the whole staff,
> and everybody approved.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] virtual tables, xBestIndex, pIdxInfo->aConstraint[].op, and 'NOT'...

2018-10-17 Thread dave
> Behalf Of Richard Hipp
> > And also the meaning/use of 
> SQLITE_INDEX_CONSTRAINT_FUNCTION, that would be
...
> The SQLITE_INDEX_CONSTRAINT_FUNCTION mechanism was added to support
> the new Geopoly extension, and the ability to index on things like
> "WHERE geopoly_within(_shape,...)" and "WHERE
> geopoly_overlap(_shape,...)".  There is little to no documentation on
> the SQLITE_INDEX_CONSTRAINT_FUNCTION mechanism yet.  See the geopoly
> implementation for an example.
> -- 
> D. Richard Hipp

Thanks for the scoop!

OK, at this point I'll assume the 'NOT' variants of MATCH, LIKE, GLOB, REGEXP
are currently /not/ supported in xBestIndex, and just hope that maybe
someday they will be.

Cheers, and thanks for all the feedback!
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] virtual tables, xBestIndex, pIdxInfo->aConstraint[].op, and 'NOT'...

2018-10-16 Thread dave
> Behalf Of Keith Medcalf
> Sent: Saturday, October 13, 2018 6:53 PM
...
> 
> > Lastly, I'm assuming that 'IS' and 'IS NOT' is functionally 
> > equivalent to the '=' and '<>' operators?  
> > Or is there some subtle difference
> 
> As long as neither the LHS or the RHS are null, then IS and 
> IS NOT are the same as == and <> respectively.
> 
> However, if you use the "comparison" operators (==, <>) then 
> if either the LHS or the RHS or both are NULL, then the 
> results is NULL (that is, false).  For the purpose of these 
> comparisons NULL is a value that is neither equal to nor not 
> equal to any other value, including null.
> 
> IS and IS NOT mean that NULL is a distinct value and NULL IS 
> NULL is TRUE, NULL IS NOT 7 is TRUE, and so on and so forth.
> 

Thanks for the confirmation of the behaviour of 'is' in sqlite.

And if anyone has comments regarding the first two issues I mentioned,
namely the absence of support of 
  NOT MATCH, NOT LIKE, NOT GLOB, NOT REGEXP
in xBestIndex()

And also the meaning/use of SQLITE_INDEX_CONSTRAINT_FUNCTION, that would be
super helpful.

Cheers!

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] virtual tables, xBestIndex, pIdxInfo->aConstraint[].op, and 'NOT'...

2018-10-13 Thread dave
OK, I finally got around to upgrading the sqlite version to 3.25.1, and
testing this xBestIndex stuff out.  Here is what I found:

These forms can still bypass the vtable's implementation of
constraints/indices.  They are all negations:

NOT MATCH
NOT LIKE
NOT GLOB
NOT REGEXP

In each of these cases, there is no invocation of xBestIndex to let the
vtable handle those negated predicates, but there /are/ invocations to let
the vtable handle the asserted forms.

As mentioned, I can live with this limitation for now, but you might
consider extending support for such in the future for completeness.  In my
case, I implemented LIKE, but that code is bypassed if the user specifies
NOT LIKE -- a situation which can produce surprising results!

Also, I noticed SQLITE_INDEX_CONSTRAINT_FUNCTION.  I don't know what this
is, and I could not find any documentation for such, and so I couldn't test
that one.  I'm guessing it is now possible to have a user-defined predicate
function?  I'd like to know how to use that.  At any rate, I suspect it
might need a 'not' version as well.

Lastly, I'm assuming that 'IS' and 'IS NOT' are functionally equivalent to
the '=' and '<>' operators?  Or is there some subtle difference?  E.g. I can
issue a query with a search condition "where name is 'person'" which
triggers invocation of xBestIndex, and seems to behave like '='.  It was my
belief that the right-hand-side of 'IS' could only contain a boolean (true,
false, null) as per SQL-92, but I'm guessing that SQLite extends its
meaning.  Interestingly a search condition "where name is true" parses and
runs, but does /not/ cause invocation of xBestIndex at all.

Cheers!
-dave

> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of dave
> Sent: Monday, September 24, 2018 2:57 PM
> To: 'SQLite mailing list'
> Subject: Re: [sqlite] [EXTERNAL] virtual tables, 
> xBestIndex,pIdxInfo->aConstraint[].op, and 'NOT'...
> 
> 
> I am using 3.20.1
> 
> Yes, I noticed LIKE etc.  It looked like it changed from a 
> bitfield to an
> enum at some point.  So, I guess I am one versionpoint shy of 
> having NE and
> IS.
> OK, I'll check out at least 3.21 -- I was hesitant to upgrade just now
> because there were some shell.c issues I had with those 
> embedded extensions
> relative to the implementation in 3.20, but this may compell 
> me to do so.
> (I use shell.c in a special debug build of my product).
> 
> Thanks!
> -dave
> 
> > -Original Message-
> > From: sqlite-users 
> > [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> > Behalf Of Hick Gunter
> > Sent: Monday, September 24, 2018 1:57 AM
> > To: 'SQLite mailing list'
> > Subject: Re: [sqlite] [EXTERNAL] virtual tables, xBestIndex, 
> > pIdxInfo->aConstraint[].op, and 'NOT'...
> > 
> > 
> > Which version are you using? The set of constraint constants 
> > was extended in versions 3.10 (LIKE, GLOB, REGEXP), 3.21 (NE, 
> > IS*) and most recently 3.25 (FUNCTION)
> > 
> > -Ursprüngliche Nachricht-
> > Von: sqlite-users 
> > [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
> > Auftrag von dave
> > Gesendet: Sonntag, 23. September 2018 23:26
> > An: 'SQLite mailing list' 
> > Betreff: [EXTERNAL] [sqlite] virtual tables, xBestIndex, 
> > pIdxInfo->aConstraint[].op, and 'NOT'...
> > 
> > Folks,
> > 
> > I cannot seem to find a means of filtering on negated 
> > operators, e.g. <>, not null, not like, etc., in the 
> > xBestIndex() method for virtual vables.  As best as I can 
> > tell, I cannot, unless there is something I am missing, hence 
> > this inquiry.
> > 
> > In a few virtual tables I have implemented, I have handled 
> > the SQLITE_INDEX_CONSTRAINT_EQ, and the 
> > SQLITE_INDEX_CONSTRAINT_LIKE (for
> > example) in the xBestIndex and xFilter.  These code paths are 
> > taken for queries of the form:
> > 
> > select * from myvtab where mycol = 'xxx';
> > select * from myvtab where mycol like 'xxx';
> > 
> > but /not/ for queries of the form:
> > 
> > select * from myvtab where mycol <> 'xxx';
> > select * from myvtab where mycol not like 'xxx';
> > 
> > I can work around these things for now with caveats in 
> > documentation, but it does sometimes cause confusion to users.
> > 
> > For example, in one case I have extended the syntax of LIKE . 
> >  That extension of syntax is invoked for a positive LIKE 
> > constraint, but is bypassed for a negated one.  I can work 
> > around that with an extension function, but I won't get the 
> > hints at

Re: [sqlite] [EXTERNAL] virtual tables, xBestIndex, pIdxInfo->aConstraint[].op, and 'NOT'...

2018-09-24 Thread dave
I am using 3.20.1

Yes, I noticed LIKE etc.  It looked like it changed from a bitfield to an
enum at some point.  So, I guess I am one version point shy of having NE and
IS.
OK, I'll check out at least 3.21 -- I was hesitant to upgrade just now
because there were some shell.c issues I had with those embedded extensions
relative to the implementation in 3.20, but this may compel me to do so.
(I use shell.c in a special debug build of my product).

Thanks!
-dave

> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Hick Gunter
> Sent: Monday, September 24, 2018 1:57 AM
> To: 'SQLite mailing list'
> Subject: Re: [sqlite] [EXTERNAL] virtual tables, xBestIndex, 
> pIdxInfo->aConstraint[].op, and 'NOT'...
> 
> 
> Which version are you using? The set of constraint constants 
> was extended in versions 3.10 (LIKE, GLOB, REGEXP), 3.21 (NE, 
> IS*) and most recently 3.25 (FUNCTION)
> 
> -Ursprüngliche Nachricht-
> Von: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
> Auftrag von dave
> Gesendet: Sonntag, 23. September 2018 23:26
> An: 'SQLite mailing list' 
> Betreff: [EXTERNAL] [sqlite] virtual tables, xBestIndex, 
> pIdxInfo->aConstraint[].op, and 'NOT'...
> 
> Folks,
> 
> I cannot seem to find a means of filtering on negated 
> operators, e.g. <>, not null, not like, etc., in the 
> xBestIndex() method for virtual vables.  As best as I can 
> tell, I cannot, unless there is something I am missing, hence 
> this inquiry.
> 
> In a few virtual tables I have implemented, I have handled 
> the SQLITE_INDEX_CONSTRAINT_EQ, and the 
> SQLITE_INDEX_CONSTRAINT_LIKE (for
> example) in the xBestIndex and xFilter.  These code paths are 
> taken for queries of the form:
> 
> select * from myvtab where mycol = 'xxx';
> select * from myvtab where mycol like 'xxx';
> 
> but /not/ for queries of the form:
> 
> select * from myvtab where mycol <> 'xxx';
> select * from myvtab where mycol not like 'xxx';
> 
> I can work around these things for now with caveats in 
> documentation, but it does sometimes cause confusion to users.
> 
> For example, in one case I have extended the syntax of LIKE . 
>  That extension of syntax is invoked for a positive LIKE 
> constraint, but is bypassed for a negated one.  I can work 
> around that with an extension function, but I won't get the 
> hints at record enumeration time that could reduce the 
> dataset from the underlying source.
> 
> In other cases, I have some 'required' columns, which must be 
> present in a EQ constraints (usually they wind up being 
> parameters to a function call that generates the underlying 
> data).  I emit an error when such constraints are missing, 
> but it can be confusing to users when:
> 
> select * from myvtab where mycol <> 'xxx';
> 
> indicates that "you must have a constraint on 'mycol'"
> 
> Lastly, some behavioural inconsistencies occur between these forms:
> 
> select * from myvtab where mycol = null;
> select * from myvtab where mycol is null;
> 
> Since the first comes in as a constraint to xBestIndex, 
> whereas the second does not.
> 
> Anyway, as I said, I can work around this for now, but I 
> thought I would ask
> if:
> 
> 1)  is it true:  xBestIndex doesn't get to see negated 
> predicates, or is it just somewhere that I have not found?
> 2)  if it's not possible, would it be worthwhile to consider 
> extending the operator set in some way to present the 
> negative clauses at some release in the future?
> 
> Thanks for any info!
> 
> -dave
> 
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> ___
>  Gunter Hick | Software Engineer | Scientific Games 
> International GmbH | Klitschgasse 2-4, A-1130 Vienna | FN 
> 157284 a, HG Wien, DVR: 0430013 | (O) +43 1 80100 - 0
> 
> May be privileged. May be confidential. Please delete if not 
> the addressee.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] virtual tables, xBestIndex, pIdxInfo->aConstraint[].op, and 'NOT'...

2018-09-23 Thread dave
Folks,
 
I cannot seem to find a means of filtering on negated operators, e.g. <>,
not null, not like, etc., in the xBestIndex() method for virtual tables.  As
best as I can tell, I cannot, unless there is something I am missing, hence
this inquiry.
 
In a few virtual tables I have implemented, I have handled the
SQLITE_INDEX_CONSTRAINT_EQ, and the SQLITE_INDEX_CONSTRAINT_LIKE (for
example) in the xBestIndex and xFilter.  These code paths are taken for
queries of the form:
 
select * from myvtab where mycol = 'xxx';
select * from myvtab where mycol like 'xxx';
 
but /not/ for queries of the form:
 
select * from myvtab where mycol <> 'xxx';
select * from myvtab where mycol not like 'xxx';
 
I can work around these things for now with caveats in documentation, but it
does sometimes cause confusion to users.
 
For example, in one case I have extended the syntax of LIKE.  That
extension of syntax is invoked for a positive LIKE constraint, but is
bypassed for a negated one.  I can work around that with an extension
function, but I won't get the hints at record enumeration time that could
reduce the dataset from the underlying source.
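 
To make this concrete, the relevant part of my xBestIndex currently looks
roughly like this (a simplified sketch; the column number and cost are
illustrative only, and the point is that negated forms such as NOT LIKE and <>
simply never appear here, per the behaviour described above):

static int myvtabBestIndex( sqlite3_vtab *pTab, sqlite3_index_info *pIdxInfo )
{
  int i, nArg = 0;
  pIdxInfo->idxNum = 0;
  for ( i = 0; i < pIdxInfo->nConstraint; i++ )
  {
    const struct sqlite3_index_constraint *p = &pIdxInfo->aConstraint[i];
    if ( !p->usable || p->iColumn != 0 ) continue;
    if ( p->op == SQLITE_INDEX_CONSTRAINT_EQ ||
         p->op == SQLITE_INDEX_CONSTRAINT_LIKE )
    {
      /* hand the RHS value to xFilter so it can reduce the dataset */
      pIdxInfo->aConstraintUsage[i].argvIndex = ++nArg;
      pIdxInfo->aConstraintUsage[i].omit = 1;
      pIdxInfo->idxNum = ( p->op == SQLITE_INDEX_CONSTRAINT_EQ ) ? 1 : 2;
      pIdxInfo->estimatedCost = 10.0;
    }
  }
  return SQLITE_OK;
}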
 
In other cases, I have some 'required' columns, which must be present in
EQ constraints (usually they wind up being parameters to a function call
that generates the underlying data).  I emit an error when such constraints
are missing, but it can be confusing to users when:
 
select * from myvtab where mycol <> 'xxx';
 
indicates that "you must have a constraint on 'mycol'"
 
Lastly, some behavioural inconsistencies occur between these forms:
 
select * from myvtab where mycol = null;
select * from myvtab where mycol is null;
 
Since the first comes in as a constraint to xBestIndex, whereas the second
does not.
 
Anyway, as I said, I can work around this for now, but I thought I would ask
if:

1)  is it true that xBestIndex doesn't get to see negated predicates, or is it
just something that I have not found?
2)  if it's not possible, would it be worthwhile to consider extending the
operator set in some way to present the negative clauses at some release in
the future?
 
Thanks for any info!
 
-dave

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Configuring csv extension to use '; ' as column delimiter instead of ',' ?

2018-05-01 Thread dave
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Olivier Mascia
> 
> Considering:
> 
> CREATE VIRTUAL TABLE temp.t1 USING csv(filename='thefile.csv');
> 
> Is there any way to teach the csv extension to use ';' 
> instead of ',' as the column delimiter, getting away from the 
> strict RFC4180 definition?
 
While on the topic of Excel generated CSV, I had occasion to do this a
year-and-a-half ago, so let me forewarn you of a couple other things:
*  quoting; excel will sometimes enclose a field in quotes, and sometimes
not, depending on the content therein
*  quoting; excel uses 'double quoting' for 'escaping'; i.e. two quotation
marks in a row are interpreted as a quote char, rather than the arguably
more common backslash style escaping
*  multi-line fields;  your excel document can have cells which contain
multiple lines (i.e. embedded carriage returns)

Anyway, all this led me in my case to write a state machine to read in a
logical 'line' of text, and crack it into fields.
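
In case it is useful, the core of such a field reader can be quite small.  A
simplified sketch (not the csv.c code; CR handling and dynamic buffer growth
omitted for brevity) that honours doubled-quote escaping, embedded newlines in
quoted fields, and a configurable delimiter:

#include <stdio.h>

/* Read one CSV field from 'in' into buf (capacity nBuf), returning the
** character that terminated the field: the delimiter, '\n', or EOF. */
static int read_csv_field( FILE *in, int cDelim, char *buf, size_t nBuf )
{
  size_t n = 0;
  int c = fgetc( in );
  if ( c == '"' )
  {  /* quoted field: delimiters and newlines inside are literal */
    for (;;)
    {
      c = fgetc( in );
      if ( c == EOF ) break;
      if ( c == '"' )
      {
        int c2 = fgetc( in );
        if ( c2 != '"' ) { c = c2; break; }  /* closing quote; c2 ends field */
        /* doubled quote stands for one literal quote character */
      }
      if ( n + 1 < nBuf ) buf[n++] = (char)c;
    }
  }
  else
  {
    while ( c != EOF && c != cDelim && c != '\n' )
    {
      if ( n + 1 < nBuf ) buf[n++] = (char)c;
      c = fgetc( in );
    }
  }
  buf[n] = 0;
  return c;
}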

That being said, just now taking a peek at the current implementation in the
sqlite codebase: 

ext\misc\csv.c:197

is a comment:

/* Read a single field of CSV text.  Compatible with rfc4180 and extended
** with the option of having a separator other than ",".

So maybe that implementation is already enhanced to accommodate those cases,
but the capability is simply not exposed through parameters to
CREATE VIRTUAL TABLE ... USING csv(...)

So maybe one would just need to modify the csvtabCreate to process some
additional parameters and propagate those settings to the implementation.

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] copmile SQLite with extension?

2018-04-26 Thread dave
> Behalf Of Thomas Kurz
> Sent: Thursday, April 26, 2018 7:06 AM
> Subject: [sqlite] copmile SQLite with extension?
> 
> I have a project which heavily uses the libsqlitefunctions 
> extension. It is a bit annoying to "select 
> load_extension('...')" for every connection.
> 
> Is it possible to compile an sqlite.dll which already has the 
> extension embedded so that the functions provided can be used 
> without further initialization required?

Here are a couple of things you might consider:
*  In most of my projects, I statically link the various extensions and
sqlite, rather than load them dynamically.  To do this, you may also need to
make some subtle changes in your extension code.  In particular there is a
difference between SQLITE_EXTENSION_INIT1 and SQLITE_EXTENSION_INIT3 that
determines if the extension code makes calls to bound sqlite code, or
through a 'vtable' provided by the host application.  There is a complier
constant SQLITE_CORE that is meant to be defined if you are linking sqlite
statically, and it can be used to swtich between that behaviour.  E.g., my
extension code starts with this:

#ifndef SQLITE_CORE
#include <sqlite3ext.h>
//declares extern the vtable of all the sqlite3 functions (for loadable modules only)
SQLITE_EXTENSION_INIT3
#else
#include <sqlite3.h>
#endif

*  you still mention a DLL, though.  Maybe you are putting all of SQLite in
a dll, but want that dll to contain your extension code?  I believe that the
static linking stuff I mention above is still relevant in that case, since
the extension would be statically linked to the sqlite core.

*  even if you link all that stuff statically, you still have to register
your extensions.  Moreover, extensions are associated with databases, and so
(I believe) you are meant to do it again if you ATTACH another db.  In the
'extension in a dll' form, you are meant to export a method under a
well-known name that is used by the load_extension function (you can change
the name, but you'll have to specify it explicitly).  If you statically
link, you will still need to call this registration method explicitly.
There is help in this method:

sqlite3_auto_extension()

Which will invoke your registration function for you, for every attached
database.  This makes it more-or-less transparent when you use the rest of
the sqlite library -- your extensions are just there.
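
A minimal sketch of that pattern (extfuncs_init is a hypothetical name for
whatever entry point your statically linked extension exports;
sqlite3_auto_extension() is the real API):

#include <sqlite3.h>

/* entry point exported by the statically-linked extension (name made up) */
int extfuncs_init( sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi );

int my_open_db( const char *zFilename, sqlite3 **ppDb )
{
  /* register once; SQLite then runs the entry point for every new connection,
  ** so the extension's functions are simply always available */
  sqlite3_auto_extension( (void(*)(void))extfuncs_init );
  return sqlite3_open( zFilename, ppDb );
}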

Again, I'm unclear on your question about compiling sqlite.dll, but I am
interpreting that to mean:  "I still want sqlite to be a separate dll for
some reason, rather than statically linking it, but I want that dll to have
sqlite, and my extension, and auto register them so I don't have to."  If
so, I believe all my above statements are accurate:
*  the extension should be treated as statically linked to sqlite with the
relevant changes to some of the SQLITE_EXTENSION_xxx macros.
*  the registration still needs to be performed; you can do that wherever
you're doing other library initialization
*  you can make that registration more transparent by using the
sqlite3_auto_extension() mechanism

HTH

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Are you getting spam when you post to sqlite-users ?

2018-04-18 Thread dave
Wouldn't it be as simple as subscribing to the mailing list and harvesting
the emails directly from the inbound content?  Because as it is, everyone's
real email already comes to me in the list messages I receive (as
'sqlite-users on behalf of x...@yyy.com')

The spam message I just received used the subject line that I had posted
onto the list a day or so ago (so I know how they got it), but was addressed
to my email directly (so I don't think there's any banning that sqlite can
do).  Perhaps it's a pity I'm otherwise too busy, because Samantha invites
me to 'come and treat us in real, plz'.  Lol!  Oh well, this account is so
inundated with spam already, I suppose a little more won't hurt

> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of José María Mateos
> Sent: Wednesday, April 18, 2018 8:55 AM
> To: sqlite-users@mailinglists.sqlite.org
> Subject: Re: [sqlite] Are you getting spam when you post to 
> sqlite-users ?
> 
> 
> On Tue, Apr 17, 2018, at 17:39, Simon Slavin wrote:
> > Dear list-posters,
> > 
> > Are you getting a new dating-spam each time you post to 
> this list ?  If 
> > you are, please post a brief follow-up to this message.  
> Please do /not/ 
> > include any details about the spam, its headers, or the person it's 
> > apparently from.  Just a "me too" until I say I have seen enough 
> > responses.
> 
> For what I understand, this exact behavior is happening right 
> now on the r-help mailing list too. People were wondering if 
> addresses were being scrapped from nabble.com or something similar.
> 
> Cheers,
> 
> -- 
> José María (Chema) Mateos
> https://rinzewind.org/blog-es || https://rinzewind.org/blog-en
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] kooky thought: a vm-only build (for embedded). feasible?

2018-04-18 Thread dave
Thanks for the feedback.  The schema would indeed be fixed.  It sounds like
I have an 'interesting' side project in my future!

> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Hick Gunter
> Sent: Monday, April 16, 2018 1:50 AM
> To: 'SQLite mailing list'
> Subject: Re: [sqlite] [EXTERNAL] kooky thought: a vm-only 
> build (for embedded). feasible?
> 
> 
> There have been some inquries on the list into executing 
> pre-generated bytecode. If you have a fixed schema you may be 
> able to create a valid sqlite3_stmt pointer from bytecode 
> stored somewhere.
> 
> -Ursprüngliche Nachricht-
> Von: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
> Auftrag von dave
> Gesendet: Sonntag, 15. April 2018 20:55
> An: sqlite-users@mailinglists.sqlite.org
> Betreff: [EXTERNAL] [sqlite] kooky thought: a vm-only build 
> (for embedded). feasible?
> 
> I had a stray thought, and wanted to ask if it's been thunk 
> before,and if so what is the thinking?  Or just for commentary.
> 
> I have been building a system, part of which uses sqlite and 
> virtual tables.
> This is working great in a desktop/mobile environment.  
> However, eventually one day, I will want to migrate aspects 
> of the product to deeply embedded systems (e.g. something 
> like an STM32F4 class chip), and am thinking about size -- 
> both code and RAM.  I know about the various compile switches 
> that can turn off various features, but I wonder if I can 
> really strip it down further by eliminating parsing, query 
> planning, etc, altogether, and only support the virtual 
> machine.  I do need virtual tables, though.  In my particular 
> use-case, I only need read access -- no create or update.  
> The thinking being that I can build queries offline and 
> compile them into the p-code (or whatever it's called), and 
> either burn those well know queries into flash, or perhaps 
> send them down the wire as needed.  Then of course (maybe 
> even more critically), can I control ram usage in a 
> deterministic way such that it will still work on 
> memory-constrained devices (e.g. having a total of 128 KiB 
> max for the whole system).
> 
> Anway, has this been discussed before?  Or is it a fool's errand?
> 
> Cheers!
> 
> -dave
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> ___
>  Gunter Hick | Software Engineer | Scientific Games 
> International GmbH | Klitschgasse 2-4, A-1130 Vienna | FN 
> 157284 a, HG Wien, DVR: 0430013 | (O) +43 1 80100 - 0
> 
> May be privileged. May be confidential. Please delete if not 
> the addressee.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] kooky thought: a vm-only build (for embedded).feasible?

2018-04-18 Thread dave
...
> > size -- both code and RAM.  I know about the various 
> compile switches that
> > can turn off various features, but I wonder if I can really 
> strip it down
> > further by eliminating parsing, query planning, etc, 
> altogether, and only
> > support the virtual machine.  I do need virtual tables, 
> though.  In my
> > particular use-case, I only need read access -- no create 
> or update.  The
...
> > such that it will still work on memory-constrained devices 
> (e.g. having a
> > total of 128 KiB max for the whole system).
> >
> > Anway, has this been discussed before?  Or is it a fool's errand?
> 
> We did this once, back in 2005, for a startup company in Boston.  It
> was called "SSE".  Unfortunately, we didn't continue to support it.  I
> went looking for the source code and could not find it.
> 
> The database was to run on a smart-card with limited RAM.  All of the
> prepared statements were generated on a workstation, then serialized
> and stored in a special table in the database file.  The application
> would then use a special API that would deserialize a prepared
> statement (identified by a well-known integer) then bind parameters
> and run it.
> 
> So much has changed in the SQLite bytecode engine since then that
> there is basically zero chance that SSE would still run today, even if
> I could find the source code.
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org

Ah, groovy.  Well, at least that is validation of the concept.  So it sounds
like I have a side project for my copious free time!

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] kooky thought: a vm-only build (for embedded).feasible?

2018-04-15 Thread dave
That was more-or-less my thinking.

Mostly, my inquiry is to solicit any advice or wisdom-of-the-ages, or even
advice against it.  After having sent that, I suspect this will more
likely wind up being something where I'm on my own for the hands-on work, but
I still welcome any advice on how to approach the surgery.

-dave

> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Simon Slavin
> Sent: Sunday, April 15, 2018 2:06 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] kooky thought: a vm-only build (for 
> embedded).feasible?
> 
> 
> 
> 
> On 15 Apr 2018, at 7:54pm, dave <d...@ziggurat29.com> wrote:
> 
> > I wonder if I can really strip it down
> > further by eliminating parsing, query planning, etc, 
> altogether, and only
> > support the virtual machine.
> 
> I wonder what you would find if you looked through the data 
> structure of sqlite3_stmt.  Presumably the compilation 
> process would convert the SQL text into bytecode and the 
> bytecode would be stored in the statement.
> 
> Once you have seen the bytecode from your desired SQL, it 
> might be possible to write a C function which accepts a 
> pointer and a length and creates a statement with a copy of 
> that chunk of memory as the bytecode and everything else set 
> up the way it is in a newly-created statement.
> 
> Or something like that.
> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] kooky thought: a vm-only build (for embedded). feasible?

2018-04-15 Thread dave
I had a stray thought, and wanted to ask if it's been thunk before, and if so
what is the thinking?  Or just for commentary.
 
I have been building a system, part of which uses sqlite and virtual tables.
This is working great in a desktop/mobile environment.  However, eventually
one day, I will want to migrate aspects of the product to deeply embedded
systems (e.g. something like an STM32F4 class chip), and am thinking about
size -- both code and RAM.  I know about the various compile switches that
can turn off various features, but I wonder if I can really strip it down
further by eliminating parsing, query planning, etc, altogether, and only
support the virtual machine.  I do need virtual tables, though.  In my
particular use-case, I only need read access -- no create or update.  The
thinking being that I can build queries offline and compile them into the
p-code (or whatever it's called), and either burn those well-known queries
into flash, or perhaps send them down the wire as needed.  Then of course
(maybe even more critically), can I control ram usage in a deterministic way
such that it will still work on memory-constrained devices (e.g. having a
total of 128 KiB max for the whole system).
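(Tangentially, for the deterministic-RAM part, I know SQLite can already be
handed a fixed memory arena at startup when built with -DSQLITE_ENABLE_MEMSYS5;
a sketch, with purely illustrative sizes:)

#include "sqlite3.h"

static char aSqliteHeap[96*1024];  /* every SQLite allocation comes from here */

int init_sqlite_for_small_target( void )
{
  /* must be configured before sqlite3_initialize() */
  int rc = sqlite3_config( SQLITE_CONFIG_HEAP, aSqliteHeap, sizeof(aSqliteHeap), 32 );
  if ( rc != SQLITE_OK ) return rc;
  return sqlite3_initialize();
}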
 
Anyway, has this been discussed before?  Or is it a fool's errand?
 
Cheers!
 
-dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] bind blob lifetime

2018-01-10 Thread Dave Milter
On Tue, Jan 9, 2018 at 7:28 PM, Hick Gunter  wrote:
> A bound blob or string is destroyed "after SQLite has finished with it". This 
> should be the case when sqlite3_clear_bindings() is called. Are you sure it 
> is not deleted then? Code reading suggests it should be.
>
> Other times are when the parameter is re-bound, or the statement finalized.
>

Sorry for the misunderstanding. I did not do any experiments; I only read
the documentation a couple of times about "bind_blob",
and it was not clear when the destructor will be called by sqlite.
Thank you!
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] bind blob lifetime

2018-01-09 Thread Dave Milter
I have cycle like this:

```c
const char sql[] = "INSERT INTO test (pk, geom) VALUES (?, ?)";
sqlite3_stmt *stmt;
sqlite3_prepare_v2 (handle, sql, strlen (sql), &stmt, NULL);

for (...) {
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
int blob_size = ..;
unsigned char *blob = malloc(blob_size);
sqlite3_bind_int64 (stmt, 1, pk);
sqlite3_bind_blob (stmt, 2, blob, blob_size, free);
sqlite3_step (stmt);
}
//sqlite3_finalize
```

I wonder whether it is necessary to allocate memory on every cycle.
I know that I can pass SQLITE_TRANSIENT, but in that case the code would
be effectively the same; the allocation on every cycle would just happen inside
sqlite.
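
The SQLITE_TRANSIENT variant I mean would look like this (SQLite takes its own
private copy, so the buffer can be freed immediately):

```c
sqlite3_bind_blob (stmt, 2, blob, blob_size, SQLITE_TRANSIENT);
free (blob);  /* safe: sqlite has already made its own copy */
```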

From the documentation it is not clear when sqlite calls the
destructor of the blob (in our case "free"); does it happen after:
sqlite3_reset
sqlite3_clear_bindings
sqlite3_bind_blob
step

sqlite3_reset
sqlite3_clear_bindings
sqlite3_bind_blob
<< is the previous memory freed here???
step

?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] API, sqlite3_value_bytes and sqlite3_value_blob...

2017-12-13 Thread dave
> -Original Message-
> Behalf Of J Decker
...
> <peter.nichvolo...@gmail.com> wrote:
> 
> > Dave. The documentation contains many such catch-all 
> statements which do
...
> > The current decision tree of the particular catch-all 
> documentation comment
...
> > int sqlite3ValueBytes(sqlite3_value *pVal, u8 enc){
...
> it's valueBytes that has the biggest impact
> 
...
> sqlite3_value_bytes
>- invalidates sqlite3_value_text16*() result  (always)
> 
> sqlite3_value_bytes16
>   - invalidates sqlite3_value_text() result (always)
>   - invalidates sqlite3_value_text16*() result if the format 
> does not match
> defined SQLITE_UTF16NATIVE
> 
...

Now I can see it!

Thanks, guys - Clemens, Peter, d3ck0r - for your perspicacious insights!  I
am slightly aglow with the enlightenment; I think I'll unplug the Xmas tree
for the rest of the evening, and bask instead in the illumination you have
so kindly imbued upon me.  I think there will be enough light for me to do
some reading of vdbemem.c!

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] API, sqlite3_value_bytes and sqlite3_value_blob...

2017-12-13 Thread dave
I have a question regarding the API documentation at
http://sqlite.org/c3ref/value_blob.html, which states:
"... the pointer returned from sqlite3_value_blob(), .. can be invalidated
by a subsequent call to sqlite3_value_bytes(), ..."
Is that statement still true?  I ask because I notice that the source of
many of the extensions in 'sqlite/ext' seem to violate that advice.
 
I first noticed this when I was recently working on fileio.c (e.g. line 73
vs 77), but grepping through the source I find many other cases where the
pointer is retrieved via *_blob() or *_text() BEFORE invoking
sqlite3_value_bytes().  E.g these source and line numbers:
fts2_tokenizer.c:71, 72
fts3_expr.c:1248, 1249
fts3_tokenizer.c:78, 79
fts3_tokenize_vtab.c:347, 348
fts3_write.c:5290, 5291
fts5_index.c:6270, 6271
fts5_storage.c:735, 736
fts5_tcl.c:547
fts5_test_tok.c:375, 376
fts5_vocab.c:607, 608; 612, 613; 616, 617
(I stopped grepping at this point; this list is not comprehensive).
 
Anyway, just wondered if the api documentation's advice is maybe out-of-date
with current reality.  Thoughts/comments?
 
Cheers!
-dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] readfile/writefile extensioln, and UTF8 on Windows....

2017-12-11 Thread dave
Certainly; the source file is short.  I did successfully post it to the
'dev' list, however, which I guess is set to accept attachments.

You might want to make edits if you do choose to incorporate it into the
canonical codebase.  E.g.:
*  I think there are some existing sqlite utility functions for doing utf8
to utf16 that you might prefer instead of the MultiByteToWideChar Windows
native function
*  maybe you prefer wfopen to CreateFileW (I think that would work also)
*  maybe there is a better #define for windows conditional compilation in
sqlite codebase that you might prefer to the ones I used
*  etc.

Cheers,

-dave

//fileio.patch start  ===


18a19,21
> #if defined(WIN32) || defined(WIN64) || defined(WINDOWS)
> #include <windows.h>
> #endif
29a33,73
> #if defined(WIN32) || defined(WIN64) || defined(WINDOWS)
>   const char* zName;
>   int nCvt;
>   wchar_t* awchName;
>   HANDLE hfile;
>   BY_HANDLE_FILE_INFORMATION finfo;
>   void* pBuf;
>   DWORD dwRead;
> 
>   (void)(argc);  /* Unused parameter */
>   /*get file name (utf8)*/
>   zName = (const char*)sqlite3_value_text( argv[0] );
>   if ( zName == 0 ) return;
>   /*figure out how many (utf-16)*/
>   nCvt = MultiByteToWideChar( CP_UTF8, 0, zName, -1, NULL, 0 );
>   if ( 0 == nCvt ) return;
>   awchName = sqlite3_malloc( nCvt * sizeof( wchar_t ) );
>   nCvt = MultiByteToWideChar( CP_UTF8, 0, zName, -1, awchName, nCvt );
>   hfile = CreateFileW( awchName, GENERIC_READ, FILE_SHARE_READ|FILE_SHARE_DELETE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL );
>   sqlite3_free( awchName );
>   if ( INVALID_HANDLE_VALUE == hfile ) return;
>   if ( !GetFileInformationByHandle( hfile, &finfo ) )
>   {
> CloseHandle( hfile );
> return;
>   }
>   pBuf = sqlite3_malloc( (int)finfo.nFileSizeLow );
>   if ( NULL == pBuf )
>   {
> CloseHandle( hfile );
> return;
>   }
>   if ( !ReadFile( hfile, pBuf, finfo.nFileSizeLow, &dwRead, NULL ) )
>   {
> sqlite3_free( pBuf );
> CloseHandle( hfile );
> return;
>   }
>   sqlite3_result_blob( context, pBuf, dwRead, sqlite3_free );
>   CloseHandle( hfile );
> #else
49a94
> #endif
62a108,149
> #if defined(WIN32) || defined(WIN64) || defined(WINDOWS)
>   const char* zName;
>   int nCvt;
>   wchar_t* awchName;
>   HANDLE hfile;
>   int nLen;
>   const char* z;
>   sqlite3_int64 rc;
>   DWORD dwWritten;
> 
>   (void)(argc);  /* Unused parameter */
>   //get file name (utf8)
>   zName = (const char*)sqlite3_value_text( argv[0] );
>   if ( zName == 0 ) return;
>   //figure out how many (utf-16)
>   nCvt = MultiByteToWideChar( CP_UTF8, 0, zName, -1, NULL, 0 );
>   if ( 0 == nCvt ) return;
>   awchName = sqlite3_malloc( nCvt * sizeof( wchar_t ) );
>   nCvt = MultiByteToWideChar( CP_UTF8, 0, zName, -1, awchName, nCvt );
>   hfile = CreateFileW( awchName, GENERIC_WRITE, FILE_SHARE_WRITE | FILE_SHARE_DELETE, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL );
>   sqlite3_free( awchName );
>   if ( INVALID_HANDLE_VALUE == hfile ) return;
>   nLen = sqlite3_value_bytes( argv[1] );
>   z = (const char*)sqlite3_value_blob( argv[1] );
>   if ( NULL == z )
>   {
> rc = 0;
>   }
>   else
>   {
>     if ( !WriteFile( hfile, z, (DWORD)nLen, &dwWritten, NULL ) )
> {
>   rc = 0;
> }
> else
> {
>   rc = (sqlite3_int64)dwWritten;
> }
>   }
>   CloseHandle( hfile );
>   sqlite3_result_int64( context, rc );
> #else
80a168
> #endif


//fileio.patch end===

> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Dan Kennedy
> Sent: Monday, December 11, 2017 2:09 PM
> To: sqlite-users@mailinglists.sqlite.org
> Subject: Re: [sqlite] readfile/writefile extensioln, and UTF8 
> on Windows
> 
> 
> On 12/12/2017 12:42 AM, dave wrote:
> > Folks;  I recently had some trouble using the readfile() 
> extension until I
> > noticed it was due to there being filenames with Unicode in 
> them, and that
> > the existing implementation using fopen() doesn't do UTF8 
> on Windows (I
> > think it uses the ambient code page).
> >   
> > I modified the extension code to support UTF8 filenames on 
> Windows, and it
> > works fine (at least in my test cases!).  Thinking that 
> someone might fine
> > it useful, I am attaching the modified code.  Use it if and 
> however you
> > like.  Also note there is an embedded copy of fileio.c in 
> shell.c that also
> > needs the mod, for the sqlite shell.c builds.
> 
> 
> shell.c is a generated file, so the build process will pull 
> in changes 
> to ext/misc/fileio.c.
>

Re: [sqlite] readfile/writefile extensioln, and UTF8 on Windows....

2017-12-11 Thread dave
Welp; I guess attachments get stripped off.  And maybe I should have sent
this to the 'dev' list anyway, so I'll try there.

> -Original Message-
...
> To: 'SQLite mailing list'
> Subject: [sqlite] readfile/writefile extensioln, and UTF8 on 
> Windows
> 
> Folks;  I recently had some trouble using the readfile() 
...
>  
> Attached herewith.
>  
> Cheers!
>  
> -dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] readfile/writefile extensioln, and UTF8 on Windows....

2017-12-11 Thread dave
Folks;  I recently had some trouble using the readfile() extension until I
noticed it was due to there being filenames with Unicode in them, and that
the existing implementation using fopen() doesn't do UTF8 on Windows (I
think it uses the ambient code page).
 
I modified the extension code to support UTF8 filenames on Windows, and it
works fine (at least in my test cases!).  Thinking that someone might find
it useful, I am attaching the modified code.  Use it if and however you
like.  Also note there is an embedded copy of fileio.c in shell.c that also
needs the mod, for the sqlite shell.c builds.
 
Attached herewith.
 
Cheers!
 
-dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] very sqlite3 noobie error

2017-10-23 Thread dave
> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of John R. Sowden
> Sent: Sunday, October 22, 2017 9:59 PM
...
> Since I am trying to learn sqlite3 (unlearning foxpro) I find that 
> python is the simplest language, which allows me to focus on 
> sqlite.  I 
> am trying the guide just sent to the list.
> 
...
If you are trying to 'learn sqlite3' relative to FoxPro, wouldn't you maybe
prefer just experimenting with the sqlite command shell?  Then you don't have a
language binding in the way, and can focus on the SQL directly. (I still
usually design/test my queries with the shell, and move them into my product
code after they are doing what I want).

OTOH, if you are trying to learn the programmatic API, and not so much the
SQL dialect, then have at it, but I would suggest choosing whatever host
language you are going to build the product in, because all the various
language bindings (except for C) are separate projects and they differ.

Just a thought; cheers
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] xRowid and read only virtual tables....

2017-10-20 Thread dave
> Behalf Of Dan Kennedy
> Sent: Tuesday, October 17, 2017 11:58 AM
...
> 
> I think the exception is queries with OR terms. With 
> FTS[345], if you do 
> something like:
> 
>CREATE VIRTUAL TABLE t1 USING fts5(x);
>EXPLAIN SELECT x FROM t1 WHERE t1 MATCH 'abc' OR t1 MATCH 'def';
> 
> You can see the Rowid opcodes.
> 
> SQLite runs two separate queries on the virtual table - one 
> with "MATCH 
> 'abc'" and the other with "MATCH 'def'". It uses the rowids for each 
> matched row to avoid returning duplicates. If the xRowid 
> method always 
> returned 0, then only the first set of matches would be returned 
> (because SQLite would deem the second set to be duplicates of the 
> first). Or if xRowid returned arbitrary values your results might 
> include duplicates. etc.
> 
> Same applies to other virtual table types.
...

FYI FWIW, I had a moment to play with this a little.  I was able to
reproduce Dan's case, however I'm not so sure that it is due to the OR (or
at least not only that).  I think it maybe has more to do with the OR of
MATCH's.

I tried with one of my vtables using an 'OR' clause, and I got no rowid
opcodes.  I was using equality, however.  There were two scenarios:
1)  OR clause on a column that is indexed
  this generated two table scans, with different filter values
2)  OR clause on a column that was /not/ indexed
  this generated one table scan, with both conditionals evaluated on the
same row
Those seemed like sane plans.  For fun I also tried 'IN' with the exact same
results.

I'm less familiar with MATCH, but I understand what Dan is saying about
de-duping.  I don't understand why the planner would have chosen to realize
OR as a set union, but I'm sure it has its reasons.  I should study the
query planner implementation one day when I have some time.

Cheers!
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] virtual tables, and the true meaning of pIdxInfo->estimatedCost, pIdxInfo->estimatedRows, and pIdxInfo->idxFlags...

2017-10-20 Thread dave
> Behalf Of Hick Gunter
> Sent: Friday, October 20, 2017 1:55 AM
>
> I can provide some info coming from our experience with SQLite 3.7.14:
> 
> Since most SQl processing is IO bound, the "estimated cost" 
> should be the number of disk IO operations required to 
> retrieve the rows. The later addition of "estimated rows" 
> reflects to the fact that some virtual table implementations 
> might use non-disk storage (e.g. process or shared memory), 
> where the number of IO operations is determined by the 
> resident set and the cost of paging/swapping.
> 
> Lets say you have 10000 records of 200 bytes with 50 bytes of 
> key overhead stored in some kind of ISAM file, and a page size of 4k.
> 
> Performing a full table scan will take an estimated 10000 * 
> 200 / 4096 ~= 489 disk accesses, whereas looking up a single 
> record will take about 3 (50 bytes per key in a 4096 byte 
> page gives an estimated fan out of over 100, resulting in 2 
> pages to read from the index and 1 for the record itself). 
> Performing a partial index scan that returns 100 records will 
> take 2 accesses to locate the first record, 1 more if a second 
> index page is required and anywhere between 5 (if the records 
> are contiguous) and 100 (if each is from a separate page) 
> accesses to retrieve the records themselves.
> 
> Regarding the UNIQUE flag, this is quite different from the 
> number of estimated rows, which may be 0 or 1 due to rounding 
> errors on a non-unique index (e.g. the initials of a set of 
> 100 people has a cardinality of 26*26=676, giving an average 
> number of 0,1479 records per index entry, but there may still 
> be duplicates).
 
Thanks so much, Hick, for the detailed info.

I guess I am still a little unclear about the importance of
SQLITE_INDEX_SCAN_UNIQUE, but I am interpreting your statement to mean
something like 'it is a more assertive statement about the number of rows
returned than the row estimate of an equal value', and that somehow guides
the query planner more strongly in some direction.  It also sounds like what
I was doing (described in my first message, here elided), was fine.
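
To make that concrete, here is a minimal xBestIndex sketch along the lines Hick
describes.  The column number and row count are made-up constants, and (as
noted) only the proportions between the costs matter:

#include <sqlite3.h>

#define EX_COL_KEY   0      /* hypothetical key column     */
#define EX_ROW_COUNT 10000  /* hypothetical number of rows */

static int exampleBestIndex(sqlite3_vtab *pVtab, sqlite3_index_info *pIdxInfo){
  int i;
  (void)pVtab;
  for(i=0; i<pIdxInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *pC = &pIdxInfo->aConstraint[i];
    if( pC->usable && pC->iColumn==EX_COL_KEY
     && pC->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      pIdxInfo->aConstraintUsage[i].argvIndex = 1;   /* value goes to xFilter */
      pIdxInfo->idxNum = 1;                          /* "point lookup" plan   */
      pIdxInfo->estimatedCost = 3.0;                 /* ~ a few page reads    */
      pIdxInfo->estimatedRows = 1;                   /* at most one row       */
      pIdxInfo->idxFlags = SQLITE_INDEX_SCAN_UNIQUE; /* assert uniqueness     */
      return SQLITE_OK;
    }
  }
  /* No usable constraint: full scan, cost ~ number of pages touched. */
  pIdxInfo->idxNum = 0;
  pIdxInfo->estimatedCost = EX_ROW_COUNT * 200.0 / 4096.0;
  pIdxInfo->estimatedRows = EX_ROW_COUNT;
  return SQLITE_OK;
}

(The 200-byte record / 4k page arithmetic for the full-scan cost is just Hick's
example carried over.)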

Thanks, and cheers!
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] virtual tables, and the true meaning of pIdxInfo->estimatedCost, pIdxInfo->estimatedRows, and pIdxInfo->idxFlags...

2017-10-19 Thread dave
Hi folks,
 
I am trying to fully understand the impact and correct use of a few subtle
features related to virtual tables' the xBestIndex mechanism, and their
correct use.  Here are my current beliefs:
 
*  pIdxInfo->estimatedCost
  obviously the cost of the proposed plan; a metric of the 'viscosity' of
the table when traversing through xNext relative to other tables and
especially to filesystem access
*  pIdxInfo->estimatedRows
  obviously the approximate number of rows that a proposed plan will return.
But less obvious to me is how this materially affects the query plan,
especially relative to pIdxInfo->estimatedCost
  and a little bit with respect to:
* pIdxInfo->idxFlags
  when the SQLITE_INDEX_SCAN_UNIQUE is set.  Isn't setting
pIdxInfo->estimatedRows to 0 or 1 enough to communicate this same
information?
 
Anyway, I am dutifully setting both estimatedRows and idxFlags in cases
where I have a 0-or-1-result table (I have several of these), and I am also
setting estimatedRows to LLONG_MAX along with estimatedCost to DBL_MAX in cases
where a plan can never be executed (btw I would respectfully suggest perhaps
using a bit in idxFlags to communicate 'never use this plan, it will never
work').
 
I haven't had any ill effects doing the above, but wonder if that is
'correct'.  Also, it would be interesting just to know what the material
effect of estimatedRows and idxFlags is, so that I can maybe use them more
effectively.
 
Any thoughts or corrections to my thinking?  Thanks in advance; cheers!
-dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] xRowid and read only virtual tables....

2017-10-17 Thread dave
> -Original Message-
> From: sqlite-users 
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Hick Gunter
> 
> In our virtual table implementations, we are using the rowid 
> to return the location of the record in the backing store 
> (e.g. record offset in the file used as a backing store, 
> offset within a shared memory section or maybe even the 
> memory address of the record image) and also implement fast 
> lookup by rowid.
> 
> If you don't require such ability, you may as well return a 
> constant, a global counter value or a counter that is reset 
> in the xFilter function.
> 
> So, YES you always have to implement the xRowid method.
> 
> It will only get called if your SELECT statement explicitly 
> mentions it. No "INTEGER PRIMARY KEY" magic is performed for 
> virtual tables.
> 
Thanks for your input as well; I somehow missed it until just now.
Cheers!
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] xRowid and read only virtual tables....

2017-10-17 Thread dave
> Behalf Of Dan Kennedy
> Sent: Tuesday, October 17, 2017 11:58 AM
> 
> On 10/17/2017 01:22 PM, Hick Gunter wrote:
> > In our virtual table implementations, we are using the 
> rowid to return the location of the record in the backing 
> store (e.g. record offset in the file used as a backing 
> store, offset within a shared memory section or maybe even 
> the memory address of the record image) and also implement 
> fast lookup by rowid.
> >
> > If you don't require such ability, you may as well return a 
> constant, a global counter value or a counter that is reset 
> in the xFilter function.
> >
> > So, YES you always have to implement the xRowid method.
> >
> > It will only get called if your SELECT statement explicitly 
> mentions it. No "INTEGER PRIMARY KEY" magic is performed for 
> virtual tables.
> 
> I think the exception is queries with OR terms. With 
> FTS[345], if you do 
> something like:
> 
>CREATE VIRTUAL TABLE t1 USING fts5(x);
>EXPLAIN SELECT x FROM t1 WHERE t1 MATCH 'abc' OR t1 MATCH 'def';
> 
> You can see the Rowid opcodes.
> 
> SQLite runs two separate queries on the virtual table - one 
> with "MATCH 
> 'abc'" and the other with "MATCH 'def'". It uses the rowids for each 
> matched row to avoid returning duplicates. If the xRowid 
> method always 
> returned 0, then only the first set of matches would be returned 
> (because SQLite would deem the second set to be duplicates of the 
> first). Or if xRowid returned arbitrary values your results might 
> include duplicates. etc.
> 
> Same applies to other virtual table types.
> 
> Dan.

Yikes, thanks for the insight on that OR use-case; I'll have to do some
analysis to see what is my exposure.

The counter trick is an interesting suggestion, but I guess I am still
at-risk because I have to make it deterministic/repeatable at least in the
context of a statement, which can still be a challenge.  E.g. in your OR clause
example, if the query engine does indeed do two table scans (as opposed to
one table scan, and computing all the predicates) then I have some risk that
the two scans return different results (since my data is coming from APIs,
and is dynamic, rather than from persisted collections).

In other projects I've definitely used the counter trick before, caching the
underlying data (to support updates and transactions) but those were known
to be small datasets.  This stuff coming from APIs could be big, so I wanted
to avoid caching it all.  But one does what one must
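
For reference, the counter trick in its simplest form is something like the
sketch below (the type and function names are made up).  As discussed, it is
only safe when the underlying API data is stable for the life of the statement,
since the rowids are just scan positions:

#include <sqlite3.h>

typedef struct api_cursor api_cursor;
struct api_cursor {
  sqlite3_vtab_cursor base;   /* base class - must be first       */
  sqlite3_int64 iRowid;       /* position within the current scan */
  /* ... handle/state for the underlying API goes here ... */
};

static int apiFilter(sqlite3_vtab_cursor *cur, int idxNum, const char *idxStr,
                     int argc, sqlite3_value **argv){
  api_cursor *p = (api_cursor*)cur;
  (void)idxNum; (void)idxStr; (void)argc; (void)argv;
  p->iRowid = 0;              /* restart numbering for this scan  */
  /* ... (re)issue the underlying API query here ... */
  return SQLITE_OK;
}

static int apiNext(sqlite3_vtab_cursor *cur){
  api_cursor *p = (api_cursor*)cur;
  p->iRowid++;                /* one rowid per row, in scan order */
  /* ... fetch the next record from the API here ... */
  return SQLITE_OK;
}

static int apiRowid(sqlite3_vtab_cursor *cur, sqlite3_int64 *pRowid){
  *pRowid = ((api_cursor*)cur)->iRowid;
  return SQLITE_OK;
}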

Thanks for all the feedback!

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] xRowid and read only virtual tables....

2017-10-17 Thread dave
> On 10/16/17, dave <d...@ziggurat29.com> wrote:
> > Hi, I am building a system which involves a number of virtual table
> > implementations.  They are all read-only, but will be 
> involved in a bunch of
> > joins amongst themselves.  My question is this:
> >
> > the documentation
> >   http://sqlite.org/vtab.html#tabfunc2 at 2.12 xRowid
> > seems (to my reading) to be always required to be 
> implemented.  But does it
> > really?  Is it ever used for read-only tables?  I have never seen it
> > invoked, and I have been blithely ignoring implementing it, 
> but I wonder if
> > there is a case where it would be invoked for a read-only 
> query and so I am
> > tempting fate.
> 
> I don't think xRowid is ever called if you create a WITHOUT ROWID
> virtual table (https://sqlite.org//vtab.html#worid).  But, just to be
> safe, I think I would include a stub function that always returned 0.
> -- 
> D. Richard Hipp

Thanks. OK, I am interpreting that to mean:
*  you could use WITHOUT ROWID, which will surely obviate the need for a
valid xRowid implementation. However that does incur the need for defining
PRIMARY KEY, etc.  (I have verified this. It is problematic with at least a
few of my vtables)
*  for a read-only vtable, you do not need a valid implementation of xRowid,
you can just stub it.  However it is required to be present (i.e. the xRowid
member must not be NULL).

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] xRowid and read only virtual tables....

2017-10-16 Thread dave
Hi, I am building a system which involves a number of virtual table
implementations.  They are all read-only, but will be involved in a bunch of
joins amongst themselves.  My question is this:
 
the documentation
  http://sqlite.org/vtab.html#tabfunc2 at 2.12 xRowid
seems (to my reading) to be always required to be implemented.  But does it
really?  Is it ever used for read-only tables?  I have never seen it
invoked, and I have been blithely ignoring implementing it, but I wonder if
there is a case where it would be invoked for a read-only query and so I am
tempting fate.
 
I ask in particular because implementing it will be quite awkward for the
underlying implementation in my case, and I'd very much prefer to skip it.
Even a 'without rowid' table would imply specifying some primary key, which
in a few cases would also be awkward.
 
Thanks in advance,
 
-dave
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] eponymous vtables, xBestIndex, and required parameters...

2017-10-05 Thread dave
> ...
> > 1)  is there an orthodox method of indicating that a query 
> plan request from
> > xBestIndex is a no-go,
> 
> Give that plan a huge estimatedCost.
> 
> As a backup, in the exceedingly unlikely event that SQLite chooses
> your no-go plan in spite of the huge estimatedCost, also provide a
> unique idxNum and if xFilter sees that idxNum, have xFilter throw an
> error with error message text that is something like "query planner
> could not find an acceptable solution".
> ...
> And I guess as a bonus 4th question:  What is the established orthodoxy in
> picking estimatedCost anyway?
> ...
> It is not overly sensitive to the scale of your cost estimates.   For
> ...
> You don't know how to estimate that?  Then guess.  As long as the
> relative costs for other invocations of xBestIndex on the same virtual
> table are in reasonable proportion, everything should work fine.

Thanks!  I like the idxNum tweak for the error message; I'll add that stuff
in.
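
For anyone finding this later, the xFilter side of that backup plan can be as
small as the following sketch (the sentinel value is a made-up convention
between xBestIndex and xFilter):

#include <sqlite3.h>

#define IDXNUM_NO_GO 0   /* hypothetical sentinel assigned by xBestIndex */

static int demoFilter(sqlite3_vtab_cursor *cur, int idxNum, const char *idxStr,
                      int argc, sqlite3_value **argv){
  (void)idxStr; (void)argc; (void)argv;
  if( idxNum==IDXNUM_NO_GO ){
    /* xBestIndex gave this plan a huge estimatedCost; if it gets picked
    ** anyway, fail with a readable message rather than return bad data. */
    sqlite3_free(cur->pVtab->zErrMsg);
    cur->pVtab->zErrMsg = sqlite3_mprintf(
        "query planner could not find an acceptable solution");
    return SQLITE_ERROR;
  }
  /* ... normal handling for the workable plans ... */
  return SQLITE_OK;
}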

And the info about relative costs _on_the_same_virtual_table_ is very
enlightening because I suppose the converse is true, that the estimated cost
relative to OTHER virtual/physical tables does NOT matter.

-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] eponymous vtables, xBestIndex, and required parameters...

2017-10-04 Thread dave
am I using the 'pIdxInfo->aConstraintUsage[nIdx].omit' variable
appropriately?  My interpretation is that means 'the vtable can fully handle
the constraint, sqlite does not need to do a double-check on it afterwards'.
I.e., in the example above, the vtable can totally handle it, but if there
was something like a regex on fname, the vtable can help narrow the results
down, but sqlite needs to do a final fer-real regex test to reject some that
select through anyway.  In that case omit would be 0.
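
For illustration, those two cases might look like this in xBestIndex (the
column numbers are hypothetical): omit=1 for the equality the vtable handles
completely, omit=0 so SQLite still re-evaluates the regex-style constraint on
the rows that come back:

#include <sqlite3.h>

#define COL_UID    0   /* hypothetical: vtab can match this exactly       */
#define COL_FNAME  1   /* hypothetical: vtab can only narrow this (regex) */

static int omitDemoBestIndex(sqlite3_vtab *pVtab, sqlite3_index_info *pIdxInfo){
  int i, nArg = 0;
  (void)pVtab;
  for(i=0; i<pIdxInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *pC = &pIdxInfo->aConstraint[i];
    if( !pC->usable ) continue;
    if( pC->iColumn==COL_UID && pC->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      pIdxInfo->aConstraintUsage[i].argvIndex = ++nArg;
      pIdxInfo->aConstraintUsage[i].omit = 1;  /* handled fully: SQLite need
                                               ** not re-check these rows   */
    }else if( pC->iColumn==COL_FNAME ){
      pIdxInfo->aConstraintUsage[i].argvIndex = ++nArg;
      pIdxInfo->aConstraintUsage[i].omit = 0;  /* only narrows the scan:
                                               ** SQLite re-evaluates the
                                               ** original expression       */
    }
  }
  return SQLITE_OK;
}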
 
Lastly, the third question regards my work-around.  I feel dirty doing this,
but if the query plan fails requirements, then instead of communicating an
error value, what I'm doing is setting the cost to infinity, e.g.
 
3)  is something like this the only hope:
 
 if ( 0 == pIdxInfo->idxNum )
 {
pIdxInfo->estimatedCost = DBL_MAX;
 }
 else
 {
pIdxInfo->estimatedCost = 10;
 }
 
Doing this did keep sqlite picking the 'approved' plan, but it just doesn't
feel like a deterministic solution to fiddle with query costs to avoid
catastrophe.
 
And I guess as a bonus 4th question:  What is the established orthodoxy in
picking estimatedCost anyway?  It seems from the source comments that it is
intended to mean 'approximate number of disk accesses', which I understand
qualitatively, but what is a 'disk access' quantitatively?  And how would I
compare that to, say, an API call that I am using as source data for my
vtable, which is purely in-memory, but could conceivably be quite expensive
(e.g. network stuff), so how would I tweak that?
 
OK!  Thanks so much if you read this far!  And thanks even more if anyone
can advise on how to deal with xBestIndex and required constraints on an
eponymous vtable acting as a table-valued function!
 
Cheers;
-dave


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] GROUP_CONCAT separator and DISTINCT

2017-08-25 Thread Dave Blake
It seems that it is not possible to specify the concatenation separator
when using GROUP_CONCAT with DISTINCT.

For example while this works

SELECT pub_id, GROUP_CONCAT(cate_id, " - ")
FROM book_mast
GROUP BY pub_id;

and this works

SELECT pub_id, GROUP_CONCAT(DISTINCT cate_id)
FROM book_mast
GROUP BY pub_id;

this does not

SELECT pub_id, GROUP_CONCAT(DISTINCT cate_id, " - ")
FROM book_mast
GROUP BY pub_id;

Is that an error, or by design?
Is there another way I can specify the separator when using DISTINCT?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Testing sqlite db to see if exists & ready

2017-04-06 Thread dave boland
I assume this will work in a similar fashion for Python?
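
The check is language-agnostic: open the file, read 16 bytes, compare.  A
minimal sketch in C of the steps Simon lists below (the function name is made
up); the Python version is the same open/read/compare calls:

#include <stdio.h>
#include <string.h>

/* Returns 1 if the file starts with the 16-byte SQLite header,
** 0 if it exists but does not (empty or foreign file), -1 if it
** cannot be opened at all. */
static int has_sqlite_header(const char *zPath){
  char hdr[16];
  size_t n;
  FILE *f = fopen(zPath, "rb");
  if( f==0 ) return -1;
  n = fread(hdr, 1, sizeof(hdr), f);
  fclose(f);
  if( n!=sizeof(hdr) ) return 0;   /* shorter than 16 bytes */
  /* "SQLite format 3" is 15 characters plus the trailing NUL = 16 bytes */
  return memcmp(hdr, "SQLite format 3", 16)==0;
}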

On Thu, Apr 6, 2017, at 03:24 PM, Simon Slavin wrote:
> 
> On 6 Apr 2017, at 7:38pm, dave boland <dbola...@fastmail.fm> wrote:
> 
> > "unconfigured means no tables, no fields, no nothing.  With SQLite, it
> > is possible to have an empty file, a database with a table but no
> > fields, etc.  The reason this concerns me is that I want to know what I
> > have before connecting to a file and creating a new database when I did
> > not intend to do that.  So, what (and why) are the steps to test the
> > database file to see what state it is in?
> 
> Okay.  If that’s the definition of 'unconfigured' you want, do what I
> recommended in a previous post:
> 
> First, use the PHP function "file_exists()" to check that the file
> exists.
> 
> If the file does exist use PHP to check it’s an actual database:
> 
> fopen(path, 'rb')
> fread(, 16)
> fclose().
> 
> Then check those 16 bytes.  They should be 'SQLite format 3\0'.  The last
> character is a 0x00 byte for a string terminator.  If there are less then
> 16 bytes, or if they don’t match that string then it’s not a "configured"
> (by your definition) SQLite database.
> 
> Simon.
> ___
> sqlite-users">sqlite-users mailing list
> sqlite-users">sqlite-users@mailinglists">sqlite-users">sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users;>sqlite-users
-- 
  dave boland
  dbola...@fastmail.fm

-- 
http://www.fastmail.com - Accessible with your email software
  or over the web

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Testing sqlite db to see if exists & ready

2017-04-06 Thread dave boland
"unconfigured means no tables, no fields, no nothing.  With SQLite, it
is possible to have an empty file, a database with a table but no
fields, etc.  The reason this concerns me is that I want to know what I
have before connecting to a file and creating a new database when I did
not intend to do that.  So, what (and why) are the steps to test the
database file to see what state it is in?

Thanks, 
Dave
> 
> I’m not sure what you mean by "unconfigured" so I’ll let other people
> write about that, or you can post to clarify.
> 
> Simon.
> ___
> sqlite-users">sqlite-users mailing list
> sqlite-users">sqlite-users@mailinglists">sqlite-users">sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users;>sqlite-users
-- 
  dave boland
  dbola...@fastmail.fm

-- 
http://www.fastmail.com - A no graphics, no pop-ups email service

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Testing sqlite db to see if exists & ready

2017-04-06 Thread dave boland
Being a little paranoid, I like to ensure that the db file exists, which
is easy, and what state it is in (unconfigured, so needs to be made
ready; or ready to accept data (or be read)).  How do I do that?  Using
Python, but would like a generalized approach.  Feel free to point me to
documentation that I may have missed.

Thanks,
Dave
-- 
  dave boland
  dbola...@fastmail.fm

-- 
http://www.fastmail.com - A fast, anti-spam email service.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deleting records from a large table

2017-03-03 Thread Dave Blake
Thanks all for your input, it has really helped.

In my real world application tmp_keep is a temporary table populated by
examining a number of other tables etc., and I suddenly realised that it
could even contain duplicate ids.  Sloppy thinking on my part.

I get the best results by creating another table:
CREATE TEMPORARY TABLE tmp_keep_unique (id1 integer primary key);
INSERT INTO tmp_keep_unique SELECT DISTINCT id1 from tmp_keep;

It takes far longer to create an index on tmp_keep than it saves time on
the above query with one.

Then, with a primary key on both table1 and tmp_keep_unique,
DELETE FROM table1
WHERE id1 NOT IN (SELECT id1 FROM tmp_keep_unique);

is acceptably efficient.

On SQLite NOT EXISTS is quicker than NOT IN, but I also need to use the same
SQL on a MySQL implementation, and it behaves the opposite way.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deleting records from a large table

2017-03-03 Thread Dave Blake
Could be we keep almost all the records, so ~50, but it varies greatly, so
sometimes we will keep just 10. I can adjust the approach depending on size if
necessary.

Yes the id1 are integer primary keys.

Table1 has a number of indexes and views, so the create new table approach
is less attractive
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Deleting records from a large table

2017-03-03 Thread Dave Blake
Say table1 has more than 50 records, and there is a second table
tmp_keep with the ids of the records in table1 to be kept, the rest need to
be deleted. The number of records in tmp_keep can vary from 0 to all the
records in table1, with any values in between.

What is the best strategy for doing the deletion?

For deleting a large number of records (tmp_keep is small), this works
fine:
DELETE FROM table1 WHERE id1 NOT IN (SELECT id1 FROM tmp_keep);

But this becomes inefficient when tmp_keep is large.

Any suggestions?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] under the christmas tree

2016-10-30 Thread Dave Wellman
"+lots" for OVER and PARTITION BY!

Very useful in my line of work as well.

Cheers,
Dave


Ward Analytics Ltd - information in motion
Tel: +44 (0) 118 9740191
Fax: +44 (0) 118 9740192
www: http://www.ward-analytics.com

Registered office address: The Oriel, Sydenham Road, Guildford, Surrey, United 
Kingdom, GU1 3SR
Registered company number: 3917021 Registered in England and Wales.

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Big Stone
Sent: 30 October 2016 16:01
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] under the christmas tree

Hello Dear Sqlite developers,

I'm amazed by the recent "json" and "raw values" features.

Is it technically possible that one day, sqlite will integrate a few of the 
following analytical functions ?
OVER
PARTITION BY
NTILE
PERCENTILE_CONT
PERCENTILE_DISC

These are useful in my real life, and I don't guess if there is a technical 
"impossibility" for them to ever appear in sqlite.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] IN versus EXISTS Query Speed

2016-10-16 Thread Dave Blake
Some simple testing is showing that using an EXISTS statement is generally
quicker than using an IN

e.g.

SELECT * FROM tablea
WHERE EXISTS (SELECT 1 FROM tableb WHERE tablea.id  = tableb.id AND ...)

is quicker than
SELECT * FROM tablea
WHERE tablea.id IN  (SELECT tableb.id FROM tableb WHERE ...)

Is there any reason for this to be always true in SQLite, or is it query
dependent?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Using Bitwise Logic In Query

2016-09-08 Thread Dave Blake
Looking for the best way to query a table with an integer column by value
of the lower 16 bits of the data in that column. Does SQLite support
bitwise logic?
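
SQLite does support the usual bitwise operators (&, |, <<, >>, ~) on 64-bit
integers, so the low 16 bits can be masked right in the WHERE clause.  A
minimal sketch via the C API; the table and column names ("t", "val") are
made up:

#include <stdio.h>
#include <sqlite3.h>

/* Print every row whose low 16 bits of "val" equal the given value. */
static int find_by_low16(sqlite3 *db, int low16){
  sqlite3_stmt *pStmt;
  int rc = sqlite3_prepare_v2(db,
      "SELECT rowid, val FROM t WHERE (val & 0xFFFF) = ?1", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_int(pStmt, 1, low16 & 0xFFFF);
  while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){
    printf("%lld -> %lld\n",
           (long long)sqlite3_column_int64(pStmt, 0),
           (long long)sqlite3_column_int64(pStmt, 1));
  }
  sqlite3_finalize(pStmt);
  return rc==SQLITE_DONE ? SQLITE_OK : rc;
}

Note that the masked comparison will not use an ordinary index on the column;
for a large table, an index on the expression (val & 0xFFFF) is one option in
reasonably recent SQLite versions.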
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] VARCHAR or TEXT to save space

2016-09-02 Thread Dave Blake
Can I confirm that defining fields as VARCHAR(20) rather than TEXT, if I
know I only want 20 chars or less, will result in a smaller database?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] The Session Extension (future SQLite extension)

2016-05-07 Thread Dave Wellman
My apologies if this has already been raised...

When I first read about this I thought (assumed) that 'changes' were meant
to be just data changes, i.e. changes to rows where they have been inserted,
updated and/or deleted. In my experience a fairly typical journaling
function that a number of dbms's offer. A typical use case for this would
be: run the 'real' processing on the production system, copy the journal
information (what is referred to as the 'changeset'  or 'patchset') to a
backup system and then apply the same changes to that system. This is one
way of keeping a disaster recovery environment up to date.

However, the documentation includes the following: "They work all day, in
parallel, each making their own customizations and tweaks to the design. "

Does the "to the design" imply that this feature would also cater for DDL
changes to tables, indexes etc.?

Cheers,
Dave


Ward Analytics Ltd - information in motion
Tel: +44 (0) 118 9740191
Fax: +44 (0) 118 9740192
www: http://www.ward-analytics.com

Registered office address: The Oriel, Sydenham Road, Guildford, Surrey,
United Kingdom, GU1 3SR
Registered company number: 3917021 Registered in England and Wales.

-Original Message-
From: sqlite-users-boun...@mailinglists.sqlite.org
[mailto:sqlite-users-bounces at mailinglists.sqlite.org] On Behalf Of Zsbán
Ambrus
Sent: 07 May 2016 11:40
To: SQLite mailing list
Subject: Re: [sqlite] The Session Extension (future SQLite extension)

As for the session extension
"https://www.sqlite.org/draft/sessionintro.html;, what I'd like to ask is
when this is more useful than the RBU extension "http://sqlite.org/rbu.html;
?  The two seem to serve a similar purpose.

Ambrus
___
sqlite-users mailing list
sqlite-users at mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] SELECT 0 gives TEXT instead of INTEGER

2016-04-21 Thread Dave Wellman
How about something like:

WITH RECURSIVE
expansion(byte) AS (
SELECT 0
UNION  ALL
SELECT byte + 1 FROM expansion
LIMIT  10
)
SELECT PRINTF('%02d',byte)
FROM   expansion
;

Cheers,
Dave


Ward Analytics Ltd - information in motion
Tel: +44 (0) 118 9740191
Fax: +44 (0) 118 9740192
www: http://www.ward-analytics.com

Registered office address: The Oriel, Sydenham Road, Guildford, Surrey, United 
Kingdom, GU1 3SR
Registered company number: 3917021 Registered in England and Wales.

-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org 
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Cecil 
Westerhof
Sent: 21 April 2016 11:20
To: SQLite mailing list
Subject: [sqlite] SELECT 0 gives TEXT instead of INTEGER

I have the following:
WITH RECURSIVE
expansion(byte) AS (
SELECT 0
UNION  ALL
SELECT byte + 1 FROM expansion
LIMIT  10
)
SELECT hex(byte)
FROM   expansion
;

I would expect to get 00-09, but I get 30-39. 30 is the hex-value of the ASCII 
0. How would I get what I want? I could subtract 48, but that is not my 
preferred solution.

--
Cecil Westerhof
___
sqlite-users mailing list
sqlite-users at mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] Encrypt the SQL query

2016-02-27 Thread Dave Baggett
As others have pointed out, since SQLite must ultimately execute the query, it 
has to be unencrypted in memory at some point. In general, there is no way to 
protect data from prying eyes if that data must be used by a running program, 
because a competent adversary can inspect the program as it's running.

High-assurance applications such as those used for classified work incorporate 
countermeasures the make it practically harder to do this, but these offer 
absolutely no additional security from a theoretical standpoint. An example is 
the requirement to keep keys and keying material AES-wrapped in memory except 
at the point of use.

As far as I'm aware, the state of the art in protecting secrets that must be 
kept in memory is so-called "whitebox encryption". You can google it, but 
essentially the idea is to take the state of a cryptographic primitive like AES 
and explode it out into a much more complex (and therefore harder to analyze), 
but equivalent representation. Here again, though, this just makes things a bit 
harder for an adversary -- it provides no additional security from a 
theoretical standpoint.

Another approach to keeping secrets that you must use in running programs is to 
store them only in hardware security modules (HSMs). In this scenario, you have 
the HSM -- usually a USB or microSD device -- do whatever computation you need 
using its on-board CPU. The HSM then provides you the result of the computation 
(decrypted data or whatever). The HSM hardware guarantees that the secret 
itself is never revealed to the host computing device; getting the stored 
secret requires physically disassembling the HSM. HSM devices are commodity 
hardware now; you can buy one for under $50.

It would be interesting to contemplate running all of SQLite on an HSM, as this 
would allow you to perform database transactions while ensuring the database 
itself was kept hidden from the host computing device. I'm not aware of any 
generically programmable HSMs capable of doing this, though, and of course your 
database would have to entirely fit within the HSM's on-board storage. These 
devices usually only have a small amount of storage -- enough to store 4096 
keys, for example.

But if there were an HSM that shipped with a "real" amount of memory and 
storage -- and was generically programmable -- there's no reason it couldn't be 
done.

Dave

Sent with inky<http://inky.com?kme=signature>

 wrote:

Hi,



In my C++ program, I will invoke SQLite to execute SQL queries. But these

queries are just stored as normal string constants in C++ so it is easy to

be decoded via reverse engineering method. Does SQLite provide a good way to

encrypt the SQL query strings while does not affect the performance when

executing the queries?



Thanks







___

sqlite-users mailing list

sqlite-users at mailinglists.sqlite.org

http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite I/O tuning redux

2016-02-20 Thread Dave Baggett
Further to me earlier plea for help -- here's an update and another question.

In an attempt to precisely understand exactly why and where we're doing 
writing, I've added a bit of code to SQLite to measure aggregate disk I/O, 
along with a profiling hook that lets me measure the amount read and written by 
each SQL statement I execute. The results are eye-opening.

I have covering indices that make some of our common queries run a lot faster. 
It turns out that the vast majority of my writing to disk is simply to maintain 
these indices. Obviously, I will remove as many indices as I can, but as it is 
right now I'm seeing sustained write rates of 4MB/sec -- almost entirely to 
keep indices up to date -- which seems insane.

Question: can I force SQLite to keep an index purely in memory, as it would do 
for a memory database? (Note that I have considered the approach of maintaining 
shadow memory tables with triggers, but this would seem to require keeping the 
entire database itself in memory -- not just the index.)

If, as I assume, there's no way to do this, what's the best way to go about 
adding this capability to SQLite? I've written fairly complicated virtual 
tables and extension functions, so I have at least some idea what I'm getting 
into.

Dave

Sent with inky<http://inky.com?kme=signature>


[sqlite] SQLite tuning advice for caching scenario

2016-02-17 Thread Dave Baggett
Clarification to my own post:

When I talk about buffering, say, 16MB of write transactions in memory, I want 
the effects of those transactions to be visible to readers immediately (once they are 
"committed" to RAM). This seems akin to WAL mode where SQLite scans the .wal 
file prior to scanning the .dat file on SELECTs.

(BTW, I'm using WAL mode and have found it performs better than standard 
journal mode on pretty much every target device.)

Dave

Sent with inky<http://inky.com?kme=signature>

"Dave Baggett"  wrote:



OK, that helps -- thank you.



One clarification: is it the case that transaction bundling ONLY affects 
write/delete operations -- i.e., those operations that alter the database?



Another clarification: is it the case that writes within a single transaction 
will remain in the in-memory page cache until COMMIT is issued? I see various 
pragmas like cache_spill that seem to control the page cache but I'm confused 
as to what they do.



Finally, is there a way to tell SQLite I would like to buffer, say, 16MB of 
write/delete transactions in memory before the transactions are written to 
disk? Here I am talking about the meta level above the transaction level -- I 
have atomic transactions and I want to defer physically writing them until I 
have enough of them (say, 16MB worth of altered pages).



Dave



[sqlite] SQLite tuning advice for caching scenario

2016-02-17 Thread Dave Baggett
OK, that helps -- thank you.

One clarification: is it the case that transaction bundling ONLY affects 
write/delete operations -- i.e., those operations that alter the database?

Another clarification: is it the case that writes within a single transaction 
will remain in the in-memory page cache until COMMIT is issued? I see various 
pragmas like cache_spill that seem to control the page cache but I'm confused 
as to what they do.

Finally, is there a way to tell SQLite I would like to buffer, say, 16MB of 
write/delete transactions in memory before the transactions are written to 
disk? Here I am talking about the meta level above the transaction level -- I 
have atomic transactions and I want to defer physically writing them until I 
have enough of them (say, 16MB worth of altered pages).

Dave

Sent with inky<http://inky.com?kme=signature>

"Simon Slavin"  wrote:



On 17 Feb 2016, at 3:34pm, Simon Slavin  wrote:



> A lot of operations on the database file are done at the beginning and end of 
> every transaction.  If your journal is in memory, then you can dramatically 
> disk usage by using large transactions.  So I think you are right and you 
> should check out that strategy.



I'm sorry, that's poorly phrased and has a word missing.  Here's a better 
version:



A high proportion of the disk activity involved in making changes to the 
database are done to support the transaction structure, rather than the 
individual operation (INSERT/UPDATE/DELETE) you asked for.  Grouping lots of 
operations together into one transaction will reduce the overhead needed for 
locking and ACID.  In addition, in some journal modes operations relating to 
the transaction as a whole are done with the database file whereas much of the 
work relating to the operations is done with the journal file.  This should 
increase the advantage in your situation gained by using large transactions.
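
For concreteness, a bare-bones sketch of that batching pattern against the C
API (the helper name and the minimal error handling are illustrative only):

#include <sqlite3.h>

/* Run a batch of write statements inside one transaction so the
** locking/journal overhead is paid once per batch, not once per
** statement. */
static int write_batch(sqlite3 *db, const char **azSql, int nSql){
  int i, rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
  for(i=0; rc==SQLITE_OK && i<nSql; i++){
    rc = sqlite3_exec(db, azSql[i], 0, 0, 0);
  }
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(db, "COMMIT", 0, 0, 0);
  }else{
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
  }
  return rc;
}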



Simon.

___

sqlite-users mailing list

sqlite-users at mailinglists.sqlite.org

http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite tuning advice for caching scenario

2016-02-17 Thread Dave Baggett
That's a great suggestion. One issue, though, is that I'd have to run two FTS 
searches to search -- one on the disk-based database, and one on the 
memory-based one. I also already have the database split into 8 .dat files for 
scaling purposes. :)

But this may be workable -- thanks. (BTW, I am using SQLite via apsw -- thanks 
for that too!)

Dave

Sent with inky<http://inky.com?kme=signature>

"Roger Binns"  wrote:




On 17/02/16 06:37, Dave Baggett wrote:

> I'd welcome any suggestions



How about two databases?  Create an in memory database for the cache.

Then whenever it hits a certain size (eg 64MB) or time passed (eg 5

minutes), copy/move data from the memory database to the persistent

(disk) one.  This ensures the writes to the disk database are in big

chunks.
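
One way to realize that is to ATTACH an in-memory database and periodically

move its rows to the disk-side tables in a single transaction.  A sketch only;

the database alias ("cache"), the table name and the flush policy are all

placeholders:

#include <sqlite3.h>

/* Assumes ATTACH DATABASE ':memory:' AS cache was run at startup and
** that new rows are written to cache.items; move them to disk in one go. */
static int flush_cache_to_disk(sqlite3 *db){
  return sqlite3_exec(db,
      "BEGIN;"
      "INSERT INTO main.items SELECT * FROM cache.items;"
      "DELETE FROM cache.items;"
      "COMMIT;", 0, 0, 0);
}

If searches also have to see not-yet-flushed rows, the in-memory side needs its

own FTS table as well, which is the two-search concern mentioned at the top of

this message.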



Roger


___

sqlite-users mailing list

sqlite-users at mailinglists.sqlite.org

http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite tuning advice for caching scenario

2016-02-17 Thread Dave Baggett
I should have clarified: the problematic locking is happening in the OS layer. 
I've completely disabled SQLite (thread) locking by building with 
SQLITE_THREADSAFE=0.
(And, yes, I'm only using SQLite from a single thread!)

Regarding transactions, I'm bundling write operations into transactions, but 
not optimally. If, say, I do a huge write as a single transaction, will that 
cause SQLite to keep everything in memory until I COMMIT? Perhaps that's the 
right strategy for me to pursue.

If there is a document (or even section of the SQLite source) that I could read 
to fully understand where the transition from memory (page cache, etc.) to disk 
occurs, that would probably get me above n00b level of understanding, which 
would help.

Dave

Sent with inky<http://inky.com?kme=signature>

"Simon Slavin"  wrote:



On 17 Feb 2016, at 2:37pm, Dave Baggett  wrote:



> I'm seeking specific advice on how to tune SQLite for this application when 
> deployed on a target with extremely poor write performance. On this target 
> writing in many small chunks is much more expensive than writing in a single 
> big sequential chunk. In particular, the write syscall is very expensive, 
> because frequent writes cause the host to spend huge amounts of time in lock 
> contention, because the locking strategy is very coarse.



The vast majority of time spent writing is not in the change you asked to make 
(INSERT, UPDATE, etc.) but in the measures made to handle the transaction: 
locking, ensuring ACID, etc..



Are you doing multiple write commands in one chunk of time ?  If so, are you 
enclosing them in a single transaction ?



Simon.

___

sqlite-users mailing list

sqlite-users at mailinglists.sqlite.org

http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite tuning advice for caching scenario

2016-02-17 Thread Dave Baggett
My application uses SQLite to cache data. Specifically, the cache isn't the 
storage of record for anything; it's just a local copy of data that's stored 
remotely on servers of record. The cache primarily facilitates rapid searching 
of the data locally, via FTS.

I'm seeking specific advice on how to tune SQLite for this application when 
deployed on a target with extremely poor write performance. On this target 
writing in many small chunks is much more expensive than writing in a single 
big sequential chunk. In particular, the write syscall is very expensive, 
because frequent writes cause the host to spend huge amounts of time in lock 
contention, because the locking strategy is very coarse.

Given that the data I am storing in SQLite is expendable -- I can always fetch 
it again from the server -- I don't really care if I lose, say, the last 30 
seconds of data written if the machine loses power, the app is killed by the 
host operating system, etc. However, I want to avoid the database going 
corrupt, since that requires fetching everything again.

It seems like an optimal strategy for this would be to keep a WAL-like journal 
in memory. I have plenty of RAM so 64MB for an in-memory WAL "file" would work. 
However, I don't see any way to tell SQLite to use WAL mode but keep the WAL 
file in memory. I also believe from reading the docs that if the memory-based 
WAL file is lost (e.g., loss of power) then the database will be corrupt.

I've tried journal_mode=MEMORY but that didn't seem to help any. It seems that 
no matter what pragmas I use, I can't convince SQLite to keep transactions in 
memory very long.

What I need, I guess, is some tuning parameter -- or I can write my own VFS -- 
that buffers entire transactions, then periodically flushes large groups of 
transactions at once, minimizing the number of write calls.

I'd welcome any suggestions from SQLite experts on this.

Dave

Sent with inky<http://inky.com?kme=signature>


[sqlite] Bug: LEFT JOIN on view confusing the query optimiser

2016-02-14 Thread Dave Blake
Thank you for your help. I now understand why the optimiser has
difficulties and 1) is slower than 2)

Thank you also for the union suggestion, although not sure in this case
that it makes anything more readable than breaking the view into tables.

In my real world application my solution is to avoid the left join by
ensuring that all songs have at least one artist. This is a better data
design in the end.

On 14 February 2016 at 18:00, Clemens Ladisch  wrote:

> Dave Blake wrote:
> > Is there anything I can do to get the optimiser to perform 1) with the
> same
> > efficiency as 2)?
>
> See point 3 of <http://www.sqlite.org/optoverview.html#flattening>; the
> optimizer has problems when you use a subquery (i.e., a view) at the
> right side of a left join.
>
> If you really want to keep the view, you could replace the outer join
> with an inner join, and add the non-matching rows separately:
>
> SELECT A.*, view.* FROM A JOIN view ON ... WHERE ...
> UNION ALL
> SELECT A.*, NULL FROM A WHERE ... AND id NOT IN (SELECT A_id FROM view);
>
> Whether this is better than breaking up the view is something you have
> to decide yourself.
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Bug: LEFT JOIN on view confusing the query optimiser

2016-02-14 Thread Dave Blake
>You need to sort the output of EXPLAIN QUERY PLAN.

Apologies for not spotting the need to sort the query plan! A fundamental
error on my part.

However, in the real world application 1) is significantly slower than 2)
despite returning the same number of rows. If I correctly read the plans
this time, it is because 2) searches the song_artist table using an index
but 1) scans the entire table.

Is there anything I can do to get the optimiser to perform 1) with the same
efficiency as 2)?

In the real world application I use a "songartistview" as a means to list and
alias-name the fields wanted from joins over several tables. This was fine
until I needed to use it in a left join, and the speed plummeted. For
readability and management I would like to continue to use a view, but it
is so slow. Is there any alternative to having to use tables like 2) does?

On 14 February 2016 at 15:15, Clemens Ladisch  wrote:

> Dave Blake wrote:
> > What I see as wrong is that in 1) (and 4) ) we have a query of the form
> > A LEFT JOIN B WHERE clause involving index fields on A
> >
> > yet the optimiser does not search A, the outer table, first using the
> index.
>
> EXPLAIN QUERY PLAN SELECT song.*, songartistview.* FROM song LEFT JOIN
> songartistview ON song.idSong = songartistview.idSong WHERE song.idSong =1;
> 1|0|0|SCAN TABLE song_artist
> 1|1|1|SEARCH TABLE artist USING INTEGER PRIMARY KEY (rowid=?)
> 0|0|0|SEARCH TABLE song USING INTEGER PRIMARY KEY (rowid=?)
> 0|1|1|SCAN SUBQUERY 1
>
> You need to sort the output of EXPLAIN QUERY PLAN.
> The song table is searched first.
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Bug: LEFT JOIN on view confusing the query optimiser

2016-02-14 Thread Dave Blake
>1) has "song LEFT JOIN (song_artist JOIN artist)".
>3) has "(song LEFT JOIN song_artist) JOIN artist".
>2) has "(song LEFT JOIN song_artist) LEFT JOIN artist".
>4) has "song LEFT JOIN (song_artist LEFT JOIN artist)".

OK, I see how you are saying the queries differ.

What I see as wrong is that in 1) (and 4) ) we have a query of the form
A LEFT JOIN B WHERE clause involving index fields on A

yet the optimiser does not search A, the outer table, first using the index.

You rightly said
> A left join forces SQLite to scan the outer table first.  This is
> necessary for the join to work correctly.

But it is not scanning song, the outer table, first. It is not optimising
correctly.

Is there a way with 1) to get it to scan song first? Can you see why that
would be the optimal plan?

On 14 February 2016 at 10:49, Clemens Ladisch  wrote:

> Dave Blake wrote:
> > 3) SELECT song.*, song_artist.*, artist.*
> > FROM song LEFT JOIN song_artist ON song.idSong = song_artist.idSong
> > JOIN artist ON song_artist.idArtist = artist.idArtist
> > WHERE song.idSong =1
> >
> > This ... has the same join combination as 1).
>
> No.
> 1) has "song LEFT JOIN (song_artist JOIN artist)".
> 3) has "(song LEFT JOIN song_artist) JOIN artist".
>
> > Similarly trying a view using outer join
> >
> > CREATE VIEW songartistleftview AS SELECT song_artist.idSong AS idSong,
> > song_artist.idArtist AS idArtist,
> > artist.strArtist AS strArtist
> > FROM song_artist LEFT JOIN artist ON song_artist.idArtist =
> artist.idArtist
> >
> > New query
> > 4) SELECT song.*, songartistleftview.* FROM song
> > LEFT JOIN songartistleftview ON song.idSong = songartistleftview.idSong
> > WHERE song.idSong =1
> >
> > 4) has same slow query plan as 1) despite having all left joins like 2).
>
> There are still differences:
> 2) has "(song LEFT JOIN song_artist) LEFT JOIN artist".
> 4) has "song LEFT JOIN (song_artist LEFT JOIN artist)".
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Bug: LEFT JOIN on view confusing the query optimiser

2016-02-13 Thread Dave Blake
>The result is correct.

I am so surprised by your assertion that I am sure there is some
misunderstanding. The way 1) is performed, scanning song_artist first, is
suboptimal. Try my example with ~8000 rows and a where clause that returns
multiple songs and you will see what I mean.

>This would not necessarily be the case if it
>executed an outer join as if it were an inner join.

Not sure what you are saying here?

>A left join forces SQLite to scan the outer table first.  This is
>necessary for the join to work correctly.

Yes. With "A LEFT JOIN B" the outer table is "A", and I expect it to be
scanned first. If there is a where clause on index fields from A I expect
that index to be searched first. That is how 2) is done, but in 1) it does
not.

>No, the difference is that the view uses an inner join, while your
>second query use only outer joins.

OK, look at this variation
3) SELECT song.*, song_artist.*, artist.*
FROM song LEFT JOIN song_artist ON song.idSong = song_artist.idSong
JOIN artist ON song_artist.idArtist = artist.idArtist
WHERE song.idSong =1

This produces the same (optimum) query plan as 2), and has the same join
combination as 1). Of course if there are no song_artist records for idSong
= 1 then this format of query will not return the same results as 1) or 2).

Similarly trying a view using outer join

CREATE VIEW songartistleftview AS SELECT song_artist.idSong AS idSong,
song_artist.idArtist AS idArtist,
artist.strArtist AS strArtist
FROM song_artist LEFT JOIN artist ON song_artist.idArtist = artist.idArtist

New query
4) SELECT song.*, songartistleftview.* FROM song
LEFT JOIN songartistleftview ON song.idSong = songartistleftview.idSong
WHERE song.idSong =1

4) has same slow query plan as 1) despite having all left joins like 2).

I genuinely believe that the optimiser is choosing the wrong plan for 1)
and 4). I do not see your argument for correctness. Could you please look
at this again.

Moreover earlier versions of SQLite, e.g. 3.8.3.1, got it right and used the same
efficient plan for 1), 2), 3) and 4). The current behaviour is making views
unusable in my application.

On 12 February 2016 at 20:47, Clemens Ladisch  wrote:

> Dave Blake wrote:
> >> It chooses a _correct_ plan.
> >
> > Really? With query 1) to select a song it first scans the song_artist
> table
> > that could contain many thousands of records. That does not seem right.
>
> The result is correct.  This would not necessarily be the case if it
> executed an outer join as if it were an inner join.
>
> A left join forces SQLite to scan the outer table first.  This is
> necessary for the join to work correctly.
>
> >> Your queries are quite different.
> >
> > They produce the same result sets, but yes one uses a view and the other
> > the constitent tables.
>
> No, the difference is that the view uses an inner join, while your
> second query use only outer joins.
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Bug: LEFT JOIN on view confusing the query optimiser

2016-02-12 Thread Dave Blake
Thank you for looking at this.

>It chooses a _correct_ plan.

Really? With query 1) to select a song it first scans the song_artist table
that could contain many thousands of records. That does not seem right.

>Your queries are quite different.

They produce the same result sets, but yes one uses a view and the other
the constituent tables.

>What is your query actually supposed to do?

I have simplified things to make the issue clear, obviously unsuccessfully.
In my application the query is more complex.

In this example there is a many to many relationship between songs and
artists resolved using a song_artist link table. Both 1) and 2) return the
song fields and the artist fields for all the artists for a chosen song.
More realistically the song and artist tables would have many more columns,
and the where clause be more involved and result in more than one song. But
none of those things are relevent to the issue.

Importantly you say

>It has exactly the same query plan as the query with the view.

That is not what I am getting. I have listed the query plans I get with
3.8.10.1, what version are you using and what query plans do you get?

On 12 February 2016 at 13:34, Clemens Ladisch  wrote:

> Dave Blake wrote:
> > I noticed my queries going very slowly after changing a join to a left
> > join, examination of the query plan showed that the optimiser was
> choosing
> > a poor plan.
>
> It chooses a _correct_ plan.
>
> > It only occurs when the left join is on a views, if explicit
> > tables are used instead then the problem does not occur.
>
> Your queries are quite different.  (One less LEFT.)
>
> This is the equivalent query with tables:
>
> SELECT song.*, song_artist.*, artist.*
> FROM song
> LEFT JOIN (song_artist JOIN artist
>ON song_artist.idArtist = artist.idArtist
>   ) AS songartistview
>  ON song.idSong = songartistview.idSong
> WHERE song.idSong =1;
>
> It has exactly the same query plan as the query with the view.
>
> > In trying to get the artist(s) for a song ...
>
> A query to get the artist(s) for a song would look like this:
>
> SELECT *
> FROM artist
> WHERE idArtist IN (SELECT idArtist
>FROM song_artist
>WHERE idSong = 1);
>
> What is your query actually supposed to do?
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Bug: LEFT JOIN on view confusing the query optimiser

2016-02-12 Thread Dave Blake
I noticed my queries going very slowly after changing a join to a left
join, examination of the query plan showed that the optimiser was choosing
a poor plan. It only occurs when the left join is on a views, if explicit
tables are used instead then the problem does not occur.

To repeat the issue try this example.
CREATE TABLE song ( idSong integer primary key, strTitle varchar(512))
CREATE TABLE song_artist ( idArtist integer, idSong integer)
CREATE TABLE artist (idArtist integer primary key, strArtist
varchar(256), strBio  text)

CREATE INDEX idxSongArtist_1 ON song_artist ( idSong);
CREATE INDEX idxSongArtist_2 ON song_artist ( idArtist);

CREATE VIEW songartistview AS SELECT song_artist.idSong AS idSong,
song_artist.idArtist AS idArtist, artist.strArtist AS strArtist FROM
song_artist JOIN artist ON song_artist.idArtist = artist.idArtist

In the data there can be songs with no artist, and artists with no song
(hence the use of left join)

Compare the query plan of
1) SELECT song.*, songartistview.* FROM song LEFT JOIN songartistview ON
song.idSong = songartistview.idSong WHERE song.idSong =1

with
2) SELECT song.*, song_artist.*, artist.* FROM song LEFT JOIN song_artist
ON song.idSong = song_artist.idSong LEFT JOIN artist ON
song_artist.idArtist = artist.idArtist WHERE song.idSong =1


Query 2) sensibly does
"SEARCH TABLE song USING INTEGER PRIMARY KEY (rowid=?)"
"SEARCH TABLE song_artist USING INDEX idxSongArtist_1 (idSong=?)"
"SEARCH TABLE artist USING INTEGER PRIMARY KEY (rowid=?)"

But the view equivalent 1) does
"SCAN TABLE song_artist"
"SEARCH TABLE artist USING INTEGER PRIMARY KEY (rowid=?)"
"SEARCH TABLE song USING INTEGER PRIMARY KEY (rowid=?)"
"SCAN SUBQUERY 1"

In trying to get the artist(s) for a song, scanning the full song_artist
table is not an efficient place to start!
Note this is a greatly simplified example to show the issue; in the real
application the impact of a suboptimal plan is significant.

My testing was done mostly in v3.8.10.2, but this also happens in v3.8.6,
yet does not seem to occur in v3.8.3.1

Running ANALYZE on my real world data made the issue even worse (plan
started with scan of artist table instead). My current work around is to
convert my views into tables, but it would be nice to be able to use views
as an alias for some table joins. I did consider using "CROSS JOIN" because
SQLite effectively treats that as a LEFT JOIN but does not left the
optimiser change the table order, however that makes my SQL non-generic and
my queries need to work on MySQL too.


[sqlite] Recent regression caused by WAL checkin

2016-01-22 Thread Dave Baggett
A recent check-in to the SQLite master code broke my encryption codec. Can 
anyone explain the purpose of Dan Kennedy's checkin of Jan 9, 2016 with 
comment 

*"If a single page is written to the wal file more than once, have each 
subsequent copy overwrite the original frame."* 

Is this an optimization? If I revert this single commit in my local copy, my 
codec works fine. If I leave it in, I get a corrupt database error very soon 
after startup. 

Any pointers appreciated!
Dave
Sent with [inky](http://inky.com?kme=signature)


[sqlite] Bug report for MAX()

2015-11-25 Thread Dave McKee
I can replicate this behaviour if I insert a zero-length string into the
column.

sqlite> create table foo(a);
sqlite> insert into foo values(5);
sqlite> insert into foo values("");
sqlite> select max(a) from foo;

sqlite> select min(a) from foo;
5
sqlite> select avg(a) from foo;
2.5

Is this a possible explanation?

On Wed, Nov 25, 2015 at 11:37 AM, Simon Slavin  wrote:

>
> On 25 Nov 2015, at 11:09am, R Smith  wrote:
>
> > sqlite> SELECT max(UnitCost) FROM BOMData;
> >
> > sqlite> SELECT min(UnitCost) FROM BOMData;
> > 0.0
>
> Can you please post the result of
>
> SELECT DISTINCT typeof(UnitCost) FROM BOMData;
>
> (I think that's how you do it.  You might need to use GROUP BY.)
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Fwd: Outdated section of docs?

2015-09-24 Thread Dave McKee
Hmmm... actually, looks like Windows 10 has regressed back to only having
one set of DST rules...

Manually changed timezone to Pacific US.  Results are from before reboot
but don't seem to have changed after.

David.

Windows 10
Dates from https://www.timeanddate.com/time/dst/1985.html
Using sqlite-shell-win32-x86-3081101.zip
--
2012: Sunday, 11 March
sqlite> select datetime("2012-03-10T12:00", 'localtime');
2012-03-10 04:00:00
sqlite> select datetime("2012-03-11T12:00", 'localtime');
2012-03-11 05:00:00

2007: Sunday, 11 March
sqlite> select datetime("2007-03-10T12:00", 'localtime');
2007-03-10 04:00:00
sqlite> select datetime("2007-03-11T12:00", 'localtime');
2007-03-11 05:00:00

2006: Sunday, 2 April
sqlite> select datetime("2006-04-01T12:00", 'localtime');
2006-04-01 05:00:00
sqlite> select datetime("2006-04-02T12:00", 'localtime');
2006-04-02 05:00:00
sqlite> select datetime("2006-04-03T12:00", 'localtime');
2006-04-03 05:00:00

sqlite> select datetime("2006-03-11T12:00", 'localtime');
2006-03-11 04:00:00
sqlite> select datetime("2006-03-12T12:00", 'localtime');
2006-03-12 05:00:00

1985: Sunday, 28 April
sqlite> select datetime("1985-03-09T12:00", 'localtime');
1985-03-09 04:00:00
sqlite> select datetime("1985-03-10T12:00", 'localtime');
1985-03-10 05:00:00


On 22 September 2015 at 17:32, Dave McKee  wrote:

>
> -- Forwarded message --
> From: Jonathan Moules 
> Date: Mon, Sep 21, 2015 at 5:31 PM
> Subject: [sqlite] Outdated section of docs?
> To: sqlite-users at mailinglists.sqlite.org
>
>
> Hi,
> I was reading this page (http://sqlite.org/lang_datefunc.html), and at
> the very bottom it says:
>
>"Non-Vista Windows platforms only support one set of DST rules.
>Vista only supports two. Therefore, on these platforms, historical
>DST calculations will be incorrect. For example, in the US, in 2007
>the DST rules changed. Non-Vista Windows platforms apply the new
>2007 DST rules to all previous years as well. Vista does somewhat
>better getting results correct back to 1986, when the rules were
>also changed."
>
>
> I wonder if that's a section that was written years ago, and the bits that
> apply to Vista also apply to the Windows releases since then? I don't know
> enough about Windows Timezone things to be able to find out easily but this
> reads like it was written back in the era of Vista and probably holds for
> newer releases too.
> Cheers,
> Jonathan
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>


[sqlite] When to disambiguate column names in queries?

2015-03-16 Thread Dave Wellman
Hi Bill,

I'm not sure about the 'sqlite world', most of my sql work is done using a
different dbms, but my 'standard' is to always use alias names anytime I
have two or more tables in a SELECT.

Yes, it can make the sql longer and sometimes it is not strictly necessary
(as you say, only needed for column names  that are not unique), but I've
found that this makes large pieces of sql much easier to read -
particularly when you have to come back to them after a few months in order
to change something.

Cheers,
Dave


Ward Analytics Ltd - information in motion
Tel: +44 (0) 118 9740191
Fax: +44 (0) 118 9740192
www: http://www.ward-analytics.com

Registered office address: The Oriel, Sydenham Road, Guildford, Surrey,
United Kingdom, GU1 3SR
Registered company number: 3917021 Registered in England and Wales.


-Original Message-
From: sqlite-users-boun...@mailinglists.sqlite.org
[mailto:sqlite-users-bounces at mailinglists.sqlite.org] On Behalf Of Drago,
William @ CSG - NARDA-MITEQ
Sent: 16 March 2015 16:17
To: General Discussion of SQLite Database
Subject: [sqlite] When to disambiguate column names in queries?

All,

Some of my Select statements are pretty long and I'm starting to think it'd
be a good idea to always qualify columns with their table names instead of just when
they are not unique. This would make the Select statements longer, but
perhaps easier to understand if the reader knows where each column is from.

Any thoughts on this? I realize something like this can be highly
subjective, but I'm wondering if there's a generally accepted practice in
the SQLite world.

Thanks,
--
Bill Drago
Senior Engineer
L3 Narda-MITEQ<http://www.nardamicrowave.com/>
435 Moreland Road
Hauppauge, NY 11788
631-272-5947 / William.Drago at L-3COM.com<mailto:William.Drago at L-3COM.com>


CONFIDENTIALITY, EXPORT CONTROL AND DISCLAIMER NOTE:This e-mail and any
attachments are solely for the use of the addressee and may contain
information that is privileged or confidential. Any disclosure, use or
distribution of the information contained herein is prohibited. In the event
this e-mail contains technical data within the definition of the
International Traffic in Arms Regulations or Export Administration
Regulations, it is subject to the export control laws of the U.S.Government.
The recipient should check this e-mail and any attachments for the presence
of viruses as L-3 does not accept any liability associated with the
transmission of this e-mail. If you have received this communication in
error, please notify the sender by reply e-mail and immediately delete this
message and any attachments.
___
sqlite-users mailing list
sqlite-users at mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] Corrupted database

2015-03-16 Thread Dave Dyer

>
>Do you still have a copy of the originally damaged Database? I believe a 
>closer look to it will reveal more corruption than the assumed.

I have the original database.  What other tests could I do to look for 
evidence? 

It appears (so far) that the database I reconstructed from the dump (minus an 
index request)
contains all the original data, but that's hard to prove.


>I do however think you are on the right track with thinking that the 
>duplicated transaction (or failure of rollback or program error or mishandling 
>of a duplicate insert fail that cause the transaction to be either committed 
>twice or not failed correctly or resubmitted without clear prior failure or 
>success) is the culprit here, and I am fairly certain this can happen in the 
>setup described above as per the http://www.sqlite.org/howtocorrupt.html

It's likely in this case that the apple/microsoft cooperation on SMB is 
ultimately at fault
because it is breaking some guarantee that sqlite is depending on.  Their file 
sharing is known
to be crap. It's still worthwhile to try to get to the bottom of it as a 
preliminary step to fixing it.



[sqlite] Corrupted database

2015-03-16 Thread Dave Dyer

>
>The only safe thing to do is to drop the index and remake it.  Or do to 
>something which does that (e.g. VACUUM).

As I said, the first step in my repair was to drop the offending index.



[sqlite] Corrupted database

2015-03-16 Thread Dave Dyer

I have some addition evidence that there is an underlying problem,
exacerbated by some failure in SMB file sharing.

In this instance, there is a set of duplicated records that did not
directly cause an indexing error, but which could have been created
if a transaction failed (presumably due to a file i/o error), was 
incorrectly unwound, and then repeated.

- Details -

Using the sqlite3 tool, starting with the damaged database;
 I dropped the indexes that had directly caused the complaint
 queried to find the duplicated records
 deleted the duplicated records
 tried to recreate the indexes (expecting this would succeed).  
 It did not.  I got a "database is malformed" error.

I take this as evidence that there was some actual damage to the
database, not just cleanly duplicated records with a bad index.

I did a full dump of the original database, removed the bad index
request, created a new database from the dump, repeated the duplicate 
record removal, and successfully created the index.  
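
(For reference, a minimal sketch of that repair sequence, with made-up file, table and column names -- the real statements obviously depend on the schema and on which index complained:)

sqlite3 damaged.db .dump > dump.sql
# edit dump.sql and delete the offending CREATE UNIQUE INDEX statement
sqlite3 repaired.db < dump.sql
sqlite3 repaired.db "DELETE FROM t WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY k1, k2);"
sqlite3 repaired.db "CREATE UNIQUE INDEX idx ON t(k1, k2);"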

This "fully repaired" database turned out to contain a duplicated set of 
records which did not cause an indexing problem, but which should not have
occurred, and was consistent with a duplicated transaction.  If this had 
been caused by a program error - ie; I really inserted the records twice, 
the database would not have been really damaged, and the shortcut repair I
tried first would have succeeded.

--

In this case, the client is a mac running os 10.7.5, the file
server is a PC running OS 8 server, and the sharing is via SMB







[sqlite] Can I copy one column of data to another table?

2015-03-09 Thread Dave
Thanks for the help Igor. :-)
Dave

On 3/7/2015 1:37 PM, Igor Tandetnik wrote:
>
>
> On 3/7/2015 11:42 AM, Dave wrote:
>> Now when trying to use the database I see that I should have made 1
>> table with all the related data (I think) and am trying to copy one
>> column of data at a time to the "main" table. Can that be done and if so
>> how? The data in all the columns has to line up with the ID numbers.
>
> insert into WideTable(ID, C1, C2, C3)
> select t1.ID, t1.C1, t2.C2, t3.C3
> from NarrowTable1 t1 join NarrowTable2 t2 using (ID) join NarrowTable3 
> t3 using (ID);
>
>



[sqlite] restructuring databases (was Re: Can I copy one column of data to another table?)

2015-03-08 Thread Dave
Jim,
   My data used to be hard coded into my app but now I am porting to 
Android and the rules have changed. I need smaller subs so the database 
fills the need to make my large subs smaller due to hard coded data. So 
in reality I just need a flat file or as you as say, a denormalized 
database. I figured I would learn the normalized way as years ago I was 
trying to figure that out with Access (with no success). But I am also 
learning Android programming (very early stage), and see that it will be 
easier for me to have a denormalized db for now. My db will be "read 
only" too. So that makes its use more like hard coded data which I want 
and need. When I learned VB years ago I got stumped often but over time 
things started to "click". I am waiting to hear those clicks with SQLite 
soon. :)
Thanks,
Dave
(also in Florida)

On 3/7/2015 8:03 PM, Jim Callahan wrote:
> The appropriate structure of the database depends on whether you need the
> tables spread out into multiple tables for consistency ("one fact in one
> location") -- a process called "normalization or whether you want all the
> data in one table for ease of querying ("denormalization").
>
> Transactional systems are usually designed to be normalized; while decision
> support tables may be more de-normalized (for ease of querying).
>
> If you want to combine your tables along the primary key, the way to do it
> in SQL is to create SELECT ...JOIN queries and then when you have a working
> join query that has the structure you want, you have a choice of using the
> SELECT...JOIN query as if it was a table or converting the query to a table
> with a CREATE TABLE ... AS SELECT ...JOIN query.
>
> Jim Callahan
> Orlando, FL
>
> On Sat, Mar 7, 2015 at 8:18 PM, Darren Duncan 
> wrote:
>



[sqlite] restructuring databases (was Re: Can I copy one column of data to another table?)

2015-03-08 Thread Dave
Duncan,
   Thanks for your comments. My db is small, with fewer than 200 records. 
The tables number 7 and the columns 18, not counting the primary 
auto increment. The UPDATE method I ended up using took about 1 second 
for the one column I moved or copied. I am just using this db as "read 
only" so my needs are quite different from the norm probably. The SQL 
commands will eventually get sorted out and the usual rules apply to how 
fast I learn. :-) First the word games. Sometimes a command or syntax is 
not intuitive so even if I looked at the word "Update" it didn't sound 
like what I was looking for at the time. I was thinking "copy" or "cut 
and paste" as it seemed feasible because a spreadsheet can do that so 
why couldn't a db app? :-) It will all be water under the bridge some 
day...and I will be smarter too.
Thanks,
schemer

On 3/7/2015 7:18 PM, Darren Duncan wrote:
> On 2015-03-07 9:59 AM, Simon Slavin wrote:
>> On 7 Mar 2015, at 4:42pm, Dave  wrote:
>>
>>> I am fairly new at this although I have wanted to learn and tried 
>>> again and again...But I have a problem. I created a database and 
>>> probably did it wrong and I am trying to fix it. I made a database 
>>> with 7 tables in it all with a primary key and a record ID that 
>>> matches the primary key. Now when trying to use the database I see 
>>> that I should have made 1 table with all the related data (I think) 
>>> and am trying to copy one column of data at a time to the "main" 
>>> table. Can that be done and if so how?
>>
>> Without going into your situation in detail, I have a suggestion 
>> which may help you approach the problem another way.  The SQLite 
>> shell tool has a '.dump' command which turns a database into SQL 
>> commands, and a '.read' command which uses the commands to create 
>> schema and data in a new database.
>>
>> So dump the database into a text file.  Then you can use editing 
>> tools (usually global find-and-replace) mess with the text file so 
>> that all the inserting is done to the same table.  Then you can 
>> create your new database by reading the altered text file.
>
> Frankly the idea (proposed by Simon here) of solving this by dumping 
> everything to a text file and manipulating it there with editing tools 
> sounds abysmal to me.
>
> The only time one might consider that reasonable is if the total 
> number of records is just a handful and you're essentially just 
> re-entering them from scratch.
>
> Once you've already got your data in SQLite, the best general solution 
> by far is to use SQL to manipulate it; if you can't, you've already lost.
>
> What you want to do is create new table(s) with the new format you 
> want, and then do INSERT INTO <new table> SELECT FROM <old tables> such that the 
> SELECT easily and reliably does all the hard work of collecting up all 
> the data from the old tables and rearranging it into the new format.  
> Depending on the complexity of the task, you may also create temporary 
> tables for intermediate stages of the processing.
>
> Solving the problem with the likes of SQL UPDATE is hard, but using 
> SELECT is easy.
>
> By a similar token, I believe SQL is often the best place to clean up 
> data from external sources.  Create temporary tables that are very lax 
> in format and constraints that take the external data as pristine as 
> possible, load into those, and then use SELECTs/etc to derive cleaner 
> versions from those into the final tables (or other intermediaries), 
> and you can use the SQL powers to filter or compensate for dirty data 
> etc.  Especially useful for dealing with duplicate data in the source, 
> find or handle with SELECT GROUP BY etc rather than trying conditional 
> INSERT logic or what have you.
>
> -- Darren Duncan
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
Ryan,
   Thanks for the added info. I will be happy to take this off-list for 
any further pointers if they may be best suited there. I don't want to 
be the guy that causes any trouble as a noob. :-) I will touch base with 
you off -list soon but maybe in a day or so. I appreciate the help.
Thanks again,
Dave

On 3/7/2015 3:06 PM, R.Smith wrote:
>
>
> On 2015-03-07 10:55 PM, Dave wrote:
>> Ryan,
>>   I have been to the link below but was under the impression that SQL 
>> and SQLite are two different things so I usually just look up SQLite 
>> help. I can do simple queries as I did the Kahn Academy training. :-) 
>> Most of the queries are straight forward but this one seemed a little 
>> tougher. Thanks for understanding and having been where I am now. It 
>> makes me feel better already. :-)
>> Dave
>
> SQL is a kind of language, like English. The SQL language is spoken by 
> most relational database systems, such as SQLite, PostGres, MSSQL, 
> MySQL, Oracle, etc.  Of course most of them bend the standard language 
> a bit to their own needs in much the same way that English differs 
> between America and England, but it's still English. A good grasp of 
> SQL in general will enable you to use any of these systems, not just a 
> single one.
>
> Myself and others here will be happy to share even more or answer any 
> other questions you may have as you learn, you are not alone!
>
> I am however going to suggest we go off-list for any more pointers 
> regarding the above, simply because this thread has become a little 
> verbose for the purposes of the list - but that's simply in 
> consideration of the other readers, you are welcome to post any other 
> questions.
>
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
Ryan,
   I have been to the link below but was under the impression that SQL 
and SQLite are two different things so I usually just look up SQLite 
help. I can do simple queries as I did the Kahn Academy training. :-) 
Most of the queries are straight forward but this one seemed a little 
tougher. Thanks for understanding and having been where I am now. It 
makes me feel better already. :-)
Dave

On 3/7/2015 2:44 PM, R.Smith wrote:
>
>
> On 2015-03-07 10:32 PM, Dave wrote:
>> Ryan,
>>   Thanks for your reply. As I mention in my last post:
>>
>> I got it sorted out I ended up with this using my SQLite Expert 
>> Professional (if it matters).
>
> There are a lot of great tools available for SQLite (possibly moreso 
> than any other DB system because of the open nature and myriad of ways 
> it can be used) and SQLitexpert is a favourite of mine too - but the 
> tool never matters, only the SQL.
>
>> But I will keep this info for testing to see if it does the same 
>> thing or can help me in the future. I tried to learn DB with ACCESS 
>> years ago and never really fully understood it. I would get stuck on 
>> the relationship part with the keys and the visual flow charts always 
>> seemed to point from one table to another but the lines pointed to an 
>> item instead of an ID. Or so it looked like that to me. Never really 
>> got to use it back then. 
>
> You and us both, you first have to understand relational DB design 
> before using the arrows and blocks become sensible in any meaningful way.
>
>> Now I am starting over again with SQLite. And I have made some 
>> progress, but have a long way to go. Good thing for me is I plan to 
>> use my database (for now) as basically a "flat file" where my app 
>> will just use the related data in click events to further process.
>
> May I suggest checking out http://www.w3schools.com/sql/sql_intro.asp
> Other resources are available too, but working through the very simple 
> lessons and great examples there will switch all the lights on for you 
> - without a single arrow being drawn.
>
> Good luck!
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
On 3/7/2015 1:42 PM, R.Smith wrote:
>
> Hi Dave, you did not give us the schemata so I'm going to guess you 
> have tables like this:
>
> CREATE TABLE T1("ID" INT PRIMARY KEY,  "val1" TEXT);
> CREATE TABLE T2("ID" INT PRIMARY KEY,  "val2" TEXT);
> CREATE TABLE T3("ID" INT PRIMARY KEY,  "val3" TEXT);
> ... all the way to ...
> CREATE TABLE T7("ID" INT PRIMARY KEY,  "val7" TEXT);
>
> And now you figured out best would be to have 1 Table like this:
>
> CREATE TABLE T_ALL("ID" INT PRIMARY KEY,  "val1" TEXT,  "val2" TEXT, 
> "val3" TEXT,  "val4" TEXT,  "val5" TEXT,  "val6" TEXT, "val7" TEXT);
>
> Right?
>
> If this is the case, assuming[1] all ID's are present in all tables, 
> the query to put all the single tables into the big one is simply this:
>
> INSERT INTO T_ALL (ID, val1, val2, val3, val4, val5, val6, val7)
>   SELECT T1.ID, T1.val1, T2.val2, T3.val3, T4.val4, T5.val5, T6.val6, 
> T7.val7
>FROM T1
>LEFT JOIN T2 ON T2.ID=T1.ID
>LEFT JOIN T3 ON T3.ID=T1.ID
>LEFT JOIN T4 ON T4.ID=T1.ID
>LEFT JOIN T5 ON T5.ID=T1.ID
>LEFT JOIN T6 ON T6.ID=T1.ID
>LEFT JOIN T7 ON T7.ID=T1.ID
>   WHERE 1;
>
> This simply looks up the same ID in all the tables and inserts each 
> tables value for the val column into the main table. You will have to 
> (obviously) substitute your actual table/column names.
>
> [1] : If all the IDs are not present in all the columns, you will get 
> NULL values inserted in the main table or completely miss out some IDs 
> (if not present in T1), so if this is the case, let us know the exact 
> schema and layout to figure out a more correct transfer.
>
> HTH
> Ryan
>
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
Ryan,
   After reviewing your answer some more, you pretty much nailed it. I 
was trying some cut/paste method earlier with no luck at all, getting 
the field filled with nulls and the data appended to the rows instead 
of the columns. Thanks again for your help. I have printed this out for 
future reference. :-)
Dave



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
Ryan,
   Thanks for your reply. As I mention in my last post:

I got it sorted out I ended up with this using my SQLite Expert 
Professional (if it matters).

update tableB set column2 = (select column2 from tableA where 
tableA.rowid = tableB.rowid)

Of course I did not figure that out myself and asked for help on the 
forum of the software and got an answer within 30 minutes.
-

But I will keep this info for testing to see if it does the same thing 
or can help me in the future. I tried to learn DB with ACCESS years ago 
and never really fully understood it. I would get stuck on the 
relationship part with the keys and the visual flow charts always seemed 
to point from one table to another but the lines pointed to an item 
instead of an ID. Or so it looked like that to me. Never really got to 
use it back then. Now I am starting over again with SQLite. And I have 
made some progress, but have a long way to go. Good thing for me is I 
plan to use my database (for now) as basically a "flat file" where my 
app will just use the related data in click events to further process.

Thanks again,
schemer


On 3/7/2015 1:42 PM, R.Smith wrote:
>
>
> On 2015-03-07 06:42 PM, Dave wrote:
>> I am fairly new at this although I have wanted to learn and tried 
>> again and again...But I have a problem. I created a database and 
>> probably did it wrong and I am trying to fix it. I made a database 
>> with 7 tables in it all with a primary key and a record ID that 
>> matches the primary key. Now when trying to use the database I see 
>> that I should have made 1 table with all the related data (I think) 
>> and am trying to copy one column of data at a time to the "main" 
>> table. Can that be done and if so how? The data in all the columns 
>> has to line up with the ID numbers. I know I can retype it all 
>> manually but it seems it should be an easy task. I have SQLite Expert 
>> Professional.
>
> Hi Dave, you did not give us the schemata so I'm going to guess you 
> have tables like this:
>
> CREATE TABLE T1("ID" INT PRIMARY KEY,  "val1" TEXT);
> CREATE TABLE T2("ID" INT PRIMARY KEY,  "val2" TEXT);
> CREATE TABLE T3("ID" INT PRIMARY KEY,  "val3" TEXT);
> ... all the way to ...
> CREATE TABLE T7("ID" INT PRIMARY KEY,  "val7" TEXT);
>
> And now you figured out best would be to have 1 Table like this:
>
> CREATE TABLE T_ALL("ID" INT PRIMARY KEY,  "val1" TEXT,  "val2" TEXT, 
> "val3" TEXT,  "val4" TEXT,  "val5" TEXT,  "val6" TEXT, "val7" TEXT);
>
> Right?
>
> If this is the case, assuming[1] all ID's are present in all tables, 
> the query to put all the single tables into the big one is simply this:
>
> INSERT INTO T_ALL (ID, val1, val2, val3, val4, val5, val6, val7)
>   SELECT T1.ID, T1.val1, T2.val2, T3.val3, T4.val4, T5.val5, T6.val6, 
> T7.val7
>FROM T1
>LEFT JOIN T2 ON T2.ID=T1.ID
>LEFT JOIN T3 ON T3.ID=T1.ID
>LEFT JOIN T4 ON T4.ID=T1.ID
>LEFT JOIN T5 ON T5.ID=T1.ID
>LEFT JOIN T6 ON T6.ID=T1.ID
>LEFT JOIN T7 ON T7.ID=T1.ID
>   WHERE 1;
>
> This simply looks up the same ID in all the tables and inserts each 
> tables value for the val column into the main table. You will have to 
> (obviously) substitute your actual table/column names.
>
> [1] : If all the IDs are not present in all the columns, you will get 
> NULL values inserted in the main table or completely miss out some IDs 
> (if not present in T1), so if this is the case, let us know the exact 
> schema and layout to figure out a more correct transfer.
>
> HTH
> Ryan
>
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
Hi Paul,
   I got it sorted out I ended up with this using my SQLite Expert 
Professional (if it matters).

update tableB set column2 = (select column2 from tableA where 
tableA.rowid = tableB.rowid)

Of course I did not figure that out myself and asked for help on the 
forum of the software and got an answer within 30 minutes. :-)

Thanks,
schemer



On 3/7/2015 1:35 PM, Paul Sanderson wrote:
> Dave
>
> I'm not sure exactly what you are trying to do from your description -
> the schema of  the tables you have and those that you want may help.
>
> But as a general idea you might be able to use something along the lines of
>
> create table newtable as select x, y, z from oldtable
>
> More info here:
>
> https://www.sqlite.org/lang_createtable.html
> Paul
> www.sandersonforensics.com
> skype: r3scue193
> twitter: @sandersonforens
> Tel +44 (0)1326 572786
> http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
> -Forensic Toolkit for SQLite
> email from a work address for a fully functional demo licence
>
>
> On 7 March 2015 at 19:08, Dave  wrote:
>> Thanks Simon. If I can't figure that out I will just type all the data in
>> manually and learn from the school of hard knocks. :-) I googled it and it
>> seems that I am not the only one that has tried to do this and it seems like
>> it should be easy. I think in regular SQL it might be easier. Oh well, I had
>> my app data "hard coded" in the past and decided to use a database to make
>> it easier. I am sure it will be, once I get more experience.
>> schemer
>>
>>
>> On 3/7/2015 11:59 AM, Simon Slavin wrote:
>>> On 7 Mar 2015, at 4:42pm, Dave  wrote:
>>>
>>>> I am fairly new at this although I have wanted to learn and tried again
>>>> and again...But I have a problem. I created a database and probably did it
>>>> wrong and I am trying to fix it. I made a database with 7 tables in it all
>>>> with a primary key and a record ID that matches the primary key. Now when
>>>> trying to use the database I see that I should have made 1 table with all
>>>> the related data (I think) and am trying to copy one column of data at a
>>>> time to the "main" table. Can that be done and if so how?
>>> Without going into your situation in detail, I have a suggestion which may
>>> help you approach the problem another way.  The SQLite shell tool has a
>>> '.dump' command which turns a database into SQL commands, and a '.read'
>>> command which uses the commands to create schema and data in a new database.
>>>
>>> So dump the database into a text file.  Then you can use editing tools
>>> (usually global find-and-replace) mess with the text file so that all the
>>> inserting is done to the same table.  Then you can create your new database
>>> by reading the altered text file.
>>>
>>> Simon.
>>>



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
Thanks Simon. If I can't figure that out I will just type all the data 
in manually and learn from the school of hard knocks. :-) I googled it 
and it seems that I am not the only one that has tried to do this and it 
seems like it should be easy. I think in regular SQL it might be easier. 
Oh well, I had my app data "hard coded" in the past and decided to use a 
database to make it easier. I am sure it will be, once I get more 
experience.
schemer

On 3/7/2015 11:59 AM, Simon Slavin wrote:
> On 7 Mar 2015, at 4:42pm, Dave  wrote:
>
>> I am fairly new at this although I have wanted to learn and tried again and 
>> again...But I have a problem. I created a database and probably did it wrong 
>> and I am trying to fix it. I made a database with 7 tables in it all with a 
>> primary key and a record ID that matches the primary key. Now when trying to 
>> use the database I see that I should have made 1 table with all the related 
>> data (I think) and am trying to copy one column of data at a time to the 
>> "main" table. Can that be done and if so how?
> Without going into your situation in detail, I have a suggestion which may 
> help you approach the problem another way.  The SQLite shell tool has a 
> '.dump' command which turns a database into SQL commands, and a '.read' 
> command which uses the commands to create schema and data in a new database.
>
> So dump the database into a text file.  Then you can use editing tools 
> (usually global find-and-replace) mess with the text file so that all the 
> inserting is done to the same table.  Then you can create your new database 
> by reading the altered text file.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



[sqlite] Can I copy one column of data to another table?

2015-03-07 Thread Dave
I am fairly new at this although I have wanted to learn and tried again 
and again...But I have a problem. I created a database and probably did 
it wrong and I am trying to fix it. I made a database with 7 tables in 
it all with a primary key and a record ID that matches the primary key. 
Now when trying to use the database I see that I should have made 1 
table with all the related data (I think) and am trying to copy one 
column of data at a time to the "main" table. Can that be done and if so 
how? The data in all the columns has to line up with the ID numbers. I 
know I can retype it all manually but it seems it should be an easy 
task. I have SQLite Expert Professional.
Thanks,
schemer


[sqlite] Corrupted database

2015-03-05 Thread Dave Dyer
If you can .dump it, can you also use .read to restore it? And if so, how 
damaged does it appear to be?

My databases, with a corrupt index, couldn't be restored directly, but
the duplicate entries could be seen, cleaned up, and then the restore
succeeded.  Or (more conveniently) remove the index creation from the
.dump, restore, use queries to find and remove duplicates, then reinstate
the index.

My thinking is that the kind of corruption I've had should at least
be a different error code, and that a pragma to drop the index could
allow repair without the extreme of dumping and editing the dump file.



[sqlite] Corrupted database

2015-03-04 Thread Dave Dyer

I'd be interested if you could characterize the corruption.  For 
example, can use still use .dump to dump the database, and if so
what kind of damage is there?

The cases I've encountered recently, the "corruption" was only 
a few duplicated records, which broke the uniqueness constraint
on an index.

Interestingly, and perhaps alarmingly, if the index hadn't existed,
no corruption would have been detected, although the internal events
that allowed the duplicate entries would still be an undetected error.



[sqlite] recurrent failure mode

2015-02-26 Thread Dave Dyer

>
>You might want to read my message on the topic from the list archives,
>dated Sat, 31 Jan 2015.  

In this case, no concurrent or multiple users are involved.  It's just
one client and the database.   There's still plenty of scope for the
networked file system to do things that make sqlite fail.



[sqlite] recurrent failure mode

2015-02-25 Thread Dave Dyer

>
>Can you provide the schema (the CREATE TABLE and any CREATE INDEX commands) 
>for that table ?

CREATE TABLE preference_table (
 preferenceSet text,/* name of this preference group */
 preferenceName text,   /* a preference in this group */
 preferenceValue text   /* sort order of this k...;
CREATE UNIQUE INDEX preferenceindex on 
preference_table(preferenceSet,preferenceName);
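
(A minimal sketch, not from the original exchange, of how the duplicated keys can be listed against that schema:)

SELECT preferenceSet, preferenceName, COUNT(*)
  FROM preference_table
 GROUP BY preferenceSet, preferenceName
HAVING COUNT(*) > 1;

Every key returned occurs more than once and therefore can no longer satisfy the unique index.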



>Do you have any multi-access things going on ?  Two or more computers, 
>applications, processes or threads trying to access the database at the same 
>time ?

No, but it would be normal for the database to be on a different
computer than the sqlite client, and be using whatever networked 
file system is common.  The culprit clients seem to be macs, we're
still seeking more information about the specifics.

>Does your application check the result code returned from all sqlite3_ calls ? 
> Not just the ones inserting rows, but also those opening and closing the 
>file, setting PRAGMAs, or anything else it does with the sqlite3_ library ?

Yes.  It all goes through a common interface function which is
careful about checking.

As I said in the original message, this is something that has been
working without problems for a few years, the only thing that's changing
is the network and OS environment it's deployed in.  My hypothesis is
that a new failure mode in the file system is tickling a sqlite bug.
Based on the evidence available now, a transaction that is trying to 
insert 4 records fails, and is retried, resulting in 8 records which
can't be indexed.




[sqlite] recurrent failure mode

2015-02-25 Thread Dave Dyer

We're experiencing a new, recurrent failure mode in an old (ie; not recently 
changed) sqlite application.   This may be associated with buggy networked
file system implementations (thanks to apple and/or microsoft)

The apparent problem is that indexes on a small table become corrupted
by not being unique.  Except for the non-uniqueness of the index keys,
there's no apparent damage.

The facile explanation would be that a transaction to insert a new
record was executed twice, but the indexes were incorrectly maintained.

INSERT INTO "preference_table" VALUES('Picture Placer-707-1304b-19-Maranda 
Richardson','scrollPos','0');
INSERT INTO "preference_table" VALUES('Picture Placer-707-1304b-19-Maranda 
Richardson','nFill','0');
INSERT INTO "preference_table" VALUES('Picture Placer-707-1304b-19-Maranda 
Richardson','placeInBW','0');
INSERT INTO "preference_table" VALUES('Picture Placer-707-1304b-19-Maranda 
Richardson','DB_Subset','');


I suppose that this might be a sqlite bug if the "insert records" step
and the "maintain indexes" step were separated by a disk error and the
rollback of the failed transaction was incomplete.



Re: [sqlite] sqlite3 tool bug

2015-01-17 Thread Dave Dyer

>
>The  has been in shell.c since 3.8.6.  We are on 3.8.8.  Why
>not upgrade?
>-- 

Here in the real world, when everything is working, we ask "why upgrade".

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] sqlite3 tool bug

2015-01-17 Thread Dave Dyer

>
>
>OK.  Dave, please try this patch at let us know if it works better for
>you:  https://www.sqlite.org/src/info/80541e8b94b7
>

It needs #include  to compile in my sources.
With that, it seems to fix the problem.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] sqlite3 tool bug summary

2015-01-17 Thread Dave Dyer

>
> .once '| sqlite3 new.db'
> .dump

.Once is not a command in the version of sqlite3 I use.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] sqlite3 tool bug summary

2015-01-17 Thread Dave Dyer

>
>> But that doesn't explain the difference between redirecting to a file
>> and redirecting to a pipe.
using .output file works
using > to direct stdout to a file works and produces the same file as .output
using .read file works
using < file does not work.
using | to shortcut > and < doesn't work.
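
So the combination that does work end to end is roughly (a minimal sketch, not from the original message):

sqlite3 old.db
sqlite> .output dump.sql
sqlite> .dump
sqlite> .exit
sqlite3 new.db
sqlite> .read dump.sql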


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

