Re: [sqlite] Custom aggregate functions in Tcl

2019-02-02 Thread Andy Goth
se -class:

db function list -deterministic -window -initial {foo bar} {
    apply {{method state args} {
        puts \[[lrange [info level 0] 2 end]\]
        switch $method {
            step    {concat $state $args}
            inverse {lrange $state [llength $args] end}
            value   {set state}
            destroy {unset state}
        }
    }}
}

Used with the prior [db eval] query, the above prints:

[new {foo bar}]
[step {} a 1]
[step {a 1} b -2]
[value {a 1 b -2}]
a {a 1 b -2}
[step {a 1 b -2} c 3]
[value {a 1 b -2 c 3}]
b {a 1 b -2 c 3}
[inverse {a 1 b -2 c 3} a 1]
[step {b -2 c 3} d -4]
[value {b -2 c 3 d -4}]
c {b -2 c 3 d -4}
[inverse {b -2 c 3 d -4} b -2]
[step {c 3 d -4} e 5]
[value {c 3 d -4 e 5}]
d {c 3 d -4 e 5}
[inverse {c 3 d -4 e 5} c 3]
[step {d -4 e 5} f -6]
[value {d -4 e 5 f -6}]
e {d -4 e 5 f -6}
[inverse {d -4 e 5 f -6} d -4]
[value {e 5 f -6}]
f {e 5 f -6}
[destroy {e 5 f -6}]
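
For reference, the [db eval] query itself is not reproduced here; a query of
roughly the following shape (a guess on my part, using a one-preceding /
one-following window frame) produces the call sequence traced above:

db eval {
    SELECT column1 AS k,
           list(column1, column2) OVER (
               ORDER BY column1
               ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS l
      FROM (VALUES ('a', 1), ('b', -2), ('c', 3), ('d', -4), ('e', 5), ('f', -6))
} {
    puts [list $k $l]
}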

--
Andy Goth | 


Re: [sqlite] Custom aggregate functions in Tcl

2019-01-31 Thread Andy Goth
I read some more about window functions and now see more clearly that they
are an extension to aggregate functions. Now I understand why it makes
sense to have a method name for both aggregate and window functions. I'll
also go ahead and put window function support in my code next chance I get,
rather than wait until I'm totally solid on aggregate functions.

On Wed, Jan 30, 2019, 13:59 Richard Hipp wrote:
> It seems that you distinguish between the xStep and xFinal methods by
> the number of arguments.  xStep has 1+N arguments (where N is the number
> of function parameters) and xFinal has just 1.
>
> Dan suggests (and I agree) that this will not extend well to window
> functions.  It might be better to have an initial argument that is the
> "method" name.  xStep would 2+N arguments where the first argument is
> the string "step" and xFinal has 2 arguments where the first argument
> is "final".  Then when you go to add the xValue and xInverse routines
> for window functions, you will have a convenient way to distinguish
> those calls from xStep and xFinal.
>
> On 1/30/19, Andy Goth  wrote:
> > On 1/29/19 1:15 AM, Andy Goth wrote:
> >> I wish to define custom aggregate functions in Tcl
> >
> > Initial implementation:
> >
> >
> https://chiselapp.com/user/andy/repository/sqlite-andy/info/e0689f05d1f8792d
> >
> > Sample program, intended to be run from the root of a built SQLite tree:
> >
> > #!/usr/bin/env tclsh
> >
> > load .libs/libtclsqlite3.so
> >
> > sqlite3 db
> >
> > db function incr -deterministic {
> >
> >  apply {{arg} {
> >
> >  incr arg
> >
> >  }}
> >
> > }
> >
> > db function list -deterministic -aggregate {
> >
> >  apply {{state args} {
> >
> >  concat $state $args
> >
> >  }}
> >
> > }
> >
> > puts [db eval {
> >
> >  SELECT list(column1, ''), incr(42)
> >
> >FROM (VALUES ('#'), ('#'), (' c '), ('\'), ('\\'), ('{ xx '),
> > ('}'))
> > }]
> >
> > db close
> >
> > # vim: set sts=4 sw=4 tw=80 et ft=tcl:
> >
> >
> > Here's the output:
> >
> > {{#} {} # {} { c } {} \\ {} {\\} {} \{\ xx\  {} \} {}} 43
> >
> >
> > Notice that the first # element is brace-quoted and the second one is
> > not.  This demonstrates TIP 401 compliance (http://tip.tcl.tk/401) in
> > recent Tcl.
> >
> > Before I can call this project finished, I need to update the test suite
> > and documentation.  But I didn't want to wait for that to release the
> > code I've already written, so here you go.
> >
> > As a separate project, I would like to improve the Tcl script dispatch.
> > For several reasons, safeToUseEvalObjv() doesn't really meet its stated
> > goal.  I have a lot to say about this but should wait until another time
> > to go into detail.  Let's finish the aggregate function project first.
> >
> > Another good follow-on project would be adding support for window
> > functions.  Before I can take that on, I need to gain some experience
> > using, let alone writing, window functions.
> >
> > --
> > Andy Goth | 
> >
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Custom aggregate functions in Tcl

2019-01-30 Thread Andy Goth

On 1/30/19 3:27 PM, Andy Goth wrote:

The next chance I get (probably tomorrow morning), I'll go ahead and add
"step" or "final" as the initial argument to aggregate functions.  I'll
also lift the prohibition on aggregate functions with no arguments.


This change is now committed.

https://chiselapp.com/user/andy/repository/sqlite-andy/info/4fc35d5e09e2a486

I went with a string table approach that can be later expanded to handle
method names for window functions.


oo::class create ListAggregate {
    method step {state args} {
        concat $state $args
    }
    method final {state} {
        my destroy
        return $state
    }
}
db function list -deterministic -aggregate [ListAggregate new]


This approach won't work.  ListAggregate needs to be separately
instantiated for every single group, not just once per database
connection.  Instead, something horrible like this is needed:

oo::class create ListAggregate {
    variable state
    method step {args} {
        lappend state {*}$args
    }
    method final {} {
        return $state
    }
}
db function list -deterministic -aggregate {
    apply {{method obj args} {
        if {$obj eq {}} {
            set obj [ListAggregate new]
        }
        set result [$obj $method {*}$args]
        if {$method eq "final"} {
            $obj destroy
            return $result
        } else {
            return $obj
        }
    }}
}

I don't like this one bit.  There's more logic adapting between the
TclOO implementation and the tclsqlite calling convention than there is
actual work being done.  It's much simpler to just store the state data
in the return value than in an object instantiation.  Nevertheless, this
approach may be more appropriate for complex window functions, so I'm
exploring it rather than dismissing it out of hand.

Let's have a choice of calling conventions, so different-sized tasks can
have more infrastructure if they need it and less if they don't.  By
default, use the simple convention I started with, though augmented with
method name, because why not.  But as an alternative, well... start by
looking at this example:

oo::class create ListAggregate {
    variable state
    method step {args} {
        lappend state {*}$args
    }
    method final {} {
        return $state
    }
}
db function list -deterministic -aggregate -class ListAggregate

The addition of the -class switch changes the script to instead be the
name of a TclOO class, to which I will refer as $class:

1. Before the first invocation of the step function (or final function,
if there are no rows), [$class new] is called.  Its return value is
saved as the object instance name, referred to below as $obj.

2. For each row, [$obj step ?arg ...?] is called.

3. After the last row, [$obj final] is called, and its return value is
used as the return value of the SQL function.

4. To clean up, [$obj destroy] is called.

If an error occurs in step 1, terminate immediately.  If an error occurs
in steps 2 or 3, go straight to step 4.
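
Purely as an illustration of that calling sequence (this proc is not part of
the patch; its name and the row encoding are made up), a Tcl driver walking
the ListAggregate class above through steps 1 through 4 could look like:

proc runAggregate {class rows} {
    set obj [$class new]            ;# 1. instantiate once per group
    try {
        foreach row $rows {
            $obj step {*}$row       ;# 2. called once per row
        }
        return [$obj final]         ;# 3. result of the SQL function
    } finally {
        $obj destroy                ;# 4. always clean up, even on error
    }
}

puts [runAggregate ListAggregate {{a 1} {b -2} {c 3}}]  ;# prints: a 1 b -2 c 3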

You may notice TclOO is not actually required.  Other object systems
such as [incr tcl], XOTcl, Snit, and stooop may be used, though possibly
with wrapper shims, but honestly anybody can write commands that behave
like $class and $obj are expected to behave.  Thus, there is no real
dependency on Tcl 8.6.

[$class new] and [$obj destroy] may call user-defined constructors and
destructors, but while this may be useful to the programmer, it is
internal to TclOO and is not SQLite's concern.

To recap, when -class is not used, the procedure is as follows, where
$script is the script argument to [db function]:

1. For each row, [$script step $state ?arg ...?] is called, where $state
is (first row) empty string or (subsequent rows) the return value of the
previous call to [$script step].

2. After the last row, [$script final $state] is called, and its return
value is used as the return value of the SQL function.  $state is the
return value of the last call to [$script step] or empty string if there
were no rows.

Unless there are comments or objections, I'll try adding -class the next
time I get a chance to hack on this.  I think it may be overkill for
aggregate functions but will probably be useful for window functions.

--
Andy Goth | 


Re: [sqlite] Custom aggregate functions in Tcl

2019-01-30 Thread Andy Goth

On 1/30/19 1:59 PM, Richard Hipp wrote:

It seems that you distinguish between the xStep and xFinal methods by
the number of arguments.  xStep [has] 1+N arguments (where N is the
number of function parameters) and xFinal has just 1.


Yes.  I was explicit about that in my first email.


Dan suggests (and I agree) that this will not extend well to window
functions.


I agree too.  Trouble is, I designed around aggregate functions while
disregarding window functions because of my total lack of experience
with window functions.


It might be better to have an initial argument that is the
"method" name.


I did initially consider this but decided it was not necessary for
aggregate functions because final will always have zero arguments
whereas step will never usefully have zero arguments.


xStep would 2+N arguments where the first argument is the string
"step" and xFinal has 2 arguments where the first argument is "final".
Then when you go to add the xValue and xInverse routines for window
functions, you will have a convenient way to distinguish those calls
from xStep and xFinal.


Adding an extra initial argument to the aggregate functions is not a
difficult thing to do.  I can certainly add that.  I just didn't see it
as useful because it conveys information redundantly provided by another
path guaranteed to be there.

Again, I agree it leaves room for future expansion, but (1) it's highly
unlikely that the fundamental definition of an aggregate function will
change, and (2) I don't think it's strictly necessary that the same
calling convention be used for aggregate functions and window functions.

My preference would be that all functions, regardless of kind, have the
same convention, but this is not possible because the interface to
scalar functions is set in stone, and it would be useless to insert a
first argument that's always "function".  Therefore, since aggregate
functions must be defined differently than scalar functions, I thought
it would be fine for window functions to be defined differently than
aggregate functions.

The next chance I get (probably tomorrow morning), I'll go ahead and add
"step" or "final" as the initial argument to aggregate functions.  I'll
also lift the prohibition on aggregate functions with no arguments.

All my above reasoning notwithstanding (I just wanted to document why I
took the approach I did), this change does benefit aggregate functions
by making it easier to tie into TclOO and similar Tcl object systems
which use initial arguments as method names.  As for procedures that
don't need this, they are free to ignore the argument.

Old example, works with current code, here for baseline comparison:

db function list -deterministic -aggregate {
    apply {{state args} {
        concat $state $args
    }}
}

New example:

db function list -deterministic -aggregate {
    apply {{method state args} {
        concat $state $args
    }}
}

Alternately:

db function list -deterministic -aggregate {
    apply {{method state args} {
        switch $method {
            step {concat $state $args}
            final {set state}
        }
    }}
}

Or:

namespace eval ListAggregate {
    namespace export step final
    namespace ensemble create
    proc step {state args} {
        concat $state $args
    }
    proc final {state} {
        return $state
    }
}
db function list -deterministic -aggregate ListAggregate

Or:

oo::class create ListAggregate
oo::objdefine ListAggregate {
    method step {state args} {
        concat $state $args
    }
    method final {state} {
        return $state
    }
}
db function list -deterministic -aggregate ListAggregate

Or:

oo::class create ListAggregate {
    method step {state args} {
        concat $state $args
    }
    method final {state} {
        my destroy
        return $state
    }
}
db function list -deterministic -aggregate [ListAggregate new]

Though I do not prefer that final example since it creates heavyweight
infrastructure to hold per-instance internal state, yet doesn't actually
make use of it.  However, this could be useful as an alternative to
storing state data in the return value, making it possible to modify it
in-place without incurring copy-on-write, as documented in the
tclSqlFuncStep() comments.

Overall, my preference is to avoid creating global named objects when
anonymous values will do the job, hence my use of [apply].

--
Andy Goth | 


Re: [sqlite] Custom aggregate functions in Tcl

2019-01-30 Thread Andy Goth

On 1/29/19 1:15 AM, Andy Goth wrote:

I wish to define custom aggregate functions in Tcl


Initial implementation:

https://chiselapp.com/user/andy/repository/sqlite-andy/info/e0689f05d1f8792d

Sample program, intended to be run from the root of a built SQLite tree:

#!/usr/bin/env tclsh

load .libs/libtclsqlite3.so

sqlite3 db

db function incr -deterministic {
    apply {{arg} {
        incr arg
    }}
}

db function list -deterministic -aggregate {
    apply {{state args} {
        concat $state $args
    }}
}

puts [db eval {
    SELECT list(column1, ''), incr(42)
      FROM (VALUES ('#'), ('#'), (' c '), ('\'), ('\\'), ('{ xx '), ('}'))
}]

db close

# vim: set sts=4 sw=4 tw=80 et ft=tcl:



Here's the output:

{{#} {} # {} { c } {} \\ {} {\\} {} \{\ xx\  {} \} {}} 43 



Notice that the first # element is brace-quoted and the second one is 
not.  This demonstrates TIP 401 compliance (http://tip.tcl.tk/401) in 
recent Tcl.


Before I can call this project finished, I need to update the test suite 
and documentation.  But I didn't want to wait for that to release the 
code I've already written, so here you go.


As a separate project, I would like to improve the Tcl script dispatch. 
For several reasons, safeToUseEvalObjv() doesn't really meet its stated 
goal.  I have a lot to say about this but should wait until another time 
to go into detail.  Let's finish the aggregate function project first.


Another good follow-on project would be adding support for window 
functions.  Before I can take that on, I need to gain some experience 
using, let alone writing, window functions.


--
Andy Goth | 


Re: [sqlite] Custom aggregate functions in Tcl

2019-01-29 Thread Andy Goth
Oh yeah, I meant to say that I was going to leave window functions for
future expansion. First I need to get more familiar with their use.
Yesterday was my first time implementing an aggregate function, and I need
to work my way up.

On Tue, Jan 29, 2019, 07:46 Richard Hipp wrote:
> On 1/29/19, Andy Goth wrote:
> >
> > Question: does xFinal() get called if an error occurs during (or between)
> > calling xStep()? Are errors even possible? I'm curious if there's any way
> > to leak the Tcl_Obj pointed to by the aggregate context.
>
> xFinal() gets called by sqlite3_reset() or sqlite3_finalize() if it
> hasn't been called already when those routines are first invoked.
>
> You might also want to add the ability to specify xValue() and
> xInverse() methods so that you can do window functions in TCL as well.
> See
> https://www.sqlite.org/windowfunctions.html#user_defined_aggregate_window_functions
> for additional information.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


[sqlite] Custom aggregate functions in Tcl

2019-01-28 Thread Andy Goth
I wish to define custom aggregate functions in Tcl, but this capability is
currently not exposed through the Tcl interface. Thus I am thinking about
how best to add it. Here's a first crack at a design proposal:

Extend the [db function] command to accept an -aggregate switch that makes
the new function be an aggregate function. Otherwise, the function defaults
to being a scalar function.

When an aggregate Tcl function is called from SQL, it is invoked in two
phases: step and final. The step phase receives the input data, and the
final phase produces the result.

During step, the function is invoked once for each row (in the group). For
the first row (in the group), the first argument to the function will be
empty string, and subsequent arguments are the SQL values from the row
being processed. For each additional row (in the group), the first argument
is the Tcl value returned by the prior invocation, and subsequent arguments
are as above.

During final, the function is invoked one last time (at the end of each
group). Its sole argument is the return value of the last step invocation,
or empty string if there were no rows. The return value of the Tcl function
is used as the return value of the SQL function.

If there were no rows and GROUP BY was used, the function is not invoked at
all, because there were no groups.

The Tcl function can tell whether it is in the step or final phase by how
many arguments it receives. If it receives multiple, it is in step. If it
receives only one, it is in final.

Depending on how it is written, the Tcl function may be able to tell if
it's being called for the first time (within the group) by checking if its
first argument is empty string. If non-empty, it definitely has been called
before (within the group). If empty, it is most likely on the first row
and, if in step, may need to initialize.
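
For example, a step function needing per-group initialization could follow
that pattern as sketched below.  This assumes the convention described in
this message; the name sum2 and the running numeric total are mine, purely
for illustration:

db function sum2 -deterministic -aggregate {
    apply {{state args} {
        # First row (or deliberately discarded state): initialize the total.
        if {$state eq ""} {
            set state 0
        }
        # Step phase: add each column value.  Final phase: $args is empty,
        # so the accumulated total passes through as the result.
        foreach arg $args {
            set state [expr {$state + $arg}]
        }
        return $state
    }}
}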

However, if the aggregate function wishes to disregard some of its input
data, it may choose to return empty string to discard any state data
arising from the current and prior rows (in the group). This will result in
empty string being the first argument to the next invocation. This mode of
operation is ill-advised but not illegal, though some valid use cases
may exist, provided that ORDER BY is being used.

It is an error to pass an aggregate Tcl function zero arguments.

Here's an example that will return the entire Fossil global_config table as
a key/value dict:

db function list -deterministic -aggregate {
    apply {{state args} {
        concat $state $args
    }}
}

db onecolumn {
SELECT list(name, value)
  FROM global_config
 ORDER BY name
}

As for implementation, I think the existing code can be leveraged to a
great extent. Adapt the existing tclSqlFunc() to be the backend to new
tclSqlFuncScalar(), tclSqlFuncStep(), and tclSqlFuncFinal() functions, and
adjust the DB_FUNCTION code to recognize -aggregate and pass different
function pointers accordingly. Use sqlite3_aggregate_context() to hold the
Tcl_Obj * resulting from each invocation, then let it be the first argument
to the next invocation.

I would be happy to implement this myself, since this is functionality I
will be needing soon. Once I'm satisfied with it and have test suite
updates, what's the recommended method for sharing my patch?

Question: does xFinal() get called if an error occurs during (or between)
calling xStep()? Are errors even possible? I'm curious if there's any way
to leak the Tcl_Obj pointed to by the aggregate context.


Re: [sqlite] Ideas or Guide on SQLite Search Engine for a relational database?

2019-01-20 Thread Andy Goth
You can try reading the Fossil source code to see how it handles full-text
searching across multiple tables.

https://fossil-scm.org/index.html/artifact?fn=src/search.c=trunk

On Sat, Jan 19, 2019, 06:10 Scott wrote:
> I apologize, I sent this from a different email than I registered
> accidentally.
> Hi Everyone!
> The Ask and what I’ve done:
> I'm building a research database with an embedded SQLite DB using Intellij
> for development. I've created the data entry point for the users; however,
> I'm looking for anything anyone might want to share or suggest as a code,
> link, or design, whatever it may be, on developing a search engine for a
> relational database. I’ve been reading through tutorials on
> SQLiteTutorials.net to learn a little more about indexing and searching.
> I’ve come across the FTS5 (“Full Text Search”) tutorial, but I’m having
> difficulty finding anything that may assist with a multi-table search. I
> guess it would nice to have an example to follow. I wouldn’t think this
> would be the first-time for this, but I’m a little naïve too.
>
> I'm not asking for someone to solve the problem but just provide a
> direction that may save a lot of research time. I don’t know if my images
> will post or be available through this email, but I've provided some images
> below of the GUI and the SQLite DB which has 8 tables all linking back to
> the primary table called "Source". This is a new task for me and my
> development skills in Java.
>
> The Goal:
> This Research DB users want to be able to search the database either by
> Source, Topic, Question, Comments, Quotes, Authors, which most are a one to
> many with one a many to many relationship. A single source, for example a
> book entered in the database, could be linked to multiple topics,
> questions, Comments, Quotes, etc. So the users may want to search each of
> these separately on a particular subject, or possibly all the tables for a
> particular subject. Topic will help the mostly, but a topic may be
> addressed in a comment or quote, etc, that may not be associated directly
> with a particular topic. So there is a need to search a number of fields
> over 8 tables. My thoughts are that the search criteria should probably
> return into a table list, then when the user selects a particular row, this
> data would populate into corresponding fields for readability similar to
> the "Entry" tab (see below).
>
> The only thing I suspect may make this easier than expected is that no
> matter whether the user searches by Topic, or Question, or Comment, etc,
> the search will always join all the tables and return all the fields for
> anything linked back to a source. However, the source will be associated to
> many of each of those.
>
> Thanks,
>
> Scott
>
>
>
>
>
>


Re: [sqlite] Lazy JSON data to sqlite database

2019-01-19 Thread Andy Goth
If you have an SQL schema that works for you and also sample data, I might
be able to assist writing the conversion program. Dunno if you're
interested since it sounds like you won't need to do this again for another
year.

On Sat, Jan 19, 2019, 23:17 David Bicking wrote:
> >> Is there a tool out there that will more or less automate the task for
> >> me? Hopefully free, as no one is paying me to do this. (The other
> >> volunteers have maybe a dozen records in total and are doing their
> >> reports by hand. )
>
> >The automation is at a lower level than you seem to realize.  JSON I/O is
> a solved problem, but actually >doing anything with that data, such as
> transforming it into a relational database form, is up to you.
>
> I thought I got lucky. I found a web site called SQLify, that lets you
> upload a JSON file, and it will examine it, figure out a schema, and
> create CREATE TABLE and INSERT lines with the data. Kind of neat.
> Unfortunately, it flattened the data in to one table, which didn't matter
> for the venue and group data, but it flattened the  hosts data by only
> using the first one and discarding the rest.
> >> A cursory look at the data: there is a top level "event" table.  "Group"
> >> and "venue" tables that are one-to-one with the event table, and a
> >> "hosts" tables with many hosts to the event table.
> >
> >JSON doesn't have tables, nor does JSON pre-declare a schema.  JSON
> intermixes the schema with the >data, and the schema is allowed to vary
> over the course of the document.
>
> Yeah, sloppy writing on my part. JSON has objects, which can be stored in
> SQL tables.
>
> >> Oh, to complicate things, the source data is not valid JSON, as the>>
> upstream source did not escape quote marks within the text.  So is there
> >> a tool that can clean up the quotes, hopefully there won't be many bad
> >> quotes... maybe three or four in the 600 record.>
> >Then it's not really JSON and can't be processed with true-blue JSON
> tools.  Everything you're saying is >calling for a custom tool.  Custom
> tools do not have to be hard or expensive though.
> >
> >But in the case of quotes not being escaped, that all by itself could be
> difficult to automate, since heuristics >will need to be applied to figure
> out whether any given quote mark is a string delimiter.  Human
> >intervention might be required.
>
> I kind of solved this by running the data through a parser. I would then
> take the error message, and find the text in the source file, and escape
> the quotes. Then parse again and fix the next error. Tedious but that works.
> David
>
>
>
>


Re: [sqlite] SQLite 3.24.0 Solaris 9 build failure

2019-01-19 Thread Andy Goth
Dennis Clarke wrote:
> On 2018-07-28 08:33, Andy Goth wrote:
>> SQLite 3.24.0 fails to build on Solaris 9 (a.k.a. Solaris 2.9)

> It may be [worth] while to spin up a Solaris 9 zone on a Solaris 10 or
> Solaris 11 server for this purpose.

I don't have access to any Solaris servers of any kind. And yet, I had the
requirement to produce a working binary for a computer I wasn't even
allowed to go visit. It was rough, but the task is done and sold off.

By the way, I've come to find out that the system in question is actually
Solaris 8, but I have to call it 9 because GCC doesn't support 8. What a
mess. I'm glad it's behind me.

> Not sure how you are getting a cross compile to work at all with just
> /usr/include on hand. Are you using the Sun Studio compilers for this ?

I'm using GCC.

When I said I had only /usr/include, I oversimplified. I have the following:

/lib (symlink to usr/lib)
/usr/include
/usr/lib
/usr/ccs/lib
/usr/local/include
/usr/local/lib
/usr/openwin/include
/usr/openwin/lib
/usr/dt/include
/usr/dt/lib

My notes say I have the next two directories as well, but my notes are
wrong:

/usr/X11/include
/usr/X11/lib

The above is all stored in a file called pkg/solaris-sysroot.tar.

Here's how I built my cross compiler:

TARGET=sparc-sun-solaris2.9 &&
tar xf pkg/binutils-2.31.tar.xz &&
tar xf pkg/gcc-4.9.4.tar.bz2 &&
sudo mkdir -p /opt/cross/sysroot/$TARGET &&
sudo tar -xf pkg/solaris-sysroot.tar -C /opt/cross/sysroot/$TARGET &&
mkdir build-binutils build-gcc &&
cd build-binutils &&
../binutils-2.31/configure -v --target=$TARGET --prefix=/opt/cross \
--with-sysroot=/opt/cross/sysroot/$TARGET &&
make -j2 &&
sudo make install &&
cd ../build-gcc &&
../gcc-4.9.4/configure -v --target=$TARGET --prefix=/opt/cross \
--with-sysroot=/opt/cross/sysroot/$TARGET \
--with-gnu-as --with-gnu-ld \
--disable-libgcj \
--enable-languages=c,c++ --enable-obsolete &&
make -j2 &&
sudo make install &&
cd .. &&
rm -rf build-binutils build-gcc

I then built SQLite in the context of building a Tclkit basekit, and I did
that using the KitCreator script. Here are the commands relevant to SQLite:

[...]
rm -rf tcl/buildsrc/tcl8.6.9/pkgs/sqlite3.25.3 &&
tar xf ../pkg/sqlite-autoconf-326.tar.gz \
sqlite-autoconf-326/{tea,sqlite3.{c,h}} &&
patch -d sqlite-autoconf-326 -p1 < ../pkg/sqlite-sunos.diff &&
mv sqlite-autoconf-326/tea tcl/buildsrc/tcl8.6.9/pkgs/sqlite3.26.0 &&
mv sqlite-autoconf-326/sqlite3.[ch] \
tcl/buildsrc/tcl8.6.9/pkgs/sqlite3.26.0/generic &&
rm -rf sqlite-autoconf-326 &&
[...]

export CFLAGS=-Os CXXFLAGS=-Os

CFLAGS=-O0 CXXFLAGS=-O0 KITCREATOR_PKGS=mk4tcl ./kitcreator &&
mv tclkit-8.6.9 tclkit-local &&
export TCLKIT=$PWD/tclkit-local

TARGET=sparc-sun-solaris2.9 &&
CC=/opt/cross/bin/$TARGET-gcc \
CXX=/opt/cross/bin/$TARGET-g++ \
AR=/opt/cross/bin/$TARGET-ar \
RANLIB=/opt/cross/bin/$TARGET-ranlib \
STRIP=/opt/cross/bin/$TARGET-strip \
KITCREATOR_PKGS="itcl mk4tcl tdom tnc" \
./kitcreator --host=$TARGET &&
mv tclkit-8.6.9 ../tclkit-sunos

The file "sqlite-sunos.diff" is the patch I outlined in my first email:

https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg111233.html


[sqlite] SQLite 3.24.0 Solaris 9 build failure

2018-07-27 Thread Andy Goth
SQLite 3.24.0 fails to build on Solaris 9 (a.k.a. Solaris 2.9) due to 
not finding fchmod, fchown, readlink, lstat, usleep, struct timeval, and 
gettimeofday.


To correct, do not #define _XOPEN_SOURCE.  There's already a check for 
Mac OS X, so I would suggest extending the check to also exclude Solaris 
9 with something like the following:


#if !defined(_XOPEN_SOURCE) && !defined(__DARWIN__) && !defined(__APPLE__) && \
    !(defined(__sun) && defined(__SVR4))
#  define _XOPEN_SOURCE 600
#endif

This check isn't version-specific.  There doesn't appear to be a 
guaranteed macro for that purpose.  Sun Studio offers macros like 
_SunOS_5_9 (meaning Solaris 9), but gcc does not.  Though it's a bit 
silly for me to obsess over versions since I don't know exactly which 
versions of Solaris hide the relevant functions and structs if 
_XOPEN_SOURCE is defined.  I only have access to Solaris 9, and by 
"access" I mean I have a copy of /usr/include and such, not a computer I 
can log in to.  Just enough to do a cross-compile, which succeeds with 
the above change.


More investigation is needed to figure out how to make SQLite build for 
Solaris 9 without breaking other Solaris/SunOS platforms.


--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-07-19 Thread Andy Goth

On 07/14/18 20:09, E.Pasma wrote:

The new example is clear, and therefore also raises a question:


CREATE TABLE people (name, age);
INSERT INTO people VALUES ('Bob', 33), ('Jen', 19), ('Liz', 30);

CREATE VIEW older PARAMETERS (name, otherName) AS
SELECT t1.age > t2.age AS older
  FROM people AS t1 WHERE t1.name = parameters.name
 , people AS t2 WHERE t2.name = parameters.otherName;

SELECT t1.name AS name
 , t2.name AS otherName
 , older(t1.name, t2.name)
  FROM people AS t1
 , people AS t2
WHERE t1.name != t2.name;

name  otherName  older
----  ---------  -----
Bob   Jen        1
Bob   Liz        1
Jen   Bob        0
Jen   Liz        0
Liz   Bob        0
Liz   Jen        1


The function "older" is used here in the SELECT part. But as a
table-valued function I think it must be in the FROM part:

SELECT t1.name AS name, t2.name AS otherName, older --column name
  FROM people AS t1, people AS t2, older(t1.name, t2.name)
WHERE t1.name != t2.name;

My question is: do you envisage a regular function, returning a single
value, or a table-valued function. (that behaves as a table, even the
parameters may be passed as predicates in the WHERE clause)?


Either!  If it returns a single row and a single column, let it be a
regular function.  Or if it returns a single row, let it be a row value.
Or if it returns multiple rows (or one or even zero, of course), let it be
a table.
I think the usage should dictate; no need to declare in advance.



If the subject is "defining regular functions directly in SQL" then
CREATE FUNCTION may be considered. This seems easier to understand and
develop, but in the end it can have an enormous impact, like in
postgres sql-createfunction.html
<https://www.postgresql.org/docs/9.1/static/sql-createfunction.html>


I'm trying to minimize the new syntax in my proposal.  CREATE FUNCTION 
seems too large for SQLite as it stands, though perhaps a small subset 
could be adopted.  I just saw an opportunity to get much of the same
capability with a small addition to existing syntax.



For the sudoku-solver you do use the table-valued function notation:

SELECT s FROM x(sud) WHERE NOT ind;

and the function should possibly return multiple rows here. Like if a
sudoku is ambiguous or if changing the WHERE clause for development.


Right.  I don't see why it can't go both ways.


The idea would help me. I wonder if it can also be applied in inline
views.


"Inline view" is a phrase I'm not familiar with.  Your example shows a 
nested query, augmented with PARAMETERS.  I suppose it could be called 
an inline view, though.



That could be imagined if PARAMETERS is written after the query
like was an earlier idea.


Right, that's what I first suggested, associating PARAMETERS with SELECT 
rather than with CREATE VIEW or WITH common-table-expression.  So 
perhaps that is a more powerful way to do things after all.



SELECT name, nolder
FROM people
LEFT JOIN  (--inline view
   SELECT COUNT(*)
   FROM people p2
   WHERE age>parameter.age
   PARAMETERS (age)
 ) USING (age)
;


Huh!  That's neat.  For reference, here's how to do the same right now:

CREATE TABLE people (name, age);
INSERT INTO people VALUES ('Bob', 33), ('Jen', 19), ('Liz', 30);

   SELECT people.name AS name
        , count(other.name) AS nolder
     FROM people
LEFT JOIN people AS other ON (other.age > people.age)
 GROUP BY people.name;

name  nolder
----  ------
Bob   0
Jen   2
Liz   1

--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-07-13 Thread Andy Goth
 only exist in the view specification.

CREATE VIEW double (arg, result) AS
SELECT PARAMETERS.arg, PARAMETERS.arg * 2;
SELECT * FROM numbers, double ON arg=x;


Hmm, that's something I've not considered before: identifying parameters 
as such everywhere they are used, rather than only in some declaration.


However, even though this works as a function, its usage won't resemble 
other table-valued functions.  Arguments aren't explicitly "passed" to 
the function whereupon they are bound to parameters.  Rather, this 
binding is implied by automatically matching up names from elsewhere.


I fear the precise semantics might be too hard to define and understand. 
 An interface resembling more traditional function calls would be 
easier to deal with and probably just as powerful in the end.



I want to make a view of the sudoku solver

https://www.sqlite.org/lang_with.html#sudoku

That becomes like below. I have in mind to combine it with a further
view to format the input or output.

Thanks, E.Pasma

CREATE VIEW sudsol(sud,sol) AS
WITH RECURSIVE
   digits(z, lp) AS (
 VALUES('1', 1)
 UNION ALL SELECT
 CAST(lp+1 AS TEXT), lp+1 FROM digits WHERE lp<9
   ),
   x(s, ind) AS (
 SELECT PARAMETERS.sud, instr(PARAMETERS.sud, '.')
 UNION ALL
 SELECT
   substr(s, 1, ind-1) || z || substr(s, ind+1),
   instr( substr(s, 1, ind-1) || z || substr(s, ind+1), '.' )
  FROM x, digits AS z
 WHERE ind>0
   AND NOT EXISTS (
 SELECT 1
   FROM digits AS lp
  WHERE z.z = substr(s, ((ind-1)/9)*9 + lp, 1)
 OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
 OR z.z = substr(s, (((ind-1)/3) % 3) * 3
 + ((ind-1)/27) * 27 + lp
 + ((lp-1) / 3) * 6, 1)
  )
   )
SELECT PARAMETERS.sud, s FROM x WHERE ind=0;


How about this?

CREATE VIEW sudsol PARAMETERS (sud) AS
--
  WITH digits (z, lp) AS (
VALUES ('1', 1)
 UNION ALL
SELECT CAST (lp+1 AS TEXT)
 , lp+1
  FROM digits
 WHERE lp < 9)
--
 , x (s, ind) PARAMETERS (sud) AS (
SELECT sud
 , instr(sud, '.')
 UNION ALL
SELECT substr(s, 1, ind-1) || z || substr(s, ind+1)
 , instr(substr(s, 1, ind-1) || z || substr(s, ind+1), '.')
  FROM x(sud)
 , digits AS z
 WHERE ind
   AND NOT EXISTS (SELECT 1
 FROM digits AS lp
where z.z = substr(s, (ind-1)/9*9 + lp, 1)
   or z.z = substr(s, (ind-1)%9 + (lp-1)*9 + 1, 1)
   or z.z = substr(s, (ind-1)/3%3*3
+ (ind-1)/27*27 + lp
+ (lp-1)/3*6, 1)))
--
SELECT s FROM x(sud) WHERE NOT ind;

SELECT sudsol(
'53.'||'.7.'||'...'
 || '6..'||'195'||'...'
 || '.98'||'...'||'.6.'

 || '8..'||'.6.'||'..3'
 || '4..'||'8.3'||'..1'
 || '7..'||'.2.'||'..6'

 || '.6.'||'...'||'28.'
 || '...'||'419'||'..5'
 || '...'||'.8.'||'.79');

Rather than empower x to directly see sudsol's parameter, I declare both 
sudsol and x to be table-valued functions, then I pass the argument 
(terminology: parameter value) to each invocation of x.  This is more 
explicit and requires less magic on the part of SQLite to figure out 
where everything is coming from when faced with nested functions.


Also, witness my lame attempt to format the input. :^)

By the way, I'm unclear why it's okay to simply say "lp" in the above 
indexing expressions, rather than "lp.lp" which is what I believe is meant.


--
Andy Goth | 


[sqlite] CASE and NULL

2018-07-04 Thread Andy Goth
I'd like to use CASE to compare an expression x against a number of 
candidate values.  That's the typical use for "CASE x WHEN", which 
avoids repeating x for each condition.


The trouble is that one of the possible values is NULL, yet the 
comparison against each candidate value is done with the = operator. 
The expression "x = NULL" is meaningless since it will always evaluate 
to NULL, which CASE interprets as untruth, hence "WHEN NULL THEN" will 
never accomplish anything.


The workaround is to not use "CASE x WHEN" and instead use "CASE WHEN" 
and repeat x every time, using = for all non-NULL values and IS for 
NULL.  But this means repeating x for each condition, which is the whole 
reason I'd prefer "CASE x WHEN".
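
Spelled out with the same candidate values as the examples below, that
workaround reads:

CASE
WHEN x = 1 THEN 11
WHEN x = 2 THEN 22
WHEN x = 3 THEN 33
WHEN x = 4 THEN 44
WHEN x IS NULL THEN 55
ELSE 66
END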


A compromise is to do both, as follows:

CASE x
WHEN 1 THEN 11
WHEN 2 THEN 22
WHEN 3 THEN 33
WHEN 4 THEN 44
ELSE CASE
WHEN x IS NULL THEN 55
ELSE 66
END END

Or the other way around, so that both instances of x are near each other:

CASE WHEN x IS NULL THEN 55
ELSE CASE x
WHEN 1 THEN 11
WHEN 2 THEN 22
WHEN 3 THEN 33
WHEN 4 THEN 44
ELSE 66
END END

If the CASE statement enumerates all possible values of x (whose range 
is perhaps guaranteed with a CHECK or FOREIGN KEY constraint), then the 
ELSE case can stand in for the NULL comparison.  But otherwise, the ELSE 
case will unavoidably collect not only NULL but any other unhandled values.


So I'm wondering: can we do better?  I wouldn't want to risk changing 
the meaning of any existing queries, but it is generally possible to 
extend from the error space: take something that's currently a syntax 
error and give it meaning.  How about the following?


CASE x
WHEN 1 THEN 11
WHEN 2 THEN 22
WHEN 3 THEN 33
WHEN 4 THEN 44
WHEN IS NULL THEN 55
ELSE 66
END

Or equivalently, "ISNULL" instead of "IS NULL".

This treatment could also be applied to numerous other operators that 
take an expression as their left-hand side and produce a truth result:


?NOT? LIKE|GLOB|REGEXP|MATCH expr ?ESCAPE expr?
NOTNULL
NOT NULL
< <= > >= != <>
= == (for completeness, even though it's implied)
?NOT? BETWEEN expr AND expr
?NOT? IN list-generation-expression

Giving us syntax such as:

CREATE TABLE fruits (name); [... veggies ... meats ...]
CASE food
WHEN IN fruits THEN 'fruit'
WHEN IN veggies THEN 'veggie'
WHEN IN meats THEN 'meat'
WHEN ISNULL THEN 'unspecified'
ELSE 'candy' END

CASE filename
WHEN GLOB '.*' THEN 'hidden'
WHEN GLOB '*.png' THEN 'image'
WHEN GLOB '*.html' THEN 'webpage'
WHEN REGEXP '\.docx?$' THEN 'MS-Word'
ELSE 'data' END

CASE age
WHEN < 1 THEN 'baby'
WHEN < 3 THEN 'toddler'
WHEN < 5 THEN 'preschooler'
WHEN < 12 THEN 'gradeschooler'
WHEN < 18 THEN 'teenager'
WHEN < 21 THEN 'young adult'
ELSE 'adult' END

CASE hour
WHEN BETWEEN  6   AND  6.5 THEN 'wake'
WHEN BETWEEN  7   AND  7.5 THEN 'breakfast'
WHEN BETWEEN  8   AND  8.5 THEN 'commute'
WHEN BETWEEN 11.5 AND 12.5 THEN 'lunch'
WHEN BETWEEN  9   AND 17   THEN 'work'
WHEN BETWEEN 17.5 AND 18   THEN 'commute'
WHEN BETWEEN 19   AND 19.5 THEN 'dinner'
WHEN BETWEEN 22   AND 24   THEN 'sleep'
WHEN BETWEEN  0   AND  6   THEN 'sleep'
END

To make the above examples more compelling, replace food, filename, age, 
and hour with complex expressions such as nested queries.


The next evolution in bloat is to also support AND, OR, NOT, and 
parentheses, allowing the LHS operand of any operator in a complex 
expression to be omitted, defaulting to CASE's first argument.  In the 
last example above this would allow the two "sleep" cases (or the 
"commute" cases) to be combined with OR: "WHEN BETWEEN 22 AND 24 OR 
BETWEEN 0 and 6 THEN 'sleep'".  But I imagine this would complicate the 
parser far beyond any practical benefit.


--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-07-04 Thread Andy Goth

Revisiting this topic...

On 06/10/18 08:04, sql...@zzo38computer.org wrote:

* Perhaps move PARAMETERS before AS, which may make the syntax easier.


Like so?

CREATE VIEW double PARAMETERS (arg) AS SELECT arg * 2;
SELECT * FROM numbers, double(x);

This is a readability improvement because universally I see function
names and parameters defined before function bodies.  Moving the
PARAMETERS to the left of AS also represents a conceptual shift from
PARAMETERS modifying SELECT (which is what I had in mind) to PARAMETERS
modifying CREATE VIEW (which could well be a better way to look at it).

Now, let's examine the common table expression variant.  I'll repeat the
baseline syntax proposal from my original post:

WITH double AS (SELECT arg * 2 PARAMETERS (arg))
SELECT * FROM numbers, double(x);

Would your suggestion be the following?

WITH double PARAMETERS (arg) AS (SELECT arg * 2)
SELECT * FROM numbers, double(x);

Next, what is the interaction with an explicit column-name list?  Does
the PARAMETERS clause come before or after that?  Compare:

CREATE VIEW double (result) PARAMETERS (arg) AS SELECT arg * 2;
SELECT * FROM numbers, double(x);
WITH double (result) PARAMETERS (arg) AS (SELECT arg * 2)
SELECT * FROM numbers, double(x);

Versus:

CREATE VIEW double PARAMETERS (arg) (result) AS SELECT arg * 2;
SELECT * FROM numbers, double(x);
WITH double PARAMETERS (arg) (result) AS (SELECT arg * 2)
SELECT * FROM numbers, double(x);

I don't think there's any question the former is superior, but I bring
this up for two reasons.  One, every syntax question needs an answer.
Two, I wish to highlight the fact that the (existing) syntax for
column-name list looks very much like what most languages use for a
parameter list, so there's potential confusion, hence the need for the
PARAMETERS token.


* I do agree that defining table-valued functions in these way can be
useful though; I have wanted to define views that take parameters
before, and was unable to.


I would love to be able to create functions without writing extensions
in C, plus this way functions won't require recursive invocation of
SQLite and won't have to be a barrier to the SQLite optimizer.  Rather,
native functions would be inlined right into the bytecodes of whichever
queries use them.


* Another (separate) idea can be "CREATE FUNCTION name(args) AS
select_stmt;" to define your own function.  [...] Both of these are
separate from table-valued functions (parameterized views) though.


Aside from the syntax, is this really a separate idea?  What does this
do that views can't?


If you write "CREATE AGGREGATE FUNCTION" then the function name can be
used as a table name within the select_stmt.


I don't think I understand the part about letting the function name be
used as a table name.  Parameters would already be bound, so there's no
need for a FROM clause to get at them.  That's central to the concept of
parameters as explored by this email thread; I'm hunting for a practical
use for situations that would currently give a "no such column" error.

However, you bring up an interesting question, though it's a potential
issue whether or not the function is an aggregate function.  What if a
parameter name happens to match a column name in one (or more) of the
tables being pulled in by a FROM clause?  A table name qualifier is
needed to disambiguate.  It could be the function name, though it could
be clearer to do like upsert ("excluded") and have a special token, e.g.
"parameters".  (cf. https://sqlite.org/lang_UPSERT.html)

The above applies to parameterized views and common table expressions as
well, so replace "function name" with "view name" or "common table
expression name".

As for having to declare a function as an aggregate, I think that could
be inferred from the fact that the function uses aggregate functions on
its parameters.  Maybe it would have to also not use GROUP BY on those
parameters, not sure there.  What makes an aggregate function, anyway?
That it always maps multiple input rows to a single output row?  Then does
that mean having LIMIT 1 would also make it an aggregate function?  But
before we get bogged down in semantics, I ask whether or not this
distinction even matters.

--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-06-09 Thread Andy Goth
On 6/9/2018 8:31 PM, Simon Slavin wrote:
> On 10 Jun 2018, at 2:18am, Andy Goth  wrote:
>> Skip computed columns in the value list?  If two tables have the same
>> schema, this should duplicate one into the other, but apparently not:
>>
>> INSERT INTO table2 SELECT * from table1;
> 
> This syntax, when found with a computed column, would be something
> that the SQLite engine would have to notice and act correctly.  The
> same in the case of CREATE TABLE ... SELECT ... .
> 
> There are a number of other niggles.  For instance, creating and
> updating an index which includes a VIRTUAL calculated column could be
> complicated and time-consuming.  It may be that for a computed column
> to appear in an index it must be STORED.  On the other hand, the way
> SQLite works internally might make that unnecessary.

I think it's clear why SQLite doesn't already have this feature.  The
required underlying capability is already available.  The remaining bits
are a convenience to the end user at the cost of significant added
complexity inside the engine.  This is probably not going to be added
until SQLite's primary developers themselves decide they need it, or
until someone else decides it's important enough to them to contribute
code and/or funding.

SQLite doesn't seek to implement all things SQL, just the parts that
have proven to be a workhorse.  There's been a gradual shift as more
development hours have been invested, but the "Lite" still means
something.

https://sqlite.org/omitted.html

I've watched this list get shorter over the years, but the items that
remain are increasingly likely to become permanent fixtures.

And hey, that list was a welcome reminder that views can have triggers
which can translate back to real DELETE, INSERT, and UPDATE operations
on the underlying table.  This might be what you want.
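
For instance, borrowing the tempLog/tempLogView example from elsewhere in
this thread, an INSTEAD OF trigger could accept INSERTs on the view and
write only the real columns (a sketch; the trigger name is made up, and
centTemp is derived from fahrTemp when only the latter is supplied):

CREATE TRIGGER tempLogViewInsert INSTEAD OF INSERT ON tempLogView
BEGIN
    INSERT INTO tempLog (datestamp, centTemp)
    VALUES (NEW.datestamp,
            coalesce(NEW.centTemp, (NEW.fahrTemp - 32) * 5 / 9));
END;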

SQLite's embeddability and ease of integration (particularly with Tcl)
makes it very easy to wrap, so you can give yourself commands that
generate and access tables using whatever syntax works best for you.
You can write code that intercepts and translates the computed column
syntax of your choice to automatically generated fancy views with
triggers.  Once you get the semantics nailed down and others find your
work useful, that effort can serve as a reference implementation for
actually adding the capability to SQLite itself.

-- 
Andy Goth | 





Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-06-09 Thread Andy Goth

On 06/09/18 20:10, Simon Slavin wrote:

On 10 Jun 2018, at 2:00am, Andy Goth  wrote:

CREATE TABLE tempLog (
   datestamp TEXT COLLATE NOCASE PRIMARY KEY
, centTemp  REAL);
CREATE VIEW tempLogView AS
 SELECT *
  , centTemp * 9 / 5 + 32 AS fahrTemp
   FROM tempLog;


Yes.  That is an elegant and compact way to do it with SQLite as it is
now.  But it's not as satisfying.  And it doesn't allow indexing by
the computed column.  You have to define the calculation in the index
separately.


However, views make behavior of INSERT and UPDATE clear, since they can
only operate on the real table.  INSERT or UPDATE become murky in the
presence of computed columns.  I suppose the only sane thing to do
is forbid directly setting the value of a computed column, though what
would the syntax be?  Skip computed columns in the value list?  If two
tables have the same schema, this should duplicate one into the other,
but apparently not:

INSERT INTO table2 SELECT * from table1;
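
The explicit-column-list form would presumably still work, since computed
columns can simply be omitted from both sides (a sketch; a and b stand for
whatever the real, non-computed columns are):

INSERT INTO table2 (a, b) SELECT a, b FROM table1;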

--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-06-09 Thread Andy Goth

On 06/09/18 18:04, Simon Slavin wrote:

CREATE TABLE tempLog (
datestamp TEXT COLLATE NOCASE PRIMARY KEY,
centTemp REAL,
fahrTemp AS (centTemp*9/5 + 32) )

I'm happy with another syntax as long as it does the same thing.


CREATE TABLE tempLog (
   datestamp TEXT COLLATE NOCASE PRIMARY KEY
 , centTemp  REAL);
CREATE VIEW tempLogView AS
 SELECT *
  , centTemp * 9 / 5 + 32 AS fahrTemp
   FROM tempLog;


Niggle 1: Can a computed column refer to a column defined after it ?


With the view syntax I showed above, "computed" columns can only refer
to columns that exist in the underlying tables.  I wish SELECT statement
expressions could refer not only to input columns but also output
columns that have been named using AS, but we don't have this feature.
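
The usual workaround is another layer of SELECT: name the derived column in
an inner query so the outer query can refer to it by its alias (fahrDelta is
just an example name):

SELECT fahrTemp
     , fahrTemp - 32 AS fahrDelta
  FROM (SELECT centTemp * 9 / 5 + 32 AS fahrTemp FROM tempLog);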

--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-06-09 Thread Andy Goth

On 06/09/18 19:46, David Burgess wrote:

There are some functions which are banned.

Are extension functions permitted?


https://sqlite.org/c3ref/create_function.html


And how does an extension function author indicate that the function
is deterministic?


https://sqlite.org/c3ref/c_deterministic.html

--
Andy Goth | 


Re: [sqlite] Idea: defining table-valued functions directly in SQL

2018-06-09 Thread Andy Goth
Notice that the view isn't bound to the numbers table or its x column 
until it's actually used in the SELECT query.  [...]  This might also

make it possible for a table-valued function to recursively invoke
itself, provided that it has a basis case to avoid infinite descent.


We already have recursive invocation with common table expressions.
Nevertheless, here's another way to look at it:

CREATE VIEW range AS
 SELECT start
  WHERE CASE WHEN step > 0 THEN start <= stop ELSE start >= stop END
  UNION ALL range(start + step, stop, step)
 PARAMETERS (start, stop, step DEFAULT 1 CHECK (step != 0));

Here I say "range(...)" as shorthand for "SELECT * FROM range(...)", by
analogy with IN and table-valued functions.  With the above, once start
exceeds stop, the engine would have to optimize out the infinite
sequence of UNION ALL against guaranteed-empty results.
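
For comparison with the hypothetical range view above, here is the kind of
recursive common table expression we can already write today.  The endpoints
are baked in (1 through 5, step 1) rather than passed as arguments, which is
exactly the limitation the proposal removes:

WITH RECURSIVE range (start) AS (
    VALUES (1)
     UNION ALL
    SELECT start + 1 FROM range WHERE start + 1 <= 5
)
SELECT start FROM range;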

It would make sense for table-valued functions to be usable as
expressions, provided that they return exactly one row and one column.

CREATE VIEW double AS SELECT arg * 2 PARAMETERS (arg);
SELECT x, double(x) FROM range(1, 10);

How about table-valued functions as row values, provided they return
exactly one row?

CREATE TABLE foo (a, b, c);
CREATE VIEW multiples AS SELECT x, x * 2, x * 3 PARAMETERS (x);
UPDATE foo SET (a, b, c) = multiples(5) WHERE a = 0;

Table-valued functions are already permitted to be the right-hand of an
IN operator, so these new table-valued functions should be no different,
provided they return exactly one column.

CREATE VIEW multiples AS
 VALUES (x)
  UNION ALL
 VALUES (x * 2)
  UNION ALL
 VALUES (x * 3)
 PARAMETERS (x);
SELECT * FROM foo WHERE (a, b, c) IN multiples(a);

--
Andy Goth | 


[sqlite] Idea: defining table-valued functions directly in SQL

2018-06-09 Thread Andy Goth
Table-valued functions can only be defined within compiled application 
code by means of virtual tables.  I suggest extending SQLite views and 
common table expressions to be the vehicle for defining table-valued 
functions directly in SQL.


A view or common table expression that references nonexistent columns is 
very similar to a table-valued function, if only there were a way to 
bind the nonexistent columns to function arguments.  I thought of 
several ways to go about doing this, which I'll list below.  First, 
here's the current syntax for which I will be providing alternatives:


CREATE TABLE numbers (x);
INSERT INTO numbers VALUES (1), (2), (3);
CREATE VIEW double AS SELECT x * 2 FROM numbers;
SELECT * FROM numbers, double;

My favored approach is to let the SELECT clause of the view or common 
table expression be followed by a new "PARAMETERS (param1, ..., paramN)" 
clause, where "param1" through "paramN" are the parameter names, listed 
in the order their values will be supplied in the table-valued function 
argument list.


CREATE VIEW double AS SELECT arg * 2 PARAMETERS (arg);
SELECT * FROM numbers, double(x);

Notice that the view isn't bound to the numbers table or its x column 
until it's actually used in the SELECT query.  This makes it possible to 
use a single view with many data sources, plus decoupling the operations 
performed in the view from the data source can make the code clearer by 
not forcing the reader to digest both at the same time.  This might also 
make it possible for a table-valued function to recursively invoke 
itself, provided that it has a basis case to avoid infinite descent.


A second approach is to let table-valued function invocation supply 
explicit parameter names for each argument:


CREATE VIEW double AS SELECT arg * 2;
SELECT * FROM numbers, double(x AS arg);

This has the advantage of letting the arguments appear in any order, but 
the extra verbosity may be unwelcome.  This approach can be combined 
with the previous approach to allow optional reordering, plus it could 
be used with regular virtual table-valued functions as well.


I'm not sure this is actually a useful feature though, unless someone 
has a table-valued function with a large number of parameters.  However, it
does suggest the possibility of a new table-valued function being able 
to supply default values for unbound parameters:


CREATE VIEW scale AS SELECT value * scalar
 PARAMETERS (value, scalar DEFAULT 2)
SELECT * FROM numbers, scale(x);
SELECT * FROM numbers, scale(x, 3);
SELECT * FROM numbers, scale(3 AS scalar, x);
SELECT * FROM numbers, scale(3 AS scalar, x AS value);

We can get arbitrarily fancy by permitting more column constraints 
within the PARAMETERS clause.  I don't know if this is a good thing.


Here is a third approach which doesn't involve any syntactic changes. 
Create a function "arg(n)" that returns the value of the nth argument.


CREATE VIEW double AS SELECT arg(1) * 2;
SELECT * FROM numbers, double(x);

I will list one more approach for the sake of completeness, but I don't 
recommend it.  Mirror virtual table-valued functions by having the 
parameters be defined as HIDDEN columns:


CREATE VIEW double AS SELECT arg * 2, arg HIDDEN;
SELECT * FROM numbers, double(x);

I find this to be confusing, though a slight improvement would be to say 
PARAMETER instead of HIDDEN.  Either way, there is a compatibility 
problem because current behavior is to treat HIDDEN or PARAMETER as a 
column alias.  To resolve the ambiguity, it would be necessary to 
recognize HIDDEN or PARAMETER as a keyword only when the expression is a 
simple term (no math) that is otherwise an unbound column.  Messy.


Throughout this email I've tried to distinguish between parameters and 
arguments, but it's a really fine point.  Parameters are names whereas 
arguments are values.  I'm not sure this distinction is important, but I 
went with it anyway.


I used views for my syntax examples, but I'd also like to see this work 
for common table expressions.


WITH double AS (SELECT arg * 2 PARAMETERS (arg))
SELECT * FROM numbers, double(x);

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite Tcl copy command

2016-09-19 Thread Andy Goth
I'm trying to import data from CSV files into an in-memory database.
These CSV files contain quoted delimiters (a,"b,c",d is three columns),
plus they contain a header row.  These two issues are not handled by the
SQLite Tcl extension's copy command.  The SQLite shell's .import command
used to have the same limitations but was improved, yet the SQLite Tcl
extension copy command has not kept pace.

Is there any interest in updating the SQLite Tcl extension copy command
to match the capability of the SQLite shell's .import command?

It makes sense to me, but the details are challenging.  Adding CSV
support to the shell's .import command meant adding a nontrivial amount
of code, and I don't like code duplication, so I'd prefer that most of
the underlying implementation be shared.  Yet it's inappropriate for CSV
parsing to be part of the SQLite core, and I don't think there's a
common shell library at this point, so either one would have to be made,
or CSV would have to be spun off to be its own object file, linked into
both the shell and the Tcl extension.

Yet as complicated as all that sounds, there's more.  The shell's
.import command needs to directly call stdio functions like fopen(), but
a modern implementation of the Tcl extension copy command should use the
Tcl I/O subsystem to leverage Tcl channels.  The result would be
virtualization not only of the input file format ("ASCII" vs. CSV field
readers) but also of the underlying I/O.  Thankfully, the only I/O function
called by the field readers is fgetc(), so it shouldn't be too much work
to replace that with a call to a function pointer inside the ImportCtx
structure, through which any reasonable VFS can be invoked.
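
Roughly what I have in mind, as a sketch only (the structure below is not
the actual shell.c ImportCtx; the members other than the function pointer
are made up):

#include <stdio.h>

/* Let the field readers pull bytes through a function pointer, so the same
** code can read from stdio in the shell or from a Tcl channel in the Tcl
** extension. */
typedef struct ImportCtx ImportCtx;
struct ImportCtx {
  void *pSrc;                /* FILE* in the shell, Tcl channel elsewhere */
  int (*xGetc)(void *pSrc);  /* Return the next byte, or EOF at end of input */
  /* ... existing members such as zFile, nLine, cColSep, cRowSep ... */
};

/* The stdio reader used by the shell. */
static int stdioGetc(void *pSrc){
  return fgetc((FILE*)pSrc);
}

/* The field readers then call p->xGetc(p->pSrc) wherever they call fgetc()
** today, and the Tcl extension supplies a reader built on the Tcl channel
** API. */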

There are more details to be discussed, e.g. backward compatibility and
what to do about zFile, but for now I'm trying to raise awareness, gauge
interest, and document my first impressions.

See also my post from 2005 which got no replies.  I've asked about this
basic issue before.
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg11195.html

Back to my immediate situation.  I could go off and try implementing the
above, but I could also get what I need by not having the database be
in-memory.  If it's on disk, I can alternate between [exec]'ing the
SQLite shell to import and loading it using the Tcl extension.  Far from
clean, but it would work right now.
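
For the record, that workaround looks something like this (file and table
names are placeholders; it assumes the sqlite3 shell is on PATH and lets
.import create the table so the CSV header row becomes the column names):

package require sqlite3

# Import with the shell, then open the same on-disk database from Tcl.
exec sqlite3 /tmp/import.db << ".mode csv\n.import data.csv mytable\n"
sqlite3 db /tmp/import.db
puts [db onecolumn {SELECT count(*) FROM mytable}]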

And last, a question.  Are there any other functionalities common to the
SQLite shell and Tcl extension which could become common code?

-- 
Andy Goth | 



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLAR pronunciation

2014-09-09 Thread Andy Goth

I have to ask, and I apologize if it's been asked before, but... how
is SQLAR pronounced?  My best guess is "squalor". :^)

http://www.sqlite.org/sqlar/doc/trunk/README.md

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple Select from IN - from a newbie.

2014-05-23 Thread Andy Goth

On 5/22/2014 5:23 PM, RSmith wrote:
> On 2014/05/22 17:25, Humblebee wrote:
>> I would like to order the grouping of the Persons in a
>> particular order. So the sequence of the result is based on the
>> TeamPersonTable's order field.
> 
> First a word of warning - Please do not use column names that are
> the same as SQL keywords, such as "Order"... I took the liberty of
> renaming the new column to "orderId" because of this.

I tend to use the term "sort" as a column name when specifying the
sort order, unless there's a more appropriate domain-specific term
available.

Regarding keywords being column (or table!) names: I'm working on an
application that models an existing system which uses a concept
(unrelated to SQL, it actually has to do with music) called a table.
So I'm fairly well stuck using that word to name the table and a
couple related things such as foreign keys.

The solution here (aside from inventing an unfamiliar term or
intentionally misspelling table) is to surround the word in double
quotes to force SQLite to not interpret it as a keyword.  For example:

CREATE TABLE "table"
(id INTEGER PRIMARY KEY, a, b, c);

CREATE TABLE phrase
(id INTEGER PRIMARY KEY, "table" REFERENCES "table", a, b, c);

So that's what double quotes means.  Single quotes, on the other hand,
are used to enter string literals.
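
A quick illustration of the difference:

SELECT "table", 'table' FROM phrase;
-- The first result is the value of the "table" column;
-- the second is just the five-character string literal.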

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple Select from IN - from a newbie.

2014-05-21 Thread Andy Goth

On 5/21/2014 11:09 AM, Stephan Beal wrote:
> On Wed, May 21, 2014 at 6:00 PM, Humblebee
> <fantasia.d...@gmail.com> wrote:
>> | 1  | 4   | 1,5,2,3,4   |
>> | 2  | 5   | 2,6,3,5,1   |
> 
> Without doing what Simon suggests, there is no good solution to
> your problem with the data structure you have. SQL is made for
> normalized data, not strings containing arbitrary tokens separated
> by arbitrary other tokens.

It's probably been suggested at least once in response to the thousands
of times this same question has come up on the list, but I'll put it
forward again.  Perhaps create virtual tables which are essentially
views onto this table but with a normalized presentation.

Something like:

CREATE TABLE TeamAssignment (
    parId INTEGER NOT NULL REFERENCES TeamTable,
    personId INTEGER NOT NULL REFERENCES PersonTable,
    PRIMARY KEY (parId, personId)
);

If you had this, you'd be able to associate persons with their teams,
including situations where a team is empty and a person is in many
teams.
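
For example, assuming PersonTable and TeamTable each have id and name
columns (I'm guessing at the rest of your schema), listing every team with
its members becomes an ordinary join:

SELECT t.name AS team, p.name AS person
  FROM TeamAssignment a
  JOIN TeamTable t ON t.id = a.parId
  JOIN PersonTable p ON p.id = a.personId
 ORDER BY t.name, p.name;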

Going the virtual table route will have a negative impact on performance
and maintainability, compared to actually normalizing your database.
But you say you're stuck, so do what you have to.  And honestly, please
don't give people with no knowledge of SQL theory the power to set your
SQL schema in stone.

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite-users list failed to block large attachment

2014-05-14 Thread Andy Goth

On 5/13/2014 7:36 PM, Darren Duncan wrote:

Is something wrong with the configuration of this sqlite-users list?

A message of subject "Porting SQLite to plain C RTOS" was allowed and
distributed through it this morning with attachments.

Not only attachments, but about 5MB of attachments.


Quite likely it snuck through due to being plain text.  I don't normally 
mind plain text attachments, for instance patches, if they are very small.


--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WITHOUT ROWID option

2014-05-09 Thread Andy Goth

On 5/8/2014 10:11 AM, Jim Morris wrote:

To improve efficiency you could add "where 1=2" to avoid returning any
rows.  Should just check validity.


This being SQLite, as previously discussed, you could say "where 0" :^)

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CTEs and unions

2014-04-22 Thread Andy Goth

On 4/22/2014 5:55 PM, Andy Goth wrote:

On 4/22/2014 5:16 PM, Dominique Devienne wrote:

sqlite> with cte(a) as (select 1)
...> select * from cte
...> union all
...> select * from cte;
Error: no such table: cte


All these queries work for me without error.


http://www.sqlite.org/cgi/src/info/67bfd59d9087a987

This commit fixed your problem, which was written up here:

http://www.sqlite.org/cgi/src/info/31a19d11b97088296a

Try upgrading to 3.8.4 or newer.

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CTEs and unions

2014-04-22 Thread Andy Goth

On 4/22/2014 5:16 PM, Dominique Devienne wrote:

sqlite> with cte(a) as (select 1)
...> select * from cte;
a
1

sqlite> with cte(a) as (select 1)
...> select * from cte
...> union all
...> select * from cte;
Error: no such table: cte

sqlite> with cte(a) as (select 1),
...>  cpy(b) as (select a from cte)
...> select a from cte
...> union all
...> select b from cpy;
Error: no such table: cte

sqlite> with recursive
...>   cte(a) as (select 1),
...>   cpy(b) as (select a from cte)
...> select a from cte
...> union all
...> select b from cpy;
Error: no such table: cte


All these queries work for me without error.

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INSERT OR REPLACE

2014-04-21 Thread Andy Goth

On 4/21/2014 6:04 AM, Richard Hipp wrote:

On Mon, Apr 21, 2014 at 6:18 AM, Neville Dastur wrote:

Google only showed up that UNIQUE needs to be added regardless of the
column being a PRIMARY KEY.


Google is wrong.  The UNIQUE is superfluous.  PRIMARY KEY always
implies UNIQUE.  Always.


PRIMARY KEY also is supposed to imply NOT NULL.  However, for historical
reasons, SQLite allows NULL in PRIMARY KEY columns if the column is not
INTEGER, not explicitly NOT NULL, and not in a WITHOUT ROWID table.

So you can't just go with SQL standards; you have to check the SQLite
documentation.  http://www.sqlite.org/lang_createtable.html
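
A quick demonstration of the quirk:

CREATE TABLE t1 (a TEXT PRIMARY KEY);
INSERT INTO t1 VALUES (NULL);               -- allowed, for historical reasons
CREATE TABLE t2 (a INTEGER PRIMARY KEY);
INSERT INTO t2 VALUES (NULL);               -- NULL is replaced by a new rowid
CREATE TABLE t3 (a TEXT PRIMARY KEY) WITHOUT ROWID;
INSERT INTO t3 VALUES (NULL);               -- error: NOT NULL constraint failed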

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] printf function is shown in docs but not found when I try it

2014-04-18 Thread Andy Goth

On 4/18/2014 8:49 AM, c...@isbd.net wrote:

Alternatively are there any other reasonably practical ways to format
numeric output?  Specifically I want to format floating point numbers
into a fixed format like NNN.N.


Your application can provide custom functions callable from queries.

Tcl example:

$ tclsh
% package require sqlite3
% sqlite3 db :memory:
% db function printf format
% db eval {select printf('%05.1f', 12.3)}
012.3

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-18 Thread Andy Goth

On 4/18/2014 12:29 AM, Max Vlasov wrote:

The problem was with my program that automatically converts xml data
into an sqilte table. It looks for an attribute and appends a column
if it does not exists, but stating no particular type. All values were
appended with sqlite_bind_text. Everything was fine, but an index
created after this on a ParentId field that was in real life integer
or null, actually appeared text-based. Sure, typeof all inserted
fields appeared to be text.

So it seems like if general queries allow affinity automatical
selection while bind api does not have the corresponent function. I
know that I can analize incoming data myself, but since general
queries use a similar function probably, making some kind of
sqlite_bind_auto should be no big deal. But probably this decision was
deliberate.


This was done for C compatibility.  A C++ wrapper for SQLite can have an
sqlite_bind_auto() function which is overloaded for a variety of types,
but it would in reality be multiple functions that have the same name
but otherwise different type signatures.  C doesn't mangle function
symbol names according to argument types, so each function must have a
distinct name.

You're free to write an sqlite_bind_auto() function (use whatever name
you choose) that takes a text argument but tries to coerce the data to
integer or real if it thinks it can do so reversibly.  It's not enough
to call strtod() or whatever on the argument and check for success; you
have to make sure the input is that number expressed in canonical form.
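
A rough sketch of the mechanism (my own code, not an SQLite API; the
round-trip test is deliberately conservative, and the gotchas below still
apply):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "sqlite3.h"

/* Bind zVal as INTEGER or REAL only when rendering the parsed number back
** to text reproduces the input exactly; otherwise bind it as TEXT. */
static int bind_auto(sqlite3_stmt *pStmt, int i, const char *zVal){
  char *zEnd;
  char zBuf[40];
  long long v = strtoll(zVal, &zEnd, 10);
  if( zVal[0] && zEnd[0]==0 ){
    snprintf(zBuf, sizeof zBuf, "%lld", v);
    if( strcmp(zBuf, zVal)==0 ) return sqlite3_bind_int64(pStmt, i, v);
  }
  double r = strtod(zVal, &zEnd);
  if( zVal[0] && zEnd[0]==0 ){
    snprintf(zBuf, sizeof zBuf, "%.15g", r);
    if( strcmp(zBuf, zVal)==0 ) return sqlite3_bind_double(pStmt, i, r);
  }
  return sqlite3_bind_text(pStmt, i, zVal, -1, SQLITE_TRANSIENT);
}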

There are many potential gotchas here.  For instance, consider entering
telephone numbers.  You may have a database which has entries like:

(800) 123-4567

But also has:

8001234567

These should both be represented as text even though the latter appears
to be an integer (which, by the way, is larger than the 32-bit signed
maximum integer, so don't go putting it in an int).

And why should they both be text?  Because that's in accordance with the
database schema.

Now we're getting to the real reason SQLite doesn't have this auto
function you're asking about.  That is: SQLite expects the programmer to
know the schema and to embed it in the program's structure.  Providing
an automatic function means the programmer gives some of that control
and responsibility back to SQLite, and SQLite may well do it incorrectly
or suboptimally.  Better not to have a feature that breeds bugs and
solves a non-problem.

What you're doing is atypical usage.  You don't have a prearranged
schema, you're trying to detect it from your incoming data.  If you're
going to take on that responsibility, you have to not only figure out
your tables and columns, but also their types.  And if you don't want to
do that latter bit of work, you have to accept that SQLite will insert
everything as text.

And what's the problem with that, anyway?  Sure, it may take a bit more
space on disk, but that's what XML was doing anyway.  Sure, it may give
"incorrect" typeof(), but does your database application really need
that?  At some point, something must know and require a particular
schema, otherwise it can't truly use the data, only pass it along, maybe
converting along the way.  That end user will *expect* an integer here
and a datetime there, and SQLite will do the conversions on demand.
Read up on duck typing sometime.

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite3 shell man page not installed

2014-04-04 Thread Andy Goth
"make install" in the SQLite source tree (obtained via Fossil) does not 
install the sqlite3 shell man page, sqlite3.1.


--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] if possible point another table

2014-04-03 Thread Andy Goth

On 4/3/2014 10:10 PM, Keith Medcalf wrote:


  select *
from k1
union
  select *
from k2;


My understanding of the question was: how to select from tables whose 
names are somehow computed or extracted from another table.


--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] if possible point another table

2014-04-03 Thread Andy Goth

On 4/3/2014 8:46 PM, YAN HONG YE wrote:

I have a table named aa like this:
id   pid  name  note  tablename
1    0    s12   bbc   k1
2    1    sss   vac   k2

another table named k1:
id   pid  name  note
11   1    f2    aaa
12   1    fs    bbc

another table named k2:
id   pid  name  note
31   2    f2    aaa
32   2    fs    bbc

my question is: in table aa, if possible k1 point to another table "k1"?

select tablename from aa;
I want result is:
id  pid namenote
11  1   f2  aaa
12  1   fs  bbc
31  2   f2  aaa
32  2   fs  bbc


I don't believe this can be done in pure SQL since table names are not
values.  That doesn't mean it can't be done, though you will have to put
some of the logic in your program itself.  Here's a Tcl implementation
that gives the same result as in your question.  The actual work is done
in the last two lines; the rest is for displaying the result nicely and
for creating the database.

# Recent version of Tcl required for [lmap] and max().
package require Tcl 8.6
package require sqlite3

# Executes an SQL query and prints it in nicely formatted columns.
proc query {db query} {
    $db eval $query out {
        if {![info exists widths]} {
            lappend grid $out(*)
            set widths [lmap column $out(*) {string length $column}]
        }
        lappend grid [set row [lmap column $out(*) {set out($column)}]]
        set widths [lmap width $widths value $row {expr {
            max($width, [string length $value])
        }}]
    }
    set format %-[join $widths "s %-"]s
    set display {}
    foreach row $grid {
        lappend display [format $format {*}$row]
    }
    join $display \n
}

# Create the database.
sqlite3 db :memory:
db eval {
    CREATE TABLE aa (id, pid, name, note, tablename);
    INSERT INTO aa VALUES (1, 0, 's12', 'bbc', 'k1'),
                          (2, 1, 'sss', 'vac', 'k2');
    CREATE TABLE k1 (id, pid, name, note);
    INSERT INTO k1 VALUES (11, 1, 'f2', 'aaa'),
                          (12, 1, 'fs', 'bbc');
    CREATE TABLE k2 (id, pid, name, note);
    INSERT INTO k2 VALUES (31, 2, 'f2', 'aaa'),
                          (32, 2, 'fs', 'bbc');
}

# Get list of subqueries which need to be UNION'ed.
set queries [db eval {SELECT 'SELECT * FROM ' || tablename FROM aa}]

# Join subqueries with UNION ALL, then execute and display result.
puts [query db [join $queries " UNION ALL "]]

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request - Tcl variables as "value-list"s

2014-04-03 Thread Andy Goth

Thread necromancy!

Back in 2007 I expressed a desire to efficiently insert a *list* of
values all at once, where the entire list is contained within a single
Tcl variable.  The notation would be to use the variable name, prefixed
with $ or :, in place of the value list, intentionally omitting the
parentheses which would otherwise be required by the SQL syntax.

I know it's possible to extract the list into separate variables, one
per element, or to construct the query (with quoting and proper
apostrophe doubling), but both approaches incur performance penalties,
and the latter is subject to injection attacks if done hastily.

Here's what I wish I could type:

foreach row $table {
   db eval {INSERT INTO xyzdata VALUES $row}
}

And here's what works today (the former is slow due to Tcl shimmering
and SQL recompilation, the latter example assumes five columns):

foreach row $table {
   db eval "INSERT INTO xyzdata VALUES\
('[join [string map {' ''} $row] ',']')"
}
foreach row $table {
   lassign $row a b c d e
   db eval {INSERT INTO xyzdata VALUES ($a, $b, $c, $d, $e)}
}

How about inserting NULLs?  The two present-day implementations I offer
don't allow that, though you could play games with nullif().  For the
new notation, I would suggest treating missing columns as NULLs.  (By
the way, treat extra columns as errors.)  This gives some NULL
capability, though only the final columns can be NULL.  And if the
variable itself is NULL, let all columns be NULL.
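
To illustrate the proposed behavior (hypothetical, not something that works
today), suppose xyzdata has five columns:

set row {1 2}
db eval {INSERT INTO xyzdata VALUES $row}
# would insert (1, 2, NULL, NULL, NULL)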

Inserting multiple rows?  Since the variable takes the place of the
parenthesized value list (including the parentheses themselves), just
separate the variables with the several rows using commas.  Or mix and
match with traditional value lists:

db eval {INSERT INTO xyzdata VALUES $row1, $row2, (1, $y, $z)}

Taking all rows from a variable containing a list of lists?  That goes
beyond the scope of this proposal since it doesn't naturally fit into
the existing syntax.

When I asked about this years ago, I was told that this conflicts
irreconcilably with the fundamental structure of the parser.  Is that
still true?

For reference, here are extracts from my old emails:

On 10/4/2007 9:35 PM, Andy Goth wrote:

See the bottom of http://wiki.tcl.tk/2633 for more details.

My current project would benefit from the ability to expand a Tcl
variable into multiple SQL values.

This expansion will happen within SQLite and not Tcl.  That's the
whole point; it eliminates injection attacks and allows the compiled
SQL to be cached inside the Tcl_Obj.


On 10/5/2007 10:01 AM, Andy Goth wrote:

$ db eval {insert into xyzdata values $x}

Note the lack of parentheses around $x.  This distinguishes it from
the case of inserting a single value "1 2 3" into xyzdata.


Footnote: here are the nullif() games I refer to above.  This code
treats empty string as NULL.  That's not quite the behavior I wanted to
propose though, so I pushed this text to the end of the email.

foreach row $table {
   db eval "INSERT INTO xyzdata VALUES\
(nullif('[join [string map {' ''} $row] ',''),nullif(']',''))"
}
foreach row $table {
   lassign $row a b c d e
   db eval {INSERT INTO xyzdata VALUES (nullif($a, ''), nullif($b, ''),
nullif($c, ''), nullif($d, ''), nullif($e, ''))}
}

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Join of two virtual tables returns incorrect result set in 3.8.4.1

2014-04-02 Thread Andy Goth

On 4/2/2014 4:52 PM, Hinrichsen, John wrote:

sqlite 3.8.4.1 can return an incorrect result when joining two virtual
tables that are themselves based on underlying sqlite tables.

This problem does not happen with sqlite 3.8.3.1 or earlier.

Please see the attached repro.


Attachment appears to be missing.

--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] [$db incrblob] and asynchronous [chan copy]

2012-01-27 Thread Andy Goth

http://wiki.tcl.tk/2633#pagetoc0d507ec9

AMG: I can't seem to get asynchronous [chan copy] to work with [$db 
incrblob]. Synchronous works just fine, but when I do asynchronous, the 
copy never starts. I'm copying from the incrblob to a network socket. 
(Someday I might also copy from a network socket to the incrblob, but 
this is doubtful.) I tried an asynchronous copy from the incrblob to a 
local file, and that hung too. No, I'm not forgetting to enter the event 
loop. ;^) The copy code I've written works fine when the source channel 
is an on-disk file, just not when it's an incrblob. Is this an SQLite 
bug, a design limitation, or a problem in my code?
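
For reference, the two forms being compared look roughly like this (table, 
column, and channel names are placeholders):

set blob [db incrblob mytable content $rowid]
chan copy $blob $sock                            ;# synchronous: works
chan copy $blob $sock -command [list copyDone]   ;# asynchronous: never starts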


AMG: At the Eighteenth Annual Tcl/Tk Conference (2011), AK, DRH, and I 
had a nice discussion concerning this issue, and we identified a 
potential fix. AK's memchan provides an example of how to support 
asynchronous [chan copy].


--
Andy Goth | 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] fts3 - "SQL logic error or missing database" on all match words negated

2008-01-22 Thread Andy Goth
On Tue, 22 Jan 2008 16:45:36 -0800, Scott Hess wrote
> MATCH "foo -bar" should return "The set of documents which match foo
> but not bar".  I read MATCH "-bar" as "The set of all documents which
> do not match bar".  MATCH "-foo -bar" would be "The set of all
> documents which do not match foo and do not match bar".

Each match word adds a constraint which filters down the results.  Each word
in the match query disqualifies all documents that have or don't have that
word, depending on the presence or absence of a leading -, respectively.  Is
that the right way to look at things?  This seems consistent with SQL's design
of adding constraints to queries which default to yielding all rows, and it
also seems to match up with your above explanation.  But it doesn't explain
the empty match query case, in which no constraints are given.

Previously I had thought that the result set is initially empty, that words
lacking -'s add to the set, and words with -'s remove from the set.  This is
consistent with the fact that an empty match query returns zero results, and
this reasoning predicts that a match query consisting only of -words will also
give zero results.  But this isn't how you explain things.

> I'm not sure how the empty-string results matter, as I don't consider 
> MATCH "-foo" to have an implicit empty term.

Okay, I agree that it's logical for a match query consisting only of negated
words to return all rows lacking those words, and I am fine with this
particular case being unsupported.  (There are other "missing" features in
SQLite that are more important to me, like recursive triggers and foreign key
constraints, so I don't mind waiting on this one.)

However, it's also logical (I think--- show me where I'm wrong) for an empty
match query to return all rows, which is an unsupported operation.  Yet rather
than fail with an SQL logic error, fts3 yields zero rows in this case.  I find
this to be inconsistent, and I'd rather have both throw errors or both return
zero results.
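
The two cases side by side, per the behavior described in this thread:

SELECT * FROM foo WHERE foo MATCH '';                     -- zero rows, no error
SELECT * FROM foo WHERE foo MATCH '-all -words -negated'; -- SQL logic error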

> That's what I'm saying.  Calculating the set of all documents which
> match "foo" and the set of all documents which match "bar" and
> removing the latter from the former is conveniently available from 
> the fts index.  Calculating the set of all documents in the fts 
> index would require running a separate query to figure it out.

I take from your discussion that the fts index keeps track of all rows that
contain a given word.  Is it reasonable to add one more entry to the index
that lists all rows?  The indexed "word" could even be empty string, as in all
rows contain empty string. :^)  Then in any match query lacking nonnegated
words (i.e. empty match query or entirely negative match query), the match
words' index sets are intersected with or subtracted from this index of
everything, as if the match query indeed has an implicit empty term.

-- 
Andy Goth | <[EMAIL PROTECTED]> | http://andy.junkdrome.org/


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] fts3 - "SQL logic error or missing database" on all match words negated

2008-01-22 Thread Andy Goth
On Tue, 22 Jan 2008 14:19:10 -0800, Scott Hess wrote
> Hmm.  I think the current behaviour is because the obvious result for
> this query would be the set of all documents with the documents
> matching each of the negated terms removed.

I disagree that this is the obvious result.  Searching for empty string yields
no results, so adding a constraint on top of that should also yield no
results.  But if searching for empty string yielded all rows, I would agree
with you that it is the obvious result.

> In the current system, calculating the set of all documents isn't 
> convenient, so since this couldn't be supported, it's an error.

Then how does "select * from my_fts3_table" work?

-- 
Andy Goth | <[EMAIL PROTECTED]> | http://andy.junkdrome.org/


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sql optimization question

2008-01-20 Thread Andy Goth
On Sun, 20 Jan 2008 19:16:03 -0800 (PST), Ken wrote
> Jay Sprenkle <[EMAIL PROTECTED]> wrote:
>> I'm deleting a tree of data stored in sqlite and was looking for the
>> most efficient way to do it.
>
> You'll need one trigger per table to cause the delete to cascade
> through the tree.

Watch out.  SQLite doesn't support recursive triggers, so the following
won't work.  At least, it didn't work for me. :^)

SQLite version 3.5.4
Enter ".help" for instructions
sqlite> create table x (id, parent);
sqlite> create trigger deltree after delete on x begin
   ...> delete from x where parent = old.id;
   ...> end;
sqlite> insert into x values(0, null);
sqlite> insert into x values(1, 0);
sqlite> insert into x values(2, 0);
sqlite> insert into x values(3, 1);
sqlite> delete from x where id = 0;
sqlite> select * from x;
3|1
sqlite> 

In response to the deletion of (0,null), the trigger fired, deleting
(1,0), and (2,0).  But the trigger didn't fire again in response to
either of these subsequent deletions, so (3,1) was not automatically
deleted.

If anyone knows how to get around this problem, I would like to know.
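
For the record, SQLite 3.6.18 and later can enable recursive trigger
firing, which makes the cascade above work as intended:

PRAGMA recursive_triggers = ON;
-- with this set, the DELETE above also removes (3,1)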

-- 
Andy Goth
<[EMAIL PROTECTED]>

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] fts3 - primary key doesn't seem to do anything

2008-01-19 Thread Andy Goth
SQLite version 3.5.4
sqlite> create virtual table foo using fts3(content, id primary key);
sqlite> insert or replace into foo values('anything', 1);
sqlite> insert or replace into foo values('anything', 1);
sqlite> insert or replace into foo values('anything else', 1);
sqlite> select * from foo;
anything|1
anything|1
anything else|1

For comparison's sake:

SQLite version 3.5.4
Enter ".help" for instructions
sqlite> create table foo (content, id primary key);
sqlite> insert or replace into foo values('anything', 1);
sqlite> insert or replace into foo values('anything', 1);
sqlite> insert or replace into foo values('anything else', 1);
sqlite> select * from foo;
anything else|1

Why doesn't inserting a row with a duplicate primary key trigger a conflict? 
Is this part of fts3's design, or is it an oversight?  Am I missing something?

For now, I will avoid this problem by deleting rows matching the primary key
before inserting/replacing them.
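
In other words:

DELETE FROM foo WHERE id = 1;
INSERT INTO foo VALUES ('anything else', 1);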

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] fts3 - "SQL logic error or missing database" on all match words negated

2008-01-19 Thread Andy Goth
SQLite version 3.5.4
sqlite> create virtual table foo using fts3;
sqlite> insert into foo values('anything');
sqlite> select * from foo where foo match '-all -words -are -negated';
SQL error: SQL logic error or missing database

Yeah, I know it's not really a useful thing to search for nothing but negated
words, but I think it should result in an empty list, not an SQL error.  I
mean, the match words often come from a Web form, and I don't think I should
have to write code to check for this situation.

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Making BEGIN IMMEDIATE the default.

2007-10-12 Thread Andy Goth
On Thu, 11 Oct 2007 13:33:35 +0200, Kees Nuyt wrote
> On Wed, 10 Oct 2007 22:10:38 -0500, you wrote:
>> You might want to be a little bit more clear about the fact that 
>> [transaction] nests even though BEGIN does not.
>
> The TCL transaction{} can be nested, the SQL BEGIN can't.

It looks like I'm the one who was unclear.  I was asking DRH to consider
expanding the documentation a bit to underscore the fact that [transaction]
can be used in nested fashion despite the limitations of BEGIN--- I had
incorrectly assumed that since BEGIN doesn't nest, [transaction] doesn't nest
either.  Also, I didn't explain my typographical convention: [bracketed] words
are Tcl commands, CAPITALIZED words are SQL keywords.

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Making BEGIN IMMEDIATE the default.

2007-10-10 Thread Andy Goth
On Thu, 11 Oct 2007 02:40:22 +, drh wrote
> The BEGIN, ROLLBACK, and/or COMMIT only happen on the outermost
> "transaction".  Of course, it is kind of silly to nest 
> as shown above.  But this is useful, for example, when each
> "db transaction" is really in a separate procedure and the
> procedures are nested.

Wow, I didn't know [transaction] nests!  Thanks.  I had written some
untrustworthy code to only invoke [transaction] on the outermost stack frame;
it's great to know that I can get rid of it.

>From the documentation:

"Also, BEGIN does not nest, so you have to make sure no other transactions are
active before starting a new one. The 'transaction' method takes care of all
of these details automatically."

You might want to be a little bit more clear about the fact that [transaction]
nests even though BEGIN does not.
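
For example (table names invented), nesting works like this, with only the
outermost [transaction] issuing BEGIN and COMMIT:

proc addPerson {name} {
    db transaction {
        db eval {INSERT INTO person (name) VALUES ($name)}
    }
}

proc addTeam {team members} {
    db transaction {
        db eval {INSERT INTO team (name) VALUES ($team)}
        foreach member $members {
            addPerson $member
        }
    }
}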

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] built in function hex

2007-10-08 Thread Andy Goth
On Mon, 8 Oct 2007 10:03:40 -0700 (PDT), Ken wrote
> is there a built in function to display numeric values as 
> hexidecimal strings?  to_hexstr (40) would display 0x28

hex() isn't quite what you want.  hex(40) gives 3430, since '4' is 0x34 and
'0' is 0x30.  Moving on...

This is the sort of work I prefer to leave up to the application using SQLite.
 SQLite is a talented, efficient, obedient, and tireless reference librarian,
but asking SQLite to write your thesis is going too far!  But you can sure ask
it for all the facts, arguments, and bibliographic references you need.  It's
your job (or, ditching the school metaphor, the application's job) to present
the data to the consumer in a format palatable to same.

Of course, that's just my take on things.

If you still think you have a need for to_hexstr(), you can add it very
easily.  See http://www.sqlite.org/tclsqlite.html#function .  (By the way,
Author, you might want to rename the example function to not collide with the
built-in hex() function.)
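
For completeness, here's how little it takes with the Tcl interface:

db function to_hexstr {format 0x%x}
puts [db onecolumn {SELECT to_hexstr(40)}]   ;# prints 0x28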

-- 
Andy Goth
<[EMAIL PROTECTED]>

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Problems with SQLite and PHP

2007-10-08 Thread Andy Goth
On Mon, 08 Oct 2007 13:12:08 +0200, Markus Wolff - NorthClick wrote
> How does it make sense that the entire directory containing the 
> directory must be writeable?

Sounds like a temporary file or lock file is being created.

Somebody who knows more about PHP or SQLite can tell you which is taking
place.  Me, I'd find out what's really happening by running SQLite in strace
and looking for open() calls.

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Feature request - Tcl variables as "value-list"s

2007-10-05 Thread Andy Goth
On Fri, 5 Oct 2007 09:41:27 -0700, Scott Hess wrote
> On 10/5/07, Andy Goth <[EMAIL PROTECTED]> wrote:
> > proc sql_expand {varname} {
> >upvar 1 $varname var
> >set result [list]
> >foreach elem $var {
> >   lappend result '[string map {' ''} $elem]'
> >}
> >return [join $result ,]
> > }
>
> You really should be using an SQLite-specific quote function
> somewhere.

I didn't know SQLite provided one.  It normally doesn't need one, thanks to
its internal expansion of Tcl variables.  Oh wait, it has quote(), but this
would not work for me because (by design!) its result is formatted in such a
way that SQLite interprets it as a single value, *not* a list.  Or I could
invoke it repeatedly, once per element.

But after all that overhead I imagine I would be better off using Tcl to
expand my data into multiple variables; at least that way all bytecoding gets
cached and I'm guaranteed to be immune to injection.

> You could work around it by doing something like [db eval {select 
> quote($arg)}], but that feels clunky.

I'm pretty sure I know how SQLite will perform quote()'ing, and I can
implement it in pure Tcl easily enough, so I might as well leave it in Tcl. 
Tcl should be able to bytecode string manipulation better than SQLite; string
manipulation is Tcl's raison d'être.

But maybe I'm wrong about how quote() works.  Can somebody confirm?  In my
mind, quote($x) is equivalent to '[string map {' ''} $x]' .
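
A quick way to check from the Tcl side:

set x "it's"
puts [db onecolumn {SELECT quote($x)}]    ;# 'it''s'
puts "'[string map {' ''} $x]'"           ;# 'it''s'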

> The quoting you're using will work fine for many cases, but are
> subject to SQL injection attack.

Could you please provide an example?

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Feature request - Tcl variables as "value-list"s

2007-10-05 Thread Andy Goth
On Fri, 05 Oct 2007 15:20:41 +, drh wrote
> "Andy Goth" <[EMAIL PROTECTED]> wrote:
> > http://wiki.tcl.tk/2633
>
> I suggest you go head and write a short TCL procedure to 
> accomplish the same thing.

Like this?

proc sql_expand {varname} {
    upvar 1 $varname var
    set result [list]
    foreach elem $var {
        lappend result '[string map {' ''} $elem]'
    }
    return [join $result ,]
}

$ set x {1 2 3}
$ db eval "insert into xyzdata values ([sql_expand x])"
(expands to)
$ db eval "insert into xyzdata values ('1','2','3')"

Is there any problem with the spurious quotes around the values?  Will that
interfere with integer primary key or anything like that?

> There are deep technical reasons why this is very difficult
> and unlikely to ever happen.

I will spend some time familiarizing myself with the relevant bits of the
SQLite sources to see why this is the case. :^)

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Feature request - Tcl variables as "value-list"s

2007-10-05 Thread Andy Goth
On Thu, 4 Oct 2007 21:35:30 -0500, Andy Goth wrote
> (See my original proposal writeup at the bottom of 
> http://wiki.tcl.tk/2633 for more details.)

I made a significant update to the bottom of said page.  I'll briefly cover it
here as well.  Basically I revise my proposal to be less generic, to only
support the limited case of a single Tcl variable expanding to the entire
value-list.

Old script:

> $ set x {1 2 3}
> $ db eval {insert into xyzdata values({*}$x)}

New script:

$ db eval {insert into xyzdata values $x}

Note the lack of parentheses around $x.  This distinguishes it from the case
of inserting a single value "1 2 3" into xyzdata.

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Feature request - Tcl variables as "value-list"s

2007-10-04 Thread Andy Goth
(See my original proposal writeup at the bottom of http://wiki.tcl.tk/2633 for
more details.)

** Feature request **

My current project would benefit from the ability to expand a Tcl variable
into multiple SQL values.  Quick example:

$ set x {1 2 3}
$ db eval {insert into xyzdata values({*}$x)}

where "{*}$x" basically expands to "1,2,3".

I borrow the {*} notation from Tcl 8.5's expansion directive because it does
something very similar, but any notation will do.  Remember that {*} was
chosen by the Tcl folks to avoid breaking existing scripts.

Of course this expansion will happen within SQLite and not Tcl.  That's the
whole point; it eliminates injection attacks and allows the compiled SQL to be
cached inside the Tcl_Obj.

-- 
Andy Goth
<[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] tclsqlite.c DB_COPY: binary mode?

2005-10-31 Thread Andy Goth
Is it really necessary for copy's input file to be fopen()ed with mode
"rb"?  This causes CRLF files opened on MS-Windows to have \r's appended
to the values in the last column.  The last column might otherwise be
numeric.

Actually, I'd prefer that DB_COPY accept a Tcl channel rather than a
filename.  (Of course, let's not break existing code.)  This way Tcl can
handle all the encoding and line ending junk, plus I can [seek] past
(okay, [gets] past) any header lines.  Moreover, this will make using
stdin possible (I read the header line, sqlite3's DB_COPY reads the
rest, and there's no seeking and no reopening).
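
Sketched out, the channel-based interface I'm wishing for would let me
write something like this (hypothetical: today's copy method takes a
filename, not a channel; names are placeholders):

set f [open data.txt r]
fconfigure $f -translation auto -encoding utf-8
gets $f header                  ;# consume the header line myself
db copy ignore mytable $f \t    ;# hypothetical: accept a channel here
close $f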

-- 
Andy Goth
   <[EMAIL PROTECTED]>
   <[EMAIL PROTECTED]>

