Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Ok. I am a Fossil newbie. My effort to build from that did not work: the
amalgamation from configure+make seemed incomplete, and other files like
json1.c were missing.
I am on trunk and the Fossil timeline showed Dr. Hipp's fix.  I'll educate
myself a bit more on Fossil to see where I went wrong.

However, I downloaded the latest snapshot (
https://www3.sqlite.org/cgi/src/doc/trunk/README.md) and built from that
and... *SUCCESS!*

As a note, my code is based on this article:
https://blog.budgetwithbuckets.com/2018/08/27/sqlite-changelog.html. Many
thanks to that author for the idea and sample code.

A big thank you to Dr Hipp for the prompt fix.

/Lindsay

Output from the shell built from the latest trunk snapshot tarball:

SQLite version 3.26.0 2018-11-02 17:38:39
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .version
SQLite 3.26.0 2018-11-02 17:38:39
1fa74930ab56171e2e840d4a5b259abafb0ad1e0320fc3030066570a6dd10002
gcc-4.9.2
sqlite> attach "napp.db" as napp;
sqlite> DROP TABLE napp.people;
sqlite> DROP TABLE napp.change_log;
sqlite>
sqlite> -- Data table
sqlite> CREATE TABLE napp.people (
   ...> id INTEGER PRIMARY KEY,
   ...> created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   ...> name TEXT,
   ...> age INTEGER
   ...> );
sqlite>
sqlite> -- Change log table
sqlite> CREATE TABLE napp.change_log (
   ...> id INTEGER PRIMARY KEY,
   ...> created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   ...> action TEXT,
   ...> table_name TEXT,
   ...> obj_id INTEGER,
   ...> changes TEXT
   ...> );
sqlite>
sqlite> -- Insert Trigger
sqlite> CREATE TRIGGER napp.people_track_insert
   ...> AFTER INSERT ON people
   ...> BEGIN
   ...>   INSERT INTO change_log (action, table_name, obj_id, changes)
   ...>   SELECT
   ...> 'INSERT', 'people', NEW.id, changes
   ...>   FROM
   ...> (SELECT
   ...>   json_group_object(col, json_array(oldval, newval)) AS changes
   ...> FROM
   ...>   (SELECT
   ...> json_extract(value, '$[0]') as col,
   ...> json_extract(value, '$[1]') as oldval,
   ...> json_extract(value, '$[2]') as newval
   ...>   FROM
   ...> json_each(
   ...>   json_array(
   ...> json_array('id', null, NEW.id),
   ...> json_array('created', null, NEW.created),
   ...> json_array('name', null, NEW.name),
   ...> json_array('age', null, NEW.age)
   ...>   )
   ...> )
   ...>   WHERE oldval IS NOT newval
   ...>   )
   ...> );
   ...> END;
sqlite>
sqlite> -- Update Trigger
sqlite> CREATE TRIGGER napp.people_track_update
   ...> AFTER UPDATE ON people
   ...> BEGIN
   ...>   INSERT INTO change_log (action, table_name, obj_id, changes)
   ...>   SELECT
   ...> 'UPDATE', 'people', OLD.id, changes
   ...>   FROM
   ...> (SELECT
   ...>   json_group_object(col, json_array(oldval, newval)) AS changes
   ...> FROM
   ...>   (SELECT
   ...> json_extract(value, '$[0]') as col,
   ...> json_extract(value, '$[1]') as oldval,
   ...> json_extract(value, '$[2]') as newval
   ...>   FROM
   ...> json_each(
   ...>   json_array(
   ...> json_array('id', OLD.id, NEW.id),
   ...> json_array('created', OLD.created, NEW.created),
   ...> json_array('name', OLD.name, NEW.name),
   ...> json_array('age', OLD.age, NEW.age)
   ...>   )
   ...> )
   ...>   WHERE oldval IS NOT newval
   ...>   )
   ...> );
   ...> END;
sqlite>
sqlite> -- Delete Trigger
sqlite> CREATE TRIGGER napp.people_track_delete
   ...> AFTER DELETE ON people
   ...> BEGIN
   ...>   INSERT INTO change_log (action, table_name, obj_id, changes)
   ...>   SELECT
   ...> 'DELETE', 'people', OLD.id, changes
   ...>   FROM
   ...> (SELECT
   ...>   json_group_object(col, json_array(oldval, newval)) AS changes
   ...> FROM
   ...>   (SELECT
   ...> json_extract(value, '$[0]') as col,
   ...> json_extract(value, '$[1]') as oldval,
   ...> json_extract(value, '$[2]') as newval
   ...>   FROM
   ...> json_each(
   ...>   json_array(
   ...> json_array('id', OLD.id, null),
   ...> json_array('created', OLD.created, null),
   ...> json_array('name', OLD.name, null),
   ...> json_array('age', OLD.age, null)
   ...>   )
   ...> )
   ...>   WHERE oldval IS NOT newval
   ...>   )
   ...> );
   ...> END;
sqlite>
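
Exercising the triggers against the attached db should now work as well; a
quick sketch (the sample rows mirror the original test script, output not
shown):

sqlite> INSERT INTO napp.people (name, age) VALUES ('Alice', 30), ('Bob', 42);
sqlite> UPDATE napp.people SET age = age + 2;
sqlite> DELETE FROM napp.people WHERE name = 'Bob';
sqlite> SELECT action, obj_id, changes FROM napp.change_log;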

Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Hi,

Sorry. I have had no luck with this.

I followed the instructions here
https://www.sqlite.org/getthecode.html#clone
then built from source following the directions in README.md.

  mkdir bld;#  Build will occur in a sibling directory
  cd bld   ;#  Change to the build directory
  ../sqlite/configure  ;#  Run the configure script
  make ;#  Run the makefile.
  make sqlite3.c   ;#  Build the "amalgamation" source file
  make test;#  Run some tests (requires Tcl)

This is the platform I am building on 'uname -a':

Linux debianE7440-LT 3.16.0-6-amd64 #1 SMP Debian 3.16.57-2 (2018-07-14)
x86_64 GNU/Linux

The current trunk source, however, does not have the JSON1 extension embedded
in the amalgamation by default. How would I add that?
Building the json1 extension against this source and then loading
it from the shell reports this:

SQLite version 3.8.10.1 2015-05-13 04:50:30
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .version
SQLite 3.8.10.1 2015-05-13 04:50:30 59e3e9e764440b7feaafadff74f422535d21bca2
sqlite> .load "../../../bin/json1"
Error: ../../../bin/json1.so: undefined symbol:
sqlite3_create_window_function
sqlite>

I'll tinker with it a bit more, but at this point I am not sure if it is
my own build configuration causing issues when trying to test your fix, or if
the fix itself is the issue.

Regards,
/Lindsay


On Fri, Nov 2, 2018 at 10:38 AM Richard Hipp  wrote:

> Please try your test script on the latest trunk check-in and let us
> know whether or not it is working for you.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
P.S.

Here are the first few lines of 'fossil timeline' from the local clone that I
am building.

=== 2018-11-02 ===
17:38:39 [1fa74930ab] Enhance triggers so that they can use
table-valued-functions that exist in schemas outside of the schema in which
the trigger is defined. (user: drh)
=== 2018-10-31 ===
20:52:00 [4a6ad5190b] Deploy the sqlite3Strlen30NN() function (argument
guaranteed to be non-NULL) for a small performance improvement. (user: drh)
19:01:13 [790ea39a65] *MERGE* Add support for the SQLITE_PREPARE_NORMALIZED
flag and the sqlite3_normalized_sql() when compiling with
SQLITE_ENABLE_NORMALIZE. Also remove unnecessary whitespace from Makefiles.
 (user: drh)
18:24:29 [7107f0dacf] Tweaks to the test_intarray documentation and tests.
(user: drh)
01:26:24 [4b370c74ae] *MERGE* Merge fixes from trunk, especially rebustness
against corrupt database files. (user: drh tags: apple-osx)
01:12:06 [e0d30c1862] *MERGE* Merge fuzz test cases computed by dbfuzz2.
(user: drh)
01:04:18 [d57873337a] Improved corrupt database detection in the
relocatePage() routine of the b-tree module. (user: drh)

/Lindsay


On Fri, Nov 2, 2018 at 4:49 PM Lindsay Lawrence 
wrote:

> Hi,
>
> Sorry. I have had no luck with this.
>
> I followed the instructions here
> https://www.sqlite.org/getthecode.html#clone
> then built from source following the directions in README.md.
>
>   mkdir bld;#  Build will occur in a sibling directory
>   cd bld   ;#  Change to the build directory
>   ../sqlite/configure  ;#  Run the configure script
>   make ;#  Run the makefile.
>   make sqlite3.c   ;#  Build the "amalgamation" source file
>   make test;#  Run some tests (requires Tcl)
>
> The attached file has the results from running 'make tests'. There are
> some errors there.
>
> This is the platform I am building on 'uname -a':
>
> Linux debianE7440-LT 3.16.0-6-amd64 #1 SMP Debian 3.16.57-2 (2018-07-14)
> x86_64 GNU/Linux
>
> The current trunk source however does not have the JSON1 extension
> embedded in the amalgamation by default. How would I add that?
> However, building the json1 extension against this source and then loading
> it from the shell reports this:
>
> SQLite version 3.8.10.1 2015-05-13 04:50:30
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> .version
> SQLite 3.8.10.1 2015-05-13 04:50:30
> 59e3e9e764440b7feaafadff74f422535d21bca2
> sqlite> .load "../../../bin/json1"
> Error: ../../../bin/json1.so: undefined symbol:
> sqlite3_create_window_function
> sqlite>
>
> I'll tinker with it  a more but at this point I am not sure if it is
> because an own build configuration issue trying to test your fix or if the
> fix itself is the issue.
>
> Regards,
> /Lindsay
>
>
> On Fri, Nov 2, 2018 at 10:38 AM Richard Hipp  wrote:
>
>> Please try your test script on the latest trunk check-in and let us
>> know whether or not it is working for you.
>>
>> --
>> D. Richard Hipp
>> d...@sqlite.org
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Thanks!

As a further note, I also tried both building the shell with JSON1 enabled and
building and loading the json1 extension separately, with the same results.

SQLite version 3.25.2 2018-09-25 19:08:10
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .load './json1.so'
sqlite> attach "test.db" as test;
sqlite> INSERT INTO test.people (name, age) VALUES ('Alice', 30), ('Bob',
42);
Error: no such table: test.json_each

/Lindsay


On Fri, Nov 2, 2018 at 10:00 AM Richard Hipp  wrote:

> On 11/2/18, Lindsay Lawrence  wrote:
> > Has anyone else run into this issue?
>
> I can confirm that it is an issue and that we are working on it.
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Has anyone else run into this issue?

I have the following tables and trigger

-- Data table
CREATE TABLE people (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
name TEXT,
age INTEGER
);

-- Change log table
CREATE TABLE change_log (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
action TEXT,
table_name TEXT,
obj_id INTEGER,
changes TEXT
);

-- Insert Trigger
CREATE TRIGGER people_track_insert
AFTER INSERT ON people
BEGIN
  INSERT INTO change_log (action, table_name, obj_id, changes)
  SELECT
'INSERT', 'people', NEW.id, changes
  FROM
(SELECT
  json_group_object(col, json_array(oldval, newval)) AS changes
FROM
  (SELECT
json_extract(value, '$[0]') as col,
json_extract(value, '$[1]') as oldval,
json_extract(value, '$[2]') as newval
  FROM
json_each(
  json_array(
json_array('id', null, NEW.id),
json_array('created', null, NEW.created),
json_array('name', null, NEW.name),
json_array('age', null, NEW.age)
  )
)
  WHERE oldval IS NOT newval
  )
);
END;

If I then do the following, the trigger works correctly, updating the
change_log table, etc.

$> sqlite3  test.db
sqlite> INSERT INTO people (name, age) VALUES ('Alice', 30), ('Bob', 42);
sqlite> UPDATE people SET age = age + 2;
sqlite> UPDATE people SET name = 'Eva' WHERE name='Alice';
sqlite> DELETE FROM people WHERE name = 'Bob';
sqlite> SELECT * FROM change_log;

However, the above queries against the same db, when 'attached', fail:

$> sqlite3
sqlite> attach './test.db' as test;
...

In the attached case any inserts into the table cause the trigger to fail
with something like the following error:

*Error: near line 1694: no such table: napp.json_each*

Somehow the virtual table json_each in the db trigger is not available for
an 'attached' db. It does work correctly for the main db. It also works for
an attached db as a normal query.
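
For reference, the same json_each() call does work when run directly against
the attached db; it only fails from inside the trigger. A minimal sketch (the
values are placeholders):

attach './test.db' as test;
-- Works when run directly:
SELECT json_extract(value, '$[0]') AS col,
       json_extract(value, '$[1]') AS oldval,
       json_extract(value, '$[2]') AS newval
  FROM json_each(json_array(json_array('name', NULL, 'Alice')));
-- But the same call inside the trigger body fails on INSERT with
-- "no such table: <schema>.json_each".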

I am running the latest amalgamation code with the json1 extension enabled

sqlite-amalgamation-3250200.zip
(2.17 MiB) C source code as an amalgamation, version 3.25.2.
(sha1: c9ff08b91a0faacabe2acb240e5dba3cf81071f3)

Compiled with:

gcc -Os -I. -DSQLITE_THREADSAFE=0 -DSQLITE_ENABLE_JSON1 -DHAVE_USLEEP
-DHAVE_READLINE shell.c sqlite3.c -ldl -lreadline -lncurses -Os -o sqlite3

/Lindsay
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] json_each() in a table trigger for an 'attached' db does not work

2018-11-02 Thread Lindsay Lawrence
Has anyone else run into this issue? Is there a workaround?

I have the following tables and trigger

-- Data table
CREATE TABLE people (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
name TEXT,
age INTEGER
);

-- Change log table
CREATE TABLE change_log (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
action TEXT,
table_name TEXT,
obj_id INTEGER,
changes TEXT
);

-- Insert Trigger
CREATE TRIGGER people_track_insert
AFTER INSERT ON people
BEGIN
  INSERT INTO change_log (action, table_name, obj_id, changes)
  SELECT
'INSERT', 'people', NEW.id, changes
  FROM
(SELECT
  json_group_object(col, json_array(oldval, newval)) AS changes
FROM
  (SELECT
json_extract(value, '$[0]') as col,
json_extract(value, '$[1]') as oldval,
json_extract(value, '$[2]') as newval
  FROM
json_each(
  json_array(
json_array('id', null, NEW.id),
json_array('created', null, NEW.created),
json_array('name', null, NEW.name),
json_array('age', null, NEW.age)
  )
)
  WHERE oldval IS NOT newval
  )
);
END;

If I then do the following, the trigger works correctly, updating the
change_log table, etc.

$> sqlite3  test.db
sqlite> INSERT INTO people (name, age) VALUES ('Alice', 30), ('Bob', 42);
sqlite> UPDATE people SET age = age + 2;
sqlite> UPDATE people SET name = 'Eva' WHERE name='Alice';
sqlite> DELETE FROM people WHERE name = 'Bob';
sqlite> SELECT * FROM change_log;

However, the above queries against the same db, when 'attached', fail:

$> sqlite3
sqlite> attach './test.db' as test;
...

In the attached case any inserts into the table cause the trigger to fail
with something like the following error:


*Error: near line 1694: no such table: napp.json_each*

Somehow the virtual table json_each in the db trigger is not available for
an 'attached' db. It does work correctly for the main db. It also works for
an attached db as a normal query.

I am running the latest amalgamation code with the json1 extension enabled

sqlite-amalgamation-3250200.zip
(2.17 MiB) C source code as an amalgamation, version 3.25.2.
(sha1: c9ff08b91a0faacabe2acb240e5dba3cf81071f3)

Compiled with:

gcc -Os -I. -DSQLITE_THREADSAFE=0 -DSQLITE_ENABLE_JSON1 -DHAVE_USLEEP
-DHAVE_READLINE shell.c sqlite3.c -ldl -lreadline -lncurses -Os -o sqlite3

/Lindsay
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple web query tool

2017-02-03 Thread Lindsay Lawrence
> "Sqlite just does it's thing with a minimum amount of fuss and minimal
impact on system resources, can't rave enough"

Yes! A while back, when I first tried what I outlined it worked so well I
took it a step further.

With nodejs async io, streams and a bit of javascript glue code, it is
relatively easy to manage a small process pool of sqlite shells,
eliminating the overhead of setup/teardown of cgi processes and still
keeping the same simple pipe interface. The result: a simple, robust,
high-performance, lightweight db server over http/s with minimal external
dependencies that runs on just about anything. Super flexible, more than
scalable for anything I have thrown at it so far... and it just works (
https://en.wikipedia.org/wiki/Unix_philosophy). NodeJS is not as
lightweight as it once was; the binary of recent versions is creeping up on
30MB, whereas not too long ago it was 6MB or so. SQLite has added some
incredible functionality in recent years and the shell cli still comes in
under 1MB for a nicely feature-full build. Apples and oranges. Still...

/Lindsay


On Fri, Feb 3, 2017 at 2:13 PM, Michael Falconer <
michael.j.falco...@gmail.com> wrote:

> >
> > Running the sqlite3 command-line shell via cgi works way better than you
> > may expect.
> >
>
> ​Yay verily, and that is really not doing a great tool justice. I've done a
> lot of similar things to what Lindsay outlines above both with web and
> application targets, Often these procedures are set up as proof of concept
> and in a sort of testing mode. I have found that the so-called test setup
> actually functioned more reliably and consistently than the application
> based code we eventually hacked up. Simple, reliable and very, very
> flexible. Sqlite just does it's thing with a minimum amount of fuss and
> minimal impact on system resources, can't rave enough. :-)​
>
>
> On 3 February 2017 at 18:29, Lindsay Lawrence <thinknl...@gmail.com>
> wrote:
>
> > Running the sqlite3 command-line shell via cgi works way better than you
> > may expect.
> > The command-line shell has a small footprint and works well with stdio in
> > batch mode.
> >
> > You can run a shell script that runs an instance of the cli shell and
> reads
> > and runs a .sql file.  The sql file and bash can be as complex as it
> needs
> > to be.  You can pass in params on the command-line by inserting env
> values
> > into a temp table and then using that table as necessary in subsequent
> sql.
> >
> > For example:
> > Configure your httpd for cgi then have a cgi script, say "*report.cgi*":
> >
> > #!/bin/bash
> > /path/to/sqlite3 -bail -batch "/path/to/my.s3db" ".read
> > /path/to/report.sql"
> >
> > and in *"/path/to/report.sql*"
> >
> > .mode html
> > .headers on
> > .print Content-Type: text/html
> > .print
> > .print 
> > select * from from report_view;
> > .print 
> >
> > For large datasets, or something you just want to import conveniently
> into
> > a spreadsheet, or another db, for further munging you could set csv mode
> > and/or force a download. As a note, unless you are sorting a very large
> > dataset the resource usage of all of this is quite low as sqlite just
> pipes
> > the dataset out over the http response as it is generated.
> >
> > /Lindsay
> >
> >
> > On Wed, Feb 1, 2017 at 8:10 AM, Jay Kreibich <j...@kreibi.ch> wrote:
> >
> > > I'm looking for an *extremely* simple web tool that will allow me to
> > > configure a dozen or so stored queries, which people can then select
> and
> > > run on an internal server.  If the system supports a query variable or
> > two,
> > > that would be fantastic, but I don't even need that.  Any thoughts?  Or
> > do
> > > I dust off the PHP tutorials and spend an afternoon throwing something
> > > together?
> > >
> > >  -j
> > >
> > >
> > > --
> > > Jay A. Kreibich < J A Y @ K R E I B I.C H >
> > >
> > > "Intelligence is like underwear: it is important that you have it, but
> > > showing it to the wrong people has the tendency to make them feel
> > > uncomfortable." -- Angela Johnson
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@mailinglists.sqlite.org
> > > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> > >
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
>
> --
> Regards,
>  Michael.j.Falconer.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple web query tool

2017-02-02 Thread Lindsay Lawrence
Running the sqlite3 command-line shell via cgi works way better than you
may expect.
The command-line shell has a small footprint and works well with stdio in
batch mode.

You can run a shell script that runs an instance of the cli shell and reads
and runs a .sql file.  The sql file and the bash script can be as complex as
they need to be.  You can pass in params on the command-line by inserting env
values into a temp table and then using that table as necessary in subsequent
sql (a sketch of that follows the example below).

For example:
Configure your httpd for cgi then have a cgi script, say "*report.cgi*":

#!/bin/bash
/path/to/sqlite3 -bail -batch "/path/to/my.s3db" ".read /path/to/report.sql"

and in *"/path/to/report.sql*"

.mode html
.headers on
.print Content-Type: text/html
.print
.print 
select * from report_view;
.print 
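
To illustrate the env-values-into-a-temp-table idea mentioned above, the cgi
wrapper could emit a couple of extra lines ahead of the .read (a sketch only;
the parameter name and the 'created' column are invented):

CREATE TEMP TABLE params(name TEXT, value TEXT);
-- emitted by the wrapper with a shell-quoted env value:
INSERT INTO params VALUES ('start_date', '2017-01-01');
select * from report_view
 where created >= (select value from params where name = 'start_date');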

For large datasets, or something you just want to import conveniently into
a spreadsheet, or another db, for further munging you could set csv mode
and/or force a download. As a note, unless you are sorting a very large
dataset the resource usage of all of this is quite low as sqlite just pipes
the dataset out over the http response as it is generated.
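
And for the csv/download case, the report script might look roughly like this
(a sketch; the header values and filename are just an illustration):

.mode csv
.headers on
.print Content-Type: text/csv
.print "Content-Disposition: attachment; filename=report.csv"
.print
select * from report_view;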

/Lindsay


On Wed, Feb 1, 2017 at 8:10 AM, Jay Kreibich <j...@kreibi.ch> wrote:

> I'm looking for an *extremely* simple web tool that will allow me to
> configure a dozen or so stored queries, which people can then select and
> run on an internal server.  If the system supports a query variable or two,
> that would be fantastic, but I don't even need that.  Any thoughts?  Or do
> I dust off the PHP tutorials and spend an afternoon throwing something
> together?
>
>  -j
>
>
> --
> Jay A. Kreibich < J A Y @ K R E I B I.C H >
>
> "Intelligence is like underwear: it is important that you have it, but
> showing it to the wrong people has the tendency to make them feel
> uncomfortable." -- Angela Johnson
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Appropriate Uses For SQLite

2015-02-19 Thread Lindsay Lawrence
I have found SQLite is uniquely suited to server-based applications of all
kinds. Wrapped in a simple custom server it is an outstandingly performant
ultra-lightweight engine that can easily service multiple clients. The
ability to spin up multiple instances of the library against the same file
is invaluable. It is the high performance core of a distributed system we
use. Zero configuration is a beautiful thing.

We have found it to be suitable for quite large data; recently loaded the
full Census 2010 and ACS survey datasets, over 0.5 billion rows, into a
single file totalling over 100GB. Properly indexed, it is instantly
queryable, allowing some fascinating data exploration and visualization. We
have also used it in scenarios where the ability to 'attach' distinct
database files allows the system to load only what it needs to service
particular queries and makes it simple to clone core tables and distribute
them over multiple systems for combining with related data.
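
To illustrate the 'attach' pattern (a sketch only; the database file and table
names here are invented):

ATTACH DATABASE '/data/census2010.db' AS census;
ATTACH DATABASE '/data/acs.db' AS acs;
SELECT c.geo_id, c.population, a.median_income
  FROM census.tracts AS c
  JOIN acs.income AS a ON a.geo_id = c.geo_id;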

Finally, it has replaced many traditional text-based data exploration tools
I once used... sed, awk, grep etc. Server logs, data exports from
'enterprise' systems, all manner of regularly structured data, are easily
ingested into a sqlite database file and, with the addition of simple
dynamically loaded plugins plus SQL, can be quickly explored and summarized.
Creating views off of the raw data, I can deliver a single compact
database file to a report writer who, using the ODBC driver, can make eye
candy reports off of the summarized data with Excel, Access or other
UI-based tools.
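
A typical ingestion session looks something like the following (a sketch; the
file name and columns are invented, and .import uses the first csv row as the
column names when the target table does not already exist):

.mode csv
.import access_log.csv log
CREATE VIEW daily_errors AS
  SELECT substr(ts, 1, 10) AS day, count(*) AS errors
    FROM log
   WHERE CAST(status AS INTEGER) >= 500
   GROUP BY day;
SELECT * FROM daily_errors;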

I will say, the recent addition of the 'WITH' clause to the language has
brought considerable expressive power to what I can write directly in SQL. I
love the fact that I can do so much with the SQL language itself now
without writing a line of low-level code, and the SQLite API makes it easy
to extend the functional api as needed.
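
For example, a recursive WITH makes it easy to generate a date series or
similar scaffolding directly in SQL. A minimal sketch:

WITH RECURSIVE days(d) AS (
  SELECT date('2015-01-01')
  UNION ALL
  SELECT date(d, '+1 day') FROM days WHERE d < '2015-01-31'
)
SELECT d FROM days;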

In short, the zero configuration, high performance and lightweight
characteristics of SQLite have made it uniquely suited for the distributed
'cloud' environment we run in where relational data can be fragmented and
distributed over many VMs and still be accessible with a common interface
and query language.

Many thanks.

Best Regards
Lindsay


On Wed, Feb 18, 2015 at 11:11 AM, Darko Volaric  wrote:

> I second this notion. I think SQLite is uniquely suited to server based
> applications of all kinds. Its light footprint and the fact that it's a
> library rather than a full system gives it a flexibility and raw
> performance that other systems cannot. We use it at the core of each node
> in a distributed and parallel system.
>
> When using SQLite the architecture of your database system is not
> preordained by designers who could not foresee novel designs and
> approaches. SQLite is like a systems programing language: It's lean and
> mean and a powerful tool that gives full control to the systems designer
> and programmer.
>
> The only thing I'd change about SQLite is the SQL bit. To me it's an
> anachronism and a mess and needs to be factored further out of the SQLite
> core, with a more rigorous formalism providing an interface (with an
> exposed and supported API) to the database engine, but at a higher level
> than say the virtual machine.
>
> On Wed, Feb 18, 2015 at 9:12 AM, Marcus Grimm 
> wrote:
>
> > We use sqlite as the db engine inside a server application
> > with a number of clients that connect to the server.
> > Sqlite works just beatiful here and I wish these statements
> > "sqlite shall not be used for client/server things" would be
> > worded less generally. In fact when we mention sqlite as our
> > db engine customer point to this restriction and we run into
> > an excuse sort of arguments.
> > On the bottom line: Sqlite CAN very well serve as the DB
> > engine for client/server applications, it just depend how
> > the api is used.
> >
> > Marcus
> >
> > Am 2015-02-18 15:34, schrieb Richard Hipp:
> >
> >> In a feeble effort to do "marketing", I have revised the "Appropriate
> >> Uses For SQLite" webpage to move trendy buzzwords like "Internet of
> >> Things" and "Edge of the Network" above the break.  See:
> >>
> >> https://www.sqlite.org/whentouse.html
> >>
> >> Please be my "focus group", and provide feedback, comments,
> >> suggestions, and/or criticism about the revised document.   Send your
> >> remarks back to this mailing list, or directly to me at the email in
> >> the signature.
> >>
> >> Thank you for your help.
> >>
> > ___
> > sqlite-users mailing list
> > sqlite-users at mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] sqlite3 shell .import bug (edge case)

2013-12-11 Thread Lindsay Lawrence
Hi,

I am currently using sqlite3 version:

3.8.1 2013-10-17 12:57:35 c78be6d786c19073b3a6730dfe3fb1be54f5657a

I found an edge case bug in the sqlite3 shell when importing csv data with
fields containing embedded quotes, CRs and LFs:

When a field contains an embedded quote, and that embedded quote is
immediately followed by an EOL, then the current csv parser will consider
the end of the field to have been reached and fail to import the row correctly.

For example, the following csv (a single data row with column headers) will
fail to import correctly.

column1,column2,column3,column4
fielddata1,fielddata2,"field ""
data"" 3",fielddata4


The offending code is in shell.c in function

static char *csv_read_one_field(CSVReader *p);

An example fix could be something like the following -- adding a counter
flag 'cQuoteComplete' to track quotes properly.

static char *csv_read_one_field(CSVReader *p){
  int c, pc;
  int cSep = p->cSeparator;
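  /* cQuoteComplete counts the quote characters seen in a quoted field; an
  ** even count means the quotes are balanced and a separator or EOL can
  ** safely terminate the field. */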
  int cQuoteComplete = 0;
  p->n = 0;
  c = fgetc(p->in);
  if( c==EOF || seenInterrupt ){
p->cTerm = EOF;
return 0;
  }
  if( c=='"' ){
int startLine = p->nLine;
int cQuote = c;
cQuoteComplete+=1;
pc = 0;
while( 1 ){
  c = fgetc(p->in);
  if( c=='\n' ) p->nLine++;
  if( c==cQuote ){
cQuoteComplete+=1;
if( pc==cQuote ){
  pc = 0;
  continue;
}
  }
  if( (c==cSep && pc==cQuote)
   || (c=='\n' && pc==cQuote)
   || (c=='\n' && pc=='\r' && p->n>=2 && p->z[p->n-2]==cQuote)
   || (c==EOF && pc==cQuote)
  ){
if (cQuoteComplete%2 == 0) {
  do{ p->n--; }while( p->z[p->n]!=cQuote );
  p->cTerm = c;
  break;
}
  }
  if( c==EOF ){
fprintf(stderr, "%s:%d: unterminated %c-quoted field\n",
p->zFile, startLine, cQuote);
p->cTerm = EOF;
break;
  }
  csv_append_char(p, c);
  pc = c;
}
if( cQuoteComplete%2 != 0 ){
  fprintf(stderr, "%s:%d: unescaped %c character\n",
  p->zFile, p->nLine, cQuote);
}
  }else{
while( c!=EOF && c!=cSep && c!='\n' ){
  csv_append_char(p, c);
  c = fgetc(p->in);
}
if( c=='\n' ){
  p->nLine++;
  if( p->n>1 && p->z[p->n-1]=='\r' ) p->n--;
}
p->cTerm = c;
  }
  if( p->z ) p->z[p->n] = 0;
  return p->z;
}


I built the sqlite3 shell from the amalgamation source and tested the above
change using my import data; about 1GB of messy, but RFC 4180 compliant,
CSV. It all imported cleanly.

sqlite3 is a wonderful bit of software. I have been using it for some time
now to munge and query multi-gigabyte size data sets and am very impressed
with its performance and capabilities.

Best Regards,
Lindsay
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Request for help with SQLite Query to return missing Date/Time Ranges

2011-04-20 Thread Andrew Lindsay
Dear Group,

I am trying to search an SQL database that is meant to have entries logged
every minute for a period of approximately 15 months.

I want to create a query that will search through the database and tell me
for which periods I do not have any entries.

Any assistance would be greatly appreciated.

Regards

Andrew Lindsay

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLl question

2009-05-15 Thread Patty Lindsay
Try

select n.name, p.type, p.addr
from p
join n on p.id = n.id
union
select n.name, e.type, e.addr
from e
join n on e.id = n.id

Patty

On Fri, May 15, 2009 at 12:14 AM, Evan Burkitt  wrote:
> This isn't a Sqlite question per se, but I know there are some SQL gurus
> here who might have some insight into this problem. I apologize for
> being off-topic; I can be shameless when I need help. :)>
>
> I have three tables, N, P and E. N contains the fields id and name. The
> other two each contain the fields id, type and addr. P holds phone
> numbers, E email addresses. In P, the type field is always 'phone'; in
> E it is always 'email'. They are all related on id.
>
> I want to build a single query that will return a result set consisting
> of N.name, P/E.type and P/E.addr. That is, it contains the like-named
> fields of both P and E. For example:
>
> -name -type--- -addr-
> "John Smith", "phone", "123-555-1212"
> "John Smith", "email", "john.sm...@domain.com"
> "Bill Jones", "phone", "123-555-1213"
> "Jane Johnson", "email", "j...@anotherdomain.com"
>
> and so forth. The order of the names and types is not important.
>
> Is this possible?
>
> -evan
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Execute import from php

2009-04-05 Thread Patty Lindsay
I'd like to import a file into a table from php.  Is there any way to
execute the .import from php?  I am using the PDO interface.

Patty
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index in seperate db file, low impact change?

2006-06-15 Thread Lindsay
Russell Leighton wrote:
>
> Big distruptive code change?

Also, more opportunities for things to go wrong with mismatched index/db files.
Also, you can no longer copy/move the db as a single file.


-- 
Lindsay




Re: [sqlite] Re: LIMIT and paging records

2006-05-29 Thread Lindsay
Igor Tandetnik wrote:
>
> By the way, for most queries you will find that the execution time
> grows as the OFFSET grows. To implement OFFSET X, the engine would
> enumerate records from the beginning and simply ignore the first X-1.
> Retrieving the last pageful often takes almost as long as retrieving
> all the records.

I have noticed exactly that behaviour recently in MySQL and MSSQL (where
it had to be emulated). This was when implementing web paging access,
as the original poster is doing.



-- 
Lindsay




Re: [sqlite] uSQLite, doing HTTP

2005-11-17 Thread Lindsay
[EMAIL PROTECTED] wrote:

>> Agree in principle with what you're saying here as well, one suggestion
>> - perhaps you could mod your server to accept/general queries/output via
>> standard in/out. That way it could be piped over a multitude of channels.
>>
>> Cheers - Lindsay
>
> The architecture would be all wrong for doing that. It would be easy to
> make a very little program which passes queries on stdin to the
> sqlite3_exec routine and then use the callback function from uSQLite to
> reply in the same format.
>
> Thing is, you only have one standard in and out, it could only do 1
> client.

A lot of CGI programs work that way - PHP can, for one. It's more
commonly invoked as a DLL plugin these days, but it used to be, and still
can be, invoked via cmd line pipes. A new instance of php is invoked for
each request.

-- 
Lindsay





Re: [sqlite] uSQLiteServer Source code available

2005-11-15 Thread Lindsay
[EMAIL PROTECTED] wrote:

> HTTP would be the logical choice, telnet is often firewalled. BTW, you
> can of course allready telnet into the server anyway.
>
> I will try doing something like this, but I have never implemented the
> server side of a HTML connection. Do you know where I can find a quick
> guide to making a simple HTTP server? Better still, what about a small
> simple ready made HTTP server of the sort that can be embedded into the
> application and has hooks to callback into user code?

Seeing as PHP5 has SQLite builtin and PHP4 as a plugin, you can
generate it via a PHP script - I have actually done this for a
webclient talking to MySQL & MSSQL.

I'd recommend against using XML; it sounds nice but greatly
complicates the clients for, I feel, no great benefit in this instance.
Basically you're returning tuple result sets which can be easily and
efficiently addressed via an ASCII line-based protocol.

JavaScript has excellent functions for parsing ASCII data, but support
for XML parsing is erratic; in IE it's done via an ActiveX object.

-- 
Lindsay





Re: [sqlite] Lite Network wrapper

2005-11-13 Thread Lindsay
[EMAIL PROTECTED] wrote:

> Yes, Webapps may benefit from such a light and easy protocol.
> Unfortunately, AFAIK, browser shells do not make sockets available to
> scripts such as JavaScript, so you will need a plugin like Java or
> Flash.

Or tunnel it over http. I say this because I actually use a very
similar custom protocol over http to feed db data to a web app of mine.

-- 
Lindsay





Re: [sqlite] Lite Network wrapper

2005-11-10 Thread Lindsay
[EMAIL PROTECTED] wrote:

> This is a wrapper for SQLite that allows multiple connections accross a
> network to a single threaded SQLIte server (all clients are serviced on
> the same connection). It also adds a simple security layer.

Very cool, wish I had more time to evaluate it :( Interested in the
protocol, it looks simple enough to integrate into a webapp.

-- 
Lindsay





Re: [sqlite] Proposed 3.3.0 changes. Was: 5/2==2

2005-11-01 Thread Lindsay
[EMAIL PROTECTED] wrote:

> I admit that it has been 4 decades since I was in grade school,
> but back in the 60's we were taught that 5/2 is 2.5.  Are they
> teaching something different now?  Or have I misunderstood the
> comment?

Ah, but how does the 5 *feel* about being divided by 2? Is the 2
oppressed by being under the 5? How is this the fault of the
patriarchy? Discuss.

-- 
Lindsay





Re: [sqlite] Convert SQLite to Java

2005-10-24 Thread Lindsay
Steve O'Hara wrote:

> The attraction of a pure Java solution is obvious - if a client wants
> this app to run on VMS then I've got to create another JNI library that
> is VMS specific.  Id you've ever written any JNI you'll know why this
> isn't an all together pleasurable experience.

Also - and this is *very* relevant for some platforms - sometimes you
don't have access to C or DLL/libs on a platform. I work on MEAP
(embedded Java on Canon printers) and we have to get our Java source
OK'd by Canon before we are given a key for installing it on general
Canon hardware. No chance at all that we'd be allowed to upload DLLs
or compile C on the platform.

-- 
Lindsay





Re: [sqlite] Convert SQLite to Java

2005-10-24 Thread Lindsay
[EMAIL PROTECTED] wrote:

> hsql is pretty smart in java, the storage is editable by texpad it is only
> consisted of sql statements. It s an in memory only DB

It can store and use an on-disk db as well.

I was evaluating it for an embedded DB; it seemed pretty easy to use. I
haven't used it in anger yet though.

-- 
Lindsay





Re: [sqlite] Speed Test Done !

2005-10-05 Thread Lindsay
Richard wrote:

> Still have found no sql program yet, that can beat Panorama in speed.

Since Panorama is RAM based, how about if you did your speed test with a
SQLite in-memory database?

--
Lindsay




Re: [sqlite] Database is locked on SELECT only calls?

2005-06-22 Thread Lindsay

D. Richard Hipp wrote:

> Please run your program in a debugger and tell
> me what the subroutine "isNT()" returns in os_win.c.
> If it is returning false, that would explain your
> occasional inability to get a read lock on a
> read-only database.
>
> Perhaps a windows programmer can suggest a better
> implementation of the isNT() function?



Haven't looked at your source, but the following has never failed:

bool IsNT()
{
   DWORD ver = GetVersion();
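   // GetVersion() sets the high bit (0x80000000) of its result on
   // Win95/98/ME, so bit 31 set means this is not an NT-based system.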
   if (ver & 0x80000000)
   return false;
   else
   return true;
};

bool Is95()
{
   return ! IsNT();
};

This is used in our app on 1000s of PCs; if it failed, the PCs would
crash - badly. Believe me, I would know ...


--
Lindsay




Re: [sqlite] Newbie questions

2004-05-17 Thread Lindsay Mathieson
Kurt Welgehausen wrote:

>> 2) use a cursor to browse a result set ? ...
>
> No.

The sqlite_compile/step/finalize paradigm allows (requires, really)
forward browsing through a result set.


[sqlite] ANN : wxSQLite has moved

2004-03-07 Thread Lindsay Mathieson
wxSQLite has moved to http://sourceforge.net/projects/wxsqlite and is 
currently in an active phase of development. No file releases yet (soon!
when I get the bind/reset API finalised).

CVS is usable though: linux (via configure & make install) and a Win32 
VC project are supported.



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [sqlite] Sqlite & COM/EXE server

2004-01-05 Thread Lindsay Mathieson
Eugene Lin wrote:

> Bert,
>
>> It is a COM-related problem, not a SQLITE problem
>
> I can now tell you that it IS a sqlite problem NOT a COM problem. Sqlite
> is trying to create its temporary database at some location (which I'm not
> sure where) and it failed. I have found that you can force sqlite to store
> its temporary database in memory. Once I have done that, the problem has
> gone!

Well my guess (from reading the previous emails) is that it is neither a
COM nor a SQLite problem - basically a lack of understanding re users,
services and NT security.

If your com server is running as a normal service (not interactive or
logged on) then it has no user profile. Which means it cannot access any
network directories etc; also it will have no user environment settings
such as temporary directories it can access. This is why setting the temp
dir to memory works.

The easy solution is to have the service log on as a user, either an
existing one or create a user account for it.

Alternatively you could create a temp directory that's globally
read/writable and have the service use that as its temp dir.


Re: [sqlite] Read Notifications

2003-10-29 Thread Lindsay Mathieson
Bert Verhees wrote:

> Hi,
>
> Since we are using the new List-headers on sqlite-list, I am
> experiencing two disadvantages.
>
> I recieve many emails twice, please do not send your emails twice, but
> manually remove one from the To or CC when your emailer does not
> support List-reply.

But at least we can rest easy knowing that our list adheres to someone's
idea of a std - despite it not being in any RFC anywhere. Who cares
about the duplicates cluttering our inboxes.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]