Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
sqlite> INSERT INTO people (name, age) VALUES ('Alice', 30), ('Bob', 42);
sqlite> UPDATE people SET age = age + 2;
sqlite> UPDATE people SET name = 'Eva' WHERE name='Alice';
sqlite> DELETE FROM people WHERE name = 'Bob';
sqlite> SELECT * FROM change_log;
1|2018-11-03 00:56:22|INSERT|people|1|{"id":[null,1],"created":[null,"2018-11-03 00:56:22"],"name":[null,"Alice"],"age":[null,30]}
2|2018-11-03 00:56:22|INSERT|people|2|{"id":[null,2],"created":[null,"2018-11-03 00:56:22"],"name":[null,"Bob"],"age":[null,42]}
3|2018-11-03 00:56:22|UPDATE|people|1|{"age":[30,32]}
4|2018-11-03 00:56:22|UPDATE|people|2|{"age":[42,44]}
5|2018-11-03 00:56:22|UPDATE|people|1|{"name":["Alice","Eva"]}
6|2018-11-03 00:56:22|DELETE|people|2|{"id":[2,null],"created":["2018-11-03 00:56:22",null],"name":["Bob",null],"age":[44,null]}
sqlite>


On Fri, Nov 2, 2018 at 5:11 PM Lindsay Lawrence 
wrote:

> Hi,
>
> Sorry. I have had no luck with this.
>
> I followed the instructions here
> https://www.sqlite.org/getthecode.html#clone
> then built from source following the directions in README.md.
>
>   mkdir bld;#  Build will occur in a sibling directory
>   cd bld   ;#  Change to the build directory
>   ../sqlite/configure  ;#  Run the configure script
>   make ;#  Run the makefile.
>   make sqlite3.c   ;#  Build the "amalgamation" source file
>   make test;#  Run some tests (requires Tcl)
>
> This is the platform I am building on 'uname -a':
>
> Linux debianE7440-LT 3.16.0-6-amd64 #1 SMP Debian 3.16.57-2 (2018-07-14)
> x86_64 GNU/Linux
>
> The current trunk source however does not have the JSON1 extension
> embedded in the amalgamation by default. How would I add that?
> However, building the json1 extension against this source and then loading
> it from the shell reports this:
>
> SQLite version 3.8.10.1 2015-05-13 04:50:30
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> .version
> SQLite 3.8.10.1 2015-05-13 04:50:30
> 59e3e9e764440b7feaafadff74f422535d21bca2
> sqlite> .load "../../../bin/json1"
> Error: ../../../bin/json1.so: undefined symbol:
> sqlite3_create_window_function
> sqlite>
>
> I'll tinker with it a bit more, but at this point I am not sure whether it
> is a build configuration issue on my end while trying to test your fix, or
> whether the fix itself is the issue.
>
> Regards,
> /Lindsay
>
>
> On Fri, Nov 2, 2018 at 10:38 AM Richard Hipp  wrote:
>
>> Please try your test script on the latest trunk check-in and let us
>> know whether or not it is working for you.
>>
>> --
>> D. Richard Hipp
>> d...@sqlite.org
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Hi,

Sorry. I have had no luck with this.

I followed the instructions here
https://www.sqlite.org/getthecode.html#clone
then built from source following the directions in README.md.

  mkdir bld;#  Build will occur in a sibling directory
  cd bld   ;#  Change to the build directory
  ../sqlite/configure  ;#  Run the configure script
  make ;#  Run the makefile.
  make sqlite3.c   ;#  Build the "amalgamation" source file
  make test;#  Run some tests (requires Tcl)

This is the platform I am building on 'uname -a':

Linux debianE7440-LT 3.16.0-6-amd64 #1 SMP Debian 3.16.57-2 (2018-07-14)
x86_64 GNU/Linux

The current trunk source however does not have the JSON1 extension embedded
in the amalgamation by default. How would I add that?
However, building the json1 extension against this source and then loading
it from the shell reports this:

SQLite version 3.8.10.1 2015-05-13 04:50:30
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .version
SQLite 3.8.10.1 2015-05-13 04:50:30 59e3e9e764440b7feaafadff74f422535d21bca2
sqlite> .load "../../../bin/json1"
Error: ../../../bin/json1.so: undefined symbol:
sqlite3_create_window_function
sqlite>

I'll tinker with it a bit more, but at this point I am not sure whether it
is a build configuration issue on my end while trying to test your fix, or
whether the fix itself is the issue.

Regards,
/Lindsay


On Fri, Nov 2, 2018 at 10:38 AM Richard Hipp  wrote:

> Please try your test script on the latest trunk check-in and let us
> know whether or not it is working for you.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
P.S.

Here are the first few lines of 'fossil timeline' on my local that I am
building.

=== 2018-11-02 ===
17:38:39 [1fa74930ab] Enhance triggers so that they can use
table-valued-functions that exist in schemas outside of the schema in which
the trigger is defined. (user: drh)
=== 2018-10-31 ===
20:52:00 [4a6ad5190b] Deploy the sqlite3Strlen30NN() function (argument
guaranteed to be non-NULL) for a small performance improvement. (user: drh)
19:01:13 [790ea39a65] *MERGE* Add support for the SQLITE_PREPARE_NORMALIZED
flag and the sqlite3_normalized_sql() when compiling with
SQLITE_ENABLE_NORMALIZE. Also remove unnecessary whitespace from Makefiles.
 (user: drh)
18:24:29 [7107f0dacf] Tweaks to the test_intarray documentation and tests.
(user: drh)
01:26:24 [4b370c74ae] *MERGE* Merge fixes from trunk, especially robustness
against corrupt database files. (user: drh tags: apple-osx)
01:12:06 [e0d30c1862] *MERGE* Merge fuzz test cases computed by dbfuzz2.
(user: drh)
01:04:18 [d57873337a] Improved corrupt database detection in the
relocatePage() routine of the b-tree module. (user: drh)

/Lindsay


On Fri, Nov 2, 2018 at 4:49 PM Lindsay Lawrence 
wrote:

> Hi,
>
> Sorry. I have had no luck with this.
>
> I followed the instructions here
> https://www.sqlite.org/getthecode.html#clone
> then built from source following the directions in README.md.
>
>   mkdir bld;#  Build will occur in a sibling directory
>   cd bld   ;#  Change to the build directory
>   ../sqlite/configure  ;#  Run the configure script
>   make ;#  Run the makefile.
>   make sqlite3.c   ;#  Build the "amalgamation" source file
>   make test;#  Run some tests (requires Tcl)
>
> The attached file has the results from running 'make tests'. There are
> some errors there.
>
> This is the platform I am building on 'uname -a':
>
> Linux debianE7440-LT 3.16.0-6-amd64 #1 SMP Debian 3.16.57-2 (2018-07-14)
> x86_64 GNU/Linux
>
> The current trunk source however does not have the JSON1 extension
> embedded in the amalgamation by default. How would I add that?
> However, building the json1 extension against this source and then loading
> it from the shell reports this:
>
> SQLite version 3.8.10.1 2015-05-13 04:50:30
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> .version
> SQLite 3.8.10.1 2015-05-13 04:50:30
> 59e3e9e764440b7feaafadff74f422535d21bca2
> sqlite> .load "../../../bin/json1"
> Error: ../../../bin/json1.so: undefined symbol:
> sqlite3_create_window_function
> sqlite>
>
> I'll tinker with it a bit more, but at this point I am not sure whether it
> is a build configuration issue on my end while trying to test your fix, or
> whether the fix itself is the issue.
>
> Regards,
> /Lindsay
>
>
> On Fri, Nov 2, 2018 at 10:38 AM Richard Hipp  wrote:
>
>> Please try your test script on the latest trunk check-in and let us
>> know whether or not it is working for you.
>>
>> --
>> D. Richard Hipp
>> d...@sqlite.org
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Thanks!

As a further note, I also tried building the shell with JSON1 enabled and
building and loading the json1 extension separately, with the same results.

SQLite version 3.25.2 2018-09-25 19:08:10
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .load './json1.so'
sqlite> attach "test.db" as test;
sqlite> INSERT INTO test.people (name, age) VALUES ('Alice', 30), ('Bob', 42);
Error: no such table: test.json_each

/Lindsay


On Fri, Nov 2, 2018 at 10:00 AM Richard Hipp  wrote:

> On 11/2/18, Lindsay Lawrence  wrote:
> > Has anyone else run into this issue?
>
> I can confirm that it is an issue and that we are working on it.
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] json_each() in a table trigger for an 'attached' db causes an error

2018-11-02 Thread Lindsay Lawrence
Has anyone else run into this issue?

I have the following tables and trigger

-- Data table
CREATE TABLE people (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
name TEXT,
age INTEGER
);

-- Change log table
CREATE TABLE change_log (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
action TEXT,
table_name TEXT,
obj_id INTEGER,
changes TEXT
);

-- Insert Trigger
CREATE TRIGGER people_track_insert
AFTER INSERT ON people
BEGIN
  INSERT INTO change_log (action, table_name, obj_id, changes)
  SELECT
'INSERT', 'people', NEW.id, changes
  FROM
(SELECT
  json_group_object(col, json_array(oldval, newval)) AS changes
FROM
  (SELECT
json_extract(value, '$[0]') as col,
json_extract(value, '$[1]') as oldval,
json_extract(value, '$[2]') as newval
  FROM
json_each(
  json_array(
json_array('id', null, NEW.id),
json_array('created', null, NEW.created),
json_array('name', null, NEW.name),
json_array('age', null, NEW.age)
  )
)
  WHERE oldval IS NOT newval
  )
);
END;

If I then do the following, the trigger works correctly, updating the
change_log table, etc.

$> sqlite3  test.db
sqlite> INSERT INTO people (name, age) VALUES ('Alice', 30), ('Bob', 42);
sqlite> UPDATE people SET age = age + 2;
sqlite> UPDATE people SET name = 'Eva' WHERE name='Alice';
sqlite> DELETE FROM people WHERE name = 'Bob';
sqlite> SELECT * FROM change_log;

However, the same queries against the same db, when 'attached', fail:

$> sqlite3
sqlite> attach './test.db' as test;
...

In the attached case any inserts into the table cause the trigger to fail
with something like the following error:

*Error: near line 1694: no such table: napp.json_each*

Somehow the virtual table json_each in the db trigger is not available for
an 'attached' db. It does work correctly for the main db. It also works for
an attached db as a normal query.
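
For example, a direct query along these lines (reusing the schema above; the
json payload is just illustrative) runs fine against the attached db:

sqlite> attach './test.db' as test;
sqlite> SELECT t.name, j.value
   ...>   FROM test.people AS t, json_each(json_array(t.name, t.age)) AS j;

It is only when json_each() is reached from inside the trigger defined in the
attached schema that the 'no such table' error comes up.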

I am running the latest amalgamation code with the json1 extension enabled

sqlite-amalgamation-3250200.zip
(2.17 MiB) C source code as an amalgamation, version 3.25.2.
(sha1: c9ff08b91a0faacabe2acb240e5dba3cf81071f3)

Compiled with:

gcc -Os -I. -DSQLITE_THREADSAFE=0 -DSQLITE_ENABLE_JSON1 -DHAVE_USLEEP
-DHAVE_READLINE shell.c sqlite3.c -ldl -lreadline -lncurses -Os -o sqlite3

/Lindsay
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] json_each() in a table trigger for an 'attached' db does not work

2018-11-02 Thread Lindsay Lawrence
Has anyone else run into this issue? Is there a workaround?

I have the following tables and trigger

-- Data table
CREATE TABLE people (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
name TEXT,
age INTEGER
);

-- Change log table
CREATE TABLE change_log (
id INTEGER PRIMARY KEY,
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
action TEXT,
table_name TEXT,
obj_id INTEGER,
changes TEXT
);

-- Insert Trigger
CREATE TRIGGER people_track_insert
AFTER INSERT ON people
BEGIN
  INSERT INTO change_log (action, table_name, obj_id, changes)
  SELECT
'INSERT', 'people', NEW.id, changes
  FROM
(SELECT
  json_group_object(col, json_array(oldval, newval)) AS changes
FROM
  (SELECT
json_extract(value, '$[0]') as col,
json_extract(value, '$[1]') as oldval,
json_extract(value, '$[2]') as newval
  FROM
json_each(
  json_array(
json_array('id', null, NEW.id),
json_array('created', null, NEW.created),
json_array('name', null, NEW.name),
json_array('age', null, NEW.age)
  )
)
  WHERE oldval IS NOT newval
  )
);
END;

If I then do the following, the trigger works correctly, updating the
change_log table, etc.

$> sqlite3  test.db
sqlite> INSERT INTO people (name, age) VALUES ('Alice', 30), ('Bob', 42);
sqlite> UPDATE people SET age = age + 2;
sqlite> UPDATE people SET name = 'Eva' WHERE name='Alice';
sqlite> DELETE FROM people WHERE name = 'Bob';
sqlite> SELECT * FROM change_log;

However, the same queries against the same db, when 'attached', fail:

$> sqlite3
sqlite> attach './test.db' as test;
...

In the attached case any inserts into the table cause the trigger to fail
with something like the following error:


*Error: near line 1694: no such table: napp.json_each*

Somehow the virtual table json_each in the db trigger is not available for
an 'attached' db. It does work correctly for the main db. It also works for
an attached db as a normal query.

I am running the latest amalgamation code with the json1 extension enabled

sqlite-amalgamation-3250200.zip
(2.17 MiB) C source code as an amalgamation, version 3.25.2.
(sha1: c9ff08b91a0faacabe2acb240e5dba3cf81071f3)

Compiled with:

gcc -Os -I. -DSQLITE_THREADSAFE=0 -DSQLITE_ENABLE_JSON1 -DHAVE_USLEEP
-DHAVE_READLINE shell.c sqlite3.c -ldl -lreadline -lncurses -Os -o sqlite3

/Lindsay
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple web query tool

2017-02-03 Thread Lindsay Lawrence
> "Sqlite just does it's thing with a minimum amount of fuss and minimal
impact on system resources, can't rave enough"

Yes! A while back, when I first tried what I outlined it worked so well I
took it a step further.

With nodejs async io, streams and a bit of javascript glue code, it is
relatively easy to manage a small process pool of sqlite shells...
eliminating the overhead of setup/teardown of cgi processes and still
keeping the same simple pipe interface. The result, a simple, robust, high
performance, lightweight db server over http/s with minimal external
dependencies that runs on just about anything. Super flexible, more than
scalable for anything I have thrown at it so far... and it just works. (
https://en.wikipedia.org/wiki/Unix_philosophy).  NodeJS is not as
lightweight as it once was; the binary of recent versions is creeping up on
30MB, where not too long ago it was 6MB or so. SQLite has added some
incredible functionality in recent years and the shell cli still comes in
under 1MB for a nicely feature-full build. Apples and oranges. Still...

/Lindsay


On Fri, Feb 3, 2017 at 2:13 PM, Michael Falconer <
michael.j.falco...@gmail.com> wrote:

> >
> > Running the sqlite3 command-line shell via cgi works way better than you
> > may expect.
> >
>
> Yay verily, and that is really not doing a great tool justice. I've done a
> lot of similar things to what Lindsay outlines above both with web and
> application targets. Often these procedures are set up as proof of concept
> and in a sort of testing mode. I have found that the so-called test setup
> actually functioned more reliably and consistently than the application
> based code we eventually hacked up. Simple, reliable and very, very
> flexible. Sqlite just does its thing with a minimum amount of fuss and
> minimal impact on system resources, can't rave enough. :-)
>
>
> On 3 February 2017 at 18:29, Lindsay Lawrence <thinknl...@gmail.com>
> wrote:
>
> > Running the sqlite3 command-line shell via cgi works way better than you
> > may expect.
> > The command-line shell has a small footprint and works well with stdio in
> > batch mode.
> >
> > You can run a shell script that runs an instance of the cli shell and
> reads
> > and runs a .sql file.  The sql file and bash can be as complex as it
> needs
> > to be.  You can pass in params on the command-line by inserting env
> values
> > into a temp table and then using that table as necessary in subsequent
> sql.
> >
> > For example:
> > Configure your httpd for cgi then have a cgi script, say "*report.cgi*":
> >
> > #!/bin/bash
> > /path/to/sqlite3 -bail -batch "/path/to/my.s3db" ".read
> > /path/to/report.sql"
> >
> > and in *"/path/to/report.sql*"
> >
> > .mode html
> > .headers on
> > .print Content-Type: text/html
> > .print
> > .print 
> > select * from report_view;
> > .print 
> >
> > For large datasets, or something you just want to import conveniently
> into
> > a spreadsheet, or another db, for further munging you could set csv mode
> > and/or force a download. As a note, unless you are sorting a very large
> > dataset the resource usage of all of this is quite low as sqlite just
> pipes
> > the dataset out over the http response as it is generated.
> >
> > /Lindsay
> >
> >
> > On Wed, Feb 1, 2017 at 8:10 AM, Jay Kreibich <j...@kreibi.ch> wrote:
> >
> > > I'm looking for an *extremely* simple web tool that will allow me to
> > > configure a dozen or so stored queries, which people can then select
> and
> > > run on an internal server.  If the system supports a query variable or
> > two,
> > > that would be fantastic, but I don't even need that.  Any thoughts?  Or
> > do
> > > I dust off the PHP tutorials and spend an afternoon throwing something
> > > together?
> > >
> > >  -j
> > >
> > >
> > > --
> > > Jay A. Kreibich < J A Y @ K R E I B I.C H >
> > >
> > > "Intelligence is like underwear: it is important that you have it, but
> > > showing it to the wrong people has the tendency to make them feel
> > > uncomfortable." -- Angela Johnson
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@mailinglists.sqlite.org
> > > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> > >
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
>
> --
> Regards,
>  Michael.j.Falconer.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple web query tool

2017-02-02 Thread Lindsay Lawrence
Running the sqlite3 command-line shell via cgi works way better than you
may expect.
The command-line shell has a small footprint and works well with stdio in
batch mode.

You can run a shell script that runs an instance of the cli shell and reads
and runs a .sql file.  The sql file and bash can be as complex as it needs
to be.  You can pass in params on the command-line by inserting env values
into a temp table and then using that table as necessary in subsequent sql.

For example:
Configure your httpd for cgi then have a cgi script, say "*report.cgi*":

#!/bin/bash
/path/to/sqlite3 -bail -batch "/path/to/my.s3db" ".read /path/to/report.sql"

and in *"/path/to/report.sql*"

.mode html
.headers on
.print Content-Type: text/html
.print
.print 
select * from report_view;
.print 
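
To feed in a parameter, the cgi script can emit a couple of lines of sql
ahead of the report query, along these lines (the param name and the age
filter here are just placeholders, not part of the example above):

CREATE TEMP TABLE IF NOT EXISTS params(name TEXT PRIMARY KEY, value TEXT);
-- value pulled from the environment (e.g. QUERY_STRING) by the wrapper script
INSERT INTO params(name, value) VALUES ('min_age', '30');

-- the report sql can then read it back:
SELECT * FROM report_view
WHERE age >= (SELECT CAST(value AS INTEGER) FROM params WHERE name = 'min_age');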

For large datasets, or something you just want to import conveniently into
a spreadsheet, or another db, for further munging you could set csv mode
and/or force a download. As a note, unless you are sorting a very large
dataset the resource usage of all of this is quite low as sqlite just pipes
the dataset out over the http response as it is generated.
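
A csv/download variant of the report script might look roughly like this
(the extra header simply prompts the browser to save the result as a file):

.mode csv
.headers on
.print Content-Type: text/csv
.print Content-Disposition: attachment; filename=report.csv
.print
select * from report_view;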

/Lindsay


On Wed, Feb 1, 2017 at 8:10 AM, Jay Kreibich  wrote:

> I'm looking for an *extremely* simple web tool that will allow me to
> configure a dozen or so stored queries, which people can then select and
> run on an internal server.  If the system supports a query variable or two,
> that would be fantastic, but I don't even need that.  Any thoughts?  Or do
> I dust off the PHP tutorials and spend an afternoon throwing something
> together?
>
>  -j
>
>
> --
> Jay A. Kreibich < J A Y @ K R E I B I.C H >
>
> "Intelligence is like underwear: it is important that you have it, but
> showing it to the wrong people has the tendency to make them feel
> uncomfortable." -- Angela Johnson
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Appropriate Uses For SQLite

2015-02-19 Thread Lindsay Lawrence
I have found SQLite is uniquely suited to server-based applications of all
kinds. Wrapped in a simple custom server it is an outstandingly performant
ultra-lightweight engine that can easily service multiple clients. The
ability to spin up multiple instances of the library against the same file
is invaluable. It is the high performance core of a distributed system we
use. Zero configuration is a beautiful thing.

We have found it to be suitable for quite large data; recently loaded the
full Census 2010 and ACS survey datasets, over 0.5 billion rows, into a
single file totalling over 100GB. Properly indexed, it is instantly
queryable, allowing some fascinating data exploration and visualization. We
have also used it in scenarios where the ability to 'attach' distinct
database files allows the system to load only what it needs to service
particular queries and makes it simple to clone core tables and distribute
them over multiple systems for combining with related data.
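
A sketch of that 'attach' pattern (file and table names here are only
placeholders):

ATTACH DATABASE '/data/census_2010.s3db' AS census;
ATTACH DATABASE '/data/acs_survey.s3db' AS acs;

SELECT c.tract_id, c.population, a.median_income
FROM census.tracts AS c
JOIN acs.income AS a ON a.tract_id = c.tract_id;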

Finally, it has replaced many traditional text-based data exploration tools
I once used... sed, awk, grep etc. Server logs, data exports from
'enterprise' systems, all manner of regularly structured data, are easily
ingested into a sqlite database file and with the addition of simple
dynamically loaded plugins + SQL can be quickly explored, data summarized,
etc. Creating views off of the raw data, I can deliver a single compact
database file to a report writer who using the ODBC driver can make eye
candy reports off of the summarized data with Excel, Access or other UI
based tools.
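
As a rough illustration of that workflow (table and column names are made up):

-- raw_log holds the ingested server log; summary is what the report writer sees
CREATE VIEW summary AS
SELECT date(ts) AS day, status, count(*) AS requests
FROM raw_log
GROUP BY date(ts), status;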

I will say, the recent addition of the 'WITH' clause to the language has
brought considerable expressive power to what I can write directly in SQL. I
love the fact that I can do so much with the SQL language itself now
without writing a line of low-level code and the SQLite API makes it easy
to extend the functional api as needed.
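
For instance, a small rollup of the kind that used to need a script
(names are again placeholders):

WITH daily AS (
  SELECT date(ts) AS day, count(*) AS hits
  FROM raw_log
  GROUP BY date(ts)
)
SELECT day, hits
FROM daily
WHERE hits > (SELECT avg(hits) FROM daily)
ORDER BY day;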

In short, the zero configuration, high performance and lightweight
characteristics of SQLite have made it uniquely suited for the distributed
'cloud' environment we run in where relational data can be fragmented and
distributed over many VMs and still be accessible with a common interface
and query language.

Many thanks.

Best Regards
Lindsay


On Wed, Feb 18, 2015 at 11:11 AM, Darko Volaric  wrote:

> I second this notion. I think SQLite is uniquely suited to server based
> applications of all kinds. Its light footprint and the fact that it's a
> library rather than a full system gives it a flexibility and raw
> performance that other systems cannot. We use it at the core of each node
> in a distributed and parallel system.
>
> When using SQLite the architecture of your database system is not
> preordained by designers who could not foresee novel designs and
> approaches. SQLite is like a systems programing language: It's lean and
> mean and a powerful tool that gives full control to the systems designer
> and programmer.
>
> The only thing I'd change about SQLite is the SQL bit. To me it's an
> anachronism and a mess and needs to be factored further out of the SQLite
> core, with a more rigorous formalism providing an interface (with an
> exposed and supported API) to the database engine, but at a higher level
> than say the virtual machine.
>
> On Wed, Feb 18, 2015 at 9:12 AM, Marcus Grimm 
> wrote:
>
> > We use sqlite as the db engine inside a server application
> > with a number of clients that connect to the server.
> > Sqlite works just beatiful here and I wish these statements
> > "sqlite shall not be used for client/server things" would be
> > worded less generally. In fact when we mention sqlite as our
> > db engine customer point to this restriction and we run into
> > an excuse sort of arguments.
> > On the bottom line: Sqlite CAN very well serve as the DB
> > engine for client/server applications, it just depend how
> > the api is used.
> >
> > Marcus
> >
> > Am 2015-02-18 15:34, schrieb Richard Hipp:
> >
> >> In a feeble effort to do "marketing", I have revised the "Appropriate
> >> Uses For SQLite" webpage to move trendy buzzwords like "Internet of
> >> Things" and "Edge of the Network" above the break.  See:
> >>
> >> https://www.sqlite.org/whentouse.html
> >>
> >> Please be my "focus group", and provide feedback, comments,
> >> suggestions, and/or criticism about the revised document.   Send your
> >> remarks back to this mailing list, or directly to me at the email in
> >> the signature.
> >>
> >> Thank you for your help.
> >>
> > ___
> > sqlite-users mailing list
> > sqlite-users at mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] sqlite3 shell .import bug (edge case)

2013-12-11 Thread Lindsay Lawrence
Hi,

I am currently using sqlite3 version:

3.8.1 2013-10-17 12:57:35 c78be6d786c19073b3a6730dfe3fb1be54f5657a

I found an edge case bug in the sqlite3 shell when importing csv data with
fields containing embedded quotes, CRs and LFs:

When a field contains an embedded quote, and that embedded quote is
immediately followed by an EOL then the current csv parser will consider
the end of field to be reached and fail to import the row correctly.

For example the following csv, a single data row with column headers will
fail to import correctly.

column1,column2,column3,column4
fielddata1,fielddata2,"field ""
data"" 3",fielddata4


The offending code is in shell.c in function

static char *csv_read_one_field(CSVReader *p);

An example fix could be something like the following -- adding a counter
flag 'cQuoteComplete' to track quotes properly.

static char *csv_read_one_field(CSVReader *p){
  int c, pc;
  int cSep = p->cSeparator;
  int cQuoteComplete = 0;
  p->n = 0;
  c = fgetc(p->in);
  if( c==EOF || seenInterrupt ){
p->cTerm = EOF;
return 0;
  }
  if( c=='"' ){
int startLine = p->nLine;
int cQuote = c;
cQuoteComplete+=1;
pc = 0;
while( 1 ){
  c = fgetc(p->in);
  if( c=='\n' ) p->nLine++;
  if( c==cQuote ){
cQuoteComplete+=1;
if( pc==cQuote ){
  pc = 0;
  continue;
}
  }
  if( (c==cSep && pc==cQuote)
   || (c=='\n' && pc==cQuote)
   || (c=='\n' && pc=='\r' && p->n>=2 && p->z[p->n-2]==cQuote)
   || (c==EOF && pc==cQuote)
  ){
if (cQuoteComplete%2 == 0) {
  do{ p->n--; }while( p->z[p->n]!=cQuote );
  p->cTerm = c;
  break;
}
  }
  if( c==EOF ){
fprintf(stderr, "%s:%d: unterminated %c-quoted field\n",
p->zFile, startLine, cQuote);
p->cTerm = EOF;
break;
  }
  csv_append_char(p, c);
  pc = c;
}
if( cQuoteComplete%2 != 0 ){
  fprintf(stderr, "%s:%d: unescaped %c character\n",
  p->zFile, p->nLine, cQuote);
}
  }else{
while( c!=EOF && c!=cSep && c!='\n' ){
  csv_append_char(p, c);
  c = fgetc(p->in);
}
if( c=='\n' ){
  p->nLine++;
  if( p->n>1 && p->z[p->n-1]=='\r' ) p->n--;
}
p->cTerm = c;
  }
  if( p->z ) p->z[p->n] = 0;
  return p->z;
}


I built the sqlite3 shell from the amalgamation source and tested the above
change using my import data; about 1GB of messy, but RFC 4180 compliant,
CSV. It all imported cleanly.
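
For anyone wanting to reproduce, the sample above can be exercised directly
from the shell (file and table names are illustrative; a recent shell creates
the table from the header row):

sqlite> .mode csv
sqlite> .import edgecase.csv t1
sqlite> SELECT * FROM t1;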

sqlite3 is a wonderful bit of software. I have been using it for some time
now to munge and query multi-gigabyte size data sets and am very impressed
with its performance and capabilities.

Best Regards,
Lindsay
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users