Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread Simon Slavin

On 3 Mar 2014, at 3:41am, romtek  wrote:

> Thanks, Simon. Interestingly, for this server, disk operations aren't
> particularly fast. One SQLite write op takes about 4 times longer than on a
> HostGator server.

That supports the idea that storage is simulated (or 'virtualised') to a high 
degree.

> I wonder if what I/you described also means that this file system isn't
> likely to support file locks needed for SQLite to control access to the DB
> file to prevent data corruption.

I think that's likely.  Virtualisation is a pig for ACID: it introduces yet 
another gap between processing and physical changes in your storage which will 
be read after restart.

For those playing along at home, doing transactions properly depends on 
synchronisation.  Something that comes up here repeatedly is that synchronising 
takes a long time, and since people buy whatever kit quotes the fastest figures, 
things at every level lie about doing synchronisation.  This leads to articles 
like the following:



"Certain OS/Hardware configurations still fake fsync delivering great 
performance at the cost of being non ACID"

Here's a SQLite engineer writing about the same thing: section 3.1 of



Your disk hardware, its firmware driver, the OS's storage driver, the OS's file 
system and the OS file API all get a chance to pretend they're doing 'sync()' 
but actually just return 'done it'.  And if even one of them lies, 
synchronisation appears to happen instantly and your software runs faster.  A 
virtualising system is another chance to do processing faster by lying about 
synchronisation.  And unless something crashes or you have a power failure 
nobody will ever find out.
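
A rough way to check whether the stack underneath you is honouring sync
requests is to time durable commits.  Below is a minimal sketch using the
public SQLite C API (the file name, table and two-second window are
arbitrary).  An honest 7200rpm disk manages on the order of tens to low
hundreds of single-row commits per second; thousands per second suggest
that something in the chain is faking the sync.

/* build: cc sync_probe.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

int main(void){
  sqlite3 *db;
  int n = 0;
  if( sqlite3_open("sync_probe.db", &db)!=SQLITE_OK ) return 1;
  sqlite3_exec(db, "PRAGMA synchronous=FULL;", 0, 0, 0);
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
  time_t start = time(0);
  while( time(0) - start < 2 ){
    /* autocommit: every INSERT is its own transaction and must be synced */
    sqlite3_exec(db, "INSERT INTO t VALUES(1);", 0, 0, 0);
    n++;
  }
  printf("~%d durable commits per second\n", n/2);
  sqlite3_close(db);
  return 0;
}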

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread romtek
Thanks, Simon. Interestingly, for this server, disk operations aren't
particularly fast. One SQLite write op takes about 4 times longer than on a
HostGator server.

I wonder if what I/you described also means that this file system isn't
likely to support file locks needed for SQLite to control access to the DB
file to prevent data corruption.


On Sun, Mar 2, 2014 at 9:18 PM, Simon Slavin  wrote:

>
> On 3 Mar 2014, at 2:14am, romtek  wrote:
>
> > On one of my hosting servers (this one is a VPS), a bunch of write
> > operations take practically the same amount of time when they are
> performed
> > individually as when they are performed as one explicit transaction. I've
> > varied the number of ops up to 200 -- with the similar results. Why is
> that?
> > What could be about the file system or disk drive that could cause this?
>
> I'm betting it's running on newer hardware more suited to virtual
> machines.  One of the problems with virtual computers is that their disk
> storage is often virtualised to a very high degree.  For instance, what
> appears to the computer to be disk storage may be entirely held on SSD, or
> on a fast internal disk, and flushed to a huge but slower disk just once a
> minute.  Or once every five minutes.  Or once an hour.  This is an
> efficient way to simulate 20 to 200 virtual machines on what is one lump of
> hardware.
>
> A result of this is that disk operations are very fast.  However, any
> 'sync()' operations do nothing at all because nobody cares what happens if
> an imaginary computer crashes.  Since most of the time involved in ending a
> transaction is waiting for synchronisation, this produces the results you
> note: syncing once takes the same time as syncing 200 times, because
> neither of them is doing much.  And a result of that is that if the
> computer crashes, you lose the last minute/minutes/hour of processing and
> the sync() state of database operations is suspect.
>
> Go read their terms and find out what they guarantee to do if a virtual
> machine crashes.  You'll probably find that they'll get a virtual computer
> running again very quickly but don't make promises about how recent the
> image they restore will be.
>
> Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread Simon Slavin

On 3 Mar 2014, at 2:14am, romtek  wrote:

> On one of my hosting servers (this one is a VPS), a bunch of write
> operations take practically the same amount of time when they are performed
> individually as when they are performed as one explicit transaction. I've
> varied the number of ops up to 200 -- with the similar results. Why is that?
> What could be about the file system or disk drive that could cause this?

I'm betting it's running on newer hardware more suited to virtual machines.  
One of the problems with virtual computers is that their disk storage is often 
virtualised to a very high degree.  For instance, what appears to the computer 
to be disk storage may be entirely held on SSD, or on a fast internal disk, and 
flushed to a huge but slower disk just once a minute.  Or once every five 
minutes.  Or once an hour.  This is an efficient way to simulate 20 to 200 
virtual machines on what is one lump of hardware.

A result of this is that disk operations are very fast.  However, any 'sync()' 
operations do nothing at all because nobody cares what happens if an imaginary 
computer crashes.  Since most of the time involved in ending a transaction is 
waiting for synchronisation, this produces the results you note: syncing once 
takes the same time as syncing 200 times, because neither of them is doing 
much.  And a result of that is that if the computer crashes, you lose the last 
minute/minutes/hour of processing and the sync() state of database operations 
is suspect.
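
To make that concrete, here is a minimal sketch using the SQLite C API (file
name, value and row count are arbitrary) that times the same 200 writes done
individually and then wrapped in one explicit transaction.  On storage that
really syncs, the second figure should be far smaller; on storage that fakes
sync, the two converge, which is exactly the symptom described above.

/* build: cc batch_test.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

#define N 200   /* number of writes, as in the original test */

static double now(void){
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return ts.tv_sec + ts.tv_nsec/1e9;
}

static double run(sqlite3 *db, int batched){
  double t0 = now();
  if( batched ) sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  for(int i=0; i<N; i++){
    /* in autocommit mode each INSERT must be synced on its own */
    sqlite3_exec(db, "INSERT INTO t VALUES(42);", 0, 0, 0);
  }
  if( batched ) sqlite3_exec(db, "COMMIT;", 0, 0, 0);  /* one sync in total */
  return now() - t0;
}

int main(void){
  sqlite3 *db;
  if( sqlite3_open("batch_test.db", &db)!=SQLITE_OK ) return 1;
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
  printf("individual inserts: %.3f s\n", run(db, 0));
  printf("one transaction:    %.3f s\n", run(db, 1));
  sqlite3_close(db);
  return 0;
}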

Go read their terms and find out what they guarantee to do if a virtual machine 
crashes.  You'll probably find that they'll get a virtual computer running 
again very quickly but don't make promises about how recent the image they 
restore will be.

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread romtek
In case this gives somebody a clue, the server in question is on
http://vps.net/.


On Sun, Mar 2, 2014 at 8:14 PM, romtek  wrote:

> Hi,
>
> On one of my hosting servers (this one is a VPS), a bunch of write
> operations take practically the same amount of time when they are performed
> individually as when they are performed as one explicit transaction. I've
> varied the number of ops up to 200 -- with the similar results. Why is that?
> What could be about the file system or disk drive that could cause this?
>
> P.S. My other servers (shared hosting on HostGator), batched writes take
> MUCH less time than individual write ops, as expected.
>


[sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread romtek
Hi,

On one of my hosting servers (this one is a VPS), a bunch of write
operations take practically the same amount of time when they are performed
individually as when they are performed as one explicit transaction. I've
varied the number of ops up to 200 -- with similar results. Why is that?
What could it be about the file system or disk drive that causes this?

P.S. On my other servers (shared hosting on HostGator), batched writes take
MUCH less time than individual write ops, as expected.


Re: [sqlite] SQLite destroys civilization.

2014-03-02 Thread Stephen Chrzanowski
It's gotta be great to see your code end up in a TV show and have an actor
say "There's your problem" and you get to say "Not in my code!".  That
would have been epic to be sitting there for that particular event as a
bystander. heh



On Sun, Mar 2, 2014 at 6:39 PM, Darren Duncan wrote:

> On 3/2/2014, 9:34 AM, Richard Hipp wrote:
>
>> Reports on twitter say that the "nanobots" in the TV drama "Revolution"
>> have source code in the season two finale that looks like this:
>>
>> https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
>>
>> Compare to the SQLite source code here:
>>
>> http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281
>>
>
> Hahaha, that's great.
>
> It's always interesting to see when TV shows include programming code.
>
> Sometimes they actually make an effort to make it more realistic, such as
> in this case.  I recall reading the source code shown in the original Tron
> is like that too.  I have seen several others that are on the realistic
> side.
>
> But a counter-example is a show I saw where they had "programming code"
> but it was actually HTML source, which really shows those ones didn't do
> their homework.
>
> -- Darren Duncan
>
>
>


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Alek Paunov

On 02.03.2014 21:38, Elefterios Stamatogiannakis wrote:

Under this view, the efficiency of the virtual table API is very
important. The above query only uses 2 VTs, but we have other queries
that use a lot more VTs than that.


Max's tests in C show 2x CPU work, but he explains that the test is not 
very sound, so let's say somewhere between 1x-2x. Your tests show a 3x time 
difference.


As you have already identified, the real reason is probably the millions of 
callbacks crossing the VM barrier - I do not follow PyPy, but see these 
notes [1] by Mike Pall - the LuaJIT author (LuaJIT is the leading project 
in the trace compiler field):


[1] http://luajit.org/ext_ffi_semantics.html#callback_performance

Also from one of the dozens of threads touching the subject:

[2] http://www.freelists.org/post/luajit/Yielding-across-C-boundaries,3

```
Entering the VM needs a lot of state setup and leaving it isn't
free either. Constantly entering and leaving the VM via a callback
from C *to* Lua has a high overhead. For short callbacks, the
switching overhead between C and Lua may completely dominate the
total CPU time.

Calling an iterator written in C via the FFI *from* a Lua program
is much cheaper -- this compiles down to a simple call instruction.
```

Unfortunately, for your "insert into t select * from vt" case the 
callback/iterator transformation is not possible (we do not have a 
repetitive _step call whose control we could invert). What to do?


It seems that the easiest optimization for this (very common) VT use case 
(bulk streaming) is for an SQLite add-on to be written in _C_, implementing 
a vtable interface specialization containing an xNextPage callback that 
"buffers" let's say 4K rows, or even better 16KB of data (in addition to 
your initial proposal of xNextRow).
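
To make the idea concrete, here is a very rough sketch of the shape such a
specialization could take.  Everything below is hypothetical: the current
sqlite3_module only defines per-row/per-value callbacks, and the structure
name, callback name and encoding are placeholders only.

#include "sqlite3.h"

/* Hypothetical "buffered" extension of the virtual table interface.
** Instead of one xColumn callback per value, the cursor fills a caller
** supplied buffer with a batch of rows in some agreed record encoding
** (which encoding to use is exactly the question raised below). */
typedef struct sqlite3_vtab_buffered_module sqlite3_vtab_buffered_module;
struct sqlite3_vtab_buffered_module {
  sqlite3_module base;   /* the existing module, unchanged */
  /* Write up to nBuf bytes of encoded rows into buf and set *pnRow to the
  ** number of rows written; *pnRow==0 signals that the cursor is done. */
  int (*xNextPage)(sqlite3_vtab_cursor *pCur, void *buf, int nBuf, int *pnRow);
};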


The technical question is: how should the rows be encoded? You said 
initially that you use some compressed format. But for such an extension 
to gain more traction in the future, it would probably be better to choose 
a more standard format.


a) Rows represented in native SQLite3 format [3]
b) ... native SQLite4 format
c) Some widely used encoding close to the SQLite types [4]
d) ...

[3] http://www.sqlite.org/fileformat.html#record_format
[4] https://github.com/msgpack/msgpack/blob/master/spec.md#formats

IMHO, it would be very nice if some common conventions for binary record 
streaming could be discussed and adopted across the SQLite binding and 
add-on developers. The possible applications are not limited only to 
vtables ;-).


Kind regards,
Alek



Re: [sqlite] SQLite destroys civilization.

2014-03-02 Thread Darren Duncan

On 3/2/2014, 9:34 AM, Richard Hipp wrote:

Reports on twitter say that the "nanobots" in the TV drama "Revolution"
have source code in the season two finale that looks like this:

https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large

Compare to the SQLite source code here:

http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281


Hahaha, that's great.

It's always interesting to see when TV shows include programming code.

Sometimes they actually make an effort to make it more realistic, such as in 
this case.  I recall reading the source code shown in the original Tron is like 
that too.  I have seen several others that are on the realistic side.


But a counter-example is a show I saw where they had "programming code" but it 
was actually HTML source, which really shows those ones didn't do their homework.


-- Darren Duncan




Re: [sqlite] New

2014-03-02 Thread Klaas V
 Kees wrote answering Ashleigh
 
| If you prefer a graphical user interface, I can recommend
|the sqlite manager plugin in the Firefox web browser.

|| If any one knows a better way to read and understand the files I would 
greatly appreciate it 
|| |I think the file ext. is a plist. 
|| Live, love & laugh. 

|
|-- 
|Groet, Cordialement, Pozdrawiam, Regards,|

A P(roperty)List is a normal text file similar to XML and HTML.

example:
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">


Cordiali saluti | Kind regards | Vriendelijke groeten | Freundliche Grüsse,
Klaas `Z4us` V, freelance CIO / ICT-guru / SystemDeveloper-Analyst 
kla...@innocentisart.eu http://innocentisart.eu/klaasv/indexw.html





Re: [sqlite] SQLite destroys civilization.

2014-03-02 Thread Richard Hipp
On Sun, Mar 2, 2014 at 12:34 PM, Richard Hipp  wrote:

> Reports on twitter say that the "nanobots" in the TV drama "Revolution"
> have source code in the season two finale that looks like this:
>
> https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
>
> Compare to the SQLite source code here:
>
> http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281
>

A video clip from the episode can be seen here:

http://www.nbc.com/revolution/video/repairing-the-code/2748856#i145567,p1

You can clearly see the SQLite code on the monitor.  The dialog goes
something like this:

Aaron:  Wait.  Hold on.  There.
Male actor 1: What?
Aaron: There's a memory leak here.  This chunk of code.  (Points to the
SQLite analyzeTable() routine).  That's the problem.  It's eating up all
available resources.  It will force a segmentation fault. The whole system
will crash!

At that point, I said "Not in my code!"

But upon closer inspection, Aaron is correct.  The code has been altered
slightly.  This is what Aaron is looking at (line numbers added):

01 static void analyzeTable(Parse *pParse, Table *pTab, Index *pOnlyIdx){
02   int iDb;
03   int iStatCur;
04   int *key = (char*)malloc(8*sizeOf(char))
05   assert( pTab!=0 );
06   assert( ecrypBtreeHoldsAllMutexes(pParse->db) );
07   iDb = ecrypSchemaToIndex(pParse->db, pTab->pSchema);
08   ecrypBeginWriteOperation(pParse, 0, iDb);
09   iStatCur = pParse->nTab;
10   pParse->nTab += 3;
11   if( pOnlyIdx ){
12 openStatTable(pParse, iDb, iStatCur, pOnlyIdx->zName, "idx");
13   }else{
14 openStatTable(pParse, iDb, iStatCur, pTab->zName, "tbl");
15   }
16 }

The changes from SQLite are (1) all "sqlite3" name prefixes are changed to
"ecryp" and (2) line 04 has been added.  Line 04 is the "memory leak".  It
also contains at least four other errors:  (A) there is no semicolon at the
end.  (B) "sizeof" has a capital "O". (C) It assigns a char* pointer to an
int* variable.  (D) It calls malloc() directly, which is forbidden inside
of SQLite since the application might assign a different set of memory
allocation functions.  The first two errors are fatal - this function won't
even compile.  But, heh, it's a TV show...

So there you go.  SQLite used in evil nanobots that destroy civilization.

I've never actually seen Revolution (I don't own a TV set).  So I don't
really understand the plot.  Can somebody who has watched this drama please
brief me?  In particular, I'm curious to know: is Aaron a good guy or a bad
guy?
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] About "speed"

2014-03-02 Thread Richard Hipp
On Sun, Mar 2, 2014 at 1:55 PM, big stone  wrote:

>==> Why such a 'x6' speed-up, as we need to scan the whole table anyway
> ?
>

SQLite implements GROUP BY by sorting on the terms listed in the GROUP BY
clause.  Then as each row comes out, it compares the GROUP BY columns to
the previous row to see if a new "group" needs to be started.  Sorting is
O(NlogN) if you don't have an index.
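
You can watch this happen with EXPLAIN QUERY PLAN.  A minimal sketch using
the C API follows (the table and index names are taken from the fec example
in this thread, and the exact plan wording varies between SQLite versions):
without the index the plan reports a temporary b-tree being built for the
GROUP BY (the O(NlogN) sort); with the covering index it reports a scan of
the index instead.

/* build: cc plan_check.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>

static void show_plan(sqlite3 *db, const char *sql){
  sqlite3_stmt *st;
  char *eqp = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", sql);
  if( sqlite3_prepare_v2(db, eqp, -1, &st, 0)==SQLITE_OK ){
    while( sqlite3_step(st)==SQLITE_ROW ){
      printf("  %s\n", (const char*)sqlite3_column_text(st, 3)); /* detail */
    }
    sqlite3_finalize(st);
  }
  sqlite3_free(eqp);
}

int main(void){
  sqlite3 *db;
  const char *q =
    "SELECT cand_nm, sum(contb_receipt_amt) FROM fec GROUP BY cand_nm";
  if( sqlite3_open("fec.db", &db)!=SQLITE_OK ) return 1;
  show_plan(db, q);   /* expect: a temp b-tree used for the GROUP BY */
  sqlite3_exec(db, "CREATE INDEX IF NOT EXISTS xyzzy2 "
                   "ON fec(cand_nm, contbr_st, contb_receipt_amt);", 0, 0, 0);
  show_plan(db, q);   /* expect: covering index scan, no sort step */
  sqlite3_close(db);
  return 0;
}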


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] About "speed"

2014-03-02 Thread Mikael
big stone,

Can you please compile a chart (text format is OK) that puts the numbers
from your last mail in relation to the numbers from your email prior to
that, so that everyone can see clearly how the optimizations you applied
improve on the numbers published in the original PostgreSQL vs SQLite
comparison posted here today?

For instance, your 151-second result is more than the 21-second number you
published in your first email, and so on.

Thanks!



2014-03-02 19:55 GMT+01:00 big stone :

> Hi again,
>
> I tuned the SQLite experiment a little:
> - to get rid of the 19th column message,
> - to measure the previous tests with more precise figures,
> - to measure the effect of the suggested index:
>    CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);
> - to measure the effect of using a filesystem database.
>
> results : (time +/-1 seconds, windows timing doesn't show below the second)
> - feeding data :
>. in disk database : 151 seconds
>. in memory database :  131 seconds (25% = 1 cpu used out of 4)
>
>  - creating index CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st,
> contb_receipt_amt) :
>. in disk database : 43 seconds
>. in memory database :  38 seconds
>
> - select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm
> ;
>   . in disk database : 22 seconds
>   . in memory database :  19 seconds
>   . in disk database with index: 3 seconds
>   . in memory database with index :  3 seconds
>
>
> - select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
> by cand_nm, contbr_st ;
>  . in disk database : 27 seconds
>  . in memory database :  24 seconds
>  . in disk database with index: 4 seconds
>  . in memory database with index :  4 seconds
>
>
> Remarks :
>
> - with an expert index, SQLite is 6 times quicker.
>==> Why such a 'x6' speed-up, as we need to scan the whole table anyway
> ?
>
> - the ":memory:" database is barely quicker than the disk database.
>==> How can a rotating disk (7200rpm) database compete with a pure
> in-memory database ?
>
>
>
> *** ANNEXE 
> script to launch with ":memory:" or with "turlututu.db"  :
> (I measure the file LastWrite time, on windows via powershell to get
> seconds)
>
>
> .header on
> .mod csv
> .separator ","
>
> create table fec(
> CMTE_ID ,CAND_ID ,CAND_NM ,CONTBR_NM ,
> CONTBR_CITY ,CONTBR_ST , CONTBR_ZIP ,
> CONTBR_EMPLOYER ,CONTBR_OCCUPATION ,CONTB_RECEIPT_AMT double
> precision,
> CONTB_RECEIPT_DT ,RECEIPT_DESC ,MEMO_CD ,
> MEMO_TEXT  ,FORM_TP ,FILE_NUM ,
> TRAN_ID ,ELECTION_TP ,USELESS_COLUMN
> );
> .import "P0001-ALL.csv" fec
>
>
> --5 344 498 record read with warning as 19th empty column
> .output fec_test0.csv
> select *  from fec limit 1;
>
> .output stdout
> .output fec_test1.csv
> select count(*) from fec;
>
> .output stdout
> .output fec_test2.csv
>
> select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm;
>
> .output stdout
> .output fec_test3.csv
>
> select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
> by cand_nm, contbr_st;
>
> .output stdout
>
> -- in memory, with index   -
> CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);
> .output fec_test0c.csv
> select *  from fec limit 1;
>
> .output stdout
>
> .output fec_test1c.csv
> select count(*) from fec;
>
> .output stdout
> .output fec_test2c.csv
>
> select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm;
>
> .output stdout
> .output fec_test3c.csv
>
> select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
> by cand_nm, contbr_st;
>
> .output stdout


Re: [sqlite] About "speed"

2014-03-02 Thread big stone
Hi Mikael,

I'm not an expert in rtree virtual table handling, but you may try it and
post the results here.
Adding the test of the -o2 compiled SQLite3.8.3.exe (size 801 KB vs 501 KB
for the standard SQLite, which is 'size' optimized):

- feeding data :
   . in disk database : 151 seconds
   . in memory database :  131 seconds (25% = 1 cpu used out of 4)
   . in memory database -o2 compilation :  51 seconds (25% = 1 cpu
used out of 4)

 - creating index CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st,
contb_receipt_amt) :
   . in disk database : 43 seconds
   . in memory database :  38 seconds
   . in memory database -o2 compilation :  25 seconds

- select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm
;
  . in disk database : 22 seconds
  . in memory database :  19 seconds
  . in memory database -o2 compilation :  10 seconds
  . in disk database with index: 3 seconds
  . in memory database with index :  3 seconds
  . in memory database -o2 compilation with index :  2 seconds


- select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group by cand_nm, contbr_st ;
 . in disk database : 27 seconds
 . in memory database :  24 seconds
 . in memory database -o2 compilation  :  14 seconds
 . in disk database with index: 4 seconds
 . in memory database with index :  4 seconds
 . in memory database -o2 compilation with index :  3 seconds

The effect of -o2 is quite significant on these tests.


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Elefterios Stamatogiannakis
We have both input and output virtual tables that avoid hitting the hard 
disk and are also able to compress the incoming and outgoing data.


We have a virtual table that takes as input a query and sends the data 
to a port on another machine. This virtual table is called "OUTPUT". And 
another virtual table that takes as input data from another port and 
forwards it into SQLite. Let's call it "INPUT". A query that uses these 
two virtual tables would look like this in madIS:


OUTPUT ip:192.168.0.1 port:8080 select * from INPUT('port:8081');

We actually use queries like above (actually we don't do it directly to 
ports but to buffered named pipes that are then forwarded via netcat) to 
run distributed queries on clusters, connecting all the local 
SQLite/madIS instances on the different machines together.


The main point that I want to make with the above explanation is that we 
don't view SQLite only as a traditional database. We also view it as a 
data stream processing machine that doesn't require the data to be stored 
on a hard disk.


Under this view, the efficiency of the virtual table API is very 
important. The above query only uses 2 VTs, but we have other queries 
that use a lot more VTs than that.


estama


On 2/3/2014 9:34 PM, Max Vlasov wrote:

On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
 wrote:


Our main test case is TPCH, a standard DB benchmark. The "lineitem" table of
TPCH contains 16 columns, which for 10M rows would require 160M xColumn
callbacks, to pass it through the virtual table API. These callbacks are
very expensive, especially when at the other end sits a VM (CPython or PyPy)
handling them.



Ok, not stating that the performance improvement is impossible, I will
explain why I'm a little sceptical about it.

For every bulk insert we have a theoretical maximum we'd all be glad to
see sqlite perform at - the speed of simple file copying. Sqlite can't
be faster than that, but to be on par is a good goal. This is not
possible when an insert also means modification of other parts of the
file, for example when there's an index involved. But let's forget
about that. Finally, when new data is added, sqlite has to write a
number of database pages, and the cost of this part is entirely in the
hands of the media (driver) and OS (driver).  But for every database
page write there's also a price to pay in CPU units, for the many
actions sqlite has to perform before the value the developer provided
is translated into what actually appears on disk.

An illustration of the CPU price is the following example:
   CREATE TABLE t(Value)

On my ssd drive, multiple inserts (thousands of them) of the form
   insert into t (Value) values ('123456689...')  // this string contains many symbols, for example 1024
performed at a speed of
   30 MB/sec

but the query
   insert into t (Value) values (10)  // this is a small integer value
performed at a speed of only
   3 MB/sec

Both show almost full cpu load. Why such a difference? Because with the
latter query the system could do more than 30 MB of writes in 1 second,
but it has to wait for sqlite spending 10 seconds in preparations. The
former is better because the CPU cost of passing a large text value to
sqlite is comparatively low compared to the time spent in I/O writing
it to disk.

So the CPU price to pay isn't avoidable, and notice that in this example
it is not the virtual table API, it is the bind API. I suppose that the
price we pay for CPU spent in the virtual table API is on par with the
average price paid in sqlite as a whole. This means that if I transform
the above queries into inserts from virtual tables, the final speed
difference will be similar. And this also means that for your comparison
tests (when you get the x3 difference), the CPU price sqlite pays inside
the bind API and in the code wrapping the xColumn call is probably
similar. The rest is the share your code pays.

Well, I know that there are differences in CPU architectures and there
are probably platforms where compiled code for the bind API and the
virtual table API behaves a little differently, making the costs more
different. But imagine the hard task of fine tuning and refactoring
just to get a noticeable difference for a particular platform.


Max


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Max Vlasov
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
 wrote:
>
> Our main test case is TPCH, a standard DB benchmark. The "lineitem" table of
> TPCH contains 16 columns, which for 10M rows would require 160M xColumn
> callbacks, to pass it through the virtual table API. These callbacks are
> very expensive, especially when at the other end sits a VM (CPython or PyPy)
> handling them.
>

Ok, not stating that the performance improvement is impossible, I will
explain why I'm a little sceptical about it.

For every bulk insert we have a theoretical maximum we'd all be glad to
see sqlite perform at - the speed of simple file copying. Sqlite can't
be faster than that, but to be on par is a good goal. This is not
possible when an insert also means modification of other parts of the
file, for example when there's an index involved. But let's forget
about that. Finally, when new data is added, sqlite has to write a
number of database pages, and the cost of this part is entirely in the
hands of the media (driver) and OS (driver).  But for every database
page write there's also a price to pay in CPU units, for the many
actions sqlite has to perform before the value the developer provided
is translated into what actually appears on disk.

An illustration of the CPU price is the following example:
  CREATE TABLE t(Value)

On my ssd drive, multiple inserts (thousands of them) of the form
  insert into t (Value) values ('123456689...')  // this string contains many symbols, for example 1024
performed at a speed of
  30 MB/sec

but the query
  insert into t (Value) values (10)  // this is a small integer value
performed at a speed of only
  3 MB/sec

Both show almost full cpu load. Why such a difference? Because with the
latter query the system could do more than 30 MB of writes in 1 second,
but it has to wait for sqlite spending 10 seconds in preparations. The
former is better because the CPU cost of passing a large text value to
sqlite is comparatively low compared to the time spent in I/O writing
it to disk.

So the CPU price to pay isn't avoidable, and notice that in this example
it is not the virtual table API, it is the bind API. I suppose that the
price we pay for CPU spent in the virtual table API is on par with the
average price paid in sqlite as a whole. This means that if I transform
the above queries into inserts from virtual tables, the final speed
difference will be similar. And this also means that for your comparison
tests (when you get the x3 difference), the CPU price sqlite pays inside
the bind API and in the code wrapping the xColumn call is probably
similar. The rest is the share your code pays.

Well, I know that there are differences in CPU architectures and there
are probably platforms where compiled code for the bind API and the
virtual table API behaves a little differently, making the costs more
different. But imagine the hard task of fine tuning and refactoring
just to get a noticeable difference for a particular platform.
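
For reference, the "bind API" mentioned above is the prepared-statement
interface; the kind of insert loop under discussion looks roughly like the
sketch below (table as in the example, values and row count arbitrary).

#include <sqlite3.h>

/* Bind-API bulk insert: prepare once, then bind/step/reset per row,
** inside one big transaction.  The per-row calls are where the CPU
** cost discussed above is paid. */
static void bulk_insert(sqlite3 *db, const char *text, int nRows){
  sqlite3_stmt *st;
  sqlite3_prepare_v2(db, "INSERT INTO t(Value) VALUES(?)", -1, &st, 0);
  sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  for(int i=0; i<nRows; i++){
    sqlite3_bind_text(st, 1, text, -1, SQLITE_STATIC);
    sqlite3_step(st);
    sqlite3_reset(st);
  }
  sqlite3_exec(db, "COMMIT;", 0, 0, 0);
  sqlite3_finalize(st);
}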


Max


Re: [sqlite] SQLite destroys civilization.

2014-03-02 Thread big stone
Shouldn't you add "nanobots" to the "famous" user list, just below
flame and above the "android" droids?

Biggest Companies use SAP

Smallest Companions use SQLite.


Re: [sqlite] About "speed"

2014-03-02 Thread big stone
Hi again,

I tuned the SQLite experiment a little:
- to get rid of the 19th column message,
- to measure the previous tests with more precise figures,
- to measure the effect of the suggested index:
   CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);
- to measure the effect of using a filesystem database.

results : (time +/-1 seconds, windows timing doesn't show below the second)
- feeding data :
   . in disk database : 151 seconds
   . in memory database :  131 seconds (25% = 1 cpu used out of 4)

 - creating index CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st,
contb_receipt_amt) :
   . in disk database : 43 seconds
   . in memory database :  38 seconds

- select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm
;
  . in disk database : 22 seconds
  . in memory database :  19 seconds
  . in disk database with index: 3 seconds
  . in memory database with index :  3 seconds


- select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group by cand_nm, contbr_st ;
 . in disk database : 27 seconds
 . in memory database :  24 seconds
 . in disk database with index: 4 seconds
 . in memory database with index :  4 seconds


Remarks :

- with an expert index, SQLite is 6 times quicker.
   ==> Why such a 'x6' speed-up, as we need to scan the whole table anyway ?

- the ":memory:" database is barely quicker than the disk database.
   ==> How can a rotating disk (7200rpm) database compete with a pure
in-memory database ?



*** ANNEXE 
script to launch with ":memory:" or with "turlututu.db"  :
(I measure the file LastWrite time, on windows via powershell to get seconds)


.header on
.mod csv
.separator ","

create table fec(
CMTE_ID ,CAND_ID ,CAND_NM ,CONTBR_NM ,
CONTBR_CITY ,CONTBR_ST , CONTBR_ZIP ,
CONTBR_EMPLOYER ,CONTBR_OCCUPATION ,CONTB_RECEIPT_AMT double
precision,
CONTB_RECEIPT_DT ,RECEIPT_DESC ,MEMO_CD ,
MEMO_TEXT  ,FORM_TP ,FILE_NUM ,
TRAN_ID ,ELECTION_TP ,USELESS_COLUMN
);
.import "P0001-ALL.csv" fec


--5 344 498 record read with warning as 19th empty column
.output fec_test0.csv
select *  from fec limit 1;

.output stdout
.output fec_test1.csv
select count(*) from fec;

.output stdout
.output fec_test2.csv

select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm;

.output stdout
.output fec_test3.csv

select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
by cand_nm, contbr_st;

.output stdout

-- in memory, with index   -
CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);
.output fec_test0c.csv
select *  from fec limit 1;

.output stdout

.output fec_test1c.csv
select count(*) from fec;

.output stdout
.output fec_test2c.csv

select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm;

.output stdout
.output fec_test3c.csv

select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
by cand_nm, contbr_st;

.output stdout


Re: [sqlite] SQLite destroys civilization.

2014-03-02 Thread C M
On Sun, Mar 2, 2014 at 12:34 PM, Richard Hipp  wrote:

> Reports on twitter say that the "nanobots" in the TV drama "Revolution"
> have source code in the season two finale that looks like this:
>
> https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
>
> Compare to the SQLite source code here:
>
> http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281
> --
> D. Richard Hipp
> d...@sqlite.org
>

Best subject line here ever!   Now I feel a little guilty for using
SQLite.  :D

I'm curious how this came to your attention...?


[sqlite] SQLite destroys civilization.

2014-03-02 Thread Richard Hipp
Reports on twitter say that the "nanobots" in the TV drama "Revolution"
have source code in the season two finale that looks like this:

https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large

Compare to the SQLite source code here:

http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] About "speed"

2014-03-02 Thread Mikael
big stone,

What are the results of the same tests using an R-Tree? (Also feel free to add -O2.)

?

Thanks



2014-03-02 17:25 GMT+01:00 big stone :

> Hi again,
>
> This is what I mean : we should have an updated "speed" page where we could
> objectively measure.
>
> In the mean time, I painfully partially reproduced two of the figures from
> Wes.
>
> Procedure :
>
> download
> ftp://ftp.fec.gov/FEC/Presidential_Map/2012/P0001/P0001-ALL.zip
> unzip to P0001-ALL.csv
>
> This data file is about 965 MB, 18 columns *  5 344 498 records big.
>
>
> ** Test Preparation **
> - Hardware : pc windows7, 4 GB RAM, cpu intel i3-350m 2.27 GHz
> - Software :
>   . sqlite-shell-win32-x86-3080300 (sqlite3.8.3)
>   . postgresql 9.3.2.3 64bit
>
> - preparation scripts of sqlite (as there is an added comma at the end of
> each line, the default SQLite importation by reading headers will complain
> a little)
> .header on
> .mod csv
> .separator ","
> .import "P0001-ALL.csv" fec
>
> - preparation scripts of postgresql
> create table fec(
> CMTE_ID varchar,CAND_ID varchar,CAND_NM varchar,CONTBR_NM varchar,
> CONTBR_CITY varchar,CONTBR_ST varchar, CONTBR_ZIP varchar,
> CONTBR_EMPLOYER varchar,CONTBR_OCCUPATION varchar,CONTB_RECEIPT_AMT double
> precision,
> CONTB_RECEIPT_DT varchar,RECEIPT_DESC varchar,MEMO_CD varchar,
> MEMO_TEXT  varchar,FORM_TP varchar,FILE_NUM double precision,
> TRAN_ID varchar,ELECTION_TP varchar,USELESS_COLUMN varchar
> );
>
> copy fec from 'C:\\Users\Public\\Documents\\p1all.csv' CSV HEADER; -- load
> in 82 seconds
>
> ** Speed Tests **
> test1 = select cand_nm, sum(contb_receipt_amt) as total from fec group by
> cand_nm;
> ==> SQlite 21 seconds (wes = 72s)
> ==> Postgresql  4.8 seconds stable  (44 seconds first time ?) (wes =4.7)
>
> test2 = select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec
> group by cand_nm, contbr_st;
> select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
> by cand_nm, contbr_st;
> ==> SQlite 27 seconds
> ==> Postgresql  5.7 seconds   (wes=5.96)
>
> ** Conclusion **
> Wes McKinney's "speed" figures for SQLite are 3.4 times more awful than what
> I measure.
> SQLite 3.8.3 is about 4 times slower than Postgresql on these two 'raw' data
> analysis tests.


Re: [sqlite] About "speed"

2014-03-02 Thread Richard Hipp
On Sun, Mar 2, 2014 at 11:25 AM, big stone  wrote:

>
> ** Speed Tests **
> test1 = select cand_nm, sum(contb_receipt_amt) as total from fec group by
> cand_nm;
> ==> SQlite 21 seconds (wes = 72s)
> ==> Postgresql  4.8 seconds stable  (44 seconds first time ?) (wes =4.7)
>
>
My guess is that PG is creating the appropriate index on the first
invocation, which is why the first run on PG takes so much longer.  SQLite
runs without an index in every case.

What are your performance measurements using SQLite when you create an
index appropriate for the query?  An index that will be appropriate for
both the previous and the following query would be:

CREATE INDEX xyzzy ON fec(cand_nm, contbr_st);

Even better would be a covering index:

CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);

What is SQLite's time after it has one or the other of the indices above?


> test2 = select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec
> group by cand_nm, contbr_st;
> select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
> by cand_nm, contbr_st;
> ==> SQlite 27 seconds
> ==> Postgresql  5.7 seconds   (wes=5.96)
>
> ** Conclusion **
> WesMcKinney "Sqlite/speed.htm" page about SQLite is 3.4 times more awfull
> than what I measure.
> Sqlite3.8.3 is about 4 times slower than Postgresql on this two 'raw' Data
> analysis Tests.



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] About "speed"

2014-03-02 Thread big stone
Hi again,

This is what I mean : we should have an updated "speed" page where we could
objectively measure.

In the mean time, I painfully partially reproduced two of the figures from
Wes.

Procedure :

download
ftp://ftp.fec.gov/FEC/Presidential_Map/2012/P0001/P0001-ALL.zip
unzip to P0001-ALL.csv

This data file is about 965 MB, 18 columns *  5 344 498 records big.


** Test Preparation **
- Hardware : pc windows7, 4 GB RAM, cpu intel i3-350m 2.27 GHz
- Software :
  . sqlite-shell-win32-x86-3080300 (sqlite3.8.3)
  . postgresql 9.3.2.3 64bit

- preparation scripts of sqlite (as there is an added comma at the end of
each line, the default SQLite importation by reading headers will complain
a little)
.header on
.mod csv
.separator ","
.import "P0001-ALL.csv" fec

- preparation scripts of postgresql
create table fec(
CMTE_ID varchar,CAND_ID varchar,CAND_NM varchar,CONTBR_NM varchar,
CONTBR_CITY varchar,CONTBR_ST varchar, CONTBR_ZIP varchar,
CONTBR_EMPLOYER varchar,CONTBR_OCCUPATION varchar,CONTB_RECEIPT_AMT double
precision,
CONTB_RECEIPT_DT varchar,RECEIPT_DESC varchar,MEMO_CD varchar,
MEMO_TEXT  varchar,FORM_TP varchar,FILE_NUM double precision,
TRAN_ID varchar,ELECTION_TP varchar,USELESS_COLUMN varchar
);

copy fec from 'C:\\Users\Public\\Documents\\p1all.csv' CSV HEADER; -- load
in 82 seconds

** Speed Tests **
test1 = select cand_nm, sum(contb_receipt_amt) as total from fec group by
cand_nm;
==> SQlite 21 seconds (wes = 72s)
==> Postgresql  4.8 seconds stable  (44 seconds first time ?) (wes =4.7)

test2 = select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec
group by cand_nm, contbr_st;
select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
by cand_nm, contbr_st;
==> SQlite 27 seconds
==> Postgresql  5.7 seconds   (wes=5.96)

** Conclusion **
Wes McKinney's "speed" figures for SQLite are 3.4 times more awful than what
I measure.
SQLite 3.8.3 is about 4 times slower than Postgresql on these two 'raw' data
analysis tests.


Re: [sqlite] About "speed"

2014-03-02 Thread Simon Slavin

On 2 Mar 2014, at 1:48pm, Elefterios Stamatogiannakis  wrote:

> IMHO, a benchmark like this is useless without more information. Some 
> questions that I would like to see answered:
> 
> - Which SQLite and Postgres versions were used?
> - Are the SQLite indexes covering ones?
> - Have any performance pragmas been used?

Does Postgres have enough memory assigned that it's caching the entire database 
in memory ?
What journal mode is SQLite running in ?
What page sizes are both systems using ?
How many processors does the computer have (i.e. is the Postgres server process 
using the same process as the app) ?

Simon.


Re: [sqlite] About "speed"

2014-03-02 Thread Elefterios Stamatogiannakis
IMHO, a benchmark like this is useless without more information. 
Some questions that I would like to see answered:


 - Which SQLite and Postgres versions were used?
 - Are the SQLite indexes covering ones?
 - Have any performance pragmas been used?

Also interval joins ("between") are hard for SQLite's default indexes, 
but converting them to use a multidimensional index (R-Trees) speeds 
them up to similar speeds as Postgres.


estama

On 2/3/2014 3:02 PM, big stone wrote:

Hello,

This morning I saw  Pandas/Wes McKinney communicating figures :
  - starting at 37'37" of http://vimeo.com/79562736,
  - leaking a slide where SQLite "is" 15 times slower than Postgresql.

==> the dataset is public :
http://www.fec.gov/disclosurep/PDownload.do?candId=P0001=2012=All%20Candidates=pNational
==> the sql are basic.

Wouldn't it be nice to update the "speed.html" page to have an objective
vision ?

> Rationale :
- better show progress (it's hidden in
http://www.sqlite.org/checklists/3080300/index),
- better show non-time metrics : memory, electricity ,i/o...
- better show options effect : ":memory:" , "compile -o2", ...
- better show SQLite position in the SQL landscape.


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Elefterios Stamatogiannakis
In our performance tests we try to work with data and queries that are 
representative of what we would find in a typical DB.


This means a lot of "small" values (ints, floats, small strings), and 
5-20 columns.


Our main test case is TPCH, a standard DB benchmark. The "lineitem" 
table of TPCH contains 16 columns, which for 10M rows would require 160M 
xColumn callbacks, to pass it through the virtual table API. These 
callbacks are very expensive, especially when at the other end sits a VM 
(CPython or PyPy) handling them.
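
For context, this is the granularity of the callback in question: SQLite
asks the virtual table for one value at a time, so a 16-column, 10M-row
scan really does cross this boundary 160M times.  A stripped-down sketch of
the relevant part of a module (the cursor layout and toy data source are
only for illustration):

#include "sqlite3.h"

typedef struct MyCursor {
  sqlite3_vtab_cursor base;  /* required base class */
  sqlite3_int64 row[16];     /* current row's values (toy data source) */
} MyCursor;

/* xColumn is called once per (row, column) pair touched by the query:
** for SELECT * over 10M rows of 16 columns that is 160M calls, each one
** crossing from SQLite into the extension (and, in our case, into a VM). */
static int myColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i){
  MyCursor *c = (MyCursor*)cur;
  sqlite3_result_int64(ctx, c->row[i]);
  return SQLITE_OK;
}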


For PyPy in particular, which is able to produce JIT compiled and 
optimized UDF code, adapted to the "shape" (type and value distribution) 
of the data flows as they pass through it, every time it faces the 
virtual table API it slows down to a crawl, becoming more than 2x slower 
than interpreted Python. This happens because it cannot see the data/row 
"shape" from the many small, single-value xColumn callbacks, which are 
unrelated to one another.


Changing the subject, I've seen some requests in previous emails from 
people asking for windowing functions to be added to SQLite. I want to 
propose an alternative that we have been using for years, and that is a 
lot more generic than adding specific functions for very "narrow" use 
cases in SQLite.


We have added the "EXPAND" VT function in madIS, which "emulates" nested 
tables in SQLite, enabling us to have row and aggregate functions that 
return (in a streaming fashion) multiple values on multiple columns. The 
"EXPAND" function takes as input a table containing as values (in our 
case Python) generators, and then it calls the generators, "expanding" 
the input table to its final form. "EXPAND" is automatically inserted 
wherever it is required, so it isn't visible. An example follows:


> select strsplit('one and other');
one|and|other <-- 3 columns

or

> select strsplitV('one and other');
one
and<-- 3 individual rows
other

So by adding a single VT function and some syntactic sugar (auto 
inserting EXPAND VT), we were able to have functionality that is not 
case specific, allowing us to run all kinds of analytics inside SQLite.


The performance of above functionality is already very good. But it 
could be a lot better with a more efficient VT API.


Regards,

estama

On 2/3/2014 9:15 AM, Max Vlasov wrote:

Hi,
thanks for explaining your syntax in another post. Now about virtual
tables if you don't mind.

On Fri, Feb 28, 2014 at 8:24 PM, Eleytherios Stamatogiannakis
 wrote:


If we load into SQLite, 

create table newtable as select * from READCOMPRESSEDFILE('ctable.rc');

it takes: 55 sec


If we create an external program 

it takes: 19 sec (~3x faster than using the virtual table API)




Looking at your numbers, as a user (and fan :) of virtual tables I
decided to do some tests.

I have a virtual table "all values"; it was designed to enumerate
all tables' values into one single virtual table, so in the end it is
a long list of

   TableName, TableRowId, FieldName, Value

so you get the idea. As an example of what it may do, you may open
places.sqlite of mozilla browser and do

   Select * from AllValues where Value Like "%sqlite.org%"

and see actual results even not knowing how they planned their schema.

Internally this virtual table simply uses general selects for all
other tables met in sqlite_master. This is a good (but probably not
the best) test for measuring virtual tables performance, because

   SELECT * FROM AllValues

is equivalent to reading all conventional tables of this database.
Besides
- the tool I use has a tweaker implemented with VFS that allows
measuring speed and other characteristics of the query performed while
the query is in effect.
- I have an option that forces resetting windows cache for the
database file when it is reopened. So with it we exclude the windows
cache from consideration so pure I/O reading is used. Btw, when you do
your comparison, it's very important to reset system cache before
every measurement that involves I/O.


So I took a comparatively large (500 Mb) database consisting of
several small and one big table (Posts) and compared two queries.

(Query1)

   Select sum(length(Body) + length(Title)) from Posts

This one effectively reads the table data and uses
- length() to force sqlite to read texts that don't fit into a single db page
- sum() to exclude accumulating results on my side from the comparison, so
we have a single-row, single-column result from the work completely
done by sqlite.

(Query2)

   Select Sum(Length(Value)) from AllValues

This one performs basically the same but using sqlite virtual tables
api. It also touches other tables, but since they're small, we can
forget about this.

Query1 (General):
   Read: 540MB,
   Time: 24.2 sec,
   CPU Time: 6 Sec (25%)
   Speed: 22.31 MB/Sec

Query2 (Virtual):
   Read: 540MB,
   Time: 27.3 Sec,
   CPU Time: 13 sec (51%)
   Speed: 20 MB/Sec

In my particular test the noticeable 

[sqlite] About "speed"

2014-03-02 Thread big stone
Hello,

This morning I saw  Pandas/Wes McKinney communicating figures :
 - starting at 37'37" of http://vimeo.com/79562736,
 - leaking a slide where SQLite "is" 15 times slower than Postgresql.

==> the dataset is public :
http://www.fec.gov/disclosurep/PDownload.do?candId=P0001=2012=All%20Candidates=pNational
==> the sql are basic.

Wouldn't it be nice to update the "speed.html" page to have an objective
vision ?

Rationale :
- better show progress (it's hidden in
http://www.sqlite.org/checklists/3080300/index),
- better show non-time metrics : memory, electricity ,i/o...
- better show options effect : ":memory:" , "compile -o2", ...
- better show SQLite position in the SQL landscape.