[sqlite] Custom aggregate functions in Tcl

2019-01-28 Thread Andy Goth
I wish to define custom aggregate functions in Tcl, but this capability is
currently not exposed through the Tcl interface. Thus I am thinking about
how best to add it. Here's a first crack at a design proposal:

Extend the [db function] command to accept an -aggregate switch that makes
the new function an aggregate function. Otherwise, the function defaults
to being a scalar function.

When an aggregate Tcl function is called from SQL, it is invoked in two
phases: step and final. The step phase receives the input data, and the
final phase produces the result.

During step, the function is invoked once for each row (in the group). For
the first row (in the group), the first argument to the function will be
empty string, and subsequent arguments are the SQL values from the row
being processed. For each additional row (in the group), the first argument
is the Tcl value returned by the prior invocation, and subsequent arguments
are as above.

During final, the function is invoked one last time (at the end of each
group). Its sole argument is the return value of the last step invocation,
or empty string if there were no rows. The return value of the Tcl function
is used as the return value of the SQL function.

If there were no rows and GROUP BY was used, the function is not invoked at
all, because there were no groups.

The Tcl function can tell whether it is in the step or final phase by how
many arguments it receives. If it receives multiple, it is in step. If it
receives only one, it is in final.

Depending on how it is written, the Tcl function may be able to tell if
it's being called for the first time (within the group) by checking if its
first argument is empty string. If non-empty, it definitely has been called
before (within the group). If empty, it is most likely on the first row
and, if in step, may need to initialize.

However, if the aggregate function wishes to disregard some of its input
data, it may choose to return empty string to discard any state data
arising from the current and prior rows (in the group). This will result in
empty string being the first argument to the next invocation. This mode of
operation is ill-advised but not illegal; some valid use cases may exist,
provided that ORDER BY is being used.

It is an error to pass an aggregate Tcl function zero arguments.
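For comparison, Python's sqlite3 module already exposes user-defined aggregates through a class-based step/finalize protocol. The sketch below (Python purely for illustration; the table and data are invented) demonstrates the same two-phase idea the proposal describes:

```python
import sqlite3

# Two-phase aggregate: step() consumes one row at a time,
# finalize() produces the SQL-level result -- analogous to the
# step and final phases of the proposed Tcl interface.
class KVList:
    def __init__(self):
        self.parts = []          # accumulated state, like the threaded Tcl value

    def step(self, name, value):
        self.parts.extend([name, value])

    def finalize(self):
        return " ".join(self.parts)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE global_config(name TEXT, value TEXT)")
con.executemany("INSERT INTO global_config VALUES (?, ?)",
                [("autosync", "1"), ("case-sensitive", "0")])
con.create_aggregate("list", 2, KVList)
row = con.execute("SELECT list(name, value) FROM "
                  "(SELECT * FROM global_config ORDER BY name)").fetchone()
print(row[0])   # -> autosync 1 case-sensitive 0
```

Python keeps the state in an object rather than threading it through return values, but the step/final split is the same.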

Here's an example that will return the entire Fossil global_config table as
a key/value dict:

db function list -deterministic -aggregate {
    apply {{state args} {
        concat $state $args
    }}
}

db onecolumn {
SELECT list(name, value)
  FROM global_config
 ORDER BY name
}

As for implementation, I think the existing code can be leveraged to a
great extent. Adapt the existing tclSqlFunc() to be the backend to new
tclSqlFuncScalar(), tclSqlFuncStep(), and tclSqlFuncFinal() functions, and
adjust the DB_FUNCTION code to recognize -aggregate and pass different
function pointers accordingly. Use sqlite3_aggregate_context() to hold the
Tcl_Obj * resulting from each invocation, then let it be the first argument
to the next invocation.
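The calling convention can also be prototyped outside SQLite entirely. This toy harness (Python, all names invented) threads each return value into the next call as the first argument, starting from empty string, with a one-argument final call, exactly as described above:

```python
def run_aggregate(func, rows):
    """Drive a Tcl-style aggregate: thread each return value into the
    next call as the first argument, starting from the empty string,
    then make one final single-argument call."""
    state = ""
    for row in rows:
        state = func(state, *row)      # step phase: state plus row values
    return func(state)                 # final phase: one argument only

def list_agg(state, *args):
    if not args:                       # one argument received: final phase
        return state
    return (state + " " + " ".join(args)).strip()  # step: accumulate

result = run_aggregate(list_agg, [("autosync", "1"), ("timeout", "60")])
print(result)   # -> autosync 1 timeout 60
```

Note that with no rows at all, the final call receives the empty string and the harness returns "", matching the no-rows behavior described earlier.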

I would be happy to implement this myself, since this is functionality I
will be needing soon. Once I'm satisfied with it and have test suite
updates, what's the recommended method for sharing my patch?

Question: does xFinal() get called if an error occurs during (or between)
calling xStep()? Are errors even possible? I'm curious if there's any way
to leak the Tcl_Obj pointed to by the aggregate context.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SEE Temp Files

2019-01-28 Thread Richard Hipp
On 1/28/19, Jim Borden  wrote:
> I see.  That complicates things a bit.  What happens to SQLite temp files
> when they are "done being used" (if such a concept exists).  Are they
> deleted or simply left there for the OS to clean up?

SQLite calls unlink() immediately after open().  So the OS deallocates
the file automatically when it is closed.  The O_EXCL, O_CREAT, and
O_NOFOLLOW flags are set on open(). The filename includes 64 bits of
randomness.

Windows is similar except the file is opened with the
FILE_FLAG_DELETE_ON_CLOSE flag because you cannot unlink() an open
file on Windows.
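On POSIX systems the pattern is easy to demonstrate (a rough sketch; the "etilqs_" prefix mimics SQLite's temp-file naming, but the details here are illustrative, and Windows would need FILE_FLAG_DELETE_ON_CLOSE instead):

```python
import os, secrets, tempfile

# Create-then-unlink: the file keeps no directory entry while in
# use, and the OS reclaims its storage when the descriptor closes.
name = os.path.join(tempfile.gettempdir(),
                    "etilqs_%016x" % secrets.randbits(64))  # 64 bits of randomness
flags = os.O_RDWR | os.O_CREAT | os.O_EXCL | getattr(os, "O_NOFOLLOW", 0)
fd = os.open(name, flags, 0o600)
os.unlink(name)                  # anonymous from here on
os.write(fd, b"intermediate results")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 64)
os.close(fd)                     # storage released here
print(data)   # -> b'intermediate results'
```

After the unlink() the file is invisible to other processes (barring /proc tricks), which is part of what makes the technique attractive.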
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] SEE Temp Files

2019-01-28 Thread Jim Borden
I see.  That complicates things a bit.  What happens to SQLite temp files when 
they are "done being used" (if such a concept exists).  Are they deleted or 
simply left there for the OS to clean up?

Jim Borden


On 2019/01/29 8:35, "sqlite-users on behalf of Richard Hipp" wrote:

Intermediate results may be written into a temp file, if they overflow
memory.  This could result in an information leak, yes.  On the other
hand, if you set SQLITE_TEMP_STORE=2 so that intermediate results are
held in memory, then intermediate results may be written to swap space
when the device gets under memory pressure.  So, I'm not sure it makes
that much difference.  I kind of suspect that an explicit TEMP file
would be safer, as it seems to me it will be harder to find forensic
traces of a TEMP file amid the clutter of a massive volume, and TEMP
files are likely to be overwritten quickly, which I'm not sure happens
with the swap file.  But maybe a security geek can correct me on this.
And maybe it depends a lot on what OS you are running, or what version
of that OS, or how you have the system configured.

So it is hard to know what the right thing to do is.

On 1/28/19, Jim Borden  wrote:
> Having moved from SQLCipher to SEE some time ago, the old advice that
> SQLCipher gave has stuck with me (from
> https://www.zetetic.net/sqlcipher/design/ in the Database Encryption and
> Temporary Files section):  “Provided that you taken the important step of
> disabling file base temporary stores (i.e. --enable-tempstore=yes during
> configuration and define SQLITE_TEMP_STORE=2 during build)”
>
> The reasoning behind this is that certain temporary files are not encrypted
> when being written and thus are a violation of the security provided by
> encryption of the database.  I couldn’t find any equivalent warnings
> regarding SQLite Encryption Extension (or anything to assure me that this
> was *not* the case with SEE) so I thought I would ask here if the same
> advice applies or can we be assured that anything SQLite with SEE writes to
> the disk relating to the encrypted database will also be encrypted?
>
> 
> The reason why I ask this is because there is a debate going on about the
> appropriate setting of SQLITE_TEMP_STORE on mobile devices and whether or
> not it will end up using too much memory and getting an application
> terminated.  However, the above advice would pretty much force our hand on
> the version of our product that uses SEE.
> 
>
> Thanks,
> Jim Borden
>
>
>


--
D. Richard Hipp
d...@sqlite.org





Re: [sqlite] -wal and -shm files left behind by libsqlite3.dylib

2019-01-28 Thread Scott Perry
This is expected behaviour. As Richard guessed, the system libsqlite3.dylib is 
built with SQLITE_ENABLE_PERSIST_WAL. For Darwin systems, the compromise of 
keeping the extra 4MiB (max) file around is generally worth the reduced I/O 
overhead that results from creating, resizing, and unlinking files.

If you need, you can still override this behaviour using the 
SQLITE_FCNTL_PERSIST_WAL opcode to the sqlite3_file_control() interface.
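Python's sqlite3 module doesn't expose sqlite3_file_control(), but the default (non-persistent) behaviour is easy to observe from it; on a SQLITE_ENABLE_PERSIST_WAL build like Apple's, the final check would presumably come out True instead:

```python
import os, sqlite3, tempfile

# In WAL mode, a stock SQLite build removes the -wal (and -shm)
# files when the last connection to the database closes.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x INTEGER)")
con.commit()
wal_while_open = os.path.exists(path + "-wal")   # True while a connection is open
con.close()
wal_after_close = os.path.exists(path + "-wal")  # False on a default build
print(wal_while_open, wal_after_close)
```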

Scott


On Jan 28, 2019, at 10:32, Carsten Müncheberg  
wrote:
> 
> When loading and using /usr/lib/libsqlite3.dylib (3.19.3) which is shipped
> with MacOS 10.13.6 I noticed that the -wal and -shm files are not deleted
> when closing the last connection to a database. I tested this with the
> sqlite3 command line tool. It does not happen when I compile and link
> SQLite myself.
> 
> Is this expected behavior or some kind of bug?
> 
> Carsten



Re: [sqlite] -wal and -shm files left behind by libsqlite3.dylib

2019-01-28 Thread Carsten Müncheberg
Am Mo., 28. Jan. 2019 um 20:01 Uhr schrieb Richard Hipp :

> On 1/28/19, Carsten Müncheberg  wrote:
> > When loading and using /usr/lib/libsqlite3.dylib (3.19.3) which is
> shipped
> > with MacOS 10.13.6 I noticed that the -wal and -shm files are not deleted
> > when closing the last connection to a database. I tested this with the
> > sqlite3 command line tool. It does not happen when I compile and link
> > SQLite myself.
> >
> > Is this expected behavior or some kind of bug?
>
> My guess is that Apple is enabling persistent WAL mode
> (https://www.sqlite.org/c3ref/c_fcntl_begin_atomic_write.html#sqlitefcntlpersistwal)
> by default on their build of the SQLite library.
>

Must be it, thanks for the swift reply.


Re: [sqlite] SEE Temp Files

2019-01-28 Thread Richard Hipp
Intermediate results may be written into a temp file, if they overflow
memory.  This could result in an information leak, yes.  On the other
hand, if you set SQLITE_TEMP_STORE=2 so that intermediate results are
held in memory, then intermediate results may be written to swap space
when the device gets under memory pressure.  So, I'm not sure it makes
that much difference.  I kind of suspect that an explicit TEMP file
would be safer, as it seems to me it will be harder to find forensic
traces of a TEMP file amid the clutter of a massive volume, and TEMP
files are likely to be overwritten quickly, which I'm not sure happens
with the swap file.  But maybe a security geek can correct me on this.
And maybe it depends a lot on what OS you are running, or what version
of that OS, or how you have the system configured.

So it is hard to know what the right thing to do is.

On 1/28/19, Jim Borden  wrote:
> Having moved from SQLCipher to SEE some time ago, the old advice that
> SQLCipher gave has stuck with me (from
> https://www.zetetic.net/sqlcipher/design/ in the Database Encryption and
> Temporary Files section):  “Provided that you taken the important step of
> disabling file base temporary stores (i.e. --enable-tempstore=yes during
> configuration and define SQLITE_TEMP_STORE=2 during build)”
>
> The reasoning behind this is that certain temporary files are not encrypted
> when being written and thus are a violation of the security provided by
> encryption of the database.  I couldn’t find any equivalent warnings
> regarding SQLite Encryption Extension (or anything to assure me that this
> was *not* the case with SEE) so I thought I would ask here if the same
> advice applies or can we be assured that anything SQLite with SEE writes to
> the disk relating to the encrypted database will also be encrypted?
>
> 
> The reason why I ask this is because there is a debate going on about the
> appropriate setting of SQLITE_TEMP_STORE on mobile devices and whether or
> not it will end up using too much memory and getting an application
> terminated.  However, the above advice would pretty much force our hand on
> the version of our product that uses SEE.
> 
>
> Thanks,
> Jim Borden
>
>
>


-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Inverted index without FTS

2019-01-28 Thread Jens Alfke
Part of what the FTS extension does is maintaining an inverted index.

Is it possible to use this inverted-index functionality without the rest of 
FTS? I imagine it would involve modifications to the FTS5 code, but could those 
be minor or would it require extensive hacking? Or alternatively, is there a 
SQLite extension available that purely manages an inverted index?

I know one can sort-of do this by writing a custom tokenizer to feed “words” 
into FTS; but I don’t think that will work here because (a) the keys aren’t 
necessarily textual, (b) we need to associate metadata that isn’t a byte offset 
in a string, and (c) querying FTS via the MATCH operator is extremely limited, 
e.g. it has to be at the top level of the WHERE clause.

—Jens


[sqlite] SEE Temp Files

2019-01-28 Thread Jim Borden
Having moved from SQLCipher to SEE some time ago, the old advice that SQLCipher 
gave has stuck with me (from https://www.zetetic.net/sqlcipher/design/ in the 
Database Encryption and Temporary Files section):  “Provided that you taken the 
important step of disabling file base temporary stores (i.e. 
--enable-tempstore=yes during configuration and define SQLITE_TEMP_STORE=2 
during build)”

The reasoning behind this is that certain temporary files are not encrypted 
when being written and thus are a violation of the security provided by 
encryption of the database.  I couldn’t find any equivalent warnings regarding 
SQLite Encryption Extension (or anything to assure me that this was *not* the 
case with SEE) so I thought I would ask here if the same advice applies or can 
we be assured that anything SQLite with SEE writes to the disk relating to the 
encrypted database will also be encrypted?


The reason why I ask this is because there is a debate going on about the 
appropriate setting of SQLITE_TEMP_STORE on mobile devices and whether or not 
it will end up using too much memory and getting an application terminated.  
However, the above advice would pretty much force our hand on the version of 
our product that uses SEE.


Thanks,
Jim Borden





Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Simon Slavin
On 28 Jan 2019, at 9:44pm, Chris Locke  wrote:

>> The table name should not be meaningful to your application; nothing in your 
>> application should conjure up a table name.
> 
> I can't get my head around this advice.  Is this just for this occasion, or 
> for every application?  What if I'm writing a customer address book?  Am I 
> allowed a table called 'customers'?  That's meaningful to my application.

The text you quote is unclear, and I see why you questioned it.

Your database has a schema.  A schema comprises tables, indexes, triggers, 
views, etc..  The names of those things mean something to the programmer.  The 
schema doesn't change unless the programmer changes their idea about what the 
application does.

Inside the tables you put your data.  The data changes, as the application may 
have to look inside a table, and examine what it finds, to figure out what data 
it has and what it means.

Programmers change schemas.  Just like they change the software.
Applications change data, not schemas.

There are occasional reasons to violate these rules.  But the people who write 
SQL libraries assume that programmers will follow those rules, and their 
optimizations for execution time and memory used assume you will follow those 
rules.

Simon.


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Warren Young
On Jan 28, 2019, at 2:44 PM, Chris Locke  wrote:
> 
>> The table name should not be meaningful to your application; nothing in
>> your application should conjure up a table name.
> 
> I can't get my head around this advice.  Is this just for this occasion, or
> for every application?  What if I'm writing a customer address book?  Am I
> allowed a table called 'customers'?  That's meaningful to my application.
> Not sure what you're trying to advise here.

He’s saying that if your customer is called Bill The Cat, you should not call 
the table holding that customer’s records “Bill the Cat”.

How else do you end up with 20,000 tables in a single database, each created on 
the fly based on some unspecified event, as the OP talks about?


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Chris Locke
> The table name should not be meaningful to your application; nothing in
> your application should conjure up a table name.

I can't get my head around this advice.  Is this just for this occasion, or
for every application?  What if I'm writing a customer address book?  Am I
allowed a table called 'customers'?  That's meaningful to my application.
Not sure what you're trying to advise here.

On Mon, Jan 28, 2019 at 6:59 PM James K. Lowden 
wrote:

> On Mon, 28 Jan 2019 16:28:41 +
> Simon Slavin  wrote:
>
> > SQL is not designed to have a variable number of tables in a
> > database.  All the optimization is done assuming that you will have a
> > low number of tables, and rarely create or drop tables.
>
> This.
>
> The table name should not be meaningful to your application; nothing in
> your application should conjure up a table name.  It's an external
> factor your application is configured to use.
>
> If you're generating table names based on application data, you're
> insinuating data in the metadata.  When Simon says "SQL is not
> designed" for that, he's referring to the fact that the schema is
> expected to be relatively stable because it *describes* the data.  The
> system is designed to search for data in the tables, not among their
> names.
>
> --jkl


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Warren Young
On Jan 28, 2019, at 9:17 AM, mzz...@libero.it wrote:
> 
> I developed an application that needs to create 1 table with thousands of rows 
> every time a certain event occurs.
> 
> This works in a good way, but when the number of the tables becomes huge 
> (about 15000/20000 tables) the first DataBase reading query, after Database 
> open, is very slow (about 4sec.) while next reading operations are faster.

This sounds like a “table per client” sort of architecture.  If so, and the 
others’ advice to put all the data in a single table won’t work for you, you 
can easily use SQLite to create a separate DB file for each client.  Each file 
can have the same schema, or not.

If some operations will need to work with data from multiple instances of this 
table, SQLite makes it easy to attach multiple DB files to a single connection 
and cross-query them.
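A small sketch of that pattern (Python; the file and table names are invented): two per-client database files with the same schema, attached to one connection and queried together:

```python
import os, sqlite3, tempfile

# One DB file per client, identical schema in each.
d = tempfile.mkdtemp()
for client, rows in (("alice", [1, 2]), ("bob", [3])):
    con = sqlite3.connect(os.path.join(d, client + ".db"))
    con.execute("CREATE TABLE events(x INTEGER)")
    con.executemany("INSERT INTO events VALUES (?)", [(r,) for r in rows])
    con.commit()
    con.close()

# Attach the second file to the first connection and cross-query.
con = sqlite3.connect(os.path.join(d, "alice.db"))
con.execute("ATTACH DATABASE ? AS bob", (os.path.join(d, "bob.db"),))
total = con.execute(
    "SELECT (SELECT COUNT(*) FROM main.events) + "
    "       (SELECT COUNT(*) FROM bob.events)").fetchone()[0]
print(total)   # -> 3
```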

Doing it this way may end up faster than the “everything in one table” 
approach, since SQLite’s performance is roughly proportional to the log2(n) of 
the number of rows in the tables you’re working with.  20,000 DB files with one 
table of N rows each might be faster than one DB file with N*20,000 rows in 
a single table.

There are speed hits in that architecture, too.  

For one, the disk directory file lookup might begin to take significant amounts 
of time at some point.  Usually this begins to happen at about this “tens of 
thousands” area, so if you’re looking at going into the hundreds of thousands 
or millions of DB files, you might want to make the directory structure 
hierarchical in some way.

Another speed trap risk here is that each opened DB takes some memory of its 
own.  If you’ve got RAM to burn, this might be great architecture, but if you 
run the system into swapping, the single-table architecture would probably be 
better.


Re: [sqlite] Claimed vulnerability in SQLite: Info or Intox?

2019-01-28 Thread Warren Young
On Jan 28, 2019, at 1:26 AM, Vladimir Barbu 
 wrote:
> 
> This vulnerability has been addressed in SQLite 3.26.0. When could we expect 
> new version (official) of System.Data.SQLite which uses 3.26.0?

Are you both using FTS3 *and* letting your users execute arbitrary SQL?

Most of the time, the latter is a vulnerability in and of itself.


Re: [sqlite] -wal and -shm files left behind by libsqlite3.dylib

2019-01-28 Thread Richard Hipp
On 1/28/19, Carsten Müncheberg  wrote:
> When loading and using /usr/lib/libsqlite3.dylib (3.19.3) which is shipped
> with MacOS 10.13.6 I noticed that the -wal and -shm files are not deleted
> when closing the last connection to a database. I tested this with the
> sqlite3 command line tool. It does not happen when I compile and link
> SQLite myself.
>
> Is this expected behavior or some kind of bug?

My guess is that Apple is enabling persistent WAL mode
(https://www.sqlite.org/c3ref/c_fcntl_begin_atomic_write.html#sqlitefcntlpersistwal)
by default on their build of the SQLite library.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread James K. Lowden
On Mon, 28 Jan 2019 16:28:41 +
Simon Slavin  wrote:

> SQL is not designed to have a variable number of tables in a
> database.  All the optimization is done assuming that you will have a
> low number of tables, and rarely create or drop tables.

This.

The table name should not be meaningful to your application; nothing in
your application should conjure up a table name.  It's an external
factor your application is configured to use.  

If you're generating table names based on application data, you're
insinuating data in the metadata.  When Simon says "SQL is not
designed" for that, he's referring to the fact that the schema is
expected to be relatively stable because it *describes* the data.  The
system is designed to search for data in the tables, not among their
names.  

--jkl


[sqlite] -wal and -shm files left behind by libsqlite3.dylib

2019-01-28 Thread Carsten Müncheberg
When loading and using /usr/lib/libsqlite3.dylib (3.19.3) which is shipped
with MacOS 10.13.6 I noticed that the -wal and -shm files are not deleted
when closing the last connection to a database. I tested this with the
sqlite3 command line tool. It does not happen when I compile and link
SQLite myself.

Is this expected behavior or some kind of bug?

Carsten


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Igor Korot
Hi,

On Mon, Jan 28, 2019 at 10:17 AM  wrote:
>
> Dear,
>
> I developed an application that needs to create 1 table with thousands of rows
> every time a certain event occurs.

Are you coming from the FoxBase/ForPro world?

Thank you.

>
> This works in a good way, but when the number of the tables become huge 
> (about 15000/20000 tables) the first DataBase reading query, after Database 
> open, is very slow (about 4sec.) while next reading operations are faster.
>
> How can I speed up?
>
> Thanks.
>
>  Davide


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread R Smith
When the same solution is offered in rapid intervals, it must be the 
correct one then! :)


To elaborate on what Chris mentioned regarding relational database 
methodology and related tables, here is why and a good description of 
how to do that correctly:

https://en.wikipedia.org/wiki/First_normal_form

And to add to the slew of "Few-tables-many-rows rather than 
Many-tables-few-rows" solutions offered, one thing to note:  After a 
single table contains a few million rows, INSERTing will become slightly 
slower, but the difference will still be measured in milliseconds rather 
than seconds. This really is by far the fastest (and correctest) way.





Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread R Smith

On 2019/01/28 6:17 PM, mzz...@libero.it wrote:

I developed an application that needs to create 1 table with thousands of rows 
every time a certain event occurs.

This works in a good way, but when the number of the tables becomes huge (about 
15000/20000 tables) the first DataBase reading query, after Database open, is 
very slow (about 4sec.) while next reading operations are faster.

How can I speed up?



This is because SQLite has to parse 15000/20000 table definitions when it 
connects to that DB; it's a laborious process, and 4 seconds is actually 
very fast for it.


The problem here is a design one.

Accessing table data is fast; accessing tables is slow. You should have 
more data and fewer tables.  It's like storing a few boxes of your goods in 
a million warehouses around the World: it's slow to retrieve. Rather 
have one warehouse big enough to hold all your boxes.


Put your data in a single table. One table with 10 million rows will be 
fast as lightning in SQLite, while 10,000 tables with 1,000 rows each 
will work excessively slowly. (They are slow in all RDBMS systems, but you 
won't notice it much in client-server systems because the table schema 
remains cached in the server.)


Instead of having tables like "MyData_1", "MyData_2", "MyData_3", ... , 
"MyData_n", rather add an extra indexed field to one table called simply 
"MyData" that carries the 1, 2, 3 ... n etc.


That way you would end up with one table having millions of rows and not 
millions of tables having few rows.


Cheers!
Ryan




Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Chris Locke
Why do you need to create a new table each time?  It's easier to create a
relational database.  This means creating one main table that might (for
example) have a 'tableID' field, which points to just one other table.
This means you only need two tables, not 20,000+.
Just an idea.  Depends on what you're trying to achieve.

On Mon, Jan 28, 2019 at 4:18 PM  wrote:

> Dear,
>
> I developed an application that needs to create 1 table with thousands of
> rows every time a certain event occurs.
>
> This works in a good way, but when the number of the tables becomes huge
> (about 15000/20000 tables) the first DataBase reading query, after Database
> open, is very slow (about 4sec.) while next reading operations are faster.
>
> How can I speed up?
>
> Thanks.
>
>  Davide


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Simon Slavin
On 28 Jan 2019, at 4:17pm, mzz...@libero.it wrote:

> when the number of the tables becomes huge (about 15000/20000 tables) the 
> first DataBase reading query, after Database open, is very slow (about 4sec.) 
> while next reading operations are faster.
> 
> How can I speed up?

Put all the data in the same table.

At the moment, you pick a new table name each time you write another set of 
data to the database.  Instead of that, create just one big table, and add an 
extra column, called "dataset", to the columns which already exist.  In that you 
put the string you previously used as the table name.

SQL is not designed to have a variable number of tables in a database.  All the 
optimization is done assuming that you will have a low number of tables, and 
rarely create or drop tables.

Simon.


Re: [sqlite] SQLite slow when lots of tables

2019-01-28 Thread Richard Hipp
On 1/28/19, mzz...@libero.it  wrote:
> Dear,
>
> I developed an application that needs to create 1 table with thousands of rows
> every time a certain event occurs.
>
> This works in a good way, but when the number of the tables becomes huge
> (about 15000/20000 tables) the first DataBase reading query, after Database
> open, is very slow (about 4sec.) while next reading operations are faster.

When the database first opens, SQLite has to load the schema for the
database (how else is it supposed to know the structure of the
database?) which in your case involves reading and parsing 20,000
CREATE TABLE statements.

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] SQLite slow when lots of tables

2019-01-28 Thread mzzdvd
Dear,

I developed an application that needs to create 1 table with thousands of rows 
every time a certain event occurs.

This works in a good way, but when the number of the tables becomes huge (about 
15000/20000 tables) the first DataBase reading query, after Database open, is 
very slow (about 4sec.) while next reading operations are faster.

How can I speed up?

Thanks.

 Davide


[sqlite] Minor concern with alter table .. rename to ..

2019-01-28 Thread Olivier Mascia
Dear,

I have a minor concern with alter table .. rename to ...
Please follow this sample (3.26.0):

sqlite> create table T(A integer);
sqlite> .schema T
CREATE TABLE T(A integer);

sqlite> alter table T rename to X;
sqlite> .schema X
CREATE TABLE IF NOT EXISTS "X"(A integer);

A. Why does X need to be escaped (" ") in the new table definition following 
the rename?

(Sure there is nothing wrong with that).
Though couldn't it follow the exact wording used in the rename to clause?
X if I write X and "X" if I write "X"?
To the extent that the rename could have to be refused (error return) in case 
of a parsing issue due to the nature of the unquoted X.

B. The next question is not directly related to the rename to operation, but I 
spotted it when testing the rename to.

Why is "IF NOT EXISTS" auto-added to the text of CREATE TABLE reported by 
the shell command .schema when the table name was escaped in the original 
create statement?

sqlite> create table R(A integer);
sqlite> create table "S"(A integer);
sqlite> .schema
CREATE TABLE R(A integer);
CREATE TABLE IF NOT EXISTS "S"(A integer);

while we have this in sqlite_master:

sqlite> select * from sqlite_master;
type|name|tbl_name|rootpage|sql
table|R|R|2|CREATE TABLE R(A integer)
table|S|S|3|CREATE TABLE "S"(A integer)

Again, not bugs, merely possible discrepancies that might have a good reason to 
exist, though I do not yet understand why.
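To confirm the observation that sqlite_master itself is unaffected, here is a minimal sketch (Python's stdlib sqlite3): the sql column stores the CREATE statements verbatim, so the "IF NOT EXISTS" decoration must come from the shell's .schema rendering, not from the library.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE R(A integer)')
con.execute('CREATE TABLE "S"(A integer)')

# sqlite_master stores each CREATE statement verbatim; no
# "IF NOT EXISTS" is added by the library for the quoted name.
rows = dict(con.execute("SELECT name, sql FROM sqlite_master"))
print(rows["R"])   # CREATE TABLE R(A integer)
print(rows["S"])   # CREATE TABLE "S"(A integer)
```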

-- 
Best Regards, Meilleures salutations, Met vriendelijke groeten,
Olivier Mascia




Re: [sqlite] Support for System.Data.SQLite: Different API type for int/integer columns

2019-01-28 Thread Barry
I use Convert.ToInt32 (or whatever else is appropriate) instead of direct
casts when reading from System.Data.SQLite... I use a very similar schema
in an SQLServer and SQLite database, and Convert seems to handle many of
the differences without requiring different code for each database engine.

On Mon, 28 Jan 2019 at 03:31, Joe Mistachkin  wrote:

>
> JP wrote:
> >
> > The answers don't explain why there is a bitness difference at run-time
> > between the types retrieved from INT and INTEGER columns, and that's my
> > question. From reading https://sqlite.org/datatype3.html I understand
> > there should be no difference whatsoever between defining a column INT
> > or INTEGER (other than whether a primary key may become a rowid alias).
> >
>
> System.Data.SQLite maps type names to their associated types.  Mappings
> can be overridden on a per-connection basis.  By default, INT maps to a
> 32-bit integer and INTEGER maps to a 64-bit integer, mostly for reasons
> of backward compatibility.
>
> In order to customize the mappings, the flag UseConnectionTypes should
> be used when opening the connection -AND- the AddTypeMapping method is
> used to override a built-in mapping of the same name (e.g. "INT").
>
> --
> Joe Mistachkin
>
>
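For context on JP's point: in SQLite itself, INT and INTEGER both get INTEGER affinity, so the 32- vs 64-bit distinction lives entirely in the System.Data.SQLite type-name mapping, not in the storage layer. A minimal sketch (Python's stdlib sqlite3) showing both declarations store an identical 64-bit-capable value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a INT, b INTEGER)")
con.execute("INSERT INTO t VALUES (?, ?)", (2**40, 2**40))

# Both columns have INTEGER affinity; SQLite stores each value as a
# 64-bit integer regardless of which type name was declared.
a, b = con.execute("SELECT a, b FROM t").fetchone()
print(a == b == 2**40)   # True
```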


Re: [sqlite] Claimed vulnerability in SQLite: Info or Intox?

2019-01-28 Thread Vladimir Barbu
Hi,

This vulnerability has been addressed in SQLite 3.26.0. When can we expect an 
official new version of System.Data.SQLite that uses 3.26.0?


---
Vladimir
   



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Keith Medcalf
Sent: Friday, December 21, 2018 06:45
To: SQLite mailing list 
Subject: Re: [sqlite] Claimed vulnerability in SQLite: Info or Intox?


Only if the application were so badly written as to permit the execution of 
untrusted code ...


---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users- 
>boun...@mailinglists.sqlite.org] On Behalf Of Jens Alfke
>Sent: Thursday, 20 December, 2018 18:56
>To: SQLite mailing list
>Subject: Re: [sqlite] Claimed vulnerability in SQLite: Info or Intox?
>
>
>
>> On Dec 20, 2018, at 5:05 PM, Simon Slavin 
>wrote:
>>
>> Which would make it do what ?  I can imagine "crash with a memory
>fault".  I find it much harder to believe "execute code stored in the 
>database".  You would have to know a lot about a program to make it do 
>that, and an attack aimed at one program/library (e.g. Chromium) 
>wouldn't work on another with a different memory layout.
>
>It depends on the details of the vulnerability. Since it’s an FTS3 
>query that triggered the problem, there are probably multiple FTS3 and 
>SQLite stack frames active at the time the buffer overrun occurs, so it 
>may not depend so much on the application itself. (Of course it would 
>likely depend on the compiler, the optimization settings, and of course 
>CPU architecture.)
>
>Again, from Dr. Hipp’s statement:
>   By making malicious changes to the shadow tables that FTS3 uses and 
>then running
>   FTS3 queries that used those tables, an integer overflow could cause a
>   buffer overrun, which if carefully managed might lead to an RCE.
>   This is only a problem for application that enable FTS3 (using the
>   SQLITE_ENABLE_FTS3 or SQLITE_ENABLE_FTS4 compile-time options) and
>   which allow potential attackers to run arbitrary SQL.
>
>Anyway, my original question was: If an application opens untrusted 
>SQLite databases as documents, and if a trigger added to a database can 
>run arbitrary SQL, wouldn’t that make such an application vulnerable?
>
>—Jens





Re: [sqlite] Min/Max and skip-scan optimizations

2019-01-28 Thread Gerlando Falauto
YES! Thank you!
Many thanks for the ".eqp full" tip also, that really explains a lot
(though I don't really understand any of it yet).

Have a great day!
Gerlando
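The pattern in Keith's quoted reply below (a correlated min() subquery over the distinct group keys) can be reproduced in miniature; a sketch with Python's stdlib sqlite3 and a much smaller data set:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rolling(source1 TEXT, source2 TEXT, ts INTEGER, value TEXT)")
con.execute("CREATE INDEX sources ON rolling(source1, source2, ts)")
con.executemany(
    "INSERT INTO rolling VALUES (?, ?, ?, NULL)",
    [(s1, s2, ts) for s1 in ("aaa", "bbb") for s2 in ("X1", "X2") for ts in range(5)],
)

# Correlated scalar subquery: for each distinct (source1, source2) pair,
# the (source1, source2, ts) index lets min(ts) be answered by a single
# seek instead of a scan over the whole group.
rows = con.execute("""
    SELECT source1, source2,
           (SELECT min(ts) FROM rolling
             WHERE source1 = x.source1 AND source2 = x.source2)
      FROM (SELECT DISTINCT source1, source2 FROM rolling) AS x
""").fetchall()
print(sorted(rows))   # [('aaa', 'X1', 0), ('aaa', 'X2', 0), ('bbb', 'X1', 0), ('bbb', 'X2', 0)]
```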


On Mon, Jan 28, 2019 at 6:50 AM Keith Medcalf  wrote:

>
> Do you perhaps want this:
>
> select source1,
>source2,
>(
> select min(ts)
>   from rolling
>  where source1 = x.source1
>and source2 = x.source2
>)
>   from (
> select distinct source1,
> source2
>   from rolling
>) as x;
>
> SQLite version 3.27.0 2019-01-28 00:42:06
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> .timer on
> sqlite> CREATE TABLE `rolling` (
>...> `source1`TEXT NOT NULL,
>...> `source2`TEXT NOT NULL,
>...> `ts`INTEGER NOT NULL,
>...> `value`TEXT
>...> );
> Run Time: real 0.002 user 0.00 sys 0.00
> sqlite>
> sqlite> CREATE INDEX `sources` ON `rolling` (
>...> `source1`,
>...> `source2`,
>...> `ts`
>...> );
> Run Time: real 0.001 user 0.00 sys 0.00
> sqlite>
> sqlite> INSERT INTO rolling
>...> WITH RECURSIVE
>...>   src1( source1 ) AS ( VALUES("aaa") UNION ALL VALUES("bbb") ),
>...>   src2( source2 ) AS ( VALUES("X1") UNION ALL VALUES("X2")
> UNION ALL
>...> VALUES("X3") UNION ALL VALUES("X4") ),
>...>   cnt( ts, value) AS (
>...>   VALUES( 0, "")
>...> UNION ALL
>...>   SELECT ts+1, value FROM cnt LIMIT 100)
>...>
>...> select src1.source1, src2.source2, cnt.* from src1, src2, cnt;
> Run Time: real 8.920 user 8.843750 sys 0.078125
> sqlite>
> sqlite> analyze;
> Run Time: real 1.285 user 1.281250 sys 0.00
> sqlite> .eqp full
> sqlite>
> sqlite> select source1,
>...>source2,
>...>(
>...> select min(ts)
>...>   from rolling
>...>  where source1 = x.source1
>...>and source2 = x.source2
>...>)
>...>   from (
>...> select distinct source1,
>...> source2
>...>   from rolling
>...>) as x;
> QUERY PLAN
> |--CO-ROUTINE 2
> |  `--SCAN TABLE rolling USING COVERING INDEX sources (~7864320 rows)
> |--SCAN SUBQUERY 2 AS x (~7864320 rows)
> `--CORRELATED SCALAR SUBQUERY 1
>`--SEARCH TABLE rolling USING COVERING INDEX sources (source1=? AND
> source2=?) (~983040 rows)
> addr  opcode p1p2p3p4 p5  comment
>   -        -  --  -
> 0 Init   0 64000  Start at 64
> 1 InitCoroutine  1 23200  x
> 2 Null   1 4 008  r[4]=NULL
> 3 OpenRead   4 3 0 k(4)   00  root=3 iDb=0;
> sources
> 4 ColumnsUsed4 0 0 3  00
> 5 Explain5 0 0 SCAN TABLE rolling USING COVERING
> INDEX sources (~7864320 rows)  00
> 6 Noop   0 0 000  Begin
> WHERE-loop0: rolling
> 7 Rewind 4 212 0  00
> 8 Noop   0 0 000  Begin
> WHERE-core
> 9 Column 4 0 200
> r[2]=rolling.source1
> 10Column 4 1 300
> r[3]=rolling.source2
> 11Ne 2 134 (BINARY)   80  if
> r[4]!=r[2] goto 13
> 12Eq 3 205 (BINARY)   80  if
> r[5]==r[3] goto 20
> 13Copy   2 4 100
> r[4..5]=r[2..3]
> 14Yield  1 0 000
> 15Noop   0 0 000  End
> WHERE-core
> 16Column 4 0 600  r[6]=
> 17Column 4 1 700  r[7]=
> 18SeekGT 4 216 2  00  key=r[6..7]
> 19  Goto   1 8 000
> 20Next   4 8 001
> 21Noop   0 0 000  End WHERE-loop0:
> rolling
> 22EndCoroutine   1 0 000
> 23Explain230 0 SCAN SUBQUERY 2 AS x (~7864320
> rows)  00
> 24Noop   0 0 000  Begin
> WHERE-loop0: x
> 25InitCoroutine  1 0 200
> 26  Yield  1 62000  next row of x
> 27  Noop   0 0 000  Begin
> WHERE-core
> 28  Copy   2 9 000  r[9]=r[2];
> x.source1
> 29  Copy   3 10000