[sqlite] whish list for 2016

2016-02-17 Thread Christian Schmitz

> 
> DROP COLUMN and RENAME COLUMN are relatively easy for tables that lack
> indexes, triggers, views, foreign key references, CHECK constraints,
> and other constructs that might reference the dropped or renamed
> column.  Reliably finding every use of a column name and changing it
> can be tricky.  The problem is more acute for dropping a column - what
> do you do then, change each reference to NULL?

It may be okay to say:

If it is referenced, we may live with an error.

"Can't drop column xxx as it is used in trigger yyy."


Most columns for us are not referenced in any way. Just data columns.
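A check like that can be approximated from user code today by scanning sqlite_master for triggers, views and indexes whose SQL text mentions the column. A rough Python sketch (the helper name and the word-boundary matching are my own illustration, not anything SQLite provides; the matching is approximate and can confuse identically named identifiers):

```python
import re
import sqlite3

def find_column_references(conn, column):
    """Return (type, name) of triggers/views/indexes whose stored SQL
    mentions the column.  Word-boundary matching is approximate: it
    cannot tell this column apart from a same-named identifier elsewhere."""
    pat = re.compile(r"\b%s\b" % re.escape(column))
    return [(t, n) for t, n, sql in conn.execute(
        "SELECT type, name, sql FROM sqlite_master "
        "WHERE type IN ('trigger', 'view', 'index') AND sql IS NOT NULL")
        if pat.search(sql)]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a, b);
    CREATE TRIGGER trg AFTER UPDATE OF a ON t BEGIN SELECT 1; END;
""")
print(find_column_references(conn, "a"))  # trigger trg references column a
print(find_column_references(conn, "b"))  # plain data column: no hits
```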

Sincerely
Christian

-- 
Read our blog about news on our plugins:

http://www.mbsplugins.de/



[sqlite] whish list for 2016

2016-01-13 Thread Eduardo Morras
On Tue, 12 Jan 2016 21:58:01 +0100
Christian Schmitz  wrote:

> 
> > Am 20.12.2015 um 19:12 schrieb Big Stone :
> > 
> > Hi All,
> > 
> > To prepare for 2016 greetings moment, here is my personnal whish
> > list
> 
> Unless I missed something, I may suggest
> 
> * moveprev
> * movefirst
> * movelast
> * move to record in cursor with given index  

Insert the query result into a temporary table or view and work with that. The 
order of the result rows can change between runs if the query hasn't got an 
"ORDER BY".

Depending on what language you are programming in, you can store the query 
result in a table (Lua), a dictionary (Python, Erlang and others), or create a 
specific struct for your query in C. 

You can use the deprecated get_table and free_table interfaces to get a 
similar recordset struct. 
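For example, materializing the result into a temp table gives both passes the same stable set of rows; a small sketch using Python's built-in sqlite3 module (table and column names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (x);
    INSERT INTO t VALUES (3), (1), (2);
""")

# Run the query once and keep its rows; looping twice over the temp
# table is then safe even though the original query had no ORDER BY.
conn.execute("CREATE TEMP TABLE result AS SELECT x FROM t WHERE x > 1")

first_pass = [r[0] for r in conn.execute("SELECT x FROM result")]
second_pass = [r[0] for r in conn.execute("SELECT x FROM result")]
print(first_pass == second_pass)
```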

 HTH

> Looping over a recordset twice is often useful.
> 
> Sincerely
> Christian


---   ---
Eduardo Morras 


[sqlite] whish list for 2016

2016-01-13 Thread Jean-Christophe Deschamps
At 08:28 13/01/2016, you wrote:
>On Wed, Jan 13, 2016 at 2:39 AM, Simon Slavin  
>wrote:
>
> > On 12 Jan 2016, at 11:56pm, Scott Hess  wrote:
> >
> > > If I am writing a client that can read SQLite databases, then I 
> probably
> > > don't want your database to be injecting a bunch of arbitrary PRAGMA
> > calls
> > > into my client.
> >
> > It is, after all, the equivalent of an autoexecute macro.  And we 
> all know
> > how good an idea that was.
> >
> > Okay, folks, I'm with Scott.  If you want SQLite to execute a bunch of
> > commands when the database is opened write your own replacement for 
> _open_
> > which opens the file then reads commands from a table and executes 
> them.
> >
>
>You have in my opinion taken this out of context, and are assuming the
>important part is the application, and not the data (i.e. database file).
>
>The original idea was simply to associate with the database some essential
>settings, which should have been such in the first place and aren't only
>because of legacy reasons, and not arbitrary macros. Keith just got 
>carried
>away :) --DD

Exactly! I see a clear difference between settings which are 
*-required-* by the schema and those which are only a convenience or 
comfort for the application(s).

The former type includes what should actually be part of the schema in 
an ideal world:
   foreign keys, recursive triggers, loading of [auto]extensions set 
ON/OFF, ...;
   function extensions and collations, ICU, ...

If some index uses a custom collation which isn't loaded, then queries 
will give erroneous results and not perform DB changes as intended.

If a schema change requires for instance recursive triggers ON, then 
maybe 5, 10 or more applications will need maintenance and 
redistribution. Forgetting to update a single app may silently spoil DB 
integrity.

Similarly, a single app not setting foreign keys ON when needed may 
again silently destroy DB integrity.

--
JcD 



[sqlite] whish list for 2016

2016-01-13 Thread Simon Slavin

On 13 Jan 2016, at 7:28am, Dominique Devienne  wrote:

> You have in my opinion taken this out of context, and are assuming the
> important part is the application, and not the data (i.e. database file).

I apologise.  I didn't read back down the thread before replying.  Sorry.

Simon.


[sqlite] whish list for 2016

2016-01-13 Thread Dominique Devienne
On Wed, Jan 13, 2016 at 2:39 AM, Simon Slavin  wrote:

> On 12 Jan 2016, at 11:56pm, Scott Hess  wrote:
>
> > If I am writing a client that can read SQLite databases, then I probably
> > don't want your database to be injecting a bunch of arbitrary PRAGMA
> calls
> > into my client.
>
> It is, after all, the equivalent of an autoexecute macro.  And we all know
> how good an idea that was.
>
> Okay, folks, I'm with Scott.  If you want SQLite to execute a bunch of
> commands when the database is opened write your own replacement for _open_
> which opens the file then reads commands from a table and executes them.
>

You have in my opinion taken this out of context, and are assuming the
important part is the application, and not the data (i.e. database file).

The original idea was simply to associate with the database some essential
settings, which should have been such in the first place and aren't only
because of legacy reasons, and not arbitrary macros. Keith just got carried
away :) --DD


[sqlite] whish list for 2016

2016-01-13 Thread Scott Hess
On Wed, Jan 13, 2016 at 12:42 AM, Jean-Christophe Deschamps <
jcd at antichoc.net> wrote:

> At 08:28 13/01/2016, you wrote:
>
>> On Wed, Jan 13, 2016 at 2:39 AM, Simon Slavin 
>> wrote:
>> > On 12 Jan 2016, at 11:56pm, Scott Hess  wrote:
>> > > If I am writing a client that can read SQLite databases, then I
>> probably
>> > > don't want your database to be injecting a bunch of arbitrary PRAGMA
>> > calls
>> > > into my client.
>> >
>> > It is, after all, the equivalent of an autoexecute macro.  And we all
>> know
>> > how good an idea that was.
>> >
>> > Okay, folks, I'm with Scott.  If you want SQLite to execute a bunch of
>> > commands when the database is opened write your own replacement for
>> _open_
>> > which opens the file then reads commands from a table and executes them.
>> >
>>
>> You have in my opinion taken this out of context, and are assuming the
>> important part is the application, and not the data (i.e. database file).
>>
>> The original idea was simply to associate with the database some essential
>> settings, which should have been such in the first place and aren't only
>> because of legacy reasons, and not arbitrary macros. Keith just got
>> carried
>> away :) --DD
>>
>
> Exactly! I see a clear difference between settings which are *-required-*
> by the schema and those which are only a convenience or comfort for the
> application(s).
>
> The former type includes what should actually be part of the schema in an
> ideal world:
>   foreign keys, recursive triggers, loading of [auto]extensions set
> ON/OFF, ...;
>   function extensions and collations, ICU, ...
>
> If some index uses a custom collation which isn't loaded, then queries
> will give erroneous results and not perform DB changes as intended.
>

Many of these items are things which require that your version of SQLite be
compiled with the right settings and/or additional code, so they definitely
_are_ the purview of the client, not the data.  The data may be corrupted
by trying to use it in the wrong context, sure, but the client may be
rendered insecure or buggy by loading inappropriate binary code.  I think
best case is for the database to have a catalog of what it needs, and the
client to call an API function to verify that it is constructed in a way to
meet the catalog's needs.  Perhaps the intersection could have allowances
for missing requirements, for instance read-only access can work without
foreign-key support.

If a schema change requires for instance recursive triggers ON, then maybe
> 5, 10 or more applications will need maintenance and redistribution.
> Forgetting to update a single app may silently spoil DB integrity.
>

This issue would seem to apply if you make a change requiring a new SQLite
version, or different SQLite compile-time settings.  Or if you find a bug
with your queries or schema.  These are really distinct areas of
responsibility, so rather than partial piecemeal solutions at the SQLite
level, you'd probably be better off finding a distribution strategy which
makes such updates reasonable, or an architecture which isolates the
database access into a single component.

-scott


[sqlite] whish list for 2016

2016-01-13 Thread Simon Slavin

On 12 Jan 2016, at 11:56pm, Scott Hess  wrote:

> If I am writing a client that can read SQLite databases, then I probably
> don't want your database to be injecting a bunch of arbitrary PRAGMA calls
> into my client.

It is, after all, the equivalent of an autoexecute macro.  And we all know how 
good an idea that was.

Okay, folks, I'm with Scott.  If you want SQLite to execute a bunch of commands 
when the database is opened write your own replacement for _open_ which opens 
the file then reads commands from a table and executes them.  That way it 
happens when you want it (in your software) and not when you're using the 
command-line tool to debug problems.

Simon.


[sqlite] whish list for 2016

2016-01-12 Thread Christian Schmitz

> Am 20.12.2015 um 19:12 schrieb Big Stone :
> 
> Hi All,
> 
> To prepare for 2016 greetings moment, here is my personnal whish list

Unless I missed something, I may suggest

* moveprev
* movefirst
* movelast
* move to record in cursor with given index  


Looping over a recordset twice is often useful.

Sincerely
Christian

-- 
Read our blog about news on our plugins:

http://www.mbsplugins.de/



[sqlite] whish list for 2016

2016-01-12 Thread Domingo Alvarez Duarte
I think this is a very good feature to have, thumbs up !  

>  Tue Jan 12 2016 9:50:33 pm CET from "James K. Lowden"
>  Subject: Re: [sqlite] whish list for 2016
>
>  On Fri, 8 Jan 2016 08:28:29 +0100
> Dominique Devienne  wrote:
> 
>  
>>  
>>>One way to do that would be to honor a special user-created table,
>>> say "PRAGMAS", with name-value pairs that are automatically applied
>>> when the database is opened.
>>> 
>>> 

>>  Interesting idea. A "special"-to-SQLite sqlite_pragmas table, similar
>> to sqlite_master. 
>> 

>  Similar in name, yes. sqlite_master is special in that it's built into
> the database file structure, though. You couldn't add another such
> table without changing the file structure, something OFL understandably
> doesn't want to do. 
> 
> I'm suggesting that the presence of a user-created 
> "configuration table" would govern the behavior of the database,
> analogous to a configuration file. 
> 
> --jkl
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 
>
>  






[sqlite] whish list for 2016

2016-01-12 Thread Keith Medcalf

On Tuesday, 12 January, 2016 13:58, Christian Schmitz  said:

> Unless I missed something, I may suggest

> * moveprev
> * movefirst
> * movelast
> * move to record in cursor with given index

There are no relational database engines on the market that can do this.  Those 
that *appear* to be able to do this are using some sleight of hand, either 
entirely on the client side (most common), or using some "server side 
assistance" so that the result set does not have to be cached in its entirety 
on the client.

Server side assistance was developed because of the propensity, once the client 
side feature was made "built-in", of the great unwashed to do things like 
"select * from customers;" and use the resulting client side cursor to scroll 
through the 200 million customer records with their 6 line GUI.  Needless to 
say, performance was somewhat suboptimal to say the least.  If you have ever 
used DSA.MSC to browse an Active Directory domain containing a billion objects 
then you know exactly what I mean.

Generally speaking, you have the choice of "server side" or "client side", 
either "complete set" or "keyset".  Client-Side Complete Set works as per the 
above -- the result set is sent into a row cache on the client, and the 
application fiddles with this rowset to make it look like you are going forward 
and backward (though you are not).  There is no further involvement of the 
database server.  Server-Side Set works the same way but the result set is 
cache in a temp table on the server indexed by ordinal position of the result 
row.  Magically created queries against this temp table make it look like you 
are scrolling the result set.

Keyset-driven scrollable cursors work in similar fashion, except that instead 
of caching the entire result row, only the primary keys of the source data rows 
are saved in the cache, again indexed by ordinal position.  This keyset is 
stored on either the client side or the server side (in a temp table).  By 
submitting the correct queries, the client can make it appear that you have a 
scrollable cursor.  The advantage of a keyset is twofold: it is often smaller 
than the full result set, and since it contains the primary keys of the 
underlying tables, one can do an UPDATE sourcetable ... WHERE CURRENT OF SET.

By design, a relational DBMS only has the concept of a result set and is not 
navigable except through artifice and trickery.  If you want a navigable 
database then you want either a Hierarchical, Network, or Network Extended 
database model, not a relational one.

> Looping over a recordset twice is often useful.

Simply resubmit the original query.  If you need repeatable read, do both 
inside the same transaction.
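In Python's sqlite3 wrapper that looks like the following: with both passes inside one explicit transaction, the second submission is guaranteed to see the same data (a minimal sketch; the demo table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE t (x); INSERT INTO t VALUES (1),(2),(3);")

conn.execute("BEGIN")  # hold a single read transaction across both passes
first = conn.execute("SELECT x FROM t ORDER BY x").fetchall()
# ... first pass over the recordset ...
second = conn.execute("SELECT x FROM t ORDER BY x").fetchall()  # resubmit
# ... second pass ...
conn.execute("COMMIT")
print(first == second)
```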







[sqlite] whish list for 2016

2016-01-12 Thread Keith Medcalf
On Tuesday, 12 January, 2016 13:51, James K. Lowden  said:

> On Fri, 8 Jan 2016 08:28:29 +0100
> Dominique Devienne  wrote:

> > > One way to do that would be to honor a special user-created table,
> > > say "PRAGMAS", with name-value pairs that are automatically applied
> > > when the database is opened.

> > Interesting idea. A "special"-to-SQLite sqlite_pragmas table, similar
> > to sqlite_master.

> Similar in name, yes.  sqlite_master is special in that it's built into
> the database file structure, though.  You couldn't add another such
> table without changing the file structure, something OFL understandably
> doesn't want to do.

> I'm suggesting that the presence of a user-created
> "configuration table" would govern the behavior of the database,
> analogous to a configuration file.

I would suppose that one could implement this using an SQLITE_EXTRA_INIT 
linked function (a symbol attached to SQLITE_EXTRA_INIT is called at the end of 
the sqlite3_initialize routine to perform extra initialization).  Point this at 
a function which makes a bunch of sqlite3_auto_extension calls to add 
internally compiled extension init routines to the initialization of each 
connection, thus making such extensions available to all connections 
auto-magically.

I suppose one could write some code which looks for the desired table and, if 
found, reads and executes the statements contained therein.

The entry point of this routine can be added into the auto_extension list (even 
though it is not an extension) through the EXTRA_INIT hook.

From my reading of the code, a pointer in the auto_extension list does not 
*have* to actually be an extension -- it is just code that is called when a 
connection object is created and that receives the connection object as an 
argument.  If it *is* an extension init routine, it just happens to make some 
calls to add functions/aggregates/modules/etc., but I see nothing to indicate 
that this is actually required.

The only thing you will NOT be able to do this way is anything that needs to be 
set *before* initialization.  Running a set of pragmas on each connection 
should work just peachy though, I would expect.






[sqlite] whish list for 2016

2016-01-12 Thread Richard Hipp
On 1/12/16, Christian Schmitz  wrote:
>
> Unless I missed something, I may suggest
>
> * moveprev
> * movefirst
> * movelast
> * move to record in cursor with given index

Not possible, at least not in general.  Something like this could be
done for the simplest queries against a single real table.  But for
joins (especially LEFT JOINs) or queries containing subqueries or
queries against virtual tables or table-valued functions or queries
involving common-table-expressions, this is not possible.
-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] whish list for 2016

2016-01-12 Thread Scott Hess
On Tue, Jan 12, 2016 at 3:43 PM, Keith Medcalf  wrote:

> On Tuesday, 12 January, 2016 13:51, James K. Lowden <
> jklowden at schemamania.org> said:
> > On Fri, 8 Jan 2016 08:28:29 +0100
> > Dominique Devienne  wrote:
> > > > One way to do that would be to honor a special user-created table,
> > > > say "PRAGMAS", with name-value pairs that are automatically applied
> > > > when the database is opened.
> > > Interesting idea. A "special"-to-SQLite sqlite_pragmas table, similar
> > > to sqlite_master.
> > Similar in name, yes.  sqlite_master is special in that it's built into
> > the database file structure, though.  You couldn't add another such
> > table without changing the file structure, something OFL understandably
> > doesn't want to do.
>
> > I'm suggesting that the presence of a user-created
> > "configuration table" would govern the behavior of the database,
> > analogous to a configuration file.
>
> I would suppose that one could implement this using an SQLITE_EXTRA_INIT
> linked function (a symbol attached to SQLITE_EXTRA_INIT is called at the
> end of the sqlite3_initialize routine to perform extra initialization).
> Pointing this to a function which calls a bunch of sqlite3_auto_extension
> to add internally compiled extension init routines to the initialization of
> each connection thus making such extensions available to all connections by
> auto-magic.
>
> I suppose one could write some code which looks for the desired table and,
> if found, reads and executes the statements contained therein.
>
> The entry point of this routine can be added into the auto_extension list
> (even though it is not an extension) through the EXTRA_INIT hook.
>
> From my reading of the code, a pointer in the auto_extension list does not
> *have* to actually be an extension -- it is just code that is called when a
> connection object is created that receives the connection object as an
> argument.  If it *is* an extension init routine, it just happens to make
> some calls to add functions/aggregates/modules/etc., but I see nothing to
> indicate that this is actually required.
>
> The only thing you will NOT be able to do this way is anything that needs
> to be set *before* initialization.  Running a set of pragmas on each
> connection should work just peachy though, I would expect.


If you are writing a client that uses SQLite databases, then you probably
have a wrapper for sqlite3_open*(), and you could just put the code to
query and run the pragmas in there.
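A minimal wrapper along those lines, in Python (the PRAGMAS table layout follows the earlier proposal in this thread; applying values by string interpolation assumes the table's contents are trusted):

```python
import os
import sqlite3
import tempfile

def open_with_pragmas(path):
    """Open a database; if it carries a user-created PRAGMAS table of
    name/value pairs, apply each one to this new connection."""
    conn = sqlite3.connect(path)
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name='PRAGMAS'"
    ).fetchone()
    if exists:
        for name, value in conn.execute("SELECT name, value FROM PRAGMAS"):
            conn.execute("PRAGMA %s = %s" % (name, value))  # trusted input only
    return conn

# Demo: a database that asks every opener to turn foreign keys on.
path = os.path.join(tempfile.mkdtemp(), "pragmas_demo.sqlite")
seed = sqlite3.connect(path)
seed.executescript("""
    CREATE TABLE PRAGMAS (name TEXT, value TEXT);
    INSERT INTO PRAGMAS VALUES ('foreign_keys', 'ON');
""")
seed.close()

conn = open_with_pragmas(path)
print(conn.execute("PRAGMA foreign_keys").fetchone())  # -> (1,)
```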

If I am writing a client that can read SQLite databases, then I probably
don't want your database to be injecting a bunch of arbitrary PRAGMA calls
into my client.

To be honest, I'm kind of hard-pressed to imagine the set of pragmas which make
sense for the database to select, rather than the client, since most of
them affect client operation.  I'd be very annoyed if my client had strong
memory constraints and the database forced its own cache_size setting.  Or
if my client was written assuming temp_store = FILE and the database forced
temp_store = MEMORY.  Having the database run synchronous = OFF or
journal_mode = OFF would be pretty fun!  The only ones which seem more like
they could be part of the schema itself are foreign_keys and secure_delete,
so maybe it would make more sense to handle them differently in the first
place, more like page_size or encoding, where once set they're sticky.

-scott


[sqlite] whish list for 2016

2016-01-12 Thread James K. Lowden
On Fri, 8 Jan 2016 08:28:29 +0100
Dominique Devienne  wrote:

> > One way to do that would be to honor a special user-created table,
> > say "PRAGMAS", with name-value pairs that are automatically applied
> > when the database is opened.
> >
> 
> Interesting idea. A "special"-to-SQLite sqlite_pragmas table, similar
> to sqlite_master. 

Similar in name, yes.  sqlite_master is special in that it's built into
the database file structure, though.  You couldn't add another such
table without changing the file structure, something OFL understandably
doesn't want to do.  

I'm suggesting that the presence of a user-created 
"configuration table" would govern the behavior of the database,
analogous to a configuration file.  

--jkl


[sqlite] whish list for 2016

2016-01-08 Thread Dominique Devienne
On Fri, Jan 8, 2016 at 12:14 AM, James K. Lowden 
wrote:

> I would like to see a strict mode, too.

+1


> I would also like to be able to make "strictness" a property of the
> database, not the connection.
>
+1

similarly I'd like enforcing FKs to be per database, not per connection.


> One way to do that would be to honor a special user-created table, say
> "PRAGMAS", with name-value pairs that are automatically applied when
> the database is opened.
>

Interesting idea. A "special"-to-SQLite sqlite_pragmas table, similar to
sqlite_master. --DD


[sqlite] whish list for 2016

2016-01-07 Thread James K. Lowden
On Sun, 20 Dec 2015 19:12:39 +0100
Big Stone  wrote:

> Best whishes for 2016!

If it's not too late to join the party...

1.  create table T (t primary key);
update T set t = t+1;

for consecutive values of t.  
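Today that statement fails because SQLite checks the UNIQUE constraint row by row rather than at statement end; a quick reproduction in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T (t PRIMARY KEY);
    INSERT INTO T VALUES (1), (2), (3);
""")

# Shifting consecutive keys up by one collides with the not-yet-updated
# neighbour, so the whole statement is rejected:
error = None
try:
    conn.execute("UPDATE T SET t = t+1")
except sqlite3.IntegrityError as exc:
    error = str(exc)
print(error)
```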

2.  Specific constraint references in error messages.  


3.  Correct math.  E.g.: 


$ sqlite3 db 'select typeof(1/0)'

typeof(1/0)
---
null   

should produce an error.  There are other examples, but none comes to
mind just now.  
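The behaviour is easy to confirm from any binding; for instance, in Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Integer division by zero silently yields SQL NULL instead of an error:
row = conn.execute("SELECT 1/0, typeof(1/0)").fetchone()
print(row)  # -> (None, 'null')
```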

I would like to see a strict mode, too.  I would also like to be able
to make "strictness" a property of the database, not the connection.
One way to do that would be to honor a special user-created table, say
"PRAGMAS", with name-value pairs that are automatically applied when
the database is opened.  

IMHO update isolation (#1) and mathematical correctness (#3) are
prerequisites for analytical functions.  SQLite can't be dependably
used for quantitative work if domain errors are silently ignored.  

--jkl


[sqlite] whish list for 2016

2016-01-05 Thread gunnar
Hi Scott,

It is not such a big deal for us, but since I saw the subject "whish 
list" I thought to add something that I was surprised isn't there in 
sqlite (since it is so complete in many other ways), but that I use on a 
regular basis.
I agree that SELECT * in production code is bad, but I do use it on the 
command line client now and then.
Regarding views, I don't know how sqlite handles them, but I know that 
they can drastically slow down queries in MySQL, for example.
gr.,
Gunnar


On 01/05/2016 03:56 PM, Scott Hess wrote:
> Maybe one option would be to add a layer to affect that explicitly, so that
> instead of the problem being that the existing rows can't be reordered
> without re-writing the entire table, the problem is to just change the
> schema to indicate where the columns should appear in "SELECT *" statements.
>
> Basically, you might have:
>CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT);
> then say:
>ALTER TABLE x ADD COLUMN k TEXT UNIQUE AFTER id;
> which converts the table to:
>CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT, k TEXT UNIQUE AFTER id);
>
> This could be extended to delete columns by instead annotating the schema
> to indicate that a column is hidden or ignored.  Perhaps the annotations
> could be constrained to only the end of the statement (so that series of
> placements and deletions can be comprehended).  All that would be left
> would be renaming a column, which is already plausible (just rename the
> column in the schema).
>
> You could already implement this by renaming the table and replacing it
> with a view plus a trigger.  But specifying that would be somewhat involved
> and brittle.
>
> WRT handling things like view and trigger and foreign key references, I
> think that would be reasonable to just expose manually.   Replace the views
> and triggers in the same transaction.  Provide a means to replace column
> definitions so that foreign key references can be handled in a similar
> fashion.
>
> [Personally my position is that "SELECT * FROM x" is not appropriate for
> production code.  Otherwise you're implicitly relying on implementation
> details.]
>
> -scott
>
>
> On Tue, Jan 5, 2016 at 5:23 AM, gunnar  wrote:
>
>> Indeed those two possibilities! And perhaps also to MODIFY a column. We
>> alter tables all the time :)
>>
>> The possibility to specify at which place an added column should appear
>> when I do a "SELECT * ..." would be a nice extra, in MySQL it is done like:
>> ALTER TABLE table_name ADD COLUMN col_name1 [_AFTER_ col_name2 | FIRST]
>>
>>
>>
>>
>> Gr.,
>> Gunnar
>>
>>
>>
>>
>> On 12/24/2015 05:14 PM, Simon Slavin wrote:
>>
>>> On 24 Dec 2015, at 3:12pm, John McKown 
>>> wrote:
>>>
I'm curious as to why. Doing so would, most likely, require rewriting the
 entire table. If you want a SELECT * to get the columns in a particular
 order, just create a VIEW with the columns in the order in which you want
 them.

>>> I think Gunnar just wants forms such as
>>>
>>> ALTER TABLE CREATE COLUMN ...
>>> ALTER TABLE DROP COLUMN ...
>>>
>>> to aid with making small changes to the schema.  These are available in
>>> most SQL engines but the way SQLite3 is written makes it difficult or
>>> inefficient to implement them.
>>>
>>> I have a database where one table takes up more than 30 Gigabytes of
>>> space.  While developing the software I needed several times to change a
>>> column definition and since SQLite lacks these facilities I had to move 30
>>> Gig of data around every time I did it.  Annoying.  But it's not normally
>>> that much of a problem for me.
>>>
>>> Simon.



[sqlite] whish list for 2016

2016-01-05 Thread gunnar
Indeed those two possibilities! And perhaps also to MODIFY a column. We 
alter tables all the time :)

The possibility to specify at which place an added column should appear 
when I do a "SELECT * ..." would be a nice extra, in MySQL it is done 
like: ALTER TABLE table_name ADD COLUMN col_name1 [_AFTER_ col_name2 | 
FIRST]




Gr.,
Gunnar



On 12/24/2015 05:14 PM, Simon Slavin wrote:
> On 24 Dec 2015, at 3:12pm, John McKown  
> wrote:
>
>> I'm curious as to why. Doing so would, most likely, require rewriting the
>> entire table. If you want a SELECT * to get the columns in a particular
>> order, just create a VIEW with the columns in the order in which you want
>> them.
> I think Gunnar just wants forms such as
>
> ALTER TABLE CREATE COLUMN ...
> ALTER TABLE DROP COLUMN ...
>
> to aid with making small changes to the schema.  These are available in most 
> SQL engines but the way SQLite3 is written makes it difficult or inefficient 
> to implement them.
>
> I have a database where one table takes up more than 30 Gigabytes of space.  
> While developing the software I needed several times to change a column 
> definition and since SQLite lacks these facilities I had to move 30 Gig of 
> data around every time I did it.  Annoying.  But it's not normally that much 
> of a problem for me.
>
> Simon.



[sqlite] whish list for 2016

2016-01-05 Thread Keith Medcalf


> On Tuesday, 5 January, 2016 08:33, gunnar  said:

> Regarding views, I don't know how sqlite handles them but I know that
> they can drastically slow down queries in MySQL for example.

While this is true in many cases, using a view rather than a select on the 
underlying table should not incur any overhead provided that the view is a 
"simple projection".  If the view generates a non-trivial projection such that 
those conditions must be maintained (materialized) in the query plan with 
respect to the view visitation, then the view will adversely affect 
performance.  Simple projections should be flattened by the optimizer as if the 
original query had been phrased directly against the underlying tables.

So, if there are no grouping, ordering, or outer join operations in the view, 
it will probably not affect the plan.  If there are, then the view will itself 
become a "subselect" which must be visited as if it were a table.  WHERE 
conditions in simple projections and equijoins can simply be pushed out into 
the overall constraints.  

explain query plan will make this apparent.
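For instance, comparing a simple-projection view against a grouped one from Python (the exact plan text varies by SQLite version, so none is asserted here; the view names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a INTEGER, b TEXT);
    CREATE VIEW simple  AS SELECT a, b FROM t;                     -- flattenable
    CREATE VIEW grouped AS SELECT a, count(*) AS n FROM t GROUP BY a;
""")

plans = {}
for sql in ("SELECT * FROM simple WHERE a = 1",
            "SELECT * FROM grouped WHERE a = 1"):
    plans[sql] = [r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql)]
    print(sql)
    for detail in plans[sql]:
        # The flattened view shows a plain scan of t; the grouped view
        # typically materializes or runs as a co-routine/subquery.
        print("   ", detail)
```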







[sqlite] whish list for 2016

2016-01-05 Thread Scott Hess
Maybe one option would be to add a layer to affect that explicitly, so that
instead of the problem being that the existing rows can't be reordered
without re-writing the entire table, the problem is to just change the
schema to indicate where the columns should appear in "SELECT *" statements.

Basically, you might have:
  CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT);
then say:
  ALTER TABLE x ADD COLUMN k TEXT UNIQUE AFTER id;
which converts the table to:
  CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT, k TEXT UNIQUE AFTER id);

This could be extended to delete columns by instead annotating the schema
to indicate that a column is hidden or ignored.  Perhaps the annotations
could be constrained to only the end of the statement (so that series of
placements and deletions can be comprehended).  All that would be left
would be renaming a column, which is already plausible (just rename the
column in the schema).

You could already implement this by renaming the table and replacing it
with a view plus a trigger.  But specifying that would be somewhat involved
and brittle.
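That view-plus-trigger workaround can be sketched concretely; here emulating "k appears right after id" without rewriting any rows (names follow the example above; only the INSERT trigger is shown, and a full version would need UPDATE/DELETE triggers too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT, k TEXT);

    -- Hide the physical table and present the desired column order:
    ALTER TABLE x RENAME TO x_data;
    CREATE VIEW x AS SELECT id, k, v FROM x_data;
    CREATE TRIGGER x_insert INSTEAD OF INSERT ON x BEGIN
        INSERT INTO x_data (id, k, v) VALUES (NEW.id, NEW.k, NEW.v);
    END;
""")
conn.execute("INSERT INTO x VALUES (1, 'key', 'val')")
row = conn.execute("SELECT * FROM x").fetchone()
print(row)  # -> (1, 'key', 'val')
```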

WRT handling things like view and trigger and foreign key references, I
think that would be reasonable to just expose manually.   Replace the views
and triggers in the same transaction.  Provide a means to replace column
definitions so that foreign key references can be handled in a similar
fashion.

[Personally my position is that "SELECT * FROM x" is not appropriate for
production code.  Otherwise you're implicitly relying on implementation
details.]

-scott


On Tue, Jan 5, 2016 at 5:23 AM, gunnar  wrote:

> Indeed those two possibilities! And perhaps also to MODIFY a column. We
> alter tables all the time :)
>
> The possibility to specify at which place an added column should appear
> when I do a "SELECT * ..." would be a nice extra, in MySQL it is done like:
> ALTER TABLE table_name ADD COLUMN col_name1 [_AFTER_ col_name2 | FIRST]
>
>
>
>
> Gr.,
> Gunnar
>
>
>
>
> On 12/24/2015 05:14 PM, Simon Slavin wrote:
>
>> On 24 Dec 2015, at 3:12pm, John McKown 
>> wrote:
>>
>> I'm curious as to why. Doing so would, most likely, require rewriting the
>>> entire table. If you want a SELECT * to get the columns in a particular
>>> order, just create a VIEW with the columns in the order in which you want
>>> them.
>>>
>> I think Gunnar just wants forms such as
>>
>> ALTER TABLE CREATE COLUMN ...
>> ALTER TABLE DROP COLUMN ...
>>
>> to aid with making small changes to the schema.  These are available in
>> most SQL engines but the way SQLite3 is written makes it difficult or
>> inefficient to implement them.
>>
>> I have a database where one table takes up more than 30 Gigabytes of
>> space.  While developing the software I needed several times to change a
>> column definition and since SQLite lacks these facilities I had to move 30
>> Gig of data around every time I did it.  Annoying.  But it's not normally
>> that much of a problem for me.
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users at mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>


[sqlite] whish list for 2016

2016-01-05 Thread R Smith


On 2016/01/04 6:11 PM, Bernardo Sulzbach wrote:
> On Mon, Jan 4, 2016 at 1:50 PM, Stephen Chrzanowski  
> wrote:
>> I did the test to compare SSD vs 5200rpm and 7200rpm drives.  I should
>> throw this app at my 7200rpm hybrid and see what happens.
>>
> Could you please share any performance data you still have? Thank you.
>

I decided to cook up a test for this since I happen to have one dev 
machine that is rather new with 3 different storage media in.

The following script was used to produce a severely fragmented database 
file around 20GB in size (while only about 7GB of actual data remained 
inside). It takes about 20 mins to run on a normal drive:


DROP TABLE IF EXISTS vacTest;

DROP TABLE IF EXISTS vacCopy;

CREATE TABLE vacTest(
   i INTEGER NOT NULL PRIMARY KEY,
   a REAL,
   b NUMERIC,
   c TEXT
);

CREATE TABLE vacCopy(
   i INTEGER NOT NULL PRIMARY KEY,
   a REAL,
   b NUMERIC,
   c TEXT
);

WITH VI(x, r) AS (
 SELECT 0, 245
   UNION ALL
 SELECT x+1, ((random()/9223372036854775806.0)*2)+245 FROM 
VI WHERE x<1
)
INSERT INTO vacTest (i,a,b,c) SELECT x, r, datetime(r), printf('Some 
Text aimed at consuming a few bytes S%6d',substr(r,2,6))
   FROM VI;

INSERT INTO vacCopy SELECT * FROM vacTest;

-- Delete rows where i is an exact multiple of 5 (resp. 6 below);
-- integer and real division only agree for exact multiples:
DELETE FROM vacTest
  WHERE (i / 5) = (i / 5.0);

DELETE FROM vacCopy
  WHERE (i / 6) = (i / 6.0);

WITH VI(x) AS (
 SELECT 0
   UNION ALL
 SELECT x+1 FROM VI WHERE x<1000
)
INSERT INTO vacTest (a,b,c) SELECT 1.0, '2000-01-01 00:00:00', 
printf('Some Text aimed at consuming a few bytes S%6d',x)
   FROM VI;

WITH VI(x) AS (
 SELECT 0
   UNION ALL
 SELECT x+1 FROM VI WHERE x<1000
)
INSERT INTO vacCopy (a,b,c) SELECT 1.0, '2000-01-01 00:00:00', 
printf('Some Text aimed at consuming a few bytes S%6d',x)
   FROM VI;

-- Delete rows whose rowid contains the digit '4' (resp. '5'),
-- scattering free pages throughout the file:
DELETE FROM vacTest
  WHERE instr(i,'4') > 0;

DELETE FROM vacCopy
  WHERE instr(i,'5') > 0;
=

It was run inside sqlite3.exe using 3.9.2 (the standard pre-compiled CLI 
as downloadable from the downloads page on sqlite.org) with 4K page size 
and WAL mode. OS is Windows 10 64 bit.

The file produced was then copied to 4 different drives as follows:

Test 1 - F: - External Western Digital 3TB 7200 rpm drive via USB3.0, 
Max Read - Write rated @ 120MB/s
Test 2 - D: - Internal Western Digital Velociraptor 600GB 10,000 rpm via 
SATA 6GB/s rated @ 150 MB/s
Test 3 - E: - Internal Western Digital Black 2TB 7200 rpm (WD2002FAEX) 
via SATA 6GB/s rated @ 164 MB/s
Test 4 - C: - Internal Kingston HyperX Predator 480GB SSD via m.2 PCIe 
interface rated @ 1400MB/s (read) - 1000 MB/s (write)

Following is the output from the VACUUM command (with timers enabled) 
using the sqlite3 CLI and connecting to the same copied file on the 4 
drives:

F:\[SCIENCE]>sqlite3 vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 1309.109 user 203.921875 sys 114.984375
sqlite> .q

F:\[SCIENCE]>sqlite3 D:\vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 824.223 user 140.953125 sys 87.921875
sqlite> .q

F:\[SCIENCE]>sqlite3 E:\ADrive\vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 788.750 user 131.921875 sys 81.593750
sqlite> .q

F:\[SCIENCE]>sqlite3 c:\AMD\vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 214.056 user 92.843750 sys 56.640625
sqlite> .q


No surprises really, the VACUUM operation is clearly IO bound.

In all four cases the processor (i7 5930K Hexacore 12-thread @ 3.5GHz) 
Never breaks the 4% total usage, and no single thread rose above 40% 
until right before the VACUUM operation concludes, at which point the 
one thread spikes up to 95% for around 2-6 seconds seemingly in step 
with each drive's general performance. (This is my perception, I had no 
means of measuring it exactly).

File Size goes down from ~20GB to around ~6.8GB, so the Vacuuming has a 
purpose in this case, but it would be hard to find this level of 
fragmentation in a normal-use real-World DB.

In all tests, the TEMP folder was set to the D drive which is the second 
slowest drive, it showed a lot of usage. I started out using the usual 
6GB Ramdrive for TEMP, but it was too small, so redirected it to D.  I 
tried the last test again with the TEMP set to the fast SSD, figures 
were only about 16% better (214 vs. 181), but I imagine using a ramdrive 
or at least a second SSD would show a significant improvement.
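
For anyone repeating this, the relevant knob can be set from SQL rather than by re-pointing the whole system TEMP folder; a hedged sketch (the big transient file VACUUM writes follows the temp-directory search, e.g. the SQLITE_TMPDIR environment variable on Unix, and exact behaviour varies by OS and build):

```sql
-- Keep temporary tables and indices in RAM instead of temp files.
PRAGMA temp_store = MEMORY;
```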

I initially added tests for "SELECT * FROM vacTest WHERE xxx;" but they 
seemed to be near instantaneous before and after the vacuum so I dropped 
it - I realized afterward it is probably thanks to the Primary Key I 
used in the WHERE clause, and may have 

[sqlite] whish list for 2016

2016-01-04 Thread Bernardo Sulzbach
Thanks a lot for this, Stephen. It will very likely help me make a
decision in the future.


[sqlite] whish list for 2016

2016-01-04 Thread Simon Slavin

On 28 Dec 2015, at 10:13am, Simon Slavin  wrote:

> Well I have a 43 Gigabyte database at work.  I bet it doesn't take more than 
> a few hours to check.  But I can't do it from home so it'll have to wait 
> until I get back to work next week for me to test that theory.

Got back to work today and tried it.

The database is actually 38.4 gigabytes at the moment.  Almost all that space 
is taken up by one thin table with 713,099,083 (713 million) rows.  I restored 
a server backup to the hard disk of my office computer and ran VACUUM on it.

Run Time: real 12206.707 user 1219.335537 sys 514.479783

That's 3 hours 23 minutes.  For a 38 Gigabyte database including a table with 
half a billion rows.

Details: Running in the SQLite 3.8.5 shell tool on a four year old iMac with a 
spinning rust storage system.  VACUUM was running in the background while I was 
doing light work (editing web pages, a bit of email, etc.) in the foreground.

So you can criticise how VACUUM works if you like, but on a cheap old iMac, 
working in the background, it can still get through a big database in just a 
few hours.

Simon.


[sqlite] whish list for 2016

2016-01-04 Thread Bernardo Sulzbach
On Mon, Jan 4, 2016 at 1:50 PM, Stephen Chrzanowski  
wrote:
>
> I did the test to compare SSD vs 5200rpm and 7200rpm drives.  I should
> throw this app at my 7200rpm hybrid and see what happens.
>

Could you please share any performance data you still have? Thank you.

-- 
Bernardo Sulzbach


[sqlite] whish list for 2016

2016-01-04 Thread Bernardo Sulzbach
On Mon, Jan 4, 2016 at 12:28 PM, Simon Slavin  wrote:
>
> That's 3 hours 23 minutes.  For a 38 Gigabyte database including a table with 
> half a billion rows.
>
> Details: Running in the SQLite 3.8.5 shell tool on a four year old iMac with 
> a spinning rust storage system.  VACUUM was running in the background while I 
> was doing light work (editing web pages, a bit of email, etc.) in the 
> foreground.
>
> So you can criticise how VACUUM works if you like, but on a cheap old iMac, 
> working in the background, it can still get through a big database in just a 
> few hours.
>
> Simon.

You wouldn't have monitored disk usage of that, would you? I am
curious about how faster a good SSD would make it as it clearly
doesn't look like a CPU or memory bound operation.

-- 
Bernardo Sulzbach


[sqlite] whish list for 2016

2016-01-04 Thread Stephen Chrzanowski
I'll see if I can get it to recompile over the next few days.  It has been
a couple years since I've even looked at it, and I've changed from Delphi
to Free Pascal.  The 5200 drive is out since I don't have IDE on this
board, and I'm not about to go out and buy any Green HDD.  ;)

I'm also at work right now and the machine I'm using is VASTLY different in
specs.  No SSD or 5200rpm drive here, but at home, I'll be able to plug in
a standard 7200rpm drive and test against SSD in RAID-0, 7200rpm Hybrid and
7200rpm standard.  I'll set it up to make a 10gig file and tear the
internals of the DB to shreds.

On Mon, Jan 4, 2016 at 11:11 AM, Bernardo Sulzbach <
mafagafogigante at gmail.com> wrote:

> On Mon, Jan 4, 2016 at 1:50 PM, Stephen Chrzanowski 
> wrote:
> >
> > I did the test to compare SSD vs 5200rpm and 7200rpm drives.  I should
> > throw this app at my 7200rpm hybrid and see what happens.
> >
>
> Could you please share any performance data you still have? Thank you.
>
> --
> Bernardo Sulzbach
>


[sqlite] whish list for 2016

2016-01-04 Thread Stephen Chrzanowski
I once faked a database, created random data, deleted random data,
re-inserted random data of random sizes, got the database to a couple gigs
in size, I noticed that the temp file SQLite made when running vacuum
accounted for about the same size of what the actual database was.  I guess
I had the very few pages to clear out.  Good rule of thumb, IMO, would be
to ensure you have 1.5x the space available that your database is.  So if
your database is 10 units of storage space, have 15 free and at the ready.
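
The current database size for that 1.5x rule can be read from the database itself; a small sketch (the pragma table-valued functions in the last statement need a reasonably recent SQLite, 3.16 or later):

```sql
PRAGMA page_count;   -- total pages in the database file
PRAGMA page_size;    -- bytes per page

-- Size in bytes, and the suggested free space to keep on hand:
SELECT page_count * page_size        AS db_bytes,
       page_count * page_size * 1.5  AS suggested_free_bytes
  FROM pragma_page_count(), pragma_page_size();
```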

I did the test to compare SSD vs 5200rpm and 7200rpm drives.  I should
throw this app at my 7200rpm hybrid and see what happens.

(Honestly?  It looked like the database was completely recreated and the
file handle was changed to point at the new temp files that was created,
but, that is just my seeing what was going on while the program was running
and my watching the Explorer window)


On Mon, Jan 4, 2016 at 9:37 AM, Bernardo Sulzbach  wrote:

> On Mon, Jan 4, 2016 at 12:28 PM, Simon Slavin 
> wrote:
> >
> > That's 3 hours 23 minutes.  For a 38 Gigabyte database including a table
> with half a billion rows.
> >
> > Details: Running in the SQLite 3.8.5 shell tool on a four year old iMac
> with a spinning rust storage system.  VACUUM was running in the background
> while I was doing light work (editing web pages, a bit of email, etc.) in
> the foreground.
> >
> > So you can criticise how VACUUM works if you like, but on a cheap old
> iMac, working in the background, it can still get through a big database in
> just a few hours.
> >
> > Simon.
>
> You wouldn't have monitored disk usage of that, would you? I am
> curious about how faster a good SSD would make it as it clearly
> doesn't look like a CPU or memory bound operation.
>
> --
> Bernardo Sulzbach
>


[sqlite] whish list for 2016

2016-01-03 Thread Eduardo Morras

I left the list some days and find this subject discussed and finished!!

Happy New Year 2016 to every colister and sqlite developer.

I add my list and a tiny explanation.

- Lock tables/indexes on cache (make them unevictable)

When a query uses a big table, the other tables/indexes used in that query or 
in other queries are evicted from the SQLite cache. Locking content in the cache 
could boost those queries that "fight" with a big table. Marking a 
table as non-cacheable could do the same in my use case.

As a working example, copied from my own code, a SQLite filesystem that stores file 
content as a blob or as an external file:

CREATE TABLE `file` (  \
`fid`   INTEGER,   \
`size`  INTEGER NOT NULL,  \
`permission`INTEGER NOT NULL DEFAULT 700,  \
`lock`  INTEGER NOT NULL DEFAULT 0,\
`blob_id`   INTEGER NOT NULL,  \
`owner_id`  INTEGER NOT NULL,  \
`group_id`  INTEGER NOT NULL,  \
`name`  TEXT NOT NULL, \
`external_path` TEXT NOT NULL, \
PRIMARY KEY(fid)   \
); 

CREATE TABLE `fsblob` (\
`bid`   INTEGER,   \
`size`  INTEGER NOT NULL,  \
`fid`   INTEGER NOT NULL,  \
`compressed`INTEGER NOT NULL DEFAULT 0,\
`content`   BLOB,  \
PRIMARY KEY(bid)   \
); \
   \

 Each time a user accesses a file, the "file" table (and others like "directory", 
"users") is evicted from the cache (the fsblob table fills the cache) and needs to be 
reread and reparsed from disk for the next query. The problem is worse when 
multiple files are accessed. Making the cache bigger works up to some file size, 
but locking the "important" tables in the cache is the correct (I think) fix. As a 
workaround, I used multiple databases, one with fs metadata, attaching others 
with file content (fsblob tables), but dropped it because of other technical 
problems.

- Access table row by fileposition

In a read-only scenario (no writer and/or no WAL mode), read and parse the row 
content if we know its absolute position in the database file, or we know the 
page number plus the offset within the page.

- Lock constant tables on file (stick table rows to page and/or fileposition)

This way, the file position of a row is fixed or easily calculated, and we can 
create/use hash indexes. Currently I'm using a virtual table, derived from Dr. 
Hipp's CSV virtual table of years ago, that creates an in-memory hash index of an 
external CSV file, but it implies using my own sqlite3 build with the virtual 
table code and an additional file alongside the database.

Additionally, if we can lock constant tables (declared as such by the developer) 
at the beginning of the database file, the mmap feature can cache constant/static 
data from those tables. In my app, this means the file, user, group, fs and other 
tiny tables (changes are rare) could be mmapped.

- Vacuum/backup reorder tables in database file

If we can tell vacuum or backup the order we want the tables to be in within the 
database file, then in some scenarios (like mine ;) ) moving tables to the 
beginning (constant data) or the end (a growing timestamp log table, f.ex.) could 
speed up queries.

- Vacuum/backup adds free pages for table/index grow

As far as I know, vacuum and backup write database tables, indexes and metadata 
without free pages. Adding free pages at the end of the pages that store a 
table or index, and reserving those pages for use by that table or index, could 
hold back internal fragmentation.


There are others ideas but... hey 2017 is closer ;)

Thanks to Dr. Richard Hipp, Dan Kennedy and Joe Mistachkin for their 
exceptional work, and to the other colisters for their time and help.

---   ---
Eduardo Morras 


[sqlite] whish list for 2016

2015-12-29 Thread Warren Young
On Dec 24, 2015, at 7:49 PM, Simon Slavin  wrote:
> 
> What makes "ALTER TABLE table-name DROP COLUMN" hard is checking the schema 
> to make sure that nothing in the schema refers to the dropped column.

Given that the current alternative to this feature is hand-rolled code like I 
gave earlier in the thread, I don't see that SQLite *must* solve this problem.  
It would certainly be nice if it maintained consistency for you, but since the 
alternative doesn't allow you to do that, what's wrong with just putting a 
warning in the docs: "If you use ALTER TABLE DROP COLUMN, you risk breaking 
consistency checks."?

Then you can push off automatic consistency check maintenance to the 
quasi-mythical SQLite 4.

Again, all I'm advocating for is automatic generation and running of the SQL I 
gave above.  That is, implement the feature in terms of existing facilities, 
don't go creating a bunch of new code just to handle this case.  Not only does 
this make implementation easier, it will reduce the temptation to make the 
feature conditional to keep the embedded users happy, which in turn makes 
testing harder, since it doubles the number of test cases.


[sqlite] whish list for 2016

2015-12-28 Thread Richard Hipp
On 12/25/15, Valentin Davydov  wrote:
>
> It would be desirable to improve algorithm of INTEGRITY_CHECK pragma.
> Presently it is generally useless for indexed databases which don't fit
> entirely in RAM (and which usually need checking much strongly than
> smaller ones).
>

For each row of each table in the database, PRAGMA integrity_check
verifies that there are corresponding entries in the indexes
associated with that table.  This operation typically involves lots of
random seeking, which can be slow for large databases.

The "PRAGMA quick_check" command works just like "PRAGMA
integrity_check" except that it skips this row-by-row checking of
index entries.  PRAGMA quick_check still verifies that each index has
the correct number of entries, but it does not verify that all the
entries correspond to table rows.  Hence, PRAGMA quick_check is able
to do a single scan through the entire database file, reading each
page only once, and without doing a lot of seeking.  It can therefore
be a lot faster.
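
Usage of the two checks is the same; e.g.:

```sql
-- Full check: cross-references every table row with its index entries.
PRAGMA integrity_check;

-- Cheaper single-scan variant described above.
PRAGMA quick_check;

-- Both return a single row reading 'ok' when nothing is wrong,
-- otherwise one row per problem. An optional argument caps the
-- number of reported errors:
PRAGMA quick_check(20);
```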

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] whish list for 2016

2015-12-28 Thread Valentin Davydov
On Fri, Dec 25, 2015 at 12:51:55PM +, Simon Slavin wrote:
> 
> On 25 Dec 2015, at 12:39pm, Valentin Davydov  
> wrote:
> 
> > It would be desirable to improve algorithm of INTEGRITY_CHECK pragma.
> > Presently it is generally useless for indexed databases which don't fit
> > entirely in RAM (and which usually need checking much strongly than
> > smaller ones).
> 
> Valentin, could you expand on that ?

As far as I understand, INTEGRITY_CHECK simply iterates over the records 
(of tables and indices) one by one in some arbitrary order. So, if the 
database is too big to fit in the available memory (sqlite's own cache, 
system file cache etc), then each iteration implies a random seek on disk(s),
or even several ones in some scenarios. So, check of a few terabytes database 
with some tens billions of records and a dozen of indices would take more than 
10^11 disk operations of more than 10 milliseconds each. That is, years.

My wish is to involve some intermediate level of cache (perhaps bitmaps?)
which could reside in reasonable amount of memory (say, < 1% of the total 
database size) and obviate the need to read each page more than once.
So, things would speed up in proportion of page size to record size. 

> PRAGMA schema.foreign_key_check;
> 
> ?  Does it have the same problem as far as you're concerned ?

I don't have access to a huge databases with foreign keys now.

Valentin Davydov.


[sqlite] whish list for 2016

2015-12-28 Thread Simon Slavin

On 28 Dec 2015, at 9:01am, Valentin Davydov  wrote:

> As far as I understand, INTEGRITY_CHECK simply iterates over the records 
> (of tables and indices) one by one in some arbitrary order. So, if the 
> database is too big to fit in the available memory (sqlite's own cache, 
> system file cache etc), then each iteration implies a random seek on disk(s),
> or even several ones in some scenarios. So, check of a few terabytes database 
> with some tens billions of records and a dozen of indices would take more 
> than 
> 10^11 disk operations of more than 10 milliseconds each. That is, years.

Well I have a 43 Gigabyte database at work.  I bet it doesn't take more than a 
few hours to check.  But I can't do it from home so it'll have to wait until I 
get back to work next week for me to test that theory.

Hmm note to self: work on a copy, use command-line tool, .timer ON.

Simon.


[sqlite] whish list for 2016

2015-12-28 Thread Darren Duncan
On 2015-12-24 6:21 PM, Stephen Chrzanowski wrote:
> One thing I would suggest, if you're looking to add and delete columns
> dynamically is not to worry about the order of the columns in the database,
> but, have a second table hanging around that remembers the specified order
> the user wants to see the columns in.  Doing your update and insert calls
> make no difference so long you specify the fields on either call (Or in
> other words, don't do [ insert into MyTable values (1,2,3) ].  Be aware on
> tables that get large.  Adding or deleting fields can get expensive when
> the databases physical pages need to be updated, especially if the field
> you're adding affects, or has indexes modified.

It sounds like you're advocating a data dictionary of sorts, which I agree 
with.  The order for displaying columns in is meta-data that should be defined 
separately for the users' sake, while the actual columns have no significant 
order in the database.  Display order is also just one of many kinds of useful 
meta-data, and storing that separately allows you to have whatever kinds of 
meta you want without complicating the core system. -- Darren Duncan



[sqlite] whish list for 2016

2015-12-25 Thread Valentin Davydov
Hi, All!

It would be desirable to improve algorithm of INTEGRITY_CHECK pragma.
Presently it is generally useless for indexed databases which don't fit
entirely in RAM (and which usually need checking much strongly than
smaller ones).

Valentin Davydov.


[sqlite] whish list for 2016

2015-12-25 Thread Jeffrey Mattox
> On Dec 25, 2015, at 12:51 PM, Stephen Chrzanowski  
> wrote:
> 
> *Part 2;*
> 
> More along with your application style, but a complete database schema 
> overhaul, think of a contact form that allows for multiple methods of 
> communication.  Multiple email addresses, multiple phone or fax numbers, can 
> all be associated to one contact.  Typically you'd have a table sitting aside 
> with the contact type (email, phone, fax, maybe in the future Telepresence 
> ID?), and another table containing the actual data.  You could adopt this 
> method to what you're describing.  To add or delete fields to your UI 
> (Telpresence info doesn't exist in any common contact manager I know of), all 
> you'd have to do is add or delete rows to a table, and your application 
> written to adapt to random(?) changes to the field changes, regardless of 
> additions or deletions.
> 
> All you'd need is one table that would hold the fields unique identifier, a 
> field title, the order in which it is displayed on the UI, and possibly a 
> default value field.
> 
> Another table contains a unique identifier, a FK field pointing to the UID of 
> the above table, and the raw data.
> 
> Your software would then make whatever required SELECT call to obtain the 
> required information and either store that data in a new temp table, or, 
> store the data in memory either via a stringlist or class, then render the 
> data to your UI using just this new data.

This is called a dynamic database.  One table holds the schema with one row for 
each field/column.  Another table holds the data with one row for each 
field/column.  Use joins to put the whole thing together.  I used the scheme 
for an iOS app where the user can define how many fields there are for each 
"row" of the data they are saving.  Like the Contacts app mentioned by Stephen.

The trick is to maintain integrity.  For example, when deleting a field be sure 
to delete all the data associated with that field.  Use constraints on foreign 
keys to help during development (find those bugs early).
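
A minimal sketch of such a dynamic (entity-attribute-value style) schema, with illustrative names, might look like:

```sql
PRAGMA foreign_keys = ON;  -- needed for the cascade below to fire

-- One row per user-defined field; display_order drives the UI.
CREATE TABLE field_def (
  field_id      INTEGER PRIMARY KEY,
  title         TEXT NOT NULL,
  display_order INTEGER NOT NULL,
  default_value TEXT
);

-- One row per (record, field) pair holding the raw data.
-- Deleting a field automatically deletes its data, preserving integrity.
CREATE TABLE field_value (
  value_id  INTEGER PRIMARY KEY,
  record_id INTEGER NOT NULL,
  field_id  INTEGER NOT NULL
              REFERENCES field_def(field_id) ON DELETE CASCADE,
  data      TEXT
);

-- Reassemble one logical "row" in the user's chosen column order:
SELECT d.title, v.data
  FROM field_value AS v
  JOIN field_def   AS d USING (field_id)
 WHERE v.record_id = 1
 ORDER BY d.display_order;
```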

Jeff


[sqlite] whish list for 2016

2015-12-25 Thread Simon Slavin

On 25 Dec 2015, at 12:39pm, Valentin Davydov  wrote:

> It would be desirable to improve algorithm of INTEGRITY_CHECK pragma.
> Presently it is generally useless for indexed databases which don't fit
> entirely in RAM (and which usually need checking much strongly than
> smaller ones).

Valentin, could you expand on that ?  I'm guessing that you're saying that it's 
very slow but I may have missed your point.  Also, do you know about

PRAGMA schema.foreign_key_check;

?  Does it have the same problem as far as you're concerned ?

Simon.


[sqlite] whish list for 2016

2015-12-25 Thread Simon Slavin

On 25 Dec 2015, at 2:24am, John McKown  wrote:

> DELETE is normally done as: ALTER TABLE table-name DROP COLUMN
> column_file_name; and would be a very nice addition. I hadn't noticed that
> it is missing. I wonder why.

SQLite stores all the data for a row together in column order.

col1, col2, col3, col4 ...

If someone dropped column 3 then there are two ways to handle this internally:

A) Make a note that col3 no longer exists by changing its name to something 
unusable and changing its affinity to "DELETED".  Existing table data stays the 
way it is.  New rows inserted into that table get a NULL value in that column.  
Extremely fast but the database now takes up more space than it needs to.

B) Do something like the above but then immediately VACUUM that table.  No 
longer extremely fast but now the database file is smaller.  Doing this would 
involve writing code which would implement a selective form of the VACUUM 
command:

VACUUM [schema.]tablename

which VACUUMs just that one table.  Which might be another good reason to go 
this way since that could be useful just by itself.

C) Choose to do either (A) or (B) depending on a PRAGMA setting.  Or maybe just 
look at the setting of PRAGMA auto_vacuum.

The above isn't hard to do.  What makes "ALTER TABLE table-name DROP COLUMN" 
hard is checking the schema to make sure that nothing in the schema refers to 
the dropped column.  That column you dropped might be in a CHECK constraint, or 
a FOREIGN KEY or an index or probably a bunch of things not coming to mind 
right now.  Figuring that out requires SQLite to parse the master table down to 
the level of column names.  That's difficult.
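
A tiny illustration of those hidden references (illustrative schema): every construct below silently depends on column b, and a naive DROP COLUMN would leave each of them broken:

```sql
CREATE TABLE t (
  a INTEGER PRIMARY KEY,
  b INTEGER CHECK (b >= 0)        -- b used in a CHECK constraint
);
CREATE INDEX t_b_idx ON t(b);      -- b used in an index
CREATE VIEW big_b AS
  SELECT a FROM t WHERE b > 100;   -- b used in a view
CREATE TRIGGER t_log AFTER UPDATE OF b ON t  -- b named in a trigger
BEGIN
  SELECT 1;
END;
-- Dropping b means parsing and rewriting all of the above.
```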

Simon.


[sqlite] whish list for 2016

2015-12-25 Thread Simon Slavin

On 25 Dec 2015, at 2:35am, Bernardo Sulzbach  
wrote:

>> ALTER TABLE table-name RENAME COLUMN column_field_name TO
>> new_column_field_name;
> 
> Are you sure? The documentation does not have anything about this and
> I get a syntax error using 3.9.2 (a bit outdated, I know).

John's confused.  The ALTER table RENAME command is for renaming tables, not 
columns.

Simon.


[sqlite] whish list for 2016

2015-12-25 Thread Bernardo Sulzbach
On Fri, Dec 25, 2015 at 12:24 AM, John McKown
 wrote:
> On Thu, Dec 24, 2015 at 4:09 PM, Christian Schmitz <
> realbasiclists at monkeybreadsoftware.de> wrote:
>
>> Hi,
>>
>> better ALTER command would be very welcome.
>>
>> e.g. RENAME/DELETE column or field.
>>
>
> RENAME exists.
>
> ALTER TABLE table-name RENAME COLUMN column_field_name TO
> new_column_field_name;
>

Are you sure? The documentation does not have anything about this and
I get a syntax error using 3.9.2 (a bit outdated, I know).

Anyway, if you are right, the documentation likely should be updated.

-- 
Bernardo Sulzbach


[sqlite] whish list for 2016

2015-12-24 Thread Christian Schmitz
Hi,

better ALTER command would be very welcome.

e.g. RENAME/DELETE column or field.

Instead of us writing code to do it, it could be a command where SQLite does 
things right.

Sincerely
Christian

-- 
Read our blog about news on our plugins:

http://www.mbsplugins.de/



[sqlite] whish list for 2016

2015-12-24 Thread John McKown
On Thu, Dec 24, 2015 at 8:39 PM, Simon Slavin  wrote:

>
> On 25 Dec 2015, at 2:35am, Bernardo Sulzbach 
> wrote:
>
> >> ALTER TABLE table-name RENAME COLUMN column_field_name TO
> >> new_column_field_name;
> >
> > Are you sure? The documentation does not have anything about this and
> > I get a syntax error using 3.9.2 (a bit outdated, I know).
>
> John's confused.  The ALTER table RENAME command is for renaming tables,
> not columns.
>

You're right. I'm wrong. Too much "nog" in the eggnog?



>
> Simon.
>
>
-- 
Computer Science is the only discipline in which we view adding a new wing
to a building as being maintenance -- Jim Horning

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


[sqlite] whish list for 2016

2015-12-24 Thread Richard Hipp
On 12/24/15, John McKown  wrote:
>
> DELETE is normally done as: ALTER TABLE table-name DROP COLUMN
> column_file_name; and would be a very nice addition. I hadn't noticed that
> it is missing. I wonder why. Perhaps Dr. Hipp will comment after the
> holidays.
>

Merry Christmas.

DROP COLUMN and RENAME COLUMN are relatively easy for tables that lack
indexes, triggers, views, foreign key references, CHECK constraints,
and other constructs that might reference the dropped or renamed
column.  Reliably finding every use of a column name and changing it
can be tricky.  The problem is more acute for dropping a column - what
do you do then, change each reference to NULL?

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] whish list for 2016

2015-12-24 Thread Stephen Chrzanowski
Two parter;

*Part 1;*

One thing I would suggest, if you're looking to add and delete columns
dynamically, is not to worry about the order of the columns in the database,
but have a second table hanging around that remembers the specified order
the user wants to see the columns in.  Doing your update and insert calls
makes no difference so long as you specify the fields on either call (or, in
other words, don't do [ insert into MyTable values (1,2,3) ]).  Be aware on
tables that get large: adding or deleting fields can get expensive when
the database's physical pages need to be updated, especially if the field
you're adding affects indexes or requires indexes to be modified.

*Part 2;*

More along with your application style, but a complete database schema
overhaul, think of a contact form that allows for multiple methods of
communication.  Multiple email addresses, multiple phone or fax numbers,
can all be associated to one contact.  Typically you'd have a table sitting
aside with the contact type (email, phone, fax, maybe in the future
Telepresence ID?), and another table containing the actual data.  You could
adopt this method to what you're describing.  To add or delete fields to
your UI (Telpresence info doesn't exist in any common contact manager I
know of), all you'd have to do is add or delete rows to a table, and your
application written to adapt to random(?) changes to the field changes,
regardless of additions or deletions.

All you'd need is one table that would hold the fields unique identifier, a
field title, the order in which it is displayed on the UI, and possibly a
default value field.

Another table contains a unique identifier, a FK field pointing to the UID
of the above table, and the raw data.

Your software would then make whatever required SELECT call to obtain the
required information and either store that data in a new temp table, or,
store the data in memory either via a stringlist or class, then render the
data to your UI using just this new data.

BY FAR *not* the most efficient method, but, weighing the cost of
multi-gigabyte sized tables changing frequently at the users whim, versus a
few more queries to the database to mangle data in memory, it'd be up to
you as the developer to decide which is the best method.


[sqlite] whish list for 2016

2015-12-24 Thread John McKown
On Thu, Dec 24, 2015 at 4:09 PM, Christian Schmitz <
realbasiclists at monkeybreadsoftware.de> wrote:

> Hi,
>
> better ALTER command would be very welcome.
>
> e.g. RENAME/DELETE column or field.
>

RENAME exists.

ALTER TABLE table-name RENAME COLUMN column_field_name TO
new_column_field_name;

DELETE is normally done as: ALTER TABLE table-name DROP COLUMN
column_file_name; and would be a very nice addition. I hadn't noticed that
it is missing. I wonder why. Perhaps Dr. Hipp will comment after the
holidays.



>
> Instead of us writing code to do it, it could be a command where SQLite
> does things right.
>
> Sincerely
> Christian
>
> --
> Read our blog about news on our plugins:
>
> http://www.mbsplugins.de/
>
>



-- 
Computer Science is the only discipline in which we view adding a new wing
to a building as being maintenance -- Jim Horning

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


[sqlite] whish list for 2016

2015-12-24 Thread Simon Slavin

On 24 Dec 2015, at 5:10pm, Richard Damon  wrote:

> being able to directly add a field would be nice.

You can directly add a field.



Simon.


[sqlite] whish list for 2016

2015-12-24 Thread Simon Slavin

On 24 Dec 2015, at 3:12pm, John McKown  wrote:

> ?I'm curious as to why. Doing so would, most likely, require rewriting the
> entire table. If you want a SELECT * to get the columns in a particular
> order, just create a VIEW with the columns in the order in which you want
> them.

I think Gunnar just wants forms such as

ALTER TABLE CREATE COLUMN ...
ALTER TABLE DROP COLUMN ...

to aid with making small changes to the schema.  These are available in most 
SQL engines but the way SQLite3 is written makes it difficult or inefficient to 
implement them.

I have a database where one table takes up more than 30 Gigabytes of space.  
While developing the software I needed several times to change a column 
definition and since SQLite lacks these facilities I had to move 30 Gig of data 
around every time I did it.  Annoying.  But it's not normally that much of a 
problem for me.

Simon.


[sqlite] whish list for 2016

2015-12-24 Thread Bernardo Sulzbach
On Thu, Dec 24, 2015 at 3:16 PM, Simon Slavin  wrote:
>
> On 24 Dec 2015, at 5:10pm, Richard Damon  wrote:
>
>> being able to directly add a field would be nice.
>
> You can directly add a field.
>
> 
>
> Simon.

Good catch, you cannot place it wherever you want, but alter table is
currently capable of "directly adding a field".

-- 
Bernardo Sulzbach


[sqlite] whish list for 2016

2015-12-24 Thread Bernardo Sulzbach
On Thu, Dec 24, 2015 at 3:10 PM, Richard Damon  
wrote:
> On 12/24/15 10:26 AM, Bernardo Sulzbach wrote:
>>
>> I think you focused too much on the ordering issue. He or she may just
>> want a more versatile alter table for convenience. However, I don't
>> know if alter table is used at all in production anywhere (why would
>> it be? the column names and ordering should not be part of the data).
>> They are useful for prototyping because by having a stronger alter
>> table command you don't need to drop and create so many times. In the
>> end, if you spent enough time in the design phase to prepare all your
>> schemas, you should not have to drop (or alter) any of the tables at
>> all. It is a feature, a nice one, but too far from necessary (from my
>> standpoint) to be worthy of the developers' time.
>
> I am in the process of building an application that has the need to be able
> to add columns to tables in response to user actions. It is primarily in the
> stage where the user is customizing the program to their needs, but such
> customizations might happen after the program has been in use for awhile.
> The main case is to be able to add a 'Flag' to records to allow the filter
> records or save the set of records found in a search. For now, the current
> method of create new, drop and rename, isn't unworkable (and mostly hidden
> in an abstraction layer), but being able to directly add a field would be
> nice.
>
> --
> Richard Damon
>
>

Interesting. I suppose that most of this "table editing" should rely
on an abstraction layer anyway: think about supporting other RDBMS in
the future. However, I recognize that in your case a more capable
alter table may simplify things a lot.

-- 
Bernardo Sulzbach


[sqlite] whish list for 2016

2015-12-24 Thread gunnar
I would like a less limited 'alter table' statement, to be able to drop 
columns and to add columns at a position of my own choice instead of 
always at the end.






[sqlite] whish list for 2016

2015-12-24 Thread Bernardo Sulzbach
On Thu, Dec 24, 2015 at 1:12 PM, John McKown
 wrote:
>
> On Thu, Dec 24, 2015 at 6:49 AM, gunnar  wrote:
>
> > I would like a less limited 'alter table' statement, to be able to drop
> > columns and to add columns at a position of my own choice instead of always
> > at the end.
> >
> >
> I'm curious as to why. Doing so would, most likely, require rewriting the
> entire table. If you want a SELECT * to get the columns in a particular
> order, just create a VIEW with the columns in the order in which you want
> them.
>

I think you focused too much on the ordering issue. He or she may just
want a more versatile alter table for convenience. However, I don't
know if alter table is used at all in production anywhere (why would
it be? the column names and ordering should not be part of the data).
They are useful for prototyping because by having a stronger alter
table command you don't need to drop and create so many times. In the
end, if you spent enough time in the design phase to prepare all your
schemas, you should not have to drop (or alter) any of the tables at
all.

It is a feature, a nice one, but too far from necessary (from my
standpoint) to be worthy of the developers' time.



-- 
Bernardo Sulzbach


[sqlite] whish list for 2016

2015-12-24 Thread Warren Young
On Dec 24, 2015, at 11:17 AM, Warren Young  wrote:
> 
>   BEGIN TRANSACTION;
>   ALTER TABLE Foo RENAME TO oldFoo;
>   CREATE TABLE Foo … a bunch of repeated stuff
>   … almost identical to the initial
>   … DBMS creation code with just one
>   … or two differences yet everything
>   … that has stayed the same still has
>   … to be repeated just because SQLite
>   doesn't fully support ALTER TABLE
>   INSERT INTO Foo SELECT some,subset,of,columns FROM oldFoo;
>   COMMIT;
> 
> It would be entirely possible for SQLite to generate and run this code for 
> me.  Writing such code is a waste of human brain power.

On re-reading this, I see that there is a bug in that code, which only 
underscores my point: I wouldn't have made the error if I'd only had to write

   ALTER TABLE Foo DROP COLUMN bar;



[sqlite] whish list for 2016

2015-12-24 Thread Richard Damon
On 12/24/15 12:17 PM, Bernardo Sulzbach wrote:
> Interesting. I suppose that most of this "table editing" should rely
> on an abstraction layer anyway: think about supporting other RDBMS in
> the future. However, I recognize that in your case a more capable
> alter table may simplify things a lot.

Yes, I am currently working on the base Schema and the Database 
Abstraction Layer. Alternate RDBMS isn't a 'future' but first release 
requirement, so the whole program will work through the DbAL. (First 
proof of concept versions may not fully support a wide breadth of 
Databases, but will still use an Abstraction layer. )

-- 
Richard Damon



[sqlite] whish list for 2016

2015-12-24 Thread Warren Young
On Dec 24, 2015, at 8:26 AM, Bernardo Sulzbach  
wrote:
> 
> I don't
> know if alter table is used at all in production anywhere (why would
> it be? the column names and ordering should not be part of the data).

Requirements change.

In the past dozen years, the database schema I'm working on right now has 
changed about a hundred times.  Many of those changes were batched, so that 
there are "only" about 40 separate schema versions.  But, about a third of 
those involved changes that SQLite doesn't support directly, but other SQL 
DBMSes do.

SQLite currently only supports adding new features.  (ADD COLUMN, CREATE 
TABLE.) It doesn't deal with mutating features (MODIFY/CHANGE COLUMN) or 
removed features (DROP COLUMN) nearly as well.

SQLite's nearly typeless nature does allow you to paper over many changes that 
would require an ALTER TABLE in another DBMS.  (e.g. extension of an 8 bit 
integer column to 32 bits, or changing an integer column to a string column)  
But, some application level changes do still require a bunch of code at the 
SQLite layer that would be a one-liner in other DBMSes.
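That papering-over is easy to demonstrate: an SQLite column type declares an affinity, not a constraint, so a single column can absorb what other DBMSes would treat as several schema changes. A minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# INTEGER affinity is a storage preference, not a hard constraint.
con.execute("CREATE TABLE t (v INTEGER)")
# An 8-bit int, a value past 32 bits, and a string: no ALTER TABLE needed.
con.executemany("INSERT INTO t VALUES (?)", [(127,), (2**31,), ("not a number",)])
stored = [v for (v,) in con.execute("SELECT v FROM t ORDER BY rowid")]
print(stored)  # [127, 2147483648, 'not a number']
```

Each value keeps its own storage class, which is exactly why the widening Warren describes costs nothing at the SQLite layer.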

> In the
> end, if you spent enough time in the design phase to prepare all your
> schemas, you should not have to drop (or alter) any of the tables at
> all.

What you're describing is the old waterfall development dream, where all we 
need to do is spend more time in the design phase, and we'll produce perfect 
software on time, every time.  The industry ran on that mantra for decades 
before the agile movement coalesced, finally providing a better set of coherent 
philosophies.

ALTER TABLE is agile.  To the extent that agile is good, stronger ALTER TABLE 
support is good, too.

I am not arguing for an abandonment of up-front design.  The software that uses 
the DBMS I describe above changed tens of thousands of times over that same 
span, so the fact that the DBMS only had to change about 100 times is a 
testament to good up front design.  Yet, changes still occur, so it's best if 
we don't have to jump through hoops when that happens.


[sqlite] whish list for 2016

2015-12-24 Thread Richard Damon
On 12/24/15 10:26 AM, Bernardo Sulzbach wrote:
> I think you focused too much on the ordering issue. He or she may just
> want a more versatile alter table for convenience. However, I don't
> know if alter table is used at all in production anywhere (why would
> it be? the column names and ordering should not be part of the data).
> They are useful for prototyping because by having a stronger alter
> table command you don't need to drop and create so many times. In the
> end, if you spent enough time in the design phase to prepare all your
> schemas, you should not have to drop (or alter) any of the tables at
> all. It is a feature, a nice one, but too far from necessary (from my
> standpoint) to be worthy of the developers' time.

I am in the process of building an application that has the need to be 
able to add columns to tables in response to user actions. It is 
primarily in the stage where the user is customizing the program to 
their needs, but such customizations might happen after the program has 
been in use for awhile. The main case is to be able to add a 'Flag' to 
records to allow the filter records or save the set of records found in 
a search. For now, the current method of create new, drop and rename, 
isn't unworkable (and mostly hidden in an abstraction layer), but being 
able to directly add a field would be nice.

-- 
Richard Damon



[sqlite] whish list for 2016

2015-12-24 Thread Warren Young
On Dec 24, 2015, at 9:14 AM, Simon Slavin  wrote:
> 
> ALTER TABLE DROP COLUMN ...
> 
> ...the way SQLite3 is written makes it difficult or inefficient to implement 
> them.

I wouldn't mind if SQLite did nothing more than the recommended manual process 
for emulating the ALTER TABLE affordances in other DBMSes which are missing in 
SQLite.

For my purposes, the table copy isn't the problem, the problem is that a simple 
one-liner in other DBMSes becomes a dozen lines in SQLite:

   BEGIN TRANSACTION;
   ALTER TABLE Foo RENAME TO oldFoo;
   CREATE TABLE Foo … a bunch of repeated stuff
   … almost identical to the initial
   … DBMS creation code with just one
   … or two differences yet everything
   … that has stayed the same still has
   … to be repeated just because SQLite
   doesn't fully support ALTER TABLE
   INSERT INTO Foo SELECT some,subset,of,columns FROM oldFoo;
   COMMIT;

It would be entirely possible for SQLite to generate and run this code for me.  
Writing such code is a waste of human brain power.
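The boilerplate is mechanical enough that an application can generate it. A rough sketch (the helper name is mine, and it deliberately ignores indexes, triggers, views, and constraints — the parts that make a built-in DROP COLUMN genuinely hard; CREATE TABLE ... AS SELECT also drops declared types):

```python
import sqlite3

def drop_column(con, table, column):
    """Emulate ALTER TABLE ... DROP COLUMN via rename/copy/drop.
    Sketch only: ignores indexes, triggers, views, and constraints,
    and CREATE TABLE AS loses the original column declarations."""
    cols = [row[1] for row in con.execute(f"PRAGMA table_info({table})")]
    keep = ", ".join(c for c in cols if c != column)
    con.executescript(f"""
        BEGIN;
        ALTER TABLE {table} RENAME TO old_{table};
        CREATE TABLE {table} AS SELECT {keep} FROM old_{table};
        DROP TABLE old_{table};
        COMMIT;
    """)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Foo (a INTEGER, bar TEXT, c TEXT)")
con.execute("INSERT INTO Foo VALUES (1, 'x', 'y')")
drop_column(con, "Foo", "bar")
remaining = [row[1] for row in con.execute("PRAGMA table_info(Foo)")]
print(remaining)  # ['a', 'c']
```

The one-liner the post asks for would simply be this dance done inside the engine, where the schema is fully known.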


[sqlite] whish list for 2016

2015-12-24 Thread John McKown
On Thu, Dec 24, 2015 at 10:14 AM, Simon Slavin  wrote:

>
> On 24 Dec 2015, at 3:12pm, John McKown 
> wrote:
>
> > I'm curious as to why. Doing so would, most likely, require rewriting
> the
> > entire table. If you want a SELECT * to get the columns in a particular
> > order, just create a VIEW with the columns in the order in which you want
> > them.
>
> I think Gunnar just wants forms such as
>
> ALTER TABLE CREATE COLUMN ...
> ALTER TABLE DROP COLUMN ...
>
> to aid with making small changes to the schema.  These are available in
> most SQL engines but the way SQLite3 is written makes it difficult or
> inefficient to implement them.
>
> I have a database where one table takes up more than 30 Gigabytes of
> space.  While developing the software I needed several times to change a
> column definition and since SQLite lacks these facilities I had to move 30
> Gig of data around every time I did it.  Annoying.  But it's not normally
> that much of a problem for me.
>
> Simon.
>
>
I did overlook the DROP COLUMN request. I guess I got "shocked" by the OP
wanting something (I think) like: ALTER TABLE ADD COLUMN newcol TEXT AFTER
oldcol. Where "oldcol" is an existing column name which is not the "last"
one. I sometimes have an unusual take on things due to having read about
relational algebra _before_ doing SQL work. So I think of tables as
relationships, as a "set" for "attributes" which have no inherent order.
Read a bit too much by Dr. Codd. Oh, and Joe Celko too, for that matter. And
_no_ actual professional experience. Makes me a bit of a theoretician.

It really would be nice to be able to have a column defined as, say
VARCHAR(20) to be "redefined" as TEXT or VARCHAR(n) (where n>=20) with a
simple ALTER. That would be a simple change to the schema with no data
alteration. If one allowed to change a VARCHAR new length to be _less_ than
the old length, then it would be necessary to verify that all current rows
were still compliant with the new length. In that case, I guess it would be
"best" if the back end were do to the equivalent of a DELETE and ADD on the
now-invalid data, truncating the larger value to its new max size. That
would save some I/O by not rewriting compliant rows. I don't see any way to
avoid I/O if you want to change an INTEGER (1,2,3,4,6 or 8 bytes) to a
FLOAT (always 8 bytes). You'd need to rewrite every row, either "in place"
(8 byte INTEGER to FLOAT) or with a DELETE / ADD to the "end". I would
really need to examine the internals to see how much I/O this might be.

I'm not aware of any RDBMS which allows someone to alter the "type" (e.g.
INTEGER to FLOAT) of an existing column. What I have done, in PostgreSQL,
is something like:

ALTER TABLE table ADD COLUMN new-column FLOAT;
UPDATE table SET new-column=old-column;
ALTER TABLE table DROP COLUMN old-column;
ALTER TABLE table RENAME COLUMN new-column TO old-name;

But the above would "move" those 30 Gig of data round, just more easily
from the standpoint of the user. What might be interesting in this type of
case would be a "column-oriented DBMS" (
https://en.wikipedia.org/wiki/Column-oriented_DBMS) Depending on how it was
implemented, it could be made to do the above operation easily. But that
would be a _major_ rewrite of SQLite internally. Hum, it could complicate
things, but this might be more easily possible if each column were placed
in a different OS file. When you add a new column, just create a new file
initialized with the same number of rows which contain NULL or the DEFAULT
value. When you drop a column, it would simply delete the row-containing
file. Maybe a VFS could be written to do this. But SQLite would need to be
enhanced to add the ALTER TABLE ... DROP COLUMN operation.



-- 
Computer Science is the only discipline in which we view adding a new wing
to a building as being maintenance -- Jim Horning

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


[sqlite] whish list for 2016

2015-12-24 Thread John McKown
On Thu, Dec 24, 2015 at 6:49 AM, gunnar  wrote:

> I would like a less limited 'alter table' statement, to be able to drop
> columns and to add columns at a position of my own choice instead of always
> at the end.
>
>
I'm curious as to why. Doing so would, most likely, require rewriting the
entire table. If you want a SELECT * to get the columns in a particular
order, just create a VIEW with the columns in the order in which you want
them.

The SQL standard, I'm fairly sure, doesn't even specify the "natural order"
of the column returned in a "SELECT *". Of course, they will likely always
be return in the "natural" order. But that depends on how the back end is
programmed. I could see a vendor (as unlikely as it would be) deciding to
return the column in a "SELECT *" in lexicographical order based on the
server's code page, or the table's default code page, or perhaps even in
the client's code page.

But then, I admit that I am "anal" about my SELECT statements; at least
when embedded in a program. IMO, "SELECT *" is a very bad idea in any
programming language. As is making any assumption about the order of rows
returned when an ORDER BY is not specified. "Assuming _nothing_, other than
the worst." is my programming motto. Or the Russian: "Trust, but verify!".
Especially if it is coming in from "meatware" (people), or some other
organization. We have a process at work which consistently blows up because
the end user sends us junk. E.g. the cost is q.97 dollars, instead of 1.97
(q is below 1 and user is typing fast).

===
Hoping you have a nice Christmas, Hanukkah, Festivus, Kwanzaa, or at least a
3-day weekend.


-- 
Computer Science is the only discipline in which we view adding a new wing
to a building as being maintenance -- Jim Horning

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


[sqlite] whish list for 2016

2015-12-23 Thread lchis...@paradise.net.nz
Hi all,

What I meant to ask was:
a) Either an ORDER BY clause/equivalent for group_concat() [not GROUP BY as
originally posted], or an assurance that the kludge of sorting a sub-query first
and then grouping the result does and will continue to work - I need this
functionality.

This is what happens when you post when tired! My thanks to Darren Duncan who
questioned what I was asking for. Sorry I can't reply directly to Darren's
message as I get the list via digest.

I am using group_concat() to form lists of team members in a single report
field, and wish to control the order of the items concatenated. The
documentation explicitly states that "The order of the concatenated elements is
arbitrary", but searching the internet finds a number of answers along the 
lines of:
select a, group_concat(b) from (select a, b from db order by a, b desc) order 
by a;
where the ordering of the sub-select "b desc" is preserved in the group_concat()
result. This seems to work in practice, but I would really like to have surety
on a way to achieve this other than doing it in my application; other SQL
implementations have this feature.

Thanks, Len Chisholm.
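Since the documentation makes no guarantee about group_concat() ordering, the ordering can instead be done where it *is* guaranteed — in an outer ORDER BY — with the grouping moved into the application. A small Python sketch (table and column names are assumed, not from the original post):

```python
import sqlite3
from itertools import groupby

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members (team TEXT, name TEXT)")
con.executemany("INSERT INTO members VALUES (?, ?)",
                [("A", "Carol"), ("B", "Dave"), ("A", "Alice"), ("A", "Bob")])
# ORDER BY on the outer query is guaranteed; group adjacent rows in Python.
rows = con.execute("SELECT team, name FROM members ORDER BY team, name")
report = {team: ", ".join(name for _, name in grp)
          for team, grp in groupby(rows, key=lambda r: r[0])}
print(report)  # {'A': 'Alice, Bob, Carol', 'B': 'Dave'}
```

This keeps the member order deterministic without relying on the sorted-subquery kludge surviving future query-planner changes.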


[sqlite] whish list for 2016

2015-12-22 Thread lchis...@paradise.net.nz
Joining the throng, here are my requests:
a) Either an ORDER BY clause/equivalent for GROUP BY, or an assurance that the
kludge of sorting a sub-query first and then grouping the result does and will
continue to work - I need this functionality.
b) A separate mailing address for documentation corrections - I see so many
typos but it doesn't seem worth spamming the whole list to request an apostrophe
be added or two letters be exchanged.
c) A vote for RANK, but I'm doing that in my application at present, post-SQL
but before printing. I know you can generate 1,2,3=,3=,5 type sequences from
self-joins but it seems a lot easier to do it in Delphi!

BTW I am having a great deal of enjoyment with SQLite in my application, which
is a scoring database for card tournaments in New Zealand. I know it's small
compared to some of the monsters I've seen described here, but I have 38 tables
and 54 views in my schema, with 400-ish players and over 2800 score records in
the national euchre tournament DB.

Len Chisholm.


[sqlite] whish list for 2016

2015-12-21 Thread Darren Duncan
On 2015-12-21 3:46 PM, lchishol at paradise.net.nz wrote:
> Joining the throng, here are my requests:
> a) Either an ORDER BY clause/equivalent for GROUP BY, or an assurance that the
> kludge of sorting a sub-query first and then grouping the result does and will
> continue to work - I need this functionality.

You need what to work exactly?  SQL tables are unordered by definition, and 
ORDER BY is more of a display attribute at the end.  If you want to do 
something 
order-sensitive in the middle of a query then RANK is the proper generalized 
way 
to do it that SQL provides, that or, where applicable, order-insensitive 
aggregates like min/max/etc which still tell you what value you'd get first if 
you sorted a list without actually sorting it.

> c) A vote for RANK, but I'm doing that in my application at present, post-SQL
> but before printing. I know you can generate 1,2,3=,3=,5 type sequences from
> self-joins but it seems a lot easier to do it in Delphi!

I agree with adding RANK, it is very useful.

-- Darren Duncan
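For reference, the tie-aware 1, 2, 2, 4 style of ranking mentioned in the thread can be written in plain SQL today with a correlated subquery, at O(n²) cost; a small sketch against made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (player TEXT, score INTEGER)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("Ann", 30), ("Ben", 20), ("Cal", 20), ("Dee", 10)])
# Rank = 1 + number of strictly higher scores; tied players share a rank
# and the next rank is skipped (1, 2, 2, 4 ...).
rows = con.execute("""
    SELECT player, score,
           (SELECT COUNT(*) + 1 FROM scores s2 WHERE s2.score > s1.score) AS rnk
    FROM scores s1
    ORDER BY rnk, player
""").fetchall()
print(rows)  # [('Ann', 30, 1), ('Ben', 20, 2), ('Cal', 20, 2), ('Dee', 10, 4)]
```

A built-in RANK would let the engine do this in one sorted pass instead of re-counting per row.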



[sqlite] whish list for 2016

2015-12-21 Thread Petite Abeille

> On Dec 21, 2015, at 5:43 PM, Darren Duncan  wrote:
> 
> Comic act?  Do you consider the MERGE defined in the SQL standard to be a 
> better designed feature than Postgres' alternative,

Yes.

> or do you prefer the former solely because it is in the SQL standard?

Yes.

>  I recall that Postgres went with their brand-new alternative because it was 
> a better design, more useable in practice; both simpler and more powerful, 
> easier to express the user's intent and not have arbitrary limitations. 

No.

But lets agree to disagree in the spirit of the Holiday Season! :)



[sqlite] whish list for 2016

2015-12-21 Thread Petite Abeille

> On Dec 21, 2015, at 4:08 AM, Darren Duncan  wrote:
> 
> If you want that feature, instead do it the better way that Postgres 9.5 did, 
> which is as an extension to the INSERT statement in the form "ON CONFLICT DO 
UPDATE/IGNORE".

Please, enough of the comic act :P

MERGE it is.

Oh, yes, also, +1 for Mr Smith's PRAGMA strict_mode = 1; :))


[sqlite] whish list for 2016

2015-12-21 Thread Darren Duncan
On 2015-12-21 8:25 AM, Petite Abeille wrote:
>> On Dec 21, 2015, at 4:08 AM, Darren Duncan  
>> wrote:
>>
>> If you want that feature, instead do it the better way that Postgres 9.5 
>> did, which is as an extension to the INSERT statement in the form "ON 
>> CONFLICT DO UPDATE/IGNORE".
>
> Please, enough of the comic act :P
>
> MERGE it is.

Comic act?  Do you consider the MERGE defined in the SQL standard to be a 
better 
designed feature than Postgres' alternative, or do you prefer the former solely 
because it is in the SQL standard?  I recall that Postgres went with their 
brand-new alternative because it was a better design, more useable in practice; 
both simpler and more powerful, easier to express the user's intent and not 
have 
arbitrary limitations. -- Darren Duncan



[sqlite] whish list for 2016

2015-12-21 Thread R Smith

PRAGMA strict_mode = 1;

:)



[sqlite] whish list for 2016

2015-12-20 Thread Petite Abeille

> On Dec 20, 2015, at 7:12 PM, Big Stone  wrote:
> 
> To prepare for 2016 greetings moment, here is my personnal whish list
> for sqlite:

MERGE statement! Yeah! :)

https://en.wikipedia.org/wiki/Merge_(SQL)






[sqlite] whish list for 2016

2015-12-20 Thread Domingo Alvarez Duarte
Let's add another one:  

- replace hard coded floating point type "double" by "sqlite3_double" to
allow user define alternatives like "_Decimal64" (see
https://github.com/mingodad/squilu/tree/master/SquiLu-ext sqlite3.c sqlite3.h
with "-DSQLITE_USE_DECIMAL")  

Best whishes for 2016!  
>  Sun Dec 20 2015 7:12:39 pm CET from "Big Stone"
>  Subject: [sqlite] whish list for 2016
>
>  Hi All,
> 
> To prepare for 2016 greetings moment, here is my personnal whish list
> for sqlite:
> - analytic functions (would fit the split/apply/combine data-science
>landscape),
> - "generate_series" extension included in default sqlite.exe and
> sqlite.dll for windows.
> 
> Best whishes for 2016!
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 
>
>  






[sqlite] whish list for 2016

2015-12-20 Thread Big Stone
Hi All,

To prepare for 2016 greetings moment, here is my personnal whish list
for sqlite:
- analytic functions (would fit the split/apply/combine data-science landscape),
- "generate_series" extension included in default sqlite.exe and
sqlite.dll for windows.

Best whishes for 2016!


[sqlite] whish list for 2016

2015-12-20 Thread Darren Duncan
On 2015-12-20 10:25 AM, Petite Abeille wrote:
>> On Dec 20, 2015, at 7:12 PM, Big Stone  wrote:
>>
>> To prepare for 2016 greetings moment, here is my personnal whish list
>> for sqlite:
>
> MERGE statement! Yeah! :)
>
> https://en.wikipedia.org/wiki/Merge_(SQL)

If you want that feature, instead do it the better way that Postgres 9.5 did, 
which is as an extension to the INSERT statement in the form "ON CONFLICT DO 
UPDATE/IGNORE".

Relevant url:

https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.5#INSERT_..._ON_CONFLICT_DO_NOTHING.2FUPDATE_.28.22UPSERT.22.29

Example:

  INSERT INTO user_logins (username, logins)
  VALUES ('Naomi',1),('James',1)
  ON CONFLICT (username)
  DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;

-- Darren Duncan
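Absent either MERGE or ON CONFLICT DO UPDATE, the closest idiom available in SQLite is a two-statement emulation inside one transaction: INSERT OR IGNORE followed by UPDATE. (INSERT OR REPLACE is not equivalent — it deletes and re-inserts the row.) A sketch of the user_logins example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_logins (username TEXT PRIMARY KEY, logins INTEGER)")

def upsert_login(con, username, n):
    # Two statements emulate ON CONFLICT DO UPDATE; run as one transaction.
    with con:
        con.execute("INSERT OR IGNORE INTO user_logins VALUES (?, 0)", (username,))
        con.execute("UPDATE user_logins SET logins = logins + ? WHERE username = ?",
                    (n, username))

upsert_login(con, "Naomi", 1)
upsert_login(con, "Naomi", 1)
naomi = con.execute(
    "SELECT logins FROM user_logins WHERE username = 'Naomi'").fetchone()
print(naomi)  # (2,)
```

Either a MERGE statement or the Postgres-style INSERT extension would collapse this into a single statement.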



[sqlite] whish list for 2016

2015-12-20 Thread Edward Lau
A vote from me for OLAP analytical functions.



-Original Message-
From: Big Stone <stonebi...@gmail.com>
To: sqlite-users 
Sent: Sun, Dec 20, 2015 10:12 am
Subject: [sqlite] whish list for 2016

Hi All,

To prepare for 2016 greetings moment, here is my personnal whish list
for sqlite:
- analytic functions (would fit the split/apply/combine data-science landscape),
- "generate_series" extension included in default sqlite.exe and
sqlite.dll for windows.

Best whishes for 2016!
___
sqlite-users mailing list
sqlite-users at mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users