Re: [sqlite] How do I get back command history in v3.5.4?

2008-01-15 Thread John Stanton
Do you have readline installed on your Linux machine?  If not, download 
the package, configure, and install it.  Then compile SQLite with readline.
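
If the runtime library is there but the build still comes out without history
support, the missing piece is usually the readline *development* headers
rather than readline itself.  A rough sketch for Kubuntu (the package name is
my guess for that release; adjust as needed):

  sudo apt-get install libreadline5-dev   # readline headers
  tar xzf sqlite-3.5.4.tar.gz
  cd sqlite-3.5.4
  ./configure                             # should now pick up readline
  make
  sudo make install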


Fred J. Stephens wrote:

Just replaced SQLite v3.4.2 on Kubuntu 7.10 with the new v3.5.4 binary
downloaded from sqlite.org. It works fine, but now I don't have the
(essential, to me) command history. I have searched the forums and
read that it requires readline. I do have /lib/libreadline.so.5.
I tried compiling from source but get errors (137, if I remember
correctly). I tried editing the makefile to enable readline and putting
in the path to the readline library, but I still get errors compiling.
Isn't readline support supposed to be automatically detected on Linux
by configure?

If I can't get this fixed, how likely is the possible data corruption
problem that was corrected by 3.5.4?  Will I have to wait for the new
version to hit the Ubuntu repositories? Maybe I am spoiled by 3-4
years on Gentoo, with the latest versions always available, but if data
corruption is an issue I would like to use the new version.

Another curious thing is that the original binary for sqlite3 is only
about 32k, while this new one is 300k! The file command shows that
they are both dynamically linked and stripped, so I don't understand
the huge difference in size.

Any information appreciated




Re: [sqlite] SQLite as a IPC mechanism

2008-01-15 Thread Ken
Joshua,
I don't think sqlite was designed to be used that way.

It would work very well for your persistent data that is written to disk.

However, it is not so well suited as an IPC mechanism. Locking is at the
database level, not the table, row, or even page level: the entire DB is
locked when you write (insert, update, or delete) data.

IPC is best done with message queues (depending upon your data volumes),
shared memory with appropriate semaphores/mutexes, or, as one person
posted, mmap'd files. You'll still need a mutex or a semaphore to write.
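
A quick way to see the effect from two sqlite3 shells on the same (made-up)
database file, assuming a table named state already exists:

In shell 1:
  sqlite> BEGIN;
  sqlite> INSERT INTO state VALUES ('k1', 'text', 'v1');

In shell 2, while shell 1 has not yet committed:
  sqlite> INSERT INTO state VALUES ('k2', 'text', 'v2');
  SQL error: database is locked

The second writer is blocked even though it touches a different row, because
the lock is on the whole file.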


Ken


"Joshua D. Boyd" <[EMAIL PROTECTED]> wrote: I have a system that currently 
consists of 2 C programs and 3 python
programs.  Currently the python programs transfer data between
themselves via pickles.  The C programs transfer data between themselves
via streaming structs, and the C programs talk to one of the python
programs via a fairly ugly text over socket method.  All of the programs
are threaded.

Of the data being communicated, some of it must also be saved to disk,
and other pieces go away after a reset.  All told there is only about 4
k of stuff saved.

I am wondering about using SQLite to communicate between the programs.
I'd use two databases.  One on a flash disk for the data that needs to
be saved, and the other database would somehow be in a ram disk.  Each
Db would have 1 table, and the fields would be key, type, val.  Most
fields would only be written to by one or two sources, but would be read
from by nearly all processes.

Is this a stupid use of SQLite?  I can't quite seem to find anyone using
it like this.  I am a little concerned about page locking as opposed to
row locking, but I think I can work around that.





[sqlite] Berkeley DB vs. SQLite for threaded application

2008-01-15 Thread Tomas Lee
I've got an application that has Berkeley DB embedded in it.  I want
to replace Berkeley DB with SQLite.  (I plan to use the 3.5.4
amalgamation, which is the latest I could find.)  The thing is, this
application uses threads.  I know threads are evil, but this
application uses them, and there it is.  So, I wanted to understand
what I had to do to be safe.

As I understand it, Berkeley DB has free-threaded database handles, so
my application can open a Berkeley DB database connection and have all
of its thread use that same connection.  But SQLite doesn't allow
this, so I'm going to have to change things.  In theory, I could just
open and close a new connection whenever I want to access the
database.  How much of a performance hit is that?

 tells me that I can move a connection
handle across threads as long as that connection is holding no fcntl()
locks, and that this is due to fcntl() bugs on some OSes, like RedHat9.
But what if I'm running on an OS that doesn't have these bugs -- do
I have to worry about moving a connection handle across threads?  And
how can I tell if my OS has these bugs?




[sqlite] How do I get back command history in v3.5.4?

2008-01-15 Thread Fred J. Stephens
Just replaced SQLite v3.4.2 on Kubuntu 7.10 with the new v3.5.4 binary
downloaded from sqlite.org. It works fine, but now I don't have the
(essential, to me) command history. I have searched the forums and
read that it requires readline. I do have /lib/libreadline.so.5.
I tried compiling from source but get errors (137, if I remember
correctly). I tried editing the makefile to enable readline and putting
in the path to the readline library, but I still get errors compiling.
Isn't readline support supposed to be automatically detected on Linux
by configure?

If I can't get this fixed, how likely is the possible data corruption
problem that was corrected by 3.5.4?  Will I have to wait for the new
version to hit the Ubuntu repositories? Maybe I am spoiled by 3-4
years on Gentoo, with the latest versions always available, but if data
corruption is an issue I would like to use the new version.

Another curious thing is that the original binary for sqlite3 is only
about 32k, while this new one is 300k! The file command shows that
they are both dynamically linked and stripped, so I don't understand
the huge difference in size.

Any information appreciated




Re: [sqlite] Next Version of SQLite

2008-01-15 Thread John Stanton

[EMAIL PROTECTED] wrote:

Joe Wilson <[EMAIL PROTECTED]> wrote:

--- "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
Sorry for the confusion. 

No problem.

For what it's worth, I am also curious as to the final form of the 
VM opcode transformation. The number of opcodes generated by the various
SQL statements seems to be roughly the same as in the old scheme. At this 
point, without sub-expression elimination, are you seeing any speed 
improvement?




I have not even looked at performance yet.  I'm assuming that
performance will drop during the conversion process and that we
will have to fight to get it back up to previous levels after
the conversion is complete.

But consider what can be done with a register machine that
we couldn't do with the old stack machine.  In a statement
like this:

SELECT * FROM a NATURAL JOIN b;

Suppose tables a and b have column c in common and unique
columns a1, a2, a3 and b1, b2, b3.  With the stack machine, 
the algorithm is roughly this:


foreach entry in a:
  foreach entry in b with b.c==a.c:
push a.c
push a.a1
push a.a2
push a.a3
push b.b1
push b.b2
push b.b3
return one row of result
  endforeach
endforeach

For each result row, all columns had to be pushed onto the
stack.  Then the OP_Callback opcode would fire, causing
sqlite3_step() to return SQLITE_ROW.  The result columns would
then be available to sqlite3_column_xxx() routines which read
those results off of the stack.  When sqlite3_step() is called
again, all result columns are popped from the stack and 
execution continues with the first operation after the 
OP_Callback.


In the register VM, result columns are stored in a consecutive
sequence of registers.  It is no longer necessary to pop the
stack of prior results at the start of each sqlite3_step().
So the code can look more like this:

foreach entry in a:
  r1 = a.c
  r2 = a.a1
  r3 = a.a2
  r4 = a.a3
  foreach entry in b where c=r1:
r5 = b.b1
r6 = b.b2
r7 = b.b3
return one row of result
  endforeach
endforeach

When results are stored in registers, the computation of the 
first four columns of the result set can be factored out of 
the inner loop.  If there are 10 matching rows in b for every
row in a, this might result in a significant performance boost.

Do not look for this improvement right away, though.  The
first order of business is to get the VM converted over into
a register machine.  Only after that is successfully accomplished
will we look into implementing optimizations such as the above.

--
D. Richard Hipp <[EMAIL PROTECTED]>

This architecture replacement is a very significant step for Sqlite from 
my perspective.  It marks a transition from the simpler, more easily 
implemented technology to a more refined one that provides the basis for 
more sophisticated optimization and a generally more efficient core.


Such ongoing evolution guarantees that Sqlite will not ossify and become
irrelevant.  Congratulations to Dr. Hipp and team for not leaving well alone.






Re: [sqlite] Test suite

2008-01-15 Thread Joe Wilson
Grab the source tree via tar.gz file or cvs and run:

  ./configure
  make test
or
  make fulltest

To run just a single test file:

  make testfixture    # if not already built by make test

  ./testfixture test/select1.test

--- Ken <[EMAIL PROTECTED]> wrote:
> Sorry if this has been asked, but I'd like to know how to run the test suite.




  




Re: [sqlite] Dump w/o Schema

2008-01-15 Thread Rich Shepard

On Tue, 15 Jan 2008, Rob Sciuk wrote:


Is there a way to dump out the contents of the database w/o having the
CREATE TABLE statements?  For example, I can do the following:

sqlite3 foo.db .dump > foo.dmp

However, foo.dmp contains all of the CREATE TABLE statements.  I just
want all of the INSERT INTO statements associated with this database.


Actually, this looks like a reasonable enhancement request.  As it does *NOT* 
affect the API in any way, and PostGres allows separate schema and data dumps 
(the default being both) in psql, there should be no reason not to put it 
into the sqlite3 tool, unless I'm missing something??  Am I?


  Not needed; it already exists.

  Change the mode to 'insert', select a file name for your output, then run
the select statement. For example:

sqlite> .m insert
sqlite> .o table-values.sql
sqlite> Select * from Tablename;
sqlite> .o stdout
sqlite> .m list

  The last two commands reset the output and mode back to the interactive
values.

Rich

--
Richard B. Shepard, Ph.D.   |  IntegrityCredibility
Applied Ecosystem Services, Inc.|Innovation
 Voice: 503-667-4517  Fax: 503-667-8863




Re: [sqlite] Dump w/o Schema

2008-01-15 Thread P Kishor
On 1/15/08, Mark Riehl <[EMAIL PROTECTED]> wrote:
> I'm coming from a MySQL background and the mysqldump utility supports
> schema-only,

sqlite> .output filename
sqlite> .schema

> data-only,

sqlite> .output filename
sqlite> .mode insert
sqlite> select * from tablename;

> and schema plus data dumps.

sqlite> .output filename
sqlite> .dump

> I thought that
> there was an SQLite trick I was missing somewhere.
>
> Mark
>
> On Jan 15, 2008 4:10 PM, Rob Sciuk <[EMAIL PROTECTED]> wrote:
> >
> >
> > On Tue, 15 Jan 2008, Mark Riehl wrote:
> >
> > > Is there a way to dump out the contents of the database w/o having the
> > > CREATE TABLE statements?  For example, I can do the following:
> > >
> > > sqlite3 foo.db .dump > foo.dmp
> > >
> > > However, foo.dmp contains all of the CREATE TABLE statements.  I just
> > > want all of the INSERT INTO statements associated with this database.
> > >
> > > I looked through the documentation but didn't find the answer to this
> > > one, I apologize if it's already in the docs.
> > >
> > > Thanks,
> > > Mark
> >
> > Actually, this looks like a reasonable enhancement request.  As it does
> > *NOT* affect the API in any way, and PostGres allows separate schema and
> > data dumps (the default being both) in psql, there should be no reason not
> > to put it into the sqlite3 tool, unless I'm missing something??  Am I?
> >
> > Cheers,
> > Rob Sciuk
> >
> >

-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/




[sqlite] Date arithmetic question

2008-01-15 Thread Fowler, Jeff
Hello All,
 
SQLite newbie here. I've looked through the email archives and website
trying to find out how to compute the difference in months between two
given dates. Each date is in YYYY-MM-DD HH:MM:SS format.
 
The best I've been able to come up with seems rather ugly:
 
SELECT (strftime('%Y', date1)*12 + strftime('%m', date1)) -
       (strftime('%Y', date2)*12 + strftime('%m', date2))
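
As a quick sanity check of the month arithmetic (the literal timestamps below
are made up), the same expression can be evaluated in the sqlite3 shell:

sqlite> SELECT (strftime('%Y','2008-01-15 12:00:00')*12 + strftime('%m','2008-01-15 12:00:00'))
   ...>      - (strftime('%Y','2007-03-20 08:30:00')*12 + strftime('%m','2007-03-20 08:30:00'));
10

Note that this counts whole calendar months only; the day and time parts are
ignored.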
 
Am I on the right track, or is there something obvious I'm missing?
 
Thanks in advance,
 
Jeff Fowler
 


Re: [sqlite] Dump w/o Schema

2008-01-15 Thread Mark Riehl
I'm coming from a MySQL background and the mysqldump utility supports
schema-only, data-only, and schema plus data dumps.  I thought that
there was an SQLite trick I was missing somewhere.

Mark

On Jan 15, 2008 4:10 PM, Rob Sciuk <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 15 Jan 2008, Mark Riehl wrote:
>
> > Is there a way to dump out the contents of the database w/o having the
> > CREATE TABLE statements?  For example, I can do the following:
> >
> > sqlite3 foo.db .dump > foo.dmp
> >
> > However, foo.dmp contains all of the CREATE TABLE statements.  I just
> > want all of the INSERT INTO statements associated with this database.
> >
> > I looked through the documentation but didn't find the answer to this
> > one, I apologize if it's already in the docs.
> >
> > Thanks,
> > Mark
>
> Actually, this looks like a reasonable enhancement request.  As it does
> *NOT* affect the API in any way, and PostGres allows separate schema and
> data dumps (the default being both) in psql, there should be no reason not
> to put it into the sqlite3 tool, unless I'm missing something??  Am I?
>
> Cheers,
> Rob Sciuk
>
>



Re: [sqlite] [Linux + PHP] Recommended way to access SQLite?

2008-01-15 Thread Kees Nuyt
On Tue, 15 Jan 2008 11:33:06 -0800 (PST), "kamil.szot" <[EMAIL PROTECTED]> 
wrote:

>
>
>Gilles Ganault wrote:
>> 
>> So the options are:
>> 1. use the old SQLite2 sqlite_() functions (or some class that turns this 
>> into OO)
>> 2. PDO to use the SQLite3 linked-in library
>> 3. PDO to access the SQLite3 DLL
>> 
>> ... with 2 being the recommended choice.
>> 
>With 2 (and probably 3) you will not be able to retrieve metadata (for
>example column names) for an empty table.
>The only way of getting column names in PDO is executing a "SELECT * FROM
>table" query and then retrieving metadata using the columnCount() and
>getColumnMeta() methods of the PDOStatement object returned by the
>PDO->query() method. But in the case of sqlite: (meaning sqlite3) databases,
>getColumnMeta() returns nothing if the result set is empty. In the case of
>the sqlite2: driver, getColumnMeta() always throws an exception.
>
>So if you need to inspect your tables at runtime, then I recommend the
>first solution.
>
>If anyone knows how to get field names of a table from sqlite3 database in
>PHP please share your knowledge with the world. :-)

There's a PRAGMA for that purpose.

$sql = "PRAGMA table_info('tablename')";
$res = $this->query($sql);

Example result set:
cid  name     type        notnull  dflt_value  pk
0    jobid    INTEGER     99                   1
1    TSN      CHAR(4)     0                    0
2    jobprio  INTEGER     0        9           0
3    status   CHAR(1)     0        'W'         0
4    userid   VARCHAR(8)  99                   0
etc.

The columns of this resultset have metadata, just like a normal resultset.

Tested with:
php, php_pdo.dll, php_pdo_sqlite_external.dll v5.2.5.5 (2007-11-08, Windows), 
sqlite3.dll  v3.5.4 (2007-12-14)
-- 
  (  Kees Nuyt
  )
c[_]




Re: [sqlite] Dump w/o Schema

2008-01-15 Thread Rob Sciuk


On Tue, 15 Jan 2008, Mark Riehl wrote:


Is there a way to dump out the contents of the database w/o having the
CREATE TABLE statements?  For example, I can do the following:

sqlite3 foo.db .dump > foo.dmp

However, foo.dmp contains all of the CREATE TABLE statements.  I just
want all of the INSERT INTO statements associated with this database.

I looked through the documentation but didn't find the answer to this
one, I apologize if it's already in the docs.

Thanks,
Mark


Actually, this looks like a reasonable enhancement request.  As it does 
*NOT* affect the API in any way, and PostGres allows separate schema and 
data dumps (the default being both) in psql, there should be no reason not 
to put it into the sqlite3 tool, unless I'm missing something??  Am I?


Cheers,
Rob Sciuk




Re: [sqlite] SQLite as a IPC mechanism

2008-01-15 Thread Clay Dowling
Seems like it would work, but maybe overkill.  What's stopping you from
working out a fairly direct protocol to exchange data with?  Sending key,
type, val for all of your IPC seems reasonable.  A mem-mapped file, a
local socket or a network socket seem reasonable, depending on the
structure of the system.

Clay


Joshua D. Boyd wrote:
> I have a system that currently consists of 2 C programs and 3 python
> programs.  Currently the python programs transfer data between
> themselves via pickles.  The C programs transfer data between themselves
> via streaming structs, and the C programs talk to one of the python
> programs via a fairly ugly text over socket method.  All of the programs
> are threaded.
>
> Of the data being communicated, some of it must also be saved to disk,
> and other pieces go away after a reset.  All told there is only about 4
> k of stuff saved.
>
> I am wondering about using SQLite to communicate between the programs.
> I'd use two databases.  One on a flash disk for the data that needs to
> be saved, and the other database would somehow be in a ram disk.  Each
> Db would have 1 table, and the fields would be key, type, val.  Most
> fields would only be written to by one or two sources, but would be read
> from by nearly all processes.
>
> Is this a stupid use of SQLite?  I can't quite seem to find anyone using
> it like this.  I am a little concerned about page locking as opposed to
> row locking, but I think I can work around that.
>


-- 
Lazarus Registration
http://www.lazarusid.com/registration.shtml





Re: [sqlite] Dump w/o Schema

2008-01-15 Thread Richard Klein

Mark Riehl wrote:

Is there a way to dump out the contents of the database w/o having the
CREATE TABLE statements?  For example, I can do the following:

sqlite3 foo.db .dump > foo.dmp

However, foo.dmp contains all of the CREATE TABLE statements.  I just
want all of the INSERT INTO statements associated with this database.

I looked through the documentation but didn't find the answer to this
one, I apologize if it's already in the docs.

Thanks,
Mark



No, there is no way to do this in sqlite3.  The .dump command is
intended to produce a complete set of SQL statements that, when
executed, will recreate the database being dumped.

Can't you just remove the CREATE TABLE statements manually, or
pipe the output of .dump to sed or awk?
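
For instance, something along these lines should keep only the INSERT
statements (one caveat: an INSERT whose text values contain embedded newlines
would be cut short by a purely line-based filter):

sqlite3 foo.db .dump | grep '^INSERT' > foo-data.sql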

Regards,
- Richard Klein





Re: [sqlite] Dump w/o Schema

2008-01-15 Thread Mike McGonagle
On Jan 15, 2008 2:37 PM, Mark Riehl <[EMAIL PROTECTED]> wrote:

> Is there a way to dump out the contents of the database w/o having the
> CREATE TABLE statements?  For example, I can do the following:
>
> sqlite3 foo.db .dump > foo.dmp
>

You could do something like this on Unix:

sqlite3 foo.db .dump | fgrep INSERT > foo.dmp


Mike


-- 
Peace may sound simple—one beautiful word— but it requires everything we
have, every quality, every strength, every dream, every high ideal.
—Yehudi Menuhin (1916–1999), musician


[sqlite] SQLite as a IPC mechanism

2008-01-15 Thread Joshua D. Boyd
I have a system that currently consists of 2 C programs and 3 python
programs.  Currently the python programs transfer data between
themselves via pickles.  The C programs transfer data between themselves
via streaming structs, and the C programs talk to one of the python
programs via a fairly ugly text over socket method.  All of the programs
are threaded.

Of the data being communicated, some of it must also be saved to disk,
and other pieces go away after a reset.  All told there is only about 4
k of stuff saved.

I am wondering about using SQLite to communicate between the programs.
I'd use two databases.  One on a flash disk for the data that needs to
be saved, and the other database would somehow be in a ram disk.  Each
Db would have 1 table, and the fields would be key, type, val.  Most
fields would only be written to by one or two sources, but would be read
from by nearly all processes.

Is this a stupid use of SQLite?  I can't quite seem to find anyone using
it like this.  I am a little concerned about page locking as opposed to
row locking, but I think I can work around that.
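
For what it's worth, a rough sketch of the table shape being described here
(table and key names are invented for illustration):

CREATE TABLE shared_state (
  key  TEXT PRIMARY KEY,   -- e.g. 'sensor.temperature'
  type TEXT,               -- 'int', 'float', 'text', ...
  val  TEXT
);

-- a writer publishes a value:
INSERT OR REPLACE INTO shared_state VALUES ('sensor.temperature', 'float', '21.5');

-- readers poll it:
SELECT type, val FROM shared_state WHERE key = 'sensor.temperature';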




[sqlite] Dump w/o Schema

2008-01-15 Thread Mark Riehl
Is there a way to dump out the contents of the database w/o having the
CREATE TABLE statements?  For example, I can do the following:

sqlite3 foo.db .dump > foo.dmp

However, foo.dmp contains all of the CREATE TABLE statements.  I just
want all of the INSERT INTO statements associated with this database.

I looked through the documentation but didn't find the answer to this
one, I apologize if it's already in the docs.

Thanks,
Mark




Re: [sqlite] [Linux + PHP] Recommended way to access SQLite?

2008-01-15 Thread kamil.szot


Gilles Ganault wrote:
> 
> So the options are:
> 1. use the old SQLite2 sqlite_() functions (or some class that turns this 
> into OO)
> 2. PDO to use the SQLite3 linked-in library
> 3. PDO to access the SQLite3 DLL
> 
> ... with 2 being the recommended choice.
> 
With 2 (and probably 3) you will not be able to retrieve metadata (for
example column names) for an empty table.
The only way of getting column names in PDO is executing a "SELECT * FROM
table" query and then retrieving metadata using the columnCount() and
getColumnMeta() methods of the PDOStatement object returned by the
PDO->query() method. But in the case of sqlite: (meaning sqlite3) databases,
getColumnMeta() returns nothing if the result set is empty. In the case of
the sqlite2: driver, getColumnMeta() always throws an exception.

So if you need to inspect your tables at runtime, then I recommend the
first solution.

If anyone knows how to get field names of a table from sqlite3 database in
PHP please share your knowledge with the world. :-)





Re: [sqlite] Question on SQL arbitrary arrays

2008-01-15 Thread Mike McGonagle
On Jan 14, 2008 11:28 PM, mark pirogovsky <[EMAIL PROTECTED]> wrote:

> It is really hard to give you some idea without knowing more of you
> problem...  but your proposed implementation does carry a lot of
> overhead - for every number you have two extra fields, not to mention
> the database internal structures.
>

Basically, I am trying to connect SQLite up to Pure Data (a dataflow
programming language), and I am creating arbitrary lists of floating point
numbers that I would like to be able to store as sets. The SQL code from my
original message is what I am currently using for the array stuff. Just one
big table with each element tagged for grouping and another for their order.

It sounds like what you described would be great, if it were only in SQLite.

I had thought about the idea of creating everything as a long string,
separated by a space, and storing them as a group. I just wasn't sure if
this was the "accepted" way to do it in SQL.

Thanks for your suggestions.


Mike


[sqlite] Test suite

2008-01-15 Thread Ken
Sorry if this has been asked, but I'd like to know how to run the test suite.

I could not find any documentation on this on the sqlite website.

Thanks for your help.
Ken





RE: [sqlite] Aggregates in SELECT without GROUP BY

2008-01-15 Thread Ken


Joe Wilson <[EMAIL PROTECTED]> wrote: --- Ken  wrote:

> Doing this in oracle results in an error: 
> 
> SQL> select max(addr_id), emp_id from z_address;
> select max(addr_id), emp_id from z_address
>  *
> ERROR at line 1:
> ORA-00937: not a single-group group function

As expected.

> I think an error is more appropriate when there is no group by clause. But as 
> a developer I know
> better, and write aggregated sql with a group by.
> 
> select max(addr_id), emp_id from z_address group by null;
> 
> Does not return an error nor does it return data.

The GROUP BY NULL thing is not standard, which is why I qualified it
with for sqlite. It varies from database to database.

 -- mysql, sqlite
 select max(a) from t group by null;
 select max(a) from t group by '';
 
 -- postgres
 select max(a) from t group by +0;

Oracle may or may not have an equivalent.


Agreed. :)



[sqlite] How to compile SQLite with SQLITE_ENABLE_COLUMN_METADATA option under Ubuntu Linux

2008-01-15 Thread Luc DAVID

Thank you Jay for your answers.

I will try the 'lib' program.

Luc




RE: [sqlite] Aggregates in SELECT without GROUP BY

2008-01-15 Thread Joe Wilson
--- Ken <[EMAIL PROTECTED]> wrote:

> Doing this in oracle results in an error: 
> 
> SQL> select max(addr_id), emp_id from z_address;
> select max(addr_id), emp_id from z_address
>  *
> ERROR at line 1:
> ORA-00937: not a single-group group function

As expected.

> I think an error is more appropriate when there is no group by clause. But as 
> a developer I know
> better, and write aggregated sql with a group by.
> 
> select max(addr_id), emp_id from z_address group by null;
> 
> Does not return an error nor does it return data.

The GROUP BY NULL thing is not standard, which is why I qualified it
with for sqlite. It varies from database to database.

 -- mysql, sqlite
 select max(a) from t group by null;
 select max(a) from t group by '';
 
 -- postgres
 select max(a) from t group by +0;

Oracle may or may not have an equivalent.





  




Re: [sqlite] Question on SQL arbitrary arrays

2008-01-15 Thread P Kishor
On 1/15/08, Joe Wilson <[EMAIL PROTECTED]> wrote:
> > On 1/14/08, mark pirogovsky <[EMAIL PROTECTED]> wrote:
> > > Some relational (PostgreSQL for example) databases allow you to store
> > > an arbitrary array as a field in one row.
>
> I was not aware of Postgres arrays or that it is part of the
> SQL:1999 standard:
>
>  http://www.postgresql.org/docs/8.0/interactive/arrays.html
>
> Or, for that matter, that postgres supports Set Returning Functions
> that can be queried in the FROM clause:
>
>  http://www.postgresql.org/docs/8.0/static/functions-srf.html
>
>  select * from generate_series(2,4);
>   generate_series
>  -----------------
>                 2
>                 3
>                 4
>
> I suppose that generate_series() could be implemented as an iterator.
>
> Useful stuff. Thanks for the tip.
>
>


also very useful (from my pov) as it can avoid several lines of code
in the application...

SELECT array(i) FROM table

or

SELECT cols FROM table WHERE array[i] OPERATOR array[j]




Re: [sqlite] Storing / Archiving of SQLite database in vcs

2008-01-15 Thread Ken
I would tend to lean towards storing a flat-file .dump of the 
database in cvs rather than the binary database file.
  
   1. It's more portable. What happens if sqlite is no longer maintained or 
actively developed?

   2. Creating the db is simple, allowing you easy migration from one version of 
sqlite to the next. Although backwards compatibility is generally maintained, 
there have been instances when this was broken. 


I would also maintain in cvs the create sql commands (.schema) separate from 
the data.
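
A minimal sketch of that workflow (file names invented):

sqlite3 mydb.db .dump   > mydb.sql          # schema + data, checked into cvs
sqlite3 mydb.db .schema > mydb-schema.sql   # schema only, versioned separately
# later, while building the root file system:
sqlite3 newdb.db < mydb.sql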

Just my opinion, I'm sure others here will differ.


Regards,
Ken



Rich Rattanni <[EMAIL PROTECTED]> wrote: All:
I have several sqlite databases that I want to store in my version
control system.  I was wondering if instead of storing them as binary
files, would make sense to store a SQL dump in version control.  When
I create a root file system for my development board I will create the
databases from the SQL dump.

I had a few reasons for thinking this was a good idea:
1) If I upgrade sqlite and build a new root file system, then my
databases will be created with the same version of sqlite.
2) This is essentially the same (or better?) as a vacuum, since the
database should be completely free of any wasted space.
3) There may be a slight possibility of using version control facilities
(diff, blame, etc.) while I am developing.

However, I thought of one con for doing this:
1) It may be stupid to do this.


I guess I am more concerned with whether or not I can guarantee that
the sqlite engine is deterministic, such that for a given sequence of
SQL input I wind up with the same database each time (of course, using
sqlite3 to dump the sql may not guarantee me the same sequence of SQL
as was used to create the database, huh?).


Feedback? (please be nice)


--
Rich Rattanni





RE: [sqlite] Aggregates in SELECT without GROUP BY

2008-01-15 Thread Ken
Doing this in oracle results in an error: 

SQL> select max(addr_id), emp_id from z_address;
select max(addr_id), emp_id from z_address
 *
ERROR at line 1:
ORA-00937: not a single-group group function

I think an error is more appropriate when there is no group by clause. But as a 
developer I know better, and write aggregated sql with a group by.

select max(addr_id), emp_id from z_address group by null;

Does not return an error nor does it return data.

Ken


Joe Wilson <[EMAIL PROTECTED]> wrote: --- Darren Duncan  wrote:
> At 10:17 PM -0500 1/14/08, Griggs, Donald wrote:
> >Regarding: " A DBMS accepting such queries isn't just a little
> >dangerous, its flat out wrong.  I would ask what rationale there is for
> >this query not failing. -- Darren Duncan"
> >
> >I'm not asserting that you have to agree with the rationale, but did you
> >see and read the discussion that Joe Wilson pointed out to you?
> >
> >=
> >This issue is debated from time to time on the list:
> >
> >  http://www.mail-archive.com/sqlite-users@sqlite.org/msg17769.html
> 
> Sorry, I missed the url on my first reading.
> 
> Also, my first comment was based on the idea that SQL usually returns 
> exactly one row on a query that uses an aggregate but no group-by, 
> and where all result field values are scalar.

In sqlite, assuming there's at least one row, an aggregate SELECT 
with no GROUP BY clause is conceptually the same as an equivalent 
SELECT with GROUP BY NULL - i.e., the group of all rows. 
(I say 'conceptually' because GROUP BY NULL is much slower).

But I agree with your point.





  



Re: [sqlite] Question on SQL arbitrary arrays

2008-01-15 Thread Joe Wilson
> On 1/14/08, mark pirogovsky <[EMAIL PROTECTED]> wrote:
> > Some relational (PostgreSQL for example) databases allow you to store
> > an arbitrary array as a field in one row.

I was not aware of Postgres arrays or that it is part of the 
SQL:1999 standard:

 http://www.postgresql.org/docs/8.0/interactive/arrays.html

Or, for that matter, that postgres supports Set Returning Functions
that can be queried in the FROM clause:

 http://www.postgresql.org/docs/8.0/static/functions-srf.html

 select * from generate_series(2,4);
  generate_series
 -----------------
                2
                3
                4

I suppose that generate_series() could be implemented as an iterator.

Useful stuff. Thanks for the tip.



  




[sqlite] Apple Filemaker's Bento uses SQLite

2008-01-15 Thread P Kishor
I have always wondered how long it would be before Apple offered a
front-end to SQLite. Well, the wait has ended with Filemaker's Bento.

Create a new "library" (another name for a table/database... not clear
which), and it gets stored in ~/Library/Application
Support/Bento/bento.bentodb in a complicated mess of tables.

Not developer friendly, but definitely a nice commercial front-end
based on SQLite.

-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/




[sqlite] Storing / Archiving of SQLite database in vcs

2008-01-15 Thread Rich Rattanni
All:
I have several sqlite databases that I want to store in my version
control system.  I was wondering if instead of storing them as binary
files, would make sense to store a SQL dump in version control.  When
I create a root file system for my development board I will create the
databases from the SQL dump.

I had a few reasons for thinking this was a good idea:
1) If I upgrade sqlite and build a new root file system, then my
databases will be created with the same version of sqlite.
2) This is essentially the same (or better?) as a vacuum, since the
database should be completely free of any wasted space.
3) There may be a slight possibility of using version control facilities
(diff, blame, etc.) while I am developing.

However, I thought of one con for doing this:
1) It may be stupid to do this.


I guess I am more concerned with whether or not I can guarantee that
the sqlite engine is deterministic, such that for a given sequence of
SQL input I wind up with the same database each time (of course, using
sqlite3 to dump the sql may not guarantee me the same sequence of SQL
as was used to create the database, huh?).


Feedback? (please be nice)


--
Rich Rattanni




Re: [sqlite] Question on SQL arbitrary arrays

2008-01-15 Thread P Kishor
with SQLite being the db...

On 1/14/08, mark pirogovsky <[EMAIL PROTECTED]> wrote:
> Some relational (PostgreSQL for example) databases allow you to store
> an arbitrary array as a field in one row.  So your retrieval would be much
> easier.
> Also, depending on your performance requirements, you can store your
> numbers in CSV format in a single text field.  The search for the
> group will be much faster. Also there is no reordering and sorting
> during "select", as order is preserved.

Mark Pirogovsky's is probably the best suggestion. Benchmarking would
be the ultimate arbiter, but for my use I would venture that
performance would be acceptable, and other advantages as stated would
be the gravy.

>
>
> It is really hard to give you some idea without knowing more of you
> problem...  but your proposed implementation does carry a lot of
> overhead - for every number you have two extra fields, not to mention
> the database internal structures.
>
> My 2c.
>
>
> Mike McGonagle wrote:
> > Hello all,
> > This isn't s question specific to SQLite, so I hope that you don't mind...
> > but, I am trying to store an ordered lists of numbers of arbitrary length in
> > a table. I am not really sure what the best method of doing this, but the
> > way that I have come up with is to create a table with the following:
> >
> >
> > create table numberlist (
> >
> > grp integer,
> > ord integer,
> > data double
> >
> > );
> >
> >
> > where 'grp' is an id to identify the groups of numbers, 'ord' is the
> > ordering for the list, and 'data' is the numbers themselves.
> >
> > To get the list back in the order in which they were stored would be:
> >
> >
> > select data from numberlist where grp = X order by ord;
> >
> >
> > Is there something that I am missing for a better implementation? These
> > lists will be ranging anywhere from 2 numbers to 200 or so...
> >
> >
> > Thanks,
> >
> > Mike
> >
> >
>
>


-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/




RE: [sqlite] WHERE SomeColumn IN (@SQLiteParameter) ???

2008-01-15 Thread Samuel R. Neff
Since you need notification of data so quickly, perhaps it would be better
to use some type of notification table that indicates when new data is
available and a trigger to populate this table.  Then you can query SELECT
MAX(ID) FROM Notifications which is ridiculously fast.
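
A rough sketch of that idea (the data table name and its trigger are invented
for illustration):

CREATE TABLE Notifications (ID INTEGER PRIMARY KEY AUTOINCREMENT);

CREATE TRIGGER notify_new_data AFTER INSERT ON SomeDataTable
BEGIN
  INSERT INTO Notifications (ID) VALUES (NULL);  -- NULL takes the next ID
END;

-- the 15hz poller then only has to run:
SELECT MAX(ID) FROM Notifications;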

HTH,

Sam


---
We're Hiring! Seeking a passionate developer to join our team building Flex
based products. Position is in the Washington D.C. metro area. If interested
contact [EMAIL PROTECTED]
 
-Original Message-
From: idkfa [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 14, 2008 3:45 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] WHERE SomeColumn IN (@SQLiteParameter) ???


That definitely worked as you described, thank you.  However, performance is
key for us.  We'll need to fire off a query like that at a steady 15hz
heartbeat (a WHERE NOT IN actually).  Could you suggest a faster equivalent
to accomplish such a task?

What about dumping the string list into a #TempTable and then firing off a
SELECT WHERE NOT IN (SELECT value FROM #TempTable)?

R







RE: [sqlite] Aggregates in SELECT without GROUP BY

2008-01-15 Thread Darren Duncan

At 11:41 PM -0800 1/14/08, Joe Wilson wrote:

In sqlite, assuming there's at least one row, an aggregate SELECT
with no GROUP BY clause is conceptually the same as an equivalent
SELECT with GROUP BY NULL - i.e., the group of all rows.
(I say 'conceptually' because GROUP BY NULL is much slower).


I actually thought of raising this issue too, but then thought it 
would complicate the discussion.


One could conceive a SQL SELECT, if it has no explicit GROUP BY but 
has explicit aggregate functions in the select list, as if it had an 
explicit GROUP BY but an empty column list, that is, a group per 
distinct source sub-rows of zero columns rather than per distinct 
source sub-rows of 1..M columns, and so a source rowset of 1..N rows 
would turn into a result rowset of exactly 1 row.


However, unless I'm mistaken about SQL behaviour, I see this analogy 
not holding true when there are zero source rows.


Normal SQL will return exactly 1 row when using aggregate functions 
and no GROUP BY clause, which is actually good when using things like 
COUNT or SUM.


However, any GROUP BY, whether over zero columns or 1..N columns, 
would return zero rows if there were zero input rows.  That is the 
only way it can work if its behaviour is intended to be consistent.
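
A quick illustration of the difference in the sqlite3 shell, using an empty
table and the default list output mode:

sqlite> CREATE TABLE t (a);
sqlite> SELECT count(*), max(a) FROM t;
0|
sqlite> SELECT count(*), max(a) FROM t GROUP BY null;
sqlite>

The first SELECT (no GROUP BY) returns exactly one row, 0 and NULL; the
second (with a GROUP BY) returns no rows at all.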


Of course, that's not to say that there is any overall logical 
inconsistency, IF you consider that the native environment for 
aggregate functions is NOT with a GROUP BY.


So, use an aggregate on any rowset of 0..N rows, you get 1 row back.

If you conceive GROUP BY as actually just creating a table some of 
whose row field values are themselves tables (the columns being 
grouped by are outside of the inner tables, those not being grouped 
by are inside them), then using aggregate functions together with a 
GROUP BY is treating each inner table like the only table as far as 
the aggregates are concerned, and so applying the aggregates to inner 
tables to convert them to inner tables of one row each, then 
typically each of those is merged with its containing single outer 
row again.


On that note, a group-by of zero columns would then produce a table 
having a single row and single field whose value is the original 
table.


Now smarter relational DBMSs that support table-valued-fields could 
then let you use a GROUP BY in isolation, since if you keep any 
fields not being grouped by, they form rows of inner tables.  Less 
capable DBMSs don't let you directly use the actual result of a 
relational group, and require you to do the additional step of either 
discarding non-grouped-by columns or using aggregates on them.


I don't know if SQL has provisions for a relational operator that 
results in the intermediate value I mentioned (table of tables), but 
even if it doesn't, a truly relational DBMS would have it.


-- Darren Duncan
