[sqlite] Updatable views

2007-07-20 Thread

Hi All,

I'm using triggers to make my views updatable (i.e. the user can edit
the values in the view because SQLite will instead change the related
source table value). Which method is best to facilitate this?


1. Use the "instead of update on ViewName" syntax. Trigger on the  
update of the view as a whole (ie any column triggers the one update).


or:

2. Use the "instead of update of ColumnName on ViewName" syntax in a  
trigger for each column. Trigger on the update of each column/field  
individually.


It seems to me that triggering on the view as a whole would  
unnecessarily update multiple values/columns when only one is changed.  
On the other hand, adding a trigger for each column seems overkill.  
Which is the best way?


Below is the complete SQL of a simplified example, where I have an
Orders table and a Products table (which lists the Products in each
order). I have an "Orders Calc" view which shows fields from the
Orders table, along with a sum() calculation of the Products in that
Order.


The result of the SQL by both methods is the same:

Testing method 1: Create a trigger for the view as a whole:
10001|Apple|22.0|1020.0|Cheque|2007-07-21|R1234|2007-07-21
10001|Apple|22.0|1020.0|Cheque|2007-07-22|R1234|2007-07-21
SQL error near line 85: You cannot change the Total, since it is calculated.


Testing method 2: Create a trigger for the view per column:
10001|Apple|22.0|1020.0|Cheque|2007-07-21|R1234|2007-07-21
10001|Apple|22.0|1020.0|Cheque|2007-07-23|R1234|2007-07-21
SQL error near line 154: You cannot change the Total, since it is calculated.
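For what it's worth, the whole-view pattern (method 1) can be exercised quickly from Python's built-in sqlite3 module. The sketch below is a condensed version of the schema that follows (table and column names shortened, so they are not the exact ones from the script):

```python
import sqlite3

# Condensed Orders/Products schema with an INSTEAD OF UPDATE trigger
# covering the view as a whole (method 1 from the question).
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table Orders (id integer primary key, Supplier text, Delivery real);
create table Products (order_id integer, Buy real, Quantity integer);

create view OrdersCalc as
  select Orders.id as id, Supplier, Delivery,
         sum(Quantity * Buy) + Delivery as Total
  from Orders left join Products on Orders.id = Products.order_id
  group by Orders.id;

create trigger UpdateOrdersCalc instead of update on OrdersCalc
for each row
begin
  update Orders set Supplier = new.Supplier, Delivery = new.Delivery
   where id = new.id;
  select case when old.Total != new.Total
              then raise(abort, 'Total is calculated') end;
end;

insert into Orders values (1, 'Apple', 22.0);
insert into Products values (1, 499.0, 2);
""")

# Updating an ordinary column is redirected to the Orders table ...
conn.execute("update OrdersCalc set Supplier = 'Pear' where id = 1")
supplier = conn.execute("select Supplier from Orders").fetchone()[0]

# ... while updating the calculated column is rejected by the trigger.
blocked = False
try:
    conn.execute("update OrdersCalc set Total = 300.0 where id = 1")
except sqlite3.DatabaseError:
    blocked = True
```

Since an unassigned column in an INSTEAD OF trigger has new.X equal to old.X, the old.Total != new.Total test only fires when Total itself is assigned.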


Any personal approaches or revelations welcome :-)

Thanks,
Tom

/*
Create the test tables and view.
*/

create table Orders
(
"Order ID" integer primary key autoincrement,
Supplier text,      -- name of supplier
Delivery real,      -- delivery cost in dollars
"Paid Method" text, -- method of payment, such as deposit, credit card, cash
"Paid Date" date,   -- date that payment was sent
Receipt text,       -- payment receipt
Ordered date        -- date that the order was sent
);

create table if not exists Products
(
"Order ID" integer, -- Orders foreign key
Code text,          -- product code
Description text,   -- product description
Buy real,           -- buy price I pay when ordering this item
Quantity integer    -- quantity of this product in this order
);

create view "Orders Calc"
as
select
Orders."Order ID" as "Order ID",
Supplier,
Delivery,
sum( Quantity * Buy ) + Delivery
as Total,
"Paid Method",
"Paid Date",
Receipt,
Ordered
from Orders
left join Products
on Orders."Order ID" = Products."Order ID"
group by Orders."Order ID";

/*
Insert Test data
*/

begin;
insert into Orders values ( 10001, 'Apple', 22.0, 'Cheque', '2007-07-21', 'R1234', '2007-07-21');

insert into Products values ( 10001, 'IPH8GB', 'iPhone 8GB', 499.0, 2 );
commit;

/*
Method 1: Create a trigger for the view as a whole
*/

create trigger "Update Orders Calc"
instead of update on "Orders Calc"
for each row
begin
update Orders
set
Supplier = new.Supplier,
Delivery = new.Delivery,
"Paid Method" = new."Paid Method",
"Paid Date" = new."Paid Date",
Receipt = new.Receipt,
Ordered = new.Ordered
where "Order ID" = new."Order ID";
select
case
when old.Total != new.Total
then raise( abort, 'You cannot change the Total, since it is calculated.' )
end;
end;

/*
Test method 1
*/

begin;
select 'Testing method 1: Create a trigger for the view as a whole:';
select * from "Orders Calc";
update "Orders Calc"
set "Paid Date" = '2007-07-22' where "Order ID" = 10001;
select * from "Orders Calc";
update "Orders Calc"
set Total = 300.0 where "Order ID" = 10001;
rollback;

/*
Method 2: Create a trigger for the view per column
*/

drop trigger if exists "Update Orders Calc";

create trigger "Update Orders Calc Supplier"
instead of update of Supplier on "Orders Calc"
for each row
begin
	update Orders set Supplier = new.Supplier where "Order ID" = new."Order ID";
end;

create trigger "Update Orders Calc Delivery"
instead of update of Delivery on "Orders Calc"
for each row
begin
	update Orders set Delivery = new.Delivery where "Order ID" = new."Order ID";
end;

create trigger "Update Orders Calc Paid Method"
instead of update of "Paid Method" on "Orders Calc"
for each row
begin
	update Orders set "Paid Method" = new."Paid Method" where "Order ID" = new."Order ID";
end;

[sqlite] Static library cross-compile

2007-07-20 Thread Chris Brown
Hi
 
I have cross-compiled the sqlite library for use on Linux running on MIPS by
basically doing the following (which I gleaned from the documentation and
other posts on this mailing list):
 
1) Running 'configure' for the host.
2) Running 'make target_source' to create the sub-directory 'tsrc'.
3) Removing 'shell.c', 'tclsqlite.c', 'icu.c' and 'fts*'.
4) Compiling using the following makefile:
 
AR=/opt/hardhat/devkit/mips/fp_le/bin/mips_fp_le-ar
GCC=/opt/hardhat/devkit/mips/fp_le/bin/mips_fp_le-gcc
STRIP=/opt/hardhat/devkit/mips/fp_le/bin/mips_fp_le-strip

all:
 for i in *.c; do $(GCC) -c $$i; done
 
 $(AR) r libsqlite3.a *.o
 
 $(STRIP) libsqlite3.a --strip-all 
 
 $(GCC) -o libsqlite3.so -shared -static *.c
 $(STRIP) libsqlite3.so --strip-all 
 

# end of file
 
This appears to run fine and creates a static and a shared library. However,
the problem I have is that if I run the makefile for my own application I
get 'undefined reference' errors if I link to the static library, e.g.
'undefined reference to sqlite3_open'. If I link to the shared library then
my makefile runs without errors and my application uses sqlite as intended.
I am new to Linux, so forgive me if there's something obvious I am missing. I
know that the 'for' loop in the makefile above has an extra '$' compared to
the examples I have seen, but this seemed to be the only way to get it to
reference the source files correctly.
 
Can anyone offer any suggestions as I would like to use the static library?
 
Thanks
 
Chris 



[sqlite] Re: Re: SQL Challenge, select stack

2007-07-20 Thread Igor Tandetnik

Ken  wrote:

I ran your cases and came up with this after adding the 22 record to
the stack...

sqlite> explain query plan
   ...>
   ...> select s.id, p.id
   ...>from stack s, stackpop p
   ...>   where s.value = p.value
   ...> and s.id < p.id
   ...> and (select count(*)
   ...>from stackpop p2
   ...>   where p2.value=p.value
   ...> and p2.id < p.id
   ...> and s.id < p.id) =
   ...>(select count(*)
   ...>   from stack s2
   ...>   where s2.value=s.value
   ...> and s2.id > s.id
   ...> and s2.id < p.id );


I believe this only works by accident on your particular example. Try it 
on a sequence that goes like: push push pop push pop pop.


Igor Tandetnik 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] invalid subselect is not detected ?

2007-07-20 Thread drh
Joe Wilson <[EMAIL PROTECTED]> wrote:
> 
> (In the 5 hour lag it takes to post to the list, this has
> probably already been answered 5 times, but what the heck...)
> 

I'd love for you to work on the slow email problem for me, Joe.
Call me at my office for the root password.  :-)

--
D. Richard Hipp <[EMAIL PROTECTED]>





[sqlite] Re: Re: SQL Challenge, select stack

2007-07-20 Thread Igor Tandetnik

Ken  wrote:

The 22 is kind of like a timestamp (or you could just as easily
add another column indicative of a timestamp).

insert into stack values (1, 1234);
insert into stack values (2, 1234);
insert into stack values (6, 1234);
insert into stack values (9, 1234);
insert into stackpop values (12, 1234) ;
insert into stackpop values (14, 1234) ;
insert into stackpop values (18, 1234) ;
insert into stack values (22, 1234);

so that 12 should pop 9, 14 pops 6, and 18 pops 2, leaving the stack
with 1 and 22.


Ah, I didn't realize the ids are supposed to act like timestamps. In 
this case, you would need something like this:


select p.id, max(s.id) from stack s, stackpop p
where s.value = p.value and s.id < p.id and
   (select count(*) from stack s2 where s2.id between s.id and p.id) =
   (select count(*) from stackpop p2 where p2.id between s.id and p.id)
group by p.id;
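A quick way to sanity-check the query against the push/pop sequence above (a sketch using Python's built-in sqlite3 module):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table stack (id integer, value integer);
create table stackpop (id integer, value integer);

insert into stack values (1, 1234);
insert into stack values (2, 1234);
insert into stack values (6, 1234);
insert into stack values (9, 1234);
insert into stackpop values (12, 1234);
insert into stackpop values (14, 1234);
insert into stackpop values (18, 1234);
insert into stack values (22, 1234);
""")

# Igor's query: each pop is matched with the newest unpopped push.
rows = conn.execute("""
select p.id, max(s.id) from stack s, stackpop p
where s.value = p.value and s.id < p.id and
   (select count(*) from stack s2 where s2.id between s.id and p.id) =
   (select count(*) from stackpop p2 where p2.id between s.id and p.id)
group by p.id
""").fetchall()

# Expect 12 to pop 9, 14 to pop 6, 18 to pop 2; 1 and 22 remain.
print(sorted(rows))
```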

Igor Tandetnik 






Re: [sqlite] shared cache/ test_server.c

2007-07-20 Thread John Stanton
Why not just block on the transaction?  That will queue waiting threads, 
serializing access to the transaction.


Ken wrote:

My assumption for the server thread was that it needed to process all incoming 
requests in transaction order and to not lose outstanding requests. You have 
two choices once a client initiates a transaction:
 
   a. Reject the incoming request, since a transaction is active in the server. 
The client would then be able to re-submit the request. This seemed to have a 
lot of overhead, since the client would need code to resubmit in the event of a 
reject, and would simply sit in a loop re-posting the message until it got a 
valid acknowledgment.
 
   b. Re-queue the request to the tail, causing the client to block waiting for 
a response from the server. The active client will eventually complete its 
transaction and the next in the queue will be serviced.
  
 I favored option b, since it caused less thrashing about when a client 
initiated a read request or another transaction request while a transaction was 
already in progress.
 
 Hope that helps.
 


John Stanton <[EMAIL PROTECTED]> wrote: Ken wrote:


Richard,

You might want to look at src/test_server.c for an example of the shared_cache 
if you haven't found it already.

Personally, I think it makes a lot of sense (read simpler) to implement 
independent connections than to implement a server. But I can see why you might 
want a server if you have many threads and memory constraints.

The server still can only have one transaction running at a time, even though the cache is shared. However, it can run multiple select operations and perform dirty reads(when enabled). 


The biggest difficulty encountered with the server is how to handle client 
requests when a transaction was in progress... Do you re-queue or just fail and 
have the client resend? My solution was to keep a state of a client thread id 
when it started a transaction. If the server thread encountered a message that 
was not from the client thread that started the transaction it moved the 
message to the end of the queue.




Why not just block on the transaction?




You're welcome to email me directly if you need more info, or call if you'd 
like to discuss my experiences with the server/thread approach.

Regards,
Ken

Richard Klein  wrote: 


Richard Klein wrote:



[EMAIL PROTECTED] wrote:



John Stanton  wrote:


Yes, each connection has a cache.  A lot of concurrent connections 
means a lot of memory allocated to cache and potentially a lot of 
duplicated cached items.  See shared cache mode for relief.




Yes.  But remember that shared cache mode has limitations:

 *  When shared cache mode is enabled, you cannot use
a connection in a thread other than the thread in which
it was originally created.

 *  Only connections opened in the same thread share a cache.

The shared cache mode is designed for building a "server thread"
that accepts connection requests and SQL statements via messages
from "client threads", acts upon those requests, and returns the
result.
--
D. Richard Hipp 



I suppose that I could accomplish almost the same thing in 2.8.17,
even though shared cache mode is not available in that version.

I could have a server thread that opens the database, and then
accepts and processes SQL statements via messages from client
threads.

The only difference would be that the client threads could not
send connection requests.  There would be only one connection,
and it would be opened implicitly by the server thread at system
startup.

The benefit would be that all the client threads would effectively
share the same cache, since there would in fact be only one connection.

The cost would be that each SQL statement would require an additional
two context switches to execute.

In my application (TiVo-like Personal Video Recorder functionality
in a set-top box), the benefit of memory savings far outweighs the
cost of a performance hit due to extra context switches.

- Richard




Upon further reflection, I realized that the scheme outlined above
won't work.

The problem can be summed up in one word:  TRANSACTIONS.  There needs
to be a way to make sure that the SQL statements composing a
transaction in client thread 'A' aren't intermixed with those composing a
transaction in client thread 'B'.

The SQLite connection is the structure designed to keep track of state
information such as whether or not a transaction is in progress.  If
client threads 'A' and 'B' share the same connection, then the burden
of maintaining this state information falls on the server thread.  Not
a great idea.

Therefore, it would appear that I have two options:

(1) Have the server thread open separate connections for client threads
'A' and 'B', and enable shared cache mode so that the two connections
can share cached items.  This option requires upgrading to SQLite version
3.3.0 or higher.

(2) Abandon the idea of a

Re: [sqlite] SQLite Version 3.4.1

2007-07-20 Thread Scott Hess

[Note that you may need to reload or shift-reload the page.  I checked
three times across the day for the new content, and was just about to
send drh a notice that it wasn't there, when I tried a shift-reload!]


On 7/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

SQLite Version 3.4.1 is now available on the website.

Version 3.4.1 fixes a problem in the VACUUM command that
could potentially lead to database corruption.  Upgrading
is recommended for all users.

This release also includes several other small enhancements
and bug fixes.  For details see

  http://www.sqlite.org/changes.html

Please report any problems you find to this mailing list.
Tnx.

--
D. Richard Hipp <[EMAIL PROTECTED]>





RE: [sqlite] callback and sequence number of iteration

2007-07-20 Thread Dusan Gibarac
Unfortunately, I need the current iteration number inside the callback
function, not the total count of records.

Dusan Gibarac

-Original Message-
From: Lee Crain [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 3:44 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] callback and sequence number of iteration

I suggest creating a static counter. Initialize it to zero before you call
the "sqlite3_exec( )" function and increment it once for each call to the
"callback" function. 

After all records are read and the call to "sqlite3_exec( )" returns, the
counter will show the record count.

Lee Crain

_

static int iCounter;

static int callback( void *pArg, int nCol, char **azVal, char **azColName )
{
    /* Extract the record ... */

    iCounter++;   /* iCounter now holds the 1-based row number */
    return 0;     /* non-zero would abort sqlite3_exec() */
}

_


-Original Message-
From: Dusan Gibarac [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 3:02 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] callback and sequence number of iteration

The callback function will do something for each retrieved row, and I have
to know in each iteration what the sequence number of the iteration is.
How can I read or count it?

Dusan Gibarac







Re: [sqlite] invalid subselect is not detected ?

2007-07-20 Thread Joe Wilson
--- Ken <[EMAIL PROTECTED]> wrote:
> Is this an error or by design?
>  create table ss( ssid, ss_value);
>  create table s(id, s_value);
...
>  select id from s where id in ( select id from ss);

(In the 5 hour lag it takes to post to the list, this has
probably already been answered 5 times, but what the heck...)

You've written a correlated subquery:

  select id from s where id in (select s.id from ss);

>  
>  returns 
>  1
>  2
>  3
>  
>  Shouldn't the subselect fail since the id is not in the SS table ?
>  
>  select s.id from s where s.id in ( select ss.id from ss );
>  returns:  SQL error: no such column: ss.id
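The name resolution can be seen directly with Python's built-in sqlite3 module (a sketch of the exact statements from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table ss (ssid, ss_value);
create table s (id, s_value);
insert into ss values (1, 1234);
insert into ss values (2, 1234);
insert into s values (1, 567);
insert into s values (2, 567);
insert into s values (3, 567);
""")

# 'id' is not a column of ss, so it resolves to the outer table s,
# making this a correlated subquery: for every row of s the subquery
# yields that row's own id (once per row of ss), so the IN test is
# always true and all three ids come back.
rows = [r[0] for r in conn.execute(
    "select id from s where id in (select id from ss)")]

# Qualifying the column exposes the mistake.
error = None
try:
    conn.execute("select s.id from s where s.id in (select ss.id from ss)")
except sqlite3.OperationalError as e:
    error = str(e)
```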




  







RE: [sqlite] callback and sequence number of iteration

2007-07-20 Thread Lee Crain
I suggest creating a static counter. Initialize it to zero before you call
the "sqlite3_exec( )" function and increment it once for each call to the
"callback" function. 

After all records are read and the call to "sqlite3_exec( )" returns, the
counter will show the record count.

Lee Crain

_

static int iCounter;

static int callback( void *pArg, int nCol, char **azVal, char **azColName )
{
    /* Extract the record ... */

    iCounter++;   /* iCounter now holds the 1-based row number */
    return 0;     /* non-zero would abort sqlite3_exec() */
}

_
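As an aside, the same per-row counter pattern in Python's built-in sqlite3 module needs no static state at all; `enumerate` supplies the sequence number directly (a sketch, with a throwaway table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (x)")
conn.executemany("insert into t values (?)", [(c,) for c in "abcde"])

# enumerate() hands each row its 1-based sequence number as it is
# fetched -- the per-iteration counter the question asks for.
seen = []
for i, row in enumerate(conn.execute("select x from t order by rowid"),
                        start=1):
    seen.append((i, row[0]))
```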


-Original Message-
From: Dusan Gibarac [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 3:02 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] callback and sequence number of iteration

The callback function will do something for each retrieved row, and I have
to know in each iteration what the sequence number of the iteration is.
How can I read or count it?

Dusan Gibarac







Re: [sqlite] shared cache/ test_server.c

2007-07-20 Thread Richard Klein



Ken wrote:

Richard,
 
 You might want to look at src/test_server.c for an example of the shared_cache if you haven't found it already.


I'll take a look at it, thanks!


 Personally, I think it makes a lot of sense (read simpler) to implement 
independent connections than to implement a server.


So do I.


You're welcome to email me directly if you need more info, or call if you'd 
like to discuss my experiences with the server/thread approach.


Thanks for the kind offer.  I may take you up on it!

Thanks again,
- Richard




Re: [sqlite] invalid subselect is not detected ?

2007-07-20 Thread Andrew Finkenstadt

On 7/20/07, Ken <[EMAIL PROTECTED]> wrote:


Is this an error or by design?
create table ss( ssid, ss_value);
create table s(id, s_value);

insert into ss values (1,1234);
insert into ss values (2,1234);

insert into s values (1, 567);
insert into s values (2, 567);
insert into s values (3, 567);

select id from s where id in ( select id from ss);

returns
1
2
3

Shouldn't the subselect fail since the id is not in the SS table ?

select s.id from s where s.id in ( select ss.id from ss );
returns:  SQL error: no such column: ss.id


Thanks,
Ken



I believe that sqlite3 has a hidden column alias for the rowid named id,
unless another column is explicitly named 'id'.

--andy


[sqlite] callback and sequence number of iteration

2007-07-20 Thread Dusan Gibarac
The callback function will do something for each retrieved row, and I have
to know in each iteration what the sequence number of the iteration is.
How can I read or count it?

Dusan Gibarac







Re: [sqlite] quickest way to duplicate table data?

2007-07-20 Thread Scott Hess

In past non-SQLite contexts, I've found that the following usually
works very well, without needing snapshot capability:

1. copy the db file.
2. lock the db.
3. copy the db file.
4. unlock the db.

The first copy hopefully pulls the entire file into memory buffers,
making the next copy very efficient.  Even if it doesn't fit, it
hopefully pulls all of the metadata in.

A slight improvement would be to check whether the lastmod time
changed between steps 1 and 3, and if not, don't bother copying again.

-scott
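The copy-lock-copy dance can be sketched with Python's built-in sqlite3 module, using BEGIN IMMEDIATE as the lock (file names here are made up for the sketch):

```python
import os
import shutil
import sqlite3
import tempfile

# Hypothetical file names in a scratch directory.
tmp = tempfile.mkdtemp()
SRC = os.path.join(tmp, "data.db")
DST = os.path.join(tmp, "backup.db")

# Build a small database to copy (autocommit; we issue BEGIN ourselves).
conn = sqlite3.connect(SRC, isolation_level=None)
conn.execute("create table t (x)")
conn.execute("insert into t values (42)")

# 1. First copy: warms the OS file cache (this copy is disposable).
shutil.copyfile(SRC, DST)

# 2. Lock: BEGIN IMMEDIATE blocks other writers until we finish.
conn.execute("begin immediate")

# 3. Second copy: fast, since the file is hopefully already cached,
#    and consistent, since no writer can change it under us.
shutil.copyfile(SRC, DST)

# 4. Unlock.
conn.execute("rollback")

# The second copy is a usable database file.
rows = sqlite3.connect(DST).execute("select x from t").fetchall()
```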


On 7/20/07, James Dennett <[EMAIL PROTECTED]> wrote:

> -Original Message-
> From: Scott Derrick [mailto:[EMAIL PROTECTED]
> Sent: Friday, July 20, 2007 8:54 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] quickest way to duplicate table data?
>
> I am using sqlite in an embedded application.   The data store table
> will be in its own file.
>
> The data acquisition application will be storing a row every 1 to 5
> seconds and cannot be interrupted for more than a couple seconds.
>
> A web server will access the table for read access in two ways.
> First, a live view of incoming data will read the latest row every 1
to
> 5 seconds.  No problem there.
>
> The second method is the problem as I see it. The user will be able to
> request a copy of the last 12 hours, 7 days,  or the entire data set,
> to be sent to a USB stick or over the network. The data set will be
> stored in comma delimited format.
>
> In order to not interrupt the writing of data from the DAQ application
I
> think I should replicate the table or database and then dump the
> required interval into comma delimited file.
>
> Would a simple OS file copy of the database file be the quickest way
to
> copy the table?   Locking it first, do a filecopy and then unlock?
>
> Any other way to copy it faster?

Most modern platforms support some kind of filesystem snapshot (via LVM
on Linux, ufssnap on Solaris, VSS on Windows, etc.) so that you can do
1. lock SQLite db
2. Make filesystem snapshot
3. unlock SQLite db
4. Copy sqlite.db from snapshot

If snapshotting is fast (as it should be, so long as you don't use modes
of VSS which integrate with applications), this might reduce the length
of time for which you hold the SQLite database locked.  Putting the
database on its own dedicated volume could speed this up too.

Of course, if your file is smaller than around 50MB, you might well just
be able to use a regular OS copy routine inside your 1-2 second window.

-- James





RE: [sqlite] SQLite on Mac

2007-07-20 Thread Ahmed Sulaiman
Thanks for the heads up. I have studied that, and we have decided to go
with SQLite; it suits our needs :)

Cheers

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 11:50 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite on Mac

"Ahmed Sulaiman" <[EMAIL PROTECTED]> wrote:
> 
> We are now just making strategic decision as of which database engine
to
> choose that would give us better cross platform support. 
> 

Be sure to visit http://www.sqlite.org/whentouse.html to make
sure that SQLite is suited for whatever it is you are wanting
to do with your database.

--
D. Richard Hipp <[EMAIL PROTECTED]>






Re: [sqlite] SQLite on Mac

2007-07-20 Thread Roger Binns
T&B wrote:
> As others have mentioned, yes, SQLite not only runs on a Mac, but it's
> already installed as of Mac OS X 10.4 "Tiger" and after. Apple uses it
> for indexing email in the Mail application, Core Data in XCode
> development, and media management in high end apps like Aperture.

You do have to be careful of one thing on the Mac (which has already
bitten several people).  If you run inside an app, or if your app loads
one of the many Apple components using SQLite then you will have
versioning issues.  For example if the Apple component is loaded first
then it loads the system SQLite which is an older version (3.0.8 IIRC)
and your attempts to use a new version such as a shared library you
linked against will be ignored.

You can work around this by using the amalgamation and
-DSQLITE_API=static as noted in
http://www.sqlite.org/cvstrac/tktview?tn=2453

(The one unanswered question in that ticket is if there are two
different versions of SQLite in the same process, will the thread local
storage interfere with each other or is a different key used so they
co-exist).

Roger




RE: [sqlite] sqlite Text fields Vista/ 64 bit/core 2 duo machines

2007-07-20 Thread Robert Simpson
> -Original Message-
> From: James Forrester [mailto:[EMAIL PROTECTED] 
> Sent: Friday, July 20, 2007 6:37 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] sqlite Text fields Vista/ 64 bit/core 2 duo machines
> 
> This may be a problem specific to Vista alone, but I'm 
> running 64 bit and a
> core 2 dou machine. 
> 
> When I try to read a Text field with more than about 35 
> characters I get a:
> 
> "System.AccessViolationException: Attempted to read or write protected
> memory.  This is often an indication that other memory is corrupt."
> 
> entry.Description = reader.GetString(9);
> 
> Same code, same data on an XP pro-machine (single processor, 
> 32 bit) works
> perfectly.  On the 64 I'm compiling for x86.
> 
> Any help greatly appreciated.  As I've exhausted possible 
> combinations.
> 
> Important because we need to move our product which uses 
> sqlite to support
> our Vista customer base.  
> 
> Developing in VS 2005 C#. 

That would fall under my jurisdiction.  E-mail me at robert at
blackcastlesoft dot com, or post on the SQLite.NET forums at
http://sqlite.phxsoftware.com and I'll try and help figure out what's wrong.

Robert






[sqlite] New SQLite Data Provider for the SubSonic project

2007-07-20 Thread Jose Dias

Hi,

I have been using SQLite for some time now, and one of my main gripes
was the lack of ORM support. So I went and started this data provider
for SubSonic, which is a Rails-for-C# project that also generates ORM
code.

I posted a couple of articles about it at:

http://codefornothing.wordpress.com/

where the code is also available for download.

I hope this is useful and any comments/bug reports are appreciated.

cfn




Re: [sqlite] shared cache/ test_server.c

2007-07-20 Thread Ken
My assumption for the server thread was that it needed to process all incoming 
requests in transaction order and to not lose outstanding requests. You have 
two choices once a client initiates a transaction:
 
   a. Reject the incoming request, since a transaction is active in the 
server. The client would then be able to re-submit the request. This seemed 
to have a lot of overhead, since the client would need code to resubmit in 
the event of a reject, and would simply sit in a loop re-posting the message 
until it got a valid acknowledgment.
 
   b. Re-queue the request to the tail, causing the client to block 
waiting for a response from the server. The active client will eventually 
complete its transaction and the next in the queue will be serviced. 
  
 I favored option b, since it caused less thrashing about when a client 
initiated a read request or another transaction request while a transaction 
was already in progress.
 
 Hope that helps.
 

John Stanton <[EMAIL PROTECTED]> wrote: Ken wrote:
> Richard,
>  
>  You might want to look at src/test_server.c for an example of the 
> shared_cache if you haven't found it already.
>  
> Personally, I think it makes a lot of sense (read simpler) to implement 
> independent connections than to implement a server. But I can see why you 
> might want a server if you have many threads and memory constraints.
>  
> The server still can only have one transaction running at a time, even though 
> the cache is shared. However, it can run multiple select operations and 
> perform dirty reads(when enabled). 
>  
> The biggest difficulty encountered with the server is how to handle client 
> requests when a transaction was in progress... Do you re-queue or just fail 
> and have the client resend? My solution was to keep a state of a client 
> thread id when it started a transaction. If the server thread encountered a 
> message that was not from the client thread that started the transaction it 
> moved the message to the end of the queue.
>

Why not just block on the transaction?

>  
> You're welcome to email me directly if you need more info, or call if you'd 
> like to discuss my experiences with the server/thread approach.
>  
>  Regards,
>  Ken
>  
> Richard Klein  wrote: 
> 
> Richard Klein wrote:
> 
>>
>>[EMAIL PROTECTED] wrote:
>>
>>>John Stanton  wrote:
>>>
Yes, each connection has a cache.  A lot of concurrent connections 
means a lot of memory allocated to cache and potentially a lot of 
duplicated cached items.  See shared cache mode for relief.

>>>
>>>Yes.  But remember that shared cache mode has limitations:
>>>
>>>   *  When shared cache mode is enabled, you cannot use
>>>  a connection in a thread other than the thread in which
>>>  it was originally created.
>>>
>>>   *  Only connections opened in the same thread share a cache.
>>>
>>>The shared cache mode is designed for building a "server thread"
>>>that accepts connection requests and SQL statements via messages
>>>from "client threads", acts upon those requests, and returns the
>>>result.
>>>-- 
>>>D. Richard Hipp 
>>>
>>
>>I suppose that I could accomplish almost the same thing in 2.8.17,
>>even though shared cache mode is not available in that version.
>>
>>I could have a server thread that opens the database, and then
>>accepts and processes SQL statements via messages from client
>>threads.
>>
>>The only difference would be that the client threads could not
>>send connection requests.  There would be only one connection,
>>and it would be opened implicitly by the server thread at system
>>startup.
>>
>>The benefit would be that all the client threads would effectively
>>share the same cache, since there would in fact be only one connection.
>>
>>The cost would be that each SQL statement would require an additional
>>two context switches to execute.
>>
>>In my application (TiVo-like Personal Video Recorder functionality
>>in a set-top box), the benefit of memory savings far outweighs the
>>cost of a performance hit due to extra context switches.
>>
>>- Richard
>>
> 
> 
> Upon further reflection, I realized that the scheme outlined above
> won't work.
> 
> The problem can be summed up in one word:  TRANSACTIONS.  There needs
> to be a way to make sure that the SQL statements composing a
> transaction in client thread 'A' aren't intermixed with those composing a
> transaction in client thread 'B'.
> 
> The SQLite connection is the structure designed to keep track of state
> information such as whether or not a transaction is in progress.  If
> client threads 'A' and 'B' share the same connection, then the burden
> of maintaining this state information falls on the server thread.  Not
> a great idea.
> 
> Therefore, it would appear that I have two options:
> 
> (1) Have the server thread open separate connections for client threads
> 'A' and 'B', and enable shared cache mode so that the two connections
> can share cached items.

RE: [sqlite] quickest way to duplicate table data?

2007-07-20 Thread James Dennett
> -Original Message-
> From: Scott Derrick [mailto:[EMAIL PROTECTED]
> Sent: Friday, July 20, 2007 8:54 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] quickest way to duplicate table data?
> 
> I am using sqlite in an embedded application.   The data store table
> will be in its own file.
> 
> The data acquisition application will be storing a row every 1 to 5
> seconds and cannot be interrupted for more than a couple seconds.
> 
> A web server will access the table for read access in two ways.
> First, a live view of incoming data will read the latest row every 1
to
> 5 seconds.  No problem there.
> 
> The second method is the problem as I see it. The user will be able to
> request a copy of the last 12 hours, 7 days,  or the entire data set,
> to be sent to a USB stick or over the network. The data set will be
> stored in comma delimited format.
> 
> In order to not interrupt the writing of data from the DAQ application
I
> think I should replicate the table or database and then dump the
> required interval into comma delimited file.
> 
> Would a simple OS file copy of the database file be the quickest way
to
> copy the table?   Locking it first, do a filecopy and then unlock?
> 
> Any other way to copy it faster?

Most modern platforms support some kind of filesystem snapshot (via LVM
on Linux, fssnap on Solaris, VSS on Windows, etc.) so that you can do
1. lock SQLite db
2. Make filesystem snapshot
3. unlock SQLite db
4. Copy sqlite.db from snapshot

If snapshotting is fast (as it should be, so long as you don't use modes
of VSS which integrate with applications), this might reduce the length
of time for which you hold the SQLite database locked.  Putting the
database on its own dedicated volume could speed this up too.

Of course, if your file is smaller than around 50MB, you might well just
be able to use a regular OS copy routine inside your 1-2 second window.
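The lock-then-copy variant can be sketched with Python's sqlite3 module (file and table names here are illustrative, and this assumes the copy completes inside your tolerance window):

```python
import shutil
import sqlite3

def copy_locked(db_path, dest_path):
    """Copy a SQLite database file while holding a write lock,
    so the DAQ writer cannot modify the file mid-copy."""
    conn = sqlite3.connect(db_path)
    try:
        # BEGIN IMMEDIATE takes a RESERVED lock: existing readers may
        # continue, but no other connection can start writing.
        conn.execute("BEGIN IMMEDIATE")
        shutil.copyfile(db_path, dest_path)
        conn.rollback()  # nothing was changed; this just drops the lock
    finally:
        conn.close()
```

The copy is a consistent snapshot because no write can commit while the RESERVED lock is held.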

-- James


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] invalid subselect is not detected ?

2007-07-20 Thread Ken
Is this an error or by design?
 create table ss( ssid, ss_value);
 create table s(id, s_value);
 
 insert into ss values (1,1234);
 insert into ss values (2,1234);
 
 insert into s values (1, 567);
 insert into s values (2, 567);
 insert into s values (3, 567);
 
 select id from s where id in ( select id from ss);
 
 returns 
 1
 2
 3
 
 Shouldn't the subselect fail since the id is not in the SS table ?
 
 select s.id from s where s.id in ( select ss.id from ss );
 returns:  SQL error: no such column: ss.id
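For what it's worth, this is by design rather than a bug: an unqualified column name in a subquery that matches no column of the inner table falls through to the enclosing query, making the subquery correlated. A small session (Python used only as a driver) reproduces both results above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table ss(ssid, ss_value);
    create table s(id, s_value);
    insert into ss values (1, 1234);
    insert into ss values (2, 1234);
    insert into s values (1, 567);
    insert into s values (2, 567);
    insert into s values (3, 567);
""")

# `id` is not a column of ss, so it binds to the outer table s.
# The subquery returns the current s.id once per row of ss, and the
# IN test is therefore always true: every s row comes back.
rows = conn.execute(
    "select id from s where id in (select id from ss)").fetchall()

# Qualifying the column name surfaces the mistake immediately.
err = None
try:
    conn.execute("select s.id from s where s.id in (select ss.id from ss)")
except sqlite3.OperationalError as e:
    err = str(e)

print(rows)  # -> [(1,), (2,), (3,)]
print(err)   # -> no such column: ss.id
```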
 
 
 Thanks,
 Ken
 
 
 




Re: [sqlite] shared cache/ test_server.c

2007-07-20 Thread John Stanton

Ken wrote:

Richard,
 
 You might want to look at src/test_server.c for an example of the shared_cache if you haven't found it already.
 
Personally, I think it makes more sense (read: it is simpler) to implement independent connections than to implement a server. But I can see why you might want a server if you have many threads and memory constraints.
 
 The server still can only have one transaction running at a time, even though the cache is shared. However, it can run multiple select operations and perform dirty reads (when enabled). 
 
 The biggest difficulty encountered with the server is how to handle client requests when a transaction was in progress... Do you re-queue, or just fail and have the client resend? My solution was to keep track of the id of the client thread that started the transaction. If the server thread encountered a message that was not from that thread, it moved the message to the end of the queue.




Why not just block on the transaction?

 
You're welcome to email me directly if you need more info, or call if you'd like to discuss my experiences with the server/thread approach.
 
 Regards,

 Ken
 
Richard Klein <[EMAIL PROTECTED]> wrote: 


Richard Klein wrote:



[EMAIL PROTECTED] wrote:


John Stanton  wrote:

Yes, each connection has a cache.  A lot of concurrent connections 
means a lot of memory allocated to cache and potentially a lot of 
duplicated cached items.  See shared cache mode for relief.




Yes.  But remember that shared cache mode has limitations:

  *  When shared cache mode is enabled, you cannot use
 a connection in a thread other than the thread in which
 it was originally created.

  *  Only connections opened in the same thread share a cache.

The shared cache mode is designed for building a "server thread"
that accepts connection requests and SQL statements via messages
from "client threads", acts upon those requests, and returns the
result.
--
D. Richard Hipp 



I suppose that I could accomplish almost the same thing in 2.8.17,
even though shared cache mode is not available in that version.

I could have a server thread that opens the database, and then
accepts and processes SQL statements via messages from client
threads.

The only difference would be that the client threads could not
send connection requests.  There would be only one connection,
and it would be opened implicitly by the server thread at system
startup.

The benefit would be that all the client threads would effectively
share the same cache, since there would in fact be only one connection.

The cost would be that each SQL statement would require an additional
two context switches to execute.

In my application (TiVo-like Personal Video Recorder functionality
in a set-top box), the benefit of memory savings far outweighs the
cost of a performance hit due to extra context switches.

- Richard




Upon further reflection, I realized that the scheme outlined above
won't work.

The problem can be summed up in one word:  TRANSACTIONS.  There needs
to be a way to make sure that the SQL statements composing a
transaction in client thread 'A' aren't intermixed with those composing a
transaction in client thread 'B'.

The SQLite connection is the structure designed to keep track of state
information such as whether or not a transaction is in progress.  If
client threads 'A' and 'B' share the same connection, then the burden
of maintaining this state information falls on the server thread.  Not
a great idea.

Therefore, it would appear that I have two options:

(1) Have the server thread open separate connections for client threads
'A' and 'B', and enable shared cache mode so that the two connections
can share cached items.  This option requires upgrading to SQLite version
3.3.0 or higher.

(2) Abandon the idea of a server thread; have threads 'A' and 'B' open
their own connections and access SQLite directly.  This option does *not*
allow the sharing of cached items, but allows me to stay with SQLite
version 2.8.17.

- Richard


-
To unsubscribe, send email to [EMAIL PROTECTED]
-




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] quickest way to duplicate table data?

2007-07-20 Thread Scott Derrick
I am using sqlite in an embedded application.   The data store table  
will be in its own file. 

The data acquisition application will be storing a row every 1 to 5 
seconds and cannot be interrupted for more than a couple seconds. 


A web server will access the table for read access in two ways.
First, a live view of incoming data will read the latest row every 1 to 
5 seconds.  No problem there.


The second method is the problem as I see it. The user will be able to 
request a copy of the last 12 hours, 7 days,  or the entire data set,  
to be sent to a USB stick or over the network. The data set will be 
stored in comma delimited format.


In order to not interrupt the writing of data from the DAQ application I 
think I should replicate the table or database and then dump the 
required interval into comma delimited file.


Would a simple OS file copy of the database file be the quickest way to 
copy the table?   Locking it first, do a filecopy and then unlock?


Any other way to copy it faster?

Scott

--

-
   Rightful liberty is unobstructed action according to our will within limits drawn 
around us by the equal rights of others. I do not add "within the limits of the 
law," because law is often but the tyrant's will, and always so when it violates the 
rights of the individual.

   Thomas Jefferson 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLite on Mac

2007-07-20 Thread drh
"Ahmed Sulaiman" <[EMAIL PROTECTED]> wrote:
> 
> We are now just making strategic decision as of which database engine to
> choose that would give us better cross platform support. 
> 

Be sure to visit http://www.sqlite.org/whentouse.html to make
sure the SQLite is suited for whatever it is you are wanting
to do with your database.

--
D. Richard Hipp <[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: SQL Challenge, select stack

2007-07-20 Thread Ken
Igor,
 
 I ran your cases and came up with this after adding the 22 record to the
stack...
 
 sqlite> explain query plan
...>
...> select s.id, p.id
...>from stack s, stackpop p
...>   where s.value = p.value
...> and s.id < p.id
...> and (select count(*)
...>from stackpop p2
...>   where p2.value=p.value
...> and p2.id < p.id
...> and s.id < p.id) =
...>(select count(*)
...>   from stack s2
...>   where s2.value=s.value
...> and s2.id > s.id
...> and s2.id < p.id );
 0|0|TABLE stack AS s
 1|1|TABLE stackpop AS p USING PRIMARY KEY
 0|0|TABLE stackpop AS p2 USING PRIMARY KEY
 0|0|TABLE stack AS s2 USING PRIMARY KEY
 
 Results:
 2|18
 6|14
 9|12
 
 Thanks again for such a clean solution... 
 
 Igor Tandetnik <[EMAIL PROTECTED]> wrote:  Ken  wrote:
> Does anyone have ideas on how to implement a stack using sql 
>  Given the following tables and data:
>
>  create table stack( id integer primary key,   value integer);
>  create table stackpop ( id integer primary key, value integer );
>
>  begin;
>  insert into stack values (1, 1234);
>  insert into stack values (2, 1234);
>  insert into stack values (6, 1234);
>  insert into stack values (9, 1234);
> insert into stack values (22, 1234);
>
>  insert into stackpop values (12, 1234) ;
>  insert into stackpop values (14, 1234) ;
>  insert into stackpop values (18, 1234) ;
>  commit;
>
>  Do you have any ideas for a select that will return the stackpop and
>  stack id's paired as follows:
>  12 | 9
>  14 | 6
>  18 | 2

What's the logic supposed to be here? Why is the stack record with id=22 
omitted?

I believe I've answered this question the first time you asked it. Have 
you found the answer lacking? In what respect?

Igor Tandetnik 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-

 
 





RE: [sqlite] SQLite on Mac

2007-07-20 Thread Ahmed Sulaiman
Thanks guys for the informative replies :)

We are now just making strategic decision as of which database engine to
choose that would give us better cross platform support. 

And with such a wonderful tech/community support, I believe SQLite is
the right answer :)

Cheers

-Original Message-
From: T&B [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 12:08 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite on Mac

Hi Ahmed,

> Does SQLite work on Mac, and if yes, is there any Mac enabled version
> that I could download?

As others have mentioned, yes, SQLite not only runs on a Mac, but it's  
already installed as of Mac OS X 10.4 "Tiger" and after. Apple uses it  
for indexing email in the Mail application, Core Data in Xcode  
development, and media management in high end apps like Aperture.

If you have an earlier Mac OS X version, or want the very latest  
SQLite version, you can download it from the first link under the  
"Source Code" heading at:
http://www.sqlite.org/download.html
You'll need the Apple Developer Tools installed on your computer,  
which comes free with your computer or Mac OS X install discs, to  
compile and install it in about four steps.

To try it out, launch the Terminal program (already in your / 
Applications/Utilities folder) and type:

sqlite3 MyTestDatabase

then in the sqlite3 shell, type any sqlite commands, such as:

.help
.quit
create table MyTestTable( Name text, Age integer);

and so on.

There is also a range of GUI apps for the Mac for editing SQLite  
databases.

Reply here if you need more info.

Tom



-
To unsubscribe, send email to [EMAIL PROTECTED]

-




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] shared cache/ test_server.c

2007-07-20 Thread Ken
Richard,
 
 You might want to look at src/test_server.c for an example of the shared_cache 
if you haven't found it already.
 
 Personally, I think it makes more sense (read: it is simpler) to implement 
independent connections than to implement a server. But I can see why you might 
want a server if you have many threads and memory constraints.
 
 The server still can only have one transaction running at a time, even though 
the cache is shared. However, it can run multiple select operations and perform 
dirty reads (when enabled). 
 
 The biggest difficulty encountered with the server is how to handle client 
requests when a transaction was in progress... Do you re-queue, or just fail 
and have the client resend? My solution was to keep track of the id of the 
client thread that started the transaction. If the server thread encountered 
a message that was not from that thread, it moved the message to the end of 
the queue.
 
 
You're welcome to email me directly if you need more info, or call if you'd 
like to discuss my experiences with the server/thread approach.
 
 Regards,
 Ken
 
Richard Klein <[EMAIL PROTECTED]> wrote: 

Richard Klein wrote:
> 
> 
> [EMAIL PROTECTED] wrote:
>> John Stanton  wrote:
>>> Yes, each connection has a cache.  A lot of concurrent connections 
>>> means a lot of memory allocated to cache and potentially a lot of 
>>> duplicated cached items.  See shared cache mode for relief.
>>>
>>
>> Yes.  But remember that shared cache mode has limitations:
>>
>>*  When shared cache mode is enabled, you cannot use
>>   a connection in a thread other than the thread in which
>>   it was originally created.
>>
>>*  Only connections opened in the same thread share a cache.
>>
>> The shared cache mode is designed for building a "server thread"
>> that accepts connection requests and SQL statements via messages
>> from "client threads", acts upon those requests, and returns the
>> result.
>> -- 
>> D. Richard Hipp 
>>
> 
> I suppose that I could accomplish almost the same thing in 2.8.17,
> even though shared cache mode is not available in that version.
> 
> I could have a server thread that opens the database, and then
> accepts and processes SQL statements via messages from client
> threads.
> 
> The only difference would be that the client threads could not
> send connection requests.  There would be only one connection,
> and it would be opened implicitly by the server thread at system
> startup.
> 
> The benefit would be that all the client threads would effectively
> share the same cache, since there would in fact be only one connection.
> 
> The cost would be that each SQL statement would require an additional
> two context switches to execute.
> 
> In my application (TiVo-like Personal Video Recorder functionality
> in a set-top box), the benefit of memory savings far outweighs the
> cost of a performance hit due to extra context switches.
> 
> - Richard
> 

Upon further reflection, I realized that the scheme outlined above
won't work.

The problem can be summed up in one word:  TRANSACTIONS.  There needs
to be a way to make sure that the SQL statements composing a
transaction in client thread 'A' aren't intermixed with those composing a
transaction in client thread 'B'.

The SQLite connection is the structure designed to keep track of state
information such as whether or not a transaction is in progress.  If
client threads 'A' and 'B' share the same connection, then the burden
of maintaining this state information falls on the server thread.  Not
a great idea.

Therefore, it would appear that I have two options:

(1) Have the server thread open separate connections for client threads
'A' and 'B', and enable shared cache mode so that the two connections
can share cached items.  This option requires upgrading to SQLite version
3.3.0 or higher.

(2) Abandon the idea of a server thread; have threads 'A' and 'B' open
their own connections and access SQLite directly.  This option does *not*
allow the sharing of cached items, but allows me to stay with SQLite
version 2.8.17.
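Option (2) can be sketched with Python's sqlite3 module (table and file names are illustrative): each thread owns its connection, so the connection boundary is what keeps one thread's transaction from interleaving with another's.

```python
import sqlite3
import threading

def worker(db_path, owner, n_rows):
    # Each client thread opens its own connection; SQLite's file
    # locking serializes the transactions themselves.
    conn = sqlite3.connect(db_path, timeout=10.0)  # wait out SQLITE_BUSY
    with conn:  # one transaction per `with` block
        for i in range(n_rows):
            conn.execute("insert into log(owner, seq) values (?, ?)",
                         (owner, i))
    conn.close()

def run_demo(db_path):
    setup = sqlite3.connect(db_path)
    setup.execute("create table log(owner text, seq integer)")
    setup.commit()
    setup.close()

    threads = [threading.Thread(target=worker, args=(db_path, name, 5))
               for name in ("A", "B")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    check = sqlite3.connect(db_path)
    count = check.execute("select count(*) from log").fetchone()[0]
    check.close()
    return count
```

No cache is shared between the two connections, which is exactly the trade-off described above.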

- Richard


-
To unsubscribe, send email to [EMAIL PROTECTED]
-


Re: [sqlite] Re: SQL Challenge, select stack

2007-07-20 Thread Ken
Igor, et al.
 
 The first time I posted the question, it took over 5 hours to get there and back 
to me, so I assumed that the first question was lost and reposted it 
with a small update to reflect the actual problem that wasn't represented in 
the first case. 
 My mistake.
 
 The 22 is kind of like a timestamp (or you could just as easily add 
another column indicative of a timestamp).
 
 insert into stack values (1, 1234);
 insert into stack values (2, 1234);
 insert into stack values (6, 1234);
 insert into stack values (9, 1234);
 insert into stackpop values (12, 1234) ;
 insert into stackpop values (14, 1234) ;
 insert into stackpop values (18, 1234) ;
insert into stack values (22, 1234);
 
 so that 12 should pop 9, 14 pops 6, and 18 pops 2, leaving the stack with 1 
and 22.
 
 I haven't had a chance to review your solution. However, I did come up with 
one of my own for the first case, and a processing solution where the stackpop 
is queried and the stack table is processed based upon the query results...
 
 Solution 1:
 create temp table t_stack (nid integer primary key autoincrement
  , id integer, value integer);
 insert into t_stack (id, value)
select * from stack
where value = 1234 
   order by id desc;
 
 create temp table t_pop(nid integer primary key autoincrement
  , id integer, value integer);
 insert into t_pop (id, value) select * from stackpop order by id asc;
 
 select sp.id, s.id from t_pop sp, t_stack s where sp.nid = s.nid;
 
 Solution 2:
 for each row in stackpop  sp:
   1. stack_pop_id =  select max(id) from stack where id < sp.id 
   2. delete  the row from stack where id = stack_pop_id
 loop
 
 Thanks Igor for your suggestions... I agree that using SQL to implement a stack is 
pretty bad; maybe the best solution is to do this programmatically. 
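Solution 2 above, done programmatically, might look like this with Python's sqlite3 (a sketch of the loop, not a pure-SQL answer):

```python
import sqlite3

def pop_pairs(conn):
    """For each stackpop row in id order, pop the newest stack
    entry that was pushed before it (Solution 2 above)."""
    pairs = []
    pops = conn.execute("select id from stackpop order by id").fetchall()
    for (pop_id,) in pops:
        # Step 1: find the most recent push that precedes this pop.
        (top_id,) = conn.execute(
            "select max(id) from stack where id < ?", (pop_id,)).fetchone()
        if top_id is not None:
            pairs.append((pop_id, top_id))
            # Step 2: remove the popped row from the stack.
            conn.execute("delete from stack where id = ?", (top_id,))
    return pairs
```

With the data above this yields (12, 9), (14, 6), (18, 2) and leaves ids 1 and 22 on the stack.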
 

  
Igor Tandetnik <[EMAIL PROTECTED]> wrote: Ken  wrote:
> Does anyone have ideas on how to implement a stack using sql 
>  Given the following tables and data:
>
>  create table stack( id integer primary key,   value integer);
>  create table stackpop ( id integer primary key, value integer );
>
>  begin;
>  insert into stack values (1, 1234);
>  insert into stack values (2, 1234);
>  insert into stack values (6, 1234);
>  insert into stack values (9, 1234);
> insert into stack values (22, 1234);
>
>  insert into stackpop values (12, 1234) ;
>  insert into stackpop values (14, 1234) ;
>  insert into stackpop values (18, 1234) ;
>  commit;
>
>  Do you have any ideas for a select that will return the stackpop and
>  stack id's paired as follows:
>  12 | 9
>  14 | 6
>  18 | 2

What's the logic supposed to be here? Why is the stack record with id=22 
omitted?

I believe I've answered this question the first time you asked it. Have 
you found the answer lacking? In what respect?

Igor Tandetnik 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-




[sqlite] sqlite Text fields Vista/ 64 bit/core 2 duo machines

2007-07-20 Thread James Forrester
This may be a problem specific to Vista, but I'm running 64-bit on a
Core 2 Duo machine. 

When I try to read a Text field with more than about 35 characters I get a:

"System.AccessViolationException: Attempted to read or write protected
memory.  This is often an indication that other memory is corrupt."

entry.Description   = reader.GetString(9);

Same code, same data on an XP Pro machine (single processor, 32-bit) works
perfectly.  On the 64-bit machine I'm compiling for x86.

Any help greatly appreciated, as I've exhausted the possible combinations.

Important because we need to move our product which uses sqlite to support
our Vista customer base.  

Developing in VS 2005 C#. 

Jim


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] How many table can i create in a db?

2007-07-20 Thread Samuel R. Neff

SQLite parses the schema every time you open a new connection so the more
complex the schema the longer it will take to connect.  We have 74 tables in
our database with a lot of triggers and it takes 17ms to open a connection.
So even if it will let you create 10,000, the performance impact of parsing
the schema each time may be prohibitive (although if you can open one
connection and leave it open for a long period of time, you can mitigate the
parsing issue).

HTH,

Sam 


---
We're Hiring! Seeking a passionate developer to join our team building
products. Position is in the Washington D.C. metro area. If interested
contact [EMAIL PROTECTED]
 
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 4:25 AM
To: SQLite
Subject: [sqlite] How many table can i create in a db?

I open one db,then create table.
How many table can i create?
10,100?
what is the max table num in one db?

BR
allen.zhang


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Re: SQL Challenge, select stack

2007-07-20 Thread Igor Tandetnik

Ken  wrote:

Does anyone have ideas on how to implement a stack using sql 
 Given the following tables and data:

 create table stack( id integer primary key,   value integer);
 create table stackpop ( id integer primary key, value integer );

 begin;
 insert into stack values (1, 1234);
 insert into stack values (2, 1234);
 insert into stack values (6, 1234);
 insert into stack values (9, 1234);
insert into stack values (22, 1234);

 insert into stackpop values (12, 1234) ;
 insert into stackpop values (14, 1234) ;
 insert into stackpop values (18, 1234) ;
 commit;

 Do you have any ideas for a select that will return the stackpop and
 stack id's paired as follows:
 12 | 9
 14 | 6
 18 | 2


What's the logic supposed to be here? Why is the stack record with id=22 
omitted?


I believe I've answered this question the first time you asked it. Have 
you found the answer lacking? In what respect?


Igor Tandetnik 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: timestamp to date in a trigger

2007-07-20 Thread Charly Caulet
I didn't understand the modifiers' purpose from reading this page
(http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions). Thank you
Igor and Richard for the explanations and help.


> I'm not sure what 1184834152 is supposed to represent,
It is a UNIX timestamp (the number of seconds since 1970).

> You probably want
>
> strftime('%d-%m-%Y', new.tstp, 'unixepoch')
It's exactly what I need.
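For reference, the same call driven from Python's sqlite3 (the value is the UNIX timestamp from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 'unixepoch' tells strftime to interpret the value as seconds
# since 1970-01-01 UTC instead of as a Julian day number.
(day,) = conn.execute(
    "select strftime('%d-%m-%Y', 1184834152, 'unixepoch')").fetchone()
print(day)  # -> 19-07-2007
```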

Charly CAULET

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] SQLite Version 3.4.1

2007-07-20 Thread drh
SQLite Version 3.4.1 is now available on the website.

Version 3.4.1 fixes a problem in the VACUUM command that
could potentially lead to database corruption.  Upgrading
is recommended for all users.

This release also includes several other small enhancements
and bug fixes.  For details see

  http://www.sqlite.org/changes.html

Please report any problems you find to this mailing list.
Tnx.

--
D. Richard Hipp <[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Importing a big text file (CSV?)

2007-07-20 Thread Alberto Simões

On 7/19/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

"Alberto Simões" <[EMAIL PROTECTED]> wrote:
> Ok, for future reference (drh, please, it would be nice to add this to
> the web site)

That is why we have wiki (http://www.sqlite.org/cvstrac/wiki) so
that you can add things like this yourself.


We have a wiki?
Nice :)

--
Alberto Simões

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] How many table can i create in a db?

2007-07-20 Thread Allen . Zhang
I open one db,then create table.
How many table can i create?
10,100?
what is the max table num in one db?

BR
allen.zhang

Re: [sqlite] optimizer question

2007-07-20 Thread Dan Kennedy
On Thu, 2007-07-19 at 22:56 +0100, Colin Manning wrote:
> Hi
> 
> If I have a table with a couple of indexed varchar fields e.g:
> 
> CREATE TABLE t (id INT, a VARCHAR(20), b VARCHAR(20)...);
> CREATE INDEX ia ON t(a);
> CREATE INDEX ib ON t(b);
> 
> then will the sqlite query optimizer use these indices in these SELECT's:
> 
> 1. SELECT * FROM t WHERE a LIKE 'M%';
> 2. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY a;
> 3. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY b;
> 4. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY b,a;

I think all of those queries will use index "ia".

For queries 3 and 4, a subset of table t will be scanned,
and a temporary b-tree structure used to do the ORDER BY.

Dan.



> ...such that none of them will result in a table scan?
> 
> Thx
> 
> 
> 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLite on Mac

2007-07-20 Thread

Hi Ahmed,


Does SQLite work on Mac, and if yes, is there any Mac enabled version
that I could download?


As others have mentioned, yes, SQLite not only runs on a Mac, but it's  
already installed as of Mac OS X 10.4 "Tiger" and after. Apple uses it  
for indexing email in the Mail application, Core Data in Xcode  
development, and media management in high end apps like Aperture.


If you have an earlier Mac OS X version, or want the very latest  
SQLite version, you can download it from the first link under the  
"Source Code" heading at:

http://www.sqlite.org/download.html
You'll need the Apple Developer Tools installed on your computer,  
which comes free with your computer or Mac OS X install discs, to  
compile and install it in about four steps.


To try it out, launch the Terminal program (already in your / 
Applications/Utilities folder) and type:


sqlite3 MyTestDatabase

then in the sqlite3 shell, type any sqlite commands, such as:

.help
.quit
create table MyTestTable( Name text, Age integer);

and so on.

There is also a range of GUI apps for the Mac for editing SQLite  
databases.


Reply here if you need more info.

Tom


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLite on Mac

2007-07-20 Thread Kasper Daniel Hansen

On Jul 19, 2007, at 8:45 AM, Ahmed Sulaiman wrote:


Hi all,

Does SQLite work on Mac, and if yes, is there any Mac enabled version
that I could download?


SQLite is part of MacOS X. Try typing sqlite3 at the command line...

Kasper



Cheers





Re: [sqlite] SQLite on Mac

2007-07-20 Thread Peter Lau

On Jul 19, 2007, at 11:45 AM, Ahmed Sulaiman wrote:


Does SQLite work on Mac, and if yes, is there any Mac enabled version
that I could download?


SQLite is built-in on Tiger (10.4.x)... no installation is required.

pete

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] optimizer question

2007-07-20 Thread drh
Colin Manning <[EMAIL PROTECTED]> wrote:
> Hi
> 
> If I have a table with a couple of indexed varchar fields e.g:
> 
> CREATE TABLE t (id INT, a VARCHAR(20), b VARCHAR(20)...);
> CREATE INDEX ia ON t(a);
> CREATE INDEX ib ON t(b);
> 
> then will the sqlite query optimizer use these indices in these SELECT's:

The LIKE cannot be optimized unless you either

A)  Set PRAGMA case_sensitive_like=ON;
B)  Declare column a to have COLLATE NOCASE.
C)  Use "a GLOB 'M*'" instead

> 
> 1. SELECT * FROM t WHERE a LIKE 'M%';
> 2. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY a;
> 3. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY b;
> 4. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY b,a;
> 

Indices will be used to skip the sorting step in 2 and 3.
If you do this:

   DROP INDEX ib;
   CREATE INDEX ib ON t(b,a);

Then the new index ib will be used to sort in 4.

If you take the steps A, B, or C above, then the ia
index will be used to both sort and restrict the search
in query 2.

All of this you can discover for yourself by typing
in the query with the following prefix:

   EXPLAIN QUERY PLAN


--
D. Richard Hipp <[EMAIL PROTECTED]>
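A quick way to see the range rewrite and the case_sensitive_like option in action from Python's sqlite3 (sample data is made up; the EXPLAIN QUERY PLAN text varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table t(id int, a varchar(20));
    create index ia on t(a);
    insert into t values (1,'Mike'),(2,'mary'),(3,'Ned'),(4,'Moss');
""")

# Default LIKE is case-insensitive: 'M%' also matches 'mary', and the
# index cannot be used for the prefix search.
ci = conn.execute(
    "select id from t where a like 'M%' order by id").fetchall()

# The equivalent half-open range is case-sensitive and index-friendly.
rng = conn.execute(
    "select id from t where a >= 'M' and a < 'N' order by id").fetchall()

# With case_sensitive_like ON, the LIKE prefix itself can use index ia.
conn.execute("pragma case_sensitive_like = on")
plan = conn.execute(
    "explain query plan select id from t where a like 'M%'").fetchall()

print(ci)   # -> [(1,), (2,), (4,)]
print(rng)  # -> [(1,), (4,)]
print(plan)
```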


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Re: optimizer question

2007-07-20 Thread Igor Tandetnik

Colin Manning <[EMAIL PROTECTED]> wrote:

If I have a table with a couple of indexed varchar fields e.g:

CREATE TABLE t (id INT, a VARCHAR(20), b VARCHAR(20)...);
CREATE INDEX ia ON t(a);
CREATE INDEX ib ON t(b);

then will the sqlite query optimizer use these indices in these
SELECT's:
1. SELECT * FROM t WHERE a LIKE 'M%';
2. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY a;
3. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY b;
4. SELECT * FROM t WHERE a LIKE 'M%' ORDER BY b,a;


Prepend each statement with EXPLAIN QUERY PLAN and find out for 
yourself.


I believe SQLite is smart enough to optimize this particular usage of 
LIKE. If it turns out it isn't after all, change the condition to


a >= 'M' and a < 'N'

Igor Tandetnik 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-