Re: [sqlite] [Video] An Introduction to SQLite

2006-06-03 Thread imcs ee

The URL returns:

"Currently, the playback feature of Google Video isn't available in your
country.

We hope to make this feature available more widely in the future, and we
really appreciate your patience."

On 6/4/06, A. Pagaltzis <[EMAIL PROTECTED]> wrote:


Hi,

for those who haven't noticed, a video of a 45-min talk by
Dr. Hipp about SQLite that he gave at Google has been posted on
Google Video:

http://video.google.com/videoplay?docid=-5160435487953918649

Not much nitty-gritty, but a nice 20,000 ft view; there isn't
much news to me in it, but there are interesting cues and bits
in there. I like it.

[Apologies if this has already been posted about in a more
appropriate venue; I didn't see it on the list or the homepage.]

Regards,
--
Aristotle Pagaltzis // 



[sqlite] [Video] An Introduction to SQLite

2006-06-03 Thread A. Pagaltzis
Hi,

for those who haven’t noticed, a video of a 45-min talk by
Dr. Hipp about SQLite that he gave at Google has been posted on
Google Video:

http://video.google.com/videoplay?docid=-5160435487953918649

Not much nitty-gritty, but a nice 20,000 ft view; there isn’t
much news to me in it, but there are interesting cues and bits
in there. I like it.

[Apologies if this has already been posted about in a more
appropriate venue; I didn’t see it on the list or the homepage.]

Regards,
-- 
Aristotle Pagaltzis // 


RE: [sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with sqlite3_exec

2006-06-03 Thread Costas Stergiou
Hello John,
Thank you for your answer but I suspect this is not the reason for the
following cases:
1. I modified the code to check for the sqlite3_close return value and it
never fails; the error keeps happening though
2. If I remove completely the sqlite3_close statement, the error still
occurs
3. In my actual application, I don't open so many dbs: I open about 7 and I
do many transactions in each one (and do not close them until program exit).
4. The problem is related to this post:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg15494.html My case is
exactly the same (although I cannot step into the c code to see what is
happening).
5. Why does the error go away when I set 'PRAGMA synchronous = off;'?

I am afraid it is somehow related to the file-handling routines on WinXP,
but I don't have the slightest idea what to do. This is a really serious
issue and, if it is not my fault, a show-stopper for using SQLite3 on WinXP.

Costas


> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED]
> Sent: Sunday, 4 June 2006 4:05 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with
> sqlite3_exec
> 
> You don't test for an error on your close statement.  I suspect that the
> Sqlite databases are not being closed because activity is not finalized.
> Your error pops up when you have a very large number of databases
open.
> JS
> 
> Costas Stergiou wrote:
> > Hello everyone,
> >
> > I recently came across a very strange error that occurs at random times.
> > I managed to track it down and produce a small piece of code to reproduce it.
> >
> > I am using Delphi and the latest sqlite3.dll, but I am fairly sure the
> > error comes from the engine itself rather than my code. Here is code that
> > produces the error (reads almost like pseudo-code):
> >
> >
> >
> > var
> >   i, j: Integer;
> >   s: String;
> >   db: pointer;
> >   msg: pchar;
> > begin
> >   for i := 1 to 30 do begin
> >     s := 'testsqlite3_' + inttostr(i) + '.db3';
> >     deletefile(s);
> >   end;
> >
> >   for i := 1 to 30 do begin
> >     s := 'testsqlite3_' + inttostr(i) + '.db3';
> >     sqlite3_open(pchar(s), db);
> >     //sqlite3_exec(db, 'PRAGMA synchronous = off;', nil, nil, msg);
> >     Assert( sqlite3_exec(db, pchar('create table t1(i1 int, i2 int)'),
> >             nil, nil, msg) = SQLITE_OK );
> >     Assert( sqlite3_exec(db, pchar('create table t2(i1 int, i2 int)'),
> >             nil, nil, msg) = SQLITE_OK );
> >     sqlite3_close(db);
> >   end;
> > end;
> >
> >
> >
> > What I am doing here is just creating 30 test databases and creating 2
> > tables in each one of these. That's all.
> >
> > To reproduce this error, the code above may need to be run several times
> > (sometimes it gives the error, sometimes it doesn't). Between invocations
> > it is good to wait 1 second (I put it on a form and attached it to a
> > button; I then press the button repeatedly until I get the error, usually
> > within 2-3 clicks).
> >
> >
> >
> > Now, if I uncomment the "PRAGMA synchronous=off" command, the error goes
> > away!
> >
> > Also, if I remove the second create table command, I have no problem
> again
> > (the problem is not associated with the 'create table command'
actually).
> >
> >
> >
> > Any help or insight on the above will be very valuable since I am
> > completely stuck with it.
> >
> > Thanks,
> >
> > Costas
> >




Re: [sqlite] Multithreading. Again.

2006-06-03 Thread John Stanton

Joe Wilson wrote:

Remember that the operating system bug that is causing all the
multithreading grief is that file locks created by one thread
cannot be reliably removed or modified by a different thread.



You could have a single thread that exclusively performs file locking/unlocking.
This thread would wait on a threadsafe work queue (using a POSIX 
condition variable mechanism) and execute file locking/unlocking tasks, 
otherwise it would use zero CPU. Functions could be provided to put 
file lock/unlock operations on to this work queue and wait for the result. 
Such file locking/unlocking functions could be called safely and reliably
from any thread.

That sounds like good practice to me.  When one discovers that a
particular function is flaky and varies between implementations, O/Ss,
and versions, prudence requires that it be bypassed.  In our SQLite usage
we want platform independence and reliability, and accordingly we never
rely on file locking for synchronization.

A bonus of such an approach is simpler code and logic, and better
execution efficiency.

JS


Re: [sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with sqlite3_exec

2006-06-03 Thread John Stanton
You don't test for an error on your close statement.  I suspect that the
SQLite databases are not being closed because activity is not finalized.
Your error pops up when you have a very large number of databases open.

JS

Costas Stergiou wrote:

Hello everyone,

I recently came across a very strange error that occurs at random times.
I managed to track it down and produce a small piece of code to reproduce it.

I am using Delphi and the latest sqlite3.dll, but I am fairly sure the error
comes from the engine itself rather than my code. Here is code that produces
the error (reads almost like pseudo-code):



var
  i, j: Integer;
  s: String;
  db: pointer;
  msg: pchar;
begin
  for i := 1 to 30 do begin
    s := 'testsqlite3_' + inttostr(i) + '.db3';
    deletefile(s);
  end;

  for i := 1 to 30 do begin
    s := 'testsqlite3_' + inttostr(i) + '.db3';
    sqlite3_open(pchar(s), db);
    //sqlite3_exec(db, 'PRAGMA synchronous = off;', nil, nil, msg);
    Assert( sqlite3_exec(db, pchar('create table t1(i1 int, i2 int)'),
            nil, nil, msg) = SQLITE_OK );
    Assert( sqlite3_exec(db, pchar('create table t2(i1 int, i2 int)'),
            nil, nil, msg) = SQLITE_OK );
    sqlite3_close(db);
  end;
end;


What I am doing here is just creating 30 test databases and creating 2
tables in each one of these. That's all.

To reproduce this error, the code above may need to be run several times
(sometimes it gives the error, sometimes it doesn't). Between invocations
it is good to wait 1 second (I put it on a form and attached it to a
button; I then press the button repeatedly until I get the error, usually
within 2-3 clicks).



Now, if I uncomment the "PRAGMA synchronous=off" command, the error goes
away!

Also, if I remove the second create table command, I have no problem again
(the problem is not associated with the 'create table command' actually).



Any help or insight on the above will be very valuable since I am completely
stuck with it.

Thanks,

Costas



Re: [sqlite] question about php_pdo_sqlite

2006-06-03 Thread Clay Dowling
yuyen wrote:
> Ok, I finally found that the problem is that PHP and Apache are installed
> on a local drive while the sqlite database file is on a remote Linux/Samba
> file server. I can use sqlite3.exe to open that remote database through a
> mapped drive, and Insert, Update and Delete work fine. But with Apache
> and PHP PDO, if the database is concurrently opened by sqlite3.exe, it
> reports "can't open database". If I close sqlite3.exe, then PHP PDO can
> open it, but only read-only. If I put the database file on a local drive,
> there is no problem at all; I can even use sqlite3.exe concurrently with
> PHP PDO. Does Apache handle remote servers differently?

SQLite has issues with remote file servers.  It's entirely dependent upon
the network file system's file-locking mechanism, which tends to be
something of a blunt instrument and somewhat unpredictable.  You'll be
best off keeping the file on a local drive.  If you need access from
multiple servers, a database engine with a network server daemon is going
to be a better choice.  PostgreSQL comes highly recommended, as do
several others.

Clay
-- 
CeaMuS
http://www.ceamus.com
Simple Content Management


Re: [sqlite] Multithreading. Again.

2006-06-03 Thread Joe Wilson
> Remember, that the operating system bug that is causing all the
> multithreading grief is that file locks created by one thread
> cannot be reliably removed or modified by a different thread.

You could have a single thread that exclusively performs file locking/unlocking.
This thread would wait on a threadsafe work queue (using a POSIX 
condition variable mechanism) and execute file locking/unlocking tasks, 
otherwise it would use zero CPU. Functions could be provided to put 
file lock/unlock operations on to this work queue and wait for the result. 
Such file locking/unlocking functions could be called safely and reliably
from any thread.
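[Editor's sketch] The single-lock-thread design described above can be sketched as follows. This is an illustration only: the names (LockServer, submit) are made up, and a toy in-memory "lock table" stands in for the OS file locks; queue.Queue internally uses the condition-variable mechanism mentioned.

```python
# All lock/unlock operations are funneled through one dedicated thread,
# so the thread that acquires an OS lock is always the one that releases
# it -- sidestepping the per-thread fcntl lock-ownership bug.
import queue
import threading

class LockServer:
    def __init__(self):
        self._work = queue.Queue()  # threadsafe work queue
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # The only thread that ever touches the lock table.
        while True:
            func, args, done, result = self._work.get()
            result.append(func(*args))
            done.set()

    def submit(self, func, *args):
        # Callable from any thread; blocks (zero CPU) until the lock
        # thread has executed the operation, then returns its result.
        done, result = threading.Event(), []
        self._work.put((func, args, done, result))
        done.wait()
        return result[0]

# Toy "lock table" standing in for byte-range file locks.
locks = set()

def lock(path):
    if path in locks:
        return False
    locks.add(path)
    return True

def unlock(path):
    locks.discard(path)
    return True

server = LockServer()
print(server.submit(lock, "test.db"))    # True: acquired by the lock thread
print(server.submit(lock, "test.db"))    # False: already held
print(server.submit(unlock, "test.db"))  # True: released by the same thread
```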


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


Re: [sqlite] Re: Query performance issues - index selection

2006-06-03 Thread Joe Wilson
If you test against 3.2.1, just use your original where clause 
and original index pk_data:

 where utime >= 1146441600 and utime < 1149120000

as I don't know what effect the unary +'s would have on it.

--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> For my databases, GROUP BY is slower in recent SQLite releases
> because my queries are not able to use indexed columns for 
> GROUP BY items by design:
> 
>  http://www.mail-archive.com/sqlite-users%40sqlite.org/msg15563.html
> 
> I'm curious what sort of timings you'd get with SQLite 3.2.1.
> http://sqlite.org/sqlite-3.2.1.tar.gz
> 
> To convert to the older database format:
> 
>  sqlite334 334.db .dump | sqlite321 321.db
> 
> And then perform your timings with both SQLite versions against 
> the 321.db file to level the playing field.
> Just having an index on utime should be optimal for SQLite 3.2.1.
> 
> --- Steve Green <[EMAIL PROTECTED]> wrote:
> > Using Richard's suggestion of changing the where clause of my query
> > to
> > 
> > where +utime >= 1146441600 and +utime < 1149120000
> > 
> > did force sqlite to use the index that gave better performance.
> > 
> > However, I'm seeing some strange behavior that I'm hoping someone can
> > shed some light on.
> > 
> > With the time period mentioned below, the data set is about 2.5 million
> > rows and 86K distinct u_id values.  Using the index on (u_id, utime),
> > the query time was reduced from 13.5 minutes to 26 seconds (not great,
> > but at least the browser won't timeout waiting for a response).
> > 
> > However, with a different time period, I have a much smaller data set of
> > about 150K rows and 20K distinct u_id values.  Using the index on (u_id,
> > utime), the query still takes about 20 seconds.  However, if the primary
> > index on (utime, r_id, u_id) is used, the query only takes 0.5 seconds.
> > Unfortunately at query time I have no idea of knowing how much data is
> > going to have to be traversed, so the idea of modifying the query to force
> > the use of different indexes is not possible.  Can anyone explain why the
> > performance is so poor with the smaller data set and the "optimal" index?
> > Any suggestions on a workaround so that optimal performance can be achieved
> > with all data set sizes?
> > 
> > Thanks for your help,
> > 
> > Steve
> > 
> > Steve Green wrote:
> > 
> > > Sorry, I forgot to mention that I'm using SQLite v3.3.4 on Red Hat
> > > Linux 7.3
> > > 
> > > Steve
> > > 
> > > Steve Green wrote:
> > > 
> > >> Hi,
> > >>
> > >> I have the following schema
> > >>
> > >> CREATE TABLE data(
> > >> utime int4,
> > >> r_id int2,
> > >> u_id int4,
> > >> data1 int8,
> > >> data2 int8
> > >> );
> > >>
> > >> Each row is uniquely defined by utime, r_id, and u_id, so I have the
> > >> following index
> > >>
> > >> CREATE UNIQUE INDEX pk_data on data( utime, r_id, u_id );
> > >>
> > >> This index is also needed because at update time, I use "insert or
> > >> replace" to update the data table.
> > >>
> > >> The type of query that I'm attempting to perform is similar to
> > >>
> > >> select u_id, sum( data1 ), sum( data2 )
> > >> from data where utime >= 1146441600 and utime < 1149120000
> > >> group by u_id
> > >> order by sum( data1 ) desc
> > >> limit 10
> > >>
> > >> My current table has about 2.5 million rows and about 86,000 distinct
> > >> u_id values in the time period selected, and the query takes about 13.5
> > >> minutes.
> > >>
> > >> Performing an explain query plan reveals
> > >>
> > >> 0|0|TABLE data WITH INDEX pk_data
> > >>
> > >> so the primary index is being used...
> > >>
> > >> Based on some past experiences, I added the following index to the table
> > >>
> > >> CREATE INDEX ix_data_ut on data( u_id, utime );
> > >>
> > >> Note that the utime is the last parameter in the index.  With the primary
> > >> index in place, I was not able to convince sqlite to use this index.  
> > >> To test
> > >> the index, I was forced to drop the primary index (which I can't do in my
> > >> production environment).   After dropping the primary index, an 
> > >> explain query
> > >> plan revealed
> > >>
> > >> 0|0|TABLE data WITH INDEX ix_data_ut
> > >>
> > >> and the query ran in 26 seconds...
> > >>
> > >> Subsequent tests using the following indexes provided no performance 
> > >> improvement
> > >> over the unique index, although sqlite's query planner chose these 
> > >> indexes over
> > >> the unique index
> > >>
> > >> CREATE INDEX ix_data_tu on data( utime, u_id );
> > >> CREATE INDEX ix_data_t on data( utime );
> > >>
> > >> So, is there a way that I can get sqlite to use the optimal index 
> > >> without having
> > >> to remove my unique index?  Perhaps this would involve rewriting the 
> > >> query, but
> > >> I'm at a loss as to how that could be done.
> > >>
> > >> Thanks for your time,
> > >>
> > >> Steve
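[Editor's sketch] When comparing index layouts, Steve's aggregate query above can be sanity-checked end-to-end on toy data, so that any plan change can be confirmed not to alter results. A minimal sketch with made-up values (Python's bundled SQLite):

```python
# Tiny end-to-end check of the GROUP BY / ORDER BY sum() query: whichever
# index the planner picks, the aggregate result must stay the same.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE data(utime int4, r_id int2, u_id int4, data1 int8, data2 int8);
    CREATE UNIQUE INDEX pk_data ON data(utime, r_id, u_id);
    CREATE INDEX ix_data_ut ON data(u_id, utime);
    INSERT INTO data VALUES
        (1146441600, 1, 10, 5, 1),
        (1146441601, 1, 10, 7, 1),
        (1146441602, 1, 20, 3, 1),
        (1149119999, 1, 20, 4, 1),
        (1149120000, 1, 30, 99, 1);  -- this row falls outside the range
""")
rows = con.execute("""
    SELECT u_id, sum(data1), sum(data2)
    FROM data WHERE utime >= 1146441600 AND utime < 1149120000
    GROUP BY u_id ORDER BY sum(data1) DESC LIMIT 10
""").fetchall()
print(rows)  # [(10, 12, 2), (20, 7, 2)]
```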



Re: [sqlite] Re: Query performance issues - index selection

2006-06-03 Thread Joe Wilson
For my databases, GROUP BY is slower in recent SQLite releases
because my queries are not able to use indexed columns for 
GROUP BY items by design:

 http://www.mail-archive.com/sqlite-users%40sqlite.org/msg15563.html

I'm curious what sort of timings you'd get with SQLite 3.2.1.
http://sqlite.org/sqlite-3.2.1.tar.gz

To convert to the older database format:

 sqlite334 334.db .dump | sqlite321 321.db

And then perform your timings with both SQLite versions against 
the 321.db file to level the playing field.
Just having an index on utime should be optimal for SQLite 3.2.1.

--- Steve Green <[EMAIL PROTECTED]> wrote:
> Using Richard's suggestion of changing the where clause of my query
> to
> 
> where +utime >= 1146441600 and +utime < 1149120000
> 
> did force sqlite to use the index that gave better performance.
> 
> However, I'm seeing some strange behavior that I'm hoping someone can
> shed some light on.
> 
> With the time period mentioned below, the data set is about 2.5 million
> rows and 86K distinct u_id values.  Using the index on (u_id, utime),
> the query time was reduced from 13.5 minutes to 26 seconds (not great,
> but at least the browser won't timeout waiting for a response).
> 
> However, with a different time period, I have a much smaller data set of
> about 150K rows and 20K distinct u_id values.  Using the index on (u_id,
> utime), the query still takes about 20 seconds.  However, if the primary
> index on (utime, r_id, u_id) is used, the query only takes 0.5 seconds.
> Unfortunately at query time I have no way of knowing how much data is
> going to have to be traversed, so the idea of modifying the query to force
> the use of different indexes is not possible.  Can anyone explain why the
> performance is so poor with the smaller data set and the "optimal" index?
> Any suggestions on a workaround so that optimal performance can be achieved
> with all data set sizes?
> 
> Thanks for your help,
> 
> Steve
> 
> Steve Green wrote:
> 
> > Sorry, I forgot to mention that I'm using SQLite v3.3.4 on Red Hat
> > Linux 7.3
> > 
> > Steve
> > 
> > Steve Green wrote:
> > 
> >> Hi,
> >>
> >> I have the following schema
> >>
> >> CREATE TABLE data(
> >> utime int4,
> >> r_id int2,
> >> u_id int4,
> >> data1 int8,
> >> data2 int8
> >> );
> >>
> >> Each row is uniquely defined by utime, r_id, and u_id, so I have the
> >> following index
> >>
> >> CREATE UNIQUE INDEX pk_data on data( utime, r_id, u_id );
> >>
> >> This index is also needed because at update time, I use "insert or
> >> replace" to update the data table.
> >>
> >> The type of query that I'm attempting to perform is similar to
> >>
> >> select u_id, sum( data1 ), sum( data2 )
> >> from data where utime >= 1146441600 and utime < 1149120000
> >> group by u_id
> >> order by sum( data1 ) desc
> >> limit 10
> >>
> >> My current table has about 2.5 million rows and about 86,000 distinct
> >> u_id values in the time period selected, and the query takes about 13.5
> >> minutes.
> >>
> >> Performing an explain query plan reveals
> >>
> >> 0|0|TABLE data WITH INDEX pk_data
> >>
> >> so the primary index is being used...
> >>
> >> Based on some past experiences, I added the following index to the table
> >>
> >> CREATE INDEX ix_data_ut on data( u_id, utime );
> >>
> >> Note that the utime is the last parameter in the index.  With the primary
> >> index in place, I was not able to convince sqlite to use this index.  
> >> To test
> >> the index, I was forced to drop the primary index (which I can't do in my
> >> production environment).   After dropping the primary index, an 
> >> explain query
> >> plan revealed
> >>
> >> 0|0|TABLE data WITH INDEX ix_data_ut
> >>
> >> and the query ran in 26 seconds...
> >>
> >> Subsequent tests using the following indexes provided no performance 
> >> improvement
> >> over the unique index, although sqlite's query planner chose these 
> >> indexes over
> >> the unique index
> >>
> >> CREATE INDEX ix_data_tu on data( utime, u_id );
> >> CREATE INDEX ix_data_t on data( utime );
> >>
> >> So, is there a way that I can get sqlite to use the optimal index 
> >> without having
> >> to remove my unique index?  Perhaps this would involve rewriting the 
> >> query, but
> >> I'm at a loss as to how that could be done.
> >>
> >> Thanks for your time,
> >>
> >> Steve




[sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with sqlite3_exec

2006-06-03 Thread Costas Stergiou
Hello everyone,

I recently came across a very strange error that occurs at random times.
I managed to track it down and produce a small piece of code to reproduce it.

I am using Delphi and the latest sqlite3.dll, but I am fairly sure the error
comes from the engine itself rather than my code. Here is code that produces
the error (reads almost like pseudo-code):


var
  i, j: Integer;
  s: String;
  db: pointer;
  msg: pchar;
begin
  for i := 1 to 30 do begin
    s := 'testsqlite3_' + inttostr(i) + '.db3';
    deletefile(s);
  end;

  for i := 1 to 30 do begin
    s := 'testsqlite3_' + inttostr(i) + '.db3';
    sqlite3_open(pchar(s), db);
    //sqlite3_exec(db, 'PRAGMA synchronous = off;', nil, nil, msg);
    Assert( sqlite3_exec(db, pchar('create table t1(i1 int, i2 int)'),
            nil, nil, msg) = SQLITE_OK );
    Assert( sqlite3_exec(db, pchar('create table t2(i1 int, i2 int)'),
            nil, nil, msg) = SQLITE_OK );
    sqlite3_close(db);
  end;
end;

What I am doing here is just creating 30 test databases and creating 2
tables in each one of these. That's all.

To reproduce this error, the code above may need to be run several times
(sometimes it gives the error, sometimes it doesn't). Between invocations
it is good to wait 1 second (I put it on a form and attached it to a
button; I then press the button repeatedly until I get the error, usually
within 2-3 clicks).


Now, if I uncomment the "PRAGMA synchronous=off" command, the error goes
away!

Also, if I remove the second create table command, I have no problem again
(the problem is not associated with the 'create table command' actually).


Any help or insight on the above will be very valuable since I am completely
stuck with it.

Thanks,

Costas
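[Editor's sketch] For anyone without Delphi, here is a rough equivalent of the loop above using Python's stdlib sqlite3 module, with the error checking John asks about made implicit (the module raises an exception on any failing call). This only mirrors the scenario; it is not a fix, and the directory handling is added so the sketch is self-contained:

```python
# Mirror of the Delphi loop: delete, then recreate 30 databases with two
# tables each.  sqlite3.connect raises OperationalError on CANTOPEN, and
# close() raises if statements were left unfinalized, so no error can
# pass silently here.
import os
import sqlite3
import tempfile

def reproduce(directory):
    for i in range(1, 31):
        name = os.path.join(directory, "testsqlite3_%d.db3" % i)
        if os.path.exists(name):
            os.remove(name)
    for i in range(1, 31):
        name = os.path.join(directory, "testsqlite3_%d.db3" % i)
        con = sqlite3.connect(name)
        try:
            # con.execute("PRAGMA synchronous = off")  # the reported workaround
            con.execute("create table t1(i1 int, i2 int)")
            con.execute("create table t2(i1 int, i2 int)")
        finally:
            con.close()

reproduce(tempfile.mkdtemp())
```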




Re: [sqlite] question about php_pdo_sqlite

2006-06-03 Thread DJ Anubis
yuyen wrote:
> Ok, I finally found that the problem is that PHP and Apache are
> installed on a local drive while the sqlite database file is on a
> remote Linux/Samba file server. I can use sqlite3.exe to open that
> remote database through a mapped drive, and Insert, Update and Delete
> work fine. But with Apache and PHP PDO, if the database is concurrently
> opened by sqlite3.exe, it reports "can't open database". If I close
> sqlite3.exe, then PHP PDO can open it, but only read-only. If I put the
> database file on a local drive, there is no problem at all; I can even
> use sqlite3.exe concurrently with PHP PDO. Does Apache handle remote
> servers differently?
>
> Jack
Did you fully read the Apache documentation? Maybe some RTFM can help with
this. From the Apache docs, section "Running Apache as a Service" (Windows):
"By default, all Apache services are registered to run as the system
user (the LocalSystem account). The LocalSystem account has no
privileges to your network via any Windows-secured mechanism, including
the file system, named pipes, DCOM, or secure RPC. It has, however, wide
privileges locally."

This explains the problem you face. Apache is more secure than most
Windows services. There is a simple solution that will give you write
access to your remote file system:

Create a separate account for running the Apache services. This is mandatory
for access to network resources. Simple steps:

   1. Create a normal domain user account, and memorize (write down)
  its password.
   2. Grant the newly created account the "Log on as a service" and
  "Act as part of the operating system" privileges. You'll have to
  use Group Policy to propagate those settings.
   3. Confirm the new account is a member of the Users group.
   4. Grant this account Read and Execute rights to all documents and
  folders (including subfolders) in your Apache root directory
  (htdocs and cgi-bin).
   5. Grant this account Change (RWXD) rights to the Apache logs folder.
   6. Grant the account Read and Execute (RX) rights to the Apache binary.
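[Editor's sketch] The service-account and rights steps can also be done from an elevated command prompt; the service name "Apache2", the account "MYDOMAIN\apachesvc", and the paths below are placeholders, not from the original post:

```shell
# Run the Apache service under the dedicated account
# (note: "sc config" requires a space after each "option=").
sc config Apache2 obj= "MYDOMAIN\apachesvc" password= "secret"

# Grant the account rights on the Apache directories (cacls was the
# period-appropriate tool; icacls replaces it on newer Windows):
cacls "C:\Apache2\htdocs" /E /T /G MYDOMAIN\apachesvc:R
cacls "C:\Apache2\logs"   /E /T /G MYDOMAIN\apachesvc:C
```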

Then stop and restart the Apache services. You should have access to your
network mounts, provided your scripts use the standard UNC form:
   \\servername\sharename\path
instead of the classic mapped-drive form:
   X:\path\to\file


This should sort out the access rights between your PHP script, Apache,
and the network mount.

If this can help, let us know :)

JCR