[sqlite] sqlite driven web-site

2006-05-04 Thread Rajan, Vivek K
Dear SQLite community- 

I want to develop a dynamic web-site (for a small group of people, 5-10).
For this I am thinking of using SQLite. 

A couple of questions: 
* Are there some docs and/or packages available to do something like
this?
* I would like to use Perl as the programming interface - this is what
I am most familiar with.

Has anyone done something like this who would share their experience on
the topic? 

Vivek


Re: [sqlite] XML to SQLite upload

2006-05-04 Thread A. Pagaltzis
Hi Vivek,

* Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 06:40]:
> Sorry, I was not clear in my previous email. I do know the
> structure of the XML - I was wondering if there was a
> direct upload capability, once I know the structure of the XML. 

well, you can map generic XML to a relational database by storing
each syntactical element of the file in a row of a table,
together with information about how the elements are nested.
(Various ways to represent trees in a relational database exist.)
However, what you get is nearly useless for the kind of querying
that you typically want to do with a database.
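The generic element-per-row mapping described above can be sketched in a few lines of Python with the stdlib sqlite3 module (table and column names here are illustrative, not from the thread):

```python
import sqlite3
import xml.etree.ElementTree as ET

# One row per XML element, with a parent pointer recording the nesting.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE node(
    id INTEGER PRIMARY KEY,
    parent INTEGER REFERENCES node(id),
    tag TEXT,
    text TEXT)""")

def store(con, elem, parent=None):
    # Insert this element, then recurse into children with its rowid.
    cur = con.execute(
        "INSERT INTO node(parent, tag, text) VALUES (?, ?, ?)",
        (parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        store(con, child, cur.lastrowid)

store(con, ET.fromstring("<book><title>X</title><title>Y</title></book>"))
rows = con.execute(
    "SELECT tag, text FROM node WHERE parent IS NOT NULL ORDER BY id").fetchall()
# rows is now [('title', 'X'), ('title', 'Y')]
```

As the paragraph notes, queries against such a schema end up walking the tree structure rather than expressing anything about the data's meaning, which is why it is rarely useful in practice.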

Regards,
-- 
Aristotle Pagaltzis // 


RE: [sqlite] XML to SQLite upload

2006-05-04 Thread Rajan, Vivek K
Thanks for answers. 

Sorry, I was not clear in my previous email. I do know the structure of
the XML - I was wondering if there was a direct upload capability, once
I know the structure of the XML. 

Vivek


>-Original Message-
>From: A. Pagaltzis [mailto:[EMAIL PROTECTED]
>Sent: Thursday, May 04, 2006 8:54 PM
>To: sqlite-users@sqlite.org
>Subject: Re: [sqlite] XML to SQLite upload
>
>* John Stanton <[EMAIL PROTECTED]> [2006-05-05 05:45]:
>> We feed XML into an SQLITE database, but the XML DTD maps the
>> database in structure and names. To get general XML data and
>> load it into any database requires a program of some
>> description to translate names and structures.
>
>That was the point though. You have to assume some sort of
>convention about the structure of the XML, because there is no
>direct way to map XML into a relational model, and if Vivek Rajan
>does not tell us what he needs, we cannot tell him if such a
>thing exists or how to go about it.
>
>The question does not preclude an answer; it just gives too few
>constraints to answer it usefully.
>
>Regards,
>--
>Aristotle Pagaltzis // 


Re: [sqlite] XML to SQLite upload

2006-05-04 Thread A. Pagaltzis
* John Stanton <[EMAIL PROTECTED]> [2006-05-05 05:45]:
> We feed XML into an SQLITE database, but the XML DTD maps the
> database in structure and names. To get general XML data and
> load it into any database requires a program of some
> description to translate names and structures.

That was the point though. You have to assume some sort of
convention about the structure of the XML, because there is no
direct way to map XML into a relational model, and if Vivek Rajan
does not tell us what he needs, we cannot tell him if such a
thing exists or how to go about it.

The question does not preclude an answer; it just gives too few
constraints to answer it usefully.

Regards,
-- 
Aristotle Pagaltzis // 


Re: [sqlite] XML to SQLite upload

2006-05-04 Thread John Stanton
We feed XML into an SQLITE database, but the XML DTD maps the database 
in structure and names.  To get general XML data and load it into any 
database requires a program of some description to translate names and 
structures.

JS

A. Pagaltzis wrote:

* Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 02:20]:

Does someone have XML to SQLite upload utility in perl/C++? 



That’s like asking if someone has an ASCII to CSV “upload
utility”. It doesn’t make any sense.

Regards,




Re: [sqlite] XML to SQLite upload

2006-05-04 Thread James Bailie

A. Pagaltzis wrote:

> * Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 02:20]:

>> Does someone have XML to SQLite upload utility in perl/C++?
>
> That's like asking if someone has an ASCII to CSV "upload
> utility". It doesn't make any sense.
>

He is probably looking for a utility which will serialize an XML
document into a SQLite database.  The first place to look would
be www.cpan.org.

I have written such a utility, but I wrote it as an example
program for a little lisp dialect I created for text processing,
which only runs on FreeBSD.  A Perl programmer could port it
easily enough:

http://www.jamesbailie.com/munger.html

--
James Bailie <[EMAIL PROTECTED]>
http://www.jamesbailie.com


[sqlite] XML to SQLite upload

2006-05-04 Thread Danilo


Hi,
you might find this example useful:

FLTK + SQLite + TinyXML = rubrinive, an alternative address book that
can store, for every identity, N addresses, N telephone numbers and N
e-mails or web links.
http://www.digitazero.org/?p=33
Regards, Danilo.
Home Page: http://www.digitazero.org
Friday, May 5, 2006


A. Pagaltzis ha scritto:

* Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 02:20]:
Does someone have XML to SQLite upload utility in perl/C++? 


That’s like asking if someone has an ASCII to CSV “upload
utility”. It doesn’t make any sense.

Regards,




Re: [sqlite] Some advanced questions

2006-05-04 Thread John Stanton
sqlite3_complete does no syntax checking other than to see whether the SQL 
statement is terminated with a semicolon.  It is used to split multiple 
SQL statements in the same string, not to verify syntax.
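This semicolon-only behavior is easy to demonstrate; Python's stdlib sqlite3 module exposes the same C function as complete_statement() (a sketch for illustration, not from the thread):

```python
import sqlite3

# complete_statement() wraps sqlite3_complete(): it reports whether the
# string ends in a semicolon token, not whether the SQL is valid.
print(sqlite3.complete_statement("select a from tab1"))     # no semicolon
print(sqlite3.complete_statement("select a from tab1;"))    # terminated
print(sqlite3.complete_statement("not valid sql at all;"))  # still "complete"
```

This matches the behavior Unit 5 observed: every valid but unterminated statement reports false.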


Why not use the Sqlite SQL syntax checker by running sqlite3_prepare on 
your statement?


You can get the column names by reading the schema in the SQLITE_MASTER 
table in the database.   That will give you the  SELECT * ... columns.


For an insight into optimization you can use the explain capability and 
look at the VDBE byte codes generated when Sqlite compiles the SQL 
statement.


Unit 5 wrote:

Hello,

I started with Sqlite just a few days ago and given
how simple its api is, I think I am ready to do some
more complex things with it.  I am primarily using it
from the tcl api so far.

I have a few questions:

1) If I have a statement that includes "WHERE 0 = 1",
does sqlite optimize the query and return right away
or would it execute the query anyway?


2) Is there a way to get column names from a statement
without (or before) executing the statement?  For
example, when a user enters a statement like "select *
from ...".

 
3) I would like to test the validity of sql statements

before executing them.  So I thought I could use "db
complete" command.  But now I am not sure what "db
complete" command does.  In my tests, it returns 0
(false) regardless of the sql statement.  In fact the
only times I have gotton true is if the statement is
empty.  The statements are (all are valid and return
data):
  % db complete "select 2 * 2 "
  % db complete "select a from tab1"
  % db complete "select a from tab1 where a < 10"
  % db complete "select a from tab1 where a < 10
order by a"




__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 




Re: [sqlite] XML to SQLite upload

2006-05-04 Thread A. Pagaltzis
* Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 02:20]:
> Does someone have XML to SQLite upload utility in perl/C++? 

That’s like asking if someone has an ASCII to CSV “upload
utility”. It doesn’t make any sense.

Regards,
-- 
Aristotle Pagaltzis // 


[sqlite] XML to SQLite upload

2006-05-04 Thread Rajan, Vivek K
Does someone have XML to SQLite upload utility in perl/C++? 

Vivek


[sqlite] means for explicitly escalating a transaction from RESERVED to PENDING/EXCLUSIVE

2006-05-04 Thread Pat Wibbeler
Is there a means for explicitly escalating an existing transaction from
RESERVED to EXCLUSIVE (either through issuing sql statements or the C
API)? 

I'm using the following locking strategy:
* Read only transactions start with a BEGIN and allow sqlite to escalate
to SHARED on read.
* Transactions that may (or are known to) write use BEGIN IMMEDIATE to
avoid the potential lock escalation issue that might occur with two
simultaneous SHARED transactions that each attempt to escalate to
RESERVED by issuing a read followed by a write.
* Install a busy handler that directs sqlite to continue trying to
obtain the lock. 
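The BEGIN IMMEDIATE part of this strategy can be sketched from Python's stdlib sqlite3 module (illustrative only; connection and table names are made up, and the short timeout stands in for a busy handler):

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None, timeout=0.1)
reader = sqlite3.connect(path, isolation_level=None, timeout=0.1)
writer.execute("CREATE TABLE t(x)")

writer.execute("BEGIN IMMEDIATE")  # take RESERVED up front: only one writer
print(reader.execute("SELECT count(*) FROM t").fetchone())  # readers still OK

try:
    reader.execute("BEGIN IMMEDIATE")  # a second would-be writer is refused
    blocked = False
except sqlite3.OperationalError:       # "database is locked" after the timeout
    blocked = True
print(blocked)
writer.execute("COMMIT")
```

BEGIN EXCLUSIVE takes the stronger lock at transaction start; what Pat asks for — escalating an already-open RESERVED transaction to EXCLUSIVE mid-flight — is the part with no obvious SQL-level equivalent.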

Unfortunately, I also have some transactions that I'd like to be
EXCLUSIVE.  In some cases, I know when beginning the transaction that
these should be EXCLUSIVE, and in some cases, I don't know until I'm
within the transaction that I'd like to be EXCLUSIVE.  Even in cases
where I know I'll eventually be exclusive, I'd like to delay exclusivity
as long as possible, allowing read operations to continue.  

What I'd like to do is begin this transaction with BEGIN IMMEDIATE, allowing
SHARED locks to be acquired and the database read.  At some point within
the transaction, I'd like to force an escalation to EXCLUSIVE, forcing
all SHARED and other locks to clear before continuing.

Is there a way to do this?

Thanks!

Pat


Re: [sqlite] SQLite disk performance

2006-05-04 Thread drh
<[EMAIL PROTECTED]> wrote:
> 
> The main reason seems to be that inserting in the btree is very slow (even 
> using transactions) because a lot of data has to be moved around in the file.


No.  I think the reason is that the database file is too
big to fit in your operating system's disk cache and hence
reads and writes really do have to do actual disk I/O, which
is slow.  You just do not notice this so much on smaller
databases because the I/O is really back and forth to the
disk cache rather than to the disk itself.

> 
Another concern I have is row size: the average row in our data contains a 
blob of about 3 kb. My understanding is that in that case, the first kilobyte 
(primary key included) is stored in the b-tree node and the rest somewhere 
else in the file.


You might benefit from specifying a larger page size.
Try:

PRAGMA page_size=16384;

before creating any tables in a new database, and see if
that helps.  Or recompile with -DSQLITE_DEFAULT_PAGE_SIZE=16384.
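The ordering constraint matters: once the first table exists, the database is already laid out in pages of the old size. A minimal Python sketch of the pragma (stdlib sqlite3; not from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# page_size must be set before the database is initialized;
# after the first CREATE TABLE the setting is ignored.
con.execute("PRAGMA page_size = 16384")
con.execute("CREATE TABLE t(a, b BLOB)")
size = con.execute("PRAGMA page_size").fetchone()[0]
print(size)  # 16384
```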


--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Jay Sprenkle

On 5/4/06, Dennis Cote <[EMAIL PROTECTED]> wrote:

It seems *really* strange that deleting the file would reduce the
average run time of the 10000 loop case by more than 60 seconds. Neither
overwriting the file nor deleting the file should take anywhere near this
long. These are small files of only 10K characters after all.


I think I understand that part. In one case (the deleted file)
it's going to the free disk block list to allocate new chunks of disk
space for the file. This is probably very quick since that list is
probably cached. In the other case, if the file isn't deleted, it has
to read the directory entry, figure out where the disk block is that
represents that portion of the file, then seek to it, insert the
data into the existing block image, then write.


Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Dennis Cote

Doug Currie wrote:




Add the line:

   DeleteFile("test.txt");

in front of the CreateFile() call...



Doug,

This does make a difference. I thought that the CREATE_ALWAYS flag to 
CreateFile would delete any existing file, but upon further 
investigation I found out that what it does is open the file and set its 
length to 0. Apparently it is faster to delete the file. In either case 
it seems to be a side effect of these calls, since their execution time 
is not included in the measurements.


There seems to be another effect due to the run time of the program. For 
loops of 1000 flushes I consistently get higher rates than for loops of 
10000 flushes. I know some OS's lower the priority of long running 
processes that do lots of I/O. This is supposed to be an indication of a 
long running batch type process so they do this to improve the response 
to more interactive processes. In my case the CPU is spending most of 
its time in the idle process waiting for the disk so this still doesn't 
make much sense.


The following is from my first six runs with 1000 flushes per run. 
The first column is the time in seconds, the second is the flushes/sec rate.

26  38.5
18  55.6
26  38.5
19  52.6
26  38.5
18  55.6

These are the average and standard deviation of the execution time 
above, as well as the ratio of the standard deviation to the average as 
a percentage, which is an indication of the variability.

22.2    46.5
4.2     8.9
19.02%  19.12%


When I increased the number of loops by a factor of 10 to average over a 
longer time I got the following results for seven runs.

274  36.5
314  31.8
238  42.0
265  37.7
312  32.1
229  43.7
268  37.3

271.4   37.3
32.8    4.5
12.07%  12.03%


The average flush rate dropped to 80% of its previous value, and the 
variability dropped by about 40%.


After I saw your posting, I modified my code to explicitly delete the 
file before each run and repeated the tests. For 10 runs with 1000 loops 
I got:


12  83.3
12  83.3
11  90.9
15  66.7
12  83.3
14  71.4
14  71.4
14  71.4
12  83.3
12  83.3

12.8    78.9
1.3     7.9
10.29%  10.00%

The average flush rate increased by 70% and the variability dropped by 
about 50%.


I then repeated the test with loops of 10000 flushes per run and got the 
following results.

207  48.3
211  47.4
204  49.0
197  50.8
206  48.5

205.0   48.8
5.1     1.2
2.51%   2.55%


Again the average flush rate over the longer run is about 60% of what it 
was for the short runs. The variability is only 20% of what it was 
without the explicit delete.


It seems *really* strange that deleting the file would reduce the 
average run time of the 10000 loop case by more than 60 seconds. Neither 
overwriting the file nor deleting the file should take anywhere near this 
long. These are small files of only 10K characters after all.


Something else is going on here. If we can figure out what, we may be 
able to speed up SQLite under Windows. It currently seems to be running 
about 1/8 the speed of FreeBSD when doing inserts with individual 
transactions.


In any case we seem to be settling to a long term sustained rate of 
about 40 flushes per second for WinXP. Since SQLite is getting only 10 
inserts per second, it seems to me that SQLite must be doing about 4 
flushes per insert. Does that seem right to you Richard?


Dennis Cote


RE: [sqlite] Cannot load sqlite3.dll

2006-05-04 Thread Bob Dankert
You cannot use it directly in .Net as it is not a .Net module - you
need to reference its API using PInvoke (platform invoke).  You can
look at some of the .Net wrappers in the wiki at
http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers

Bob

-Original Message-
From: Paul Hunnisett [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 04, 2006 3:57 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Cannot load sqlite3.dll

I'm trying to add sqlite3.dll to my application in VS 2005.  I 
downloaded the dll from the sqlite home page.  VS simply says that it is
not a valid assembly and can't be loaded.

I have no idea what to do next...

Can anyone point me in the right direction?

Cheers

Paul Hunnisett



[sqlite] Cannot load sqlite3.dll

2006-05-04 Thread Paul Hunnisett
I'm trying to add sqlite3.dll to my application in VS 2005.  I 
downloaded the dll from the sqlite home page.  VS simply says that it is 
not a valid assembly and can't be loaded.


I have no idea what to do next...

Can anyone point me in the right direction?

Cheers

Paul Hunnisett



Re: [sqlite] Some advanced questions

2006-05-04 Thread Unit 5
--- [EMAIL PROTECTED] wrote:
> ...
> Most of the interfaces in bind.test are
> special-purpose
> testing hacks that are only available if you build
> the "testfixture".  They are not a part of the
> standard
> TCL API.
> 
> ...
> Three things to take away from this overview:
> 
>   (1)  The TCL API adds new syntax to the TCL
> language
>(the scripted eval and transaction methods). 
> This
>is an exceedingly powerful mechanism which
> cannot
>be accomplished with Algol-derived languages
> such
>as Perl, Python, Ruby, or Java.  You have to
> look
>outside of parsed languages to Lisp and
> Scheme
>to find this kind of raw power.
> 
>   (2)  Everything is done at a very high level.  No
>calls to prepare() or step() or finalize() or
>column_text().  Conversions and caching are
> all
>handled transparently and automatically.  You
>do not have to think about them.  This allows
> 
>you to devote more brain cycles to the
> problem
>you are trying to solve.

Richard,

I agree with you 100%.  The tcl api looks, no, *is*
quite powerful, simple and straightforward.  This is
really what attracted us to Sqlite in the first place.

What I was looking for is a way to test whether a
given SQL is valid or not, before executing it. 
Running the SQL may take too long just to have a
OK/not OK test.  More importantly, sometimes, the SQL
itself has side effects so that it should not be
executed twice (like an INSERT statement.)  

It looks like this functionality is already there as
you use it in the test suite.  Is there a reason not
to make it available generally?  At least a version of
it that does nothing else or that does not make the
full range of promises the regular sqlite3_prepare may
do (like setting cursors, setting things up for
fetches, etc.).

I am just trying to understand the author and
inventor's thought processes here more than anything.

I am sure you've heard it a lot but thanks a lot for
the great software and your generosity in making it
public!









Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread COS
Hi,

- Original Message - 
From: "Dennis Cote" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, May 04, 2006 2:27 PM
Subject: Re: [sqlite] SQLite :memory: performance difference between v2 and
v3?


> All,
>
> More mysteries. To investigate this low insert performance under WinXP I
> wrote a simple test program that loops writing one character to a file
> and then flushing the file. The strange thing is that it seems to
> alternate between two different run times as shown below. This is for 6
> consecutive runs.

If you are using WinXP with Service Pack 2 this could be the problem. I have
experienced a lot of issues with the Windows XP Firewall. The funniest (or
strangest) thing I have found so far is that the Firewall seems to block
certain things even if it is deactivated. The problem gets a lot worse if
you use anti-virus software. In my case I use VirusScan and although I
unchecked all the options to check TCP/IP connections, emails, etc it still
slows my network traffic down. If I disable it everything works perfectly.
For example I work with mysql and connections to query data from the server
(another PC in the local network) take forever when VirusScan is active.

Just some thoughts.

Best Regards,

COS



Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Doug Currie
Thursday, May 4, 2006, 1:27:49 PM, Dennis Cote wrote:

> More mysteries. To investigate this low insert performance under WinXP I
> wrote a simple test program that loops writing one character to a file
> and then flushing the file. The strange thing is that it seems to 
> alternate between two different run times ...

Your program...

$ flushtst
15 seconds, 67 flushes/sec

$ flushtst
26 seconds, 38 flushes/sec

$ flushtst
26 seconds, 38 flushes/sec

$ flushtst
27 seconds, 37 flushes/sec

$ flushtst
27 seconds, 37 flushes/sec

Add the line:

DeleteFile("test.txt");

in front of the CreateFile() call...

$ flushtst
15 seconds, 67 flushes/sec

$ flushtst
14 seconds, 71 flushes/sec

$ flushtst
13 seconds, 77 flushes/sec

$ flushtst
12 seconds, 83 flushes/sec

e




Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Jay Sprenkle

This program is almost completely I/O bound. It spends all its time
sleeping so its CPU usage is nearly zero. There might be some disk
contention from other programs, but I would think that should be fairly
constant. I am re-testing with longer run times to check this.


You give up your time slice to the operating system when you block
waiting on I/O. If the operating system doesn't return control to you
promptly after your current I/O completes, then you'll certainly slow
down. I've seen current versions of Windows just freeze for long
periods if a CD isn't readable, so I'm certain this can happen.

Given Ivan's notes about how NTFS works under the covers I would bet on his
idea over mine though.

Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Dennis Cote

Jay Sprenkle wrote:



Initially performance is good but then degrades?
Some other processes must be sucking up cpu time?


Jay,

This program is almost completely I/O bound. It spends all its time 
sleeping so its CPU usage is nearly zero. There might be some disk 
contention from other programs, but I would think that should be fairly 
constant. I am re-testing with longer run times to check this.


Dennis Cote


Re: [sqlite] Some advanced questions

2006-05-04 Thread drh
Dennis Cote <[EMAIL PROTECTED]> wrote:
> 
> I pulled the following from the bind.test file. It shows how 
> sqlite_prepare is used to test the parameter binding, and how the 
> sqlite_column_count and sqlite_column_name APIs are used.
> 

Most of the interfaces in bind.test are special-purpose
testing hacks that are only available if you build
the "testfixture".  They are not a part of the standard
TCL API.

Here is a quick review of the official TCL API:

You connect to the database by creating a database
object using the "sqlite3" command:

   sqlite3 db test.db

In the example above, the name of the newly created
database object is "db" and the database file is "test.db".
You can substitute appropriate values here, of course.

All interaction is by invoking methods on the database
object.  The most frequently used method is "eval".
Use it to evaluate SQL statements.

   db eval {CREATE TABLE ex1(a,b,c)}
   db eval {
 INSERT INTO ex1 VALUES(1,2,3);
   }

It is best to include the SQL inside {...}.  You can
use TCL variable names inside the {...} and SQLite will
recognize them and do the appropriate bindings to
access them automatically.  For example, to insert the
content of a file as a blob, you could do this:

   set in [open somefile.txt]
   set file_content [read $in]
   close $in
   db eval {
 INSERT INTO ex1 VALUES(4,5,$file_content)
   }

Notice that you do *not* have to quote $file_content
or escape any internal characters.  TCL is using the
binding mechanism to make this happen.

Note also that TCL is using sqlite3_prepare() internally.
It keeps a cache of recently used statements and 
automatically reuses them if they are invoked again.

Query results are returned as a list from the eval method.
Or, you can add a script after the SQL that is run once
for each row in the result set:

   db eval {SELECT * FROM ex1} {
 puts "a=$a b=$b c=$c"
   }

Within the script that follows eval, the value of each
column is loaded into variables with the same name as the
column.  So, for example, the value of column "a" is
loaded into a variable name "a".  Warning:  This creates
a problem if you have two or more columns with the same
name in your result set.  You are advised to use unique
column names.

Here is a quick shorthand:

   db eval {SELECT * FROM ex1} break

When the script is "break" the eval stops at the first
row.  This leaves the contents of the columns of the
first row in variables named $a, $b, and $c.  The
"continue" command in the eval script also works as
you would expect.

One more important feature is the transaction method.

   db transaction {
  # some script
  db eval {... whatever ...}
  # more script
   }

The transaction method runs its content inside an SQLite
transaction.  If the script fails, the transaction is
automatically rolled back.  If the script succeeds the
transaction automatically commits.  Transaction methods
can be nested.  Using the transaction method is much,
much nicer than manually trying to do BEGIN and COMMIT.
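For comparison, Python's stdlib sqlite3 wrapper offers the same commit-or-rollback convenience through the connection's context manager; a minimal sketch (illustrative, not part of the Tcl API being described here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ex1(a, b, c)")

try:
    with con:  # commits on success, rolls back if the block raises
        con.execute("INSERT INTO ex1 VALUES (1, 2, 3)")
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

count = con.execute("SELECT count(*) FROM ex1").fetchone()[0]
print(count)  # 0: the failed transaction was rolled back
```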

Three things to take away from this overview:

  (1)  The TCL API adds new syntax to the TCL language
   (the scripted eval and transaction methods).  This
   is an exceedingly powerful mechanism which cannot
   be accomplished with Algol-derived languages such
   as Perl, Python, Ruby, or Java.  You have to look
   outside of parsed languages to Lisp and Scheme
   to find this kind of raw power.

  (2)  Everything is done at a very high level.  No
   calls to prepare() or step() or finalize() or
   column_text().  Conversions and caching are all
   handled transparently and automatically.  You
   do not have to think about them.  This allows 
   you to devote more brain cycles to the problem
   you are trying to solve.

  (3)  There is no support for cursors anywhere in any
   of this.  Sorry.


   
 



Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Ivan Voras
--- Dennis Cote <[EMAIL PROTECTED]> wrote:

> All,
> 
> More mysteries. To investigate this low insert
> performance under WinXP I 
> wrote a simple test program that loops writing one
> character to a file 
> and then flushing the file. The strange thing is
> that it seems to 
> alternate between two different run times as shown
> below. This is for 6 
> consecutive runs.

Don't know for sure, but I can offer some educated
guesses for the scattering of results:

- NTFS is a fairly complex file system (more complex
than traditional FS-es), with several tables into
which file data must be written and cross linked, as
well as a data journal. The scattering of locations
for these journals means that seek times are
different, depending where the data is laid out on
disk. In particular because there's a journal,
consecutive program runs never actually write to the
same place on the disk even if it seems so to the
application. The results you have seen (alternating
between two values) can also be provoked on complex
database systems (in particular PostgreSQL) with
simple benchmarks (e.g. pgbench) on relativly simple
non-journaled file systems (such as UFS) when one
benchmark run nearly fills a write-ahead log and the
log gets processed/commited in the next run.

- Windows has many background disk users / writers -
Explorer, registry and other components are known to
"wake up" periodically and write their data (whatever
it is).




Re: [sqlite] Some advanced questions

2006-05-04 Thread Unit 5


--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> I pulled the following from the bind.test file. It
> shows how 
> sqlite_prepare is used to test the parameter
> binding, and how the 
> sqlite_column_count and sqlite_column_name APIs are
> used.
> 
> HTH
> Dennis Cote

Dennis,

This is great!  These commands do not seem to be
documented at the web page for tcl api.  I will check
out the tests to see what else is available.







Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Jay Sprenkle

On 5/4/06, Dennis Cote <[EMAIL PROTECTED]> wrote:

All,

More mysteries. To investigate this low insert performance under WinXP I
wrote a simple test program that loops writing one character to a file
and then flushing the file. The strange thing is that it seems to
alternate between two different run times as shown below. This is for 6
consecutive runs.



C:\Temp\test\Debug>test.exe
13 seconds, 77 flushes/sec

C:\Temp\test\Debug>test.exe
22 seconds, 45 flushes/sec

C:\Temp\test\Debug>test.exe
17 seconds, 59 flushes/sec

C:\Temp\test\Debug>test.exe
18 seconds, 56 flushes/sec

C:\Temp\test\Debug>test.exe
21 seconds, 48 flushes/sec

C:\Temp\test\Debug>test.exe
22 seconds, 45 flushes/sec

Initially performance is good but then degrades?
Some other processes must be sucking up cpu time?


Re: [sqlite] Some advanced questions

2006-05-04 Thread Dennis Cote

Unit 5 wrote:



Thank you very much for your answers!  It looks like 
sqlite3_prepare() is what I need.  


I have gone through the tcl api and did not find any
mappings to it or any command that matched its
functionality.  Do you have the reference or the
command name handy?
 


Unit,

I don't use TCL much so I can't help much more with this, but most of 
the sqlite test suite is done with TCL.


I pulled the following from the bind.test file. It shows how 
sqlite_prepare is used to test the parameter binding, and how the 
sqlite_column_count and sqlite_column_name APIs are used.


HTH
Dennis Cote

   proc sqlite_step {stmt N VALS COLS} {
 upvar VALS vals
 upvar COLS cols
 set vals [list]
 set cols [list]

 set rc [sqlite3_step $stmt]
 for {set i 0} {$i < [sqlite3_column_count $stmt]} {incr i} {
   lappend cols [sqlite3_column_name $stmt $i]
 }
 for {set i 0} {$i < [sqlite3_data_count $stmt]} {incr i} {
   lappend vals [sqlite3_column_text $stmt $i]
 }

 return $rc
   }

   do_test bind-1.1 {
 set DB [sqlite3_connection_pointer db]
 execsql {CREATE TABLE t1(a,b,c);}
 set VM [sqlite3_prepare $DB {INSERT INTO t1 VALUES(:1,?,:abc)} -1 
TAIL]

 set TAIL
   } {}
   do_test bind-1.1.1 {
 sqlite3_bind_parameter_count $VM
   } 3
   do_test bind-1.1.2 {
 sqlite3_bind_parameter_name $VM 1
   } {:1}
   do_test bind-1.1.3 {
 sqlite3_bind_parameter_name $VM 2
   } {}
   do_test bind-1.1.4 {
 sqlite3_bind_parameter_name $VM 3
   } {:abc}
   do_test bind-1.2 {
 sqlite_step $VM N VALUES COLNAMES
   } {SQLITE_DONE}
   do_test bind-1.3 {
 execsql {SELECT rowid, * FROM t1}
   } {1 {} {} {}}
   do_test bind-1.4 {
 sqlite3_reset $VM
 sqlite_bind $VM 1 {test value 1} normal
 sqlite_step $VM N VALUES COLNAMES
   } SQLITE_DONE
   do_test bind-1.5 {
 execsql {SELECT rowid, * FROM t1}
   } {1 {} {} {} 2 {test value 1} {} {}}
   do_test bind-1.6 {
 sqlite3_reset $VM
 sqlite_bind $VM 3 {'test value 2'} normal
 sqlite_step $VM N VALUES COLNAMES
   } SQLITE_DONE
   do_test bind-1.7 {
 execsql {SELECT rowid, * FROM t1}
   } {1 {} {} {} 2 {test value 1} {} {} 3 {test value 1} {} {'test 
value 2'}}

   do_test bind-1.8 {
 sqlite3_reset $VM
 set sqlite_static_bind_value 123
 sqlite_bind $VM 1 {} static
 sqlite_bind $VM 2 {abcdefg} normal
 sqlite_bind $VM 3 {} null
 execsql {DELETE FROM t1}
 sqlite_step $VM N VALUES COLNAMES
 execsql {SELECT rowid, * FROM t1}
   } {1 123 abcdefg {}}
   do_test bind-1.9 {
 sqlite3_reset $VM
 sqlite_bind $VM 1 {456} normal
 sqlite_step $VM N VALUES COLNAMES
 execsql {SELECT rowid, * FROM t1}
   } {1 123 abcdefg {} 2 456 abcdefg {}}

   do_test bind-1.99 {
 sqlite3_finalize $VM
   } SQLITE_OK



Re: [sqlite] SQLite :memory: performance difference between v2 and v3?

2006-05-04 Thread Dennis Cote

All,

More mysteries. To investigate this low insert performance under WinXP I 
wrote a simple test program that loops writing one character to a file 
and then flushing the file. The strange thing is that it seems to 
alternate between two different run times as shown below. This is for 6 
consecutive runs.


   C:\src\sqlite_speed>win_flush_test.exe
   26 seconds, 38 flushes/sec

   C:\src\sqlite_speed>win_flush_test.exe
   18 seconds, 56 flushes/sec

   C:\src\sqlite_speed>win_flush_test.exe
   26 seconds, 38 flushes/sec

   C:\src\sqlite_speed>win_flush_test.exe
   19 seconds, 53 flushes/sec

   C:\src\sqlite_speed>win_flush_test.exe
   26 seconds, 38 flushes/sec

   C:\src\sqlite_speed>win_flush_test.exe
   18 seconds, 56 flushes/sec

Does anyone have any ideas what might cause this kind of behavior?

My test code is included below. Note in the version using the standard 
library, fflush() seems to be ignored under WinXP because it does about 
1M flushes per second. Hence the Win32 API version which produces the 
results above.


Dennis Cote




#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#ifdef WIN32

#include <windows.h>

int main(int argc, char *argv[])
{
   int i, count = 1000;
   char c;
   time_t bgn, end;
   double t;
   HANDLE f;
   long written;

   f = CreateFile("test.txt", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, 
FILE_ATTRIBUTE_NORMAL, 0);


   bgn = time(NULL);
   for (i = 0; i < count; i++) {
   c = 'a' + i % 26;
   WriteFile(f, &c, 1, &written, NULL);
   FlushFileBuffers(f);   
   }

   end = time(NULL);
  
   CloseHandle(f);


   t = difftime(end, bgn);
   printf("%0.0f seconds, %0.0f flushes/sec\n", t, count / t);
  
   //system("PAUSE");  //for Dev-Cpp IDE
   return 0;

}

#else

int main(int argc, char *argv[])
{
   int i, count = 100;
   FILE* f;
   char c;
   time_t bgn, end;
   double t;

   f = fopen("test.txt", "w");

   bgn = time(NULL);
   for (i = 0; i < count; i++) {
   c = 'a' + i % 26;
   fputc(c, f);
   fflush(f);   
   }

   end = time(NULL);
  
   fclose(f);


   t = difftime(end, bgn);
   printf("%0.0f seconds, %0.0f flushes/sec\n", t, count / t);
  
   //system("PAUSE");  //for Dev-Cpp IDE   
   return 0;

}

#endif



Re: [sqlite] Detecting table scan

2006-05-04 Thread Dennis Cote

Doug Nebeker wrote:


So does that mean if I get "TABLE xyz" in the 'detail' column back and
it does NOT mention an index that a full table scan is taking place?
I found some info about EXPLAIN QUERY PLAN
(http://www.sqlite.org/cvstrac/wiki?p=QueryPlans) but haven't found out
exactly what the response columns mean.



Doug,

Yes, that's what that means. Here are some simple examples. Note it is 
easier to see what is happening if you turn on the headers and go to 
column mode.


   SQLite version 3.3.5
   Enter ".help" for instructions
   sqlite> create table t(a integer primary key, b);
   sqlite> .mode column
   sqlite> .header on
   sqlite> explain query plan select * from t;
   order   fromdetail
   --  --  --
   0   0   TABLE t
   sqlite> explain query plan select * from t where a > 100;
   order   fromdetail
   --  --  -
   0   0   TABLE t USING PRIMARY KEY
   sqlite> explain query plan select * from t where b > 100;
   order   fromdetail
   --  --  --
   0   0   TABLE t
   sqlite> create index t_b on t(b);
   sqlite> explain query plan select * from t where b > 100;
   order   fromdetail
   --  --  --
   0   0   TABLE t WITH INDEX t_b
   sqlite> create table t2(b, c);
   sqlite> explain query plan select * from t left join t2 using(b) 
where c > 100;

   order   fromdetail
   --  --  --
   0   0   TABLE t
   1   1   TABLE t2
   sqlite> explain query plan select * from t left join t2 using(b) 
where a > 100;

   order   fromdetail
   --  --  -
   0   0   TABLE t USING PRIMARY KEY
   1   1   TABLE t2
   sqlite>

The first query does a table scan, and it has to, because it needs to 
return all records. The second uses the primary key index. The third 
does a table scan because there is no index on column b. Finally, the 
fourth uses the new index on column b created just before. The last two 
show how multiple table scans are listed for joins.
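A session along the same lines can be reproduced from Python's built-in sqlite3 module (this is a sketch; the exact wording of the `detail` column varies between SQLite versions, e.g. newer releases print "SCAN t" / "SEARCH t USING INDEX ..." rather than "TABLE t"):

```python
import sqlite3

# In-memory database mirroring the shell session above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b)")

# Without an index on b, the plan reports a full scan of t.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE b > 100").fetchall()
detail_before = plan[0][-1]  # last column is the human-readable detail

conn.execute("CREATE INDEX t_b ON t(b)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE b > 100").fetchall()
detail_after = plan[0][-1]

print(detail_before)  # mentions a scan of t, no index
print(detail_after)   # mentions index t_b
```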


I'm not sure what the order and from columns are myself. Perhaps someone 
else can explain them.


HTH
Dennis Cote


Re: [sqlite] Some advanced questions

2006-05-04 Thread Unit 5
--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> What you probably want to do is use
> sqlite3_prepare() to check if the 
> statement is syntactically valid. If it is you will
> get a success return 
> code, if not you will get an error return code.
> 
> I understand you are using TCL but I believe all
> these API functions are 
> exposed through the TCL API as well.
> 

Hi Dennis,

Thank you very much for your answers!  It looks like 
sqlite3_prepare() is what I need.  

I have gone through the tcl api and did not find any
mappings to it or any command that matched its
functionality.  Do you have the reference or the
command name handy?

Thanks again!

 


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


Re: [sqlite] Some advanced questions

2006-05-04 Thread Eugene Wee

Hi 'Unit 5',


3) I would like to test the validity of sql statements
before executing them.  So I thought I could use "db
complete" command.  But now I am not sure what "db
complete" command does.  In my tests, it returns 0
(false) regardless of the sql statement.  In fact the
only times I have gotten true is if the statement is
empty.  The statements are (all are valid and return
data):
  % db complete "select 2 * 2 "
  % db complete "select a from tab1"
  % db complete "select a from tab1 where a < 10"
  % db complete "select a from tab1 where a < 10
order by a"


You could take a look at:
http://www.sqlite.org/tclsqlite.html#complete

Basically, it does not validate the syntax, only checks that the SQL statement 
is complete (i.e. has a terminating semi-colon).
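The same check is exposed in Python's built-in sqlite3 module as `sqlite3.complete_statement()`, which makes the semicolon requirement easy to see:

```python
import sqlite3

# complete_statement() wraps sqlite3_complete(): it does NOT validate
# syntax, it only checks for a terminating semicolon.
print(sqlite3.complete_statement("select a from tab1"))   # no semicolon
print(sqlite3.complete_statement("select a from tab1;"))  # complete
```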


Regards,
Eugene Wee


Re: [sqlite] Some advanced questions

2006-05-04 Thread Dennis Cote

Unit 5 wrote:


1) If I have a statement that includes "WHERE 0 = 1",
does sqlite optimize the query and return right away
or would it execute the query anyway?


 


It will be executed.
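A quick sketch of what "executed anyway" means in practice, using Python's built-in sqlite3 module (table and data here are made up): the statement is prepared and stepped, but the constant-false WHERE clause simply matches no rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a)")
conn.execute("INSERT INTO t VALUES (1)")

# The query runs; the WHERE 0 = 1 predicate just filters out every row.
rows = conn.execute("SELECT * FROM t WHERE 0 = 1").fetchall()
print(rows)  # []
```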


2) Is there a way to get column names from a statement
without (or before) executing the statement?  For
example, when a user enters a statement like "select *
from ...".

 

Prepare the statement using the sqlite3_prepare() API, then use the 
sqlite3_column_name() API to get the names of the result columns. The 
query won't execute until you use the sqlite3_step() API.
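Python's sqlite3 module doesn't expose prepare/step separately, but the same effect is visible through `cursor.description`, which is populated once the statement is prepared, before any row is fetched (table name and columns here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1(a, b, c)")  # empty table, no rows to return

# description is available as soon as the SELECT is prepared; each entry
# is a 7-tuple whose first element is the column name.
cur = conn.execute("SELECT * FROM tab1")
names = [d[0] for d in cur.description]
print(names)
```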




3) I would like to test the validity of sql statements
before executing them.  So I thought I could use "db
complete" command.  But now I am not sure what "db
complete" command does.  In my tests, it returns 0
(false) regardless of the sql statement.  In fact the
only times I have gotten true is if the statement is
empty.  The statements are (all are valid and return
data):
 % db complete "select 2 * 2 "
 % db complete "select a from tab1"
 % db complete "select a from tab1 where a < 10"
 % db complete "select a from tab1 where a < 10
order by a"

 

This API function basically checks for lexical completeness for the shell 
input routines. It simply checks for a lexically valid statement that 
ends with a semicolon. Your statements don't end with semicolons, hence 
they return false.


What you probably want to do is use sqlite3_prepare() to check if the 
statement is syntactically valid. If it is you will get a success return 
code, if not you will get an error return code.
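A sketch of this idea in Python's built-in sqlite3 module (the helper name is made up; sqlite3 doesn't expose prepare directly, so prefixing with EXPLAIN lets the statement be parsed without any side effects if it happens to be a DELETE or UPDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1(a)")

def is_valid_sql(conn, sql):
    # A parse failure raises an error at prepare time; EXPLAIN keeps the
    # statement from actually modifying anything when it is valid.
    try:
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False

print(is_valid_sql(conn, "select a from tab1"))   # valid
print(is_valid_sql(conn, "selekt a from tab1"))   # syntax error
```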


I understand you are using TCL but I believe all these API functions are 
exposed through the TCL API as well.


HTH
Dennis Cote


Re: [sqlite] Detecting table scan

2006-05-04 Thread Doug Nebeker
Thanks Dennis.

So does that mean if I get "TABLE xyz" in the 'detail' column back and
it does NOT mention an index that a full table scan is taking place?
I found some info about EXPLAIN QUERY PLAN
(http://www.sqlite.org/cvstrac/wiki?p=QueryPlans) but haven't found out
exactly what the response columns mean.

Thanks
Doug

>>I have a wrapper class that I use for SQLite that I'm quite happy with.
>>I've decided that if it is compiled for DEBUG usage, I want it to do an
>>EXPLAIN on every SQL statement and then check the output to see if any
>>table scans are happening (I'm just barely good enough at SQL to be 
>>dangerous).  This way I'll be notified if I've got some statements or 
>>table definitions that could use some help.
>>  
>>
>Doug,
>
>Instead of using the EXPLAIN command, use the EXPLAIN QUERY PLAN
>command. Its output tells you how it is scanning the tables and indexes
>to implement a query.
>
>HTH
>Dennis Cote


To find out more about Reuters visit www.about.reuters.com

Any views expressed in this message are those of the individual sender, except 
where the sender specifically states them to be the views of Reuters Ltd.



[sqlite] Some advanced questions

2006-05-04 Thread Unit 5
Hello,

I started with Sqlite just a few days ago and given
how simple its api is, I think I am ready to do some
more complex things with it.  I am primarily using it
from the tcl api so far.

I have a few questions:

1) If I have a statement that includes "WHERE 0 = 1",
does sqlite optimize the query and return right away
or would it execute the query anyway?


2) Is there a way to get column names from a statement
without (or before) executing the statement?  For
example, when a user enters a statement like "select *
from ...".

 
3) I would like to test the validity of sql statements
before executing them.  So I thought I could use "db
complete" command.  But now I am not sure what "db
complete" command does.  In my tests, it returns 0
(false) regardless of the sql statement.  In fact the
only times I have gotten true is if the statement is
empty.  The statements are (all are valid and return
data):
  % db complete "select 2 * 2 "
  % db complete "select a from tab1"
  % db complete "select a from tab1 where a < 10"
  % db complete "select a from tab1 where a < 10
order by a"






[sqlite] Re: sqlite puzzle

2006-05-04 Thread Igor Tandetnik

JP  wrote:

Thanks all.  Actually I was just looking for the position of a single
name.  Based on your feedback, this one works to get the results:

SELECT count(*) FROM clients WHERE name<'foo';

but its performance is directly proportional to the position of the
name in the table.  For example, searching for Zach takes longer than
searching for Abigail.  It seems it is not using any index, but rather
doing a record by record sweep on the 'count'.


If it were not using any index but performed a direct scan of the table, 
then it would have had to examine every record every time regardless of 
the string it were searching for. Thus the time would be proportional to 
the number of records in the table, and independent of the parameter.


Most likely the query uses an index on name, and has to scan the index 
until the name is encountered to count up the number of records, so the 
time is proportional to the position of the parameter in the index. 
Given an index stored as a B-tree, I can't think of any algorithm to do 
it any faster than O(N) worst case.
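The "position of a name" trick from the thread can be sketched with Python's built-in sqlite3 module (table contents here are made up: single-letter names 'a' through 'z'):

```python
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients(name TEXT)")
conn.execute("CREATE INDEX clients_name ON clients(name)")
conn.executemany("INSERT INTO clients VALUES (?)",
                 [(c,) for c in string.ascii_lowercase])

# Position of 'foo' = number of names sorting before it; with the index,
# SQLite still has to walk the index entries up to that point, hence O(N).
pos = conn.execute(
    "SELECT count(*) FROM clients WHERE name < 'foo'").fetchone()[0]
print(pos)  # 6 ('a' through 'f' sort before 'foo')
```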


Igor Tandetnik 



Re: [sqlite] Temporary or regular table?

2006-05-04 Thread Gerry Snyder

Unit 5 wrote:

It seems that you cannot create a temporary table in a
database other than "temp".  In other words, the
following is an error:

% CREATE TEMP TABLE temp_db.temp_table ...
temporary table name must be unqualified

But this is not:
% CREATE TEMP TABLE temp.temp_table ...

Neither is this:
% CREATE TABLE temp_db.temp_table ...

In this last case, is the table being created as a
temporary table?  Or is it a regular table?  

  


According to the very complete docs on the webpage:

http://sqlite.org/lang_createtable.html

If the "TEMP" or "TEMPORARY" keyword occurs in between "CREATE" and 
"TABLE" then the table that is created is only visible within that same 
database connection and is automatically deleted when the database 
connection is closed. Any indices created on a temporary table are also 
temporary. Temporary tables and indices are stored in a separate file 
distinct from the main database file.


If a <database-name> is specified, then the table is created in the 
named database. It is an error to specify both a <database-name> and the 
TEMP keyword, unless the <database-name> is "temp". If no database name 
is specified, and the TEMP keyword is not present, the table is created 
in the main database.



There would seem to be no reason why your last case would result in a 
temporary table (although surely it would have been _much_ faster for 
you to just check, rather than ask here).
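For what it's worth, the check is quick to run from Python's built-in sqlite3 module (this sketch uses "main" as the non-temp qualifier, since attaching a separate temp_db isn't needed to show the error):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Qualifying with "temp" is allowed...
conn.execute("CREATE TEMP TABLE temp.t1(x)")

# ...but TEMP plus any other qualifier is rejected with
# "temporary table name must be unqualified".
try:
    conn.execute("CREATE TEMP TABLE main.t2(x)")
    failed = False
except sqlite3.OperationalError:
    failed = True
print(failed)  # True
```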


HTH,

Gerry




Re: [sqlite] More columns vs. several tables

2006-05-04 Thread Felix Schwarz


On 2006-05-02 at 18:46, [EMAIL PROTECTED] wrote:


Felix Schwarz <[EMAIL PROTECTED]> wrote:


I'm wondering whether there is a big
performance hit for a simple

SELECT binarydata FROM entries WHERE somehash = 27817298;

when I use

CREATE TABLE entries(
entry_id INTEGER PRIMARY KEY,
somehash  INTEGER,
property1 INTEGER,
property2  VARCHAR(255),
property3  VARCHAR(255),
binarydata BLOB
);

instead of splitting the binary data (around 40K each) into two
tables like this:

CREATE TABLE entries(
entry_id INTEGER PRIMARY KEY,
somehash  INTEGER,
property1 INTEGER,
property2  VARCHAR(255),
property3  VARCHAR(255),
binary_id INTEGER
);

CREATE TABLE binaries(
binary_id INTEGER PRIMARY KEY,
binarydata BLOB
);



If you do things like this:

   UPDATE entries SET property1=property1+1;

In other words, if you make changes that do not
alter the blob, then the second form can be dramatically
faster since the blob never has to be copied.  The whole blob
has to be copied several times to do this update in the first
form.  In order to do an update, SQLite has to load the entire
row into memory, decode it, substitute the changed value(s),
reencode the row, then write the entire row back out to disk.
So the entire row has to be read and written in order to change
a single byte.  (This is because SQLite uses variable-length
encodings for everything, including integers, so changing a
single byte can change all subsequent bytes in the row.)  So
if you are going to be updating things, it is best to keep
the rows relatively small by moving large blobs out to a
separate table.

If you do things like this:

   SELECT sum(property1) FROM entries;

Then the second form is slightly faster because the entries table
will be smaller and you will get better locality of reference.
SQLite does not read the blob if it is not used here, but it does
have to skip over it.
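A minimal sketch of the two-table layout in Python's built-in sqlite3 module, showing the update that never touches the blob (column names follow the schema from the question; the 40K blob content is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries(
        entry_id INTEGER PRIMARY KEY,
        somehash INTEGER,
        property1 INTEGER,
        binary_id INTEGER
    );
    CREATE TABLE binaries(
        binary_id INTEGER PRIMARY KEY,
        binarydata BLOB
    );
""")

# ~40K blob lives in the side table, referenced by binary_id.
conn.execute("INSERT INTO binaries VALUES (1, ?)", (b"\x00" * 40960,))
conn.execute("INSERT INTO entries VALUES (1, 27817298, 0, 1)")

# This rewrites only the small entries row; the blob row in binaries
# is never read or copied.
conn.execute("UPDATE entries SET property1 = property1 + 1")
prop = conn.execute("SELECT property1 FROM entries").fetchone()[0]
print(prop)  # 1
```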
--
D. Richard Hipp   <[EMAIL PROTECTED]>


Thank you (and everybody else) very much for the answers. They help a 
lot. However, for the second part of your answer about the second 
form being faster because the entries table will be smaller and me 
getting better locality of reference:


If I don't build the two tables in the

INSERT INTO entries ..
INSERT INTO entries ..
INSERT INTO entries ..

INSERT INTO binaries ..
INSERT INTO binaries ..
INSERT INTO binaries ..

but instead do a

INSERT INTO entries ..
INSERT INTO binaries ..

INSERT INTO entries ..
INSERT INTO binaries ..

INSERT INTO entries ..
INSERT INTO binaries ..

over a longer period of time .. does the part of your answer about  
better locality still apply? Or does this way of creating the table  
data lead to a loss of this benefit?


Thanks in advance,

Felix



Re: [sqlite] sqlite puzzle

2006-05-04 Thread JP
Thanks all.  Actually I was just looking for the position of a single 
name.  Based on your feedback, this one works to get the results:


SELECT count(*) FROM clients WHERE name<'foo';

but its performance is directly proportional to the position of the name 
in the table.  For example, searching for Zach takes longer than 
searching for Abigail.  It seems it is not using any index, but rather 
doing a record by record sweep on the 'count'.


jp