[sqlite] Network access to SQLite

2004-04-15 Thread Aaron Patterson
I'm working on a project, and I thought it would be of some interest to 
people in this group.  I've written an Apache module used for giving 
access to an SQLite database over a network.  I have also written a 
Perl DBD driver so that the SQLite server can be accessed from Perl.  The 
homepage is here:

http://modsqlite.sourceforge.net/

I hope someone here finds it useful!

--Aaron

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [sqlite] Concurrency, MVCC

2004-04-15 Thread Mark D. Anderson

On Thu, 15 Apr 2004 20:16:32 -0400, "Doug Currie" <[EMAIL PROTECTED]> said:

> I used this design in a proprietary database in the late 1980s. The
> only reason I didn't consider modifying SQLite this way up until now
> is that I was anticipating BTree changes for 3.0, so I confined my
> efforts to the pager layer.

btw, another example of this class of approach, with a bsd-style
license, is GigaBASE from the prolific Konstantin Knizhnik:
   http://www.garret.ru/~knizhnik/gigabase/GigaBASE.htm

It does not offer anything approaching full SQL.
It does, however, have several features not available in sqlite:
- online backup [1]
- master-slave replication
- group commit [2]
- parallel query (multiple threads for full table scans)

[1] There is a kind of support in sqlite for online backup, via
   echo '.dump' | sqlite ex1 > backup.sql
though this would result in a largish file and blocks
everything else.

[2] Grouping commits is a mechanism that allows for pending transactions 
to get fsync'd together.
This allows for greater performance with a risk only of losing
some transactions (at most the size of the group), but not
greater risk of a corrupted database.
This is more flexibility than sqlite's big knob of OFF/NORMAL/FULL.
It is also offered by DB2, Oracle, and MySQL.
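The mechanism is easy to sketch: pending commits append to a shared log, and a single fsync then covers the whole batch. A toy Python illustration (all class and method names here are invented for illustration, not GigaBASE's actual API):

```python
import os
import tempfile

class GroupCommitLog:
    """Toy write-ahead log that batches pending commits into one fsync.

    Illustrative only: names are invented, not GigaBASE's real interface.
    """

    def __init__(self, path):
        self.f = open(path, "ab")
        self.pending = []  # transactions waiting for the next group fsync

    def commit(self, txn_bytes):
        # Record the transaction but defer the expensive fsync.
        self.pending.append(txn_bytes)

    def flush_group(self):
        # One write pass and one fsync cover every pending transaction.
        # On power loss, at most the unsynced group is lost; the database
        # itself is never left corrupted.
        for t in self.pending:
            self.f.write(t)
        self.f.flush()
        os.fsync(self.f.fileno())
        n = len(self.pending)
        self.pending.clear()
        return n

path = os.path.join(tempfile.mkdtemp(), "group.log")
log = GroupCommitLog(path)
for i in range(5):
    log.commit(b"txn %d\n" % i)
synced = log.flush_group()   # five transactions, a single fsync
```

The trade-off is exactly the one described above: latency and durability of the last group are exchanged for far fewer fsyncs, with no added corruption risk.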



In idle moments I've toyed with what it would take to splice
GigaBASE with the query parser and planner from
lambda-db or sqlite.
But then I wake up...

-mda




[sqlite] anyone seen this SQLite related product?

2004-04-15 Thread Darren Duncan
I don't know if the creator is on this list, but just now I saw an 
article on the MacCentral website about a SQLite manager program 
written with the RealBasic RAD tool.  http://www.sqlabs.net/  Thought 
I'd mention it. -- Darren Duncan



Re: [sqlite] Concurrency, MVCC

2004-04-15 Thread Doug Currie

Thursday, April 15, 2004, 9:16:01 AM, Christian Smith wrote:

> On Wed, 14 Apr 2004, Doug Currie wrote:

>>One way to get table level locking without a great deal of pain is to
>>integrate the shadow paging ideas with BTree management. Rather than
>>using page tables for the shadow pages, use the BTrees themselves.
>>This means that any change to a BTree requires changes along the
>>entire path back to the root so that only free pages are used to store
>>new data, including the BTree itself. Writing the root page(s) of the
>>BTree(s) commits the changes to that table (these tables).

> Actually, this gets my vote. Keeps the pager layer the same,

The pager gets *much* simpler because it doesn't need to make a log
file. The log file is not necessary because writes only go to free
pages.

Well, there would be one write-ahead log. It's needed to prevent
partial updates to the page number pointers to the root page(s) of
the BTree(s) at commit. This log is created at commit time, and is
much simpler and much smaller than the present log file.

> and only requires a cache of the root btree for each object
> (table/index) in the database to be maintained on a per-transaction
> basis

Yes, you need to cache the page number of each BTree root at
transaction start.

You'd also need a forest of free pages organized by transaction so
they can be freed at the right time (when the oldest read-transaction
that can reference them has completed).

> , reducing the complications of what to do under memory pressure
> when pages are spilled from the cache as we should be able to keep
> them in memory all the time.

Yes.

> Committing of a transaction would then be an atomic update of the root
> btree page number in the catalog table.

Yes, atomically for all the BTrees modified. This is probably a single
page of data (4 to 8 bytes of root page number per BTree, i.e., per
table and per index). Well, I usually assume fairly large pages
compared with SQLite's default of 1K. Using larger pages also
decreases the depth of the BTree which reduces the number of pages
written.

This design works well. It has the advantage (compared with shadow
pager) that reads are not burdened with page table indirection. It has
the potential disadvantage (compared with SQLite 2.8) that small
writes can modify several pages (based on the depth of the BTree).
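The copy-on-write path update described above can be sketched in a few lines (illustrative Python; a simple parent/child chain stands in for a real BTree, and every name is invented for the sketch):

```python
# Toy copy-on-write "BTree": committed pages are immutable, writes go
# only to free (fresh) pages, and commit is one atomic root update.

class Store:
    def __init__(self):
        self.pages = {}       # page number -> payload
        self.next_page = 1
        self.root = None      # committed root page number

    def alloc(self, payload):
        # Writes never touch committed pages; always use a fresh page.
        pno = self.next_page
        self.next_page += 1
        self.pages[pno] = payload
        return pno

def rewrite_path(store, path_payloads):
    """Copy every page on the leaf-to-root path into free pages and
    return the new root page number. Nothing is committed yet."""
    child = None
    for payload in reversed(path_payloads):   # leaf first, root last
        child = store.alloc((payload, child))
    return child

store = Store()
store.root = rewrite_path(store, ["root v1", "leaf v1"])
old_root = store.root

new_root = rewrite_path(store, ["root v2", "leaf v2"])
# Readers still see the old tree up to this single atomic publish:
store.root = new_root
```

Note how the old root and its pages survive untouched until the root pointer flips, which is what lets concurrent readers keep a consistent snapshot and makes the per-commit log small.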

I used this design in a proprietary database in the late 1980s. The
only reason I didn't consider modifying SQLite this way up until now
is that I was anticipating BTree changes for 3.0, so I confined my
efforts to the pager layer.

e





Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread Greg Miller
D. Richard Hipp wrote:

From what I am told, most IDE drives do signal the OS when the data
reaches the platter.  I'm also told that the Linux fsync() call does
not return until it gets that signal.  The Windows FlushFileBuffers(),
on the other hand, does not wait for the data to get to platter.  So
on a windows system, there is a brief moment of vulnerability where
a power loss can lose data.  But on Linux, that window of vulnerability
is zero.
The above is how IDE drives are *supposed* to work.  There is wide-
spread suspicion that many cheap IDE drives do not implement the
protocol correctly.  If you have one of those broken IDE disks,
all bets are off.


Keep in mind that I'm simply parroting my interpretation of the 
discussions over on the mailing lists at freebsd.org... You might want 
to go straight to the horse's mouth instead of having it filtered 
(possibly incorrectly) through me. :)

I am also told that the Linux IDE driver is broken with respect to
media errors.  If the disk drive has a media error, Linux does not
take appropriate corrective action, nor does it alert the user-space
code.  I don't know how true this is or if it is really a problem.
(How common are media errors?)


Not very common, but I don't know anything about the Linux ATA driver, so I 
couldn't begin to guess just how badly broken it might or might not be.

Regardless of the situation, though, the window of vulnerability
during which a power loss might cause database corruption is small.
And Liz is reporting that she can reproduce the corruption
consistently.  So perhaps her troubles have a different cause.
Even a small window could do the job if it's being written to at a high 
rate of speed. By the time one set of writes actually hits the disk, 
more may be in flight. Dunno, there could be any number of factors 
contributing to this.

I guess the moral of the story is that reliable power is important.
--
http://www.classic-games.com/ http://www.indie-games.com/


Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread D. Richard Hipp
Greg Miller wrote:
Liz Steel wrote:

You say that I shouldn't get a corrupt database when I pull the power, 
but I am consistently getting this. I am using SQLite version 2.8.9 
using the C++ interface running on Windows XP Home. Is there anything 
I can do to stop this happening?


If you have an IDE hard drive that's caching writes, there's not much 
the OS and database software can do to prevent corruption on power loss. 
It's possible to avoid this with tagged queueing, but most drives don't 
support that. The FreeBSD folks tried to solve this by turning off write 
caching by default. Unfortunately, this hurt performance so much they 
had to turn it back on and just recommend SCSI drives for important data.
I looked into this some and came back with different information.
Who can tell me what is right?
From what I am told, most IDE drives do signal the OS when the data
reaches the platter.  I'm also told that the Linux fsync() call does
not return until it gets that signal.  The Windows FlushFileBuffers(),
on the other hand, does not wait for the data to get to platter.  So
on a windows system, there is a brief moment of vulnerability where
a power loss can lose data.  But on Linux, that window of vulnerability
is zero.
The above is how IDE drives are *supposed* to work.  There is wide-
spread suspicion that many cheap IDE drives do not implement the
protocol correctly.  If you have one of those broken IDE disks,
all bets are off.
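For what it's worth, the call in question is easy to exercise from application code. A small Python sketch of the write-then-sync pattern (os.fsync wraps fsync(2) on Linux and, as I understand it, the corresponding flush call on Windows, subject to exactly the drive caveats above):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and ask the OS to push it to the platter.

    Per the discussion above, fsync on Linux should not return until
    the drive reports the data written; on Windows the equivalent
    flush leaves a brief window.  A lying IDE write cache can defeat
    either call.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)   # the window of vulnerability closes here
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "journal")
durable_write(path, b"commit record")
```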
I am also told that the Linux IDE driver is broken with respect to
media errors.  If the disk drive has a media error, Linux does not
take appropriate corrective action, nor does it alert the user-space
code.  I don't know how true this is or if it is really a problem.
(How common are media errors?)
Regardless of the situation, though, the window of vulnerability
during which a power loss might cause database corruption is small.
And Liz is reporting that she can reproduce the corruption
consistently.  So perhaps her troubles have a different cause.
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565


Re: [sqlite] XML translation inquiry

2004-04-15 Thread Andrew Piskorski
On Thu, Apr 15, 2004 at 02:37:13PM -0400, rich coco wrote:

> >>the goal would be to abstract out of the application the entire
> >>DB schema and allow the application to work exclusively with XML -
> >>and the generated methods - wrt not only data representation and
> >>transport but data storage and retrieval too. (ie, the application need
> >
> >And WHY would you want to do that exactly?  To me it sounds like an
> >excellent strategy to recommend to a competitor.

> please tell me more...as in why it's a bad idea.

It's not that I know it to be a bad idea - I don't.  Perhaps it's
actually a very nice idea.  But without knowing more, it sounds like a
good way to make things a lot more complicated than they need to be
for no gain.  If you're using an RDBMS, why not just use the RDBMS?
Generating some sort of limited XML format for sending over the wire
to some remote application, sure I can see the value of that.  But
converting your entire relational schema to XML for internal use in
your own program?  Why?  What can that possibly buy you over just
using the relational schema directly?

I mean, heck, the relational model was invented for a reason, why
would you want to transform it to a hierarchical XML model and have your
program manipulate that?  And if (for some reason I can't fathom) what
you really want is some sort of tree-structured XML schema, then why
not use a native object or XML database or persistent storage API
intended for that, rather than repeatedly converting between the RDBMS
and XML?

If you tell me you're sure there is some good reason I may be prepared
to believe you, but offhand I can't think of one myself.

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/




Re: [sqlite] XML translation inquiry

2004-04-15 Thread rich coco
please tell me more...as in why it's a bad idea.

performance? are the in-memory translations really that
cpu intensive? what if the data sets are never very large?

thanks,

- rich

Andrew Piskorski wrote:
On Thu, Apr 15, 2004 at 01:47:01PM -0400, rich coco wrote:


the goal would be to abstract out of the application the entire
DB schema and allow the application to work exclusively with XML -
and the generated methods - wrt not only data representation and
transport but data storage and retrieval too. (ie, the application need


And WHY would you want to do that exactly?  To me it sounds like an
excellent strategy to recommend to a competitor.
But if you really want to do it, Red Hat CCM/RHEA (open source, in
Java) does include some sort of object-relational mapping layer:
  http://rhea.redhat.com/

--
rich coco
[EMAIL PROTECTED]


Re: [sqlite] XML translation inquiry

2004-04-15 Thread Andrew Piskorski
On Thu, Apr 15, 2004 at 01:47:01PM -0400, rich coco wrote:

> the goal would be to abstract out of the application the entire
> DB schema and allow the application to work exclusively with XML -
> and the generated methods - wrt not only data representation and
> transport but data storage and retrieval too. (ie, the application need

And WHY would you want to do that exactly?  To me it sounds like an
excellent strategy to recommend to a competitor.

But if you really want to do it, Red Hat CCM/RHEA (open source, in
Java) does include some sort of object-relational mapping layer:

  http://rhea.redhat.com/

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/




[sqlite] XML translation inquiry

2004-04-15 Thread rich coco
this is a bit off topic i suppose, but here goes...

does anyone know of an open-source translation toolkit
that ultimately would generate code to map XML to/from SQL?
Or maybe i am not phrasing the question properly.
What I am looking for is a translation layer between
the use of relational technology for storing and processing data
and the hierarchical representation of the data - as XML in particular -
for things like presentation and transport.
I understand that Microsoft has a product called the Web Services Toolkit,
which is specific to MS SQL Server that does this kind of thing.
i am wondering if there may exist a generic open-source framework:
  * for specifying the relationship between an XML Schema and a
RDBMS table schema
  * importing such a relationship (using whatever grammar) and
generating class definitions that implement the desired translation
layer.
the goal would be to abstract out of the application the entire
DB schema and allow the application to work exclusively with XML -
and the generated methods - wrt not only data representation and
transport but data storage and retrieval too. (ie, the application need
not care about database schemas and the specifics regarding
data persistence, etc.)
google-ing hasn't helped me here, but that may be because i'm not
google-ing smartly. i am open to suggestions as to where else I might
post such a query.
tia,

- rich

--
rich coco
[EMAIL PROTECTED]


Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread Greg Miller
Andrew Piskorski wrote:

On Thu, Apr 15, 2004 at 08:33:14AM -0500, Greg Miller wrote:
support that. The FreeBSD folks tried to solve this by turning off write 
caching by default. Unfortunately, this hurt performance so much they 
had to turn it back on and just recommend SCSI drives for important data.


Why, do SCSI drives all come with battery-backed cache?  (So when you
power them up again they complete the old cached write.)  I didn't
think so, but would be pleased to learn otherwise...


No, but the OS can ensure that the ordering constraints are honored on 
any writes that actually make it to the disk. That's the only constraint 
the OS makes anyway, since it ensures that the only disk corruption that 
can occur is that some disk space that is currently unused may still 
appear to be in use. Then when the system boots after a failure, the 
system snapshots the disk, and fsck runs in the background to free up 
that unused space. That's how FreeBSD avoids journalling.

Of course, with a good UPS *AND* the proper software running to react
to signals from the UPS, you get that sort of protection for free, and
you certainly want the system UPS anyway.  But that's also much more
complicated and vulnerable to failures due to misconfigured software,
so it'd sure be nice to have the hard-drive-UPS as well.
I suspect there's just not enough demand. People who need the safety 
just go out and buy SCSI drives, IDE drives with tagged queueing, or a 
general-purpose UPS.
--
http://www.classic-games.com/ http://www.indie-games.com/



Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread Andrew Piskorski
On Thu, Apr 15, 2004 at 08:33:14AM -0500, Greg Miller wrote:

> support that. The FreeBSD folks tried to solve this by turning off write 
> caching by default. Unfortunately, this hurt performance so much they 
> had to turn it back on and just recommend SCSI drives for important data.

Why, do SCSI drives all come with battery-backed cache?  (So when you
power them up again they complete the old cached write.)  I didn't
think so, but would be pleased to learn otherwise...

Is there any device available which basically serves as an add-on UPS
for hard drives?  Something you'd just plug all your IDE drives into,
and it would keep the drives going just long enough after a power fail
to finish writing their cached data and spin down cleanly?  AFAIK no
such device is available, but if reasonably priced it sure would be
nice to have.

Of course, with a good UPS *AND* the proper software running to react
to signals from the UPS, you get that sort of protection for free, and
you certainly want the system UPS anyway.  But that's also much more
complicated and vulnerable to failures due to misconfigured software,
so it'd sure be nice to have the hard-drive-UPS as well.

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/




Re: [sqlite] Concurrency, MVCC

2004-04-15 Thread Christian Smith
On Thu, 15 Apr 2004, Andrew Piskorski wrote:

>On Thu, Apr 15, 2004 at 02:16:01PM +0100, Christian Smith wrote:
>> Right tool for the job. Multiple writers has client/server database
>> written all over it. KISS.
>
>No, not true, at least not when the multiple writers are all threads
>within one single process, which appears to be the common case for
>people who'd like greater concurrency in SQLite.

SQLite is an embedded database engine, which I guess is its main use
(I'm using it at my place to implement a package management tool), and
the desired concurrency, from what I gather on the list, is read access
while writing, not multiple writers.

>
>Also, if multiple writers worked well for the one-process many-threads
>case, then if you wished you could write a small multi-threaded
>client/server database using SQLite as the underlying storage engine.
>As things stand now, the concurrency limitations mean there isn't much
>point to doing that.

But why re-invent the wheel? There are plenty of client/server databases
that support concurrent writes.

>
>Simplicity however, is of course an important concern.

Absolutely, that's my main point. I'm all for the concurrent
readers/writer, but multiple writers is a whole new ball game, probably
best left to the big boys.

Cheers,
Christian

-- 
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] TCL and SQLITE

2004-04-15 Thread shamil_daghestani

Thanks Christian.  I figured something just wasn't right... Total
processing time dropped from 207 to just 0.7 seconds.

SD



"Christian Smith" <[EMAIL PROTECTED]> wrote on 04/15/2004 09:51 AM,
Subject: Re: [sqlite] TCL and SQLITE:




On Thu, 15 Apr 2004 [EMAIL PROTECTED] wrote:

>
>>Try wrapping the inserts in a transaction.
>
>I'm afraid I don't understand... Could you show me in my example below
>how I can do that?

Something like:

% package require sqlite
% sqlite db c:/newDB
% set start [clock seconds]
% db eval {CREATE TABLE t1(a int)}
% db eval "BEGIN TRANSACTION"
% for {set j 1} {$j <= 1000} {incr j} {db eval "INSERT INTO t1 VALUES($j)"}
% db eval "COMMIT TRANSACTION"
% puts "total processing time = [expr [clock seconds] - $start]"

Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread Greg Miller
Liz Steel wrote:

You say that I shouldn't get a corrupt database when I pull the power, 
but I am consistently getting this. I am using SQLite version 2.8.9 
using the C++ interface running on Windows XP Home. Is there anything I 
can do to stop this happening?
If you have an IDE hard drive that's caching writes, there's not much 
the OS and database software can do to prevent corruption on power loss. 
It's possible to avoid this with tagged queueing, but most drives don't 
support that. The FreeBSD folks tried to solve this by turning off write 
caching by default. Unfortunately, this hurt performance so much they 
had to turn it back on and just recommend SCSI drives for important data.
--
http://www.classic-games.com/ http://www.indie-games.com/



Re: [sqlite] Concurrency, MVCC

2004-04-15 Thread Andrew Piskorski
On Thu, Apr 15, 2004 at 02:16:01PM +0100, Christian Smith wrote:
> Right tool for the job. Multiple writers has client/server database
> written all over it. KISS.

No, not true, at least not when the multiple writers are all threads
within one single process, which appears to be the common case for
people who'd like greater concurrency in SQLite.

Also, if multiple writers worked well for the one-process many-threads
case, then if you wished you could write a small multi-threaded
client/server database using SQLite as the underlying storage engine.
As things stand now, the concurrency limitations mean there isn't much
point to doing that.

Simplicity however, is of course an important concern.

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/




Re: [sqlite] TCL and SQLITE

2004-04-15 Thread Christian Smith
On Thu, 15 Apr 2004 [EMAIL PROTECTED] wrote:

>I'm just getting around to using SQLITE to see how it performs. Therefore,
>I'm just a beginner.
>
>I have a couple of questions about it: I downloaded the TCL binding and did
>pkg_mkIndex to create the pkgIndex.tcl file.  Then I did "package require
>sqlite", I got version "2.0".  However, I see from the SQLITE website that
>the latest version is 2.8.13.  What's the reason for that?  Is TCL binding
>no longer supported by newer versions of SQLITE?

Should be. Maybe the 2.0 is the minimum supported version.

>
>The second question is about speed.  I created the most basic table and did
>a loop to insert 1000 integers and that took 207 seconds... an awful lot of
>time!!  What did I do wrong, if any, in the example below.

Try wrapping the inserts in a transaction.

Without that, each insert requires fsyncs to flush data to disk and
purging of page cache as cache data is not reused from one transaction to
the next. All this slows inserts down as there is a lot of IO going on.

In a single transaction, there is only one lot of fsyncs and the cache is
maintained throughout.
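The same batching applies in any binding, not just the Tcl one. As a sketch, here is the equivalent in Python's sqlite3 module (chosen only because it makes a self-contained demo; an in-memory database is used, so the fsync savings themselves only show up with an on-disk file):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a int)")

# One explicit transaction around all 1000 inserts: a single batch of
# commit work at COMMIT time instead of one fsync per INSERT.
start = time.time()
with conn:  # opens a transaction, commits on successful exit
    for j in range(1, 1001):
        conn.execute("INSERT INTO t1 VALUES (?)", (j,))
elapsed = time.time() - start

count = conn.execute("SELECT count(*) FROM t1").fetchone()[0]
```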


>
>
>% package require sqlite
>2.0
>% sqlite db c:/newDB
>0x008752B8
>% set start [clock seconds]
>1082034600
>% db eval {CREATE TABLE t1(a int)}
>% for {set j 1} {$j <= 1000} {incr j} {db eval "INSERT INTO t1 VALUES($j)"}
>% puts "total processing time = [expr [clock seconds] - $start]"
>total processing time = 207
>%
>
>thanks,
>SD
>

-- 
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




[sqlite] TCL and SQLITE

2004-04-15 Thread shamil_daghestani
I'm just getting around to using SQLITE to see how it performs. Therefore,
I'm just a beginner.

I have a couple of questions about it: I downloaded the TCL binding and did
pkg_mkIndex to create the pkgIndex.tcl file.  Then I did "package require
sqlite", I got version "2.0".  However, I see from the SQLITE website that
the latest version is 2.8.13.  What's the reason for that?  Is TCL binding
no longer supported by newer versions of SQLITE?

The second question is about speed.  I created the most basic table and did
a loop to insert 1000 integers, and that took 207 seconds... an awful lot of
time!!  What did I do wrong, if anything, in the example below?


% package require sqlite
2.0
% sqlite db c:/newDB
0x008752B8
% set start [clock seconds]
1082034600
% db eval {CREATE TABLE t1(a int)}
% for {set j 1} {$j <= 1000} {incr j} {db eval "INSERT INTO t1 VALUES($j)"}
% puts "total processing time = [expr [clock seconds] - $start]"
total processing time = 207
%

thanks,
SD



The information transmitted is intended only for the person(s)or entity 
to which it is addressed and may contain confidential and/or legally 
privileged material. Delivery of this message to any person other than 
the intended recipient(s) is not intended in any way to waive privilege 
or confidentiality. Any review, retransmission, dissemination or other 
use of, or taking of any action in reliance upon, this information by 
entities other than the intended recipient is prohibited. If you 
receive this in error, please contact the sender and delete the 
material from any computer.

For Translation:

http://www.baxter.com/email_disclaimer





Re: [sqlite] Concurrency, MVCC

2004-04-15 Thread Christian Smith
On Wed, 14 Apr 2004, Doug Currie wrote:

>Wednesday, April 14, 2004, 1:16:54 AM, Andrew Piskorski wrote:
>
>> How could this be extended to support table locking and PostgreSQL's
>> default "read committed" isolation level? Would the smallest locking
>> granularity possible in Currie's design be one page of rows, however
>> many rows that happens to be?
>
>Things get *much* more complicated once you have multiple simultaneous
>write transactions. I didn't want to go there.

Right tool for the job. Multiple writers has client/server database
written all over it. KISS.

>
>One way to get table level locking without a great deal of pain is to
>integrate the shadow paging ideas with BTree management. Rather than
>using page tables for the shadow pages, use the BTrees themselves.
>This means that any change to a BTree requires changes along the
>entire path back to the root so that only free pages are used to store
>new data, including the BTree itself. Writing the root page(s) of the
>BTree(s) commits the changes to that table (these tables).

Actually, this gets my vote. Keeps the pager layer the same, and only
requires a cache of the root btree for each object (table/index) in the
database to be maintained on a per-transaction basis, reducing the
complications of what to do under memory pressure when pages are spilled
from the cache as we should be able to keep them in memory all the time.

Committing of a transaction would then be an atomic update of the root
btree page number in the catalog table.

Christian

-- 
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] Adding SQL commands

2004-04-15 Thread Joel Rosdahl
Bertrand Mansion <[EMAIL PROTECTED]> writes:

> <[EMAIL PROTECTED]> wrote :
>
>> I have a similar problem with character encoding in the current
>> SQLite versions, by the way. I store UTF-8 encoded data from a
>> Python program using PySQLite using a prepackaged SQLite, but since
>> the prepackaged libsqlite is built without SQLITE_UTF8, I can't use
>> LIKE et.al. in the way I want to. It's good to hear that this will
>> be improved in SQLite 3.0.
>
> How will this be improved in SQLite 3.0?

It may not be mentioned explicitly in
http://www.sqlite.org/prop2.html, but that's my conclusion from the
implications of the character-encoding handling. Correct me if I'm
wrong.
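For reference, the limitation is easy to demonstrate: without UTF-8 support compiled in, LIKE folds case for ASCII characters only and compares everything else byte-for-byte. A quick sketch using Python's sqlite3 module (a modern binding, used here only for illustration; PySQLite's 2004-era API differed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# For plain ASCII, LIKE is case-insensitive by default...
ascii_match = conn.execute("SELECT 'a' LIKE 'A'").fetchone()[0]

# ...but without UTF-8-aware case folding, non-ASCII characters are
# compared byte-for-byte, so 'ä' does not match 'Ä'.
utf8_match = conn.execute("SELECT 'ä' LIKE 'Ä'").fetchone()[0]

print(ascii_match, utf8_match)   # -> 1 0
```

Note that even in current SQLite 3, full non-ASCII case folding for LIKE requires loading the ICU extension; the built-in default remains ASCII-only.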

Regards,
Joel

-- 
Joel Rosdahl <[EMAIL PROTECTED]>
Key BB845E97; fingerprint 9F4B D780 6EF4 5700 778D  8B22 0064 F9FF BB84 5E97




Re: [sqlite] Adding SQL commands

2004-04-15 Thread Bertrand Mansion
<[EMAIL PROTECTED]> wrote :

> I have a similar problem with character encoding in the current SQLite
> versions, by the way. I store UTF-8 encoded data from a Python program
> using PySQLite using a prepackaged SQLite, but since the prepackaged
> libsqlite is built without SQLITE_UTF8, I can't use LIKE et al. in the
> way I want to. It's good to hear that this will be improved in SQLite
> 3.0.

How will this be improved in SQLite 3.0?

Bertrand Mansion
Mamasam





Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread D. Richard Hipp
D. Richard Hipp wrote:
(1) Change to version 2.8.13.

(2) Describe in detail what kind of changes you are making
    to the database as you pull the power.

(3) Send me one of your corrupt databases for analysis.

(4) Begin with a database that passes a "PRAGMA integrity_check".
    Do whatever it is you do to make it go corrupt.  But before
    you open the database file again, make a copy of both the
    database and the journal.  Open the database again to make
    sure it really did go corrupt.  Then send me both the database
    and the journal.

(5) In step (4), also make a copy of the database before it went
    corrupt - when it passed the "PRAGMA integrity_check" - and send
    me that copy along with the corrupt database and the journal.
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565


Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread D. Richard Hipp
Liz Steel wrote:
Hello again,

I'm not sure if you received my last email, so I'm sending it to the 
list in the hope that someone can help me.

You say that I shouldn't get a corrupt database when I pull the power, 
but I am consistently getting this. I am using SQLite version 2.8.9 
using the C++ interface running on Windows XP Home. Is there anything I 
can do to stop this happening?

(1) Change to version 2.8.13.

(2) Describe in detail what kind of changes you are making
    to the database as you pull the power.

(3) Send me one of your corrupt databases for analysis.

--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565


Re: [sqlite] Concurrency, MVCC

2004-04-15 Thread Mark D. Anderson

On Wed, 14 Apr 2004 08:13:39 -0400, "D. Richard Hipp" <[EMAIL PROTECTED]>
said:

>* Support for atomic commits of multi-database transactions,
>  which gives you a limited kind of table-level locking,
>  assuming you are willing to put each table in a separate
>  database.

and also a limited form of concurrent writers as a consequence,
right? Assuming that table locks are acquired in a consistent order
to avoid deadlock, there could be concurrent writers that do
not touch the same tables (in this database-per-table model).
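The database-per-table model is already expressible with ATTACH; what the 3.0 proposal adds is making the commit across the attached files atomic. A sketch with Python's sqlite3 module (in-memory databases stand in for the per-table files here, so the locking benefit itself isn't visible, only the shape of the setup; the table names are invented):

```python
import sqlite3

# Each "table" lives in its own attached database. With real files
# (e.g. "orders.db", "customers.db"), a writer that only touches
# 'orders' would not block a writer that only touches 'customers',
# giving a limited form of table-level concurrency.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS orders")
conn.execute("ATTACH DATABASE ':memory:' AS customers")

conn.execute("CREATE TABLE orders.o(id INTEGER)")
conn.execute("CREATE TABLE customers.c(id INTEGER)")

# One transaction spanning both attached databases; the 3.0 proposal
# is about making this cross-database commit atomic.
with conn:
    conn.execute("INSERT INTO orders.o VALUES (1)")
    conn.execute("INSERT INTO customers.c VALUES (1)")

total = conn.execute(
    "SELECT (SELECT count(*) FROM orders.o) + "
    "(SELECT count(*) FROM customers.c)").fetchone()[0]
print(total)   # -> 2
```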

By the way, what about offering better behavior around throwing away
cache pages? One approach would be something like a
commit_begin() function, which is offered by some RDBMS native
APIs. It says "commit what I've done, but at the same time
attempt to acquire the write lock."
Failure to "win" and actually retain the write lock might not
be reported -- the idea is that the application can at least
indicate its desire.

This could also be done as some sort of connection option.

So in the case that a single writer is keeping up with all
requests, it can do so efficiently without throwing away
its pages.
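commit_begin() doesn't exist in SQLite's API; the closest approximation I can sketch is committing and immediately issuing BEGIN IMMEDIATE (SQLite 3 syntax) to re-request the write lock. The helper name below is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manual transaction control
conn.execute("CREATE TABLE log(msg TEXT)")

def commit_begin(conn):
    """Approximate the commit_begin() idea: commit the current work,
    then immediately try to re-acquire the write lock so a busy
    writer keeps its momentum. A sketch of the proposal, not an
    existing SQLite API."""
    conn.execute("COMMIT")
    try:
        # BEGIN IMMEDIATE requests the write (RESERVED) lock up front.
        conn.execute("BEGIN IMMEDIATE")
        return True
    except sqlite3.OperationalError:
        # Another writer got in first; per the proposal, the caller
        # need not treat this as an error.
        return False

conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO log VALUES ('first batch')")
held = commit_begin(conn)            # commit, then re-acquire
conn.execute("INSERT INTO log VALUES ('second batch')")
conn.execute("COMMIT")
print(held)   # -> True (no competing writer in this sketch)
```

This only approximates the intent: between COMMIT and BEGIN IMMEDIATE there is still a window where another writer can slip in, which is exactly why a fused primitive was being suggested.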

-mda




RE: [sqlite] FreeBSD and SQLite

2004-04-15 Thread Jaroslaw Nozderko
Hi Al,

 could you please provide more information ?
(FreeBSD version, SQLite version, what are the
problems, etc.).

First of all, is it FreeBSD 4.x or 5.x?
Usually 4.x is used for production purposes;
I haven't tried 4.x myself.

However, I don't remember serious problems with 
several recent versions of SQLite (perhaps not the latest
one) on FreeBSD 5.1 and 5.2. 

Regards,
Jarek

Jaroslaw Nozderko
GSM +48 601131870 / Kapsch (22) 6075013
[EMAIL PROTECTED]
IT/CCBS/RS - Analyst Programmer


> -Original Message-
> From: Al Rider [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 14, 2004 6:37 PM
> To: D. Richard Hipp
> Cc: [EMAIL PROTECTED]
> Subject: [sqlite] FreeBSD and SQLite
>
>
> I do website design for clubs, etc., with a lot of custom CM
> PHP scripts. SQLite is ideally suited for many of my scripts, but
> unfortunately one of the sites is hosted on a FreeBSD-based server.
> Most of my designs run on Linux, but I want to keep them portable.
>
> I tried to compile and install SQLite without any success.
>
> I posted a ticket about the problem and you replied that FreeBSD was
> not a supported platform.
>
> Because of the obvious advantages for my applications, I'd
> really like to
> start using SQLite.
>
> Plus, I'm looking forward to php 5.  If SQLite is not
> supported on FreeBSD
> machines does that mean it won't be compiled into php 5 on them?
>
> Is there anyone successfully running SQLite on a FreeBSD
> machine?  If so,
> would you email me and give me some help with it.
>
> Thanks...
>
>
>
>
> 




Re: [sqlite] Effectiveness of PRAGMA integrity_check;

2004-04-15 Thread Liz Steel
Hello again,

I'm not sure if you received my last email, so I'm sending it to the list in 
the hope that someone can help me.

You say that I shouldn't get a corrupt database when I pull the power, but I 
am consistently getting this. I am using SQLite version 2.8.9 using the C++ 
interface running on Windows XP Home. Is there anything I can do to stop 
this happening?

Thanks,

Liz.

Original Message Follows
From: "D. Richard Hipp" <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED]
Subject: Re: [sqlite] Effectiveness of PRAGMA integrity_check;
Date: Wed, 14 Apr 2004 10:50:28 -0400
Liz Steel wrote:
I am trying to do a similar sort of thing with my database. The only way 
I've found to fairly reliably create a corrupt database file is to pull the 
battery out of my laptop whilst my application is accessing the 
database

I've just tried it, and I get a code 11 (SQLITE_CORRUPT) returned from the 
PRAGMA integrity_check command. Is this correct behaviour?

No, this is not correct.  SQLite is supposed to survive an abrupt
power loss with no loss of data.  (Uncommitted transactions will be
rolled back, but committed transactions should persist and be
consistent.)

I believe that SQLite does survive power loss without problems
on Linux.  However, I have received reports that the Windows API
function FlushFileBuffers() sometimes lies and does not really
flush contents to the disk surface as it claims it does.  This
is just hearsay - I have not independently verified those reports.

If FlushFileBuffers() does lie and a power loss occurred in the
middle of a COMMIT, then database corruption is possible on
Windows.  This is a bug in the OS and there is not anything
SQLite (or any other database engine) can do about it.

There was a bug in SQLite version 2.8.12 that could cause
database corruption if a power loss occurred at a particularly
inauspicious moment in the middle of a COMMIT.  That problem
was fixed with version 2.8.13.

If you are seeing database corruption following power loss
on Linux with SQLite version 2.8.13, please let us know about
it right away.  If you are seeing corruption on Windows, let
us know there too - the problem might be the FlushFileBuffers()
bug or it might be something else - either way we want to
investigate.
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
