Re: [sqlite] design question / discussion

2008-05-21 Thread Rich Rattanni
Adolfo:
I can't tell you how many times I felt a flat file approach would be
better.  However, 2 years ago when the design began there was a
thought of 'Having the ability to mine data on the device would be an
invaluable tool'.  SQLite has proven superb for some aspects of the
system, but not for storing simple flag data... I believe someone
named Occam had something to say about this?  Ah, the benefits of
hindsight...  (Sorry for the sarcasm; it's the only thing that keeps a
smile on my face.)

Ken:
>How do you decide which 20% to clear in case of a space threshold?
The oldest 20% is cleared once a max size is reached.  It's kind of
arbitrary... I just figured it was better to clear a large swath of
flags than to take a delete-one, insert-one approach (a sketch of what
I mean is below).
>Is the downloaded data always deleted once successful?
Yes
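
For the curious, the clearing step amounts to something like this sketch
(a single illustrative table named flags whose rowids grow in insertion
order; not our production schema):

    #include <sqlite3.h>

    /* Delete the oldest fifth of the flags table in one statement.
       Called only once the row count has hit the configured maximum. */
    static int trim_oldest_fifth(sqlite3 *db)
    {
        return sqlite3_exec(db,
            "DELETE FROM flags WHERE rowid IN ("
            "  SELECT rowid FROM flags ORDER BY rowid"
            "  LIMIT (SELECT count(*) / 5 FROM flags));",
            0, 0, 0);
    }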

Woody:
Good to know, thank you :-).


Re: [sqlite] design question / discussion

2008-05-21 Thread Harold Wood & Meyuni Gani
I've done an app like that before on a different db foundation. Basically 2
different databases, same structure. The logging app hits an ini file before
each write: if the current db is different from the name in the ini file,
then close the current db, open the new db, and write the row to the new db;
otherwise write the row to the current db.

I had a background app that ran as a service and would switch the db name in
the ini file when one hour had passed or the db was full.

It worked great.
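
Roughly, the write path looked like this sketch (recreated from memory
in C with the SQLite API; the original was on a different db foundation,
and the ini and file names here are made up):

    #include <stdio.h>
    #include <string.h>
    #include <sqlite3.h>

    static sqlite3 *db;            /* currently open database */
    static char current[256];      /* its file name           */

    /* Called before each write: follow the ini file, reopening the
       database whenever the background service has swapped names. */
    static int log_row(const char *insert_sql)
    {
        char wanted[256] = "";
        FILE *ini = fopen("current_db.ini", "r");

        if (ini) {
            if (fgets(wanted, sizeof wanted, ini))
                wanted[strcspn(wanted, "\r\n")] = '\0';
            fclose(ini);
        }
        if (strcmp(wanted, current) != 0) {      /* db was switched */
            if (db) sqlite3_close(db);
            if (sqlite3_open(wanted, &db) != SQLITE_OK) return -1;
            strcpy(current, wanted);
        }
        return sqlite3_exec(db, insert_sql, 0, 0, 0) == SQLITE_OK ? 0 : -1;
    }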

Woody
from his pda



Re: [sqlite] design question / discussion

2008-05-21 Thread A.J.Millan
Rich Rattanni wrote:
>Hi, I have a general design question.  I have the following scenario...
>
>In an embedded system running linux 2.6.2x I have a sqlite database
>constantly being updated with data acquired by the system.  I can't
>lose data (hence why I am using sqlite in the first place).  However,
>periodically I have to download the data contained within the database to a
>central server.  The system cannot stall during the download and must
>continue to record data.  Also, after the download I need to shrink
>the database size, simply because if the database is allowed to grow
>to its max size (~50MB) then every download thereafter would be 50MB,
>which is unacceptable.

After thinking about your problem, according to your first exposition, it
seems that you are using the SQLite dbase as a mere temporary buffer for
the acquired data.  In that case, with no further processing of those data
in the embedded system, perhaps you could consider simply writing a flat
file, appending the incoming data to it (maybe alternating between two or
more files), and then compressing and sending the data to the host, where
they can be further processed or appended to a dbase.

From the security point of view, the data in the embedded device are not
necessarily safer in an SQLite dbase than in a flat file.  Perhaps that
layer (SQLite) is not necessary at all in the embedded device.

Just thinking out loud :-)

Adolfo.



Re: [sqlite] design question / discussion

2008-05-21 Thread Rich Rattanni
> It seems unclear to me what your requirements are trying to achieve.
>
> Do you need to keep any of this data, and if so, for how long?
   I have to keep all data until a download.  Downloads can fail too,
so I cannot delete data until a download succeeds.
> Do you need to be able to read the older data?
   The device supports viewing the flag information via a webpage.
Not to mention, I only want the device to store a fixed number (say
5000) of flags, and if this limit is reached I will clear some amount
(say 20%) to make room for new data.

> Do you need to be able to subset the data?
   No


>Main.db = contains download.db and is an attachment point for ancillary db's.
>wrtdb_###.db = always write to this location.

>When a download is needed, simply close the current wrtdb_###.db. Create a
>new wrtdb_###.db with the next number and record the new wrtdb in a table
>in main.db.

Are you saying that when I want to do a download, I copy the data from
the wrtdb_###.db to main?  Then download main?  If so, I thought about
that, but then I have to reserve space for 2X the size of wrtdb_###,
because during the copy the data will exist on the unit in duplicate.


Re: [sqlite] design question / discussion

2008-05-21 Thread Ken
I think you're trying to overcome the read/write concurrency issue with
sqlite, correct?  You want to have the ability to copy data (i.e. ftp) and
receive new data into an overflow database.

Main.db = contains download.db and is an attachment point for ancillary db's.
wrtdb_###.db = always write to this location.

When a download is needed, simply close the current wrtdb_###.db. Create a
new wrtdb_###.db with the next number and record the new wrtdb in a table
in main.db.
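
Something like this sketch (names invented for illustration; wrtdb_log
is a hypothetical bookkeeping table in main.db):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Retire the current write db, open the next numbered one, and
       note its name in main.db so the choice survives a restart. */
    static sqlite3 *rotate_wrtdb(sqlite3 *cur, sqlite3 *main_db, int seq)
    {
        char name[64], sql[128];
        sqlite3 *next = NULL;

        sqlite3_close(cur);
        snprintf(name, sizeof name, "wrtdb_%03d.db", seq);
        if (sqlite3_open(name, &next) != SQLITE_OK)
            return NULL;
        snprintf(sql, sizeof sql,
                 "INSERT INTO wrtdb_log(name) VALUES('%s');", name);
        sqlite3_exec(main_db, sql, 0, 0, 0);
        return next;
    }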

It seems unclear to me what your requirements are trying to achieve.

Do you need to keep any of this data, and if so, for how long?
Do you need to be able to read the older data? (except for downloads)
Do you need to be able to subset the data? 

Ken





Re: [sqlite] design question / discussion

2008-05-21 Thread Rich Rattanni
> Perhaps I've missed something or don't understand it well. Are your
> databases all in the same file, or do you have 2 separate sqlite
> sessions to 2 different database files? In the first scenario you
> must be very fast, and in the second you can switch from one database
> to the other, unbind (close) the sqlite, do ftp or whatever you want,
> and delete the database file.
>
Yes, in my code I was thinking of having two database files on the
filesystem, x and x+1.  During the download process I was going to drop
any data generated during the download into x+1 (that is to say, the
system continues running normally while a download is in progress).

> You attach x+1 to x. Why do you need it? If you delete old records in
> x after the ftp, you can trash x, work with x+1, and recreate an empty x.

I can see where I may not need it. I was just thinking that when the unit
powers back up I need to know which database is the 'main' database and
which is the 'overflow'.  I would use the rule that x is the main and x+1
is the overflow data. Strictly policy.  In case it is unclear, x and x+1
refer to the actual filenames of the databases on disk, so I would have
flags.sqlite.0 <- Main
flags.sqlite.1 <- Overflow
*** After download and next power up ***
flags.sqlite.1 <- Main
flags.sqlite.2 <- Overflow


> I think you only need 2 databases: while you add data to A, you
> copy and delete B. Then switch A and B. Perhaps you need 3 databases,
> and separate the download and ... On the other side you can attach the
> databases and reconstruct one big database.
>
Ah, the design process... I thought I had a good reason for my switching
policy, but as I look back perhaps it is overly complex.  My original design
was a two-database scheme, but as mentioned I thought the filename
was a slick way of determining which database was the primary (of course
a simple table in each database could do the same, one that I join to and
update to record who is Main and Overflow).

Oh, that's right, I actually remember now why I implemented this the way
I did.  The system has file-size constraints on the amount of data
stored in the database, and downloads may be interrupted.  In the
event of a cancel I wanted all data to be in one database, hence the
copy of data from X+1 back into X.  I figured this works well because
when I move data from X+1 to X, I can check whether storage constraints
have been violated and clear old data if necessary; a rough sketch of
that step follows.
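
Something along these lines (a sketch only: the real schema has more
tables than the single flags table shown, and 5000 is the illustrative
cap mentioned elsewhere in the thread):

    #include <sqlite3.h>

    /* Fold the overflow database back into the main one in a single
       transaction, then trim the oldest rows if the cap was broken. */
    static int fold_overflow(sqlite3 *db)   /* db = flags.sqlite.0 */
    {
        return sqlite3_exec(db,
            "ATTACH 'flags.sqlite.1' AS overflow;"
            "BEGIN;"
            "INSERT INTO main.flags"
            "  SELECT * FROM overflow.flags ORDER BY rowid;"
            "DELETE FROM overflow.flags;"
            "DELETE FROM main.flags WHERE rowid IN ("
            "  SELECT rowid FROM main.flags ORDER BY rowid"
            "  LIMIT max(0, (SELECT count(*) FROM main.flags) - 5000));"
            "COMMIT;"
            "DETACH overflow;",
            0, 0, 0);
    }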

Also, I wanted to save the deletion and recreation of databases for
the next powerup, because the device is battery powered.  I have
a backup battery that allows me to run briefly after power is
removed, but this time is limited.  I figured doing this operation at
powerup is the safest bet (in the worst case, the power is removed
and I am back to relying on the backup battery, but on average the
battery is not removed immediately after insertion).

At the heart of the matter is the fact that vacuums are too costly
(time-wise), and while the device is not 'real time' per se, I must
service requests from another processor fairly quickly (<1 sec).

> If you need compression you can check the lzp, lzo or lzrh
> algorithms; they are very fast, and compress the files "on the fly".
> These compression algorithms work well with text data and badly with
> binary data. Take care, because sqlite already stores data compactly
> in its database files.

I can't reveal the nature of the data I am compressing, but on average,
with gzip, I see a reduction of 50-70% in size.


Thanks for your reply. I implemented something similar to this, but I end
up with corrupt databases if a download is performed, and power is removed,
and the sun and the stars align... blah blah blah.  In a word, it's buggy.
I think violating sqlite and moving databases around using OS calls is
what is getting me.  I am up against a wall to design a solution
that works.  Stupid input specs!  Anyway, that's why I posted to the
list, and I really do appreciate your input.


Re: [sqlite] design question / discussion

2008-05-21 Thread Eduardo Morras
At 19:12 20/05/2008, you wrote:
>Actually my reason for writing into a separate database is more...
>well, crude.  I tar several databases together, then encrypt using
>openSSL. Then an FTP-like program transmits the data to a central server.
>I must suspend writing into the database for the duration of the tar
>operation, since tar does not abide by SQLite's file-locking rules.

Perhaps I've missed something or don't understand it well. Are your
databases all in the same file, or do you have 2 separate sqlite
sessions to 2 different database files? In the first scenario you
must be very fast, and in the second you can switch from one database
to the other, unbind (close) the sqlite, do ftp or whatever you want,
and delete the database file.

You attach x+1 to x. Why do you need it? If you delete old records in
x after the ftp, you can trash x, work with x+1, and recreate an empty x.

I think you only need 2 databases: while you add data to A, you
copy and delete B. Then switch A and B. Perhaps you need 3 databases,
and separate the download and ... On the other side you can attach the
databases and reconstruct one big database.
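
In C the swap can be as simple as this sketch (file names are
illustrative, following the data.sqlite.(x) scheme from the original
post; error handling omitted):

    #include <stdio.h>      /* remove, rename */
    #include <sqlite3.h>

    /* After the ftp succeeds: trash x, promote x+1 to be the main
       database, and open a fresh, empty overflow in its place. */
    static sqlite3 *swap_databases(sqlite3 *overflow_conn)
    {
        sqlite3 *fresh = NULL;

        sqlite3_close(overflow_conn);                /* release x+1 */
        remove("data.sqlite.0");                     /* trash x     */
        rename("data.sqlite.1", "data.sqlite.0");    /* promote x+1 */
        sqlite3_open("data.sqlite.1", &fresh);       /* new, empty  */
        return fresh;
    }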

If you need compression you can check the lzp, lzo or lzrh
algorithms; they are very fast, and compress the files "on the fly".
These compression algorithms work well with text data and badly with
binary data. Take care, because sqlite already stores data compactly
in its database files.

HTH



--
With sufficient thrust, pigs fly just fine. However, this is not
necessarily a good idea. It is hard to be sure where they are going to
land, and it could be dangerous sitting under them as they fly
overhead. -- RFC 1925  



Re: [sqlite] design question / discussion

2008-05-20 Thread Rich Rattanni
Actually my reason for writing into a separate database is more...
well, crude.  I tar several databases together, then encrypt using
openSSL. Then an FTP-like program transmits the data to a central server.
I must suspend writing into the database for the duration of the tar
operation, since tar does not abide by SQLite's file-locking rules.
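
The export step is essentially the following sketch (paths, key
handling, and the cipher choice are illustrative, not the production
command):

    #include <stdlib.h>     /* system */

    /* Writers are paused before this runs and resumed afterwards,
       so tar sees a quiescent set of database files. */
    static int export_databases(void)
    {
        return system("tar cf - data.sqlite.0 data.sqlite.1"
                      " | openssl enc -aes-256-cbc -pass file:key.txt"
                      " -out upload.tar.enc");
    }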

Thanks for your input, any and all help is appreciated!

--
Rich



Re: [sqlite] design question / discussion

2008-05-19 Thread Ken
Rich,

From your design it appears you are writing to a separate db while a
"download" is happening? I'm guessing that is to prevent read/write lock
contention, correct?

It seems to me that any new data coming in will need to write, and you are
simply looking to read during a download operation while trying to avoid
lock contention and delays, correct?

The DownloadInfo table is used to keep track of the point where the last
download completed successfully.

Data to download = last successful rowid to max rowid (i.e. a subset).

One thought I had to avoid the contention: if this is a threaded
application, you could enable the shared cache and read_uncommitted
isolation. It might be a bit tricky in that you'll probably have to get the
"committed" data in a txn, then set the uncommitted read mode to avoid
waiting for locks.
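
A minimal sketch of the idea (the shared cache has to be enabled
before any connection is opened, and the writer thread opens its
connection the same way):

    #include <sqlite3.h>

    static sqlite3 *open_download_reader(const char *path)
    {
        sqlite3 *reader = NULL;

        /* Both connections in this process then share one page cache. */
        sqlite3_enable_shared_cache(1);
        if (sqlite3_open(path, &reader) != SQLITE_OK)
            return NULL;

        /* Read uncommitted: this connection no longer waits on the
           writer's table locks within the shared cache. */
        sqlite3_exec(reader, "PRAGMA read_uncommitted = 1;", 0, 0, 0);
        return reader;
    }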

Ken



Re: [sqlite] design question / discussion

2008-05-18 Thread Rich Rattanni
Thanks for your reply.  I have done some quick timing tests on my
system; a vacuum can take 5 or more minutes (synchronous full), and a
delete and recreate is roughly 3 seconds.  I think I did such a test
with a 30MB database.  The database resides on a jffs2 file system
(compression off), which seems to have a constant time for deletions.

I should have included that I am using sqlite 3.4.0.




Re: [sqlite] design question / discussion

2008-05-18 Thread ajm
> Hi, I have a general design question.  I have the following scenario...

IMHO your design sounds reasonable. In relation to the vacuum question,
I suggest trying to delete and re-create each table and comparing the
timings.

HTH

Adolfo


[sqlite] design question / discussion

2008-05-17 Thread Rich Rattanni
Hi, I have a general design question.  I have the following scenario...

In an embedded system running linux 2.6.2x I have a sqlite database
constantly being updated with data acquired by the system.  I can't
lose data (hence why I am using sqlite in the first place).  However,
periodically I have to download the data contained within the database
to a central server.  The system cannot stall during the download and
must continue to record data.  Also, after the download I need to
shrink the database size, simply because if the database is allowed to
grow to its max size (~50MB) then every download thereafter would be
50MB, which is unacceptable.  I would simply vacuum the database, but
this takes too much time and stalls the system.

My solution is the following (still roughed out on scraps of paper and
gray matter).

Have two databases on the system at all times (data.sqlite.(x) and
data.sqlite.(x+1)).
All data is written into x.
When a download is requested...
    Mark the highest rowid in each table of database (x) in a table
    called DownloadInfo
    Begin logging data to (x+1)
When the download is done (success or failure - downloads may be
cancelled or time out)...
    Attach x+1 to x
    Begin a transaction
    Delete all data in x from tables with rowid <= the rowid saved in
    DownloadInfo
    Move any data stored in x+1 to x
    If the download was successful...
        Mark in x that a download was successful in DownloadInfo

At the next powerup...
Scan x.DownloadInfo to see if a download was successful...
    Yes:
        Attach x+1 to x
        Attach x+2 to x
        Begin a transaction
        Build new database x+2
        Move data from x to x+1
        Mark in DownloadInfo that the database has been deleted
        Commit
        Delete the old file (using os unlink, perhaps)
    No:
        Do nothing.
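
In SQL, the bookkeeping boils down to something like this sketch (one
table shown; DownloadInfo's exact columns are my current guess, not
settled):

    /* When a download is requested: record the high-water mark. */
    const char *mark_sql =
        "INSERT INTO DownloadInfo(tbl, last_rowid, success) "
        "SELECT 'data', max(rowid), 0 FROM data;";

    /* When the download is done: drop rows at or below the mark and
       fold the overflow back in, all in one transaction. */
    const char *fold_sql =
        "ATTACH 'data.sqlite.1' AS overflow;"
        "BEGIN;"
        "DELETE FROM data WHERE rowid <= (SELECT max(last_rowid)"
        "  FROM DownloadInfo WHERE tbl = 'data');"
        "INSERT INTO data SELECT * FROM overflow.data ORDER BY rowid;"
        "COMMIT;"
        "DETACH overflow;";

    /* Only if the download actually succeeded: */
    const char *mark_success_sql =
        "UPDATE DownloadInfo SET success = 1 WHERE rowid = "
        "  (SELECT max(rowid) FROM DownloadInfo);";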


So it's kind of complicated, but I think such things are necessary.  For
instance, a vacuum is out of the question; it just takes too long.
That's why the double-database scheme works well for deleting old
databases.  I guess I want to stop here and leave some info out.  That
way I don't suppress any good ideas.

And as always, I really appreciate any help I can get.  I tried to
implement something similar, but I was copying an already-prepared
sqlite database, which was not very reliable.  I guess another question,
maybe one that solves this one... have any improvements to
auto-vacuum been made?  Does anyone trust it, or can anyone attest to
its fault tolerance?