Re: [sqlite] Slow performance with Sum function

2009-03-11 Thread Griggs, Donald
 

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Trainor, Chris
Sent: Wednesday, March 11, 2009 5:31 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow performance with Sum function

> Do not be tempted by the incremental vacuum feature.  Incremental
> vacuum will reduce the database size as content is deleted, but it
> will not reduce fragmentation.  In fact, incremental vacuum will
> likely increase fragmentation.  Incremental vacuum is just a variation
> on auto_vacuum.  It is designed for flash memory with zero seek latency.

> D. Richard Hipp
> d...@hwaci.com

Thanks for the reply, but I am confused again.  Is incremental vacuum
different from the vacuum command?  It seems like vacuum would
defragment the database according to the description here:
http://www.sqlite.org/lang_vacuum.html 

=
No, the auto_vacuum feature differs from the regular VACUUM command.
Auto_vacuum does not reduce fragmentation (and may even increase it).
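For anyone still unsure of the distinction, here is a minimal sketch using Python's standard sqlite3 module (the file name and row counts are made up): a full VACUUM rewrites the whole file, which is what reclaims space and defragments.

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database; a full VACUUM rewrites the file, dropping
# free pages and bunching each table's pages together.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])
con.commit()

con.execute("DELETE FROM t")   # frees pages but does not shrink the file
con.commit()
size_before = os.path.getsize(path)

con.execute("VACUUM")          # full rewrite: reclaims space, defragments
size_after = os.path.getsize(path)
con.close()
```

Auto_vacuum, by contrast, is a persistent per-database setting that only trims free pages off the end of the file; it never rebuilds the b-trees.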
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow performance with Sum function

2009-03-11 Thread Trainor, Chris
> Do not be tempted by the incremental vacuum feature.  Incremental
> vacuum will reduce the database size as content is deleted, but it
> will not reduce fragmentation.  In fact, incremental vacuum will
> likely increase fragmentation.  Incremental vacuum is just a variation
> on auto_vacuum.  It is designed for flash memory with zero seek latency.

> D. Richard Hipp
> d...@hwaci.com

Thanks for the reply, but I am confused again.  Is incremental vacuum
different from the vacuum command?  It seems like vacuum would
defragment the database according to the description here:
http://www.sqlite.org/lang_vacuum.html 

"The VACUUM command cleans the main database by copying its contents to
a temporary database file and reloading the original database file from
the copy. This eliminates free pages, aligns table data to be
contiguous, and otherwise cleans up the database file structure."

If incremental vacuum and vacuum are the same, then I am still uncertain
of what to do about my original problem.  Any ideas on why the sum
function is slow on my existing table, but it is fast on a copy of the
table?  Also, after calling vacuum, sum is fast on the original table.

Here's my original question:

I am trying to use the Sum function on a column in a table with ~450K
rows in it.  

Select sum(Col4) from Table1

Where Table1 looks like this:

Create TABLE Table1 (
Col1 INTEGER NOT NULL,
Col2 INTEGER NOT NULL,
Col3 INTEGER NOT NULL,
Col4 BIGINT NOT NULL,
Col5 BIGINT NOT NULL,
Col6 BLOB NOT NULL,
Col7 CHAR(1) DEFAULT '0',
Col8 NUMERIC(2) NOT NULL,
Col9 NUMERIC(2) NOT NULL,
Col10 INTEGER NOT NULL,
Col11 INTEGER NOT NULL,
CONSTRAINT FK_1 FOREIGN KEY (Col1) REFERENCES Table2 (Col1)
ON DELETE CASCADE
ON UPDATE CASCADE,
CONSTRAINT PK_1 PRIMARY KEY (Col10, Col11, Col1, Col3 DESC) );


It takes over 2 minutes to execute when using the original table.  I
created an exact copy of the table with the same indices and constraints
and inserted all the data from the original table into it.  Summing that
column on the copied table only takes a few seconds.

I am guessing that using the copied table is faster because it has all
of its data arranged contiguously, but that is just a guess.

Can anyone shed some light on this?  Making a copy of the table is not
an option, so is there anything I can do to get better performance from
the original table?

Thanks
The information contained in this email message and its attachments
is intended
only for the private and confidential use of the recipient(s) named
above, unless the sender expressly agrees otherwise. Transmission
of email over the Internet
 is not a secure communications medium. If you are requesting or
have requested
the transmittal of personal data, as defined in applicable privacy
laws by means
 of email or in an attachment to email you must select a more
secure alternate means of transmittal that supports your
obligations to protect such personal data. If the reader of this
message is not the intended recipient and/or you have received this
email in error, you must take no action based on the information in
this email and you are hereby notified that any dissemination,
misuse, copying, or disclosure of this communication is strictly
prohibited. If you have received
this communication in error, please notify us immediately by email
and delete the original message.


Re: [sqlite] Slow performance with Sum function

2009-03-04 Thread Alexey Pechnikov
Hello!

On Wednesday 04 March 2009 17:19:09 Jim Wilcoxson wrote:
> Have you tried changing the page size to 4096 or 8192?  Doing this
> with my SQLite application and increasing the transaction size
> decreased runtime from over 4 hours to 75 minutes.    The runtime for
> my app writing the same amount of data to flat files was 55 minutes,
> so the time penalty for building a database was about 35%, which
> seemed reasonable.
>
> I haven't tried changing the cache size yet, because I like that my
> app uses a small amount of memory.

I have my own build of SQLite with a default page size of 4096 and increased 
caches for server applications. For huge databases on SAS disks I use a page 
size of 16384.

Best regards.
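The page-size tuning described above can be sketched with the sqlite3 module. Note that page_size only takes effect if set before the database's first table is created (or if followed by a VACUUM); the 8192 value and file name here are just examples:

```python
import os
import sqlite3
import tempfile

# Hypothetical server database; page_size applies on the first write.
path = os.path.join(tempfile.mkdtemp(), "server.db")
con = sqlite3.connect(path)
con.execute("PRAGMA page_size = 8192")   # must precede the first table
con.execute("CREATE TABLE t (x INTEGER)")
con.commit()

ps = con.execute("PRAGMA page_size").fetchone()[0]
con.close()
```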


Re: [sqlite] Slow performance with Sum function

2009-03-04 Thread Jim Wilcoxson
Have you tried changing the page size to 4096 or 8192?  Doing this
with my SQLite application and increasing the transaction size
decreased runtime from over 4 hours to 75 minutes.The runtime for
my app writing the same amount of data to flat files was 55 minutes,
so the time penalty for building a database was about 35%, which
seemed reasonable.

I haven't tried changing the cache size yet, because I like that my
app uses a small amount of memory.

Good luck!
Jim

On 3/4/09, Alexey Pechnikov  wrote:

> Can enough cache size prevent fragmentation? And how to calculate degree of
> fragmentation and when is needed make vacuum of database?
>
> Best regards.


-- 
Software first.  Software lasts!


Re: [sqlite] Slow performance with Sum function

2009-03-04 Thread Alexey Pechnikov
Hello!

On Wednesday 04 March 2009 04:44:05 D. Richard Hipp wrote:
>  One could envision future versions  
> of SQLite that allowed you to preallocate a large database file such  
> that the database always stayed less than 80% full.  Then we could use  
> filesystem techniques to keep fragmentation down.  The penalty, of  
> course, is that your database file is larger.  Probably much larger.  
> And just to be clear: SQLite does not have that capability at this time.

Can enough cache size prevent fragmentation? And how to calculate degree of 
fragmentation and when is needed make vacuum of database?

Best regards.


Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread D. Richard Hipp

On Mar 3, 2009, at 8:01 PM, Trainor, Chris wrote:

> I'm not sure how much we can do about preventing adds and deletes.   
> It *may* be possible to replace them with updates, but I am not sure  
> yet.  These adds and deletes are happening in a different table than  
> the one being summed.  This other table contains a large blob  
> column.  Would changing to updates help or will updates fragment the  
> database as much as adds and deletes?

SQLite implements an UPDATE by first deleting the old row then  
inserting a new one in its place.  So I don't think changing  
DELETE/INSERT pairs into UPDATEs will help much with fragmentation.  And,  
besides, deleting and inserting does not really cause much  
fragmentation, as long as the data inserted is roughly the same size  
as the data deleted.

Fragmentation occurs for many reasons, but one important reason is  
that two or more b-trees within the database file are growing at the  
same time.  As each b-tree grows, it needs to allocate new pages.  New  
pages are allocated from the end of the database file (unless there  
were previously deleted pages that can be reused).  Imagine that you  
have (say) 10 b-trees all growing at roughly the same rate.  As the  
b-trees all grow, they will each allocate pages off the end of the file  
as they need them.  And you will end up with pages of the 10 b-trees  
all interleaved rather than being bunched together.

Note that there is one b-tree for each table and for each index.  So  
if you have a single SQL table with 3 unique columns (there is one  
implied index for each UNIQUE constraint) and 2 explicit indices, you  
will have 1+3+2=6 b-trees.  As you insert new information into this  
table, all 6 b-trees are updated together, so there will be some  
interleaving and hence fragmentation.

When you run the VACUUM command, it rebuilds each b-tree one by one,  
so all the pages for a single b-tree are bunched together in the file.
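The interleaving described above is easy to reproduce. A sketch (standard sqlite3 module, hypothetical tables) that grows two b-trees simultaneously, deletes one, and lets VACUUM rebuild everything and empty the freelist:

```python
import os
import sqlite3
import tempfile

con = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "frag.db"))
con.execute("CREATE TABLE a (x INTEGER)")
con.execute("CREATE TABLE b (x INTEGER)")
for i in range(5000):
    con.execute("INSERT INTO a VALUES (?)", (i,))  # the two b-trees grab
    con.execute("INSERT INTO b VALUES (?)", (i,))  # new pages alternately
con.commit()

con.execute("DELETE FROM b")   # b's pages move onto the freelist
con.commit()
free_before = con.execute("PRAGMA freelist_count").fetchone()[0]

con.execute("VACUUM")          # rebuilds each b-tree contiguously
free_after = con.execute("PRAGMA freelist_count").fetchone()[0]
con.close()
```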

Note that using auto_vacuum does *not* help with fragmentation.  In  
fact, auto_vacuum makes fragmentation worse.  Auto_vacuum is designed  
for use on small flash-memory drives (such as those found on cell  
phones) that have low capacity and zero seek latency.  Auto_vacuum is a very  
helpful feature for the right problem, but fragmentation is not the  
right problem.

When there are free pages in the database file and new pages are  
needed by a growing b-tree, an attempt is made to reuse free pages  
that are as close as possible to the rest of the b-tree.  But  
typically the free list is short and the choices are limited, so it  
does not often happen that the chosen free page is immediately  
adjacent to the growing b-tree.

Decades of experience with filesystems have taught us that various  
heuristics can prevent filesystem fragmentation, as long as the  
filesystem is less than about 80% or 90% full.  Once a filesystem gets  
close to being full, fragmentation is inevitable.  To transfer this  
experience to SQLite, recognize that SQLite attempts to keep its  
database file as small as possible.  In other words, SQLite tries to  
keep itself 100% full at all times.  Hence, fragmentation of data in  
SQLite is pretty much inevitable.  One could envision future versions  
of SQLite that allowed you to preallocate a large database file such  
that the database always stayed less than 80% full.  Then we could use  
filesystem techniques to keep fragmentation down.  The penalty, of  
course, is that your database file is larger.  Probably much larger.   
And just to be clear: SQLite does not have that capability at this time.

>
>
> The second option is the one I am considering.  It looks like there  
> might be a good time to run vacuum.  I need to do some more timings  
> to tell for sure.

Do not be tempted by the incremental vacuum feature.  Incremental  
vacuum will reduce the database size as content is deleted, but it  
will not reduce fragmentation.  In fact, incremental vacuum will  
likely increase fragmentation.  Incremental vacuum is just a variation  
on auto_vacuum.  It is designed for flash memory with zero seek latency.
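For completeness, a sketch of what incremental vacuum *does* do (reclaim file space) versus what it does not (rebuild b-trees). The PRAGMA must be issued before the first table is created; file name and sizes are hypothetical:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "inc.db")
con = sqlite3.connect(path)
con.execute("PRAGMA auto_vacuum = INCREMENTAL")  # before any table exists
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])
con.commit()

con.execute("DELETE FROM t")
con.commit()
size_before = os.path.getsize(path)

# Release all free pages: the file is truncated, but the surviving
# b-tree pages stay exactly where they were -- fragmentation remains.
con.execute("PRAGMA incremental_vacuum").fetchall()
size_after = os.path.getsize(path)
con.close()
```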

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread Trainor, Chris
The blob is fairly small but is referenced very often, so it wouldn't be
feasible to move it to another table.  Does the blob make a difference
in this case?  I thought that since the blob column is to the right of
the column being summed that it would never get read during the
summation.

Thanks

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Tuesday, March 03, 2009 7:53 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow performance with Sum function

On 4/03/2009 5:52 AM, Trainor, Chris wrote:
> I am trying to use the Sum function on a column in a table with ~450K
> rows in it.  
> 
>   Select sum(Col4) from Table1
> 
> Where Table1 looks like this:
> 
> Create TABLE Table1 (
>   Col1 INTEGER NOT NULL,
>   Col2 INTEGER NOT NULL,
>   Col3 INTEGER NOT NULL,
>   Col4 BIGINT NOT NULL,
>   Col5 BIGINT NOT NULL,
>   Col6 BLOB NOT NULL,

What is the min/max/average size of this blob and how often do you need 
to access it? If the answer tends towards "huge and rarely", consider 
putting it in a separate table.


>   Col7 CHAR(1) DEFAULT '0',
>   Col8 NUMERIC(2) NOT NULL,
>   Col9 NUMERIC(2) NOT NULL,
>   Col10 INTEGER NOT NULL,
> Col11 INTEGER NOT NULL,
>   CONSTRAINT FK_1 FOREIGN KEY (Col1) REFERENCES Table2 (Col1)
>   ON DELETE CASCADE
>   ON UPDATE CASCADE,
>   CONSTRAINT PK_1 PRIMARY KEY (Col10, Col11, Col1, Col3 DESC)
> );


Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread Trainor, Chris
I'm not sure how much we can do about preventing adds and deletes.  It *may* be 
possible to replace them with updates, but I am not sure yet.  These adds and 
deletes are happening in a different table than the one being summed.  This 
other table contains a large blob column.  Would changing to updates help or 
will updates fragment the database as much as adds and deletes?

The second option is the one I am considering.  It looks like there might be a 
good time to run vacuum.  I need to do some more timings to tell for sure.

Thanks for the suggestions.

 
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of P Kishor
Sent: Tuesday, March 03, 2009 7:41 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow performance with Sum function

On Tue, Mar 3, 2009 at 6:36 PM, Greg Palmer <gregorylpal...@netscape.net> wrote:
> Trainor, Chris wrote:
>> After running vacuum, sum is fast on the original table.  However,
>> running vacuum took a long time, so I'm not sure if that is a feasible
>> solution.  Is there any way to prevent fragmentation in the first place?
>> If not for the whole database, then for a specific table?  (e.g. is it
>> possible to preallocate space for a table?)
>>
>> Thanks
>>
> I'm not an expert on SQLite but generally speaking fragmentation in a
> database is usually a result of records being added and deleted. Are you
> doing a lot of these and if so, can you change your algorithm to cut
> down on this activity?

exactly the right approach. Even better, make your application do the
vacuuming when your users are away, much like housekeeping in a hotel.



>
> Regards,
>  Greg



-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Carbon Model http://carbonmodel.org/
Open Source Geospatial Foundation http://www.osgeo.org/


Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread John Machin
On 4/03/2009 5:52 AM, Trainor, Chris wrote:
> I am trying to use the Sum function on a column in a table with ~450K
> rows in it.  
> 
>   Select sum(Col4) from Table1
> 
> Where Table1 looks like this:
> 
> Create TABLE Table1 (
>   Col1 INTEGER NOT NULL,
>   Col2 INTEGER NOT NULL,
>   Col3 INTEGER NOT NULL,
>   Col4 BIGINT NOT NULL,
>   Col5 BIGINT NOT NULL,
>   Col6 BLOB NOT NULL,

What is the min/max/average size of this blob and how often do you need 
to access it? If the answer tends towards "huge and rarely", consider 
putting it in a separate table.


>   Col7 CHAR(1) DEFAULT '0',
>   Col8 NUMERIC(2) NOT NULL,
>   Col9 NUMERIC(2) NOT NULL,
>   Col10 INTEGER NOT NULL,
> Col11 INTEGER NOT NULL,
>   CONSTRAINT FK_1 FOREIGN KEY (Col1) REFERENCES Table2 (Col1)
>   ON DELETE CASCADE
>   ON UPDATE CASCADE,
>   CONSTRAINT PK_1 PRIMARY KEY (Col10, Col11, Col1, Col3 DESC)
> );


Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread P Kishor
On Tue, Mar 3, 2009 at 6:36 PM, Greg Palmer  wrote:
> Trainor, Chris wrote:
>> After running vacuum, sum is fast on the original table.  However,
>> running vacuum took a long time, so I'm not sure if that is a feasible
>> solution.  Is there any way to prevent fragmentation in the first place?
>> If not for the whole database, then for a specific table?  (e.g. is it
>> possible to preallocate space for a table?)
>>
>> Thanks
>>
> I'm not an expert on SQLite but generally speaking fragmentation in a
> database is usually a result of records being added and deleted. Are you
> doing a lot of these and if so, can you change your algorithm to cut
> down on this activity?

exactly the right approach. Even better, make your application do the
vacuuming when your users are away, much like housekeeping in a hotel.



>
> Regards,
>  Greg



-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Carbon Model http://carbonmodel.org/
Open Source Geospatial Foundation http://www.osgeo.org/


Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread Greg Palmer
Trainor, Chris wrote:
> After running vacuum, sum is fast on the original table.  However,
> running vacuum took a long time, so I'm not sure if that is a feasible
> solution.  Is there any way to prevent fragmentation in the first place?
> If not for the whole database, then for a specific table?  (e.g. is it
> possible to preallocate space for a table?)
>
> Thanks
>   
I'm not an expert on SQLite but generally speaking fragmentation in a 
database is usually a result of records being added and deleted. Are you 
doing a lot of these and if so, can you change your algorithm to cut 
down on this activity?

Regards,
  Greg


Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread Trainor, Chris
After running vacuum, sum is fast on the original table.  However,
running vacuum took a long time, so I'm not sure if that is a feasible
solution.  Is there any way to prevent fragmentation in the first place?
If not for the whole database, then for a specific table?  (e.g. is it
possible to preallocate space for a table?)

Thanks

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Igor Tandetnik
Sent: Tuesday, March 03, 2009 2:00 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Slow performance with Sum function

Trainor, Chris <chris.trai...@ironmountain.com>
wrote:
> I am trying to use the Sum function on a column in a table with ~450K
> rows in it.
>
> Select sum(Col4) from Table1
>
> It takes over 2 minutes to execute when using the original table.  I
> created an exact copy of the table with the same indices and
> constraints and inserted all the data from the original table into
> it.  Summing that column on the copied table only takes a few seconds.

Try running VACUUM on your database. Your original table is probably 
badly fragmented and results in excessive disk seeking.

Igor Tandetnik 





Re: [sqlite] Slow performance with Sum function

2009-03-03 Thread Igor Tandetnik
Trainor, Chris 
wrote:
> I am trying to use the Sum function on a column in a table with ~450K
> rows in it.
>
> Select sum(Col4) from Table1
>
> It takes over 2 minutes to execute when using the original table.  I
> created an exact copy of the table with the same indices and
> constraints and inserted all the data from the original table into
> it.  Summing that column on the copied table only takes a few seconds.

Try running VACUUM on your database. Your original table is probably 
badly fragmented and results in excessive disk seeking.

Igor Tandetnik 





[sqlite] Slow performance with Sum function

2009-03-03 Thread Trainor, Chris
I am trying to use the Sum function on a column in a table with ~450K
rows in it.  

Select sum(Col4) from Table1

Where Table1 looks like this:

Create TABLE Table1 (
Col1 INTEGER NOT NULL,
Col2 INTEGER NOT NULL,
Col3 INTEGER NOT NULL,
Col4 BIGINT NOT NULL,
Col5 BIGINT NOT NULL,
Col6 BLOB NOT NULL,
Col7 CHAR(1) DEFAULT '0',
Col8 NUMERIC(2) NOT NULL,
Col9 NUMERIC(2) NOT NULL,
Col10 INTEGER NOT NULL,
Col11 INTEGER NOT NULL,
CONSTRAINT FK_1 FOREIGN KEY (Col1) REFERENCES Table2 (Col1)
ON DELETE CASCADE
ON UPDATE CASCADE,
CONSTRAINT PK_1 PRIMARY KEY (Col10, Col11, Col1, Col3 DESC)
);


It takes over 2 minutes to execute when using the original table.  I
created an exact copy of the table with the same indices and constraints
and inserted all the data from the original table into it.  Summing that
column on the copied table only takes a few seconds.

I am guessing that using the copied table is faster because it has all
of its data arranged contiguously, but that is just a guess.

Can anyone shed some light on this?  Making a copy of the table is not
an option, so is there anything I can do to get better performance from
the original table?

Thanks
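[A minimal, self-contained reproduction of the query in question, using Python's standard sqlite3 module. The data is made up, and the FOREIGN KEY to Table2 is omitted since that table isn't shown in the post:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Schema from the post, minus the FK to the unshown Table2.
con.execute("""
    CREATE TABLE Table1 (
        Col1 INTEGER NOT NULL, Col2 INTEGER NOT NULL, Col3 INTEGER NOT NULL,
        Col4 BIGINT NOT NULL, Col5 BIGINT NOT NULL, Col6 BLOB NOT NULL,
        Col7 CHAR(1) DEFAULT '0',
        Col8 NUMERIC(2) NOT NULL, Col9 NUMERIC(2) NOT NULL,
        Col10 INTEGER NOT NULL, Col11 INTEGER NOT NULL,
        CONSTRAINT PK_1 PRIMARY KEY (Col10, Col11, Col1, Col3 DESC))""")
# Hypothetical rows standing in for the real ~450K-row table.
con.executemany(
    "INSERT INTO Table1 VALUES (?,?,?,?,?,?,?,?,?,?,?)",
    [(i, 0, i, i, 0, b"x", "0", 0, 0, i, 0) for i in range(1000)])
total = con.execute("SELECT sum(Col4) FROM Table1").fetchone()[0]
con.close()
```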