Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-24 Thread Ken
I see now, thanks for the clarification. Sorry for the ramblings.

And there is a .mode csv, so that should take care of data loading.
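
For example, a load from the shell might look something like this (the file and
table names are just placeholders; .import splits each input line on the
current separator):

  sqlite> .separator ,
  sqlite> .import data.csv bigtable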

Regards,
Ken

Joe Wilson <[EMAIL PROTECTED]> wrote:

--- Ken wrote:
> A new ".imp" command could be made to use my proposed array-based 
> processing.

There's already an .import command in the shell.

>   I don't agree with the approach of modifying the SQL language to create a 
> generic high-speed loading interface.

It's not my extension.
According to Wikipedia, multi-row inserts are part of SQL-92
and are supported by DB2, PostgreSQL (since version 8.2), and MySQL.

http://en.wikipedia.org/wiki/Insert_%28SQL%29#Multirow_inserts






Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-23 Thread Ken
Same problem, just a different thought process.

   The .dump format could be made to use CSV, making it more compact.

   A new ".imp" command could be made to use my proposed array-based 
processing, and also follow DRH's recommendation to use a transaction.

   Unfortunately, .dump is generic for any SQL, so that the SQL can simply be 
replayed to recreate a database, rather than being specific to data loading. 
   The table-structure .dump and the data .dump could be made into two 
well-defined and separate entities.

  I don't agree with the approach of modifying the SQL language to create a 
generic high-speed loading interface, when the real problem lies with the 
format of .dump and the lack of an .import (or .load) command. Array-based 
processing and user-based addressing would most probably not help with this 
problem.

Maybe a separate program could be developed for simple data unload and reload, 
as opposed to the sqlite .dump.

Regards,
Ken


Joe Wilson <[EMAIL PROTECTED]> wrote:

You're solving a different problem. Better to stick with the
existing API for what you're proposing.

I just want a faster, more compact and flexible SQL INSERT command 
from the command-line without resorting to custom programming.

--- Ken  wrote:

> Joe,
> 
> That is quite an interesting performance gain...
> 
> One thing that might help with this "multi-row" insert is the concept
> of binding address variables. IMHO, the binding functions (sqlite3_bind_int
> etc.) should allow one to permanently bind an address to a placeholder.
> 
> Then your multi-row insert could be implemented more simply by having the
> user bind arrays of structs, or arrays of an intrinsic type, to the sqlite
> statement.
> 
> That way, after a "step" (using, say, a SELECT) there would be no need for
> all of the data transfer (sqlite3_column_int) from sqlite into user space;
> sqlite would already have the hooks into the user-space addressing. The user
> would pass in the number of rows in the array to be stepped, and sqlite
> would return the number of rows actually stepped (i.e. end of data).
> 
> This, I do understand, is quite a departure from the current paradigm, but
> one that would be good to explore from a performance standpoint.
> 
> 
> Regards,
> Ken







Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-23 Thread Joe Wilson
You're solving a different problem. Better to stick with the
existing API for what you're proposing.

I just want a faster, more compact and flexible SQL INSERT command 
from the command-line without resorting to custom programming.

--- Ken <[EMAIL PROTECTED]> wrote:

> Joe,
> 
> That is quite an interesting performance gain...
> 
> One thing that might help with this "multi-row" insert is the concept
> of binding address variables. IMHO, the binding functions (sqlite3_bind_int
> etc.) should allow one to permanently bind an address to a placeholder.
> 
> Then your multi-row insert could be implemented more simply by having the
> user bind arrays of structs, or arrays of an intrinsic type, to the sqlite
> statement.
> 
> That way, after a "step" (using, say, a SELECT) there would be no need for
> all of the data transfer (sqlite3_column_int) from sqlite into user space;
> sqlite would already have the hooks into the user-space addressing. The user
> would pass in the number of rows in the array to be stepped, and sqlite
> would return the number of rows actually stepped (i.e. end of data).
> 
> This, I do understand, is quite a departure from the current paradigm, but
> one that would be good to explore from a performance standpoint.
> 
> 
> Regards,
> Ken






Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-23 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:

> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > FYI: When this multi-row-insert patch is combined with today's CVS 
> > OP_Real and OP_Int64 fix for Tickets #2733 and #2731 (Check-in [4507]), 
> > the multi-row ASCII INSERT statements are as much as 25% faster than 
> > the equivalent number of traditional individual ASCII INSERT statements 
> > within a transaction.
> > 
> > i.e.,
> > 
> >   -- this is 25% faster...
> >   INSERT INTO BIGTABLE VALUES
> >(-334712687065.09, -334712687065.12, -334712687065.13),
> >(-334712687065.09, -334712687065.12, -334712687065.13),
> >... 9996 similar rows ...
> >(-334712687065.09, -334712687065.12, -334712687065.13),
> >(-334712687065.09, -334712687065.12, -334712687065.13);
> > 
> >   -- ...than this
> >   BEGIN;
> >   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> > -334712687065.13);
> >   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> > -334712687065.13);
> >   ... 9996 similar rows ...
> >   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> > -334712687065.13);
> >   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> > -334712687065.13);
> >   COMMIT;
> > 
> 
> I wonder how the performance of the above compares to this:
> 
>BEGIN;
>INSERT INTO BIGTABLE VALUES(?,?,?);  -- bind values as appropriate
>-- bind new values and rerun the same prepared statement  more
>-- times.
>END;
> 
> I'm guessing that the latter is at least twice as fast as either of
> the first two.

It's a tiny bit faster to use a prepared statement, but not by much.
But you're missing the point. The real point of the exercise is to:

1. Add a common and useful SQL INSERT extension that works in the 
   sqlite3 shell.

2. Speed up the SQL INSERT command from ASCII without writing 
   custom code for default columns, etc.

3. And optionally, create a .dump format that is much more compact
   on huge datasets.





Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-23 Thread drh
Joe Wilson <[EMAIL PROTECTED]> wrote:
> FYI: When this multi-row-insert patch is combined with today's CVS 
> OP_Real and OP_Int64 fix for Tickets #2733 and #2731 (Check-in [4507]), 
> the multi-row ASCII INSERT statements are as much as 25% faster than 
> the equivalent number of traditional individual ASCII INSERT statements 
> within a transaction.
> 
> i.e.,
> 
>   -- this is 25% faster...
>   INSERT INTO BIGTABLE VALUES
>(-334712687065.09, -334712687065.12, -334712687065.13),
>(-334712687065.09, -334712687065.12, -334712687065.13),
>... 9996 similar rows ...
>(-334712687065.09, -334712687065.12, -334712687065.13),
>(-334712687065.09, -334712687065.12, -334712687065.13);
> 
>   -- ...than this
>   BEGIN;
>   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> -334712687065.13);
>   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> -334712687065.13);
>   ... 9996 similar rows ...
>   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> -334712687065.13);
>   INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
> -334712687065.13);
>   COMMIT;
> 

I wonder how the performance of the above compares to this:

   BEGIN;
   INSERT INTO BIGTABLE VALUES(?,?,?);  -- bind values as appropriate
   -- bind new values and rerun the same prepared statement  more
   -- times.
   END;

I'm guessing that the latter is at least twice as fast as either of
the first two.
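
For what it's worth, a minimal sketch of that bind/step/reset loop with the C
API might look like the following (the bulk_insert helper, the three-column
BIGTABLE schema and the constant values are only placeholders; error handling
is trimmed):

  #include "sqlite3.h"

  /* Sketch: insert nRows rows into BIGTABLE(a,b,c) by re-running one
  ** prepared statement inside a single transaction. */
  int bulk_insert(sqlite3 *db, int nRows){
    sqlite3_stmt *pStmt = 0;
    int i, rc;

    rc = sqlite3_prepare_v2(db,
           "INSERT INTO BIGTABLE VALUES(?,?,?)", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;

    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for(i=0; i<nRows; i++){
      sqlite3_bind_double(pStmt, 1, -334712687065.09);  /* bind values as  */
      sqlite3_bind_double(pStmt, 2, -334712687065.12);  /* appropriate for */
      sqlite3_bind_double(pStmt, 3, -334712687065.13);  /* each row        */
      rc = sqlite3_step(pStmt);          /* performs the insert            */
      if( rc!=SQLITE_DONE ) break;
      sqlite3_reset(pStmt);              /* rerun the same statement       */
    }
    sqlite3_finalize(pStmt);

    if( rc==SQLITE_DONE || rc==SQLITE_OK ){
      sqlite3_exec(db, "COMMIT", 0, 0, 0);
      return SQLITE_OK;
    }
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
    return rc;
  }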

--
D. Richard Hipp <[EMAIL PROTECTED]>






Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-23 Thread Ken
Joe,

That is quite an interesting performance gain...

One thing that might help with this "multi-row" insert is the concept
of binding address variables. IMHO, the binding functions (sqlite3_bind_int
etc.) should allow one to permanently bind an address to a placeholder.

Then your multi-row insert could be implemented more simply by having the user
bind arrays of structs, or arrays of an intrinsic type, to the sqlite statement.

That way, after a "step" (using, say, a SELECT) there would be no need for all
of the data transfer (sqlite3_column_int) from sqlite into user space; sqlite
would already have the hooks into the user-space addressing. The user would
pass in the number of rows in the array to be stepped, and sqlite would return
the number of rows actually stepped (i.e. end of data).

This, I do understand, is quite a departure from the current paradigm, but one
that would be good to explore from a performance standpoint.


Regards,
Ken



Joe Wilson <[EMAIL PROTECTED]> wrote:

FYI: When this multi-row-insert patch is combined with today's CVS 
OP_Real and OP_Int64 fix for Tickets #2733 and #2731 (Check-in [4507]), 
the multi-row ASCII INSERT statements are as much as 25% faster than 
the equivalent number of traditional individual ASCII INSERT statements 
within a transaction.

i.e.,

  -- this is 25% faster...
  INSERT INTO BIGTABLE VALUES
   (-334712687065.09, -334712687065.12, -334712687065.13),
   (-334712687065.09, -334712687065.12, -334712687065.13),
   ... 9996 similar rows ...
   (-334712687065.09, -334712687065.12, -334712687065.13),
   (-334712687065.09, -334712687065.12, -334712687065.13);

  -- ...than this
  BEGIN;
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  ... 9996 similar rows ...
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  COMMIT;

I also had to bump up the SQLITE_MAX_SQL_LENGTH define to 5000
to get this to work.

The caveat is that you can't have more than approximately 10,000 to 
15,000 rows per multi-insert statement due to 

  sqlite3Select -> multiSelect -> sqlite3Select

recursion leading to stack overflow. This compound select recursion limit 
could be increased by a factor of 100 or more by rewriting these 2 
functions to use heap-allocated stack frame structs to store all function 
local variables. Or with more work, eliminate the compound select stack 
limitation altogether by using a loop instead of recursion for multiSelect 
compound select chain vdbe code generation.

--- Joe Wilson  wrote:
> The attached patch implements the compound (multi-row) INSERT statement 
> syntax against the SQLite 3.5.1 source tree. Both named and unnamed
> INSERT column syntax with DEFAULT column values are supported.
> 
> The patch transforms multi-row INSERT statements into compound SELECT
> statements separated by UNION ALLs. It should be fairly efficient,
> as all rows within the compound INSERT are inserted within a single 
> implicit transaction.
> 
> SQLITE_MAX_COMPOUND_SELECT in sqliteLimit.h also serves as the maximum 
> number of rows in a multi-row INSERT.
> 
> No regressions in "make test". Post any problems to the mailing list, 
> and I'll try to keep this patch up to date.
> 
> This patch is hereby placed in the public domain.
> 
> Examples:
> 
> SQLite version 3.5.1
> Enter ".help" for instructions
> sqlite> CREATE TABLE foo(a DEFAULT 123, b DEFAULT 'Bob');
> sqlite> insert into foo values (1,'Ace'), (2,'Two'), (3,'Three');
> sqlite> select * from foo;
> 1|Ace
> 2|Two
> 3|Three
> sqlite> delete from foo;
> sqlite> insert into foo(b,a) values (1,'Ace'), (2,'Two'), (3,'Three');
> sqlite> select * from foo;
> Ace|1
> Two|2
> Three|3
> sqlite> delete from foo;
> sqlite> insert into foo(a) values (10), (20), (30);
> sqlite> select * from foo;
> 10|Bob
> 20|Bob
> 30|Bob
> 
> 
> Index: src/parse.y
> ===
> RCS file: /sqlite/sqlite/src/parse.y,v
> retrieving revision 1.234
> diff -u -3 -p -r1.234 parse.y
> --- src/parse.y 21 Aug 2007 10:44:16 - 1.234
> +++ src/parse.y 14 Oct 2007 07:29:32 -
> @@ -624,6 +624,42 @@ inscollist(A) ::= inscollist(X) COMMA nm
>  inscollist(A) ::= nm(Y).
>  {A = sqlite3IdListAppend(pParse->db,0,&Y);}
>  
> +%ifndef SQLITE_OMIT_COMPOUND_INSERT
> +%ifndef SQLITE_OMIT_COMPOUND_SELECT
> +
> +%type inslistItem {Select*}
> +%destructor inslistItem {sqlite3SelectDelete($$);}
> +
> +inslistItem(A) ::= LP itemlist(W) RP. {
> +SrcList *pSrc = sqlite3DbMallocZero(pParse->db, sizeof(SrcList));
> +A = sqlite3SelectNew(pParse,W,pSrc,0,0,0,0,0,0,0);
> +}
> +
> +%type inslist {Select*}
> +%destructor inslist {sqlite3SelectDelete($$);}
> +
> +%type comma_opt {int}
> +comma_opt(X) ::= .  {X = 0;}
> +comma_opt(X) ::= COMMA. {X = 1;}
> +
> +inslist(A) ::= 

Re: [sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-23 Thread Joe Wilson
FYI: When this multi-row-insert patch is combined with today's CVS 
OP_Real and OP_Int64 fix for Tickets #2733 and #2731 (Check-in [4507]), 
the multi-row ASCII INSERT statements are as much as 25% faster than 
the equivalent number of traditional individual ASCII INSERT statements 
within a transaction.

i.e.,

  -- this is 25% faster...
  INSERT INTO BIGTABLE VALUES
   (-334712687065.09, -334712687065.12, -334712687065.13),
   (-334712687065.09, -334712687065.12, -334712687065.13),
   ... 9996 similar rows ...
   (-334712687065.09, -334712687065.12, -334712687065.13),
   (-334712687065.09, -334712687065.12, -334712687065.13);

  -- ...than this
  BEGIN;
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  ... 9996 similar rows ...
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, 
-334712687065.13);
  COMMIT;

I also had to bump up the SQLITE_MAX_SQL_LENGTH define to 5000
to get this to work.

The caveat is that you can't have more than approximately 10,000 to 
15,000 rows per multi-insert statement due to 

  sqlite3Select -> multiSelect -> sqlite3Select

recursion leading to stack overflow. This compound select recursion limit 
could be increased by a factor of 100 or more by rewriting these 2 
functions to use heap-allocated stack frame structs to store all function 
local variables. Or with more work, eliminate the compound select stack 
limitation altogether by using a loop instead of recursion for multiSelect 
compound select chain vdbe code generation.
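
To make the loop-instead-of-recursion idea concrete, here is a generic sketch
(not SQLite's actual multiSelect code) of walking a pPrior-style chain
iteratively instead of recursively:

  /* Each compound term points at the previous term via pPrior. */
  typedef struct Term Term;
  struct Term {
    Term *pPrior;     /* previous term in the chain, or 0 */
    int value;        /* stand-in for the per-term work   */
  };

  /* Recursive form: one C stack frame per term (the stack-depth problem). */
  int processRecursive(Term *p){
    int n = p->pPrior ? processRecursive(p->pPrior) : 0;
    return n + p->value;
  }

  /* Iterative form: reverse the chain once so terms come out in the same
  ** left-to-right order, then walk it with a loop; stack usage no longer
  ** grows with the number of terms. */
  int processIterative(Term *p){
    Term *rev = 0;
    Term *pNext;
    int n = 0;
    while( p ){
      pNext = p->pPrior;
      p->pPrior = rev;
      rev = p;
      p = pNext;
    }
    for(; rev; rev = rev->pPrior){
      n += rev->value;    /* per-term work (code generation) goes here */
    }
    return n;
  }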

--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> The attached patch implements the compound (multi-row) INSERT statement 
> syntax against the SQLite 3.5.1 source tree. Both named and unnamed
> INSERT column syntax with DEFAULT column values are supported.
> 
> The patch transforms multi-row INSERT statements into compound SELECT
> statements separated by UNION ALLs. It should be fairly efficient,
> as all rows within the compound INSERT are inserted within a single 
> implicit transaction.
> 
> SQLITE_MAX_COMPOUND_SELECT in sqliteLimit.h also serves as the maximum 
> number of rows in a multi-row INSERT.
> 
> No regressions in "make test". Post any problems to the mailing list, 
> and I'll try to keep this patch up to date.
> 
> This patch is hereby placed in the public domain.
> 
> Examples:
> 
> SQLite version 3.5.1
> Enter ".help" for instructions
> sqlite> CREATE TABLE foo(a DEFAULT 123, b DEFAULT 'Bob');
> sqlite> insert into foo values (1,'Ace'), (2,'Two'), (3,'Three');
> sqlite> select * from foo;
> 1|Ace
> 2|Two
> 3|Three
> sqlite> delete from foo;
> sqlite> insert into foo(b,a) values (1,'Ace'), (2,'Two'), (3,'Three');
> sqlite> select * from foo;
> Ace|1
> Two|2
> Three|3
> sqlite> delete from foo;
> sqlite> insert into foo(a) values (10), (20), (30);
> sqlite> select * from foo;
> 10|Bob
> 20|Bob
> 30|Bob
> 
> 
> Index: src/parse.y
> ===
> RCS file: /sqlite/sqlite/src/parse.y,v
> retrieving revision 1.234
> diff -u -3 -p -r1.234 parse.y
> --- src/parse.y   21 Aug 2007 10:44:16 -  1.234
> +++ src/parse.y   14 Oct 2007 07:29:32 -
> @@ -624,6 +624,42 @@ inscollist(A) ::= inscollist(X) COMMA nm
>  inscollist(A) ::= nm(Y).
>  {A = sqlite3IdListAppend(pParse->db,0,&Y);}
>  
> +%ifndef SQLITE_OMIT_COMPOUND_INSERT
> +%ifndef SQLITE_OMIT_COMPOUND_SELECT
> +
> +%type inslistItem {Select*}
> +%destructor inslistItem {sqlite3SelectDelete($$);}
> +
> +inslistItem(A) ::= LP itemlist(W) RP. {
> +SrcList *pSrc = sqlite3DbMallocZero(pParse->db, sizeof(SrcList));
> +A = sqlite3SelectNew(pParse,W,pSrc,0,0,0,0,0,0,0);
> +}
> +
> +%type inslist {Select*}
> +%destructor inslist {sqlite3SelectDelete($$);}
> +
> +%type comma_opt {int}
> +comma_opt(X) ::= .  {X = 0;}
> +comma_opt(X) ::= COMMA. {X = 1;}
> +
> +inslist(A) ::= inslistItem(Z) COMMA. {A = Z;}
> +inslist(A) ::= inslist(X) inslistItem(Z) comma_opt(C). {
> +  C = C;
> +  if( Z ){
> +Z->op = TK_ALL;
> +Z->pPrior = X;
> +  }else{
> +sqlite3SelectDelete(X);
> +  }
> +  A = Z;
> +}
> +
> +cmd ::= insert_cmd(R) INTO fullname(X) inscollist_opt(F) VALUES inslist(S).
> +{sqlite3Insert(pParse, X, 0, S, F, R);}
> +
> +%endif  SQLITE_OMIT_COMPOUND_SELECT
> +%endif  SQLITE_OMIT_COMPOUND_INSERT
> +
>  /// Expression Processing 
> /
>  //
>  
> Index: src/sqliteLimit.h
> ===
> RCS file: /sqlite/sqlite/src/sqliteLimit.h,v
> retrieving revision 1.2
> diff -u -3 -p -r1.2 sqliteLimit.h
> --- src/sqliteLimit.h 24 Aug 2007 11:52:29 -  1.2
> +++ src/sqliteLimit.h 14 Oct 2007 07:29:33 -
> @@ -75,7 +75,7 @@
>  ** any limit on the number of terms in a compount 

[sqlite] PATCH: compound (multi-row) INSERT statements

2007-10-14 Thread Joe Wilson
The attached patch implements the compound (multi-row) INSERT statement 
syntax against the SQLite 3.5.1 source tree. Both named and unnamed
INSERT column syntax with DEFAULT column values are supported.

The patch transforms multi-row INSERT statements into compound SELECT
statements separated by UNION ALLs. It should be fairly efficient,
as all rows within the compound INSERT are inserted within a single 
implicit transaction.
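
In other words, a multi-row statement like

  INSERT INTO foo VALUES (1,'Ace'), (2,'Two'), (3,'Three');

is handled internally roughly as if it had been written

  INSERT INTO foo
  SELECT 1,'Ace' UNION ALL SELECT 2,'Two' UNION ALL SELECT 3,'Three';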

SQLITE_MAX_COMPOUND_SELECT in sqliteLimit.h also serves as the maximum 
number of rows in a multi-row INSERT.

No regressions in "make test". Post any problems to the mailing list, 
and I'll try to keep this patch up to date.

This patch is hereby placed in the public domain.

Examples:

SQLite version 3.5.1
Enter ".help" for instructions
sqlite> CREATE TABLE foo(a DEFAULT 123, b DEFAULT 'Bob');
sqlite> insert into foo values (1,'Ace'), (2,'Two'), (3,'Three');
sqlite> select * from foo;
1|Ace
2|Two
3|Three
sqlite> delete from foo;
sqlite> insert into foo(b,a) values (1,'Ace'), (2,'Two'), (3,'Three');
sqlite> select * from foo;
Ace|1
Two|2
Three|3
sqlite> delete from foo;
sqlite> insert into foo(a) values (10), (20), (30);
sqlite> select * from foo;
10|Bob
20|Bob
30|Bob




   

Index: src/parse.y
===
RCS file: /sqlite/sqlite/src/parse.y,v
retrieving revision 1.234
diff -u -3 -p -r1.234 parse.y
--- src/parse.y 21 Aug 2007 10:44:16 -  1.234
+++ src/parse.y 14 Oct 2007 07:29:32 -
@@ -624,6 +624,42 @@ inscollist(A) ::= inscollist(X) COMMA nm
 inscollist(A) ::= nm(Y).
 {A = sqlite3IdListAppend(pParse->db,0,&Y);}
 
+%ifndef SQLITE_OMIT_COMPOUND_INSERT
+%ifndef SQLITE_OMIT_COMPOUND_SELECT
+
+%type inslistItem {Select*}
+%destructor inslistItem {sqlite3SelectDelete($$);}
+
+inslistItem(A) ::= LP itemlist(W) RP. {
+SrcList *pSrc = sqlite3DbMallocZero(pParse->db, sizeof(SrcList));
+A = sqlite3SelectNew(pParse,W,pSrc,0,0,0,0,0,0,0);
+}
+
+%type inslist {Select*}
+%destructor inslist {sqlite3SelectDelete($$);}
+
+%type comma_opt {int}
+comma_opt(X) ::= .  {X = 0;}
+comma_opt(X) ::= COMMA. {X = 1;}
+
+inslist(A) ::= inslistItem(Z) COMMA. {A = Z;}
+inslist(A) ::= inslist(X) inslistItem(Z) comma_opt(C). {
+  C = C;
+  if( Z ){
+Z->op = TK_ALL;
+Z->pPrior = X;
+  }else{
+sqlite3SelectDelete(X);
+  }
+  A = Z;
+}
+
+cmd ::= insert_cmd(R) INTO fullname(X) inscollist_opt(F) VALUES inslist(S).
+{sqlite3Insert(pParse, X, 0, S, F, R);}
+
+%endif  SQLITE_OMIT_COMPOUND_SELECT
+%endif  SQLITE_OMIT_COMPOUND_INSERT
+
 /// Expression Processing /
 //
 
Index: src/sqliteLimit.h
===
RCS file: /sqlite/sqlite/src/sqliteLimit.h,v
retrieving revision 1.2
diff -u -3 -p -r1.2 sqliteLimit.h
--- src/sqliteLimit.h   24 Aug 2007 11:52:29 -  1.2
+++ src/sqliteLimit.h   14 Oct 2007 07:29:33 -
@@ -75,7 +75,7 @@
 ** any limit on the number of terms in a compount SELECT.
 */
 #ifndef SQLITE_MAX_COMPOUND_SELECT
-# define SQLITE_MAX_COMPOUND_SELECT 500
+# define SQLITE_MAX_COMPOUND_SELECT 500
 #endif
 
 /*
