Re: [sqlite] Contrib uploads

2005-03-24 Thread Eugene Wee
Hi,
have you considered using UPX to reduce the executable file size?
http://upx.sourceforge.net
Eugene Wee
Cariotoglou Mike wrote:
1.1 MB. I used the [EMAIL PROTECTED] devExpress grid, which is great
functionality-wise but bloats the exe.


-Original Message-
From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 24, 2005 11:44 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Contrib uploads

On Thu, 2005-03-24 at 11:24 +0200, Cariotoglou Mike wrote:
I tried to upload a new version of sqlite3Explorer, and I 
got back the
error:
"Too much POST data". 
How big of an upload are we talking about?
--
D. Richard Hipp <[EMAIL PROTECTED]>





RE: [sqlite] getting table column names and types programmatically

2005-03-24 Thread Cariotoglou Mike
PRAGMA table_info(tablename). For goodness' sake, do read the documentation;
somebody spent good time writing it!

> -Original Message-
> From: Gerry Snyder [mailto:[EMAIL PROTECTED] 
> Sent: Friday, March 25, 2005 1:19 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] getting table column names and types 
> programmatically
> 
> Jim Dodgen wrote:
> > Anyone know how I can query the field names and types for a 
> given table? 
> 
> Jim,
> 
> select sql from sqlite_master where type="table" and name="gigo"
> 
> will get something like:
> 
> create table gigo(a,b,c)
> 
> which includes the field names, and would include the types 
> if I had used any.
> 
> Gerry
> 
> 
> 



RE: [sqlite] Contrib uploads

2005-03-24 Thread Cariotoglou Mike
1.1 MB. I used the [EMAIL PROTECTED] devExpress grid, which is great
functionality-wise but bloats the exe.

> -Original Message-
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 24, 2005 11:44 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Contrib uploads
> 
> On Thu, 2005-03-24 at 11:24 +0200, Cariotoglou Mike wrote:
> > I tried to upload a new version of sqlite3Explorer, and I 
> got back the
> > error:
> > "Too much POST data". 
> 
> How big of an upload are we talking about?
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
> 
> 



[sqlite] Pressure test and see the actual performance (phase I)

2005-03-24 Thread chan wilson
Hi All,
   I used http://sourceforge.net/projects/adodotnetsqlite to run a
pressure test and see the actual performance on my PC: Pentium 4 3.0 GHz,
1 GB RAM, Windows Server 2003 (Simplified Chinese) + VS.NET 2003, with
Unicode characters.

I slightly modified the official test: the first field's content and the row
count.

(23+i+i*3.3+i*4.4 Bytes) * 10 million
Inserting version 3... 47009 inserts/sec
Reading version 3... 262595 reads/sec
(23+i+i*3.3+i*4.4 Bytes) * 100 million
Inserting version 3... 48042 inserts/sec
Reading version 3... 102899 reads/sec
around 6.7G in size
(10 thousand+i+i*3.3+i*4.4 Bytes) * 100 thousand
Inserting version 3... 468 inserts/sec
Reading version 3... 654 reads/sec
(10 thousand+i+i*3.3+i*4.4 Bytes) * 500 thousand
Inserting version 3... 334 inserts/sec
Reading version 3... 641 reads/sec
around 7.5G in size
Simply put: longer fields mean worse performance, but a larger record count
does not (much).

BTW: when inserting, CPU usage was around 50%; when selecting, 10-20%.



Re: [sqlite] getting table column names and types programmatically

2005-03-24 Thread Kurt Welgehausen
> Anyone know how I can query the field names and types for a given table?

pragma table_info(tablename)
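The pragma above can be exercised from any SQLite binding; a minimal sketch using Python's built-in sqlite3 module (the table name and columns are invented for illustration):

```python
import sqlite3

# Throwaway in-memory database with an illustrative table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gigo (a INTEGER, b TEXT, c REAL)")

# Each PRAGMA table_info row is (cid, name, type, notnull, dflt_value, pk)
for cid, name, ctype, notnull, dflt, pk in conn.execute("PRAGMA table_info(gigo)"):
    print(name, ctype)
```

This prints one line per column with its declared type (e.g. `a INTEGER`), which is exactly the information the original question asked for.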


Re: [sqlite] getting table column names and types programmatically

2005-03-24 Thread Gerry Snyder
Jim Dodgen wrote:
Anyone know how I can query the field names and types for a given table? 
Jim,
select sql from sqlite_master where type="table" and name="gigo"
will get something like:
create table gigo(a,b,c)
which includes the field names, and would include the types if I had 
used any.

Gerry
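Gerry's query can be reproduced through Python's built-in sqlite3 module; note that standard SQL uses single quotes for the string literals (SQLite happens to tolerate the double quotes above). A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table gigo(a,b,c)")

# sqlite_master stores the text of each CREATE statement
(sql,) = conn.execute(
    "select sql from sqlite_master where type = 'table' and name = 'gigo'"
).fetchone()
print(sql)
```

The printed statement is the original `create table gigo(a,b,c)`, which includes the column names (and types, if any were declared).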


[sqlite] getting table column names and types programmatically

2005-03-24 Thread Jim Dodgen
Anyone know how I can query the field names and types for a given table? 






RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread Thomas Briggs

   I continue to hope that you're correct.  I'm already somewhat stumped
though.

   I think I see, at the simplest level, how to get the aggregate bucket
data into the b-tree - in AggInsert, change the data passed to
BtreeInsert to be the aggregate bucket itself, not the pointer, and
change the size appropriately; then, correspondingly in the handling of
OP_AggFocus, change the call to BtreeData to read back the entire
aggregate bucket instead of just the pointer.  Conceptually pretty
straightforward.  (I think.  Please correct me if I'm wrong.)

   Here's where I get stuck: calling BtreeInsert from AggInsert no
longer makes any sense, because the data in the aggregate bucket hasn't
yet been accumulated.  So I think, conceptually, I need another op code
that is called before OP_Next to get the data in the current aggregate
bucket written out to the b-tree.  That would seem to require changes to
the code generator, where I haven't yet ventured, so I'm hoping you can
confirm that I'm on the right track before I head off down the garden
path on my own. :)

   Just to make sure I'm not missing anything obvious: OP_Next seems to
be multi-purpose, so I don't want to just stick something in there.  If
there's a way to tell that an aggregate operation is underway such that
I can write the bucket data to the btree I'd be happy with that, but I'm
not sure that there is and the inelegance of that approach irks me
anyway.

   Thanks
   -Tom

> -Original Message-
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 24, 2005 4:26 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Memory usage for queries containing a 
> GROUP BY clause
> 
> On Thu, 2005-03-24 at 16:08 -0500, Thomas Briggs wrote:
> >Am I wrong in interpreting your comment to mean that 
> this should be
> > feasible within the current architecture, and more 
> importantly, feasible
> > for someone like me who looked at the SQLite source code 
> for the first
> > time yesterday? :)
> > 
> 
> I know of no reason why you should not be able to tackle this
> problem yourself.
> 
> 


[sqlite] in-code documentation for Mem->type

2005-03-24 Thread Bernhard Döbler
Hello,

vdbeInt.h reads:

** Each value has a manifest type. The manifest type of the value stored
** in a Mem struct is returned by the MemType(Mem*) macro. The type is
** one of SQLITE_NULL, SQLITE_INTEGER, SQLITE_REAL, SQLITE_TEXT or
** SQLITE_BLOB.
*/
struct Mem {
  i64 i;      /* Integer value */
  int n;      /* Number of characters in string value, including '\0' */
  u16 flags;  /* Some combination of MEM_Null, MEM_Str, MEM_Dyn, etc. */
  u8  type;   /* One of MEM_Null, MEM_Str, etc. */

In the block comment, SQLITE_REAL should obviously be SQLITE_FLOAT.
In the inline comment for the "type" member, it cannot be MEM_Null and the
like, since MEM_Null != SQLITE_NULL.
As I understand it, the MEM_* constants are used for the "flags" member,
indicating which members (string, float, integer) are validly set.

Happy Easter to all readers,
Bernhard



Re: [sqlite] Contrib uploads

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 11:24 +0200, Cariotoglou Mike wrote:
> I tried to upload a new version of sqlite3Explorer, and I got back the
> error:
> "Too much POST data". 

How big of an upload are we talking about?
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] Contrib uploads

2005-03-24 Thread Uriel_Carrasquilla




I use your product.  I would be interested in getting the new version.

Regards,

[EMAIL PROTECTED]
NCCI
Boca Raton, Florida
561.893.2415
greetings / avec mes meilleures salutations / Cordialmente
mit freundlichen Grüßen / Med vänlig hälsning


   
  From: "Cariotoglou Mike" <[EMAIL PROTECTED]ar.gr>
  Subject: [sqlite] Contrib uploads
  Date: 03/24/2005 04:24 AM
  Please respond to sqlite-users




I tried to upload a new version of sqlite3Explorer, and I got back the
error:
"Too much POST data". I assume there is a limit to the size we can
upload. If so, can it be extended a little?
If not, anybody interested in sqlite3Explorer should contact me to see
how I can send the
file to you. However, since I see that there are 13,000 downloads, I
would not be very excited about e-mailing 13,000 copies...








RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 16:08 -0500, Thomas Briggs wrote:
>Am I wrong in interpreting your comment to mean that this should be
> feasible within the current architecture, and more importantly, feasible
> for someone like me who looked at the SQLite source code for the first
> time yesterday? :)
> 

I know of no reason why you should not be able to tackle this
problem yourself.



RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread Thomas Briggs
 
> You are welcomed to experiment with changes that will store the
> entire result set row in the btree rather than just a pointer.
> If you can produce some performance improvements, we'll likely
> check in your changes.

   Am I wrong in interpreting your comment to mean that this should be
feasible within the current architecture, and more importantly, feasible
for someone like me who looked at the SQLite source code for the first
time yesterday? :)

   Thanks for all your help tracking this down.

   -Tom


RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 15:31 -0500, Thomas Briggs wrote:
>Well, I'm using the command line tool that comes with SQLite and
> there is no ORDER BY clause in my query, so both the good news and the
> bad news is that it certainly seems like something that SQLite is doing,
> uhh... sub-optimally, shall we say. :)
> 

It appears that the btree is storing just a pointer to the result
row data, not the result row data itself.  So the complete result
set is being kept in memory even if an external btree is used to
index it.

You are welcomed to experiment with changes that will store the
entire result set row in the btree rather than just a pointer.
If you can produce some performance improvements, we'll likely
check in your changes.

-- 
D. Richard Hipp <[EMAIL PROTECTED]>



RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread Thomas Briggs

   Well, I'm using the command line tool that comes with SQLite and
there is no ORDER BY clause in my query, so both the good news and the
bad news is that it certainly seems like something that SQLite is doing,
uhh... sub-optimally, shall we say. :)

   I'm working my way through the VDBE, attempting to figure out what's
going on, and there's one thing to which I can't yet find an answer, and
it's driving me crazy: when handling the AggFocus operation, AggInsert
is called if no entry is found in the b-tree for a given agg key.
AggInsert allocates the memory necessary for the new aggregate bucket,
adds it to the b-tree, and makes it the current bucket.  All well and
good.  But where does that bucket get freed?  Doesn't the next call to
AggInsert clobber the pointer to the previously-allocated bucket?
Unless I'm misunderstanding, sqlite3BtreeInsert makes a copy of the data
that needs to be stored (which only makes sense, if the b-tree is being
written to disk...), so I don't think that sqlite3BtreeInsert can be
responsible for it, right?

   Thanks
   -Tom

> -Original Message-
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 24, 2005 2:21 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Memory usage for queries containing a 
> GROUP BY clause
> 
> On Thu, 2005-03-24 at 13:59 -0500, Thomas Briggs wrote:
> >I feel like I'm missing something, but that didn't seem 
> to help.  I
> > can see in the code why it should be behaving differently 
> (many thanks
> > for the hint on where to look, BTW), but the memory usage 
> is unchanged.
> > 
> >I modified sqliteInt.h to define SQLITE_OMIT_MEMORYDB, 
> then verified
> > that it is defined by:
> >a) inserting garbage into vdbeaux.c to prevent it from 
> compiling when
> > SQLITE_OMIT_MEMORYDB is defined
> >b) tried to attach to database :memory: without success 
> (it created a
> > file name :memory: instead), and
> >c) started the command line tool without specifying a 
> database and
> > noting that the main database is a file named $CD/:memory: 
> > 
> >Am I missing something stupid here?  Or is all my memory 
> being used
> > somewhere other than constructing the b-tree used for aggregation?
> > 
> 
> It might be the case that SQLite is using memory in ways that
> are extravagant, wasteful, and unnecessary.  Or you could be doing
> something wrong in your app.  Hard to say.
> 
> Another big user of memory is ORDER BY.  If the ORDER BY clause
> cannot be satisfied by use of an index, then the entire result
> set is pulled into memory and sorted there.  Unlike the aggregate
> issue, there is no easy fix for getting ORDER BY to work off of
> disk, except appropriate use of indices in your schema.
> -- 
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
> 


RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 13:59 -0500, Thomas Briggs wrote:
>I feel like I'm missing something, but that didn't seem to help.  I
> can see in the code why it should be behaving differently (many thanks
> for the hint on where to look, BTW), but the memory usage is unchanged.
> 
>I modified sqliteInt.h to define SQLITE_OMIT_MEMORYDB, then verified
> that it is defined by:
>a) inserting garbage into vdbeaux.c to prevent it from compiling when
> SQLITE_OMIT_MEMORYDB is defined
>b) tried to attach to database :memory: without success (it created a
> file name :memory: instead), and
>c) started the command line tool without specifying a database and
> noting that the main database is a file named $CD/:memory: 
> 
>Am I missing something stupid here?  Or is all my memory being used
> somewhere other than constructing the b-tree used for aggregation?
> 

It might be the case that SQLite is using memory in ways that
are extravagant, wasteful, and unnecessary.  Or you could be doing
something wrong in your app.  Hard to say.

Another big user of memory is ORDER BY.  If the ORDER BY clause
cannot be satisfied by use of an index, then the entire result
set is pulled into memory and sorted there.  Unlike the aggregate
issue, there is no easy fix for getting ORDER BY to work off of
disk, except appropriate use of indices in your schema.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread Thomas Briggs

   I feel like I'm missing something, but that didn't seem to help.  I
can see in the code why it should be behaving differently (many thanks
for the hint on where to look, BTW), but the memory usage is unchanged.

   I modified sqliteInt.h to define SQLITE_OMIT_MEMORYDB, then verified
that it is defined by:
   a) inserting garbage into vdbeaux.c to prevent it from compiling when
SQLITE_OMIT_MEMORYDB is defined
   b) tried to attach to database :memory: without success (it created a
file name :memory: instead), and
   c) started the command line tool without specifying a database and
noting that the main database is a file named $CD/:memory: 

   Am I missing something stupid here?  Or is all my memory being used
somewhere other than constructing the b-tree used for aggregation?

   Thanks
   -Tom

> -Original Message-
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 24, 2005 11:19 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Memory usage for queries containing a 
> GROUP BY clause
> 
> On Thu, 2005-03-24 at 10:57 -0500, Thomas Briggs wrote:
> >After posting my question, I found the discussion of how 
> aggregate
> > operations are performed in the VDBE Tutorial; that implies 
> that memory
> > usage will correspond with the number of unique keys 
> encountered by the
> > query, but I appreciate having it stated explicitly.
> > 
> >How difficult would it be, in concept, to change the 
> storage of the
> > hash buckets from in-memory to on-disk?  The VDBE Tutorial makes it
> > sound like it would be a matter of changing the AggFocus, 
> AggNext, and
> > maybe a few other operations to store/retrieve buckets from 
> disk before
> > operating on them in memory.  How dramatically am I oversimplifying
> > this? :)
> > 
> 
> You have the right idea.  But the job is really much easier.  All of
> the Agg* opcodes already use the standard btree mechanism for storing
> and retrieving their buckets.  But they use a ":memory:" btree by
> default.
> To make it use disk, all you have to do is get it to store 
> the btree in
> a temporary file on disk.
> 
> One easy way to make this happen is to recompile with
> -DSQLITE_OMIT_MEMORYDB.
> See source file vdbeaux.c near line 800 for additional information.
> If you want to hack together a customized version of SQLite that
> behaves differently, that would be a good place to start.
> -- 
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
> 


Re: [sqlite] Version 3.2.0

2005-03-24 Thread Jim Dodgen
Quoting "D. Richard Hipp" <[EMAIL PROTECTED]>:
> > Error: unsupported file format
> 
> This is as documented.  See for example ...

thanks
I guess I need to learn how to read :) 

>> also when I use the "up arrow"
> 
> I used to compile the command-line client using GNU readline
> so that the arrow keys would work. ...

I pulled down the source and built it on RHES3, now the arrows work.





Re: [sqlite] Version 3.2.0

2005-03-24 Thread Ted Unangst
D. Richard Hipp wrote:
I used to compile the command-line client using GNU readline
so that the arrow keys would work.  But a lot of users complained
that readline didn't work on their systems because their system
didn't have the right libraries installed.  And in fact, when I
recently upgraded to SuSE 9.2 (from RedHat 7.2) I found that
SuSE is missing GNU readline.  (SuSE is missing a lot of other
stuff too, I've found.)  So my latest builds do not have readline
support.  If somebody can suggest a simple readline facility
that can be statically linked and which is easy to support, I'd
be happy to start using it.
The NetBSD version of libedit supports libreadline compat.  There's a 
portable version at http://www.thrysoee.dk/editline/

--
Ted Unangst www.coverity.com Coverity, Inc.


Re: [sqlite] Version 3.2.0

2005-03-24 Thread Ulrik Petersen
D. Richard Hipp wrote:
also when I use the "up arrow" (within the 3.2.0 version) to retrieve the last
command [it doesn't work]

   

I used to compile the command-line client using GNU readline
so that the arrow keys would work.  But a lot of users complained
that readline didn't work on their systems because their system
didn't have the right libraries installed.  And in fact, when I
recently upgraded to SuSE 9.2 (from RedHat 7.2) I found that
SuSE is missing GNU readline.  (SuSE is missing a lot of other
stuff too, I've found.)  So my latest builds do not have readline
support.  If somebody can suggest a simple readline facility
that can be statically linked and which is easy to support, I'd
be happy to start using it.
 

Pardon my ignorance of Unix linker semantics, but can't you statically
link GNU readline in through the -Bstatic flag given to ld (or libtool)?

$ man ld
[snip]
  -Bstatic
  -dn
  -non_shared
  -static
      Do not link against shared libraries.  This is only meaningful on
      platforms for which shared libraries are supported.  The different
      variants of this option are for compatibility with various systems.
      You may use this option multiple times on the command line: it
      affects library searching for -l options which follow it.  This
      option also implies --unresolved-symbols=report-all.
[snip]

HTH
Ulrik Petersen
--
Ulrik Petersen, MA, B.Sc.
University of Aalborg, Denmark



Re: [sqlite] Version 3.2.0

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 12:15 -0600, Jim Dodgen wrote:
> just for testing I went into an existing 3.0.8 database with the 3.2.0 
> sqlite3 
> and added a column to a table.
> then using the 3.0.8 sqlite3 went into the same database.
> 
> [EMAIL PROTECTED] dbs]# sqlite3.0.8 ref.db
> SQLite version 3.0.8
> Enter ".help" for instructions
> sqlite> .schema
> Error: unsupported file format

This is as documented.  See for example
http://www.sqlite.org/formatchng.html near the bottom.
Version 3.2 is backwards but not forwards compatible.
That's why the second number in the version changed
from a "1" to a "2".

> 
> also when I use the "up arrow" (within the 3.2.0 version) to retrieve the last
> command [it doesn't work]
> 

I used to compile the command-line client using GNU readline
so that the arrow keys would work.  But a lot of users complained
that readline didn't work on their systems because their system
didn't have the right libraries installed.  And in fact, when I
recently upgraded to SuSE 9.2 (from RedHat 7.2) I found that
SuSE is missing GNU readline.  (SuSE is missing a lot of other
stuff too, I've found.)  So my latest builds do not have readline
support.  If somebody can suggest a simple readline facility
that can be statically linked and which is easy to support, I'd
be happy to start using it.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



RE: [sqlite] Version 3.2.0

2005-03-24 Thread Griggs, Donald
Regarding:
> and added a column to a table.
> then using the 3.0.8 sqlite3 went into the same database.

Did you VACUUM (using 3.2.0) after adding the column?  That's required if you
want to manipulate the database with older version 3 code.


Donald Griggs

Opinions are not necessarily those of Misys Healthcare Systems nor its board
of directors.
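The suggested sequence (add the column, then VACUUM) can be sketched with Python's built-in sqlite3 module; the file, table, and column names here are invented, and with a modern SQLite the old 3.0.x compatibility issue no longer arises, so this only illustrates the commands:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ref (id INTEGER PRIMARY KEY, name TEXT)")

# Add a column, then VACUUM to rewrite the database file
conn.execute("ALTER TABLE ref ADD COLUMN price REAL")
conn.isolation_level = None   # VACUUM cannot run inside a transaction
conn.execute("VACUUM")

cols = [row[1] for row in conn.execute("PRAGMA table_info(ref)")]
print(cols)  # ['id', 'name', 'price']
```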




Re: [sqlite] Version 3.2.0

2005-03-24 Thread Jim Dodgen
I just pulled down the Linux command-line utility for 3.2.0; my previous version
was 3.0.8. As I remember, I installed the 3.0.8 version from an RPM; the 3.2.0
was a pre-compiled binary.

I also upgraded the Perl wrapper from 1.07 to 1.08.

Just for testing, I went into an existing 3.0.8 database with the 3.2.0 sqlite3
and added a column to a table,
then opened the same database using the 3.0.8 sqlite3.

[EMAIL PROTECTED] dbs]# sqlite3.0.8 ref.db
SQLite version 3.0.8
Enter ".help" for instructions
sqlite> .schema
Error: unsupported file format

Also, when I use the "up arrow" (within the 3.2.0 version) to retrieve the last
command,
I get hung up with the following; only a Ctrl-D gets me out.
# sqlite3 ref.db
SQLite version 3.2.0
Enter ".help" for instructions
sqlite> ^[[A
   ...> ;
   ...> ;
   ...> ;
   ...> .quit
   ...>
;  ...> Incomplete SQL:
;
;
.quit


thanks






RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 10:57 -0500, Thomas Briggs wrote:
>After posting my question, I found the discussion of how aggregate
> operations are performed in the VDBE Tutorial; that implies that memory
> usage will correspond with the number of unique keys encountered by the
> query, but I appreciate having it stated explicitly.
> 
>How difficult would it be, in concept, to change the storage of the
> hash buckets from in-memory to on-disk?  The VDBE Tutorial makes it
> sound like it would be a matter of changing the AggFocus, AggNext, and
> maybe a few other operations to store/retrieve buckets from disk before
> operating on them in memory.  How dramatically am I oversimplifying
> this? :)
> 

You have the right idea.  But the job is really much easier.  All of
the Agg* opcodes already use the standard btree mechanism for storing
and retrieving their buckets.  But they use a ":memory:" btree by
default.
To make it use disk, all you have to do is get it to store the btree in
a temporary file on disk.

One easy way to make this happen is to recompile with
-DSQLITE_OMIT_MEMORYDB.
See source file vdbeaux.c near line 800 for additional information.
If you want to hack together a customized version of SQLite that
behaves differently, that would be a good place to start.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



RE: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread Thomas Briggs

   After posting my question, I found the discussion of how aggregate
operations are performed in the VDBE Tutorial; that implies that memory
usage will correspond with the number of unique keys encountered by the
query, but I appreciate having it stated explicitly.

   How difficult would it be, in concept, to change the storage of the
hash buckets from in-memory to on-disk?  The VDBE Tutorial makes it
sound like it would be a matter of changing the AggFocus, AggNext, and
maybe a few other operations to store/retrieve buckets from disk before
operating on them in memory.  How dramatically am I oversimplifying
this? :)

   Thanks
   -Tom

> -Original Message-
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 24, 2005 10:32 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Memory usage for queries containing a 
> GROUP BY clause
> 
> On Thu, 2005-03-24 at 10:09 -0500, Thomas Briggs wrote:
> 
> >I have a 1GB database containing a single table.  Simple queries
> > against this table (SELECT COUNT(*), etc.) run without 
> using more than a
> > few MBs of memory; the amount used seems to correspond 
> directly with the
> > size of the page cache, as I expected.  If I execute an 
> aggregate query
> > that contains a GROUP BY clause, however, the memory usage 
> seems to jump
> > quite a bit - the memory usage seems to correlate with the number of
> > columns in the GROUP BY.  Grouping by three columns uses a couple
> > hundred megs of memory; grouping by eight columns uses more
> than 1.3GB!
> > 
> 
> The entire results set of an aggregate query is held in
> memory.  The only way to limit the memory usage of an
> aggregate query is to construct the query in such a way
> that the number of rows in the result set is limited.
> If you are using 1.3GB of memory to do an aggregate query,
> my guess is that there are a very large number of rows in
> the result set.
> 
> It is, in theory, possible to reduce the memory requirements
> for an aggregate query if the GROUP BY clause specifies columns
> that are indexed.  But SQLite does not currently implement
> that optimization.  You can implement the optimization manually,
> though, by turning the GROUP BY clause into an ORDER BY clause,
> dropping the aggregate functions, then doing the aggregation
> manually as the rows come out of the database in order.
> 
> -- 
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
> 


Re: [sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread D. Richard Hipp
On Thu, 2005-03-24 at 10:09 -0500, Thomas Briggs wrote:

>I have a 1GB database containing a single table.  Simple queries
> against this table (SELECT COUNT(*), etc.) run without using more than a
> few MBs of memory; the amount used seems to correspond directly with the
> size of the page cache, as I expected.  If I execute an aggregate query
> that contains a GROUP BY clause, however, the memory usage seems to jump
> quite a bit - the memory usage seems to correlate with the number of
> columns in the GROUP BY.  Grouping by three columns uses a couple
> hundred megs of memory; grouping by eight columns uses more than 1.3GB!
> 

The entire results set of an aggregate query is held in
memory.  The only way to limit the memory usage of an
aggregate query is to construct the query in such a way
that the number of rows in the result set is limited.
If you are using 1.3GB of memory to do an aggregate query,
my guess is that there are a very large number of rows in
the result set.

It is, in theory, possible to reduce the memory requirements
for an aggregate query if the GROUP BY clause specifies columns
that are indexed.  But SQLite does not currently implement
that optimization.  You can implement the optimization manually,
though, by turning the GROUP BY clause into an ORDER BY clause,
dropping the aggregate functions, then doing the aggregation
manually as the rows come out of the database in order.

-- 
D. Richard Hipp <[EMAIL PROTECTED]>
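The manual workaround described above — ORDER BY the grouping columns and roll the aggregate by hand as rows stream out — can be sketched with Python's built-in sqlite3 module (the table, column names, and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (grp TEXT, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("a", 2), ("b", 10), ("b", 20), ("b", 30)])

# Instead of SELECT grp, SUM(val) FROM t GROUP BY grp, stream the rows in
# grouping order and accumulate one group at a time, so only the current
# group's running total is held in memory.
def grouped_sums(conn):
    current, total = None, 0
    for grp, val in conn.execute("SELECT grp, val FROM t ORDER BY grp"):
        if grp != current:
            if current is not None:
                yield current, total
            current, total = grp, 0
        total += val
    if current is not None:
        yield current, total

print(list(grouped_sums(conn)))  # [('a', 3), ('b', 60)]
```

An index on the grouping column would let SQLite satisfy the ORDER BY without an in-memory sort, which is the point of the workaround.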



[sqlite] Memory usage for queries containing a GROUP BY clause

2005-03-24 Thread Thomas Briggs

   Is it possible to limit the amount of memory SQLite uses while
processing an aggregate query?

   I have a 1GB database containing a single table.  Simple queries
against this table (SELECT COUNT(*), etc.) run without using more than a
few MBs of memory; the amount used seems to correspond directly with the
size of the page cache, as I expected.  If I execute an aggregate query
that contains a GROUP BY clause, however, the memory usage seems to jump
quite a bit - the memory usage seems to correlate with the number of
columns in the GROUP BY.  Grouping by three columns uses a couple
hundred megs of memory; grouping by eight columns uses more than 1.3GB!

   Given that the queries I'm testing with will return a lot of rows,
I'm guessing that the memory is being used to store the result data
before it's returned to the caller (or, if the query is part of a CREATE
TABLE AS or INSERT INTO ... SELECT statement, before being inserted into
the destination table).  My other theory is that the memory is being
used to hold intermediate results while performing the grouping.
Regardless of the cause, is this avoidable?  Given its nature as an
embedded database, this just doesn't seem right to me, so I'm hoping so.

   Thanks
   -Tom


Re: [sqlite] synchorineze

2005-03-24 Thread Derrell . Lipman
"illias" <[EMAIL PROTECTED]> writes:

> How do I sync two tables in SQLite?
> ...
> Into the synchronize table I import new items which do not exist in
> the production table, but also items
> whose price has changed and which already exist in the production table.
>

It's unclear whether you want the maximum price from the two tables, or always
the price from the second table.  In the former case, you can do something like
this:


SQLite version 2.8.15
Enter ".help" for instructions
sqlite> create table t1 (id integer primary key, name text, price float);
sqlite> create table t2 (id integer primary key, name text, price float);
sqlite> insert into t1 values (1, 'Item1', 110);
sqlite> insert into t1 values (2, 'Item2', 120);
sqlite> insert into t1 values (3, 'Item3', 130);
sqlite> insert into t1 values (4, 'Item4', 140);
sqlite> insert into t1 values (5, 'Item5', 150);
sqlite> insert into t1 values (6, 'Item6', 160);
sqlite> insert into t2 values (1, 'Item1', 199);
sqlite> insert into t2 values (2, 'Item2', 220);
sqlite> insert into t2 values (7, 'Item7', 170);
sqlite> select id, max(price) from
   ...>   (select id, price from t1
   ...>union
   ...>select id, price from t2)
   ...>   group by id
   ...>   order by id;
id = 1
max(price) = 199

id = 2
max(price) = 220

id = 3
max(price) = 130

id = 4
max(price) = 140

id = 5
max(price) = 150

id = 6
max(price) = 160

id = 7
max(price) = 170
sqlite>

In the latter case, this should do what you want:

sqlite> insert or replace into t1 select * from t2;
sqlite> select id, price from t1;
   id = 1
price = 199

   id = 2
price = 220

   id = 3
price = 130

   id = 4
price = 140

   id = 5
price = 150

   id = 6
price = 160

   id = 7
price = 170
sqlite> 

Note that either 'id' or 'name' must be defined as PRIMARY KEY for this to
work.  Also, for the former case, it'll be a lot easier if you get rid of the
currency mark in your prices.

Derrell


Re: [sqlite]AutoVacuum in version 3

2005-03-24 Thread Dan Kennedy

> * The database file itself contains one extra page for every
>   (page-size/5) pages of data.
> * The worst case scenario for a database transaction is that
>   all of these extra pages need to be journalled. So the journal
>   file could contain all these extra pages.

Should also have mentioned that the number of extra pages is
the ceiling of (pages-of-data / (page-size/5)).

e.g. Assuming a page size of 1024 bytes, if you have 5 pages of
data your auto-vacuum database is 1 page bigger than the non-auto-vacuum
equivalent. If you have 210 pages of data, it's 2 pages bigger.
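Dan's figures — one extra (pointer-map) page per page-size/5 pages of data, rounded up — can be checked with a few lines of arithmetic. This is a sketch of the formula as stated in the thread; the exact density may differ across SQLite versions:

```python
import math

def extra_pages(data_pages, page_size=1024):
    # One extra page per (page_size / 5) pages of data, rounded up
    return math.ceil(data_pages / (page_size / 5))

print(extra_pages(5))    # 1
print(extra_pages(210))  # 2
```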






Re: [sqlite]AutoVacuum in version 3

2005-03-24 Thread Dan Kennedy
No. Extra space requirements relative to non-auto-vacuum databases
are:

* The database file itself contains one extra page for every
  (page-size/5) pages of data.
* The worst case scenario for a database transaction is that
  all of these extra pages need to be journalled. So the journal
  file could contain all these extra pages.

Dan.

--- RexChan <[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> In version 3.x, there is autovacuum to free unused space. What is the
> maximum space consumed by the autovacuum?  For example, if one record
> is deleted from 1000 records and the vacuum runs
> automatically (autovacuum), is the maximum space used by the
> autovacuum operation equal to the size of the database file? Thanks!
> 
> Best regards,
> Kei
> 
> 





[sqlite]AutoVacuum in version 3

2005-03-24 Thread RexChan
Hi all,
In version 3.x, there is autovacuum to free unused space. What is the
maximum space consumed by the autovacuum?  For example, if one record
is deleted from 1000 records and the vacuum runs
automatically (autovacuum), is the maximum space used by the
autovacuum operation equal to the size of the database file? Thanks!

Best regards,
Kei


[sqlite] Contrib uploads

2005-03-24 Thread Cariotoglou Mike
I tried to upload a new version of sqlite3Explorer, and I got back the
error:
"Too much POST data". I assume there is a limit to the size we can
upload. If so, can it be extended a little? 
If not, anybody interested in sqlite3Explorer should contact me to see
how I can send the
file to you. However, since I see that there are 13,000 downloads, I
would not be very excited about e-mailing 13,000 copies...