Hi,
have you considered using UPX to reduce the executable file size?
http://upx.sourceforge.net
Eugene Wee
Cariotoglou Mike wrote:
1.1 MB. I used the [EMAIL PROTECTED] devExpress grid, which is great
functionality-wise but bloats the exe.
-Original Message-
From: D. Richard Hipp [mailto:
PRAGMA table_info(tablename). For goodness' sake, do read the documentation;
somebody spent good time writing it!
> -Original Message-
> From: Gerry Snyder [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 25, 2005 1:19 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] getting table co
1.1 MB. I used the [EMAIL PROTECTED] devExpress grid, which is great
functionality-wise but bloats the exe.
> -Original Message-
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 24, 2005 11:44 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Contrib uploa
Hi All,
I used http://sourceforge.net/projects/adodotnetsqlite to run the stress
test and see the actual performance on my PC: Pentium 4 3.0 GHz, 1 GB RAM,
Win2003 Server (Simplified Chinese) + VS.NET 2003, with Unicode characters:
I slightly modified the official test: the first field content and row
c
> Anyone know how I can query the field names and types for a given table?
pragma table_info(tablename)
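For the archives, a quick sketch of that pragma in use, via Python's built-in sqlite3 module (the table name and columns here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gigo (a INTEGER PRIMARY KEY, b TEXT, c REAL)")

# Each row of PRAGMA table_info is:
# (cid, name, type, notnull, dflt_value, pk)
info = conn.execute("PRAGMA table_info(gigo)").fetchall()
for cid, name, col_type, notnull, dflt, pk in info:
    print(name, col_type)
conn.close()
```

Unlike parsing the CREATE statement yourself, this gives you the name and declared type of each column as separate fields.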
Jim Dodgen wrote:
Anyone know how I can query the field names and types for a given table?
Jim,
select sql from sqlite_master where type="table" and name="gigo"
will get something like:
create table gigo(a,b,c)
which includes the field names, and would include the types if I had
used any.
Gerry
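Gerry's sqlite_master approach looks like this from Python's sqlite3 module, using his example table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table gigo(a,b,c)")

# The schema table stores the text of the original CREATE statement,
# so the field names (and any declared types) appear verbatim.
(sql,) = conn.execute(
    "select sql from sqlite_master where type='table' and name='gigo'"
).fetchone()
print(sql)
conn.close()
```

Note that this returns the whole CREATE statement as one string, so you have to parse out the column list yourself; the PRAGMA mentioned elsewhere in the thread does that splitting for you.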
Anyone know how I can query the field names and types for a given table?
I continue to hope that you're correct. I'm already somewhat stumped
though.
I think I see, at the simplest level, how to get the aggregate bucket
data into the b-tree - in AggInsert, change the data passed to
BtreeInsert to be the aggregate bucket itself, not the pointer, and
change the s
Hello,
vdbeInt.h reads:
** Each value has a manifest type. The manifest type of the value stored
** in a Mem struct is returned by the MemType(Mem*) macro. The type is
** one of SQLITE_NULL, SQLITE_INTEGER, SQLITE_REAL, SQLITE_TEXT or
** SQLITE_BLOB.
*/
struct Mem {
i64 i; /* Inte
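Those five manifest types are also visible from plain SQL through typeof(); a quick check with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# typeof() reports the manifest type of a value: one of
# null, integer, real, text, or blob.
row = conn.execute(
    "SELECT typeof(NULL), typeof(1), typeof(1.5), typeof('hi'), typeof(x'41')"
).fetchone()
print(row)  # -> ('null', 'integer', 'real', 'text', 'blob')
conn.close()
```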
On Thu, 2005-03-24 at 11:24 +0200, Cariotoglou Mike wrote:
> I tried to upload a new version of sqlite3Explorer, and I got back the
> error:
> "Too much POST data".
How big of an upload are we talking about?
--
D. Richard Hipp <[EMAIL PROTECTED]>
I use your product. I would be interested in getting the new version.
Regards,
[EMAIL PROTECTED]
NCCI
Boca Raton, Florida
561.893.2415
greetings / avec mes meilleures salutations / Cordialmente
mit freundlichen Grüßen / Med vänlig hälsning
On Thu, 2005-03-24 at 16:08 -0500, Thomas Briggs wrote:
>Am I wrong in interpreting your comment to mean that this should be
> feasible within the current architecture, and more importantly, feasible
> for someone like me who looked at the SQLite source code for the first
> time yesterday? :)
>
> You are welcomed to experiment with changes that will store the
> entire result set row in the btree rather than just a pointer.
> If you can produce some performance improvements, we'll likely
> check in your changes.
Am I wrong in interpreting your comment to mean that this should be
feas
On Thu, 2005-03-24 at 15:31 -0500, Thomas Briggs wrote:
>Well, I'm using the command line tool that comes with SQLite and
> there is no ORDER BY clause in my query, so both the good news and the
> bad news is that it certainly seems like something that SQLite is doing,
> uhh... sub-optimally, s
Well, I'm using the command line tool that comes with SQLite and
there is no ORDER BY clause in my query, so both the good news and the
bad news is that it certainly seems like something that SQLite is doing,
uhh... sub-optimally, shall we say. :)
I'm working my way through the VDBE, attemp
On Thu, 2005-03-24 at 13:59 -0500, Thomas Briggs wrote:
>I feel like I'm missing something, but that didn't seem to help. I
> can see in the code why it should be behaving differently (many thanks
> for the hint on where to look, BTW), but the memory usage is unchanged.
>
>I modified sqli
I feel like I'm missing something, but that didn't seem to help. I
can see in the code why it should be behaving differently (many thanks
for the hint on where to look, BTW), but the memory usage is unchanged.
I modified sqliteInt.h to define SQLITE_OMIT_MEMORYDB, then verified
that it is
Quoting "D. Richard Hipp" <[EMAIL PROTECTED]>:
> > Error: unsupported file format
>
> This is as documented. See for example ...
thanks
I guess I need to learn how to read :)
>> also when I use the "up arrow"
>
> I used to compile the command-line client using GNU readline
> so that the arrow
D. Richard Hipp wrote:
I used to compile the command-line client using GNU readline
so that the arrow keys would work. But a lot of users complained
that readline didn't work on their systems because their system
didn't have the right libraries installed. And in fact, when I
recently upgraded to
D. Richard Hipp wrote:
also when I use the "up arrow" (within the 3.2.0 version) to retrieve the last
command [it doesn't work]
I used to compile the command-line client using GNU readline
so that the arrow keys would work. But a lot of users complained
that readline didn't work on their sys
On Thu, 2005-03-24 at 12:15 -0600, Jim Dodgen wrote:
> just for testing I went into an existing 3.0.8 database with the 3.2.0
> sqlite3
> and added a column to a table.
> then using the 3.0.8 sqlite3 went into the same database.
>
> [EMAIL PROTECTED] dbs]# sqlite3.0.8 ref.db
> SQLite version 3.0
Regarding:
and added a column to a table.
then using the 3.0.8 sqlite3 went into the same database.
Did you VACUUM (using 3.2.0) after adding the column? That's required if you
want to manipulate the database with older version 3 code.
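The sequence being described can be reproduced from any SQLite binding; a minimal sketch with Python's sqlite3 module (file and table names are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ref.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (a)")
conn.execute("INSERT INTO t VALUES (1)")
# ALTER TABLE ... ADD COLUMN is the operation that changed the
# on-disk format marker in 3.2.0, per the thread above.
conn.execute("ALTER TABLE t ADD COLUMN b")
conn.commit()  # VACUUM cannot run inside an open transaction
# VACUUM rewrites the whole database file; per Donald's note, that
# is what makes the file readable by older version 3 libraries again.
conn.execute("VACUUM")
rows = conn.execute("SELECT a, b FROM t").fetchall()
print(rows)  # existing rows get NULL in the newly added column
conn.close()
```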
Donald Griggs
Opinions are not ne
I just pulled down the Linux command line utility for 3.2.0; my previous version
was 3.0.8. As I remember, I installed the 3.0.8 version from an RPM; the 3.2.0
was a pre-compiled binary.
I also upgraded the Perl wrapper from 1.07 to 1.08.
just for testing I went into an existing 3.0.8 database with t
On Thu, 2005-03-24 at 10:57 -0500, Thomas Briggs wrote:
>After posting my question, I found the discussion of how aggregate
> operations are performed in the VDBE Tutorial; that implies that memory
> usage will correspond with the number of unique keys encountered by the
> query, but I apprecia
After posting my question, I found the discussion of how aggregate
operations are performed in the VDBE Tutorial; that implies that memory
usage will correspond with the number of unique keys encountered by the
query, but I appreciate having it stated explicitly.
How difficult would it be,
On Thu, 2005-03-24 at 10:09 -0500, Thomas Briggs wrote:
>I have a 1GB database containing a single table. Simple queries
> against this table (SELECT COUNT(*), etc.) run without using more than a
> few MBs of memory; the amount used seems to correspond directly with the
> size of the page cac
Is it possible to limit the amount of memory SQLite uses while
processing an aggregate query?
I have a 1GB database containing a single table. Simple queries
against this table (SELECT COUNT(*), etc.) run without using more than a
few MBs of memory; the amount used seems to correspond dire
"illias" <[EMAIL PROTECTED]> writes:
> how to sync two tables in sqlite..
> ...
> in synchorinze table i import new items which not exist in
> production table but also items
> which price is changed and alredy exist in production table.
>
It's unclear whether you want the maximum price from the
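A common SQLite idiom for this kind of sync is INSERT OR REPLACE keyed on the primary key; a sketch with hypothetical `production` and `sync` tables (the table names, columns, and sample data are assumptions, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE production (item TEXT PRIMARY KEY, price REAL);
    CREATE TABLE sync (item TEXT PRIMARY KEY, price REAL);
    INSERT INTO production VALUES ('bolt', 1.00), ('nut', 0.50);
    INSERT INTO sync VALUES ('bolt', 1.25), ('washer', 0.10);
""")
# INSERT OR REPLACE inserts items that don't exist yet and
# overwrites rows whose primary key already exists, which
# picks up changed prices in one statement.
conn.execute("INSERT OR REPLACE INTO production SELECT * FROM sync")
rows = conn.execute("SELECT * FROM production ORDER BY item").fetchall()
print(rows)
conn.close()
```

After the sync, 'bolt' carries the new price, 'washer' is newly inserted, and 'nut' (absent from the sync table) is left alone.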
> * The database file itself contains one extra page for every
> (page-size/5) pages of data.
> * The worst case scenario for a database transaction is that
> all of these extra pages need to be journalled. So the journal
> file could contain all these extra pages.
Should also have mentioned that th
No. Extra space requirements relative to non-auto-vacuum databases
are:
* The database file itself contains one extra page for every
(page-size/5) pages of data.
* The worst case scenario for a database transaction is that
all of these extra pages need to be journalled. So the journal
file could con
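For a rough sense of scale, a back-of-the-envelope calculation in Python (the 1024-byte page size is the SQLite default of that era, and the 5-byte pointer-map entry is an assumption inferred from the per-(page-size/5)-pages figure above):

```python
import math

# One pointer-map page covers (page_size / 5) data pages,
# i.e. one 5-byte entry per page it tracks.
page_size = 1024      # assumed default page size, in bytes
data_pages = 1000     # size of a hypothetical database, in pages

entries_per_ptrmap_page = page_size // 5
ptrmap_pages = math.ceil(data_pages / entries_per_ptrmap_page)
overhead = ptrmap_pages / data_pages
print(ptrmap_pages, f"{overhead:.2%}")  # a fraction of a percent
```

So for a database of this size, the extra pages amount to well under one percent of the file, though in the worst case they can all end up in the journal as well.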
Hi all,
In version 3.x there is an autovacuum feature that reclaims freed space. What
is the maximum space the autovacuum itself can consume? For example, if one
record is deleted out of 1000 records and the vacuum runs automatically
(autovacuum), is the maximum space used by the autovacuum operation
I tried to upload a new version of sqlite3Explorer, and I got back the
error:
"Too much POST data". I assume there is a limit to the size we can
upload. If so, can it be extended a little?
If not, anybody interested in sqlite3Explorer should contact me to see
how I can send the
file to you. Howeve