Sorry, I forgot to mention the SQLite version I'm using: it's SQLite 3.3.8.
Below is the sample code that I tried:
#include <stdio.h>
#include <time.h>
#include "sqlite3.h"

static char *database_name = ":memory:";
static sqlite3 *db_handle;

/* Print elapsed CPU time in milliseconds */
#define PRINT_TIME \
{ \
    unsigned long millisec = (unsigned long)(clock() * 1000 / CLOCKS_PER_SEC); \
    printf("milliseconds = %lu\n", millisec); \
}

static const char *TestSqlStats[] = {
    "CREATE TABLE testTbl (t_id INTEGER, t_name TEXT);",
    "INSERT INTO testTbl (ROWID,t_id,t_name) VALUES (:A,:B,:C);",
};

int main(int argc, char *argv[])
{
    int status, i = 0;

    /* Open database */
    status = sqlite3_open(database_name, &db_handle);

    /* Create table inside a transaction */
    status = sqlite3_exec(db_handle, "BEGIN;", NULL, NULL, NULL);
    status = sqlite3_exec(db_handle, TestSqlStats[0], NULL, NULL, NULL);

    PRINT_TIME
    {
        static sqlite3_stmt *MyInsertStmt = NULL;
        status = sqlite3_prepare(db_handle, TestSqlStats[1], -1,
                                 &MyInsertStmt, NULL);
        /* Insert 10000 rows, assigning the ROWID in descending order */
        for (i = 10000; i > 0; i--)
        {
            status = sqlite3_bind_int(MyInsertStmt, 1, i);    /* :A -> ROWID  */
            status = sqlite3_bind_int(MyInsertStmt, 2, i);    /* :B -> t_id   */
            status = sqlite3_bind_text(MyInsertStmt, 3,
                                       "It's a bad statement", -1,
                                       SQLITE_TRANSIENT);     /* :C -> t_name */
            status = sqlite3_step(MyInsertStmt);
            status = sqlite3_reset(MyInsertStmt);
        }
        status = sqlite3_finalize(MyInsertStmt);
    }
    PRINT_TIME

    status = sqlite3_exec(db_handle, "COMMIT;", NULL, NULL, NULL);
    sqlite3_close(db_handle);
    return 0;
}
These are the test results with VC++ 6.0 on a 3GHz Pentium 4 with 1GB RAM.
With descending-order ROWID insertion:
milliseconds = 0
milliseconds = 672
Total Memory Consumed = 626040
1 It's a bad statement
2 It's a bad statement
3 It's a bad statement
4 It's a bad statement
5 It's a bad statement
Press any key to continue
With ascending-order ROWID insertion:
milliseconds = 0
milliseconds = 234
Total Memory Consumed = 328960
0 It's a bad statement
1 It's a bad statement
2 It's a bad statement
3 It's a bad statement
4 It's a bad statement
Press any key to continue
From the above results, both the time taken and the memory consumed almost
double when the ROWID is explicitly assigned another unique value or when rows
are inserted in descending ROWID order.
Is there any way I can overcome this? Or is it corrected in a later version?
Before I try a newer version, if anybody has already tried this, please reply.
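For reference, here is a rough sketch of the variant I'm comparing against --
the same loop but without binding the ROWID at all, so SQLite assigns
monotonically increasing ROWIDs itself (the statement text and variable names
here are just for illustration, not the exact code I ran):

    /* Sketch only: same table, but the ROWID column is left out of the
     * INSERT, so SQLite assigns it and every new entry is the right-most
     * one in the btree. */
    static const char *insert_sql =
        "INSERT INTO testTbl (t_id, t_name) VALUES (:B, :C);";
    sqlite3_stmt *stmt = NULL;

    status = sqlite3_prepare(db_handle, insert_sql, -1, &stmt, NULL);
    for (i = 1; i <= 10000; i++)
    {
        status = sqlite3_bind_int(stmt, 1, i);                /* :B -> t_id   */
        status = sqlite3_bind_text(stmt, 2, "It's a bad statement", -1,
                                   SQLITE_TRANSIENT);         /* :C -> t_name */
        status = sqlite3_step(stmt);
        status = sqlite3_reset(stmt);
    }
    status = sqlite3_finalize(stmt);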
Scott -- thanks for the info. I tried building with the
SQLITE_OMIT_QUICKBALANCE macro defined, but it didn't affect anything. There
are some TODO comments in that area which may not be implemented yet. Please
correct me if I'm wrong.
Thanks,
-Lokee
On 10/17/07, Vitali Lovich <[EMAIL PROTECTED]> wrote:
Well, a rough calculation suggests that SQLite preallocates roughly three
additional rows' worth of space every time a new ROWID is assigned (for the
fairly common case where the ROWID is monotonically increasing). Assuming that
corner case is not accounted for, it could be that when ROWIDs are assigned
randomly or in decreasing order, the preallocated space is kept around but
never used, resulting in unused space.
Without actually looking into the internals or profiling the code, this is
just a guess, and I would recommend that someone more familiar with the
internals comment on this topic.
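One rough way to test that guess (just a sketch -- it needs a file-backed
database rather than :memory:, and it assumes the page_count and
freelist_count pragmas are available in the build being used, which I haven't
verified for 3.3.8) is to dump the page bookkeeping after each insert order:

    /* Sketch: print one pragma result per call, reusing db_handle from the
     * sample program.  Pages allocated but unused show up in freelist_count. */
    static int print_row(void *unused, int argc, char **argv, char **colName)
    {
        if (argc > 0)
            printf("%s = %s\n", colName[0], argv[0] ? argv[0] : "NULL");
        return 0;
    }

    /* ...after the inserts and COMMIT... */
    sqlite3_exec(db_handle, "PRAGMA page_size;",      print_row, NULL, NULL);
    sqlite3_exec(db_handle, "PRAGMA page_count;",     print_row, NULL, NULL);
    sqlite3_exec(db_handle, "PRAGMA freelist_count;", print_row, NULL, NULL);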
Scott Hess wrote:
Could it be that you're seeing the btree optimization around in-order
insertion? From btree.c:
#ifndef SQLITE_OMIT_QUICKBALANCE
  /*
  ** A special case: If a new entry has just been inserted into a
  ** table (that is, a btree with integer keys and all data at the leaves)
  ** and the new entry is the right-most entry in the tree (it has the
  ** largest key) then use the special balance_quick() routine for
  ** balancing. balance_quick() is much faster and results in a tighter
  ** packing of data in the common case.
  */
-scott
On 10/16/07, Babu, Lokesh <[EMAIL PROTECTED]> wrote:
Dear all,
I'll reframe the question. If the ROWID (the hidden column / b-tree key
internal to every table) is set manually, i.e. I insert unique values in
random order or in descending order (say from 10000 down to 1), the memory
occupied increases considerably. Why? I observed that the entire table gets
sorted based on ROWID.
My results were: for 10000 records inserted normally (i.e., without modifying
the ROWID) it takes about 500KB, while the same insertions with the ROWID set
explicitly consume about 1.5MB. Why?
Are all the columns indexed, or only the ROWID?
Memory measurement is done in sqlite3GenericMalloc, with the
SQLITE_ENABLE_MEMORY_MANAGEMENT macro enabled; summing the size of every
allocation gives the total memory allocated.
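(As an aside, if I do move to a newer release: my understanding is that later
versions expose the allocator's counters directly, so the same measurement
would not need instrumentation inside sqlite3GenericMalloc. A sketch of what I
mean, assuming the sqlite3_memory_used()/sqlite3_memory_highwater() interfaces
of the 3.5.x series:)

    /* Sketch for a newer SQLite only (these interfaces are not in 3.3.8):
     * report the library's own allocation counters. */
    printf("currently allocated = %lld bytes\n",
           (long long)sqlite3_memory_used());
    printf("high-water mark     = %lld bytes\n",
           (long long)sqlite3_memory_highwater(0));  /* 0 = do not reset */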
Thanks.
On 10/17/07, Vitali Lovich <[EMAIL PROTECTED]> wrote:
My question is how you're measuring the memory usage. Are you accounting for
the space overhead of the various bookkeeping SQLite needs (i.e. the master
table)? The way you're creating your table implies you're not using
AUTOINCREMENT for the integer field -- are you accounting for the extra
internal row id column?
Joe Wilson wrote:
It could be malloc fragmentation.
Which sqlite version, operating system, and malloc implementation are
you using?
--- "Babu, Lokesh" <[EMAIL PROTECTED]> wrote:
Say I have a table with three columns: one INTEGER and two TEXT columns. If
the ROWID is inserted manually and made descending for 10000 records, from
10000 down to 1 (or even with random numbers for the ROWID -- both cases), the
memory occupied is more. Why is this so?