[sqlite] Reducing the SQLITE_MAX_PAGE_SIZE for embedded systems usage

2004-09-05 Thread sankara . narayanan
Hi,

In SQLite 3.0, SQLITE_MAX_PAGE_SIZE is set to 8192 in the source (pager.h). 
We have an application where the database is created on a PC (using a Windows 
application) and the database contents are read by an embedded application 
(on an ARM-based platform). When I port SQLite 3.0 to the embedded platform, 
I find that stack usage is very heavy, because this 8 KB value for 
SQLITE_MAX_PAGE_SIZE is used to size some structures that are allocated on 
the stack. I need to change this value to a smaller one (preferably 1024 
bytes = 1 KB), which is the page size in the 2.8.x versions. With regard to 
this I have the following questions.

1. What will be the impact on performance if I reduce 
SQLITE_MAX_PAGE_SIZE to 1 KB? Will my queries become slower?
2. If I change SQLITE_MAX_PAGE_SIZE to 1 KB in the embedded 
application, I am not able to open the database file created by the PC 
application (where SQLITE_MAX_PAGE_SIZE = 8 KB). Is this the expected 
behaviour, or am I missing some relevant modifications?
3. If I need to create the DB files with the same SQLITE_MAX_PAGE_SIZE 
of 1 KB, is it possible to get the sqlite3 command-line application 
built with this value, or at least a reference on how to build sqlite.exe 
for Windows from the source code? (I could not find enough details on the 
Wiki page.)
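
For reference, a minimal sketch (in C, using the public sqlite3 API; it 
assumes the page_size pragma is available in the 3.0 build being used, and 
the table name is invented) of how the PC-side application could pin the 
file to a 1 KB page size at creation time, so that an embedded build 
compiled with a smaller SQLITE_MAX_PAGE_SIZE can still open it:

#include <stdio.h>
#include "sqlite3.h"

int create_small_page_db(const char *path)
{
    sqlite3 *db;

    if (sqlite3_open(path, &db) != SQLITE_OK) {
        fprintf(stderr, "cannot open %s\n", path);
        if (db) sqlite3_close(db);
        return 1;
    }

    /* Must run while the file is still empty, before the first table. */
    sqlite3_exec(db, "PRAGMA page_size = 1024;", 0, 0, 0);

    /* Creating the first table fixes the page size for the file. */
    sqlite3_exec(db, "CREATE TABLE t(x INTEGER);", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}

On the embedded side the library would be compiled with SQLITE_MAX_PAGE_SIZE 
set to 1024, either by editing pager.h or with -DSQLITE_MAX_PAGE_SIZE=1024 
if the define there can be overridden.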

Please provide me with the details at your earliest convenience.

Thank you,

With regards,
Sankara Narayanan Balasubramanian
Philips Innovation Campus,
No. 1, Murphy Road,
Ulsoor.  Bangalore - 560 008. 

Re: [sqlite] [OT] Shell globbing - Re: [sqlite] trying to compile SQLite

2004-09-05 Thread Greg Miller
Christian Smith wrote:
On Fri, 3 Sep 2004, Greg Miller wrote:
I guess the UNIX folks just didn't know any better way back then.
Putting globbing in the API instead of the shell is a much better
approach, but that wasn't all that obvious when UNIX first came along.

You condone the DOS/Windows (lack of) approach?
Put it in a library, yes. But it is still best done by the shell (IMHO.)
The shell is written once, so for the common case, globbing only has to be
implemented in one place.

And for anything even slightly uncommon, life becomes more difficult. 
It's hard to produce good error messages in some cases, certain command 
syntaxes become impossible, many commands (e.g., find) need wildcard 
args quoted in order to prevent the shell from screwing them up, you 
have to work around command lines that exceed the length limit, etc. As 
both a user and programmer on UNIX(ish) operating systems, I've come to 
be quite annoyed by this behavior.

Having every utility do its own globbing results in code reuse for the
sake of code reuse. I prefer just having a list of the files the user wants
manipulated.

Putting globbing support in the API makes it easy without destroying 
information about the arguments. UNIX(ish) platforms ended up having to 
do this anyway, with functions like glob().
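
For what it's worth, a minimal sketch (names invented) of library-side 
globbing on a POSIX system with glob(3): the pattern reaches the program 
intact, e.g. quoted on the command line, and the program expands it itself 
rather than relying on the shell:

#include <glob.h>
#include <stdio.h>

int list_matches(const char *pattern)
{
    glob_t g;
    size_t i;

    if (glob(pattern, 0, NULL, &g) != 0)
        return -1;                      /* no match, or an error */

    for (i = 0; i < g.gl_pathc; i++)    /* expanded, sorted file names */
        printf("%s\n", g.gl_pathv[i]);

    globfree(&g);
    return 0;
}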

I do recall reading both the DDJ column you mentioned and the follow-up 
in which he acknowledged the problems with shell globbing, BTW.
--
http://www.velocityvector.com/ | "F--- 'em all. That's how I feel."
http://www.classic-games.com/  | -- Michael Moore on small business
http://www.indie-games.com/    |


[sqlite] temporary databases

2004-09-05 Thread Miguel Angel Latorre Díaz
Shouldn't vdbeaux.c line 753:
rc = sqlite3BtreeFactory(db, ":memory:", 0, TEMP_PAGES, &pAgg->pBtree);
obey the TEMP_STORE macro, so that it is stored either in memory or on disk?

I know it's there for speed, but that's what TEMP_STORE is for, isn't it?

If I change the ":memory:" in that line to a 0 (zero), would I break
anything?
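
For illustration only, a sketch (untested, and assuming the usual convention 
that larger TEMP_STORE values favour keeping temporary data in memory) of 
the kind of conditional the question implies:

#if TEMP_STORE >= 2
  /* Keep the aggregate btree in memory, as the current code always does. */
  rc = sqlite3BtreeFactory(db, ":memory:", 0, TEMP_PAGES, &pAgg->pBtree);
#else
  /* A 0 (NULL) filename asks the backend for a temporary disk file. */
  rc = sqlite3BtreeFactory(db, 0, 0, TEMP_PAGES, &pAgg->pBtree);
#endif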

Also, it seems the parser "supports" something weird:
CREATE TABLE table1 (field1 integer);
CREATE TABLE table2 (field2 integer);
INSERT INTO table1 (field1) VALUES (1);
INSERT INTO table2 (field2) VALUES (2);
SELECT table1.  field1, table2. field2 FROM table1, table2;

Note the extra spaces between the table names and field names in the SELECT.



Re: [sqlite] LIMIT does not speed up query execution

2004-09-05 Thread D. Richard Hipp
hilaner wrote:
And - if I need sorted results as a small (but ordered!) part of a big
amount of data - there is no way to make it faster...
SQLite will pull records out of the database in sorted order,
if you have an index on the columns of the ORDER BY clause and
you don't need to use a different index to satisfy the WHERE
clause.  But in your case this won't help because you have DESC
on one term of the ORDER BY clause and ASC on the other.  SQLite
could handle it if both terms were ASC or both were DESC, but
because they are different, it defeats the optimizer and SQLite
has to fall back to using an O(N log N) sort after the fact.
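
As a sketch of the two cases (table, column, and index names invented, 
using the C API):

#include "sqlite3.h"

static void show_order_by_cases(sqlite3 *db)
{
    /* An index on the ORDER BY columns. */
    sqlite3_exec(db, "CREATE INDEX idx_t_sort ON t(col_0, col_5);",
                 0, 0, 0);

    /* Both terms ASC: SQLite can walk the index; no separate sort. */
    sqlite3_exec(db,
        "SELECT * FROM t ORDER BY col_0 ASC, col_5 ASC LIMIT 40;",
        0, 0, 0);

    /* Mixed ASC/DESC: the index no longer matches the requested order,
    ** so SQLite sorts the result set after gathering it. */
    sqlite3_exec(db,
        "SELECT * FROM t ORDER BY col_0 ASC, col_5 DESC LIMIT 40;",
        0, 0, 0);
}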
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565


Re: [sqlite] LIMIT does not speed up query execution

2004-09-05 Thread hilaner
Christian Smith wrote:

> Query (2) has an extra condition in the WHERE clause, thus reducing
> the result set size to be sorted. As sorting is probably an
> O(n.log(n)) operation, halving the result set will more than halve
> the time taken to sort, for example. Add that extra condition to
> query (1), and you should get similar results for both queries.

So, if I understood correctly, sorting is the most time-consuming part of
the query.
LIMIT is applied after all results are gathered (filtered, sorted, and so
on), so getting e.g. 20 sorted records from a 20 000-record table takes much
more time than a query for 20 sorted records that is limited by another WHERE
condition - because in the second case the sort is done on a much smaller
group of records.

And - if I need sorted results as a small (but ordered!) part of a big
amount of data - there is no way to make it faster...

Ok, thanks to all for your explanations!
Regards
Adam



Re: [sqlite] Re: [sqlite3] Conditional jump or move depends on uninitialised value(s)

2004-09-05 Thread Tiago Dionizio
Thanks for the clarification.
I finally found the problem... I was assuming the sqlite3_context pointer
for the step/finalize callbacks of aggregate functions was always the same
and was saving it (in the step callback) for future use. Updating that
pointer fixed things - everything works just fine and valgrind doesn't
complain anymore. Now I realize why I was getting segfaults.
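
For anyone hitting the same thing, a minimal sketch of the usual aggregate 
pattern (function names invented): per-aggregate state comes from 
sqlite3_aggregate_context() inside each callback, and the sqlite3_context 
pointer itself is never cached between calls.

#include "sqlite3.h"

typedef struct { int count; } CountState;

static void count_step(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    /* Allocated and zeroed on the first call; the same block thereafter. */
    CountState *st = sqlite3_aggregate_context(ctx, sizeof(CountState));
    (void)argc; (void)argv;
    if (st) st->count++;
}

static void count_final(sqlite3_context *ctx)
{
    CountState *st = sqlite3_aggregate_context(ctx, sizeof(CountState));
    sqlite3_result_int(ctx, st ? st->count : 0);
}

The step/final pair is then registered with sqlite3_create_function(), 
leaving the scalar (xFunc) slot NULL.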
Thanks anyway.
Tiago
D. Richard Hipp wrote:
Tiago Dionizio wrote:
Using:
SQLite 3 from cvs
Fedora Core 2
gcc 3.3.3
I am currently working on trying to bind sqlite3 to Lua, and
while testing things I encountered some problems regarding
uninitialised values in the sqlite library.
These tests were done using user-defined functions, and more
specifically, when using aggregate functions.
This is a stack trace from valgrind with the problem I found
in my Lua tests:
==9321== Conditional jump or move depends on uninitialised value(s)
==9321==at 0x1BA6E8FB: sqlite3VdbeMemRelease (vdbemem.c:185)
==9321==by 0x1BA6ED48: sqlite3VdbeMemSetStr (vdbemem.c:372)
==9321==by 0x1BA6ADAF: sqlite3_result_text (vdbeapi.c:101)
==9321==by 0x1BA5F77D: lcontext_result (lsqlite3.c:814)
==9321==by 0x1B910083: luaD_precall (ldo.c:260)
==9321==by 0x1B91AB45: luaV_execute (lvm.c:627)
==9321==by 0x1B9103A1: luaD_call (ldo.c:313)
==9321==by 0x1B90C69E: f_call (lapi.c:672)
==9321==by 0x1B90F927: luaD_rawrunprotected (ldo.c:88)
==9321==by 0x1B91089D: luaD_pcall (ldo.c:416)
==9321==by 0x1B90C6F9: lua_pcall (lapi.c:685)
==9321==by 0x1BA6010D: db_sql_finalize_function (lsqlite3.c:1058)
==9321==by 0x1BA99783: sqlite3VdbeExec (vdbe.c:4363)

The value that valgrind claims is uninitialized at vdbemem.c:185 is
clearly initialized to MEM_Null on vdbe.c:4357, 6 lines above the
call to db_sql_finalize_function.  It appears that somewhere in the
chain of 8 intermediate Lua calls the sqlite3_context* pointer is
being corrupted somehow.
==9305== Conditional jump or move depends on uninitialised value(s)
==9305==at 0x80528FB: sqlite3_aggregate_context (vdbeapi.c:190)
==9305==by 0x8048E69: udf1_scalar (test.c:72)
==9305==by 0x807C31A: sqlite3VdbeExec (vdbe.c:1175)
==9305==by 0x8052899: sqlite3_step (vdbeapi.c:159)
==9305==by 0x8048D77: do_query (test.c:51)
==9305==by 0x8049240: main (test.c:142)
I tried to make the test case as similar as possible to what happens
in the Lua script. (file attached)
I have no idea where to look, and believe this is an issue with
SQLite, if I am using the functions correctly.
Udf1_scalar is a scalar function (not an aggregate function) and
scalar functions should never call sqlite3_aggregate_context().  This
is an error in test1.c, not in SQLite.  You might argue that SQLite
could do a better job of detecting this kind of programmer error
on the part of the calling application, and you would be right.
But technically, it is not a problem in SQLite.



Re: [sqlite] LIMIT does not speed up query execution

2004-09-05 Thread Christian Smith
On Sat, 4 Sep 2004, Darren Duncan wrote:

>So to make this faster you either have to make WHERE return fewer
>rows (better), or let it return more but remove the ORDER BY.


It's not a good idea to use LIMIT on unordered results, of course, as the
order of results for unordered result sets is, err, unspecified, hence you
will not get predictable results.

LIMIT is a big win on client/server databases, where the bandwidth saved
by not returning a full result set is significant.

More info inline...


>
>-- Darren Duncan
>
>At 10:49 PM +0200 9/4/04, hilaner wrote:
>>Hi all!
>>
>>Since my database grew to more than 20 000 records, I have noticed that a
>>select limited to a small number of records by LIMIT takes much more time than
>>a select limited to a similar number of records by another WHERE condition.
>>I use sqlite_get_table function.
>>
>>In my case I have the following queries:
>>(1) SELECT *, col_1 - col_2 WHERE col_3 = 0 AND col_4 > 0 ORDER BY col_0 ASC
>>col_5 DESC LIMIT 40 OFFSET 0;
>>(2) SELECT *, col_1 - col_2 WHERE col_3 = 0 AND col_4 > 0 AND col_6 = 5
>>ORDER BY col_0 ASC col_5 DESC;
>>
>>And the (2) query executes much faster than (1), even though it has another
>>condition in the WHERE section. (1) takes nearly the same time as (1) without
>>the LIMIT clause (a full table select).
>>Is it normal?
>>What can I do to speed up query (1)?


Query (2) has an extra condition in the WHERE clause, thus reducing the
result set size to be sorted. As sorting is probably an O(n.log(n))
operation, halving the result set will more than halve the time taken to
sort, for example. Add that extra condition to query (1), and you should
get similar results for both queries.


>>
>>(SQLite 2.8.15, one table of 16 columns and more than 20 000 records, Windows
>>2000, VC++ 6.0)
>>
>>Regards
>>Adam
>


Christian


-- 
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \


[sqlite] Re: [sqlite3] Conditional jump or move depends on uninitialised value(s)

2004-09-05 Thread D. Richard Hipp
Tiago Dionizio wrote:
Using:
SQLite 3 from cvs
Fedora Core 2
gcc 3.3.3
I am currently working on trying to bind sqlite3 to Lua, and
while testing things I encountered some problems regarding
uninitialised values in the sqlite library.
These tests were done using user-defined functions, and more
specifically, when using aggregate functions.
This is a stack trace from valgrind with the problem I found
in my Lua tests:
==9321== Conditional jump or move depends on uninitialised value(s)
==9321==at 0x1BA6E8FB: sqlite3VdbeMemRelease (vdbemem.c:185)
==9321==by 0x1BA6ED48: sqlite3VdbeMemSetStr (vdbemem.c:372)
==9321==by 0x1BA6ADAF: sqlite3_result_text (vdbeapi.c:101)
==9321==by 0x1BA5F77D: lcontext_result (lsqlite3.c:814)
==9321==by 0x1B910083: luaD_precall (ldo.c:260)
==9321==by 0x1B91AB45: luaV_execute (lvm.c:627)
==9321==by 0x1B9103A1: luaD_call (ldo.c:313)
==9321==by 0x1B90C69E: f_call (lapi.c:672)
==9321==by 0x1B90F927: luaD_rawrunprotected (ldo.c:88)
==9321==by 0x1B91089D: luaD_pcall (ldo.c:416)
==9321==by 0x1B90C6F9: lua_pcall (lapi.c:685)
==9321==by 0x1BA6010D: db_sql_finalize_function (lsqlite3.c:1058)
==9321==by 0x1BA99783: sqlite3VdbeExec (vdbe.c:4363)
The value that valgrind claims is uninitialized at vdbemem.c:185 is
clearly initialized to MEM_Null on vdbe.c:4357, 6 lines above the
call to db_sql_finalize_function.  It appears that somewhere in the
chain of 8 intermediate Lua calls the sqlite3_context* pointer is
being corrupted somehow.
==9305== Conditional jump or move depends on uninitialised value(s)
==9305==at 0x80528FB: sqlite3_aggregate_context (vdbeapi.c:190)
==9305==by 0x8048E69: udf1_scalar (test.c:72)
==9305==by 0x807C31A: sqlite3VdbeExec (vdbe.c:1175)
==9305==by 0x8052899: sqlite3_step (vdbeapi.c:159)
==9305==by 0x8048D77: do_query (test.c:51)
==9305==by 0x8049240: main (test.c:142)
I tried to make the test case as similar as possible to what happens
in the Lua script. (file attached)
I have no idea where to look, and believe this is an issue with
SQLite, if I am using the functions correctly.
Udf1_scalar is a scalar function (not an aggregate function) and
scalar functions should never call sqlite3_aggregate_context().  This
is an error in test1.c, not in SQLite.  You might argue that SQLite
could do a better job of detecting this kind of programmer error
on the part of the calling application, and you would be right.
But technically, it is not a problem in SQLite.
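
For contrast, a minimal sketch of a scalar function (name invented) that 
stays within the scalar API and never touches sqlite3_aggregate_context():

#include "sqlite3.h"

static void double_it(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    (void)argc;  /* registered with nArg == 1 */
    sqlite3_result_int(ctx, 2 * sqlite3_value_int(argv[0]));
}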
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565


Re: [sqlite] Single INSERT is very slow

2004-09-05 Thread Nuno Lucas
George Ionescu, jumping with joy, wrote:
the INSERT statement above takes approx. 150 milliseconds to complete, 
which is very slow compared to Access/ADO, which does the same in 3 
milliseconds.
As you know, average (read) disk access times are between 10 and 15 
milliseconds, so it is impossible for Access/ADO to have committed your data 
to disk in that time.

Please read about the "PRAGMA SYNCHRONOUS" command [1] and you will 
probably then find similar results.
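
A minimal sketch (C API, invented table name) of the two usual ways to avoid 
paying a full disk sync for every INSERT: synchronous = OFF trades durability 
on power loss for speed, while batching keeps durability and amortises the 
sync over many rows.

#include "sqlite3.h"

static void fast_inserts(sqlite3 *db)
{
    /* Option 1: stop waiting for the data to reach the platter. */
    sqlite3_exec(db, "PRAGMA synchronous = OFF;", 0, 0, 0);

    /* Option 2: group many INSERTs into one transaction - one sync total. */
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t(x) VALUES(1);", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t(x) VALUES(2);", 0, 0, 0);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
}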

Regards,
~Nuno Lucas
[1] http://www.sqlite.org/lang.html#pragma_synchronous