Re: [sqlite] possible bug "foreign key mismatch" in TCL

2011-07-12 Thread Dan Kennedy
On 07/13/2011 07:18 AM, Victor Mayevski wrote:
> I am getting a "foreign key mismatch" in TCL in the following situation:
> #The system is Ubuntu 10.10 32bit, using ActiveState 8.6 32 bit TCL binaries
>
> % package require sqlite3
> 3.7.6.3
>
>
> #Two example tables:
> create table users (name primary key unique, pass not null)
> create table users1 (name references users (name), pass references users 
> (pass))

Maybe you need a unique constraint on column 'pass' of table "users".

   http://www.sqlite.org/foreignkeys.html#fk_indexes
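
Dan's suggestion can be checked directly. The sketch below (Python's sqlite3 module, which wraps the same engine) reproduces the "foreign key mismatch" with Victor's schema, then shows that adding UNIQUE to users.pass lets the REPLACE succeed; the schema text follows the report, everything else is illustrative.

```python
import sqlite3

# Victor's schema: users.pass has no unique index, so the foreign key
# users1.pass -> users.pass cannot be resolved when a statement that
# must enforce it is prepared.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE users  (name PRIMARY KEY UNIQUE, pass NOT NULL);
    CREATE TABLE users1 (name REFERENCES users (name),
                         pass REFERENCES users (pass));
    INSERT INTO users VALUES ('user1', 'pass1');
""")
try:
    con.execute("REPLACE INTO users VALUES ('user1', 'pass2')")
    err = ""
except sqlite3.OperationalError as e:
    err = str(e)  # "foreign key mismatch ..."

# Same schema with UNIQUE on users.pass: the REPLACE now works.
fixed = sqlite3.connect(":memory:")
fixed.execute("PRAGMA foreign_keys = ON")
fixed.executescript("""
    CREATE TABLE users  (name PRIMARY KEY, pass NOT NULL UNIQUE);
    CREATE TABLE users1 (name REFERENCES users (name),
                         pass REFERENCES users (pass));
    INSERT INTO users VALUES ('user1', 'pass1');
    REPLACE INTO users VALUES ('user1', 'pass2');
""")
```

Note the error appears at prepare time, which is why it also hits the UPDATE even when no row would actually violate the constraint.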

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] possible bug "foreign key mismatch" in TCL

2011-07-12 Thread Victor Mayevski
I am getting a "foreign key mismatch" in TCL in the following situation:
#The system is Ubuntu 10.10 32bit, using ActiveState 8.6 32 bit TCL binaries

% package require sqlite3
3.7.6.3


#Two example tables:
create table users (name primary key unique, pass not null)
create table users1 (name references users (name), pass references users (pass))

#the following works fine
insert into users values ('user1', 'pass1')

#either replace or update fail with "foreign key mismatch" error:
replace into users values ('user1', 'pass2')
  #or
update users set name = 'user1', pass = 'pass2'

#if I remove one of the foreign keys from users1 table, the error goes away.

#The exact same replace and update commands work fine under sqlite3
shell, version 3.7.2-1ubuntu0.1, 32 bit


Thanks,

Victor
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Storing/editing hierarchical data sets

2011-07-12 Thread Christopher Melen

Thank you again, Michael - a very interesting suggestion. I'm going to start 
experimenting with your previous suggestion of a linked list. This is simple 
and intuitive, and I can see it working very well. In fact it's a similar 
principle to that used by Audacity, the open source audio editor, which 
segments the original audio into a linked list of temporary 'block' files, 
enabling fast cut/copy/paste manipulations (in contrast to many editors, which 
rely on literal copying and insertion into a single file).


I will be storing my data as blobs of between 1024 and 2048 bytes, rather than as 
individual samples. Of course, it is extremely unlikely that any cut or copy 
selection will fall exactly on the boundary of a blob - most of the time it will 
slice through one. So any blob thus affected will have to be rebuilt, or its data 
redistributed to neighbouring rows. Given the average size of a blob this 
shouldn't be an issue, however. And even for a large file, experiment indicates 
that the corresponding reduction hierarchy is unlikely to exceed a few MB.
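
The "slice through a blob" case reduces to plain byte arithmetic. A hypothetical helper (independent of any storage layer) showing how a cut that lands inside a block splits it and shifts the blocks after it:

```python
def cut_range(rows, lo, hi):
    """rows: list of (start_offset, blob) blocks covering a byte stream.
    Remove bytes [lo, hi) from the stream, splitting any block the cut
    slices through and shifting later blocks left."""
    out, shift = [], hi - lo
    for start, blob in rows:
        end = start + len(blob)
        if end <= lo:
            out.append((start, blob))            # wholly before the cut
        elif start >= hi:
            out.append((start - shift, blob))    # wholly after: shift left
        else:
            # Keep the head (before lo) and tail (from hi) of this block.
            keep = blob[:max(0, lo - start)] + blob[max(0, hi - start):]
            if keep:
                out.append((min(start, lo), keep))
    return out

# Two 1024-byte blocks; cutting [512, 1536) slices through both.
rows = [(0, b"a" * 1024), (1024, b"b" * 1024)]
out = cut_range(rows, 512, 1536)
```

A block that falls entirely inside the cut simply disappears, which matches the "rebuilt or redistributed" behaviour described above.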


So now I am going to actually perform some tests...



Many thanks again,
Christopher
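
Michael's sequence-table scheme, quoted below, can be rendered as a small concrete sketch. All names here are hypothetical; the column is called "stop" rather than "end" to sidestep the SQL keyword, and an explicit stop offset replaces the -1 sentinel to keep the arithmetic simple:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sequence(level INT, parent INT, start INT, stop INT)")
con.execute("INSERT INTO sequence VALUES (1, 0, 0, 5000)")  # one clip per level

def cut(con, level, lo, hi):
    """Remove byte range [lo, hi) from a level: trim or split any clip
    the range touches; clips swallowed whole are simply deleted."""
    clips = con.execute(
        "SELECT rowid, start, stop FROM sequence WHERE level = ?",
        (level,)).fetchall()
    for rowid, start, stop in clips:
        if stop <= lo or start >= hi:
            continue                              # clip untouched
        con.execute("DELETE FROM sequence WHERE rowid = ?", (rowid,))
        if start < lo:                            # keep the head piece
            con.execute("INSERT INTO sequence VALUES (?, 0, ?, ?)",
                        (level, start, lo))
        if stop > hi:                             # keep the tail piece
            con.execute("INSERT INTO sequence VALUES (?, 0, ?, ?)",
                        (level, hi, stop))

cut(con, 1, 1000, 2000)   # cut 1000-1999, as in Michael's example
```

After the cut the level is described by two clips, (0, 1000) and (2000, 5000), strung together; this is exactly "splitting one record in 2" from the quoted message.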

> From: michael.bla...@ngc.com
> To: sqlite-users@sqlite.org
> Date: Tue, 12 Jul 2011 11:38:13 +
> Subject: Re: [sqlite] Storing/editing hierarchical data sets
> 
> I thought of another way to do your copy/cut/paste...
> 
> 
> 
> Assuming you keep the original audio around and use the levels I showed 
> before.
> 
> 
> 
> create table sequence(level int, parent int, start int, "end" int);
> 
> insert into sequence values(1, 0, 0, -1); -- note that -1 means "until end of 
> data".
> 
> 
> 
> See where I'm going?  You keep a sequence table that is much like your btree. 
>  It's just a collection of clips that when strung together can make your 
> audio clip.  By default you have one sequence per level.
> 
> 
> 
> Cut 1000-1999 from level=1
> 
> select * from sequence where level=1;
> 
> delete from sequence where level=1;
> 
> insert into sequence values(1,0,0,999);
> 
> insert into sequence values(2,1,2000,-1);
> 
> 
> 
> Insert some data:
> 
> 1st you find where it fits
> 
> select * from sequence where level=1;
> 
> bytes1=0; bytes2=0;
> 
> while moredata
> 
>   bytes1=bytes2; bytes2+=end-start;
> 
>   if (insertpoint >= bytes1 and insertpoint <= bytes2)
> 
>     update sequence set id=id+1, parent=parent+1 where id>=currentid;
> 
>     break;
> 
> end
> 
> 
> 
> Cuts are just splitting one record in 2, or adjusting 2 records and deleting 
> records in between.
> 
> I'll leave that as an exercise for you.
> 
> 
> 
> This would
> 
> 
> 
> Michael D. Black
> 
> Senior Scientist
> 
> NG Information Systems
> 
> Advanced Analytics Directorate
> 
> 
> 
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
> behalf of Christopher Melen [relativef...@hotmail.co.uk]
> Sent: Sunday, July 10, 2011 12:52 PM
> To: sqlite-users@sqlite.org
> Subject: EXT :[sqlite] Storing/editing hierarchical data sets
> 
> 
> Hi,
> 
> 
> I am developing an application which analyses audio data, and I have recently 
> been looking into Sqlite as a possible file format. The result of an analysis 
> in my application is a hierarchical data set, where each level in the 
> hierarchy represents a summary of the level below, taking the max of each 
> pair in the sub-level, in the following way:
> 
> 
>      251                                   214
> 
>      251         54          201           214
> 
>      251   91    17   54     31    201     214   66
> 
>      251 18  5 91  11 17  54 169  31 201  148 173  214 43  66
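
The "max of each pair" reduction described here can be sketched generically (a hypothetical helper, not Christopher's actual code):

```python
def reduce_levels(samples):
    """Build successive summary levels, each taking the max of each
    pair of values in the level below (an odd tail carries through)."""
    levels = [list(samples)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([max(cur[i:i + 2]) for i in range(0, len(cur), 2)])
    return levels

# Four samples reduce to three levels of resolution ("zoom levels").
levels = reduce_levels([251, 18, 5, 91])
# levels == [[251, 18, 5, 91], [251, 91], [251]]
```

Each level halves the sample count, so the whole hierarchy costs only about one extra copy of the data, which is consistent with the "few MB" estimate above.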
> 
> 
> Such a structure essentially represents the same data set at different levels 
> of resolution ('zoom levels', if you like). My first experiments involved a 
> btree-like structure (actually something closer to an enfilade* or counted 
> btree**), where the data stored in each node is simply a summary of its child 
> nodes. Edits to any node at the leaf level propagate up the tree, whilst 
> large edits simply entail unlinking pointers to subtrees, thus making edits 
> on any scale generally log-like in nature. This works fine as an in-memory 
> structure, but since my data sets might potentially grow fairly large (a few 
> hundred MB at least) I need a disk-based solution. I naively assumed that I 
> might be able to utilize Sqlite's btree layer in order to implement this more 
> effectively; this doesn't seem possible, however, given that the btree layer 
> isn't directly exposed, and in any case it doesn't map onto the user 
> interface in any way that seems helpful for this task.
> 
> 
> I am aware of some of the ways in which hierarchical or tree-like structures 
> can be represented in a database (adjacency lists, nested sets, materialized 
> paths, etc.), but none of these seems to offer a good solution. What I'm 
> experimenting with at present is the idea of entering each node of the 
> 

[sqlite] Data type of the blob returned by matchinfo()

2011-07-12 Thread Abhinav Upadhyay
Hi,

Quoting the ranking function given in the appendix of the FTS3
documentation page (http://www.sqlite.org/fts3.html#appendix_a)

static void rankfunc(sqlite3_context *pCtx, int nVal, sqlite3_value **apVal){
  int *aMatchinfo;/* Return value of matchinfo() */
...
...
aMatchinfo = (unsigned int *)sqlite3_value_blob(apVal[0]);
...
...

aMatchinfo is declared as int *, but the value obtained from
sqlite3_value_blob() is cast to unsigned int *. This causes a
compiler warning, so I am wondering what the data type of the
matchinfo blob is (int * or unsigned int *)? Common sense says it
should be unsigned int *, but I just wanted to confirm.
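
The FTS3 documentation describes the matchinfo blob as an array of unsigned 32-bit integers in machine byte order. A sketch of decoding it from Python (assuming the bundled SQLite was built with FTS4, as most are):

```python
import sqlite3
import struct

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")
con.execute("INSERT INTO docs VALUES ('hello world')")

blob = con.execute(
    "SELECT matchinfo(docs) FROM docs WHERE body MATCH 'hello'"
).fetchone()[0]

# The blob is a flat array of unsigned 32-bit integers in machine byte
# order -- hence the cast to unsigned int * in the rankfunc example.
counts = struct.unpack("%dI" % (len(blob) // 4), blob)
# Default format 'pcx': counts[0] = phrase count, counts[1] = column count.
```

The 'I' format code is struct's unsigned 32-bit integer, matching the unsigned int * interpretation.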

Thanks
Abhinav


Re: [sqlite] surprising missing query optimization

2011-07-12 Thread Bart Smissaert
> Try   ON (EA.ENTRY_ID = +E.ENTRY_ID)

Yes, that indeed works nicely.
Thanks for that.
I keep forgetting these non-standard SQL tricks.

RBS


On Tue, Jul 12, 2011 at 9:28 PM, Igor Tandetnik  wrote:
> On 7/12/2011 3:39 PM, Bart Smissaert wrote:
>> Joining a large table (ENTRY_ATTRIBUTES) and a small table
>> (BPNewENTRY) and putting the resulting records
>> in a third table, BP3. Large table may have a few million records and
>> small table a few hundred records.
>> The join field is called ENTRY_ID in both tables and this has a
>> non-unique index in the large table and is the integer primary key
>> in the small table. Data type is integer in both these fields.
>>
>> This is the SQL:
>>
>> INSERT OR REPLACE INTO BP3
>> (ENTRY_ID, NUMERIC_VALUE, UNIT)
>> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
>> FROM
>> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
>> (EA.ENTRY_ID = E.ENTRY_ID)
>
> Try   ON (EA.ENTRY_ID = +E.ENTRY_ID) . The unary plus should suppress
> the index on BPNewENTRY, and force SQLite to turn the execution around:
> scan BPNewENTRY and look up in ENTRY_ATTRIBUTES using index.
> --
> Igor Tandetnik
>


Re: [sqlite] surprising missing query optimization

2011-07-12 Thread Igor Tandetnik
On 7/12/2011 3:39 PM, Bart Smissaert wrote:
> Joining a large table (ENTRY_ATTRIBUTES) and a small table
> (BPNewENTRY) and putting the resulting records
> in a third table, BP3. Large table may have a few million records and
> small table a few hundred records.
> The join field is called ENTRY_ID in both tables and this has a
> non-unique index in the large table and is the integer primary key
> in the small table. Data type is integer in both these fields.
>
> This is the SQL:
>
> INSERT OR REPLACE INTO BP3
> (ENTRY_ID, NUMERIC_VALUE, UNIT)
> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
> FROM
> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
> (EA.ENTRY_ID = E.ENTRY_ID)

Try   ON (EA.ENTRY_ID = +E.ENTRY_ID) . The unary plus should suppress 
the index on BPNewENTRY, and force SQLite to turn the execution around: 
scan BPNewENTRY and look up in ENTRY_ATTRIBUTES using index.
-- 
Igor Tandetnik
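
Igor's unary-plus trick can be watched working in EXPLAIN QUERY PLAN output. A hedged sketch with hypothetical table names modeled on Bart's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE entry_attributes (entry_id INTEGER, numeric_value);
    CREATE INDEX idx_ea_entry_id ON entry_attributes(entry_id);
    CREATE TABLE bp_new_entry (entry_id INTEGER PRIMARY KEY);
""")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable step in column 3.
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

# The unary + stops the equality term from being used to look up E, so
# E must become the outer (scanned) table, and the big table is probed
# through its index on entry_id.
forced = plan("""SELECT * FROM entry_attributes EA
                 JOIN bp_new_entry E ON EA.entry_id = +E.entry_id""")
```

The forced plan should show a scan of the small table and an indexed search of entry_attributes, which is the turned-around execution order Igor describes.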



Re: [sqlite] surprising missing query optimization

2011-07-12 Thread Bart Smissaert
I have checked, and a missing ANALYZE is indeed not the cause of this
difference in query speed.

RBS


On Tue, Jul 12, 2011 at 9:06 PM, Bart Smissaert
 wrote:
>> it thinks that scanning the whole ENTRY_ATTRIBUTES table will read
>> about 54855 rows. And you say that it has much more rows.
>
> This particular database has less rows, the millions I mentioned are
> in a different
> database. I think the figures are right, but will check.
>
>> I think running ANALYZE on your database
>
> Analyze was done, but will double-check.
>
> RBS
>
>
> On Tue, Jul 12, 2011 at 8:59 PM, Pavel Ivanov  wrote:
>>> Now what surprises me is that this optimization is not done
>>> automatically by SQLite.
>>> I suppose I just over estimate the capabilities of the SQLite plan 
>>> generator.
>>> Or, would this be something that could be improved?
>>
>> It's very non-obvious optimization and I think other type of
>> optimization will give much better result in your case.
>> To understand the optimizer's behavior look at numbers it shows you:
>> it thinks that scanning the whole ENTRY_ATTRIBUTES table will read
>> about 54855 rows. And you say that it has much more rows. That's why
>> optimizer selects sub-optimal plan.
>> I think running ANALYZE on your database should fix selected plans and
>> even first query will run much faster.
>>
>>
>> Pavel
>>
>>
>> On Tue, Jul 12, 2011 at 3:39 PM, Bart Smissaert
>>  wrote:
>>> Joining a large table (ENTRY_ATTRIBUTES) and a small table
>>> (BPNewENTRY) and putting the resulting records
>>> in a third table, BP3. Large table may have a few million records and
>>> small table a few hundred records.
>>> The join field is called ENTRY_ID in both tables and this has a
>>> non-unique index in the large table and is the integer primary key
>>> in the small table. Data type is integer in both these fields.
>>>
>>> This is the SQL:
>>>
>>> INSERT OR REPLACE INTO BP3
>>> (ENTRY_ID, NUMERIC_VALUE, UNIT)
>>> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
>>> FROM
>>> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
>>> (EA.ENTRY_ID = E.ENTRY_ID)
>>>
>>> and this is the generated query plan for that:
>>>
>>> 0--0--0--SCAN TABLE ENTRY_ATTRIBUTES AS EA (~54855 rows) <-->
>>> 0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
>>> (rowid=?) (~1 rows)
>>>
>>> Now I can speed up the query a lot by putting a where clause in with
>>> the min(ENTRY_ID) of the small table:
>>>
>>> INSERT OR REPLACE INTO BP3
>>> (ENTRY_ID, NUMERIC_VALUE, UNIT)
>>> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
>>> FROM
>>> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
>>> (EA.ENTRY_ID = E.ENTRY_ID)
>>> WHERE
>>> EA.ENTRY_ID >= 4262936
>>>
>>> and that will have this query plan:
>>>
>>> 0--0--0--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING INDEX
>>> IDX_ENTRY_ATTRIBUTES_ENTRY_ID (ENTRY_ID>?) (~18102 rows) <-->
>>> 0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
>>> (rowid=?) (~1 rows)
>>>
>>> Getting this min(ENTRY_ID) is done in a separate query and because it
>>> is done on the small table it is very fast.
>>>
>>> Now what surprises me is that this optimization is not done
>>> automatically by SQLite.
>>> I suppose I just over estimate the capabilities of the SQLite plan 
>>> generator.
>>> Or, would this be something that could be improved?
>>>
>>> I am using SQLite version 3.7.5.
>>>
>>>
>>> RBS
>


Re: [sqlite] surprising missing query optimization

2011-07-12 Thread Bart Smissaert
> it thinks that scanning the whole ENTRY_ATTRIBUTES table will read
> about 54855 rows. And you say that it has much more rows.

This particular database has fewer rows; the millions I mentioned are
in a different database. I think the figures are right, but I will check.

> I think running ANALYZE on your database

Analyze was done, but will double-check.

RBS


On Tue, Jul 12, 2011 at 8:59 PM, Pavel Ivanov  wrote:
>> Now what surprises me is that this optimization is not done
>> automatically by SQLite.
>> I suppose I just over estimate the capabilities of the SQLite plan generator.
>> Or, would this be something that could be improved?
>
> It's very non-obvious optimization and I think other type of
> optimization will give much better result in your case.
> To understand the optimizer's behavior look at numbers it shows you:
> it thinks that scanning the whole ENTRY_ATTRIBUTES table will read
> about 54855 rows. And you say that it has much more rows. That's why
> optimizer selects sub-optimal plan.
> I think running ANALYZE on your database should fix selected plans and
> even first query will run much faster.
>
>
> Pavel
>
>
> On Tue, Jul 12, 2011 at 3:39 PM, Bart Smissaert
>  wrote:
>> Joining a large table (ENTRY_ATTRIBUTES) and a small table
>> (BPNewENTRY) and putting the resulting records
>> in a third table, BP3. Large table may have a few million records and
>> small table a few hundred records.
>> The join field is called ENTRY_ID in both tables and this has a
>> non-unique index in the large table and is the integer primary key
>> in the small table. Data type is integer in both these fields.
>>
>> This is the SQL:
>>
>> INSERT OR REPLACE INTO BP3
>> (ENTRY_ID, NUMERIC_VALUE, UNIT)
>> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
>> FROM
>> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
>> (EA.ENTRY_ID = E.ENTRY_ID)
>>
>> and this is the generated query plan for that:
>>
>> 0--0--0--SCAN TABLE ENTRY_ATTRIBUTES AS EA (~54855 rows) <-->
>> 0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
>> (rowid=?) (~1 rows)
>>
>> Now I can speed up the query a lot by putting a where clause in with
>> the min(ENTRY_ID) of the small table:
>>
>> INSERT OR REPLACE INTO BP3
>> (ENTRY_ID, NUMERIC_VALUE, UNIT)
>> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
>> FROM
>> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
>> (EA.ENTRY_ID = E.ENTRY_ID)
>> WHERE
>> EA.ENTRY_ID >= 4262936
>>
>> and that will have this query plan:
>>
>> 0--0--0--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING INDEX
>> IDX_ENTRY_ATTRIBUTES_ENTRY_ID (ENTRY_ID>?) (~18102 rows) <-->
>> 0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
>> (rowid=?) (~1 rows)
>>
>> Getting this min(ENTRY_ID) is done in a separate query and because it
>> is done on the small table it is very fast.
>>
>> Now what surprises me is that this optimization is not done
>> automatically by SQLite.
>> I suppose I just over estimate the capabilities of the SQLite plan generator.
>> Or, would this be something that could be improved?
>>
>> I am using SQLite version 3.7.5.
>>
>>
>> RBS


Re: [sqlite] surprising missing query optimization

2011-07-12 Thread Pavel Ivanov
> Now what surprises me is that this optimization is not done
> automatically by SQLite.
> I suppose I just over estimate the capabilities of the SQLite plan generator.
> Or, would this be something that could be improved?

It's a very non-obvious optimization, and I think a different kind of
optimization will give a much better result in your case.
To understand the optimizer's behavior, look at the numbers it shows you:
it thinks that scanning the whole ENTRY_ATTRIBUTES table will read
about 54855 rows, while you say the table has many more rows. That's why
the optimizer selects a sub-optimal plan.
I think running ANALYZE on your database should fix the selected plans, and
even the first query will run much faster.


Pavel
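
Pavel's advice is easy to see in miniature: ANALYZE records measured statistics in sqlite_stat1, which is where the row estimates in the query plan come from. A sketch with a hypothetical table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entry_attributes (entry_id INTEGER)")
con.executemany("INSERT INTO entry_attributes VALUES (?)",
                [(i,) for i in range(1000)])
con.execute("CREATE INDEX idx_ea ON entry_attributes(entry_id)")

# Without ANALYZE the planner falls back on built-in guesses about
# table sizes; ANALYZE stores measured figures in sqlite_stat1.
con.execute("ANALYZE")
stats = con.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
# stat "1000 1" reads: 1000 rows, ~1 row per distinct entry_id value.
```

Stale or missing statistics produce exactly the symptom in this thread: a row estimate (~54855) that no longer matches the real table.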


On Tue, Jul 12, 2011 at 3:39 PM, Bart Smissaert
 wrote:
> Joining a large table (ENTRY_ATTRIBUTES) and a small table
> (BPNewENTRY) and putting the resulting records
> in a third table, BP3. Large table may have a few million records and
> small table a few hundred records.
> The join field is called ENTRY_ID in both tables and this has a
> non-unique index in the large table and is the integer primary key
> in the small table. Data type is integer in both these fields.
>
> This is the SQL:
>
> INSERT OR REPLACE INTO BP3
> (ENTRY_ID, NUMERIC_VALUE, UNIT)
> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
> FROM
> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
> (EA.ENTRY_ID = E.ENTRY_ID)
>
> and this is the generated query plan for that:
>
> 0--0--0--SCAN TABLE ENTRY_ATTRIBUTES AS EA (~54855 rows) <-->
> 0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
> (rowid=?) (~1 rows)
>
> Now I can speed up the query a lot by putting a where clause in with
> the min(ENTRY_ID) of the small table:
>
> INSERT OR REPLACE INTO BP3
> (ENTRY_ID, NUMERIC_VALUE, UNIT)
> SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
> FROM
> ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
> (EA.ENTRY_ID = E.ENTRY_ID)
> WHERE
> EA.ENTRY_ID >= 4262936
>
> and that will have this query plan:
>
> 0--0--0--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING INDEX
> IDX_ENTRY_ATTRIBUTES_ENTRY_ID (ENTRY_ID>?) (~18102 rows) <-->
> 0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
> (rowid=?) (~1 rows)
>
> Getting this min(ENTRY_ID) is done in a separate query and because it
> is done on the small table it is very fast.
>
> Now what surprises me is that this optimization is not done
> automatically by SQLite.
> I suppose I just over estimate the capabilities of the SQLite plan generator.
> Or, would this be something that could be improved?
>
> I am using SQLite version 3.7.5.
>
>
> RBS


[sqlite] surprising missing query optimization

2011-07-12 Thread Bart Smissaert
Joining a large table (ENTRY_ATTRIBUTES) and a small table
(BPNewENTRY) and putting the resulting records
in a third table, BP3. Large table may have a few million records and
small table a few hundred records.
The join field is called ENTRY_ID in both tables and this has a
non-unique index in the large table and is the integer primary key
in the small table. Data type is integer in both these fields.

This is the SQL:

INSERT OR REPLACE INTO BP3
(ENTRY_ID, NUMERIC_VALUE, UNIT)
SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
FROM
ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
(EA.ENTRY_ID = E.ENTRY_ID)

and this is the generated query plan for that:

0--0--0--SCAN TABLE ENTRY_ATTRIBUTES AS EA (~54855 rows) <-->
0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
(rowid=?) (~1 rows)

Now I can speed up the query a lot by putting a where clause in with
the min(ENTRY_ID) of the small table:

INSERT OR REPLACE INTO BP3
(ENTRY_ID, NUMERIC_VALUE, UNIT)
SELECT EA.ENTRY_ID, EA.NUMERIC_VALUE, EA.TYPE_SPECIFIC_INFO
FROM
ENTRY_ATTRIBUTES EA INNER JOIN BPNewENTRY E ON
(EA.ENTRY_ID = E.ENTRY_ID)
WHERE
EA.ENTRY_ID >= 4262936

and that will have this query plan:

0--0--0--SEARCH TABLE ENTRY_ATTRIBUTES AS EA USING INDEX
IDX_ENTRY_ATTRIBUTES_ENTRY_ID (ENTRY_ID>?) (~18102 rows) <-->
0--1--1--SEARCH TABLE BPNewENTRY AS E USING INTEGER PRIMARY KEY
(rowid=?) (~1 rows)

Getting this min(ENTRY_ID) is done in a separate query and because it
is done on the small table it is very fast.

Now what surprises me is that this optimization is not done
automatically by SQLite.
I suppose I just overestimate the capabilities of the SQLite plan generator.
Or would this be something that could be improved?

I am using SQLite version 3.7.5.


RBS


Re: [sqlite] Is there any existing to open .db-wal file?

2011-07-12 Thread Jan Hudec
On Tue, Jul 12, 2011 at 13:13:25 +0900, 박성근 wrote:
> I am trying to investigate .db-wal file gathered from sudden power off.

Why do you want to investigate it? When you open the db again and checkpoint
it, sqlite will apply all the transactions that managed to finish before the
power-off to the .db file proper and drop the ones that did not.

If you had said it was after an application crash (that you can't reproduce) I'd
kind of see the point in analyzing the unfinished transactions, but after a power off?

-- 
 Jan 'Bulb' Hudec 
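
The recovery behaviour Jan describes can be seen on a healthy database too: committed pages live in the -wal file until a checkpoint folds them back into the main .db file. A small sketch:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()

# The committed transaction sits in demo.db-wal until a checkpoint
# copies it into the main database file.
assert os.path.getsize(path + "-wal") > 0
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
# After the checkpoint the WAL has been applied and truncated to zero.
```

Opening a database left behind by a power-off triggers the same apply-and-drop logic automatically, which is why the -wal file rarely needs manual inspection.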


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Richard Hipp
On Tue, Jul 12, 2011 at 10:48 AM, Baruch Burstein wrote:

> These do not work with the amalgamation. How do I use these?
>

Some will, some won't.  To use those that won't, you'll need to rebuild the
SQLite amalgamation with the options you want to use and import that new
amalgamation into the Fossil source tree.


>
> On Tue, Jul 12, 2011 at 5:44 PM, Richard Hipp  wrote:
>
> > Here are some additional options you might try to minimize the SQLite
> size
> > in Fossil:
> >
> > SQLITE_OMIT_ANALYZE
> > SQLITE_OMIT_AUTOMATIC_INDEX
> > SQLITE_OMIT_AUTOVACUUM
> > SQLITE_OMIT_BUILTIN_TEST
> > SQLITE_OMIT_DEPRECATED
> > SQLITE_OMIT_GET_TABLE
> > SQLITE_OMIT_INCRBLOB
> > SQLITE_OMIT_SHARED_CACHE
> > SQLITE_OMIT_UTF16
> > SQLITE_OMIT_VIRTUALTABLE
> >
> > No guarantees that Fossil will run, or even compile, with the above.  But
> > if
> > you experiment you can probably find a subset of the above that will work
> > for you.
> >
> >
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Michael Stephenson
Add #include <SomeHeader.h> or #include "SomeHeader.h" to the top of the
amalgamation, then define them in SomeHeader.h?

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Baruch Burstein
Sent: Tuesday, July 12, 2011 10:48 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Minimize sqlite3.o size

These do not work with the amalgamation. How do I use these?

On Tue, Jul 12, 2011 at 5:44 PM, Richard Hipp  wrote:

> Here are some additional options you might try to minimize the SQLite 
> size in Fossil:
>
> SQLITE_OMIT_ANALYZE
> SQLITE_OMIT_AUTOMATIC_INDEX
> SQLITE_OMIT_AUTOVACUUM
> SQLITE_OMIT_BUILTIN_TEST
> SQLITE_OMIT_DEPRECATED
> SQLITE_OMIT_GET_TABLE
> SQLITE_OMIT_INCRBLOB
> SQLITE_OMIT_SHARED_CACHE
> SQLITE_OMIT_UTF16
> SQLITE_OMIT_VIRTUALTABLE
>
> No guarantees that Fossil will run, or even compile, with the above.  
> But if you experiment you can probably find a subset of the above that 
> will work for you.
>
>
>
> On Tue, Jul 12, 2011 at 9:36 AM, Baruch Burstein  >wrote:
>
> > Yes I know (although I only got a 200k difference), but -O3 is about
> twice
> > as fast in my tests.
> >
> >
> > On Tue, Jul 12, 2011 at 4:22 PM, Stephan Beal 
> > 
> > wrote:
> >
> > > On Tue, Jul 12, 2011 at 3:10 PM, Baruch Burstein 
> > >  > > >wrote:
> > >
> > > > I assume this is the same for sqlite3.dll or sqlite3.lib, but if 
> > > > not,
> I
> > > am
> > > > interested in sqlite3.o (mingw32).
> > > > How can I minimize the size of the library (compiling with -O3, 
> > > > since
> > > speed
> > > > is my top concern, so different optimizations is not an option)? 
> > > > If I
> > > know
> > > >
> > >
> > > Coincidentally, i just tried -O3 and the end result was almost 
> > > 0.5MB
> > larger
> > > than with -g -Os.
> > >
> > > --
> > > - stephan beal
> > > http://wanderinghorse.net/home/stephan/
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Baruch Burstein
These do not work with the amalgamation. How do I use these?

On Tue, Jul 12, 2011 at 5:44 PM, Richard Hipp  wrote:

> Here are some additional options you might try to minimize the SQLite size
> in Fossil:
>
> SQLITE_OMIT_ANALYZE
> SQLITE_OMIT_AUTOMATIC_INDEX
> SQLITE_OMIT_AUTOVACUUM
> SQLITE_OMIT_BUILTIN_TEST
> SQLITE_OMIT_DEPRECATED
> SQLITE_OMIT_GET_TABLE
> SQLITE_OMIT_INCRBLOB
> SQLITE_OMIT_SHARED_CACHE
> SQLITE_OMIT_UTF16
> SQLITE_OMIT_VIRTUALTABLE
>
> No guarantees that Fossil will run, or even compile, with the above.  But
> if
> you experiment you can probably find a subset of the above that will work
> for you.
>
>
>
> On Tue, Jul 12, 2011 at 9:36 AM, Baruch Burstein  >wrote:
>
> > Yes I know (although I only got a 200k difference), but -O3 is about
> twice
> > as fast in my tests.
> >
> >
> > On Tue, Jul 12, 2011 at 4:22 PM, Stephan Beal 
> > wrote:
> >
> > > On Tue, Jul 12, 2011 at 3:10 PM, Baruch Burstein  > > >wrote:
> > >
> > > > I assume this is the same for sqlite3.dll or sqlite3.lib, but if not,
> I
> > > am
> > > > interested in sqlite3.o (mingw32).
> > > > How can I minimize the size of the library (compiling with -O3, since
> > > speed
> > > > is my top concern, so different optimizations is not an option)? If I
> > > know
> > > >
> > >
> > > Coincidentally, i just tried -O3 and the end result was almost 0.5MB
> > larger
> > > than with -g -Os.
> > >
> > > --
> > > - stephan beal
> > > http://wanderinghorse.net/home/stephan/
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Richard Hipp
Here are some additional options you might try to minimize the SQLite size
in Fossil:

SQLITE_OMIT_ANALYZE
SQLITE_OMIT_AUTOMATIC_INDEX
SQLITE_OMIT_AUTOVACUUM
SQLITE_OMIT_BUILTIN_TEST
SQLITE_OMIT_DEPRECATED
SQLITE_OMIT_GET_TABLE
SQLITE_OMIT_INCRBLOB
SQLITE_OMIT_SHARED_CACHE
SQLITE_OMIT_UTF16
SQLITE_OMIT_VIRTUALTABLE

No guarantees that Fossil will run, or even compile, with the above.  But if
you experiment you can probably find a subset of the above that will work
for you.
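
A build-configuration sketch of how these options are typically applied. This is hypothetical: the OMIT subset shown is illustrative, and as noted elsewhere in the thread, some OMIT options only take full effect if the amalgamation is regenerated from canonical sources with them defined.

```sh
# Compile the amalgamation size-optimized with a trial subset of the
# OMIT options listed above; adjust the subset until the application
# still compiles and links.
gcc -Os \
    -DSQLITE_OMIT_DEPRECATED \
    -DSQLITE_OMIT_GET_TABLE \
    -DSQLITE_OMIT_INCRBLOB \
    -DSQLITE_OMIT_SHARED_CACHE \
    -DSQLITE_OMIT_UTF16 \
    -c sqlite3.c -o sqlite3.o
```
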



On Tue, Jul 12, 2011 at 9:36 AM, Baruch Burstein wrote:

> Yes I know (although I only got a 200k difference), but -O3 is about twice
> as fast in my tests.
>
>
> On Tue, Jul 12, 2011 at 4:22 PM, Stephan Beal 
> wrote:
>
> > On Tue, Jul 12, 2011 at 3:10 PM, Baruch Burstein  > >wrote:
> >
> > > I assume this is the same for sqlite3.dll or sqlite3.lib, but if not, I
> > am
> > > interested in sqlite3.o (mingw32).
> > > How can I minimize the size of the library (compiling with -O3, since
> > speed
> > > is my top concern, so different optimizations is not an option)? If I
> > know
> > >
> >
> > Coincidentally, i just tried -O3 and the end result was almost 0.5MB
> larger
> > than with -g -Os.
> >
> > --
> > - stephan beal
> > http://wanderinghorse.net/home/stephan/



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Compiling sqlite

2011-07-12 Thread Richard Hipp
On Tue, Jul 12, 2011 at 10:37 AM, Baruch Burstein wrote:

> The website (http://www.sqlite.org/about.html) says that sqlite can be
> compiled to under 300K. I am using mingw32, and with no configuration did I
> manage to get sqlite3.o below 400k. What compiler/compiler options are used
> to reach <300K?
>

-Os together with various -DSQLITE_OMIT_x options.  But those
-DSQLITE_OMIT_ options are incompatible with Fossil.





-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Compiling sqlite

2011-07-12 Thread Baruch Burstein
The website (http://www.sqlite.org/about.html) says that sqlite can be
compiled to under 300K. I am using mingw32, and no configuration I have
tried gets sqlite3.o below 400k. What compiler/compiler options are used
to reach <300K?


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Baruch Burstein
Yes I know (although I only got a 200k difference), but -O3 is about twice
as fast in my tests.


On Tue, Jul 12, 2011 at 4:22 PM, Stephan Beal  wrote:

> On Tue, Jul 12, 2011 at 3:10 PM, Baruch Burstein wrote:
>
> > I assume this is the same for sqlite3.dll or sqlite3.lib, but if not, I
> am
> > interested in sqlite3.o (mingw32).
> > How can I minimize the size of the library (compiling with -O3, since
> speed
> > is my top concern, so different optimizations is not an option)? If I
> know
> >
>
> Coincidentally, i just tried -O3 and the end result was almost 0.5MB larger
> than with -g -Os.
>
> --
> - stephan beal
> http://wanderinghorse.net/home/stephan/


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Stephan Beal
On Tue, Jul 12, 2011 at 3:10 PM, Baruch Burstein wrote:

> I assume this is the same for sqlite3.dll or sqlite3.lib, but if not, I am
> interested in sqlite3.o (mingw32).
> How can I minimize the size of the library (compiling with -O3, since speed
> is my top concern, so different optimizations is not an option)? If I know
>

Coincidentally, i just tried -O3 and the end result was almost 0.5MB larger
than with -g -Os.

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Black, Michael (IS)
http://www.sqlite.org/compile.html

Read section 1.4 -- tells you all you need to know.



First thing you should do is see if there's any significant performance 
difference with -Os instead of -O3.

You may be surprised.







Michael D. Black

Senior Scientist

NG Information Systems

Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Baruch Burstein [bmburst...@gmail.com]
Sent: Tuesday, July 12, 2011 8:10 AM
To: sqlite-users@sqlite.org
Subject: EXT :[sqlite] Minimize sqlite3.o size

I assume this is the same for sqlite3.dll or sqlite3.lib, but if not, I am
interested in sqlite3.o (mingw32).
How can I minimize the size of the library (compiling with -O3, since speed
is my top concern, so different optimizations is not an option)? If I know
there are parts of SQL I don't need in my program (like JOIN or ATTACH or
triggers), can I remove support for them (this is for an embedded system and
every byte counts, or at least every 100 bytes or so)? What pre-processor
options can I set to remove things like FTS or R-Tree support which I do not
need? Are there any other things easily removable that I might not need and
might not even know they exist in order to remove them? How about some of
the c API? Most of it I never use.


Re: [sqlite] Minimize sqlite3.o size

2011-07-12 Thread Pavel Ivanov
http://www.sqlite.org/compile.html#omitfeatures

Make sure you read all important notes and tested your app thoroughly.


Pavel


On Tue, Jul 12, 2011 at 9:10 AM, Baruch Burstein  wrote:
> I assume this is the same for sqlite3.dll or sqlite3.lib, but if not, I am
> interested in sqlite3.o (mingw32).
> How can I minimize the size of the library (compiling with -O3, since speed
> is my top concern, so different optimizations is not an option)? If I know
> there are parts of SQL I don't need in my program (like JOIN or ATTACH or
> triggers), can I remove support for them (this is for an embedded system and
> every byte counts, or at least every 100 bytes or so)? What pre-processor
> options can I set to remove things like FTS or R-Tree support which I do not
> need? Are there any other things easily removable that I might not need and
> might not even know they exist in order to remove them? How about some of
> the c API? Most of it I never use.


[sqlite] Minimize sqlite3.o size

2011-07-12 Thread Baruch Burstein
I assume this is the same for sqlite3.dll or sqlite3.lib, but if not, I am
interested in sqlite3.o (mingw32).
How can I minimize the size of the library (compiling with -O3, since speed
is my top concern, so different optimizations is not an option)? If I know
there are parts of SQL I don't need in my program (like JOIN or ATTACH or
triggers), can I remove support for them (this is for an embedded system and
every byte counts, or at least every 100 bytes or so)? What pre-processor
options can I set to remove things like FTS or R-Tree support which I do not
need? Are there any other things easily removable that I might not need and
might not even know they exist in order to remove them? How about some of
the c API? Most of it I never use.


Re: [sqlite] Storing/editing hierarchical data sets

2011-07-12 Thread Black, Michael (IS)
I thought of another way to do your copy/cut/paste...



Assuming you keep the original audio around and use the levels I showed before.



create table sequence(level int, parent int, start int, "end" int);  -- "end" quoted: END is an SQL keyword

insert into sequence values(1,0,0,-1);  -- note that -1 means "until end of data"



See where I'm going?  You keep a sequence table that is much like your btree.  
It's just a collection of clips that, when strung together, reconstruct your 
audio stream.  By default you have one sequence per level.



Cut 1000-1999 from level=1

select * from sequence where level=1;

delete from sequence where level=1;

insert into sequence values(1,0,0,999);

insert into sequence values(2,1,2000,-1);



Insert some data:

First you find which span the insert point falls in:

select * from sequence where level=1 order by start;

bytes1 = 0; bytes2 = 0;

while moredata

  bytes2 += end - start;

  if (insertpoint >= bytes1 and insertpoint <= bytes2)

    update sequence set id=id+1, parent=parent+1 where id >= currentid;  -- shift later entries down

    break;

  bytes1 = bytes2;  -- advance to the next span

end



Cuts are just splitting one record in 2, or adjusting 2 records and deleting 
records in between.

I'll leave that as an exercise for you.
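To make the cut case concrete, here is a small sketch in Python with an in-memory database. The function and the span-trimming logic are illustrative, not from the original message; "end" is quoted because END is an SQL keyword, and the -1 "until end of data" sentinel is replaced by explicit end offsets for brevity.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('create table sequence(level int, parent int, start int, "end" int)')

def cut(level, lo, hi):
    """Remove samples lo..hi from a level by trimming or splitting spans."""
    rows = con.execute(
        'select rowid, start, "end" from sequence where level=?', (level,)
    ).fetchall()
    for rowid, s, e in rows:
        if e < lo or s > hi:        # span not touched by the cut
            continue
        con.execute("delete from sequence where rowid=?", (rowid,))
        if s < lo:                  # keep the part left of the cut
            con.execute("insert into sequence values(?,?,?,?)",
                        (level, 0, s, lo - 1))
        if e > hi:                  # keep the part right of the cut
            con.execute("insert into sequence values(?,?,?,?)",
                        (level, 0, hi + 1, e))

con.execute("insert into sequence values(1, 0, 0, 9999)")
cut(1, 1000, 1999)                  # "Cut 1000-1999 from level=1"
spans = con.execute(
    'select start, "end" from sequence order by start').fetchall()
print(spans)                        # [(0, 999), (2000, 9999)]
```

A cut that lands inside one span splits it in two; a wider cut trims the two boundary spans and deletes everything between them, exactly as described above.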



This would



Michael D. Black

Senior Scientist

NG Information Systems

Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Christopher Melen [relativef...@hotmail.co.uk]
Sent: Sunday, July 10, 2011 12:52 PM
To: sqlite-users@sqlite.org
Subject: EXT :[sqlite] Storing/editing hierarchical data sets


Hi,


I am developing an application which analyses audio data, and I have recently 
been looking into Sqlite as a possible file format. The result of an analysis 
in my application is a hierarchical data set, where each level in the hierarchy 
represents a summary of the level below, taking the max of each pair in the 
sub-level, in the following way:


251                                214

251            54                  201              214

251     91     17     54     31    201      214     66

251 18  5 91  11 17  54 16  9 31  201 148  173 214  43 66


Such a structure essentially represents the same data set at different levels 
of resolution ('zoom levels', if you like). My first experiments involved a 
btree-like structure (actually something closer to an enfilade* or counted 
btree**), where the data stored in each node is simply a summary of its child 
nodes. Edits to any node at the leaf level propagate up the tree, whilst large 
edits simply entail unlinking pointers to subtrees, thus making edits on any 
scale generally log-like in nature. This works fine as an in-memory structure, 
but since my data sets might potentially grow fairly large (a few hundred MB at 
least) I need a disk-based solution. I naively assumed that I might be able to 
utilize Sqlite's btree layer in order to implement this more effectively; this 
doesn't seem possible, however, given that the btree layer isn't directly 
exposed, and in any case it doesn't map onto the user interface in any way that 
seems helpful for this task.
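The max-of-pairs summary described above can be sketched in a few lines; an edit to a leaf then only requires recomputing the maxes on the path up to the root. The function name and sample values here are illustrative (in the spirit of the example values above), not from the original post.

```python
# Each level halves the resolution by taking the max of adjacent
# pairs of the level below.
def build_levels(samples):
    levels = [list(samples)]
    while len(levels[-1]) > 2:
        prev = levels[-1]
        # Pair adjacent values; a trailing odd value carries over unchanged.
        levels.append([max(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return levels   # levels[0] is the raw data, levels[-1] the coarsest

data = [251, 18, 5, 91, 11, 17, 54, 16, 9, 31, 201, 148, 173, 214, 43, 66]
for lvl in build_levels(data):
    print(lvl)
# [251, 18, 5, 91, 11, 17, 54, 16, 9, 31, 201, 148, 173, 214, 43, 66]
# [251, 91, 17, 54, 31, 201, 214, 66]
# [251, 54, 201, 214]
# [251, 214]
```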


I am aware of some of the ways in which hierarchical or tree-like structures 
can be represented in a database (adjacency lists, nested sets, materialized 
paths, etc.), but none of these seems to offer a good solution. What I'm 
experimenting with at present is the idea of entering each node of the 
hierarchy into the database as a blob (of say, 1024 bytes), while maintaining a 
separate in-memory tree which then maps on to this flat database of nodes (each 
node in the tree maintains a pointer to a node in the database).


I would be very interested in
thoughts/observations
on this problem - or even better a solution!



Many thanks in advance,
Christopher


* http://en.wikipedia.org/wiki/Enfilade_(Xanadu)
** http://www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtree.html



Re: [sqlite] Storing/editing hierarchical data sets

2011-07-12 Thread Alexey Pechnikov
See the FTS3 extension, where the full-text index is stored as multiple btrees
in regular tables. Note: FTS2 is simpler.

-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/