"Scott Hess" <[EMAIL PROTECTED]> wrote:
> do_test test-1.1 {
> execsql {PRAGMA encoding}
>
> sqlite3 db2 test.db
> execsql {CREATE TABLE t (id int)} db2
> db2 close
>
> #execsql {SELECT * FROM sqlite_master}
> catchsql {SELECT * FROM t}
> } {1 {no such table: t}}
>
> Looks like the s
> SELECT d.device_type, dpi.* FROM device d, device_perf_interval dpi
> WHERE d.device_id=dpi.device_id AND dpi.interval_type=1 AND
> dpi.interval_duration=300
> ORDER BY dpi.interval_end_date LIMIT ;
>
> What can I do to speed this up? I tried a third index on interval_end_date
> but can't ge
I think the simplest approach for me is to get rid of the join on device.
After I did so, SQLite uses the index on interval_end_date. I'll just need
to grab the device -> device_type mapping and interpret it myself.
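That rewrite can be sketched with Python's built-in sqlite3 module; the schema and data below are invented for illustration, and only the shape of the query follows the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE device (device_id INTEGER PRIMARY KEY, device_type TEXT);
    CREATE TABLE device_perf_interval (
        device_id INTEGER, interval_type INTEGER,
        interval_duration INTEGER, interval_end_date INTEGER);
    CREATE INDEX dpi_end ON device_perf_interval(interval_end_date);
    INSERT INTO device VALUES (1, 'router'), (2, 'switch');
    INSERT INTO device_perf_interval VALUES
        (1, 1, 300, 1000), (2, 1, 300, 2000), (1, 2, 600, 1500);
""")

# Grab the small device -> device_type mapping once, up front.
device_type = dict(conn.execute("SELECT device_id, device_type FROM device"))

# With the join gone, SQLite is free to use the index on interval_end_date.
rows = conn.execute("""
    SELECT * FROM device_perf_interval
    WHERE interval_type = 1 AND interval_duration = 300
    ORDER BY interval_end_date LIMIT 10
""").fetchall()

# Interpret the mapping in application code instead of in SQL.
for row in rows:
    print(device_type[row[0]], row)
```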
Chris
On 7/11/07, Chris Eich <[EMAIL PROTECTED]> wrote:
Me again. I said at th
So I wonder if "select null from sqlite_master limit 1" is the
cheapest, fastest, dirtiest way to make it work right now.
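A minimal sketch of that workaround, using Python's built-in sqlite3 module (note: recent SQLite versions re-prepare automatically on schema change, so this only demonstrates the pattern, not the original failure):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file, as in the test case above.
path = os.path.join(tempfile.mkdtemp(), "test.db")
db1 = sqlite3.connect(path)
db2 = sqlite3.connect(path)

db2.execute("CREATE TABLE t (id int)")
db2.commit()
db2.close()

# The cheap "touch the schema" query suggested above: any statement that
# forces a prepare makes SQLite notice the schema change and reload it.
db1.execute("SELECT null FROM sqlite_master LIMIT 1").fetchall()

# The table created through the second connection is now visible.
rows = db1.execute("SELECT * FROM t").fetchall()
```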
Me again. I said at the end of the last thread that I had learned a lot.
What I learned is that my query's performance problems were not due to
picking a bad index (given my dataset, the two indices were identical for
this query).
Instead the problem seems to be caused by the ORDER BY and LIMIT clauses.
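One way to see the ORDER BY/LIMIT effect is EXPLAIN QUERY PLAN: without a usable index SQLite has to sort the whole result through a temporary b-tree before it can apply LIMIT, while with an index on the ORDER BY column it walks the index in order and can stop after LIMIT rows. A sketch with an invented single-table schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dpi (device_id INTEGER, interval_end_date INTEGER)")

def plan(sql):
    # Concatenate the textual EXPLAIN QUERY PLAN details for a statement.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM dpi ORDER BY interval_end_date LIMIT 10"

before = plan(query)   # no index: a temp b-tree sorts everything first
conn.execute("CREATE INDEX dpi_end ON dpi(interval_end_date)")
after = plan(query)    # index: rows come out pre-sorted, LIMIT stops early

print(before)
print(after)
```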
Aha, yes, that does sound like exactly it. Thanks!
--- Scott Hess <[EMAIL PROTECTED]> wrote:
Hmm, I wonder if this is the same problem I'm having. I randomly get
"no such table" errors too. Anyone have experience with this
particular problem?
On 7/11/07, Scott Hess <[EMAIL PROTECTED]> wrote:
do_test test-1.1 {
execsql {PRAGMA encoding}
sqlite3 db2 test.db
execsql {CREATE TABLE t (id int)} db2
db2 close
#execsql {SELECT * FROM sqlite_master}
catchsql {SELECT * FROM t}
} {1 {no such table: t}}
Looks like the schema info is obviously being cached. The first
PRAGMA is to make su
Joe, the two plans are actually equal (within a few %).
Thanks for all your help folks! I've learned quite a bit and I hope other
readers did too.
Chris
> So no, you won't gain anything by trying to avoid this column -
> it is always there whether you declare it or not.
But I found that inserts were faster if I didn't create the table with
INTEGER PRIMARY KEY, so it looked like I gained there, although I understand
I might lose out somewhere else.
RBS
> I have not measured it, but I'm guessing that the speed reduction
> caused by inserting INTEGER PRIMARY KEY out of order is much less
> than the speed penalty of building a separate index.
Thanks.
I did measure that and thought that the speed penalty of creating the new
index was less than the
Hi Yves, hi Filip,
thanks a lot for your help. You solved my problem
and I learned something new.
Kind regards
Ulrich
On Wednesday 11 July 2007 20:00, [EMAIL PROTECTED] wrote:
> Ulrich Schöbel wrote:
> > As this list has excellent SQL wizards, I hope someone can help
> > me on the problem
I believe the most important factor in whether connection pooling will be
useful is how complex your schema is. The more complex your schema, the
longer it will take to open a new connection.
Remember the choice should not be between using a single static connection
and a connection pool but s
RB Smissaert <[EMAIL PROTECTED]>
wrote:
It seems if you do inserts on a table it is faster if you have no
INTEGER
PRIMARY KEY on that table
You _always_ have an INTEGER PRIMARY KEY on every table. It's part of
SQLite storage mechanism.
If you don't explicitly declare one, it's still there un
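The hidden key is visible as rowid; a quick sketch in Python's sqlite3 module (table names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# No INTEGER PRIMARY KEY declared -- the 64-bit rowid is still there.
conn.execute("CREATE TABLE t (name TEXT)")
conn.execute("INSERT INTO t VALUES ('a'), ('b')")
implicit = conn.execute("SELECT rowid, name FROM t ORDER BY rowid").fetchall()

# Declaring an INTEGER PRIMARY KEY merely gives that same rowid a visible name.
conn.execute("CREATE TABLE u (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO u(name) VALUES ('a')")
alias = conn.execute("SELECT id, rowid FROM u").fetchone()
```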
"RB Smissaert" <[EMAIL PROTECTED]> wrote:
> > Looking up a record by INTEGER PRIMARY KEY is always twice as
> > fast as looking up the same record by any other key
>
> Didn't realize that, but I have a question in connection with this.
> It seems if you do inserts on a table it is faster if you ha
Joe, the numbers are 100, 4236301 and 100 respectively (as you suspected,
there are few devices, many intervals, and all devices have intervals).
Richard, sqlite_stat1 has these rows for the dpi indexes:
INSERT INTO "sqlite_stat1" VALUES('device_perf_interval','dpi1','4236301
2118151 21182 21182
> Looking up a record by INTEGER PRIMARY KEY is always twice as
> fast as looking up the same record by any other key
Didn't realize that, but I have a question in connection with this.
It seems if you do inserts on a table it is faster if you have no INTEGER
PRIMARY KEY on that table and then lat
"Steve Krulewitz" <[EMAIL PROTECTED]> wrote:
> Hey all --
>
> In the application I am working on (Songbird), we have a simple two
> table schema representing the tracks in your music collection and the
> properties on those tracks. The keys used are all UUIDs (128 bit
> number) which we are curre
--- Chris Eich <[EMAIL PROTECTED]> wrote:
> Thanks, I tried both 3.3.13 and 3.4.0. With 3.3.13 the +'s didn't make a
> difference; the loops always were in d, dpi order with index dpi1 used. With
> 3.4.0 the loops were always in dpi, d order with index dpi1 used, unless I
> used CROSS JOIN.
Index
Ulrich Schöbel wrote:
As this list has excellent SQL wizards, I hope someone can help
me on the problem to split a table into two.
I have something like this:
create table org (
a text,
b text,
c text,
d text
);
I need to split it into two tables as follows:
create table tbl_a (
a_id i
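The question is cut off here, so the exact split criterion is unknown; one common normalization, sketched under that assumption with invented data, pulls a repeating column into its own table and references it by id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE org (a TEXT, b TEXT, c TEXT, d TEXT);
    INSERT INTO org VALUES ('x','1','2','3'), ('x','4','5','6'), ('y','7','8','9');

    -- Hypothetical layout: distinct values of a get an id, the rest of the
    -- row moves to a second table that references that id.
    CREATE TABLE tbl_a (a_id INTEGER PRIMARY KEY, a TEXT UNIQUE);
    CREATE TABLE tbl_b (a_id INTEGER REFERENCES tbl_a, b TEXT, c TEXT, d TEXT);

    INSERT INTO tbl_a(a) SELECT DISTINCT a FROM org;
    INSERT INTO tbl_b SELECT a_id, b, c, d FROM org JOIN tbl_a USING (a);
""")

# Joining the two tables back together reproduces the original rows.
rows = conn.execute(
    "SELECT a, b FROM tbl_b JOIN tbl_a USING (a_id) ORDER BY b").fetchall()
```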
Chris Eich <[EMAIL PROTECTED]> wrote:
But the basic question I have is why (given d, dpi order) the device_id
can't be plugged into the dpi1 index.
I believe it can. What makes you think it doesn't?
EXPLAIN QUERY PLAN output doesn't show how much of the index will be
used--does anyone have a
Thanks, I tried both 3.3.13 and 3.4.0. With 3.3.13 the +'s didn't make a
difference; the loops always were in d, dpi order with index dpi1 used. With
3.4.0 the loops were always in dpi, d order with index dpi1 used, unless I
used CROSS JOIN.
But the basic question I have is why (given d, dpi orde
Thanks for the reply.
But I have some clarifications though...
I downloaded the System.Data.SQLite DLL from the website you mentioned, but it
was clearly stated that it was for Win32 systems, i.e. for Windows OS only. And
I am using Linux. So is there a release for Linux also? Correct me if I am
wrong.
On 7/11/07, Steve Krulewitz <[EMAIL PROTECTED]> wrote:
I was wondering if there is much to be gained by storing these 128 bit
values in binary rather than as strings. We estimate roughly 20
properties per track, so in a moderate sized database you'd have 10k
rows in the tracks table and 200k ro
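The size difference at least is easy to measure; a sketch using Python's sqlite3 and uuid modules (table and column names invented):

```python
import sqlite3
import uuid

u = uuid.uuid4()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (id_text TEXT, id_blob BLOB)")
conn.execute("INSERT INTO track VALUES (?, ?)", (u.hex, u.bytes))

text_len, blob_len = conn.execute(
    "SELECT length(id_text), length(id_blob) FROM track").fetchone()

# 16-byte BLOB vs 32-character hex string: half the key storage, and BLOB
# comparisons are plain memcmp's, so index keys shrink and compare faster too.
print(text_len, blob_len)
```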
Charly Caulet <[EMAIL PROTECTED]>
wrote:
I mean that I had a trigger on "INSERT" that executes an update, and I had
some triggers on "UPDATE" that execute other updates... So it was an
infinite loop.
Be aware that SQLite never enters an infinite loop of triggers. It keeps
track of a chain o
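This can be demonstrated directly: with recursive triggers disabled (the default), a trigger whose body updates its own table fires only once. A sketch in Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab (id INTEGER PRIMARY KEY, n INTEGER);
    INSERT INTO tab VALUES (1, 0);

    -- An UPDATE trigger that itself updates the same table: a candidate
    -- for an infinite loop if triggers fired recursively.
    CREATE TRIGGER bump AFTER UPDATE ON tab
    BEGIN
        UPDATE tab SET n = n + 1 WHERE id = new.id;
    END;
""")

# With recursive triggers off (the default), the trigger fires exactly once:
# the outer UPDATE sets n = 5, the trigger bumps it to 6, and the trigger's
# own UPDATE does not re-fire the trigger.
conn.execute("UPDATE tab SET n = 5 WHERE id = 1")
n = conn.execute("SELECT n FROM tab WHERE id = 1").fetchone()[0]
```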
Hey all --
In the application I am working on (Songbird), we have a simple two
table schema representing the tracks in your music collection and the
properties on those tracks. The keys used are all UUIDs (128 bit
number) which we are currently storing in hex string form in a text
column, so the
On 7/11/07 8:55 PM, "Joe Wilson" <[EMAIL PROTECTED]> wrote:
> The problem is schema design.
> Just use a single task table with another key column for the task number.
> Queries on N task tables with UNIONS are killing your performance.
>
But how do I associate processes to different tasks ha
The problem is schema design.
Just use a single task table with another key column for the task number.
Queries on N task tables with UNIONS are killing your performance.
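A sketch of that consolidated schema (column names are assumptions based on the fragments quoted below):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Instead of Task1_table, Task2_table, ... one table with a task_no column.
    CREATE TABLE task (task_id INTEGER PRIMARY KEY, task_no INTEGER, name TEXT);
    CREATE INDEX task_by_no ON task(task_no);
    INSERT INTO task(task_no, name) VALUES (1, 'boot'), (1, 'scan'), (2, 'sync');
""")

# What previously needed a UNION over N tables is now one indexed lookup.
names = [r[0] for r in conn.execute(
    "SELECT name FROM task WHERE task_no = 1 ORDER BY name")]
```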
--- Bharath Booshan L <[EMAIL PROTECTED]> wrote:
> TaskN_table
> | Name | TaskID |
>
> TaskN_info_table
>
> | TaskID REFE
Hi All,
Following are the table structures I am using in one of my application
System_table
| Name | SystemID PRIMARY KEY |
Process_table
| Name | SytemID REFERENCES System_table | ProcessID PRIMARY KEY |
ParentProcessID | IsParentProcess |
Each process represented by UNIQUE ProcessID
On 14 Nov 2006 Alexey Tourbin <[EMAIL PROTECTED]> writes:
Hello,
I've just written sqlite3 loadable module which implements REGEXP
operator with Perl-Compatible Regular Expressions library. It has
LRU cache for compiled regular expressions, so it is probably
fast.
$ cflags=`pkg-config --c
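The PCRE module itself isn't reproduced here, but the hook it relies on is simple: SQLite rewrites X REGEXP Y into a call to a user-defined function regexp(Y, X). The same idea, including a small cache of compiled patterns, can be sketched in Python:

```python
import re
import sqlite3
from functools import lru_cache

@lru_cache(maxsize=128)          # LRU cache of compiled patterns, as above
def _compile(pattern):
    return re.compile(pattern)

def regexp(pattern, value):
    # X REGEXP Y is syntactic sugar for regexp(Y, X) in SQLite.
    return value is not None and _compile(pattern).search(value) is not None

conn = sqlite3.connect(":memory:")
conn.create_function("regexp", 2, regexp)

hit = conn.execute("SELECT 'sqlite-users' REGEXP 'user.'").fetchone()[0]
miss = conn.execute("SELECT 'sqlite-users' REGEXP '^user'").fetchone()[0]
```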
> Try this:
>
> CREATE TRIGGER trig_insert BEFORE INSERT ON tab
> WHEN (new.champ1 LIKE "%-%")
> BEGIN
> insert into tab(champ1) values (strftime("%s", new.champ1));
> select RAISE(IGNORE);
> END;
It works perfectly. Thank you. I had tried a similar solution, but I
didn't know how to st
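For anyone trying this at home, the trigger can be exercised end to end from Python (single-quoted string literals here, since double quotes are identifier quotes in standard SQL; the behavior is otherwise the one described above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab (UniqueID INTEGER PRIMARY KEY, champ1 TIMESTAMP);

    CREATE TRIGGER trig_insert BEFORE INSERT ON tab
    WHEN (new.champ1 LIKE '%-%')
    BEGIN
        INSERT INTO tab(champ1) VALUES (strftime('%s', new.champ1));
        SELECT RAISE(IGNORE);   -- discard the original, unconverted row
    END;
""")

# Inserting a date string leaves one row holding the Unix timestamp instead.
conn.execute("INSERT INTO tab(champ1) VALUES ('2007-07-11')")
stored = conn.execute("SELECT champ1 FROM tab").fetchall()
```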
Joe Wilson <[EMAIL PROTECTED]> wrote:
>
> The following question is purely for curiosity's sake - I don't
> advocate doing this... I just want to get a better understanding
> of the sqlite3 file format.
>
> Let's assume the most trivial case - a table without indexes of
> any kind (implicit or e
Charly Caulet <[EMAIL PROTECTED]>
wrote:
I have a table tab(UniqueID INTEGER PRIMARY KEY, champ1 TIMESTAMP). And
when a date ("YYYY-MM-DD") is inserted in champ1, I would like to convert
it to timestamp.
I would like to modify new.champ1 value in a "BEFORE" trigger, but
everything I tried doesn't
> > >DELETE FROM sqlite_master WHERE type='f';
> >
> > Will all the btree pages orphaned at rootpage for these rows
> > not be recyclable until you run a full VACUUM?
> But if you delete the record for a regular table or index from
> the sqlite_master table using the "writable_schema" backdoor,
>
While the bulk INSERTs are being performed, I have a small shell
script that runs sqlite3 database.db "select count(*) from table" in
loop and sleeps one second between iterations.
Occasionally, sqlite3_step returns an error probably due to
concurrency issues. This is fine.
However, that puzzles
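Those intermittent errors are most likely SQLITE_BUSY: the reader hits the writer's lock and gives up. Setting a busy timeout (sqlite3_busy_timeout() in C) makes the reader wait instead of failing. A sketch of the failure mode in Python, with a zero timeout so the error shows up immediately:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "database.db")

# isolation_level=None: no implicit transactions; we manage BEGIN/COMMIT.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

# The writer holds an exclusive lock, as during a long bulk INSERT.
writer.execute("BEGIN EXCLUSIVE")
writer.execute("INSERT INTO t VALUES (1)")

# A reader with a zero busy timeout fails immediately with SQLITE_BUSY
# ("database is locked") instead of waiting for the lock to clear.
reader = sqlite3.connect(path, timeout=0)
try:
    reader.execute("SELECT count(*) FROM t").fetchone()
    busy = False
except sqlite3.OperationalError:
    busy = True

writer.execute("COMMIT")          # lock released; the retry now succeeds
count = reader.execute("SELECT count(*) FROM t").fetchone()[0]
```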
I have an application that does bulk INSERTS into a table. Simplified
pseudo code is below:
sqlite3 *db;
sqlite3_stmt *stmt;
sqlite3_open("database.db", &db);
/* statement text is illustrative */
sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES (?)", -1, &stmt, NULL);
for (i = 0; i < 10; i++) {
    sqlite3_bind_int(stmt, 1, i);
    sqlite3_step(stmt);
    sqlite3_reset(stmt);
}
sqlite3_finalize(stmt);
sqlite3_close(db);
While the bulk INSERTs are being performed
Hello,
First of all I apologize for my poor English.
I have some problems with SQLite triggers.
I have a table tab(UniqueID INTEGER PRIMARY KEY, champ1 TIMESTAMP). And
when a date ("-MM-DD") is inserted in champ1, I would like to convert
it to timestamp.
I would like to modify new.c