RE: [sqlite] Limit statement size?

2007-01-30 Thread Joe Wilson
--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> I can see now what the trouble is if I do the SELECT without the INSERT OR
> REPLACE (shortened):

Can't suggest anything without seeing the schema for all the tables 
involved and any unique indexes related to those tables.

It should work. Perhaps you're not specifying some columns related to 
a unique index on the table being updated. Maybe there's a bug in REPLACE?
Specifically, what does ".schema A3TestB67_J" return?

I suspect this REPLACE technique with its reduced number of database lookups
ought to be much faster than all those subselects you are using now for 
every column, but if you're happy with the timings with the old way and 
it works, there's no point changing it. 

I am curious why it's not updating the table, though.

> 
> select
> t1.PATIENT_ID,
> g2.ENTRY_ID,
> g2.READ_CODE,
> g2.TERM_TEXT,
> g2.START_DATE,
> g2.ADDED_DATE,
> g2.NUMERIC_VALUE,
> g3.ENTRY_ID,
> g3.READ_CODE,
> g3.TERM_TEXT,
> g3.START_DATE,
> g3.ADDED_DATE,
> g3.NUMERIC_VALUE,
> g4.ENTRY_ID,
> g4.READ_CODE,
> g4.TERM_TEXT,
> g4.START_DATE,
> g4.ADDED_DATE,
> g4.NUMERIC_VALUE
> from
> A3TestB67_J t1,
> GROUP_2 g2,
> GROUP_3 g3,
> GROUP_4 g4
> where
> t1.PATIENT_ID = g2.PID and
> t1.PATIENT_ID = g3.PID and
> t1.PATIENT_ID = g4.PID
> 
> I only get the rows that have entries in all groups. So, this is like an
> inner join and I need a left join.
> Have tried this, but it didn't alter the table, although there was no error:
> 
> INSERT OR REPLACE INTO 
> A3TestB67_J(PATIENT_ID,
> ENTRY_ID_E1,
> READ_CODE_E1,
> TERM_TEXT_E1,
> START_DATE_E1,
> ADDED_DATE_E1,
> NUMERIC_VALUE_E1,
> ENTRY_ID_E2,
> READ_CODE_E2,
> TERM_TEXT_E2,
> START_DATE_E2,
> ADDED_DATE_E2,
> NUMERIC_VALUE_E2,
> ENTRY_ID_E3,
> READ_CODE_E3,
> TERM_TEXT_E3,
> START_DATE_E3,
> ADDED_DATE_E3,
> NUMERIC_VALUE_E3,
> ENTRY_ID_E4,
> READ_CODE_E4,
> TERM_TEXT_E4,
> START_DATE_E4,
> ADDED_DATE_E4,
> NUMERIC_VALUE_E4)
> select
> t1.PATIENT_ID,
> g2.ENTRY_ID,
> g2.READ_CODE,
> g2.TERM_TEXT,
> g2.START_DATE,
> g2.ADDED_DATE,
> g2.NUMERIC_VALUE,
> g3.ENTRY_ID,
> g3.READ_CODE,
> g3.TERM_TEXT,
> g3.START_DATE,
> g3.ADDED_DATE,
> g3.NUMERIC_VALUE,
> g4.ENTRY_ID,
> g4.READ_CODE,
> g4.TERM_TEXT,
> g4.START_DATE,
> g4.ADDED_DATE,
> g4.NUMERIC_VALUE
> from
> A3TestB67_J t1
> left join GROUP_2 g2 on (t1.PATIENT_ID = g2.PID)
> left join GROUP_3 g3 on (t1.PATIENT_ID = g3.PID)
> left join GROUP_4 g4 on (t1.PATIENT_ID = g4.PID)
> 
> My old method is actually quite fast and not sure if I can improve on it.
> 
> RBS
> 
> 
> 
> -Original Message-
> From: Joe Wilson [mailto:[EMAIL PROTECTED] 
> Sent: 30 January 2007 05:53
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Limit statement size?
> 
> --- RB Smissaert <[EMAIL PROTECTED]> wrote:
> > Thanks, that is how I understood it to be.
> > I must be overlooking something simple here.
> 
> Check your SELECT sub-statement within the REPLACE statement to see 
> what rows it returns.
> 
> .header on
> .mode tabs
> 
> create table t1(id primary key, e2_a, e2_b, e3_a, e3_b);
> insert into t1 values(3, 30,31, 23,230);
> insert into t1 values(4, 40,41, 24,240);
> insert into t1 values(5, 50,51, 25,250);
> 
> create table e2(id primary key, a, b);
> insert into e2 values(3, 300, 310);
> insert into e2 values(4, 400, 410);
> insert into e2 values(5, 500, 510);
> 
> create table e3(id primary key, a, b);
> insert into e3 values(3, 23.1, 230.1);
> insert into e3 values(4, 24.1, 240.1);
> insert into e3 values(5, 25.1, 250.1);
> 
> select * from t1 order by id;
> 
> replace into t1(id, e2_a, e2_b, e3_a, e3_b)
>   select t1.id, e2.a, e2.b, e3.a, e3.b
>   from t1, e2, e3 
>   where t1.id = e2.id and t1.id = e3.id;
> 
> select * from t1 order by id;
> 
> id  e2_ae2_be3_ae3_b
> 3   30  31  23  230
> 4   40  41  24  240
> 5   50  51  25  250
> 
> id  e2_ae2_be3_ae3_b
> 3   300 310 23.1230.1
> 4   400 410 24.1240.1
> 5   500 510 25.1250.1
> 
> 
> > 
> > RBS
> > 
> > -Original Message-
> > From: Gerry Snyder [mailto:[EMAIL PROTECTED] 
> > Sent: 29 January 2007 23:52
> > To: sqlite-users@sqlite.org
> > Subject: Re: [sqlite] Limit statement size?
> > 
> > RB Smissaert wrote:
> > > Had a go at this, but so far I haven't yet been able to get it to work.
> > > I get no error, but A3Test115_J remains just at it is.
> > > I couldn't find much information about INSERT OR REPLACE in the SQLite
> > > documentation. What exactly should it do?




 

Need Mail bonding?
Go to the Yahoo! Mail Q&A for great tips from Yahoo! Answers users.
http://answers.yahoo.com/dir/?link=list&sid=396546091

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Can't build 3.3.12 on my Unix

2007-01-30 Thread Joe Wilson
> > Try setting sqlite3_int64 to just "int" and compiling with
> > -DSQLITE_32BIT_ROWID=1.  That might work.
> > --
> > D. Richard Hipp  <[EMAIL PROTECTED]>
>
> "So apparently your compiler cannot cast a double to type xlong_t."
> That's because I did the earlier xlong stuff by hand.  I can try your above
> recommendation, and let you know.
> 
> I should mention 1: That I am running SCO Openserver 5 [popularity
> notwithstanding], and 2: That I had to change the makefile because the vanilla
> make on my system does not like "TCC +=" style entries.
> 
> I probably should ask my question this way: What is the *safe* method for
> 32 bit machines to build Sqlite 3.3x?

Setting sqlite3_int64 to just "int" and compiling with -DSQLITE_32BIT_ROWID=1
will not work on some compilers due to certain bit shifts used in SQLite
that are not valid for 32 bit types.

See:
 http://marc.theaimsgroup.com/?t=11645507631&r=1&w=2

 Ticket 2089: Decouple sqlite_int64 from other 64bit datatypes
 http://www.sqlite.org/cvstrac/tktview?tn=2089

A better idea would be to simply use GCC to build SQLite on your SCO box.

Side note... the patch in the SQLite ticket, which used to work, is now garbage:

 http://www.sqlite.org/cvstrac/attach_get/311/patch-for-int32-support2.txt

Is there some corruption in the SQLite CVSTrac database?



 




RE: [sqlite] Limit statement size?

2007-01-30 Thread Joe Wilson
Your INSERT OR REPLACE statement is in error. 
You have fewer columns in your SELECT clause than are specified 
in your INSERT column name list. You should have seen an error like 
this in SQLite version 3.3.12:

  SQL error: X values for Y columns

Assuming PATIENT_ID is the sole unique key for A3TestB67_J and
your SQL column counts match, the REPLACE should work.
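For what it's worth, the mismatch is easy to reproduce (a sketch using Python's built-in sqlite3 module; the table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, a, b)")

try:
    # Three column names are listed, but the SELECT produces only two values
    con.execute("INSERT OR REPLACE INTO t(id, a, b) SELECT 1, 2")
except sqlite3.OperationalError as e:
    msg = str(e)
    print(msg)  # e.g. "2 values for 3 columns"
```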

--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> Have tried this, but it didn't alter the table, although there was no error:
> 
> INSERT OR REPLACE INTO 
> A3TestB67_J(PATIENT_ID,
> ENTRY_ID_E1,
> READ_CODE_E1,
> TERM_TEXT_E1,
> START_DATE_E1,
> ADDED_DATE_E1,
> NUMERIC_VALUE_E1,
> ENTRY_ID_E2,
> READ_CODE_E2,
> TERM_TEXT_E2,
> START_DATE_E2,
> ADDED_DATE_E2,
> NUMERIC_VALUE_E2,
> ENTRY_ID_E3,
> READ_CODE_E3,
> TERM_TEXT_E3,
> START_DATE_E3,
> ADDED_DATE_E3,
> NUMERIC_VALUE_E3,
> ENTRY_ID_E4,
> READ_CODE_E4,
> TERM_TEXT_E4,
> START_DATE_E4,
> ADDED_DATE_E4,
> NUMERIC_VALUE_E4)
> select
> t1.PATIENT_ID,
> g2.ENTRY_ID,
> g2.READ_CODE,
> g2.TERM_TEXT,
> g2.START_DATE,
> g2.ADDED_DATE,
> g2.NUMERIC_VALUE,
> g3.ENTRY_ID,
> g3.READ_CODE,
> g3.TERM_TEXT,
> g3.START_DATE,
> g3.ADDED_DATE,
> g3.NUMERIC_VALUE,
> g4.ENTRY_ID,
> g4.READ_CODE,
> g4.TERM_TEXT,
> g4.START_DATE,
> g4.ADDED_DATE,
> g4.NUMERIC_VALUE
> from
> A3TestB67_J t1
> left join GROUP_2 g2 on (t1.PATIENT_ID = g2.PID)
> left join GROUP_3 g3 on (t1.PATIENT_ID = g3.PID)
> left join GROUP_4 g4 on (t1.PATIENT_ID = g4.PID)



 




RE: [sqlite] Limit statement size?

2007-01-31 Thread Joe Wilson
--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> There is one important problem though that I just discovered.
> Just found out that the maximum number of tables in a join is 32!
> So, with my base table that is only 31 to add.

Let's do some grepping...

  #define BMS  (sizeof(Bitmask)*8)
 ...
  /* The number of tables in the FROM clause is limited by the number of
  ** bits in a Bitmask
  */
  if( pTabList->nSrc>BMS ){
sqlite3ErrorMsg(pParse, "at most %d tables in a join", BMS);
return 0;
  }
 ...

You could try changing src/sqliteInt.h:

 -typedef unsigned int Bitmask;
 +typedef u64 Bitmask;

and then recompiling sqlite. If all goes well, you should be able to
join up to 64 tables. Never tried it. It might work, or might not.
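As a sanity check, in SQLite builds whose Bitmask is 64 bits wide the limit really is 64, which is easy to see (a sketch using Python's built-in sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")

def join_n(n):
    # cross-join the same one-column table n times
    return con.execute("SELECT 1 FROM " +
                       ", ".join("t AS t%d" % i for i in range(n))).fetchall()

join_n(64)            # fine with a 64-bit Bitmask
try:
    join_n(65)
except sqlite3.OperationalError as e:
    msg = str(e)
    print(msg)        # e.g. "at most 64 tables in a join"
```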

Alternatively, you can either perform 2 consecutive REPLACE commands
with half the tables in each update (less efficient), or just do a single 
REPLACE command with a SELECT on 2 or more subqueries on sub-sets of 
the tables (more efficient).
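A rough sketch of the two-pass approach (Python's sqlite3 module; base, group_2 and group_3 are made-up stand-ins for your tables — the key point is that each pass re-selects the columns filled by the other pass from the base table itself, so they are not reset to NULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE base(pid INTEGER PRIMARY KEY, v2, v3);
INSERT INTO base(pid) VALUES (1), (2);
CREATE TABLE group_2(pid, val);
INSERT INTO group_2 VALUES (1, 'a'), (2, 'b');
CREATE TABLE group_3(pid, val);
INSERT INTO group_3 VALUES (1, 'x');
""")

# Pass 1: fill v2 from group_2; v3 is re-selected from base (still NULL here)
con.execute("""
    INSERT OR REPLACE INTO base(pid, v2, v3)
    SELECT t.pid, g2.val, t.v3
    FROM base t LEFT JOIN group_2 g2 ON t.pid = g2.pid
""")

# Pass 2: fill v3 from group_3; v2 keeps its pass-1 value
con.execute("""
    INSERT OR REPLACE INTO base(pid, v2, v3)
    SELECT t.pid, t.v2, g3.val
    FROM base t LEFT JOIN group_3 g3 ON t.pid = g3.pid
""")

rows = con.execute("SELECT * FROM base ORDER BY pid").fetchall()
print(rows)  # [(1, 'a', 'x'), (2, 'b', None)]
```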

> Actually make that about 5 to 6 times as fast.

This stands to reason since you're only doing a single lookup per
sub-table instead of the 6 lookups per sub-table you did with the 
UPDATE command.



 




RE: [sqlite] Limit statement size?

2007-01-31 Thread Joe Wilson
> I wonder what the reason was to limit the number of table joins to 32.

http://www.sqlite.org/cvstrac/chngview?cn=3622



 




Re: [sqlite] Another sybase conversion question

2007-02-02 Thread Joe Wilson
--- "Anderson, James H (IT)" <[EMAIL PROTECTED]> wrote:
> The following works fine in sybase, but because in sqlite "update" does
> not support "from" it doesn't work. Is there another way of doing this?
> 
> Thanks,
> 
> Jim
> 
> update C1_credDerivEvent
>set CDEvent = a.CDEvent || ',' || b.CDEvent
>   from C1_credDerivEvent a,
>C1_tmp_credDerivEvent b,
>tmp_events c
>  where a.CDId = b.CDId
>and b.CDApplicable = 'Yes'
>and b.CDEvent  = c.CDEvent;

http://www.sqlite.org/lang_replace.html



 




Re: [sqlite] OR, IN: which is faster?

2007-02-03 Thread Joe Wilson
--- chueng alex1985 <[EMAIL PROTECTED]> wrote:
> I don't think so. If the field has been indexed, searching speed will be
> improved very much. The query "f1 = value1 OR f1 = value2 OR f1=value3 OR
> ..." will be faster if the field f1 has been indexed. On the other hand, the
> clause "f1 in (value1, value2, value3, ...)" seems to be slower than 'OR'
> clause because i think it will get the all value of 'f1' and check whether
> the f1's value is in (value1, value2, value3, ...). If the table has 100K
> records, it will compare 100K times, in contrast, the OR clause only need
> compare few times because of the index when the number of values in (value1,
> value2, value3, ...) list is not too big.

As drh pointed out, if you run the following in sqlite 3.3.12 you'll see 
both the IN and the OR queries produce the exact same instructions, 
resulting in the same query speed if an index is used:

 CREATE TABLE abc(a,b,c);
 CREATE INDEX abc_c on abc(c);
 explain select * from abc where c in (11,22,33);
 explain select * from abc where c=11 or c=22 or c=33;

(same output for both)
0|Goto|0|41|
1|Integer|0|0|
2|OpenRead|0|2|
3|SetNumColumns|0|3|
4|Integer|0|0|
5|OpenRead|1|3|keyinfo(1,BINARY)
6|MemLoad|2|0|
7|If|0|20|
8|MemInt|1|2|
9|OpenEphemeral|2|0|keyinfo(1,BINARY)
10|SetNumColumns|2|1|
11|Integer|11|0|
12|MakeRecord|1|0|b
13|IdxInsert|2|0|
14|Integer|22|0|
15|MakeRecord|1|0|b
16|IdxInsert|2|0|
17|Integer|33|0|
18|MakeRecord|1|0|b
19|IdxInsert|2|0|
20|Rewind|2|38|
21|Column|2|0|
22|IsNull|-1|37|
23|MemStore|1|1|
24|MemLoad|1|0|
25|MakeRecord|1|0|b
26|MemStore|0|0|
27|MoveGe|1|37|
28|MemLoad|0|0|
29|IdxGE|1|37|+
30|IdxRowid|1|0|
31|MoveGe|0|0|
32|Column|0|0|
33|Column|0|1|
34|Column|0|2|
35|Callback|3|0|
36|Next|1|28|
37|Next|2|21|
38|Close|0|0|
39|Close|1|0|
40|Halt|0|0|
41|Transaction|0|0|
42|VerifyCookie|0|2|
43|Goto|0|1|
44|Noop|0|0|

If you drop the index abc_c, you will see the behavior you mentioned.
The OR query is on the left, and the IN query is on the right:

0|Goto|0|21|  | 0|Goto|0|36|
1|Integer|0|0|  1|Integer|0|0|
2|OpenRead|0|2| 2|OpenRead|0|2|
3|SetNumColumns|0|3|3|SetNumColumns|0|3|
4|Rewind|0|19|| 4|Rewind|0|34|
5|Column|0|2| | 5|MemLoad|0|0|
6|Integer|11|0|   | 6|If|0|19|
7|Eq|98|14|collseq(BINARY)| 7|MemInt|1|0|
8|Column|0|2| | 8|OpenEphemeral|1|0|keyinfo(1,BINARY)
9|Integer|22|0|   | 9|SetNumColumns|1|1|
10|Eq|98|14|collseq(BINARY)   | 10|Integer|11|0|
11|Column|0|2|| 11|MakeRecord|1|0|b
12|Integer|33|0|  | 12|IdxInsert|1|0|
13|Ne|354|18|collseq(BINARY)  | 13|Integer|22|0|
14|Column|0|0|| 14|MakeRecord|1|0|b
15|Column|0|1|| 15|IdxInsert|1|0|
16|Column|0|2|| 16|Integer|33|0|
17|Callback|3|0|  | 17|MakeRecord|1|0|b
18|Next|0|5|  | 18|IdxInsert|1|0|
19|Close|0|0| | 19|Integer|1|0|
20|Halt|0|0|  | 20|Column|0|2|
21|Transaction|0|0|   | 21|NotNull|-1|25|
22|VerifyCookie|0|3|  | 22|Pop|2|0|
23|Goto|0|1|  | 23|Null|0|0|
24|Noop|0|0|  | 24|Goto|0|28|
  > 25|MakeRecord|1|0|b
  > 26|Found|1|28|
  > 27|AddImm|-1|0|
  > 28|IfNot|1|33|
  > 29|Column|0|0|
  > 30|Column|0|1|
  > 31|Column|0|2|
  > 32|Callback|3|0|
  > 33|Next|0|5|
  > 34|Close|0|0|
  > 35|Halt|0|0|
  > 36|Transaction|0|0|
  > 37|VerifyCookie|0|3|
  > 38|Goto|0|1|
  > 39|Noop|0|0|

You can mimic the index-less OR behavior on the column (even in the presence
of such an index) by putting a plus before each column in the where clause,
thus disqualifying the column from using an index:

 select * from abc where +c=11 or +c=22 or +c=33

This "OR" query table scan might be faster than using an index in cases 
where you know that you will be selecting the majority of the rows in 
the table.
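The effect of the unary plus is easy to confirm with EXPLAIN QUERY PLAN (a sketch using Python's sqlite3 module; the exact plan wording varies between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE abc(a, b, c);
CREATE INDEX abc_c ON abc(c);
""")

def plan(sql):
    # concatenate the "detail" column of the query plan rows
    return " ".join(row[-1] for row in
                    con.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM abc WHERE c = 11"))   # searches index abc_c
print(plan("SELECT * FROM abc WHERE +c = 11"))  # full table scan
```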

> 
> 2007/2/1, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:
> >
> > Ion Silvestru <[EMAIL PROTECTED]> wrote:
> > > If we have a query where we compare a column to a set of values, then
> > > which is faster: OR or IN?
> > > Ex: OR: (mycol = "a") OR (mycol = "b") OR (mycol = "c") OR...
> > > IN: (mycol IN "a", "b", "c" 

RE: [sqlite] Another sybase conversion question

2007-02-03 Thread Joe Wilson
It will work fine, just remember to specify every column in the
table being updated (aka "REPLACEd INTO") or they will contain NULL.

It would be nice if SQLite featured an "INSERT OR MERGE" or "MERGE INTO"
command that would not require specifying all the columns. i.e., grab
the old row's values for the columns not specified to act more like
an update. For a 5 column table it's not a big deal, but when you wish
to REPLACE INTO a table with 50 columns, the SQL gets a bit unwieldy.
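In the meantime, the old row's values can be pulled in by hand with a correlated subselect (a sketch using Python's sqlite3 module; the table t and its columns are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, a, b)")
con.execute("INSERT INTO t VALUES (1, 'old_a', 'old_b')")

# "Merge" a new value for column a only; b is copied from the existing row
# by the subselect, which is evaluated before the old row is deleted.
con.execute("""
    INSERT OR REPLACE INTO t(id, a, b)
    VALUES (1, 'new_a', (SELECT b FROM t WHERE id = 1))
""")
rows = con.execute("SELECT * FROM t").fetchall()
print(rows)  # [(1, 'new_a', 'old_b')]
```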

Happy credit derivativing.

--- "Anderson, James H (IT)" <[EMAIL PROTECTED]> wrote:
> Thanks, Joe! That looks like it might be the solution. 
> 
> -Original Message-
> From: Joe Wilson [mailto:[EMAIL PROTECTED] 
> Sent: Friday, February 02, 2007 9:58 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Another sybase conversion question
> 
> --- "Anderson, James H (IT)" <[EMAIL PROTECTED]> wrote:
> > The following works fine in sybase, but because in sqlite "update"
> does
> > not support "from" it doesn't work. Is there another way of doing
> this?
> > 
> > Thanks,
> > 
> > Jim
> > 
> > update C1_credDerivEvent
> >set CDEvent = a.CDEvent || ',' || b.CDEvent
> >   from C1_credDerivEvent a,
> >C1_tmp_credDerivEvent b,
> >tmp_events c
> >  where a.CDId = b.CDId
> >and b.CDApplicable = 'Yes'
> >and b.CDEvent  = c.CDEvent;
> 
> http://www.sqlite.org/lang_replace.html


 




[sqlite] UPDATE OR REPLACE same as UPDATE?

2007-02-03 Thread Joe Wilson
Is UPDATE OR REPLACE always equivalent to just UPDATE?


 




Re: [sqlite] UPDATE OR REPLACE same as UPDATE?

2007-02-03 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > Is UPDATE OR REPLACE always equivalent to just UPDATE?
> 
> No.  UPDATE is the same as UPDATE OR ABORT.  Try replacing
> the UPDATE OR REPLACE in the following script with just
> UPDATE to see the difference:
> 
>CREATE TABLE t1(x UNIQUE, y);
>INSERT INTO t1 VALUES(1,2);
>INSERT INTO t1 VALUES(3,4);
>
>UPDATE OR REPLACE t1 SET x=3 WHERE y=2;
>SELECT * FROM t1;

Thanks. That's quite useful, actually. 
I used to do a DELETE followed by an INSERT in this situation.
The UPDATE OR REPLACE construct is more efficient.
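For anyone curious, the difference is easy to see from Python's sqlite3 module (a sketch of drh's script above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1(x UNIQUE, y);
INSERT INTO t1 VALUES (1, 2);
INSERT INTO t1 VALUES (3, 4);
""")

# Plain UPDATE is UPDATE OR ABORT: it trips over the UNIQUE constraint
try:
    con.execute("UPDATE t1 SET x = 3 WHERE y = 2")
except sqlite3.IntegrityError as e:
    msg = str(e)
    print(msg)  # a UNIQUE constraint violation

# UPDATE OR REPLACE deletes the conflicting row (3, 4) instead
con.execute("UPDATE OR REPLACE t1 SET x = 3 WHERE y = 2")
rows = con.execute("SELECT * FROM t1").fetchall()
print(rows)  # [(3, 2)]
```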

Does anyone know whether UPDATE OR REPLACE is portable to any other 
popular database? (Oracle, SQL Server, MySQL, Postgres)
REPLACE() seems to be a string function in other databases.


 




Re: [sqlite] two process problem

2007-02-04 Thread Joe Wilson
--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> On 2/3/07, Tom Shaw <[EMAIL PROTECTED]> wrote:
> > SQLSTATE[HY000]: General error: 1 SQL logic error or missing database
> > and
> > SQLSTATE[HY000]: General error: 8 attempt to write a readonly database
> 
> Tom what wrapper are you using to access sqlite?  These messages are not
> from sqlite itself, but your wrapper. It may have restrictions that sqlite
> does not.

$ grep 'SQL logic error or missing database' */*.c
src/main.c:case SQLITE_ERROR:  z = "SQL logic error or missing 
database";   break;
$ grep 'attempt to write a readonly database' */*.c
src/main.c:case SQLITE_READONLY:   z = "attempt to write a readonly 
database";  break;

He got this from sqlite3_errmsg() which in turn calls sqlite3ErrStr():

/*
** Return UTF-8 encoded English language explanation of the most recent
** error.
*/
const char *sqlite3_errmsg(sqlite3 *db){
  const char *z;
  if( !db || sqlite3MallocFailed() ){
return sqlite3ErrStr(SQLITE_NOMEM);
  }
  if( sqlite3SafetyCheck(db) || db->errCode==SQLITE_MISUSE ){
return sqlite3ErrStr(SQLITE_MISUSE);
  }
  z = (char*)sqlite3_value_text(db->pErr);
  if( z==0 ){
z = sqlite3ErrStr(db->errCode);
  }
  return z;
}

...

/*
** Return a static string that describes the kind of error specified in the
** argument.
*/
const char *sqlite3ErrStr(int rc){
  const char *z;
  switch( rc & 0xff ){
case SQLITE_ROW:
case SQLITE_DONE:
case SQLITE_OK: z = "not an error";  break;
case SQLITE_ERROR:  z = "SQL logic error or missing database";   break;
case SQLITE_PERM:   z = "access permission denied";  break;
case SQLITE_ABORT:  z = "callback requested query abort";break;
case SQLITE_BUSY:   z = "database is locked";break;
case SQLITE_LOCKED: z = "database table is locked";  break;
case SQLITE_NOMEM:  z = "out of memory"; break;
case SQLITE_READONLY:   z = "attempt to write a readonly database";  break;
case SQLITE_INTERRUPT:  z = "interrupted";   break;
case SQLITE_IOERR:  z = "disk I/O error";break;
case SQLITE_CORRUPT:z = "database disk image is malformed";  break;
case SQLITE_FULL:   z = "database or disk is full";  break;
case SQLITE_CANTOPEN:   z = "unable to open database file";  break;
case SQLITE_PROTOCOL:   z = "database locking protocol failure"; break;
case SQLITE_EMPTY:  z = "table contains no data";break;
case SQLITE_SCHEMA: z = "database schema has changed";   break;
case SQLITE_CONSTRAINT: z = "constraint failed"; break;
case SQLITE_MISMATCH:   z = "datatype mismatch"; break;
case SQLITE_MISUSE: z = "library routine called out of sequence";break;
case SQLITE_NOLFS:  z = "kernel lacks large file support";   break;
case SQLITE_AUTH:   z = "authorization denied";  break;
case SQLITE_FORMAT: z = "auxiliary database format error";   break;
case SQLITE_RANGE:  z = "bind or column index out of range"; break;
case SQLITE_NOTADB: z = "file is encrypted or is not a database";break;
default:z = "unknown error"; break;
  }
  return z;
}
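For example, a write attempt on a read-only database surfaces exactly that SQLITE_READONLY string (a sketch using Python's sqlite3 module and a mode=ro URI):

```python
import os
import sqlite3
import tempfile

# Create a small database file, then reopen it read-only via a URI
path = os.path.join(tempfile.mkdtemp(), "ro_demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t(x)")
rw.commit()
rw.close()

ro = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
try:
    ro.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError as e:
    msg = str(e)
    print(msg)  # mentions a readonly database
```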



 




Re: [sqlite] How to keep syncronized between two DB on different severs?

2007-02-05 Thread Joe Wilson
http://www.mail-archive.com/sqlite-users@sqlite.org/msg19628.html

--- Alex Cheng <[EMAIL PROTECTED]> wrote:
> I encountered a problem. I have two servers (A and B) which host two same
> DBs, each of them may be modified. If the DB has been modified on server A,
> the DB on server B should also keep synchronized with server A. If the DB on
> server B has been modified, the DB on server A should also be modified
> automatically. How do I implement it? Is there any mechanism in Sqlite to do
> it?


 




Re: [sqlite] Re: UNIQUE constraint on column

2007-02-05 Thread Joe Wilson

--- Shane Harrelson <[EMAIL PROTECTED]> wrote:

> On 2/5/07, Dennis Cote <[EMAIL PROTECTED]> wrote:
> > Shane Harrelson wrote:
> > > Perform 50 transactions of 1000 inserts each (50,000 total).
> > >
> >
> > Shane,
> >
> > Is this your normal usage pattern? Inserting records in blocks of around
> > 1000 per transaction. Or would you be more likely to insert using 1000
> > transactions of 50 records, or perhaps only a single record per transaction?
> >
> > Dennis Cote
> >
> 
> Yes, it's typical.   Each database instance is typically composed of
> around 50k records, all inserted in a single pass.  If I could do
> larger transactions (or not do them at all) I would, for if I
> encounter an error I have to discard all records - my application is
> extremely non-fault tolerant.
> 
> Is there any way to disable journaling completely?  I'm not certain
> that for my application it gains me anything.

50k records is nothing. Just sort it entirely in memory outside of the 
database and blast in the results via just plain inserts in the two tables 
in sorted order so sqlite will always perform a table append and not
have to shuffle the pages around. It will be significantly faster if you 
handle the conflicts yourself in your code in memory using STL or a 
conventional data structure than using a general purpose database such 
as SQLite.


 




Re: [sqlite] database is locked (my solution)

2007-02-05 Thread Joe Wilson
--- Andrew Teirney <[EMAIL PROTECTED]> wrote:
> If you are interested in the "BEGIN SHARED" transaction i posted a 
> simple patch to this mailing list within the last month if i recall 
> correctly.

 http://www.mail-archive.com/sqlite-users@sqlite.org/msg21260.html

Do you find that using this BEGIN SHARED patch/technique improves 
typical throughput to the database?


 




Re: [sqlite] Re: UNIQUE constraint on column

2007-02-05 Thread Joe Wilson
--- Shane Harrelson <[EMAIL PROTECTED]> wrote:
> On 2/5/07, Joe Wilson <[EMAIL PROTECTED]> wrote:
> > > Yes, it's typical.   Each database instance is typically composed of
> > > around 50k records, all inserted in a single pass.  If I could do
> > > larger transactions (or not do them at all) I would, for if I
> > > encounter an error I have to discard all records - my application is
> > > extremely non-fault tolerant.
> > >
> > > Is there any way to disable journaling completely?  I'm not certain
> > > that for my application it gains me anything.
> >
> > 50k records is nothing. Just sort it entirely in memory outside of the
> > database and blast in the results via just plain inserts in the two tables
> > in sorted order so sqlite will always perform a table append and not
> > have to shuffle the pages around. It will be significantly faster if you
> > handle the conflicts yourself in your code in memory using STL or a
> > conventional data structure than using a general purpose database such
> > as SQLite.
> >
> >
> 
> I'm working in an embedded environment, I have a total of 64mb of RAM
> for everything - O/S, file system, application usage, etc., and my
> table structure is more complicated (and larger) than the examples
> I've given.  We've considered rolling our own data storage format, but
> wanted to explore the flexibility and ease of use of SQLite.   So far,
> it's been a star.  Kudos to DRH, et al., on such a nice product.

Fair enough. I wasn't aware of your constraints and how much time
you wanted to dedicate to optimization. I thought you were looking for 
a 10X speedup as opposed to a 1.3X speedup.

In any event, pre-sorting data in primary key order prior to insert 
is always a big win in SQLite. You might consider doing that, even if
in small batches of rows (a thousand at a time). I generally get a 2X
speedup with this simple trick.
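The trick is nothing more than sorting each batch by the primary key before handing it to the insert loop (a sketch with Python's sqlite3 module; actual timings will obviously vary):

```python
import random
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, payload)")

rows = [(i, "x" * 50) for i in random.sample(range(100000), 50000)]

# Sorting by primary key means every insert appends to the table b-tree
# instead of splitting pages at random positions.
rows.sort(key=lambda r: r[0])

with con:  # one transaction for the whole batch
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)

n = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # 50000
```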


 




Re: [sqlite] Re: UNIQUE constraint on column

2007-02-06 Thread Joe Wilson
> Would there be any benefit to "pre-sorting" in this scenario?

somewhat related discussion:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg19111.html



 




[sqlite] SQLite CVSTrac ticket attachments prior to CVSTrac upgrade?

2007-02-06 Thread Joe Wilson
Are the corrupted attachments prior to the CVSTrac upgrade recoverable
into the current SQLite Trac system?

  http://www.sqlite.org/cvstrac/tktview?tn=2208

If not, could you please post the old Trac attachment table in a 
compressed database on sqlite.org?

thanks.


 




Re: [sqlite] Performance problems potentially related to index scalability

2007-02-09 Thread Joe Wilson
> I understand that I'm experiencing thrashing, but I'm curious as to why I'm
> experiencing it. My understanding is that the row_id PRIMARY KEY column has
> an index on it by default. Inserting into this index doesn't cause
> thrashing, but inserting into the secondary index does. I hoped that
> enabling the shared cache mode and periodically doing the select operation
> on it would keep it in SQLITE's cache. Based on what I saw from I/O stat,
> top, and my timing measurements, that doesn't seem to be the case. In fact,
> my cache "priming" experiment showed that it didn't help at all,  despite a
> cache size of 500 MB and the index being on a column with an INTEGER pragma.
> The transaction immediately following the cache priming would be just as
> slow as the transactions far away from it. Why would a transaction inserting
> 1024 rows of  0.5 KB each result in the eviction of that index, despite the
> abundance of cache?

The OS disk cache is not effective because the index fragments are currently 
scattered across all pages of the table it is indexing. Even VACUUM (as it 
is currently written in sqlite/src/vacuum.c) won't help such indexes be 
contiguous in the file.

The following patch to VACUUM allows pages of non-inline indexes explicitly 
made with CREATE INDEX to be contiguous in the file, which can improve VACUUM
times and non-inline index locality of reference somewhat:

 http://www.sqlite.org/cvstrac/tktview?tn=2075

It works by moving the index creation after the table row inserts (as 
one might do manually) so the index pages are not intermixed with table 
pages.
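The same effect can be had by hand when bulk-loading: create the secondary index only after the rows are in place (a sketch with Python's sqlite3 module; the names are made up, and you'd want a real on-disk file rather than :memory: to see the layout benefit):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log(id INTEGER PRIMARY KEY, tag, msg)")

rows = [(i, "tag%d" % (i % 10), "message") for i in range(10000)]
with con:
    con.executemany("INSERT INTO log VALUES (?, ?, ?)", rows)

# Building the index in one pass after the load keeps its pages together
# instead of interleaved with freshly grown table pages.
con.execute("CREATE INDEX log_tag ON log(tag)")

n = con.execute("SELECT count(*) FROM log WHERE tag = 'tag3'").fetchone()[0]
print(n)  # 1000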

Unfortunately this patch will not aid inline indexes made within the 
CREATE TABLE statement (i.e., UNIQUE). Nor will this patch help the 
situation when rows are being first inserted into a table (prior to 
VACUUM). So it won't help table population speed at all.

> The solution you propose doesn't work when your table has multiple indices
> of this type. One can imagine altering the schema as to store each indexed
> column in a separate table, with a column containing the rowid to the main
> table, but that would result in a significant amount of redundant overhead.

Don't discount this workaround without first doing timings and database 
size experiments.

If SQLite could be persuaded to reserve contiguous blocks of pages in advance 
for exclusive use by each index, insert times could improve for tables with 
more than one index. The database file would be larger while the tables are 
being populated, but it could be made to shrink to a smaller size with an 
explicit VACUUM. I think this time/space trade-off would be acceptable to most 
users. This could be controlled via a PRAGMA if you want the classic paging
behavior.


 




Re: [sqlite] ANN: SQLiteODBC Driver 0.72

2007-02-10 Thread Joe Wilson
Nice job on the new driver.

The new DSN parameter LoadExt=module1.dll,module2.dll for dynamically 
loadable sqlite extension modules is particularly useful.

The SQLite+TCC extension is very cool.

Thanks.

--- Christian Werner <[EMAIL PROTECTED]> wrote:
> Version 0.72 of the SQLiteODBC Driver is ready for
> download from http://www.ch-werner.de/sqliteodbc
> 
> It now supports loadable extensions as of SQLite >= 3.3.7
> in the form of a DSN option.
> The one-click Win32 installer is based on SQLite 3.3.12
> and contains the FTS1/FTS2 extensions and the experimental
> SQLite+TCC from www.sqlite.org/contrib 
> 
> Enjoy,
> 
> Christian



 




Re: [sqlite] sqlite3 with cygwin - bash & rxvt-bash interaction

2007-02-10 Thread Joe Wilson
--- David M X Green <[EMAIL PROTECTED]> wrote:
> I'm trying sqlite3 run on WinXP by console using 
> (1) cygwin bash & 
> (2) bash run on rxvt. 
> I find sqlite3 -interactive does not work well with .help.
> With just bash only part of the help printed to the screen; though the second 
> attempt seemed to
> provide the complete listing.
> With rxvt-bash all the help text appears but only with some difficulty such 
> as three .help's in
> succession at the command line to get any result at all. Another way of 
> obtaining the .help text
> is to .quit after .help, at which point the help text appears.
> Other communication from sqlite appears to work, eg .show .schema & select, 
> though I have not
> tested it thoroughly. I have not changed the stdout sqlite3 option, which is 
> set to stdout as
> shown by .show.
> Why is it that .help in particular does not work properly?
> 
> Using the windows command-line console things appear to work fine.
> The rxvt window  is preferable to the Windows command line so it would be 
> useful to use it
> reliably if possible. However I see that rxvt does not work 
> interactively with a simple
> program compiled from c, which does a lot worse than the sqlite3.exe.
> 
> It seems that rxvt is capable of working with sqlite3.exe but just with some 
> remaining problems.
> 
> Any way of fixing these issues?
> Thank you
> David M X Green

The sqlite3.exe file on sqlite.org was compiled with MinGW (a Linux-to-Windows
MinGW cross-compiler, to be precise) and is not a proper Cygwin executable.
As a result, it has no notion of isatty() to tell whether it is run
from a terminal, and its stdout/stderr buffering is screwed up when it is
not run from a Windows console window.

You can run it from rxvt like this with recent versions of sqlite3:

  sqlite3 -interactive foo.db

Or, even better, just build it from scratch so it will work with readline
(up and down arrow keys to get previous commands).

  wget http://sqlite.org/sqlite-3.3.12.tar.gz
  tar xzvf sqlite-3.3.12.tar.gz
  cd sqlite-3.3.12
  ./configure && make sqlite3.exe
  ./sqlite3.exe

If I did not make a typo, it should work.


 




Re: [sqlite] ANN: SQLiteODBC Driver 0.72

2007-02-10 Thread Joe Wilson
Hi Christian,

I'm not sure what this patch to SQLite 3.3.12 does.
Is it fixing a bug or extending SQLite for use in your ODBC driver?
Should a ticket be created for it in SQLite CVSTrac?

thanks.

# patch: parse foreign key constraints on virtual tables
patch -d sqlite3 -p1 <<'EOD'
diff -ur sqlite3.orig/src/build.c sqlite3/src/build.c
--- sqlite3.orig/src/build.c2007-01-09 14:53:04.0 +0100
+++ sqlite3/src/build.c 2007-01-30 08:14:41.0 +0100
@@ -2063,7 +2063,7 @@
   char *z;

   assert( pTo!=0 );
-  if( p==0 || pParse->nErr || IN_DECLARE_VTAB ) goto fk_end;
+  if( p==0 || pParse->nErr ) goto fk_end;
   if( pFromCol==0 ){
 int iCol = p->nCol-1;
 if( iCol<0 ) goto fk_end;
diff -ur sqlite3.orig/src/pragma.c sqlite3/src/pragma.c
--- sqlite3.orig/src/pragma.c   2007-01-27 03:24:56.0 +0100
+++ sqlite3/src/pragma.c2007-01-30 09:19:30.0 +0100
@@ -589,6 +589,9 @@
 pTab = sqlite3FindTable(db, zRight, zDb);
 if( pTab ){
   v = sqlite3GetVdbe(pParse);
+#ifndef SQLITE_OMIT_VIRTUAL_TABLE
+  if( pTab->isVirtual ) sqlite3ViewGetColumnNames(pParse, pTab);
+#endif
   pFK = pTab->pFKey;
   if( pFK ){
 int i = 0;
diff -ur sqlite3.orig/src/vtab.c sqlite3/src/vtab.c
--- sqlite3.orig/src/vtab.c 2007-01-09 15:01:14.0 +0100
+++ sqlite3/src/vtab.c  2007-01-30 08:23:22.0 +0100
@@ -436,6 +436,9 @@
   int rc = SQLITE_OK;
   Table *pTab = db->pVTab;
   char *zErr = 0;
+#ifndef SQLITE_OMIT_FOREIGN_KEYS
+  FKey *pFKey;
+#endif

   if( !pTab ){
 sqlite3Error(db, SQLITE_MISUSE, 0);
@@ -464,6 +467,15 @@
   }
   sParse.declareVtab = 0;

+#ifndef SQLITE_OMIT_FOREIGN_KEYS
+  assert( pTab->pFKey==0 );
+  pTab->pFKey = sParse.pNewTable->pFKey;
+  sParse.pNewTable->pFKey = 0;
+  for(pFKey=pTab->pFKey; pFKey; pFKey=pFKey->pNextFrom){
+pFKey->pFrom=pTab;
+  }
+#endif
+
   sqlite3_finalize((sqlite3_stmt*)sParse.pVdbe);
   sqlite3DeleteTable(0, sParse.pNewTable);
   sParse.pNewTable = 0;
EOD



 




Re: [sqlite] [patch] cleanup cross-compiling logic

2007-02-16 Thread Joe Wilson
The sqlite autoconf stuff has been neglected for a while...

 http://marc.10east.com/?l=sqlite-users&m=116760371614235&w=2

Consider making a ticket for this issue and mentioning it in this autoconf meta 
ticket: 

 http://www.sqlite.org/cvstrac/tktview?tn=2133

--- Mike Frysinger <[EMAIL PROTECTED]> wrote:
> for some reason the current configure script tries to implement its own 
> cross-compiling logic which ends up being pretty fragile ... the attached 
> patch punts pretty much all of it in favor of the standard method that just 
> about every other autoconf-based project uses
> 
> there is one more small change required that i didnt include because it'll 
> conflict with the previous readline patch i sent out:
...


 




[sqlite] new algorithm for handling INSERT, and .import patch

2007-02-18 Thread Joe Wilson
It does not appear that the recent INSERT checkin speeds up bulk inserts
on already populated tables with many indexes:

  [3643] Add a new algorithm for handling INSERT 
 which reduces fragmentation on a VACUUM

  http://www.sqlite.org/cvstrac/chngview?cn=3643
  http://www.sqlite.org/cvstrac/tktview?tn=2075

Regarding the newly optimized code for the statement:

  INSERT INTO table1 SELECT * FROM table2;

I created a patch (attached) in the hope of speeding up the sqlite3
shell .import command using the same INSERT INTO construct.
It stages rows in an exact schema copy of the target table and its
indexes (with code borrowed from vacuum.c) inside an attached :memory:
database, then bulk-inserts the staged data into the real table, batch
by batch. The supplied patch stages rows in 10,000-row batches.

Unfortunately, its results are not very promising.

Although checkin [3643] greatly improves VACUUM speed and index locality 
after a VACUUM, it does not help in situations where you're trying to
insert into already populated large tables with numerous indexes.
In such cases, simply using a high value for cache_size

  pragma cache_size=20;

yields substantially faster .import times than the supplied patch's
.import with a default cache_size.
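The pragma in question is set and read back per connection; a sketch using Python's sqlite3 module (the value below is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Raise the page cache before a bulk load. The setting is per-connection;
# the value here is just an example, not a recommendation.
conn.execute("PRAGMA cache_size=200000")
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)
```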

Only if you increase the patch's IMPORT_BATCH_SIZE value to a number greater
than the number of rows being imported (say 10) - and the table being
imported into is empty prior to .import - do you see any speed
improvement.  Even with these ideal conditions the patched .import is still 
2 times slower than the non-patched .import command with a very high pragma 
cache_size.

The test scenario is below. 59,049 rows are to be .imported into table foo
which has numerous indexes. Try it with the latest CVS sqlite3, as well
as the latest CVS plus the .import patch.

 .import patch  cache_size IMPORT_BATCH_SIZE  time (s)
 -  -- -  
 no   2000   n/a  1201
 no 20   n/a22
 yes  2000 1  1665
 yes  2000 3   966
 yes20 378
 yes  20001051

Perhaps SQLite insert speed could benefit from having blocks of contiguous
pages for exclusive use by each index. There seems to be a great deal
of disk activity when the cache_size is low.

#!/bin/bash

rm -f foo.db t59049.csv

sqlite3 foo.db separator);
@@ -1124,9 +1162,30 @@ static int do_meta_command(char *zLine, 
 }
 sqlite3_finalize(pStmt);
 if( nCol==0 ) return 0;
-zSql = malloc( nByte + 20 + nCol*2 );
-if( zSql==0 ) return 0;
-sqlite3_snprintf(nByte+20, zSql, "INSERT INTO '%q' VALUES(?", zTable);
+
+rc = execSql(p->db, "attach database ':memory:' as import_db");
+zSql = sqlite3_mprintf(
+"SELECT 'CREATE TABLE import_db.' || substr(sql,14,1) "
+"FROM sqlite_master WHERE tbl_name='%q' AND type='table' "
+"UNION ALL "
+"SELECT 'CREATE INDEX import_db.' || substr(sql

Re: [sqlite] What is wrong with this SELECT CASE statement?

2007-02-18 Thread Joe Wilson
--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> Trying to update my mmdd integers to months with a SELECT CASE
> statement:
> 
> SELECT CASE
> (CAST(DATE_OF_BIRTH / 100 AS INTEGER) - CAST(DATE_OF_BIRTH / 1 AS
> INTEGER) * 100)
> WHEN 1 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'January'
> WHEN 2 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'February'
> WHEN 3 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'March'
> WHEN 4 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'April'
> WHEN 5 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'May'
> WHEN 6 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'June'
> WHEN 7 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'July'
> WHEN 8 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'August' 
> WHEN 9 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'September'
> WHEN 10 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'October'
> WHEN 11 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'November'
> WHEN 12 THEN UPDATE A2IDC21_J SET DATE_OF_BIRTH = 'December'
> END
> FROM A2IDC21_J
> 
> But no updates take place.

SQL error: near "UPDATE": syntax error

Does the environment you run in return error codes?

You can't perform an UPDATE in a WHEN sub-clause.

 UPDATE TABLE1 
 SET FOO = (CASE ...whatever... END)
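A runnable sketch of that corrected form, using Python's sqlite3 module (the table name, column, and month extraction are hypothetical simplifications of the original query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (dob INTEGER)")  # yyyymmdd-style integers
conn.executemany("INSERT INTO t1 VALUES (?)", [(19800115,), (19751230,)])

# The CASE expression goes on the right-hand side of SET, never the
# other way around. (dob / 100) % 100 extracts the month number.
conn.execute("""
    UPDATE t1 SET dob = CASE (dob / 100) % 100
        WHEN 1  THEN 'January'
        WHEN 12 THEN 'December'
        ELSE 'other'
    END
""")
print([r[0] for r in conn.execute("SELECT dob FROM t1")])
```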

> Thanks for any advice.
> 
> RBS



 




Re: [sqlite] sqlite: current_time is off

2007-02-18 Thread Joe Wilson
--- P Kishor <[EMAIL PROTECTED]> wrote:
> On 2/18/07, Neil McLeod <[EMAIL PROTECTED]> wrote:
> > I just started using SQLite, and it seems my timestamps are all 5 hours
> > ahead of time (i.e. "select current_time" gives me 0:47 when it is actually
> > 19:47 here). I guess this is GMT. Is there a way to adjust the time? I'm on
> > Windows XP, and the time is listed correctly in my taskbar.
> 
> by default, time is in UTC. Use 'localtime' modifier for your time.
> 
> select time('now', 'localtime');

Is there any way to force current_time to display localtime (other than
hacking the sqlite source code)?  Some environment setting or PRAGMA
or something?


 




Re: [sqlite] sqlite: current_time is off

2007-02-19 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > --- P Kishor <[EMAIL PROTECTED]> wrote:
> > > On 2/18/07, Neil McLeod <[EMAIL PROTECTED]> wrote:
> > > > I just started using SQLite, and it seems my timestamps are all 5 hours
> > > > ahead of time (i.e. "select current_time" gives me 0:47 when it is 
> > > > actually
> > > > 19:47 here). I guess this is GMT. Is there a way to adjust the time? 
> > > > I'm on
> > > > Windows XP, and the time is listed correctly in my taskbar.
> > > 
> > > by default, time is in UTC. Use 'localtime' modifier for your time.
> > > 
> > > select time('now', 'localtime');
> > 
> > Is there any way to force current_time to display localtime (other than
> > hacking the sqlite source code)?  Some environment setting or PRAGMA
> > or something?
> > 
> 
> CURRENT_TIME does not "display" anything.  It generates a value
> to be stored in the database.  Whether you realize it or not, 
> you want to store time and date values in UTC, not localtime.
> Convert from UTC to localtime at the point where you extract
> the information out of the database to display it to the user.
> 
> And to answer your question:  No, there is no way to get CURRENT_TIME
> to use anything other than UTC.

Thanks for your response.

I'm aware of UTC time handling and time zones. 
I just thought SQLite might support per-connection time zone settings 
to display/set datetime information as many other databases do. 

 http://dev.mysql.com/doc/refman/5.0/en/time-zone-support.html
 http://dev.mysql.com/doc/refman/5.0/en/timestamp-4-1.html

 "You can set the current time zone on a per-connection basis, 
 as described in Section 5.11.8, “MySQL Server Time Zone Support”. 
 TIMESTAMP values are stored in UTC, being converted from the 
 current time zone for storage, and converted back to the current 
 time zone upon retrieval."
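The store-in-UTC, convert-on-display pattern described above can be sketched as follows (using Python's sqlite3 module for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (ts TEXT DEFAULT CURRENT_TIMESTAMP, msg TEXT)")
conn.execute("INSERT INTO events (msg) VALUES ('hello')")

# The stored value is UTC; convert only at display time.
utc, local = conn.execute(
    "SELECT ts, datetime(ts, 'localtime') FROM events").fetchone()
print(utc)    # e.g. 2007-02-19 00:47:12 (UTC, as stored)
print(local)  # the same instant in the connection's local zone
```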



 




Re: [sqlite] query on match between application and sqlite

2007-02-19 Thread Joe Wilson
> 1) need to know if sqlite can help me cut down on the number of little db's 

Sure.

> 2) can I use it for a message store and have it run faster than a 1 
> message per file system?

What does that mean? Is "file system" a unit of time?

You can store messages in BLOBs or as TEXT in an SQLite table.

> 3) how would one allocate the different data sets across how many dbms?

You can put all your data in a single sqlite database file,
or use several different sqlite database files and ATTACH them
at runtime - it's up to you.
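A minimal sketch of the ATTACH approach (Python's sqlite3 module; the database alias and table names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # this is the "main" database
conn.execute("ATTACH DATABASE ':memory:' AS msgs")
conn.execute("CREATE TABLE msgs.mail (body BLOB)")
conn.execute("INSERT INTO msgs.mail VALUES (x'48656c6c6f')")

# Cross-database queries just qualify the table name with the alias.
body, = conn.execute("SELECT body FROM msgs.mail").fetchone()
print(bytes(body).decode())  # the BLOB x'48656c6c6f' spells "Hello"
```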

If you're new to SQL, here's a good link: http://sqlzoo.net/


 




Re: [sqlite] Precision of dates stores as Julian "real"

2007-02-20 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> A double is sufficient to store the current time to with about
> 25 microseconds.  If you use 'now' to get the current time, the
> date functions try to capture the current time to this precision.
> That is implemented in the os_XXX.c layer.  It's system dependent.
> Check to see what your system is doing.

Useless trick to find out the minimum resolution of your machine's clock
using only sqlite...

create view v1 as select 1 union all select 2 union all select 3;
select distinct julianday('now') from v1,v1,v1,v1,v1,v1,v1,v1,v1,v1,v1;
2454152.64735713
2454152.64735731
2454152.64735749
2454152.64735768
2454152.64735786
2454152.64735804
2454152.64735822
2454152.6473584
2454152.64735858
2454152.64735876
2454152.64735894
2454152.64735912
2454152.6473593
2454152.64735948
2454152.64735967
2454152.64735985
2454152.64736003
2454152.64736021
2454152.64736039
2454152.64736057
2454152.64736075
2454152.64736093
2454152.64736111
2454152.64736129
2454152.64736147

sqlite> select (2454152.64736147 - 2454152.64736129) * 24 * 60 * 60;
0.015529990196228

So this machine's minimum timer resolution is 0.0155 seconds, 
or 15.5 milliseconds.
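The same probe can be scripted (a sketch using Python's sqlite3 module; the measured value is machine-dependent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Sample julianday('now') repeatedly; the smallest gap between distinct
# samples approximates the clock resolution visible to SQLite.
vals = sorted({conn.execute("SELECT julianday('now')").fetchone()[0]
               for _ in range(50000)})
gaps = [b - a for a, b in zip(vals, vals[1:])]
res = min(gaps) * 86400 if gaps else float('nan')
print(res)  # minimum timer resolution in seconds (machine-dependent)
```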



 




Re: [sqlite] Nested Parens Stack Overflow

2007-02-20 Thread Joe Wilson
Can you post the schema of the tables in the SELECT?

--- Matt Froncek <[EMAIL PROTECTED]> wrote:
> I have a program generator that creates ORs in SQL nested. This causes a
> stack overflow in SQLite. Has this been addressed or will it be? If so what
> version was it addressed. I am not sure how to search for the answer to this
> question.
> 
>  
> 
> Sample SQL:
> 
> SELECT "SalesLine"."ROWID" AS "FQROWID", "Item"."FullName" AS "FQALIAS_1",
> "Ite
> 
> m"."Type" AS "FQALIAS_2", "Customer"."CustomFieldLevel" AS "FQALIAS_3",
> "Custom
> 
> er"."FullName" AS "FQALIAS_4", "SalesLine"."SalesLineDesc" AS "FQALIAS_5",
> "Sal
> 
> esLine"."SalesLineItemRefFullName" AS "FQALIAS_6", "SalesLine"."TxnDate" AS
> "FQ
> 
> ALIAS_7", "SalesLine"."SalesLineAmount" AS "FQALIAS_8", "SalesLine"."Type"
> AS "
> 
> FQALIAS_9" FROM "SalesLine" "SalesLine" LEFT OUTER JOIN "Item" "Item" ON
> "Item"
> 
> ."ListID" = "SalesLine"."SalesLineItemRefListID" LEFT OUTER JOIN "Customer"
> "Cu
> 
> stomer" ON "Customer"."ListID" = "SalesLine"."CustomerRefListID" WHERE
> ((("Sale
> 
> sLine"."TxnDate" >= '2006-01-01') AND ("SalesLine"."TxnDate" <=
> '2006-09-30')) 
> 
> AND (("Item"."Type" = 'ItemInventory') AND (("Customer"."FullName" =
> 'Amazon.co
> 
> m.ksdc, Inc. - Campbellsville') OR (("Customer"."FullName" =
> 'Amazon.com.ksdc, 
> 
> Inc. - Coffeyville') OR (("Customer"."FullName" = 'Amazon.com.ksdc, Inc. -
> Fern
> 
> ley') OR (("Customer"."FullName" = 'Arizona Select') OR
> (("Customer"."FullName"
> 
>  = 'ATW - Cobb Dist.') OR (("Customer"."FullName" = 'SunOpta Food
> Distribution 
> 
> Group') OR (("Customer"."FullName" = 'Callaway Consumer Products, LLC') OR
> (("C
> 
> ustomer"."FullName" = 'Cedarlane Natural Foods') OR (("Customer"."FullName"
> = '
> 
> City Glatt, Inc.') OR (("Customer"."FullName" = 'Columbus Distributing,
> Inc.') 
> 
> OR (("Customer"."FullName" = 'Cost Plus World Markets-West') OR
> (("Customer"."F
> 
> ullName" = 'DeKalb Farmers Market') OR (("Customer"."FullName" =
> 'DPI-Midwest')
> 
>  OR (("Customer"."FullName" = 'DPI-Northwest') OR (("Customer"."FullName" =
> 'DP
> 
> I-Rocky Mountain') OR (("Customer"."FullName" = 'DPI-West') OR
> (("Customer"."Fu
> 
> llName" = 'Exel Pak') OR (("Customer"."FullName" = 'Falcon Trading Co.') OR
> (("
> 
> Customer"."FullName" = 'Foodguys') OR (("Customer"."FullName" = 'Foodology')
> OR
> 
>  (("Customer"."FullName" = 'Gourmet Awards - Milwaukee') OR
> (("Customer"."FullN
> 
> ame" = 'Kehe Foods') OR (("Customer"."FullName" = 'Marc Popcorn Company') OR
> ((
> 
> "Customer"."FullName" = 'Marukai Markets') OR (("Customer"."FullName" =
> 'Matsuk
> 
> as Food Company') OR (("Customer"."FullName" = 'McCain Foods USA, Inc.') OR
> (("
> 
> Customer"."FullName" = 'Okami') OR ("Customer"."FullName" = 'Quickspice,
> Inc.')
> 
> )
> 



 




Re: [sqlite] about default file permission of SQLite database file

2007-02-21 Thread Joe Wilson
--- "Shan, Zhe (Jay)" <[EMAIL PROTECTED]> wrote:
> If to use SQLite to create a database in Linux, the database file will
> be granted permission 644 as default.
> Is this value hardcoded in the current version? Is it possible to
> change this default vaule, say to 664 or something else?

man umask



 




Re: [sqlite] Nested Parens Stack Overflow

2007-02-23 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> The default maximum stack depth is 100.  If you are planning

Have you given any consideration to a variable sized yystack?

Parser memory used might be less in typical cases considering 
sizeof(yyStackEntry)*100 = 2000 bytes on x86.

The yyParser struct would need a few new members (malloc function 
pointer, free function pointer, stack capacity).
The API for ParseAlloc() would need an additional argument for a 
free function to resize yystack on demand in yy_shift() -
but ParseFree() could then lose its free function argument since it 
would already be in the yyParser struct.

SQLite may not need such a deep stack, but I can see other Lemon
generated parsers benefitting from this.

> on parsing some deeply right-recursive SQL statements (and apparently
> you are) then you will need to increase the depth of the stack.
> If you use the version of Lemon that I just checked in to build
> the parser, then you do this by compiling the parse.c file using
> 
> -DYYSTACKDEPTH=1



 




Re: [sqlite] about default file permission of SQLite database file

2007-02-23 Thread Joe Wilson
--- "Shan, Zhe (Jay)" <[EMAIL PROTECTED]> wrote:
> I've tried umask, but it does not work for SQLite.

Here are the default permissions used with open()'s O_CREAT flag:

 src/os_os2.h:# define SQLITE_DEFAULT_FILE_PERMISSIONS 0600
 src/os_unix.c:# define SQLITE_DEFAULT_FILE_PERMISSIONS 0644

But umask still plays a role in this (I can never remember the umask
number's effect without experimentation). 

man open:

SYNOPSIS

 #include <fcntl.h>

 int open(const char *path, int flags, mode_t mode);

DESCRIPTION

 The file name specified by path is opened for reading and/or 
 writing as specified by the argument flags and the file descriptor 
 returned to the calling process.  The flags argument may indicate 
 the file is to be created if it does not exist (by specifying the 
 O_CREAT flag), in which case the file is created with mode mode as 
 described in chmod(2) and modified by the process' umask value (see 
 umask(2)).

Another option is to create a zero-length sqlite database file yourself 
through other means (touch, chmod, etc) before you call sqlite3_open().
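The umask interaction can be demonstrated like this (a sketch using Python's standard library; assumes a POSIX system where SQLite's default creation mode is 0644):

```python
import os
import sqlite3
import stat
import tempfile

# SQLite passes mode 0644 to open(); the process umask masks bits out of it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
old = os.umask(0o027)                  # drop group-write and all world bits
try:
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (x)")  # force the file to be created
    conn.commit()
    conn.close()
finally:
    os.umask(old)                      # restore the previous umask

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0644 masked by umask 0027 gives 0640
```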


 




Re: [sqlite] db design options

2007-02-23 Thread Joe Wilson
> Any suggestions?

If you know that you will likely only perform per-site queries then
you want all the readings for a given site contiguous in the database 
file (or files). You can accomplish that in many ways, as you've outlined.

Hopefully your reading_id's always increase as time goes forward.
Consider collapsing timestamp and reading_id into one value (timestamp)
and make that your primary integer key if you can.
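One way to collapse the two values is a packed 64-bit key (the bit layout below is entirely hypothetical, for illustration only):

```python
import sqlite3

# Hypothetical packing: 64-bit key = (unix_timestamp << 20) | reading_seq,
# so rows arrive in time order and sit contiguously in the table's B-tree.
def pack(ts, seq):
    return (ts << 20) | seq

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.execute("INSERT INTO readings VALUES (?, ?)",
             (pack(1172188800, 7), 3.14))

# Both original values are recoverable from the packed key.
rowid, = conn.execute("SELECT id FROM readings").fetchone()
print(rowid >> 20, rowid & 0xFFFFF)
```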

No point guessing about the various strategies - try all your ideas and 
do timings using typical queries under normal usage patterns (vacuumed vs. 
no-vacuum). There are benefits to separating the data into separate
databases and/or tables (reducing row size by eliminating site_id) as
well as keeping it all together (reducing your admin/coding effort).
In general you want to reduce the reading row size to a minimum number
of bytes to obtain greater insert and query speed.

The manner and order in which you populate your tables also plays a 
large role. If you populate data from all sites each day into a
one-database/one-table solution without vacuuming then data from each
site will be farther apart in the database file, and your per-site 
queries will take longer. If you bulk load all the data at once prior 
to analysis do so one site at a time with rows inserted in order of time.
That will reduce or eliminate the need to vacuum the database.




 




RE: [sqlite] compiling with VC++

2007-02-24 Thread Joe Wilson
They're probably harmless.

Even so, can you post the file name/line of one example of each of the 
following 3 warnings as they seem odd:

warning C4047: 'initializing' : '__int64 (__cdecl *)(struct Mem *)' differs
in levels of indirection from 'char *(__cdecl *)(const char *,char *)'

warning C4113: '__int64 (__cdecl *)(struct Mem *)' differs in parameter
lists from 'void *(__cdecl *)(struct sqlite3 *,void (__cdecl *)(void *,int
,const char *,const char *,__int64 ),void *)'

warning C4133: 'initializing' : incompatible types - from '__int64 (__cdecl
*)(struct sqlite3_stmt *,int )' to 'double (__cdecl *)(struct sqlite3_stmt
*,int )'

--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> When compiling this source code I get 265 warning, which doesn't really
> worry me that much as it all seems to be working fine, but in general what
> kind of warning should be taken seriously?
> 
> I only have 8 different types of warnings:
> 
> warning C4018: '!=' : signed/unsigned mismatch
> warning C4028: formal parameter 1 different from declaration
> warning C4047: 'initializing' : '__int64 (__cdecl *)(struct Mem *)' differs
> in levels of indirection from 'char *(__cdecl *)(const char *,char *)'
> warning C4090: 'function' : different 'const' qualifiers
> warning C4113: '__int64 (__cdecl *)(struct Mem *)' differs in parameter
> lists from 'void *(__cdecl *)(struct sqlite3 *,void (__cdecl *)(void *,int
> ,const char *,const char *,__int64 ),void *)'
> warning C4133: 'initializing' : incompatible types - from '__int64 (__cdecl
> *)(struct sqlite3_stmt *,int )' to 'double (__cdecl *)(struct sqlite3_stmt
> *,int )'
> warning C4244: '+=' : conversion from '__int64 ' to 'int ', possible loss of
> data
> warning C4761: integral size mismatch in argument; conversion supplied



 




RE: [sqlite] compiling with VC++

2007-02-24 Thread Joe Wilson
The function signatures at the lines in question look wrong to me.
Do you have the correct version of sqlite3ext.h?

--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> Here examples of all 3, all from the same source file and the same code
> block:
> 
> C:\SQLite\SourceWin\loadext.c(138) : warning C4047: 'initializing' : 'int
> (__cdecl *)(struct sqlite3_stmt *,int )' differs in levels of indirection
> from 'const char *(__cdecl *)(struct sqlite3_stmt *,int )'
> 
> const sqlite3_api_routines sqlite3_apis = {
>   sqlite3_column_double,
> 
> 
> C:\SQLite\SourceWin\loadext.c(125) : warning C4113: 'int (__cdecl *)(struct
> sqlite3 *,void *,void (__cdecl *)(void *,struct sqlite3 *,int ,const char
> *))' differs in parameter lists from 'int (__cdecl *)(struct sqlite3 *)'
> 
> const sqlite3_api_routines sqlite3_apis = {
>   sqlite3_collation_needed,
> 
> 
> C:\SQLite\SourceWin\loadext.c(208) : warning C4133: 'initializing' :
> incompatible types - from 'void *(__cdecl *)(struct sqlite3 *,void (__cdecl
> *)(void *,int ,const char *,const char *,__int64 ),void *)' to 'char
> *(__cdecl *)(int ,char *,const char
>  *,... )'
> 
> 
> const sqlite3_api_routines sqlite3_apis = {
>   sqlite3_update_hook,
> 
> 
> This codeblock ends like this:
> 
>   /*
>   ** The original API set ends here.  All extensions can call any
>   ** of the APIs above provided that the pointer is not NULL.  But
>   ** before calling APIs that follow, extension should check the
>   ** sqlite3_libversion_number() to make sure they are dealing with
>   ** a library that is new enough to support that API.
>   *
>   */
>   sqlite3_overload_function,
> };
> 
> 
> RBS
> 
> 
> -Original Message-
> From: Joe Wilson [mailto:[EMAIL PROTECTED] 
> Sent: 24 February 2007 17:29
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] compiling with VC++
> 
> They're probably harmless.
> 
> Even so, can you post the file name/line of one example of each of the 
> following 3 warnings as they seem odd:
> 
> warning C4047: 'initializing' : '__int64 (__cdecl *)(struct Mem *)' differs
> in levels of indirection from 'char *(__cdecl *)(const char *,char *)'
> 
> warning C4113: '__int64 (__cdecl *)(struct Mem *)' differs in parameter
> lists from 'void *(__cdecl *)(struct sqlite3 *,void (__cdecl *)(void *,int
> ,const char *,const char *,__int64 ),void *)'
> 
> warning C4133: 'initializing' : incompatible types - from '__int64 (__cdecl
> *)(struct sqlite3_stmt *,int )' to 'double (__cdecl *)(struct sqlite3_stmt
> *,int )'
> 
> --- RB Smissaert <[EMAIL PROTECTED]> wrote:
> > When compiling this source code I get 265 warning, which doesn't really
> > worry me that much as it all seems to be working fine, but in general what
> > kind of warning should be taken seriously?
> > 
> > I only have 8 different types of warnings:
> > 
> > warning C4018: '!=' : signed/unsigned mismatch
> > warning C4028: formal parameter 1 different from declaration
> > warning C4047: 'initializing' : '__int64 (__cdecl *)(struct Mem *)'
> differs
> > in levels of indirection from 'char *(__cdecl *)(const char *,char *)'
> > warning C4090: 'function' : different 'const' qualifiers
> > warning C4113: '__int64 (__cdecl *)(struct Mem *)' differs in parameter
> > lists from 'void *(__cdecl *)(struct sqlite3 *,void (__cdecl *)(void *,int
> > ,const char *,const char *,__int64 ),void *)'
> > warning C4133: 'initializing' : incompatible types - from '__int64
> (__cdecl
> > *)(struct sqlite3_stmt *,int )' to 'double (__cdecl *)(struct sqlite3_stmt
> > *,int )'
> > warning C4244: '+=' : conversion from '__int64 ' to 'int ', possible loss
> of
> > data
> > warning C4761: integral size mismatch in argument; conversion supplied




 




RE: [sqlite] compiling with VC++

2007-02-24 Thread Joe Wilson
The suggested changes the web site recommends are incorrect:

 loadext.c

 Comment out the following lines in loadext.c by adding two back slashes
 ("//") to the start of the line. This prevents errors due to our changes.

 sqlite3_changes,
 sqlite3_close,

 sqlite3_last_insert_rowid,
 sqlite3_libversion,
 sqlite3_libversion_number,

 sqlite3_open,
 sqlite3_open16, 

Instead you should disable compiling the loadable module extension by
defining SQLITE_OMIT_LOAD_EXTENSION.

--- RB Smissaert <[EMAIL PROTECTED]> wrote:

> Microsoft Visual C++ 6 Enterprise edition.
> I think it is the latest before .net came in.
> 
> I am compiling the latest 3.3.13 source as from the URL you mentioned to me.
> Keep in mind I altered the source to make it VB compatible as in this URL:
> 
> http://www.tannertech.net/sqlite3vb/index.htm
> 
> RBS
> 
> 
> -Original Message-
> From: Martin Jenkins [mailto:[EMAIL PROTECTED] 
> Sent: 24 February 2007 18:52
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] compiling with VC++
> 
> RB Smissaert wrote:
> > Thanks, that is very helpful and reassuring as well.
> > Will see if I can figure out then what is causing this one:
> > warning C4028, as you say that could be a typo.
> 
> I've just built 3.3.12 source with VC6 and didn't see that error. I 
> changed the warning level to 4 and got 500 warnings (signed/unsigned 
> mismatch, oddities in M$ headers, unreferenced parameter NotUsed ;)) but 
> not a C4028 in sight. VC6 is pretty old now - what version are you 
> using? And which sqlite source are you building?
> 
> Martin



 




RE: [sqlite] compiling with VC++

2007-02-24 Thread Joe Wilson
--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> Downloaded it from the one you pointed to me:

You're confusing me with someone else.

Here's a way to build sqlite3.dll without Microsoft tools using the
official sources:

Grab and install MinGW and MSYS (google for it), 
download http://sqlite.org/sqlite-3.3.13.tar.gz, 
launch the bash shell and issue these commands:

 tar xzvf sqlite-3.3.13.tar.gz
 cd sqlite-3.3.13
 ./configure
 make sqlite3.dll

> 
> and a zip file - sqlite-source-3_3_13.zip - containing preprocessed source
> for Windows users at:
> 
> http://www.sqlite.org/download.html
> 
> Where else could I get it from?
> 
> RBS



 




RE: [sqlite] compiling with VC++

2007-02-24 Thread Joe Wilson
--- RB Smissaert <[EMAIL PROTECTED]> wrote:
> Could try with MinGW and MSYS and it may give less warnings, but would it
> produce a dll that is any better? If possible I want to avoid installing
> more software when I already have VC6++.

It's completely up to you. How much time do you want to spend on learning 
VC6 when you have an alternative that is known to work?

No more VC++ for me, thanks. Good luck.



 




Re: [sqlite] FTS1 dump/restore needs column names

2007-02-25 Thread Joe Wilson
--- Adam Megacz <[EMAIL PROTECTED]> wrote:
>   INSERT INTO "foo" (a,b,c) VALUES (x,y,z)
> 
> So, question: is there a way to get the .dump command to emit dumps in
> the latter form?

Not to my knowledge.

It would be nice if .dump could optionally omit PRAGMAs as well.



 




Re: [sqlite] garbage between end of SQL and COMMIT in a dump?

2007-02-25 Thread Joe Wilson
--- Adam Megacz <[EMAIL PROTECTED]> wrote:
> Using sqlite 3.3.10 I'm experiencing a strange situation where .dump
> appears to repeat the last few lines of the dump (not including the
> final "COMMIT") twice.
> 
> Has anybody else experienced this?

I've never seen this before. Does it happen with 3.3.13?
Can you make a small test case?



 




Re: [sqlite] Table Info Pragmas as Virtual Table?

2007-02-27 Thread Joe Wilson
--- Dan Kennedy <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-02-27 at 10:48 +0100, Michael Schlenker wrote:
> > Hi all,
> > 
> > for some uses its nice to query the PRAGMA and the sqlite_master table 
> > with a single query instead of multiple PRAGMA statements.
> > 
> > Could the Pragmas to query the database schema be converted to virtual 
> > tables? (that would enable joins with sqlite_master and other nice uses 
> > to get all schema info for a table with a single statement).
> > 
> > Or did some already do something like that?
> 
> There is some code for an sqlite extension in the src/test_schema.c of
> the distribution that provides a read-only view of the database schema
> as a virtual table.

Are there any plans to make all SQLite PRAGMA meta-data information
available via virtual tables as part of the standard sqlite library?
(with OMIT_META_DATA_IN_VIRTUAL_TABLES and all that?)

Everyone could roll their own easily enough, but everyone's implementation
would be different.


 




Re: [sqlite] SQL error: database is locked

2007-02-27 Thread Joe Wilson
> > 2007/2/27, Jakub Ladman <[EMAIL PROTECTED]>:
> > > Dear Friends
> > > Please let me know what should i do with this?
> > >
> > > SQLite version 3.3.13
> > > Enter ".help" for instructions
> > > sqlite> create table tbl1(one varchar(10), two smallint);
> > > SQL error: database is locked
> > > sqlite>
> > >
> > > (sqlite is launched by root)

I suspect file locking via fcntl() is not working in your embedded libc.

Try compiling the source code with -D__DJGPP__, which will make fcntl() a no-op.

- Or - if that fails try compiling with -DSQLITE_LOCK_TRACE -DSQLITE_DEBUG to 
see what errors it generates.

(Don't do both at the same time)


 




Re: [sqlite] SQL error: database is locked

2007-02-27 Thread Joe Wilson
On second thought, instead of -D__DJGPP__, comment out the following 
lines in src/os_unix.c instead:

/* #ifdef __DJGPP__ */
# define fcntl(A,B,C) 0 
/* #endif */

--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> I suspect file locking via fcntl() is not working in your embedded libc.
> 
> Try compiling the source code with -D__DJGPP__ which will make fcntl() a no 
> op.
> 
> - Or - if that fails try compiling with -DSQLITE_LOCK_TRACE -DSQLITE_DEBUG to 
> see what errors it generates.
> 
> (Don't do both at the same time)



 




Re: [sqlite] SQLITE3 bombs on Windows 95

2007-03-01 Thread Joe Wilson
I don't have access to Windows 95, but you might try finding the highest
version of sqlite3 that did work on 95, then diff the code (probably
os_win.c) against the next version to see what change broke it, and attach
your findings to the ticket you've already created. The problem is likely
something to do with file locking or Unicode.

This may give you some ideas:

http://www.sqlite.org/cvstrac/rlog?f=sqlite/src/os_win.c

--- Zvi Dershowitz <[EMAIL PROTECTED]> wrote:
> Has anyone had any luck running the latest version of SQLITE3 (3.3.13) under 
> Windows 95?
> 
> I have just upgraded from revision 3.2.2 that run with no problem but newer 
> versions do not.
> 



 




[sqlite] question regarding Check-in [3671]: Changes to the btree and pager that reduce the amount of I/O when dealing with the freelist.

2007-03-04 Thread Joe Wilson
What sort of SQL statements will benefit most from this change?
Large delete operations or vacuum? What about query speed?


 




Re: [sqlite] Re: Why it does not work properly?

2007-03-04 Thread Joe Wilson
--- Igor Tandetnik <[EMAIL PROTECTED]> wrote:
> woj <[EMAIL PROTECTED]> wrote:
> > Now, when I run a query:
> > SELECT Mieszalnia.IdMat, Mieszalnia.Partia, Mieszalnia.Kont,
> > Mieszalnia.Uk, Max(Mieszalnia.Data) FROM Mieszalnia;
> > I always get:
> > IdMat   Partia  KontUk  Data
> > 6 3 3 sl1 1172135769
> >
> > In this result there is indeed max from Data field but rest of the
> > fields fit not...
> 
> When a SELECT statement involves aggregate functions, all column 
> references in the SELECT must be either parameters to some aggregate 
> functions, or else be also mentioned in GROUP BY clause. SQLite allows, 
> as an extension, departure from this rule, but the row from which values 
> for columns that are neither aggregated nor grouped by are taken is 
> random and unpredictable.
> 
> Specifically, in the query you show, there's no guarantee that values 
> for IdMat, Partia and so on would be taken from the same row from which 
> Max(Data) comes. Even if SQLite really wanted to help you out here, it 
> is impossible in general. Consider:
> 
> SELECT IdMat, Max(Data), Min(Data) from Mieszalnia;
> 
> Which value of IdMat would you expect to see in response to such a 
> query? Should it come from the row with the largest value of Data, with 
> the smallest, or some other?

This thread describes how the non-aggregate values in a group by are 
selected by SQLite:

 http://www.mail-archive.com/sqlite-users@sqlite.org/msg17770.html
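The behavior is easy to reproduce. A minimal sketch using Python's built-in sqlite3 module (the table and values are made up to mirror the question; note that since SQLite 3.7.11 a bare column next to a lone MAX() is defined to come from the row that supplied the MAX, but versions of that era gave no such guarantee):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Mieszalnia (IdMat INTEGER, Data INTEGER)")
conn.executemany("INSERT INTO Mieszalnia VALUES (?, ?)",
                 [(1, 100), (2, 300), (3, 200)])

# Bare column next to MAX(): which row IdMat comes from was undefined
# in older SQLite; versions >= 3.7.11 take it from the row supplying
# the MAX, so the reliable fix is a subquery or ORDER BY ... LIMIT 1.
row = conn.execute("SELECT IdMat, MAX(Data) FROM Mieszalnia").fetchone()
```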



 




Re: [sqlite] journal - "Unable to open the database file"

2007-03-09 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Is there something that the SQLite core can do better?

Perhaps exclusive locks on journal files would help avoid this problem.
Or are the -journal and etilqs_* files supposed to be sharable by other 
sqlite processes?

 http://www.backupassist.com/BackupAssist/faq.html

 "Basic support - open files locked with a shared lock or no lock are 
 copied and backed up after the main backup. Files with an exclusive 
 lock cannot be copied or backed up. Exclusively locked files are 
 typically SQL Server or Exchange data files."



 




Re: [sqlite] journal - "Unable to open the database file"

2007-03-10 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > --- [EMAIL PROTECTED] wrote:
> > > Is there something that the SQLite core can do better?
> > 
> > Perhaps exclusive locks on journal files would help avoid this problem.
> > Or are the -journal and etilqs_* files supposed to be sharable by other 
> > sqlite processes?
> 
> They are, at least on unix.  On unix, both files are opened with
> the O_EXCL flag.  How do I do the same thing for windows?

I'm just guessing, but LockFileEx() and LOCKFILE_EXCLUSIVE_LOCK might do
the trick:

 http://msdn2.microsoft.com/en-us/library/aa365203.aspx

Unfortunately, the open and exclusive lock combination is not atomic.

There's also this - CreateFileTransacted():

 http://msdn2.microsoft.com/en-us/library/aa363859.aspx

which has the argument:

 dwShareMode
...
If this parameter is 0 (zero) and CreateFileTransacted succeeds, 
the object cannot be shared and cannot be opened again until the 
handle is closed. 
...
The sharing options remain in effect until you close the handle to 
an object.



 




Re: [sqlite] journal - "Unable to open the database file"

2007-03-10 Thread Joe Wilson
--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> There's also this - CreateFileTransacted():
> 
>  http://msdn2.microsoft.com/en-us/library/aa363859.aspx
> 
> which has the argument:
> 
>  dwShareMode
> ...
> If this parameter is 0 (zero) and CreateFileTransacted succeeds, 
> the object cannot be shared and cannot be opened again until the 
> handle is closed. 
> ...
> The sharing options remain in effect until you close the handle to 
> an object.

Regrettably, CreateFileTransacted() is not a good option as it is new to 
Windows Vista.


 




Re: [sqlite] Search engines and the Sqlite.Org website

2007-03-18 Thread Joe Wilson
I noticed that one of the sqlite contrib files had 1500 downloads in a single 
day this past week, which is around 100X normal. I'm guessing it was a robot.

I see that Google and many other websites' robots.txt prefer this form:

  Disallow: /contrib/download/

over this form:

  Disallow: /contrib/download

for directories.

Is the trailing slash required to disallow directories for (some) robots?

Also, what does the trailing question mark mean in robots.txt?

  Disallow: /foo/bar?

Is it a wildcard or does it literally mean the '?' character?


 




Re: [sqlite] beginner's questions: atexit problem

2007-03-21 Thread Joe Wilson
Are you mixing Cygwin and MinGW libraries and/or header files?

--- timm2 <[EMAIL PROTECTED]> wrote:
> Using dev-c++ 4.9.9.2-wx-beta-6.8 with mingw 3.4.2 I have got the error:
> 
> sqlite3.A(.text+0x44):fake: multiple definition of `atexit'
> C:/../lib/gcc/mingw32/3.4.2/../../../crt2.o(.text+0x260):crt1.c: first 
> defined here
> 
> I did not find a helpful hint in the archive of the list. 
> Could you help me?
>  Thanks for any hint,
> Tim




 




Re: [sqlite] beginner's questions: atexit problem

2007-03-22 Thread Joe Wilson
I haven't heard of Dev-Cpp.
If you use MinGW gcc from http://mingw.org/ and MSYS, you shouldn't have any 
problems.

--- timm2 <[EMAIL PROTECTED]> wrote:
> I use MinGW as it was installed by Dev-Cpp; I think there are no Cygwin 
> files.
> Tim
> 
> > Are you mixing Cygwin and MinGW libraries and/or header files?
> > 
> > --- timm2 <[EMAIL PROTECTED]> wrote:
> > > Using dev-c++ 4.9.9.2-wx-beta-6.8 with mingw 3.4.2 I have got the error:
> > > 
> > > sqlite3.A(.text+0x44):fake: multiple definition of `atexit'
> > > C:/../lib/gcc/mingw32/3.4.2/../../../crt2.o(.text+0x260):crt1.c: first 
> > > defined here



 




Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Joe Wilson
--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> You could also improve the locality in the database file further by 
> running a vacuum command after it has been created. this will move the 
> pages around so that the page of the table are contiguous and so are the 
> pages of the index, rather than having them interspersed with each other.

In SQLite 3.3.13 and earlier, VACUUM does indeed intermix the table pages 
with the index pages leading to fragmentation. It's also quite slow on large 
tables with multiple indexes unless you use a large cache.

This patch in CVS addresses these problems:
http://www.sqlite.org/cvstrac/chngview?cn=3643

See also: Changes to support fragmentation analysis in sqlite3_analyzer.
http://www.sqlite.org/cvstrac/chngview?cn=3634



 




Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Gerry Snyder <[EMAIL PROTECTED]> wrote:
> > Chris Jones wrote:
> > > Hi all,
> > > 
> > > I have a very simple schema.  I need to assign a unique identifier to a
> > > large collection of strings, each at most 80-bytes, although typically
> > > shorter.
> > > 
> > > The problem is I have 112 million of them.
> > 
> > Maybe you could start by breaking the data into 8 equal groups and make 
> > a table of each group. Then merge the original groups pairwise, then 
> > merge those 4 groups, and finally the 2 semifinal groups (kinda like 
> > March Madness, come to think of it). Since each merging will be of 
> > already sorted/indexed data, it might save a lot of time.
> > 
> 
> This is the right idea.
> 
> The problem is that your working set is bigger than your cache
> which is causing thrashing.  I suggest a solution like this:
> 
> Add entries to table ONE until the table and its unique index get
> so big that they no longer fit in cache all at once.  Then
> transfer ONE into TWO like this:
> 
>INSERT INTO two SELECT * FROM one ORDER BY unique_column;
> 
> The ORDER BY is important here.
> 
> Do the above a time or two until TWO is significantly larger than ONE.
> Then do the same into TWO_B.  Later combine TWO and TWO_B into THREE:
> 
>INSERT INTO three SELECT * FROM two ORDER BY unique_column;
>INSERT INTO three SELECT * FROM two_b ORDER BY unique_column;
> 
> Repeat as necessary for FOUR, FIVE, SIX and so forth.

I've tried something like the algorithm you've proposed and it's much
slower than pre-sorting all the data prior to bulk insert.
It may have something to do with the cost of multiple repeated inserts 
for each original row.

Here's a different attempt at speeding up bulk insert following your
suggestion from Ticket 2075:

 http://www.mail-archive.com/sqlite-users%40sqlite.org/msg22143.html

Any suggestions to speed it up are welcome.
Sample insert speed test included.
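For reference, the pre-sorting approach looks roughly like this using Python's sqlite3 module (a sketch with made-up data, not the actual speed test mentioned above):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE strings (id INTEGER PRIMARY KEY, s TEXT UNIQUE)")

# Sorting the keys first means the UNIQUE index is built by appending to
# its rightmost b-tree leaf, instead of thrashing the page cache with
# random lookups across the whole index.
keys = sorted({"row%09d" % random.randrange(10**9) for _ in range(10000)})

conn.execute("BEGIN")
conn.executemany("INSERT INTO strings(s) VALUES (?)", ((k,) for k in keys))
conn.commit()
n = conn.execute("SELECT COUNT(*) FROM strings").fetchone()[0]
```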


 




Re: [sqlite] Index creation

2007-03-22 Thread Joe Wilson
--- turnleftjohn <[EMAIL PROTECTED]> wrote:
> improved dramatically. So I attempted the creation of the index off hours on
> the production system, and after 4 hours no index.  I can't detect any
> activity at all. The journal file and the .db file just sit at the same size
> for 4 hours.  Why is this failing?  It seems like it is just sitting there
> doing nothing.  When I created the test index, I noticed the journal file
> changing and the .db file changing during the 2.5 hours to create.  On the
> production .db file, nothing is happening.  I have all associated processes
> killed that interact with the db file, so I know it is not locked.

Run lsof on the production database file just in case there is some contention. 
(You never know).

I assume that the copied "test" database was indexed immediately after its
creation. If this was the case then the entire file may have been in the OS
cache resulting in very quick indexing. Try running "wc prod.db" or 
"cat prod.db >/dev/null" and then creating the indexes on prod.db to see 
what happens.


 




Re: [sqlite] Index creation

2007-03-22 Thread Joe Wilson
--- turnleftjohn <[EMAIL PROTECTED]> wrote:
> the production system, and after 4 hours no index.  I can't detect any
> activity at all. The journal file and the .db file just sit at the same size
> for 4 hours.  Why is this failing?  It seems like it is just sitting there
> doing nothing.  When I created the test index, I noticed the journal file
> changing and the .db file changing during the 2.5 hours to create.  On the
> production .db file, nothing is happening.  I have all associated processes
> killed that interact with the db file, so I know it is not locked.

If all else fails...

Attach strace to the seemingly idle process to see if it is making any 
system calls:

 strace -p pid

You might also use http://sourceforge.net/projects/lsstack/ to get a live
stack trace of the Linux process. (Or use gdb to attach to the live process).



 




Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > 
> > See also: Changes to support fragmentation analysis in sqlite3_analyzer.
> > http://www.sqlite.org/cvstrac/chngview?cn=3634
> > 
> 
> I'm not real sure those patches are working right.
> I need to revisit that whole fragmentation analysis
> thing before the next release.  Somebody please
> remind me.

What do you suspect is not working right?

I've run the new CVS VACUUM on large databases without any apparent ill effect
and it's noticeably faster than the previous VACUUM implementation.

As for the stats from sqlite3_analyzer, they seem to be in the right ballpark.
But I'm not sure its heuristic accounts for rows that are significantly larger
than the page size, though. In such cases I am seeing higher than expected 
fragmentation after a VACUUM. This is just a guess, of course.


 




Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > 
> > As for the stats from sqlite3_analyzer, they seem to be in the right 
> > ballpark.
> > But I'm not sure its heuristic accounts for rows that are significantly 
> > larger
> > than the page size, though. In such cases I am seeing higher than expected 
> > fragmentation after a VACUUM. This is just a guess, of course.
> 
> I'm not sure sqlite3_analyzer does any of the fragmentation
> measurement right.  For that matter, how do you measure
> fragmentation with a number?  (Suggestions are welcomed.)
> 
> I may yet just yank that whole fragmentation measurement
> idea.

With non-volatile RAM drives getting larger and cheaper by the day, 
you may not need to worry about fragmentation, fsync and disk-seek time 
in a year or two.



 




[sqlite] PRAGMA locking_mode = EXCLUSIVE

2007-03-26 Thread Joe Wilson
What operations/work patterns will benefit most from 
PRAGMA locking_mode = EXCLUSIVE?

Can you roughly quantify the speedups in such cases?


 




Re: [sqlite] Questions on views

2007-03-26 Thread Joe Wilson
--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> Dennis Volodomanov wrote:
> > Is doing a SELECT EXISTS (...) faster than a SELECT COUNT (... LIMIT 1)
> > or would it be the same (I would expect them to be the same, but that's
> > only my guess)?
> >   
> I would expect them to be very nearly the same. Any difference would 
> only be apparent if you repeat them many times (which is what you are 
> doing). You should probably try it both ways and measure the execution 
> time to see which is faster.

"EXPLAIN SELECT ..." is also a good way to find out what the queries are doing
if you're prepared to decipher the opcodes.
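Both forms can be compared quickly from Python's sqlite3 module; a sketch (the schema is made up, and only the opcode listings are compared, not their meaning):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# EXPLAIN returns one row per VDBE opcode, so comparing the two opcode
# listings (or just their lengths) gives a rough sense of the work each
# form of the query does.
exists_ops = conn.execute(
    "EXPLAIN SELECT EXISTS (SELECT 1 FROM t WHERE x = 1)").fetchall()
count_ops = conn.execute(
    "EXPLAIN SELECT COUNT(*) FROM (SELECT 1 FROM t WHERE x = 1 LIMIT 1)"
).fetchall()
```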



 




Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Joe Wilson
--- Alberto Simões <[EMAIL PROTECTED]> wrote:
> I have a database with three tables (bigrams, trigrams and
> tetragrams). Basically, they are:
>TABLE( A, B, Count)  IDX(A,B)  IDX(B)
>TABLE( A, B, C, Count)  IDX(A,B,C)  IDX(B)  IDX(C)
>TABLE( A, B, C, D, Count) IDX (A,B,C,D) IDX(B) IDX(C) IDX(D)
> 
> Now, this database is quickly getting around the 2GB of disk space.
> My question, about efficiency, is, it better to have:
>  - one big database, one connection from C code to the database
>  - three smaller databases, three connection from C code to the database

It depends on how and how frequently they are populated, but I'd suggest 
3 database files, one connection from C code with 2 ATTACH DATABASE commands.

 http://sqlite.org/lang_attach.html
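A sketch of the single-connection-plus-ATTACH approach using Python's sqlite3 module (the file and table names are illustrative, not taken from the poster's schema):

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
paths = {name: os.path.join(workdir, name + ".db")
         for name in ("bigrams", "trigrams")}

# Populate each database file separately.
for name, path in paths.items():
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE %s (a TEXT, count INTEGER)" % name)
    db.execute("INSERT INTO %s VALUES ('the', 42)" % name)
    db.commit()
    db.close()

# One connection, one ATTACH: cross-database joins then just work.
conn = sqlite3.connect(paths["bigrams"])
conn.execute("ATTACH DATABASE ? AS tri", (paths["trigrams"],))
row = conn.execute(
    "SELECT b.count + t.count FROM bigrams b "
    "JOIN tri.trigrams t ON b.a = t.a").fetchone()
```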


 




Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Joe Wilson
--- Alberto Simões <[EMAIL PROTECTED]> wrote:
> The tables are related but they are already being created in parallel
> (three tables). I was joining them just because it was easier to move.
> But given the new, erm, big size, I think I'll maintain them split.
> 
> Joe Wilson idea of attaching the tables seems cool. I'll give it a try.

If you're creating the 3 databases in parallel, you would be better
off creating 3 connections (each from a seperate thread or process) 
to populate them in order to more efficiently use the spare cycles of 
the CPU during I/O waits.

Use ATTACH from one connection after the databases have been populated
to read from all of them.


 




Re: [sqlite] Index creation

2007-03-26 Thread Joe Wilson
--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> > improved dramatically. So I attempted the creation of the index off hours on
> > the production system, and after 4 hours no index.  I can't detect any
> > activity at all. The journal file and the .db file just sit at the same size
> > for 4 hours.  Why is this failing?  It seems like it is just sitting there
> > doing nothing.  When I created the test index, I noticed the journal file
> > changing and the .db file changing during the 2.5 hours to create.  On the
> > production .db file, nothing is happening.  I have all associated processes
> > killed that interact with the db file, so I know it is not locked.
> 
> I assume that the copied "test" database was indexed immediately after its
> creation. If this was the case then the entire file may have been in the OS
> cache resulting in very quick indexing. Try running "wc prod.db" or 
> "cat prod.db >/dev/null" and then creating the indexes on prod.db to see 
> what happens.

The original poster confirmed that cat'ting the file to /dev/null reduced index
creation time to 2.5 hours on the original database file.

Could some optional heuristic be incorporated into SQLite's pager to do
something similar for such large transactions and/or queries?




 




RE: [sqlite] Questions on views

2007-03-26 Thread Joe Wilson
--- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> Yes, after timing both I found that SELECT EXISTS is actually a tiny bit
> faster, which does matter when multiplied by thousands of executions in
> a row.
> 
> Unfortunately, I still cannot get it as fast as I want - it takes
> approximately 1500-2000 ms per approximately 2000-3000 executions. Is
> there any way to speed this up even further somehow? The scenario is
> that I have a table full of SQL statements that need to be reexecuted
> often to check whether they return any results or not (from other
> tables). I have all those SQL statements in memory in the application,
> so that saves a bit of time, but can I do anything else?

0.6 milliseconds per query is not fast enough? Wow!
What's your system doing that it needs to poll the database so often?

Unless you want to redesign your application, there's not much you can 
do except eliminate the parsing overhead.

In the table where you store the SQL statements, create a column to 
hold the MD5 hash value of the SQL and use that as a key to an in-memory 
hash map of prepared statements, where you create the prepared statement 
and insert it into the map with the MD5 value as its key if it does not 
exist. Keep in mind that your prepared statements are tied to the 
connection on which they were created, so if you have many connections
you will need many maps. Use sqlite3_prepare_v2().


 




Re: [sqlite] PRAGMA locking_mode = EXCLUSIVE

2007-03-26 Thread Joe Wilson
Congratulations. This is a big milestone.

The single "sqlite3.c" source file release will certainly simplify 
embedding.

I guess the single translation unit thing is why GCC is busy working 
on link-time optimization (and why LLVM is faster in some cases).

--- [EMAIL PROTECTED] wrote:
> When you do "PRAGMA locking_mode=EXCLUSIVE", it means that
> SQLite will not release its EXCLUSIVE lock after its does its
> first write until you either close the connection or you
> do "PRAGMA locking_mode=NORMAL".  There are various optimizations
> that can occur when this is the case.
> 
> There is a plot of runtime of CVSHEAD versus version 3.3.13
> at
> 
>http://www.sqlite.org/relspeed-20070326-ephemeral.gif
> 
> (As the name implies, this GIF will be ephemeral.  Do not expect it
> to be there for more than a few days.)  Any value less than
> 1.0 means that performance has improved.  The test script is
> 
>http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/speed2.test
> 
> The orange bars show the relative performance of CVSHEAD with
> no special build options.  1.0 is the performance of version 3.3.13
> so anything less than 1.0 is an improvement.  The red bars show
> CVSHEAD with all the source files concatenated into a single big
> file and compiled as a single translation unit.  Compiling this
> way allows the compiler to do additional optimizations that result
> in improved performance.  The blue bars are the same sqlite-as-one-
> great-big-source-file optimization but with -DSQLITE_OMIT_XFER_OPT=1
> defined.
> 
> The tests are cumulative.  The database that results from one test
> feeds into the next.  The two "insert" tests at the top both
> consists of a large number (5) of distinct INSERT statements.
> These tests are dominated by the parser and code generator and
> all of our recent work has been directed toward optimizing the
> backend so these tests do not show any improvement.  You can see
> that the new "XFER Optimizer" actually results in a slight
> performance drop since for each INSERT statement, the parser has
> to check to see if the XFER optimization applies.  (It never
> does in the first two tests.)
> 
> The interface between the B-Tree layer and the Pager layer has
> been refactored.  This refactoring prevented the compiler from
> doing certain function inlinings that it was able to do before,
> resulting in a performance drop for the orange bar.  But when 
> the pager and b-tree modules were combined into a single source
> file, the function inlining was able to occur again and so
> performance improved.
> 
> Tests select4 through select6 are about twice as fast.  We believe
> this is because the pager cache is no longer flushed unless the
> database is changed by another process.
> 
> VACUUM is much faster due to the XFER optimization.  Furthermore,
> VACUUM with the XFER optimization greatly reduces the database
> fragmentation.  This is (we believe) why performance is so much
> better in the later tests for the orange and red bars.  Operations
> on a defragmented database file go much faster.
> 
> Changes have been made to the b-tree and pager layers that omit
> certain unnecessary writes to the database file and to the rollback
> journal.  This reduction in disk I/O results in the 20x performance
> improvements seen for tests delete1 and drop1.
> 
> CVSHEAD passes the "quick" regression tests.  But we know there
> are still issues with the code.  CVSHEAD is not currently
> recommended for use in shipping products, but it is adequate
> for development work, we believe.
> 
> In past releases of SQLite, we have made available a ZIP archive
> with preprocessed source files.  In the future, we may change this
> so that instead of a ZIP archive full of individual files, we
> ship a single "sqlite3.c" source file which contains all of the
> source file in a single translation unit.  By this mode of
> delivery, we hope to enable users to see the performance improvements
> we are seeing in our red bars.



 




RE: [sqlite] Questions on views

2007-03-26 Thread Joe Wilson
Assuming you're running on multi-core machines, spread the work over a few
threads/connections.

--- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> Oops - that last sentence I wrote actually doesn't make sense :) I know
> what prepared statements are as I'm using them (doh!). I might have a
> problem that I need to add more WHERE conditions to those "basic"
> statements, which probably wouldn't work with storing them, as I can't
> possibly know all possible combinations. Still - I'll give this some
> thought to see if I can find something in this direction.




 




Re: [sqlite] Running multiple DDL statements in a batch (via JDBC)

2007-03-26 Thread Joe Wilson
AFAIK, the behavior you're looking for is not well defined by the JDBC API:

 http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Statement.html
 
http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Statement.html#addBatch(java.lang.String)

Perhaps some JDBC drivers implement the behavior you expect, but
technically, you should call addBatch() to add each individual SQL
statement to the batch prior to calling executeBatch().

Perhaps if you post to the sqlitejdbc mailing list, the author may consider
making such an extension. But you can probably get away with just splitting
your DDL string on ";" and feeding them to addBatch in a loop and then
calling executeBatch.

--- "Steven E. Harris" <[EMAIL PROTECTED]> wrote:
> I'm using the "pure" SQLite JDBC driver¹ and trying to bootstrap my
> database schema by running a batch of DDL statements. I read in the
> entire DDL script from a file, collect it into a string, and feed that
> string into either java.sql.Statement.executeUpdate() or
> java.sql.Statement.executeBatch().
> 
> In either case, only the first DDL statement takes effect, and no
> errors are signaled. Stepping through the JDBC driver, it looks as
> though it sqlite3_prepare()s a statement with my SQL string, then
> calls sqlite3_step() on the statement, and the return value comes back
> as SQLITE_DONE, upon which it finalizes the statement and returns
> successfully.
> 
> I understand that we have this JDBC layer in the middle, but it's
> pretty thin, and I'm trying to figure out which party in this
> arrangement is responsible for only executing the first statement (up
> through the first semicolon) in the SQL string.
> 
> Does SQLite normally execute more than one statement provided in a SQL
> string? I'm fearing having to cut up this DDL file into ten parts:
> three CREATE TABLE statements, one CREATE INDEX statement, and six
> CREATE TRIGGER statements for foreign key enforcement.
> 
> Please advise.
> 
> 
> Footnotes: 
> ¹ http://www.zentus.com/sqlitejdbc/



 




Re: [sqlite] Difference in these indices?

2007-03-27 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> If you insert records in order of ascending integer primary
> key, then the inserts will be very fast.  If you insert records
> where the integer primary key is randomized, inserts will be
> reasonably fast until the size of your table exceeds the size
> of your disk cache.  Then each insert will need to do multiple
> reads and writes to disk as it tries to figure out where in
> your massive table is the right place to put the new record,
> and then make space for that new record.  All this disk I/O
> will slow things down dramatically.

SQLite is extremely fast in most areas of SQL functionality, except for 
bulk inserts of non-ordered data into multi-indexed tables, where it 
is very slow. This pre-sorting-before-insert trick only works for 
single-index tables, and is not useful in the general case. It also defeats
the purpose of using a database - why should the user have to pre-sort
data to achieve acceptable performance?

In the current SQLite insert algorithm there's way too much shuffling 
around of disk pages in order to minimize database footprint at 
all costs. There ought to be a PRAGMA to prefer maximizing insert 
speed when inserting, and if it doubles or quadruples the database 
size - so be it. Most embedded devices may opt for minimum space, but 
most PC-based apps would likely opt for maximum insert speed.

I believe two measures, applied together, would significantly 
improve insert speed of indexed rows:

1. Allot each index/btree a contiguous region of the database file in 
which to grow without conflicting with the other indexes' pages and pages 
of the underlying table itself. (Putting indexes in separate files, if 
only temporarily for the bulk insert, would be better - but hey, it's SQLite.)

2. Adjust the btree algorithm to create larger gaps in the data when you
make space for new keys in the btree. Such page-sized gaps could minimize 
much of the page shuffling currently done.

When the inserts are done, the user need only issue a VACUUM to bring 
the database file back to its normal "packed" size without such gaps, 
should they so choose.
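A minimal sketch of the pre-sort trick under discussion, using Python's sqlite3 module (the table and key names are made up for illustration): sort the batch by the indexed key and insert it inside a single transaction, so each insert appends to the btree rather than shuffling interior pages.

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")

# A batch of rows with randomized keys, as in the worst case described above.
rows = [(k, "v%d" % k) for k in random.sample(range(1_000_000), 10_000)]

# Pre-sort by the integer primary key before inserting.
rows.sort(key=lambda r: r[0])

with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO t (k, v) VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 10000
```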


 




Re: [sqlite] PRAGMA locking_mode = EXCLUSIVE

2007-03-27 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> There is a plot of runtime of CVSHEAD versus version 3.3.13
> at
> 
>http://www.sqlite.org/relspeed-20070326-ephemeral.gif
> 
>  Any value less than
> 1.0 means that performance has improved.  The test script is
> 
>http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/speed2.test
> 
> The orange bars show the relative performance of CVSHEAD with
> no special build options.  1.0 is the performance of version 3.3.13
> so anything less than 1.0 is an improvement.  The red bars show
> CVSHEAD with all the source files concatenated into a single big
> file and compiled as a single translation unit.  Compiling this
> way allows the compiler to do additional optimizations that result
> in improved performance.  The blue bars are the same sqlite-as-one-
> great-big-source-file optimization but with -DSQLITE_OMIT_XFER_OPT=1
> defined.

Might I ask which compiler optimization flags were used for the 3 bars?
In particular, was strict aliasing enabled for all of them, and what was
the CPU make/model, or at least how much CPU cache did it have?

I wonder if the latter tests had working data sets small enough to 
run completely in the L2/L3 cache as a result of being completely in the 
disk cache (i.e., copy3, random-del1/2).


 




Re: [sqlite] Reading across processes on Solaris

2007-04-01 Thread Joe Wilson
--- Charles Cazabon <[EMAIL PROTECTED]> wrote:
> I'm using sqlite (through the pysqlite wrapper, but this behaviour seems
> unrelated to the wrapper) in an application on various platforms.  One process
> creates an sqlite database and starts writing data to it; another process opens
> the sqlite database and reads from it.
> 
> On most platforms, the reader sees new data appear in the database
> periodically as the writer creates new records.  But on Solaris, the reader
> never sees any updates -- it only ever sees whatever data was in the database
> when the reader first opened it, even though the writer is continuing to
> insert new data periodically.
> 
> I've seen similar behaviour with non-database files on Solaris -- writes to a
> file across processes aren't seen by the reader unless the reader supplies the
> O_RSYNC flag to the open(2) call.  It seems to be a Solaris peculiarity, as I
> don't see this behaviour on Linux, *BSD, or other commercial Unices.
> 
> I've looked at the sqlite source code, and it does not appear to be supplying
> the O_RSYNC or O_SYNC flags to open(2).
> 
> So, my questions are:
> 
>   1) Has anyone else run into this issue?  Is there a known way to work around
>   it?
> 
>   2) Should the open(2) call be modified to provide the O_RSYNC flag?  Would
>   this have nasty side effects?

One would think that if such an issue existed in sqlite under Solaris, someone 
would have reported it long ago - but you never know.
I don't have access to Solaris at the moment, but you might try testing with
2 instances of the commandline shell, sqlite3, to make sure it's not a python 
driver issue. If you have any trouble building sqlite3 from sources, just post
any compile issue to the list.

Make sure you are running sqlite3 on a database file on a _local_ file system.
Perform updates/inserts/deletes in one sqlite3 instance and see what happens
when you do an appropriate select from the other sqlite3 instance.

If it is proven that Solaris sqlite3 does indeed require the O_RSYNC flag in 
its open, I'm sure it could be put into os_unix.c with an appropriate #ifdef.
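A single-process sketch of the same experiment (two connections standing in for the two sqlite3 shells; nothing Solaris-specific here): a committed write from one connection should be visible to a fresh read from the other.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")

writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")
writer.commit()  # until the commit, the reader must not see the row

# The reader holds no open transaction, so it picks up the committed data.
seen = reader.execute("SELECT x FROM t").fetchall()
print(seen)  # [(1,)]
```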



 




Re: [sqlite] Reading across processes on Solaris

2007-04-01 Thread Joe Wilson
--- Charles Cazabon <[EMAIL PROTECTED]> wrote:
> I'm using sqlite (through the pysqlite wrapper, but this behaviour seems
> unrelated to the wrapper) in an application on various platforms.  One process
> creates an sqlite database and starts writing data to it; another process opens
> the sqlite database and reads from it.
> 
> On most platforms, the reader sees new data appear in the database
> periodically as the writer creates new records.  But on Solaris, the reader
> never sees any updates -- it only ever sees whatever data was in the database
> when the reader first opened it, even though the writer is continuing to
> insert new data periodically.
> 
> I've seen similar behaviour with non-database files on Solaris -- writes to a
> file across processes aren't seen by the reader unless the reader supplies the
> O_RSYNC flag to the open(2) call.  It seems to be a Solaris peculiarity, as I
> don't see this behaviour on Linux, *BSD, or other commercial Unices.
> 
> I've looked at the sqlite source code, and it does not appear to be supplying
> the O_RSYNC or O_SYNC flags to open(2).

I thought that SQLite's use of fdatasync on Solaris should be enough to 
synchronize reads and writes from various processes:

 The fdatasync() function forces  all  currently  queued  I/O
 operations  associated  with  the  file  indicated  by  file
 descriptor fildes to the synchronized I/O completion state.

 The functionality is as described for  fsync(3C)  (with  the
 symbol _XOPEN_REALTIME defined), with the exception that all
 I/O operations are completed as defined for synchronised I/O
 data integrity completion.

As far as I know, this ought to have the same effect as O_RSYNC:

 O_RSYNC
 If this flag is set, reading the data will block until any 
 pending writes which affect the data are complete. Consider 
 the situation where we want to read a block of data, which 
 another process is updating. If this flag is not set, it is 
 indeterminate whether the data returned will be that which 
 is on the disk, or that which is scheduled to be written.

Can you confirm that the pysqlite wrapper that you're using 
compiled sqlite with fdatasync?

  nm your_python_sqlite_wrapper.so | grep fdatasync
or
  nm sqlite3.so | grep fdatasync



 




Re: [sqlite] Reading across processes on Solaris

2007-04-01 Thread Joe Wilson
> I'd have thought so, too.  I've confirmed fdatasync shows up in the symbols in
> the compiled sqlite library, and that two instances of the `sqlite3` SQL shell
> don't show the problem.  Unfortunately, two minimal Python programs don't show
> the problem either.
> 
> That points to the application code -- except that I only see this behaviour
> on Solaris, and there's no OS-specific code in the database-related portions
> of the application.  I'll continue investigating and try to get to the bottom
> of this.  Thanks for the sanity-check.

Might there be some autocommit/sql statement batching difference for the driver 
or your app on different platforms?

If all else fails, try truss on the "failing" Solaris processes and compare 
its output to strace for the same "correct" application on Linux.


 




Re: [sqlite] Index creation

2007-04-02 Thread Joe Wilson
Sure, pre-caching is a hack - but a very effective one.

It has also been suggested to use MAP_POPULATE for mmap() and 
posix_fadvise() in Linux.

 http://www.mail-archive.com/monotone-devel@nongnu.org/msg03222.html

The general reaction against pre-caching (especially for tiny 100M 
databases) is amusing to me. Some programmers would rather wait 3 times 
longer for a database operation while the majority of the RAM in their 
machine goes unused. But if the OS did the same pre-caching behind 
their back without their knowledge, they would be fine with it.

--- Scott Hess <[EMAIL PROTECTED]> wrote:
> An interesting approach would be to use some sort of async I/O
> facility to implement read-ahead.
> 
> Short of that, I have found that in some cases, on some operating
> systems, implementing explicit read-ahead buffering for fts2 segment
> merges improves performance when the disk caches are cold.  Linux
> kernel 2.6 seems to get no benefit, 2.4 gets more.  This is somewhat
> of a special case, though, as fts2 segment merges are merging streams
> from different locations together, like an external sort.
> 
> -scott
> 
> 
> On 3/26/07, Joe Wilson <[EMAIL PROTECTED]> wrote:
> > --- Joe Wilson <[EMAIL PROTECTED]> wrote:
> > > > improved dramatically. So I attempted the creation of the index off 
> > > > hours on
> > > > the production system, and after 4 hours no index.  I can't detect any
> > > > activity at all. The journal file and the .db file just sit at the same 
> > > > size
> > > > for 4 hours.  Why is this failing?  It seems like it is just sitting 
> > > > there
> > > > doing nothing.  When I created the test index, I noticed the journal 
> > > > file
> > > > changing and the .db file changing during the 2.5 hours to create.  On 
> > > > the
> > > > production .db file, nothing is happening.  I have all associated 
> > > > processes
> > > > killed that ineract with the db file, so I know it is not locked.
> > >
> > > I assume that the copied "test" database was indexed immediately after its
> > > creation. If this was the case then the entire file may have been in the 
> > > OS
> > > cache resulting in very quick indexing. Try running "wc prod.db" or
> > > "cat prod.db >/dev/null" and then creating the indexes on prod.db to see
> > > what happens.
> >
> > The original poster confirmed that cat'ting the file to /dev/null reduced 
> > index
> > creation time to 2.5 hours on the original database file.
> >
> > Could some optional heuristic be incorporated into SQLite's pager to do 
> > something
> > similar for such large transactions and/or queries?


 




Re: [sqlite] Data integrity in extreme situations

2007-04-04 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> It has already been reported that Apple Mail keeps all its metadata
> in an SQLite database.  (There was quite a stir in the Mac community
> a few weeks ago when somebody discovered and blogged that you could
> substantially increase the speed of Apple Mail by running VACUUM on
> its database!) 

VACUUM is one way to get the entire sqlite database into the OS disk cache. 

Here's another way:

  cat ~/Library/Mail/Envelope\ Index >/dev/null

The Apple Mail speedup in both cases is not as dramatic after a reboot, though.


 




Re: [sqlite] Re: Currency Formatting within SQLite

2007-04-06 Thread Joe Wilson
--- Mitchell Vincent <[EMAIL PROTECTED]> wrote:
> Yes, I know about creating a function but I'm wondering if I can hook
> into the already-existing Windows API function for it. Currency
> formatting is quite complicated (much more than just storing a
> currency "symbol") and Windows has already done the leg work - I just
> need to gain access to the API function for it.

You gain access to the API function by writing code to call it.

On 4/6/07, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
> > otherwise). You can install a custom function - see
> > sqlite3_create_function[16]. This function can do whatever you want,
> > including calling GetCurrencyFormat.
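A sketch of the create_function approach Igor describes, using Python's sqlite3 binding (the formatting routine is a simple stand-in, not the Windows GetCurrencyFormat API):

```python
import sqlite3

def format_currency(cents):
    # Stand-in formatter; a real implementation would call a locale-aware
    # routine such as GetCurrencyFormat here.
    return "${:,.2f}".format(cents / 100)

conn = sqlite3.connect(":memory:")

# Register a one-argument SQL function named CURRENCY, usable in any query.
conn.create_function("CURRENCY", 1, format_currency)

result = conn.execute("SELECT CURRENCY(123456789)").fetchone()[0]
print(result)  # $1,234,567.89
```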


 




Re: [sqlite] Re: FTS does not support REPLACE

2007-04-10 Thread Joe Wilson
--- Dan Kennedy <[EMAIL PROTECTED]> wrote:
> I checked the code and conflict handling mechanisms (OR ERROR, 
> OR ABORT, OR REPLACE) do not apply to virtual tables.
...
> Something to think about anyhow... Do we want conflict handling 
> for FTS (and other virtual modules)?

It would be nice if any SQL statement that can be performed on a normal 
table could also be done on a virtual table.

But since I ain't writing the code, my vote don't matter.


   




Re: [sqlite] SQLite and nested transactions

2007-04-10 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> It may be more difficult to implement this in a backwards-compatible
> way such that older versions of SQLite can rollback a journal created
> by a newer version if it encounters one.

I wonder if there are many projects that have different versions of 
SQLite updating and reading the same database file at the same time.
This can't be very common.

Perhaps there's a risk if a new version of SQLite is killed mid-way through 
a nested transaction and an older version subsequently tries to read the 
same database, but this is pretty obscure.

Hypothetically, would an older version of SQLite change the database
file or journal file in any destructive way if it encountered a journal
format that it was not familiar with?


   




Re: [sqlite] SQLite and nested transactions

2007-04-11 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:

> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > --- [EMAIL PROTECTED] wrote:
> > > It may be more difficult to implement this in a backwards-compatible
> > > way such that older versions of SQLite can rollback a journal created
> > > by a newer version if it encounters one.
> > 
> > I wonder if there are many projects that have different versions of 
> > SQLite updating and reading the same database file at the same time.
> > This can't be very common.
> > 
> 
> As we learned from the release of 3.3.0, this is more common
> than you might expect.  There are surprisingly many projects 
> that expect to be able to access SQLite databases from both 
> older and newer versions of the library.

It's not quite the same thing - the 3.x file format change was not 
backwards compatible _at all_ with previous versions of SQLite.
Having a large degree of backwards compatibility makes all the difference.

Changing the journal file format to accommodate a hypothetical new 
feature would still produce a backwards compatible database _except_ 
in the rare case where a transaction in a new version of SQLite is 
abruptly aborted or if the power fails. If the transactions are finished 
you would still have backwards compatibility with previous versions.

But I still think that simultaneous read/write access to the same database 
file with different versions of SQLite is not very common.


   




Re: [sqlite] 3.3.15 test coverage improvements? (.dump again)

2007-04-11 Thread Joe Wilson
Unfortunately, the sqlite3 commandline shell is not part of the test suite
despite its widespread use as an administrative tool for sqlite databases.

 http://marc.info/?l=sqlite-users&m=117253099812346&w=2

But you know this already - you're the same guy who made this previous post.  :-)

--- Travis Daygale <[EMAIL PROTECTED]> wrote:
> Change log for 3.3.15 says:
> Many improvements to the test suite.  Test coverage now exceeded 98%
> 
> What does this mean?  
> 
> Does it mean that (say) the sqlite3 command line tool (especially the .dump 
> command) is tested
> at each release now?
> 
>   ---
> 
> I'm asking this because previously on this list it was made clear that the 
> command line tool was
> not part of the test suite.  The .dump command in that tool is (very very 
> often!!!) described as
> a good backup.  But it isn't (IMHO).  It's not a tested tool, and indeed, 
> there were 3.x.x
> releases where that tool was not correctly dumping out (eg.) triggers.  
> 
> It just seems that to download sqlite source and know that it contains a 
> reliable backup tool
> with it would add to this first rate open source software, regardless of how 
> simple it might be
> to write our own code in our language of choice to dump databases.
> 
> Either that or consider removing the .dump command?  I say this because I 
> have seen so very much
> documentation that references the .dump feature, and yet that feature lies in 
> code that is
> (was?) tested very differently than the rest of sqlite. That just seems 
> uncomfortable... 
> 
> Backups are so important, etc..
> 
> I hope my point is made.  I'm not complaining here at all.   Trying to be 
> helpful.



   




RE: [sqlite] File size issue?

2007-04-11 Thread Joe Wilson
> What is worse is that VACUUM didn't really help that much. It takes
> forever, and it doesn't really "fix" the fragmentation either.

That used to be the case, but VACUUM is vastly improved in the latest 
version of SQLite.


   




Re: [sqlite] Select columns & performance

2007-04-11 Thread Joe Wilson
--- Mike Johnston <[EMAIL PROTECTED]> wrote:
> Are there any significant performance benefits by limiting the number of 
> columns in a single
> table select statement?   Does joining (<5 tables) make a significant 
> difference to that answer?

If you need the columns, you don't have much choice but to select them.

Limiting the number of columns in subqueries makes a big difference.
i.e., try to avoid SELECT * in subqueries when you just need a few values.

Do a lot of benchmarking.
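A small illustration with made-up tables: both queries below return the same rows, but the first forces the subquery to carry every column of the table when only one is needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, notes TEXT);
INSERT INTO customers VALUES (1, 'ann', '...'), (2, 'bob', '...');
INSERT INTO orders VALUES (10, 1, 5.0), (11, 2, 7.5);
""")

# Wasteful: the inner SELECT * materializes every customer column.
wide = conn.execute("""
    SELECT total FROM orders
    WHERE customer_id IN (SELECT id FROM (SELECT * FROM customers))
""").fetchall()

# Better: select only the column the outer query needs.
narrow = conn.execute("""
    SELECT total FROM orders
    WHERE customer_id IN (SELECT id FROM customers)
""").fetchall()

print(sorted(wide) == sorted(narrow))  # True: same result, less data shuffled
```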


   




Re: [sqlite] Help with SQL syntax. Ticket #2296

2007-04-12 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Consider this query:
> 
>SELECT a, b FROM t1 UNION SELECT b, a FROM t1 ORDER BY a,b;
> 
> Is the query above equivalent to:
> 
>   (1)  SELECT a, b FROM t1 UNION SELECT b, a FROM t1 ORDER BY 1,2;
> 
> Or is it the same as:
> 
>   (2)  SELECT a, b FROM t1 UNION SELECT b, a FROM t1 ORDER BY 2,1;

Most databases use form (1).

SQLite already leans in this direction anyway, using the first select 
in a union for the headings. May as well be consistent.

  SQLite version 3.3.15
  Enter ".help" for instructions
  sqlite> .header on
  sqlite> select 3 a union select 4 b;
  a
  3
  4
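The same behavior can be seen through a binding; a sketch with Python's sqlite3 module, reading the compound SELECT's column name from cursor.description:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.execute("SELECT 3 AS a UNION SELECT 4 AS b")

# The compound SELECT takes its column names from the first SELECT,
# which is what makes form (1) the consistent choice for ORDER BY.
name = cur.description[0][0]
rows = sorted(row[0] for row in cur)
print(name, rows)  # a [3, 4]
```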



   




Re: [sqlite] Database performance

2007-04-15 Thread Joe Wilson
--- "Andrey A. Lapin" <[EMAIL PROTECTED]> wrote:
> I'm developing a database that stores traffic information (MAC and
> TCP/IP packet headers).
> Query processing is slow.
> For example, the database has 17 million records and is
> 3 GB in size.
> How to increase performance?

No one will be able to help you unless you post the table schema and
provide some working SQL expressions that you wish to speed up.




Re: [sqlite] Journal file question

2007-04-18 Thread Joe Wilson
--- DragonK <[EMAIL PROTECTED]> wrote:
> I'm having the following problem:  a sqlite database file is on an NTFS
> filesystem, in a directory with no permissions to create new files, but only
> to modify the original database. By using filemon i've noticed some access
> denied errors when sqlite attempted to create the journal files.
> I've created a separate test case and (by using filemon again) I've noticed
> that indeed, sqlite uses the journal file, even outside transactions (an
> insert sql was executed).
> 
> My question is how can I stop this behaviour (creating/deleting the journal)
> so that sqlite will work properly under the scenario described above (when
> it can't create the journal)?

No problem - just create your own virtual file system in a file and
change sqlite's I/O functions:

/*
** An instance of the following structure contains pointers to all
** methods on an OsFile object.
*/
struct IoMethod {
  int (*xClose)(OsFile**);
  int (*xOpenDirectory)(OsFile*, const char*);
  int (*xRead)(OsFile*, void*, int amt);
  int (*xWrite)(OsFile*, const void*, int amt);
  int (*xSeek)(OsFile*, i64 offset);
  int (*xTruncate)(OsFile*, i64 size);
  int (*xSync)(OsFile*, int);
  void (*xSetFullSync)(OsFile *id, int setting);
  int (*xFileHandle)(OsFile *id);
  int (*xFileSize)(OsFile*, i64 *pSize);
  int (*xLock)(OsFile*, int);
  int (*xUnlock)(OsFile*, int);
  int (*xLockState)(OsFile *id);
  int (*xCheckReservedLock)(OsFile *id);
  int (*xSectorSize)(OsFile *id);
};

I don't know of any other way given your constraints.

See also: Single-file virtual file systems
http://en.wikipedia.org/wiki/Virtual_file_system





Re: [sqlite] How is the windows binary built?

2007-04-20 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> I'm using cygwin under windows XP.
> gcc version 3.4.4
> 
> I unzipped the sqlite-3.3.16.tar.gz to the directory sqlite-3.3.16.
> 
> Executed the following:
> 
> cd sqlite-3.3.16
> mkdir build
> cd build
> ./configure
> make
> 
> The resulting sqlite3.exe is 4 times bigger than the windows release in 
> sqlite-3.3.16.zip.
> 
> Can anyone explain the size difference?

The debug symbols.

  strip sqlite3.exe

The sqlite3.exe posted on the website is built with a MinGW gcc cross 
compiler hosted on Linux. It is not dependent on cygwin DLLs, unlike the
cygwin version:

# cygwin
$ cygcheck ./sqlite3.exe 
.\sqlite3.exe
  c:\cygwin\bin\cygwin1.dll
C:\WINNT\system32\ADVAPI32.DLL
  C:\WINNT\system32\NTDLL.DLL
  C:\WINNT\system32\KERNEL32.DLL
  C:\WINNT\system32\RPCRT4.DLL
  c:\cygwin\bin\cygreadline6.dll
c:\cygwin\bin\cygncurses-8.dll
C:\WINNT\system32\USER32.dll
  C:\WINNT\system32\GDI32.DLL

# MinGW
$ cygcheck ./sqlite3.exe 
.\sqlite3.exe
  C:\WINNT\system32\KERNEL32.dll
C:\WINNT\system32\NTDLL.DLL
  C:\WINNT\system32\msvcrt.dll

DLLs aside, the cygwin version has the advantage of playing nice with 
rxvt and xterm.




Re: Re: [sqlite] How is the windows binary built?

2007-04-21 Thread Joe Wilson
--- Miha Vrhovnik <[EMAIL PROTECTED]> wrote:
> >  strip sqlite3.exe
> is not ok at least for DLL, because it strips everything even reallocation 
> info.
> strip --strip-unneeded sqlite3.dll

The GNU toolchain's strip command corrupts the relocation information for 
MinGW DLLs, so I would not recommend using it.

  http://www.sqlite.org/cvstrac/tktview?tn=1474

Use the configure generated Makefile target "make sqlite3.dll" instead for 
MinGW to have the linker not generate the extra stuff in the first place:

#
# Windows section
#
dll: sqlite3.dll

REAL_LIBOBJ = $(LIBOBJ:%.lo=.libs/%.o)

$(REAL_LIBOBJ): $(LIBOBJ)

sqlite3.def: $(REAL_LIBOBJ)
	echo 'EXPORTS' >sqlite3.def
	nm $(REAL_LIBOBJ) | grep ' T ' | grep ' _sqlite3_' \
		| sed 's/^.* _//' >>sqlite3.def

sqlite3.dll: $(REAL_LIBOBJ) sqlite3.def
	$(TCC) -shared -o sqlite3.dll sqlite3.def \
		-Wl,"--strip-all" $(REAL_LIBOBJ)





[sqlite] Re: sqlite3 shell doesn't install correctly

2007-04-21 Thread Joe Wilson
> I've used SQLite-3.3.4 for quite a while, mostly the Tcl binding.
> A few days ago I installed 3.3.15 and the Tcl binding worked fine.
> Tonight I downloaded 3.3.16 and compiled it without any errors
> or warnings and then installed it. When I tried to execute sqlite3
> the following error appered:
> 
> sqlite3: symbol lookup error: sqlite3: undefined symbol: sqlite3_io_trace
> 
> I switched back to 3.3.15 with the same result. A switch back to
> 3.3.4 behaved normal.
> 
> What's wrong with 3.3.15 and 3.3.16?


Apply this patch to fix this problem.


Index: src/main.c
===
RCS file: /sqlite/sqlite/src/main.c,v
retrieving revision 1.370
diff -u -3 -p -r1.370 main.c
--- src/main.c  18 Apr 2007 14:24:33 -  1.370
+++ src/main.c  21 Apr 2007 13:35:18 -
@@ -27,6 +27,7 @@ const char sqlite3_version[] = SQLITE_VE
 const char *sqlite3_libversion(void){ return sqlite3_version; }
 int sqlite3_libversion_number(void){ return SQLITE_VERSION_NUMBER; }
 
+#if !defined(SQLITE_OMIT_TRACE) && defined(SQLITE_ENABLE_IOTRACE)
 /*
 ** If the following function pointer is not NULL and if
 ** SQLITE_ENABLE_IOTRACE is enabled, then messages describing
@@ -34,6 +35,7 @@ int sqlite3_libversion_number(void){ ret
 ** are intended for debugging activity only.
 */
 void (*sqlite3_io_trace)(const char*, ...) = 0;
+#endif
 
 /*
 ** If the following global variable points to a string which is the
Index: src/shell.c
===
RCS file: /sqlite/sqlite/src/shell.c,v
retrieving revision 1.160
diff -u -3 -p -r1.160 shell.c
--- src/shell.c 28 Feb 2007 06:14:25 -  1.160
+++ src/shell.c 21 Apr 2007 13:35:19 -
@@ -1242,6 +1242,7 @@ static int do_meta_command(char *zLine, 
 }
   }else
 
+#if !defined(SQLITE_OMIT_TRACE) && defined(SQLITE_ENABLE_IOTRACE)
   if( c=='i' && strncmp(azArg[0], "iotrace", n)==0 ){
 extern void (*sqlite3_io_trace)(const char*, ...);
 if( iotrace && iotrace!=stdout ) fclose(iotrace);
@@ -1261,6 +1262,7 @@ static int do_meta_command(char *zLine, 
   }
 }
   }else
+#endif
 
 #ifndef SQLITE_OMIT_LOAD_EXTENSION
   if( c=='l' && strncmp(azArg[0], "load", n)==0 && nArg>=2 ){





[sqlite] Re: sqlite3 shell doesn't install correctly

2007-04-22 Thread Joe Wilson
> I've used SQLite-3.3.4 for quite a while, mostly the Tcl binding.
> A few days ago I installed 3.3.15 and the Tcl binding worked fine.
> Tonight I downloaded 3.3.16 and compiled it without any errors
> or warnings and then installed it. When I tried to execute sqlite3
> the following error appered:
> 
> sqlite3: symbol lookup error: sqlite3: undefined symbol: sqlite3_io_trace
> 
> I switched back to 3.3.15 with the same result. A switch back to
> 3.3.4 behaved normal.
> 
> What's wrong with 3.3.15 and 3.3.16?

Okay, I get it now. You seem to be using a 3.3.15 sqlite3 binary requiring
the shared library sqlite3.so. This 3.3.15 sqlite3 binary's shell.c makes 
reference to sqlite3_io_trace in the 3.3.15 sqlite3.so's main.c.
You cannot use an older 3.3.4 sqlite3.so against the 3.3.15 sqlite3 binary
because the older shared library lacks sqlite3_io_trace.

In general, sqlite3.so is backwards compatible, but not necessarily 
forward compatible. So you can upgrade an older binary to use a more 
recent 3.x sqlite3.so release, but not the other way around.


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sqlite3 shell doesn't install correctly

2007-04-22 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Ulrich =?iso-8859-1?q?Sch=F6bel?= <[EMAIL PROTECTED]> wrote:
> > On Saturday 21 April 2007 15:40, Joe Wilson wrote:
> > >
> > > Apply this patch to fix this problem.
> > >
> > 
> > Thanks for the patch, but there is still something wrong in the
> > linking stage.
> > 
> 
> I do not understand why you are having problems.  Nor can
> I figure out why Joe's patch might help.

I was just trying to avoid the unresolved references to sqlite3_io_trace
that he was reporting, without knowing what options he built the code with
in the first place.

But is there any reason why the extern declaration of sqlite3_io_trace in
shell.c and its definition in main.c should not be #ifdef'd like
the others?





Re: [sqlite] Re: sqlite3 shell doesn't install correctly

2007-04-22 Thread Joe Wilson
--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> In general, sqlite3.so is backwards compatible, but not necessarily 
> forward compatible. So you can upgrade an older binary to use a more 
> recent 3.x sqlite3.so release, but not the other way around.

I meant to say:

 In general, sqlite3.so is forward compatible, but not necessarily 
 backwards compatible. So you can upgrade an older binary to use a more 
 recent 3.x sqlite3.so release, but not the other way around.





[sqlite] older http://sqlite.org/sqlite-3.x.y.tar.gz downloads no longer available?

2007-04-22 Thread Joe Wilson
I noticed that sqlite tar.gz downloads prior to 

  http://sqlite.org/sqlite-3.3.10.tar.gz 

are missing. So historical releases are not available except by CVS.
Is this intentional?




Re: [sqlite] Re: sqlite3 shell doesn't install correctly

2007-04-22 Thread Joe Wilson
--- Ulrich Schöbel <[EMAIL PROTECTED]> wrote:
> Thanks for the patch, but there is still something wrong in the
> linking stage.
> 
> I have 3 instances of SQLite on my system:
> The first is 3.2.1, installed with the system and not used, at least
> not by me, library in /usr/lib.
> The second is installed in a separate directory tree, version 3.3.4,
> which I don't want to change, at least not now.
> The third is in /usr/local/..., version 3.3.4 until now, to be substituted
> by 3.3.16.
> 
> When I run make it produces an sqlite3 binary linked against the
> 3.3.4 library. I made it temporarily unsearchable, now it links
> against the 3.2.1. Next step was to remove the 3.2.1 lib in /usr/lib.
> Now the linker doesn't find a library. It simply refuses to accept the
> right one. Scanning libtool showed all three lib directories in place.
> 
> The shell script sqlite3 runs fine as it did yesterday without the patch.

To avoid future linkage and path problems with the sqlite3 shell, 
I'd recommend building it statically. Fewer possible problems.




Re: [sqlite] older http://sqlite.org/sqlite-3.x.y.tar.gz downloads no longer available?

2007-04-22 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > I noticed that sqlite tar.gz downloads prior to 
> > 
> >   http://sqlite.org/sqlite-3.3.10.tar.gz 
> > 
> > are missing. So historical releases are not available except by CVS.
> > Is this intentional?
> 
> Yes.  Why would you want an historical release?  Everything
> is backwards compatible.

To support and debug software that uses specific old sqlite releases
(both open source and proprietary). I was trying to see if 3.3.4 had
sqlite3_io_trace, for example. Also, some historical versions have (much)
smaller memory footprints in certain cases.

In my case, SQLite 3.2.2 in particular executes several classes of data
mining queries involving GROUP BY clauses (which cannot be indexed)
10X faster, using 100X less temp store.

  http://www.sqlite.org/cvstrac/tktview?tn=1809

I still recommend that version for exactly this case.

It's not a big problem that the old releases are no longer on the
site - one can still grab a release via CVS - I was just wondering
about the recent change.





Re: [sqlite] v3.3.16-threads fails to build via freebsd ports sys

2007-04-22 Thread Joe Wilson
This libtool --tag problem does not seem to be unique to sqlite.

 http://lists.freebsd.org/pipermail/freebsd-questions/2005-November/104076.html

I think it's a question for the freebsd port maintainers.

--- snowcrash <[EMAIL PROTECTED]> wrote:
> hi,
> 
> building v3.3.16 from src on osx works fine.
> 
> trying same, using freebsd 6.2's ports sys fails to build,
> 
> % portinstall sqlite3-threads
> [Updating the pkgdb  in /var/db/pkg ... - 97
> packages found (-1 +1) (...). done]
> --->  Installing 'sqlite3-threads-3.3.16' from a port
> (databases/sqlite3-threads)
> --->  Building '/usr/ports/databases/sqlite3-threads'
> ...
> /usr/local/bin/libtool --mode=compile --tag=CC
> /usr/local/libexec/ccache/world-cc -pipe -march=pentium-mmx
> -DSQLITE_ENABLE_FTS1 -g -I. -I./src -DSQLITE_DEBUG=1
> -I/usr/local/include/tcl8.4 -DTHREADSAFE=1
> -DSQLITE_THREAD_OVERRIDE_LOCK=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -c
> ./ext/fts1/fts1_tokenizer1.c
>  /usr/local/libexec/ccache/world-cc -pipe -march=pentium-mmx
> -DSQLITE_ENABLE_FTS1 -g -I. -I./src -DSQLITE_DEBUG=1
> -I/usr/local/include/tcl8.4 -DTHREADSAFE=1
> -DSQLITE_THREAD_OVERRIDE_LOCK=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -c
> ./ext/fts1/fts1_tokenizer1.c  -fPIC -DPIC -o .libs/fts1_tokenizer1.o
>  /usr/local/libexec/ccache/world-cc -pipe -march=pentium-mmx
> -DSQLITE_ENABLE_FTS1 -g -I. -I./src -DSQLITE_DEBUG=1
> -I/usr/local/include/tcl8.4 -DTHREADSAFE=1
> -DSQLITE_THREAD_OVERRIDE_LOCK=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -c
> ./ext/fts1/fts1_tokenizer1.c -o fts1_tokenizer1.o >/dev/null 2>&1
> /usr/local/bin/libtool --mode=link /usr/local/libexec/ccache/world-cc
> -pipe -march=pentium-mmx -DSQLITE_ENABLE_FTS1 -g -I. -I./src
> -DSQLITE_DEBUG=1  -I/usr/local/include/tcl8.4 -DTHREADSAFE=1
> -DSQLITE_THREAD_OVERRIDE_LOCK=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -o
> libsqlite3.la alter.lo analyze.lo attach.lo auth.lo btree.lo build.lo
> callback.lo complete.lo date.lo delete.lo expr.lo func.lo hash.lo
> insert.lo loadext.lo main.lo opcodes.lo os.lo os_unix.lo os_win.lo
> os_os2.lo pager.lo parse.lo pragma.lo prepare.lo printf.lo random.lo
> select.lo table.lo tokenize.lo trigger.lo update.lo util.lo vacuum.lo
> vdbe.lo vdbeapi.lo vdbeaux.lo vdbefifo.lo vdbemem.lo where.lo utf.lo
> legacy.lo vtab.lo fts1.lo fts1_hash.lo fts1_porter.lo
> fts1_tokenizer1.lo -pthread  \
>  -rpath /usr/local/lib -version-info "8:6:8"
> libtool: link: unable to infer tagged configuration
> libtool: link: specify a tag with `--tag'
> gmake: *** [libsqlite3.la] Error 1
> *** Error code 2
> 
> Stop in /usr/ports/databases/sqlite3-threads.
> ** Command failed [exit code 1]: /usr/bin/script -qa
> /tmp/portinstall.47997.0 env make
> ** Fix the problem and try again.
> ** Listing the failed packages (*:skipped / !:failed)
> ! databases/sqlite3-threads (unknown build error)
> --->  Packages processed: 0 done, 0 ignored, 0 skipped and 1 failed
> %
> 
> is this a known issue?  short of a manual build, is there a fix/workaround?






Re: [sqlite] test fixture tcl errors on Windows

2007-04-24 Thread Joe Wilson
The test script itself still has test.db open, and as a result Windows cannot
delete it - Windows refuses to delete an open file. It worked on UNIX because
you can delete a file at any time, whether it is open, locked or otherwise.

If you apply this patch, exclusive2.test will run to completion on cygwin
without error. It ought to work on MinGW as well.


Index: test/exclusive2.test
===================================================================
RCS file: /sqlite/sqlite/test/exclusive2.test,v
retrieving revision 1.4
diff -u -3 -p -r1.4 exclusive2.test
--- test/exclusive2.test	16 Apr 2007 15:02:20 -  1.4
+++ test/exclusive2.test	25 Apr 2007 02:42:56 -
@@ -161,6 +161,7 @@ do_test exclusive2-2.4 {
   seek $fd 1024
   puts -nonewline $fd [string repeat [binary format c 0] 1]
   flush $fd
+  close $fd
   t1sig
 } $::sig
 

--- Dennis Cote <[EMAIL PROTECTED]> wrote:

> I'm getting a strange failure of the test suite on Windows (XP all 
> updates). The exclusive2 test is getting a permission denied error when 
> deleting a file. I can manually delete the file and the tclsh can 
> execute the same file delete command if I enter it manually.
> 
> $ ./testfixture.exe ../sqlite/test/exclusive2.test
> exclusive2-1.0... Ok
> exclusive2-1.1... Ok
> exclusive2-1.2... Ok
> exclusive2-1.3... Ok
> exclusive2-1.4... Ok
> exclusive2-1.5... Ok
> exclusive2-1.6... Ok
> exclusive2-1.7... Ok
> exclusive2-1.9... Ok
> exclusive2-1.10... Ok
> exclusive2-1.11... Ok
> exclusive2-2.1... Ok
> exclusive2-2.2... Ok
> exclusive2-2.3... Ok
> exclusive2-2.4... Ok
> exclusive2-2.5... Ok
> exclusive2-2.6... Ok
> exclusive2-2.7... Ok
> exclusive2-2.8... Ok
> c:\SQLite\SQLiteV3\build\testfixture.exe: error deleting "test.db": 
> permission denied
> while executing
> "file delete -force test.db"
> (file "../sqlite/test/exclusive2.test" line 192)
> 
> $ ls test.db
> test.db
> 
> $ tclsh
> file delete -force test.db
> exit
> 
> $ ls test.db
> ls: test.db: No such file or directory
> 
> I'm building SQLite and running these tests using MinGW/MSYS. I haven't 
> had any problems before, but I haven't built sqlite from source since 
> around version 3.3.12.





Re: [sqlite] test fixture tcl errors on Windows

2007-04-24 Thread Joe Wilson
a better patch...

Index: test/exclusive2.test
===================================================================
RCS file: /sqlite/sqlite/test/exclusive2.test,v
retrieving revision 1.4
diff -u -3 -p -r1.4 exclusive2.test
--- test/exclusive2.test	16 Apr 2007 15:02:20 -  1.4
+++ test/exclusive2.test	25 Apr 2007 03:05:04 -
@@ -161,6 +161,7 @@ do_test exclusive2-2.4 {
   seek $fd 1024
   puts -nonewline $fd [string repeat [binary format c 0] 1]
   flush $fd
+  close $fd
   t1sig
 } $::sig
 
@@ -242,4 +243,8 @@ do_test exclusive2-3.6 {
   pagerChangeCounter test.db
 } {5}
 
+db close
+file delete -force test.db
+file delete -force test.db-journal
+
 finish_test






Re: [sqlite] how to compile sqlite3 in ads1.2 environment?

2007-04-25 Thread Joe Wilson
What's an ads1.2?

--- [EMAIL PROTECTED] wrote:
> Dose anybody compile the sqlite3 in the ads1.2 environment?




Re: [sqlite] how to compile sqlite3 in ads1.2 environment?

2007-04-25 Thread Joe Wilson
Try compiling with sqlite3.c from http://sqlite.org/sqlite-source-3_3_17.zip

It is already pre-generated and does not require generating the parser from 
the .y file.

See: http://www.sqlite.org/cvstrac/wiki?p=TheAmalgamation

--- [EMAIL PROTECTED] wrote:
> arm developer suite v1.2
> support arm7,arm9





Re: [sqlite] sqlite and borland c++ builder

2007-04-28 Thread Joe Wilson
>   When I try to use the header I get errors
> 
>  [C++ Error] sqlite3.h(1778): E2232 Constant member
> 'sqlite3_index_info::nConstraint' in class without constructors

It appears it is trying to compile the sqlite header file as if it were C++.
Lemme guess - you're using the amalgamated sqlite3.c from 3.3.17, right?

from the generated sqlite3.c:

 /*
 ** Make sure we can call this stuff from C++.
 */
 #if 0
 extern "C" {
 #endif

See the #if 0? That's the problem. It should be:

 #if __cplusplus

SQLite 3.3.17 has a bug in sqlite3.c generation.
To work around this issue, do this:

 extern "C" {
 #include "sqlite3.h"
 }





Re: [sqlite] sqlite and borland c++ builder

2007-04-28 Thread Joe Wilson
I wrote too quickly - sqlite3.h correctly uses __cplusplus for extern "C".

But the amalgamated sqlite3.c cannot be compiled with a C++ compiler
for the reasons described below.

--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> >   When I try to use the header I get errors
> > 
> >  [C++ Error] sqlite3.h(1778): E2232 Constant member
> > 'sqlite3_index_info::nConstraint' in class without constructors
> 
> It appears it is trying to compile the sqlite header file as if it were C++.
> Lemme guess - you're using the almalgomated sqlite3.c from 3.3.17, right?
> 
> from the generated sqlite3.c:
> 
>  /*
>  ** Make sure we can call this stuff from C++.
>  */
>  #if 0
>  extern "C" {
>  #endif
> 
> See the #if 0? That's the problem. It should be:
> 
>  #if __cplusplus
> 
> SQLite 3.3.17 has a bug in sqlite3.c generation.
> To work around this issue, do this:
> 
>  extern "C" {
>  #include "sqlite3.h"
>  }
> 




