Re: [sqlite] C Code in a trigger supported in 3.3.x?

2006-01-25 Thread Russell Leighton


Solved. Bug in my code. Sorry for the alarm.

Russell Leighton wrote:



Update. I actually have 2 nearly identical triggers as described below.

One works; the other produces "shifted" output. It appears that the
sqlite3_column_xxx() functions are returning the wrong data when
executing inside this trigger. Calling the same function from outside
the trigger produces the proper data.

Digging into it further...

Russell Leighton wrote:



This worked in 3.2.8 but not in 3.3.1/3.3.2.

I declare a function using sqlite3_create_function(). It takes one
arg, an integer, and calls the usual code to execute a 'select' based
on the integer as a key, then does some work based on the result.


I declare a temp trigger to call the above when certain columns are 
updated.
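For context, here is a minimal sketch of the kind of setup being described (table, column, and function names are invented for illustration; the db handle is passed to the UDF through the user-data pointer):

  #include <stdio.h>
  #include <sqlite3.h>

  /* UDF: takes one integer key, runs a SELECT on it, and works
  ** with the result.  The db handle arrives via the user-data
  ** pointer given to sqlite3_create_function(). */
  static void do_work(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    sqlite3 *db = (sqlite3*)sqlite3_user_data(ctx);
    sqlite3_stmt *stmt;
    int key = sqlite3_value_int(argv[0]);
    if( sqlite3_prepare(db, "SELECT val FROM lookup WHERE key=?",
                        -1, &stmt, 0)==SQLITE_OK ){
      sqlite3_bind_int(stmt, 1, key);
      if( sqlite3_step(stmt)==SQLITE_ROW ){
        printf("val=%s\n", (const char*)sqlite3_column_text(stmt, 0));
      }
      sqlite3_finalize(stmt);
    }
    sqlite3_result_null(ctx);
  }

  /* Registered and wired to a temp trigger roughly like this:
  **
  **   sqlite3_create_function(db, "do_work", 1, SQLITE_UTF8, db,
  **                           do_work, 0, 0);
  **   sqlite3_exec(db,
  **     "CREATE TEMP TRIGGER t1_upd AFTER UPDATE OF col ON t1 "
  **     "BEGIN SELECT do_work(new.key); END;", 0, 0, 0);
  */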


*When the trigger executes, the result of the select is garbage.*

I call the same function outside the trigger and it works fine.

Is there some rule that you cannot execute C API code inside a
trigger?


Thx

Russ


Re: [sqlite] TCLSQLITE3: Problem inserting a variable in a eval command

2006-01-25 Thread Lazarus Long

----- Original Message -----
From: "Jan Kandziora" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, January 25, 2006 9:05 PM
Subject: Re: [sqlite] TCLSQLITE3: Problem inserting a variable in a eval
command


> Am Mittwoch, 25. Januar 2006 20:36 schrieb Lazarus Long:
>> db eval { SELECT * FROM `auction_items` "$where" ORDER BY
>> `endtime`
>> DESC LIMIT 15 }
>>
> The problem is, db eval treats "where" as a Tcl variable for its
> internal variable expansion, which works only in a limited, SQL-like,
> not Tcl-like fashion. If you want to build part of the SQL statement
> from scratch, you have to write the final eval like this:
>
> db eval "SELECT * FROM auction_items $where ORDER BY endtime"
>
> This way $where undergoes Tcl variable expansion *before* "db
> eval" gets it. There is a small drawback, however: you have to
> protect all variables that db eval should treat *internally* with
> backslashes.
>
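For illustration, the difference between the two quoting styles (the table and variable names here are invented, not from the thread):

  set n 15
  # Braced: tclsqlite itself expands $n as a bound value -- safe, but
  # $where would be treated as a single bound string, not as SQL text.
  db eval { SELECT * FROM items LIMIT $n } { puts $id }

  # Quoted: Tcl expands $where into SQL text first; variables that
  # db eval should bind internally must be protected with backslashes.
  db eval "SELECT * FROM items $where LIMIT \$n" { puts $id }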

Jan:

Thank you for your reply. That made it work.

Best regards,

--
Lazarus Long




[sqlite] Copy tables from one memory database to another with no disk access?

2006-01-25 Thread Randy Graham

Hello,

Is it possible to copy tables from one memory database to another 
without going through an intermediate disk database?
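One approach that comes to mind (a sketch, not from the thread, and the table name is hypothetical): if both in-memory databases can live on the same connection, ATTACH a second ':memory:' database and copy with CREATE TABLE ... AS SELECT:

  ATTACH DATABASE ':memory:' AS mem2;
  CREATE TABLE mem2.tbl AS SELECT * FROM main.tbl;

Note that each ATTACH of ':memory:' creates a new, empty in-memory database on that connection, so this does not help if the two databases already exist on separate connections.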


Thanks,


-Randy



Re: [sqlite] page size and record packing

2006-01-25 Thread deminix
Thank you.



Re: [sqlite] page size and record packing

2006-01-25 Thread drh
deminix <[EMAIL PROTECTED]> wrote:
> I was curious if a single page of the database was limited to at most one
> record, aka can records be packed into a single page?

Multiple small records can fit on one page.  Or a large record
can span multiple pages.

> 
> If it does pack records, then the purpose of the page size becomes less
> obvious to me.  It can certainly be used to match the size of the underlying
> OS/hardware more efficiently, but could it serve another purpose?
> 

The file is read and written a page at a time.  If you
have a large page size (32K, say) but only want to read
or write a few bytes, you still have to do I/O on the
whole 32K.  This argues for smaller pages.

On the other hand, there is a fixed memory space, disk
space, and processing time overhead associated with each
page.  The smaller the pages, the more overhead for the
same amount of data.  This argues for larger pages.

A 1K page works well on unix for most applications.  But
it is nice to have the flexibility to adjust the page size
up or down for those cases where a different page size
might give better performance.
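For reference, the page size is adjusted with a pragma, and it only takes effect on a new, empty database (a minimal sketch):

  PRAGMA page_size = 4096;  -- must be issued before the first table is created
  CREATE TABLE t(x);
  PRAGMA page_size;         -- reports the page size now in effect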

--
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] sqlite and index

2006-01-25 Thread Dennis Cote

Klaus Schären wrote:


Hi,

I have the following table definition:
CREATE TABLE Pistor (Nummer INTEGER PRIMARY KEY NOT NULL, Artikel TEXT NOT
NULL DEFAULT '', Inhalt TEXT NOT NULL DEFAULT '')

In addition, I have created the following index:
CREATE UNIQUE INDEX xPistor ON PISTOR(ARTIKEL, NUMMER)

When I run a query with "order by artikel", sqlite returns the result very
fast.
Query: SELECT ARTIKEL, NUMMER FROM PISTOR ORDER BY ARTIKEL

When I run a query with "order by artikel, nummer" (which is exactly the
index xPistor!), sqlite takes longer.
Query: SELECT ARTIKEL, NUMMER FROM PISTOR ORDER BY ARTIKEL, NUMMER

I ran the explain command with both queries. With the second query, sqlite
does a sort. Is this ok? Why does sqlite not use the index xPistor?

I tried the query with different versions of sqlite (3.2.8, 3.3.1 and 3.3.2).
I have also run the ANALYZE command, but nothing changes.

The test db is included in the attached zip.

Thanks for your help

K. Schären



Klaus,

You have defined the field Nummer as a primary key. Therefore, each row 
will have a unique value of Nummer. Your second index will enforce a 
rule that all pairs of Artikel and Nummer must be unique, but that must 
always be true since all values of Nummer are unique, regardless of the 
value of Artikel.
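In other words, the UNIQUE constraint on that index adds nothing; a plain index would give the same guarantee in practice (a sketch):

  -- Nummer is already unique on its own, so this is effectively equivalent:
  CREATE INDEX xPistor ON Pistor(Artikel, Nummer);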


The fact that Nummer is a primary key seems to confuse the optimizer for 
this query. If I remove the primary key part of the definition from the 
field Nummer, and use the explicit unique index you defined, then the 
second query uses that index. When I do this:


CREATE TABLE Pistor2 (Nummer INTEGER NOT NULL, Artikel TEXT NOT
NULL DEFAULT '', Inhalt TEXT NOT NULL DEFAULT '');

CREATE UNIQUE INDEX xPistor2 ON PISTOR2(ARTIKEL, NUMMER);

INSERT INTO pistor2 SELECT * FROM pistor;

EXPLAIN QUERY PLAN SELECT ARTIKEL, NUMMER FROM PISTOR2 ORDER BY ARTIKEL, NUMMER;

the query plan shows that the index is being used, and an explain
command shows that the sort is not done.


I think you have found an issue (I won't call it a bug because the query
does work correctly) with the optimizer. Your first query uses the index
because the output is not ordered by Nummer, so the implicit index
on Nummer is of no use; the optimizer then finds the other index
and uses it. For the second query, the optimizer seems to think it can
use the implicit index on Nummer to do part of the ordering, and since
SQLite only uses one index per table scan it can't also use the explicit
index. It then determines that it needs a sort to order by Artikel. My
change removes the implicit index on Nummer, so the optimizer finds and
uses the explicit index to scan the table in the correct order.


HTH
Dennis Cote


[sqlite] TCLSQLITE3: Problem inserting a variable in a eval command

2006-01-25 Thread Lazarus Long

Hello, everybody:

First let me say that I'm no expert with either Tcl or SQLite.

I'm trying to convert the underlying database of a Tcl script to
SQLite. I'm using 3.2.3 (I also tried 3.2.8, with the same result).

I don't think all of the following information will be needed, but
just in case I'm wrong, here goes:

The database is set up as follows:

proc create_db { cargs } {
    if { $cargs == "redo" } {
        db eval { DROP INDEX `byname` }
        db eval { DROP TABLE `auction_items` }
        db eval { DROP TABLE `auction_bids` }
        db eval { VACUUM }
    }

    db eval { CREATE TABLE `auction_items` (
        `id` INTEGER PRIMARY KEY AUTOINCREMENT,
        `seller` TEXT NOT NULL DEFAULT '',
        `createtime` TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
        `endtime` TEXT NOT NULL DEFAULT '0000-00-00 00:00:00',
        `item_id` INTEGER NOT NULL DEFAULT 0,
        `item_name` TEXT NOT NULL DEFAULT '',
        `item_icon` TEXT NOT NULL DEFAULT '',
        `item_count` INTEGER NOT NULL DEFAULT 0,
        `item_category` INTEGER NOT NULL DEFAULT 0,
        `item_quality` INTEGER NOT NULL DEFAULT 0,
        `duration` INTEGER NOT NULL DEFAULT 0,
        `start_price` INTEGER NOT NULL DEFAULT 0,
        `buy_price` INTEGER NOT NULL DEFAULT 0,
        `item_level` INTEGER NOT NULL DEFAULT 0) }
    db eval { CREATE INDEX `byname` ON `auction_items` (`seller`) }
    db eval { CREATE TABLE `auction_bids` (
        `id` INTEGER PRIMARY KEY AUTOINCREMENT,
        `bidder` TEXT NOT NULL DEFAULT '',
        `auc_id` INTEGER NOT NULL DEFAULT 0,
        `money` INTEGER NOT NULL DEFAULT 0) }
    return "database setup done."
}

The problem procedure is:

proc searchitems { full } {
    set sdata [ split $full "#" ]
    set criteria [ lindex $sdata 1 ]
    set minlevel [ lindex $sdata 2 ]
    set maxlevel [ lindex $sdata 3 ]
    set quality [ lindex $sdata 4 ]
    set where "WHERE ("

    if { $criteria != "NULL" } {
        append where "`item_name` LIKE %$criteria%"
    }

    if { $minlevel != "NULL" } {
        if { $where != "WHERE (" } {
            append where " AND "
        }
        append where "`item_level` >= $minlevel"
    }

    if { $maxlevel != "NULL" } {
        if { $where != "WHERE (" } {
            append where " AND "
        }
        append where "`item_level` <= $maxlevel"
    }

    if { $quality != "NULL\n" && $quality != "-1" } {
        if { $where != "WHERE (" } {
            append where " AND "
        }
        append where "`item_quality` = $quality"
    }

    append where ")"
    set pname [ GetName $player ]
    set ret "AUCTION_ITEM_LIST:"

    if { $where == "WHERE ()" } {
        set where ""
    }

    puts "SELECT * FROM `auction_items` $where ORDER BY `endtime` DESC LIMIT 15"
    db eval { SELECT * FROM `auction_items` "$where" ORDER BY `endtime` DESC LIMIT 15 } {
        # `id` `seller` `createtime` `endtime` `item_id` `item_name` `item_icon` `item_count`
        # `item_category` `item_quality` `duration` `start_price` `buy_price` `item_level`

        set ptime [ expr ( ( [ clock scan $endtime ] - [ clock seconds ] ) / 60 ) ]
        set money 0
        db eval { SELECT `money` FROM `auction_bids` WHERE (`auc_id` = $id) ORDER BY `money` DESC LIMIT 1 } {
            puts "money=\"$money\""
        }

        append ret "\n#$item_id#$item_name#$item_icon#$item_count#$item_quality#1#$item_level#$start_price#50#$buy_price#$money#$pname#$ptime#$id#$seller"
    }

    return "$ret\n"
}

The problem:

Regarding the $where variable, which is constructed at the beginning
of the above procedure: the db eval call simply ignores it and
retrieves all the entries from the database.

The puts command just before the call to eval prints correctly; it's
the eval command that ignores the WHERE clause. I tried '$where',
"$where" and [ { $where } ] without success; if I enter $where
without quoting it I get a call error and the script exits.

Can anyone please put me on the right track to figure this out? I
don't think it's Tcl related, but rather SQLite eval syntax; but I
may be wrong.

Thank you very much in advance.

Best regards,

--
Lazarus Long




Re: [sqlite] GUID in place of AUTO_INCREMENT

2006-01-25 Thread Mike Ashmore

On Jan 25, 2006, at 10:00 AM, Kurt Welgehausen wrote:


... but Rails doesn't seem to support composite keys.


I don't know much about RoR. How does RoR keep you from creating
a multi-column key on an SQLite table?

Regards


In point of fact, it doesn't prevent me from *creating* a multi-column
key, just from using it effectively. It's just that Rails'
implementation of ActiveRecord doesn't understand multi-column PKs.
As you might suspect, ActiveRecord handles the relationships between
records, and that's a lot easier when PKs and FKs are single-column
and follow a reasonable naming convention.


At any rate, Dr. Hipp's suggestion of the AFTER INSERT trigger was  
exactly what I needed. Thanks!


-Mike Ashmore


Re: [sqlite] GUID in place of AUTO_INCREMENT

2006-01-25 Thread drh
Mike Ashmore <[EMAIL PROTECTED]> wrote:
> So, I've got what I think is a good reason to modify the behavior of  
> primary keys in my (Ruby on Rails-based) application.
> 
> What I need is a modifier, say 'GUID', that can be applied in place  
> of (mutually exclusive with) the AUTOINCREMENT modifier [1]. On a  
> field with the GUID modifier, if an INSERT statement proffers a NULL  
> value for that field, an RFC4122-compliant GUID generator should be  
> invoked to create a globally unique value for that field.
> 

You can do this with an insert trigger.  First create 
a user-defined function to generate your guid. Suppose 
that function is named new_guid().  Further suppose that
the column that wants the guid is called "pk".  Then write
the trigger like this:

  -- note: CREATE TRIGGER requires a trigger name; "guid_fill" is arbitrary
  CREATE TRIGGER guid_fill AFTER INSERT ON table1 WHEN new.pk IS NULL
  BEGIN
    UPDATE table1 SET pk=new_guid() WHERE rowid=new.rowid;
  END;
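For completeness, a sketch of how such a function might be registered with the C API (generate_guid() here is a hypothetical RFC4122 generator, not part of SQLite):

  static void new_guid_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    char buf[37];
    generate_guid(buf);  /* hypothetical: fills buf with a 36-char GUID string */
    sqlite3_result_text(ctx, buf, -1, SQLITE_TRANSIENT);
  }

  /* register as the zero-argument SQL function new_guid() */
  sqlite3_create_function(db, "new_guid", 0, SQLITE_UTF8, 0,
                          new_guid_func, 0, 0);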

--
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] GUID in place of AUTO_INCREMENT

2006-01-25 Thread Kurt Welgehausen
> ... but Rails doesn't seem to support composite keys.

I don't know much about RoR. How does RoR keep you from creating
a multi-column key on an SQLite table?

Regards


[sqlite] GUID in place of AUTO_INCREMENT

2006-01-25 Thread Mike Ashmore
So, I've got what I think is a good reason to modify the behavior of  
primary keys in my (Ruby on Rails-based) application.


What I need is a modifier, say 'GUID', that can be applied in place  
of (mutually exclusive with) the AUTOINCREMENT modifier [1]. On a  
field with the GUID modifier, if an INSERT statement proffers a NULL  
value for that field, an RFC4122-compliant GUID generator should be  
invoked to create a globally unique value for that field.


If a patch to do something like this already exists, could someone  
point me in its direction? And if not, would anybody care to work  
with me on creating one? And in any case, am I about to do something  
that was tried before, found to be a bad idea, and maybe replaced  
with a better idea? GUIDs tend to smell like bad design to me, but I  
can't seem to think of any alternatives.


Thanks,
-Mike Ashmore

[1] Why, you ask? Well, I've got a view, and that view is an
aggregate of multiple tables. Those tables each exist in a separate
database file. When I INSERT into that view, an INSTEAD OF trigger is
called that invokes a user-defined function that asks a separate
process to open the appropriate database file and execute the insert
there [2]. But with a primary key field called 'id' that's declared
AUTOINCREMENT, you can see that we quickly get duplicates in the 'id'
field as we insert into multiple files. According to my database
design class in college, the primary key could be the combined key of
'id' and 'origin' [3], but Rails doesn't seem to support composite
keys. I think modifying SQLite would be more generally useful than
changing Rails anyway.


[2] Why? As a mechanism for implementing record-level access control  
using filesystem permissions. Long story. I might do a white paper on  
it if anyone's interested.


[3] I attach all the database files I'm compositing, then create a  
composite view of a table that exists in each of those files. The  
composite view consists of a bunch of "SELECT *, [original database]  
as origin FROM [original database].table" statements UNIONed together.
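For illustration, a sketch of such a composite view (the database aliases and table name are hypothetical):

  ATTACH DATABASE 'file1.db' AS db1;
  ATTACH DATABASE 'file2.db' AS db2;
  CREATE VIEW all_items AS
    SELECT *, 'db1' AS origin FROM db1.items
    UNION ALL
    SELECT *, 'db2' AS origin FROM db2.items;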