Re: [sqlite] optimizing out function calls

2005-11-12 Thread Darren Duncan
According to my understanding of standard SQL, you should be able to say: SELECT arbitrary_expression() AS bar FROM foo ORDER BY bar; ... and the expression is only evaluated once per row, not twice. Your actual example seems confusing, since you appear to alias your 'vectors' table to
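One quick way to check that claim empirically (a sketch, not part of the original message; the table, column, and function names here are invented) is to register a Python function that counts its own calls and compare the count against the number of rows returned:

    import sqlite3

    calls = 0

    def expensive(x):
        # stand-in for an arbitrary, costly expression
        global calls
        calls += 1
        return x * 2

    con = sqlite3.connect(":memory:")
    con.create_function("expensive", 1, expensive)
    con.execute("CREATE TABLE foo (x INTEGER)")
    con.executemany("INSERT INTO foo (x) VALUES (?)", [(i,) for i in range(100)])

    rows = con.execute("SELECT expensive(x) AS bar FROM foo ORDER BY bar").fetchall()

    # if calls == len(rows), the aliased expression was evaluated once per row
    print(len(rows), calls)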

Re: [sqlite] optimizing out function calls

2005-11-12 Thread Nathan Kurz
On Sat, Nov 12, 2005 at 10:01:29PM -0700, Nathan Kurz wrote: > SELECT uid, match("complex", "function", vector) FROM vectors AS match > ORDER BY match DESC LIMIT 20; Please pardon the silly typo. I do have the AS in the right spot. SELECT uid, match("complex", "function", vector) AS match

[sqlite] optimizing out function calls

2005-11-12 Thread Nathan Kurz
Hello -- I'm trying to figure out how to optimize a query a bit, and think I've hit a case that could easily be optimized by sqlite but isn't. I'm wondering if it would be an easy optimization to add, or whether there is some way I can 'hint' the optimization into being. I'm using a
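For readers skimming the thread, a minimal sketch of the kind of query under discussion (not from the original message; the match() body and the vectors schema are invented placeholders), using pysqlite/sqlite3 to register the custom ranking function:

    import sqlite3

    def match(word1, word2, vector):
        # placeholder for the expensive matching logic described in the thread
        return float(len(vector or b""))

    con = sqlite3.connect(":memory:")
    con.create_function("match", 3, match)
    con.execute("CREATE TABLE vectors (uid INTEGER PRIMARY KEY, vector BLOB)")

    top = con.execute(
        "SELECT uid, match('complex', 'function', vector) AS match "
        "FROM vectors ORDER BY match DESC LIMIT 20"
    ).fetchall()

The question in the thread appears to be whether sqlite evaluates match() once per row, reusing the aliased result for the ORDER BY, or once for the result column and again for the sort.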

Re: [sqlite] uSQLiteServer Source code available

2005-11-12 Thread Andrew Piskorski
On Sat, Nov 12, 2005 at 05:24:47PM -0700, [EMAIL PROTECTED] wrote: > http://users.iol.it/irwin > 4) Am I doing the right thing? Of course I think the uSQLiteServer is > the best thing since sliced bread, but then it was designed to meet my > criteria :-) OTOH reception has been mixed. I have had

Re: [sqlite] uSQLiteServer Source code available

2005-11-12 Thread Alfredo Cole
On Saturday, 12 November 2005 at 18:24, [EMAIL PROTECTED] wrote: > I have reorganized the archive and got all the source into it this time. > > http://users.iol.it/irwin > > It's an interesting concept. I downloaded it and will try it. Thank you, Roger. -- Alfredo J. Cole Grupo ACyC

[sqlite] uSQLiteServer Source code available

2005-11-12 Thread roger
I have reorganized the archive and got all the source into it this time. http://users.iol.it/irwin A few notes: 1) This has nothing to do with the RPC based uSQLite project, which I have found has the same name! That project does seem a bit dead though, so I shall not worry about it. 2) I

RE: [sqlite] Organizing large database into multiple files

2005-11-12 Thread roger
> Original Message > Subject: [sqlite] Organizing large database into multiple files > From: "Rajan, Vivek K" <[EMAIL PROTECTED]> > Date: Sat, November 12, 2005 5:09 am > To: > > Hello- > > > > I have a need to store large volumes of data

Re: [sqlite] Organizing large database into multiple files

2005-11-12 Thread Jay Sprenkle
> I have a need to store large volumes of data (~5-10G) in SQLite > database. The data which I am storing is organized hierarchically. The > schema for my database has foreign-key constraints, the tables are > interrelated. My questions: > > - How can I organize the entire database into multiple
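One approach that often comes up for splitting a dataset like this across files (a hedged sketch, not taken from this thread; the file and table names are invented) is to give each partition its own database file and ATTACH the ones a session needs:

    import sqlite3

    con = sqlite3.connect("main_part.db")
    con.execute("ATTACH DATABASE 'detail_part.db' AS detail")

    con.execute("CREATE TABLE IF NOT EXISTS projects "
                "(id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS detail.measurements "
                "(project_id INTEGER, value REAL)")

    # a single statement can join across the attached files
    rows = con.execute(
        "SELECT p.name, m.value FROM projects p "
        "JOIN detail.measurements m ON m.project_id = p.id"
    ).fetchall()

Note that foreign-key constraints do not reach across attached database files, so tables tied together by such constraints generally need to live in the same file.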

Re: [sqlite] qmark style updates

2005-11-12 Thread Jay Sprenkle
> I tried this too and got some strange behavior, like if > I entered a value like "333" it would give me a All text constants are entered with single quotes. insert into mytable(five) values( 'data' )
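A small sketch contrasting the two styles (not part of the original reply; the table and values come from the examples quoted in this thread, the column type is illustrative):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mytable (five TEXT)")

    # text constant written directly in SQL: single quotes
    con.execute("INSERT INTO mytable(five) VALUES ('data')")

    # qmark-style binding: no quoting needed, the value is passed separately
    con.execute("INSERT INTO mytable(five) VALUES (?)", ("333",))

    print(con.execute("SELECT five FROM mytable").fetchall())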

[sqlite] Re: Thanks Alexander

2005-11-12 Thread Dan McDaniel
Yes! This works. Thanks very much, Alexander. --- Alexander Kozlovsky <[EMAIL PROTECTED]> wrote: > The second parameter of cursor.execute() accepts a > **sequence** of > bindings. Try this: > > c.execute(toDo, [s1]) > > > > from pysqlite2 import dbapi2 as sqlite > > > > con =

Re: [sqlite] qmark style updates

2005-11-12 Thread Alexander Kozlovsky
The second parameter of cursor.execute() accepts a **sequence** of bindings. Try this: c.execute(toDo, [s1]) > from pysqlite2 import dbapi2 as sqlite > > con = sqlite.connect("mydb.db") > c = con.cursor() > > s1 =3 > toDo ="Update ex set amount = ? where ex_id = 1" >
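Put together as a runnable sketch (using the standard-library sqlite3 module in place of pysqlite2, and guessing a minimal schema for the ex table; the rest follows the code quoted above):

    import sqlite3  # the thread used: from pysqlite2 import dbapi2 as sqlite

    con = sqlite3.connect("mydb.db")
    c = con.cursor()
    c.execute("CREATE TABLE IF NOT EXISTS ex (ex_id INTEGER PRIMARY KEY, amount INTEGER)")
    c.execute("INSERT OR IGNORE INTO ex (ex_id, amount) VALUES (1, 0)")

    s1 = 3
    toDo = "Update ex set amount = ? where ex_id = 1"
    c.execute(toDo, [s1])   # the bindings must be a sequence, hence [s1] rather than s1
    con.commit()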

Re: [sqlite] How to speed up create index on temp database?

2005-11-12 Thread 黄涛
Jay Sprenkle wrote: On 11/10/05, Huang Tao <[EMAIL PROTECTED]> wrote: Hello: I run sqlite on an embedded system that uses NAND flash, so I have to reduce the write count. Saving the index in the master database causes many writes. I try to dynamically create the index on a temp database, but the speed is not very
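One setting worth checking in a situation like this (a hedged sketch, not taken from the thread; the table names are invented) is temp_store, which keeps temporary tables and indices in RAM instead of on flash while a working copy is indexed:

    import sqlite3

    con = sqlite3.connect("main.db")
    con.execute("PRAGMA temp_store = MEMORY")   # temp tables and indices stay in RAM

    con.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, key TEXT)")

    # build an indexed working copy in the temp database rather than
    # adding the index to the flash-resident master database
    con.execute("CREATE TEMP TABLE items_tmp AS SELECT id, key FROM items")
    con.execute("CREATE INDEX temp.idx_items_key ON items_tmp(key)")

Whether this actually helps depends on how much of the temporary data fits in memory on the device.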