From the new version I read:

----- Original Message ----- 
From: <sqlite-users-requ...@sqlite.org>
To: <sqlite-users@sqlite.org>
Sent: Tuesday, June 16, 2009 2:00 PM
Subject: sqlite-users Digest, Vol 18, Issue 62


> Send sqlite-users mailing list submissions to
> sqlite-users@sqlite.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> or, via email, send a message with subject or body 'help' to
> sqlite-users-requ...@sqlite.org
>
> You can reach the person managing the list at
> sqlite-users-ow...@sqlite.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of sqlite-users digest..."
>
>
> Today's Topics:
>
>   1. Re: sqlite3_step performance degredation (Ken)
>   2. Re: sqlite3_step performance degredation (Simon Slavin)
>   3. Re: sqlite3_step performance degredation (Jim Wilcoxson)
>   4. Re: SQlite3 - SQL injection using ruby (dave lilley)
>   5. Bug in retrieving last rowid? (hartwig.wiesm...@online.nl)
>   6. Sqlite-3.5.9: getting sqlite_autoindex error (h o)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 15 Jun 2009 14:33:04 -0700 (PDT)
> From: Ken <kennethinbox-sql...@yahoo.com>
> Subject: Re: [sqlite] sqlite3_step performance degredation
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID: <780955.31349...@web81003.mail.mud.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
>
> Also is there an index on the table B.ID field?
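>
> [A minimal sketch of the index being asked about -- not from the original
> mail; "TableB" and its "ID" column stand in for the table and field named
> in the question:]
>
> CREATE INDEX IF NOT EXISTS idx_tableb_id ON TableB(ID);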
>
> --- On Mon, 6/15/09, Mike Borland <mike.borl...@cygnetscada.com> wrote:
>
>> From: Mike Borland <mike.borl...@cygnetscada.com>
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>> To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
>> Date: Monday, June 15, 2009, 4:11 PM
>> Nuno, unfortunately your psychic skills are a bit off on this one.  Sorry
>> I wasn't more explicit.  I am not using any LIMIT or OFFSET to do any
>> virtual scrolling.  Basically I have table A which has 900 rows.  Table B
>> has 180,000 rows (900 * 200) which has a foreign key relationship back to
>> table A.  So for each row in table A, there are 200 rows in table B.  My
>> query is basically a "SELECT * FROM Table B WHERE ID = TableA.ID".  I'm
>> executing this query 900 times, once for each row in table A.
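>>
>> [A sketch, not part of the original mail, of the access pattern just
>> described; the table and column names are stand-ins taken from the text:]
>>
>> -- TableA holds 900 rows; TableB holds 180,000 rows keyed back to TableA.
>> -- The statement below is prepared once and then bound and stepped 900
>> -- times, once per TableA.ID value.
>> SELECT * FROM TableB WHERE ID = ?;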
>>
>> When I start the 900 read iterations (always in the same order), the first
>> one generally reads in about 50ms and by the last read, it's taking roughly
>> 1000ms.  Sometimes it slows down immediately, sometimes after the 100th
>> iteration.  The only absolutely reproducible aspect is that it always slows
>> down eventually and once it slows down, it never speeds back up.  I don't
>> believe it's a locking issue since my timer doesn't start until the query
>> is successfully executed.
>>
>> Any ideas?  Would the occasional write operation in the midst of these
>> reads cause any permanent slow down to the read time?  Thanks.
>>
>> Mike Borland
>>
>> -----Original Message-----
>> From: Nuno Lucas [mailto:ntlu...@gmail.com]
>>
>> Sent: Friday, June 12, 2009 7:16 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>>
>> On Sat, Jun 13, 2009 at 1:52 AM, Mike
>> Borland<mike.borl...@cygnetscada.com>
>> wrote:
>> > I have a fairly complex program where I am seeing a performance
>> > degradation of the sqlite3_step() function.  Basically I'm iterating
>> > roughly 200 rows at a time, over and over.  The only work happening
>> > during the iteration is I'm copying the record into an array.  At first,
>> > sqlite3_step() takes less than a millisecond to run.  After 0-50
>> > iterations, it's taking anywhere from 10-100ms.
>> >
>> > Does anybody have any insight into what's happening behind the scenes
>> > with this function to help me track down the cause?  I appreciate it!
>>
>> You should explicitly say what your SQL query is. Without that we can
>> only guess.
>>
>> My current "psychic" guess is that you are using LIMIT to obtain those
>> 200 rows, one "page" at a time, and as you go advancing "pages" it
>> becomes slower and slower.
>> If this is true, then you should re-think your design as LIMIT just
>> skips the rows, but it will "generate" them before, meaning it
>> becomes slower as you advance on the offset given.
>> Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor (and
>> notice the "What not to do" at the end, talking about "LIMIT" and
>> "OFFSET").
>>
>> If my psychic abilities are becoming weak, then please supply your
>> exact query that is getting slower (and maybe your database schema)
>> and then someone can give you an exact answer.
>>
>>
>> Regards,
>> ~Nuno Lucas
>>
>> >
>> > Mike Borland
>> >
>> > _______________________________________________
>> > sqlite-users mailing list
>> > sqlite-users@sqlite.org
>> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> >
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 15 Jun 2009 22:53:03 +0100
> From: Simon Slavin <slav...@hearsay.demon.co.uk>
> Subject: Re: [sqlite] sqlite3_step performance degredation
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID: <f9a438ee-0752-4922-b882-2b77c1c2e...@hearsay.demon.co.uk>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
>
> On 15 Jun 2009, at 10:11pm, Mike Borland wrote:
>
>>
>> When I start the 900 read iterations (always in the same order), the
>> first one generally reads in about 50ms and by the last read, it's
>> taking roughly 1000ms.  Sometimes it slows down immediately,
>> sometimes after the 100th iteration.
>
> Put some code in your application that closes and reopens the database
> after iteration 500 (or every 100).  See if it speeds up when you've
> just done that.  If it does, the bug is somewhere in your code or
> SQLite.  If it doesn't, the bug is somewhere in memory management or
> disk access.
>
> Simon.
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 15 Jun 2009 18:59:49 -0400
> From: Jim Wilcoxson <pri...@gmail.com>
> Subject: Re: [sqlite] sqlite3_step performance degredation
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID:
> <c5830b750906151559m29e5f114k39e88f4e36993...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> There was a recent SQLite bug that caused the size of the SQLite cache
> to shrink in some circumstances, and the longer a program ran, the
> smaller the cache became.  Maybe you are running into this bug.  IIRC,
> you had to do an update in the select loop to trigger the bug, so if
> you're not doing that, maybe this doesn't apply.
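>
> [A sketch of one way to check for this, not from the original mail; PRAGMA
> cache_size reports and sets the page-cache limit for the current
> connection, and 2000 pages was the usual default in this era:]
>
> PRAGMA cache_size;        -- report the current page-cache limit
> PRAGMA cache_size = 2000; -- set it back explicitly if it has shrunk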
>
> Jim
>
> On 6/15/09, Mike Borland <mike.borl...@cygnetscada.com> wrote:
>> Nuno, unfortunately your psychic skills are a bit off on this one.  Sorry
>> I wasn't more explicit.  I am not using any LIMIT or OFFSET to do any
>> virtual scrolling.  Basically I have table A which has 900 rows.  Table B
>> has 180,000 rows (900 * 200) which has a foreign key relationship back to
>> table A.  So for each row in table A, there are 200 rows in table B.  My
>> query is basically a "SELECT * FROM Table B WHERE ID = TableA.ID".  I'm
>> executing this query 900 times, once for each row in table A.
>>
>> When I start the 900 read iterations (always in the same order), the first
>> one generally reads in about 50ms and by the last read, it's taking
>> roughly 1000ms.  Sometimes it slows down immediately, sometimes after the
>> 100th iteration.  The only absolutely reproducible aspect is that it
>> always slows down eventually and once it slows down, it never speeds back
>> up.  I don't believe it's a locking issue since my timer doesn't start
>> until the query is successfully executed.
>>
>> Any ideas?  Would the occasional write operation in the midst of these
>> reads cause any permanent slow down to the read time?  Thanks.
>>
>> Mike Borland
>>
>> -----Original Message-----
>> From: Nuno Lucas [mailto:ntlu...@gmail.com]
>> Sent: Friday, June 12, 2009 7:16 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] sqlite3_step performance degredation
>>
>> On Sat, Jun 13, 2009 at 1:52 AM, Mike
>> Borland<mike.borl...@cygnetscada.com> wrote:
>>> I have a fairly complex program where I am seeing a performance
>>> degradation of the sqlite3_step() function.  Basically I'm iterating
>>> roughly 200 rows at a time, over and over.  The only work happening
>>> during the iteration is I'm copying the record into an array.  At first,
>>> sqlite3_step() takes less than a millisecond to run.  After 0-50
>>> iterations, it's taking anywhere from 10-100ms.
>>>
>>> Does anybody have any insight into what's happening behind the scenes
>>> with this function to help me track down the cause?  I appreciate it!
>>
>> You should explicitly say what your SQL query is. Without that we can
>> only guess.
>>
>> My current "psychic" guess is that you are using LIMIT to obtain those
>> 200 rows, one "page" at a time, and as you go advancing "pages" it
>> becomes slower and slower.
>> If this is true, then you should re-think your design as LIMIT just
>> skips the rows, but it will  "generate" them before, meaning it
>> becomes slower as you advance on the offset given.
>> Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor (and
>> notice the "What not to do" at the end, talking about "LIMIT" and
>> "OFFSET").
>>
>> If my psychic abilities are becoming weak, then please supply your
>> exact query that is getting slower  (and maybe your database schema)
>> and then someone can give you an exact answer.
>>
>>
>> Regards,
>> ~Nuno Lucas
>>
>>>
>>> Mike Borland
>>>
>>> _______________________________________________
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> -- 
> Software first.  Software lasts!
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 16 Jun 2009 18:43:58 +1200
> From: dave lilley <dgl...@gmail.com>
> Subject: Re: [sqlite] SQlite3 - SQL injection using ruby
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Message-ID:
> <82aaacae0906152343k44a5d89ap498c964913a83...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Many thanks to all who have replied,
>
> I now understand the difference and shall use that approach to creating my
> queries.
>
> Regarding the "ruby way", it was more about how I saw saving typing by
> injecting different table, field and user data into one query.
>
> BUT in the interests of eliminating/reducing SQL injection I shall pass the
> table name to my method and test against it to select which query to use
> against that table, and use the SQLite3 binding method.
>
> e.g.
>
> def makesql(tablename, uservar)
>   case tablename
>   when "customers"
>     stmt = "select * from customers where cust_nos = ?"
>   when "jobs"
>     stmt = ....
>   end
>
>   row = db.execute(stmt, uservar)
> end
>
> again many thanks to all,
>
>
> Dave.
>
> 2009/6/16 John Elrick <john.elr...@fenestra.com>
>
>> dave lilley wrote:
>> > Many thanks John so if i take that example and push it out so i can have
>> > 1 method that can return a SQL select statement on any table, field and
>> > search criteria i would only need to do this?
>> >
>> > In ruby it would be ....
>> >
>> > make_SQL (table, field, criteria)
>> >    stmt = "select * from #{table} where #{field} = #{criteria}"
>> >    row = db.execute(stmt)
>> > end
>> >
>> > and SQLite3 way would be ...
>> >
>> > make_SQL(table,field,criteria)
>> >   stmt = "select * from ? where ? = ?"
>> >   row = db.execute(stmt)
>> > end
>> >
>> > would this presumtion be correct?
>> >
>> >
>>
>> No.  You would have to use the table and field names directly:
>>
>> def make_SQL(table, field, criteria)
>>   stmt = "select * from #{table} where #{field} = ?"
>>   row = db.execute(stmt, criteria)
>> end
>>
>>
>>
>> John
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> ------------------------------
>
> Message: 5
> Date: Mon, 15 Jun 2009 22:15:23 +0200 (CEST)
> From: hartwig.wiesm...@online.nl
> Subject: [sqlite] Bug in retrieving last rowid?
> To: Sqlite-users@sqlite.org
> Message-ID: <746050942.97311245096923844.javamail.r...@mailstore13>
> Content-Type: text/plain; charset=utf-8
>
> I have created tables A & AS_FTS
>
> "create table A (id integer primary key, string Text);"
>
> "create virtual table AS_FTS (Name);"
>
> and a trigger
>
> "insert into A_FTS (rowid,Name) values (New.%@,New.%@);"
>
> (and a not shown delete trigger).
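>
> [A sketch, not from the original mail, of how those fragments might read as
> complete SQL; the %@ placeholders are guessed to be the id and string
> columns, fts3 is assumed as the module, and the FTS table is spelled A_FTS
> to match the trigger body:]
>
> CREATE VIRTUAL TABLE A_FTS USING fts3(Name);
>
> CREATE TRIGGER a_after_insert AFTER INSERT ON A BEGIN
>   INSERT INTO A_FTS (rowid, Name) VALUES (new.id, new.string);
> END;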
>
> Now, I enter two strings into the table: one and two. I delete table entry
> one and afterwards insert "one" again two times. Finally I check the last
> inserted rowid using "sqlite_last_insert_rowid". Unfortunately, this is
> wrong.
>
> BTW: other non-FTS insert triggers seem to work. I am using 3.6.14.
>
> Any ideas?
>
> Hartwig
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 16 Jun 2009 16:34:30 +0530
> From: h o <hiralsmaill...@gmail.com>
> Subject: [sqlite] Sqlite-3.5.9: getting sqlite_autoindex error
> To: sqlite-users@sqlite.org
> Message-ID:
> <b1f4b3d0906160404s62ef7877y5f49df6238650...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi,
>
> I am using sqlite-3.5.9 and observing a 'disk image malformed' error on
> NFS. On doing 'PRAGMA integrity_check' I got the following messages...
>
> SQLite version 3.5.9
> Enter ".help" for instructions
> sqlite> PRAGMA integrity_check;
> *** in database main ***
> Page 5275 is never used
> wrong # of entries in index sqlite_autoindex_<table_name>_1
> sqlite>
>
> Can you please let me know what the problem is here?
>
> Thank you.
> -Hiral
>
>
> ------------------------------
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> End of sqlite-users Digest, Vol 18, Issue 62
> ********************************************
> 

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
