Re: [sqlite] sqlite3_step performance degradation

2009-06-15 Thread Mike Borland
Nuno, unfortunately your psychic skills are a bit off on this one.  Sorry I 
wasn't more explicit.  I am not using any LIMIT or OFFSET to do any virtual 
scrolling.  Basically I have table A, which has 900 rows.  Table B has 180,000 
rows (900 * 200) and a foreign key relationship back to table A, so for each 
row in table A there are 200 rows in table B.  My query is basically 
"SELECT * FROM TableB WHERE ID = TableA.ID".  I'm executing this query 900 
times, once for each row in table A.  

When I start the 900 read iterations (always in the same order), the first one 
generally reads in about 50ms and by the last read, it's taking roughly 1000ms. 
 Sometimes it slows down immediately, sometimes after the 100th iteration.  The 
only absolutely reproducible aspect is that it always slows down eventually and 
once it slows down, it never speeds back up.  I don't believe it's a locking 
issue since my timer doesn't start until the query is successfully executed.

Any ideas?  Would the occasional write operation in the midst of these reads 
cause any permanent slow down to the read time?  Thanks.
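[A minimal sketch of the access pattern described above, using Python's stdlib sqlite3 for brevity (the thread is about the C API, but the SQL-level point is the same).  Schema and row counts are invented (30×20 instead of 900×200).  The classic cause of a steady slowdown in this shape of workload is a missing index on TableB's foreign-key column, which forces a full scan of TableB for every parent row.]

```python
# Per-parent lookup pattern: one query against TableB for each row of TableA.
# Without the index below, each of those queries scans all of TableB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (ID INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE TableB (ID INTEGER, Payload TEXT)")
conn.executemany("INSERT INTO TableA VALUES (?)", [(i,) for i in range(30)])
conn.executemany("INSERT INTO TableB VALUES (?, ?)",
                 [(i, "x") for i in range(30) for _ in range(20)])

# Hypothetical fix: index the foreign-key column so each lookup is O(log n).
conn.execute("CREATE INDEX idx_tableb_id ON TableB(ID)")

total = 0
for (parent_id,) in conn.execute("SELECT ID FROM TableA"):
    rows = conn.execute("SELECT * FROM TableB WHERE ID = ?",
                        (parent_id,)).fetchall()
    total += len(rows)
print(total)  # 30 parents * 20 children each
```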

Mike Borland

-----Original Message-----
From: Nuno Lucas [mailto:ntlu...@gmail.com] 
Sent: Friday, June 12, 2009 7:16 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3_step performance degradation

On Sat, Jun 13, 2009 at 1:52 AM, Mike Borland <mike.borl...@cygnetscada.com> wrote:
> I have a fairly complex program where I am seeing a performance
> degradation of the sqlite3_step() function.  Basically I'm iterating
> roughly 200 rows at a time, over and over.  The only work happening
> during the iteration is I'm copying the record into an array.  At first,
> sqlite3_step() takes less than a millisecond to run.  After 0-50
> iterations, it's taking anywhere from 10-100ms.
>
> Does anybody have any insight into what's happening behind the scenes
> with this function to help me track down the cause?  I appreciate it!

You should explicitly say what your SQL query is. Without that we can
only guess.

My current "psychic" guess is that you are using LIMIT to obtain those
200 rows, one "page" at a time, and as you go advancing "pages" it
becomes slower and slower.
If this is true, then you should rethink your design: LIMIT with OFFSET
only skips rows after "generating" them first, so the query gets slower
and slower as the offset grows.
Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor (and
notice the "What not to do" at the end, talking about "LIMIT" and
"OFFSET").
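[The ScrollingCursor advice above can be sketched with Python's stdlib sqlite3; the table is invented for the demo.  OFFSET paging re-generates and discards every skipped row, while the "seek" method resumes from the last key seen, so its cost stays flat per page.]

```python
# LIMIT/OFFSET paging vs. keyset ("seek") paging over the same data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, str(i)) for i in range(1000)])

# What not to do: cost grows with the offset, because the skipped
# 500 rows are still generated and thrown away.
page_offset = conn.execute(
    "SELECT id FROM t ORDER BY id LIMIT 10 OFFSET 500").fetchall()

# Seek method: remember the last id of the previous page and continue
# after it; the index takes us straight there.
last_id = 499  # last row of the previous page (ids here start at 0)
page_seek = conn.execute(
    "SELECT id FROM t WHERE id > ? ORDER BY id LIMIT 10",
    (last_id,)).fetchall()

print(page_offset == page_seek)  # same page, very different cost profile
```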

If my psychic abilities are becoming weak, then please supply your
exact query that is getting slower (and maybe your database schema)
and then someone can give you an exact answer.


Regards,
~Nuno Lucas

>
> Mike Borland
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_step performance degradation

2009-06-12 Thread Mike Borland
Simon,

Thanks for the reply!  I have read that documentation and everything
else a Google search brings up on the site.  Memory usage is constant.
When you ask about "disposing of the result of one step before
proceeding to the next", what exactly do you mean?  Can't I
theoretically call sqlite3_step() multiple times just to iterate rows?
I happen to be binding the data from each row, but I thought that was
optional.

Mike Borland

-----Original Message-----
From: Simon Slavin [mailto:slav...@hearsay.demon.co.uk] 
Sent: Friday, June 12, 2009 6:00 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3_step performance degradation


On 13 Jun 2009, at 1:52am, Mike Borland wrote:

> I have a fairly complex program where I am seeing a performance
> degradation of the sqlite3_step() function.  Basically I'm iterating
> roughly 200 rows at a time, over and over.  The only work happening
> during the iteration is I'm copying the record into an array.  At  
> first,
> sqlite3_step() takes less than a millisecond to run.  After 0-50
> iterations, it's taking anywhere from 10-100ms.

Read <http://www.sqlite.org/c3ref/step.html>, if you haven't already  
done so.

Check your memory usage as you get the slow-down.  Do you have a  
resource leak of some kind ?  Are you, for example, correctly  
disposing of the result of one step before proceeding to the next ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite3_step performance degradation

2009-06-12 Thread Mike Borland
Hi all!

I have a fairly complex program where I am seeing a performance
degradation of the sqlite3_step() function.  Basically I'm iterating
roughly 200 rows at a time, over and over.  The only work happening
during the iteration is I'm copying the record into an array.  At first,
sqlite3_step() takes less than a millisecond to run.  After 0-50
iterations, it's taking anywhere from 10-100ms.  

Does anybody have any insight into what's happening behind the scenes
with this function to help me track down the cause?  I appreciate it!

Mike Borland

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Caching model and aging timeout

2009-06-05 Thread Mike Borland
Hi!

I have a few questions/observations about the caching model.  Any help
clarifying these issues would be greatly appreciated.  (I'm working on a
Windows Server-class OS.)
 

1) Setting the SQLITE_CONFIG_PAGECACHE option via sqlite3_config()
seems to define a single cache buffer for the library instance.
However, from what I've observed/read, each connection to the database
has its own cache (unless shared-cache mode is enabled).  So how
does the single buffer defined by SQLITE_CONFIG_PAGECACHE relate to the
cache for each connection instance?

2) I had stupidly defined SQLITE_DEFAULT_PAGE_SIZE to be 4096, and
SQLITE_DEFAULT_CACHE_SIZE to 200,000, within a program that has 15+
database connections.  When I started hammering my program, I saw the
memory shoot up to well over 1.5GB and continue climbing.  I came to
realize that each connection had a maximum cache size of roughly 750MB,
which combined would easily exceed the 2GB program limit.  What's
interesting is that after I let the app sit with no work for about an
hour and then executed a single write and commit, the app size shrank
by about 1GB, which implies the cache was cleaned out.  Does anybody
have details on when and how the cache is flushed?
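[The per-connection aspect of point 2 can be verified quickly with Python's stdlib sqlite3: the cache_size pragma is scoped to one connection, so setting it on connection A leaves connection B at the default, and N connections can each grow their own cache to the limit.]

```python
# cache_size is a per-connection setting, not a database-wide one.
import sqlite3

a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")

a.execute("PRAGMA cache_size = 200000")  # raised for THIS connection only
size_a = a.execute("PRAGMA cache_size").fetchone()[0]
size_b = b.execute("PRAGMA cache_size").fetchone()[0]  # still the default

print(size_a, size_b)
```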


Thanks,
Mike Borland

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] PL/SQL in SQLite?

2009-01-29 Thread Mike Borland
I am having similar concurrency issues and found that executing a "BEGIN
IMMEDIATE" will only grab a reserved lock.  This allows other threads to
read the database and the writer won't grab the exclusive lock until it
commits.  This is helpful if the writer does a fair amount of work
inside the transaction before committing and your readers are getting
starved.

I also found a case where, if there are say 10 threads all attempting to
get a lock on the database, it's possible for some to be starved even
though no single thread holds the lock for a long time.  I had to
implement a FIFO queue on my side to make sure that the writers obtained
the lock in the order requested.  Otherwise I saw cases where one thread
kept getting the lock because it would finish its work and request the
lock again before the other waiting threads completed their busy-timeout
sleep.
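[The FIFO queue described above can be sketched as a "ticket lock": each writer takes a numbered ticket, and a condition variable admits holders strictly in ticket order, so a fast writer cannot re-acquire the lock ahead of threads already waiting.  This is an illustration, not the poster's actual implementation.]

```python
# Fair FIFO lock: writers are served in the order they asked.
import threading

class FifoLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0   # next ticket to hand out
        self._now_serving = 0   # ticket currently allowed in

    def acquire(self):
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            while ticket != self._now_serving:
                self._cond.wait()

    def release(self):
        with self._cond:
            self._now_serving += 1
            self._cond.notify_all()

lock = FifoLock()
order = []

def writer(name):
    lock.acquire()
    order.append(name)  # critical section: BEGIN IMMEDIATE ... COMMIT goes here
    lock.release()

# Started (and therefore ticketed) one at a time so the demo is deterministic.
for i in range(5):
    t = threading.Thread(target=writer, args=(i,))
    t.start()
    t.join()
print(order)  # request order == service order
```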

Mike Borland

-----Original Message-----
From: Thomas Briggs [mailto:t...@briggs.cx] 
Sent: Thursday, January 29, 2009 9:57 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] PL/SQL in SQLite?

   If you really are only updating 20 records at a time you should be
able to make it work plenty fast enough with plain old SQL.

   Something feels wrong about using an exclusive transaction here
too.  I can't say why, and I may well be wrong, but... just a gut
hunch.
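[One plain-SQL way to do Daniel's "grab 20 unassigned rows" step, sketched with Python's stdlib sqlite3: a single UPDATE with a subquery claims the whole batch inside one transaction, replacing the per-row loop.  The schema and column names are invented for the demo, and isolation_level=None puts the driver in autocommit mode so BEGIN IMMEDIATE / COMMIT can be issued explicitly.]

```python
# Claim a batch of 20 unassigned rows atomically with one UPDATE.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE work (id INTEGER PRIMARY KEY, client TEXT)")
conn.executemany("INSERT INTO work (id) VALUES (?)",
                 [(i,) for i in range(100)])

conn.execute("BEGIN IMMEDIATE")  # reserved lock: blocks writers, not readers
conn.execute(
    "UPDATE work SET client = ? "
    "WHERE id IN (SELECT id FROM work WHERE client IS NULL LIMIT 20)",
    ("client-7",))
conn.execute("COMMIT")

claimed = conn.execute(
    "SELECT COUNT(*) FROM work WHERE client = 'client-7'").fetchone()[0]
print(claimed)  # 20 rows assigned in one statement
```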

On Thu, Jan 29, 2009 at 12:47 PM, Daniel Watrous <dwmaill...@gmail.com>
wrote:
> Hello,
>
> I'm wondering if there is anything like PL/SQL that is built into
> SQLite?  I have an application that requires me to use EXCLUSIVE
> transactions as follows:
>
> BEGIN EXCLUSIVE
> SELECT from table limit 20
> for each
>  UPDATE record assignment
> COMMIT
>
> The requests for assignments are coming in concurrently from many
> clients and I'm finding that SQLite isn't keeping up.  I know that
> SQLite isn't intended for highly concurrent environments that make
> many writes to the database, but I'd like to make it work if
> possible...
>
> So, is there a way to push this processing over to SQLite (e.g. PL/SQL
> style) that would speed it up?
>
> Or, is there some way you can think of to make assignments of a subset
> of records to concurrent clients that would be more efficient and help
> me avoid the large number of timeouts I'm getting right now?
>
> Thanks,
> Daniel
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Multi-thread concurrency problem

2009-01-16 Thread Mike Borland
Hi,

 

I'm running a multi-threaded application and am running into some
concurrency issues.  I have thread "A" which is attempting to do
thousands of reads that take approximately 20-50ms each.  Thread "B" is
writing to the database and each write takes a few seconds.  Currently,
it appears that thread "A" is locked out from reading while thread "B"
is writing (which is what I expect based upon documentation).  Thread
"B" is working within transactions so thread "A" is only locked out
while thread "B" is committing.  So essentially thread "A" is taking 60x
longer to execute when thread "B" is writing.  I've tried enabling the
shared cache, but that results in lots of database-lock failures and
seems to ignore the busy timeout.  Does anybody have any creative
solutions to such a problem?

 

I've looked into the read_uncommitted pragma command, but can't find a
lot of detail.  In the code documentation, it says that the
read_uncommitted pragma ignores the "table" lock, but my understanding of
the locking model is that the database as a whole is locked, not
individual tables.  Also, the documentation seemed to indicate that it
had some relationship to shared-cache mode.  Would enabling the
read_uncommitted pragma ever result in invalid data being read from a
table which is being written to?
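[A sketch of what read_uncommitted actually does, using Python's stdlib sqlite3: table-level locks exist only in shared-cache mode (here via a shared-cache URI), and the pragma lets a reader on the same cache see rows another connection has written but not yet committed.  So yes, under this pragma a reader can observe data that is later rolled back.]

```python
# Dirty read under PRAGMA read_uncommitted in shared-cache mode.
import sqlite3

uri = "file:demo_ru?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)

writer.execute("CREATE TABLE t (x INTEGER)")
reader.execute("PRAGMA read_uncommitted = 1")  # skip table read-locks

writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (42)")    # not committed yet

dirty = reader.execute("SELECT x FROM t").fetchall()  # sees the dirty row
writer.execute("ROLLBACK")
after = reader.execute("SELECT x FROM t").fetchall()  # row is gone again

print(dirty, after)
```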

 

Thanks for any help!

 

Mike Borland

CygNet Software, Inc.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users