On 6/18/07, Jeff Nokes <[EMAIL PROTECTED]> wrote:
[Jeff]  When I'm testing against my dev server, I'm the only one playing in
that sandbox, so I'm just doing single requests; there are definitely no
locking or concurrency issues happening.

I was mostly thinking of the kind of issues that more sophisticated
databases sometimes have with isolation levels.  For example, with MySQL
InnoDB tables at the default isolation level (REPEATABLE READ), a reader
that is inside a transaction can't see data committed by other connections.
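
Something along these lines shows what I mean (the DSN, credentials, and
table are made up, but the DBI calls are real):

use DBI;

# Two connections to the same InnoDB table; the reader is inside
# a transaction, the writer autocommits.
my $reader = DBI->connect('dbi:mysql:testdb', 'user', 'pass',
                          { RaiseError => 1, AutoCommit => 0 });
my $writer = DBI->connect('dbi:mysql:testdb', 'user', 'pass',
                          { RaiseError => 1, AutoCommit => 1 });

my ($before) = $reader->selectrow_array('SELECT COUNT(*) FROM items');

$writer->do('INSERT INTO items (name) VALUES (?)', undef, 'new row');

# Same count as before: the reader's open transaction pins its snapshot.
my ($after) = $reader->selectrow_array('SELECT COUNT(*) FROM items');

# Ending the transaction releases the snapshot, so the next read is fresh.
$reader->commit;
my ($fresh) = $reader->selectrow_array('SELECT COUNT(*) FROM items');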

[Jeff]  When I had multiple shells open, and my web app, all connected to the
same SQLite db file, I saw the same behavior as before, where the shells
would show something different from the web app.  But after some amount of
time, and some number of writes to the db, all of the shells and the web app
were in sync.  So it looks as though, if I wait long enough, all processes on
the same dev box end up in sync with the web app, and all is well.

Just a hunch: if you add a commit to the beginning of your web code,
does that change anything?
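
Something like this at the top of the request, assuming your handle runs
with AutoCommit off (get_cached_dbh() is just a placeholder for however
your app caches its $dbh):

sub handler {
    my $r = shift;

    # Placeholder for however you fetch the cached DBI handle.
    my $dbh = get_cached_dbh();

    # End whatever transaction is left over from the previous request,
    # so this request reads current data.
    $dbh->commit;

    # ... normal request processing ...
}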

[Jeff]  I'm doing very vanilla stuff; I doubt example code would show anything.

I was mostly thinking of typical Perl scoping issues, like
accidental closures:

my $x = some_database_method();   # runs once, at load time

sub foo {
   print $x;   # keeps seeing that original value on every call
}

This code will never pick up changes from the database.
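
The fix, if that's what is going on, is to do the lookup inside the sub
so every call sees current data:

sub foo {
   my $x = some_database_method();
   print $x;
}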

[Jeff]  It's almost like the OS is keeping two distinct versions of this
file, when there is only one on the drive.

Your web app may well be holding onto the old file, even though you don't
see it when you list the directory.  Open handles to an unlinked file still
work.
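
You can see that with something like this (throwaway file name, Unix
semantics):

open my $out, '>', '/tmp/unlink_demo.txt' or die $!;
print {$out} "still here\n";
close $out;

open my $in, '<', '/tmp/unlink_demo.txt' or die $!;
unlink '/tmp/unlink_demo.txt';   # the name is gone from the directory...
print scalar <$in>;              # ...but the open handle still reads "still here"
close $in;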

[Jeff]  At least I know that when I do everything from the web interface, it
seems to work great across all Apache children.

That's why I think that either doing a commit at the beginning of the
request or reopening your DBI connection will fix the problem.
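
If you'd rather reconnect, something like this at the start of the request
works too (the DSN and path are placeholders; $dbh is whatever variable
your app caches the connection in):

$dbh->disconnect if $dbh;
$dbh = DBI->connect('dbi:SQLite:dbname=/path/to/app.db', '', '',
                    { RaiseError => 1, AutoCommit => 1 });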

- Perrin
