If you want to use a lightweight DB like SQLite and you are setting up your own daemon and server anyway, you can put the DB synchronization in the daemon, wrapped around SQLite, so that all database access is serialized through a single stream. We have installations built this way that handle many hundreds of simultaneous users.
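
A minimal sketch of that pattern (in Python, using the standard sqlite3, threading and queue modules; the database file, table and SQL are only illustrative):

    # All writes funnel through one queue; a single thread owns the
    # SQLite handle, so write transactions never overlap.
    import sqlite3
    import threading
    import queue

    write_queue = queue.Queue()

    def writer_loop(db_path):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS log(msg TEXT)")
        while True:
            sql, params = write_queue.get()   # blocks until a job arrives
            try:
                with conn:                    # one transaction per job
                    conn.execute(sql, params)
            finally:
                write_queue.task_done()

    # The daemon starts exactly one writer thread...
    threading.Thread(target=writer_loop, args=("app.db",), daemon=True).start()

    # ...and every worker queues its writes instead of touching the DB itself:
    write_queue.put(("INSERT INTO log(msg) VALUES (?)", ("hello",)))
    write_queue.join()                        # returns once the write is applied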

If you don't want to do that, use a DBMS like PostgreSQL, which manages all of this for you by running a separate DB server rather than linking the database engine into the application.
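
With a server-based DBMS the application code needs no extra machinery; a rough sketch, assuming the psycopg2 driver and placeholder connection details and table:

    # With a DB server, each process opens its own connection and the
    # server does the locking between concurrent writers.
    import psycopg2

    conn = psycopg2.connect("dbname=app user=app host=localhost")  # placeholder DSN
    cur = conn.cursor()
    cur.execute("INSERT INTO log (msg) VALUES (%s)", ("hello",))
    conn.commit()
    conn.close()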

Mark Robson wrote:
On Monday 20 March 2006 11:47, [EMAIL PROTECTED] wrote:

BTW: Lots of people have multiple processes writing to the same
SQLite database without problems - the SQLite website is a good
example.  I do not know what you are doing wrong to get the
locking problems you are experiencing.


I don't know how they manage it (unless, of course, many of their writes fail and the transactions roll back, and they don't notice or care).

On Monday 20 March 2006 11:58, Roger wrote:

I am developing a web-based application in PHP/SQLite and I am forever
getting that error. What I normally do is a simple

service httpd restart.


This is no good. I'm creating a daemon-based server application, which is carrying out autonomous tasks. It does not currently run under httpd, and I have no plans to make it do so.

I have several processes which carry out a fair amount of work inside a transaction - doing several writes, then doing some other time-consuming operations, and then, provided everything goes OK, committing the transaction.

This means that there are some relatively long-lived transactions (several seconds, anyway) in progress.

However, with proper locking this should NOT cause a problem - it should simply serialise the transactional operations (or so I thought).
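
(For illustration, one way to get that serialisation is to take the write lock at the start of the transaction and wait on it rather than failing part-way through - a sketch using SQLite's BEGIN IMMEDIATE and a busy timeout, shown here with Python's sqlite3 module; the table is made up:)

    # BEGIN IMMEDIATE takes the write lock up front; the busy timeout
    # makes a second writer wait for it instead of failing at once
    # with SQLITE_BUSY in the middle of its transaction.
    import sqlite3

    conn = sqlite3.connect("app.db", timeout=30)   # wait up to 30s for locks
    conn.isolation_level = None                    # manage transactions by hand
    conn.execute("CREATE TABLE IF NOT EXISTS jobs(state TEXT)")
    conn.execute("BEGIN IMMEDIATE")                # grab the write lock now
    conn.execute("INSERT INTO jobs(state) VALUES ('pending')")
    # ... slow work goes here; other writers queue up behind the lock ...
    conn.execute("COMMIT")
    conn.close()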

As it is, I've actually tried to port this to MySQL (using MySQL 5 and InnoDB), but I'm getting some problems there too - I think I'll have to review my use of transactions, etc.

Regards
 Mark
