Hi Martin,
First, it works. Second, the performance after all these hints is really impressive for me (test base of 100 users on my notebook):
MySQL with InnoDB: import 7 - 7.5 users per second, create_pin 3 - 3.5 users per second
PostgreSQL:        import 8 - 8.5 users per second, create_pin 3.5 - 4 users per second
OK, we cannot support millions of users with my notebook, but I think 10,000 to 15,000 PINs per hour is a good starting point :)
Martin Bartosch wrote:
> InnoDB does work great, but it was introduced in one of the more recent
> versions of MySQL. Using it will be a problem for users with an older
> MySQL database. But since we need transactions, I would suggest using
> InnoDB at the cost of backward compatibility. InnoDB is also a bit
> slower than BDB, as far as I know, but not by much. Transaction support
> is definitely worth the price.
It took me nearly an hour, but now I have InnoDB activated on my MySQL-Max 3.23 server.
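For reference, the relevant part of my.cnf looks roughly like this (only a
sketch - the paths and sizes are examples for my notebook, adjust them to
your installation):

    [mysqld]
    # InnoDB is not enabled by default in MySQL-Max 3.23 and must be
    # configured explicitly; all paths and sizes are only examples
    innodb_data_home_dir           = /var/lib/mysql
    innodb_data_file_path          = ibdata1:30M:autoextend
    innodb_log_group_home_dir      = /var/lib/mysql
    innodb_log_file_size           = 8M
    innodb_buffer_pool_size        = 16M
    # flush the log at every commit so committed data survives a crash
    innodb_flush_log_at_trx_commit = 1

After that the tables must be created with TYPE=InnoDB, otherwise MySQL
silently falls back to the default MyISAM type.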
> Since you are currently working on the database stuff: I have not looked
> too deeply into the code, so I don't know the answer right now.
Simply look into the code, and continue writing such nice hints - these ideas help a lot. Coding is only the last step. Feel free to do whatever you want with your knowledge ;-D
> Just wondering: how do you start/end transactions? I hope you keep the
> transaction open while a request is running, even across several DBI
> calls, because a transaction rollback gains you nothing if it happens
> after a (partial) commit within a logical sequence of actions.
First, autocommit is always off if OpenCA uses a database. If we run a normal command, then we commit after the command if it returns a true value. If the command fails and returns undef, or if it crashes, then we perform an automatic rollback in the destructor of OpenCA::DBI.
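In pseudo-Perl the scheme looks roughly like this (a minimal sketch, not
the real OpenCA::DBI code - the class name and run_command are made up
for illustration):

    # sketch only: autocommit off, commit on success,
    # automatic rollback in the destructor otherwise
    package TxnHandle;
    use strict;
    use DBI;

    sub new {
        my ($class, $dsn, $user, $pass) = @_;
        my $dbh = DBI->connect($dsn, $user, $pass,
                               { AutoCommit => 0, RaiseError => 0 })
            or return undef;
        return bless { dbh => $dbh, done => 0 }, $class;
    }

    sub commit {
        my $self = shift;
        $self->{dbh}->commit;
        $self->{done} = 1;
    }

    sub DESTROY {
        my $self = shift;
        $self->{dbh}->rollback unless $self->{done};  # auto-rollback
        $self->{dbh}->disconnect;
    }

    package main;
    my $db = TxnHandle->new("dbi:Pg:dbname=openca", "user", "secret");
    my $result = eval { run_command($db) };  # run_command is hypothetical
    $db->commit if defined $result && !$@;   # else DESTROY rolls back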
If you use the batch system, then we commit or roll back after every executed action. This is necessary to avoid a problem with the data of user A stopping the complete batch system. Please look at bpDoStep to understand what I mean - sometimes my explanations are not really helpful ;)
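The idea per action is simply this (again only a sketch; $dbh is a raw
DBI handle with autocommit off, and process_user stands in for the real
batch action):

    # commit or roll back after every action so that one bad
    # user cannot stop the whole batch
    foreach my $user (@users) {
        my $ok = eval { process_user($dbh, $user) };
        if (defined $ok && !$@) {
            $dbh->commit;       # keep this user's changes
        } else {
            $dbh->rollback;     # discard only this user's changes
            warn "batch action failed for one user, continuing\n";
        }
    }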
> But if you end the transaction (committing data) after "add user
> data...", then you are hosed if an error occurs in later stages. Can
> this happen?
During normal commands, yes. Within a single batch action, yes. Outside of a failing action for user A in the batch system, no.
> If yes, then we need to propagate the transaction to each method that
> operates on the transaction. After the last operation has finished, the
> transaction may be closed with a commit.
This is handled by the return value of doQuery.
> (BTW, using destructor methods it is possible to build wonderful
> auto-rollback transaction handlers for database operations. Just drop me
> a note if you need more info on this.)
I know because I use it :)
The really critical case is when doQuery returns an error (undef) and the user continues to use this transaction. That is useless, at least for PostgreSQL: an SQL error aborts the transaction, so everything is rolled back automatically at the end of the transaction anyway.
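So the caller must always check the return value and finish the
transaction first; roughly like this (sketch only - the exact doQuery
interface and the method names are illustrative, see OpenCA::DBI for the
real ones):

    # never continue a transaction after doQuery reported an error
    my $result = $db->doQuery($query);
    if (not defined $result) {
        # at least with PostgreSQL the transaction is now aborted and
        # every further statement fails until we roll back
        $db->rollback;
        return undef;
    }
    $db->commit;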
Michael

--
_______________________________________________________________
Michael Bell                    Humboldt-Universitaet zu Berlin
Tel.: +49 (0)30-2093 2482       ZE Computer- und Medienservice
Fax:  +49 (0)30-2093 2704       Unter den Linden 6
[EMAIL PROTECTED]               D-10099 Berlin
_______________________________________________________________
