oh, on using adodb.sf.net with zero overhead for jumping between mysql
and postgresql:

keep all your queries to as simple and early-standard sql as possible.
the auto_increment incompatibilities can be circumvented with a
relatively simple helper along the lines of getMax($table, $field),
sketched below.
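a minimal sketch of such a helper (assuming the same adoEasyConnection()
from config.php; note that max()+1 is not safe under concurrent inserts
without a lock or transaction):

function getMax($table, $field) {
  $dbConn = adoEasyConnection();
  // GetOne returns the first field of the first row, or false on failure
  $max = $dbConn->GetOne('select max(' . $field . ') from ' . $table);
  return ($max === false) ? 0 : (int) $max;
}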

in adodb, you'd loop through a huge dataset like this, keeping the
communication and memory usage between the db server and php in check.

$dbConn = adoEasyConnection(); // returns an adodb connection object,
                               // initialized with values from config.php
$sql = "select * from data where bladiebla='yep'"; // single-quoted literal: portable sql
$q = $dbConn->Execute($sql);
if (!$q) {
  handleError($dbConn->ErrorMsg(), $sql);
} else {
  while (!$q->EOF) {
    // the currently loaded record is in $q->fields['field_name']
    $q->MoveNext();
  }
}

for short resultsets you could call $q->GetRows() to get all the rows
returned as a single multidimensional array.
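for instance (reusing the hypothetical table from above):

$q = $dbConn->Execute("select id, name from data where bladiebla='yep'");
$rows = $q ? $q->GetRows() : array();
// $rows[0]['name'] holds the name field of the first row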

instead of difficult outer-join constructs and stored procedures (which
are not portable), i find it much easier to aim for small intermediate
computation-result arrays in php, which are then used to construct the
fetch-final-result sql on the fly.
intermediate and/or result arrays can be stored in the db or on disk as
json, too ;)
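a rough illustration of that two-pass idea (table and column names are
made up):

// pass 1: collect an intermediate array of ids
$ids = array();
$q = $dbConn->Execute("select id from data where bladiebla='yep'");
while ($q && !$q->EOF) {
  $ids[] = (int) $q->fields['id'];
  $q->MoveNext();
}

// pass 2: build the final query from the array instead of an outer join
if (count($ids) > 0) {
  $sql = 'select * from details where data_id in (' . implode(',', $ids) . ')';
  $details = $dbConn->GetAll($sql); // whole resultset as one array
}

// the intermediate array can be parked on disk as json
file_put_contents('/tmp/ids.json', json_encode($ids));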

i built a cms that stores media items and their meta-properties in the
db, with the ability to update some meta-properties of an arbitrary
selection of media items to new values in one go.
i had no problem at all switching from postgresql to mysql, using the
methods described above.
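that bulk update is the same trick again; something like this (column
and variable names are made up, qstr() is adodb's string quoting):

$sql = 'update media set category = ' . $dbConn->qstr($newCategory) .
       ' where id in (' . implode(',', $selectedIds) . ')';
$dbConn->Execute($sql);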
