ID:               25093
Updated by:       [EMAIL PROTECTED]
Reported By:      php at pv2c dot sk
-Status:          Open
+Status:          Closed
Bug Type:         PostgreSQL related
Operating System: Linux
PHP Version:      4.3.2
New Comment:
This bug has been fixed in CVS.

In case this was a PHP problem, snapshots of the sources are packaged every three hours; this change will be in the next snapshot. You can grab the snapshot at http://snaps.php.net/.

In case this was a documentation problem, the fix will show up soon at http://www.php.net/manual/.

In case this was a PHP.net website problem, the change will show up on the PHP.net site and on the mirror sites shortly.

Thank you for the report, and for helping us make PHP better.


Previous Comments:
------------------------------------------------------------------------

[2003-08-14 06:53:42] php at pv2c dot sk

Sorry :), the correct SQL in pg_query should be:
"INSERT INTO aaa (test) VALUES (1);"

------------------------------------------------------------------------

[2003-08-14 06:51:01] php at pv2c dot sk

Description:
------------
pg_query doesn't return a resource for failed queries - that's not very wise, IMHO (see related bug 18747) - but the real problem is that you cannot free failed results. It may not be noticeable if you have only a few failed queries, but it becomes a serious problem if you have lots. Try the example code.

Reproduce code:
---------------
// assume one table "aaa" with one column "test" that is
// unique (a primary key, for example)
$con = pg_connect(...);
for ($t = 0; $t < 10000; $t++) {
    $ret = pg_query($con, "INSERT INTO aaa (test) VALUES (1);");
    // $ret is FALSE (cannot insert a duplicate value) => no way to free it
}

Expected result:
----------------
Some way to free the result resource...

Actual result:
--------------
PHP memory consumption grows *really fast*; in my case it even ignores the memory_limit setting in php.ini.

------------------------------------------------------------------------

-- 
Edit this bug report at http://bugs.php.net/?id=25093&edit=1
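
For reference, a minimal sketch of the pattern the report describes, assuming a local test database and the corrected INSERT statement from the follow-up comment (connection parameters are placeholders). It illustrates that pg_free_result() only accepts a valid result resource, so a FALSE return from a failed pg_query() leaves nothing to release - which is why failed queries accumulated memory before the fix.

<?php
// Assumed connection string; adjust for your environment.
$con = pg_connect("host=localhost dbname=test");

for ($t = 0; $t < 10000; $t++) {
    $ret = pg_query($con, "INSERT INTO aaa (test) VALUES (1);");
    if ($ret !== false) {
        // A successful query returns a result resource that can be freed.
        pg_free_result($ret);
    } else {
        // A failed query returns FALSE: there is no resource to pass to
        // pg_free_result(), so the memory backing the failed result could
        // not be reclaimed (the behaviour this bug describes).
        // pg_last_error($con) can be used to inspect why the query failed.
    }
}

pg_close($con);
?>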
