Re: [PERFORM] 100 simultaneous connections, critical limit?
Hi!

AA scott.marlowe wrote: A few tips from an old PHP/Apache/PostgreSQL developer.
AA 1: Avoid pg_pconnect unless you are certain you have load tested the
AA system and it will behave properly. pg_pconnect often creates as many
AA issues as it solves.

My experience with persistent connections in PHP is quite similar to Scott Marlowe's. There are some nasty effects if something is not working. The most harmless of these probably come from unclosed transactions, which result in a warning, since PHP seems to always send a BEGIN; ROLLBACK; when reusing a connection.

AA I share the above view. I've had little success with persistent
AA connections. The cost of pg_connect is minimal; pg_pconnect is not a
AA viable solution IMHO. Connections are rarely actually reused.

Still, I think it's a good way to speed things up. The gain is probably not so much the connection time saved in PHP as the general saving of processor time: spawning a new process on the backend can be a very expensive operation, and if it happens often, it adds up. Perhaps it's only a memory-for-CPU-time trade. My persistent connections get used very evenly, no matter whether there are 2 or 10; the CPU usage is distributed very equally among them.

Christoph Nelles

--
With kind regards
Evil Azrael    mailto:[EMAIL PROTECTED]

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message
can get through to the mailing list cleanly
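A minimal SQL sketch of the reset sequence described above, assuming (as the post observes, not from documentation) that PHP sends this pair when handing out a reused persistent connection:

```sql
-- Hypothetical reset sequence, per the observation above: PHP appears
-- to issue this pair on reuse of a pg_pconnect connection. If the
-- previous script left a transaction open, the fresh BEGIN draws a
-- "there is already a transaction in progress" WARNING, and the
-- ROLLBACK then discards the abandoned transaction.
BEGIN;
ROLLBACK;
```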
Re: [PERFORM] select max/count(id) not using index
Good day Ryszard Lach,

On Monday, December 22, 2003 at 11:39 you wrote:

RL Hi.
RL I have a table with 24k records and a btree index on column 'id'. Is this
RL normal, that 'select max(id)' or 'select count(id)' causes a sequential
RL scan? It takes over 24 seconds (on a pretty fast machine):

Yes, that has occasionally been discussed on the mailing lists. For max(id) you can instead use

SELECT id FROM table ORDER BY id DESC LIMIT 1

Christoph Nelles

RL = explain ANALYZE select max(id) from ogloszenia;
RL                          QUERY PLAN
RL --------------------------------------------------------------
RL Aggregate (cost=3511.05..3511.05 rows=1 width=4) (actual
RL   time=24834.629..24834.629 rows=1 loops=1)
RL   -> Seq Scan on ogloszenia (cost=0.00..3473.04 rows=15204 width=4)
RL      (actual time=0.013..24808.377 rows=16873 loops=1)
RL Total runtime: 24897.897 ms

RL Maybe it's caused by a number of varchar fields in this table? However,
RL the 'id' column is 'integer' and is the primary key.
RL Clustering the table on the index created on 'id' makes such queries
RL much faster, but they still use a sequential scan.
RL Richard.

--
With kind regards
Evil Azrael    mailto:[EMAIL PROTECTED]
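The rewrite can be verified with EXPLAIN ANALYZE; a sketch using the table name from the quoted question (plan output will of course vary):

```sql
-- max()/count() aggregates cannot use a btree index in this
-- PostgreSQL version, so they force a sequential scan. The
-- ORDER BY ... LIMIT 1 form lets the planner walk the index
-- from its high end instead:
EXPLAIN ANALYZE
SELECT id FROM ogloszenia ORDER BY id DESC LIMIT 1;

-- count(id) has no such rewrite; an exact count always scans.
-- If an estimate suffices, the planner's statistics (kept up to
-- date by VACUUM/ANALYZE) can be read directly:
SELECT reltuples FROM pg_class WHERE relname = 'ogloszenia';
```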
[PERFORM] Various Questions
Hi!

I have 4 questions which probably someone can answer.

1) I have a transaction during which no data was modified. Does it make a difference whether I send COMMIT or ROLLBACK? The effect is the same, but what about the speed?

2) Is there any general rule for when the GEQO will start using an index? Does it consider the number of tuples in the table or the number of data pages? Or is it even more complex, even if you don't tweak the cost settings for the GEQO?

3) Does it make sense to add an index to a table used for logging? I mean, the table can grow rather large due to many INSERTs, but it is also seldom queried. Does the index noticeably slow down INSERTs?

4) Temporary tables will always be rather slow, as they can't gain from ANALYZE runs, correct?

Thanks in advance for any answers

Christoph Nelles

--
With kind regards
Evil Azrael    mailto:[EMAIL PROTECTED]
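Regarding questions 3 and 4, a hedged sketch (table and column names are illustrative, not from the thread) of an index on a log table and an explicit ANALYZE on a temporary table:

```sql
-- Question 3: an index on a write-heavy log table. Every INSERT must
-- also update the index, so there is a per-row maintenance cost;
-- whether it is noticeable depends on the workload.
CREATE TABLE app_log (logged_at timestamp, message text);
CREATE INDEX app_log_logged_at ON app_log (logged_at);

-- Question 4: a temporary table can be ANALYZEd explicitly after it
-- is populated, so the planner need not work from default statistics.
CREATE TEMPORARY TABLE tmp_work (id integer, payload text);
-- ... populate tmp_work ...
ANALYZE tmp_work;
```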
Re: [PERFORM] Postgres 7.3.4 + Slackware 9.1
Hi!

I haven't really tested it on Slackware 9.1, but I have been running PostgreSQL for over two years on various Slackware versions. My current server is running Slackware 8.0 with a lot of packages (especially the core libs) upgraded to Slackware 9.1 packages. I have never had problems with PostgreSQL related to Slackware, except that the old 8.0 readline package was too old for PostgreSQL 7.3.x, but that was not really a problem. It seems there is no prepackaged PostgreSQL for Slackware, though, so you would have to compile it yourself.

Christoph Nelles

On Friday, October 31, 2003 at 21:55 you wrote:

P Hello all!
P Does anyone have experience installing Postgres 7.3.4 on Slackware 9.1?
P Is there any trouble, bug, or problem... or is it a good mix?
P I want to leave RedHat (9) because it is not free anymore and I don't
P want to use Fedora BETA TEST versions.
P Any suggestions?
P THANKS ALL.

--
With kind regards
Evil Azrael    mailto:[EMAIL PROTECTED]