Re: [PERFORM] Persistent Connections
Hi,

[EMAIL PROTECTED] wrote:
> Hi
>
> I have a php script and i make a pg_pconnect. If i want to make 4-10
> pg_query in that script, have i to close the connection at end of the
> script? (i would say yes, is it right?)

If you want to make multiple pg_query's in a page you can, and you can use the same connection. You don't have to use persistent connections for this. Just open the connection and fire off the different queries.

The persistent connection remains open between different pages loading, which is supposedly faster because you don't have the overhead of opening the connection.

If you want to use a persistent connection then definitely don't close it at the bottom of the page. If you want to use the other kind of connection (pg_connect, non-persistent) then you don't have to close it at the bottom of the page because PHP does it for you, although you can if you are feeling nice ;-).

> Sorry, I'm a little bit confused about the persistent thing!! Is it
> smart to use persistent connections at all if i expect 100K users to
> hit the script in an hour, and the script calls up to 10-15 pg
> functions? I have at the moment one function, but the server needs
> 500 ms, which is a little bit too much i think, and it crashed when i
> had 20K users.

Use the persistent connection, but make sure the parameters in postgresql.conf match up with the Apache config. The specific settings are MaxClients in httpd.conf and max_connections in postgresql.conf. Make sure that max_connections is at least as big as MaxClients for every database that your PHP scripts connect to.

Thanks
Bye

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
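As a concrete sketch of the MaxClients/max_connections advice above (the numbers are illustrative assumptions, not recommendations — size them for your own load):

```
# httpd.conf (Apache prefork) -- upper bound on Apache processes, and
# therefore on simultaneous persistent connections from this web server
MaxClients        150

# postgresql.conf -- must be at least MaxClients (summed over all web
# servers using pg_pconnect), plus headroom for superuser, cron jobs,
# and other non-PHP connections
max_connections = 200
```

If max_connections is smaller than the total possible Apache children, pg_pconnect calls will start failing under load, which is consistent with the crash at 20K users described above.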
Re: [PERFORM] Benchmarking PostgreSQL?
Tom Lane wrote:
> It is notoriously hard to get reproducible results from pgbench.
> However...
>
>> I'm running pgbench with 35 clients and 50 transactions/client
>
> (1) what scale factor did you use to size the database? One of the
> gotchas is that you need to use a scale factor at least as large as the
> number of clients.

I forgot to mention that - I read the pgbench README, and the scale factor was set to 40.

> (2) 50 xacts/client is too small to get anything reproducible; you'll
> mostly be measuring startup transients. I usually use 1000 xacts/client.

I was using 100 and 50, hoping that the larger value would help reproducibility and the smaller, just as you said, would measure startup time. What I also forgot to mention was that the numbers I was talking about were obtained using the '-C' pgbench switch. Without it the results vary between about 60 and 145 (same 'alternating' effects, etc.). Thanks, I will try 1000 transactions!

There's another thing I'm puzzled about: I deliberately used the -C switch with the intention of measuring connection time, but with it, the numbers displayed by pgbench for 'tps with' and 'tps without connection time' are the same to the 6th decimal place. Without -C, both numbers are more than doubled and differ by about 2-3 tps. (I was expecting that with -C the 'tps with c.t.' would be much lower than the 'tps without c.t.'.)

(the README is here: http://developer.postgresql.org/cvsweb.cgi/pgsql-server/contrib/pgbench/README.pgbench)
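For reference, a sketch of the pgbench invocations being discussed, following Tom Lane's suggestions (the database name `bench` is an assumption; option spellings are those of contrib/pgbench of that era):

```shell
# Initialize the test database with scale factor 40
# (the scale should be >= the number of clients, to avoid all clients
# contending on the same "branches" rows)
pgbench -i -s 40 bench

# 35 clients, 1000 transactions each, reconnecting for every
# transaction (-C), so per-connection overhead is included in the run
pgbench -c 35 -t 1000 -C bench

# Same run without -C: each client keeps one connection for the
# whole run, so connection overhead is paid only once per client
pgbench -c 35 -t 1000 bench
```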
Re: [PERFORM] query slows under load
On Fri, 23 Jan 2004, Jenny Zhang wrote:
> 3. index with desc/asc is not supported in PG, why is it not needed? Is
> there any work-around?

You can do this with index operator classes. There aren't any automatically provided ones that do the reversed sort, IIRC, but I think that's come up before with examples. I've toyed with the idea of writing the reverse opclasses for at least some of the types, but haven't been seriously motivated to actually get it done.
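To make the opclass workaround concrete, here is a sketch along the lines of the reverse-sort example in the PostgreSQL "Interfacing Extensions to Indexes" documentation. It assumes a C-language support function int4_revcmp that returns the negation of the built-in int4cmp; the function, table, and index names are hypothetical:

```sql
-- Hypothetical support function: compares two int4 values in reverse
-- order (implemented in C as the negation of int4cmp's result)
CREATE FUNCTION int4_revcmp(int4, int4) RETURNS int4
    AS 'MODULE_PATHNAME', 'int4_revcmp'
    LANGUAGE C STRICT;

-- A btree operator class that sorts int4 descending: the five
-- strategy slots are filled with the existing operators, swapped
-- end-for-end relative to the default int4_ops
CREATE OPERATOR CLASS int4_reverse_ops
    FOR TYPE int4 USING btree AS
        OPERATOR 1  > ,   -- "less than" slot gets >
        OPERATOR 2  >= ,  -- "less than or equal" slot gets >=
        OPERATOR 3  = ,
        OPERATOR 4  <= ,  -- "greater than or equal" slot gets <=
        OPERATOR 5  < ,   -- "greater than" slot gets <
        FUNCTION 1  int4_revcmp(int4, int4);

-- An index built with the reverse opclass can then serve queries
-- that order by that column descending
CREATE INDEX test_desc_idx ON test_table (some_col int4_reverse_ops);
```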