[PERFORM] Problems with high traffic

2005-01-06 Thread Ben Bostow
I'm still relatively new to Postgres. I usually just do SQL programming but have found myself having to administer the DB now. I have a problem on my website when there are high amounts of traffic coming from one computer to my web server. I suspect it is because of a virus. But

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Dawid Kuroczko
On Wed, 5 Jan 2005 22:35:42 +0700, [EMAIL PROTECTED] wrote: Now I turn hyperthreading off and readjust the conf. I found the bulk query, which was: update one flag of the table [8 million records, which I think is not too much] Ahh, the huge update. Below are my hints I've

Re: [PERFORM] Problems with high traffic

2005-01-06 Thread Dave Cramer
Ben, well, we need more information: pg version, hardware, memory, etc. You may want to turn on log_duration to see exactly which statement is causing the problem. I'm assuming that since it is taking a lot of CPU it will take some time to complete (this may not be true). On your last point, that is

Re: [PERFORM] Problems with high traffic

2005-01-06 Thread Ben Bostow
I am running postgresql 7.2.4-5.73, Dual P4, 1GB RAM. The big problem is that I redirect all internal port 80 traffic to my web server, so I see all traffic whether or not it is a virus and whether or not it is intended for my server. I originally had a problem with running out of memory but I found a bug in

[PERFORM] first postgreSQL tuning

2005-01-06 Thread Vinicius Caldeira Carvalho
Hi there! I'm doing my first tuning on my postgreSQL; my server is for a small app, the largest table shall never exceed 10k rows, and less than 1k transactions/day, so I don't think I should run out of resources. The machine is a Fedora Core 3.0 with 1GB RAM and kernel 2.6. I'm thinking in

Re: [PERFORM] Problems with high traffic

2005-01-06 Thread Dave Cramer
Ben, hmmm... ok, 7.2.4 is quite old now and log_duration doesn't exist in its logging. You will see an immediate performance benefit just by moving to 7.4.x, but I'll bet that's not a reasonable path for you. In postgresql.conf you can change the logging to: log_pid=true log_duration=true
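Dave's suggested settings, sketched as a postgresql.conf fragment. These option names are from the 7.3/7.4-era configuration (log_pid was later replaced by log_line_prefix), so availability depends on the server version:

```ini
# postgresql.conf — per-statement timing in the server log (7.3/7.4 era)
log_pid = true        # prefix each log line with the backend PID
log_duration = true   # log the elapsed time of every completed statement
log_statement = true  # also log the statement text itself
```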

Re: [PERFORM] first postgreSQL tuning

2005-01-06 Thread Merlin Moncure
Hi there! I'm doing my first tuning on my postgreSQL; my server is for a small app, the largest table shall never exceed 10k rows, and less than 1k transactions/day, so I don't think I should run out of resources. The machine is a Fedora Core 3.0 with 1GB RAM and kernel 2.6. I'm thinking in

Re: [PERFORM] first postgreSQL tuning

2005-01-06 Thread Vinicius Caldeira Carvalho
Merlin Moncure wrote: Hi there! I'm doing my first tuning on my postgreSQL; my server is for a small app, the largest table shall never exceed 10k rows, and less than 1k transactions/day, so I don't think I should run out of resources. The machine is a Fedora Core 3.0 with 1GB RAM and

Re: [PERFORM] Problems with high traffic

2005-01-06 Thread Ben Bostow
I know 7.2 is old; I'm trying to fix this in the meantime while moving everything to the latest Linux software when RedHat releases the enterprise version with 2.6. Postgres complains about log_duration and log_statement; do they have a different name under 7.2? Is there documentation on the type of logging the

Re: [PERFORM] Problems with high traffic

2005-01-06 Thread Dave Cramer
Ben, it turns out that 7.2 has neither of those options; you will have to set debug_level to something higher than 0 and less than 4 to get information out. I'm afraid I'm not sure which value will give you what you are looking for. The link below explains what is available, and it isn't
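A sketch of the 7.2-era equivalent, per Dave's reply. The exact behavior of each debug_level value should be checked against the 7.2 documentation; the value shown here is only an assumption:

```ini
# postgresql.conf (PostgreSQL 7.2) — no log_duration/log_statement yet;
# raise debug_level instead (0 = off, higher = progressively more verbose)
debug_level = 2           # assumed starting point; tune between 1 and 3
debug_print_query = true  # echo each query text into the server log
```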

Re: [PERFORM] first postgreSQL tuning

2005-01-06 Thread Frank Wiles
On Thu, 06 Jan 2005 11:19:51 -0200, Vinicius Caldeira Carvalho [EMAIL PROTECTED] wrote: Hi there! I'm doing my first tuning on my postgreSQL; my server is for a small app, the largest table shall never exceed 10k rows, and less than 1k transactions/day, so I don't think I should run out of

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread amrit
Ahh, the huge update. Below are my hints I've found while trying to optimize such updates. First of all, does this update really change this 'flag'? Say you have the update: UPDATE foo SET flag = 4 WHERE [blah]; are you sure that flag is always different from 4? If not, then add: UPDATE
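Dawid's first hint can be sketched with sqlite3 standing in for Postgres: guarding the UPDATE with `AND flag <> 4` keeps it from touching rows that already hold the target value (on Postgres, every touched row also creates a dead tuple, so the saving is larger there). The table and values here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, flag INTEGER)")
# 10 rows: half already have flag = 4, half have flag = 0
conn.executemany("INSERT INTO foo (id, flag) VALUES (?, ?)",
                 [(i, 4 if i % 2 == 0 else 0) for i in range(1, 11)])

# Naive form: rewrites all 10 rows, including the 5 already set to 4.
cur = conn.execute("UPDATE foo SET flag = 4")
print(cur.rowcount)   # 10

# Reset the data, then use the guarded form: only the 5 changing rows are touched.
conn.executemany("UPDATE foo SET flag = ? WHERE id = ?",
                 [(4 if i % 2 == 0 else 0, i) for i in range(1, 11)])
cur = conn.execute("UPDATE foo SET flag = 4 WHERE flag <> 4")
print(cur.rowcount)   # 5
```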

Re: [PERFORM] Benchmark two separate SELECTs versus one LEFT JOIN

2005-01-06 Thread Josh Berkus
Miles, I only have a laptop here so I can't really benchmark properly. I'm hoping maybe someone else has, or just knows which would be faster under high traffic/quantity. Well, it's really a difference between round-trip time vs. the time required to compute the join. If your database is
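The trade-off Josh describes can be made concrete with sqlite3 as a stand-in (schema and data invented for illustration): two SELECTs cost an extra round trip but each query is trivial, while one LEFT JOIN is a single round trip where the server computes the join and repeats the parent columns on every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO post VALUES (10, 1, 'intro'), (11, 1, 'update');
""")

# Option 1: two SELECTs — two round trips to the server.
author = conn.execute("SELECT id, name FROM author WHERE id = 1").fetchone()
posts = conn.execute(
    "SELECT title FROM post WHERE author_id = ? ORDER BY id",
    (author[0],)).fetchall()

# Option 2: one LEFT JOIN — one round trip; author columns repeat per row.
joined = conn.execute("""
    SELECT a.name, p.title
    FROM author a LEFT JOIN post p ON p.author_id = a.id
    WHERE a.id = 1
    ORDER BY p.id
""").fetchall()

print(posts)   # [('intro',), ('update',)]
print(joined)  # [('ann', 'intro'), ('ann', 'update')]
```

Which is faster depends on round-trip latency versus join cost, which is exactly why it needs benchmarking on the real workload.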

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Frank Wiles
On Thu, 6 Jan 2005 09:06:55 -0800 Josh Berkus josh@agliodbs.com wrote: I can't tell you how many times I've seen this sort of thing. And the developers always tell me Well, we denormalized for performance reasons ... Now that's rich. I don't think I've ever seen a database perform

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Josh Berkus
Dawid, Ahh, the huge update. Below are my hints I've found while trying to optimize such updates. Divide the update, if possible. This way the query uses less memory and you may call VACUUM in between updates. To do this, first SELECT INTO a TEMPORARY table the list of rows to update (their ids
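The divide-the-update hint, sketched with sqlite3 standing in for Postgres: collect the target ids into a temporary table first, then update them in fixed-size batches, committing after each one so that (on Postgres) VACUUM can reclaim dead tuples between batches. Table names, column names, and the batch size are all invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO foo VALUES (?, 0)", [(i,) for i in range(1, 101)])

# Step 1: SELECT the ids that actually need the change into a temp table.
conn.execute("""
    CREATE TEMPORARY TABLE todo AS
    SELECT id FROM foo WHERE flag <> 4
""")
ids = [row[0] for row in conn.execute("SELECT id FROM todo ORDER BY id")]

# Step 2: update in small batches, committing after each one.
BATCH = 25
updated = 0
for start in range(0, len(ids), BATCH):
    chunk = ids[start:start + BATCH]
    qmarks = ",".join("?" * len(chunk))
    cur = conn.execute(
        f"UPDATE foo SET flag = 4 WHERE id IN ({qmarks})", chunk)
    updated += cur.rowcount
    conn.commit()   # on Postgres, a VACUUM foo could run here

print(updated)   # 100
```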

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Dave Cramer
Reading can be worse for a normalized db, which is likely what the developers were concerned about. One always has to be careful to measure the right thing. Dave Frank Wiles wrote: On Thu, 6 Jan 2005 09:06:55 -0800 Josh Berkus josh@agliodbs.com wrote: I can't tell you how many times I've

Re: [PERFORM] Denormalization WAS: Low Performance for big hospital server ..

2005-01-06 Thread Josh Berkus
Frank, Now that's rich. I don't think I've ever seen a database perform worse after it was normalized. In fact, I can't even think of a situation where it could! Oh, there are some. For example, Primer's issues around his dating database; it turned out that a fully normalized

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Rod Taylor
On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote: Reading can be worse for a normalized db, which is likely what the developers were concerned about. To a point. Once you have enough data that you start running out of space in memory then normalization starts to rapidly gain ground again

Re: [PERFORM] Denormalization WAS: Low Performance for big hospital

2005-01-06 Thread Frank Wiles
On Thu, 6 Jan 2005 09:38:45 -0800 Josh Berkus josh@agliodbs.com wrote: Frank, Now that's rich. I don't think I've ever seen a database perform worse after it was normalized. In fact, I can't even think of a situation where it could! Oh, there are some. For example, Primer's

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Richard_D_Levine
In my younger days I denormalized a database for performance reasons and have paid dearly for it with increased maintenance costs. Adding enhanced capabilities and new functionality will quickly render denormalization worse than useless. --Rick

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Yann Michel
Hi On Thu, Jan 06, 2005 at 12:51:14PM -0500, Rod Taylor wrote: On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote: Reading can be worse for a normalized db, which is likely what the developers were concerned about. To a point. Once you have enough data that you start running out of

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Greg Stark
Frank Wiles [EMAIL PROTECTED] writes: Now that's rich. I don't think I've ever seen a database perform worse after it was normalized. In fact, I can't even think of a situation where it could! Just remember: all generalisations are false. -- greg

Re: [PERFORM] Low Performance for big hospital server ..

2005-01-06 Thread Joshua D. Drake
Greg Stark wrote: Frank Wiles [EMAIL PROTECTED] writes: Now that's rich. I don't think I've ever seen a database perform worse after it was normalized. In fact, I can't even think of a situation where it could! Just remember. All generalisations are false. In general, I would agree.