On Thu, 2007-11-01 at 07:54 +0100, Magnus Hagander wrote:
Ow Mun Heng wrote:
On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:
2) separate the transaction log from the database
It's mostly written, and it's the most valuable data you have. And in
case you use PITR, this
Ow Mun Heng wrote:
You're likely better off (performance-wise) putting it on the same disk
as the database itself if that one has better RAID, for example.
I'm thinking along the lines of since nothing much writes to the OS
Disk, I should (keyword) be safe.
Unless it's *always* in the cache
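
(As an aside, the WAL and checkpoint settings that bear on where the
transaction log should live can be listed from the standard pg_settings
system view; a minimal sketch, not taken from the thread:)

-- Illustrative only: list WAL/checkpoint-related settings.
SELECT name, setting, short_desc
FROM pg_settings
WHERE name LIKE 'wal%' OR name LIKE 'checkpoint%';
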
Magnus Hagander wrote:
Ron St-Pierre wrote:
Joe Uhl wrote:
I realize there are people who discourage looking at Dell, but I've been
very happy with a larger ball of equipment we ordered recently from
them. Our database servers consist of a PowerEdge 2950 connected to a
PowerVault
You're likely better off (performance-wise) putting it on the same disk
as the database itself if that one has better RAID, for example.
I'm thinking along the lines of since nothing much writes to the OS
Disk, I should (keyword) be safe.
You are almost certainly wrong about this; think
Hi,
I have a table login with approx 600,000 tuples, a person table with
approx 10 tuples.
When running
select max(when) from login where userid='userid'
it takes a second or two, but when adding group by userid the planner
decides on using another plan, and it gets *much* faster. See example below.
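
(The example itself is cut off in this preview; a reconstruction of the
comparison being described, using the table and column names from the
post, with the exact query shapes assumed; "when" is a reserved word,
so it is quoted here:)

-- Reconstructed comparison, assumed from the description above:
EXPLAIN ANALYZE
SELECT max("when") FROM login WHERE userid = 'userid';

EXPLAIN ANALYZE
SELECT userid, max("when") FROM login
WHERE userid = 'userid'
GROUP BY userid;
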
Sami Dalouche [EMAIL PROTECTED] writes:
Compare that to the following query, which is exactly the same except
that the City table is inner joined instead of outer joined
...
the explain analyze is available at:
http://www.photosdesami.com/temp/exp6.txt
AFAICS it's just absolutely blind luck
Palle Girgensohn [EMAIL PROTECTED] writes:
When running
select max(when) from login where userid='userid'
it takes a second or two, but when adding group by userid the planner
decides on using another plan, and it gets *much* faster. See example below.
It's only faster for cases where there
Thanks for your answer.
So, basically, what you are saying is that there is nothing particularly
wrong with the query, nor with its optimization? So if I need
performance for this query, I should just revert to other techniques
(giving more memory to postgres, caching outside postgres, etc.)?
--On Thursday, November 01, 2007 09.43.39 -0400 Tom Lane [EMAIL PROTECTED]
wrote:
Palle Girgensohn [EMAIL PROTECTED] writes:
When running
select max(when) from login where userid='userid'
it takes a second or two, but when adding group by userid the planner
decides on using another plan,
Palle Girgensohn [EMAIL PROTECTED] writes:
Unfortunately, altering statistics doesn't help. I see no difference when
altering the value from 10 (default) to 100, 1000 or 10000. :-(
Um, you did re-ANALYZE the table after changing the setting?
regards, tom lane
--On Thursday, November 01, 2007 11.06.57 -0400 Tom Lane [EMAIL PROTECTED]
wrote:
Palle Girgensohn [EMAIL PROTECTED] writes:
Unfortunately, altering statistics doesn't help. I see no difference
when altering the value from 10 (default) to 100, 1000 or 10000. :-(
Um, you did re-ANALYZE
On 11/1/07, Tom Lane [EMAIL PROTECTED] wrote:
Palle Girgensohn [EMAIL PROTECTED] writes:
Unfortunately, altering statistics doesn't help. I see no difference when
altering the value from 10 (default) to 100, 1000 or 10000. :-(
Um, you did re-ANALYZE the table after changing the setting?
Palle Girgensohn [EMAIL PROTECTED] writes:
--On Thursday, November 01, 2007 11.06.57 -0400 Tom Lane [EMAIL PROTECTED]
wrote:
Um, you did re-ANALYZE the table after changing the setting?
alter table login alter userid SET statistics 1000;
vacuum analyze login;
Hm, that's the approved
Palle Girgensohn [EMAIL PROTECTED] writes:
Are there any other things I can modify?
You might consider an index on userid,when. Keep in mind that every new
index imposes an incremental cost on every update and insert and increases the
time for vacuum.
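
(A minimal sketch of that suggestion; the index name is invented, and
"when" is quoted because it is a reserved word:)

-- Composite index so the per-userid max can be answered by an index probe:
CREATE INDEX login_userid_when_idx ON login (userid, "when");

-- Equivalent formulation that reads one row off the end of the index:
SELECT "when" FROM login
WHERE userid = 'userid'
ORDER BY "when" DESC
LIMIT 1;
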
max_prepared_transactions = 100
Ketema wrote:
I am trying to build a very robust DB server that will support 1000+
concurrent users (already have seen a max of 237, no pooling being
used). I have read so many articles now that I am just saturated. I
have a general idea but would like feedback from others.
Describe a bit
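
(For sizing questions like this one, it helps to measure the actual
concurrent-connection count under load; a quick check against the
standard statistics view, nothing project-specific assumed:)

-- Current number of backends, versus the configured ceiling:
SELECT count(*) AS backends FROM pg_stat_activity;
SHOW max_connections;
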
Magnus Hagander wrote:
Ow Mun Heng wrote:
You're likely better off (performance-wise) putting it on the same disk
as the database itself if that one has better RAID, for example.
I'm thinking along the lines of since nothing much writes to the OS
Disk, I should (keyword) be safe.
Unless it's *always* in the cache
I am comparing the same query on two different PG 8.2 servers, one Linux
(8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPUs.
The Windows postgresql.conf is pretty well tuned, but it looks like
someone had wiped out the Linux config so the default one was re-installed.
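
(When two servers' config files may have diverged, or one was wiped as
here, the pg_settings view shows what each server is actually running
with; an illustrative query covering the usual memory and planner
suspects, not taken from the thread:)

SELECT name, setting, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'effective_cache_size', 'random_page_cost');
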
On 11/1/07, Carlo Stonebanks [EMAIL PROTECTED] wrote:
I am comparing the same query on two different PG 8.2 servers, one Linux
(8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPUs.
The Windows postgresql.conf is pretty well tuned, but it looks like
someone had wiped out
Carlo Stonebanks [EMAIL PROTECTED] writes:
Still, the Linux server did not create the same, fast plan as the Windows
server. In order to get the same plan we had to:
set enable_hashjoin to 'off';
set enable_mergejoin to 'off';
This is just about never the appropriate way to solve a performance
problem, as it will inevitably create performance problems in other
queries.
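
(If the toggles are used at all for diagnosis, they can at least be
confined to a single transaction so nothing leaks into other sessions;
a sketch, with the query itself elided as it is in the thread:)

BEGIN;
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_mergejoin = off;
EXPLAIN ANALYZE SELECT ...;  -- the query under test
ROLLBACK;                    -- SET LOCAL reverts with the transaction
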
On Thu, 2007-11-01 at 11:16 -0700, Steve Crawford wrote:
Magnus Hagander wrote:
Ow Mun Heng wrote:
You're likely better off (performance-wise) putting it on the same disk
as the database itself if that one has better RAID, for example.
I'm thinking along the lines of since nothing much writes to the OS
Disk, I should (keyword) be safe.
Hello,
I am new to setting up PostgreSQL machines for our operational
environments and would appreciate it if someone could take a look at this
setup; throw tomatoes if it looks too bad. We're expecting an
initial load of about 5 million text meta-data records into our
database; and are
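
(The message is cut off here, but for an initial load of that size the
common pattern is COPY first and indexes afterwards; a hypothetical
sketch in which the table, column, and file names are all made up:)

-- Hypothetical bulk-load sequence; names are illustrative only.
COPY meta_records FROM '/path/to/records.csv' WITH CSV;
CREATE INDEX meta_records_title_idx ON meta_records (title);
ANALYZE meta_records;
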
On 11/1/07, Mark Floyd [EMAIL PROTECTED] wrote:
Hello,
Dell PowerEdge Energy 2950
(2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86GHz, 1066MHz FSB
4GB 667MHz Dual Ranked DIMMs, Energy Smart
PERC 5/i, x8 Backplane, Integrated Controller Card
Hard Drive Configuration: Integrated SAS/SATA
This is just about never the appropriate way to solve a performance
problem, as it will inevitably create performance problems in other
queries.
In this particular example, this was done to force the query on the Linux
box to use the same plan as on the Windows box to prove that - once the