soon as you can afford it and then tune your PostgreSQL parameters
to make best use of it. The more RAM resident your DB, the better.
Hope this helps,
Ron Peacetree
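As a hedged illustration of the tuning in question, these are the GUCs
usually involved; the names are real pg settings, but the values below
are my assumptions, not figures from this thread:

  -- check current values from psql
  SHOW shared_buffers;        -- pg's own buffer cache
  SHOW effective_cache_size;  -- what the planner assumes the OS + pg can cache

  -- then raise them in postgresql.conf, e.g. on a 4GB box
  -- (8.x units are 8KB pages):
  --   shared_buffers = 50000          # ~400MB
  --   effective_cache_size = 350000   # ~2.7GB

shared_buffers only takes effect after a restart.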
===Original Message Follows===
From: Kari Lavikka
To: Merlin Moncure
Subject: Re: Finding bottleneck
At 05:15 AM 8/17/2005, Ulrich Wisser wrote:
Hello,
thanks for all your suggestions.
I can see that the Linux system is 90% waiting for disc io.
A clear indication that you need to improve your HD IO subsystem.
At that time all my queries are *very* slow.
To be more precise, your server pe
fit less from RAM than
data mining ones, but still 2GB of RAM is just not that much for a
real DB server...
Ron Peacetree
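As a hedged sketch of how to quantify "90% waiting for disc io" from
inside pg: the block counters in pg_stat_database (8.x needs
stats_block_level = on for these to be populated):

  SELECT datname,
         round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 1)
           AS pct_read_from_cache
  FROM pg_stat_database;

A low cache-hit percentage on the busy database corroborates that the
working set doesn't fit in RAM and the HD IO subsystem is the limit.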
nly help in case "B" above. If you go the "hard" route of
using systems programming, you will have a lot of details that must
be paid attention to correctly or Bad Things (tm) will
happen. Putting the semaphore in place is the tip of the iceberg.
Hope
At 01:55 PM 8/18/2005, John Arbash Meinel wrote:
Jeremiah Jahn wrote:
>here's an example standard query. I really have to make the first hit go
>faster. The table is clustered on full_name as well. 'Smith%'
>took 87 seconds on the first hit. I wonder if I set up my array wrong.
>I remebe
less there's a bottleneck somewhere else in the system design.
Hope this helps,
Ron Peacetree
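For readers following along, a minimal sketch of the pattern being
discussed (table and index names are hypothetical, mirroring the
full_name case; CLUSTER shown in 7.x/8.x syntax):

  CREATE INDEX identity_full_name_idx ON identity (full_name);
  -- in a non-C locale, declare it with varchar_pattern_ops so that
  -- LIKE 'Smith%' can use the index
  CLUSTER identity_full_name_idx ON identity;  -- physically order the heap
  ANALYZE identity;
  EXPLAIN ANALYZE
    SELECT * FROM identity WHERE full_name LIKE 'Smith%';

With the heap ordered to match the index, a prefix search touches a
contiguous run of pages instead of seeking all over the disk, which is
exactly what helps a cold first hit.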
At 08:40 AM 8/19/2005, Alex Turner wrote:
I have managed tx speeds that high from postgresql going even as high
as 2500/sec for small tables, but it does require a good RAID
controller card (ye
the code under discussion, but I have seen mySQL
easily achieve these kinds of numbers using the myISAM storage engine
in write-through cache
mode.
myISAM can be =FAST=. Particularly when decent HW is thrown at it.
Ron
At 12:34 PM 8/19/2005, Jeffrey W. Baker wrote:
On Fri, 2005-08-19 at 10:54 -0400, Ron wrote:
> Maxtor Atlas 15K II's.
> Areca's 1GB buffer RAID cards
The former are SCSI disks and the latter is an SATA controller. The
combination would have a transaction rate of approximat
he DB schema needs examining to see if it matches up well
with its real usage?
Ron Peacetree
the xlog baby which is sequential writes, and all about large
block reads, which is sequential reads.
Alex Turner
NetEconomist
P.S. Sorry if I'm a bit punchy, I've been up since yesterday with
server upgrade nightmares that continue ;)
My condolences and sympathies. I've def
At 04:11 PM 8/19/2005, Jeremiah Jahn wrote:
On Fri, 2005-08-19 at 14:23 -0500, John A Meinel wrote:
> Ron wrote:
> > At 01:18 PM 8/19/2005, John A Meinel wrote:
> >
> >> Jeremiah Jahn wrote:
> >> > Sorry about the formatting.
> >> >
> >>
gaRAID controllers)
are high powered enough.
Talk to your HW supplier to make sure you have controllers adequate
to your HD's.
...and yes, your average access time will be in the 5.5ms - 6ms range
when doing a physical seek.
Even with RAID, you want to minimize seeks and maximize sequential IO
when accessing them.
Best to not go to HD at all ;-)
Hope this helps,
Ron Peacetree
r HD
subsystem as much as possible to writes (which is unavoidable HD
IO). As I've posted before, at $75-$150/GB, it's well worth the
investment whenever you can prove it will help as we have here.
Hope this helps,
Ron Peacetree
I'm resending this as it appears not to have made it to the list.
At 10:54 AM 8/21/2005, Jeremiah Jahn wrote:
On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:
> Ron wrote:
>
> Well, since you can get a read of the RAID at 150MB/s, that means that
> it is actual I/O speed. It may not be cached in RAM. Perhaps you could
> try the same test,
At 03:10 AM 8/25/2005, Ulrich Wisser wrote:
I realize I need to be much more specific. Here is a more detailed
description of my hardware and system design.
Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHZ CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board
First suggestion
sorting or sorting IO performance.
Ron
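As a hedged aside on sort IO: the per-sort memory knob is sort_mem on
pg 7.x and work_mem from 8.0 on. A minimal sketch (values and table are
my assumptions, not Ulrich's):

  SET work_mem = 65536;                      -- KB per sort, this session only
  EXPLAIN ANALYZE
    SELECT * FROM hits ORDER BY logged_at;   -- hypothetical table

If the runtime drops sharply after raising it, the sort was spilling to
disk; if not, the bottleneck is elsewhere.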
At 04:49 PM 8/25/2005, Chris Browne wrote:
[EMAIL PROTECTED] (Ron) writes:
> At 03:45 PM 8/25/2005, Josh Berkus wrote:
>> > Ask me sometime about my replacement for GNU sort. It uses the
>> > same sorting algorithm, but it's an order of magnitude faster due
>
easurements of how
long it takes to do such scans then the estimated cost has a decent
chance of being fairly accurate under such circumstances.
It might not work well, but it seems like a reasonable first attempt
at a solution?
Ron Peacetree
he cost of corrupted or lost data.
HD's and RAM are cheap enough that you should be able to upgrade in
more ways, but do at least that "upgrade"!
Beyond that, the best ways to spend your limited $ are highly
dependent on your exact DB and its usage pattern.
Ron Peacetree
At 12:56 PM 8/30/2005, Joshua D. Drake wrote:
Ron wrote:
At 08:37 AM 8/30/2005, Alvaro Nunes Melo wrote:
Hello,
We are about to install a new PostgreSQL server, and despite
being a very humble configuration compared to the ones we see on
the list, it's the biggest one we'v
At 03:27 PM 8/30/2005, Joshua D. Drake wrote:
If you still have the budget, I would suggest considering either
what Ron suggested or possibly using a 4 drive RAID 10 instead.
IME, with only 4 HDs, it's usually better to split them into
two RAID 1's (one for the db, one for
At 08:04 PM 8/30/2005, Michael Stone wrote:
On Tue, Aug 30, 2005 at 07:02:28PM -0400, Ron wrote:
purpose(s). That's why the TPC benchmarked systems tend to have
literally 100's of HD's and they tend to be split into very focused purposes.
Of course, TPC benchmark systems
At 08:43 PM 8/30/2005, Michael Stone wrote:
On Tue, Aug 30, 2005 at 08:41:40PM -0400, Ron wrote:
The scary thing is that I've worked on RW production systems that
bore a striking resemblance to a TPC benchmark system. As you can
imagine, they uniformly belonged to BIG organizations
1885.34MBps. What size are those network connects (Server A <->
storage, Server B <-> storage, Server A <-> Server B)?
Ron Peacetree
At 10:16 AM 9/1/2005, Ernst Einstein wrote:
I've set up a Package Cluster (Fail-Over Cluster) on our two HP
DL380 G4 with MSA S
igh enough to hold the working set of the DB. The indications
from the OP are that you may very well be able to hold the entire DB
in RAM. That's a big win whenever you can achieve it.
After these steps, there may still be performance issues that need
attention, but the DBMS should be _much_ faster.
Ron Peacetree
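A quick, hedged way to test the "entire DB in RAM" premise (these size
functions exist from 8.1; on older versions, du against the data
directory answers the same question):

  SELECT pg_size_pretty(pg_database_size(current_database()));

If that figure sits comfortably below installed RAM, the OS cache plus
shared_buffers can keep the whole DB memory resident.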
At 04:25 PM 9/1/2005, Tom Lane wrote:
Ron <[EMAIL PROTECTED]> writes:
> ... Your target is to have each row take <= 512B.
Ron, are you assuming that the varchar fields are blank-padded or
something? I think it's highly unlikely that he's got more than a
couple hundr
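Measurement settles this kind of question faster than assumption; a
minimal sketch against the address table from this thread (coalesce
guards against NULLs; octet_length is standard SQL):

  SELECT avg(coalesce(octet_length(street), 0)
           + coalesce(octet_length(locality_1), 0)
           + coalesce(octet_length(locality_2), 0)
           + coalesce(octet_length(city), 0)) AS avg_payload_bytes
  FROM address;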
age where I can run queries such as:
> > select street, locality_1, locality_2, city from address
> > where (city = 'Nottingham' or locality_2 = 'Nottingham'
> >or locality_1 = 'Nottingham')
> > and upper(substring(stree
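The predicate above is truncated, but it has the shape of an
upper(substring(...)) test, and pg has indexed arbitrary expressions
since 7.4. A sketch, assuming the test is on the first character of
street (the index is only usable if the query's expression matches it
exactly):

  CREATE INDEX address_street_initial_idx
      ON address (upper(substring(street from 1 for 1)));
  CREATE INDEX address_city_idx ON address (city);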
At 06:22 PM 9/1/2005, Matthew Sackman wrote:
On Thu, Sep 01, 2005 at 06:05:43PM -0400, Ron wrote:
>
> Since I assume you are not going to run anything with the string
> "unstable" in its name in production (?!), why not try a decent
> production ready distro like SUSE 9.
d
TerminatedOrders tables. Upsides are that the insert problem goes
away and certain kinds of accounting and inventory reports are now
easier to create.
Ron
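A minimal sketch of that split (names hypothetical, following the
TerminatedOrders idea): finished rows migrate out of the hot table in
one transaction, so inserts and reporting stop contending:

  BEGIN;
  INSERT INTO terminated_orders
      SELECT * FROM orders WHERE status = 'terminated';
  DELETE FROM orders WHERE status = 'terminated';
  COMMIT;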
's are the TOTL now). We need controller technology to keep up.
Ron
At 12:16 AM 11/16/2005, Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access
ion
numbers in aggregate.
Ron
At 12:08 PM 11/16/2005, Alex Turner wrote:
Yes - that very benchmark shows that for a MySQL Datadrive in RAID 10,
the 3ware controllers beat the Areca card.
Alex.
On 11/16/05, Ron <[EMAIL PROTECTED]> wrote:
> Got some hard numbers to back your statement
marks) show the usual pattern of the 1GB 1160's in
1st place or tied for 1st place, it seems reasonable that mySQL has
something to do with the aberrant results in those 2 (IIRC) cases.
Ron
At 03:57 PM 11/16/2005, Ron wrote:
You _ARE_ kidding right? In what hallucination?
The performan
ecent RAID controllers and HBAs are not cheap either. Even SW
RAID benefits from having a big dedicated RAM buffer to talk to.
While the above may not cost you $80K, it sure isn't costing you $1K either.
Maybe ~$15-$20K, but not $1K.
Ron
At 01:07 AM 11/18/2005, Luke Lonergan wrote:
were different from your expectations or
previously taken stance. Alan Stange's comment
re: the use of direct IO, along with your comments
re: async IO and mem copies, plus the results of
these experiments, could very well point us
directly at how to most easily solve pg's CPU boundedness.
tables
that both want services at the same time, disk arm contention will
drag performance into the floor when they are on the same HW set.
Profile your HD access and put tables that want to be accessed at the
same time on different HD sets. Even if you have to buy more HW to do it.
Ron
At 04
At 09:26 AM 11/22/2005, Guillaume Smet wrote:
Ron wrote:
If I understand your HW config correctly, all of the pg stuff is on
the same RAID 10 set?
No, the system and the WAL are on a RAID 1 array and the data on
their own RAID 10 array.
As has been noted many times around here, put the WAL
At 10:26 AM 11/22/2005, Guillaume Smet wrote:
Ron,
First of all, thanks for your time.
Happy to help.
As has been noted many times around here, put the WAL on its own
dedicated HD's. You don't want any head movement on those HD's.
Yep, I know that. That's just we su
re the
current commodity RAID controller performance leader. Better
performance can be gotten out of HW from vendors like Xyratex, but it
will cost much more.
Ron
scan times should be in the
mins or low single digit hours, not days. Particularly if you use
RAM to maximum advantage.
Ron
At 02:11 PM 11/27/2005, Luke Lonergan wrote:
Ron,
On 11/27/05 9:10 AM, "Ron" <[EMAIL PROTECTED]> wrote:
> Clever use of RAM can get a 5TB sequential scan down to ~17mins.
>
> Yes, it's a lot of data. But sequential scan times should be in the
> mins or
expect raw HD average IO rates to be at least 100MBps.
If you are getting >= 100MBps of average HD IO,
you should be getting > 5MBps during pg_dump, and certainly > 375MBps!
Ron
l work just fine.
What do the MS performance-charts show is happening? Specifically,
CPU and disk I/O.
His original post said ~3% CPU under W2K and ~70% CPU under WinXP
Ron
mp will be fast.
Franlin: are you running pg_dump from a local or remote box, and is this
a clean install? Try a fresh patched win2k install and see what happens.
He claimed this was local, not network. It is certainly an
intriguing possibility that W2K and WinXP handle bytea
differently. I'm not competent to comment on that however.
Ron
Agreed. Also the odds of fs corruption or data loss are higher in a
non-journaling fs. Best practice seems to be to use a journaling fs
but to put the fs log on dedicated spindles separate from the actual
fs or pg_xlog.
Ron
At 01:40 PM 12/1/2005, Tino Wildenhain wrote:
On Thursday, the
metadata of fs dedicated to WAL as well, but that may very well be overkill.
Ron
At 01:57 PM 12/1/2005, Tom Lane wrote:
Ron <[EMAIL PROTECTED]> writes:
> Agreed. Also the odds of fs corruption or data loss are higher in a
> non-journaling fs. Best practice seems to be to use a journaling
c= pg table on one + WAL and input file on the other.
The big goal here is to minimize HD head seeks.
Ron
7.4.8. Server: dual Opteron 240, 4GB RAM.
_Especially_ with that HW, upgrade to at least 8.0.x ASAP. It's a
good idea to not be running pg 7.x anymore anyway, but it's
particularly so if you are running 64b SMP boxes.
Ron
At 12:52 AM 12/6/2005, Thomas Harold wrote:
David Lang wrote:
in that case you logically have two disks, so see the post from Ron
earlier in this thread.
And it's a very nice performance gain. Percent spent waiting
according to "top" is down around 10-20% instead of 80-90%.
try duplicating the addresses and customers tables and using
the appropriate CLUSTERed Index on each.
I know this breaks Normal Form. OTOH, this kind of thing is common
practice for data mining problems on static or almost static data.
Hope this is helpful,
Ron
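A sketch of that denormalization (hypothetical names): keep a second
physical copy of the data CLUSTERed on the key a different query family
needs, refreshed on whatever schedule the almost-static data allows:

  CREATE TABLE customers_by_name AS SELECT * FROM customers;
  CREATE INDEX customers_by_name_idx ON customers_by_name (last_name);
  CLUSTER customers_by_name_idx ON customers_by_name;
  ANALYZE customers_by_name;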
cally feasible.
RAID levels are like any other tool. Each is useful in the proper
circumstances.
Happy holidays,
Ron Peacetree
l you
take the entire RAID 50 array off line, reinitialize it, and rebuild
it from scratch.
IME "a" and "b" make RAID 50 inappropriate for any but the biggest
and most dedicated of DB admin groups.
YMMV,
Ron
5 for less (write)
performance and lower reliability.
TANSTAAFL.
Ron Peacetree
any RAID 5 and any RAID 10
built on the same HW under the same OS running the same DBMS and
=guarantee= there is an IO load above which it can be shown that the
RAID 10 will do writes faster than the RAID 5. The only exception in
my career thus far has been the aforem
At 02:05 PM 12/27/2005, Michael Stone wrote:
On Tue, Dec 27, 2005 at 11:50:16AM -0500, Ron wrote:
Sorry. A decade+ RWE in production with RAID 5 using controllers
as bad as Adaptec and as good as Mylex, Chaparral, LSI Logic
(including their Engenio stuff), and Xyratex under 5 different OS
ffering a
failure that loses data are less than the odds of
it happening in a RAID 6 array of n HDs. You are
correct that RAID 6 is more robust than RAID 5.
cheers,
Ron
, at least DROP your indexes, do your INSERTs in batches, and
rebuild your indexes.
Doing 90K individual INSERTs should usually be avoided.
cheers,
Ron
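A minimal sketch of that pattern (all names hypothetical; COPY is the
usual bulk path, batched multi-row INSERTs work too):

  BEGIN;
  DROP INDEX orders_customer_idx;
  COPY orders FROM '/tmp/orders.dat';   -- server-side bulk load
  CREATE INDEX orders_customer_idx ON orders (customer_id);
  COMMIT;

Rebuilding the index once over the loaded data is far cheaper than
updating it 90K times, one row at a time.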
might allow a kill
signal to get through.
Hope this helps,
Ron
At 05:09 PM 12/29/2005, Jeffrey W. Baker wrote:
A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally
decided to VACUUM a table which has not been updated in over a year and
is more than one terabyte on the disk
checkpoint*?
What HW are you running on, and what kind of performance do you typically get?
Inquiring minds definitely want to know ;-)
Ron
At 08:54 AM 1/4/2006, Ian Westmacott wrote:
We have a similar application that's doing upwards of 2B inserts
per day. We have spent a lot of time optimizing
I'll second all of Luke Lonergan's comments and add these.
You should be able to increase both "cold" and "warm" performance (as
well as data integrity; read below) considerably.
Ron
At 05:59 PM 1/6/2006, peter royal wrote:
Howdy.
I'm running into scalin
At 10:40 AM 12/6/2006, Brian Wipf wrote:
All tests are with bonnie++ 1.03a
Main components of system:
16 WD Raptor 150GB 10K RPM drives all in a RAID 10
ARECA 1280 PCI-Express RAID adapter with 1GB BB Cache (Thanks for the
recommendation, Ron!)
32 GB RAM
Dual Intel 5160 Xeon Woodcrest 3.0
e lots of
4S mainboard options, but the AMD 4C CPUs won't be available until
sometime late in 2007.
I've got other ideas, but this list is not the appropriate venue for
the level of detail required.
Ron Peacetree
At 05:30 PM 12/6/2006, Brian Wipf wrote:
On 6-Dec-06, at
At 06:40 PM 12/6/2006, Brian Wipf wrote:
I appreciate your suggestions, Ron. And that helps answer my question
on processor selection for our next box; I wasn't sure if the lower
MHz speed of the Kentsfield compared to the Woodcrest but with double
the cores would be better for us overall o
At 03:37 AM 12/7/2006, Brian Wipf wrote:
On 6-Dec-06, at 5:26 PM, Ron wrote:
All this stuff is so leading edge that it is far from clear what
the RW performance of DBMS based on these components will be
without extensive testing of =your= app under =your= workload.
I want the best performance
s, stop. Else start considering the more complicated alternatives.
Remember that adding HDs and RAM is far cheaper than even a few hours
of skilled technical labor.
Ron Peacetree
ling,
not less, compared to their previous versions.
Side Note: I wonder what, if anything, pg could gain from using SWAR
instructions (SSE*, MMX, etc)?
I'd say the fairest attitude is to do everything we can to support
having the proper experiments done w/o presuming the results.
Ron Peacet
performance can
be found as easily as recompiling your kernel or your
compiler. While it certainly could be argued how "general purpose"
such SW is, the same could be said for just about any SW at some
level of abstraction.
Ron Peacetree
At 12:31 PM 12/11/2006, Michael Stone wrote:
At 01:47 PM 12/11/2006, Michael Stone wrote:
On Mon, Dec 11, 2006 at 01:20:50PM -0500, Ron wrote:
(The validity of the claim has nothing to do with the skills or
experience of the claimant or anyone else in the discussion. Only
on the evidence.)
Please go back and reread the original post
and logic that are compelling even to the
non expert when asked to do so.
All I'm saying is let's all remember how "assume" is spelled and
support the getting of some hard data.
Ron Peacetree
ults are confirmed but no other OS shows this
effect. Much digging ensues ;-)
C= Daniel's results are confirmed as platform independent once we
take all factors into account properly
We all learn more re: how to best set up pg for highest performance.
Ron Peacetree
At 01:35 AM 12/12/2006, Gre
checkpoint_segments should be?
Ron Peacetree
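For context, a hedged sketch of the knob in question: each WAL segment
is 16MB, so checkpoint_segments bounds how much WAL accumulates before
a checkpoint is forced (the value below is an illustration, not a
recommendation from this thread):

  SHOW checkpoint_segments;
  -- in postgresql.conf:
  --   checkpoint_segments = 32   # up to ~512MB of WAL between checkpoints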
2D laptop to be HD IO limited and for Michael's 2.5GHz P4 PC
to be CPU limited during pgbench runs.)
Ron Peacetree
ing using something other than pgbench for
such experiments?
Ron Peacetree
ce I don't
really want postgres using large amounts of RAM (all that does is
require me to build a larger test DB).
Daniel's original system had 512MB RAM. This suggests to me that
tests involving 256MB of pg memory should be plenty big enough.
Thoughts?
Hope they are useful.
Ron
probably should as well.
Ron Peacetree
At 12:44 AM 12/14/2006, Tom Lane wrote:
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:
>> Mostly, though, pgbench just gives the I/O system a workout. It's not a
>>
At 10:00 AM 12/14/2006, Greg Smith wrote:
On Wed, 13 Dec 2006, Ron wrote:
The slowest results, Michael's, are on the system with what appears
to be the slowest CPU of the bunch; and the ranking of the rest of
the results seem to similarly depend on relative CPU
performance. This is not
you've given no
data to show what effect arch specific compiler options have by themselves.
Also, what HDs are you using? How many in what config?
Thanks in Advance,
Ron Peacetree
At 02:14 PM 12/14/2006, Alexander Staubo wrote:
My PostgreSQL config overrides, then, are:
shared_b
At 05:39 PM 12/14/2006, Alexander Staubo wrote:
On Dec 14, 2006, at 20:28 , Ron wrote:
Can you do runs with just CFLAGS="-O3" and just CFLAGS="-msse2
-mfpmath=sse -funroll-loops -m64 -march=opteron -pipe" as well?
All right. From my perspective, the effect of -O3 is s
At 07:27 PM 12/14/2006, Alexander Staubo wrote:
Sorry, I neglected to include the pertinent graph:
http://purefiction.net/paste/pgbench2.pdf
In fact, your graph suggests that using arch specific options in
addition to -O3 actually =hurts= performance.
...that seems unexpected...
Ron
At 04:54 AM 12/15/2006, Alexander Staubo wrote:
On Dec 15, 2006, at 04:09 , Ron wrote:
At 07:27 PM 12/14/2006, Alexander Staubo wrote:
Sorry, I neglected to include the pertinent graph:
http://purefiction.net/paste/pgbench2.pdf
In fact, your graph suggests that using arch specific options
At 09:23 AM 12/15/2006, Merlin Moncure wrote:
On 12/15/06, Ron <[EMAIL PROTECTED]> wrote:
It seems unusual that code generation options which give access to
more registers would ever result in slower code...
The slower is probably due to the unroll loops switch which can
actually hur
abled.
4= ...and this one looks like a 50/50 shot.
Ron Peacetree
this. Perhaps the best compromise is for the pg
community to make thoughtful suggestions to the glibc community?
Ron Peacetree
At 10:55 AM 12/15/2006, Merlin Moncure wrote:
On 12/15/06, Ron <[EMAIL PROTECTED]> wrote:
There are many instances of x86 compatible code that get
30-40% speedups just because they get access to 16 rather than 8 GPRs
when recompiled for x86-64.
...We benchmarked PostgreSQL internall
At 07:06 PM 12/15/2006, Michael Stone wrote:
On Fri, Dec 15, 2006 at 12:24:46PM -0500, Ron wrote:
ATM, the most we can say is that in a number of systems with modest
physical IO subsystems
So I reran it on a 3.2GHz xeon with 6G RAM off a ramdisk; I/O ain't
the bottleneck on tha
ocumenting pg performance means we know where best
to allocate resources for improving pg. Or where using pg is
(in)appropriate compared to competitors.
Potential performance gains are not the only value of this thread.
Ron Peacetree
At 12:33 PM 12/16/2006, Michael Stone wrote:
On Sat, Dec 16,
finitely have enough spindles
to place pg_xlog somewhere separate from all the other pg tables. In
addition, you should analyze your table access patterns and then
scatter them across your 4 arrays in such a way as to minimize head
contention.
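A hedged sketch of that table-scattering step (tablespaces exist from
8.0; paths and names here are hypothetical):

  CREATE TABLESPACE array2 LOCATION '/mnt/array2/pgdata';
  CREATE TABLESPACE array3 LOCATION '/mnt/array3/pgdata';
  ALTER TABLE orders SET TABLESPACE array2;
  ALTER TABLE order_lines SET TABLESPACE array3;

Tables that are hit together go on different arrays, so concurrent
access does not fight over the same disk heads.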
I'm out of ideas how to impro
There are extensive FAQs on what the above values should be for
pg. The lore is very different for pg 8.x vs pg 7.x
Thank you
You're welcome.
Ron Peacetree
Set it to 2GB and you'll be covered.
I thought that on 32b systems the 2GB shmmax limit had been raised to 4GB,
and that there is essentially no limit to shmmax on 64b systems?
What are Oracle and EnterpriseDB recommending for shmmax these days?
My random thoughts,
Ron Peace
I strongly encourage anyone who is interested in the general external
sorting problem peruse Jim Gray's site:
http://research.microsoft.com/barc/SortBenchmark/
Ron Peacetree
At 08:24 AM 1/31/2007, Gregory Stark wrote:
Tom Lane <[EMAIL PROTECTED]> writes:
> Alvaro Herrera <
ctice in a way that is easily usable by the pg community.
Cheers,
Ron
ore you buy" if the
ease of use / quality of the SW matters to your overall purchase decision.
Then there are the various CSSW and OSSW packages that contain this
functionality or are dedicated to it. Go find some reputable reviews.
(HEY LURKERS FROM Tweakers.n
on caching and pooling handle things.
If that does not result in enough performance, it's time to initiate
the traditional optimization hunt.
Also, note Josh's deployed HW for systems that can handle 1000+
connections. ...and you can bet the IO subsystems on those
ion exist, I've even made
some of them, but AFAIK this is currently considered one of the
"tough pg problems".
Cheers,
Ron Peacetree
in the
annotated pg conf file: shared_buffers, work_mem, maintenance_work_mem, etc.
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
Cheers,
Ron Peacetree
ably into RAM, do it and buy
yourself the time to figure out the rest of the story w/o impacting
on production performance.
Cheers,
Ron Peacetree
At 11:03 AM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
May I suggest that it is possible that your schema, queries, etc. were
all optimized for pg 7.x running on the old HW?
(explain analyze shows the old system taking ~1/10 the time per row
as well as esti
At 02:43 PM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
...and I still think looking closely at the actual physical layout of
the tables in the SAN is likely to be worth it.
How would I go about doing that?
Alex
Hard for me to give specific advice when I
Profile, benchmark, and only then start allocating dedicated resources.
For instance, I've seen situations where putting pg_xlog on its own
spindles was !not! the right thing to do.
Best Wishes,
Ron Peacetree