If the I/O scheduler is cfq, change it to deadline or noop. That can make a huge difference.
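Not part of the original post - just a minimal sketch of checking and switching the scheduler via sysfs on a 2.6-style kernel. The device name sda is a placeholder for whatever disk holds your data directory:

    # Check and change the Linux I/O scheduler through sysfs.
    # "sda" is a placeholder device name; writing requires root.
    path = "/sys/block/sda/queue/scheduler"

    with open(path) as f:
        print("current:", f.read().strip())   # e.g. "noop [cfq] deadline"

    with open(path, "w") as f:
        f.write("deadline")

Note this only changes the running kernel; to make it stick across reboots you'd set it again at boot or pass the elevator= kernel parameter.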
Given our workload we need the random performance much more than sequential, but I can see the opposite being true in a warehouse workload.
By the way, the tool I wrote is here: http://pgfoundry.org/projects/pgiosim/
The pgiosim project on pgfoundry sort of simulates a PG index scan, which is probably what you'll want to focus on more than sequential read speed.
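For the curious, here's a rough sketch of the idea (not pgiosim itself): random 8 kB page reads scattered over a big file, which is roughly the I/O pattern an index scan generates. The file path and read count are placeholders:

    import os, random, time

    PAGE = 8192                          # PostgreSQL page size
    path = "/data/testfile"              # placeholder: make it much larger than RAM
    pages = os.path.getsize(path) // PAGE

    fd = os.open(path, os.O_RDONLY)
    start = time.time()
    n = 10000
    for _ in range(n):
        os.lseek(fd, random.randrange(pages) * PAGE, os.SEEK_SET)
        os.read(fd, PAGE)
    os.close(fd)
    print("%.0f random reads/sec" % (n / (time.time() - start)))

If the file fits in RAM you're mostly measuring cache hits, which is why it needs to be well past physical memory.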
You'd want to have data in the cache while seeing how long it takes for the battery to drain :)
If you look in the pgsql-performance archives a week or two ago you'll see a similar thread to this one - in fact, it is also about a DL385 (but he had a 5i controller rather than the 6i).
So I'm going to go with hardware RAID 5, which went against what I thought going in - read performance is more important for my usage than write. I'm still not sure about that software RAID 10 read number; something is not right there...
That is what is kicking the performance down the tube. If it were PG's fault it wouldn't be stuck uninterruptible.
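A quick way to see that state (a sketch, not from the post - the PID is a placeholder):

    # Check whether a process is in uninterruptible sleep ("D"), which points
    # at the kernel/IO layer rather than PostgreSQL itself.
    pid = 12345                          # placeholder backend PID
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("State:"):
                print(line.strip())      # e.g. "State:  D (disk sleep)"
                break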
On May 3, 2006, at 10:16 AM, Vivek Khera wrote:
On May 3, 2006, at 9:19 AM, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to figure out the size you want (2x RAM) - the original bonnie is limited to 2GB.
but you have to be careful building bonnie++ since ...
300.003 rows=1221391 loops=1)"
"Total runtime: 4472646.988 ms"
Have you been vacuuming or running autovacuum?
If you keep running queries like this you're certainly going to have a ton of dead tuples, which would definitely explain these times too.
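A sketch of how you might check (psycopg2 and the table name are assumptions here, not from the thread): VACUUM VERBOSE reports the dead row versions it finds, and psycopg2 collects those messages in connection.notices:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    conn.autocommit = True                 # VACUUM cannot run inside a transaction
    cur = conn.cursor()
    cur.execute("VACUUM VERBOSE mytable")  # "mytable" is a placeholder
    for notice in conn.notices:
        print(notice.strip())
    conn.close()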
Another thing that may be a factor is the network - when doing EXPLAIN ANALYZE it doesn't have to transfer the dataset to the client.
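To get a feel for how much that transfer costs, here's a sketch (psycopg2, the connection string, and the query are all placeholders) comparing the server-side time EXPLAIN ANALYZE reports with the wall-clock time of actually pulling the rows to the client:

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=mydb host=remotehost")
    cur = conn.cursor()
    query = "SELECT * FROM mytable"

    cur.execute("EXPLAIN ANALYZE " + query)
    for (line,) in cur.fetchall():
        print(line)                      # ends with "Total runtime: ... ms"

    start = time.time()
    cur.execute(query)
    rows = cur.fetchall()                # now the rows really cross the network
    print("client side: %.0f ms for %d rows"
          % ((time.time() - start) * 1000, len(rows)))
    conn.close()

The gap between the two numbers is roughly what the network and client-side processing add.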
A high load average (loadavg) with low CPU usage means you are IO bound. Either change some queries around to generate less IO, or add more disks.
I could also get you access to this machine, but be warned: gprof on Tiger is pretty useless from what I've seen.
Thread model: posix
gcc version 4.0.0 (Apple Computer, Inc. build 5026)
The snapshot on ftp.postgresql.org (dated 8/29) also runs with no optimization; no CFLAGS are set. Do you need to see anything from config.log?
That gets you counts, but no timing data. Instead you can do something even better - compile PG normally and attach to it with Shark (comes with the CHUD tools) and check out its profile. Quite slick actually :)
I'll keep people updated on my progress, but I just wanted to get these issues out in the open.
Does effective_cache_size have any real effect? It doesn't allocate anything - it is a hint to the planner about how much data it can assume is cached.
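Since it's only a planner input you can play with it per session and watch plans change without anything being allocated. A sketch (psycopg2, the table, and the values are placeholders; older releases take the value as a count of 8 kB pages):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    for pages in (1000, 1000000):
        cur.execute("SET effective_cache_size = %d" % pages)
        cur.execute("EXPLAIN SELECT * FROM mytable WHERE id < 1000")
        print(pages, "->", cur.fetchone()[0])   # the chosen plan may differ
    conn.close()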
I'd look into some more hardware - see if you can borrow any or fabricate a "poor man's" equivalent for testing.
...improvement thereafter. Do you have a recommendation for a value?
There's been a thread on -hackers recently about checkpoint issues; in a nutshell, there isn't much to do. But I'd say give Bizgres a try if you're going to be continually loading huge amounts of data.
(On whether the statements are really prepared:) That depends on what version you are using. Older versions did what Tom mentioned rather than sending PREPARE & EXECUTE; I'm not sure which version that changed in.
If you set that sort_mem value in just one session, then only that session will use the looney sort_mem setting. It would be interesting to know if your machine is swapping.
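A sketch of the per-session behaviour (psycopg2 is an assumption, the value is a placeholder, and on 8.0+ the parameter is called work_mem):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    cur.execute("SHOW sort_mem")
    print("server default:", cur.fetchone()[0])

    cur.execute("SET sort_mem = 65536")  # in kB; affects only this session
    cur.execute("SHOW sort_mem")
    print("this session:", cur.fetchone()[0])
    conn.close()

Other backends keep the server-wide default; the risk is only when many sessions all crank it up at once.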