It's likely that the data is in the filesystem (not database) cache the second
time you run the query. See if the same thing happens when you stop and restart
the postmaster (it likely won't), then do something like this to flush the
filesystem cache (read a big file; can't give you a sample cmd
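The post trails off before giving the command; a minimal sketch of the read-a-big-file approach (the path and sizes below are placeholders, not from the post):

```shell
# Write a file, then stream it back; the kernel evicts older cached pages
# to make room for it. Scale "count" up until the file exceeds your RAM
# (the ~8MB below is only a fast demo size).
dd if=/dev/zero of=/tmp/flushfile bs=8k count=1000
dd if=/tmp/flushfile of=/dev/null bs=8k
rm -f /tmp/flushfile
# On Linux 2.6.16+ the cache can be dropped directly instead (as root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
```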
Joshua,
On 7/22/05 10:11 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
The database server is a PE (Power Edge) 6600
Database Server IO:
[EMAIL PROTECTED] root]# /sbin/hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 1888 MB in 2.00 seconds = 944.00 MB/sec
Timing
Mark,
On 7/22/05 12:47 PM, Mark Wong [EMAIL PROTECTED] wrote:
On a single spindle:
$ time dd if=/dev/zero of=bigfile bs=8k count=200
200+0 records in
200+0 records out
real 2m8.569s
user 0m0.725s
sys  0m19.633s
This is super fast! 124MB/s seems too fast for true
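A figure like 124MB/s falls straight out of bytes over wall-clock time; a sketch with illustrative inputs (the record counts in the quoted dd output were truncated, so these numbers are assumptions):

```shell
# dd throughput = (count * block size) / real time.
# Illustrative inputs: 2,000,000 blocks of 8kB written in 128 seconds.
COUNT=2000000
BS=8192
ELAPSED=128          # the "real" time from dd, in whole seconds
echo "$((COUNT * BS / ELAPSED / 1000000)) MB/s"    # -> 128 MB/s
```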
Vacuum full takes an exclusive lock on the tables it runs against, so if you
have anything else reading the table while you are trying to run it, the
vacuum full will wait, possibly forever, until it can get the lock.
What does the system load look like while you are running this? What does
Husam,
On 7/25/05 4:31 PM, John A Meinel [EMAIL PROTECTED] wrote:
Tomeh, Husam wrote:
Nothing was running except the job. The server did not look stressed out
looking at top and vmstat. We have seen slower query performance when
performing load tests, so I run the re-index on all
Title: Re: [PERFORM] COPY insert performance
Chris,
You can try the Bizgres distribution of Postgres (based on version 8.0.3); its COPY support is 30% faster, as reported by OSDL (without indexes). The gain comes from the very slow parsing within the COPY command, which is sped up using micro-optimized
Yup - interesting and very niche product - it seems like its only obvious
application is for the PostgreSQL WAL problem :-)
The real differentiator is the battery backup part. Otherwise, the
filesystem caching is more effective, so put the RAM on the motherboard.
- Luke
John,
On 7/26/05 9:56 AM, John A Meinel [EMAIL PROTECTED] wrote:
You could insert all of your data into a temporary table, and then do:
INSERT INTO final_table SELECT * FROM temp_table t WHERE NOT EXISTS
(SELECT 1 FROM final_table f WHERE f.id = t.id AND f.path = t.path AND f.y = t.y);
Or you could load it into
Hannu,
On 7/26/05 11:56 AM, Hannu Krosing [EMAIL PROTECTED] wrote:
On T, 2005-07-26 at 11:46 -0700, Luke Lonergan wrote:
Yah - that's a typical approach, and it would be excellent if the COPY
bypassed WAL for the temp table load.
Don't *all* operations on TEMP tables bypass WAL ?
Good
On 7/28/05 2:21 AM, Kari Lavikka [EMAIL PROTECTED] wrote:
There's a new profiling tool called oprofile:
http://oprofile.sourceforge.net/download/
that can be run without instrumenting the binaries beforehand. To actually
find out what the code is doing during these stalls, oprofile can show
Can you post the time dd if=/dev/zero of=bigfile bs=8k count=50
results? Also do the reverse (read the file) with time dd if=bigfile
of=/dev/null bs=8k.
I didn't see this come across before... here ya go:
time dd if=/dev/zero of=bigfile bs=8k count=50
50+0 records in
Mark,
On 7/28/05 4:43 PM, Mark Wong [EMAIL PROTECTED] wrote:
Are there any recommendations for Qlogic controllers on Linux, scsi or
fiber channel? I might be able to get my hands on some. I have pci-x slots
for AMD, Itanium, or POWER5 if the architecture makes a difference.
I don't have a
work_mem = 131072 # min 64, size in KB
shared_buffers = 16000 # min 16, at least max_connections*2, 8KB each
checkpoint_segments = 128 # in logfile segments, min 1, 16MB each
effective_cache_size = 75 # typically 8KB each
fsync=false # turns
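For scale, the settings quoted above translate into memory as follows (8kB pages assumed, matching the comments):

```shell
# Rough sizes implied by the config fragment above (8kB pages):
echo "shared_buffers:      $((16000 * 8 / 1024)) MB"    # 16000 pages -> 125 MB
echo "work_mem:            $((131072 / 1024)) MB"       # 131072 kB   -> 128 MB
echo "checkpoint_segments: $((128 * 16)) MB of WAL"     # 128 x 16MB  -> 2048 MB
```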
Tom,
On 7/29/05 7:12 AM, Tom Lane [EMAIL PROTECTED] wrote:
Luke Lonergan [EMAIL PROTECTED] writes:
I guess we see the real culprit here. Anyone surprised it's the WAL?
You have not proved that at all.
As Alvaro pointed out, fsync has impact on more than WAL, so good point.
Interesting
Alvaro,
On 7/29/05 6:23 AM, Alvaro Herrera [EMAIL PROTECTED] wrote:
On Fri, Jul 29, 2005 at 03:01:07AM -0400, Luke Lonergan wrote:
I guess we see the real culprit here. Anyone surprised it's the WAL?
So what? Are you planning to suggest people to turn fsync=false?
That's
Bruce,
On 7/29/05 5:37 AM, Bruce Momjian pgman@candle.pha.pa.us wrote:
Where is the most recent version of the COPY patch?
My direct e-mails aren't getting to you, they are trapped in a spam filter
on your end, so you didn't get my e-mail with the patch!
I've attached it here, sorry to the
Tom,
On 7/27/05 11:19 PM, Tom Lane [EMAIL PROTECTED] wrote:
Matthew Schumacher [EMAIL PROTECTED] writes:
After playing with various indexes and what not I simply am unable to
make this procedure perform any better. Perhaps someone on the list can
spot the bottleneck and reveal why this
Tom,
On 8/22/05 8:41 AM, Tom Lane [EMAIL PROTECTED] wrote:
MemoryContextSwitchTo and LockBuffer itself takes 15% of the total time of
the query. I was expecting read to be the slowest part (biggest component)
but it was way down in the 0.4% level.
You do know that gprof counts only CPU
Ulrich,
On 9/1/05 6:25 AM, Ulrich Wisser [EMAIL PROTECTED] wrote:
My application basically imports Apache log files into a Postgres
database. Every row in the log file gets imported in one of three (raw
data) tables. My columns are exactly as in the log file. The import is
run approx. every
Are you using the built-in HP SmartArray RAID/SCSI controllers? If so, that
could be your problem, they are known to have terrible and variable
performance with Linux.
The only good fix is to add a simple SCSI controller to your system (HP
sells them) and stay away from hardware RAID.
- Luke
Dan,
On 9/1/05 4:02 PM, Dan Harris [EMAIL PROTECTED] wrote:
Do you have any sources for that information? I am running dual
SmartArray 6402's in my DL585 and haven't noticed anything poor about
their performance.
I've previously posted comprehensive results using the 5i and 6xxx series
Ernst
Am Donnerstag, den 01.09.2005, 21:54 -0700 schrieb Luke Lonergan:
Dan,
On 9/1/05 4:02 PM, Dan Harris [EMAIL PROTECTED] wrote:
Do you have any sources for that information? I am running dual
SmartArray 6402's in my DL585 and haven't noticed anything poor about
their performance.
I've
Even for RAID 5? It uses a bit more CPU for the parity calculations.
I honestly can't speak to RAID 5. I don't (and won't) use it. RAID 5 is
a little brutal when under heavy write load. I use either 1 or 10.
Yes, for RAID5 software RAID is better than HW RAID today - the modern general
Josh,
On 9/29/05 9:54 AM, Josh Berkus josh@agliodbs.com wrote:
Following an index creation, we see that 95% of the time required is the
external sort, which averages 2MB/s. This is with separate drives for
the WAL, the pg_tmp, the table and the index. I've confirmed that
increasing
Jeff,
On 9/29/05 10:44 AM, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:
Looking through tuplesort.c, I have a couple of initial ideas. Are we
allowed to fork here? That would open up the possibility of using the
CPU and the I/O
Ron,
On 9/30/05 1:20 PM, Ron Peacetree [EMAIL PROTECTED] wrote:
That 11MBps was your =bulk load= speed. If just loading a table
is this slow, then there are issues with basic physical IO, not just
IO during sort operations.
Bulk loading speed is irrelevant here - that is dominated by
Nope - it would be disk wait.
COPY is CPU bound on I/O subsystems faster than 50 MB/s on COPY (in) and about
15 MB/s (out).
- Luke
-Original Message-
From: Michael Stone [mailto:[EMAIL PROTECTED]
Sent: Wed Oct 05 09:58:41 2005
To: Martijn van Oosterhout
Cc:
Steinar,
On 10/5/05 5:12 PM, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
What? strlen is definitely not in the kernel, and thus won't count as system
time.
System time on Linux includes time spent in glibc routines.
- Luke
---(end of
Mark,
On 10/7/05 5:17 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Fri, Oct 07, 2005 at 04:55:28PM -0700, Luke Lonergan wrote:
On 10/5/05 5:12 PM, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
What? strlen is definitely not in the kernel, and thus won't count as
system time.
System
Greg,
Increasing memory actually slows down the current sort performance.
We're working on a fix for this now in bizgres.
Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: PostgreSQL [EMAIL
Charlie,
Should I expect results like this? I realize that the
computer is quite low-end and is very IO bound for this
query, but I'm still surprised that the sort operation takes so long.
It's the sort performance of Postgres that's your problem.
Out of curiosity, I setup an Oracle
Title: Re: [PERFORM] Sort performance on large tables
Stephan,
On 11/8/05 9:38 AM, Stephan Szabo [EMAIL PROTECTED] wrote:
Just as we find with a similar comparison (with a popular commercial,
proprietary database :-) Though some might suggest you increase
work_mem or other tuning
Adam,
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Claus Guttesen
Sent: Tuesday, November 15, 2005 12:29 AM
To: Adam Weisberg
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Hardware/OS recommendations for large
databases ( 5TB)
Dave,
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dave Cramer
Sent: Tuesday, November 15, 2005 6:15 AM
To: Luke Lonergan
Cc: Adam Weisberg; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Hardware/OS recommendations for large databases (
Luke,
Merlin,
just FYI: tyan makes an 8-socket motherboard (up to 16 cores!):
http://www.swt.com/vx50.html
It can be loaded with up to 128 gb memory if all the sockets
are filled :).
Cool!
Just remember that you can't get more than 1 CPU working on a query at a
time without a parallel
Merlin,
just FYI: tyan makes an 8-socket motherboard (up to 16 cores!):
http://www.swt.com/vx50.html
It can be loaded with up to 128 gb memory if all the sockets are
filled :).
Another thought - I priced out a maxed out machine with 16 cores and
128GB of RAM and 1.5TB of usable disk -
Because only 1 cpu is used on each query.
- Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: Adam Weisberg [EMAIL PROTECTED]
To: Luke Lonergan [EMAIL PROTECTED]
CC: pgsql-performance@postgresql.org pgsql-performance@postgresql.org
Sent: Tue
Title: Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware
Joost,
On 11/15/05 8:35 AM, Joost Kraaijeveld [EMAIL PROTECTED] wrote:
thousand go relatively fast, after that PostgreSQL crawls to a halt
(other benchmarks like bonnie++ or just dd'ing a big file don't have
this behavior).
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Mike,
On 11/15/05 6:55 AM, Michael Stone [EMAIL PROTECTED] wrote:
On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:
write performance is now up to par with the best cards I believe. We
find that you still
Title: Re: [PERFORM] Hardware/OS recommendations for large databases ( 5TB)
Merlin,
On 11/15/05 7:20 AM, Merlin Moncure [EMAIL PROTECTED] wrote:
It's hard to say what would be better. My gut says the 5u box would be
a lot better at handling high cpu/high concurrency problems...like your
Title: Re: [PERFORM] Performance PG 8.0 on dual opteron / 4GB / 3ware
Joost,
On 11/15/05 11:51 AM, Joost Kraaijeveld [EMAIL PROTECTED] wrote:
On Tue, 2005-11-15 at 12:41 -0700, Steve Wampler wrote:
Joost Kraaijeveld wrote:
If I understand correctly (I have 4GB ram):
[EMAIL
Title: Re: [PERFORM] Hardware/OS recommendations for large databases ( 5TB)
James,
On 11/15/05 11:07 AM, James Mello [EMAIL PROTECTED] wrote:
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Scott,
On 11/16/05 9:09 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
The biggest gain is going from 1 to 2 CPUs (real cpus, like the DC
Opterons or genuine dual CPU mobo, not hyperthreaded). Part of the
issue isn't just raw
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Oops,
Last point should be worded: All CPUs on all machines used by a parallel database
- Luke
On 11/16/05 9:47 AM, Luke Lonergan [EMAIL PROTECTED] wrote:
Scott,
On 11/16/05 9:09 AM, Scott Marlowe [EMAIL PROTECTED
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Dave,
On 11/18/05 5:00 AM, Dave Cramer [EMAIL PROTECTED] wrote:
Now there's an interesting line drawn in the sand. I presume you have
numbers to back this up ?
This should draw some interesting posts.
Part 2: The
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Dave,
On 11/18/05 5:00 AM, Dave Cramer [EMAIL PROTECTED] wrote:
Now there's an interesting line drawn in the sand. I presume you have
numbers to back this up ?
This should draw some interesting posts.
OK, here we
Alan,
On 11/18/05 5:41 AM, Alan Stange [EMAIL PROTECTED] wrote:
That's interesting, as I occasionally see more than 110MB/s of
postgresql IO on our system. I'm using a 32KB block size, which has
been a huge win in performance for our usage patterns. 300GB database
with a lot of
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Richard,
On 11/18/05 5:22 AM, Richard Huxton dev@archonet.com wrote:
Well, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000
system if he's got one going :-)
Finally, a game worth playing!
Except it's backward
Alex,
On 11/18/05 8:28 AM, Alex Turner [EMAIL PROTECTED] wrote:
Ok - so I ran the same test on my system and get a total speed of
113MB/sec.
Why is this? Why is the system so limited to around just
110MB/sec? I
tuned read ahead up a bit, and my results improve a
bit..
OK! Now we're on the
Bill,
On 11/18/05 7:55 AM, Bill McGonigle [EMAIL PROTECTED] wrote:
There is some truth to it. For an app I'm currently running (full-text
search using tsearch2 on ~100MB of data) on:
Do you mean 100GB? Sounds like you are more like a decision support
/warehousing application.
Dev
Alan,
On 11/18/05 10:30 AM, Alan Stange [EMAIL PROTECTED] wrote:
Actually, this was dual cpu and there was other activity during the full
minute, but it was on other file devices, which I didn't include in the
above output. Given that, and given what I see on the box now I'd
raise the 20%
Greg,
On 11/18/05 11:07 AM, Greg Stark [EMAIL PROTECTED] wrote:
That said, 130MB/s is nothing to sneeze at, that's maxing out two high end
drives and quite respectable for a 3-disk stripe set, even reasonable for a
4-disk stripe set. If you're using 5 or more disks in RAID-0 or RAID 1+0 and
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Vivek,
On 11/18/05 8:07 AM, Vivek Khera [EMAIL PROTECTED] wrote:
On Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:
Still, there is a CPU limit here - this is not I/O bound, it is CPU limited as evidenced
Mark,
On 11/18/05 3:46 PM, Mark Kirkwood [EMAIL PROTECTED] wrote:
If you alter this to involve more complex joins (e.g 4. way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
4-way star, same result, that's part of my point. With Bizgres MPP, the
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Mark,
On 11/18/05 3:46 PM, Mark Kirkwood [EMAIL PROTECTED] wrote:
If you alter this to involve more complex joins (e.g 4. way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
I may
Mark,
On 11/18/05 6:27 PM, Mark Kirkwood [EMAIL PROTECTED] wrote:
That too, meaning the business of 1 executor random reading a given
relation file whilst another is sequentially scanning (some other) part
of it
I think it should actually improve things - each I/O will read 16MB into the
Alan,
On 11/18/05 11:39 AM, Alan Stange [EMAIL PROTECTED] wrote:
Yes and no. The one cpu is clearly idle. The second cpu is 40% busy
and 60% idle (aka iowait in the above numbers).
The "aka iowait" is the problem here - iowait is not idle (otherwise it
would be in the idle column).
Iowait
Alan,
On 11/19/05 8:43 PM, Alan Stange [EMAIL PROTECTED] wrote:
Device:    tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdd     343.73    175035.73       277.55    5251072       8326
while doing a select count(1) on the same large table as before.
Subsequent iostat
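Converted to more familiar units, the kB_read/s column in that iostat line works out to:

```shell
# 175035 kB/s from the iostat output above, expressed in MB/s:
KBPS=175035
echo "~$((KBPS / 1024)) MB/s sequential read"    # -> ~170 MB/s
```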
Alan,
On 11/21/05 6:57 AM, Alan Stange [EMAIL PROTECTED] wrote:
$ time dd if=/dev/zero of=/fidb1/bigfile bs=8k count=80
80+0 records in
80+0 records out
real 0m13.780s
user 0m0.134s
sys  0m13.510s
Oops. I just wrote 470MB/s to a file system that has peak write
Tom,
On 11/21/05 6:56 AM, Tom Lane [EMAIL PROTECTED] wrote:
Luke Lonergan [EMAIL PROTECTED] writes:
OK - slower this time:
We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
which all are capped
Alan,
Looks like Postgres gets sensible scan rate scaling as the filesystem speed
increases, as shown below. I'll drop my 120MB/s observation - perhaps CPUs
got faster since I last tested this.
The scaling looks like 64% of the I/O subsystem speed is available to the
executor - so as the I/O
Bruce,
On 11/22/05 4:13 PM, Bruce Momjian pgman@candle.pha.pa.us wrote:
Perfect summary. We have a background writer now. Ideally we would
have a background reader, that reads-ahead blocks into the buffer cache.
The problem is that while there is a relatively long time between a
buffer
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Alan,
Why not contribute something - put up proof of your stated 8KB versus 32KB page size improvement.
- Luke
Alan,
On 11/23/05 2:00 PM, Alan Stange [EMAIL PROTECTED] wrote:
Luke Lonergan wrote:
Why not contribute something - put up proof of your stated 8KB versus
32KB page size improvement.
I did observe that 32KB block sizes were a significant win for our
usage patterns. It might be a win
to do the same thing for 8.0?
- Luke
On 11/21/05 9:10 PM, Mark Kirkwood [EMAIL PROTECTED] wrote:
Luke Lonergan wrote:
So that leaves the question - why not more than 64% of the I/O scan rate?
And why is it a flat 64% as the I/O subsystem increases in speed from
333-400MB/s?
It might
is going. I'm also curious about the impact of the new 8.1 virtual
tuples in reducing the executor overhead. In this case my bet's on the agg
node itself, what do you think?
- Luke
On 11/21/05 9:10 PM, Mark Kirkwood [EMAIL PROTECTED] wrote:
Luke Lonergan wrote:
So that leaves the question
Mark,
Time: 197870.105 ms
So 198 seconds is the uncached read time with count (Just for clarity,
did you clear the Pg and filesystem caches or unmount / remount the
filesystem?)
Nope - the longer time is due to the known second-write issue with
Postgres - it writes the data to the table,
The same 12.9GB distributed across 4 machines using Bizgres MPP fits into
I/O cache. The interesting result is that the query select count(1) is
limited in speed to 280 MB/s per CPU when run on the lineitem table. So
when I run it spread over 4 machines, one CPU per machine I get this:
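Assuming the per-CPU rate scales linearly across the four machines (an assumption for illustration, not a measurement from the post), the aggregate works out to:

```shell
# Aggregate scan rate for 4 machines at 280 MB/s per CPU, one CPU each,
# against the 12.9GB lineitem table (linear scaling assumed):
AGG=$((4 * 280))
echo "${AGG} MB/s aggregate"              # -> 1120 MB/s aggregate
echo "~$((12900 / AGG)) s to scan 12.9GB" # -> ~11 s to scan 12.9GB
```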
Jeff, Qingqing,
On 11/26/05 10:57 AM, Qingqing Zhou [EMAIL PROTECTED] wrote:
Jeff Frost [EMAIL PROTECTED] wrote
Did you folks see this article on Slashdot with a fellow requesting input
on what sort of benchmarks to run to get a good Postgresql vs Mysql
dataset? Perhaps this would be a
For data warehousing it's pretty well open and shut. To use all cpus and io
channels on each query you will need mpp.
Has anyone done the math on the original post? 5TB takes how long to scan
once? If you want to wait less than a couple of days just for a seq scan,
you'd better be in the
And your price is wrong - but if you want free then you'll have to find another
way to get your work done.
- Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: David Lang [EMAIL PROTECTED]
To: Luke Lonergan [EMAIL PROTECTED]
CC: pgsql
Ron,
On 11/27/05 9:10 AM, Ron [EMAIL PROTECTED] wrote:
Clever use of RAM can get a 5TB sequential scan down to ~17mins.
Yes, it's a lot of data. But sequential scan times should be in the
mins or low single digit hours, not days. Particularly if you use
RAM to maximum advantage.
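The 17-minute figure is easy to sanity-check: 5TB divided by the sustained scan rate. A quick sweep over assumed aggregate rates (illustrative numbers, not measurements):

```shell
# Time to sequentially scan 5TB at several assumed aggregate rates:
for MBPS in 50 500 5000; do
  SECS=$((5 * 1000000 / MBPS))       # 5TB = 5,000,000 MB
  echo "${MBPS} MB/s -> $((SECS / 60)) min"
done
```

At roughly 5GB/s aggregate the scan lands near Ron's ~17 minutes; at a single-spindle 50MB/s it takes over a day.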
Stephan,
On 11/27/05 7:48 AM, Stephan Szabo [EMAIL PROTECTED] wrote:
On Sun, 27 Nov 2005, Luke Lonergan wrote:
Has anyone done the math on the original post? 5TB takes how long to
scan once? If you want to wait less than a couple of days just for a
seq scan, you'd better be in the multi
Rick,
On 12/1/05 2:18 PM, Rick Schumeyer [EMAIL PROTECTED] wrote:
As a follow up to my own question:
I reran the COPY both ways (with the index and without) while running iostat.
The following values
are averages:
%user %nice %sys %iowait %idle
no index 39 0
Tom,
That analysis is far too simplistic, because only the WAL
write has to happen before the transaction can commit. The
table and index writes will normally happen at some later
point in the bgwriter, and with any luck there will only need
to be one write per page, not per tuple.
Steve,
When we restore, the postmaster process tries to use 100% of the CPU.
The questions we have are:
1) What is postmaster doing that it needs so much CPU?
Parsing mostly, and attribute conversion from text to DBMS native
formats.
2) How can we get our system to go faster?
Use
Stephen,
On 12/2/05 12:18 PM, Stephen Frost [EMAIL PROTECTED] wrote:
Just a thought, but couldn't psql be made to use the binary mode of
libpq and do at least some of the conversion on the client side? Or
does binary mode not work with copy (that wouldn't surprise me, but
perhaps copy could
Stephen,
On 12/2/05 1:19 PM, Stephen Frost [EMAIL PROTECTED] wrote:
I've used the binary mode stuff before, sure, Postgres may have to
convert some things but I have a hard time believing it'd be more
expensive to do a network_encoding -> host_encoding (or toasting, or
whatever) than to do
Michael,
On 12/2/05 1:46 PM, Michael Stone [EMAIL PROTECTED] wrote:
Not necessarily; you may be betting that it's more *efficient* to do the
parsing on a bunch of lightly loaded clients than your server. Even if
you're using the same code this may be a big win.
If it were possible in light
from one machine
and not from another?
- Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: Mitch Skinner [EMAIL PROTECTED]
To: Luke Lonergan [EMAIL PROTECTED]
CC: Stephen Frost [EMAIL PROTECTED]; David Lang [EMAIL PROTECTED]; Steve
Oualline
Tom,
On 12/3/05 12:32 PM, Tom Lane [EMAIL PROTECTED] wrote:
Luke Lonergan [EMAIL PROTECTED] writes:
Last I looked at the Postgres binary dump format, it was not portable or
efficient enough to suit the need. The efficiency problem with it was that
there was descriptive information attached
Paal,
I'm currently benchmarking several RDBMSs with respect to
analytical query performance on medium-sized multidimensional
data sets. The data set contains 30,000,000 fact rows evenly
distributed in a multidimensional space of 9 hierarchical
dimensions. Each dimension has 8000
Paal,
On 12/12/05 2:10 AM, Pål Stenslet [EMAIL PROTECTED] wrote:
Here are the schema details, but first a little introduction:
Terrific, very helpful and thanks for both.
I wonder why the bitmap scan isn't selected in this query, Tom might have
some opinion and suggestions about it.
I'd
Simon,
Yes, I'd expect something like this right now in 8.1; the
numbers stack up to PostgreSQL doing equivalent join speeds,
but w/o star join.
I do expect a significant improvement from 8.1 using the new bitmap index
because there is no need to scan the full Btree indexes. Also, the
Charles,
Lastly, one thing that I'm not yet finding in trying to
educate myself on SANs is a good overview of what's come out
in the past few years that's more affordable than the old
big-iron stuff. For example I saw some brief info on this
list's archives about the Dell/EMC offerings.
Tom,
On 12/17/05 10:47 AM, Tom Lane [EMAIL PROTECTED] wrote:
BTW, some experimentation suggests that in fact a star join is already
slower than the regular plan in 8.1. You can force a star-join plan
to be generated like this:
Cool!
We've got Paal's test case in the queue to run, it's
Juan,
On 12/18/05 8:35 AM, Juan Casero [EMAIL PROTECTED] wrote:
Can anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc
T1 processor and architecture on Solaris 10? I have a custom built retail
sales reporting that I developed using PostgreSQL 7.4.8 and PHP on a Fedora
Jignesh,
On 12/19/05 11:29 AM, Jignesh Shah [EMAIL PROTECTED] wrote:
I have gone to the max with 4 fibers on Sun Fire T2000. But I am not sure
about the answers that you asked. Let me see if I can get answers for them. I
am going to try to max out the IO on these systems with 8 fibers as soon
Jignesh,
On 12/19/05 12:21 PM, Jignesh Shah [EMAIL PROTECTED] wrote:
I got about 720 MB/sec to 730 MB/sec with plain dd tests on my current
storage configuration (8 LUNS on 4 fibers) which slowed me down (10K rpm 146
GB disks FC) with 4 LUNS going through a longer pass to the disks (via a
Jignesh,
On 12/19/05 12:21 PM, Jignesh Shah [EMAIL PROTECTED] wrote:
extended device statistics
 r/s   w/s  Mr/s  Mw/s  wait actv wsvc_t asvc_t  %w  %b device
 0.8  14.0   0.0   0.0   0.0  0.3    0.0   17.8   0   4 c3t0d0
91.4   0.0  91.4   0.0   0.0  1.0
Charles,
On 12/20/05 9:58 PM, Charles Sprickman [EMAIL PROTECTED] wrote:
You've given me a lot to go on... Now I'm going to have to do some
research as to real-world RAID controller performance. It's vexing (to
say the least) that most vendors don't supply any raw throughput or TPS
stats
What version of postgres?
Copy has been substantially improved in bizgres and also in 8.1.
- Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: pgsql-performance@postgresql.org
Madison,
On 12/21/05 10:58 PM, Madison Kelly [EMAIL PROTECTED] wrote:
Ah, that makes a lot of sense (I read about the 'fsync' issue before,
now that you mention it). I am not too familiar with MySQL but IIRC
MyISAM is their open-source DB and InnoDB is their commercial one, ne?
If so, then I
Madison,
On 12/21/05 11:02 PM, Madison Kelly [EMAIL PROTECTED] wrote:
Currently 7.4 (what comes with Debian Sarge). I have run my program on
8.0 but not since I have added MySQL support. I should run the tests on
the newer versions of both DBs (using v4.1 for MySQL which is also
mature at
Frank,
You definitely DO NOT want to do RAID 5 on a database server. That
is probably the worst setup you could have, I've seen it have lower
performance than just a single hard disk.
I've seen that on RAID0 and RAID10 as well.
This is more about the quality and modernity of the
David,
now hot-swap may not be supported on all interface types, that may be what
you have run into, but with SCSI or SATA you should be able to hot-swap
with the right controller.
That's actually the problem - Linux hot swap is virtually non-functional for
SCSI. You can write into the
Benjamin,
Have you done any benchmarking of the 9550SX against a software raid
configuration?
Interesting - no, not on SATA, mostly because I've had awful luck with Linux
drivers and SATA. The popular manufacturers of SATA to PCI bridge chipsets are
Silicon Image and Highpoint, and
Benjamin,
On 12/26/05 10:21 AM, Benjamin Arai [EMAIL PROTECTED] wrote:
Have you have any experience rebuilding arrays in linux using the 3Ware
utilities? If so, did it work well?
Sure we have - nowadays with disks failing as much as they do how could we
not? ;-)
3Ware has some *nice* tools
Bruce,
On 12/27/05 9:51 AM, Bruce Momjian pgman@candle.pha.pa.us wrote:
Historically, I have heard that RAID5 is only faster than RAID10 if
there are six or more drives.
I think the real question here is faster for what? Also, just like the
optimizer tunables for cpu/disk/memory speed