On Mon, 28 Nov 2005, Brendan Duddridge wrote:
Forgive my ignorance, but what is MPP? Is that part of Bizgres? Is it
possible to upgrade from Postgres 8.1 to Bizgres?
MPP is the Greenplum proprietary extension to Postgres that spreads the
data over multiple machines, (raid, but with entire mac
On Mon, 28 Nov 2005, Brendan Duddridge wrote:
Hi David,
Thanks for your reply. So how is that different than something like Slony2 or
pgcluster with multi-master replication? Is it similar technology? We're
currently looking for a good clustering solution that will work on our Apple
Xserves
Brendan Duddridge wrote:
Thanks for your reply. So how is that different than something like
Slony2 or pgcluster with multi-master replication? Is it similar
technology? We're currently looking for a good clustering solution
that will work on our Apple Xserves and Xserve RAIDs.
I think yo
Hi David,
Thanks for your reply. So how is that different than something like
Slony2 or pgcluster with multi-master replication? Is it similar
technology? We're currently looking for a good clustering solution
that will work on our Apple Xserves and Xserve RAIDs.
Thanks,
Forgive my ignorance, but what is MPP? Is that part of Bizgres? Is it
possible to upgrade from Postgres 8.1 to Bizgres?
Thanks,
Brendan Duddridge | CTO | 403-277-5591 x24 | [EMAIL PROTECTED]
ClickSpace Interactive Inc.
Suit
Mark,
On 11/28/05 1:45 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
>>> 8.0 : 32 s
>>> 8.1 : 25 s
A 22% reduction.
select count(1) on 12,900MB = 1617125 pages fully cached:
MPP based on 8.0 : 6.06s
MPP based on 8.1 : 4.45s
A 26% reduction.
I'll take it!
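The quoted reductions can be sanity-checked with a couple of lines of arithmetic (the timings are taken from the posts above; note that 6.06s -> 4.45s actually works out to ~27%, slightly over the quoted 26%):

```python
# Sanity-check the percentage reductions quoted in the thread
# (stock Postgres 8.0 -> 8.1, and MPP built on each).

def reduction(old_s: float, new_s: float) -> float:
    """Percent reduction in elapsed time going from old_s to new_s."""
    return (old_s - new_s) / old_s * 100

stock = reduction(32, 25)      # select count(1), 181000-page table, uncached
mpp = reduction(6.06, 4.45)    # select count(1), 12,900MB fully cached

print(f"stock 8.0 -> 8.1: {stock:.1f}% reduction")  # 21.9%, quoted as 22%
print(f"MPP   8.0 -> 8.1: {mpp:.1f}% reduction")    # 26.6%, quoted as 26%
```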
I am looking to back-port Tom's pre
Merlin Moncure wrote:
It certainly makes quite a difference as I measure it:
doing select count(1) from a 181000 page table (completely uncached) on my
PIII:
8.0 : 32 s
8.1 : 25 s
Note that the 'fastcount()' function takes 21 s in both cases - so all
the improvement seems to be from the count ove
>
> It certainly makes quite a difference as I measure it:
>
> doing select count(1) from a 181000 page table (completely uncached) on my
PIII:
>
> 8.0 : 32 s
> 8.1 : 25 s
>
> Note that the 'fastcount()' function takes 21 s in both cases - so all
> the improvement seems to be from the count overhead
The MPP test I ran was with the release version 2.0 of MPP which is based on
Postgres 8.0, the upcoming 2.1 release is based on 8.1, and 8.1 is far
faster at seq scan + agg. 12,937MB were counted in 4.5 seconds, or 2890MB/s
from I/O cache. That's 722MB/s per host, and 360MB/s per Postgres instanc
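The per-host and per-instance figures quoted above imply a 4-host layout with 2 Postgres instances per host (the layout is an inference from the arithmetic, not stated outright). A quick check:

```python
# Back-of-envelope check of the quoted MPP scan rates.
# Assumes 4 hosts with 2 Postgres instances each -- inferred from
# the quoted 722 MB/s per host and 360 MB/s per instance.

total_mb = 12937
seconds = 4.5
hosts = 4
instances_per_host = 2

aggregate = total_mb / seconds                 # ~2875 MB/s (quoted ~2890 MB/s)
per_host = aggregate / hosts                   # ~719 MB/s (quoted 722 MB/s)
per_instance = per_host / instances_per_host   # ~359 MB/s (quoted 360 MB/s)

print(f"{aggregate:.0f} MB/s total, {per_host:.0f} MB/s per host, "
      f"{per_instance:.0f} MB/s per instance")
```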
On Sun, 27 Nov 2005, Luke Lonergan wrote:
> Stephan,
>
> On 11/27/05 7:48 AM, "Stephan Szabo" <[EMAIL PROTECTED]> wrote:
>
> > On Sun, 27 Nov 2005, Luke Lonergan wrote:
> >
> >> Has anyone done the math on the original post? 5TB takes how long to
> >> scan once? If you want to wait less than a
At 02:11 PM 11/27/2005, Luke Lonergan wrote:
Ron,
On 11/27/05 9:10 AM, "Ron" <[EMAIL PROTECTED]> wrote:
> Clever use of RAM can get a 5TB sequential scan down to ~17mins.
>
> Yes, it's a lot of data. But sequential scan times should be in the
> mins or low single digit hours, not days. Partic
Stephan,
On 11/27/05 7:48 AM, "Stephan Szabo" <[EMAIL PROTECTED]> wrote:
> On Sun, 27 Nov 2005, Luke Lonergan wrote:
>
>> Has anyone done the math on the original post? 5TB takes how long to
>> scan once? If you want to wait less than a couple of days just for a
>> seq scan, you'd better be in
Ron,
On 11/27/05 9:10 AM, "Ron" <[EMAIL PROTECTED]> wrote:
> Clever use of RAM can get a 5TB sequential scan down to ~17mins.
>
> Yes, it's a lot of data. But sequential scan times should be in the
> mins or low single digit hours, not days. Particularly if you use
> RAM to maximum advantage.
At 01:18 AM 11/27/2005, Luke Lonergan wrote:
For data warehousing it's pretty well open and shut. To use all cpus
and io channels on each query you will need MPP.
Has anyone done the math on the original post? 5TB takes how long
to scan once? If you want to wait less than a couple of days ju
On Sun, 27 Nov 2005, Luke Lonergan wrote:
> Has anyone done the math on the original post? 5TB takes how long to
> scan once? If you want to wait less than a couple of days just for a
> seq scan, you'd better be in the multi-gb per second range.
Err, I get about 31 megabytes/second to do 5TB in
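The scan-rate arithmetic in this exchange is easy to reproduce (the 2-day window is assumed from "less than a couple of days" upthread; using binary-prefix MB gives ~30 MB/s, close to the "about 31" in the post):

```python
# How fast must a sequential scan run to cover 5TB in a given time?
# 2-day window assumed from "wait less than a couple of days" upthread.

MB_PER_TB = 1024 ** 2  # binary-prefix megabytes per terabyte

def required_mb_per_s(terabytes: float, hours: float) -> float:
    """Sustained scan rate needed to read `terabytes` in `hours`."""
    return terabytes * MB_PER_TB / (hours * 3600)

print(f"5TB in 2 days: {required_mb_per_s(5, 48):.1f} MB/s")  # ~30 MB/s
print(f"5TB in 1 hour: {required_mb_per_s(5, 1):.0f} MB/s")   # ~1.4 GB/s
```

This is the basis of the "multi-gb per second" claim: waiting only minutes rather than days for one pass over 5TB pushes the required rate past 1 GB/s.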
IL PROTECTED]>
CC: pgsql-performance@postgresql.org
Sent: Sat Nov 26 14:34:14 2005
Subject: Re: [PERFORM] Hardware/OS recommendations for large databases (
On Sun, 27 Nov 2005, Luke Lonergan wrote:
> For data warehousing it's pretty well open and shut. To use all cpus and
> io channel
On Sun, 27 Nov 2005, Luke Lonergan wrote:
For data warehousing it's pretty well open and shut. To use all cpus and
io channels on each query you will need MPP.
Has anyone done the math on the original post? 5TB takes how long to
scan once? If you want to wait less than a couple of days just
n the multi-gb per second range.
- Luke
--
Sent from my BlackBerry Wireless Device
-Original Message-
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: pgsql-performance@postgresql.org
Sent: Sat Nov 26 13:51:18 2005
Subject: Re: [PERFORM] Hardware/OS recommen
Another thought - I priced out a maxed out machine with 16 cores and
128GB of RAM and 1.5TB of usable disk - $71,000.
You could instead buy 8 machines that total 16 cores, 128GB RAM and 28TB
of disk for $48,000, and it would be 16 times faster in scan rate, which
is the most important factor for
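Taking the post's own numbers at face value (the 16x scan-rate figure is the author's claim, not something derivable here), the price/performance gap is striking:

```python
# Price-per-scan-rate sketch for the two configurations above.
# The 16x relative scan rate is quoted from the post, not computed.

big_iron = {"price": 71_000, "scan_rate_rel": 1, "disk_tb": 1.5}
cluster = {"price": 48_000, "scan_rate_rel": 16, "disk_tb": 28}

def dollars_per_scan_unit(cfg: dict) -> float:
    """Cost per unit of relative scan rate."""
    return cfg["price"] / cfg["scan_rate_rel"]

print(f"big iron: ${dollars_per_scan_unit(big_iron):,.0f} per scan unit")
print(f"cluster:  ${dollars_per_scan_unit(cluster):,.0f} per scan unit")

ratio = dollars_per_scan_unit(big_iron) / dollars_per_scan_unit(cluster)
print(f"cluster advantage: {ratio:.1f}x")  # ~23.7x on price/scan-rate
```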
Tom Lane wrote:
Greg Stark <[EMAIL PROTECTED]> writes:
Last I heard the reason count(*) was so expensive was because its state
variable was a bigint. That means it doesn't fit in a Datum and has to be
alloced and stored as a pointer. And because of the Aggregate API that means
it has to be allo
The same 12.9GB distributed across 4 machines using Bizgres MPP fits into
I/O cache. The interesting result is that the query "select count(1)" is
limited in speed to 280 MB/s per CPU when run on the lineitem table. So
when I run it spread over 4 machines, one CPU per machine I get this:
===
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Last I heard the reason count(*) was so expensive was because its state
> > variable was a bigint. That means it doesn't fit in a Datum and has to be
> > alloced and stored as a pointer. And because of the Aggregate
Greg Stark <[EMAIL PROTECTED]> writes:
> Last I heard the reason count(*) was so expensive was because its state
> variable was a bigint. That means it doesn't fit in a Datum and has to be
> alloced and stored as a pointer. And because of the Aggregate API that means
> it has to be allocated and fr
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Yeah - it's pretty clear that the count aggregate is fairly expensive wrt cpu
> -
> However, I am not sure if all agg nodes suffer this way (guess we could try a
> trivial aggregate that does nothing for all tuples bar the last and just
> reports the fi
Luke Lonergan wrote:
Mark,
Time: 197870.105 ms
So 198 seconds is the uncached read time with count (Just for clarity,
did you clear the Pg and filesystem caches or unmount / remount the
filesystem?)
Nope - the longer time is due to the "second write" known issue with
Postgres - it writes
Luke Lonergan wrote:
That says it's something else in the path. As you probably know there is a
page lock taken, a copy of the tuple from the page, lock removed, count
incremented for every iteration of the agg node on a count(*). Is the same
true of a count(1)?
Sorry Luke - message 3 - I s
Mark,
>> Time: 197870.105 ms
>
> So 198 seconds is the uncached read time with count (Just for clarity,
> did you clear the Pg and filesystem caches or unmount / remount the
> filesystem?)
Nope - the longer time is due to the "second write" known issue with
Postgres - it writes the data to the t
Luke Lonergan wrote:
Mark,
It would be nice to put some tracers into the executor and see where the
time is going. I'm also curious about the impact of the new 8.1 virtual
tuples in reducing the executor overhead. In this case my bet's on the agg
node itself, what do you think?
Yeah - it's
Luke Lonergan wrote:
12.9GB of DBT-3 data from the lineitem table
llonergan=# select relpages from pg_class where relname='lineitem';
relpage
Mark,
See the results below and analysis - the pure HeapScan gets 94.1% of the max
available read bandwidth (cool!). Nothing wrong with heapscan in the
presence of large readahead, which is good news.
That says it's something else in the path. As you probably know there is a
page lock taken, a
Luke Lonergan wrote:
Mark,
This is an excellent idea – unfortunately I’m in Maui right now
(Mahalo!) and I’m not getting to testing with this. My first try was
with 8.0.3 and it’s an 8.1 function I presume.
Not to be lazy – but any hint as to how to do the same thing for 8.0?
Yeah, it's
Title: Re: [PERFORM] Hardware/OS recommendations for large databases (
Mark,
This is an excellent idea – unfortunately I’m in Maui right now (Mahalo!) and I’m not getting to testing with this. My first try was with 8.0.3 and it’s an 8.1 function I presume.
Not to be lazy – but any hint as to
Alan,
On 11/23/05 2:00 PM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> Luke Lonergan wrote:
>> Why not contribute something - put up proof of your stated 8KB versus
>> 32KB page size improvement.
>
> I did observe that 32KB block sizes were a significant win "for our
> usage patterns". It might
Luke Lonergan wrote:
Why not contribute something - put up proof of your stated 8KB versus
32KB page size improvement.
I did observe that 32KB block sizes were a significant win "for our
usage patterns". It might be a win for any of the following reasons:
0) The preliminaries: ~300GB dat
Alan,
Why not contribute something - put up proof of your stated 8KB versus 32KB page size improvement.
- Luke
Bruce,
On 11/22/05 4:13 PM, "Bruce Momjian" wrote:
> Perfect summary. We have a background writer now. Ideally we would
> have a background reader, that reads-ahead blocks into the buffer cache.
> The problem is that while there is a relatively long time between a
> buffer being dirtied and th
Alan Stange wrote:
> Bruce Momjian wrote:
> > Right now the file system will do read-ahead for a heap scan (but not an
> > index scan), but even then, there is time required to get that kernel
> > block into the PostgreSQL shared buffers, backing up Luke's observation
> > of heavy memcpy() usage.
>
Alan Stange <[EMAIL PROTECTED]> writes:
> For sequential scans, you do have a background reader. It's the kernel. As
> long as you don't issue a seek() between read() calls, the kernel will get the
> hint about sequential IO and begin to perform a read ahead for you. This is
> where the above
Bruce Momjian wrote:
Greg Stark wrote:
Alan Stange <[EMAIL PROTECTED]> writes:
The point you're making doesn't match my experience with *any* storage or program
I've ever used, including postgresql. Your point suggests that the storage
system is idle and that postgresql is broken beca
Greg Stark wrote:
>
> Alan Stange <[EMAIL PROTECTED]> writes:
>
> > The point you're making doesn't match my experience with *any* storage or
> > program
> > I've ever used, including postgresql. Your point suggests that the storage
> > system is idle and that postgresql is broken because it is
Luke,
- XFS will probably generate better data rates with larger files. You
really need to use the same file size as does postgresql. Why compare
the speed to reading a 16G file and the speed to reading a 1G file?
They won't be the same. If need be, write some code that does the test
or
Luke Lonergan wrote:
So that leaves the question - why not more than 64% of the I/O scan rate?
And why is it a flat 64% as the I/O subsystem increases in speed from
333-400MB/s?
It might be interesting to see what effect reducing the cpu consumption
entailed by the count aggregation has - b
Alan,
Looks like Postgres gets sensible scan rate scaling as the filesystem speed
increases, as shown below. I'll drop my 120MB/s observation - perhaps CPUs
got faster since I last tested this.
The scaling looks like 64% of the I/O subsystem speed is available to the
executor - so as the I/O sub
On Mon, Nov 21, 2005 at 10:14:29AM -0800, Luke Lonergan wrote:
This has partly been a challenge to get others to post their results.
You'll find that people respond better if you don't play games with
them.
Alan,
Unless noted otherwise all results posted are for block device readahead set
to 16M using "blockdev --setra=16384 ". All are using the
2.6.9-11 Centos 4.1 kernel.
For those who don't have lmdd, here is a comparison of two results on an
ext2 filesystem:
Luke,
it's time to back yourself up with some numbers. You're claiming the
need for a significant rewrite of portions of postgresql and you haven't
done the work to make that case.
You've apparently made some mistakes on the use of dd to benchmark a
storage system. Use lmdd and umount th
Tom,
On 11/21/05 6:56 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> "Luke Lonergan" <[EMAIL PROTECTED]> writes:
>> OK - slower this time:
>
>> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
>> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
>> which
Alan,
On 11/21/05 6:57 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> $ time dd if=/dev/zero of=/fidb1/bigfile bs=8k count=80
> 80+0 records in
> 80+0 records out
>
> real    0m13.780s
> user    0m0.134s
> sys     0m13.510s
>
> Oops. I just wrote 470MB/s to a file system that has
Would it be worth first agreeing on a common set of criteria to
measure? I see many data points going back and forth but not much
agreement on what's worth measuring and how to measure.
I'm not necessarily trying to herd cats, but it sure would be swell to
have the several knowledgeable minds
On Mon, Nov 21, 2005 at 02:01:26PM -0500, Greg Stark wrote:
I also fear that heading in that direction could push Postgres even further
from the niche of software that works fine even on low end hardware into the
realm of software that only works on high end hardware. It's already suffering
a bit
Greg Stark wrote:
> I also fear that heading in that direction could push Postgres even further
> from the niche of software that works fine even on low end hardware into the
> realm of software that only works on high end hardware. It's already suffering
> a bit from that.
What's high end hardwa
Alan Stange <[EMAIL PROTECTED]> writes:
> The point you're making doesn't match my experience with *any* storage or
> program
> I've ever used, including postgresql. Your point suggests that the storage
> system is idle and that postgresql is broken because it isn't able to use the
> resources
"Luke Lonergan" <[EMAIL PROTECTED]> writes:
> OK - slower this time:
> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
> which all are capped at 120MB/s when doing sequential scans with different
> ver
Luke Lonergan wrote:
OK - slower this time:
We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU
machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but
which all are capped at 120MB/s when doing sequential scans with different
versions of Postgres.
Postgresql
Alan,
On 11/19/05 8:43 PM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sdd             343.73    175035.73       277.55    5251072       8326
>
> while doing a select count(1) on the same large table as before.
> Subsequent
Mark Kirkwood wrote:
The test is SELECT 1 FROM table
That should read "The test is SELECT count(1) FROM table"
Alan Stange wrote:
Another data point.
We had some down time on our system today to complete some maintenance
work. It took the opportunity to rebuild the 700GB file system using
XFS instead of Reiser.
One iostat output for 30 seconds is
avg-cpu:  %user   %nice    %sys %iowait   %idle
Greg Stark wrote:
Alan Stange <[EMAIL PROTECTED]> writes:
Iowait is time spent waiting on blocking io calls. As another poster
pointed out, you have a two CPU system, and during your scan, as predicted,
one CPU went 100% busy on the seq scan. During iowait periods, the CPU can
be context s
On Sun, Nov 20, 2005 at 09:22:41AM -0500, Greg Stark wrote:
> I don't think that's true. If the syscall was preemptable then it wouldn't
> show up under "iowait", but rather "idle". The time spent in iowait is time in
> uninterruptable sleeps where no other process can be scheduled.
You are confus
Alan Stange <[EMAIL PROTECTED]> writes:
> > Iowait is time spent waiting on blocking io calls. As another poster
> > pointed out, you have a two CPU system, and during your scan, as predicted,
> > one CPU went 100% busy on the seq scan. During iowait periods, the CPU can
> > be context switched
William Yu wrote:
Alan Stange wrote:
Luke Lonergan wrote:
The "aka iowait" is the problem here - iowait is not idle (otherwise it
would be in the "idle" column).
Iowait is time spent waiting on blocking io calls. As another poster
pointed out, you have a two CPU system, and during your scan,
On Sat, Nov 19, 2005 at 08:13:09AM -0800, Luke Lonergan wrote:
> Iowait is time spent waiting on blocking io calls.
To be picky, iowait is time spent in the idle task while the I/O queue is not
empty. It does not matter if the I/O is blocking or not (from userspace's
point of view), and if the I/
Alan Stange wrote:
Luke Lonergan wrote:
The "aka iowait" is the problem here - iowait is not idle (otherwise it
would be in the "idle" column).
Iowait is time spent waiting on blocking io calls. As another poster
pointed out, you have a two CPU system, and during your scan, as
iowait time i
Mark Kirkwood wrote:
- I am happy that seqscan is cpu bound after ~110M/s (It's cpu bound on
my old P3 system even earlier than that)
Ahem - after reading Alan's postings I am not so sure, ISTM that there
is some more investigation required here too :-).
Another data point.
We had some down time on our system today to complete some maintenance
work. It took the opportunity to rebuild the 700GB file system using
XFS instead of Reiser.
One iostat output for 30 seconds is
avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.58    0.00
Luke Lonergan wrote:
Alan,
On 11/18/05 11:39 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
Yes and no. The one cpu is clearly idle. The second cpu is 40% busy
and 60% idle (aka iowait in the above numbers).
The "aka iowait" is the problem here - iowait is not idle (otherwise it
wo
Luke Lonergan wrote:
Mark,
On 11/18/05 3:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
If you alter this to involve more complex joins (e.g 4. way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
4-way star, same result, that's part of my point.
Alan,
On 11/18/05 11:39 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> Yes and no. The one cpu is clearly idle. The second cpu is 40% busy
> and 60% idle (aka iowait in the above numbers).
The "aka iowait" is the problem here - iowait is not idle (otherwise it
would be in the "idle" column).
Mark,
On 11/18/05 6:27 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
> That too, meaning the business of 1 executor random reading a given
> relation file whilst another is sequentially scanning (some other) part
> of it
I think it should actually improve things - each I/O will read 16MB in
Luke Lonergan wrote:
Mark,
On 11/18/05 3:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
If you alter this to involve more complex joins (e.g 4. way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
I may not have listened to you - are you
Mark,
On 11/18/05 3:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
If you alter this to involve more complex joins (e.g 4. way star) and
(maybe add a small number of concurrent executors too) - is it st
Mark,
On 11/18/05 3:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
> If you alter this to involve more complex joins (e.g 4. way star) and
> (maybe add a small number of concurrent executors too) - is it still the
> case?
4-way star, same result, that's part of my point. With Bizgres MPP, t
Luke Lonergan wrote:
(mass snippage)
time psql -c "select count(*) from ivp.bigtable1" dgtestdb
[EMAIL PROTECTED] IVP]$ cat sysout3
count
--
8000
(1 row)
real    1m9.875s
user    0m0.000s
sys     0m0.004s
[EMAIL PROTECTED] IVP]$ !
Breaking the ~120MBps pg IO ceiling by any means
is an important result. Particularly when you
get a ~2x improvement. I'm curious how far we
can get using simple approaches like this.
At 10:13 AM 11/18/2005, Luke Lonergan wrote:
Dave,
On 11/18/05 5:00 AM, "Dave Cramer" <[EMAIL PROTECTED]>
Luke,
Interesting numbers. I'm a little concerned about the use of blockdev --setra 16384. If I understand this correctly it assumes that the table is contiguous on the disk, does it not?
Dave
On 18-Nov-05, at 10:13 AM, Luke Lonergan wrote:
Dave,
On 11/18/05 5:00 AM, "Dave Cramer" <[EMAIL PROTECTED]>
Alan,
On 11/18/05 8:13 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
I told you in my initial post that I was observing numbers in excess of
what you're claiming, but you seemed to think I didn't know how
Vivek,
On 11/18/05 8:07 AM, "Vivek Khera" <[EMAIL PROTECTED]> wrote:
On Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:
Still, there is a CPU limit here – this is not I/O bound, it is CPU limited as
Greg Stark wrote:
Alan Stange <[EMAIL PROTECTED]> writes:
Luke Lonergan wrote:
Alan,
On 11/18/05 9:31 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
Here's the output from one iteration of iostat -k 60 while the box is
doing a select count(1) on a 238GB table.
avg-cpu: %user
Luke Lonergan wrote:
opterons from Sun that we got some time ago. I think the 130MB/s is
slow given the hardware, but it's acceptable. I'm not too price
sensitive; I care much more about reliability, uptime, etc.
I don't know what the system cost. It was part of block of dual
Then I kno
Greg,
On 11/18/05 11:07 AM, "Greg Stark" <[EMAIL PROTECTED]> wrote:
> That said, 130MB/s is nothing to sneeze at, that's maxing out two high end
> drives and quite respectable for a 3-disk stripe set, even reasonable for a
> 4-disk stripe set. If you're using 5 or more disks in RAID-0 or RAID 1+0
Alan Stange <[EMAIL PROTECTED]> writes:
> Luke Lonergan wrote:
> > Alan,
> >
> > On 11/18/05 9:31 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> >
> >
> >> Here's the output from one iteration of iostat -k 60 while the box is
> >> doing a select count(1) on a 238GB table.
> >>
> >> avg-cpu: %user
Alan,
On 11/18/05 10:30 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
> Actually, this was dual cpu and there was other activity during the full
> minute, but it was on other file devices, which I didn't include in the
> above output. Given that, and given what I see on the box now I'd
> raise t
Luke Lonergan wrote:
Alan,
On 11/18/05 9:31 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
Here's the output from one iteration of iostat -k 60 while the box is
doing a select count(1) on a 238GB table.
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.99    0.00   17.97   32.40
Ok - so I ran the same test on my system and get a total speed of
113MB/sec. Why is this? Why is the system so limited to around just
110MB/sec? I tuned read ahead up a bit, and my results improve a
bit..
Alex
On 11/18/05, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Dave,
>
> On 11/18/05 5:0
Bill,
On 11/18/05 7:55 AM, "Bill McGonigle" <[EMAIL PROTECTED]> wrote:
>
> There is some truth to it. For an app I'm currently running (full-text
> search using tsearch2 on ~100MB of data) on:
Do you mean 100GB? Sounds like you are more like a decision support
/warehousing application.
> Dev
Alex,
On 11/18/05 8:28 AM, "Alex Turner" <[EMAIL PROTECTED]> wrote:
> Ok - so I ran the same test on my system and get a total speed of 113MB/sec.
> Why is this? Why is the system so limited to around just 110MB/sec? I
> tuned read ahead up a bit, and my results improve a bit..
OK! Now we're o
Luke Lonergan wrote:
Alan,
On 11/18/05 8:13 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
I told you in my initial post that I was observing numbers in excess of
what you're claiming, but you seemed to think I didn't know how to measure
an IO rate.
Prove me wrong, post your dat
Richard,
On 11/18/05 5:22 AM, "Richard Huxton" wrote:
Well, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000
system if he's got one going :-)
Finally, a game worth playing!
Except it’s
Richard Huxton wrote:
Dave Cramer wrote:
On 18-Nov-05, at 1:07 AM, Luke Lonergan wrote:
Postgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound after
110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.
A $1,000 system with one CPU and two SATA disks in a software RAID0
Alan,
On 11/18/05 5:41 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
>
> That's interesting, as I occasionally see more than 110MB/s of
> postgresql IO on our system. I'm using a 32KB block size, which has
> been a huge win in performance for our usage patterns. 300GB database
> with a lot of
On Nov 18, 2005, at 08:00, Dave Cramer wrote:
A $1,000 system with one CPU and two SATA disks in a software RAID0 will
perform exactly the same as a $80,000 system with 8 dual core CPUs and the
world's best SCSI RAID hardware on a large database for decision support
(what the poster asked abo
On Nov 18, 2005, at 1:07 AM, Luke Lonergan wrote:
A $1,000 system with one CPU and two SATA disks in a software RAID0 will
perform exactly the same as a $80,000 system with 8 dual core CPUs and the
world's best SCSI RAID hardware on a large database for decision support
(what the poster as
On Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:
Still, there is a CPU limit here – this is not I/O bound, it is CPU limited as evidenced by the sensitivity to readahead settings. If the filesystem could do 1GB/s, you wouldn’t go any faster than 244MB/s.
Yeah, and mysql would probably be faster o
On 18-Nov-05, at 8:30 AM, Luke Lonergan wrote:
Richard,
On 11/18/05 5:22 AM, "Richard Huxton" wrote:
Well, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000 system if he's got one going :-)
Finally, a game worth playing!
Except it’s backward – I’ll show you 80 $1,
Dave,
On 11/18/05 5:00 AM, "Dave Cramer" <[EMAIL PROTECTED]> wrote:
>
> Now there's an interesting line drawn in the sand. I presume you have
> numbers to back this up ?
>
> This sho
While I agree with you in principle that pg becomes CPU bound
relatively easily compared to other DB products (at ~110-120MBps
according to a recent thread), there's a bit of hyperbole in your post.
a. There's a big difference between the worst performing 1C x86 ISA
CPU available and the best
Dave,
On 11/18/05 5:00 AM, "Dave Cramer" <[EMAIL PROTECTED]> wrote:
>
> Now there's an interesting line drawn in the sand. I presume you have
> numbers to back this up ?
>
> This shoul
Luke Lonergan wrote:
Alan,
On 11/18/05 5:41 AM, "Alan Stange" <[EMAIL PROTECTED]> wrote:
That's interesting, as I occasionally see more than 110MB/s of
postgresql IO on our system. I'm using a 32KB block size, which has
been a huge win in performance for our usage patterns. 300GB databas
Dave Cramer wrote:
On 18-Nov-05, at 1:07 AM, Luke Lonergan wrote:
Postgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound after
110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.
A $1,000 system with one CPU and two SATA disks in a software RAID0 will
perform exactly the
On 17-Nov-05, at 2:50 PM, Alex Turner wrote:
Just pick up a SCSI drive and a consumer ATA drive.
Feel their weight.
You don't have to look inside to tell the difference.
At one point stereo manufacturers put weights in the case just to
make them heavier.
The older ones weighed more and the