It could be the drives, it could be a particular interaction between them and
the drivers or firmware.
Do you know if NCQ is activated for them?
Can you compare a single drive configured as a JBOD through the array
controller against the same drive attached through something else, perhaps the
motherboard's SATA port?
You may also have be
Scott Marlowe schrieb:
On Tue, Dec 9, 2008 at 5:17 AM, Mario Weilguni <[EMAIL PROTECTED]> wrote:
Alan Hodgson schrieb:
Mario Weilguni <[EMAIL PROTECTED]> wrote:
strange values. An individual drive is capable of delivering 91 MB/sec
sequential read performance, and we get values ~102MB/sec out of an 8-drive
RAID5, which seems ridiculously slow.
Just to clarify, I'm not talking about today's random-I/O-bound loads on hard
drives - the ones targeted by the fadvise work. Those aren't CPU bound, and
they will be helped by it.
For sequential scans, this situation is different, since the OS has sufficient
read-ahead prefetching algorithms of its own.
On Tue, Dec 9, 2008 at 8:16 PM, Vincent Predoehl
<[EMAIL PROTECTED]> wrote:
> I have postgresql 8.3.5 installed on MacOS X / Darwin. I remember setting
> shared memory buffer parameters and that solved the initial performance
> problem, but after running several tests, the performance goes way, way down.
I have postgresql 8.3.5 installed on MacOS X / Darwin. I remember
setting shared memory buffer parameters and that solved the initial
performance problem, but after running several tests, the performance
goes way, way down. Restarting the server doesn't seem to help.
I'm using pqxx to acce
> Well, when select count(1) reads pages slower than my disk, it's 16x+ slower
> than my RAM. Until one can demonstrate that the system can even read pages
> in RAM faster than what disks will do next year, it doesn't matter much that
> RAM is faster. It does matter that RAM is faster for sorts,
justin wrote:
Tom Lane wrote:
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
not the road we want to go.
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already is.
Matthew Wakeling <[EMAIL PROTECTED]> writes:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where I work.
Scott Carey <[EMAIL PROTECTED]> writes:
> And as far as I can tell, even after the 8.4 fadvise patch, all I/O is in
> block_size chunks. (hopefully I am wrong)
>...
> In addition to the fadvise patch, postgres needs to merge adjacent I/O's
> into larger ones to reduce the overhead. It only really
Prefetch CPU cost should be rather low in the grand scheme of things, and
prefetching helps performance even for very fast I/O. I would not expect a
large CPU use increase from that sort of patch - there is a lot that is more
expensive to do on a per-block basis.
The
Richard Yen <[EMAIL PROTECTED]> writes:
> I've discovered a peculiarity with using btrim in an index and was
> wondering if anyone has any input.
What PG version is this?
In particular, I'm wondering if it's one of the early 8.2.x releases,
which had some bugs in and around choose_bitmap_and()
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already is.
On Tue, 9 Dec 2008, Robert Haas wrote:
I don't believe the thesis. The gap between disk speeds and memory
speeds may narrow over time, but I doubt it's likely to disappear
altogether any time soon, and certainly not for all users.
I think the "not for all users" is the critical part. In 2 yea
On Tue, 2008-12-09 at 17:38 -0500, Tom Lane wrote:
> Scott Carey <[EMAIL PROTECTED]> writes:
> > Which brings this back around to the point I care the most about:
> > I/O per second will diminish as the most common database performance
> > limiting factor in Postgres 8.4's lifetime, and become almost irrelevant
> > in 8.5's. Becoming more CPU efficient will become very important, and for
> > some, already is.
> Hmm ... I wonder whether this means that the current work on
> parallelizing I/O (the posix_fadvise patch in particular) is a dead
> end. Because what that is basically going to do is expend more CPU
> to improve I/O efficiency. If you believe this thesis then that's
> not the road we want to go.
On Tue, Dec 9, 2008 at 2:56 PM, Richard Yen <[EMAIL PROTECTED]> wrote:
> In practice, the difference is 300+ seconds when $LASTNAME == 5 chars and <1
> second when $LASTNAME != 5 chars.
>
> Would anyone know what's going on here? Is there something about the way
> btrim works, or perhaps with the
Scott Carey <[EMAIL PROTECTED]> writes:
> Which brings this back around to the point I care the most about:
> I/O per second will diminish as the most common database performance limiting
> factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
> Becoming more CPU efficient will become very important, and for some, already is.
Hi,
I've discovered a peculiarity with using btrim in an index and was
wondering if anyone has any input.
My table is like this:
 Table "public.m_object_paper"
 Column | Type | Modifiers
--------+------+-----------
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already is.
On Tue, 2008-12-09 at 15:07 -0500, Merlin Moncure wrote:
> On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> > Hard drives work, they're cheap and fast. I can get 25 spindles, 15k in a
> > 3U with controller and battery backed cache for <$10k.
>
> While I agree with your general sentiments about early adoption, etc
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Hard drives work, they're cheap and fast. I can get 25 spindles, 15k in a
> 3U with controller and battery backed cache for <$10k.
While I agree with your general sentiments about early adoption, etc
(the intel ssd products
On Tue, 2008-12-09 at 11:08 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 11:01 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> > Let me re-phrase this.
> >
> > For today, at 200GB or less of required space, and 500GB or less next year.
> >
> > "Where we're going, we don't NEED spindles."
>
>
On Tue, Dec 9, 2008 at 11:01 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Let me re-phrase this.
>
> For today, at 200GB or less of required space, and 500GB or less next year.
>
> "Where we're going, we don't NEED spindles."
Those intel SSDs sound compelling. I've been waiting for SSDs to get
co
Let me re-phrase this.
For today, at 200GB or less of required space, and 500GB or less next year.
"Where we're going, we don't NEED spindles."
Seriously, go down to the store and get 6 X25-M's, they're as cheap as $550
each and will be sub $500 soon. These are more than sufficient for all bu
On Tue, Dec 9, 2008 at 10:35 AM, Matthew Wakeling <[EMAIL PROTECTED]> wrote:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>>
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where I work.
On Tue, 9 Dec 2008, Scott Marlowe wrote:
I wonder how many hard drives it would take to be CPU bound on random
access patterns? About 40 to 60? And probably 15k / SAS drives to
boot. Cause that's what we're looking at in the next few years where
I work.
There's a problem with that thinking.
> Lucky you, having needs that are fulfilled by sequential reads. :)
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? About 40 to 60? And probably 15k / SAS drives to
> boot. Cause that's what we're looking at in the next few years where
> I work.
Abo
On Tue, 2008-12-09 at 10:21 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 9:37 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Lucky you, having needs that are fulfilled by sequential reads. :)
>
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? About 40 to 60? And probably 15k / SAS drives to boot.
On Tue, Dec 9, 2008 at 9:37 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> As for tipping points and pg_bench -- It doesn't seem to reflect the kind of
> workload we use postgres for at all, though my workload does a lot of big
> hashes and seqscans, and I'm curious how much improved those may be
On Tue, 2008-12-09 at 09:25 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 9:03 AM, Gabriele Turchi
> <[EMAIL PROTECTED]> wrote:
> > We reached a fairly good performance on a P400 controller (8 SATA 146GB 2,5"
> > 10k rpm) with raid5 or raid6 Linux software raid: the writing bandwidth
> > reached about 140 MB/s sustained throughput (the hardware raid5 gave a
> > sustained 20 MB/s...).
On Tue, 2008-12-09 at 18:27 +0200, Peter Eisentraut wrote:
> Aidan Van Dyk wrote:
> > * Joshua D. Drake <[EMAIL PROTECTED]> [081209 11:01]:
> >
> >> Yes the SmartArray series is quite common and actually known to perform
> >> reasonably well, in RAID 10. You still appear to be trying RAID 5.
> >
>
> From: [EMAIL PROTECTED] On Behalf Of Jean-David Beyer [EMAIL PROTECTED]
> Sent: Tuesday, December 09, 2008 5:08 AM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Need help with 8.4 Performance Testing
> -----BEGIN PGP SIGNED MESSAGE-----
* Peter Eisentraut <[EMAIL PROTECTED]> [081209 11:28]:
> What do you suggest when there is not enough room for a RAID 10?
More disks ;-)
But if you've given up on performance and reliability in favour of
cheaper storage, I guess raid5 is ok. But then I'm not sure what the
point of asking about
Aidan Van Dyk wrote:
* Joshua D. Drake <[EMAIL PROTECTED]> [081209 11:01]:
Yes the SmartArray series is quite common and actually known to perform
reasonably well, in RAID 10. You still appear to be trying RAID 5.
*boggle*
Are people *still* using raid5?
/me gives up!
What do you suggest when there is not enough room for a RAID 10?
On Tue, Dec 9, 2008 at 9:03 AM, Gabriele Turchi
<[EMAIL PROTECTED]> wrote:
> We reached a fairly good performance on a P400 controller (8 SATA 146GB 2,5"
> 10k rpm) with raid5 or raid6 Linux software raid: the writing bandwidth
> reached about 140 MB/s sustained throughput (the hardware raid5 gave a
> sustained 20 MB/s...).
* Joshua D. Drake <[EMAIL PROTECTED]> [081209 11:01]:
> Yes the SmartArray series is quite common and actually known to perform
> reasonably well, in RAID 10. You still appear to be trying RAID 5.
*boggle*
Are people *still* using raid5?
/me gives up!
--
Aidan Van Dyk
We reached fairly good performance on a P400 controller (8 SATA 146GB
2,5" 10k rpm) with raid5 or raid6 Linux software raid: the writing
bandwidth reached about 140 MB/s sustained throughput (the hardware
raid5 gave a sustained 20 MB/s...). With a second, equal controller (16
disks) we reached
On Tue, 2008-12-09 at 13:10 +0100, Mario Weilguni wrote:
> Scott Marlowe schrieb:
> > On Tue, Dec 2, 2008 at 2:22 AM, Mario Weilguni <[EMAIL PROTECTED]> wrote:
> I still think we must be doing something wrong here, I googled the
> controller and Linux, and did not find anything indicating a problem
On Tue, Dec 9, 2008 at 5:17 AM, Mario Weilguni <[EMAIL PROTECTED]> wrote:
> Alan Hodgson schrieb:
>>
>> Mario Weilguni <[EMAIL PROTECTED]> wrote:
>>
strange values. An individual drive is capable of delivering 91 MB/sec
sequential read performance, and we get values ~102MB/sec out of an 8-drive
RAID5, which seems ridiculously slow.
On Sun, Dec 7, 2008 at 7:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
>
> Also, the following patches currently still have bugs, but when the bugs are
> fixed I'll be looking for performance testers, so please either watch the
> wiki or watch this space:
>...
> -- posix_fadvise (Gregory Stark)
Eh
Scott Marlowe wrote:
> On Sun, Dec 7, 2008 at 10:59 PM, M. Edward (Ed) Borasky
> <[EMAIL PROTECTED]> wrote:
>> Ah, but shouldn't a PostgreSQL (or any other database, for that matter)
>> have its own set of filesystems tuned to the application's I/O patterns?
>> Sure, there are some people who need
On Tuesday 09 December 2008 13:08:14 Jean-David Beyer wrote:
>
> and even if they can, I do not know if postgres uses that ability. I doubt
> it, since I believe (at least in Linux) a process can do that only if run
> as root, which I imagine few (if any) users do.
Disclaimer: I'm not a system pr
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Greg Smith wrote:
| On Mon, 8 Dec 2008, Merlin Moncure wrote:
|
|> I wonder if shared_buffers has any effect on how far you can go before
|> you hit the 'tipping point'.
|
| If your operating system has any reasonable caching itself, not so much at
|
Greg Smith <[EMAIL PROTECTED]> writes:
> On Mon, 8 Dec 2008, Merlin Moncure wrote:
>
>> I wonder if shared_buffers has any effect on how far you can go before
>> you hit the 'tipping point'.
>
> If your operating system has any reasonable caching itself, not so much at
> first. As long as the in
Alan Hodgson schrieb:
Mario Weilguni <[EMAIL PROTECTED]> wrote:
strange values. An individual drive is capable of delivering 91 MB/sec
sequential read performance, and we get values ~102MB/sec out of an 8-drive
RAID5, which seems ridiculously slow.
What command are you
Kevin Grittner schrieb:
Mario Weilguni <[EMAIL PROTECTED]> wrote:
Has anyone benchmarked this controller (PCIe/4x, 512 MB BBC)? We are trying to
use it with 8x SATA 1TB drives in RAID-5 mode under Linux, and we measure
strange values. An individual drive is capable of delivering 91 MB/sec
sequential read performance, and we get values ~102MB/sec out of an 8-drive
RAID5, which seems ridiculously slow.
Scott Marlowe schrieb:
On Tue, Dec 2, 2008 at 2:22 AM, Mario Weilguni <[EMAIL PROTECTED]> wrote:
Has anyone benchmarked this controller (PCIe/4x, 512 MB BBC)? We are trying to
use it with 8x SATA 1TB drives in RAID-5 mode under Linux, and we measure
strange values. An individual drive is capable of delivering 91 MB/sec
sequential read performance, and we get values ~102MB/sec out of an 8-drive
RAID5, which seems ridiculously slow.