Michael,
> Well, you don't have to spend *quite* that much to get a decent storage
> array. :)
Yes, I'm just pointing out that it's only the extreme cases which are
clear-cut. Middle cases are a lot harder to define. For example, we've
found that on DBT2 running on a 14-drive JBOD, separating
John A Meinel <[EMAIL PROTECTED]> writes:
> So pg_xlog is really only needed for a dirty shutdown. So what about the
> idea of having pg_xlog on a ramdisk that is synchronized periodically to
> a real disk.
Well, if "periodically" means "at every transaction commit", that's
pretty much what we do n
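The behavior Tom describes, flushing WAL to disk at every commit, is controlled by a handful of settings in the 8.x `postgresql.conf`. A minimal sketch, with illustrative values only (the defaults and the right choices are platform- and workload-dependent):

```
# WAL flush behavior at commit (PostgreSQL 8.x era settings, values illustrative)
fsync = true                # flush WAL to disk at every transaction commit
wal_sync_method = fsync     # platform-dependent; fdatasync/open_sync also exist
wal_buffers = 64            # WAL buffer size, in 8 kB pages
commit_delay = 0            # microseconds to wait for group commit
commit_siblings = 5         # only delay if this many other xacts are active
```

Turning `fsync` off gets you the "ramdisk" behavior in effect, with exactly the crash-safety loss discussed in this thread.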
Tom Lane wrote:
> John A Meinel <[EMAIL PROTECTED]> writes:
>
>>Alvaro Herrera wrote:
>>
>>>I've been asked this a couple of times and I don't know the answer: what
>>>happens if you give XLog a single drive (unmirrored single spindle), and
>>>that drive dies? So the question really is, should you
John A Meinel <[EMAIL PROTECTED]> writes:
> Alvaro Herrera wrote:
>> I've been asked this a couple of times and I don't know the answer: what
>> happens if you give XLog a single drive (unmirrored single spindle), and
>> that drive dies? So the question really is, should you be giving two
>> disks
On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
However, you are absolutely correct in that it's *relative* advice, not
absolute advice. If, for example, you're using a $100,000 EMC SAN as your
storage you'll probably be better off giving it everything and letting its
controller
Alvaro Herrera wrote:
> On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
>
>
>>However, you are absolutely correct in that it's *relative* advice, not
>>absolute advice. If, for example, you're using a $100,000 EMC SAN as your
>>storage you'll probably be better off giving it everythi
I've been asked this a couple of times and I don't know the answer: what
happens if you give XLog a single drive (unmirrored single spindle), and
that drive dies? So the question really is, should you be giving two
disks to XLog?
If that drive dies, you're restoring from backup. You would need t
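If you do give pg_xlog its own (ideally mirrored) volume, the usual technique is to move the directory and leave a symlink behind. This is a sketch only: the paths below are temp-dir stand-ins so the steps can be tried safely; on a real cluster you would stop the postmaster first and use the actual `$PGDATA` and the mount point of the mirrored disks.

```shell
# Relocate pg_xlog onto a dedicated WAL volume, leaving a symlink behind.
# Stand-in directories; substitute the real data dir and mount point.
PGDATA=$(mktemp -d)        # stand-in for the cluster data directory
WALVOL=$(mktemp -d)        # stand-in for the dedicated WAL volume
mkdir "$PGDATA/pg_xlog"    # pretend this holds the existing WAL segments

mv "$PGDATA/pg_xlog" "$WALVOL/pg_xlog"      # move WAL off the data disks
ln -s "$WALVOL/pg_xlog" "$PGDATA/pg_xlog"   # postmaster follows the symlink

ls -ld "$PGDATA/pg_xlog"   # shows a symlink pointing at the WAL volume
```

The mirroring itself (RAID-1 on the two WAL disks) happens below this layer, in the OS or controller, which is why the thread frames it as "two disks for XLog."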
On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
> However, you are absolutely correct in that it's *relative* advice, not
> absolute advice. If, for example, you're using a $100,000 EMC SAN as your
> storage you'll probably be better off giving it everything and letting its
> con
Jeff,
> > 4) pg_xlog: If your pg_xlog is on a spindle that is *only* for pg_xlog,
> > you're better off.
>
> Like Mr. Stone said earlier, this is pure dogma. In my experience,
> xlogs on the same volume with data is much faster if both are on
> battery-backed write-back RAID controller memory. Movin
--
From: Paul Johnson <[EMAIL PROTECTED]>
Date: Thu, 11 Aug 2005 13:23:21 +0100 (BST)
To: pgsql-performance@postgresql.org
Subject: [PERFORM] PG8 Tuning
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an e
On Aug 11, 2005, at 12:58 PM, Jeffrey W. Baker wrote:
Like Mr. Stone said earlier, this is pure dogma. In my experience,
xlogs on the same volume with data is much faster if both are on
battery-backed write-back RAID controller memory. Moving from this
situation to xlogs on a single normal di
On Thu, Aug 11, 2005 at 10:18:44AM -0700, Mark Lewis wrote:
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed write
cache would make a really
> Actually, it seems to me that with the addition of the WAL in PostgreSQL
> and the subsequent decreased need to fsync the data files themselves
> (only during checkpoints?), that the only time a battery-backed write
> cache would make a really large performance difference would be on the
> drive(
(Musing, trying to think of a general-purpose performance-tuning rule
that applies here):
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed wr
I think the T-3 RAID at least breaks some of these rules -- I've got 2
T-3's, 1 configured as RAID-10 and the other as RAID5, and they both
seem to perform about the same. I use RAID5 with a hot spare, so it's
using 8 spindles.
I got a lot of performance improvement out of mounting the fs noatim
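The noatime idea mentioned above turns off the per-read update of file access times, which otherwise generates extra writes on a busy data volume. A sketch of what that looks like in `/etc/fstab`; the device name and mount point here are hypothetical:

```
# Illustrative /etc/fstab entry: suppress atime updates on the PG data volume.
/dev/sdb1   /var/lib/pgsql   ext3   defaults,noatime   1 2
```

PostgreSQL never uses file access times itself, so this is generally a safe option to apply to the data and WAL filesystems.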
On Fri, 2005-08-12 at 08:47 +, Steve Poe wrote:
> Paul,
>
> Before I say anything else, one online document which may be of
> assistance to you is:
> http://www.powerpostgresql.com/PerfList/
>
> Some thoughts I have:
>
> 3) Your shared RAM setting seems overkill to me. Part of the challeng
On Thu, Aug 11, 2005 at 01:23:21PM +0100, Paul Johnson wrote:
I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data f
Paul,
Before I say anything else, one online document which may be of
assistance to you is:
http://www.powerpostgresql.com/PerfList/
Some thoughts I have:
3) Your shared RAM setting seems overkill to me. Part of the challenge
is you're going from 1000 to 262K with no assessment in between. Eac
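Josh's point about jumping from 1000 straight to 262K can be made concrete: `shared_buffers` is counted in 8 kB pages, so 262144 pages is 2 GB. A hedged sketch of stepping up through intermediate values instead, benchmarking the warehouse workload at each step (these numbers are illustrative, not recommendations):

```
# Illustrative postgresql.conf steps between 1000 and 262144 (8 kB pages).
# Test the workload at each value rather than jumping straight to the top.
shared_buffers = 10000        # ~78 MB
#shared_buffers = 30000       # ~234 MB
#shared_buffers = 60000       # ~469 MB
effective_cache_size = 262144 # let the planner know about the OS cache instead
```

On 8.x, memory beyond a moderate `shared_buffers` is often better left to the OS filesystem cache and advertised via `effective_cache_size`.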
Paul Johnson wrote:
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.
The system is for the sole use of a couple of data warehouse
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.
The system is for the sole use of a couple of data warehouse developers,
hence we a