Luke,

I'll try it, but you're right, it should not matter. The two systems are:

HP DL385 (dual Opteron 265 I believe), 8GB of RAM, two internal RAID1 U320 10K discs
Sun W2100z (dual Opteron 245 I believe), 4GB of RAM, one internal U320 10K drive, with an LSI MegaRAID 2X (128MB) driving two external 4-disc arrays of U320 10K drives
Title: Re: [PERFORM] Postgresql Performance on an HP DL385 and
Steve,
One thing here is that “wal_sync_method” should be set to “fdatasync” and not “fsync”. In fact, the default is fdatasync, but because you have uncommented the standard line in the file, it is changed to “fsync”, which is a slower choice for xlog writes.
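(For reference, a minimal sketch of checking and setting this; the psql invocation assumes you can connect to the database, and the conf line shown is just the commented-out default:)

    # confirm what the running server is actually using
    psql -c "SHOW wal_sync_method;"

    # in postgresql.conf, either leave the line commented out to keep the
    # platform default, or set it explicitly:
    #   wal_sync_method = fdatasync   # alternatives: fsync, open_sync, open_datasync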
Luke,

> ISTM that the main performance issue for xlog is going to be the rate at
> which fdatasync operations complete, and the stripe size shouldn't hurt that.

I thought so. However, I've also tried running the PGDATA off of the RAID1 as a test and it is poor.
What are your postgresql.conf settings for
Steve,
On 8/18/06 10:39 AM, "Steve Poe" <[EMAIL PROTECTED]> wrote:
> Nope. it is only a RAID1 for the 2 internal discs connected to the SmartArray
> 6i. This is where I *had* the pg_xlog located when the performance was very
> poor. Also, I just found out the default stripe size is 128k. Would this be a problem for pg_xlog?
Luke,

Nope, it is only a RAID1 for the 2 internal discs connected to the SmartArray 6i. This is where I *had* the pg_xlog located when the performance was very poor. Also, I just found out the default stripe size is 128k. Would this be a problem for pg_xlog?
The 6-disc RAID10 you speak of is on the SmartArray 642 controller.
Sent: August 18, 2006 10:38 AM
To: [EMAIL PROTECTED]; Scott Marlowe
Cc: Michael Stone; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and
Steve,
If this is an internal RAID1 on two disks, it looks great.
Based on the random seeks though (578 seeks/sec), it looks like maybe it's 6 disks in a RAID10?
Steve,
If this is an internal RAID1 on two disks, it looks great.
Based on the random seeks though (578 seeks/sec), it looks like maybe it's 6
disks in a RAID10?
- Luke
On 8/16/06 7:10 PM, "Steve Poe" <[EMAIL PROTECTED]> wrote:
> Everyone,
>
> I wanted to follow-up on bonnie results for the internal RAID1 which is connected to the SmartArray 6i.
> There is 64MB on the 6i and 192MB on the 642 controller. I wish the
> controllers had a "writeback" enable option like the LSI MegaRAID
> adapters have. I have tried splitting the cache accelerator 25/75,
> 75/25, 0/100, and 100/0, but the results really did not improve.
They have a writeback option, but
Everyone,
I wanted to follow-up on bonnie results for the internal RAID1 which is
connected to the SmartArray 6i. I believe this is the problem, but I am
not good at interpreting the results. Here's a sample of three runs:

scsi disc array,16G,47983,67,65492,20,37214,6,73785,87,89787,6,578.2,0,16
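(For anyone reproducing these figures: the comma-separated lines are bonnie++'s machine-readable output. A rough sketch of the invocation, where the test directory, user, and output file name are placeholders; bon_csv2txt ships with bonnie++ and renders the CSV as the usual table:)

    # 16G working set so the controller and OS caches can't hide the discs
    bonnie++ -d /mnt/pgdata/bonnie-test -s 16G -u postgres -q > raid1.csv
    bon_csv2txt < raid1.csv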
Hi,
> Can you run bonnie++ version 1.03a on the machine and report the results
> here?
Do you know if the figures from bonnie++ are able to measure the
performance related to the overhead of the 'fsync' option? I had
very strange performance differences between two Dell 1850
machines months ago,
Hi, Jim,
Jim C. Nasby wrote:
> Well, if the controller is caching with a BBU, I'm not sure that order
> matters anymore, because the controller should be able to re-order at
> will. Theoretically. :) But this is why having some actual data posted
> somewhere would be great.
Well, actually, the c
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:
>> This is only valid if the pre-allocation is also fsync'd *and* fsync
>> ensures that both the metadata and file data are on disk. Anyone
>> actually checked that? :)
> fsync() does that, yes.
On Tue, 15 Aug 2006 [EMAIL PROTECTED] wrote:
This is also wrong. fsck is needed because the file system is broken.
nope, the file system *may* be broken. the dirty flag simply indicates
that the filesystem needs to be checked to find out whether or not it is
broken.
Ah, but if we knew it wasn'
On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:
> This is only valid if the pre-allocation is also fsync'd *and* fsync
> ensures that both the metadata and file data are on disk. Anyone
> actually checked that? :)
fsync() does that, yes. fdatasync() (if it exists), OTOH, doesn't sync the metadata.
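(One way to see which of these calls a backend actually issues for the xlog is to attach strace to it while the test runs; a sketch, with the PID as a placeholder, and note that tracing adds its own overhead:)

    # count fsync/fdatasync calls for the traced backend; Ctrl-C prints the summary
    strace -c -e trace=fsync,fdatasync -p <backend_pid>
    # or watch each call with timestamps
    strace -tt -e trace=fsync,fdatasync -p <backend_pid>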
On Tue, Aug 15, 2006 at 05:38:43PM -0400, [EMAIL PROTECTED] wrote:
> I didn't know that the xlog segment only uses pre-allocated space. I
> ignore mtime/atime as they don't count as file system structure
> changes to me. It's updating a field in place. No change to the structure.
>
> With the pre-
On Tue, Aug 15, 2006 at 04:58:59PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 03:39:51PM -0400, [EMAIL PROTECTED] wrote:
> >No. This is not true. Updating the file system structure (inodes, indirect
> >blocks) touches a separate part of the disk than the actual data. If
> >the file syste
On Tue, Aug 15, 2006 at 03:39:51PM -0400, [EMAIL PROTECTED] wrote:
No. This is not true. Updating the file system structure (inodes, indirect
blocks) touches a separate part of the disk than the actual data. If
the file system structure is modified, say, to extend a file to allow
it to contain mo
On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:
Now, if
fsync'ing a file also ensures that all the metadata is written, then
we're probably fine...
...and it does. Unclean shutdowns cause problems in general because
filesystems operate asynchronously. postgres (and other similar
[EMAIL PROTECTED] writes:
> WAL file is never appended - only re-written?
> If so, then I'm wrong, and ext2 is fine. The requirement is that no
> file system structures change as a result of any writes that
> PostgreSQL does. If no file system structures change, then I take
> everything back as un
On Tue, Aug 15, 2006 at 04:05:17PM -0400, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > I've been worrying about this myself, and my current conclusion is that
> > ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which
> > could lead to the need to trash the xlog.
> > Even ext3
[EMAIL PROTECTED] writes:
> I've been worrying about this myself, and my current conclusion is that
> ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which
> could lead to the need to trash the xlog.
> Even ext3 in writeback mode allows for the indirect blocks to be updated
> w
On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:
> So what causes files to get 'lost' and get stuck in lost+found?
> AFAIK that's because the file was written before the metadata. Now, if
> fsync'ing a file also ensures that all the metadata is written, then
> we're probably fine... if
On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
> >>>Are 'we' sure that such a setup can't lose any data?
> >>Yes. If you check the archives, you can even find the last time this was
> >>discussed...
> >I looked la
On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
> >On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
> >>On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
> >>>Are 'we' sure that such a setup can't lose any data?
On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
>Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was discussed...
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
> >Are 'we' sure that such a setup can't lose any data?
> Yes. If you check the archives, you can even find the last time this was
> discussed...
I looked last night (coi
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
> On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:
> > On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
> > >Wow, interesting. IIRC, XFS is lower performing than ext3,
> > For xlog, maybe. For data, no. Both ar
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was
discussed...
The bottom line is that the only reason you need a metadata journalling
filesystem is to s
On Tue, Aug 15, 2006 at 11:25:24AM -0500, Jim C. Nasby wrote:
Well, if the controller is caching with a BBU, I'm not sure that order
matters anymore, because the controller should be able to re-order at
will. Theoretically. :) But this is why having some actual data posted
somewhere would be great.
On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:
> On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
> >Wow, interesting. IIRC, XFS is lower performing than ext3,
>
> For xlog, maybe. For data, no. Both are definitely slower than ext2 for
> xlog, which is another reason to have xlog on a small filesystem which
> doesn't need metadata journalling
On Mon, Aug 14, 2006 at 01:03:41PM -0400, Michael Stone wrote:
> On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:
> >Got any data to back that up?
>
> yes. that I'm willing to dig out? no. :)
Well, I'm not digging hard numbers out either, so that's fair. :) But it
would be very hand
On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
Wow, interesting. IIRC, XFS is lower performing than ext3,
For xlog, maybe. For data, no. Both are definitely slower than ext2 for
xlog, which is another reason to have xlog on a small filesystem which
doesn't need metadata journalling
On Mon, Aug 14, 2006 at 08:51:09AM -0700, Steve Poe wrote:
> Jim,
>
> I have to say Michael is onto something here to my surprise. I partitioned
> the RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted
> with XFS and pg_xlog as ext2. Performance jumped up to median of 98 TPS. I
On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:
Got any data to back that up?
yes. that I'm willing to dig out? no. :)
The problem with separate partitions is that it means more head movement
for the drives. If it's all one partition the pg_xlog data will tend to
be interspersed
Jim,

I have to say Michael is onto something here to my surprise. I partitioned the RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted with XFS and pg_xlog as ext2. Performance jumped up to a median of 98 TPS. I could reproduce a similar result with the LSI MegaRAID 2X adapter as well.
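(For reference, the layout described above looks roughly like the sketch below. Device names and mount points are examples only, the cciss partition names just follow the c1d0pN pattern seen elsewhere in the thread, and the postmaster must be stopped before pg_xlog is moved:)

    # two partitions carved out of the RAID10 on the 642
    mkfs.xfs -f /dev/cciss/c1d0p1              # PGDATA
    mkfs.ext2  /dev/cciss/c1d0p2               # pg_xlog only
    mount -o noatime,nodiratime /dev/cciss/c1d0p1 /pgdata
    mount -o noatime /dev/cciss/c1d0p2 /pgxlog

    # with postgres stopped, relocate the xlog and leave a symlink behind
    mv /pgdata/data/pg_xlog /pgxlog/
    ln -s /pgxlog/pg_xlog /pgdata/data/pg_xlog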
On Thu, Aug 10, 2006 at 07:09:38AM -0400, Michael Stone wrote:
> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
> >I tried as you suggested and my performance dropped by 50%. I went from
> >a 32 TPS to 16. Oh well.
>
> If you put data & xlog on the same array, put them on separate
> partitions, probably formatted differently (ext2 on xlog).
Scott,

I *could* rip out the LSI MegaRAID 2X from my Sun box; this belongs to me for testing. But I don't know if it will fit in the DL385. Do they have full-height/length slots? I've not worked on this type of box before. I was thinking this is the next step. In the meantime, I've discovered their
On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:
> Mike,
>
> On 8/10/06 4:09 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
> >> I tried as you suggested and my performance dropped by 50%. I went from
> >> a 32 TPS to 16. Oh well.
Mike,
On 8/10/06 4:09 AM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
>> I tried as you suggested and my performance dropped by 50%. I went from
>> a 32 TPS to 16. Oh well.
>
> If you put data & xlog on the same array, put them on separate partitions, probably formatted differently (ext2 on xlog).
On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
I tried as you suggested and my performance dropped by 50%. I went from
a 32 TPS to 16. Oh well.
If you put data & xlog on the same array, put them on separate
partitions, probably formatted differently (ext2 on xlog).
Mike Stone
--
Jim,
I tried as you suggested and my performance dropped by 50%. I went from
a 32 TPS to 16. Oh well.
Steve
On Wed, 2006-08-09 at 16:05 -0500, Jim C. Nasby wrote:
> On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:
> > Luke,
> >
> > I thought so. In my test, I tried to be fair/equal si
Scott,

Do you know how to activate the writeback on the RAID controller from HP?

Steve

On 8/9/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-08-09 at 16:11, Steve Poe wrote:
> Jim,
>
> I'll give it a try. However, I did not see anywhere in the BIOS
> configuration of the 642 RAID adapter to enable writeback.
I believe it does, I'll need to check. Thanks for the correction.

Steve

On 8/9/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-08-09 at 16:11, Steve Poe wrote:
> Jim,
>
> I'll give it a try. However, I did not see anywhere in the BIOS
> configuration of the 642 RAID adapter to enable writeback.
On Wed, 2006-08-09 at 16:11, Steve Poe wrote:
> Jim,
>
> I'll give it a try. However, I did not see anywhere in the BIOS
> configuration of the 642 RAID adapter to enable writeback. It may have
> been mislabeled "cache accelerator" where you can give a percentage to
> read/write. That aspect did not change the performance like the LSI MegaRAID adapter does.
Jim,

I'll give it a try. However, I did not see anywhere in the BIOS configuration of the 642 RAID adapter to enable writeback. It may have been mislabeled "cache accelerator" where you can give a percentage to read/write. That aspect did not change the performance like the LSI MegaRAID adapter does.

Steve
On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:
> Luke,
>
> I thought so. In my test, I tried to be fair/equal since my Sun box has two
> 4-disc arrays each on their own channel. So, I just used one of them which
> should be a little slower than the 6-disc with 192MB cache.
>
> Inciden
Luke,

I hope so. I'll keep you and the list up-to-date as I learn more.

Steve

On 8/8/06, Luke Lonergan <[EMAIL PROTECTED]> wrote:
Steve,

> I will do that. If it is the general impression that this
> server should perform well with Postgresql, are the RAID
> cards, the 6i and 642, sufficient to your knowledge?
Steve,
> I will do that. If it is the general impression that this
> server should perform well with Postgresql, are the RAID
> cards, the 6i and 642 sufficient to your knowledge? I am
> wondering if it is the disc array itself.
I think that is the question to be answered by HP support. Can you contact them through HP tech support and report back to this list what you find out?

- Luke

> -----Original Message-----
> From: Steve Poe [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 08, 2006 11:33 PM
> To: Luke Lonergan
> Cc: Alex Turner; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and
>
> Luke,
>
> I checked dmesg one more time and I found this regarding the
> cciss driver:
>
> Filesystem "cciss/c1d0p1": Disabling barriers, not supported
> by the underlying device.
Luke,

I checked dmesg one more time and I found this regarding the cciss driver:

Filesystem "cciss/c1d0p1": Disabling barriers, not supported by the underlying device.

Don't know if it means anything, but thought I'd mention it.

Steve

On 8/8/06, Steve Poe <[EMAIL PROTECTED]> wrote:
Luke, I thought so. In
Luke,

I thought so. In my test, I tried to be fair/equal since my Sun box has two 4-disc arrays each on their own channel. So, I just used one of them, which should be a little slower than the 6-disc with 192MB cache.

Incidentally, the two internal SCSI drives, which are on the 6i adapter, generated a
Steve,
> > Are any of the disks not healthy? Do you see any I/O
> errors in dmesg?
>
> In my vmstat report, it is an average per minute not
> per-second. Also, I found that in the first minute of the
> very first run, the HP's "bi"
> value hits a high of 221184 then it tanks after that.
B
Steve,
> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10
> LSI MegaRAID 128MB). This is after 8 runs.
>
> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5
> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53
> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1
> Are any of the disks not healthy? Do you see any I/O errors in dmesg?
Luke,
In my vmstat report, it is an average per minute not per-second. Also,
I found that in the first minute of the very first run, the HP's "bi"
value hits a high of 221184 then it tanks after that.
Steve
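(The per-minute averages come from sampling vmstat on a long interval; a sketch of collecting them alongside a run, with the interval, count, and log file name as examples. "bi"/"bo" are blocks read from / written to the block devices per second, averaged over each interval, and the first output line is the since-boot average, usually discarded.)

    # one sample per 60 seconds across an 18-minute run, plus the since-boot line
    vmstat 60 19 > vmstat-run1.log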
> Sounds like there are a few moving parts here, one of which is the ODBC
> driver.

Yes, I need to use it since my clients use it for their veterinary application.

> First - using 7.4.x postgres is a big variable - not much experience on this
> list with 7.4.x anymore.

Like the previous, we have to use it.
Steve,
On 8/8/06 9:57 AM, "Steve Poe" <[EMAIL PROTECTED]> wrote:
> On the Sun box, with 4 discs (RAID10) to one channel on the LSI RAID card, I
> see an average TPS around 70. If I ran this off of one disc, I see an average
> TPS of 32.
>
> on the HP box, with 6-discs in RAID10 and 1 spare. I s
Luke,

Here's some background: I use Pg 7.4.13 (I've tested as far back as 7.4.8). I use an 8GB database with a program called odbc-bench. I run an 18 minute test. With each run, HP box excluded, I unmount the discs involved, reformat, un-tar the backup of PGDATA and pg_xlog back onto the discs, and start up Postgres
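(A rough sketch of that per-run reset cycle; the paths, device names, backup tarballs, and init script are all placeholders, and the benchmark itself is whatever odbc-bench invocation you normally use:)

    /etc/init.d/postgresql stop
    umount /pgdata /pgxlog
    mkfs.xfs -f /dev/cciss/c1d0p1
    mkfs.ext2  /dev/cciss/c1d0p2
    mount /dev/cciss/c1d0p1 /pgdata
    mount /dev/cciss/c1d0p2 /pgxlog
    tar -xzf /backup/pgdata.tar.gz  -C /pgdata    # restore PGDATA
    tar -xzf /backup/pg_xlog.tar.gz -C /pgxlog    # restore pg_xlog
    /etc/init.d/postgresql start
    # ...then run the 18-minute odbc-bench test and record the TPS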
Steve,
On 8/8/06 8:01 AM, "Steve Poe" <[EMAIL PROTECTED]> wrote:
> Thanks for the feedback. I use the same database test that I've run a Sun
> dual Opteron with 4Gb RAM and (2) four disk arrays in RAID10. The sun box with
> one disc on an LSI MegaRAID 2-channel adapter outperforms this HP box. I
-----Original Message-----
From: Alex Turner [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 08, 2006 02:40 AM Eastern Standard Time
To: [EMAIL PROTECTED]
Cc: Luke Lonergan; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and

These numbers are pretty darn good for a four disk RAID 10, pretty close to
perfect in fact. Nice advert for the 642 - I guess we have a Hardware RAID
controller that will read independently from mirrors.
Alex,

Maybe I mis-stated: this is a 6-disk array.

Steve

On 8/7/06, Alex Turner <[EMAIL PROTECTED]> wrote:
These numbers are pretty darn good for a four disk RAID 10, pretty close to perfect in fact. Nice advert for the 642 - I guess we have a Hardware RAID controller that will read independently from mirrors.
On Aug 5, 2006, at 7:10 PM, Steve Poe wrote:
Has anyone worked with this server before? I've read the SmartArray 6i is a
poor performer, I wonder if the SmartArray 642 adapter would have the
same fate?
My newest db is a DL385, 6 disks. It runs very nicely. I have no
issues with the 6i controller.
These numbers are pretty darn good for a four disk RAID 10, pretty close to perfect in fact. Nice advert for the 642 - I guess we have a Hardware RAID controller that will read independently from mirrors.

Alex
On 8/8/06, Steve Poe <[EMAIL PROTECTED]> wrote:
Luke,

Here are the results of two runs of 16GB file tests on XFS.
Luke,
Here are the results of two runs of 16GB file tests on XFS.
scsi disc array xfs,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,16,3172,7,+,+++,2957,9,3197,10,+,+++,2484,8
scsi disc array xfs,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1,0,16,3119,10,
There is 64MB on the 6i and 192MB on the 642 controller. I wish the
controllers had a "writeback" enable option like the LSI MegaRAID
adapters have. I have tried splitting the cache accelerator 25/75 75/25
0/100 100/0 but the results really did not improve.
Steve

On 8/7/06, Joshua D. Drake <[EMAIL P
The database data is on the drive array (RAID10) and the pg_xlog is on
the internal RAID1 on the 6i controller. The results have been poor.
I have heard that the 6i was actually decent but to avoid the 5i.
Joshua D. Drake
My guess is the controllers are garbage.
Can you run bonnie++ vers
Luke,
I'll do that then post the results. I ran zcav on it (default
settings) on the disc array formatted XFS and its peak MB/s was around
85-90. I am using kernel 2.6.17.7, mounting the disc array with
noatime, nodiratime.
Thanks for your feedback.
Steve
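(For reference, zcav ships with the bonnie++ package and reads straight from the block device; a sketch, with the device and output file as examples, and noatime/nodiratime passed as mount -o options as described above:)

    # sequential-read profile across the whole array
    zcav /dev/cciss/c1d0 > zcav-642-array.txt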
On 8/7/06, Luke Lonergan <[EMAIL PROTECTED]> wrote:
Steve,
On 8/5/06 4:10 PM, "Steve Poe" <[EMAIL PROTECTED]> wrote:
> I am doing some consulting for an animal hospital in the Boston, MA area.
> They wanted a new server to run their database on. The client wants
> everything from one vendor, they wanted Dell initially, I'd advised
> against it. I recommended a dual Opteron system from either Sun or HP.
I am doing some consulting for an animal hospital in the Boston, MA area.
They wanted a new server to run their database on. The client wants
everything from one vendor, they wanted Dell initially, I'd advised
against it. I recommended a dual Opteron system from either Sun or HP.
They settled on a DL385