There is 64MB on the 6i and 192MB on the 642 controller. I wish the
controllers had a writeback enable option like the LSI MegaRAID
adapters have. I have tried splitting the cache accelerator 25/75,
75/25, 0/100, and 100/0, but the results really did not improve.
They have a writeback option, but you
Steve,
If this is an internal RAID1 on two disks, it looks great.
Based on the random seeks though (578 seeks/sec), it looks like maybe it's 6
disks in a RAID10?
- Luke
On 8/16/06 7:10 PM, Steve Poe [EMAIL PROTECTED] wrote:
Everyone,
I wanted to follow-up on bonnie results for the internal RAID1 which is connected to the SmartArray 6i.
Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and
Steve,
If this is an internal RAID1 on two disks, it looks great.
Based on the random seeks though (578 seeks/sec), it looks like maybe
it's 6 disks in a RAID10?
Luke,

Nope. It is only a RAID1 for the 2 internal discs connected to the SmartArray 6i. This is where I *had* the pg_xlog located when the performance was very poor. Also, I just found out the default stripe size is 128k. Would this be a problem for pg_xlog?
The 6-disc RAID10 you speak of is on the SmartArray 642 adapter.
Steve,
On 8/18/06 10:39 AM, Steve Poe [EMAIL PROTECTED] wrote:
Nope. it is only a RAID1 for the 2 internal discs connected to the SmartArray
6i. This is where I *had* the pg_xlog located when the performance was very
poor. Also, I just found out the default stripe size is 128k. Would this be a problem for pg_xlog?
Luke,

ISTM that the main performance issue for xlog is going to be the rate at
which fdatasync operations complete, and the stripe size shouldn't hurt that.

I thought so. However, I've also tried running the PGDATA off of the RAID1 as a test and it is poor.
What are your postgresql.conf settings for
Steve,
One thing here is that wal_sync_method should be set to fdatasync and not fsync. In fact, the default is fdatasync, but because you have uncommented the standard line in the file, it is changed to fsync, which is a lot slower.
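For 7.4-era configs, the fix Luke describes would look something like this in postgresql.conf (a sketch; on Linux the compiled-in default is fdatasync, so simply re-commenting the line also works):

```
#wal_sync_method = fsync     # commenting this back out restores the
                             # platform default (fdatasync on Linux)
wal_sync_method = fdatasync  # sync WAL data only, skip extra metadata flushes
```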
Luke,

I'll try it, but you're right, it should not matter. The two systems are:

HP DL385 (dual Opteron 265 I believe), 8GB of RAM, two internal RAID1 U320 10K
Sun W2100z (dual Opteron 245 I believe), 4GB of RAM, 1 U320 10K drive with LSI MegaRAID 2X 128M driving two external 4-disc arrays U320 10K
Hi, Jim,
Jim C. Nasby wrote:
Well, if the controller is caching with a BBU, I'm not sure that order
matters anymore, because the controller should be able to re-order at
will. Theoretically. :) But this is why having some actual data posted
somewhere would be great.
Well, actually, the
Hi,
Can you run bonnie++ version 1.03a on the machine and report the results
here?
Do you know if the figures from bonnie++ can capture the overhead of
the 'fsync' option? I had
very strange performance differences between two Dell 1850
machines months ago,
Everyone,
I wanted to follow-up on bonnie results for the internal RAID1 which is
connected to the SmartArray 6i. I believe this is the problem, but I am
not good at interpreting the results. Here's a sample of three runs:
scsi disc array
On Mon, Aug 14, 2006 at 01:03:41PM -0400, Michael Stone wrote:
On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:
Got any data to back that up?
yes. that I'm willing to dig out? no. :)
Well, I'm not digging hard numbers out either, so that's fair. :) But it
would be very handy if
On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:
On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
Wow, interesting. IIRC, XFS is lower performing than ext3,
For xlog, maybe. For data, no. Both are definitely slower than ext2 for
xlog, which is another reason to have xlog on a small filesystem which doesn't need metadata journalling.
On Tue, Aug 15, 2006 at 11:25:24AM -0500, Jim C. Nasby wrote:
Well, if the controller is caching with a BBU, I'm not sure that order
matters anymore, because the controller should be able to re-order at
will. Theoretically. :) But this is why having some actual data posted
somewhere would be great.
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was
discussed...
The bottom line is that the only reason you need a metadata journalling
filesystem is to avoid a long fsck after an unclean shutdown.
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:
On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
Wow, interesting. IIRC, XFS is lower performing than ext3,
For xlog, maybe. For data, no. Both are definitely slower than ext2 for xlog.
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was
discussed...
I looked last night
On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was
discussed...
On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
Are 'we' sure that such a setup can't lose any data?
On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was
discussed...
I looked last night
[EMAIL PROTECTED] writes:
I've been worrying about this myself, and my current conclusion is that
ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which
could lead to the need to trash the xlog.
Even ext3 in writeback mode allows for the indirect blocks to be updated
On Tue, Aug 15, 2006 at 04:05:17PM -0400, Tom Lane wrote:
[EMAIL PROTECTED] writes:
I've been worrying about this myself, and my current conclusion is that
ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which
could lead to the need to trash the xlog.
Even ext3 in
[EMAIL PROTECTED] writes:
WAL file is never appended - only re-written?
If so, then I'm wrong, and ext2 is fine. The requirement is that no
file system structures change as a result of any writes that
PostgreSQL does. If no file system structures change, then I take
everything back.
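The preallocate-then-rewrite behavior being described can be sketched in a few lines of Python (the segment name and sizes here are illustrative, not PostgreSQL's actual values):

```python
import os
import tempfile

SEG_SIZE = 1 << 20  # 1MB stand-in for a 16MB WAL segment

# Preallocate the whole segment up front (as PostgreSQL does for WAL),
# so later writes never extend the file or allocate new blocks.
path = os.path.join(tempfile.mkdtemp(), "000000010000000000000001")
with open(path, "wb") as f:
    f.write(b"\0" * SEG_SIZE)
    f.flush()
    os.fsync(f.fileno())  # one full fsync to get the metadata on disk

# From now on, records are only ever rewritten in place...
with open(path, "r+b") as f:
    f.seek(8192)
    f.write(b"fake WAL record")
    f.flush()
    os.fsync(f.fileno())  # ...so no file system structures change

assert os.path.getsize(path) == SEG_SIZE  # the file never grew
```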
On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:
Now, if
fsync'ing a file also ensures that all the metadata is written, then
we're probably fine...
...and it does. Unclean shutdowns cause problems in general because
filesystems operate asynchronously. postgres (and other
On Tue, Aug 15, 2006 at 03:39:51PM -0400, [EMAIL PROTECTED] wrote:
No. This is not true. Updating the file system structure (inodes, indirect
blocks) touches a separate part of the disk than the actual data. If
the file system structure is modified, say, to extend a file to allow
it to contain
On Tue, Aug 15, 2006 at 04:58:59PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 03:39:51PM -0400, [EMAIL PROTECTED] wrote:
No. This is not true. Updating the file system structure (inodes, indirect
blocks) touches a separate part of the disk than the actual data. If
the file system
On Tue, Aug 15, 2006 at 05:38:43PM -0400, [EMAIL PROTECTED] wrote:
I didn't know that the xlog segment only uses pre-allocated space. I
ignore mtime/atime as they don't count as file system structure
changes to me. It's updating a field in place. No change to the structure.
With the
On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:
This is only valid if the pre-allocation is also fsync'd *and* fsync
ensures that both the metadata and file data are on disk. Anyone
actually checked that? :)
fsync() does that, yes. fdatasync() (if it exists), OTOH, doesn't sync the metadata.
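The cost difference is easy to measure with a small microbenchmark. This is a sketch only: absolute numbers depend entirely on the disk and write cache, and os.fdatasync is not available on every platform.

```python
import os
import tempfile
import time

def time_sync(sync_fn, n=100):
    """Time n in-place 8KB writes, each followed by the given sync call."""
    path = os.path.join(tempfile.mkdtemp(), "walseg")
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.write(fd, b"\0" * 8192)
    start = time.time()
    for _ in range(n):
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, b"x" * 8192)
        sync_fn(fd)  # fsync flushes data and metadata; fdatasync skips
                     # metadata that isn't needed to read the data back
    os.close(fd)
    return time.time() - start

print("fsync:     %.3fs" % time_sync(os.fsync))
if hasattr(os, "fdatasync"):  # POSIX-only; absent on some platforms
    print("fdatasync: %.3fs" % time_sync(os.fdatasync))
```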
On Tue, 15 Aug 2006 [EMAIL PROTECTED] wrote:
This is also wrong. fsck is needed because the file system is broken.
nope, the file system *may* be broken. the dirty flag simply indicates
that the filesystem needs to be checked to find out whether or not it is
broken.
Ah, but if we knew it
Steinar H. Gunderson [EMAIL PROTECTED] writes:
On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:
This is only valid if the pre-allocation is also fsync'd *and* fsync
ensures that both the metadata and file data are on disk. Anyone
actually checked that? :)
fsync() does that, yes.
Jim,

I have to say Michael is onto something here, to my surprise. I partitioned the RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted with XFS and pg_xlog as ext2. Performance jumped up to a median of 98 TPS. I could reproduce a similar result with the LSI MegaRAID 2X adapter as well.
On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:
Got any data to back that up?
yes. that I'm willing to dig out? no. :)
The problem with separate partitions is that it means more head movement
for the drives. If it's all one partition the pg_xlog data will tend to
be
On Mon, Aug 14, 2006 at 08:51:09AM -0700, Steve Poe wrote:
Jim,
I have to say Michael is onto something here to my surprise. I partitioned
the RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted
with XFS and pg_xlog as ext2. Performance jumped up to a median of 98 TPS. I could reproduce a similar result with the LSI MegaRAID 2X adapter as well.
On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
Wow, interesting. IIRC, XFS is lower performing than ext3,
For xlog, maybe. For data, no. Both are definitely slower than ext2 for
xlog, which is another reason to have xlog on a small filesystem which
doesn't need metadata journalling.
On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
I tried as you suggested and my performance dropped by 50%. I went from
a 32 TPS to 16. Oh well.
If you put data and xlog on the same array, put them on separate
partitions, probably formatted differently (ext2 on xlog).
Mike Stone
Mike,
On 8/10/06 4:09 AM, Michael Stone [EMAIL PROTECTED] wrote:
On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
I tried as you suggested and my performance dropped by 50%. I went from
a 32 TPS to 16. Oh well.
If you put data and xlog on the same array, put them on separate partitions, probably formatted differently (ext2 on xlog).
On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:
Mike,
On 8/10/06 4:09 AM, Michael Stone [EMAIL PROTECTED] wrote:
On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:
I tried as you suggested and my performance dropped by 50%. I went from
a 32 TPS to 16. Oh well.
If you put data and xlog on the same array, put them on separate partitions.
Scott,

I *could* rip out the LSI MegaRAID 2X from my Sun box; this belongs to me for testing, but I don't know if it will fit in the DL385. Do they have full-height/length slots? I've not worked on this type of box before. I was thinking this is the next step. In the meantime, I've discovered their
Luke,

I checked dmesg one more time and I found this regarding the cciss driver:

Filesystem cciss/c1d0p1: Disabling barriers, not supported by the underlying device.

Don't know if it means anything, but thought I'd mention it.
Steve

On 8/8/06, Steve Poe [EMAIL PROTECTED] wrote:

Luke,

I thought so. In my
Luke,
I check dmesg one more time and I found this regarding the
cciss driver:
Filesystem cciss/c1d0p1: Disabling barriers, not supported
by the underlying device.
Don't know if it means anything, but thought I'd mention it.
Steve
support and report back to this list what you find out?

- Luke

-----Original Message-----
From: Steve Poe [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 08, 2006 11:33 PM
To: Luke Lonergan
Cc: Alex Turner; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgresql Performance on an HP DL385
Steve,
I will do that. If it is the general impression that this
server should perform well with PostgreSQL, are the RAID
cards, the 6i and 642, sufficient to your knowledge? I am
wondering if it is the disc array itself.
I think that is the question to be answered by HP support. Ask
Luke,

I hope so. I'll keep you and the list up-to-date as I learn more.

Steve

On 8/8/06, Luke Lonergan [EMAIL PROTECTED] wrote:

Steve,

I will do that. If it is the general impression that this server should perform well with PostgreSQL, are the RAID cards, the 6i and 642, sufficient to your
On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:
Luke,
I thought so. In my test, I tried to be fair/equal since my Sun box has two
4-disc arrays each on their own channel. So, I just used one of them which
should be a little slower than the 6-disc with 192MB cache.
Incidentally,
Jim,

I'll give it a try. However, I did not see an option anywhere in the BIOS configuration of the 642 RAID adapter to enable writeback. It may have been mislabeled 'cache accelerator', where you can give a percentage to read/write. That aspect did not change the performance like the LSI MegaRAID adapter does.
On Wed, 2006-08-09 at 16:11, Steve Poe wrote:
Jim,
I'll give it a try. However, I did not see an option anywhere in the BIOS
configuration of the 642 RAID adapter to enable writeback. It may have
been mislabeled 'cache accelerator', where you can give a percentage to
read/write. That aspect did not change the performance like the LSI MegaRAID adapter does.
Scott,

Do you know how to activate the writeback on the RAID controller from HP?

Steve

On 8/9/06, Scott Marlowe [EMAIL PROTECTED] wrote:

On Wed, 2006-08-09 at 16:11, Steve Poe wrote:

Jim, I'll give it a try. However, I did not see anywhere in the BIOS configuration of the 642 RAID adapter to enable
Jim,
I tried as you suggested and my performance dropped by 50%. I went from
a 32 TPS to 16. Oh well.
Steve
On Wed, 2006-08-09 at 16:05 -0500, Jim C. Nasby wrote:
On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:
Luke,
I thought so. In my test, I tried to be fair/equal since my Sun box has two 4-disc arrays each on their own channel.
These numbers are pretty darn good for a four-disk RAID 10, pretty close to perfect in fact. Nice advert for the 642 - I guess we have a hardware RAID controller that will read independently from mirrors.

Alex
On 8/8/06, Steve Poe [EMAIL PROTECTED] wrote:
Luke,

Here are the results of two runs of 16GB file tests on XFS.
On Aug 5, 2006, at 7:10 PM, Steve Poe wrote:
Has anyone worked with this server before? I've read the SmartArray 6i is a
poor performer, I wonder if the SmartArray 642 adapter would have the
same fate?
My newest db is a DL385, 6 disks. It runs very nicely. I have no
issues with the 6i
These numbers are pretty darn good for a four-disk RAID 10, pretty close to
perfect in fact. Nice advert for the 642.
Sounds like there are a few moving parts here, one of which is the ODBC driver.

Yes, I need to use it since my clients use it for their veterinary application.

First - using 7.4.x postgres is a big variable - not much experience on this list with 7.4.x anymore.

Like the previous, we have to use it
Are any of the disks not healthy? Do you see any I/O errors in dmesg?
Luke,
In my vmstat report, it is an average per minute, not per-second. Also,
I found that in the first minute of the very first run, the HP's bi
value hits a high of 221184 then it tanks after that.
Steve
Steve,
Sun box with 4-disc array (4GB RAM, four 167GB 10K SCSI discs in RAID10,
LSI MegaRAID 128MB). This is after 8 runs.
dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5
dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53
dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0
Steve,
Are any of the disks not healthy? Do you see any I/O
errors in dmesg?
In my vmstat report, it is an average per minute, not
per-second. Also, I found that in the first minute of the
very first run, the HP's bi
value hits a high of 221184 then it tanks after that.
Based on
Luke,

I thought so. In my test, I tried to be fair/equal since my Sun box has two 4-disc arrays each on their own channel. So, I just used one of them, which should be a little slower than the 6-disc with 192MB cache.
Incidentally, the two internal SCSI drives, which are on the 6i adapter, generated a
Steve,
On 8/5/06 4:10 PM, Steve Poe [EMAIL PROTECTED] wrote:
I am doing some consulting for an animal hospital in the Boston, MA area.
They wanted a new server to run their database on. The client wants
everything from one vendor; they wanted Dell initially, but I'd advised
against it. I
Luke,
I'll do that then post the results. I ran zcav on it (default
settings) on the disc array formatted XFS and its peak MB/s was around
85-90. I am using kernel 2.6.17.7, mounting the disc array with
noatime, nodiratime.
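For reference, those mount options plus the XFS/ext2 split discussed earlier would look roughly like this in /etc/fstab (device names and mount points here are hypothetical; adjust to your cciss array):

```
# PGDATA on XFS, pg_xlog on ext2 - illustrative devices and paths only
/dev/cciss/c1d0p1  /var/lib/pgsql/data   xfs   noatime,nodiratime  0 0
/dev/cciss/c1d0p2  /var/lib/pgsql/xlog   ext2  noatime             0 0
```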
Thanks for your feedback.
Steve
On 8/7/06, Luke Lonergan [EMAIL
The database data is on the drive array(RAID10) and the pg_xlog is on
the internal RAID1 on the 6i controller. The results have been poor.
I have heard that the 6i was actually decent but to avoid the 5i.
Joshua D. Drake
My guess is the controllers are garbage.
Can you run bonnie++
There is 64MB on the 6i and 192MB on the 642 controller. I wish the
controllers had a writeback enable option like the LSI MegaRAID
adapters have. I have tried splitting the cache accelerator 25/75, 75/25,
0/100, and 100/0, but the results really did not improve.
Steve

On 8/7/06, Joshua D. Drake [EMAIL
Luke,
Here are the results of two runs of 16GB file tests on XFS.
scsi disc array
xfs
,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,16,3172,7,+,+++,2957,9,3197,10,+,+++,2484,8
scsi disc array
xfs
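Those CSV lines are hard to read by eye; a short script can pull out the headline columns. This is a sketch that assumes the bonnie++ 1.03 CSV column order (sequential block write, rewrite, sequential block read, then random seeks):

```python
# Pull the headline numbers out of a bonnie++ 1.03 CSV line.
# Column order assumed from the bonnie++ 1.03 output format; values are
# K/sec except seeks (per second); CPU percentages are skipped here.
def parse_bonnie_csv(line: str) -> dict:
    f = line.strip().split(",")
    return {
        "size": f[1],
        "seq_write_kps": int(f[4]),     # sequential block write
        "rewrite_kps": int(f[6]),       # rewrite
        "seq_read_kps": int(f[10]),     # sequential block read
        "seeks_per_sec": float(f[12]),  # random seeks
    }

sample = (",16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,"
          "16,3172,7,+,+++,2957,9,3197,10,+,+++,2484,8")
print(parse_bonnie_csv(sample))
```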