Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-07 Thread John-Paul Drawneek
Final rant on this.

Managed to get the box re-installed, and the performance issue has vanished.

So there is a performance bug in ZFS somewhere.

Not sure whether to file a bug report, as I can't provide any more information now.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-03 Thread Collier Minerich
Please unsubscribe me

COLLIER


-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John-Paul Drawneek
Sent: Thursday, September 03, 2009 2:13 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs performance cliff when over 80% util, still
occuring when pool in 6

So I have poked and prodded the disks and they both seem fine.

And yet my rpool is still slow.

Any ideas on what to do now?


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-03 Thread John-Paul Drawneek
So I have poked and prodded the disks and they both seem fine.

And yet my rpool is still slow.

Any ideas on what to do now?


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-02 Thread John-Paul Drawneek
No joy.

c1t0d0 89 MB/sec
c1t1d0 89 MB/sec
c2t0d0 123 MB/sec
c2t1d0 123 MB/sec

The first two are the rpool.


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn

On Tue, 1 Sep 2009, Jpd wrote:


Thanks.

Any idea how to work out which one?

I can't find SMART tools in IPS, so what other ways are there?


You could try using a script like this one to find pokey disks:

#!/bin/ksh

# Date: Mon, 14 Apr 2008 15:49:41 -0700
# From: Jeff Bonwick 
# To: Henrik Hjort 
# Cc: zfs-discuss@opensolaris.org
# Subject: Re: [zfs-discuss] Performance of one single 'cp'
# 
# No, that is definitely not expected.
# 
# One thing that can hose you is having a single disk that performs
# really badly.  I've seen disks as slow as 5 MB/sec due to vibration,
# bad sectors, etc.  To see if you have such a disk, try my diskqual.sh
# script (below).  On my desktop system, which has 8 drives, I get:
#
# # ./diskqual.sh
# c1t0d0 65 MB/sec
# c1t1d0 63 MB/sec
# c2t0d0 59 MB/sec
# c2t1d0 63 MB/sec
# c3t0d0 60 MB/sec
# c3t1d0 57 MB/sec
# c4t0d0 61 MB/sec
# c4t1d0 61 MB/sec
#
# The diskqual test is non-destructive (it only does reads), but to
# get valid numbers you should run it on an otherwise idle system.

disks=`format </dev/null | grep c.t.d | nawk '{print $2}'`

getspeed1()
{
	# Time a 64 MB raw read (1024 x 64k blocks) and convert
	# elapsed seconds to MB/sec: 67.108864 MB / real time.
	ptime dd if=/dev/rdsk/${1}s0 of=/dev/null bs=64k count=1024 2>&1 |
	    nawk '$1 == "real" { printf("%.0f\n", 67.108864 / $2) }'
}

getspeed()
{
# Best out of 6
for iter in 1 2 3 4 5 6
do
getspeed1 $1
done | sort -n | tail -2 | head -1
}

for disk in $disks
do
echo $disk `getspeed $disk` MB/sec
done


--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn

On Tue, 1 Sep 2009, John-Paul Drawneek wrote:


I did not migrate my disks.

I now have 2 pools - rpool is at 60% and is still dog slow.

Also, scrubbing the rpool causes the box to lock up.


This sounds like a hardware problem and not something related to 
fragmentation.  Probably you have a slow/failing disk.


Bob


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread John-Paul Drawneek
I did not migrate my disks.

I now have 2 pools - rpool is at 60% and is still dog slow.

Also, scrubbing the rpool causes the box to lock up.


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the existing data is not 
automatically migrated onto the new disks. You will have to rewrite the data 
somehow, usually with a backup and restore.
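As a sketch of one way to do that rewrite without leaving the pool (pool and 
dataset names here are illustrative, not from the thread), a local zfs 
send/receive writes a fresh copy whose blocks get allocated across all the 
disks now in the pool:

```shell
# Snapshot the existing dataset (names are hypothetical).
zfs snapshot tank/data@migrate

# Replicate into a new dataset; the received copy is written fresh,
# so its blocks are spread across the expanded set of vdevs.
zfs send tank/data@migrate | zfs receive tank/data-new

# After verifying the copy, swap the dataset names.
zfs rename tank/data tank/data-old
zfs rename tank/data-new tank/data
```

You would still want to keep (or destroy) tank/data-old deliberately once you 
are satisfied with the copy; this is no substitute for a real backup.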

-Scott