On 10/03/2012 10:29, Sriram Narayanan wrote:
Hi folks:
At work, I have an R510, an R610 and an R710, all with the H700 PERC
controller.
Based on my experiments, there seems to be no way to bypass the PERC
controller: it appears one can only access the individual disks if they
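For what it's worth, the H700 has no true pass-through/JBOD mode, so the usual workaround is to wrap each disk in a single-drive RAID 0 virtual disk so the OS sees one device per disk. A rough sketch with MegaCli (syntax from memory; the enclosure:slot IDs and adapter number below are placeholders you would read from the -PDList output):

  # List physical disks to find their enclosure:slot IDs
  MegaCli -PDList -aALL
  # Create one single-drive RAID 0 virtual disk per physical disk
  MegaCli -CfgLdAdd -r0 [32:0] -a0
  MegaCli -CfgLdAdd -r0 [32:1] -a0

The controller still sits between ZFS and the disks (no SMART pass-through, cache semantics under PERC control), but ZFS can at least address each disk separately.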
On 09/07/2010 01:37, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Philippe Schwarz
But if you're impressed with the performance from enabling write-back, you can still
Hi,
With a dual-Xeon, 4 GB of RAM (will be 8 GB in a couple of weeks), two PCI-X
3Ware cards and 7 SATA disks (750 GB and 1 TB) on FreeBSD 8.0 (but I think
it's OS-independent), I ran some tests.
The disks are exported as JBOD, but I tried enabling/disabling the write cache.
I tried with UFS and ZFS on the
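For anyone repeating these tests, a crude streaming-write comparison between the two cache settings might look like this on FreeBSD (the file path, size, and controller/unit numbers are arbitrary; this measures sequential throughput only, not the sync-write behaviour that write-back mainly changes):

  # Toggle the 3ware unit's write cache, then measure a streaming write
  tw_cli /c0/u0 set cache=off
  dd if=/dev/zero of=/tank/ddtest bs=1m count=4096
  tw_cli /c0/u0 set cache=on
  dd if=/dev/zero of=/tank/ddtest bs=1m count=4096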
On 08/07/2010 18:52, Freddie Cash wrote:
On Thu, Jul 8, 2010 at 6:10 AM, Philippe Schwarz p...@schwarz-fr.net wrote:
With a dual-Xeon, 4 GB of RAM (will be 8 GB in a couple of weeks), two PCI-X
3Ware cards and 7 SATA disks (750 GB and 1 TB) on FreeBSD 8.0
replacement for the 2 other failing
drives...
Many thanks to you all!
Philippe
up replacing c7t3d0 too (and next c7t4d0), because they
are failing rapidly (compared to yesterday's results)!!
Have a nice weekend,
Philippe
Hi,
I know that ZFS is aware of I/O errors and can alert on or disable a failing disk.
However, ZFS didn't notice these service-time problems at all.
I think it would be a good idea to integrate service-time triggers into ZFS!
What do you think?
Best regards!
Philippe
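Until something like that exists, a stopgap is to watch service times from userland; a minimal sketch (the 100 ms threshold and 60-second interval are arbitrary choices):

  #!/bin/sh
  # Print any device whose average service time (asvc_t, ms)
  # exceeds 100 ms in each 60-second iostat sample.
  # (The first sample reports averages since boot and can be ignored.)
  iostat -xn 60 | awk '$11 != "device" && $8+0 > 100 {
      print "slow disk:", $11, "asvc_t =", $8, "ms"
  }'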
Hi,
Actually, it seems to be a common problem with WD EARS drives (Advanced Format)!
Please see this other OpenSolaris thread:
https://opensolaris.org/jive/thread.jspa?threadID=126637
It is worth investigating!
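Since the EARS drives have 4 KiB physical sectors behind 512-byte emulation, one quick thing to check is the pool's alignment shift (ashift 9 = 512-byte alignment, ashift 12 = 4 KiB). A sketch, using the pool name from the status output later in this thread:

  # Show the alignment shift recorded for each top-level vdev
  zdb -C zfs_raid | grep ashift

With ashift=9 on these drives, every misaligned write forces a read-modify-write inside the disk, which would fit the huge service times reported here.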
I quote:
Just replacing back, and here is the iostat for the new EARS drive:
starting with the replacement of the very bad disk, and I hope the
resilvering won't take too long!! I have no choice!
Thanks to all!
Philippe
not OK...
How do I get rid of this and get a normal c7t2d0??
Thank you!
Philippe
Current status:

  pool: zfs_raid
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h17m, 3.72% done, 7h22m to go
Any idea?
action: Wait for the resilver to complete.
-- richard
Very good! And thank you a lot for your answers!
Philippe
Hmm... the service times of sd3..5 are way too high for healthy disks.
21 writes shouldn't take 1.3 seconds.
Some of your disks are not feeling well, possibly doing block reallocation
like mad all the time, or block recovery of some form. Service times
should be closer to what sd1 and
is supported on the version of ZFS included in OpenSolaris?
Thank you!
Philippe
It looks like your 'sd5' disk is performing horribly, and except for the
horrible performance of 'sd5' (which bottlenecks the I/O), 'sd4' would
look just as bad. Regardless, the first step would be to investigate 'sd5'.
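One hedged way to dig into the suspect disk, assuming smartmontools is installed (the device path and the -d sat flag are guesses that may need adjusting for this controller):

  # Check reallocation counters and the drive's error log
  smartctl -a -d sat /dev/rdsk/c7t5d0s0

Steadily rising Reallocated_Sector_Ct or Current_Pending_Sector values would support the block-reallocation theory.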
Hi Bob!
I've already tried the pool without the sd5 disk (so
! Or maybe... do I need more RAM??
Thank you for any thoughts!!
Philippe
device    r/s    w/s   kr/s   kw/s  wait  actv   svc_t  %w  %b
…                                          0.1    13.8   0   1
sd3       0.2    4.0   10.7  196.4   0.0   0.1    15.9   0   1
sd4       0.3    5.5   17.1  195.6   0.0   0.2    31.3   0   3
sd5       0.3   22.5   17.1  127.4   0.0  31.4  1378.4   0  99
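As a back-of-the-envelope cross-check, the reported service time for sd5 is consistent with Little's law applied to these columns (response time = outstanding I/Os divided by throughput):

  svc_t ≈ (wait + actv) / (r/s + w/s)
        = (0.0 + 31.4) / (0.3 + 22.5)
        ≈ 1.377 s, i.e. the 1378.4 ms shown above

So the disk really is holding ~31 requests in flight while completing only ~23 per second.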
Thanks,
Philippe
Hi,
I'm getting involved in a pre-production test and want to be sure of the
means I'll have to use.
Take 2 SunFire x4150s and 1 Cisco 3750 Gb switch,
with 1 private VLAN on the Gb ports of the switch.
One x4150 is going to be the ESX4 (aka vSphere) server ( 1
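Assuming the second x4150 runs OpenSolaris and exports storage to the ESX host over iSCSI on that private VLAN, a minimal sketch with the legacy shareiscsi property (the pool and volume names are made up; current builds would use COMSTAR instead):

  # Create a 200 GB zvol and export it as an iSCSI target
  zfs create -V 200g tank/esx-lun0
  zfs set shareiscsi=on tank/esx-lun0
  # Confirm the target exists
  iscsitadm list target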
milosz wrote:
Within the thread there are instructions for using Iometer to load-test your
storage. You should test your solution before going live, and compare
what you get with what you need. Just because striping 3 mirrors *will* give
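For reference, the striped-3-mirrors layout under discussion is a single command (device names are placeholders):

  # Three 2-way mirrors striped together (RAID 10 style)
  zpool create tank mirror c0t1d0 c0t2d0 \
                    mirror c0t3d0 c0t4d0 \
                    mirror c0t5d0 c0t6d0

Iometer results against this layout can then be compared directly with the workload's required IOPS.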
David Magda wrote:
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but I haven't stressed it
yet ;-)
Therefore, I don't know if 1 Gb full-duplex per port will be enough; I
don't know