I can confirm that on *at least* 4 different cards - from different
board OEMs - I have seen single-bit ZFS checksum errors that went away
immediately after removing the 3114-based card.
I stepped up to the 3124 (PCI-X up to 133 MHz) and 3132 (PCIe) and have
never looked back.
I now throw
I'd pick Samsung and use the savings for additional redundancy. YMMV.
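For anyone wanting to verify that a controller swap really cleared these errors, the usual check is a clear-and-scrub cycle. A sketch, assuming a pool named `tank` (placeholder; substitute your own):

```shell
# Show per-device checksum error counters (CKSUM column):
zpool status -v tank

# Reset the counters, then re-read every block so fresh errors
# (if the controller is still corrupting data) show up again:
zpool clear tank
zpool scrub tank

# After the scrub completes, a non-zero CKSUM count means the
# errors did not go away with the hardware change:
zpool status -v tank
```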
On Feb 25, 2011 8:46 AM, Markus Kovero markus.kov...@nebula.fi wrote:
So, does anyone know which drives to choose for the next setup? Hitachis
look good so far, perhaps also seagates, but right now, I'm dubious about
the
On Fri, Feb 25 at 22:29, Nathan Kroenert wrote:
I don't recall if Solaris 10 (Sparc or X86) actually has the si3124
driver, but if it does, for a cheap thrill, they are worth a bash. I
have no problems pushing 4 disks pretty much flat out on a PCI-X 133
3124-based card. (note that there was a
Hi,
I'm investigating a hung system. The machine is running snv_159 and was
running a full build of Solaris 11. You cannot get any response from the
console and you cannot ssh in, but it responds to ping.
The output from ::arc shows:
arc_meta_used = 3836 MB
arc_meta_limit
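For anyone else chasing this: those numbers come from mdb's `::arc` dcmd (`echo ::arc | mdb -k`). A quick sketch of pulling the two metadata fields out and comparing them - note the `arc_meta_limit` value below is made up, since it was cut off above:

```shell
# On the live system the input would come from:  echo ::arc | mdb -k
# Here we use a captured sample; the limit value is hypothetical.
cat > arc.out <<'EOF'
arc_meta_used             =      3836 MB
arc_meta_limit            =      3836 MB
EOF

# If used has climbed all the way to the limit, the ARC is starved
# for metadata space, which would fit a hang under heavy build load.
awk '/arc_meta_used/  {u = $3}
     /arc_meta_limit/ {l = $3}
     END {if (u >= l) print "arc_meta_used at limit"}' arc.out
```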
nat...@tuneunix.com said:
I can confirm that on *at least* 4 different cards - from different board
OEMs - I have seen single-bit ZFS checksum errors that went away immediately
after removing the 3114-based card.
I stepped up to the 3124 (PCI-X up to 133 MHz) and 3132 (PCIe) and have
-----Original Message-----
From: Thierry Delaitre
Sent: Wednesday, February 23, 2011 4:42 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Using Solaris iSCSI target in VirtualBox iSCSI
Initiator
Hello,
I'm using ZFS to export some iSCSI targets for the VirtualBox iSCSI
initiator.
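For the archives, the usual way to publish a zvol as an iSCSI target on that vintage of OpenSolaris was the legacy `shareiscsi` path (COMSTAR later replaced it). A sketch, with the pool and volume names made up:

```shell
# Hypothetical pool/volume names; adjust size to taste.
# Create a 10 GB zvol to back the target:
zfs create -V 10g tank/vbox-disk0

# Legacy iSCSI target daemon: share the zvol as a target:
zfs set shareiscsi=on tank/vbox-disk0

# List the target IQN to paste into the VirtualBox initiator:
iscsitadm list target
```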
On 25 February, 2011 - Mark Logan sent me these 0,6K bytes:
Hi,
I'm investigating a hung system. The machine is running snv_159 and was
running a full build of Solaris 11. You cannot get any response from the
console and you cannot ssh in, but it responds to ping.
The output from ::arc
Samsung Spinpoints or the Hitachis are doing great @ 2 TB.
-----Original Message-----
From: zfs-discuss-requ...@opensolaris.org
Sender: zfs-discuss-boun...@opensolaris.org
Date: Fri, 25 Feb 2011 12:00:02
To: zfs-discuss@opensolaris.org
Reply-To: zfs-discuss@opensolaris.org
Subject: zfs-discuss
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
from :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
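A quick way to see where a pool sits relative to that threshold - on a live system the two numbers would come from `zpool list -Hp -o size,alloc <pool>`; the figures below are made up for illustration:

```shell
# Hypothetical figures; on a real system they would come from e.g.
#   zpool list -Hp -o size,alloc tank
size=1000000000000     # 1 TB pool
alloc=850000000000     # 850 GB allocated

# Integer percentage of the pool that is in use:
pct=$(( alloc * 100 / size ))
echo "pool is ${pct}% full"
if [ "$pct" -ge 80 ]; then
  echo "above 80% - allocation slows as free space fragments"
fi
```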
Hi Dave,
Still true.
Thanks,
Cindy
On 02/25/11 13:34, David Blasingame Oracle wrote:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
from :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It happens at about 90% for me.. all of a sudden, the mail server got
butt slow.. killed an old snapshot to
On 2/25/2011 3:49 PM, Tomas Ögren wrote:
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It happens at about 90% for me.. all of a sudden, the mail
Hi all,
Space is starting to get a bit tight here, so I'm looking at adding
a couple of TB to my home server. I'm considering external USB or
FireWire attached drive enclosures. Cost is a real issue, but I also
want the data to be managed by ZFS--so enclosures without a JBOD option
have been
On 2/25/2011 7:34 PM, Rich Teer wrote:
One product that seems to fit the bill is the StarTech.com S352U2RER,
an external dual SATA disk enclosure with USB and eSATA connectivity
(I'd be using the USB port). Here's a link to the specific product
I'm considering:
On Fri, Feb 25, 2011 at 4:34 PM, Rich Teer rich.t...@rite-group.com wrote:
Space is starting to get a bit tight here, so I'm looking at adding
a couple of TB to my home server. I'm considering external USB or
FireWire attached drive enclosures. Cost is a real issue, but I also
I would avoid
I'm with the gang on this one as far as USB being the spawn of the
devil for mass storage you want to depend on. I'd rather scoop my eyes
out with a red hot spoon than depend on permanently attached USB
storage... And - don't even start me on SPARC and USB storage... It's
like watching pitch
--- rich.t...@rite-group.com wrote:
Space is starting to get a bit tight here, so I'm looking at adding
a couple of TB to my home server. I'm considering external USB or
FireWire attached drive enclosures. Cost is a real issue, but I also
want the data to be managed by ZFS--so enclosures
Sorry all, didn't realize that half of Oracle would auto-reply to a public
mailing list since they're out of the office 9:30 Friday nights. I'll try to
make my initial post each month during daylight hours in the future.
On Fri, 2011-02-25 at 20:29 -0800, Yaverot wrote:
Sorry all, didn't realize that half of Oracle would auto-reply to a public
mailing list since they're out of the office 9:30 Friday nights. I'll try to
make my initial post each month during daylight hours in the future.