...
I have identified the culprit is the Western Digital drive WD2002FYPS-01U1B0.
It's not clear if they can fix it in firmware, but Western Digital is
replacing my drives.
Feb 17 04:45:10 thecratewall scsi_status=0x0, ioc_status=0x804b,
scsi_state=0xc
Feb 17 04:45:10 thecratewall scsi:
Tonmaus wrote:
Hi David,
why not just use a couple of SAS expanders?
Regards,
Tonmaus
I would go this route (SAS expanders for an 8-port HBA).
E.g.:
http://www.supermicro.com/products/accessories/mobilerack/CSE-M28E2.cfm
is a great internal bay setup if you want a non-Supermicro
Hi, do you have disks connected in sata1/2? With
WD2003FYYS-01T8B0/WD20EADS-00S2B0/WD1001FALS-00J7B1/WD1002FBYS-01A6B0
these timeouts are to be expected if the disk is in SATA2 mode,
No, why are they to be expected with SATA2 mode? Is the defect
specific to the SATA2 circuitry? I guess it could be a temporary
workaround provided they would eventually fix the problem in
firmware, but I'm getting new drives, so I guess I can't complain :-)
Probably your new disks do
Just as an FYI, not all drives like sas expanders.
As an example, we had a lot of trouble with Indilinx MLC based SSDs. The
systems had Adaptec 52445 controllers and Chenbro SAS expanders. In the end we
had to remove the SAS expanders and put a 2nd 52445 in each system to get them
to work
David Dyer-Bennet d...@dd-b.net writes:
[...]
Am I way wrong on this, and further I'm curious if it would make more
versatile use of the space if I were to put the mirrored pairs into
one big pool containing 3 mirrored pairs (6 discs)
Well, my own thinking doesn't consider that adequate for
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Fri, 9 Apr 2010, Harry Putnam wrote:
Am I way wrong on this, and further I'm curious if it would make more
versatile use of the space if I were to put the mirrored pairs into
one big pool containing 3 mirrored pairs (6 discs)
Besides
Richard Jahnel rich...@ellipseinc.com writes:
[...]
Perhaps mirrored sets with daily snapshots and a knowledge of how to
mount snapshots as clones so that you can pull a copy of that file
you deleted 3 days ago. :)
I've been doing that with the default auto snapshot setup, but hadn't
noticed
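For anyone following along, the recovery Richard describes can be sketched like this (pool, filesystem, and snapshot names are made up):

```shell
# Snapshots are already visible read-only under the hidden .zfs directory,
# so a simple copy is often enough:
cp /tank/home/.zfs/snapshot/daily-2010-04-07/deleted-file /tank/home/

# If you want a writable view of the whole snapshot, clone it instead:
zfs clone tank/home@daily-2010-04-07 tank/recovered
cp /tank/recovered/deleted-file /tank/home/
zfs destroy tank/recovered    # clean up the clone when done
```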
As far as I have read, that problem has been reported to be a compatibility
problem of the Adaptec controller and the expander chipset, e.g. LSI SASx which
is also on the mentioned Chenbro expander. There is no problem with the 106x
chipset and SAS expanders that I know of.
People sceptical about
On Sat, 10 Apr 2010, Harry Putnam wrote:
Am I way wrong on this, and further I'm curious if it would make more
versatile use of the space if I were to put the mirrored pairs into
one big pool containing 3 mirrored pairs (6 discs)
Besides more versatile use of the space, you would get 3X the
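For reference, the single-pool layout being discussed would be created roughly like this (device names are placeholders):

```shell
# One pool, three top-level mirror vdevs; ZFS stripes across all three,
# so you get the combined capacity of the three pairs in one namespace
zpool create tank \
  mirror c0t0d0 c0t1d0 \
  mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0
zpool status tank
```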
Due to recent experiences, and discussion on this list, my colleague and I
performed some tests:
Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have
log device removal that was introduced in zpool 19) In any way possible,
you lose an unmirrored log device, and the OS
On Sat, 10 Apr 2010, Edward Ned Harvey wrote:
Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log
device removal that was
introduced in zpool 19) In any way possible, you lose an unmirrored log
device, and the OS will crash, and
the whole zpool is permanently
Neil or somebody? Actual ZFS developers? Taking feedback here? ;-)
While I was putting my poor little server through cruel and unusual
punishment as described in my post a moment ago, I noticed something
unexpected:
I expected that while I'm stressing my log device by infinite sync
On Sat, Apr 10, 2010 at 10:08 AM, Edward Ned Harvey
solar...@nedharvey.comwrote:
Due to recent experiences, and discussion on this list, my colleague and
I performed some tests:
Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have
log device removal that was
On Sat, 10 Apr 2010, Edward Ned Harvey wrote:
For several seconds, *only* the log device is busy. Then it stops,
and for maybe 0.5 secs *only* the primary storage disks are busy.
Repeat, recycle.
I expected to see the log device busy nonstop. And the spindle
disks blinking lightly. As
Thanks for the testing. So FINALLY, with version 19, ZFS demonstrates
production-ready status in my book. How long is it going to take Solaris to
catch up?
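The alternation Edward describes is easy to watch for yourself; a sketch, assuming Solaris iostat:

```shell
# Per-device statistics once a second; during a sustained synchronous
# write load, watch the %b (busy) column flip between the slog device
# and the pool disks as each transaction group commits
iostat -xn 1
```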
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Fri, Apr 9, 2010 at 9:31 PM, Eric D. Mudama edmud...@bounceswoosh.orgwrote:
On Sat, Apr 10 at 7:22, Daniel Carosone wrote:
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote:
If I could find a reasonable backup method that avoided external
enclosures altogether, I would
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's
been snapshotted, meaning delete (and perhaps overwrite) all copies of this
file?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Sat, 10 Apr 2010, Harry Putnam wrote:
Am I way wrong on this, and further I'm curious if it would make more
versatile use of the space if I were to put the mirrored pairs into
one big pool containing 3 mirrored pairs (6 discs)
Besides
No, until all snapshots referencing the file in question are removed.
Simplest way to understand snapshots is to consider them as
references. Any file-system object (say, file or block) is only
removed when its reference count drops to zero.
Regards,
Andrey
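Concretely, that means the space only comes back once every snapshot referencing the file has been destroyed; a sketch with invented names:

```shell
rm /tank/data/largefile            # gone from the live filesystem...
zfs list -t snapshot -r tank/data  # ...but still held by these snapshots

# The blocks are freed only when the last referencing snapshot goes away:
zfs destroy tank/data@snap1
zfs destroy tank/data@snap2
```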
On Sat, Apr 10, 2010 at 10:20 PM,
Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
I guess that's the way I thought it was. Perhaps it would be nice to add such
a feature? If something gets stuck in a truckload of snapshots, say a 40GB
file in the root fs, it'd be nice to just rm --killemall largefile
Let us first asume the
On Sat, Apr 10, 2010 at 11:50:05AM -0500, Bob Friesenhahn wrote:
Huge synchronous bulk writes are pretty rare since usually the
bottleneck is elsewhere, such as the ethernet.
Also, large writes can go straight to the pool, and the zil only logs
the intent to commit those blocks (ie, link them
Any hints as to where you read that? I'm working on another system design with
LSI controllers and being able to use SAS expanders would be a big help.
--
This message posted from opensolaris.org
On Sat, Apr 10, 2010 at 12:56:04PM -0500, Tim Cook wrote:
At that price, for the 5-in-3 at least, I'd go with supermicro. For $20
more, you get what appears to be a far more solid enclosure.
My intent with that link was only to show an example, not make a
recommendation. I'm glad others have
On Sat, Apr 10, 2010 at 02:51:45PM -0500, Harry Putnam wrote:
[Note: This discussion started in another thread
Subject: about backup and mirrored pools
but the subject has been significantly changed so started a new
thread]
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On 04/10/10 09:28, Edward Ned Harvey wrote:
Neil or somebody? Actual ZFS developers? Taking feedback here? ;-)
While I was putting my poor little server through cruel and unusual
punishment as described in my post a moment ago, I noticed something
unexpected:
I expected that
On 04/10/10 14:55, Daniel Carosone wrote:
On Sat, Apr 10, 2010 at 11:50:05AM -0500, Bob Friesenhahn wrote:
Huge synchronous bulk writes are pretty rare since usually the
bottleneck is elsewhere, such as the ethernet.
Also, large writes can go straight to the pool, and the zil only
On Sun, 11 Apr 2010, Daniel Carosone wrote:
The migration he's referring to is of disks, not of contents. The
contents you'd have to migrate first (say with send|recv), before
destroying the emptied pool and adding the disks to the pool you want
to expand, as a new vdev. There's an implicit
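A rough sketch of that sequence, with invented pool and device names:

```shell
# 1. Migrate the contents first:
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -d tank

# 2. Then destroy the emptied pool and hand its disks to the target
#    pool as a new mirror vdev:
zpool destroy oldpool
zpool add tank mirror c1t0d0 c1t1d0
```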
On Sat, 10 Apr 2010, Bob Friesenhahn wrote:
Since he is already using mirrors, he already has enough free space since he
can move one disk from each mirror to the main pool (which unfortunately,
can't be the boot 'rpool' pool), send the data, and then move the second
disks from the pools
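Bob's detach trick, sketched with made-up device names:

```shell
# Free one disk from the source mirror; the pool keeps running unmirrored:
zpool detach oldpool c1t1d0
# Grow the destination pool with the freed disk (unmirrored for now):
zpool add tank c1t1d0
# Copy the data, retire the old pool, then restore redundancy by
# attaching the second freed disk as a mirror of the first:
zfs snapshot -r oldpool@move
zfs send -R oldpool@move | zfs recv -d tank
zpool destroy oldpool
zpool attach tank c1t1d0 c1t0d0
```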
On Sat, Apr 10, 2010 at 06:20:54PM -0500, Bob Friesenhahn wrote:
Since he is already using mirrors, he already has enough free space
since he can move one disk from each mirror to the main pool (which
unfortunately, can't be the boot 'rpool' pool), send the data, and then
move the second
Daniel Carosone d...@geek.com.au writes:
Thanks for the input.. very helpful.
[...]
No, as above. You might consider new disks for a new rpool (say, ssd
with some zil or l2arc) and reusing the current disks for data if
they're the same as the other data disks.
Would you mind expanding the
On Sat, 10 Apr 2010, Harry Putnam wrote:
Would you mind expanding the abbrevs: ssd, zil, l2arc?
SSD = Solid State Device
ZIL = ZFS Intent Log (log of pending synchronous writes)
L2ARC = Level 2 Adaptive Replacement Cache
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us,
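For illustration, dedicated devices of both kinds are attached with zpool add (device names are placeholders):

```shell
zpool add tank log c3t0d0     # SSD as a separate ZFS intent log (slog)
zpool add tank cache c3t1d0   # SSD as L2ARC read cache
zpool status tank             # they appear under 'logs' and 'cache'
```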
On 04/11/10 11:55 AM, Harry Putnam wrote:
Would you mind expanding the abbrevs: ssd, zil, l2arc?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
--
Ian.