to Canada as well without issue)
Why use USB? You will get much better performance/throughput with eSATA
(if you have good drivers, of course). I use their SiI3124 eSATA
controller on FreeBSD, along with a number of port multiplier (PM) units, and they work great.
---Mike
On 1/31/2011 4:19 PM, Mike Tancsa wrote:
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing the corrupted files indicated in the zpool
status -v output, running zpool scrub, and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.
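The recovery sequence Cindy describes, sketched as shell commands (the pool name tank1 comes from the thread; the affected file paths come from the status output itself):

```shell
# 1. See which files have unrecoverable errors
zpool status -v tank1
# 2. Remove (or restore from backup) each file listed above, then
#    scrub to re-verify all data on the pool
zpool scrub tank1
# 3. Once the scrub completes cleanly, clear the error counters
zpool clear tank1
zpool status tank1
```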
On 1/30/2011 12:39 AM, Richard Elling wrote:
Hmmm, doesn't look good on any of the drives.
I'm not sure of the way BSD enumerates devices. Some clever person thought
that hiding the partition or slice would be useful. I don't find it useful.
On a Solaris system, ZFS can show a disk
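To cross-check how FreeBSD itself enumerates the drives against what ZFS reports, something like the following could help (a sketch; ada2 is just an example device name):

```shell
# List the disks the FreeBSD kernel sees on CAM-attached controllers
camcontrol devlist
# Show any partition/slice table on a suspect drive
gpart show ada2
```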
On 1/29/2011 12:57 PM, Richard Elling wrote:
0(offsite)# zpool status
pool: tank1
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
On 1/29/2011 11:38 AM, Edward Ned Harvey wrote:
That is precisely the reason why you always want to spread your mirror/raidz
devices across multiple controllers or chassis. If you lose a controller or
a whole chassis, you lose one device from each vdev, and you're able to
continue
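Edward's point about spreading vdevs, sketched as a pool layout (device names here are hypothetical; assume da0-da3 sit on one controller and da4-da7 on another):

```shell
# Each mirror pairs a disk from controller A with one from controller B,
# so losing a whole controller costs only one side of each mirror.
zpool create tank \
    mirror da0 da4 \
    mirror da1 da5 \
    mirror da2 da6 \
    mirror da3 da7
```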
On 1/29/2011 6:18 PM, Richard Elling wrote:
0(offsite)#
The next step is to run zdb -l and look for all 4 labels. Something like:
zdb -l /dev/ada2
If all 4 labels exist for each drive and appear intact, then look more closely
at how the OS locates the vdevs. If you can't solve the
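Richard's label check, sketched as a loop (the ada2-ada5 names are examples; zdb -l prints the four vdev labels as LABEL 0 through LABEL 3):

```shell
for d in /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5; do
    # Count how many of the 4 vdev labels zdb can read from this device
    n=$(zdb -l "$d" | grep -c '^LABEL')
    echo "$d: $n of 4 labels readable"
done
```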
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20 minutes, and then the
new drive cage started to fail. Silly me for assuming new hardware would
be fine :(
When it failed, it hung the server and the