Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-09 Thread James McPherson
On Thu, Jul 10, 2008 at 10:34 AM, Brandon High [EMAIL PROTECTED] wrote:
 On Wed, Jul 9, 2008 at 1:12 PM, Tim [EMAIL PROTECTED] wrote:
 Perfect.  Which means good ol' supermicro would come through :)  WOHOO!

 AOC-USAS-L8i

 http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

 Is this card new? I'm not finding it at the usual places like Newegg, etc.

 It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.


It appears to be the non-RAID version of the card (that's what the R
suffix indicates). If that is the case, then I've got one running quite
happily in my workstation already, using the mpt driver.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
 http://blogs.sun.com/jmcp
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread James McPherson

On 11/30/06, David Elefante [EMAIL PROTECTED] wrote:

I had the same thing happen to me twice on my x86 box.  I
installed ZFS (RAID-Z) on my enclosure with four drives, and
upon reboot the BIOS hangs on detection of the newly EFI'd
drives.  I've already RMA'd 4 drives to Seagate and the new
batch was frozen as well.  At first I suspected my enclosure,
but it only went bye-bye after installing ZFS, which made me suspicious.

This is a real problem: how can anyone use ZFS on a PC?
My motherboard is a newly minted AM2 board with all the latest
firmware.  I disabled boot detection on the SATA channels and
it still refuses to boot.  I had to purchase an external SATA
enclosure to fix the drives.  This seems to me to be a serious
problem.  I put builds 47 and 50 on there with the same issue.




Yes, this is a serious problem, but it's a problem with your motherboard
BIOS, which is clearly not up to date. The Sun Ultra-20 BIOS was
updated with a fix for this issue back in May.

Until you have updated your BIOS, you will need to destroy the
EFI labels, write SMI labels to the disks, and create slices on those
disks sized to the space you want to devote to ZFS. You can then
specify the slice names when you run your zpool create operation.
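
For example, something along these lines should do it (the device names
below are just placeholders for your four drives, so substitute your own,
and double-check the partition table before writing anything):

   # format -e c2t0d0      (choose "label", then select the SMI label)
   # format c2t0d0         (use the "partition" menu to create a slice,
                            e.g. slice 0, sized for ZFS)
   # zpool create tank raidz c2t0d0s0 c3t0d0s0 c4t0d0s0 c5t0d0s0

Relabel and partition each disk the same way before creating the pool.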

This has been covered on the ZFS discussion lists several times, and
a quick Google search should have found the answer for you.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] ZFS problems

2006-11-18 Thread James McPherson

On 11/18/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
...

 scrub: scrub completed with 0 errors on Mon Nov 13 04:49:35 2006
config:

        NAME        STATE     READ WRITE CKSUM
        amber       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c4d0    ONLINE       0     0    51
            c5d0    ONLINE       0     0    41

errors: No known data errors


I have md5sums on a lot of the files and it looks like maybe 5% of
my files are corrupted. Does anyone have any ideas?


Michael,
as far as I can see, your setup does not meet the minimum
redundancy requirements for a raidz pool, which is 3 devices.
Since you only have 2 devices, you are out on a limb.

Please read the manpage for the zpool command and pay
close attention to the restrictions in the section on raidz.
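
To be concrete, a raidz pool of the recommended width looks something
like this (the device names are placeholders, not a suggestion for your box):

   # zpool create example raidz c4d0 c5d0 c6d0

that is, at least three devices behind the raidz keyword.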



I was under the impression that ZFS was pretty reliable, but I guess,
as with any software, it needs time to get the bugs ironed out.


ZFS is reliable. I use it - mirrored - at home. If I were going to
use raidz or raidz2, I would make sure that I followed the
instructions in the manpage about the number of devices
needed to guarantee redundancy, and thus reliability,
rather than making an assumption.

You should also check the output of iostat -En and see
whether your devices are listed there with any error counts.
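
For instance:

   # iostat -En c4d0 c5d0

and look at the Soft Errors, Hard Errors and Transport Errors counts
reported for each device; anything above zero there is worth chasing up.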


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] bogus zfs error message on boot

2006-11-15 Thread James McPherson

On 11/15/06, Frank Cusack [EMAIL PROTECTED] wrote:

After swapping some hardware and rebooting:

SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b31acc-0de8-c1f3-84ec-935574615804
DESC: A ZFS pool failed to open.  Refer to http://sun.com/msg/ZFS-8000-CS
for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
restore from backup.

# zpool status -x
all pools are healthy

And in fact they are.  What gives?  This message occurs on every boot now.
It didn't occur before I changed the hardware.


Sounds like an opportunity for enhancement. At the
very least, the ZFS :: FMA interaction should include the
component (the pool, in this case) which was noted to be
marginal/faulty/dead.


Does zpool status -xv show anything that zpool status -x
doesn't?
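
It might also be worth asking FMA what it actually recorded, for example
(the UUID here is just the EVENT-ID from your console message):

   # fmadm faulty
   # fmdump -v -u 60b31acc-0de8-c1f3-84ec-935574615804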

James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] Panic while scrubbing

2006-10-24 Thread James McPherson

On 10/25/06, Siegfried Nikolaivich [EMAIL PROTECTED] wrote:
...

While the machine was idle, I started a scrub.  Around the time the scrubbing
was supposed to be finished, the machine panicked.
This might be related to the 'metadata corruption' that happened to me earlier.
Here is the log; any ideas?

...

Oct 24 20:13:52 FServe marvell88sx: [ID 812950 kern.warning] WARNING: marvell88sx0: error on port 3:
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]   device disconnected
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]   device connected
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]   SError interrupt
Oct 24 20:13:52 FServe marvell88sx: [ID 131198 kern.info]   SErrors:
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       Recovered communication error
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       PHY ready change
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       10-bit to 8-bit decode error
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       Disparity error



Hi Siegfried,
this error from the marvell88sx driver is of concern. The 10b8b decode
and disparity error messages make me think that you have a bad piece
of hardware. I hope it's not your controller, but I can't tell without more
data. You should have a look at the iostat -En output for the device
on marvell88sx instance #0, attached as port 3. If any of its error
counts are above 0 then, after checking /var/adm/messages for medium
errors, you should probably replace the disk.

However, don't discount the possibility that the controller and/or the
cable is at fault.
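
As a rough sketch (c1t3d0 is only a guess at how the disk on port 3 shows
up on your system, so substitute the right device name):

   # iostat -En c1t3d0
   # egrep -i 'medium error|media error' /var/adm/messages

The Soft Errors, Hard Errors and Transport Errors counts in the iostat
output are the ones to watch.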

cheers,
James
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-11 Thread James McPherson

On 10/12/06, Steve Goldberg [EMAIL PROTECTED] wrote:

Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris?  Is there something akin to vfstab
or perhaps a database?



Have a look at the contents of /etc/zfs for an in-filesystem artefact
of ZFS. Apart from that, the information required is stored on the
disks themselves.
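
For example, on a system with at least one pool you should see something
like this:

   # ls /etc/zfs
   zpool.cache

That cache file mainly tells the system which pools to open at boot; the
authoritative configuration lives in the labels and metadata on the pool's
own disks, which is why you can move the disks to another machine and
run zpool import there.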

There is really good documentation on ZFS at the ZFS community
pages found via http://www.opensolaris.org/os/community/zfs.


cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] How to make an extended LUN size known to ZFS and Solaris

2006-09-28 Thread James McPherson

On 9/29/06, Michael Phua - PTS [EMAIL PROTECTED] wrote:

Our customer has a Sun Fire X4100 with Solaris 10 using ZFS and a HW RAID
array (STK D280).
He has extended a LUN on the storage array and wants to make this new size
known to ZFS and Solaris.
Does anyone know if this can be done, and how?



Hi Michael,
the customer needs to export the pool which contains the LUN, run
devfsadm, then re-import the pool. That's the high-level summary;
there's probably more in the archives on the really detailed specifics,
but I can't recall them at the moment.
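
In outline it is something like this, with 'tank' standing in for the real
pool name:

   # zpool export tank
   # devfsadm -Cv
   # zpool import tank

Note that the export will unmount the pool's filesystems, so make sure
nothing is using them first.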

The reasoning behind the sequence of operations is that the LUN's inquiry
data is read on pool import or creation, and not at any other time. At least,
that's how it is at the moment; perhaps Eric, Matt, Mark or Bill might have
some project underway to make this a bit easier.


cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson