Dave

Which BIOS manufacturers and revisions? That seems to be more of the problem, as choices are typically limited across vendors... and I take it you're running 6/06 U2?

Jonathan

On Nov 30, 2006, at 12:46, David Elefante wrote:

Just as background:

I attempted this process on the following:

1. Jetway AMD Socket 734 (vintage 2005)
2. Asus AMD Socket 939 (vintage 2005)
3. Gigabyte AMD Socket AM2 (vintage 2006)

All with the same problem. I disabled the onboard NVIDIA nForce 410/430 RAID
BIOS in the system BIOS in all cases; whether it actually stops looking for a
signature, I do not know. I'm attempting to make this box into an iSCSI target
for my ESX environments. I could put W3K and SanMelody on there, but that is
not as interesting, and I am trying to help the Solaris community.

I am simply making the business case that across boards from three major
vendors, including the absolute latest (Gigabyte), the effect was the same.

As a workaround I can make slice 0 one cylinder, make slice 1 the rest of the
disk, put the zpool on slice 1, and be fine with that. So zpool create should
carry a warning for PC users: if they use the entire disk, the resulting EFI
label is likely to leave the machine unbootable.
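
For what it's worth, a rough sketch of that workaround (the device name
c2t0d0 and the pool name "tank" are only placeholders, adjust for your own
box):

        # 1. Put a standard Solaris SMI/VTOC label on the disk and carve the
        #    slices interactively with format(1M): partition -> resize -> label.
        #    Leave slice 0 at one cylinder and give slice 1 the remainder.
        format

        # 2. Create the pool on the slice rather than the whole disk, so ZFS
        #    keeps the VTOC label instead of writing an EFI label.
        zpool create tank c2t0d0s1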

I attempted to hot-plug the SATA drives after booting, and Nevada 51 came up
with scratch space errors and did not recognize the drive. In any case, I'm
not hot-plugging my drives every time.

Given that PC vendors are not readily adopting EFI BIOS at this time, the
millions of PCs out there are vulnerable to this, and if x86 Solaris is to be
really viable, this community needs to be addressed. I was at Sun for a
quarter of my life and I know the politics, but the PC space is different. If
you tell customers to go to the motherboard vendor to fix the BIOS, they will
have to find some guy in a bunker in Taiwan. Not likely.
Now I'm at VMware, actively working on consolidating companies onto x86
platforms. The simple fact is that the holy war between AMD and Intel has
created processors that are cheap enough and fast enough to cause disruption
in the enterprise space. My new dual-core AMD processor is incredibly fast,
and the entire box cost me $500 to assemble.

The latest Solaris 10 documentation (thanks, Richard) has "use the entire
disk" all over it. I don't see any warning in it about EFI labels; in fact,
these statements discourage putting ZFS in a slice:


ZFS applies an EFI label when you create a storage pool with whole disks. Disks can be labeled with a traditional Solaris VTOC label when you create a
storage pool with a disk slice.

Slices should only be used under the following conditions:

    * The device name is nonstandard.
    * A single disk is shared between ZFS and another file system, such as UFS.
    * A disk is used as a swap or a dump device.

Disks can be specified by using either the full path, such as
/dev/dsk/c1t0d0, or a shorthand name that consists of the device name within the /dev/dsk directory, such as c1t0d0. For example, the following are valid
disk names:

    * c1t0d0
    * /dev/dsk/c1t0d0
    * c0t0d6s2
    * /dev/foo/disk

ZFS works best when given whole physical disks. Although constructing
logical devices using a volume manager, such as Solaris Volume Manager
(SVM), Veritas Volume Manager (VxVM), or a hardware volume manager (LUNs or hardware RAID) is possible, these configurations are not recommended. While ZFS functions properly on such devices, less-than-optimal performance might
be the result.
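
To make the label behavior above concrete, a quick illustration (the pool
and device names are just examples):

        # Whole disk: ZFS writes an EFI label to the disk.
        zpool create tank c1t0d0

        # Disk slice: the existing Solaris VTOC label is left in place.
        zpool create tank c1t0d0s0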

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Wednesday, November 29, 2006 1:24 PM
To: Jonathan Edwards
Cc: David Elefante; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: system wont boot after zfs


> I suspect a lack of an MBR could cause some BIOS implementations to
> barf ..

Why?

Zeroed disks don't have that issue either.

What appears to be happening is more that RAID controllers attempt
to interpret the data in the EFI label as proprietary
"hardware RAID" labels.  At least, it seems to be a problem
with internal RAID controllers only.

In my experience, removing the disks from the boot sequence was
not enough; you need to disable the disks in the BIOS.

The SCSI disks with EFI labels in the same system caused no
issues at all; but the disks connected to the on-board RAID
did have issues.

So what you need to do is:

        - remove the controllers from the probe sequence
        - disable the disks

Casper


