It is now solved! Thanks to Casper and billm.

This is the mail I received from Casper; I don't know why I didn't see it
here in the forum, but...

>>Is zfs causing all this? Does it write something at the beginning of the
>>drive that can cause this behavior?
>
>Well, "cause" is not the correct term here.
>
>We've found that quite a few motherboards have buggy BIOSes; as soon as the
>BIOS sees a drive, it tries to read some data from it and in case of EFI
>labels this causes the BIOS software to crash.
>
>Generally, this can be worked around in the following manner:
>- remove the affected disks.
>- change the BIOS to ignore the selected disks/controllers during
>         boot/test, this could mean any of the steps:
>         - remove device from boot order
>         - prevent device BIOS extensions from executing
>         - etc.
>
>- reinsert the affected disks.
>
>It's, unfortunately, a very common issue.
>Option 0 to try might be: upgrade BIOS.
>
>Casper

So at that point I knew it was something about the EFI label. I tried what
Casper suggested in the BIOS, but I couldn't completely disable a drive from
the boot disk list. So I searched about EFI labels and found this old post
from 2005:
http://www.opensolaris.org/jive/message.jspa?messageID=18116#18116

So I followed billm's advice and did these steps to make everything work
after a reboot (see the command sketch after this list):

- Plugged in the power on the HD, without the SATA cable, and booted directly
into Solaris 10
- Plugged the SATA cable back in and ran format -e
- Selected the first problem disk in the list
- Typed: fdisk
- Deleted the EFI partition and created a standard Solaris partition at 100%
(had to create a Solaris2 partition, then used the menu option to change it
back to Solaris)
- Chose option 5 to exit and save the fdisk changes
- Typed: "label" and selected option #0: SMI
- Exited format and repeated the same steps on the 2 other disks
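
For reference, here is roughly what that format session looks like. This is
just a sketch, assuming the first disk is c1d0 (to match the zpool commands
below); the exact menu text, defaults, and disk names will vary with your
format version and hardware:

# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1d0 <...>
Specify disk (enter its number): 0
format> fdisk
        (delete the EFI partition, create a SOLARIS2 partition using
         100% of the disk, use the menu option to change its type back
         to Solaris, then option 5 to save and exit)
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
format> quit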

Then, I think the EFI label came from creating the pool in raidz using the
whole disk instead of using one partition only, so instead of doing:
# zpool create mypool raidz c1d0 c2d0 c3d0
I did:
# zpool create mypool raidz c1d0p0 c2d0p0 c3d0p0
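
If you want to double-check before going further, zpool status should show
the pool online on the pN devices. A sketch only; the real output has more
detail (errors columns, scrub status, etc.):

# zpool status mypool
  pool: mypool
 state: ONLINE
config:
        mypool      ONLINE
          raidz     ONLINE
            c1d0p0  ONLINE
            c2d0p0  ONLINE
            c3d0p0  ONLINE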

Then a zfs create mypool/test and a reboot.
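
And to confirm the filesystem is really there and mounted after the reboot
(again just a sketch, with the numbers trimmed out):

# zfs create mypool/test
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
mypool         ...    ...    ...  /mypool
mypool/test    ...    ...    ...  /mypool/test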

Everything works fine now!!! Thanks!

Ben.
 
 