Hi. Thanks. I have tried this on Solaris 10 Update 8 and Solaris 11 Express.
The import always results in a kernel panic, as shown in the picture.
I did not try an alternate mountpoint, though. Would it make that much of a difference?
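For reference, here is roughly what I ran, plus the alternate-root variant
I have not tried yet (pool name tank as in the earlier messages; the
readonly property is my assumption and only exists on newer releases like
Solaris 11 Express, not on s10u8):

  # zpool import -f tank
  # mkdir -p /tmp/mnt
  # zpool import -f -R /tmp/mnt tank
  # zpool import -f -R /tmp/mnt -o readonly=on tank

The -R variant keeps the pool's filesystems from mounting over the live
root, and a read-only import avoids replaying the intent log, which is
sometimes enough to get past a panic on import.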
----- Original Message -----
> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."" <laot...@gmail.com>
> To: firstname.lastname@example.org
> Sent: Monday, August 15, 2011 3:06:20 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> maybe try the following:
> 1) boot the s10u8 CD into single user mode (when booting from cdrom,
> choose Solaris, then choose single user mode (6))
> 2) when asked whether to mount rpool, just say no
> 3) mkdir /tmp/mnt1 /tmp/mnt2
> 4) zpool import -f -R /tmp/mnt1 tank
> 5) zpool import -f -R /tmp/mnt2 rpool
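If both imports succeed, something like the following (a sketch; exact
dataset names will vary) should show the pools healthy and mounted under
the alternate roots rather than over the live system:

  # zpool status tank rpool
  # zfs list -r tank
  # df -h /tmp/mnt1 /tmp/mnt2

From there the data can be copied off before trying anything riskier.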
> On 8/15/2011 9:12 AM, Stu Whitefish wrote:
>>> On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
>>> <swhitef...@yahoo.com> wrote:
>>>> # zpool import -f tank
>>> I encourage you to open a support case and ask for an escalation on CR
>>> Mike Gerdts
>> Hi Mike,
>> Unfortunately I don't have a support contract. I've been trying to
>> set up a development system on Solaris and learn it.
>> Until this happened, I was pretty happy with it. Even so, I don't have
>> supported hardware, so I couldn't buy a contract without buying another
>> machine, and I already have enough machines that I can't justify the
>> expense right now. I refuse to believe Oracle would hold people hostage
>> in a situation like this, but I do believe they could generate a lot of
>> goodwill by fixing this for me and whoever else it has happened to, and
>> by telling us which level of Solaris 10 fixes it, so this doesn't keep
>> happening. It's a pretty serious failure, and I'm not the only one it
>> has happened to.
>> It's incredible, but in all the years I have been using computers I
>> don't recall ever losing data to a filesystem or OS issue. That
>> includes DOS, Windows, Linux, etc.
>> I can't believe ZFS on Intel is so fragile that people lose hundreds
>> of gigabytes of data and that's just the way it is. There must be a
>> way to recover this data, and some advice on preventing it from
>> happening again.
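One option not yet mentioned in the thread: releases from Solaris 10 9/10
and Solaris 11 Express onward have a recovery mode for import
(zpool import -F) that can rewind a pool to an earlier transaction group,
discarding the last few seconds of writes. A sketch, assuming the same
tank pool; adding -n first makes it a dry run that only reports whether
recovery would work, without touching the disk:

  # zpool import -f -F -n tank
  # zpool import -f -F -R /tmp/mnt1 tank

If even that panics, a read-only import from a Solaris 11 Express live
environment is probably the safest next step for getting the data off.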