> jabrewer,
> 
> The problem is that I can't see how that would solve
> my problem; please elaborate.
> 
> Replace does not work due to the erroneous dual c2d0
> -- it ought to read c2d0 _and_ c3d0, i.e. c3d0 was
> somehow named c2d0 in conflict with the real one :-(
> It seems that a logical name is confused with the
> physical, or something...
> 
> Why does zpool even think that I have two c2d0?
> 
> I booted my old env (b70_x86) and with the disks
If you end up deciding to make any changes with zpool replace, stay with the old 
env (b70_x86) under which the zpools were working! (There's a rough command sketch 
a few lines further down.)
> offline to confirm that the 'zpool status' was giving
> me the correct output. I then rebooted the b70 with
> the disks online and now the 'zpool status' gives me
> the same erroneous list.
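(On the replace I mentioned above: under the old env it would look roughly like 
the following. The pool name "tank" and the device names are only placeholders 
for whatever your 'zpool status' actually shows:

  # zpool status tank              # confirm which device is really faulted
  # zpool replace tank c3d0        # replace the device in the same slot
  # zpool replace tank c3d0 c4d0   # or: old device, then new device

I would not run any of it until the duplicate c2d0 listing is understood.)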
> 
> Scrub doesn't find any errors!?
> 
> Even if all disks are offline, zpool still lists two
> c2d0 instead of c2d0 and c3d0
In my case I had run zpool export first. I have moved zfs-enabled disks between 
systems and positions, and have run zpool scrub after a partition size changed, 
after imports, and when I had a media error, and I have always been able to get 
at my data. The only issue was when I was on the old Solaris 10 3/05 and ran an 
upgrade to a newer zfs version by mistake; that data was not recoverable, in that 
the new version can read the old format going forward but not backwards, as 
stated in the manual.
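For what it's worth, the sequence I use when moving disks looks roughly like this 
(again, "tank" is just a placeholder for your pool name):

  # zpool export tank       # on the old system, before pulling the disks
  # zpool import            # on the new system, lists pools found on attached disks
  # zpool import tank       # import the pool
  # zpool scrub tank        # verify checksums after the move
  # zpool status -v tank    # watch the scrub and check for errors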
Sounds like a bug in how zpool lists the devices; you should run a truss -eaf 
on the zpool status command and submit a bug/RFE at 
http://www.opensolaris.org/bug/report.jspa
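Something along these lines should capture what zpool is doing (the output file 
path is only an example):

  # truss -eaf -o /tmp/zpool-status.truss zpool status

and attach /tmp/zpool-status.truss to the report.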
> 
> I'm afraid that I may screw the disks if I do things
> out in the blue. This is a raidz2 set of ten 500 GB
> SATA disks and I'd be very happy if I'm not losing
> all my data...
I agree! But at least provide as much detail as you can to reproduce the steps 
in your bug report, in your old env (b70_x86) vs. the new env.
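For example (the exact command list is only a suggestion), capture the same 
output under both envs and attach it:

  # zpool status -v
  # zdb -C                  # show the cached pool configuration
  # format </dev/null       # non-interactive list of the disks the OS sees

That should show where the duplicate c2d0 is coming from.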
 
 