Hello All,

I hope you can help me with this one.

We have a production system running eleven zones on Solaris 10 11/07 on a Sun
Fire E2900, with ZFS on an EMC CLARiiON array.
A memory DIMM failed on the E2900, which made the server unusable.
A reboot just hung and no console connection could be established, so a flick
of the power switch was necessary.

All the zones came up except one.

On issuing zoneadm -z zonename boot, it complained about file systems not
being available and the mounts failing (these are legacy mounts).
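
To be clear on the legacy mounting: the zone's filesystems are ZFS datasets
set to mountpoint=legacy and mounted via vfstab, roughly like this (the
pool/dataset names here are made up, not our real ones):

   # zfs get mountpoint pool/zonefs
   NAME         PROPERTY    VALUE   SOURCE
   pool/zonefs  mountpoint  legacy  local

with vfstab entries of the form:

   pool/zonefs  -  /some/mount/point  zfs  -  yes  -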

The strange thing is that /etc/mnttab shows these filesystems, but df -hZ
gives:

df: cannot statvfs  /some/mount/point: No such file or directory

zpool status and zpool list show the pool as ONLINE, so that side looks fine.
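
(The checks I ran were just along these lines:

   # zpool status -x
   all pools are healthy
   # zpool list

with every pool reporting ONLINE.)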

On starting the zone I see the following in /var/adm/messages
Jan 17 19:23:52 upedb01 genunix: [ID 408114 kern.info] /pseudo/zconsnex@1/zcons@1 (zcons1) online

As I understand it, the above message relates to the zone console, which is
built from an AF_UNIX socket, an ldterm STREAMS module, and the zcons driver,
but I could be clutching at straws here.
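
For reference, by the zone console I mean what you reach with zlogin
(zonename here is a placeholder):

   # zlogin -C zonename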

zoneadm list -cv shows the zone as installed, not ready.
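
Roughly like this (the zone name and path below are made up):

   # zoneadm list -cv
     ID NAME    STATUS     PATH
      - myzone  installed  /zones/myzone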

I tried to detach the zone but could not, as /etc/mnttab still shows mount
points that this zone uses as mounted.
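
For completeness, what I tried was along these lines (zone name and path made
up), plus a check on whether anything is still holding the mount points:

   # zoneadm -z myzone detach
   # fuser -c /some/mount/point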

How can I fix this without a reboot?
How can I force the system to clear these mounts from the mnttab?
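
Is a forced unmount even the right tool here, i.e. something like:

   # umount -f /some/mount/point

or does that risk making things worse given the state these mounts are in?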

Additional info:

I created the missing mount-point directories and was then able to get df to
show the filesystems as mounted, but they contain no data.
On unmounting any one of them, df goes back to reporting the statvfs error
for the rest.

On mounting these LUNs at a different mount point, say under /mnt, they mount
cleanly and show all the expected data.
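
By that I mean something like the following works fine (the dataset name is a
placeholder):

   # mkdir -p /mnt/test
   # mount -F zfs pool/zonefs /mnt/test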

Any help on this one would be much appreciated; I may even be able to keep my
job ;-)

Cheers
SRG
 
 