Re[2]: [zfs-discuss] zpool import - cannot mount [...] directory is not empty

2006-08-16 Thread Robert Milkowski
Hello Eric,

Wednesday, August 16, 2006, 4:48:46 PM, you wrote:

ES> What does 'zfs list -o name,mountpoint' and 'zfs mount' show after the
ES> import?  My only guess is that you have some explicit mountpoint set
ES> that's confusing the DSL-ordered mounting code.  If this is the case,
ES> this was fixed in build 46 (likely to be in S10u4) to always mount
ES> datasets in mountpoint order, regardless of their hierarchical
ES> relationship.  However, you may be hitting some other bug that we
ES> haven't seen before.

bash-3.00# zfs list -o name,mountpoint
NAME  MOUNTPOINT
nfs-s5-p0 /nfs-s5-p0
nfs-s5-p0/d5110   /nfs-s5-p0/d5110
nfs-s5-p0/d5111   /nfs-s5-p0/d5111
nfs-s5-p0/d5112   /nfs-s5-p0/d5112
nfs-s5-p0/d5113   /nfs-s5-p0/d5113
nfs-s5-p0/d5114   /nfs-s5-p0/d5114
nfs-s5-p0/d5115   /nfs-s5-p0/d5115
nfs-s5-p0/d5116   /nfs-s5-p0/d5116
nfs-s5-p1 /nfs-s5-p1
nfs-s5-p1/d5117   /nfs-s5-p1/d5117
nfs-s5-p1/d5118   /nfs-s5-p1/d5118
nfs-s5-p1/d5119   /nfs-s5-p1/d5119
nfs-s5-p1/d5120   /nfs-s5-p1/d5120
nfs-s5-p1/d5121   /nfs-s5-p1/d5121
nfs-s5-p1/d5122   /nfs-s5-p1/d5122
nfs-s5-p1/d5123   /nfs-s5-p1/d5123
nfs-s5-p2 /nfs-s5-p2
nfs-s5-p2/d5124   /nfs-s5-p2/d5124
nfs-s5-p2/d5125   /nfs-s5-p2/d5125
nfs-s5-p2/d5126   /nfs-s5-p2/d5126
nfs-s5-p2/d5127   /nfs-s5-p2/d5127
nfs-s5-p2/d5128   /nfs-s5-p2/d5128
nfs-s5-p2/d5129   /nfs-s5-p2/d5129
nfs-s5-p2/d5130   /nfs-s5-p2/d5130
nfs-s5-p3 /nfs-s5-p3
nfs-s5-p3/d5131   /nfs-s5-p3/d5131
nfs-s5-p3/d5132   /nfs-s5-p3/d5132
nfs-s5-p3/d5133   /nfs-s5-p3/d5133
nfs-s5-p3/d5134   /nfs-s5-p3/d5134
nfs-s5-p3/d5135   /nfs-s5-p3/d5135
nfs-s5-p3/d5136   /nfs-s5-p3/d5136
nfs-s5-p3/d5137   /nfs-s5-p3/d5137
bash-3.00#
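
The second listing Eric asked for isn't captured above; it would be gathered with the
following (just the command, output omitted here):

  # list the ZFS file systems that are currently mounted
  zfs mount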


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] zpool import - cannot mount [...] directory is not empty

2006-08-16 Thread Eric Schrock
What does 'zfs list -o name,mountpoint' and 'zfs mount' show after the
import?  My only guess is that you have some explicit mountpoint set
that's confusing the DSL-ordered mounting code.  If this is the case,
this was fixed in build 46 (likely to be in S10u4) to always mount
datasets in mountpoint order, regardless of their hierarchical
relationship.  However, you may be hitting some other bug that we
haven't seen before.
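
A quick way to check for an explicitly set mountpoint would be something along these
lines (a sketch only; substitute the pool in question, and note that -s local limits
the output to properties set directly on a dataset):

  # show only mountpoint values whose source is 'local', i.e. explicitly set
  zfs get -r -s local -o name,property,value,source mountpoint nfs-s5-p0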

- Eric

On Wed, Aug 16, 2006 at 03:32:08PM +0200, Robert Milkowski wrote:
> Hello Mark,
> 
> Wednesday, August 16, 2006, 3:23:43 PM, you wrote:
> 
> MM> Robert,
> 
> MM> Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
> MM> following the import?  These messages imply that the d5110 and d5111
> MM> directories in the top-level filesystem of pool nfs-s5-p0 are not
> MM> empty.  Could you verify that 'df /nfs-s5-p0/d5110' displays
> MM> nfs-s5-p0/d5110 as the "Filesystem" (and not just nfs-s5-p0)?
> 
> They are definitely mounted and not empty. And df reports them as
> mounted file systems.
> 
> -- 
> Best regards,
>  Robertmailto:[EMAIL PROTECTED]
>http://milek.blogspot.com
> 

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re[2]: [zfs-discuss] zpool import - cannot mount [...] directory is not empty

2006-08-16 Thread Robert Milkowski
Hello Mark,

Wednesday, August 16, 2006, 3:23:43 PM, you wrote:

MM> Robert,

MM> Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
MM> following the import?  These messages imply that the d5110 and d5111
MM> directories in the top-level filesystem of pool nfs-s5-p0 are not
MM> empty.  Could you verify that 'df /nfs-s5-p0/d5110' displays
MM> nfs-s5-p0/d5110 as the "Filesystem" (and not just nfs-s5-p0)?

They are definitely mounted and not empty. And df reports them as
mounted file systems.

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] zpool import - cannot mount [...] directory is not empty

2006-08-16 Thread Mark Maybee

Robert,

Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
following the import?  These messages imply that the d5110 and d5111
directories in the top-level filesystem of pool nfs-s5-p0 are not
empty.  Could you verify that 'df /nfs-s5-p0/d5110' displays
nfs-s5-p0/d5110 as the "Filesystem" (and not just nfs-s5-p0)?
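
In other words, roughly this (a sketch using the dataset names from this thread; a
mounted dataset should show up itself in the Filesystem column, not its parent):

  # confirm which file system actually backs the path
  df -h /nfs-s5-p0/d5110

  # cross-check against the list of mounted ZFS file systems
  zfs mount | grep d5110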

-Mark

Robert Milkowski wrote:

All pools were exported, then I tried to import them one by one and got this with
only the first pool.

bash-3.00# zpool export nfs-s5-p4 nfs-s5-s5 nfs-s5-s6 nfs-s5-s7 nfs-s5-s8
bash-3.00# zpool import nfs-s5-p4
cannot mount '/nfs-s5-p4/d5139': directory is not empty
cannot mount '/nfs-s5-p4/d5141': directory is not empty
cannot mount '/nfs-s5-p4/d5138': directory is not empty
cannot mount '/nfs-s5-p4/d5142': directory is not empty
bash-3.00# df -h /nfs-s5-p4/d5139
Filesystem             size   used  avail capacity  Mounted on
nfs-s5-p4/d5139        600G   556G    44G    93%    /nfs-s5-p4/d5139
bash-3.00# zpool export nfs-s5-p4
bash-3.00# ls -l /nfs-s5-p4/d5139
/nfs-s5-p4/d5139: No such file or directory
bash-3.00# ls -l /nfs-s5-p4/
total 0
bash-3.00# zpool import nfs-s5-p4
bash-3.00# uname -a
SunOS XXX 5.11 snv_43 sun4u sparc SUNW,Sun-Fire-V240
bash-3.00#

No problem with other pools - all other pools imported without any warnings.

The same on another server (all pools were exported first):

bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount 'nfs-s5-p0/d5112': mountpoint or dataset is busy
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00# zpool export nfs-s5-p0
bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00# zpool export nfs-s5-p0
bash-3.00# ls -la /nfs-s5-p0/
total 4
drwxr-xr-x   2 root     other        512 Jun 14 14:37 .
drwxr-xr-x  40 root     root        1024 Aug  8 11:00 ..
bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount 'nfs-s5-p0/d5112': mountpoint or dataset is busy
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00#
bash-3.00# uname -a
SunOS X 5.11 snv_39 sun4v sparc SUNW,Sun-Fire-T200
bash-3.00#

However, all filesystems from that pool were mounted.

No problem with other pools - all other pools imported without any warnings.


All filesystems in a pool have sharenfs set (actually, sharenfs is set on the pool's
top-level dataset and then inherited by the filesystems). Additionally, nfs/server was
disabled just before I exported the pools and started automatically when the first pool
was imported.



I believe there's already an open bug for this.
 
 



[zfs-discuss] zpool import - cannot mount [...] directory is not empty

2006-08-15 Thread Robert Milkowski
All pools were exported, then I tried to import them one by one and got this with
only the first pool.

bash-3.00# zpool export nfs-s5-p4 nfs-s5-s5 nfs-s5-s6 nfs-s5-s7 nfs-s5-s8
bash-3.00# zpool import nfs-s5-p4
cannot mount '/nfs-s5-p4/d5139': directory is not empty
cannot mount '/nfs-s5-p4/d5141': directory is not empty
cannot mount '/nfs-s5-p4/d5138': directory is not empty
cannot mount '/nfs-s5-p4/d5142': directory is not empty
bash-3.00# df -h /nfs-s5-p4/d5139
Filesystem             size   used  avail capacity  Mounted on
nfs-s5-p4/d5139        600G   556G    44G    93%    /nfs-s5-p4/d5139
bash-3.00# zpool export nfs-s5-p4
bash-3.00# ls -l /nfs-s5-p4/d5139
/nfs-s5-p4/d5139: No such file or directory
bash-3.00# ls -l /nfs-s5-p4/
total 0
bash-3.00# zpool import nfs-s5-p4
bash-3.00# uname -a
SunOS XXX 5.11 snv_43 sun4u sparc SUNW,Sun-Fire-V240
bash-3.00#

No problem with other pools - all other pools imported without any warnings.
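
One way to see what is actually sitting underneath a blocked mountpoint, without
exporting the whole pool, would be roughly this (a sketch; the unmount will refuse
if the path is busy, e.g. still shared or in use):

  # unmount only the affected dataset and inspect the directory beneath it
  zfs umount nfs-s5-p4/d5139
  ls -la /nfs-s5-p4/d5139

  # remount the dataset afterwards
  zfs mount nfs-s5-p4/d5139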

The same on another server (all pools were exported first):

bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount 'nfs-s5-p0/d5112': mountpoint or dataset is busy
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00# zpool export nfs-s5-p0
bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00# zpool export nfs-s5-p0
bash-3.00# ls -la /nfs-s5-p0/
total 4
drwxr-xr-x   2 root     other        512 Jun 14 14:37 .
drwxr-xr-x  40 root     root        1024 Aug  8 11:00 ..
bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount 'nfs-s5-p0/d5112': mountpoint or dataset is busy
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00#
bash-3.00# uname -a
SunOS X 5.11 snv_39 sun4v sparc SUNW,Sun-Fire-T200
bash-3.00#

However, all filesystems from that pool were mounted.

No problem with other pools - all other pools imported without any warnings.
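
For reference, the two workarounds that the error message points at would look roughly
like this (a sketch only, using d5110 as the example dataset):

  # option 1: overlay-mount the dataset on top of the non-empty directory
  zfs mount -O nfs-s5-p0/d5110

  # option 2: switch the dataset to a legacy mountpoint and mount it manually
  zfs set mountpoint=legacy nfs-s5-p0/d5110
  mount -F zfs nfs-s5-p0/d5110 /nfs-s5-p0/d5110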


All filesystems in a pool have sharenfs set (actually, sharenfs is set on the pool's
top-level dataset and then inherited by the filesystems). Additionally, nfs/server was
disabled just before I exported the pools and started automatically when the first pool
was imported.
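
A sketch of how that setup looks (the actual share options used may differ from the
plain 'on' shown here):

  # sharenfs is set once on the pool's top-level dataset...
  zfs set sharenfs=on nfs-s5-p0

  # ...and the child filesystems inherit it; the SOURCE column shows 'inherited'
  zfs get -r -o name,value,source sharenfs nfs-s5-p0

  # state of the NFS service around the export/import cycle
  svcs nfs/server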



I believe there's already an open bug for this.
 
 