Ernst Bauernfeind wrote:
> Hi,
> I use OpenSolaris 2009.06 to share a ZFS filesystem via NFS to Linux (Debian
> 5.0). It works fine, even with NFSv4. Now I tried using Kerberos, but ran
> into problems (client = vdr, server = storage).
>
> /etc/nfssec.conf on OpenSolaris server
> none            0       -       -       -       # AUTH_NONE
> sys             1       -       -       -       # AUTH_SYS
> dh              3       -       -       -       # AUTH_DH
> #
> # Uncomment the following lines to use Kerberos V5 with NFS
> #
> krb5            300003  kerberos_v5     default -               # RPCSEC_GSS
> krb5i           390004  kerberos_v5     default integrity       # RPCSEC_GSS
> krb5p           390005  kerberos_v5     default privacy         # RPCSEC_GSS
> default         1       -       -       -                       # default is AUTH_SYS
>
>   
Ernst,

What are the share options on the server?
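
They can usually be listed on the OpenSolaris box with one of the following
(the dataset name in the last command is only a guess, substitute whatever
actually backs /storage):

  share
  sharemgr show -vp
  zfs get sharenfs storage/storage

That should show whether sec=krb5 is actually being offered for /storage.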

Thanks,
Tom


> Linux says:
>
> vdr:~# mount -t nfs4 -vvvv -o sec=krb5 storage.imperium:/storage/ /storage
> mount: fstab path: "/etc/fstab"
> mount: lock path:  "/etc/mtab~"
> mount: temp path:  "/etc/mtab.tmp"
> mount: spec:  "storage.imperium:/storage/"
> mount: node:  "/storage"
> mount: types: "nfs4"
> mount: opts:  "sec=krb5"
> mount: external mount: argv[0] = "/sbin/mount.nfs4"
> mount: external mount: argv[1] = "storage.imperium:/storage/"
> mount: external mount: argv[2] = "/storage"
> mount: external mount: argv[3] = "-v"
> mount: external mount: argv[4] = "-o"
> mount: external mount: argv[5] = "rw,sec=krb5"
> mount.nfs4: pinging: prog 100003 vers 4 prot tcp port 2049
> mount.nfs4: Operation not permitted
>
> vdr:~# tail /var/log/messages
> Jul  9 15:38:52 vdr kernel: [  155.925026] call_verify: server storage 
> requires stronger authentication.
> Jul  9 15:38:52 vdr kernel: [  155.925026] call_verify: server storage 
> requires stronger authentication.
>
> vdr:~# tail /var/log/daemon.log
> Jul  9 16:26:34 vdr rpc.gssd[2864]: Full hostname for 'storage.imperium' is 
> 'storage.imperium'
> Jul  9 16:26:34 vdr rpc.gssd[2864]: Full hostname for 'vdr.imperium' is 
> 'vdr.imperium'
> Jul  9 16:26:34 vdr rpc.gssd[2864]: Key table entry not found while getting 
> keytab entry for 'root/vdr.imperium@IMPERIUM'
> Jul  9 16:26:34 vdr rpc.gssd[2864]: Success getting keytab entry for 
> 'nfs/vdr.imperium@IMPERIUM'
> Jul  9 16:26:34 vdr rpc.gssd[2864]: INFO: Credentials in CC 
> 'FILE:/tmp/krb5cc_machine_IMPERIUM' are good until 1247176570
> Jul  9 16:26:34 vdr rpc.gssd[2864]: INFO: Credentials in CC 
> 'FILE:/tmp/krb5cc_machine_IMPERIUM' are good until 1247176570
> Jul  9 16:26:34 vdr rpc.gssd[2864]: using FILE:/tmp/krb5cc_machine_IMPERIUM 
> as credentials cache for machine creds
> Jul  9 16:26:34 vdr rpc.gssd[2864]: using environment variable to select krb5 
> ccache FILE:/tmp/krb5cc_machine_IMPERIUM
> Jul  9 16:26:34 vdr rpc.gssd[2864]: creating context using fsuid 0 (save_uid 
> 0)
> Jul  9 16:26:34 vdr rpc.gssd[2864]: creating tcp client for server 
> storage.imperium
> Jul  9 16:26:34 vdr rpc.gssd[2864]: creating context with server
> nfs@storage.imperium
> Jul  9 16:26:35 vdr rpc.gssd[2864]: DEBUG: serialize_krb5_ctx: lucid version!
> Jul  9 16:26:35 vdr rpc.gssd[2864]: prepare_krb5_rfc1964_buffer: serializing 
> keys with enctype 4 and length 8
> Jul  9 16:26:35 vdr rpc.gssd[2864]: doing downcall
> Jul  9 16:26:35 vdr rpc.gssd[2864]: destroying client clnt1e
> Jul  9 16:26:35 vdr rpc.gssd[2864]: destroying client clnt1d
>
>
> Any help would be appreciated!
>   
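
For comparison, a Kerberos-enabled export usually just adds sec=krb5 to the
share options, along the lines of:

  share -F nfs -o sec=krb5,rw /storage

or, persistently through the ZFS property (again, the dataset name here is
only a guess):

  zfs set sharenfs=sec=krb5,rw storage/storage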

