Hello,

I set up 3 lustre networks for the following:

tcp0  for Kerberized connections to Lustre 2.0
tcp21 for non-Kerberos connections to Lustre 2.0
tcp18 for non-Kerberos connections to Lustre 1.8.3

Below are the Kerberos flavor rules by network:

  Secure RPC Config Rules:
  BEER.srpc.flavor.tcp=krb5p
  BEER.srpc.flavor.tcp21=null
  BEER.srpc.flavor.default=krb5n
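For reference, rules of this form are set from the MGS with `lctl conf_param` (a sketch; it assumes the commands are run on the MGS node for fsname BEER):

```shell
# Run on the MGS: set the RPC security flavor per LNET network for fsname BEER.
lctl conf_param BEER.srpc.flavor.tcp=krb5p      # Kerberos with privacy (encryption) on tcp0
lctl conf_param BEER.srpc.flavor.tcp21=null     # no GSS/Kerberos on tcp21
lctl conf_param BEER.srpc.flavor.default=krb5n  # Kerberos authentication only, all other networks
```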

The Lustre filesystem (fsname BEER) is mounted at /beer.

youngs.beer.psc.edu on tcp0
youngs-145.beer.psc.edu on tcp21

[r...@youngs ~]# lctl list_nids
128.182.58....@tcp
128.182.145....@tcp21

[r...@youngs ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       4.8G  2.0G  2.6G  43% /
/dev/hda1              99M   24M   71M  25% /boot
tmpfs                 506M     0  506M   0% /dev/shm
guinness.beer.psc....@tcp0:/BEER
                        99G   50G   47G  52% /beer
guinness-145.beer.psc....@tcp21:/BEER
                        99G   50G   47G  52% /beer-145

Problem:
--------
  When I change the Kerberos flavors on default, tcp0, and tcp21, the MDS
shows confirmation of the changes for default and tcp0, but I see no such
confirmation for tcp21. I can certainly still mount as shown above,
though. I suspect that tcp21 is defaulting to krb5p and is therefore
still requiring authentication for users. /beer and /beer-145 are
NFS-exported to other systems residing in the same and in different
Kerberos realms. Root can access the filesystems with no problem, but
users are required to authenticate.

My modprobe configuration is of the form:

options lnet ip2nets="tcp0(eth0) 128.182.58.*; tcp21(eth1) 128.182.145.*" \
             routes="tcp0 128.182.145....@tcp21; tcp21 128.182.58....@tcp0"
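After reloading the lnet module, the resulting configuration can be sanity-checked with lctl (a sketch; exact output varies by Lustre version, and the module can only be reloaded while no Lustre filesystems are mounted):

```shell
# Reload LNET so new ip2nets/routes module options take effect
# (fails if Lustre is still mounted), then inspect the result.
modprobe -r lnet && modprobe lnet
lctl network up      # bring up the LNET networks
lctl list_nids       # local NIDs, one per configured network
lctl show_route      # configured routes and their gateways
```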

Question:
--------
What is the interoperability between Lustre 2.0.x and Lustre 1.8.3?
Is it officially incompatible and/or unsupported? And if so, can we
still try it unofficially, or is it absolutely incompatible?

I would appreciate any feedback/corrections.

Thanks,
josephine

Reference:
----------
Lustre version: 2.0.5 Alpha
Lustre release: 1.9.280
Kernel: 2.6.18_128.7.1



_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
