Paul Blackburn wrote:
>
> Steve,
>
> Thanks for the enlightenment!
>
> So, can you confirm: do you have a "fully functional"
> AFS service over NAT/firewall? klog OK?
Yes, klog works just fine... well, at least it does from UNIX clients.
In fact, I have AFS servers and clients at multiple sites, each NAT'd
behind firewalls at their respective sites, that communicate with each
other. (Note: It's a non-published cell... soon, the sites will be
communicating through a VPN, and then I can plug the holes in the
firewall...)
I *was* having a problem with my first experimental NT client, which I
only started to play with this past Thursday... but then I read Assar
Westerlund's comment:
> I believe you will also have to let through port 750 since the
> Transarc NT client uses the krb4 protocol for authenticating the user
> against the ka-server/kdc.
Thanks to Assar, I think I'll be able to get it to work now! The
interesting condition I was facing with the
(NAT'd-behind-a-different-firewall) NT client is that I *could*
authenticate to a remote server, from within AFS Control Center... but
not when using the "klog" application itself. I guess the Control
Center must be using port 70nn, while the klog app uses port 750...
well, I'll find out on Monday!
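For anyone following along, here's a sketch of the holes involved. The ports are the standard AFS UDP ports plus the krb4 port Assar mentioned; the iptables syntax is just one illustrative way to write the rules (your firewall will have its own rule language), and the rules here are deliberately oversimplified:

```shell
# Illustrative only: standard AFS UDP ports, expressed as Linux
# iptables rules.  Adapt the syntax (and add source/destination
# restrictions!) for your own firewall.
iptables -A INPUT -p udp --dport 7000:7007 -j ACCEPT  # 7000 fileserver,
                                                      # 7002 ptserver,
                                                      # 7003 vlserver,
                                                      # 7004 kaserver,
                                                      # 7005 volserver,
                                                      # 7007 bosserver
iptables -A INPUT -p udp --dport 750 -j ACCEPT        # krb4, which the
                                                      # Transarc NT klog
                                                      # apparently uses
```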
> Apart from the network interface alias on servers,
> are there any other things to configure?
> Routing issues?
No. In fact, I don't think you even have to have the second interface
(the one with the outside-the-firewall address) configured *up*. As
long as it is defined, the server will think of itself as multi-homed,
and clients outside the firewall will get the information they need with
the correct IP address.
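In case it helps, here's roughly what I mean by "defined" (interface and address names here are made up; substitute your own, and the exact ifconfig syntax varies by OS):

```shell
# Hypothetical example: give the server a second (alias) interface
# carrying its outside-the-firewall address.  On Linux:
ifconfig eth0:1 128.100.250.201 netmask 255.255.255.0
# ...or on Solaris, something like:
#   ifconfig hme0:1 128.100.250.201 netmask 255.255.255.0 up
# The AFS server processes can also be told explicitly which addresses
# to advertise via the /usr/afs/local/NetInfo file (one address per
# line), if you'd rather not rely on interface scanning alone.
```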
The reason that you normally can't have a NAT'd file server can be seen
by imagining how a VLDB transaction would look.
Imagine a firewall at 128.100.250.1, protecting the data on a class C
network behind it. A client outside the firewall, say at 128.100.251.2,
thinks the VLservers are at "128.100.250.{200,201,202}".... but the
VLservers living behind the firewall know themselves only as
"10.10.10.{200,201,202}".
So, picture what happens when that remote client goes looking for a
file:
(a) The client at 128.100.251.2 wants data from a particular volume.
(b) The client cache manager sends "where is this volume?" to the
VLserver at 128.100.250.200. This is on a different class-C subnet, so
the request gets routed to 128.100.250.1.
(c) The firewall at 128.100.250.1 passes the request on to the NAT'd
address of the VLserver, 10.10.10.200.
(d) The VLserver looks in the VLDB, and discovers that the volume is on
fileserver 10.10.10.201. This fileserver happens to be known as
"128.100.250.201" outside of the firewall, but the VLserver *doesn't
know that*.
(e) The VLserver returns the location "10.10.10.201" to the client.
(f) The client tries to find the fileserver at 10.10.10.201, but it
doesn't have a route to that (non-routable) subnet. Game over.
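Just to make the failure concrete, here's a toy sketch of steps (a)-(f) in shell. The "VLDB" is just a lookup table and the volume name is made up; the point is only that the answer the client gets back is an address it can't route to:

```shell
#!/bin/sh
# Toy model of the lookup above -- NOT real AFS, just the addressing
# logic.  Addresses are from the example; "user.steve" is a
# hypothetical volume name.

# The VLDB as the VLserver sees it: volume -> fileserver address
# (the inside-the-firewall address, which is all the VLserver knows).
vldb_lookup() {
  case "$1" in
    user.steve) echo "10.10.10.201" ;;
    *)          echo "unknown" ;;
  esac
}

# Which addresses the outside client (128.100.251.2) can actually reach.
client_has_route() {
  case "$1" in
    128.100.250.*|128.100.251.*) return 0 ;;  # routable public subnets
    *)                           return 1 ;;  # 10.x: no route from outside
  esac
}

server=$(vldb_lookup user.steve)          # steps (a)-(e)
if client_has_route "$server"; then       # step (f)
  echo "client can reach fileserver at $server"
else
  echo "game over: no route to $server"
fi
```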
Now, in this scenario, if the fileserver had a primary interface of
10.10.10.201, and a second virtual interface of 128.100.250.201, the
client would get back... well, come to think of it, I don't know whether
the VLserver returns *both* addresses, or only returns the one that
makes sense... but somehow or other, the client ends up learning that
the volume it wants is at 128.100.250.201, and everything's happy.
> I guess you could make one fileserver "private"
> by not having the network interface alias?
Yeesss... any client outside the firewall trying to access a volume on
that "private" server would time out. So you'd have to make sure that
the volumes and/or replicas on that server lived in their own distinct,
connected subtrees of the namespace; otherwise, outside clients would be
blocked from reaching files on any volumes mounted "under" them.
Did that make any sense?
--
steve lammert unix administrator voice: +1-412-471-7500 x4712
[EMAIL PROTECTED] Be Free, Inc. fax: +1-412-471-9840