NAT traversal is messy. Oftentimes a technique fails and the application
doesn't know why. Sometimes when one technique fails, the application has
no good second, third, or fourth choice to fall back on. Occasionally a
new technique is implemented and has to be added without disturbing the
rest of the source code. And of course, the NAT traversal code has to be
isolated from the core application logic.

For the above reasons, I propose that a new API (client and server) for NAT
traversal be created: something built so that more techniques can be added
over time, and something smart enough to know which technique to use.

NAT traversal has to be a six-phase process:

Phase I: The client (Bob) gathers information on its own
NAT/Firewall/Network/IP/permissions/latency/bandwidth/etc.
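A rough sketch of what that gathered state could look like, written here as
a TypeScript interface (every field name below is just an assumption for
illustration, not a proposed schema):

    // Hypothetical shape of the state a client gathers in Phase I.
    interface PeerNetworkInfo {
      peerId: string;             // e.g. "Bob"
      localAddresses: string[];   // addresses seen on the local interfaces
      reflexiveAddress?: string;  // public ip:port as reported by a reflector
      natType: "none" | "full-cone" | "restricted" | "port-restricted" |
               "symmetric" | "unknown";
      firewall: "none" | "stateful" | "blocks-udp" | "unknown";
      supportsUPnP: boolean;
      latencyMs?: number;
      bandwidthKbps?: number;
    }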

Phase II: Bob registers this information with a central server (and
maintains a live connection with that server). This should probably be a
request to put a simple object into a database. Since the information
wouldn't need any joins or fancy queries (just fetch the object and
occasionally add more fields to it), a NoSQL database like MongoDB should
work fine (and might distribute better).
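A minimal sketch of that registration step, assuming the official MongoDB
Node.js driver and a "peers" collection (database name, collection name,
and connection string are all placeholders):

    import { MongoClient } from "mongodb";

    async function registerPeer(info: { peerId: string } & Record<string, unknown>) {
      const client = new MongoClient("mongodb://localhost:27017");
      await client.connect();
      try {
        // Upsert: create the document on first registration, add or refresh
        // fields on later ones; no joins or fancy queries required.
        await client.db("natdb").collection("peers")
          .updateOne({ peerId: info.peerId }, { $set: info }, { upsert: true });
      } finally {
        await client.close();
      }
    }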

Phase III: Bob queries the server (Who is online? Is "Alice" connected?
What NAT/Firewall is Alice behind?). Plain old requests for data in a
database.
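From the client's point of view, that lookup is nothing more than an
ordinary HTTP GET (the base URL and path here are made up for the example):

    // Ask the central server what it knows about a given peer, e.g. "Alice".
    async function lookupPeer(baseUrl: string, peerId: string) {
      const res = await fetch(`${baseUrl}/peers/${peerId}`);
      if (!res.ok) throw new Error(`${peerId} is not registered`);
      return res.json();   // e.g. the PeerNetworkInfo object Alice posted
    }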

Phase IV: Bob asks the server to initiate a particular NAT traversal
protocol with Alice. Since Bob knows Alice's state, he picks the traversal
protocol based on the information the server provided about her, and the
server merely forwards the request (along with the information Alice would
need to connect to Bob, i.e. the information Bob provided to the central
server in Phase II).
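Concretely, the initiation request and the message the server forwards to
Alice might look something like this (all field names are illustrative,
not a wire format I am committing to):

    // What Bob POSTs to the server in Phase IV.
    const initiateRequest = {
      from: "Bob",
      to: "Alice",
      technique: "udp-hole-punch",        // chosen from Alice's advertised NAT info
      fromEndpoint: "203.0.113.7:40112",  // Bob's server-reflexive address
    };

    // What the server then pushes to Alice over her live connection.
    const forwardedToAlice = {
      type: "traversal-request",
      peer: "Bob",
      technique: "udp-hole-punch",
      peerEndpoint: "203.0.113.7:40112",
    };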

Phase V: Bob and Alice execute the NAT traversal protocol. Note that this
protocol may require assistance from some other server besides the main
(database/web) server.
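As one example of such a protocol, here is a bare-bones UDP hole punching
sketch using Node's dgram module; it assumes both sides already learned
each other's server-reflexive address and port during Phases III/IV:

    import dgram from "node:dgram";

    function punch(localPort: number, peerAddr: string, peerPort: number) {
      const sock = dgram.createSocket("udp4");
      sock.bind(localPort);

      // Both sides send probes; the outgoing packets open a mapping in each
      // side's own NAT so that the peer's packets can get through.
      const probes = setInterval(() => sock.send("punch", peerPort, peerAddr), 500);

      sock.on("message", (msg, rinfo) => {
        clearInterval(probes);
        console.log(`got "${msg}" from ${rinfo.address}:${rinfo.port}`);
      });
      return sock;
    }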

Phase VI: Alice disconnects from the central server, and the server deletes
the data (IP address, port, NAT type, etc.) that Alice posted to it.

For the main server to work in virtually all network environments, it has
to use TCP port 80 (or 443). The server is your fortress: everything from
the network data to the hard disk is encrypted, only a few ports are open
and those ports are always bound to known services, only a few known
processes are allowed to run, and so on.

In addition there have to be a couple of auxiliary servers for things like
data replication (maintaining a copy of the data on the main server) and
bouncing back server-reflexive UDP/TCP packets and ICMP packets (necessary
during Phase I). There should be two of these reflectors (so that clients
can see whether their externally mapped port number changes when a
connection is made to two different IP addresses).
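The reason for two reflectors is simple: if the externally mapped port
reported by reflector A differs from the one reported by reflector B, the
NAT assigns a new mapping per destination and plain hole punching is
unlikely to work. A tiny sketch of that comparison (assuming each reflector
simply echoes back the source ip:port it observed):

    // Compare the two server-reflexive endpoints gathered in Phase I.
    function classifyMapping(seenByA: string, seenByB: string): string {
      return seenByA === seenByB
        ? "endpoint-independent (hole punching should work)"
        : "address/port-dependent (symmetric-style; plan to relay instead)";
    }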

Based on these six phases, an API has to be designed. Since the main server
has to run on port 80/443 anyway, since it will need a database back-end,
and since it is convenient to transfer information in plain-text form, the
entire server can be implemented as a web application. For example, a GET
request can be used to ask the web server for a plain-text document
containing information on Alice. A POST request can be used for posting
information about the type of NAT/Firewall/Security/etc. A WebSocket could
be used to allow the server to push the signal to initiate NAT traversal
down to the clients. Even though the server need not be a website, it could
do everything through the MEAN stack or something like that.
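To make that concrete, the main server could be little more than a couple
of routes plus a WebSocket endpoint. The sketch below uses Express and the
"ws" package (an in-memory Map stands in for the MongoDB collection, and
every path name is a placeholder):

    import express from "express";
    import http from "node:http";
    import { WebSocketServer } from "ws";

    const app = express();
    app.use(express.json());
    const peers = new Map<string, object>();   // stand-in for the database

    // Phase II: POST a description of your NAT/Firewall/etc.
    app.post("/peers/:id", (req, res) => {
      peers.set(req.params.id, req.body);
      res.sendStatus(204);
    });

    // Phase III: GET another peer's description.
    app.get("/peers/:id", (req, res) => {
      const peer = peers.get(req.params.id);
      if (peer) res.json(peer); else res.sendStatus(404);
    });

    const server = http.createServer(app);
    // Phases IV/V: each client keeps a WebSocket open here, and the server
    // pushes "start technique X with peer Y" messages down it.
    new WebSocketServer({ server, path: "/events" });
    server.listen(8080);   // 80/443 behind TLS in production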

The client, on the other hand, just has to be able to make GET/POST
requests and open a WebSocket connection. This can be done in any major
programming language. From there, everything between the client and the
server is plain text.
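A corresponding client sketch in Node.js (again using the "ws" package; the
URLs and message fields are placeholders, not a defined protocol):

    import WebSocket from "ws";

    const base = "http://localhost:8080";

    async function main() {
      // Phase II: register our own gathered state.
      await fetch(`${base}/peers/Bob`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ natType: "port-restricted",
                               reflexiveAddress: "203.0.113.7:40112" }),
      });

      // Phase III: ask about Alice.
      const alice = await (await fetch(`${base}/peers/Alice`)).json();
      console.log("Alice:", alice);

      // Phases IV/V: keep a live connection and wait for traversal instructions.
      const ws = new WebSocket("ws://localhost:8080/events");
      ws.on("message", (data) => {
        // e.g. { technique: "udp-hole-punch", peer: "Alice", peerEndpoint: "..." }
        console.log("server asks us to run:", JSON.parse(data.toString()));
      });
    }

    main();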