Thanks for the pointers, Nate! I got a similar setup working (although I
only have servers in AWS), but I am not NetRestricting the private IP.
Besides the initial timeout from clients outside the NAT, is there
another reason to do this? I know that AWS bills internal and external
traffic differently (internal traffic sent to the internal IP is free,
but internal traffic sent to the external IP is not), so I'm guessing
that is just a tradeoff.
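For anyone following along: NetInfo and NetRestrict are plain one-address-per-line files in the server's local config directory. A minimal sketch of the Elastic-IP setup being discussed, using RFC 5737/1918 example addresses (the Elastic and private IPs here are placeholders, and the config directory varies by packaging, e.g. /usr/afs/local/ on Transarc-style installs or /etc/openafs/server/ on many distro packages):

```shell
# Sketch only: writes to the current directory; copy the files into your
# server's local config directory and restart the fileserver to apply.

# NetInfo: addresses the server should advertise. The "f" prefix forces
# registration of an address even if no local interface carries it --
# needed for an Elastic IP, which the instance never sees directly.
echo "f 203.0.113.10" > NetInfo

# NetRestrict: addresses the server should NOT advertise, here the
# VPC-private address that clients outside the NAT cannot reach.
echo "172.31.5.20" > NetRestrict
```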

Out of curiosity, Ben mentioned some issues related to virtualized
networking on small instances. Have you seen such behavior, or do you run
larger instances anyway?

Victor


On Tue, Jun 18, 2013 at 11:00 AM, Nate Coraor <[email protected]> wrote:

> On Jun 18, 2013, at 1:20 PM, Victor Marmol wrote:
>
> > I believe we have had a couple people report back from running AFS cells
> in AWS, with some unfortunate experiences relating to the network.
> Apparently our Rx stack does not always deal well with the delays and
> interruptions that AWS VMs can see; it might be better with dedicated
> (large) instances.  It may be worth searching the list archives to find
> these reports, though I can't do so right now.
> >
> > I just started running the server on AWS and will report back on my
> experience.
>
> Hi Victor,
>
> I run two DB/Fileservers in AWS and a third DB (and various other
> fileservers) outside AWS.  You'll want to use Elastic IPs and configure
> NetInfo/NetRestrict accordingly.  Here are my notes on the subject:
>
>     http://www.bx.psu.edu/~nate/doc/vldb.html
>
> --nate