Hi Jeremy,

I would personally look into Neutron and consider provider networks. ML2
allows different types of backends, and between nodes I would use GRE.
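
For example, a rough ml2_conf.ini sketch (this assumes the Open vSwitch
mechanism driver, and the physnet names and bridge mappings are placeholders
you'd adapt to your NICs):

    [ml2]
    type_drivers = flat,vlan,gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch

    [ml2_type_flat]
    # flat provider network for the colo private segment
    flat_networks = physnet-ext

    [ml2_type_vlan]
    # VLAN provider networks trunked on the public side
    network_vlan_ranges = physnet-public

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [ovs]
    bridge_mappings = physnet-ext:br-eth1,physnet-public:br-eth0
    local_ip = <this node's tunnel endpoint IP>
    enable_tunneling = True
    tunnel_type = gre

Provider networks plug the VMs straight into your existing segments, so you
don't need a dedicated network node for that traffic.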

Just my 2 cents. I know this is short, but at least it gives you some ideas
on where to look.

Remo 
 
On May 30, 2014, at 1:21 PM, Jeremy Utley <[email protected]> wrote:

> Hello list,
> 
> I'm working on a proof-of-concept OpenStack installation for my company, and 
> am looking for answers to some questions that I haven't been able to find on 
> my own.  Hopefully someone here can help point me in the right direction!
> 
> I've got a basic setup already in place - 2 controller nodes and 3 compute 
> nodes (with 5 more ready to provision once we get everything lined out).  
> The main problem I'm facing right now is the networking.  We need to 
> integrate this OpenStack setup with the network resources already in place 
> in our environment, and I'm not exactly sure how to do so.  I'll start off 
> by going over how we are currently set up.
> 
> 1. Management and compute nodes are all connected via InfiniBand, which we 
> use as our cluster management network (10.30.x.x/16).  This network has zero 
> connectivity to anything else - it's completely private to the cluster.
> 
> 2. eth2 on all compute nodes is connected to the others via a VLAN on our 
> switch, and bridged to br100 to serve as our OpenStack fixed network 
> (10.18.x.x/16).
> 
> 3. eth1 on all compute nodes is connected to our existing colo private 
> network (10.5.20.0/22), which we are currently defining as our external 
> network.  In our current setup (nova-network with FlatDHCPManager), I have 
> taken a block of IPs from this subnet and reserved them for use as floating 
> IPs for testing purposes - this is working perfectly right now (the 
> relevant nova.conf bits are sketched just after this list).
> 
> 4. eth0 on all compute nodes is connected to our existing colo public 
> network.  We have a /19 public allocation, broken up into numerous /24 and 
> /25 segments to keep independent divisions of the company fully segregated - 
> each segment is a separate VLAN on the public network switches.  Our current 
> setup does not utilize this network at all.
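> 
> For reference, the nova.conf side of items 2 and 3 looks roughly like this 
> (paraphrased, so treat the exact values as illustrative):
> 
>     network_manager = nova.network.manager.FlatDHCPManager
>     flat_interface = eth2
>     flat_network_bridge = br100
>     fixed_range = 10.18.0.0/16
>     public_interface = eth1
> 
> with the reserved floating block registered via something like 
> "nova-manage floating create --ip_range=<CIDR>" (the exact block we carved 
> out isn't important here).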
> 
> Ultimately, we'd like to have our cluster VMs connected to the fixed network 
> (on eth2), and treat both eth1 and eth0 as "public networks" we can use 
> floating IPs from.  All VMs should connect to eth1 and be able to have 
> floating IPs assigned to them from that network, and they should be able to 
> connect to a single tagged VLAN on eth0 as well.
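> 
> If I'm reading the Neutron docs correctly, that would map to something like 
> two external provider networks (the names and VLAN ID below are invented 
> for illustration):
> 
>     neutron net-create colo-private --router:external=True \
>         --provider:network_type flat \
>         --provider:physical_network physnet-ext
>     neutron net-create public-vlan --router:external=True \
>         --provider:network_type vlan \
>         --provider:physical_network physnet-public \
>         --provider:segmentation_id 123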
> 
> From the reading I've done so far, I think what we are trying to do might be 
> too complicated for nova-network, since it depends on defining a single 
> interface as the public interface on the compute nodes, and we might 
> potentially have more than one.  Am I interpreting that correctly, or could 
> we maybe accomplish this with nova-network (perhaps using VlanManager mode)?
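> 
> For what it's worth, my reading is that VlanManager is enabled with 
> something like the following in nova.conf, but it still appears to assume a 
> single public_interface:
> 
>     network_manager = nova.network.manager.VlanManager
>     vlan_interface = eth2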
> 
> If we have to switch to Neutron, can you run the Neutron services on each 
> compute node?  We have concerns about scale if we have to implement a 
> separate network node, as we could easily end up saturating a full gig-e 
> interface with this cluster in the future.  Plus, the extra expense of 
> dedicated network nodes could end up being cost-prohibitive in the early 
> stages of deployment.
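> 
> My (possibly wrong) understanding of what each compute node would need to 
> run in such a layout is roughly:
> 
>     neutron-openvswitch-agent   # L2 agent, required on compute nodes anyway
>     neutron-dhcp-agent          # can be spread across multiple nodes
>     neutron-l3-agent            # only if we use Neutron routers
>     neutron-metadata-agent      # instance metadata proxy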
> 
> Anyone got any suggestions for us?
> 
> Thank you for your time,
> 
> Jeremy Utley

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
