First, a quick background: 


I am trying to build a small "mini public cloud" that consists of two XenServer 
hosts and one management/NFS server. 


There is absolutely no need (at least on my end) for VLANs or special 
isolation. 


Here is my setup: 


**Management / NFS Server** 
-> Ubuntu 12.04 LTS 
-> Two 1Gb NICs bridged together as br0, configured as 10.0.20.210/24 
(saves me having to use another switch) 
-> One 1Gb NIC configured with the live IP 63.135.177.210/28 (yes - that's the 
actual IP) and connected to the public switch 
-> IP forwarding and MASQ enabled: 63.135.177.210 <--> 10.0.20.0/24 (tested, 
works) 
-> dnsmasq installed, configured and working 
-> Entries in /etc/hosts for mgmt.mycloud, xen1.mycloud and xen2.mycloud 
-> Management server completely configured and ready (see the config sketch 
below) 
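
For reference, here is a minimal sketch of the relevant management-server 
config. The eth0/eth1/eth2 names are placeholders for my actual NICs, and this 
is paraphrased rather than copied verbatim: 

    # /etc/network/interfaces (bridge-utils installed)
    # The two internal NICs bridged together as br0 - acts as a tiny switch
    auto br0
    iface br0 inet static
        address 10.0.20.210
        netmask 255.255.255.0
        bridge_ports eth0 eth1
        bridge_stp off

    # The public-facing NIC (call it eth2)
    auto eth2
    iface eth2 inet static
        address 63.135.177.210
        netmask 255.255.255.240
        gateway 63.135.177.209

    # IP forwarding + MASQ between the two (tested, works)
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -s 10.0.20.0/24 -o eth2 -j MASQUERADE

    # /etc/hosts
    10.0.20.210  mgmt.mycloud
    10.0.20.211  xen1.mycloud
    10.0.20.212  xen2.mycloud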


**Two Physical Hosts** 
-> Each host has two 1Gb NICs 
-> One NIC connected to the public switch 
-> The other NIC connected to one of the two bridged ports on the management 
server 
-> XenServer 6.0.2 
-> Management network configured via the 10.0.20.0/24 interfaces 
-> xen1 is 10.0.20.211 and xen2 is 10.0.20.212 
-> Neither xen host has a configured public-facing IP, but each one IS 
connected to the public switch (see the xe commands below) 
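
For what it's worth, this is roughly how I have been inspecting the NIC and 
network layout on each host (field names from the xe CLI; output omitted): 

    # List physical interfaces: which device is the management PIF, and its IP
    xe pif-list params=uuid,device,IP,management

    # List the XenServer networks/bridges sitting on top of those PIFs
    xe network-list params=uuid,name-label,bridge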


**Physical Router** 
-> Configured gateway IP is 63.135.177.209/28 
-> Connected [obviously] to the public switch 


I initially did a very basic setup (basic networking) using only public IP 
addresses. Everything worked, but of course it uses something like 8 or 10 
IPs total. 


So I figured I would take a shot at advanced networking mode, with the 
following goals: 
-> No need for special isolation 
-> Desire to "share" the NFS and management network (10.0.20.0/24) 
-> Desire to provide VMs (instances) to the 63.135.177.208/28 network on an 
as-needed basis (not all will need access) 


The first issue I am having trouble with is getting a grasp of the 
"Physical Network" to actual-NIC mapping. Documentation on this seems almost 
nonexistent. When I add a zone, I select "advanced" and click next. I enter 
10.0.20.210 as [both] DNS servers and am immediately confused by the "Guest 
CIDR". I am still not sure what exactly this should be - and the examples 
online have only added to the confusion. 


One example mentions using an arbitrary subnet (10.1.1.0/24 - the default), and 
this is what I have been doing thus far. I am not sure if I am messing up at 
this point or not. 


Also, what is the "Public" checkbox on this window for? 


I click "next" and am brought to the Physical Network screen - with all the 
nice drag-and-drop jquery stuff I am so fond of (nice touch guys). But this is 
perhaps one of the most confusing parts there is. The documentation says each 
of these "Physical Networks" should "map" to an actual NIC port on each xen 
host. How? I see an option to provide a free-form name to each Physical Network 
(default for the first one is literally "Physical Network 1"). Where/how to I 
tell cloudstack that "Physical Network 1" belongs to (or should be "connected 
to") port1/eth1/xenbr1 of the host? 


Also, is this the point at which I should define two physical networks, drag 
the yellow and green icons to the bottom one (Physical Network 2), and leave 
the blue one on "Physical Network 1"? I also assume I do not need to drag the 
red icon over into "Physical Network 1" since they are on the same subnet - 
correct? 


Next, the "edit" button on each icon.. Mentions "XenServer traffic label" - is 
this the uuid, network-uuid or device value from "xe pif-list"? Or is this the 
actual device or bridge name such as eth1 or xenbr1? Or is this something 
entirely different? 
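
My best guess so far is that the label refers to the XenServer network's 
name-label rather than any of the pif-list fields. If that is right, I would 
expect something like this to work (just a guess on my part, not confirmed): 

    # Find the network that sits on the NIC I want (e.g. the one bridged as
    # xenbr1 on top of eth1)
    xe network-list params=uuid,name-label,bridge

    # Give it a recognizable name-label, then enter that exact string as the
    # "XenServer traffic label" in the CloudStack wizard ("cloud-guest" is
    # just a name I made up)
    xe network-param-set uuid=<network-uuid> name-label=cloud-guest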


Before leaving this step, I also wonder: why does it make me choose 
VLAN/STT/GRE? Can I not have a simple non-VLAN physical network? I am providing 
the isolation by means of the physical network itself. Am I going to have to 
bite the bullet and use VLAN-enabled switches for this? Perhaps I can limit any 
VLAN needs to trunking across the 10.0.20.0/24 network, since that does not use 
an external switch and would be simple to manage? 


On the next screen, it asks me to set up the "public" network. **sigh** More 
confusion... Should I enter the 63.135.177.208/28 details here? Or should I be 
entering something from the 10.0.20.0/24 network? 
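
In case it matters, here is how I read my own /28, assuming the public range 
is what belongs on this screen: 

    63.135.177.208/28  ->  16 addresses, .208 through .223
      .208 = network, .223 = broadcast
      .209 = router (gateway), .210 = management server
      .211 - .222 would be left over for the CloudStack public range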


On the screen after that, we configure the pods. I am pretty sure at this point 
I simply need to provide the 10.0.20.210 gateway and an unused range on the 
10.0.20.0/24 net - correct? 
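
In other words, something like this (the range being an arbitrary unused slice 
I would pick): 

    Gateway:  10.0.20.210
    Netmask:  255.255.255.0
    IP range: 10.0.20.100 - 10.0.20.150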


The next screen takes me to a VLAN range window. Again - do I really need to? 
I am trying to avoid VLANs like the plague. 


I understand "Adding a host" well enough, but if someone intimately familiar 
with CS could shed some light on the questions above, that would be excellent. 


One last consideration: not that I am anti-VLAN, but it is possible I will have 
to set up and semi-manage over 50 such "mini public cloud" deployments, so I 
really need to keep the overall deployment of each as simple as possible. I 
have a rather good understanding of networking and XenServer in general and 
would typically have done this via plain XenCenter, but I would rather have 
the CS GUI for end-users. 


Many thanks in advance! 


- Dean 

Dean M. Rantala 
Upper Cumberland IT 
IT Consultant 
(931) 284-7384 
(931) 268-0037 
www.uppercumberlandit.com 

