David mentioned Hurricane Electric, which is the company we use for colocation. 
Their prices are the best we were able to find, and most of their packages come 
with a large number of rack units. Power, as with most colo, is usually the 
limiting factor. At least in our setup, we have our own switches and router, 
meaning that the entire network is basically ours to set up as we like. This 
freedom is nice when it comes to using tagged VLANs and jumbo frames for Triton.
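As a concrete illustration, on SmartOS/Triton a zone NIC can be placed on a 
tagged VLAN and given a jumbo MTU directly in the vmadm payload. The nic tag, 
VLAN ID, and addresses below are made-up placeholder values, just to show the 
shape:

```json
"nics": [
  {
    "nic_tag": "external",
    "vlan_id": 300,
    "mtu": 9000,
    "ips": ["10.88.88.10/24"],
    "gateways": ["10.88.88.1"]
  }
]
```

Jumbo frames only pay off if every hop agrees: the zone VNIC, the physical 
NIC, and the switch port all have to carry the larger MTU, which is exactly 
the sort of thing you control when the switches are your own.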

http://he.net/colocation.html

Cliff

> On Mar 24, 2016, at 5:37 AM, Emmanuel Bastien <[email protected]> wrote:
> 
> Indeed I am only allocating RFC1918 addresses on the SDC networks.
> I am not doing 1:1 NAT at the moment, but that could be done simply by tuning 
> the NAT rules. Once OVH allows IP blocks to be routed to specific VLAN 
> IDs on the public interface, this setup will not be needed anymore. Still, I 
> like having a firewall/NAT between my zones and the Internet while 
> prototyping things like SDC+Manta. You could replace this single-point-of-failure 
> SmartOS firewall zone with a more highly available setup, though.
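For reference, both flavors of NAT can be expressed in a few lines of ipnat 
configuration inside such a firewall zone. The interface name and addresses 
below are placeholders (203.0.113.0/24 is a documentation prefix), not the 
actual setup:

```
# /etc/ipf/ipnat.conf in the firewall zone (illustrative)
# Many-to-one NAT for everything on the external VLAN:
map net1 10.2.0.0/24 -> 203.0.113.1/32 portmap tcp/udp auto
map net1 10.2.0.0/24 -> 203.0.113.1/32

# 1:1 (bidirectional) NAT for a single zone:
bimap net1 10.2.0.5/32 -> 203.0.113.5/32
```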
> 
> About iPXE and the USB key, note that the key is only needed on the headnode. 
> All other compute nodes install unattended through iPXE by announcing 
> themselves to the headnode's DHCP server. You only have to change the BIOS 
> settings on each compute node so that the PXE boot happens on the vrack 
> interface instead of the default public interface.
> 
> Emmanuel
> 
> On Wed, Mar 23, 2016 at 10:41 PM, Jon Dison <[email protected]> wrote:
> Thanks for all of the replies so far...
> Emmanuel...
> That's unfortunate about iPXE not working for SDC.  The USB option 
> should be okay.
> For the networking, are you saying that you're using RFC1918 addresses for 
> the instances in the external, manta, and mantanat VLANs and then exposing 
> them to the Internet with 1:1 NAT in your firewall zone?
> That sounds like an interesting hack to get around OVH's cookie-cutter 
> network, but I'm not quite certain I want to commit to the idea of having that 
> single point of failure.
> 
> On Wed, Mar 23, 2016 at 5:52 AM, Emmanuel Bastien <[email protected]> wrote:
> Hi Jon,
> 
> I successfully set up both SmartOS and SDC+Manta on OVH.
> SmartOS is very easy to install; it's even supported by OVH in beta, but last 
> time I checked they were still providing an old image.
> For SmartOS alone I went through an iPXE installation from a recent image I 
> uploaded to the OVH Object Storage.
> For SDC I had to use a USB key, as I did not manage to make the rather 
> large root partition available through iPXE.
> You can order a USB key for any OVH server directly from the manager 
> interface.
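For what it's worth, the standalone-SmartOS iPXE boot is a short script along 
these lines; the URL below is a placeholder for wherever the platform image is 
hosted (e.g. an Object Storage bucket), and extra -B boot arguments such as 
console settings may be needed depending on the hardware:

```
#!ipxe
dhcp
kernel http://images.example.com/smartos/platform/i86pc/kernel/amd64/unix -B smartos=true
initrd http://images.example.com/smartos/platform/i86pc/amd64/boot_archive
boot
```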
> 
> It is true that the network layout provided by OVH is not strictly compatible 
> with the recommendations for SDC.
> In my case, I did it with 5 nodes in a single vrack: 1 SmartOS server, 1 SDC 
> headnode and 3 SDC compute nodes.
> The 4 SDC nodes only use the OVH vrack interface, the Internet-facing 
> interface being left unconfigured.
> On the vrack, I have 4 VLANs: admin (VLAN 0), external, manta and mantanat.
> On the standalone SmartOS server, I set up two zones: one firewall zone and one 
> worker zone. The firewall zone provides NAT for the external and mantanat 
> VLANs, thanks to an IP block being routed to the vrack. What is not so clean 
> here is that VLAN 0 on the vrack mixes traffic for both the admin SDC 
> network and the Internet. AFAIK there is no other way, because the admin 
> network has to use VLAN 0 and OVH does not (yet) give you the option to 
> route IP blocks to a VLAN ID. That should change sooner or later, as they have 
> already announced plans to route IP blocks to VLANs on the public 
> network interface, instead of the vrack.
> 
> I am using cheap EG64 servers for this proof of concept. With 64 GB per node I 
> had to resize most of the SDC & Manta zones to avoid over-allocating memory.
> I would advise going for much more RAM per server if you plan to build a real 
> cluster. 4 nodes is also the bare minimum. With SDC+Manta you end up running 
> dozens of zones just for the cluster orchestration.
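The over-allocation problem is just arithmetic: the sum of the zones' memory 
caps has to stay under the node's physical RAM, minus some headroom for the 
global zone. A quick sketch (zone names and cap sizes are illustrative, not 
the actual SDC defaults):

```python
# Check whether a set of zone memory caps over-commits a node.
# All values are in MB; zone names and caps are illustrative only.

def memory_budget(node_ram_mb, zone_caps_mb, gz_reserve_mb=4096):
    """Return (total_caps, headroom); headroom < 0 means over-committed."""
    total = sum(zone_caps_mb.values())
    return total, node_ram_mb - gz_reserve_mb - total

# A 64 GB EG64-class node with a handful of fat core zones:
caps = {
    "manatee": 16384,
    "moray": 8192,
    "imgapi": 4096,
    "vmapi": 4096,
    "cnapi": 4096,
    "workflow": 4096,
    "other-core-zones": 24576,
}

total, headroom = memory_budget(64 * 1024, caps)
print(total, headroom)  # headroom comes out negative here, so caps must shrink
```

With more RAM per node (say 128 GB), the same caps fit with room to spare, 
which is the point of Emmanuel's sizing advice.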
> 
> Emmanuel
> 
> On Wed, Mar 23, 2016 at 1:56 AM, Jon Dison <[email protected]> wrote:
> Would anyone care to share their experience running at a dedicated hosting 
> provider?
> 
> I currently virtualize a lot of web servers on Proxmox hosted at OVH and 
> would love to switch over to SmartOS, but OVH's network infrastructure is 
> fairly cookie-cutter and they don't really allow you to get the sort of 
> network plumbing that SmartOS, and especially SDC, requires.
> 
> Has anyone managed to successfully set up SmartOS at OVH?
> Has anyone set up SmartOS and/or SDC somewhere else and had a positive 
> experience?
> 
> The thing I like about OVH is their prices.  I suppose you get what you pay 
> for.
> But the same servers cost twice as much with other providers, and I haven't 
> gotten a guarantee from them that they can set up the networking the way 
> I want either.
> 
> Thanks in advance,
> Jon
> 



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
