Thanks everyone for helping me figure out how I wanted my multiple controller nodes set up. Using keepalived to load-balance the services is well under way, and so far it looks like it's working.
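In case it helps anyone following along, the keepalived part boils down to a VRRP instance on each controller sharing a virtual IP that all the service endpoints point at. A minimal sketch (the interface name, router id, VIP and password below are placeholders, not my actual config):

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance OS_API {
    state MASTER            # BACKUP on the second controller
    interface eth0          # whichever NIC carries the API network
    virtual_router_id 51
    priority 150            # lower value (e.g. 100) on the second controller
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.10/24     # the VIP the OpenStack endpoints use
    }
}
```

If the MASTER dies, the BACKUP takes over the VIP, so clients never need to know which physical controller is answering.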
I still have to fine-tune and finish that work off. But modifying my setup scripts etc. to set this up (the physical hosts are set up using PXE and a bunch of scripts; the VMs are set up using Puppet - it's too late to puppetize the physical nodes, but that's fine, PXE and the scripts let me destroy a node and just recreate it in minutes) led me directly to another TODO I've had for a while, which now requires a decision.

My blade center has sixteen half-height nodes (eight in two rows). I've decided to have the first node as a control node (that's the one that's been up and running for months now), and the following seven nodes in the first row as internal (for my own personal use) compute nodes. Those nodes would be dedicated to my infrastructure: LDAP, Kerberos, SMTP, VoIP server etc., etc. Those seven are also fully working. Technically I only seem to need three nodes for all of that, but that would mean they would be quite cramped (I don't have that much memory in the machines, unfortunately), and I'd lose out on having services on different physical hosts. Also, possible future needs make it reasonable to dedicate all seven to my infrastructure.

Node nine (the first node on the second row) would then be my second control node (that's the one I'm setting up now), followed by seven physical nodes for "miscellaneous" (mostly my development and test machines of different operating systems, distributions and versions, friends' VMs, etc.).

So what this means is having two different availability zones: the first row is now named 'nova' (which is the default in OS, and it makes sense to keep that), and naming the second .. "user" (or whatever) seems like a reasonable approach.
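If I've understood the docs right, an AZ in Nova is just a host aggregate with a zone name attached, so creating the second zone should be something like this (the aggregate and host names here are made up to illustrate, not my real ones):

```shell
# Create a host aggregate that doubles as the 'user' availability zone
openstack aggregate create --zone user user-aggregate

# Add the second-row compute nodes to it (hostnames are placeholders)
openstack aggregate add host user-aggregate compute10
openstack aggregate add host user-aggregate compute11

# An instance pinned to that zone will only land on those hosts
openstack server create --availability-zone user \
    --image some-image --flavor some-flavor test-vm
```

The first-row nodes can stay where they are; hosts not in any zone-tagged aggregate fall into the default 'nova' AZ.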
From what I understand of AZs, if I create an instance I need to specify the AZ (I'm using Heat, and extremely early in my OS setup I created templates and worked around the need to specify the AZ - my template does that for me now), and the instance will ONLY be started in that specified AZ, which helps things get created in the right place. Eventually I'll probably have to figure out a way to make sure that only I can create instances in the 'nova' AZ, but that can be an exercise for another day.

However, and here comes the question after a long babble about what I'm up to: the two controller nodes need to be able to manage _both_ availability zones! I cannot (do not want to) waste more resources. Technically I don't really _NEED_ two controllers, but for at least rudimentary high availability (which, because they're on the same network, on the same switch, on the same breaker, on the same power cable etc., etc., is only imaginary, and one controller can still handle the load just fine even with the possible future use), having two makes at least _some_ shred of a 'good idea', and I've already dedicated these two physical nodes for that.

Is this possible? Can a controller node (i.e., a physical node that runs EVERYTHING but Nova - aodh, barbican, ceilometer, cinder, etc., etc., etc.!) also control multiple availability zones? They could of course each have their own separate _default_ AZ, even though I'm not sure that is needed.

The first objection I can think of is the networking: where does traffic go in/out of OS if the primary controller dies, and how do I solve that? Currently everything goes in and out via the first control node, which seems to be just fine. So what happens to the traffic if/when that dies? Will OS (Neutron) automatically use the second controller?

-- 
System administrator's motto: You're either invisible or in trouble.
- Unknown
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
