On Fri, Sep 26, 2014 at 2:01 PM, Angus Lees <g...@inodes.org> wrote:

> On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
> > Doesn't nova with a docker driver and heat autoscaling handle cases 2 and
> > 3 for control jobs? Has anyone tried yet?
>
> For reference, the cases were:
>
> > - Something to deploy the code (docker / distro packages / pip install /
> > etc)
> > - Something to choose where to deploy
> > - Something to respond to machine outages / autoscaling and re-deploy as
> > necessary
>
>
> I tried for a while, yes.  The problems I ran into (and I'd be interested
> to
> know if there are solutions to these):
>
> - I'm trying to deploy into VMs on rackspace public cloud (just because
> that's what I have).  This means I can't use the nova docker driver
> without constructing an entire self-contained openstack undercloud first.
>
> - heat+cloud-init (afaics) can't deal with circular dependencies (like
> nova<->neutron) since the machines need to exist first before you can
> refer to their IPs.
> From what I can see, TripleO gets around this by always scheduling them on
> the
> same machine and just using the known local IP.  Other installs declare
> fixed
> IPs up front - on rackspace I can't do that (easily).
> I can't use loadbalancers via heat for this because the loadbalancers need
> to
> know the backend node addresses, which means the nodes have to exist first
> and
> you're back to a circular dependency.
>
> For comparison, with kubernetes you declare the loadbalancer-equivalents
> (services) up front with a search expression for the backends.  In a second
> pass you create the backends (pods) which can refer to any of the
> loadbalanced
> endpoints.  The loadbalancers then reconfigure themselves on the fly to
> find the
> new backends.  You _can_ do a similar lazy-loadbalancer-reconfig thing with
> openstack too, but not with heat and not just "out of the box".
>
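
[For readers unfamiliar with the pattern: the two-pass approach described
above could be sketched roughly like this. This is an illustrative example
only, not from the original mail; the names, labels, and image are all
hypothetical, and the syntax shown is current Kubernetes API style.]

```yaml
# Pass 1: declare the service (the loadbalancer-equivalent) up front.
# It matches backends by label selector, so no backend IPs are needed yet.
apiVersion: v1
kind: Service
metadata:
  name: nova-api            # hypothetical name
spec:
  selector:
    app: nova-api           # the "search expression" for future backends
  ports:
    - port: 8774
---
# Pass 2: create the backends (pods).  They can already refer to the
# service's stable endpoint, and the service discovers them on the fly
# via the selector as they come and go.
apiVersion: v1
kind: Pod
metadata:
  name: nova-api-1
  labels:
    app: nova-api           # matches the selector above
spec:
  containers:
    - name: nova-api
      image: example/nova-api   # hypothetical image
```
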

Do you have a minimal template that shows what you are trying to do?
(just to demonstrate the circular dependency).
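
[An editorial sketch of the kind of cycle being described, in cut-down HOT
syntax; resource names, images, and flavors are placeholders, and real
templates would need network properties as well. Each server's user_data
needs the other server's address, so neither can be created first:]

```yaml
heat_template_version: 2013-05-23

resources:
  nova_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20          # placeholder
      flavor: m1.small          # placeholder
      user_data:
        str_replace:
          template: |
            # nova needs neutron's endpoint at boot
            neutron_url=http://$neutron_ip:9696/
          params:
            $neutron_ip: { get_attr: [neutron_server, first_address] }

  neutron_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20          # placeholder
      flavor: m1.small          # placeholder
      user_data:
        str_replace:
          template: |
            # neutron needs nova's endpoint for notifications
            nova_url=http://$nova_ip:8774/
          params:
            $nova_ip: { get_attr: [nova_server, first_address] }
```
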


> - My experiences using heat for anything complex have been extremely
> frustrating.  The version on rackspace public cloud is ancient and limited,
> and it is quite easy to get into a state where the only fix is to destroy
> the entire stack and recreate it.  I'm sure these are fixed in newer
> versions of heat, but last time I tried I was unable to run it standalone
> against an arm's-length keystone because some of the recursive heat
> callbacks became confused about which auth token to use.
>

Gus, we are working on improving standalone mode (Steven Baker has a patch
out for this).


>
> (I'm sure this can be fixed, if it wasn't already just me using it wrong
> in the
> first place.)
>
> - As far as I know, nothing in a heat/loadbalancer/nova stack will actually
> reschedule jobs away from a failed machine.  There's also no lazy
>

This might go part of the way there; the other part is detecting the
failed machine and somehow marking it as failed:
 https://review.openstack.org/#/c/105907/

> discovery/nameservice mechanism, so updating IP address declarations in
> cloud-
> configs tend to ripple through the heat config and cause all sorts of
> VMs/containers to be reinstalled without any sort of throttling or rolling
> update.
>
>
> So: I think there's some things to learn from the kubernetes approach,
> which
> is why I'm trying to gain more experience with it.  I know I'm learning
> more
> about the various OpenStack components along the way too ;)
>

This is valuable feedback; we need to improve Heat to make these use cases
work better.  But I also don't believe there is one tool for all jobs, so I
see little harm in trying other things out too.

Thanks
Angus


>
> --
>  - Gus
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>