On Tue, Dec 18, 2012 at 12:33 PM, Tom White <[email protected]> wrote:

> Fantastic work Andrei and Ioan!
>

Thanks Tom!


>
> A few questions:
>
> * Do you expect to have to write a new set of activities for each
> provider?


Yes - for now. I expect that we will be able to reuse many components later
on.


> I see that the EC2 layer uses the Amazon SDK, and Cloudstack
> uses jclouds, but I'd expect you could have a set of generic jclouds
> activities for all the clouds that jclouds supports.
>

We want to stay as close to the APIs as possible so that we can keep each
activity simple in terms of the number of API calls performed.

If an operation needs multiple API calls or a retry loop, we want to build
that into the process definition, not inside the activity.
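
To make that concrete, here is a rough sketch of an activity (simplified,
and the names may differ from the actual code):

    import org.activiti.engine.delegate.DelegateExecution;
    import org.activiti.engine.delegate.JavaDelegate;

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;

    /**
     * Simplified sketch (names may differ from the actual code): one
     * activity wraps exactly one provider API call. Retries, timeouts
     * and polling are modeled in the surrounding process definition.
     */
    public class RunInstancesActivity implements JavaDelegate {

        private final AmazonEC2 client;

        public RunInstancesActivity(AmazonEC2 client) {
            this.client = client;
        }

        @Override
        public void execute(DelegateExecution execution) {
            int size = (Integer) execution.getVariable("expectedSize");

            // exactly one API call - no retry loop, no sleep, no polling
            String reservationId = client.runInstances(
                new RunInstancesRequest()
                    .withImageId((String) execution.getVariable("imageId"))
                    .withMinCount(size)
                    .withMaxCount(size))
                .getReservation().getReservationId();

            execution.setVariable("reservationId", reservationId);
        }
    }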


> * How does the image caching work (and is it implemented yet)? Do you
> have to run a special step to create the image or is it a side effect
> of starting a cluster?
>

Not implemented yet - it's on the roadmap for 0.0.2. I am thinking about
implementing this as a side effect of starting a cluster, controlled by a
flag on the Pool description.
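
Roughly something like this - all the names below are made up, just to
illustrate the idea of a flag on the pool description:

    /**
     * Made-up names, only to illustrate the idea: the pool description
     * carries a flag telling the start-cluster process to snapshot the
     * prepared machine and reuse that image on later runs.
     */
    public class PoolSpec {

        private final String provider;        // e.g. "amazon" or "cloudstack"
        private final int expectedSize;
        private final boolean cacheBaseImage;

        public PoolSpec(String provider, int expectedSize, boolean cacheBaseImage) {
            this.provider = provider;
            this.expectedSize = expectedSize;
            this.cacheBaseImage = cacheBaseImage;
        }

        /** When true, the first run also creates a reusable base image. */
        public boolean isCacheBaseImage() { return cacheBaseImage; }
    }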


> * I can see that Whirr could use Provisionr for reliable provisioning,
> then run its bootstrap and configuration phases (this could be done by
> writing a ProvisionrClusterController). Are you planning on adding a
> configuration layer to Provisionr, one that takes advantage of the
> Activiti processing, or is that separate do you think?


We are not planning to add a configuration layer in the near future. Later
on I think we will implement that as a separate project.


> Like Roman, I'm interested in how we could use Bigtop's packages (and
> possibly Puppet scripts) with Provisionr.
>

As I said before, this is our main goal - we want to be able to create large
clusters with a pre-installed set of packages and files without DDoS-ing
external infrastructure.


> It's great to hear that you want to bring this to the ASF. Would it be
> a new incubator project or a part of Whirr? I can see arguments both
> ways, based on whether the Provisionr community is a different one to
> Whirr's or not.
>

Either way should work, but for now I want to focus on releasing 0.0.1
(happy path) and 0.0.2 (good error handling + base image caching).


>
> Cheers,
> Tom
>
> On Tue, Dec 18, 2012 at 12:04 AM, Andrei Savu <[email protected]>
> wrote:
> > On Mon, Dec 17, 2012 at 8:30 PM, Roman V Shaposhnik <[email protected]>
> > wrote:
> >
> >> This looks really interesting and I can see how it can be very useful
> >> for things like buildouts of classes of virtual nodes.
> >>
> >
> > That's our primary goal: we want to have a robust system that can create
> > pools of identical virtual machines on multiple clouds. From a user's
> > perspective all clusters should be more or less identical: same base
> > operating system, same DNS settings, same SSH credentials, same
> > firewall / security group settings, same packages, same files, etc.
> >
> >
> >> The question I have is this -- once you're done with automating the
> >> base-line provisioning, what's your involvement with higher-level
> >> orchestration?
> >>
> >
> > I think higher-level (service-specific) orchestration should be a
> > completely different layer that reacts to pool structure change events
> > and only assumes ssh access (direct or through a gateway). Any time a
> > set of nodes is added to or removed from a pool, the configuration
> > layer should be notified and react as needed.
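
To sketch the contract I have in mind (a hypothetical interface, nothing
like this exists in the codebase yet):

    import java.util.List;

    /**
     * Hypothetical interface - nothing like this exists yet. The
     * configuration layer only sees membership changes plus ssh
     * endpoints (direct or through a gateway).
     */
    public interface PoolEventListener {

        /** Called after new machines join the pool and are reachable over ssh. */
        void onMachinesAdded(String poolId, List<String> sshEndpoints);

        /** Called after machines leave the pool (terminated, failed, repaired). */
        void onMachinesRemoved(String poolId, List<String> sshEndpoints);
    }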
> >
> > We also want to make it possible to configure the pool management
> > process to repair the pool if virtual machines are destroyed due to
> > unexpected events (chaos monkey).
> >
> >
> >> It seems that one way for you to handle this is to hand off to the
> >> existing cluster orchestrators like CM and Ambari. This is fine, but
> >> I'm more interested in how extensible your architecture is.
> >>
> >
> > That's exactly what we are doing at Axemblr, and we have had a good
> > experience so far.
> >
> > Extensible in what sense? As in being able to handle new services? That's
> > not really important for us.
> >
> >> So here's my favorite use case -- suppose I need to stand up a ZooKeeper
> >> cluster from ZooKeeper RPM packages from the Bigtop distribution. Could
> >> you, please, walk me through each step? The more detailed the better!
> >>
> >
> > In this case I would start by creating a pool description that contains
> > instructions for registering a new RPM repository and installing the
> > ZooKeeper server RPM packages.
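
Sketched in code, reusing the made-up names from above (none of this is
real Provisionr API, and the repository URL is a placeholder):

    import java.util.Arrays;
    import java.util.List;

    /**
     * Made-up names, continuing the PoolSpec sketch: the description
     * carries the extra yum repository plus the packages to install
     * on every machine in the pool.
     */
    public class ZooKeeperPoolDescription {

        final int expectedSize = 3;
        final String yumRepositoryName = "bigtop";
        final String yumRepositoryUrl = "http://<bigtop-yum-repo>"; // placeholder
        final List<String> packages = Arrays.asList("zookeeper-server");
    }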
> >
> > If this is something I need to do many times, I would enable automatic
> > base image caching to speed things up (avoid the JDK install, avoid
> > repeated downloads, etc.).
> >
> > Because ZooKeeper does not (yet) support dynamic membership, I need to
> > wait for all the pool nodes to start before doing any configuration.
> >
> > The configuration layer should then generate the config files and start
> > the daemons on all nodes.
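
For ZooKeeper specifically, the generated config is mostly mechanical once
all the machines are known - a toy example:

    import java.util.Arrays;
    import java.util.List;

    /** Toy sketch: render a zoo.cfg once all pool members are known. */
    public class ZooCfgRenderer {

        public static String render(List<String> privateIps) {
            StringBuilder cfg = new StringBuilder()
                .append("tickTime=2000\n")
                .append("dataDir=/var/lib/zookeeper\n")
                .append("clientPort=2181\n")
                .append("initLimit=10\n")
                .append("syncLimit=5\n");
            // one server.N line per pool member - this is why we have to
            // wait for the full pool before configuring anything
            for (int i = 0; i < privateIps.size(); i++) {
                cfg.append("server.").append(i + 1)
                   .append('=').append(privateIps.get(i))
                   .append(":2888:3888\n");
            }
            return cfg.toString();
        }

        public static void main(String[] args) {
            System.out.print(render(Arrays.asList("10.0.0.1", "10.0.0.2", "10.0.0.3")));
        }
    }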
> >
> > And the last step would be to deploy some sort of monitoring to close the
> > loop: Provisioning -> Configuration -> Monitoring -> and back, all
> > triggered by events.
> >
> > What do you think?
> >
> > -- A
>
