Very interesting projects there, Steven Borrelli!
So, I've been working on structuring three classes of nodes for general
deployment across multiple cluster/cloud offerings.
(1) The traditional 'slave-node', which is 100% controlled by the
cluster/cloud master. This is the classical workload service typically
deployed today.
(2) The 'worker-node', which has some degree of autonomy (less than 100%
control by the master) so that it may communicate with and perform work
for other masters, or even migrate between master-nodes of different
cluster/cloud systems; for example, from a Mesos environment to an
OpenStack environment. Another aspect of the worker-node is that its
unique resources help it decide which types of problems (the ones the
masters are in charge of) to pursue. My initial intuition is hardware
resources, such as an RF spectrum analyzer externally attached to the
node, a DSP, an FPGA, or any sensor; but there would also be room for
unique software resources that are part of the worker's inherent OS.
(3) The 'entrepreneur-node', which can not only act as a worker-node but
can also decide to become a 'master-node' of a particular system, or
even set itself up as a new cluster and recruit from among class (2)
worker-nodes. In essence, the Autonomy_Function would be liberally
experimented with through a variety of mechanisms, ultimately in search
of work that needs to be performed, assimilation of the resources to
accomplish that work, and reporting of work status and accomplishments
to a 'higher authority'.
Continuous Integration (CI) comes to mind as an immediate area for
testing these concepts and codes. This work becomes quite easy IFF one
can presume that project authorities are willing to precisely define the
classes of subservient nodes, and their common features across different
cluster/cloud offerings, in a well-defined common data structure
(sketched below). If these various projects are not keen on these ideas
of node liberation, then it becomes a question of how best to define
each node outside of project controls. The latter option could lead to a
loss of control over these ideas for me.
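To make that common data structure concrete, here is a rough Python
sketch of what a cross-project node description might look like; every
name in it (NodeClass, autonomy, can_pursue, and so on) is my own
invention for illustration, not anything these projects define:

    # Hypothetical common node structure spanning cluster/cloud offerings.
    # All names here are invented for illustration only.
    from dataclasses import dataclass, field
    from enum import Enum

    class NodeClass(Enum):
        SLAVE = 1         # 100% controlled by a single master
        WORKER = 2        # partial autonomy; may serve or migrate to other masters
        ENTREPRENEUR = 3  # may promote itself to master or found a new cluster

    @dataclass
    class Node:
        node_class: NodeClass
        autonomy: float                  # 0.0 = fully controlled, 1.0 = fully autonomous
        masters: list = field(default_factory=list)    # e.g. ["mesos://...", "openstack://..."]
        resources: dict = field(default_factory=dict)  # e.g. {"rf_analyzer": 1, "fpga": 1}

        def can_pursue(self, requirements: dict) -> bool:
            """Class (2)+ nodes weigh their unique resources against a problem."""
            if self.node_class is NodeClass.SLAVE:
                return False  # a slave only does what its master assigns
            return all(self.resources.get(k, 0) >= v
                       for k, v in requirements.items())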
My vision is to use self-modifying code [1] for much of this work. As
such, would this sort of research be welcome in Cisco's
microservices-infrastructure project? In Mesos?
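As a toy illustration of the kind of self-modification I mean, here is a
Python sketch of a node that regenerates its own work-selection function
at runtime; the strategy strings are hypothetical:

    # Toy sketch of runtime self-modification: the node compiles new
    # decision code for itself on the fly. The strategies are hypothetical.
    def compile_strategy(source):
        namespace = {}
        exec(compile(source, "<generated>", "exec"), namespace)
        return namespace["choose_work"]

    # Start permissive: accept any task a master offers.
    choose_work = compile_strategy("def choose_work(task): return True")

    # Later, the node rewrites itself to favor tasks matching an attached FPGA.
    choose_work = compile_strategy(
        "def choose_work(task): return task.get('needs') == 'fpga'")

    print(choose_work({"needs": "fpga"}))  # True
    print(choose_work({"needs": "dsp"}))   # False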
[1] http://en.wikipedia.org/wiki/Self-modifying_code
James
On 06/09/2015 11:43 AM, Steven Borrelli wrote:
On behalf of the development team, I'm pleased to announce the 0.3.0
release of Microservices Infrastructure. In the weeks since 0.2, we've
added a number of features and improvements.
The software can be downloaded at:
https://github.com/CiscoCloud/microservices-infrastructure
Documentation is located at:
https://microservices-infrastructure.readthedocs.org/en/latest
I’ll be speaking next week at the NYC mesos meetup:
http://www.meetup.com/Apache-Mesos-NYC-Meetup/events/222932873
What is it?
Microservices Infrastructure is software that launches servers and then
configures them to support a wide range of applications - like
continuous delivery or real-time data processing.
This makes it easy to run application containers alongside data-centric
workloads like Kafka, HDFS, Cassandra and Elasticsearch. We take leading
open-source projects (Docker, Consul, Terraform, Mesos) and integrate
them to build a powerful platform.
Microservices Infrastructure deploys to multiple cloud providers in
minutes. High availability, service discovery, metrics, security, and
logging are built in.
All the components are released under an Apache 2.0 license. Bug reports
and pull requests are welcome.
New Features
Deployment to OpenStack, AWS and Google Cloud via Terraform
With the addition of OpenStack support
<https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md#040-april-2-2015>
to Terraform <http://terraform.io/>, Ansible-based cloud provisioning
has been deprecated. With this release we've included configurations for
OpenStack, Amazon Web Services, and Google Cloud. Future releases will
include storage, VPN, and networking configurations, and support for
more providers.
To make the cloud installation process smoother, we've included a
dynamic Ansible inventory script, terraform.py
<https://github.com/CiscoCloud/terraform.py>, that automatically
discovers your hosts across clouds from your Terraform tfstate file and
integrates them with your Ansible roles.
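To make the mechanism concrete, here's a minimal sketch of the idea
behind such a dynamic inventory script (this is not the actual
terraform.py, and the attribute names shown are illustrative): read the
tfstate file, pull out host addresses, and print Ansible inventory JSON.

    # Minimal sketch of a Terraform-tfstate-to-Ansible-inventory bridge.
    # Attribute names vary per provider; these are illustrative.
    import json
    import sys

    def inventory_from_tfstate(path="terraform.tfstate"):
        with open(path) as f:
            state = json.load(f)
        hosts = []
        for module in state.get("modules", []):
            for resource in module.get("resources", {}).values():
                attrs = resource.get("primary", {}).get("attributes", {})
                ip = attrs.get("access_ip_v4") or attrs.get("public_ip")
                if ip:
                    hosts.append(ip)
        return {"all": {"hosts": hosts}, "_meta": {"hostvars": {}}}

    if __name__ == "__main__":
        json.dump(inventory_from_tfstate(), sys.stdout, indent=2)

Ansible can consume an executable script like this directly as an
inventory source via its |-i| option.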
Logging with Logstash and collectd
This release includes support for collectd <https://collectd.org/> and
Logstash <https://www.elastic.co/products/logstash>. Collectd is used to
monitor system statistics, and Logstash can be used to forward system
logs to a central logging service.
0.3.0 includes collectd plugins for Docker, Mesos, Marathon, and ZooKeeper.
Simplified Vagrant runs
We've simplified the Vagrant process, removing the need to run the
security setup or install Python modules. |vagrant up| will bring up an
environment without needing to run any other commands.
Mesos-consul support
To improve service discovery, we've developed mesos-consul
<https://github.com/CiscoCloud/mesos-consul>, a tool that populates
Consul service discovery with Mesos tasks. A Mesos task |<taskname>|
will be automatically discoverable via DNS as |<taskname>.service.consul|.
One benefit of this approach is that Mesos leader detection is exposed
in Consul DNS: |leader.mesos.service.consul| will point to the current
Mesos leader.
Future releases will support populating Consul with Mesos Service
Discovery and labels.
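As a quick illustration, once a host's resolver forwards |*.consul|
queries to Consul's DNS interface (as the dnsmasq setup in this project
arranges), discovery is an ordinary DNS lookup; the task name |webapp|
here is hypothetical:

    # Assumes the local resolver forwards *.consul queries to Consul DNS.
    import socket

    # A hypothetical Mesos task named "webapp", registered by mesos-consul:
    print(socket.gethostbyname("webapp.service.consul"))

    # The current Mesos leader, as described above:
    print(socket.gethostbyname("leader.mesos.service.consul"))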
Marathon-consul support
We've developed a bridge between Marathon state and Consul with
marathon-consul <https://github.com/CiscoCloud/marathon-consul>. This
allows us to support richer haproxy configurations (see below).
Updated haproxy configuration
Our haproxy container <https://github.com/CiscoCloud/haproxy-consul> now
optionally supports reading marathon-consul data. This means we now
support non-HTTP proxying using Marathon global ports.
Future releases will support Mesos Service Discovery and labels to
fine-tune the proxy configuration.
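For the curious, the service data a proxy layer like this works from is
available straight from Consul's HTTP API; here is a minimal sketch (not
the haproxy-consul implementation itself) that lists the services
registered in a local agent:

    # Minimal sketch: list services registered in a local Consul agent.
    # Uses Consul's standard HTTP catalog API on its default port.
    import json
    import urllib.request

    CONSUL = "http://localhost:8500"

    with urllib.request.urlopen(CONSUL + "/v1/catalog/services") as resp:
        services = json.load(resp)

    for name, tags in services.items():
        print(name, tags)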
Improved security-setup script
You can selectively disable security settings at a granular level (for
example, turning off Marathon authentication), or disable security entirely.
ISO image creation and Packer support for Vagrant, AWS & Google Cloud
Initial support has been added for creating ISO images that can be used
on bare metal systems.
Packer <http://packer.io/> builds have been added for AWS, Google Cloud,
and Vagrant. OpenStack Glance support will be added in a future release.
Future releases will integrate these builds with Terraform in order to
speed up deployments.
Tech previews
* Support for HashiCorp's Vault <https://vault.io/>. Currently Vault
is installed using Consul as an HA backend. This will allow us to
dynamically manage credentials across servers, and keep SSL keys and
secrets out of your containers (see the sketch below).
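A minimal sketch of what that enables: an application fetches a
credential from Vault over its HTTP API at startup instead of baking it
into the container. The path |secret/db| and the token below are
hypothetical placeholders:

    # Minimal sketch: read a secret from Vault's HTTP API (generic backend).
    # The secret path and token are hypothetical placeholders.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:8200/v1/secret/db",
        headers={"X-Vault-Token": "REPLACE_WITH_VAULT_TOKEN"},
    )
    with urllib.request.urlopen(req) as resp:
        secret = json.load(resp)["data"]

    print(secret)  # e.g. {"password": "..."}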
Cleanups
* Use of NetworkManager to manage dnsmasq and |/etc/resolv.conf| has
been removed, in favor of using dnsmasq directly.
* We've cleaned up our containers and packages to be sourced from a
single repository. Packages come from the bintray.com/ciscocloud
<http://bintray.com/ciscocloud> account, and Docker images will
download from docker.io/ciscocloud.
* Ansible OpenStack provisioning playbooks and references are being
removed in favor of Terraform and dynamic inventory.
* Using |/etc/hosts| has been deprecated in favor of Consul DNS
(for example, |server.node.consul|).
* Ansible groups have been simplified, thanks to
https://github.com/CiscoCloud/microservices-infrastructure/pull/357
Getting Support
If you encounter any issues, please open a GitHub issue
<https://github.com/CiscoCloud/microservices-infrastructure> against the
project. We review issues daily.
We also have a Gitter chat room
<https://gitter.im/CiscoCloud/microservices-infrastructure>. Drop by and
ask any questions you might have. We'd be happy to walk you through your
first deployment.
Cisco Intercloud Services <https://developer.cisco.com/cloud> provides
support for OpenStack based deployments of Microservices Infrastructure.