Announcing: canary-releasing and autoscaling solution Vamp moving to beta

2016-09-11 Thread o...@magnetic.io
Hello everybody,

Here's a quick ping to announce that we’re moving our open-source 
canary-test/release and workflow-driven autoscaling solution Vamp to 0.9.0 and 
from alpha to beta stage.

We’d be very interested in any collaboration on improving Vamp and hearing your 
feedback on what we can improve or add.

https://github.com/magneticio/vamp/releases/tag/0.9.0

Thanks! Olaf

Olaf Molenveld
co-founder / CEO
-
VAMP: the Canary test and release platform for containers by magnetic.io
E: o...@magnetic.io
T: +31653362783
Skype: olafmol
www.vamp.io
www.magnetic.io


Re: Mesos on hybrid AWS&DC - Best practices?

2016-07-04 Thread o...@magnetic.io
I agree about separate clusters and tooling on top. This is exactly what 
several of our customers are using Vamp (vamp.io) for: gradual, controlled 
(canary) migration from legacy/current environments and applications (often in 
their own DCs) to modern container-based environments (often on public clouds 
like AWS). Vamp’s gateways can manage the canary routing based on HAProxy, and 
our integration with DC/OS can handle container deployments, (auto)scaling and 
routing/load-balancing on the modern DC/OS cluster.
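
For illustration, here is a minimal conceptual sketch (not Vamp's actual API or 
gateway configuration) of shifting canary traffic step by step from a legacy 
backend to a new DC/OS-hosted backend, rendered as HAProxy server weights; the 
backend names and addresses are made up:

# Conceptual sketch: shift traffic gradually from a legacy backend to a new
# DC/OS-hosted backend, rendered as HAProxy 'server' weights (range 0-256).

def canary_weights(step, total_steps):
    """Return routing weights in percent for a given canary step."""
    new = round(100 * step / total_steps)
    return {"legacy-dc": 100 - new, "new-dcos": new}

def haproxy_server_lines(weights, servers):
    """Render HAProxy 'server' lines with weights proportional to the split."""
    lines = []
    for name, pct in weights.items():
        host, port = servers[name]
        lines.append("    server %s %s:%d weight %d"
                     % (name, host, port, round(256 * pct / 100)))
    return lines

if __name__ == "__main__":
    servers = {"legacy-dc": ("10.0.0.10", 8080), "new-dcos": ("10.1.0.10", 8080)}
    for step in range(5):  # 0%, 25%, 50%, 75%, 100% to the new backend
        print("# canary step %d" % step)
        print("\n".join(haproxy_server_lines(canary_weights(step, 4), servers)))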

Cheers, Olaf

Olaf Molenveld
co-founder / CEO
-
VAMP: the Canary test and release platform for containers by magnetic.io
E: o...@magnetic.io
T: +31653362783
Skype: olafmol
www.vamp.io
www.magnetic.io






> On 30 Jun 2016, at 19:05, Sharma Podila  wrote:
> 
> I would second the suggestion of separate Mesos clusters for DC and AWS, with 
> a layer on top for picking one or the other based on the job SLAs and resource 
> requirements.
> The local storage on cloud instances is more ephemeral than I'd expect the 
> DC instances to be, so persistent storage of job metadata needs 
> consideration. Using something like DynamoDB may work; however, depending on 
> the scale of your operations, you may have to plan for EC2 rate-limiting its 
> API calls and/or pay for higher IOPS for data storage/access. 
> Treating the cloud instances as immutable infrastructure has additional 
> benefits. For example, we deploy a new Mesos master ASG for version upgrades, 
> let the new masters join the quorum, and then "tear down" the old master ASG. 
> The same goes for agents, although for agent migration our framework does 
> coordinate the migration of jobs from the old agent ASG to the new one, with 
> SLAs so that not too many instances of a service are down at a time. This is 
> roughly what the maintenance primitives from Mesos aim to address.
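
For illustration, a minimal sketch of that master-ASG rotation pattern using 
boto3; the ASG and launch-configuration names, subnets and the quorum check are 
hypothetical and simplified:

# Sketch of "bring up new master ASG, wait for quorum, tear down old ASG".
# Names and subnets below are placeholders, not real resources.
import time
import boto3
import requests

asg = boto3.client("autoscaling")

def masters_look_healthy(leader_url):
    # The Mesos master exposes cluster state as JSON at /master/state.
    # Simplified check: a leading master has been elected. In practice you
    # would also verify that the new instances are part of the quorum.
    state = requests.get(leader_url + "/master/state", timeout=5).json()
    return state.get("elected_time") is not None

# 1. Bring up the new master ASG alongside the old one.
asg.create_auto_scaling_group(
    AutoScalingGroupName="mesos-master-v2",        # hypothetical
    LaunchConfigurationName="mesos-master-lc-v2",  # hypothetical
    MinSize=3, MaxSize=3, DesiredCapacity=3,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical
)

# 2. Wait until the cluster looks healthy with the new masters in place.
while not masters_look_healthy("http://leader.mesos:5050"):
    time.sleep(30)

# 3. Tear down the old master ASG.
asg.delete_auto_scaling_group(AutoScalingGroupName="mesos-master-v1",
                              ForceDelete=True)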
> 
> 
> On Thu, Jun 30, 2016 at 9:41 AM, Ken Sipe wrote:
> I would suggest a cluster on AWS and a cluster on-prem, then tooling on 
> top to manage between the two.
> It is unlikely that a failure of a task on-prem should have a scheduled 
> replacement on AWS, or vice versa. It is likely that you will end up 
> creating constraints to statically partition the clusters anyway, IMO.
> Two clusters eliminate most of your proposed questions.
> 
> ken
> 
> > On Jun 30, 2016, at 10:57 AM, Florian Pfeiffer wrote:
> >
> > Hi,
> >
> > For the last 2 years I managed a Mesos cluster on bare metal on-premise. Now, 
> > at my new company, the situation is a little bit different, and I'm 
> > wondering if there are any best practices:
> > The company is in the middle of a transition from on-premise to AWS. The 
> > old stuff is still running in the DC, the newer microservices are running 
> > within autoscaling groups on AWS, and other AWS services like DynamoDB, 
> > Kinesis and Lambda are also on the rise.
> >
> > So in my naive view of the world (where no problems occur. Never!) I'm 
> > thinking that it would be great to span a hybrid Mesos cluster over AWS and 
> > the DC to leverage the still-available resources in the DC, which gets more 
> > and more underutilized over time.
> >
> > Now my naive world view is slowly crumbling, and I realize that I'm missing 
> > experience with AWS. Questions that are already popping up (besides all 
> > those questions that I don't yet know I'll have...) are:
> > * Is a Virtual Private Gateway to my VPC enough, or do I need to aim for a 
> > Direct Connect?
> > * Put everything into one account, or use a multi-account strategy? (Mainly 
> > to prevent things running amok and dragging stuff down by hitting an 
> > account-wide shared limit?)
> > * Will e.g. DynamoDB be "fast" enough if it's accessed from the datacenter?
> >
> > I'll appreciate any feedback or lessons learned about that topic :)
> >
> > Thanks,
> > Florian
> >
> 
> 



Re: Mesos integration with OpenStack HEAT AutoScaling

2016-02-16 Thread o...@magnetic.io
Hi Everybody,

I would also suggest using the Magnum API to do this. For our 
canary-testing/releasing and autoscaling platform Vamp (www.vamp.io) we’re 
currently setting up a collaboration project to develop a Vamp-Magnum driver. 
This way Vamp workflows can orchestrate and coordinate the scaling activities 
between the IaaS layer and Mesos/Marathon (or any other supported scheduler). 
This coordination is essential, as challenges like bin-packing and other 
optimisation strategies will become evident very quickly.
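
For illustration, a rough sketch of this kind of IaaS/scheduler coordination 
(not the actual Vamp-Magnum driver): it reads cluster utilisation from the 
Mesos master's /metrics/snapshot endpoint, scales a Marathon app via the public 
/v2/apps API, and leaves the Magnum-side scale-out as a hypothetical 
placeholder; the endpoint addresses are assumptions:

import requests

MARATHON = "http://marathon.mesos:8080"   # assumed Marathon address
MESOS = "http://leader.mesos:5050"        # assumed Mesos master address

def cluster_cpu_utilisation():
    """Fraction of the cluster's CPUs currently in use (Mesos master metrics)."""
    m = requests.get(MESOS + "/metrics/snapshot", timeout=5).json()
    return m["master/cpus_used"] / max(m["master/cpus_total"], 1)

def scale_app(app_id, instances):
    """Ask Marathon to scale an app to the given instance count."""
    r = requests.put(MARATHON + "/v2/apps/" + app_id,
                     json={"instances": instances}, timeout=5)
    r.raise_for_status()

def scale_out_cluster(extra_nodes):
    """Hypothetical placeholder for adding nodes through the Magnum API."""
    raise NotImplementedError

def coordinate(app_id, wanted_instances):
    # If the Mesos cluster is nearly full, grow the IaaS layer first;
    # otherwise bin-packing pressure just leaves new tasks unscheduled.
    if cluster_cpu_utilisation() > 0.8:
        scale_out_cluster(extra_nodes=1)
    scale_app(app_id, wanted_instances)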

If there’s interest in collaboration on this please give me a ping!

cheers, Olaf

Olaf Molenveld
co-founder / CEO
-
magnetic.io: innovating enterprises
VAMP: canary test and release platform for containers
E: o...@magnetic.io
T: +31653362783
Skype: olafmol
www.magnetic.io
www.vamp.io
> On 16 Feb 2016, at 09:20, Guangya Liu  wrote:
> 
> Hi Peter,
> 
> Have you ever tried Magnum (https://github.com/openstack/magnum), the 
> container service in OpenStack that leverages HEAT to integrate with 
> Kubernetes, Swarm and Mesos? With Magnum, you do not need to maintain your own 
> HEAT template; Magnum does this for you, which is simpler than using HEAT 
> directly.
> 
> Magnum now supports both scaling up and scaling down; when scaling down, 
> Magnum will select the node that has no containers or the fewest containers.
> 
> Mesos now supports host maintenance 
> (https://github.com/apache/mesos/blob/master/docs/maintenance.md), which can 
> be leveraged by HEAT or Magnum: when HEAT or Magnum wants to scale down a 
> host, a cloud-init script can first put the host into maintenance before HEAT 
> deletes it. Host maintenance will emit an "InverseOffer", and you can update 
> the framework to handle the "InverseOffer" for the host that is going to be 
> scaled down.
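
For illustration, a minimal sketch of scheduling such a maintenance window 
through the Mesos master endpoint described in maintenance.md; the master 
address, hostname and IP below are placeholders:

import time
import requests

MASTER = "http://leader.mesos:5050"   # assumed master address

def schedule_maintenance(hostname, ip, duration_s):
    """POST a maintenance window; the master then sends InverseOffers for
    this machine to the registered frameworks."""
    start_ns = int(time.time() * 1e9)
    schedule = {
        "windows": [{
            "machine_ids": [{"hostname": hostname, "ip": ip}],
            "unavailability": {
                "start": {"nanoseconds": start_ns},
                "duration": {"nanoseconds": duration_s * 10**9},
            },
        }]
    }
    requests.post(MASTER + "/master/maintenance/schedule",
                  json=schedule, timeout=5).raise_for_status()

# e.g. drain the host for an hour before HEAT/Magnum removes it:
schedule_maintenance("agent-42.example.com", "10.0.0.42", duration_s=3600)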
> 
> Thanks,
> 
> Guangya
>  
> 
> On Tue, Feb 16, 2016 at 4:02 PM, Petr Novak wrote:
> Hello,
> we are considering adopting Mesos, but at the same time we need to run it on 
> top of OpenStack in some places. My main question is about how, and whether, 
> autoscaling defined via HEAT templates works together with Mesos, and how it 
> has to be done. I assume that scaling up is not much of a problem: when Mesos 
> detects more resources it notifies frameworks, which might scale based on 
> their built-in strategies, though I assume this can't be defined in HEAT 
> templates. Scaling down has to go through some cooperation between Mesos and 
> HEAT. Do I have to update Mesos frameworks' source code to somehow listen to 
> OpenStack events or something like this?
> 
> Is there any ongoing effort from Mesosphere and OpenStack to integrate more 
> closely in this regard?
> 
> Many thanks for any pointers regarding other possible problems, and any 
> clarification,
> Petr
> 



Re: Powered by mesos list

2016-01-14 Thread o...@magnetic.io
Thanks!

I would like to hear opinions on how to categorise our solution and maybe 
restructure/rephrase this page:

Vamp is not so much “built on Mesos” as it “makes use of Mesos”: it offers 
higher-level features via our Mesos/Marathon driver. Maybe it’s semantics, but 
I just wanted to check with the community.

Also, we’re a canary-testing and releasing framework, which doesn’t really seem 
to fit the current categories in the “built on Mesos” article. The closest 
category would be “Batch Scheduling”, but that wouldn’t entirely fit the 
use case of Vamp. My suggestion would be “Continuous deployment, testing and 
scaling”.

Any thoughts/suggestions?

Thanks, Olaf


> On 05 Jan 2016, at 02:54, Benjamin Mahler  wrote:
> 
> There are two sections: 'Organizations Using Mesos' and 'Software projects 
> built on Mesos'. The latter links to the list of frameworks.
> 
> If you fit either of these descriptions, then we can get you added, just 
> forward to us a pull request or reviewboard request.
> 
> On Tue, Dec 8, 2015 at 10:34 PM, Olaf Magnetic wrote:
> Hi Benjamin,
> 
> What are the criteria to be included on the Powered by Mesos list? Would love 
> to have our canary-test and release framework VAMP (www.vamp.io), which runs 
> on Mesos/Marathon, on this list too. 
> 
> Cheers, Olaf 
> 
> 
> On 08 Dec 2015, at 22:36, Benjamin Mahler wrote:
> 
>> Thanks for sharing Arunabha! I'm a big fan of the multi-framework compute 
>> platform approach, please share your feedback along the way :)
>> 
>> Would you like to be added to the powered by mesos list?
>> https://github.com/apache/mesos/blob/master/docs/powered-by-mesos.md 
>> 
>> 
>> On Mon, Dec 7, 2015 at 1:30 PM, Arunabha Ghosh wrote:
>> Hi Folks,
>>   We at Moz have been working for a while on RogerOS, our next-gen 
>> application platform built on top of Mesos. We've reached a point in the 
>> project where we feel it's ready to share with the world :-)
>> 
>> The blog posts introducing RogerOS can be found at
>>  
>> https://moz.com/devblog/introducing-rogeros-part-1/ 
>> 
>> https://moz.com/devblog/introducing-rogeros-part-2/ 
>> 
>> 
>> I can safely say that without Mesos, it would not have been possible for us 
>> to have built the system within the constraints of time and resources that 
>> we had. As we note in the blog 
>> 
>> " We are very glad that we chose Mesos though. It has delivered on all of 
>> its promises and more. We’ve had no issues with stability, extensibility, 
>> and performance of the system and it has allowed us to achieve our goals 
>> with a fraction of the development resources that would have been required 
>> otherwise. "
>> 
>> We would also like to thank the wonderful Mesos community for all the help 
>> and support we've received. Along the way we've tried to contribute back to 
>> the community through talks at Mesoscon and now through open sourcing our 
>> efforts.
>> 
>> Your feedback and thoughts are always welcome !
>> 
>> Thanks,
>> Arunabha
>> 
>>   
>> 
> 



Looking for Vamp canary-testing feedback

2016-01-04 Thread o...@magnetic.io
Hello everybody,

First, I want to wish you and your families all the best for 2016!

I have a small request. I have no intention to offend anyone on this list, as I 
think it’s topic-specific and relevant to a lot of the Mesos use cases we’re 
hearing about.

We are building Vamp (www.vamp.io), an open-source “canary” testing and 
releasing framework for containers, with drivers to run on top of container 
schedulers, the most important of which is Mesos/Marathon.

Vamp adds higher-level deployment, testing and scaling features to 
Mesos/Marathon, which can be used via our API, DSL, CLI or GUI.

It basically closes the loop between deployment orchestration, 
load-balancing/routing and scaling, by continuously monitoring and analysing 
events and metrics, and adjusting the entire system (schedulers, load-balancers 
(HAProxy), API gateways) to perform as defined in Vamp's blueprints and 
workflows.
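
As a conceptual illustration only (not Vamp's actual blueprint or workflow 
DSL), here is a minimal sketch of such a closed loop: observe a metric, compare 
it with a blueprint-like target, and adjust the canary routing weight, rolling 
back if the error rate exceeds the target:

import random
import time

blueprint = {                        # illustrative targets only
    "service": "shop-frontend",
    "weight_step": 10,               # percent of traffic to shift per healthy cycle
    "max_error_rate": 0.01,          # roll back above 1% errors
}

def observe_error_rate(service):
    """Stand-in for reading events/metrics from the monitoring pipeline."""
    return random.uniform(0.0, 0.02)

def set_canary_weight(service, weight):
    """Stand-in for updating the gateway / load-balancer configuration."""
    print("%s: routing %d%% of traffic to the new version" % (service, weight))

def control_loop(bp):
    weight = 0
    while weight < 100:
        if observe_error_rate(bp["service"]) > bp["max_error_rate"]:
            set_canary_weight(bp["service"], 0)   # roll back the canary
            return
        weight = min(100, weight + bp["weight_step"])
        set_canary_weight(bp["service"], weight)
        time.sleep(1)

control_loop(blueprint)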

We have run promising trials with smaller and bigger companies, improved Vamp 
considerably over the last months, and are now looking for new organisations to 
test how Vamp can help increase the adoption and value of Mesos/Marathon setups.

If you’re interested in trying this out, and maybe working together on this, I 
would be very happy to hear from you and see how we can help you out!

Thank you very much! Olaf

Olaf Molenveld
co-founder / CEO
-
magnetic.io: innovating enterprises
VAMP: canary test and release platform for containers
E: o...@magnetic.io
T: +31653362783
Skype: olafmol
www.magnetic.io
www.vamp.io







Re: Mesos at Moz

2015-12-08 Thread o...@magnetic.io
Hi Arunabha,

RogerOS looks great; congratulations on all the work, and on sharing and 
open-sourcing it! :)
 
Olaf

Olaf Molenveld
co-founder / CEO
-
magnetic.io: innovating enterprises
VAMP: canary test and release platform for containers
E: o...@magnetic.io
T: +31653362783
Skype: olafmol
www.magnetic.io
www.vamp.io
> On 07 Dec 2015, at 22:30, Arunabha Ghosh  wrote:
> 
> Hi Folks,
>   We at Moz have been working for a while on RogerOS, our next-gen 
> application platform built on top of Mesos. We've reached a point in the 
> project where we feel it's ready to share with the world :-)
> 
> The blog posts introducing RogerOS can be found at
>  
> https://moz.com/devblog/introducing-rogeros-part-1/
> https://moz.com/devblog/introducing-rogeros-part-2/
> 
> I can safely say that without Mesos, it would not have been possible for us 
> to have built the system within the constraints of time and resources that we 
> had. As we note in the blog 
> 
> " We are very glad that we chose Mesos though. It has delivered on all of its 
> promises and more. We’ve had no issues with stability, extensibility, and 
> performance of the system and it has allowed us to achieve our goals with a 
> fraction of the development resources that would have been required 
> otherwise. "
> 
> We would also like to thank the wonderful Mesos community for all the help 
> and support we've received. Along the way we've tried to contribute back to 
> the community through talks at Mesoscon and now through open sourcing our 
> efforts.
> 
> Your feedback and thoughts are always welcome !
> 
> Thanks,
> Arunabha
> 
>