[openstack-dev] [Neutron][LBaaS] LBaaSv2 with HAproxy Agent Deployment Issue

2016-06-07 Thread Daneyon Hansen (danehans)
All,

I am trying to add Neutron LBaaSv2 to a working OpenStack Liberty deployment. I 
am running into an issue where the lbaas agent does not appear in the output of 
neutron agent-list. However, the lbaas extension appears in the output of 
neutron ext-list. After investigating further, the lbaas-agent sends a message 
on the queue and times out waiting for a reply:

2016-06-06 21:09:15.958 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 10.32.20.52:5672
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready devices
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py", line 152, in sync_state
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     ready_instances = set(self.plugin_rpc.get_ready_devices())
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py", line 36, in get_ready_devices
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     return cctxt.call(self.context, 'get_ready_devices', host=self.host)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=self.retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     timeout=timeout, retry=retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 420, in _send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     result = self._waiter.wait(msg_id, timeout)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 318, in wait
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     message = self.waiters.get(msg_id, timeout=timeout)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 223, in get
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     'to message ID %s' % msg_id)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed out waiting for a reply to message ID eae3cc1bc8614aa8ae499d92ca4ec731

I verified that the lbaas queues reside within the Rabbit cluster:

bash-4.2$ rabbitmqctl list_queues

n-lbaas_agent                                             0
n-lbaas_agent.control-server-1.novalocal                  0
n-lbaas_agent.control-server-2.novalocal                  0
n-lbaas_agent.control-server-3.novalocal                  0
n-lbaas_agent_fanout_18a3b28c969148f3a008df8f3e5f5363     0
n-lbaas_agent_fanout_a7d48e8a1b27443d82ee4944bec44cf8     0
n-lbaas_agent_fanout_b5360edb19c240e79c71d60806977f66     0
n-lbaasv2-plugin                                          0
n-lbaasv2-plugin.control-server-1.novalocal               0
n-lbaasv2-plugin.control-server-2.novalocal               0
n-lbaasv2-plugin.control-server-3.novalocal               0
n-lbaasv2-plugin_fanout_5cbb6dd4fafc4c4784add8a20e0a28a5  0
n-lbaasv2-plugin_fanout_756ee4e4eee547528d0f6e3dde71b150  0
n-lbaasv2-plugin_fanout_7629f7bb85ce493d83c334dfcc2cd4aa  0
notifications.info                                        8


And the lbaas queues are being mirrored:

# rabbitmq server logs
=INFO REPORT 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-3': <3038.25481.1>

=INFO REPORT 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-2': <3037.25635.1>

=INFO REPORT 6-Jun-2016::19:01:23 ===
Mirrored queue 
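
Given that the agent connects to AMQP but never receives a reply, the usual 
suspect is that nothing server-side is consuming on the n-lbaasv2-plugin queue 
(i.e. the LBaaSv2 service plugin is not loaded by neutron-server). A minimal set 
of checks along those lines (paths and the plugin class name below are the 
common Liberty defaults, not verified against this deployment):

# is anything consuming the plugin-side queue? (run on a rabbit node)
rabbitmqctl list_queues name consumers | grep lbaasv2

# is the v2 service plugin loaded on the neutron-server nodes?
grep ^service_plugins /etc/neutron/neutron.conf
# the list is expected to include:
#   neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

# agent-side driver configuration
grep -E '^(interface_driver|device_driver)' /etc/neutron/lbaas_agent.ini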

Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Daneyon Hansen (danehans)


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit. However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage. Coupling 
Magnum to Barbican doubles the size of the system (1 project -> 2 projects), 
which will significantly increase the overall complexity.

· For developers, it incurs significant overhead on development, 
quality assurance, and maintenance.

· For operators, it doubles the effort of deploying and 
monitoring the system.

· For users, a large system is likely to be unstable and fragile, which 
affects the user experience.

From my point of view, I would like to minimize the system we ship, so that we 
can reduce the maintenance overhead and provide a stable system to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise they will choose our competitors (there are container services 
everywhere). Please understand that we are not a mature project, like Nova, 
which has thousands of users. We really don’t have the power to force our users 
to do what they don’t like to do.

I also recognize that there are several disagreements from the Barbican team. 
Per my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum. To address that, I am going to 
propose an idea to achieve the goal without duplicating Barbican. In particular, 
I suggest adding support for an additional authentication system (Keystone in 
particular) for our Kubernetes bays (and potentially for swarm/mesos). As a 
result, users can specify how to secure their bay’s API endpoint:

· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.

· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I believe this is a sensible option that addresses the original problem 
statement in [1]:

"Magnum currently controls Kubernetes API services using unauthenticated HTTP. 
If an attacker knows the api_address of a Kubernetes Bay, (s)he can control the 
cluster without any access control."

The problem statement in [1] is about authenticating the bay API endpoint, not 
encrypting it. With the option you propose, we can leave the existing 
tls-disabled attribute alone and continue supporting encryption. Using Keystone 
to authenticate the Kubernetes API already exists outside of Magnum in 
Hypernetes [2]. We will need to investigate support for the other COE types.

[1] https://github.com/openstack/magnum/blob/master/specs/tls-support-magnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/
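
For context, encryption is already toggled per baymodel today through the 
tls_disabled attribute. A hedged example with the python-magnumclient CLI of 
that era (image/keypair/network names are placeholders, and flag spellings may 
vary slightly by release):

magnum baymodel-create --name k8s-keystone-demo \
  --image-id fedora-21-atomic-5 \
  --keypair-id testkey \
  --external-network-id public \
  --coe kubernetes \
  --tls-disabled

The Keystone option proposed above would presumably surface as a similar 
baymodel attribute rather than a change to this flag.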



I am going to send another ML to describe the details. You are welcome to 
provide your inputs. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would ask you to justify why this approach works for Heat but 
not for Magnum. I also wonder whether Heat has a plan to set a hard dependency 
on Barbican just for protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, so at the time Barbican may not have been an option (or would have been a 
high-risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules are 
often 

[openstack-dev] [barbican] High Availability

2016-03-21 Thread Daneyon Hansen (danehans)
All,

Does anyone have experience deploying Barbican in a highly-available fashion? 
If so, I'm interested in learning from your experience. Any insight you can 
provide is greatly appreciated.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][release] Announcement of release of Mitaka rc1!

2016-03-20 Thread Daneyon Hansen (danehans)

For sure.

Regards,
Daneyon

On Mar 19, 2016, at 1:52 PM, Steven Dake (stdake) 
<std...@cisco.com> wrote:

Thanks - although it wasn't me that made it happen, it was the team.

Regards,
-steve


From: "Daneyon Hansen (danehans)" 
<daneh...@cisco.com<mailto:daneh...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 18, 2016 at 1:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [kolla][release] Announcement of release of Mitaka 
rc1!

Steve,

Congratulations on the release!!!

Regards,
Daneyon

On Mar 17, 2016, at 9:32 AM, Steven Dake (stdake) 
<std...@cisco.com> wrote:


The Kolla community is pleased to announce the release of Mitaka milestone RC1.  
This may look like a large list of features, but really it was finishing the job 
on 1 or 2 services that were missing for each of our major blueprints for 
Mitaka.


Mitaka RC1 Features:

  *   MariaDB lights out recovery
  *   Full upgrades of all OpenStack services
  *   Full upgrades of all OpenStack infrastructure
  *   Full reconfiguration of all OpenStack services
  *   Full reconfiguration of all OpenStack infrastructure
  *   All containers now run as a non-root user
  *   Added support for Docker IPC host namespace
  *   Cleaned up false haproxy warning about resource unavailability
  *   Improved Vagrant scripts
  *   Mesos DNS container
  *   Ability to use a local archive or directory for source build
  *   Mesos has a new per-service CLI to make deployment and upgrades more 
flexible
  *   Mesos has better constraints to deal with multi-host deployment
  *   Mesos has better dependencies for nova and neutron (inter-host 
dependencies)

For more details, check out our blueprint feature and bug tracker here:


https://launchpad.net/kolla/+milestone/mitaka-rc1
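
For readers who have not tried it yet, the upgrade and reconfiguration features 
above are driven through the kolla-ansible tooling. A rough sketch of the 
workflow (action names as of Mitaka; check the deployment docs for your version):

kolla-ansible deploy        # initial deployment
kolla-ansible reconfigure   # push changed service configuration to containers
kolla-ansible upgrade       # roll services forward to new container images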


We are super excited about the release of Kolla Mitaka-rc1!  We produced some 
really impressive output in 12 days, implementing solutions for 4 blueprints 
and 46 bugs.  This cycle our core team grew by one member, Alicja Kwasniewska.  
Our community continues to remain extremely diverse and is growing, with 203 IC 
interactions and 40 corporate affiliations.  Check out our stackalytics page at:

http://stackalytics.com/?module=kolla-group=person-day

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)
emented in Magnum, which may depend on libraries, but not Barbican.
>>> 
>>> 
>>> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
>> 
>> The context there is important. Barbican was considered for two purposes: 
>> (1) CA signing capability, and (2) certificate storage. My willingness to 
>> implement an alternative was based on our need to get a certificate 
>> generation and signing solution that actually worked, as Barbican did not 
>> work for that at the time. I have always viewed Barbican as a suitable 
>> solution for certificate storage, as that was what it was first designed 
>> for. Since then, we have implemented certificate generation and signing 
>> logic within a library that does not depend on Barbican, and we can use that 
>> safely in production use cases. What we don’t have built in is what Barbican 
>> is best at, secure storage for our certificates that will allow 
>> multi-conductor operation.
>> 
>> I am opposed to the idea that Magnum should re-implement Barbican for 
>> certificate storage just because operators are reluctant to adopt it. If we 
>> need to ship a Barbican instance along with each Magnum control plane, so be 
>> it, but I don’t see the value in re-inventing the wheel. I promised the 
>> OpenStack community that we were out to integrate with and enhance OpenStack 
>> not to replace it.
>> 
>> Now, with all that said, I do recognize that not all clouds are motivated to 
>> use all available security best practices. They may be operating in 
>> environments that they believe are already secure (because of a secure 
>> perimeter), and that it’s okay to run fundamentally insecure software within 
>> those environments. As misguided as this viewpoint may be, it’s common. My 
>> belief is that it’s best to offer the best practice by default, and only 
>> allow insecure operation when someone deliberately turns off fundamental 
>> security features.
>> 
>> With all this said, I also care about Magnum adoption as much as all of us, 
>> so I’d like us to think creatively about how to strike the right balance 
>> between re-implementing existing technology, and making that technology 
>> easily accessible.
>> 
>> Thanks,
>> 
>> Adrian
>> 
>>> 
>>> Best regards,
>>> Hongbin
>>> 
>>> -Original Message-
>>> From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
>>> Sent: March-17-16 4:32 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum] High Availability
>>> 
>>> I have trouble understanding that blueprint. I will put some remarks on the 
>>> whiteboard. Duplicating Barbican sounds like a mistake to me.
>>> 
>>> --
>>> Adrian
>>> 
>>>> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>>>> 
>>>> The problem of missing Barbican alternative implementation has been raised 
>>>> several times by different people. IMO, this is a very serious issue that 
>>>> will hurt Magnum adoption. I created a blueprint for that [1] and set the 
>>>> PTL as approver. It will be picked up by a contributor once it is approved.
>>>> 
>>>> [1] 
>>>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
>>>> re
>>>> 
>>>> Best regards,
>>>> Hongbin
>>>> 
>>>> -Original Message-
>>>> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
>>>> Sent: March-17-16 2:39 PM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [magnum] High Availability
>>>> 
>>>> Hi.
>>>> 
>>>> We're on the way, the API is using haproxy load balancing in the same way 
>>>> all openstack services do here - this part seems to work fine.
>>>> 
>>>> For the conductor we're stopped due to bay certificates - we don't 
>>>> currently have barbican so local was the only option. To get them 
>>>> accessible on all nodes we're considering two options:
>>>> - store bay certs in a shared filesystem, meaning a new set of 
>>>> credentials in the boxes (and a process to renew fs tokens)
>>>> - deploy barbican (some bits of puppet missing we're sorting out)
>>>> 
>>>> More 

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)

Aside from the bay certificates/Barbican issue, is anyone aware of any other 
potential problems with high availability, especially for the Conductor?

Regards,
Daneyon Hansen

> On Mar 17, 2016, at 12:03 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of credentials in 
> the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>> <daneh...@cisco.com> wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)
All,

Does anyone have experience deploying Magnum in a highly-available fashion? If 
so, I'm interested in learning from your experience. My biggest unknown is the 
Conductor service. Any insight you can provide is greatly appreciated.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][release] Announcement of release of Mitaka rc1!

2016-03-19 Thread Daneyon Hansen (danehans)
Steve,

Congratulations on the release!!!

Regards,
Daneyon

On Mar 17, 2016, at 9:32 AM, Steven Dake (stdake) wrote:


The Kolla community is pleased to announce the release of Mitaka milestone RC1.  
This may look like a large list of features, but really it was finishing the job 
on 1 or 2 services that were missing for each of our major blueprints for 
Mitaka.


Mitaka RC1 Features:

  *   MariaDB lights out recovery
  *   Full upgrades of all OpenStack services
  *   Full upgrades of all OpenStack infrastructure
  *   Full reconfiguration of all OpenStack services
  *   Full reconfiguration of all OpenStack infrastructure
  *   All containers now run as a non-root user
  *   Added support for Docker IPC host namespace
  *   Cleaned up false haproxy warning about resource unavailability
  *   Improved Vagrant scripts
  *   Mesos DNS container
  *   Ability to use a local archive or directory for source build
  *   Mesos has a new per-service CLI to make deployment and upgrades more 
flexible
  *   Mesos has better constraints to deal with multi-host deployment
  *   Mesos has better dependencies for nova and neutron (inter-host 
dependencies)

For more details, check out our blueprint feature and bug tracker here:


https://launchpad.net/kolla/+milestone/mitaka-rc1


We are super excited about the release of Kolla Mitaka-rc1!  We produced some 
really impressive output in 12 days, implementing solutions for 4 blueprints 
and 46 bugs.  This cycle our core team grew by one member, Alicja Kwasniewska.  
Our community continues to remain extremely diverse and is growing, with 203 IC 
interactions and 40 corporate affiliations.  Check out our stackalytics page at:

http://stackalytics.com/?module=kolla-group=person-day

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Daneyon Hansen (danehans)


> On Mar 17, 2016, at 11:41 AM, Ricardo Rocha <rocha.po...@gmail.com> wrote:
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same
> way all openstack services do here - this part seems to work fine

I expected the API to work. Thanks for the confirmation. 
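
For anyone else following along, the API tier is plain HTTP load balancing; a 
minimal haproxy sketch, assuming magnum-api on its default port 9511 on three 
controllers (the VIP and addresses below are placeholders):

cat >> /etc/haproxy/haproxy.cfg <<'EOF'
listen magnum_api
  bind 10.0.0.10:9511
  balance roundrobin
  server control-1 10.0.0.11:9511 check inter 2000 rise 2 fall 5
  server control-2 10.0.0.12:9511 check inter 2000 rise 2 fall 5
  server control-3 10.0.0.13:9511 check inter 2000 rise 2 fall 5
EOF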
> 
> For the conductor we're stopped due to bay certificates - we don't
> currently have barbican so local was the only option. To get them
> accessible on all nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of
> credentials in the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)

How funny. I had this concern and proposed a similar solution to Hongbin over 
IRC yesterday. I suggested we discuss this issue at Austin, as Barbican is 
becoming a barrier to Magnum adoption. Please keep this thread updated as you 
progress with your deployment and I'll do the same. 
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
> <daneh...@cisco.com> wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest unknown
>> is the Conductor service. Any insight you can provide is greatly
>> appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Daneyon Hansen (danehans)

+1 on the points Adrian makes below.

On Mar 4, 2016, at 12:52 PM, Adrian Otto wrote:

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relation between bay node OS, Heat Template, Heat Template parameters, COE, and 
COE dependencies (cloud-init, docker, flannel, etcd, etc.) is multiplicative 
in nature. From the midcycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.
3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interests above. Do you or 
others disagree with them? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu wrote:

I don’t think there is any consensus on supporting a single distro. There are 
multiple disagreements on this thread, including from several senior team 
members and a project co-founder. This topic should be re-discussed (possibly at 
the design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at the midcycle was about what we should gate 
on and ensure feature parity for as a team. Ideally we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community who are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, the 00s, and the 10s.  Yes, 25 years of using a distro, and you learn 
it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host I need to run Ubuntu VMs if I want support, and on 
a RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu
Reply-To: "openstack-dev@lists.openstack.org"
Date: Monday, February 29, 2016 at 9:40 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested having 
Magnum support a single OS distro (Atomic). I disagreed. I think we should bring 
the discussion here to get a broader set of inputs.

Corey O'Brien
>From the midcycle, we decided we weren't going to continue to support 2 
>different versions of the k8s template. Instead, we were going to maintain the 
>Fedora Atomic version of k8s and 

[openstack-dev] [Magnum] Networking Subteam Update

2016-01-07 Thread Daneyon Hansen (danehans)
All,

Today the Magnum networking subteam decided to merge back into the general 
Magnum community. I would like to thank all who participated in the subteam 
over the past 6 months. We were able to develop and implement the Magnum 
Container Networking Model [1]. I look forward to additional network drivers 
being added in the future to expand Magnum's container networking capabilities. 
Details of Magnum's networking can be found in the Resources Section [2] of the 
Magnum wiki. Please use the #openstack-containers IRC channel and the mailing 
list for Magnum-related questions or discussion. Feel free to also join the 
weekly Magnum IRC meeting [3].

[1] 
https://github.com/openstack/magnum/blob/master/specs/container-networking-model.rst
[2] https://wiki.openstack.org/wiki/Magnum#Resources
[3] 
https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Containers_Team_Meeting

Regards,
Daneyon Hansen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][magnum] Networking Subteam Meeting

2015-12-17 Thread Daneyon Hansen (danehans)
All,

I have a scheduling conflict and am unable to chair today's subteam meeting. 
Due to the holiday break, we will reconvene on 1/7/16.  Happy holidays, and 
thanks for your participation in 2015.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Containerize Flannel/Etcd

2015-12-10 Thread Daneyon Hansen (danehans)
All,

As a follow-on from today's networking subteam meeting, I received additional 
feedback from the Kubernetes community about running Etcd and Flannel in a 
container. Etcd/Flannel were containerized to simplify the N different ways that 
these services can be deployed. It simplifies Kubernetes documentation and 
reduces support requirements. Since our current approach of pkg+template has 
worked well, I suggest we do not containerize Etcd and Flannel [1]. IMO the 
benefits of containerizing these services do not outweigh the additional 
complexity of running a 2nd "bootstrap" Docker daemon.

Since our current Flannel version contains a bug [2] that breaks VXLAN, I 
suggest the Flannel package be upgraded from version 0.5.0 to 0.5.3 in all 
images that run Flannel.

[1] https://review.openstack.org/#/c/249503/
[2] https://bugs.launchpad.net/magnum/+bug/1518602

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Networking Subteam Meeting

2015-12-01 Thread Daneyon Hansen (danehans)
All,

I will be on leave this Thursday and will be unable to chair the Networking 
Subteam meeting. Please respond if you would like to chair the meeting. 
Otherwise, we will not have a meeting this week and pick things back up on 
12/10.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-28 Thread Daneyon Hansen (danehans)

From: Jay Lau
Reply-To: OpenStack Development Mailing List
Date: Wednesday, November 25, 2015 at 3:15 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [magnum] Using docker container to run COE daemons

Hi,

It is becoming more and more popular to use docker containers to run 
applications, so what about leveraging this in Magnum?


What I want to do is run all of the COE daemons in docker containers, because 
Kubernetes, Mesos and Swarm now support running in docker containers and there 
are already existing docker images/dockerfiles which we can leverage.

Jay,

It is my understanding that we have blueprints to address this topic:

https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
https://blueprints.launchpad.net/magnum/+spec/mesos-in-container

In addition to the COE daemons, the run-kube-as-a-container blueprint will 
address additional processes such as etcd and flannel. The swarm agent/manager 
already run as containers.


So what about updating all COE templates to run the COE daemons in docker 
containers, and maintaining dockerfiles for the different COEs in Magnum? This 
would reduce the maintenance effort for a COE: if there is a new version we want 
to upgrade to, updating the dockerfile is enough. Comments?

I would expect the templates to be updated as part of the blueprints above. As 
with the swarm template, I believe each COE service would correlate to a 
systemd unit file that specifies a docker pull/run of a specific image.
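
For illustration, a rough sketch of what such a unit could look like for a 
containerized kube-apiserver (the image name and flags below are placeholders, 
not necessarily what the blueprints will settle on):

cat > /etc/systemd/system/kube-apiserver.service <<'EOF'
[Unit]
Description=Containerized Kubernetes API Server
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker pull example.org/hyperkube:v1.0.6
ExecStartPre=-/usr/bin/docker rm -f kube-apiserver
ExecStart=/usr/bin/docker run --name kube-apiserver --net=host \
  example.org/hyperkube:v1.0.6 /hyperkube apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.254.0.0/16 \
  --insecure-bind-address=0.0.0.0
Restart=always

[Install]
WantedBy=multi-user.target
EOF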


--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-11-25 Thread Daneyon Hansen (danehans)


From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 25, 2015 at 5:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum]storage for docker-bootstrap

Hi all,

I am working on containerizing etcd and flannel, but I have hit a problem. As 
described in [1], we need a docker-bootstrap daemon. Docker and docker-bootstrap 
cannot use the same storage, so we need some disk space for it.

I reviewed [1] and I do not see where the bootstrap docker instance requires 
separate storage.

Docker on the master node stores data in /dev/mapper/atomicos-docker--data and 
metadata in /dev/mapper/atomicos-docker--meta. The disk space left over is too 
small for docker-bootstrap. Even if the root_gb of the instance flavor is 20G, 
only 8G can be used in our image. I want to make it bigger. One way is to add 
the disk space left on vda as vda3 into the atomicos VG after the instance 
starts, and then allocate two logical volumes for docker-bootstrap. Another way 
is to allocate the two logical volumes for docker-bootstrap when we create the 
image. The second way has an advantage: it doesn't have to make a filesystem 
when the instance is created, which is time consuming.

What is your opinion?

Best Regards
Wanghua

[1] 
http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode/master.html
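
For reference, a minimal sketch of the logical volumes involved (sizes are 
hypothetical, and the daemon flags follow the docker-multinode guide in [1]; 
verify against your Docker version):

lvcreate --name docker-bootstrap-data --size 4G atomicos
lvcreate --name docker-bootstrap-meta --size 512M atomicos

docker daemon -H unix:///var/run/docker-bootstrap.sock \
  --storage-driver=devicemapper \
  --storage-opt dm.datadev=/dev/atomicos/docker-bootstrap-data \
  --storage-opt dm.metadatadev=/dev/atomicos/docker-bootstrap-meta \
  --bridge=none --iptables=false \
  --exec-root=/var/run/docker-bootstrap &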
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Subteam Meeting

2015-11-23 Thread Daneyon Hansen (danehans)
All,

There will be no Networking Subteam meeting this week due to the Thanksgiving 
holiday.

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Daneyon Hansen (danehans)
All,

I apologize for issues with today's meeting. My calendar was updated to reflect 
daylight savings and displayed an incorrect meeting start time. This issue is 
now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has been pushed 
back 30 minutes from our usual start time. This is because Docker is hosting a 
Meetup [1] to discuss the new 1.9 networking features. I encourage everyone to 
attend the Meetup.

[1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/
[2] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Subteam Meeting Cancelations

2015-10-13 Thread Daneyon Hansen (danehans)
All,

I have a conflict this week and will be unable to chair the weekly IRC meeting 
[1]. Therefore, we will not meet this week. The 10/22 and 10/29 meetings will 
also be canceled due to the Mitaka Design Summit. We will resume our regularly 
scheduled meetings on 11/5.

[1] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Daneyon Hansen (danehans)

+1

From: Tom Cammann
Reply-To: "openstack-dev@lists.openstack.org"
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months: completely deprecate 
the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very 
difficult, and probably a wasted effort, to try to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:

Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz 
To: 
"openstack-dev@lists.openstack.org" 

Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Also, I believe docker compose is just a command line tool which doesn’t have 
any API or scheduling features.
But during the last Docker Conf hackathon, PayPal folks implemented a docker 
compose executor for Mesos (https://github.com/mohitsoni/compose-executor), 
which can give you a pod-like experience.

―
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise on features or wait longer for 
us to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
had started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes COE, but 
exposes container in the swarm COE. As I understand it, swarm is only a 
scheduler of containers, which is like nova in openstack. Docker compose is an 
orchestration program, which is like heat in openstack. k8s is the combination 
of scheduler and orchestration. So I think it is better to expose the compose 
APIs, which are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [magnum] Discovery

2015-09-21 Thread Daneyon Hansen (danehans)
All,

Thanks for the feedback and additional ideas related to discovery. For clarity, 
I would like to circle back to the specific issue that I am experiencing with 
implementing Flannel for Swarm. Flannel cannot be implemented in Swarm bay types 
without making changes to discovery for Swarm. This is because:

  1.  Flannel requires etcd, which is not implemented in Magnum’s Swarm bay 
type.
  2.  The discovery_url is implemented differently between the Kubernetes and 
Swarm bay types, making it impossible for Swarm and etcd discovery to coexist 
within the same bay type.

I am in the process of moving forward with option 2 of my original email so 
flannel can be implemented in swarm bay types [1]. I have created a bp [2] to 
address discovery more holistically. It would be helpful if you could provide 
your ideas in the whiteboard of the bp.

[1] https://review.openstack.org/#/c/224367/
[2] https://blueprints.launchpad.net/magnum/+spec/bay-type-discovery-options

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen

From: 王华 <wanghua.hum...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, September 21, 2015 at 1:18 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Discovery

Swarm already supports etcd as a discovery backend [1]. So we can implement both 
hosted discovery with Docker Hub and discovery using etcd, and make hosted 
discovery with Docker Hub the default if discovery_url is not given.

If we run etcd in the bay, etcd also needs discovery [2]. The operator should 
set up an etcd cluster for the other etcd clusters to discover, or use the 
public discovery service. I think it is not necessary to run etcd in a swarm 
cluster just for the discovery service. In a private cloud, the operator should 
set up a local etcd cluster for the discovery service, and all the bays can 
use it.

[1] https://docs.docker.com/swarm/discovery/
[2] https://github.com/coreos/etcd/blob/master/Documentation/clustering.md

Regards,
Wanghua

On Fri, Sep 18, 2015 at 11:39 AM, Adrian Otto 
<adrian.o...@rackspace.com> wrote:
In the case where a private cloud is used without access to the Internet, you 
do have the option of running your own etcd, and configuring that to be used 
instead.

Adding etcd to every bay should be optional, as a subsequent feature, but 
should be controlled by a flag in the Baymodel that defaults to off so the 
public discovery service is used. It might be nice to be able to configure 
Magnum in an isolated mode which would change the system level default for that 
flag from off to on.

Maybe the Baymodel resource attribute should be named local_discovery_service.

Should turning this on also set the minimum node count for the bay to 3? If 
not, etcd will not be highly available.

Adrian

> On Sep 17, 2015, at 1:01 PM, Egor Guz 
> <e...@walmartlabs.com> wrote:
>
> +1 for no longer using the public discovery endpoint; most private cloud VMs 
> don't have access to the internet, and the operator must run an etcd instance 
> somewhere just for discovery.
>
> —
> Egor
>
> From: Andrew Melton <andrew.mel...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Date: Thursday, September 17, 2015 at 12:06
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Discovery
>
>
> Hey Daneyon,
>
>
> I'm fairly partial towards #2 as well. Though, I'm wondering if it's possible 
> to take it a step further. Could we run etcd in each Bay without using the 
> public discovery endpoint? And then configure Swarm to simply use the 
> internal etcd as its discovery mechanism? This could cut one of our external 
> service dependencies and make it easier to run Magnum in an environment with 
> locked down public internet access.
>
>
> Anyways, I think #2 could be a good start that we could iterate on later if 
> need be.
>
>
> --Andrew
>
>

Re: [openstack-dev] [magnum] Discovery

2015-09-17 Thread Daneyon Hansen (danehans)

From: Andrew Melton <andrew.mel...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, September 17, 2015 at 12:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Discovery


Hey Daneyon,


I'm fairly partial towards #2 as well. Though, I'm wondering if it's possible 
to take it a step further. Could we run etcd in each Bay without using the 
public discovery endpoint? And then configure Swarm to simply use the internal 
etcd as its discovery mechanism? This could cut one of our external service 
dependencies and make it easier to run Magnum in an environment with locked 
down public internet access.

Thanks for your feedback. #2 was the preferred direction of the Magnum 
Networking Subteam as well. Therefore, I have started working on [1] to move 
this option forward. As part of this effort, I am slightly refactoring the 
Swarm heat templates to more closely align them with the k8s templates. Until 
tcammann completes the larger template refactor [2], I think this will help 
devs more easily implement features across all bay types, distros, etc.

We can have etcd and Swarm use a local discovery backend. I have filed bp [3] 
to establish this effort. As a first step towards [3], I will modify Swarm to 
use etcd for discovery. However, etcd will still use public discovery for [1]. 
Either I or someone from the community will need to attack local etcd discovery 
as a follow-on. For the longer term, we may want to consider exposing a 
--discovery-backend attribute and optionally passing labels to allow users to 
modify the default configuration of the --discovery-backend.

[1] https://review.openstack.org/#/c/224367/
[2] https://blueprints.launchpad.net/magnum/+spec/generate-heat-templates
[3] https://blueprints.launchpad.net/magnum/+spec/bay-type-discovery-options
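
For reference, standalone Swarm's etcd backend is selected purely through the 
discovery URI, so the change is largely a matter of what we render into the 
templates. A rough sketch, assuming an etcd endpoint at 10.0.0.5:2379 and an 
arbitrary /swarm key prefix (both placeholders):

swarm join --advertise=<node_ip>:2375 etcd://10.0.0.5:2379/swarm
swarm manage -H tcp://0.0.0.0:2376 etcd://10.0.0.5:2379/swarm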




Anyways, I think #2 could be a good start that we could iterate on later if 
need be.


--Andrew


____
From: Daneyon Hansen (danehans) <daneh...@cisco.com>
Sent: Wednesday, September 16, 2015 11:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discovery

All,

While implementing the flannel --network-driver for swarm, I have come across 
an issue that requires feedback from the community. Here is the breakdown of 
the issue:

  1.  Flannel [1] requires etcd to store network configuration. Meeting this 
requirement is simple for the kubernetes bay types since kubernetes requires 
etcd.
  2.  A discovery process is needed for bootstrapping etcd. Magnum implements 
the public discovery option [2].
  3.  A discovery process is also required to bootstrap a swarm bay type. 
Again, Magnum implements a publicly hosted (Docker Hub) option [3].
  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm 
and etcd discovery.
  5.  Etcd cannot be implemented in swarm because the discovery_url is 
associated with swarm’s discovery process and not with etcd.
Here are a few options on how to overcome this obstacle:

  1.  Make the discovery_url more specific, for example etcd_discovery_url and 
swarm_discovery_url. However, this option would needlessly expose both 
discovery URLs to all bay types.
  2.  Swarm supports etcd as a discovery backend. This would mean discovery is 
similar for both bay types. With both bay types using the same mechanism for 
discovery, it will be easier to provide a private discovery option in the 
future.
  3.  Do not support flannel as a network-driver for k8s bay types. This would 
require adding support for a different driver that supports multi-host 
networking such as libnetwork. Note: libnetwork is only implemented in the 
Docker experimental release: 
https://github.com/docker/docker/tree/master/experimental.

I lean towards #2, but there may be other options, so feel free to share your 
thoughts. I would like to obtain feedback from the community before proceeding 
in a particular direction.

[1] https://github.com/coreos/flannel
[2] 
https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
[3] https://docs.docker.com/swarm/discovery/
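
For concreteness, the two public mechanisms referenced in [2] and [3] hand out 
different kinds of discovery_url, which is the root of the overlap problem. 
Roughly:

curl -s "https://discovery.etcd.io/new?size=1"   # etcd public discovery
# returns a URL of the form https://discovery.etcd.io/<token>

docker run --rm swarm create                     # Docker Hub hosted discovery
# returns a bare <token>, consumed by the bay as token://<token>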

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Update

2015-08-24 Thread Daneyon Hansen (danehans)
Team,

I will be attending parent orientation for my son’s kindergarten class and will 
be unable to join the Magnum IRC meeting. Here is a quick summary of Magnum 
networking-related activities:

  1.  Yay… the Magnum Container Networking Model spec has merged [1]. A big 
thank you to everyone who contributed.
  2.  I have organized the spec work items into the following blueprints: 
[2]-[6]

  3.  I have started moving some of the blueprints forward with WIP patches 
[7]-[9]

[1] 
https://github.com/openstack/magnum/blob/master/specs/container-networking-model.rst

[2] 
https://blueprints.launchpad.net/magnum/+spec/extend-client-network-attributes

[3] https://blueprints.launchpad.net/magnum/+spec/conductor-template-net-update

[4] https://blueprints.launchpad.net/magnum/+spec/extend-api-network-attributes

[5] https://blueprints.launchpad.net/magnum/+spec/heat-network-refactor

[6] https://blueprints.launchpad.net/magnum/+spec/extend-baymodel-net-attributes

[7] https://review.openstack.org/#/c/214762/

[8] 
https://review.openstack.org/#/c/215260/

[9] https://review.openstack.org/#/c/214909/


Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][kuryr][magnum] Magnum/Kuryr Integration

2015-08-13 Thread Daneyon Hansen (danehans)

The Magnum Networking Subteam just concluded our weekly meeting. Feel free to 
review the logs[1], as Kuryr integration was an agenda topic that drew 
considerable discussion. An etherpad[2] has been created to foster 
collaboration on the topic. Kuryr integration is scheduled as a topic for next 
week’s agenda. It would be a big help if the Kuryr team can review the etherpad 
and have representation during next week's meeting[3]. I look forward to our 
continued collaboration.

[1] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-08-13-18.00.log.txt
[2] https://etherpad.openstack.org/p/magnum-kuryr
[3] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Networking Subteam Meeting Update

2015-08-04 Thread Daneyon Hansen (danehans)
All,

This week’s Magnum Networking Subteam Meeting will be canceled due to the 
Midcycle [1]. Subteam meetings [2] will resume on 8/13 at 1800 UTC. We have 
two container networking sessions on the midcycle agenda. WebEx has been set up 
if you would like to attend the midcycle remotely. We look forward to your 
participation at the midcycle and future subteam meetings.

[1] https://wiki.openstack.org/wiki/Magnum/Midcycle
[2] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Container Networking Spec

2015-08-04 Thread Daneyon Hansen (danehans)
All,

The container networking spec below has gone through a fairly significant 
design change in the past few days. To summarize, Magnum will not create any 
new network plugins or network-specific abstractions to support container 
networking. Instead, the spec suggests standardizing on libnetwork [4] and its 
associated Container Network Model to support a range of container networking 
implementations. The Magnum community intends to integrate with Kuryr [5] in 
the long-term and both communities are starting the integration planning 
process [6].

I would like to thank everyone who provided feedback on the spec. If you have 
provided feedback to earlier patch sets, I ask that you review the latest patch 
set and provide an updated vote. If you issue a -1 vote, please include 
specific feedback on what is needed for your +1/+2 vote.

[4] https://github.com/docker/libnetwork/blob/master/docs/design.md
[5] https://github.com/openstack/kuryr/blob/master/doc/source/design.rst
[6] https://etherpad.openstack.org/p/magnum-kuryr

Regards,
Daneyon

From: Cisco Employee daneh...@cisco.com
Date: Wednesday, July 22, 2015 at 9:40 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] Container Networking Spec

All,

I have just submitted the container networking spec[1] for review. Thank you to 
everyone [2-3] who participated in contributing to the spec. If you are 
interested in container networking within Magnum, I urge you to review the spec 
and provide your feedback.

[1] https://review.openstack.org/#/c/204686
[2] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-06-25-18.00.html
[3] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-07-16-18.03.html

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr][kolla] - Bringing Dockers networking to Neutron

2015-07-23 Thread Daneyon Hansen (danehans)


From: Antoni Segura Puimedon toni+openstac...@midokura.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, July 23, 2015 at 11:16 AM
To: Mohammad Banikazemi m...@us.ibm.com, Steven Dake (stdake) std...@cisco.com
Cc: Eran Gampel eran.gam...@toganetworks.com, OpenStack Development Mailing 
List (not for usage questions) openstack-dev@lists.openstack.org, Irena 
Berezovsky ir...@midokura.com
Subject: Re: [openstack-dev] [Neutron][Kuryr][kolla] - Bringing Dockers 
networking to Neutron



On Thu, Jul 23, 2015 at 7:35 PM, Mohammad Banikazemi 
m...@us.ibm.com wrote:

I let the creators of the project speak for themselves but here is my take on 
project Kuryr.

The goal is not to containerize Neutron or other OpenStack services. The main 
objective is to use Neutron as a networking backend option for Docker. The 
original proposal was to do so in the context of using containers (for 
different Neutron backends or vif types). While the main objective is 
fundamental to the project, the latter (use of containers in this particular 
way) seems to be a tactical choice we need to make. I see several different 
options available to achieve the same goal in this regard.

Thanks Mohammad. It is as you say: the goal of Kuryr is to provide Docker with 
a new libnetwork remote driver that is powered by Neutron, not a 
containerization of Neutron. Kuryr deployments, as you point out, may opt to 
point to a Neutron that is containerized, and for that I was looking at using 
Kolla. However, that is just deployment, and I consider it to be up to the 
deployer (of course, we'll make Kuryr containerizable and part of Kolla :-) ).

The design for interaction/configuration is not yet final, as I still have to 
push drafts for the blueprints and get comments, but my initial idea is that 
you will configure docker to pass the configuration of which device to take 
hold of for the overlay and where the Neutron API is, in the following way:


$ docker -d --kv-store=consul:localhost:8500 \
  --label=com.docker.network.driver.kuryr.bind_interface=eth0 \
  --label=com.docker.network.driver.kuryr.neutron_api=10.10.10.10 \
  --label=com.docker.network.driver.kuryr.token=AUTH_tk713d067336d21348bcea1ab220965485

Why is a separate OpenStack project needed to contribute to libnetwork? I would 
think the Neutron community would follow the libnetwork contrib guidelines and 
submit the code.



Another possibility is that those values were passed as env variables or plain 
old configuration files.
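Purely as an illustration of that alternative (none of these variable or file 
names exist anywhere yet; they simply mirror the three labels shown above), the 
same settings could look like:

# Hypothetical environment variables (names are made up):
export KURYR_BIND_INTERFACE=eth0
export KURYR_NEUTRON_API=10.10.10.10
export KURYR_AUTH_TOKEN=AUTH_tk713d067336d21348bcea1ab220965485

# Or a hypothetical plain old configuration file, e.g. /etc/kuryr/kuryr.conf:
#   [DEFAULT]
#   bind_interface = eth0
#   neutron_api = 10.10.10.10
#   auth_token = AUTH_tk713d067336d21348bcea1ab220965485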



Now, there is another aspect of using containers in the context of this project 
that is more interesting, at least to me (and I do not know if others share 
this view or not): the use of containers for providing network services that 
are not available through libnetwork now, in the near future, or perhaps ever. 
From the talks I have had with libnetwork developers, the plan is to stay with 
the basic networking infrastructure and leave additional features to be 
developed by the community, possibly by using, what else, containers.

So take the current features available in libnetwork. You mainly get support 
for connectivity/isolation for multiple networks across multiple hosts. Now if 
you want to route between these networks, you have to devise a solution 
yourself. One possible solution would be having a router service in a container 
that gets connected to, say, two Docker networks. Whether the router service is 
implemented with the current Neutron router services or by some other solution 
is something to look into and discuss, but this is a direction to which I think 
Kuryr (did I spell it right? ;)) can and should contribute.
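As a purely hypothetical sketch of that direction (it assumes a libnetwork-era 
Docker with `docker network` support and a Neutron-backed driver registered 
under the name kuryr, neither of which is final), the plumbing could look like:

# "kuryr" as the libnetwork driver name and the router image are assumptions
docker network create -d kuryr net-a
docker network create -d kuryr net-b
# run the routing service in its own container and attach it to both networks
docker run -d --name router --net=net-a example/router-image
docker network connect net-b router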

You got that right. The idea is indeed to get the containers networked via 
libnetwork which, as you point out, was left intentionally simple to be 
developed by the community; then we want to:

a) have Kuryr attach containers to networks that have been pre-configured with 
advanced networking (lb, sec groups, etc.), and be able to perform changes on 
those networks via Neutron after the fact as well. For example, the container 
orchestration software could create a Neutron network with a load balancer and 
a FIP, start containers on that network, and add them to the load balancer (a 
rough CLI sketch of this workflow is shown below);
b) via the usage of docker labels on `docker run`, have Kuryr implicitly set up 
Neutron networks/topologies.
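To make (a) concrete, here is a minimal sketch using standard Neutron CLI of 
the time, assuming an LBaaS v2 deployment and a "public" external network; the 
resource names are made up, and the docker invocation is purely illustrative 
since the Kuryr driver that would back the Docker network does not exist yet:

# pre-configure the network, load balancer and floating IP via Neutron
neutron net-create web-net
neutron subnet-create --name web-subnet web-net 10.10.1.0/24
neutron lbaas-loadbalancer-create --name web-lb web-subnet
neutron floatingip-create public

# later, start containers on that (Neutron-backed) Docker network and add
# them to the pool with neutron lbaas-member-create; --net usage here is
# illustrative only
docker run -d --net=web-net nginx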

Yup, you spelled it well. In Czech it is Kurýr, but for project purposes I 
dropped the ´
Thanks a lot for contributing, and I'm very happy to see that you have a very 
good sense of the direction we are taking.
I'm looking forward to meeting you all in the community meetings!

Just my 2 cents 

[openstack-dev] [Magnum] Container Networking Spec

2015-07-22 Thread Daneyon Hansen (danehans)
All,

I have just submitted the container networking spec[1] for review. Thank you to 
everyone [2-3] who participated in contributing to the spec. If you are 
interested in container networking within Magnum, I urge you to review the spec 
and provide your feedback.

[1] https://review.openstack.org/#/c/204686
[2] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-06-25-18.00.html
[3] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-07-16-18.03.html

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the Core Reviewer team for Kolla

2015-07-22 Thread Daneyon Hansen (danehans)
Steve,

Thanks for sharing the message with the Kolla community. I appreciate the 
opportunity to work on the project. It was great meeting several members of the 
community at the Vancouver DS and I look forward to meeting others at the mid 
cycle. Containers is a small world (for now) and I’m sure we’ll cross paths 
again. I'll continue using Kolla for Magnum development, so you’ll still see me 
from time-to-time. Best wishes to the Kolla community!

Regards,
Daneyon Hansen

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, July 22, 2015 at 1:47 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the 
Core Reviewer team for Kolla

Fellow Kolla developers,

Daneyon has been instrumental in getting Kolla rolling and keeping our project 
alive.  He even found me a new job that would pay my mortgage and Panamera 
payment so I could continue performing as PTL for Kolla and get Magnum off the 
ground.  But Daneyon has shared with me that he has a personal objective of 
getting highly involved in the Magnum project and leading the container 
networking initiative coming out of Magnum.  For a sample of his new personal 
mission:

https://review.openstack.org/#/c/204686/

I’m a bit sad to lose Daneyon to Magnum, but life is short and not sweet 
enough.  I personally feel people should do what makes them satisfied and happy 
professionally.  Daneyon will still be present at the Kolla midcycle and 
contribute to our talk (if selected by the community) in Tokyo.  I expect 
Daneyon will make a big impact in Magnum, just as he has with Kolla.

In the future if Daneyon decides he wishes to re-engage with the Kolla project, 
we will welcome him with open arms because Daneyon rocks and does super high 
quality work.

NB Typically we would vote on removal of a core reviewer, unless they wish to 
be removed to focus on other projects.  Since that is the case here, there 
is no vote necessary.

Please wish Daneyon well in his adventures in Magnum territory and pray he 
comes back when he finishes the job on Magnum networking :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]

2015-07-17 Thread Daneyon Hansen (danehans)
All,

Does anyone have insight into Google's plans for contributing to containers 
within OpenStack?

http://googlecloudplatform.blogspot.tw/2015/07/Containers-Private-Cloud-Google-Sponsors-OpenStack-Foundation.html

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Daneyon Hansen (danehans)
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None

(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver

http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html
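For reference, the driver would be set in nova.conf roughly like this (a sketch 
only; the driver strings come from the option listing above, and the exact bare 
metal driver name depends on the release, see the Ironic install guide linked 
above):

[DEFAULT]
# vm deployments
compute_driver = libvirt.LibvirtDriver
# ... or, for bare metal deployments
# compute_driver = baremetal.BareMetalDriver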

From: Adrian Otto adrian.o...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, July 14, 2015 at 7:44 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

One drawback to virt_type, if not seen in the context of the acceptable values, 
is that it should be set to values like libvirt, xen, ironic, etc. That might 
actually be good. Instead of using the values 'vm' or 'baremetal', we use the 
name of the nova virt driver, and interpret those to be vm or baremetal types. 
So if I set the value to 'xen', I know the nova instance type is a vm, and 
'ironic' means a baremetal nova instance.

Adrian


 Original message 
From: Hongbin Lu hongbin...@huawei.com
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

I am going to propose a third option:

3. virt_type

I have concerns about options 1 and 2, because “instance_type” and flavor were 
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or 
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Magnum template manage use platform VS others 
as a type?


Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some historical reason, we use platform to indicate 'vm' or 'baremetal'.
This does not seem appropriate, so @Adrian proposed nova_instance_type, and 
others prefer different names; let me summarize as below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but not proposed any name)


Let's try to reach agreement ASAP. I think counting the final vote winner as 
the proper name is the best solution (considering community diversity).


BTW, if you have not proposed any better name and just vote to disagree with 
all of them, I think that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for Paul Bourke for Kolla Core

2015-07-14 Thread Daneyon Hansen (danehans)

+1

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, July 13, 2015 at 7:40 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Proposal for Paul Bourke for Kolla Core

Hey folks,

I am proposing Paul Bourke for the Kolla core team.  He did a fantastic job 
getting Kolla into shape to support multiple distros and from source/from 
binary installation.  His statistics are fantastic including both code and 
reviews.  His reviews are not only voluminous, but consistently good.  Paul is 
helping on many fronts and I feel he would make a fantastic addition to our core 
reviewer team.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote is a veto 
for the candidate, so if you are on the fence, best to abstain :)  We require 3 
core reviewer votes to approve a candidate.  I will leave the voting open until 
July 20th  UTC.  If the vote is unanimous prior to that time or a veto vote 
is received, I’ll close voting and make appropriate adjustments to the gerrit 
groups.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] [Networking]

2015-07-13 Thread Daneyon Hansen (danehans)

For those involved in Magnum networking, I suggest attending the upcoming 
Docker Meetup:

http://www.meetup.com/Docker-Online-Meetup/events/223796871/

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Networking Subgroup

2015-06-24 Thread Daneyon Hansen (danehans)
All,

A subgroup [1] has formed within the Magnum project, whose mission is to provide 
first-class networking support for containers. Please join us [2] this Thursday 
at 1800 UTC for our first meeting to discuss the native Docker networking 
blueprint [3] and overall direction of the team.

[1] https://wiki.openstack.org/wiki/ContainersTeam#Container_Networking_Subteam
[2] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting
[3] https://blueprints.launchpad.net/magnum/+spec/native-docker-network

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-15 Thread Daneyon Hansen (danehans)

+1

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday, June 14, 2015 at 10:48 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

Hey folks,

I am proposing Harm Waites for the Kolla core team.  He did a fantastic job 
implementing Designate in a container[1] which I’m sure was incredibly 
difficult and never gave up even though there were 13 separate patch reviews :) 
 Beyond Harm’s code contributions, he is responsible for 32% of the 
“independent” reviews[2], where independents compose 20% of our total reviewer 
output.  I think we should judge core reviewers on more than output, and I knew 
Harm was core reviewer material with his fantastic review of the cinder 
container where he picked out 26 specific things that could be broken that 
other core reviewers may have missed ;) [3].  His other reviews are also as 
thorough as this particular review was.  Harm is active in IRC and in our 
meetings for which his TZ fits.  Finally Harm has agreed to contribute to the 
ansible-multi implementation that we will finish in the liberty-2 cycle.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote is a veto 
for the candidate, so if you are on the fence, best to abstain :)  Since our 
core team has grown a bit, I’d like 3 core reviewer +1 votes this time around 
(vs Sam’s 2 core reviewer votes).  I will leave the voting open until June 21 
 UTC.  If the vote is unanimous prior to that time or a veto vote is 
received, I’ll close voting and make appropriate adjustments to the gerrit 
groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2] 
http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
[3] https://review.openstack.org/#/c/170965/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Add core approver Sam Yaple

2015-05-28 Thread Daneyon Hansen (danehans)

+1


From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, May 25, 2015 at 11:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Add core approver Sam Yaple

Hi folks,

I propose Sam Yaple for core approver for the Kolla team.  Sam has a lot of 
great ideas and has done some really cool work lately.  Sam is active in IRC 
and is starting to pick up more reviews.  Of particular interest to me is his 
idea of merging the work he has done on YAODU into Kolla.  This would be 
fantastic for Kolla and allow us to deliver on our goals of providing high 
availability which depends on multi-node deployment in our container 
architecture.

Some really complex and nice improvements to the codebase:
https://review.openstack.org/#q,Ifc7bac0d827470f506c8b5c004a833da9ce13b90,n,z
https://review.openstack.org/#q,Ic0ff96bb8119ddfab15b99e9f1e21cfe8d321dab,n,z
https://review.openstack.org/#q,I95101136dad56e9331d8b92cd394495f7bd0576a,n,z

Sam's stats for Liberty and Kilo:
http://stackalytics.com/?project_type=all&user_id=s8m&module=kolla&release=all

Count my proposal as a +1.  Since our core team is only 5 people presently, I 
think it makes sense to only require one additional +1.  Typically projects 
require 3 +1 votes but have larger core teams, so we will use that in the 
future.  -1 = veto, so vote wisely.  Folks often abstain if they are not 
certain how their vote should go – so don’t feel compelled to vote.

I’ll keep the voting open until May 29th.  If the vote is unanimous or vetoed 
prior, I’ll close voting.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] implement openvswitch container

2015-04-28 Thread Daneyon Hansen (danehans)

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 28, 2015 at 7:52 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] implement openvswitch container



From: FangFenghua fang_feng...@hotmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 28, 2015 at 7:02 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] implement openvswitch container

I want to enable an openvswitch container.  I think I can do that like this:

1. add a container that runs the ovs process

OVS is broken down into several daemons, so I suggest using supervisord to have 
these daemons run in a single container before breaking them apart. This is 
because these daemons communicate through sockets by default. The daemons can 
be configured to use tcp, but I don’t know the details for changing that config 
off hand.
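A minimal supervisord sketch of that single-container approach, assuming the 
stock ovsdb-server and ovs-vswitchd binaries and the default db.sock path 
(paths and flags will vary by distro, so treat this as illustrative only):

[supervisord]
nodaemon=true

; both daemons stay in the foreground (no --detach), which supervisord expects
[program:ovsdb-server]
command=/usr/sbin/ovsdb-server /etc/openvswitch/conf.db --remote=punix:/var/run/openvswitch/db.sock
priority=10

[program:ovs-vswitchd]
command=/usr/sbin/ovs-vswitchd unix:/var/run/openvswitch/db.sock
priority=20

To switch from the unix socket to tcp, ovsdb-server can additionally listen on 
something like --remote=ptcp:6640.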

2. add a container that runs neutron-openvswitch-agent
3. share the db.sock in compose.yaml
I think this is where you want to use tcp instead of sockets
4. add a configure script and a check script for the 2 containers

As a first step, you may want to simply replicate the current neutron-agents 
container and swap linux bridge for ovs and get ovs/l3/meta/etc working in a 
single container first, then split the agents apart into individual containers.

that's all i need to do, right?

That should do it

You may need to configure the ovs process in the start.sh script and 
neutron-openvswitch-agent, which will be the most difficult part of the work.

Note our agents atm are a “fat container”, but if you can get ovs in a separate 
container, that would be ideal. We are planning to rework the fat container we 
have into single-purpose containers.
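For step 3 above, a rough compose.yaml sketch (v1 syntax of the time; the image 
names are placeholders, not real Kolla images) that shares the ovsdb socket 
with the agent container, or alternatively lets the agent reach ovsdb over tcp:

openvswitch:
  image: example/openvswitch          # placeholder image name
  net: host
  privileged: true
  volumes:
    - /var/run/openvswitch            # makes db.sock visible via volumes_from

neutron-openvswitch-agent:
  image: example/neutron-ovs-agent    # placeholder image name
  net: host
  privileged: true
  volumes_from:
    - openvswitch                     # or point the agent at tcp:127.0.0.1:6640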

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-17 Thread Daneyon Hansen (danehans)

Congratulations Steve!

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen

From: Angus Salkeld asalk...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, March 17, 2015 at 5:05 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Kolla] PTL Candidacy

There have been no other candidates within the allowed time, so congratulations 
Steve on being the new Kolla PTL.

Regards
Angus Salkeld



On Thu, Mar 12, 2015 at 8:13 PM, Angus Salkeld 
asalk...@mirantis.com wrote:
Candidacy confirmed.

-Angus

On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) 
std...@cisco.com wrote:
I am running for PTL for the Kolla project.  I have been executing in an 
unofficial PTL capacity for the project for the Kilo cycle, but I feel it is 
important for our community to have an elected PTL and have asked Angus 
Salkeld, who has no stake in the outcome of the election, to officiate it [1].

For the Kilo cycle our community went from zero LOC to a fully working 
implementation of most of the services based upon Kubernetes as the backend.  
Recently I led the effort to remove Kubernetes as a backend and provide 
container contents, building, and management on bare metal using docker-compose 
which is nearly finished.  At the conclusion of Kilo, it should be possible 
from one shell script to start an AIO full deployment of all of the current 
OpenStack git-namespaced services using containers built from RPM packaging.

For Liberty, I’d like to take our community and code to the next level.  Since 
our containers are fairly solid, I’d like to integrate with existing projects 
such as TripleO, os-ansible-deployment, or Fuel.  Alternatively the community 
has shown some interest in creating a multi-node HA-ified installation 
toolchain.

I am deeply committed to leading the community where the core developers want 
the project to go, wherever that may be.

I am strongly in favor of adding HA features to our container architecture.

I would like to add .deb package support and from-source support to our docker 
container build system.

I would like to implement a reference architecture where our containers can be 
used as a building block for deploying a reference platform of 3 controller 
nodes, ~100 compute nodes, and ~10 storage nodes.

I am open to expanding our scope to address full deployment, but would prefer 
to merge our work with one or more existing upstreams such as TripleO, 
os-ansible-deployment, and Fuel.

Finally I want to finish the job on functional testing, so all of our 
containers are functionally checked and gated per commit on Fedora, CentOS, and 
Ubuntu.

I am experienced as a PTL, leading the Heat Orchestration program from zero LOC 
through OpenStack integration for 3 development cycles.  I write code as a PTL 
and was instrumental in getting the Magnum Container Service code-base kicked 
off from zero LOC where Adrian Otto serves as PTL.  My past experiences include 
leading Corosync from zero LOC to a stable building block of High Availability 
in Linux.  Prior to that I was part of a team that implemented Carrier Grade 
Linux.  I have a deep and broad understanding of open source, software 
development, high performance team leadership, and distributed computing.

I would be pleased to serve as PTL for Kolla for the Liberty cycle and welcome 
your vote.

Regards
-steve

[1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-17 Thread Daneyon Hansen (danehans)
+1

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 16, 2015 at 7:20 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

+1 \o/ yay

From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 16, 2015 at 8:07 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

Hi folks,

I am proposing Andre Martin to join the kolla-core team.  Andre has been 
providing mostly code implementation, but as he contributes heavily, he has 
indicated he will get more involved in our peer review process.

He has contributed 30% of the commits for the Kilo development cycle, acting as 
our #1 commit contributor during Kilo.

http://stackalytics.com/?project_type=all&module=kolla&metric=commits

Kolla-core members please vote +1/abstain/-1.  Remember that any –1 vote is a 
veto.

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Propose Andre Martin for kolla-core

2015-02-17 Thread Daneyon Hansen (danehans)
Congrats Andre and thanks for all your contributions. 

Regards,
Daneyon Hansen, CCIE 9950
Software Engineer
Office of the Cloud CTO
Mobile: 303-718-0400
Office: 720-875-2936
Email: daneh...@cisco.com

 On Feb 17, 2015, at 10:13 AM, Steven Dake (stdake) std...@cisco.com wrote:
 
 That is 3 votes.  Welcome to kolla-core Andre!
 
 Regards
 -steve
 
 
 On 2/17/15, 10:59 AM, Jeff Peeler jpee...@redhat.com wrote:
 
 On Tue, Feb 17, 2015 at 03:07:31AM +, Steven Dake (stdake) wrote:
 Hi folks,
 
 I am proposing Andre Martin to join the kolla-core team.  Andre has
 been providing mostly code implementation, but as he contributes
 heavily, has indicated he will get more involved in our peer reviewing
 process.
 
 He has contributed 30% of the commits for the Kilo development cycle,
 acting as our #1 commit contributor during Kilo.
 
 http://stackalytics.com/?project_type=all&module=kolla&metric=commits
 
 Kolla-core members please vote +1/abstain/-1.  Remember that any –1
 vote is a veto.
 
 +1
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] open contrail for nova-network

2015-01-19 Thread Daneyon Hansen (danehans)
All,

I came across this open contrail BP for nova-network:

https://blueprints.launchpad.net/nova/+spec/opencontrail-nova-vif-driver-plugin

I know we have been doing great things in Neutron. I also understand many 
operators are still using nova-network. Any thoughts on contributing to 
nova-network while we and the rest of the community bring Neutron up-to-speed? 
It would be unfortunate to see Juniper develop key relationships with operators 
through their nova-network development efforts.

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-24 Thread Daneyon Hansen (danehans)

Thanks. I appreciate the opportunity to help the team develop Kolla. Have a 
great weekend!

Regards,
Daneyon Hansen, CCIE 9950
Software Engineer
Office of the Cloud CTO
Mobile: 303-718-0400
Office: 720-875-2936
Email: daneh...@cisco.com

 On Oct 24, 2014, at 8:33 AM, Steven Dake sd...@redhat.com wrote:
 
 On 10/23/2014 07:31 AM, Jeff Peeler wrote:
 On 10/22/2014 11:04 AM, Steven Dake wrote:
 A few weeks ago in IRC we discussed the criteria for joining the core
 team in Kolla.  I believe Daneyon has met all of these requirements by
 reviewing patches along with the rest of the core team and providing
 valuable comments, as well as implementing neutron and helping get
 nova-networking implementation rolling.
 
 Please vote +1 or -1 if you're a kolla core.  Recall a -1 is a veto.  It
 takes 3 votes.  This email counts as one vote ;)
 
 definitely +1
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Well that is 4 votes so Daneyon - welcome to the core team of Kolla!
 
 Regards
 -steve
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev