On Tue, Sep 13, 2016 at 8:35 PM, Hongbin Lu <hongbin...@gmail.com> wrote:
>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu <hongbin...@gmail.com> wrote:
>>
>>> Ivan,
>>>
>>> Thanks for the proposal. From Magnum's point of view, this proposal
>>> does not seem to require storing neutron/rabbitmq credentials in tenant
>>> VMs, which is desirable. I am looking forward to the PoC.
>>>
>>
>> Hongbin, can you please elaborate on why this will not require storing
>> neutron credentials?
>> For example, in the libnetwork case, neutron commands like "show_port"
>> and "update_port" will still need to be invoked from inside the VM.
>>
>
> In a typical COE cluster, there are master nodes and worker (minion/slave)
> nodes. Regarding credentials, the following is optimal:
> * Avoid storing credentials in worker nodes. If credentials have to be
> stored, move them to master nodes if we can (containers run in worker
> nodes, so credentials stored there carry a higher risk). A question for
> you: do neutron commands like "show_port" and "update_port" need to be
> invoked from worker nodes or from master nodes?

VIKAS>> That will depend on the kuryr configuration. There will be two
choices:
1. Use 'rest_driver' for neutron communication (making calls directly from
where the libnetwork driver is running; it could be a VM or baremetal).
2. Use 'rpc_driver'. The flow that Toni described assumes that rpc_driver
is used, so as he explained, kuryr-libnetwork in the VM will talk to the
kuryr daemon over RPC for neutron services.
IMO, the part above will be common to both approaches, address-pairs based
or vlan-aware-vms based.

> * If credentials have to be stored, scope them with least privilege
> (Magnum uses Keystone trust for this purpose).
>
>> Overall, I liked this approach given its simplicity over vlan-aware-vms.
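[Editor's note] The port lookup discussed above (finding the VM's Neutron port so "show_port"/"update_port" can be invoked against it) can be sketched as a small Python helper. This is an illustrative sketch, not kuryr code: the function name and data shapes are assumptions, modelled on the "ports" list returned by Neutron's GET /v2.0/ports (python-neutronclient's `list_ports()`); the live client call is only indicated in a comment.

```python
# Sketch: locate the Neutron port backing this VM by one of its fixed IPs.
# The matching logic is pure so it can be shown without a live cloud;
# "find_port_id_by_ip" is an illustrative name, not an actual kuryr API.

def find_port_id_by_ip(ports, vm_ip):
    """Return the id of the first port owning vm_ip, or None.

    `ports` is the list under the "ports" key of a GET /v2.0/ports
    response (what python-neutronclient's list_ports() returns).
    """
    for port in ports:
        for fixed_ip in port.get("fixed_ips", []):
            if fixed_ip.get("ip_address") == vm_ip:
                return port["id"]
    return None

# Against a live cloud this would be driven by python-neutronclient, e.g.:
#   ports = neutron.list_ports()["ports"]
#   port_id = find_port_id_by_ip(ports, vm_ip)
```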
>>
>> -VikasC
>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan <ivan.cough...@intel.com>
>>> wrote:
>>>
>>>> *Overview*
>>>>
>>>> Kuryr proposes to address the issues of double encapsulation and
>>>> exposure of containers as neutron entities when containers are running
>>>> within VMs.
>>>>
>>>> As an alternative to vlan-aware-vms and the use of OVS within the VM,
>>>> we propose to:
>>>>
>>>> - Use the allowed-address-pairs configuration for the VM neutron port
>>>> - Use IPVLAN for wiring the containers within the VM
>>>>
>>>> In this way, we:
>>>>
>>>> - Achieve an efficient data path to the container within the VM
>>>> - Better leverage OpenStack EPA (Enhanced Platform Awareness) features
>>>>   to accelerate the data path (more details below)
>>>> - Mitigate the risk of vlan-aware-vms not making it into neutron in time
>>>> - Provide a solution that works on current and previous OpenStack
>>>>   releases
>>>>
>>>> This work should be done in a way that lets the user optionally select
>>>> this feature.
>>>>
>>>> *Required Changes*
>>>>
>>>> The four main changes we have identified in the current kuryr codebase
>>>> are as follows:
>>>>
>>>> - Introduce an option to enable the "IPVLAN in VM" use case. This can
>>>>   be achieved via a config file option or possibly a command line
>>>>   argument. The IPVLAN master interface must also be identified.
>>>> - When the "IPVLAN in VM" use case is enabled, Kuryr should no longer
>>>>   create a new port in Neutron or the associated veth pairs. Instead,
>>>>   Kuryr will create a new IPVLAN slave interface on top of the VM's
>>>>   master interface and pass this slave interface to the container netns.
>>>> - When the "IPVLAN in VM" use case is enabled, the VM's port ID needs
>>>>   to be identified so we can associate the additional IPVLAN addresses
>>>>   with the port. This can be achieved by querying Neutron's show-port
>>>>   function and passing the VM's IP address.
>>>> - When the "IPVLAN in VM" use case is enabled, Kuryr should associate
>>>>   the additional IPVLAN addresses with the VM's port. This can be
>>>>   achieved using Neutron's allowed-address-pairs flag in the
>>>>   port-update function. We intend to make use of Kuryr's existing IPAM
>>>>   functionality to request these IPs from Neutron.
>>>>
>>>> *Asks*
>>>>
>>>> We wish to discuss the pros and cons.
>>>> For example, the exposure of containers as proper neutron entities and
>>>> the utility of neutron's allowed-address-pairs are not yet well
>>>> understood.
>>>>
>>>> We also wish to understand whether this approach is acceptable for
>>>> kuryr.
>>>>
>>>> *EPA*
>>>>
>>>> The Enhanced Platform Awareness initiative is a continuous program to
>>>> enable fine-tuning of the platform for virtualized network functions.
>>>> This is done by exposing the processor and platform capabilities
>>>> through the management and orchestration layers.
>>>> When a virtual network function is instantiated by an Enhanced Platform
>>>> Awareness enabled orchestrator, the application requirements can be more
>>>> efficiently matched with the platform capabilities.
>>>>
>>>> http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>>>> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>>>> https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo
>>>>
>>>> Regards,
>>>> Ivan....
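[Editor's note] The last required change above (associating the container IPs with the VM's port via allowed-address-pairs in port-update) can be sketched as follows. This is an illustrative sketch, not the proposed kuryr patch: the helper name is an assumption, and the payload shape follows Neutron's allowed-address-pairs extension (a list of {"ip_address", "mac_address"} dicts). Since IPVLAN slave interfaces share the master interface's MAC address, the pair reuses the VM port's own MAC; the actual `update_port()` call is only indicated in a comment.

```python
# Sketch: build the allowed_address_pairs list that whitelists a
# container IP on the VM's Neutron port, so traffic from the IPVLAN
# slave passes the port's anti-spoofing rules. Pure payload
# construction; no live Neutron call is made here.

def add_allowed_address_pair(existing_pairs, container_ip, vm_mac):
    """Return existing_pairs extended with the container's IP/MAC pair.

    IPVLAN slaves share the master interface's MAC, so vm_mac is the
    VM port's own MAC address. Idempotent: an already-present pair is
    not duplicated.
    """
    pair = {"ip_address": container_ip, "mac_address": vm_mac}
    if pair in existing_pairs:
        return list(existing_pairs)
    return list(existing_pairs) + [pair]

# Against a live Neutron this would be applied with, e.g.:
#   port = neutron.show_port(port_id)["port"]
#   pairs = add_allowed_address_pair(
#       port["allowed_address_pairs"], container_ip, port["mac_address"])
#   neutron.update_port(port_id,
#                       {"port": {"allowed_address_pairs": pairs}})
```

For the wiring side, the slave interface itself would be created inside the VM with something like `ip link add link eth0 name ipvlan0 type ipvlan mode l2` before being moved into the container netns (interface names here are illustrative).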
>>>>
>>>> --------------------------------------------------------------
>>>> Intel Research and Development Ireland Limited
>>>> Registered in Ireland
>>>> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>>>> Registered Number: 308263
>>>>
>>>> This e-mail and any attachments may contain confidential material for
>>>> the sole use of the intended recipient(s). Any review or distribution
>>>> by others is strictly prohibited. If you are not the intended
>>>> recipient, please contact the sender and delete all copies.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev