Re: [openstack-dev] [Kuryr] Refactoring into common library and kuryr-libnetwork + Nested_VMs

2016-07-02 Thread Vikas Choudhary
Hi Team,

kuryr-libnetwork needs kuryr (along with the refactoring changes) to be
installed through requirements.txt [1]. If we merge the changes in kuryr needed
to achieve this, we might end up messing up both repos.

To handle this, I have forked the kuryr repo on my github account [2] and am
making the required changes in my forked kuryr.

kuryr-libnetwork patches will resolve the kuryr dependency from my forked kuryr
project (with the required changes). Once kuryr-libnetwork is stable, with all
unit test cases and functional test cases working as they do today in the kuryr
repo, we can start making changes to the kuryr project. This way at least one
repo will be working at all times.
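
Concretely, the kuryr-libnetwork requirements.txt entry would temporarily point
at the fork, roughly like this (the exact line that lands in [1] may differ):

    -e git+https://github.com/vikaschoudhary16/kuryr.git@rpc_ns#egg=kuryr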

I will keep updating patches in kuryr from my forked kuryr project.

thoughts?


Regards
Vikas

[1] https://review.openstack.org/#/c/336661/4/requirements.txt
[2] https://github.com/vikaschoudhary16/kuryr/tree/rpc_ns



On Tue, Jun 28, 2016 at 3:45 PM, Vikas Choudhary  wrote:

>
>
> On Tue, Jun 28, 2016 at 3:41 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Tue, Jun 28, 2016 at 11:54 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Jun 28, 2016 at 1:53 PM, Antoni Segura Puimedon <
>>> toni+openstac...@midokura.com> wrote:
>>>


 On Mon, Jun 27, 2016 at 11:10 AM, Vikas Choudhary <
 choudharyvika...@gmail.com> wrote:

>
>
> On Mon, Jun 27, 2016 at 2:22 PM, Fawad Khaliq 
> wrote:
>
>> Vikas,
>>
>> Thanks for starting this. Where would you classify the segmentation
>> (VLAN ID etc.) allocation engine? Currently, the libnetwork plugin is tied
>> to the API and talks to Neutron; how would the libnetwork and API parts
>> interact?
>>
> As per my current understanding, it should be part of the
> kuryr-controller (server) running on the cluster master node. My proposal is to
> move all of the Neutron API calling part to kuryr-controller and let the libnetwork
> plugin make requests to kuryr-controller.
>

 Right now we have three repositories

 - kuryr
 - kuryr-libnetwork
 - kuryr-kubernetes

 My proposal is that the common code (as described below in Vikas'
 email, this includes the binding code) lives in `kuryr`.
 The kuryr server for the nested swarm case would also live there, as it
 would be a generic rest API.

 The local libnetwork code, the REST server that we have that serves the
 libnetwork ipam and remote driver APIs would live in kuryr-libnetwork.
 For the nested case, I'd put a configuration option to the libnetwork
 driver to prefer the vlan tagging binding script.

>>>
>>> The vlan tagging part looks common to both libnetwork and k8s (CNI). Will it
>>> be present in both repos, kuryr-libnetwork and kuryr-k8s, or can we put
>>> this in the common 'kuryr' as well?
>>>
>>
>> It would be in common kuryr. The configuration option to use it instead
>> of the port type would be defined in both kuryr-libnetwork and kuryr-k8s.
>>
>>
>
> Thanks for the confirmation Toni. Now it totally makes sense.
>
>
>>>

 Both CNI and the API watcher I would put in the kuryr-kubernetes
 repositories under /kuryr/{cni,watcher}

>>>

>
>
>> Fawad Khaliq
>>
>>
>> On Fri, Jun 24, 2016 at 9:45 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> As already discussed with some teammates over IRC and internally, I
>>> thought of bringing the discussion to the ML for more opinions:
>>>
>>> My idea on repo structure is something similar to this:
>>>
>>> kuryr
>>> ├── controller
>>> │   ├── api (running on the controller node (cluster master or openstack
>>> │   │        controller node), talking to other services (neutron))
>>> │   │
>>> │   ├── kubernetes-controller
>>> │   │   └── watcher (for network related services, making api calls)
>>> │   │
>>> │   └── any_other_coe_controller_capable_of_watching_events
>>> │
>>> └── driver
>>>     ├── common (traffic tagging utilities and binding)
>>>     ├── kubernetes (cni)
>>>     ├── libnetwork (network and ipam driver) (for network related
>>>     │                services, making api calls)
>>>     └── any_other_driver (calling the api for nw related services if
>>>                           a watcher is not supported)
>>>
>>>
>>> Thoughts?
>>>
>>>
>>> -Vikas
>>>
>>>
>>>
>>>
>>> -- Forwarded message --
>>> From: Vikas Choudhary 
>>> Date: Thu, Jun 23, 2016 at 2:54 PM
>>> Subject: Re: Kuryr Refactoring into common library and
>>> kuryr-libnetwork + Nested_VMs
>>> To: Antoni Segura Puimedon 
>>>
>>>
>>> @Toni, Can you please explain a bit on how the roles 

[openstack-dev] Syntribos Error : AttributeError: 'tuple' object has no attribute 'headers'

2016-07-02 Thread OpenStack Mailing List Archive

Link: https://openstack.nimeyo.com/89478/?show=89478#q89478
From: run2obtain 

Hi... I tried to use OpenStack Syntribos today for security testing against my devstack kilo cloud. I followed the installation and configuration instructions provided at the openstack syntribos repo. Unfortunately, I received some errors after running the command:

    syntribos keystone.config .opencafe/templates/keystone/roles_get.txt

The errors are:

    File "/usr/local/lib/python2.7/dist-packages/syntribos/extensions/identity/client.py", line 146, in get_token_v3
      return r.headers["X-Subject-Token"]
    AttributeError: 'tuple' object has no attribute 'headers'

I have not been successful at discovering what could be wrong or how to resolve this issue, even after googling. Does anyone have a hint as to how to resolve it? Many thanks for your anticipated response.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-02 Thread Ian Cordasco
On Jul 2, 2016 10:37 AM, "Dan Smith"  wrote:
>
> > The question is whether we should do something like this:
> >
> > 1) As part of the normal execution of the service playbooks;
> > 2) As part of the automated major upgrade (i.e. The step is not
optional);
> > 3) As part of the manual major upgrade (i.e. The step is optional);
> > 4) Never.
>
> I am not an operator, but I would think that #4 is the right thing to
> do. If I want to purge the database, it's going to be based on billing
> reasons (or lack thereof) and be tied to another archival, audit, etc
> policy that the "business people" are involved with. Install and
> configuration of my services shouldn't really ever touch my data other
> than mechanical upgrade scripts and the like, IMHO.
>
> Purging the database only during upgrades is not sufficient for large
> installs, so why artificially tie it to that process? In Nova we don't
> do data migrations as part of schema updates anymore, so it's not like a
> purge is going to make the upgrade any faster...

I agree with this sentiment. If OSA feels like it must provide automation
for purging databases, it should be in the ops repo mentioned earlier.

I see no reason to over extend upgrades with something not inherently
necessary or appropriate for upgrades.

--
Ian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-07-02 Thread Steven Dake (stdake)
Stephen,

Responses inline.

On 7/1/16, 11:35 AM, "Stephen Hindle"  wrote:

>Maybe I missed it - but is there a way to provide site specific
>configurations?  Things we will run into in the wild include:
>Configuring multiple non-openstack nics

We don't have anything like this at present or planned.  Would you mind
explaining the use case?  Typically we in the Kolla community expect a
production deployment to only deploy OpenStack, and not other stacks on
top of the bare metal hardware.  This is generally considered best
practice at this time, unless of course you're deploying something on top of
OpenStack that may need these NICs.  The reason is that OpenStack itself,
when managed alongside another application, doesn't know what it doesn't know
and can't handle capacity management or any of a number of other things
required to make an OpenStack cloud operate.

> IPMI configuration

BiFrost includes IPMI integration - assumption being we will just use
whatever BiFrost requires here for configuration.

> Password integration with Corporate LDAP etc.

We have been asked several times for this functionality, and it will come
naturally during either Newton or Ocata.

> Integration with existing SANs

Cinder integrates with SANs, and in Newton we have integration with
iSCSI.  Unfortunately, because of some controversy around how Glance should
provide images with regard to Cinder, using existing SAN gear with iSCSI
integration as done by Cinder may not work as expected in an HA setup.

> Integration with existing corporate IPAM

No idea

> Corporate Security policy (firewall rules, sudo groups,
>hosts.allow, ssh configs,etc)

This is environment specific and it's hard to make a promise on what we
could deliver in a generic way that would be usable by everyone.
Therefore our generic implementation will be the "bare minimum" to get the
system into an operational state.  The things listed above are outside the
"bare minimum", IIUC.

>
>That's just off the top of my head - I'm sure we'll run into others.  I
>tend to think the best way to approach this is to allow some sort of
>'bootstrap' role that could be populated by the operators.  This should
>initially be empty (Kolla specific 'bootstrap'

Our bootstrap playbook is for launching BiFrost and bringing up the bare
metal machines with an SSH credential.  It appears from this thread we
will have another playbook to do the bare metal initialization (things
like turning off firewalld and turning on chrony, i.e. making the bare metal
environment operational for OpenStack).
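
To illustrate, that initialization boils down to steps roughly like the
following on each host (just a sketch assuming systemd-based hosts; the actual
playbook may differ):

    # disable the firewall and enable time sync (rough equivalent of the playbook tasks)
    systemctl stop firewalld
    systemctl disable firewalld
    yum install -y chrony
    systemctl enable chronyd
    systemctl start chronyd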

I think what you want is a third playbook which really belongs in the
domain of the operators to handle site-specific configuration as required
by corporate rules and the like.


>actions should be
>in another role) to prevent confusion.
>
>We also have to be careful that kolla doesn't stomp on any non-kolla
>configuration...

Could you expand here?  Kolla currently expects the machines under its
control to be only OpenStack machines, and not have other applications
running on them.

Hope that was helpful.

Regards
-steve

>
>
>On Thu, Jun 30, 2016 at 12:43 PM, Mooney, Sean K
> wrote:
>>
>>
>>> -Original Message-
>>> From: Steven Dake (stdake) [mailto:std...@cisco.com]
>>> Sent: Monday, June 27, 2016 9:21 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Subject: Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla +
>>> BiFrost integration
>>>
>>>
>>>
>>> On 6/27/16, 11:19 AM, "Devananda van der Veen"
>>> 
>>> wrote:
>>>
>>> >At a quick glance, this sequence diagram matches what I
>>> >envisioned/expected.
>>> >
>>> >I'd like to suggest a few additional steps be called out, however I'm
>>> >not sure how to edit this so I'll write them here.
>>> >
>>> >
>>> >As part of the installation of Ironic, and assuming this is done
>>> >through Bifrost, the Actor should configure Bifrost for their
>>> >particular network environment. For instance: what eth device is
>>> >connected to the IPMI network; what IP ranges can Bifrost assign to
>>> >physical servers; and so on.
>>> >
>>> >There are a lot of other options during the install that can be
>>> >changed, but the network config is the most important. Full defaults
>>> >for this roles' config options are here:
>>> >
>>> >https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifro
>>> s
>>> >t-i
>>> >ronic-install/defaults/main.yml
>>> >
>>> >and documentation is here:
>>> >
>>> >https://github.com/openstack/bifrost/tree/master/playbooks/roles/bifro
>>> s
>>> >t-i
>>> >ronic-install
>>> >
>>> >
>>> >
>>> >Immediately before "Ironic PXE boots..." step, the Actor must perform
>>> >an action to "enroll" hardware (the "deployment targets") in Ironic.
>>> >This could be done in several ways: passing a YAML file to Bifrost;
>>> >using the Ironic CLI; or something else.
>>> >
>>> >
>>> 

Re: [openstack-dev] [daisycloud-core] Kolla Mitaka requirements supported by CentOS

2016-07-02 Thread Haïkel
2016-07-02 20:42 GMT+02:00 jason :
> Pip Package Name   Supported By CentOS   CentOS Name                  Repo Name
> ===============================================================================
> ansible            yes                   ansible1.9.noarch            epel
> docker-py          yes                   python-docker-py.noarch      extras
> gitdb              yes                   python-gitdb.x86_64          epel
> GitPython          yes                   GitPython.noarch             epel
> oslo.config        yes                   python2-oslo-config.noarch   centos-openstack-mitaka
> pbr                yes                   python-pbr.noarch            epel
> setuptools         yes                   python-setuptools.noarch     base
> six                yes                   python-six.noarch            base
> pycrypto           yes                   python2-crypto               epel
> graphviz           no
> Jinja2             no (Note: Jinja2 2.7.2 will be installed as a dependency by ansible)
>

As one of the RDO maintainers, I strongly invite Kolla not to use EPEL.
It has proven very hard to prevent EPEL from pushing broken updates, or to
push updates to fit OpenStack requirements.

Actually, all the dependencies above except the ansible, docker and git Python
modules are in the CentOS Cloud SIG repositories.
If you are interested in working w/ the CentOS Cloud SIG, we can add the missing
dependencies to our repositories.


>
> As the table above shows, only two packages (graphviz and Jinja2) are not
> currently supported by CentOS. Since those unsupported packages are definitely
> not used by OpenStack or by Daisy itself, we can basically use pip to
> install them after installing the other packages with yum. But note that
> Jinja2 2.7.2 will be installed as a dependency when yum installs
> ansible, so we need to use pip to install Jinja2 2.8 after that to
> override the old one. Also note that we must make sure pip is ONLY used
> for installing those two unsupported packages.
>
> But before you try to use pip, please consider these:
>
> 1) graphviz is just for saving the image dependency graph text file; it is not
> used by default and is only used in the build process if it is configured to
> be used.
>
> 2) A Jinja2 RPM can be found at
> http://koji.fedoraproject.org/koji/packageinfo?packageID=6506, which I
> think is suitable for CentOS. I have tested it.
>
> So, as far as the Kolla deploy process is concerned, there is no need to use
> pip to install graphviz and Jinja2. Furthermore, if we do not install
> Kolla either, then we can get rid of pip totally!
>
> I encourage all of you to think about not using pip any more for
> Daisy+Kolla, because pip has a lot of overlap with yum/rpm; files
> may be overridden back and forth if they are not used carefully. So not
> using pip will make things easier and keep the jump server cleaner.
> Any ideas?
>
>
> Thanks,
> Zhijiang
>
> --
> Yours,
> Jason
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [daisycloud-core] Kolla Mitaka requirements supported by CentOS

2016-07-02 Thread Dave Walker
On 2 July 2016 at 19:42, jason  wrote:

>
>
> As the table above shows, only two packages (graphviz and Jinja2) are not
> currently supported by CentOS. Since those unsupported packages are definitely
> not used by OpenStack or by Daisy itself, we can basically use pip to
> install them after installing the other packages with yum. But note that
> Jinja2 2.7.2 will be installed as a dependency when yum installs
> ansible, so we need to use pip to install Jinja2 2.8 after that to
> override the old one. Also note that we must make sure pip is ONLY used
> for installing those two unsupported packages.
>
> 

Whilst this appears to be accurate, it would probably not be an appropriate
change for stable/mitaka.  However, it is probably worth checking the
current development focus, master, which will become Newton
(kolla/docker/openstack-base/Dockerfile.j2), and seeing if this is still an
issue.  A bunch of CentOS binary improvements were made this cycle
to make more use of yum packages [0].

[0]
https://github.com/openstack/kolla/commit/a8f3da204e6d6ae42b30c166d436d74064394a1b

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Kolla Mitaka requirements supported by CentOS

2016-07-02 Thread jason
Pip Package Name   Supported By CentOS   CentOS Name                  Repo Name
===============================================================================
ansible            yes                   ansible1.9.noarch            epel
docker-py          yes                   python-docker-py.noarch      extras
gitdb              yes                   python-gitdb.x86_64          epel
GitPython          yes                   GitPython.noarch             epel
oslo.config        yes                   python2-oslo-config.noarch   centos-openstack-mitaka
pbr                yes                   python-pbr.noarch            epel
setuptools         yes                   python-setuptools.noarch     base
six                yes                   python-six.noarch            base
pycrypto           yes                   python2-crypto               epel
graphviz           no
Jinja2             no (Note: Jinja2 2.7.2 will be installed as a dependency by ansible)


As the table above shows, only two packages (graphviz and Jinja2) are not
currently supported by CentOS. Since those unsupported packages are definitely
not used by OpenStack or by Daisy itself, we can basically use pip to
install them after installing the other packages with yum. But note that
Jinja2 2.7.2 will be installed as a dependency when yum installs
ansible, so we need to use pip to install Jinja2 2.8 after that to
override the old one. Also note that we must make sure pip is ONLY used
for installing those two unsupported packages.
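
Concretely, that would look roughly like this (package names as per the table
above, assuming the base/extras/epel/centos-openstack-mitaka repos are enabled;
see the caveats below before going the pip route):

    yum install -y ansible1.9 python-docker-py python-gitdb GitPython \
        python2-oslo-config python-pbr python-setuptools python-six python2-crypto
    pip install 'Jinja2>=2.8'   # overrides the Jinja2 2.7.2 pulled in by ansible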

But before you try to use pip, please consider these:

1) graphviz is just for saving the image dependency graph text file; it is not
used by default and is only used in the build process if it is configured to
be used.

2) A Jinja2 RPM can be found at
http://koji.fedoraproject.org/koji/packageinfo?packageID=6506, which I
think is suitable for CentOS. I have tested it.

So, as far as the Kolla deploy process is concerned, there is no need to use
pip to install graphviz and Jinja2. Furthermore, if we do not install
Kolla either, then we can get rid of pip totally!

I encourage all of you to think about not using pip any more for
Daisy+Kolla, because pip has a lot of overlap with yum/rpm; files
may be overridden back and forth if they are not used carefully. So not
using pip will make things easier and keep the jump server cleaner.
Any ideas?


Thanks,
Zhijiang

-- 
Yours,
Jason

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-dev] The QoS feature minimum guaranteed bandwidth of OpenvSwitch not work

2016-07-02 Thread Ben Pfaff
On Sat, Jul 02, 2016 at 03:17:21PM +, Xiao Ma (xima2) wrote:
> Hi , Ben
> 
> Thanks for your reply.
> 
> I configured bond0 using a Linux bond, and then the result is not good for
> minimum guaranteed bandwidth.
> But if I use ovs-vsctl add-bond to configure the bond0 interface, the result
> is good.
> If I use a Linux bond with Linux tc, tc works well also.
> 
>   However, most problems with QoS on Linux are not bugs in Open
>   vSwitch at all.  They tend to be either configuration errors
>   (please see the earlier questions in this section) or issues with
> 
> Could you send  me the section link?

https://github.com/openvswitch/ovs/blob/master/FAQ.md#quality-of-service-qos
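
For reference, the working variant described above (an OVS bond plus linux-htb
QoS) looks roughly like this; bridge/port names and rates (bits per second) are
placeholders:

    ovs-vsctl add-bond br0 bond0 eth0 eth1
    ovs-vsctl set port bond0 qos=@newqos -- \
      --id=@newqos create qos type=linux-htb other-config:max-rate=1000000000 \
      queues:0=@q0 -- \
      --id=@q0 create queue other-config:min-rate=500000000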

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-02 Thread Dan Smith
> The question is whether we should do something like this:
> 
> 1) As part of the normal execution of the service playbooks;
> 2) As part of the automated major upgrade (i.e. The step is not optional);
> 3) As part of the manual major upgrade (i.e. The step is optional);
> 4) Never.

I am not an operator, but I would think that #4 is the right thing to
do. If I want to purge the database, it's going to be based on billing
reasons (or lack thereof) and be tied to another archival, audit, etc
policy that the "business people" are involved with. Install and
configuration of my services shouldn't really ever touch my data other
than mechanical upgrade scripts and the like, IMHO.

Purging the database only during upgrades is not sufficient for large
installs, so why artificially tie it to that process? In Nova we don't
do data migrations as part of schema updates anymore, so it's not like a
purge is going to make the upgrade any faster...

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-02 Thread Wade Holler
+1 "As part of the normal execution of the service playbooks"

w/ a user_variable to turn it on/off of course; whether it defaults to on or
off is really the designer's choice. Some production clouds will have the
playbooks run against them frequently, others not so much.
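
For example, the sort of opt-in toggle being suggested would live in
/etc/openstack_deploy/user_variables.yml; the variable name below is purely
hypothetical, just to illustrate the idea:

    # hypothetical option name -- illustrates the opt-in/opt-out idea only
    service_db_purge_enabled: false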




On Sat, Jul 2, 2016 at 7:43 AM Jesse Pretorius <
jesse.pretor...@rackspace.co.uk> wrote:

> On 7/1/16, 8:48 PM, "Ian Cordasco"  wrote:
>
>
>
> >-Original Message-
> >From: Jesse Pretorius 
> >
> >> In a recent conversation on the Operators list [1] there was a
> discussion about purging
> >> archived data in the database. It would seem to me an important step in
> maintaining an
> >> environment which should be done from time to time and perhaps at the
> very least prior
> >> to a major upgrade.
> >>
> >> What’re the thoughts on how often this should be done? Should we
> include it as an opt-in
> >> step, or an opt-out step?
> >>
> >> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2016-June/010813.html
> >
> >Is OpenStack-Ansible now going to get into the minutae of operating
> >the entire cloud? I was under the impression that it left the easy
> >things to the operators (e.g., deciding when and how to purge the
> >database) while taking care of the things that are less obvious
> >(setting up OpenStack, and interacting with the database directly to
> >only set up things for the services).
>
> That’s a good question which betrays the fact that I phrased my question
> poorly. :)
>
> The question is whether we should do something like this:
>
> 1) As part of the normal execution of the service playbooks;
> 2) As part of the automated major upgrade (i.e. The step is not optional);
> 3) As part of the manual major upgrade (i.e. The step is optional);
> 4) Never.
>
> If never, it might be useful for our community to curate a few bits of
> operations tooling to automate this sort of thing on demand. The tooling
> can be placed into the Ops repository [1] if this is done.
>
> [1] https://github.com/openstack/openstack-ansible-ops
>
> 
> Rackspace Limited is a company registered in England & Wales (company
> registered number 03897010) whose registered office is at 5 Millington
> Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy
> can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail
> message may contain confidential or privileged information intended for the
> recipient. Any dissemination, distribution or copying of the enclosed
> material is prohibited. If you receive this transmission in error, please
> notify us immediately by e-mail at ab...@rackspace.com and delete the
> original message. Your cooperation is appreciated.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-02 Thread Jesse Pretorius
On 7/1/16, 8:48 PM, "Ian Cordasco"  wrote:



>-Original Message-
>From: Jesse Pretorius 
>
>> In a recent conversation on the Operators list [1] there was a discussion 
>> about purging
>> archived data in the database. It would seem to me an important step in 
>> maintaining an
>> environment which should be done from time to time and perhaps at the very 
>> least prior
>> to a major upgrade.
>>
>> What’re the thoughts on how often this should be done? Should we include it 
>> as an opt-in
>> step, or an opt-out step?
>>
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-operators/2016-June/010813.html
>
>Is OpenStack-Ansible now going to get into the minutae of operating
>the entire cloud? I was under the impression that it left the easy
>things to the operators (e.g., deciding when and how to purge the
>database) while taking care of the things that are less obvious
>(setting up OpenStack, and interacting with the database directly to
>only set up things for the services).

That’s a good question which betrays the fact that I phrased my question 
poorly. :)

The question is whether we should do something like this:

1) As part of the normal execution of the service playbooks;
2) As part of the automated major upgrade (i.e. The step is not optional);
3) As part of the manual major upgrade (i.e. The step is optional);
4) Never.

If never, it might be useful for our community to curate a few bits of 
operations tooling to automate this sort of thing on demand. The tooling can
be placed into the Ops repository [1] if this is done.

[1] https://github.com/openstack/openstack-ansible-ops


Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-07-02 Thread Ihar Hrachyshka
I was thinking about your reply for some time. I think I now have some
constructive bits to add.

> On 30 Jun 2016, at 18:50, Doug Wiegley  wrote:
> 
> 
>> On Jun 30, 2016, at 7:01 AM, Ihar Hrachyshka  wrote:
>> 
>> 
>>> On 30 Jun 2016, at 06:03, Kosnik, Lubosz  wrote:
>>> 
>>> Like Doug said, the Amphora is supposed to be a black box. It is supposed to get
>>> some data - like info in /etc/defaults - and do everything inside on its own.
>>> Everyone will be able to prepare their own implementation of this image
>>> without mixing things between each other.
>> 
>> That would be correct if the image would not be maintained by the project 
>> itself. Then indeed every vendor would prepare their own image, maybe 
>> collaborate on common code for that. Since this code is currently in 
>> octavia, we kinda need to plug into it for other vendors. Otherwise you pick 
>> one and give it a preference.
> 
> No, I disagree with that premise, because it pre-supposes that we have any 
> interest in supporting *this exact reference implementation* for any period 
> of time.
> 
> Octavia has a few goals:
> 
> - Present an openstack loadbalancing API to operators and users.
> - Put VIPs on openstack clouds, that do loadbalancy things, and are always 
> there and working.
> - Upgrade seamlessly.
> 
> That’s it. A few more constraints:
> 
> - It’s an openstack project, so it must be python, with our supported 
> version, running on our supported OSs, using our shared libraries, being 
> open, level playing field, etc…
> 
> Nowhere in there is the amp concept, or that we must always require nova, or 
> that said amps must run a REST agent, or anything about the load-balancing 
> backend. The amp itself, and all the code written for it, is just a means to
> an end. If the day comes tomorrow that the amp agent and amp concept is 
> silly, as long as we have a seamless upgrade and those VIPs keep operating, 
> we are under no obligation as a project to keep using that amp code or 
> maintaining it. Our obligation is to the operators and users.
> 

You assume that operators and users don’t care about the reference implementation
and its internals. That couldn’t be further from the truth. Architecture matters to
operators, since it often defines how it’s used, if at all.

Another thing that matters is whether the team behind the architecture provides
compatibility guarantees for an extended period of time. You can’t just switch
designs every second cycle and expect operators and distributions to catch up.
When you plan for a transition, backwards compatibility should be at the core
of the discussion.

> The amp “agent” code has already gone through two iterations (direct ssh, now 
> a silly rest agent on the amp). We’ve already discussed that the current 
> ubuntu based amp is too heavy-weight and needs to change. Tomorrow it could 
> be based on a microlinux. And the day after that, cirros plus a static nginx. 
> And the day after that, a docker swarm with a proxy running on a simulated
> minecraft redstone machine (well, we’d have to find an open-source clone of 
> minecraft, first.)

That does not sound like a reasonable approach. Operators and distributions
cannot be expected to adapt to your new cool ways every cycle. Please pick an
implementation that is good enough and stick to it for an extended time.

Yes, I know the lbaas project is generally more experimental (starting with v1,
switching to v2, getting it out of experimental only to deprecate it right away
and switch endpoints to octavia, which uses an absolutely different reference
architecture without providing any migration path, …)

But maybe that’s not a thing to be proud of, and it’s time to stop.

> 
> The point being, as a project contributor, I have zero interest in signing up 
> for long-term maintenance of something that 1) is not user visible, and 2) is 
> likely to change; all for the sake of any particular vendors sensibilities. 
> The current octavia will run just fine on ubuntu or redhat, and the current 
> amp image will launch just fine on a nova run by either, too.

There was always an expectation in the neutron community that we provide reasonable
plug points to vendors, both distributions and networking vendors,
and that we accommodate a wide variety of technologies.

> 
> That said, every part of octavia is pluggable with drivers, and while I will 
> personally resist adding multiple reference drivers in-tree, it doesn’t mean 
> everyone will, nor does it preclude using shims and external repos.

While Octavia itself is indeed pluggable, those plugging points are too high
level, leaving alternative distributions to reimplement the whole stack. In
this particular case, other distributions can indeed craft their own images
with a customized amp agent. The problem is that, by not being given any real
plugging points to leverage, we are effectively being told to fork the whole agent. If
that happens, I

Re: [openstack-dev] [lbaas][octavia] suggestion for today's meeting agenda: How to make the Amphora-agent support additional Linux flavors

2016-07-02 Thread Ihar Hrachyshka

> On 30 Jun 2016, at 06:03, Kosnik, Lubosz  wrote:
> 
> Like Doug said, the Amphora is supposed to be a black box. It is supposed to get some
> data - like info in /etc/defaults - and do everything inside on its own.
> Everyone will be able to prepare their own implementation of this image without
> mixing things between each other.

I actually don’t see too much mixing here; at least in the patch that Nir
published, implementation-specific details are nicely isolated per flavour.
When we have the Ubuntu-with-systemd case, we can easily reshuffle the code a bit
more for better reuse.

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-07-02 Thread HU, BIN
Thank you Matt and Armando for the additional background information, and the offer to
help us. We really appreciate it.

Let me explain our requirement a bit.

We are telco operators, and we are moving toward the NFV domain, which is a whole
new domain of business and technology. A new domain means two things:

-  Brand new business opportunities and market segments

-  Brand new technologies that need exploration and experiment, 
especially in NFV networking.

NFV brings opportunities, which also means new challenges. For example, there 
are new NFV networking use cases today, such as MPLS and L3VPN. But more 
importantly, because of the **green field** nature of NFV, we foresee there 
will be whole new NFV networking use cases (that bring new business) in the near
future. And the challenge for us is to:

-  Quickly catch the business opportunity

-  Execute it in an agile way so that we can accelerate the time-to-market
and improve our business agility in offering our customers those new NFV
services.

“Interoperability” is always our favorite term. Being an operator, we know how 
important interoperability is. Without interoperability, there is no global
mobile network for users to have one device and connect anywhere.

However, in the **green field** of NFV, when the services and technologies are
still being developed, and every service provider is offering diversified, innovative
services and bringing them to customers in an agile way, “interoperability” is not
a top priority at this moment IMHO. Quick development and deployment,
time-to-market and business agility are the key to growing everyone’s business in
the **green field**. When NFV services get stabilized at a later stage, then we
need to emphasize “interoperability” at that time.

All of this background brings an interesting challenge to Nova and Neutron too –
how can we better balance the need for interoperability (i.e. tight control)
vs. the need to penetrate the new market opportunity of the NFV green field (i.e.
the wild west)?

That is why we developed Gluon, a model-driven, extensible framework for NFV 
networking services. This framework aims to:

-  Work with and interoperate with Neutron for any existing networking 
services that Neutron is supporting today

-  Provide extensibility to support new, unknown NFV networking
services in the green field in an agile way, i.e. NFV networking on-demand.

Think about the early days of Nova networking: everything was exploratory and
experimental. It matured over time. In the green field of NFV networking, we
need exploration and experiment too, because it encourages innovation. When an
NFV service gets mature, we will focus on its interoperability then.

We are looking forward to working with Nova and Neutron team to consolidate 
Gluon with Nova and Neutron. The goal is to

-  Make sure the current interoperability model of stable APIs and services
will be kept as is

-  Make sure the current networking services and ongoing efforts supported
by Neutron will keep going as is

-  Allow an extensibility model/framework (Gluon), which can connect
with Nova and Neutron too, for us to explore the green field of NFV networking
services and business in an agile way and meet the time-to-market need,
given the **unknown** nature of new, innovative services in the NFV green field

-  Whenever NFV networking services and business get stabilized and
matured, move them to the interoperability model for those stable APIs and services
as we are currently doing

Let me know what you think and we are looking forward to working with you.

Thanks
Bin

From: Armando M. [mailto:arma...@gmail.com]
Sent: Friday, July 01, 2016 3:32 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova 
Mitaka Release



On 30 June 2016 at 10:55, HU, BIN > wrote:
I see, and thank you very much Dan. Also thank you Markus for unreleased 
release notes.

Now I understand that it is not a plugin and unstable interface. And there is a 
new "use_neutron" option for configuring Nova to use Neutron as its network 
backend.

When we use Neutron, there are ML2 and ML3 plugins so that we can choose to use 
different backend providers to actually perform those network functions. For 
example, integration with ODL.

There's no such thing as ML3, not yet anyway, and not in the same shape as ML2.


Shall we foresee a situation where the user can choose another network backend
directly, e.g. ODL or ONOS? Under such circumstances, a stable plugin interface
seems needed, which would provide end users with more options and flexibility in
deployment.

The networking landscape is dramatically different from the one Nova 
experiences and even though I personally share the same ideals and desire to 
strive for interoperability across OpenStack clouds, the