Re: [openstack-dev] [Zun] Document about how to deploy Zun with multiple hosts

2017-03-22 Thread Kevin Zhao
Hongbin,

I see. Thanks for your advice. I can deploy multiple compute hosts now
by following the guide.


Best Regards,
Kevin Zhao

On 23 March 2017 at 11:14, Hongbin Lu  wrote:

> Kevin,
>
>
>
> I don’t think there is any such document right now. I submitted a ticket
> for creating one:
>
>
>
> https://bugs.launchpad.net/zun/+bug/1675245
>
>
>
> There is a guide for setting up a multi-host devstack environment:
> https://docs.openstack.org/developer/devstack/guides/multinode-lab.html .
> You could possibly use it as a starting point and inject Zun-specific
> configuration there. The guide divides nodes into two kinds: cluster
> controller and compute node. In the case of Zun, zun-api and zun-compute can
> run on the cluster controller, and zun-compute can run on each compute node.
> Hope it helps.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kevin Zhao [mailto:kevin.z...@linaro.org]
> *Sent:* March-22-17 10:39 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Zun] Document about how to deploy Zun with
> multiple hosts
>
>
>
> Hi guys,
>
> I want to try Zun on multiple hosts, but I didn't find the
> doc about how to deploy it.
>
> Where is the document that shows users how to deploy Zun
> on multiple hosts? That would make development easier.
>
> Thanks  :-)
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread zhu.fanglei
The current microversion mechanism looks a bit strange to me:

https://github.com/openstack/tempest/blob/c0223906280619b6eb1ffb3fa200136fd3050528/tempest/api/volume/v3/base.py#L49-L52

That means we set the microversion in setUp and clear it in tearDown, which is
strange, because:

1) We never set the microversion at the test-case level, i.e., we only set the
microversion at the class level, something like:

class KeyPairsV210TestJSON(base.BaseKeypairTest):
    min_microversion = '2.10'

2) If the microversion is set to None in tearDown, then in resource_cleanup
the client will not have the correct microversion to work with.

To sum up, why do we set and clear the microversion at the test-case level
rather than at the test-class level?










Original Mail



Sender:  <ken1ohmi...@gmail.com>
To:  <openstack-dev@lists.openstack.org>
Date: 2017/03/23 11:24
Subject: Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest





2017-03-22 19:31 GMT-07:00  <zhu.fang...@zte.com.cn>:
> To have only one folder (tempest/api/volume/) looks really good, and do we
> plan to deem "api_version" and "microversion" as one thing?
>
> I.e., will we use the microversion mechanism to skip new v3 functional
> tests when the environment only supports v2?

Yeah, that is right.
Tempest defines the range of microversions with the config options
min/max_microversion, and we can select the target microversions.
If both min_microversion and max_microversion are left unspecified
(i.e. None), microversion tests run as described in the help message [1].

So the configuration would be like:

gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job): testing
Cinder V3 API
  catalog_type: Specify Cinder V3 API's one
  min_microversion: Don't specify
  max_microversion: Specify max microversion of the branch (master, stable)

gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
testing Cinder V2 API
  catalog_type: Specify Cinder V2 API's one
  min_microversion: Don't specify
  max_microversion: Don't specify
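For illustration, the two job setups above could map to tempest.conf fragments roughly like the following -- assuming the options live in the [volume] group; all values are illustrative rather than authoritative:

```ini
# Hedged sketch only -- option group and values are assumptions.

# V3 job (gate-tempest-dsvm-neutron-full-ubuntu-xenial):
[volume]
catalog_type = volumev3
# min_microversion left unset (None)
max_microversion = 3.27

# V2 job (gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2) would
# instead set:
#   catalog_type = volumev2
# and leave both min_microversion and max_microversion unset.
```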

Thanks

---
[1]: https://github.com/openstack/tempest/blob/master/tempest/config.py#L776


> On Thu, Mar 23, 2017 at 7:02 AM, Ken'ichi Ohmichi <ken1ohmi...@gmail.com>
> wrote:
>>
>> 2017-03-22 14:32 GMT-07:00 Andrea Frittoli <andrea.fritt...@gmail.com>:
>> >
>> >
>> > On Wed, Mar 22, 2017 at 8:31 PM Sean McGinnis <sean.mcgin...@gmx.com>
>> wrote:
>> >>
>> >> On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
>> >> > Hi,
>> >> >
> >> >> > Now we need to update Tempest to follow the Cinder API status.
> >> >> > I have an idea for restructuring and would be happy to see feedback about
>> that.
>>
>> >> >
>> >> > Now Cinder API status is
>> >> >   V1: Deprecated
>> >> >   V2: Deprecated
>> >> >   V3: Current
>> >> > V1 API tests have been removed from Tempest side already, so we just
>> >> > need to concentrate on V2 and V3 now.
>> >>
>> >> >
>> >> > **Gate jobs**
>> >> > Most Cinder tests are implemented for V2 API on Tempest side and the
>> >> > base microversion of V3 is the same as V2.
>> >> > Then we can re-use V2 API tests for the base microversion of V3 API.
>> >> > One idea is that we can have Cinder V3 API tests as the default on
>> the
>> >> > gate jobs and the V2 API tests as another job like the following
>> >> > because the V2 API is deprecated.
>> >> >
>> >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
>> >> > testing Cinder V3 API
>> >> >   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing
>> Cinder
>> >> > V3 API
>> >> >   ...
>> >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
>> >> > testing Cinder V2 API
>> >> >
>> >>
>> > I guess this job would run against tempest and cinder only?
>>
>> A nice point, I think so.
>>
>> >> +1 I like this idea.
>> >>
> >> >> > We had the same testing approach for the Nova V2 and V2.1 APIs before, and
> >> >> > we could avoid copying V2 test code for the V2.1 API on Tempest.
>> >> >
>> >> > **Test Structure**
>> >> > Current test structure is like:
>> >> >   tempest/api/volume/  - V2 API tests
>> >> >   tempest/api/volume/v2 - V2 API tests
>> >> >   tempest/api/volume/v3 - V3 API tests
> >> >> > Yes, this is a mess.
>> >> > For re-using V2 API tests for V3 API, it would be better to remove
>> >> > "v2" from V2 API tests for avoiding confusions.
>> >> >
>> >> > A new structure could be
>> >> >   tempest/api/volume/  - All tests for V2 API and the base
>> >> > microversion of V3 API
>> >> >   tempest/api/volume/v3 - V3 API specific tests for newer
>> microversions
>> >> > or
>> >> >   tempest/api/volume/  - All tests for V2 API and V3 API which
>> >> > includes newer microversions
>
>
> +1, this looks better, as there are no more version-specific tests and all
> v2 tests should run as-is on the v3 base version.
>
>
>>
>> >> >
> >> >> > For reference, the Nova API structure is like the latter.
>> >>
>> >> I like the last one better as well.
>> >>
>> > My favourite option would be the one that generates less churn in the code
>> :)
>> > One folder for everything means moving 4 

Re: [openstack-dev] Is there some way to run specific unittest in neutron?

2017-03-22 Thread Armando M.
On 22 March 2017 at 22:19, Sam  wrote:

> Hi all,
>
> I'm working on neutron, I add some code into ovs_neutron_agent.py, and I
> extend test_ovs_neutron_agent.py.
>
> Is there some way to run test_ovs_neutron_agent.py or run related module
> only?
>
> Thank you.
>

You should find your answer in [1].

[1]
https://docs.openstack.org/developer/neutron/devref/development.environment.html#running-individual-tests


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is there some way to run specific unittest in neutron?

2017-03-22 Thread Kevin Benton
tox -epy27 test_ovs_neutron_agent

On Mar 22, 2017 22:25, "Sam"  wrote:

> Hi all,
>
> I'm working on neutron, I add some code into ovs_neutron_agent.py, and I
> extend test_ovs_neutron_agent.py.
>
> Is there some way to run test_ovs_neutron_agent.py or run related module
> only?
>
> Thank you.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is there some way to run specific unittest in neutron?

2017-03-22 Thread Sam
Hi all,

I'm working on neutron, I add some code into ovs_neutron_agent.py, and I
extend test_ovs_neutron_agent.py.

Is there some way to run test_ovs_neutron_agent.py or run related module
only?

Thank you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - adjusting Monday IRC meeting time and drivers meeting time

2017-03-22 Thread Armando M.
On 22 March 2017 at 21:39, Armando M.  wrote:

>
>
> On 21 March 2017 at 02:00, Kevin Benton  wrote:
>
>> Hi everyone,
>>
>> The recent DST switch has caused several conflicts for the Monday IRC
>> meeting time and the drivers meeting time.
>>
>> I am going to adjust the Monday meeting time to 1 hour earlier[1] and the
>> drivers meeting time to 6 hours earlier to (1600 UTC).
>>
>> The Monday meeting will now be on openstack-meeting-4 to work around
>> other conflicts!
>>
>> https://review.openstack.org/447961
>>
>
> I would have liked some discussion before approving this; I am afraid I
> can make neither meeting on a regular basis.
>

Proposed revert in [1] to give time and opportunity to comment on whether
the new time works for the majority of folks.

[1] https://review.openstack.org/#/c/448886/


>
>
>>
>> Cheers,
>> Kevin Benton
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - adjusting Monday IRC meeting time and drivers meeting time

2017-03-22 Thread Kevin Benton
I'm okay with the revert here that you proposed:
https://review.openstack.org/#/c/448886/

On Wed, Mar 22, 2017 at 9:39 PM, Armando M.  wrote:

>
>
> On 21 March 2017 at 02:00, Kevin Benton  wrote:
>
>> Hi everyone,
>>
>> The recent DST switch has caused several conflicts for the Monday IRC
>> meeting time and the drivers meeting time.
>>
>> I am going to adjust the Monday meeting time to 1 hour earlier[1] and the
>> drivers meeting time to 6 hours earlier to (1600 UTC).
>>
>> The Monday meeting will now be on openstack-meeting-4 to work around
>> other conflicts!
>>
>> https://review.openstack.org/447961
>>
>
> I would have liked some discussion before approving this; I am afraid I
> can make neither meeting on a regular basis.
>
>
>>
>> Cheers,
>> Kevin Benton
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - adjusting Monday IRC meeting time and drivers meeting time

2017-03-22 Thread Armando M.
On 21 March 2017 at 02:00, Kevin Benton  wrote:

> Hi everyone,
>
> The recent DST switch has caused several conflicts for the Monday IRC
> meeting time and the drivers meeting time.
>
> I am going to adjust the Monday meeting time to 1 hour earlier[1] and the
> drivers meeting time to 6 hours earlier to (1600 UTC).
>
> The Monday meeting will now be on openstack-meeting-4 to work around other
> conflicts!
>
> https://review.openstack.org/447961
>

I would have liked some discussion before approving this; I am afraid I can
make neither meeting on a regular basis.


>
> Cheers,
> Kevin Benton
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] gate unwedged - please recheck your patches

2017-03-22 Thread Steven Dake (stdake)
Hey folks,

If you have submitted patches for kolla-ansible since Friday March 17, gerrit 
has likely voted -1 on them because the voting gate was broken.  I recommend a 
recheck which should clear up your patch for review and merge.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread Ken'ichi Ohmichi
2017-03-22 19:31 GMT-07:00  :
> To have only one folder (tempest/api/volume/) looks really good, and do we
> plan to deem "api_version" and "microversion" as one thing?
>
> I.e., will we use the microversion mechanism to skip new v3 functional
> tests when the environment only supports v2?

Yeah, that is right.
Tempest defines the range of microversions with the config options
min/max_microversion, and we can select the target microversions.
If both min_microversion and max_microversion are left unspecified
(i.e. None), microversion tests run as described in the help message [1].

So the configuration would be like:

gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job): testing
Cinder V3 API
  catalog_type: Specify Cinder V3 API's one
  min_microversion: Don't specify
  max_microversion: Specify max microversion of the branch (master, stable)

gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
testing Cinder V2 API
  catalog_type: Specify Cinder V2 API's one
  min_microversion: Don't specify
  max_microversion: Don't specify

Thanks

---
[1]: https://github.com/openstack/tempest/blob/master/tempest/config.py#L776


> On Thu, Mar 23, 2017 at 7:02 AM, Ken'ichi Ohmichi <ken1ohmi...@gmail.com>
> wrote:
>>
>> 2017-03-22 14:32 GMT-07:00 Andrea Frittoli <andrea.fritt...@gmail.com>:
>> >
>> >
>> > On Wed, Mar 22, 2017 at 8:31 PM Sean McGinnis <sean.mcgin...@gmx.com>
>> wrote:
>> >>
>> >> On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
>> >> > Hi,
>> >> >
>> >> > Now we need to update Tempest to follow the Cinder API status.
>> >> > I have an idea for restructuring and would be happy to see feedback about
>> that.
>>
>> >> >
>> >> > Now Cinder API status is
>> >> >   V1: Deprecated
>> >> >   V2: Deprecated
>> >> >   V3: Current
>> >> > V1 API tests have been removed from Tempest side already, so we just
>> >> > need to concentrate on V2 and V3 now.
>> >>
>> >> >
>> >> > **Gate jobs**
>> >> > Most Cinder tests are implemented for V2 API on Tempest side and the
>> >> > base microversion of V3 is the same as V2.
>> >> > Then we can re-use V2 API tests for the base microversion of V3 API.
>> >> > One idea is that we can have Cinder V3 API tests as the default on
>> the
>> >> > gate jobs and the V2 API tests as another job like the following
>> >> > because the V2 API is deprecated.
>> >> >
>> >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
>> >> > testing Cinder V3 API
>> >> >   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing
>> Cinder
>> >> > V3 API
>> >> >   ...
>> >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
>> >> > testing Cinder V2 API
>> >> >
>> >>
>> > I guess this job would run against tempest and cinder only?
>>
>> A nice point, I think so.
>>
>> >> +1 I like this idea.
>> >>
>> >> > We had the same testing approach for the Nova V2 and V2.1 APIs before, and
>> >> > we could avoid copying V2 test code for the V2.1 API on Tempest.
>> >> >
>> >> > **Test Structure**
>> >> > Current test structure is like:
>> >> >   tempest/api/volume/  - V2 API tests
>> >> >   tempest/api/volume/v2 - V2 API tests
>> >> >   tempest/api/volume/v3 - V3 API tests
>> >> > Yes, this is a mess.
>> >> > For re-using V2 API tests for V3 API, it would be better to remove
>> >> > "v2" from V2 API tests for avoiding confusions.
>> >> >
>> >> > A new structure could be
>> >> >   tempest/api/volume/  - All tests for V2 API and the base
>> >> > microversion of V3 API
>> >> >   tempest/api/volume/v3 - V3 API specific tests for newer
>> microversions
>> >> > or
>> >> >   tempest/api/volume/  - All tests for V2 API and V3 API which
>> >> > includes newer microversions
>
>
> +1, this looks better, as there are no more version-specific tests and all
> v2 tests should run as-is on the v3 base version.
>
>
>>
>> >> >
>> >> > For reference, the Nova API structure is like the latter.
>> >>
>> >> I like the last one better as well.
>> >>
>> > My favourite option would be the one that generates less churn in the code
>> :)
>> > One folder for everything means moving 4 or 5 modules only, so I think
>> that
>> > would
>> > be a good option.
>> >
>> > I would prefer to avoid though having a lot of v3 test classes that
>> inherit
>> > from
>> > v2 test classes, and just set _api_version = 3.
>>
>> Yeah, I agree :-)
>
>
>
> Yeah, we should not have that.
>
>
>>
>>
>> > As long as we can assume we will never run v2 and v3 in the same job, we
>> > could
>> > have cinder_api_version as a configuration setting, to determine which
>> > cinder
>> > endpoint to hit when running the volume tests.
>>
>> Or it would be enough to have the existing "catalog_type",
>> "min_microversion" and "max_microversion" only without api_v1/v2/v3 to
>> control the target API version, because of the above separated gate
>> jobs.
>>
>
> Yes, so devstack does set different catalogs for v2 and v3 [0]. Based on
> the catalog_type configured in the tempest config (we already have that for
> the volume API config), auth can

Re: [openstack-dev] [Zun] Document about how to deploy Zun with multiple hosts

2017-03-22 Thread Hongbin Lu
Kevin,

I don’t think there is any such document right now. I submitted a ticket for 
creating one:

https://bugs.launchpad.net/zun/+bug/1675245

There is a guide for setting up a multi-host devstack environment:
https://docs.openstack.org/developer/devstack/guides/multinode-lab.html . You
could possibly use it as a starting point and inject Zun-specific configuration
there. The guide divides nodes into two kinds: cluster controller and compute
node. In the case of Zun, zun-api and zun-compute can run on the cluster
controller, and zun-compute can run on each compute node. Hope it helps.

Best regards,
Hongbin

From: Kevin Zhao [mailto:kevin.z...@linaro.org]
Sent: March-22-17 10:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Zun] Document about how to deploy Zun with multiple 
hosts

Hi guys,
I want to try Zun on multiple hosts, but I didn't find the doc
about how to deploy it.
Where is the document that shows users how to deploy Zun on
multiple hosts? That would make development easier.
Thanks  :-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ARA - Ansible Run Analysis: Would you like to help ?

2017-03-22 Thread David Moreau Simard
This is a question that comes up often and thus made its way to the
documentation FAQ [1] :)

Yes, ARA only provides playbook run recording and reporting/viewing.

The web interface is 100% passive, there are no actions that can be taken
like editing or retrying a playbook.

In fact, it's so passive that there is a feature to statically generate the
whole web application to HTML like StackViz.

[1]:
http://ara.readthedocs.io/en/latest/faq.html#why-don-t-you-use-ansible-tower-rundeck-or-semaphore

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]
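As an aside on the "set up anywhere" part: enabling the callback is typically a one-line ansible.cfg change pointing at wherever the ARA package's callback plugins live. A hedged sketch only -- the path below is hypothetical, so check the ARA docs for your installation:

```ini
# Hedged example -- the directory below is an assumption; locate the real
# callbacks directory of your ARA install and point callback_plugins at it.
[defaults]
callback_plugins = /usr/lib/python2.7/site-packages/ara/plugins/callbacks
```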


On Mar 22, 2017 8:48 PM, "Joshua Harlow"  wrote:

Sounds neat,

So this would be similar to what tower or semaphore also have (I would
assume they have something very like ARA internally) but instead of
providing the whole start/stop/inventory workflow this just provides the
viewing component?

David Moreau Simard wrote:

> Hi openstack-dev,
>
> There's this project I'm passionate about that I want to tell you about: ARA [1].
> So, what's ARA ?
>
> ARA is an Ansible callback plugin that you can set up anywhere you run
> Ansible today.
> The next time you run an ansible-playbook command, it'll automatically
> record and organize all the data and provide an intuitive interface
> for you to browse the playbook results.
>
> In practice, you can find a video demonstration of what the user
> interface looks like here [2].
>
> ARA doesn't require you to change your existing workflows, it doesn't
> require you to re-write your playbooks.
> It's offline, self-contained, standalone and decentralized by default.
> You can run it on your laptop for a single playbook or run it across
> thousands of runs, recording millions of tasks in a centralized
> database.
> You can read more about the project's core values and philosophies in
> the documented manifesto [3].
>
> ARA is already used by many different projects that leverage Ansible
> to fulfill their needs, for example:
> - OpenShift-Ansible
> - OpenStack-Ansible
> - Kolla-Ansible
> - TripleO-Quickstart
> - Browbeat
> - devstack-gate
>
> ARA's also garnered quite a bit of interest outside the OpenStack
> community and there is already a healthy amount of users hanging out
> in IRC on #ara.
>
> So, it looks like the project is going well. Why am I asking for help ?
>
> ARA has been growing in popularity, that's definitely something I am
> very happy about.
> However, this also means that there are more users, more feedback,
> more questions, more bugs, more feature requests, more use cases and
> unfortunately, ARA doesn't happen to be my full time job.
> ARA is a tool that I created to make my job easier !
>
> Also, as much as I hate to admit it, I am by no means a professional
> python developer -- even less so in frontend (html/css/js).
> Being honest, there are things that we should be doing in the project
> that I don't have the time or the skills to accomplish.
>
> Examples of what I would need help with, aside from what's formally on
> StoryBoard [4]:
> - Help the community (answer questions, triage bugs, etc)
> - Flask experts (ARA is ultimately a flask application)
> - Better separation of components (decouple things properly into a
> server/client/api interface)
> - Full python3 compatibility, test coverage and gating
> - Improve/optimize SQL models/performance
>
> Contributing to ARA in terms of code is no different than any other
> OpenStack project but I've documented the process if you are not
> familiar with it [5].
> ARA has good unit and integration test coverage and I love to think
> it's not a project that is hard to develop for.
>
> If you feel the project is interesting and would like to get involved,
> I'd love to welcome you on board.
>
> Let's chat.
>
> [1]: https://github.com/openstack/ara
> [2]: https://www.youtube.com/watch?v=aQiN5wBXZ4g
> [3]: http://ara.readthedocs.io/en/latest/manifesto.html
> [4]: https://storyboard.openstack.org/#!/project/843
> [5]: http://ara.readthedocs.io/en/latest/contributing.html
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [mistral] Using mistral to start a not-to-die task

2017-03-22 Thread Lingxian Kong
Yeah, as Bob said, cron-triggers are what you are looking for.

As per our discussion on IRC, Mistral currently doesn't support executing
shell commands directly. My suggestion is that you could consider using the
http action (which is supported by Mistral out of the box), or providing a
host (physical or virtual) to run the 'ping' command and using the ssh
action in Mistral.
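As a rough illustration of that suggestion (not a tested workflow -- the workflow name, input, and values are all made up), an http-action check might look something like:

```yaml
# Hedged sketch: a Mistral v2 workflow with a single std.http task.
---
version: '2.0'

ping_nfvi:
  description: HTTP "ping" of an NFVI management endpoint.
  input:
    - url
  tasks:
    http_ping:
      action: std.http url=<% $.url %>
```

A cron trigger would then re-run this on a schedule instead of one never-ending task, along the lines of `mistral cron-trigger-create ping_nfvi_every_min ping_nfvi '{"url": "http://keystone:5000/v3"}' --pattern '* * * * *'` (exact CLI syntax may differ by release).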


Cheers,
Lingxian Kong (Larry)

On Thu, Mar 23, 2017 at 1:16 PM, gongys2017  wrote:

> Hi mistral stackers,
>
> Tacker is using Mistral as part of its system. Now we have a
> requirement:
>
> The tacker server registers an openstack as its NFVI, and needs to ping
> (maybe http-ping) the openstack's management IP, for example the keystone
> URL, until tacker updates or deletes the openstack NFVI.
>
> Can Mistral be asked to start a workflow which contains just such a
> kind of task: running forever until something external tells it to stop?
>
>
> Thanks
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Document about how to deploy Zun with multiple hosts

2017-03-22 Thread Kevin Zhao
Hi guys,
I want to try Zun on multiple hosts, but I didn't find the doc
about how to deploy it.
Where is the document that shows users how to deploy Zun on
multiple hosts? That would make development easier.
Thanks  :-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread zhu.fanglei
To have only one folder (tempest/api/volume/) looks really good, and do we
plan to deem "api_version" and "microversion" as one thing?

I.e., will we use the microversion mechanism to skip new v3 functional tests
when the environment only supports v2?






Original Mail



Sender:  <ghanshyamm...@gmail.com>
To:  <openstack-dev@lists.openstack.org>
Date: 2017/03/23 08:30
Subject: Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest









On Thu, Mar 23, 2017 at 7:02 AM, Ken'ichi Ohmichi <ken1ohmi...@gmail.com> wrote:
2017-03-22 14:32 GMT-07:00 Andrea Frittoli <andrea.fritt...@gmail.com>:
 >
 >
 > On Wed, Mar 22, 2017 at 8:31 PM Sean McGinnis <sean.mcgin...@gmx.com> wrote:
 >>
 >> On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
 >> > Hi,
 >> >
 >> > Now we need to update Tempest to follow the Cinder API status.
 >> > I have an idea for restructuring and would be happy to see feedback about that.
 >> >
 >> > Now Cinder API status is
 >> >   V1: Deprecated
 >> >   V2: Deprecated
 >> >   V3: Current
 >> > V1 API tests have been removed from Tempest side already, so we just
 >> > need to concentrate on V2 and V3 now.
 >>
 >> >
 >> > **Gate jobs**
 >> > Most Cinder tests are implemented for V2 API on Tempest side and the
 >> > base microversion of V3 is the same as V2.
 >> > Then we can re-use V2 API tests for the base microversion of V3 API.
 >> > One idea is that we can have Cinder V3 API tests as the default on the
 >> > gate jobs and the V2 API tests as another job like the following
 >> > because the V2 API is deprecated.
 >> >
 >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
 >> > testing Cinder V3 API
 >> >   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing Cinder
 >> > V3 API
 >> >   ...
 >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
 >> > testing Cinder V2 API
 >> >
 >>
 > I guess this job would run against tempest and cinder only?
 
 A nice point, I think so.
 
 >> +1 I like this idea.
 >>
 >> > We had the same testing approach for the Nova V2 and V2.1 APIs before, and
 >> > we could avoid copying V2 test code for the V2.1 API on Tempest.
 >> >
 >> > **Test Structure**
 >> > Current test structure is like:
 >> >   tempest/api/volume/  - V2 API tests
 >> >   tempest/api/volume/v2 - V2 API tests
 >> >   tempest/api/volume/v3 - V3 API tests
 >> > Yes, this is a mess.
 >> > For re-using V2 API tests for V3 API, it would be better to remove
 >> > "v2" from V2 API tests for avoiding confusions.
 >> >
 >> > A new structure could be
 >> >   tempest/api/volume/  - All tests for V2 API and the base
 >> > microversion of V3 API
 >> >   tempest/api/volume/v3 - V3 API specific tests for newer microversions
 >> > or
 >> >   tempest/api/volume/  - All tests for V2 API and V3 API which
 >> > includes newer microversions


+1, this looks better, as there are no more version-specific tests and all v2
tests should run as-is on the v3 base version.


 >> >
 >> > For reference, the Nova API structure is like the latter.
 >>
 >> I like the last one better as well.
 >>
 > My favourite option would be the one that generates less churn in the code :)
 > One folder for everything means moving 4 or 5 modules only, so I think that
 > would
 > be a good option.
 >
 > I would prefer to avoid though having a lot of v3 test classes that inherit
 > from
 > v2 test classes, and just set _api_version = 3.
 
 Yeah, I agree :-)



Yeah, we should not have that.


 
 > As long as we can assume we will never run v2 and v3 in the same job, we
 > could
 > have cinder_api_version as a configuration setting, to determine which
 > cinder
 > endpoint to hit when running the volume tests.
 
 Or it would be enough to have the existing "catalog_type",
 "min_microversion" and "max_microversion" only without api_v1/v2/v3 to
 control the target API version, because of the above separated gate
 jobs.
 


Yes, so devstack does set different catalogs for v2 and v3 [0]. Based on the
catalog_type configured in the tempest config (we already have that for the
volume API config), auth can select the right endpoints to make the API call.


All existing tests can be run for both APIs without any extra class or change.
This way we can get rid of 'api_version' in all volume clients and keep them
as a single copy for the v2 and v3 base.

Further, v3 microversion tests can be implemented accordingly by sending the
microversion header on the API request, and devstack can tell Tempest not to
set a microversion if the volume_v2 catalog_type is being asked to run the tests.
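For illustration, sending a microversion is just one extra request header. A stdlib-only sketch -- the endpoint URL and the "3.27" value are made up, and the header name follows the OpenStack-API-Version convention used for Cinder v3 microversions:

```python
from urllib.request import Request

# Build a request carrying a Cinder v3 microversion header.
# The host/port and "3.27" are illustrative only.
req = Request(
    'http://controller:8776/v3/volumes',
    headers={'OpenStack-API-Version': 'volume 3.27'},
)

# urllib normalizes stored header names via str.capitalize().
print(req.get_header('Openstack-api-version'))  # volume 3.27
```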


As you mentioned, it can be the same way we handle the compute v2, v2.1, and
microversion tests.


 > Apart from the volume tests, if we split the gate jobs into a standard one
 > running v3 plus an extra v2 one, we should make sure that all tests that
 > use the volume API use a consistent version of the volume API. Nova as
 > well should be configured, if possible, to use the same volume API version.
 
 This also is a nice point.
 Nova team also has a plan to use cinder v3 as the default in Pike.

Re: [openstack-dev] [mistral] Using mistral to start a not-to-die task

2017-03-22 Thread HADDLETON, Robert W (Bob)

Hi Gongysh:

You are looking for mistral cron-triggers.

See 
https://docs.openstack.org/developer/mistral/terminology/cron_triggers.html


Bob

On 3/22/2017 7:16 PM, gongys2017 wrote:

Hi mistral stackers,

Tacker is using mistral as part of its system. Now we have a 
requirement:


tacker server registers an openstack as its NFVI, and needs to ping (http-ping) the openstack's management IP,
for example the keystone URL, until tacker updates or deletes the 
openstack NFVI.


Can mistral be asked to start a workflow which contains just such 
a kind of task:

forever running until something external tells it to stop.


Thanks
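The requirement above, a health check that runs until something external stops it, can be sketched outside Mistral as a plain polling loop with a stop signal; the names below (http_ping, monitor) are illustrative, not Tacker or Mistral API:

```python
import threading
import urllib.request


def http_ping(url, timeout=5):
    """Return True if the endpoint answers an HTTP request at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False


def monitor(url, stop_event, interval=10, ping=http_ping):
    """Ping ``url`` every ``interval`` seconds until ``stop_event`` is set."""
    results = []
    while not stop_event.is_set():
        results.append(ping(url))
        # Event.wait() doubles as an interruptible sleep.
        stop_event.wait(interval)
    return results
```

A Mistral cron trigger sidesteps the never-ending task entirely: the engine re-executes a short "ping once" workflow on a schedule, so no single task has to live forever.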


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ARA - Ansible Run Analysis: Would you like to help ?

2017-03-22 Thread Joshua Harlow

Sounds neat,

So this would be similar to what tower or semaphore also have (I would 
assume they have something very like ARA internally) but instead of 
providing the whole start/stop/inventory workflow this just provides the 
viewing component?


David Moreau Simard wrote:

Hi openstack-dev,

There's this project I'm passionate about that I want to tell you about: ARA [1].
So, what's ARA ?

ARA is an Ansible callback plugin that you can set up anywhere you run
Ansible today.
The next time you run an ansible-playbook command, it'll automatically
record and organize all the data and provide an intuitive interface
for you to browse the playbook results.

In practice, you can find a video demonstration of what the user
interface looks like here [2].

ARA doesn't require you to change your existing workflows, it doesn't
require you to re-write your playbooks.
It's offline, self-contained, standalone and decentralized by default.
You can run it on your laptop for a single playbook or run it across
thousands of runs, recording millions of tasks in a centralized
database.
You can read more about the project's core values and philosophies in
the documented manifesto [3].

ARA is already used by many different projects that leverage Ansible
to fulfill their needs, for example:
- OpenShift-Ansible
- OpenStack-Ansible
- Kolla-Ansible
- TripleO-Quickstart
- Browbeat
- devstack-gate

ARA's also garnered quite a bit of interest outside the OpenStack
community and there is already a healthy amount of users hanging out
in IRC on #ara.

So, it looks like the project is going well. Why am I asking for help ?

ARA has been growing in popularity, that's definitely something I am
very happy about.
However, this also means that there are more users, more feedback,
more questions, more bugs, more feature requests, more use cases and
unfortunately, ARA doesn't happen to be my full time job.
ARA is a tool that I created to make my job easier !

Also, as much as I hate to admit it, I am by no means a professional
python developer -- even less so in frontend (html/css/js).
Being honest, there are things that we should be doing in the project
that I don't have the time or the skills to accomplish.

Examples of what I would need help with, aside from what's formally on
StoryBoard [4]:
- Help the community (answer questions, triage bugs, etc)
- Flask experts (ARA is ultimately a flask application)
- Better separation of components (decouple things properly into a
server/client/api interface)
- Full python3 compatibility, test coverage and gating
- Improve/optimize SQL models/performance

Contributing to ARA in terms of code is no different than any other
OpenStack project but I've documented the process if you are not
familiar with it [5].
ARA has good unit and integration test coverage and I love to think
it's not a project that is hard to develop for.

If you feel the project is interesting and would like to get involved,
I'd love to welcome you on board.

Let's chat.

[1]: https://github.com/openstack/ara
[2]: https://www.youtube.com/watch?v=aQiN5wBXZ4g
[3]: http://ara.readthedocs.io/en/latest/manifesto.html
[4]: https://storyboard.openstack.org/#!/project/843
[5]: http://ara.readthedocs.io/en/latest/contributing.html

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread Ghanshyam Mann
On Thu, Mar 23, 2017 at 7:02 AM, Ken'ichi Ohmichi 
wrote:

> 2017-03-22 14:32 GMT-07:00 Andrea Frittoli :
> >
> >
> > On Wed, Mar 22, 2017 at 8:31 PM Sean McGinnis 
> wrote:
> >>
> >> On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
> >> > Hi,
> >> >
> >> > Now we need to update Tempest for following Cinder API status.
> >> > I have an idea for restructuring and happy to see feedback about that.
> >> >
> >> > Now Cinder API status is
> >> >   V1: Deprecated
> >> >   V2: Deprecated
> >> >   V3: Current
> >> > V1 API tests have been removed from Tempest side already, so we just
> >> > need to concentrate on V2 and V3 now.
> >>
> >> >
> >> > **Gate jobs**
> >> > Most Cinder tests are implemented for V2 API on Tempest side and the
> >> > base microversion of V3 is the same as V2.
> >> > Then we can re-use V2 API tests for the base microversion of V3 API.
> >> > One idea is that we can have Cinder V3 API tests as the default on the
> >> > gate jobs and the V2 API tests as another job like the following
> >> > because the V2 API is deprecated.
> >> >
> >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
> >> > testing Cinder V3 API
> >> >   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing
> Cinder
> >> > V3 API
> >> >   ...
> >> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
> >> > testing Cinder V2 API
> >> >
> >>
> > I guess this job would run against tempest and cinder only?
>
> A nice point, I think so.
>
> >> +1 I like this idea.
> >>
> >> > We had the same testing way for Nova V2 API and V2.1 API before, and
> >> > we could avoid copy V2 test code for V2.1 API on Tempest.
> >> >
> >> > **Test Structure**
> >> > Current test structure is like:
> >> >   tempest/api/volume/  - V2 API tests
> >> >   tempest/api/volume/v2 - V2 API tests
> >> >   tempest/api/volume/v3 - V3 API tests
> >> > Yes, this is a mess.
> >> > For re-using V2 API tests for V3 API, it would be better to remove
> >> > "v2" from V2 API tests for avoiding confusions.
> >> >
> >> > A new structure could be
> >> >   tempest/api/volume/  - All tests for V2 API and the base
> >> > microversion of V3 API
> >> >   tempest/api/volume/v3 - V3 API specific tests for newer
> microversions
> >> > or
> >> >   tempest/api/volume/  - All tests for V2 API and V3 API which
> >> > includes newer microversions
>

+1, this looks better as there are no more version-specific tests and all v2 tests
should run as-is on the v3 base version.



> >> >
> >> > As the reference, Nova API structure is like the latter.
> >>
> >> I like the last one better as well.
> >>
> > My favourite option would be the one that generates less churn in the code
> :)
> > One folder for everything means moving 4 or 5 modules only, so I think
> that
> > would
> > be a good option.
> >
> > I would prefer to avoid though having a lot of v3 test classes that
> inherit
> > from
> > v2 test classes, and just set _api_version = 3.
>
> Yeah, I agree :-)
>


Yeah, we should not have that.



>
> > As long as we can assume we will never run v2 and v3 in the same job, we
> > could
> > have cinder_api_version as a configuration setting, to determine which
> > cinder
> > endpoint to hit when running the volume tests.
>
> Or it would be enough to have the existing "catalog_type",
> "min_microversion" and "max_microversion" only without api_v1/v2/v3 to
> control the target API version, because of the above separated gate
> jobs.
>
>
Yes, so devstack does set different catalogs for v2 and v3 [0]. Based on
the catalog_type configured in the tempest config (we already have that for
the volume API config), auth can select the right endpoint to make the API call.

All existing tests can be run for both APIs without any extra class or
change. This way we can get rid of 'api_version' in all volume clients and
keep them as a single copy for the v2 and v3 base.
Further, v3 microversion tests can be implemented accordingly by sending
the microversion header on each API request, and devstack can tell Tempest not
to set a microversion if the volumev2 catalog_type is being asked to run the
tests.

As you mentioned, it can be the same way we handle compute v2, v2.1, and
microversion tests.
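For reference, Cinder v3 microversions ride on the API-WG style header "OpenStack-API-Version: volume <x.y>"; a minimal sketch of building such request headers (the helper name is illustrative):

```python
def build_volume_headers(microversion=None):
    """Build request headers for a Cinder API call.

    With microversion=None (e.g. a job pointed at the deprecated v2
    endpoint) no version header is sent, so the server uses its base
    behaviour.
    """
    headers = {"Accept": "application/json"}
    if microversion is not None:
        # API-WG style microversion header used by Cinder v3:
        # "OpenStack-API-Version: <service type> <version>"
        headers["OpenStack-API-Version"] = "volume %s" % microversion
    return headers
```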



> > Apart from the volume tests, if we split the gate jobs into standard one
> > running v3
> > plus and extra v2 one, we should make sure that all tests that use the
> > volume API
> > use a consistent version of the volume API. Nova as well should be
> > configured if
> > possible to use the same volume API version.
>
> This also is a nice point.
> Nova team also has a plan to use cinder v3 as the default in Pike.
> We have consistent direction now.
>
> Thanks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

[openstack-dev] [mistral] Using mistral to start a not-to-die task

2017-03-22 Thread gongys2017
Hi mistral stackers,
Tacker is using mistral as part of its system. Now we have a requirement:
tacker server registers an openstack as its NFVI, and needs to ping (http-ping) the openstack's management IP, for example the keystone URL, until tacker updates or deletes the openstack NFVI.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread Ken'ichi Ohmichi
2017-03-22 14:32 GMT-07:00 Andrea Frittoli :
>
>
> On Wed, Mar 22, 2017 at 8:31 PM Sean McGinnis  wrote:
>>
>> On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
>> > Hi,
>> >
>> > Now we need to update Tempest for following Cinder API status.
>> > I have an idea for restructuring and happy to see feedback about that.
>> >
>> > Now Cinder API status is
>> >   V1: Deprecated
>> >   V2: Deprecated
>> >   V3: Current
>> > V1 API tests have been removed from Tempest side already, so we just
>> > need to concentrate on V2 and V3 now.
>>
>> >
>> > **Gate jobs**
>> > Most Cinder tests are implemented for V2 API on Tempest side and the
>> > base microversion of V3 is the same as V2.
>> > Then we can re-use V2 API tests for the base microversion of V3 API.
>> > One idea is that we can have Cinder V3 API tests as the default on the
>> > gate jobs and the V2 API tests as another job like the following
>> > because the V2 API is deprecated.
>> >
>> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
>> > testing Cinder V3 API
>> >   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing Cinder
>> > V3 API
>> >   ...
>> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
>> > testing Cinder V2 API
>> >
>>
> I guess this job would run against tempest and cinder only?

A nice point, I think so.

>> +1 I like this idea.
>>
>> > We had the same testing way for Nova V2 API and V2.1 API before, and
>> > we could avoid copy V2 test code for V2.1 API on Tempest.
>> >
>> > **Test Structure**
>> > Current test structure is like:
>> >   tempest/api/volume/  - V2 API tests
>> >   tempest/api/volume/v2 - V2 API tests
>> >   tempest/api/volume/v3 - V3 API tests
>> > Yes, this is a mess.
>> > For re-using V2 API tests for V3 API, it would be better to remove
>> > "v2" from V2 API tests for avoiding confusions.
>> >
>> > A new structure could be
>> >   tempest/api/volume/  - All tests for V2 API and the base
>> > microversion of V3 API
>> >   tempest/api/volume/v3 - V3 API specific tests for newer microversions
>> > or
>> >   tempest/api/volume/  - All tests for V2 API and V3 API which
>> > includes newer microversions
>> >
>> > As the reference, Nova API structure is like the latter.
>>
>> I like the last one better as well.
>>
> My favourite option would be the one that generates less churn in the code :)
> One folder for everything means moving 4 or 5 modules only, so I think that
> would
> be a good option.
>
> I would prefer to avoid though having a lot of v3 test classes that inherit
> from
> v2 test classes, and just set _api_version = 3.

Yeah, I agree :-)

> As long as we can assume we will never run v2 and v3 in the same job, we
> could
> have cinder_api_version as a configuration setting, to determine which
> cinder
> endpoint to hit when running the volume tests.

Or it would be enough to have the existing "catalog_type",
"min_microversion" and "max_microversion" only without api_v1/v2/v3 to
control the target API version, because of the above separated gate
jobs.

> Apart from the volume tests, if we split the gate jobs into standard one
> running v3
> plus and extra v2 one, we should make sure that all tests that use the
> volume API
> use a consistent version of the volume API. Nova as well should be
> configured if
> possible to use the same volume API version.

This also is a nice point.
Nova team also has a plan to use cinder v3 as the default in Pike.
We have consistent direction now.

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] {neutron][gate-grenade-linuxbridge-multinode] experimenting gate-grenade-linuxbridge-multinode job

2017-03-22 Thread Bhatia, Manjeet S

Hi neutrinos,

I've been experimenting with the
gate-grenade-dsvm-neutron-linuxbridge-multinode-ubuntu-xenial-nv job.
So far I've tried forcing tempest concurrency; since a Depends-On tag does not
work with project-config, I have an experimental patch in devstack-gate [1], on
which Depends-On should work, I believe. My observation is that the job fails
when scheduled on the rax cloud; from the timestamps I see a difference between
a case where it passes [2] and a case where it fails [3]. Looking at the
timestamps, it seems the run is taking more time for some reason.

I have increased OS_TEST_TIMEOUT in [1] (500 by default in tempest). I need
some volunteers to add a tag "Depends-On: <changeId of [1]>" to their patches.
I'd really appreciate it if 2 or 3 patches could do that. I can also choose
patches randomly if no one has any objection.

[1]. https://review.openstack.org/#/c/448218/
[2]. 
http://logs.openstack.org/25/338625/39/check/gate-grenade-dsvm-neutron-linuxbridge-multinode-ubuntu-xenial-nv/2398bd7/logs/grenade.sh.txt.gz#_2017-03-22_06_26_18_485
[3]. 
http://logs.openstack.org/25/338625/38/check/gate-grenade-dsvm-neutron-linuxbridge-multinode-ubuntu-xenial-nv/59034b8/logs/grenade.sh.txt.gz#_2017-03-22_00_19_32_894


Thanks and Regards !
Manjeet
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread Andrea Frittoli
On Wed, Mar 22, 2017 at 8:31 PM Sean McGinnis  wrote:

> On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
> > Hi,
> >
> > Now we need to update Tempest for following Cinder API status.
> > I have an idea for restructuring and happy to see feedback about that.
> >
> > Now Cinder API status is
> >   V1: Deprecated
> >   V2: Deprecated
> >   V3: Current
> > V1 API tests have been removed from Tempest side already, so we just
> > need to concentrate on V2 and V3 now.

>
> > **Gate jobs**
> > Most Cinder tests are implemented for V2 API on Tempest side and the
> > base microversion of V3 is the same as V2.
> > Then we can re-use V2 API tests for the base microversion of V3 API.
> > One idea is that we can have Cinder V3 API tests as the default on the
> > gate jobs and the V2 API tests as another job like the following
> > because the V2 API is deprecated.
> >
> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
> > testing Cinder V3 API
> >   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing Cinder
> V3 API
> >   ...
> >   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
> > testing Cinder V2 API
> >
>
> I guess this job would run against tempest and cinder only?


> +1 I like this idea.
>
> > We had the same testing way for Nova V2 API and V2.1 API before, and
> > we could avoid copy V2 test code for V2.1 API on Tempest.
> >
> > **Test Structure**
> > Current test structure is like:
> >   tempest/api/volume/  - V2 API tests
> >   tempest/api/volume/v2 - V2 API tests
> >   tempest/api/volume/v3 - V3 API tests
> > Yes, this is a mess.
> > For re-using V2 API tests for V3 API, it would be better to remove
> > "v2" from V2 API tests for avoiding confusions.
> >
> > A new structure could be
> >   tempest/api/volume/  - All tests for V2 API and the base
> > microversion of V3 API
> >   tempest/api/volume/v3 - V3 API specific tests for newer microversions
> > or
> >   tempest/api/volume/  - All tests for V2 API and V3 API which
> > includes newer microversions
> >
> > As the reference, Nova API structure is like the latter.
>
> I like the last one better as well.
>
My favourite option would be the one that generates less churn in the code :)
One folder for everything means moving 4 or 5 modules only, so I think that
would be a good option.

I would prefer to avoid though having a lot of v3 test classes that inherit
from
v2 test classes, and just set _api_version = 3.

As long as we can assume we will never run v2 and v3 in the same job, we
could
have cinder_api_version as a configuration setting, to determine which
cinder
endpoint to hit when running the volume tests.
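That configuration-driven selection could be sketched roughly like this (catalog names follow devstack's volumev2/volumev3 convention; the rest is illustrative, not actual Tempest code):

```python
# Illustrative sketch: real Tempest resolves endpoints through the
# Keystone service catalog. The point is only that the API version is
# a config knob rather than a class hierarchy.
CATALOG = {
    "volumev2": "http://controller:8776/v2/%(tenant_id)s",
    "volumev3": "http://controller:8776/v3/%(tenant_id)s",
}


def volume_endpoint(conf, tenant_id):
    """Pick the Cinder endpoint from a cinder_api_version setting."""
    catalog_type = "volumev%s" % conf["cinder_api_version"]
    return CATALOG[catalog_type] % {"tenant_id": tenant_id}
```

A single set of volume tests then hits whichever endpoint the job's configuration selects, with no per-version subclasses.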




> >
> > Any thoughts?
> >
> > Thanks
> > Ken Ohmichi
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Apart from the volume tests, if we split the gate jobs into standard one
running v3
plus and extra v2 one, we should make sure that all tests that use the
volume API
use a consistent version of the volume API. Nova as well should be
configured if
possible to use the same volume API version.

andrea
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ARA - Ansible Run Analysis: Would you like to help ?

2017-03-22 Thread David Moreau Simard
Hi openstack-dev,

There's this project I'm passionate about that I want to tell you about: ARA [1].
So, what's ARA ?

ARA is an Ansible callback plugin that you can set up anywhere you run
Ansible today.
The next time you run an ansible-playbook command, it'll automatically
record and organize all the data and provide an intuitive interface
for you to browse the playbook results.

In practice, you can find a video demonstration of what the user
interface looks like here [2].

ARA doesn't require you to change your existing workflows, it doesn't
require you to re-write your playbooks.
It's offline, self-contained, standalone and decentralized by default.
You can run it on your laptop for a single playbook or run it across
thousands of runs, recording millions of tasks in a centralized
database.
You can read more about the project's core values and philosophies in
the documented manifesto [3].

ARA is already used by many different projects that leverage Ansible
to fulfill their needs, for example:
- OpenShift-Ansible
- OpenStack-Ansible
- Kolla-Ansible
- TripleO-Quickstart
- Browbeat
- devstack-gate

ARA's also garnered quite a bit of interest outside the OpenStack
community and there is already a healthy amount of users hanging out
in IRC on #ara.

So, it looks like the project is going well. Why am I asking for help ?

ARA has been growing in popularity, that's definitely something I am
very happy about.
However, this also means that there are more users, more feedback,
more questions, more bugs, more feature requests, more use cases and
unfortunately, ARA doesn't happen to be my full time job.
ARA is a tool that I created to make my job easier !

Also, as much as I hate to admit it, I am by no means a professional
python developer -- even less so in frontend (html/css/js).
Being honest, there are things that we should be doing in the project
that I don't have the time or the skills to accomplish.

Examples of what I would need help with, aside from what's formally on
StoryBoard [4]:
- Help the community (answer questions, triage bugs, etc)
- Flask experts (ARA is ultimately a flask application)
- Better separation of components (decouple things properly into a
server/client/api interface)
- Full python3 compatibility, test coverage and gating
- Improve/optimize SQL models/performance

Contributing to ARA in terms of code is no different than any other
OpenStack project but I've documented the process if you are not
familiar with it [5].
ARA has good unit and integration test coverage and I love to think
it's not a project that is hard to develop for.

If you feel the project is interesting and would like to get involved,
I'd love to welcome you on board.

Let's chat.

[1]: https://github.com/openstack/ara
[2]: https://www.youtube.com/watch?v=aQiN5wBXZ4g
[3]: http://ara.readthedocs.io/en/latest/manifesto.html
[4]: https://storyboard.openstack.org/#!/project/843
[5]: http://ara.readthedocs.io/en/latest/contributing.html

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread Sean McGinnis
On Wed, Mar 22, 2017 at 01:08:23PM -0700, Ken'ichi Ohmichi wrote:
> Hi,
> 
> Now we need to update Tempest for following Cinder API status.
> I have an idea for restructuring and happy to see feedback about that.
> 
> Now Cinder API status is
>   V1: Deprecated
>   V2: Deprecated
>   V3: Current
> V1 API tests have been removed from Tempest side already, so we just
> need to concentrate on V2 and V3 now.
> 
> **Gate jobs**
> Most Cinder tests are implemented for V2 API on Tempest side and the
> base microversion of V3 is the same as V2.
> Then we can re-use V2 API tests for the base microversion of V3 API.
> One idea is that we can have Cinder V3 API tests as the default on the
> gate jobs and the V2 API tests as another job like the following
> because the V2 API is deprecated.
> 
>   gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
> testing Cinder V3 API
>   gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing Cinder V3 API
>   ...
>   gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
> testing Cinder V2 API
> 

+1 I like this idea.

> We had the same testing way for Nova V2 API and V2.1 API before, and
> we could avoid copy V2 test code for V2.1 API on Tempest.
> 
> **Test Structure**
> Current test structure is like:
>   tempest/api/volume/  - V2 API tests
>   tempest/api/volume/v2 - V2 API tests
>   tempest/api/volume/v3 - V3 API tests
> Yes, this is a mess.
> For re-using V2 API tests for V3 API, it would be better to remove
> "v2" from V2 API tests for avoiding confusions.
> 
> A new structure could be
>   tempest/api/volume/  - All tests for V2 API and the base
> microversion of V3 API
>   tempest/api/volume/v3 - V3 API specific tests for newer microversions
> or
>   tempest/api/volume/  - All tests for V2 API and V3 API which
> includes newer microversions
> 
> As the reference, Nova API structure is like the latter.

I like the last one better as well.

> 
> Any thoughts?
> 
> Thanks
> Ken Ohmichi
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][cinder] RFC: Cinder test on Tempest

2017-03-22 Thread Ken'ichi Ohmichi
Hi,

Now we need to update Tempest to follow the current Cinder API status.
I have an idea for restructuring and would be happy to see feedback on it.

Now Cinder API status is
  V1: Deprecated
  V2: Deprecated
  V3: Current
V1 API tests have been removed from Tempest side already, so we just
need to concentrate on V2 and V3 now.

**Gate jobs**
Most Cinder tests are implemented for V2 API on Tempest side and the
base microversion of V3 is the same as V2.
Then we can re-use V2 API tests for the base microversion of V3 API.
One idea is that we can have Cinder V3 API tests as the default on the
gate jobs and the V2 API tests as another job like the following
because the V2 API is deprecated.

  gate-tempest-dsvm-neutron-full-ubuntu-xenial - (existing job):
testing Cinder V3 API
  gate-tempest-dsvm-py35-ubuntu-xenial - (existing job): testing Cinder V3 API
  ...
  gate-tempest-dsvm-neutron-full-ubuntu-xenial-cinder-v2: (new job):
testing Cinder V2 API

We had the same testing way for Nova V2 API and V2.1 API before, and
we could avoid copy V2 test code for V2.1 API on Tempest.

**Test Structure**
Current test structure is like:
  tempest/api/volume/  - V2 API tests
  tempest/api/volume/v2 - V2 API tests
  tempest/api/volume/v3 - V3 API tests
Yes, this is a mess.
For re-using V2 API tests for V3 API, it would be better to remove
"v2" from V2 API tests for avoiding confusions.

A new structure could be
  tempest/api/volume/  - All tests for V2 API and the base
microversion of V3 API
  tempest/api/volume/v3 - V3 API specific tests for newer microversions
or
  tempest/api/volume/  - All tests for V2 API and V3 API which
includes newer microversions

As the reference, Nova API structure is like the latter.

Any thoughts?

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Kevin L. Mitchell
On Wed, 2017-03-22 at 18:44 +, Taryma, Joanna wrote:
> Thanks for pointing out that 2 and 3 won’t actually work, I apologize
> for the confusion it could’ve created.
> 
>  
> 
> I don’t like the option 6, because making user-messages friendlier was
> the whole purpose of translation. Mixing languages in exception would
> be even worse than doing it in logs, IMHO. What is more – if there’s a
> custom message passed to exception (e.g. MyException(“My message” %
> {k: v}), it overwrites the default one, so it would end up with
> English-only message.
> 
>  
> 
> Option 5 looks nice (and easy), but I don’t think that it will be very
> good if all other components will allow showing translated messages
> and Ironic won’t.
> 
> Seems like *if* we want to translate entire exception messages, we’re
> left with option 1 only, right?

It occurred to me that i18n may provide a means of handling this
directly; I don't know for sure, but one of the library developers could
probably comment.  IIRC, i18n uses (or can use) "lazy translation",
where it keeps around the original message but only translates it on
output.  If that's the case, that may help provide a solution to
translate user-visible messages while leaving logs untranslated.
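A toy model of that lazy-translation idea (not oslo.i18n's actual implementation): the message object keeps the msgid and parameters and only renders when a consumer asks, so the log path can stay in English while the API path translates.

```python
class LazyMessage:
    """Defer translation until the message is rendered for a consumer."""

    CATALOG = {
        "es": {"Node %(node)s not found": "No se encontró el nodo %(node)s"},
    }

    def __init__(self, msgid, params=None):
        self.msgid = msgid
        self.params = params or {}

    def translate(self, locale=None):
        # No locale (the log path): fall back to the original English msgid.
        template = self.CATALOG.get(locale, {}).get(self.msgid, self.msgid)
        return template % self.params
```

Log handlers would call translate() with no locale, while the API layer would pass the requester's Accept-Language.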
-- 
Kevin L. Mitchell 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Taryma, Joanna
Hi,

Thanks for pointing out that 2 and 3 won’t actually work, I apologize for the 
confusion it could’ve created.

I don’t like option 6, because making user messages friendlier was the 
whole purpose of translation. Mixing languages in an exception would be even worse 
than doing it in logs, IMHO. What is more – if there’s a custom message passed 
to the exception (e.g. MyException(“My message” % {k: v})), it overwrites the 
default one, so it would end up with an English-only message.
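The override behaviour described above can be modelled with a simplified Ironic-style exception base (a sketch, not the actual Ironic code):

```python
def _(s):
    # Stand-in for the i18n marker; real code returns a translatable Message.
    return s


class NodeNotFound(Exception):
    # The default template is marked for translation...
    msg_fmt = _("Node %(node)s could not be found.")

    def __init__(self, message=None, **kwargs):
        # ...but a caller-supplied message wins, so an untranslated custom
        # string replaces the translated default entirely.
        if message is None:
            message = self.msg_fmt % kwargs
        super().__init__(message)
```

With a custom message the translated default never enters the picture, which is why duplicating the string for log and exception, or a lazy-translation scheme, keeps coming up.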

Option 5 looks nice (and easy), but I don’t think it will be very good if 
all other components allow showing translated messages and Ironic doesn’t.
Seems like *if* we want to translate entire exception messages, we’re left with 
option 1 only, right?

Regards,
Joanna

From: Pavlo Shchelokovskyy 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, March 22, 2017 at 7:54 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] Translations removal

HI all,

my 5 cents:

- option 1) is ugly due to code/string duplication;
- options 2) and 3) are not going to work for translators, as others already 
pointed out;
- option 4) has a caveat that we should do it consistently - either translate 
all or translate none, so there won't be a mess of log messages written in 
different languages seemingly at random;
- option 5) from Lucas looks nice and easy, but I'm afraid we still have to 
i18n the errors returned to end users in API responses.

So how about half-solution 6) - reorganize our exception messages (at least those 
returned from the API) to always include some string that is i18n'ed in the 
exception class declaration itself, but which may have parts of the string passed in at 
instantiation, so the whole exception message is never passed in completely 
when instantiating the exception. The downside is that the final exception message may 
be returned in two languages (half i18n'ed, half English).
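Under those assumptions, option 6 could look roughly like this (class and message names are invented):

```python
def _(s):
    # Stand-in for the i18n marker.
    return s


class DeployFailure(Exception):
    # The fixed frame lives in the class and is always translatable;
    # callers may only supply a detail fragment, never the whole message.
    msg_fmt = _("Deploy step failed: %(detail)s")

    def __init__(self, detail=""):
        super().__init__(self.msg_fmt % {"detail": detail})
```

The frame always reaches translators; only the injected fragment (e.g. "disk too small") can remain English, which is the half-i18n'ed downside noted above.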

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Wed, Mar 22, 2017 at 4:13 PM, Lucas Alvares Gomes 
> wrote:
Hi,

>> Possible options to handle that:
>>
>> 1)  Duplicate messages:
>>
>> LOG.error("<msg>", {<key>: <value>})
>>
>> raise Exception(_("<msg>") % {<key>: <value>})
>>
>> 2)  Ignore this error
>>
>> 3)  Talk to hacking people about possible upgrade of this check
>>
>> 4)  Pass translated text to LOG in such cases
>>
>>
>>
>> I’d personally vote for 2. What are your thoughts?
>
> When the translators go to translate, they generally only get to see
> what's inside _(), so #2 is a no-go for translations, and #3 also is a
> no-go.

+1

Just throwing an idea here: Is not translating anything an option ?

Personally I don't see much benefits in translating a software like
Ironic, there are many "user facing" parts that will remain in
english, e.g: The resource attributes name, node's states (powered
off, powered on, deploying, deploy wait...), etc... So why bother ? I
think it's fair to assume that people using Ironic directly (not via
horizon for example) understands english. It's a lot of overhead to
get it translated and there are very few people working on it for
Ironic (right now, Ironic is 2.74% translated [0]). IMHO just the
costs of having duplicated strings all over the code outweigh the
benefits.

I did some translation of Ironic to Brazilian Portuguese in the past
myself and it's really tough to keep up the pace, strings are added or
changed very rapidly.

So again, is:  "5) Not translate anything" an option here ?

[0] https://translate.openstack.org/iteration/view/ironic/master?dswid=9016

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [requirements][keystone][glance] WebOb

2017-03-22 Thread Lance Bragstad
Posting a keystone update here as well. We are iterating on it in review as
well as in IRC. There are a few things we're doing within keystone that
raised some questions as to how we should handle some of the new changes in
WebOb.

I'll post another update once we make some more progress.

On Wed, Mar 22, 2017 at 12:50 PM, Davanum Srinivas 
wrote:

> Thanks a ton Brian :)
>
> On Wed, Mar 22, 2017 at 1:40 PM, Brian Rosmaita
>  wrote:
> > On 3/20/17 6:54 AM, Davanum Srinivas wrote:
> >> Dear Keystone and Glance teams,
> >>
> >> WebOb update of u-c to 1.7.1 is stuck for a while[1]. Can you please
> >> prioritize reviews (keystone) review [1] and (glance) review [2] for
> >> this week?
> >
> > Hi Dims, quick update on this.  We do have it prioritized, but it's been
> > pushed back a bit as the eventlet upgrade has caused a few problems.
> > https://review.openstack.org/#/c/448653/
> >
> > Not saying it won't get done this week, but not saying it will, either.
> > Basically, just wanted you to know that we are paying attention to your
> > emails!
> >
> >>
> >> Thanks,
> >> Dims
> >>
> >> [1] https://review.openstack.org/#/c/417591/
> >> [2] https://review.openstack.org/#/c/422234/
> >> [3] https://review.openstack.org/#/c/423366/
> >>
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-22 Thread Emilien Macchi
Thanks to those who volunteered; it's appreciated.

I'm going to kick-off initial work here:

1. Create CI job that build deploy-guide on tripleo-docs repo (when
required) - https://review.openstack.org/448740
2. In tripleo-docs, modify tox.ini and create initial structure for
deployment guide (I'll kick it off once we have the CI job)
3. Start moving deployment related bits from doc/source/ to
deploy-guide/source/ (I'll need your help for this step). We also need
to include Alexandra and her team to make sure we're moving the right
bits.
4. Expose TripleO deployment guide to deployment guides front page and
drink a gin tonic.

I'll give an update when 2. is done so we can start working on the content.

Thanks,

On Wed, Mar 22, 2017 at 8:22 AM, Flavio Percoco  wrote:
> On 20/03/17 08:01 -0400, Emilien Macchi wrote:
>>
>> I proposed a blueprint to track the work done:
>>
>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-deploy-guide
>> Target: pike-3
>>
>> Volunteers to work on it with me, please let me know.
>
>
> It'd be awesome to have some input from the containers squad on this effort
> too.
> Put me on the list for now while we find another volunteer in the containers
> DFG.
>
> Flavio
>
>> Thanks,
>>
>> On Tue, Mar 14, 2017 at 7:00 AM, Alexandra Settle 
>> wrote:
>>>
>>> Hey Emilien,
>>>
>>> You pretty much covered it all! Docs team is happy to provide guidance,
>>> but in reality, it should be a fairly straight forward process.
>>>
>>> The Kolla team just completed their deploy-guide patches and were able to
>>> help refine the process a bit further. Hopefully this should help the
>>> TripleO team :)
>>>
>>> Reach out if you have any questions at all :)
>>>
>>> Thanks,
>>>
>>> Alex
>>>
>>> On 3/13/17, 10:32 PM, "Emilien Macchi"  wrote:
>>>
>>> Team,
>>>
>>> [adding Alexandra, OpenStack Docs PTL]
>>>
>>> It seems like there is a common interest in pushing deployment guides
>>> for different OpenStack Deployment projects: OSA, Kolla.
>>> The landing page is here:
>>> https://docs.openstack.org/project-deploy-guide/newton/
>>>
>>> And one example:
>>>
>>> https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/
>>>
>>> I think this is pretty awesome and it would bring more visibility for
>>> TripleO project, and help our community to find TripleO documentation
>>> from a consistent place.
>>>
>>> The good news, is that openstack-docs team built a pretty solid
>>> workflow to make that happen:
>>>
>>> https://docs.openstack.org/contributor-guide/project-deploy-guide.html
>>> And we don't need to create new repos or do any crazy changes. It
>>> would probably be some refactoring and sphinx things.
>>>
>>> Alexandra, please add any words if I missed something obvious.
>>>
>>> Feedback from the team would be welcome here before we engage any
>>> work,
>>>
>>> Thanks!
>>> --
>>> Emilien Macchi
>>>
>>>
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi



Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread James E. Blair
Darragh Bailey  writes:

> On 22 March 2017 at 15:02, James E. Blair  wrote:
>
>> Ian Cordasco  writes:
>>
>> >
>> > I suppose Barbican doesn't meet those requirements either, then, yes?
>>
>> Right -- we don't want to require another service or tie Zuul to an
>> authn/authz system for a fundamental feature.  However, I do think we
>> can look at making integration with Barbican and similar systems an
>> option for folks who have such an installation and prefer to use it.
>>
>> -Jim
>>
>
> Sounds like you're going to make this pluggable. Is that a hard requirement
> that will be added to the spec, or just a possibility?

More of a possibility at this point.  In general, I'd like to off-load
interaction with other systems to Ansible as much as possible, and then
add minimal backing support in Zuul itself if needed, that way the core
of Zuul doesn't become a choke point.

-Jim



Re: [openstack-dev] [requirements][keystone][glance] WebOb

2017-03-22 Thread Davanum Srinivas
Thanks a ton Brian :)

On Wed, Mar 22, 2017 at 1:40 PM, Brian Rosmaita
 wrote:
> On 3/20/17 6:54 AM, Davanum Srinivas wrote:
>> Dear Keystone and Glance teams,
>>
>> WebOb update of u-c to 1.7.1 is stuck for a while[1]. Can you please
>> prioritize reviews (keystone) review [1] and (glance) review [2] for
>> this week?
>
> Hi Dims, quick update on this.  We do have it prioritized, but it's been
> pushed back a bit as the eventlet upgrade has caused a few problems.
> https://review.openstack.org/#/c/448653/
>
> Not saying it won't get done this week, but not saying it will, either.
> Basically, just wanted you to know that we are paying attention to your
> emails!
>
>>
>> Thanks,
>> Dims
>>
>> [1] https://review.openstack.org/#/c/417591/
>> [2] https://review.openstack.org/#/c/422234/
>> [3] https://review.openstack.org/#/c/423366/
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [requirements][keystone][glance] WebOb

2017-03-22 Thread Brian Rosmaita
On 3/20/17 6:54 AM, Davanum Srinivas wrote:
> Dear Keystone and Glance teams,
> 
> WebOb update of u-c to 1.7.1 is stuck for a while[1]. Can you please
> prioritize reviews (keystone) review [1] and (glance) review [2] for
> this week?

Hi Dims, quick update on this.  We do have it prioritized, but it's been
pushed back a bit as the eventlet upgrade has caused a few problems.
https://review.openstack.org/#/c/448653/

Not saying it won't get done this week, but not saying it will, either.
Basically, just wanted you to know that we are paying attention to your
emails!

> 
> Thanks,
> Dims
> 
> [1] https://review.openstack.org/#/c/417591/
> [2] https://review.openstack.org/#/c/422234/
> [3] https://review.openstack.org/#/c/423366/
> 




Re: [openstack-dev] Arrivederci

2017-03-22 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi Ian,

Nice to have known you through the Craton project. Thanks for everything you have 
done.

All the best,
Tomi

> -Original Message-
> From: Ian Cordasco [mailto:sigmaviru...@gmail.com]
> Sent: Wednesday, March 22, 2017 2:07 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] Arrivederci
> 
> Hi everyone,
> 
> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
> myself from teams (glance, craton, security, hacking) on Friday and
> unsubscribe
> from the OpenStack mailing lists.
> 
> I want to thank all of you for the last ~3 years. I've learned quite a bit
> from all of you. It's been a unique privilege to call the people in this
> community my colleagues. Treat each other well. Don't let minor technical
> arguments cause rifts in the community. Lift each other up.
> 
> As for me, I'm moving onto something completely different. You all are
> welcome
> to keep in touch via email, IRC, or some other method. At the very
> least, I'll see y'all
> around PyCon, the larger F/OSS world, etc.
> 
> --
> Ian Cordasco
> IRC/Git{Hub,Lab}/Twitter: sigmavirus24
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread Darragh Bailey
On 22 March 2017 at 15:02, James E. Blair  wrote:

> Ian Cordasco  writes:
>
> >
> > I suppose Barbican doesn't meet those requirements either, then, yes?
>
> Right -- we don't want to require another service or tie Zuul to an
> authn/authz system for a fundamental feature.  However, I do think we
> can look at making integration with Barbican and similar systems an
> option for folks who have such an installation and prefer to use it.
>
> -Jim
>

Sounds like you're going to make this pluggable. Is that a hard requirement
that will be added to the spec, or just a possibility?

-- 
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-22 Thread David Moreau Simard
I'm really curious about this decision too.
Not so much about why the project wants to move out but more about
what it plans on doing in terms of contribution (code, issue)
workflow.

I happen to drive a project that's not OpenStack-specific either: ARA [1].
ARA was first created in GitHub and was "incubated" there until we
felt it was "good enough" to be proposed as an OpenStack ecosystem
project.

We chose the "OpenStack workflow" for two main reasons:
- The original authors were already intimate with it and we were very
satisfied with the rigid process it provided
- ARA would be used by different OpenStack projects and it would be a
good fit to be a part of the "family"

I did not find it hard to find *users* outside of the OpenStack
bubble, however, I still felt I needed to document a "FAQ" [2] about
how, yes, the project can be used outside of OpenStack.
It is definitely challenging to find contributors outside the
OpenStack ecosystem, however, even with an attempt at providing
simplified contribution guidelines [3].

Signing up for Launchpad and OpenStackid accounts, learning
git-review, setting up and using Gerrit and tracking things in
launchpad or storyboard are things we take for granted.
Whatever way we put it, though, it's a higher barrier to entry than
just browsing the GitHub repository and filing issues or creating pull
requests there.

So, what's the alternative ? Use the GitHub workflow ?
How well is this working out for projects that attracts (or intends to
attract) a lot of users and developers ?

Look at the Ansible GitHub repository [4] for an extreme case: 2600
contributors, more than 1700 issues and almost 1000 pull requests.
How do you make sense out of that ?

Ansible has had to create a bunch of custom software to wrap around
the workflow.
Triaging bots [5], custom tools [6] to sift through the amount of
content they have and so on.

I'm not saying the OpenStack workflow is better than the GitHub one --
just that there are pros and cons that the project must weigh based on
its priorities and resources.

That said, I'll re-iterate that I'm really curious on what Gnocchi
intends on doing.

[1]: https://github.com/openstack/ara
[2]: 
http://ara.readthedocs.io/en/latest/faq.html#can-ara-be-used-outside-the-context-of-openstack-or-continuous-integration
[3]: http://ara.readthedocs.io/en/latest/contributing.html
[4]: https://github.com/ansible/ansible
[5]: https://github.com/ansible/ansibullbot
[6]: 
http://jctanner.mynetgear.com:5000/issuesearch/programmer-defection-vacuum/created_at/desc

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Mon, Mar 20, 2017 at 12:57 PM, Ian Cordasco  wrote:
> -Original Message-
> From: Chris Friesen 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: March 20, 2017 at 11:39:38
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [telemetry] Moving Gnocchi out
>
>> On 03/20/2017 10:10 AM, Chris Dent wrote:
>> > On Mon, 20 Mar 2017, Thomas Goirand wrote:
>> >
>> >> I really don't understand why the Telemetry team insists in being
>> >> release-independent, out of big tent and such, when the reality is that
>> >> all of released Telemetry components are *very tightly* bound to a
>> >> specific versions of OpenStack. IMO, it doesn't make sense upstream, or
>> >> downstream of Telemetry.
>> >
>> > This simply isn't the case with gnocchi. Gnocchi is an independent
>> > timeseries, metrics and resources data service that _happens_ to
>> > work with OpenStack.
>> >
>> > By making it independent of OpenStack, its ability to draw
>> > contribution and engagement from people outside the OpenStack
>> > community increases. As a result it can become a better tool for
>> > more people, including OpenStack people. Not all, or even many, of
>> > the OpenStack projects are like that, but gnocchi is. More eyes,
>> > less bugs, right?
>>
>> I'm curious why being independent of OpenStack would make it more attractive.
>>
>> Is the perception that requiring people to sign the Contributor Agreement is
>> holding back external contribution? Or is it just that the mere idea of it
>> being an OpenStack project is discouraging people from getting involved?
>>
>> Just as an example, if I want to get involved with libvirt because I have an
>> itch to scratch the fact that it's basically a RedHat project isn't going to
>> turn me off...
>
> Contributing to OpenStack is intimidating, if not utterly
> discouraging, to people unfamiliar with CLAs and Gerrit. There's a lot
> of process that goes into contributing. Moving this to a friendlier
> (if not inferior) developer platform makes sense if there is interest
> from companies not interested in participating in the OpenStack
> community.
>
> --
> Ian Cordasco
>
> 

Re: [openstack-dev] Arrivederci

2017-03-22 Thread Amy Marrich
Ian,

You will be missed. Best wishes in your new endeavours,

Amy (aka spotz)

On Wed, Mar 22, 2017 at 7:06 AM, Ian Cordasco 
wrote:

> Hi everyone,
>
> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
> myself from teams (glance, craton, security, hacking) on Friday and
> unsubscribe
> from the OpenStack mailing lists.
>
> I want to thank all of you for the last ~3 years. I've learned quite a bit
> from all of you. It's been a unique privilege to call the people in this
> community my colleagues. Treat each other well. Don't let minor technical
> arguments cause rifts in the community. Lift each other up.
>
> As for me, I'm moving onto something completely different. You all are
> welcome
> to keep in touch via email, IRC, or some other method. At the very
> least, I'll see y'all
> around PyCon, the larger F/OSS world, etc.
>
> --
> Ian Cordasco
> IRC/Git{Hub,Lab}/Twitter: sigmavirus24
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Tom Barron


On 03/22/2017 11:44 AM, Sean McGinnis wrote:
> On Wed, Mar 22, 2017 at 08:42:42AM -0500, Kevin L. Mitchell wrote:
>> On Tue, 2017-03-21 at 22:10 +, Taryma, Joanna wrote:
>>> However, pep8 does not accept passing variable to translation
>>> functions,  so this results in ‘H701 Empty localization string’ error.
>>>
>>> Possible options to handle that:
>>>
>>> 1)  Duplicate messages:
>>>
>>> LOG.error(“<msg>”, {<key>: <value>})
>>>
>>> raise Exception(_(“<msg>”) % {<key>: <value>})
>>>
>>> 2)  Ignore this error
>>>
>>> 3)  Talk to hacking people about possible upgrade of this check
>>>
>>> 4)  Pass translated text to LOG in such cases
>>>
>>>  
>>>
>>> I’d personally vote for 2. What are your thoughts?
>>
>> When the translators go to translate, they generally only get to see
>> what's inside _(), so #2 is a no-go for translations, and #3 also is a
>> no-go.
>> -- 
> 
> I think the appropriate thing here is to do something like:
> 
> msg = _('<msg>') % {<key>: <value>}
> LOG.error(msg)
> raise Exception(msg)
> 
> This results in a translated string going to the log, but I think that's
> OK.
>

Yeah, that is what we are starting to do going forward unless
instructed otherwise.


> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [acceleration]Reminder for the team weekly meeting today 2017.03.22

2017-03-22 Thread Zhipeng Huang
Thank you all for the meeting today, please find the meeting log here
http://eavesdrop.openstack.org/meetings/openstack_cyborg/2017/openstack_cyborg.2017-03-22-15.02.html
(click full log for the fun part).

Also please do help review the BPs we are now drafting
https://review.openstack.org/#/q/project:openstack/cyborg+status:open

On Wed, Mar 22, 2017 at 4:13 PM, Zhipeng Huang 
wrote:

> Hi Team,
>
> As agreed at the last meeting, since we are starting development we will change
> from a bi-weekly meeting to a weekly meeting on Wednesdays. The time will be one
> hour later, at 11:00 am ET (1500 UTC), to make it easier for more colleagues to join.
>
> The wiki is down at the moment. Please join the meeting at
> #openstack-cyborg. We will go through the BPs.
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [craton] Nomination of Thomas Maddox as Craton core

2017-03-22 Thread Sulochan Acharya
+1

On Wed, Mar 22, 2017 at 12:42 AM, Ian Cordasco 
wrote:

> +1. Welcome to the team, Thomas
>
> On Mar 21, 2017 3:43 PM, "Jim Baker"  wrote:
>
>> *I nominate Thomas Maddox as a core reviewer for the Craton project.*
>>
>> Thomas has shown extensive knowledge of Craton, working across a range of
>> issues in the core service, including down to the database modeling; the
>> client; and corresponding bugs, blueprints, and specs. Perhaps most notably
>> he has contributed a number of end-to-end patches, such as his work with
>> project support.
>> https://review.openstack.org/#/q/owner:thomas.maddox
>>
>> He has also expertly helped across a range of reviews, while always being
>> amazingly positive with other team members and potential contributors:
>> https://review.openstack.org/#/q/reviewer:thomas.maddox
>>
>> Other details can be found here on his contributions:
>> http://stackalytics.com/report/users/thomas-maddox
>>
>> In my opinion, Thomas has proven that he will make a fantastic addition
>> to the core review team. In particular, I'm confident Thomas will help
>> further improve the velocity for our project as a whole as a core reviewer.
>> I hope others concur with me in this assessment!
>>
>> - Jim
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Sean McGinnis
On Wed, Mar 22, 2017 at 08:42:42AM -0500, Kevin L. Mitchell wrote:
> On Tue, 2017-03-21 at 22:10 +, Taryma, Joanna wrote:
> > However, pep8 does not accept passing variable to translation
> > functions,  so this results in ‘H701 Empty localization string’ error.
> > 
> > Possible options to handle that:
> > 
> > 1)  Duplicate messages:
> > 
> > LOG.error(“<msg>”, {<key>: <value>})
> > 
> > raise Exception(_(“<msg>”) % {<key>: <value>})
> > 
> > 2)  Ignore this error
> > 
> > 3)  Talk to hacking people about possible upgrade of this check
> > 
> > 4)  Pass translated text to LOG in such cases
> > 
> >  
> > 
> > I’d personally vote for 2. What are your thoughts?
> 
> When the translators go to translate, they generally only get to see
> what's inside _(), so #2 is a no-go for translations, and #3 also is a
> no-go.
> -- 

I think the appropriate thing here is to do something like:

msg = _('<msg>') % {<key>: <value>}
LOG.error(msg)
raise Exception(msg)

This results in a translated string going to the log, but I think that's
OK.
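A runnable sketch of that pattern (the message text is illustrative, and `_` is stubbed in here; in real code it would come from oslo.i18n):

```python
import logging

LOG = logging.getLogger(__name__)


def _(msg):
    # Stand-in for oslo.i18n's translation function.
    return msg


def power_off(node_id, succeeded):
    if not succeeded:
        # Format the message once and reuse it for both the log call and
        # the exception, so the translator sees a single string and the
        # H701 check is satisfied.
        msg = _("Failed to power off node %(node)s.") % {"node": node_id}
        LOG.error(msg)
        raise RuntimeError(msg)


try:
    power_off("node-1", succeeded=False)
except RuntimeError as exc:
    print(exc)  # Failed to power off node node-1.
```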




Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Flavio Percoco

On 15/03/17 15:40 -0400, Doug Hellmann wrote:

Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:

On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
> Team,
>
> So one more thing popped up again on IRC:
> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>
> What do you think? interested in this work?
>
> Thanks,
> Dims
>
> PS: Between this thread and the other one about Tooz/DLM and
> os-lively, we can probably make a good case to add etcd as a base
> always-on service.

As I mentioned in the other thread, there was specific and strong
anti-etcd sentiment in Tokyo which is why we decided to use an
abstraction. I continue to be in favor of us having one known service in
this space, but I do think that it's important to revisit that decision
fully and in context of the concerns that were raised when we tried to
pick one last time.

It's worth noting that there is nothing particularly etcd-ish about
storing config that couldn't also be done with zk and thus just be an
additional api call or two added to Tooz with etcd and zk drivers for it.



The fun* thing about working with these libraries is managing the
interdependencies. If we're going to have an abstraction library that
provides configuration options for selecting the backend, like we do in
oslo.db and oslo.messaging, then the configuration library can't use it
or we have a circular dependency.

Luckily, tooz does not currently use oslo.config. So, oslo.config could
use tooz and we could create an oslo.dlm library with a shallow
interface mapping config options to tooz calls to open connections or
whatever we need from tooz in an application. Then apps could use
oslo.dlm instead of calling into tooz directly and the configuration of
the backend would be hidden from the application developer.


Replying here because I like the proposal, I like what Monty said and I also
like what Doug said. Most of the issues and concerns have been covered in this
thread and I don't have much else to add other than +1.


Doug

* your definition of "fun" may be different than mine


Which is probably different than mine :)

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for patrole core

2017-03-22 Thread Andrea Frittoli
On Thu, Mar 16, 2017 at 6:32 PM BARTRA, RICK  wrote:

> Felipe has done a tremendous amount of work stabilizing, enabling gates,
> contributing new tests, and extensively reviewing code in the Patrole
> project. In fact, he is the number one contributor to Patrole in terms of
> lines of code. He is also driving direction in the project and genuinely
> cares about the success of Patrole. As core spots are limited,
>

Is this a Patrole specific policy? I'm not aware of any rule about core
reviewers team size limit.


> I am recommending that Felipe replace Sangeet Gupta (sg7...@att.com) as
> core due to Sangeet’s inactivity on the project.
>

+1 for Felipe
+1 for removing inactive users. Thank you Sangeet for your contributions!


>
>
> -Rick Bartra
>
> rb5...@att.com
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [snaps] proposing Dmitrii Shcherbakov for core review team

2017-03-22 Thread Dmitrii Shcherbakov
Will be glad to help with reviews!

Thanks a lot!

Best Regards,
Dmitrii Shcherbakov


*Dmitrii Shcherbakov * | *Canonical*
Field Software Engineer
dmitrii.shcherba...@canonical.com
IRC (freenode): Dmitrii-Sh

On Wed, Mar 22, 2017 at 6:14 PM, James Page  wrote:

> On Wed, 22 Mar 2017 at 15:10 Corey Bryant 
> wrote:
> [...]
>
>> +1
>>
>> I have full confidence in Dmitrii.  He's already a great asset to snaps
>> and will be great to have as a core reviewer.
>>
>
> And then there were three...
>
> welcome to the core reviewers team Dmitrii!
>
> Cheers
>
> James
>


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Thomas Herve
On Wed, Mar 22, 2017 at 3:24 PM, Alex Schultz  wrote:
> On Wed, Mar 22, 2017 at 7:58 AM, Paul Belanger  wrote:
[snip]
>> Please correct me if I am wrong, because I still have my container training
>> wheels on. I understand the need for etcd, and operators to write their
>> configuration into it.  What I am still struggling with is why you need
>> oslo.config to support it.  There is nothing stopping an operator today from
>> using etcd / confd in a container, right?  I can only imagine countless other
>> services that run in containers using them.
>>
>
> We want oslo.config to support it as a source for configuration.
> Dealing with files in containers is complicated. If we can remove the
> requirement to munge configurations for containers,
> deployment/updating containers becomes easier.  The service container
becomes a single artifact to be deployed with fewer moving parts, which
> helps reduce complexity and errors.  The process for moving a single
> container artifact is a lot easier than moving container and updating
> configurations based on where it's landing.

I believe the point is that operators will need to have a solution for
non-oslo services, if we want to centralize configuration using etcd.
If that's the case, it's unclear what will be the benefit of having
direct support for etcd in oslo.

I have even a counter example: let's say you want to deploy heat api
using httpd (as recommended). You'll deploy it in a container: you
then need confd to manage httpd config, but Heat would then talk
directly to etcd? I'm not sure the benefit would be gigantic.

To summarize: confd (or something equivalent) needs to be in the
equation. Should we "simply" standardize on it?
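As a rough illustration of the confd-style path, here is a toy render step, with made-up keys and a made-up template, that turns a flat key/value store (standing in for etcd) into an httpd-ish config fragment:

```python
from string import Template

# Toy key/value store standing in for etcd; the key names are hypothetical.
kv = {
    "heat/api/bind_host": "0.0.0.0",
    "heat/api/bind_port": "8004",
}

# A confd-style template for an httpd vhost fragment (illustrative only).
template = Template(
    "Listen ${bind_host}:${bind_port}\n"
    "<VirtualHost ${bind_host}:${bind_port}>\n"
    "    WSGIScriptAlias / /var/www/cgi-bin/heat/heat-api\n"
    "</VirtualHost>\n"
)

def render(kv, prefix, template):
    """Render a config file from the keys under a prefix, confd-style."""
    values = {
        key[len(prefix):]: value
        for key, value in kv.items()
        if key.startswith(prefix)
    }
    return template.substitute(values)

print(render(kv, "heat/api/", template))
```

The open question above is exactly whether this file-rendering step should be standardized, or whether services should read the kv store directly.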

-- 
Thomas



[openstack-dev] [nova] Can we deprecate the os-hosts API?

2017-03-22 Thread Matt Riedemann
This is mostly directed at operators but I'm cross-posting to the ops 
and dev lists.


First, does anyone use the os-hosts API and if so, for what use cases?

The os-hosts and os-services APIs are very similar, and they work on the 
same resources (the 'services' records in the nova database).


Both APIs allow you to list services and show details for a specific 
service.


Both APIs allow you to enable and disable a service so instances will 
not be scheduled to that service (compute host).


There are some additional 'action' APIs that are specific to the 
os-hosts API, which are:


1. Putting the service (host) into maintenance mode. This is only 
implemented by the XenServer virt driver and despite the description in 
the support matrix [1] I'm told that it doesn't actually evacuate all of 
the guests from the host, it just sets a flag in the Xen management 
console, and is therefore pretty useless. Regardless, we have other APIs 
that allow you to do the same thing which are supported across all virt 
drivers, which would be disabling a service and then migrating the 
instances off that host.
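The supported replacement workflow (disable the service, then drain the host) can be sketched with a toy in-memory model; the host names and helper functions here are illustrative only, not the real os-services or migration APIs:

```python
# Toy model of the supported alternative to "maintenance mode":
# disable the compute service so nothing new is scheduled there,
# then move the existing guests elsewhere. Real deployments would do
# this through the os-services API and live migration, not dicts.

hosts = {
    "compute1": {"enabled": True, "instances": ["vm-a", "vm-b"]},
    "compute2": {"enabled": True, "instances": []},
}

def disable_service(hosts, host):
    # Equivalent in spirit to disabling via os-services.
    hosts[host]["enabled"] = False

def drain(hosts, source):
    """Move every instance off `source` to the first enabled host."""
    targets = [name for name, info in hosts.items()
               if info["enabled"] and name != source]
    while hosts[source]["instances"]:
        instance = hosts[source]["instances"].pop()
        hosts[targets[0]]["instances"].append(instance)

disable_service(hosts, "compute1")  # taken out of scheduling decisions
drain(hosts, "compute1")            # guests end up on compute2

print(hosts["compute1"]["instances"])          # []
print(sorted(hosts["compute2"]["instances"]))  # ['vm-a', 'vm-b']
```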


2. Reboot host. This is only supported by the XenServer and Hyper-v 
drivers. This is also arguably something that does not need to live in 
the compute API. As far as I know, the backing drivers do no 
orchestration of the guests in the nova database when 
performing a reboot of the host. The compute service for that host may 
be temporarily disabled by the service group health check, which would 
take it out of scheduling decisions, and the guests would be down, but 
the periodic task which checks for unexpectedly stopped instances runs 
in the nova-compute service, which might be dead now, so the nova API 
would show the instances as running when in fact they are stopped.


3. Shutdown host. Same as #2 for reboot host.

4. Start host. This is literally not supported by any in-tree virt 
drivers. The only drivers that implement the 'host_power_action' method 
are XenServer and Hyper-v and they do not support the 'startup' action. 
Since this is an RPC call from nova-api to nova-compute, you will at 
least get a 501 error response indicating it is not supported or 
implemented (even though 501 is the wrong response for something like this).


--

So is anyone using any of these APIs? As noted, only Xen users can use 
the maintenance mode API but I'm told it's useless. Which leaves the 
stop/reboot power action APIs, which are only for XenServer and Hyper-v. 
Are there any users of those drivers that use those APIs and if so, why?


If no one uses any of those power action or maintenance APIs, then we 
propose to deprecate the os-hosts API. The list/show/enable/disable APIs 
are already covered by the os-services API, which we actually intend 
to improve [2], so those would be the replacement.


[1] 
https://docs.openstack.org/developer/nova/support-matrix.html#operation_maintenance_mode

[2] https://review.openstack.org/#/c/447149/

--

Thanks,

Matt



Re: [openstack-dev] [snaps] proposing Dmitrii Shcherbakov for core review team

2017-03-22 Thread James Page
On Wed, 22 Mar 2017 at 15:10 Corey Bryant 
wrote:
[...]

> +1
>
> I have full confidence in Dmitrii.  He's already a great asset to snaps
> and will be great to have as a core reviewer.
>

And then there were three...

Welcome to the core reviewers team, Dmitrii!

Cheers

James


Re: [openstack-dev] [User-committee] Boston Forum - Formal Submission Now Open!

2017-03-22 Thread Eoghan Glynn
Thanks for putting this together!

One feature gap, though, is a means to tag topic submissions, e.g.
tagging the project-specific topics by individual project relevance.
That could be a basis for grouping topics, allowing folks to better
manage their time during the Forum.

(e.g. if someone was mostly interested in say networking issues, they
could plan to attend all the neutron- and kuryr-tagged topics more
easily if those slots were all scheduled in a near-contiguous block
with minimal conflicts)
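A minimal sketch of what tag-based grouping could buy the schedulers, using hypothetical session titles and tags:

```python
from collections import defaultdict

# Hypothetical submissions, each tagged with a project.
sessions = [
    ("Neutron routed networks", "neutron"),
    ("Nova cells v2 feedback", "nova"),
    ("Kuryr + Neutron integration", "kuryr"),
    ("Neutron QoS roadmap", "neutron"),
    ("Nova placement API", "nova"),
]

def schedule_by_tag(sessions):
    """Order sessions so same-tag topics land in contiguous slots."""
    by_tag = defaultdict(list)
    for title, tag in sessions:
        by_tag[tag].append(title)
    slots = []
    for tag in sorted(by_tag):     # deterministic block order per tag
        slots.extend(by_tag[tag])
    return slots

print(schedule_by_tag(sessions))
```

A real scheduler would also weigh room capacity and speaker conflicts, but even this simple grouping keeps the neutron- and kuryr-tagged slots adjacent.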

On Mon, Mar 20, 2017 at 9:49 PM, Emilien Macchi  wrote:
> +openstack-dev mailing-list.
>
> On Mon, Mar 20, 2017 at 3:55 PM, Melvin Hillsman  wrote:
>> Hey everyone!
>>
>> We have made it to the next stage of the topic selection process for the
>> Forum in Boston.
>>
>> Starting today, our submission tool is open for you to submit abstracts for
>> the most popular sessions that came out of your brainstorming. Please note
>> that the etherpads are not being pulled into the submission tool and
>> discussion around which sessions to submit are encouraged.
>>
>> We are asking all session leaders to submit their abstracts at:
>>
>> http://forumtopics.openstack.org/
>>
>> before 11:59PM UTC on Sunday April 2nd!
>>
>> We are looking for a good mix of project-specific, cross-project or
>> strategic/whole-of-community discussions, and sessions that emphasize
>> collaboration between users and developers are most welcome!
>>
>> We assume that anything submitted to the system has achieved a good amount
>> of discussion and consensus that it is a worthwhile topic. After submissions
>> close, a team of representatives from the User Committee, the Technical
>> Committee, and Foundation staff will take the sessions proposed by the
>> community and fill out the schedule.
>>
>> You can expect the draft schedule to be released on April 10th.
>>
>> Further details about the Forum can be found at:
>> https://wiki.openstack.org/wiki/Forum
>>
>> Regards,
>>
>> OpenStack User Committee
>>
>>
>> ___
>> User-committee mailing list
>> user-commit...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>>
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [snaps] proposing Dmitrii Shcherbakov for core review team

2017-03-22 Thread Corey Bryant
On Wed, Mar 22, 2017 at 11:02 AM, James Page  wrote:

> Hi Snappers
>
> Dmitrii did some good work on the ceilometer snap and has been providing
> reviews and feedback of other changes in the queue over the last few months
> as well as hanging out and being a sounding board/answering questions in
> #openstack-snaps.
>
> He's also working out how to get libvirt functional in a snap (no mean
> feat).
>
> I'd like to propose Dmitrii to the snaps core reviewers team.
>
> Cheers
>
> James
>
>
+1

I have full confidence in Dmitrii.  He's already a great asset to snaps and
will be great to have as a core reviewer.

Corey


[openstack-dev] [snaps] proposing Dmitrii Shcherbakov for core review team

2017-03-22 Thread James Page
Hi Snappers

Dmitrii did some good work on the ceilometer snap and has been providing
reviews and feedback of other changes in the queue over the last few months
as well as hanging out and being a sounding board/answering questions in
#openstack-snaps.

He's also working out how to get libvirt functional in a snap (no mean
feat).

I'd like to propose Dmitrii to the snaps core reviewers team.

Cheers

James


Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread James E. Blair
Ian Cordasco  writes:

> On Tue, Mar 21, 2017 at 6:10 PM, James E. Blair  wrote:
>> We did talk about some other options, though unfortunately it doesn't
>> look like a lot of that made it into the spec reviews.  Among them, it's
>> probably worth noting that there's nothing preventing a Zuul deployment
>> from relying on some third-party secret system -- if you can use it with
>> Ansible, you should be able to use it with Zuul.  But we also want Zuul
>> to have these features out of the box, and, wearing our sysadmin hits,
>> we're really keen on having source control and code review for the
>> system secrets for the OpenStack project.
>>
>> Vault alone doesn't meet our requirements here because it relies on
>> symmetric encryption, which means we need users to share a key with
>> Zuul, implying an extra service with out-of-band authn/authz.  However,
>> we *could* use our PKCS#1 style system to share a vault key with Zuul.
>> I don't think that has come up as a suggestion yet, but seems like it
>> would work.
>
> I suppose Barbican doesn't meet those requirements either, then, yes?

Right -- we don't want to require another service or tie Zuul to an
authn/authz system for a fundamental feature.  However, I do think we
can look at making integration with Barbican and similar systems an
option for folks who have such an installation and prefer to use it.
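To illustrate why an asymmetric scheme avoids the shared-key problem, here is textbook RSA with toy-sized primes; this is nothing like the actual PKCS#1-style construction proposed for Zuul, and the numbers are far too small to be secure:

```python
# Textbook RSA with toy primes, purely to show why a public-key
# scheme needs no shared secret: anyone can encrypt with (e, n),
# but only the holder of d can decrypt. The real proposal would use
# PKCS#1-style padding and key sizes nothing like these.

p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, published by Zuul
d = pow(e, -1, phi)          # private exponent, kept by Zuul

secret = 42                  # the value a user wants to protect
ciphertext = pow(secret, e, n)   # the user needs only the public key

assert pow(ciphertext, d, n) == secret   # only Zuul can recover it
print(ciphertext)
```

The point being made above is exactly this asymmetry: users can commit encrypted secrets to code review without any out-of-band key exchange with Zuul.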

-Jim



Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Paul Belanger
On Wed, Mar 22, 2017 at 08:24:52AM -0600, Alex Schultz wrote:
> On Wed, Mar 22, 2017 at 7:58 AM, Paul Belanger  wrote:
> > On Tue, Mar 21, 2017 at 05:53:35PM -0600, Alex Schultz wrote:
> >> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
> >> >
> >> >
> >> > On 21 Mar 2017, at 15:34, Alex Schultz wrote:
> >> >
> >> >> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
> >> >>> I've been following this thread, but I must admit I seem to have 
> >> >>> missed something.
> >> >>>
> >> >>> What problem is being solved by storing per-server service 
> >> >>> configuration options in an external distributed CP system that is 
> >> >>> currently not possible with the existing pattern of using local text 
> >> >>> files?
> >> >>>
> >> >>
> >> >> This effort is partially to help the path to containerization where we
> >> >> are delivering the service code via container but don't want to
> >> >> necessarily deliver the configuration in the same fashion.  It's about
> >> >> ease of configuration where moving service -> config files (on many
> >> >> hosts/containers) to service -> config via etcd (single source
> >> >> cluster).  It's also about an alternative to configuration management
> >> >> where today we have many tools handling the files in various ways
> >> >> (templates, from repo, via code providers) and trying to come to a
> >> >> more unified way of representing the configuration such that the end
> >> >> result is the same for every deployment tool.  All tools load configs
> >> >> into $place and services can be configured to talk to $place.  It
> >> >> should be noted that configuration files won't go away because many of
> >> >> the companion services still rely on them (rabbit/mysql/apache/etc) so
> >> >> we're really talking about services that currently use oslo.
> >> >
> >> > Thanks for the explanation!
> >> >
> >> > So in the future, you expect a node in a clustered OpenStack service to 
> >> > be deployed and run as a container, and then that node queries a 
> >> > centralized etcd (or other) k/v store to load config options. And other 
> >> > services running in the (container? cluster?) will load config from 
> >> > local text files managed in some other way.
> >>
> >> No the goal is in the etcd mode, that it  may not be necessary to load
> >> the config files locally at all.  That being said there would still be
> >> support for having some configuration from a file and optionally
> >> provide a kv store as another config point.  'service --config-file
> >> /etc/service/service.conf --config-etcd proto://ip:port/slug'
> >>
> > Hmm, not sure I like this.  Having a service magically read from 2 different
> > configuration sources at run time, merge them, and reload, seems overly
> > complicated. And even harder to debug.
> >
> 
> That's something inherently supported by oslo.config today. We even do
> it for dist-provided packaging (I also don't like it, but it's an
> established pattern).
> 
> >> >
> >> > No wait. It's not the *services* that will load the config from a kv 
> >> > store--it's the config management system? So in the process of deploying 
> >> > a new container instance of a particular service, the deployment tool 
> >> > will pull the right values out of the kv system and inject those into 
> >> > the container, I'm guessing as a local text file that the service loads 
> >> > as normal?
> >> >
> >>
> >> No the thought is to have the services pull their configs from the kv
> >> store via oslo.config.  The point is hopefully to not require
> >> configuration files at all for containers.  The container would get
> >> where to pull it's configs from (ie. http://11.1.1.1:2730/magic/ or
> >> /etc/myconfigs/).  At that point it just becomes another place to load
> >> configurations from via oslo.config.  Configuration management comes
> >> in as a way to load the configs either as a file or into etcd.  Many
> >> operators (and deployment tools) are already using some form of
> >> configuration management so if we can integrate in a kv store output
> >> option, adoption becomes much easier than making everyone start from
> >> scratch.
> >>
> >> > This means you could have some (OpenStack?) service for inventory 
> >> > management (like Karbor) that is seeding the kv store, the cloud 
> >> > infrastructure software itself is "cloud aware" and queries the central 
> >> > distributed kv system for the correct-right-now config options, and the 
> >> > cloud service itself gets all the benefits of dynamic scaling of 
> >> > available hardware resources. That's pretty cool. Add hardware to the 
> >> > inventory, the cloud infra itself expands to make it available. Hardware 
> >> > fails, and the cloud infra resizes to adjust. Apps running on the infra 
> >> > keep doing their thing consuming the resources. It's clouds all the way 
> >> > down :-)
> >> >
> >> > Despite sounding pretty interesting, it also sounds like a lot of extra 

Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Pavlo Shchelokovskyy
HI all,

my 5 cents:

- option 1) is ugly due to code/string duplication;
- options 2) and 3) are not going to work for translators as others already
pointed;
- option 4) has a caveat that we should do it consistently - either
translate all or translate none, so there won't be a mess of log messages
written in different languages at seemingly random;
- option 5) from Lucas looks nice and easy, but I'm afraid we still have to
i18n the errors returned to end user in API responses.

So how about half-solution 6): reorganize our exception messages (at least
those returned from the API) to always include some string that is i18n'ed
in the exception class declaration itself, but may have parts of strings
passed in at instantiation, so the whole exception message is never passed
in when instantiating the exception. The downside is that the final
exception message may be returned in two languages (half i18n'ed, half
English).
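A minimal sketch of that pattern, loosely modeled on ironic's exception base class; `_` is stubbed here and the class names are illustrative, not ironic's real hierarchy:

```python
# Sketch of "option 6": the translatable template lives on the
# exception class and only parameters are passed at instantiation,
# so no caller ever builds a whole message string.

def _(msg):          # stand-in for oslo_i18n translation
    return msg

class IronicException(Exception):
    _msg_fmt = _("An unknown exception occurred.")

    def __init__(self, **kwargs):
        super().__init__(self._msg_fmt % kwargs)

class NodeNotFound(IronicException):
    _msg_fmt = _("Node %(node)s could not be found.")

try:
    raise NodeNotFound(node="1be26c0b")
except IronicException as exc:
    print(exc)   # Node 1be26c0b could not be found.
```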

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Wed, Mar 22, 2017 at 4:13 PM, Lucas Alvares Gomes 
wrote:

> Hi,
>
> >> Possible options to handle that:
> >>
> >> 1)  Duplicate messages:
> >>
> >> LOG.error(“<message>”, {<key>: <value>})
> >>
> >> raise Exception(_(“<message>”) % {<key>: <value>})
> >>
> >> 2)  Ignore this error
> >>
> >> 3)  Talk to hacking people about possible upgrade of this check
> >>
> >> 4)  Pass translated text to LOG in such cases
> >>
> >>
> >>
> >> I’d personally vote for 2. What are your thoughts?
> >
> > When the translators go to translate, they generally only get to see
> > what's inside _(), so #2 is a no-go for translations, and #3 also is a
> > no-go.
>
> +1
>
> Just throwing an idea here: is not translating anything an option?
>
> Personally I don't see much benefit in translating software like
> Ironic; there are many "user facing" parts that will remain in
> English, e.g. the resource attribute names and node states (powered
> off, powered on, deploying, deploy wait, ...). So why bother? I
> think it's fair to assume that people using Ironic directly (not via
> Horizon, for example) understand English. It's a lot of overhead to
> get it translated, and there are very few people working on it for
> Ironic (right now, Ironic is 2.74% translated [0]). IMHO just the
> cost of having duplicated strings all over the code outweighs the
> benefits.
>
> I did some translation of Ironic to Brazilian Portuguese in the past
> myself, and it's really tough to keep up the pace; strings are added or
> changed very rapidly.
>
> So again, is "5) Not translate anything" an option here?
>
> [0] https://translate.openstack.org/iteration/view/ironic/master?dswid=9016
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for patrole core

2017-03-22 Thread BLANCO, SAMANTHA
+1

Samantha Blanco

From: Ghanshyam Mann [mailto:ghanshyamm...@gmail.com]
Sent: Thursday, March 16, 2017 7:54 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for 
patrole core

+1.
Yea, Felipe is doing great work.

-gmann

On Fri, Mar 17, 2017 at 3:28 AM, BARTRA, RICK 
> wrote:
Felipe has done a tremendous amount of work stabilizing, enabling gates, 
contributing new tests, and extensively reviewing code in the Patrole project. 
In fact, he is the number one contributor to Patrole in terms of lines of code. 
He is also driving direction in the project and genuinely cares about the 
success of Patrole. As core spots are limited, I am recommending that Felipe 
replace Sangeet Gupta (sg7...@att.com) as core due to 
Sangeet’s inactivity on the project.

-Rick Bartra
rb5...@att.com




Re: [openstack-dev] Arrivederci

2017-03-22 Thread Steve Martinelli
You'll be missed! Best of luck in your next adventure, they are very lucky
to have you.

On Wed, Mar 22, 2017 at 8:06 AM, Ian Cordasco 
wrote:

> Hi everyone,
>
> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
> myself from teams (glance, craton, security, hacking) on Friday and
> unsubscribe
> from the OpenStack mailing lists.
>
> I want to thank all of you for the last ~3 years. I've learned quite a bit
> from all of you. It's been a unique privilege to call the people in this
> community my colleagues. Treat each other well. Don't let minor technical
> arguments cause rifts in the community. Lift each other up.
>
> As for me, I'm moving onto something completely different. You all are
> welcome
> to keep in touch via email, IRC, or some other method. At the very
> least, I'll see y'all
> around PyCon, the larger F/OSS world, etc.
>
> --
> Ian Cordasco
> IRC/Git{Hub,Lab}/Twitter: sigmavirus24
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Alex Schultz
On Wed, Mar 22, 2017 at 7:58 AM, Paul Belanger  wrote:
> On Tue, Mar 21, 2017 at 05:53:35PM -0600, Alex Schultz wrote:
>> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
>> >
>> >
>> > On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>> >
>> >> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
>> >>> I've been following this thread, but I must admit I seem to have missed 
>> >>> something.
>> >>>
>> >>> What problem is being solved by storing per-server service configuration 
>> >>> options in an external distributed CP system that is currently not 
>> >>> possible with the existing pattern of using local text files?
>> >>>
>> >>
>> >> This effort is partially to help the path to containerization where we
>> >> are delivering the service code via container but don't want to
>> >> necessarily deliver the configuration in the same fashion.  It's about
>> >> ease of configuration where moving service -> config files (on many
>> >> hosts/containers) to service -> config via etcd (single source
>> >> cluster).  It's also about an alternative to configuration management
>> >> where today we have many tools handling the files in various ways
>> >> (templates, from repo, via code providers) and trying to come to a
>> >> more unified way of representing the configuration such that the end
>> >> result is the same for every deployment tool.  All tools load configs
>> >> into $place and services can be configured to talk to $place.  It
>> >> should be noted that configuration files won't go away because many of
>> >> the companion services still rely on them (rabbit/mysql/apache/etc) so
>> >> we're really talking about services that currently use oslo.
>> >
>> > Thanks for the explanation!
>> >
>> > So in the future, you expect a node in a clustered OpenStack service to be 
>> > deployed and run as a container, and then that node queries a centralized 
>> > etcd (or other) k/v store to load config options. And other services 
>> > running in the (container? cluster?) will load config from local text 
>> > files managed in some other way.
>>
>> No the goal is in the etcd mode, that it  may not be necessary to load
>> the config files locally at all.  That being said there would still be
>> support for having some configuration from a file and optionally
>> provide a kv store as another config point.  'service --config-file
>> /etc/service/service.conf --config-etcd proto://ip:port/slug'
>>
> Hmm, not sure I like this.  Having a service magically read from 2 different
> configuration sources at run time, merge them, and reload, seems overly
> complicated. And even harder to debug.
>

That's something inherently supported by oslo.config today. We even do
it for dist-provided packaging (I also don't like it, but it's an
established pattern).
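The merge-and-override behavior being discussed can be sketched as a simple layered lookup; the dicts stand in for a config file and an etcd namespace, and the option names are made up:

```python
# Layered configuration lookup: a config file provides defaults and
# a kv store (simulated by a dict) overrides them. This mirrors the
# "last source wins" precedence oslo.config applies when several
# --config-file arguments are given.

file_conf = {"debug": "false", "bind_port": "8004"}  # from service.conf
kv_conf = {"debug": "true"}                          # pushed later via etcd

def resolve(option, *sources):
    """Return the value from the last source that defines the option."""
    value = None
    for source in sources:
        if option in source:
            value = source[option]
    return value

print(resolve("debug", file_conf, kv_conf))      # overridden by the kv store
print(resolve("bind_port", file_conf, kv_conf))  # falls back to the file
```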

>> >
>> > No wait. It's not the *services* that will load the config from a kv 
>> > store--it's the config management system? So in the process of deploying a 
>> > new container instance of a particular service, the deployment tool will 
>> > pull the right values out of the kv system and inject those into the 
>> > container, I'm guessing as a local text file that the service loads as 
>> > normal?
>> >
>>
>> No the thought is to have the services pull their configs from the kv
>> store via oslo.config.  The point is hopefully to not require
>> configuration files at all for containers.  The container would get
>> where to pull it's configs from (ie. http://11.1.1.1:2730/magic/ or
>> /etc/myconfigs/).  At that point it just becomes another place to load
>> configurations from via oslo.config.  Configuration management comes
>> in as a way to load the configs either as a file or into etcd.  Many
>> operators (and deployment tools) are already using some form of
>> configuration management so if we can integrate in a kv store output
>> option, adoption becomes much easier than making everyone start from
>> scratch.
>>
>> > This means you could have some (OpenStack?) service for inventory 
>> > management (like Karbor) that is seeding the kv store, the cloud 
>> > infrastructure software itself is "cloud aware" and queries the central 
>> > distributed kv system for the correct-right-now config options, and the 
>> > cloud service itself gets all the benefits of dynamic scaling of available 
>> > hardware resources. That's pretty cool. Add hardware to the inventory, the 
>> > cloud infra itself expands to make it available. Hardware fails, and the 
>> > cloud infra resizes to adjust. Apps running on the infra keep doing their 
>> > thing consuming the resources. It's clouds all the way down :-)
>> >
>> > Despite sounding pretty interesting, it also sounds like a lot of extra 
>> > complexity. Maybe it's worth it. I don't know.
>> >
>>
>> Yea there's extra complexity at least in the
>> deployment/management/monitoring of the new service or maybe not.
>> Keeping configuration files synced across 1000s of nodes (or
>> 

Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Lucas Alvares Gomes
Hi,

>> Possible options to handle that:
>>
>> 1)  Duplicate messages:
>>
>> LOG.error(“<message>”, {<key>: <value>})
>>
>> raise Exception(_(“<message>”) % {<key>: <value>})
>>
>> 2)  Ignore this error
>>
>> 3)  Talk to hacking people about possible upgrade of this check
>>
>> 4)  Pass translated text to LOG in such cases
>>
>>
>>
>> I’d personally vote for 2. What are your thoughts?
>
> When the translators go to translate, they generally only get to see
> what's inside _(), so #2 is a no-go for translations, and #3 also is a
> no-go.

+1

Just throwing an idea here: is not translating anything an option?

Personally I don't see much benefit in translating software like
Ironic; there are many "user facing" parts that will remain in
English, e.g. the resource attribute names and node states (powered
off, powered on, deploying, deploy wait, ...). So why bother? I
think it's fair to assume that people using Ironic directly (not via
Horizon, for example) understand English. It's a lot of overhead to
get it translated, and there are very few people working on it for
Ironic (right now, Ironic is 2.74% translated [0]). IMHO just the
cost of having duplicated strings all over the code outweighs the
benefits.

I did some translation of Ironic to Brazilian Portuguese in the past
myself, and it's really tough to keep up the pace; strings are added or
changed very rapidly.

So again, is "5) Not translate anything" an option here?

[0] https://translate.openstack.org/iteration/view/ironic/master?dswid=9016

Cheers,
Lucas



Re: [openstack-dev] Arrivederci

2017-03-22 Thread Monty Taylor
On 03/22/2017 07:06 AM, Ian Cordasco wrote:
> Hi everyone,
> 
> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
> myself from teams (glance, craton, security, hacking) on Friday and 
> unsubscribe
> from the OpenStack mailing lists.
> 
> I want to thank all of you for the last ~3 years. I've learned quite a bit
> from all of you. It's been a unique privilege to call the people in this
> community my colleagues. Treat each other well. Don't let minor technical
> arguments cause rifts in the community. Lift each other up.
> 
> As for me, I'm moving onto something completely different. You all are welcome
> to keep in touch via email, IRC, or some other method. At the very
> least, I'll see y'all
> around PyCon, the larger F/OSS world, etc.
> 

We'll definitely miss you. It's been great working with you, and your
contributions have been greatly appreciated.

Good luck on your new thing, and I'm sure we'll see you around.

Monty



Re: [openstack-dev] Arrivederci

2017-03-22 Thread Mikhail Fedosin
So long, Ian, and good luck on your new job!

You know it has been a pleasure working with you, and I hope we'll meet
again many times. So do not get lost and take care of yourself!

Best,
Mike

P.S. ketogenic diet doesn't work

On Wed, Mar 22, 2017 at 4:07 PM, Luke Hinds  wrote:

>
>
> On Wed, Mar 22, 2017 at 12:06 PM, Ian Cordasco 
> wrote:
>
>> Hi everyone,
>>
>> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
>> myself from teams (glance, craton, security, hacking) on Friday and
>> unsubscribe
>> from the OpenStack mailing lists.
>>
>> I want to thank all of you for the last ~3 years. I've learned quite a bit
>> from all of you. It's been a unique privilege to call the people in this
>> community my colleagues. Treat each other well. Don't let minor technical
>> arguments cause rifts in the community. Lift each other up.
>>
>> As for me, I'm moving onto something completely different. You all are
>> welcome
>> to keep in touch via email, IRC, or some other method. At the very
>> least, I'll see y'all
>> around PyCon, the larger F/OSS world, etc.
>>
>> --
>> Ian Cordasco
>> IRC/Git{Hub,Lab}/Twitter: sigmavirus24
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Hi Ian,
>
> A loss for OpenStack, but also a big gain for the next community you
> participate in! Wishing you the best of luck, and thanks for all the effort put in.
> It's been great working and learning alongside you in the security project.
>
> Cheers,
>
> Luke
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Alex Schultz
On Wed, Mar 22, 2017 at 12:23 AM, Tim Bell  wrote:
>
>> On 22 Mar 2017, at 00:53, Alex Schultz  wrote:
>>
>> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
>>>
>>>
>>> On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>>>
 On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
> I've been following this thread, but I must admit I seem to have missed 
> something.
>
> What problem is being solved by storing per-server service configuration 
> options in an external distributed CP system that is currently not 
> possible with the existing pattern of using local text files?
>

 This effort is partially to help the path to containerization where we
 are delivering the service code via container but don't want to
 necessarily deliver the configuration in the same fashion.  It's about
 ease of configuration where moving service -> config files (on many
 hosts/containers) to service -> config via etcd (single source
 cluster).  It's also about an alternative to configuration management
 where today we have many tools handling the files in various ways
 (templates, from repo, via code providers) and trying to come to a
 more unified way of representing the configuration such that the end
 result is the same for every deployment tool.  All tools load configs
 into $place and services can be configured to talk to $place.  It
 should be noted that configuration files won't go away because many of
 the companion services still rely on them (rabbit/mysql/apache/etc) so
 we're really talking about services that currently use oslo.
>>>
>>> Thanks for the explanation!
>>>
>>> So in the future, you expect a node in a clustered OpenStack service to be 
>>> deployed and run as a container, and then that node queries a centralized 
>>> etcd (or other) k/v store to load config options. And other services 
>>> running in the (container? cluster?) will load config from local text files 
>>> managed in some other way.
>>
>> No, the goal is that in etcd mode it may not be necessary to load
>> the config files locally at all.  That being said, there would still be
>> support for having some configuration from a file and optionally
>> provide a kv store as another config point.  'service --config-file
>> /etc/service/service.conf --config-etcd proto://ip:port/slug'
>>
>>>
>>> No wait. It's not the *services* that will load the config from a kv 
>>> store--it's the config management system? So in the process of deploying a 
>>> new container instance of a particular service, the deployment tool will 
>>> pull the right values out of the kv system and inject those into the 
>>> container, I'm guessing as a local text file that the service loads as 
>>> normal?
>>>
>>
>> No the thought is to have the services pull their configs from the kv
>> store via oslo.config.  The point is hopefully to not require
>> configuration files at all for containers.  The container would get
>> where to pull its configs from (i.e. http://11.1.1.1:2730/magic/ or
>> /etc/myconfigs/).  At that point it just becomes another place to load
>> configurations from via oslo.config.  Configuration management comes
>> in as a way to load the configs either as a file or into etcd.  Many
>> operators (and deployment tools) are already using some form of
>> configuration management so if we can integrate in a kv store output
>> option, adoption becomes much easier than making everyone start from
>> scratch.
>>
>>> This means you could have some (OpenStack?) service for inventory 
>>> management (like Karbor) that is seeding the kv store, the cloud 
>>> infrastructure software itself is "cloud aware" and queries the central 
>>> distributed kv system for the correct-right-now config options, and the 
>>> cloud service itself gets all the benefits of dynamic scaling of available 
>>> hardware resources. That's pretty cool. Add hardware to the inventory, the 
>>> cloud infra itself expands to make it available. Hardware fails, and the 
>>> cloud infra resizes to adjust. Apps running on the infra keep doing their 
>>> thing consuming the resources. It's clouds all the way down :-)
>>>
>>> Despite sounding pretty interesting, it also sounds like a lot of extra 
>>> complexity. Maybe it's worth it. I don't know.
>>>
>>
>> Yea there's extra complexity at least in the
>> deployment/management/monitoring of the new service or maybe not.
>> Keeping configuration files synced across 1000s of nodes (or
>> containers) can be just as hard however.
>>
>
> Would there be a mechanism to stage configuration changes (such as a 
> QA/production environment) or have different configurations for different 
> hypervisors?
>

Yes, my understanding is that the goal is not to have a single config
for all the deployed instances, as that just doesn't make sense.  This
is primarily the problem with trying to distribute configs with 
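The layered lookup being described (local file as the base, with an optional kv
override) can be sketched in a few lines of plain Python. Everything here is
illustrative (the dict standing in for etcd, the key layout, the option names);
it is not actual oslo.config API:

```python
# Toy model of the proposed resolution order: a service checks the kv
# store first and falls back to its local config file.
# FILE_CONF plays the role of --config-file; ETCD_CONF fakes what a
# --config-etcd endpoint might return. All names are made up.

FILE_CONF = {
    ('DEFAULT', 'debug'): 'false',
    ('DEFAULT', 'transport_url'): 'rabbit://localhost:5672/',
}

ETCD_CONF = {
    '/myservice/DEFAULT/debug': 'true',   # operator override pushed to etcd
}

def get_option(group, name):
    """Prefer the kv store, then the file, mirroring the layered model."""
    kv_key = '/myservice/%s/%s' % (group, name)
    if kv_key in ETCD_CONF:
        return ETCD_CONF[kv_key]
    return FILE_CONF.get((group, name))

print(get_option('DEFAULT', 'debug'))          # overridden via the kv store
print(get_option('DEFAULT', 'transport_url'))  # still comes from the file
```

The same lookup works whether configuration management writes the values into a
file on each host or pushes them once into the kv cluster.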

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Paul Belanger
On Tue, Mar 21, 2017 at 05:53:35PM -0600, Alex Schultz wrote:
> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
> >
> >
> > On 21 Mar 2017, at 15:34, Alex Schultz wrote:
> >
> >> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
> >>> I've been following this thread, but I must admit I seem to have missed 
> >>> something.
> >>>
> >>> What problem is being solved by storing per-server service configuration 
> >>> options in an external distributed CP system that is currently not 
> >>> possible with the existing pattern of using local text files?
> >>>
> >>
> >> This effort is partially to help the path to containerization where we
> >> are delivering the service code via container but don't want to
> >> necessarily deliver the configuration in the same fashion.  It's about
> >> ease of configuration where moving service -> config files (on many
> >> hosts/containers) to service -> config via etcd (single source
> >> cluster).  It's also about an alternative to configuration management
> >> where today we have many tools handling the files in various ways
> >> (templates, from repo, via code providers) and trying to come to a
> >> more unified way of representing the configuration such that the end
> >> result is the same for every deployment tool.  All tools load configs
> >> into $place and services can be configured to talk to $place.  It
> >> should be noted that configuration files won't go away because many of
> >> the companion services still rely on them (rabbit/mysql/apache/etc) so
> >> we're really talking about services that currently use oslo.
> >
> > Thanks for the explanation!
> >
> > So in the future, you expect a node in a clustered OpenStack service to be 
> > deployed and run as a container, and then that node queries a centralized 
> > etcd (or other) k/v store to load config options. And other services 
> > running in the (container? cluster?) will load config from local text files 
> > managed in some other way.
> 
> No, the goal is that in etcd mode it may not be necessary to load
> the config files locally at all.  That being said, there would still be
> support for having some configuration from a file and optionally
> provide a kv store as another config point.  'service --config-file
> /etc/service/service.conf --config-etcd proto://ip:port/slug'
> 
Hmm, not sure I like this.  Having a service magically read from two different
configuration sources at run time, merge them, and reload seems overly
complicated, and even harder to debug.

> >
> > No wait. It's not the *services* that will load the config from a kv 
> > store--it's the config management system? So in the process of deploying a 
> > new container instance of a particular service, the deployment tool will 
> > pull the right values out of the kv system and inject those into the 
> > container, I'm guessing as a local text file that the service loads as 
> > normal?
> >
> 
> No the thought is to have the services pull their configs from the kv
> store via oslo.config.  The point is hopefully to not require
> configuration files at all for containers.  The container would get
> where to pull its configs from (i.e. http://11.1.1.1:2730/magic/ or
> /etc/myconfigs/).  At that point it just becomes another place to load
> configurations from via oslo.config.  Configuration management comes
> in as a way to load the configs either as a file or into etcd.  Many
> operators (and deployment tools) are already using some form of
> configuration management so if we can integrate in a kv store output
> option, adoption becomes much easier than making everyone start from
> scratch.
> 
> > This means you could have some (OpenStack?) service for inventory 
> > management (like Karbor) that is seeding the kv store, the cloud 
> > infrastructure software itself is "cloud aware" and queries the central 
> > distributed kv system for the correct-right-now config options, and the 
> > cloud service itself gets all the benefits of dynamic scaling of available 
> > hardware resources. That's pretty cool. Add hardware to the inventory, the 
> > cloud infra itself expands to make it available. Hardware fails, and the 
> > cloud infra resizes to adjust. Apps running on the infra keep doing their 
> > thing consuming the resources. It's clouds all the way down :-)
> >
> > Despite sounding pretty interesting, it also sounds like a lot of extra 
> > complexity. Maybe it's worth it. I don't know.
> >
> 
> Yea there's extra complexity at least in the
> deployment/management/monitoring of the new service or maybe not.
> Keeping configuration files synced across 1000s of nodes (or
> containers) can be just as hard however.
> 
Please correct me if I am wrong, because I still have my container training
wheels on. I understand the need for etcd, and for operators to write their
configuration into it.  What I am still struggling with is why you need
oslo.config to support it.  There is nothing stopping an operator 

Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Kevin L. Mitchell
On Tue, 2017-03-21 at 22:10 +, Taryma, Joanna wrote:
> However, pep8 does not accept passing variable to translation
> functions,  so this results in ‘H701 Empty localization string’ error.
> 
> Possible options to handle that:
> 
> 1)  Duplicate messages:
> 
> LOG.error("<message>", {<key>: <value>})
> 
> raise Exception(_("<message>") % {<key>: <value>})
> 
> 2)  Ignore this error
> 
> 3)  Talk to hacking people about possible upgrade of this check
> 
> 4)  Pass translated text to LOG in such cases
> 
>  
> 
> I’d personally vote for 2. What are your thoughts?

When the translators go to translate, they generally only get to see
what's inside _(), so #2 is a no-go for translations, and #3 also is a
no-go.
-- 
Kevin L. Mitchell 
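For illustration, option 1 from the thread (duplicating the literal) looks
roughly like the sketch below. Here `_` is only a stand-in for the oslo.i18n
translation marker, and the message text and function are invented:

```python
import logging

LOG = logging.getLogger(__name__)

def _(msg):
    # Stand-in for oslo.i18n's translation marker. The real extractor
    # scans for literal strings inside _() calls, which is why passing
    # a variable trips hacking's H701 check.
    return msg

def detach_volume(node_id):
    msg_args = {'node': node_id}
    # The literal appears twice: untranslated for the log (per the
    # log-translation removal effort) and inside _() for the
    # user-facing exception that translators do get to see.
    LOG.error("Failed to detach volume from node %(node)s", msg_args)
    raise RuntimeError(
        _("Failed to detach volume from node %(node)s") % msg_args)
```

The duplication is the cost of keeping logs untranslated while still giving
translators a literal string to work from in the exception path.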




Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-22 Thread Dan Prince
On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:
> On 22/03/17 13:32 +0100, Flavio Percoco wrote:
> > On 21/03/17 23:15 -0400, Emilien Macchi wrote:
> > > Hey,
> > > 
> > > I've noticed that container jobs look pretty unstable lately; to
> > > me,
> > > it sounds like a timeout:
> > > http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-
> > > ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-
> > > 22_00_08_55_358973
> > 
> > There are different hypothesis on what is going on here. Some
> > patches have
> > landed to improve the write performance on containers by using
> > hostpath mounts
> > but we think the real slowness is coming from the images download.
> > 
> > This said, this is still under investigation and the containers
> > squad will
> > report back as soon as there are new findings.
> 
> Also, to be more precise, Martin André is looking into this. He also
> fixed the
> gate in the last 2 weeks.

I spoke w/ Martin on IRC. He seems to think this is the cause of some
of the failures:

http://logs.openstack.org/32/446432/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/543bc80/logs/oooq/overcloud-controller-0/var/log/extra/docker/containers/heat_engine/log/heat/heat-engine.log.txt.gz#_2017-03-21_20_26_29_697


Looks like Heat isn't able to create Nova instances in the overcloud
due to "Host 'overcloud-novacompute-0' is not mapped to any cell'. This
means our cells initialization code for containers may not be quite
right... or there is a race somewhere.

Dan
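For anyone chasing the same symptom by hand: the usual fix for "not mapped to
any cell" is to re-run cell_v2 host discovery after the computes have
registered. The commands below are standard nova-manage usage, shown only as a
debugging hint; where they run (host vs. container) depends on the deployment:

```shell
# Map compute hosts that registered after the cell was created
nova-manage cell_v2 discover_hosts --verbose

# Sanity-check that the cell mappings themselves exist
nova-manage cell_v2 list_cells --verbose
```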

> 
> Flavio
> 
> 
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Arrivederci

2017-03-22 Thread Luke Hinds
On Wed, Mar 22, 2017 at 12:06 PM, Ian Cordasco 
wrote:

> Hi everyone,
>
> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
> myself from teams (glance, craton, security, hacking) on Friday and
> unsubscribe
> from the OpenStack mailing lists.
>
> I want to thank all of you for the last ~3 years. I've learned quite a bit
> from all of you. It's been a unique privilege to call the people in this
> community my colleagues. Treat each other well. Don't let minor technical
> arguments cause rifts in the community. Lift each other up.
>
> As for me, I'm moving onto something completely different. You all are
> welcome
> to keep in touch via email, IRC, or some other method. At the very
> least, I'll see y'all
> around PyCon, the larger F/OSS world, etc.
>
> --
> Ian Cordasco
> IRC/Git{Hub,Lab}/Twitter: sigmavirus24
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi Ian,

A loss for OpenStack, but also a big gain for the next community you
participate in! Wishing you the best of luck, and thanks for all the effort put in.
It's been great working and learning alongside you in the security project.

Cheers,

Luke


Re: [openstack-dev] Arrivederci

2017-03-22 Thread Flavio Percoco

On 22/03/17 07:06 -0500, Ian Cordasco wrote:

Hi everyone,

Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
myself from teams (glance, craton, security, hacking) on Friday and unsubscribe
from the OpenStack mailing lists.

I want to thank all of you for the last ~3 years. I've learned quite a bit
from all of you. It's been a unique privilege to call the people in this
community my colleagues. Treat each other well. Don't let minor technical
arguments cause rifts in the community. Lift each other up.

As for me, I'm moving onto something completely different. You all are welcome
to keep in touch via email, IRC, or some other method. At the very
least, I'll see y'all
around PyCon, the larger F/OSS world, etc.



Can't say how sad I am to see you leave the community. Thanks a bunch for
all your contributions and help thus far. I really hope we'll cross paths again
in the future, and I wish you the best of luck in your new adventure.

Hugs,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-22 Thread Ricardo Rocha
Hi.

One simplification would be:
openstack coe create/list/show/config/update
openstack coe template create/list/show/update
openstack coe ca show/sign

This covers all the required commands and is a bit less verbose. The
cluster word is too generic and probably adds no useful info.

Whatever it is, kerberos support for the magnum client is very much
needed and welcome! :)

Cheers,
  Ricardo
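Whichever noun wins, the mechanics are the same: the magnum client would
register its commands with python-openstackclient via setuptools entry points,
roughly like the fragment below (the namespace and class names here are
illustrative, not the final magnumclient layout):

```
[entry_points]
openstack.cli.extension =
    container_infra = magnumclient.osc.plugin
openstack.container_infra.v1 =
    coe_cluster_create = magnumclient.osc.v1.clusters:CreateCluster
    coe_cluster_list = magnumclient.osc.v1.clusters:ListCluster
    coe_cluster_show = magnumclient.osc.v1.clusters:ShowCluster
```

The first namespace tells osc to load the plugin at all; the second maps each
`openstack coe cluster ...` subcommand to its implementing class, so the
bikeshed here is really just about the command names in that mapping.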

On Tue, Mar 21, 2017 at 2:54 PM, Spyros Trigazis  wrote:
> IMO, coe is a little confusing. It is a term used by people related somehow
> to the magnum community. When I describe to users how to use magnum,
> I spend a few moments explaining what we call a COE.
>
> I prefer one of the following:
> * openstack magnum cluster create|delete|...
> * openstack mcluster create|delete|...
> * both the above
>
> It is very intuitive for users because they will be using an OpenStack
> cloud
> and will want to use the magnum service. So it only makes sense
> to type openstack magnum cluster, or mcluster, which is shorter.
>
>
> On 21 March 2017 at 02:24, Qiming Teng  wrote:
>>
>> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
>> > On 03/20/2017 03:08 PM, Adrian Otto wrote:
>> > >Team,
>> > >
>> > >Stephen Watson has been working on an magnum feature to add magnum
>> > > commands to the openstack client by implementing a plugin:
>> > >
>> >
>> > > >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
>> > >
>> > >In review of this work, a question has resurfaced, as to what the
>> > > client command name should be for magnum related commands. Naturally, 
>> > > we’d
>> > > like to have the name “cluster” but that word is already in use by 
>> > > Senlin.
>> >
>> > Unfortunately, the Senlin API uses a whole bunch of generic terms as
>> > top-level REST resources, including "cluster", "event", "action",
>> > "profile", "policy", and "node". :( I've warned before that use of
>> > these generic terms in OpenStack APIs without a central group
>> > responsible for curating the API would lead to problems like this.
>> > This is why, IMHO, we need the API working group to be ultimately
>> > responsible for preventing this type of thing from happening.
>> > Otherwise, there ends up being a whole bunch of duplication and same
>> > terms being used for entirely different things.
>> >
>>
>> Well, I believe the name and namespaces used by Senlin is very clean.
>> Please see the following outputs. All commands are contained in the
>> cluster namespace to avoid any conflicts with any other projects.
>>
>> On the other hand, is there any document stating that Magnum is about
>> providing clustering service? Why Magnum cares so much about the top
>> level noun if it is not its business?
>
>
> From magnum's wiki page [1]:
> "Magnum uses Heat to orchestrate an OS image which contains Docker
> and Kubernetes and runs that image in either virtual machines or bare
> metal in a cluster configuration."
>
> Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
> but we can't refer to a collection of virtual machines or physical servers
> by
> another name. "Bay" proved to be confusing to users. I don't think that magnum
> should reserve the cluster noun, even if it were available.
>
> [1] https://wiki.openstack.org/wiki/Magnum
>
>>
>>
>>
>> $ openstack --help | grep cluster
>>
>>   --os-clustering-api-version 
>>
>>   cluster action list  List actions.
>>   cluster action show  Show detailed info about the specified action.
>>   cluster build info  Retrieve build information.
>>   cluster check  Check the cluster(s).
>>   cluster collect  Collect attributes across a cluster.
>>   cluster create  Create the cluster.
>>   cluster delete  Delete the cluster(s).
>>   cluster event list  List events.
>>   cluster event show  Describe the event.
>>   cluster expand  Scale out a cluster by the specified number of nodes.
>>   cluster list   List the user's clusters.
>>   cluster members add  Add specified nodes to cluster.
>>   cluster members del  Delete specified nodes from cluster.
>>   cluster members list  List nodes from cluster.
>>   cluster members replace  Replace the nodes in a cluster with
>>   specified nodes.
>>   cluster node check  Check the node(s).
>>   cluster node create  Create the node.
>>   cluster node delete  Delete the node(s).
>>   cluster node list  Show list of nodes.
>>   cluster node recover  Recover the node(s).
>>   cluster node show  Show detailed info about the specified node.
>>   cluster node update  Update the node.
>>   cluster policy attach  Attach policy to cluster.
>>   cluster policy binding list  List policies from cluster.
>>   cluster policy binding show  Show a specific policy that is bound to
>>   the specified cluster.
>>   cluster policy binding update  Update a policy's properties on a
>>   cluster.
>>   cluster policy create  Create a policy.
>>   cluster policy 

Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-22 Thread Flavio Percoco

On 22/03/17 13:32 +0100, Flavio Percoco wrote:

On 21/03/17 23:15 -0400, Emilien Macchi wrote:

Hey,

I've noticed that container jobs look pretty unstable lately; to me,
it sounds like a timeout:
http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-22_00_08_55_358973


There are different hypothesis on what is going on here. Some patches have
landed to improve the write performance on containers by using hostpath mounts
but we think the real slowness is coming from the images download.

This said, this is still under investigation and the containers squad will
report back as soon as there are new findings.


Also, to be more precise, Martin André is looking into this. He also fixed the
gate in the last 2 weeks.

Flavio



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-22 Thread Flavio Percoco

On 21/03/17 23:15 -0400, Emilien Macchi wrote:

Hey,

I've noticed that container jobs look pretty unstable lately; to me,
it sounds like a timeout:
http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-22_00_08_55_358973


There are different hypothesis on what is going on here. Some patches have
landed to improve the write performance on containers by using hostpath mounts
but we think the real slowness is coming from the images download.

This said, this is still under investigation and the containers squad will
report back as soon as there are new findings.


If anyone could file a bug and see how we can bring it back as soon as
possible, I think we want to maintain this job in stable shape.
I remember the container squad wanted it to be voting because it was supposed
to be stable, but I'm not sure that's the case today.

Also, it would be great to have the container jobs in
http://tripleo.org/cistatus.html - what do you think?


As I mentioned here (and in my email yesterday), this is work in progress and the
containers squad is aware of it; there is just no news yet today.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-22 Thread Flavio Percoco

On 20/03/17 08:01 -0400, Emilien Macchi wrote:

I proposed a blueprint to track the work done:

https://blueprints.launchpad.net/tripleo/+spec/tripleo-deploy-guide
Target: pike-3

Volunteers to work on it with me, please let me know.


It'd be awesome to have some input from the containers squad on this effort too.
Put me on the list for now while we find another volunteer in the containers
DFG.

Flavio


Thanks,

On Tue, Mar 14, 2017 at 7:00 AM, Alexandra Settle  wrote:

Hey Emilien,

You pretty much covered it all! The docs team is happy to provide guidance, but in 
reality it should be a fairly straightforward process.

The Kolla team just completed their deploy-guide patches and were able to help 
refine the process a bit further. Hopefully this should help the TripleO team :)

Reach out if you have any questions at all :)

Thanks,

Alex

On 3/13/17, 10:32 PM, "Emilien Macchi"  wrote:

Team,

[adding Alexandra, OpenStack Docs PTL]

It seems like there is a common interest in pushing deployment guides
for different OpenStack Deployment projects: OSA, Kolla.
The landing page is here:
https://docs.openstack.org/project-deploy-guide/newton/

And one example:
https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/

I think this is pretty awesome and it would bring more visibility to the
TripleO project, and help our community find TripleO documentation
from a consistent place.

The good news, is that openstack-docs team built a pretty solid
workflow to make that happen:
https://docs.openstack.org/contributor-guide/project-deploy-guide.html
And we don't need to create new repos or do any crazy changes. It
would probably be some refactoring and sphinx things.

Alexandra, please add any words if I missed something obvious.

Feedback from the team would be welcome here before we engage any work,

Thanks!
--
Emilien Macchi






--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [deployment][forum] proposing a session about future of configuration management - ops + devs wanted!

2017-03-22 Thread Flavio Percoco

On 21/03/17 18:23 -0400, Emilien Macchi wrote:

OpenStack developers and operators who work on deployments: we need you.

http://forumtopics.openstack.org/cfp/details/15

Abstract: I would like to bring developers and operators into a room to
discuss the future of configuration management in OpenStack.

Until now, we haven't done a good job of collaborating on how we handle
configuration management consistently across OpenStack
deployment tools.
Some efforts started to emerge in Pike:
https://etherpad.openstack.org/p/deployment-pike
And some projects like TripleO started some discussion on future of
configuration management:
https://etherpad.openstack.org/p/tripleo-etcd-transition

In this session, we will discuss our common challenges and, from there,
identify actions where projects could collaborate.

Desired people:
- Folks from Deployment Tools (TripleO, Kolla, OSA, Kubernetes, etc)
- Operators who deploy OpenStack

Moderator: me + any volunteer.


Happy to help moderating and/or working on the content.

Flavio


Any question on this proposal is very welcome by using this thread.

Thanks for reading so far and I'm looking forward to making progress
on this topic in Boston.
--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco




[openstack-dev] Arrivederci

2017-03-22 Thread Ian Cordasco
Hi everyone,

Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
myself from teams (glance, craton, security, hacking) on Friday and unsubscribe
from the OpenStack mailing lists.

I want to thank all of you for the last ~3 years. I've learned quite a bit
from all of you. It's been a unique privilege to call the people in this
community my colleagues. Treat each other well. Don't let minor technical
arguments cause rifts in the community. Lift each other up.

As for me, I'm moving onto something completely different. You all are welcome
to keep in touch via email, IRC, or some other method. At the very
least, I'll see y'all
around PyCon, the larger F/OSS world, etc.

-- 
Ian Cordasco
IRC/Git{Hub,Lab}/Twitter: sigmavirus24



Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread Monty Taylor
On 03/22/2017 06:10 AM, Ian Cordasco wrote:
> On Tue, Mar 21, 2017 at 6:10 PM, James E. Blair  wrote:
>> David Moreau Simard  writes:
>>
>>> I don't have a horse in this race or a strong opinion on the topic, in
>>> fact I'm admittedly not very knowledgeable when it comes to low-level
>>> encryption things.
>>>
>>> However, I did have a question, even if just to generate discussion.
>>> Did we ever consider simply leaving secrets out of Zuul and offloading
>>> that "burden" to something else ?
>>>
>>> For example, end-users could use something like git-crypt [1] to crypt
>>> files in their git repos and Zuul could have a mean to decrypt them at
>>> runtime.
>>> There is also ansible-vault [2] that could perhaps be leveraged.
>>>
>>> Just trying to make sure we're not re-inventing any wheels,
>>> implementing crypto is usually not straightforward.
>>
>> We did talk about some other options, though unfortunately it doesn't
>> look like a lot of that made it into the spec reviews.  Among them, it's
>> probably worth noting that there's nothing preventing a Zuul deployment
>> from relying on some third-party secret system -- if you can use it with
>> Ansible, you should be able to use it with Zuul.  But we also want Zuul
>> to have these features out of the box, and, wearing our sysadmin hats,
>> we're really keen on having source control and code review for the
>> system secrets for the OpenStack project.
>>
>> Vault alone doesn't meet our requirements here because it relies on
>> symmetric encryption, which means we need users to share a key with
>> Zuul, implying an extra service with out-of-band authn/authz.  However,
>> we *could* use our PKCS#1 style system to share a vault key with Zuul.
>> I don't think that has come up as a suggestion yet, but seems like it
>> would work.
> 
> I suppose Barbican doesn't meet those requirements either, then, yes?
> 

It doesn't - because we explicitly want the secrets to be in git so that
they can be submitted as part of a proposed change. Even so, if we
wanted to go some other route, such as having an API that users used to
add secrets, barbican would be more of a backend zuul secret storage
possibility, and less an interface we'd hand to zuul's users. (also,
none of our clouds have barbican)
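
The symmetric-versus-asymmetric distinction above can be illustrated with
textbook RSA. This is a toy sketch (tiny primes, no padding, purely for
illustration -- Zuul's actual scheme is PKCS#1 with real key sizes), but it
shows why publishing a public key lets any user submit an encrypted secret
that only the Zuul operator can decrypt, with no shared key and no
out-of-band exchange:

```python
# Toy textbook RSA, illustrative only -- never use raw RSA without padding.
p, q = 61, 53
n = p * q                # public modulus
e = 17                   # public exponent; (n, e) is published to users
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent, held only by the Zuul operator

secret = 42                        # toy "secret" (must be < n here)
ciphertext = pow(secret, e, n)     # anyone can encrypt with (n, e) and
                                   # commit the result to git for review
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert plaintext == secret
```

Because encryption needs only the public half, the secret can live in a
proposed change in git and go through normal code review.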



[openstack-dev] Reminder -- Forum Topic Submission

2017-03-22 Thread Melvin Hillsman
Hey everyone,

This is  a friendly reminder that all proposed Forum session leaders must
submit their abstracts at:

http://forumtopics.openstack.org/

*before 11:59PM UTC on Sunday April 2nd!*

Regards,

TC/UC


Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-22 Thread Ian Cordasco
On Tue, Mar 21, 2017 at 6:10 PM, James E. Blair  wrote:
> David Moreau Simard  writes:
>
>> I don't have a horse in this race or a strong opinion on the topic, in
>> fact I'm admittedly not very knowledgeable when it comes to low-level
>> encryption things.
>>
>> However, I did have a question, even if just to generate discussion.
>> Did we ever consider simply leaving secrets out of Zuul and offloading
>> that "burden" to something else ?
>>
>> For example, end-users could use something like git-crypt [1] to crypt
>> files in their git repos and Zuul could have a means to decrypt them at
>> runtime.
>> There is also ansible-vault [2] that could perhaps be leveraged.
>>
>> Just trying to make sure we're not re-inventing any wheels,
>> implementing crypto is usually not straightforward.
>
> We did talk about some other options, though unfortunately it doesn't
> look like a lot of that made it into the spec reviews.  Among them, it's
> probably worth noting that there's nothing preventing a Zuul deployment
> from relying on some third-party secret system -- if you can use it with
> Ansible, you should be able to use it with Zuul.  But we also want Zuul
> to have these features out of the box, and, wearing our sysadmin hats,
> we're really keen on having source control and code review for the
> system secrets for the OpenStack project.
>
> Vault alone doesn't meet our requirements here because it relies on
> symmetric encryption, which means we need users to share a key with
> Zuul, implying an extra service with out-of-band authn/authz.  However,
> we *could* use our PKCS#1 style system to share a vault key with Zuul.
> I don't think that has come up as a suggestion yet, but seems like it
> would work.

I suppose Barbican doesn't meet those requirements either, then, yes?

-- 
Ian Cordasco



Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Vladyslav Drok
On Wed, Mar 22, 2017 at 12:10 AM, Taryma, Joanna 
wrote:

> Hi team,
>
>
>
> As discussed on Monday, logged messages shouldn’t be translated anymore.
> Exception messages should still be translated.
>
> While removing usages of _LE, _LW, _LI should be fairly easy, some usages
> of _ may cause issues.
>
>
>
> Some messages in the code are wrapped with the ‘_’ function and used both for
> the logger and the exception. This has to be changed, so that we don’t have
> some log entries translated because of that.
>
> The best option in terms of code redundancy would be something like:
>
> msg = “<message>”
>
> LOG.error(msg, {<key>: <value>})
>
> raise Exception(_(msg) % {<key>: <value>})
>
>
>
> However, pep8 does not accept passing variable to translation functions,
> so this results in ‘H701 Empty localization string’ error.
>
> Possible options to handle that:
>
> 1)   Duplicate messages:
>
> LOG.error(“<message>”, {<key>: <value>})
>
> raise Exception(_(“<message>”) % {<key>: <value>})
>
> 2)   Ignore this error
>
> 3)   Talk to hacking people about possible upgrade of this check
>
> 4)   Pass translated text to LOG in such cases
>
>
>
> I’d personally vote for 2. What are your thoughts?
>

I don't think we can simply ignore it --
https://docs.openstack.org/developer/oslo.i18n/guidelines.html#using-a-marker-function,
it is just a marker for i18n IIUC, and if we change to just doing
_(var), the message will not be translated.

-Vlad


>
>
> Kind regards,
>
> Joanna
>
>
>
> [0] http://eavesdrop.openstack.org/irclogs/%23openstack-
> ironic/%23openstack-ironic.2017-03-21.log.html#t2017-03-21T14:00:49
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Thomas Herve
On Tue, Mar 21, 2017 at 10:26 PM, Davanum Srinivas  wrote:
> Jay,
>
> the /v3alpha HTTP API  (grpc-gateway) supports watch
> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json

Ah, that's a great find, thanks. That means that we don't have to use
grpc, and can still talk to an HTTP endpoint that would integrate
better with our tools. I tested the API, and it works fine.

Regarding reloading, I don't think watch is mandatory for now. As a
first step, I would fetch the keys again upon reload, like we do for
files.
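
For reference, reading a whole prefix of keys through that gateway is a
single POST to /v3alpha/kv/range with base64-encoded keys. A sketch of
building the request body (the endpoint and key layout are illustrative;
nothing here actually contacts etcd):

```python
import base64
import json

def range_body(prefix):
    # The v3 HTTP gateway wants base64-encoded keys; a prefix scan is the
    # half-open range [prefix, prefix-with-last-byte-incremented).
    key = prefix.encode()
    end = key[:-1] + bytes([key[-1] + 1])
    return json.dumps({
        "key": base64.b64encode(key).decode(),
        "range_end": base64.b64encode(end).decode(),
    })

# POST this body to http://<etcd-host>:2379/v3alpha/kv/range (illustrative)
body = range_body("/config/nova/")
decoded = json.loads(body)
assert base64.b64decode(decoded["range_end"]) == b"/config/nova0"
```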

-- 
Thomas



Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-22 Thread Thierry Carrez
Matt Riedemann wrote:
> On 3/21/2017 4:09 PM, Lance Bragstad wrote:
>> I have a couple questions in addition to Matt's.
>>
>> The keystone group is still trying to figure out what this means for us
>> and we discussed it in today's meeting [0]. Based on early feedback,
>> we're going to have less developer presence at the Forum than we did at
>> the PTG. Are these formal sessions intended to be the same format as
>> design session at previous summits?
>>
>> In the past when we've organized ourselves for summits design sessions,
>> we typically got an email saying "you have these rooms at these times".
>> From there we filter our topics into like categories and shuffle them
>> around until the schedule looks right.
>>
>> With the direction of the PTG, I'm not sure many developers were
>> expecting to have those types of technical discussions at the forum
>> (which could be why early developer attendance confirmation is lower).
>>
>> Am I misunderstanding something?
>>
>> [0]
>> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-03-21-18.00.log.html#l-150
> 
> We talked about this a bit in the nova channel today too.
> 
> I'm approaching this like the cross-project days at the design summit in
> the past where we'd propose topics and the TC would vote on them and
> they'd get scheduled, and then the rest of the design summit was for
> vertical-team discussions where we say how many rooms we want and then
> we schedule our sessions ourselves in cheddar.
> 
> For the Forum, I think we'll be submitting maybe three topics that I
> know of right now:
> 
> * cells (mostly project-specific but involves operators)
> * placement (cross-project and operator involvement)
> * limits (whole-of-openstack and operators/users)
> 
> Beyond those three, I don't plan on requesting any other nova-specific
> sessions (those aren't all nova-specific anyway).
> 
> For any of the nova developers that will be there, which will be a much
> smaller number than previous summits because of the format and because
> we did the PTG, I assume we'll talk about Pike development status and
> priorities during the downtime we have between the actual scheduled
> sessions that involve more than just nova (so we'll talk about the nova
> stuff in the free room area in other words).

Right. The Summit main goal is to use that week to communicate *outside*
the team (with users, with prospective new contributors, learning new
things), rather than *within* the team. That obviously doesn't prevent
teams from regrouping, have discussions and dinner, and use common space
to hold ad-hoc meetings, but the scheduled space is more oriented
towards cross-community communications.

In terms of preparation, that means brainstorming what topics you would
like to discuss in the "forum" setting (with ops, with users, with the
community-at-large). And propose those topics to
forumtopics.openstack.org as explained by Emilien.

If you expect to have enough team members present, you can also
brainstorm which discussions you'd like to have with your team members
in ad-hoc meetings during that week. As explained in [1] we'll have
hacking space available, so you can certainly take advantage of having a
critical mass of team members around to look at your progress in Pike,
re-prioritize stuff or have any discussion you want to have.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-22 Thread Thierry Carrez
Matt Riedemann wrote:
> Do we need to submit formal sessions to forumtopics.o.o for the upstream
> contributor / new-comer session blocks laid out in Kendall's email? I
> had assumed we already said 'yes we want a slot' and then Kendall is
> going to sort that all out.

Kendall being away this week, I'll answer:

Yes, your assumption is correct. On-boarding classrooms are separate
from the "Forum" proper (although located in the same area). So no need
to ask again.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-22 Thread Thierry Carrez
joehuang wrote:
> Should we submit a session for the on-boarding slot, which is being arranged
> by Kendall via a "first come first served" process? Does this mean that the
> on-boarding slot allocation needs another round of selection, not the "first
> come first served" process?

Kendall being away this week, I'll answer:

No. On-boarding classrooms are separate from the "Forum" proper
(although located in the same area). So no need to ask again.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-22 Thread Adam Harwell
I have been a fan of this from the very beginning of the project as well. I
think oslo is the obvious correct place for a library that should define
core interfaces for use by many openstack projects.

Also, whether you have seen it or not, I have *definitely* seen real
instances where people shied away from using castellan or contributing to
it because it seemed "no different than just barbican, created and owned by
the same people". That was never the goal, and though it's difficult to
point to specific instances, there were many times during discussions that
I thought explaining it was part of oslo would have totally changed the
tone and direction of the conversation.

The naming of things is no joke -- there is significant psychological
emphasis given to the "name of a thing" that can be seen in many cultures
all through literature and tradition. Some of you may think of this as a
meaningless gesture, but I am personally very sure it is not.

+1 from me as an interested user/contributor. Personally I'd go in for the
complete rename to oslo.keymanager, but just oslo is a good start.

--Adam Harwell

On Wed, Mar 22, 2017, 00:34 Flavio Percoco  wrote:

> On 16/03/17 12:43 -0400, Davanum Srinivas wrote:
> >+1 from me to bring castellan under Oslo governance with folks from
> >both oslo and Barbican as reviewers without a project rename. Let's
> >see if that helps get more adoption of castellan
>
> This sounds like a great path forward! +1
>
> Flavio
>
> >Thanks,
> >Dims
> >
> >On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
> > wrote:
> >> This thread has generated quite the discussion, so I will try to
> >> address a few points in this email, echoing a lot of what Dave said.
> >>
> >> Clint originally explained what we are trying to solve very well. The
> hope was
> >> that the rename would emphasize that Castellan is just a basic
> >> interface that supports operations common between key managers
> >> (the existing Barbican back end and other back ends that may exist
> >> in the future), much like oslo.db supports the common operations
> >> between PostgreSQL and MySQL. The thought was that renaming to have
> >> oslo part of the name would help reinforce that it's just an interface,
> >> rather than a standalone key manager. Right now, the only Castellan
> >> back end that would work in DevStack is Barbican. There has been talk
> >> in the past for creating other Castellan back ends (Vault or Tang), but
> >> no one has committed to writing the code for those yet.
> >>
> >> The intended proposal was to rename the project, maintain the current
> >> review team (which is only a handful of Barbican people), and bring on
> >> a few Oslo folks, if any were available and interested, to give advice
> >> about (and +2s for) OpenStack library best practices. However, perhaps
> >> pulling it under oslo's umbrella without a rename is blessing it enough.
> >>
> >> In response to Julien's proposal to make Castellan "the way you can do
> >> key management in Python" -- it would be great if Castellan were that
> >> abstract, but in practice it is pretty OpenStack-specific. Currently,
> >> the Barbican team is great at working on key management projects
> >> (including both Barbican and Castellan), but a lot of our focus now is
> >> how we can maintain and grow integration with the rest of the OpenStack
> >> projects, for which having the name and expertise of oslo would be a
> >> great help.
> >>
> >> Thanks,
> >>
> >> Kaitlin
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >--
> >Davanum Srinivas :: https://twitter.com/dims
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> @flaper87
> Flavio Percoco
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [acceleration]Reminder for the team weekly meeting today 2017.03.22

2017-03-22 Thread Zhipeng Huang
Hi Team,

As agreed at the last meeting, since we are starting development we will change
from a bi-weekly meeting to a weekly meeting on Wednesdays. The time will be one
hour later, at 11:00am ET (UTC 1500), to make it easier for more colleagues to
join.

The wiki is down at the moment. Please join the meeting at
#openstack-cyborg. We will go through the BPs.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


[openstack-dev] [vitrage] Extending Topology

2017-03-22 Thread Muhammad Usman
Hello Ifat,

I looked more deeply into the issues you mentioned regarding the
extension of vSwitches. Due to the complexity involved in generating
this topology and its associated effects, I believe we need to set up
some baseline (e.g. adding a configuration file specifying the bridges
in an existing deployment). Then, using that baseline, the topology can
be constructed, and the network type (e.g. vlan or vxlan) and the
associated path can be extracted from Neutron. However, I do not
understand the more general case you mentioned. Do you mean
nova-network?
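
A sketch of what such a baseline file could contain, as a Python structure
(all names here are invented for illustration, not an existing Vitrage
datasource format):

```python
# Hypothetical per-host description of bridges in an existing deployment.
bridges_conf = {
    "compute-1": [
        {"bridge": "br-int", "type": "vlan"},
        {"bridge": "br-tun", "type": "vxlan"},
    ],
}

def switch_vertices(conf):
    # One topology vertex per (host, bridge) pair, typed by the network
    # segmentation so the packet path (vlan vs. vxlan) can be followed.
    return [
        {"host": host, "name": b["bridge"], "network_type": b["type"]}
        for host, bridges in conf.items()
        for b in bridges
    ]

vertices = switch_vertices(bridges_conf)
assert len(vertices) == 2
```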

Regarding the sunburst representation - yes, I agree: if you want to
keep the compute hierarchy separate, then adding networking components
is not a good idea.

Also, suggestions from other Vitrage members are welcome.


> On Thu, Mar 16, 2017 at 6:44 PM, Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com > wrote:
>
>> Hi,
>>
>>
>>
>> Adding switches to the Vitrage topology is generally a good idea, but the
>> implementation might be very complex. Your diagram shows a simple use
> case,
>> where all switches are linked to one another and it is easy to determine
>> the effect of a failure on the vms. However, in the more general case
> there
>> might be switches without a connecting port (e.g. they might be connected
>> via the network stack). In such cases, it is not clear how to model the
>> switches topology in Vitrage. Another case to consider is when the
>> network
>> type affects the packets path, like vlan vs. vxlan. If you have an idea
>> of
>> how to solve these issues, I will be happy to hear it.
>>
>>
>>
>> Regarding the sunburst representation – I’m not sure I understand your
>> diagram. Currently the sunburst is meant to show (only) the compute
>> hierarchy: zones, hosts and instances. It is arranged in a containment
>> relationship, i.e. every instance on the outer ring appears next to its
>> host in the inner ring. If you add the bridges in the middle, you lose
> this
>> containment relationship. Can you please explain to me the suggested
>> diagram?
>>
>>
>>
>> BTW, you can send such questions to OpenStack mailing list (
>> openstack-dev@lists.openstack.org ) with [vitrage] tag in
> the title, and
>> possibly get replies from other contributors as well.
>>
>>
>>
>> Best Regards,
>>
>> Ifat.
>>
>>
>>
>>
>>
>> *From: *Muhammad Usman >
>> *Date: *Monday, 13 March 2017 at 09:16
>>
>> *To: *"Afek, Ifat (Nokia - IL)" >
>> *Cc: *JungSu Han >
>> *Subject: *Re: OpenStack Vitrage
>>
>>
>>
>> Hi Ifat,
>>
>> I attached our idea of extending the Vitrage Topology to include Virtual
>> switches.
>>
>> The reason I mentioned adding switches to Vitrage is that we experienced
>> looping issues that affect the performance of all infrastructure
>> resources (i.e. physical hosts as well as VMs). Therefore, it's
> important
>> to monitor the virtual switches as well to assist overall monitoring/RCA
>> tasks.
>>
>> I think this idea will extend the Vitrage scope to touch some portion of
>> SDN (e.g. if we consider the SDN managed virtual switches) as well.
>>
>>
>>
>> On Thu, Mar 9, 2017 at 6:49 PM, Muhammad Usman  > wrote:
>>
>> Dear Ifat,
>>
>> Thanks for your guidance, I managed to install Vitrage properly using
>> Master branches for both OpenStack and Vitrage.
>>
>> Now, I will look into the visualization as well as other aspects.
>>
>>
>>
>>
>>
>> On Thu, Mar 9, 2017 at 2:43 PM, Afek, Ifat (Nokia - IL) <
>> ifat.a...@nokia.com > wrote:
>>
>> Hi,
>>
>>
>>
>> I have also noticed this problem, that Vitrage Ocata is not compatible
>> with Horizon Newton.
>>
>> If you just want an OpenStack working, you should use a stable version.
>> Stable/Ocata is the newest one (just released a few weeks ago). On the
>> other hand, if you want to contribute code, you better take the master
>> branch. Alternatively, you can take stable/ocata for all projects, and
>> the
>> master for Vitrage. This should work (for now, since Pike has just
> started).
>>
>>
>>
>> Best Regards,
>>
>> Ifat.
>>
>>
>>
>> *From: *Muhammad Usman >
>> *Date: *Wednesday, 8 March 2017 at 15:21
>>
>>
>> *To: *"Afek, Ifat (Nokia - IL)" >
>> *Cc: *JungSu Han >
>> *Subject: *Re: OpenStack Vitrage
>>
>>
>>
>> Ifat,
>>
>> after adding the mentioned line in /etc/heat/policy.json first error "You
>> are not authorized to use global_index" seems to be solved.
>>
>> However, in Horizon I still see same error (file is attached).
>>
>> So, After looking inside the code I found that I installed OpenStack
>> using
>> stable/newton branch but Vitrage is installed from Master branch. Since,
>> there are few changes in code (python-vitrageclient/
> vitrageclient/client.py)
>> that's why I think this error is occurring. Therefore, I 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Tim Bell

> On 22 Mar 2017, at 00:53, Alex Schultz  wrote:
> 
> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
>> 
>> 
>> On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>> 
>>> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
 I've been following this thread, but I must admit I seem to have missed 
 something.
 
 What problem is being solved by storing per-server service configuration 
 options in an external distributed CP system that is currently not 
 possible with the existing pattern of using local text files?
 
>>> 
>>> This effort is partially to help the path to containerization where we
>>> are delivering the service code via container but don't want to
>>> necessarily deliver the configuration in the same fashion.  It's about
>>> ease of configuration: moving from service -> config files (on many
>>> hosts/containers) to service -> config via etcd (a single source
>>> cluster).  It's also about an alternative to configuration management
>>> where today we have many tools handling the files in various ways
>>> (templates, from repo, via code providers) and trying to come to a
>>> more unified way of representing the configuration such that the end
>>> result is the same for every deployment tool.  All tools load configs
>>> into $place and services can be configured to talk to $place.  It
>>> should be noted that configuration files won't go away because many of
>>> the companion services still rely on them (rabbit/mysql/apache/etc) so
>>> we're really talking about services that currently use oslo.
>> 
>> Thanks for the explanation!
>> 
>> So in the future, you expect a node in a clustered OpenStack service to be 
>> deployed and run as a container, and then that node queries a centralized 
>> etcd (or other) k/v store to load config options. And other services running 
>> in the (container? cluster?) will load config from local text files managed 
>> in some other way.
> 
> No, the goal is that in the etcd mode it may not be necessary to load
> the config files locally at all.  That being said, there would still be
> support for having some configuration from a file and optionally
> provide a kv store as another config point.  'service --config-file
> /etc/service/service.conf --config-etcd proto://ip:port/slug'
> 
>> 
>> No wait. It's not the *services* that will load the config from a kv 
>> store--it's the config management system? So in the process of deploying a 
>> new container instance of a particular service, the deployment tool will 
>> pull the right values out of the kv system and inject those into the 
>> container, I'm guessing as a local text file that the service loads as 
>> normal?
>> 
> 
> No, the thought is to have the services pull their configs from the kv
> store via oslo.config.  The point is hopefully to not require
> configuration files at all for containers.  The container would get
> where to pull its configs from (i.e. http://11.1.1.1:2730/magic/ or
> /etc/myconfigs/).  At that point it just becomes another place to load
> configurations from via oslo.config.  Configuration management comes
> in as a way to load the configs either as a file or into etcd.  Many
> operators (and deployment tools) are already using some form of
> configuration management so if we can integrate in a kv store output
> option, adoption becomes much easier than making everyone start from
> scratch.
> 
>> This means you could have some (OpenStack?) service for inventory management 
>> (like Karbor) that is seeding the kv store, the cloud infrastructure 
>> software itself is "cloud aware" and queries the central distributed kv 
>> system for the correct-right-now config options, and the cloud service 
>> itself gets all the benefits of dynamic scaling of available hardware 
>> resources. That's pretty cool. Add hardware to the inventory, the cloud 
>> infra itself expands to make it available. Hardware fails, and the cloud 
>> infra resizes to adjust. Apps running on the infra keep doing their thing 
>> consuming the resources. It's clouds all the way down :-)
>> 
>> Despite sounding pretty interesting, it also sounds like a lot of extra 
>> complexity. Maybe it's worth it. I don't know.
>> 
> 
> Yea there's extra complexity at least in the
> deployment/management/monitoring of the new service or maybe not.
> Keeping configuration files synced across 1000s of nodes (or
> containers) can be just as hard however.
> 

Would there be a mechanism to stage configuration changes (such as a 
QA/production environment) or have different configurations for different 
hypervisors?

We have some of our hypervisors set for high performance which needs a slightly 
different nova.conf (such as CPU passthrough).
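
One conceivable way to get both (purely a sketch -- the key layout and the
resolve helper are invented here, not anything oslo.config implements
today) is to namespace keys by environment and host group, falling back
from the most specific entry to the default:

```python
# Hypothetical key layout: /config/<env>/<group>/<service>/<option>
store = {
    "/config/prod/default/nova/cpu_mode": "host-model",
    "/config/prod/highperf/nova/cpu_mode": "host-passthrough",
    "/config/qa/default/nova/cpu_mode": "qemu64",
}

def resolve(env, group, service, option):
    # Fall back from the host group to the environment-wide default.
    for g in (group, "default"):
        key = "/config/%s/%s/%s/%s" % (env, g, service, option)
        if key in store:
            return store[key]
    return None

assert resolve("prod", "highperf", "nova", "cpu_mode") == "host-passthrough"
assert resolve("prod", "web", "nova", "cpu_mode") == "host-model"
```

Staging would then just be another environment prefix, pointing the QA
deployment at /config/qa/.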

Tim

>> Thanks again for the explanation.
>> 
>> 
>> --John
>> 
>> 
>> 
>> 
>>> 
>>> Thanks,
>>> -Alex
>>> 
 
 --John
 
 
 
 
 On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
 
>