Re: [openstack-dev] [Kuryr] Proposing vikasc for kuryr-libnetwork core

2016-08-16 Thread Gal Sagie
+1. Vikas has been doing great work on the project and is a good
addition to the team.


On Wed, Aug 17, 2016 at 12:54 AM, Antoni Segura Puimedon  wrote:

> Hi Kuryrs,
>
> I would like to propose Vikas Choudhary for the core team for the
> kuryr-libnetwork subproject. Vikas has kept submitting patches and reviews
> at a very good rhythm in the past cycle and I believe he will help a lot to
> move kuryr forward.
>
> I would also like to propose him for the core team for the
> kuryr-kubernetes subproject since he has experience in the day to day work
> with kubernetes and can help with the review and refactoring of the
> prototype upstreaming.
>
> Regards,
>
> Antoni Segura Puimedon
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How do I get devstack with nova-network now?

2016-08-16 Thread BİLGEM BTE





Hi 




Your local.conf should look like the code below. You should add "disable_service n-net";
that line disables nova-network.




# Neutron 
# --- 
disable_service n-net 
enable_service neutron 
enable_service q-svc 
enable_service q-agt 
enable_service q-dhcp 
enable_service q-l3 
enable_service q-meta 
enable_service q-lbaas 

Thanks 

Yasemin 
- Orijinal Mesaj -

Kimden: "Matt Riedemann"  
Kime: "OpenStack Development Mailing List (not for usage questions)" 
 
Gönderilenler: 17 Ağustos Çarşamba 2016 5:52:09 
Konu: [openstack-dev] How do I get devstack with nova-network now? 

My nova-net local.conf isn't working anymore apparently, neutron is 
still getting installed and run rather than nova-network even though I 
have this in my local.conf: 

stack@novanet:~$ cat devstack/local.conf | grep enable_service 
enable_service tempest 
enable_service n-net 
#enable_service q-svc 
#enable_service q-agt 
#enable_service q-dhcp 
#enable_service q-l3 
#enable_service q-meta 
#enable_service q-lbaas 
#enable_service q-lbaasv2 

This guide tells me about the default networking now: 

http://docs.openstack.org/developer/devstack/networking.html 

But doesn't tell me how to get nova-network running instead. 

It's also nearly my bedtime and I'm being lazy, so figured I'd post this 
if for nothing else a heads up that people's local configs for 
nova-network might no longer work with neutron being the default. 

-- 

Thanks, 

Matt Riedemann 


__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-08-16 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Neutron] No DVR meeting this week.

2016-08-16 Thread Swaminathan Vasudevan
Hi Folks,
We will not be having our weekly DVR meeting this week.
We will resume the meeting next week.
Thanks
Swami
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage][vitrage-dashboard] cropped entity information

2016-08-16 Thread Yujun Zhang
In the topology view, the content of entity information is sometimes
cropped when there are too many rows.

Is there a way to peek the cropped content?

[image: xbm5sl.jpg]
--
Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with Ceilometer Kafka Publisher.

2016-08-16 Thread Raghunath D
Hi Gordon,

As suggested I pasted pipeline.yaml content to paste.openstack.org

http://paste.openstack.org/show/558688/
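For readers without access to the paste, an event pipeline publishing to both kafka and udp generally has the shape below. This is a generic sketch, not the poster's actual config: the host, port, and topic values are placeholders.

```yaml
# Generic ceilometer event_pipeline.yaml shape; all addresses are placeholders.
sources:
    - name: event_source
      events:
          - "*"
      sinks:
          - event_sink
sinks:
    - name: event_sink
      transformers:
      publishers:
          - kafka://127.0.0.1:9092?topic=ceilometer
          - udp://127.0.0.1:4952
```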

With Best Regards
 Raghunath Dudyala
 Tata Consultancy Services Limited
 Mailto: raghunat...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 

-gordon chung  wrote: -
To: "openstack-dev@lists.openstack.org" 
From: gordon chung 
Date: 08/16/2016 07:51PM
Subject: Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with 
Ceilometer Kafka Publisher.

 what does your pipeline.yaml look like? maybe paste it to 
paste.openstack.org. i imagine it's correct if your udp publishing works as 
expected.
 
 
 
On 16/08/16 04:04 AM, Raghunath D wrote:
   Hi Simon,
 
 I have two OpenStack setups: one with Kilo and the other with Mitaka.
 
 Please find details of kafka version's below:
 Kilo kafka client library Version:1.3.1
 Mitaka kafka client library Version:1.3.1
 Kafka server version on both kilo and mitaka: kafka_2.11-0.9.0.0.tgz
 
 One observation: on the Kilo setup I didn't see the duplicate message
issue, while on the Mitaka setup I am experiencing it.
 
 With Best Regards
 Raghunath Dudyala
 Tata Consultancy Services Limited
 Mailto:  raghunat...@tcs.com
 Website:  http://www.tcs.com
 
 
 
 
 -
 
 
 From: Simon Pasquier [mailto:spasqu...@mirantis.com] 
 
 Sent: Thursday, August 11, 2016 2:34 AM
 To: OpenStack Development Mailing List (not for usage questions)  

 Subject: Re: [openstack-dev] [Ceilometer]:Duplicate messages with Ceilometer 
Kafka Publisher.
  
 
 
 
 
 Hi
  Which version of Kafka do you use?
  BR
  Simon
  
  
 
 On Thu, Aug 11, 2016 at 10:13 AM, Raghunath D  wrote:
  Hi , 
 
   We are injecting events to our custom plugin in ceilometer. 
   The ceilometer pipeline.yaml  is configured to publish these events over 
kafka and udp, consuming these samples using kafka and udp clients. 
 
   KAFKA publisher: 
   --- 
   When the events are sent continuously, we can see duplicate messages
received in the kafka client. 
   From the log it seems the ceilometer kafka publisher fails to send messages,
but these messages are still received by the kafka server. So when kafka
resends these "failed" messages, we see duplicate messages in the kafka client. 
   Please find the attached log for reference. 
   Is this a known issue? 
   Is there any workaround for this issue? 
 
   UDP publisher: 
     No duplicate msg's issue seen here and it is working as expected. 
 
         
 
 With Best Regards
 Raghunath Dudyala
 Tata Consultancy Services Limited
 Mailto:  raghunat...@tcs.com
 Website: http://www.tcs.com
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
    

 
 [attachment "ATT1.txt" removed by Raghunath D/HYD/TCS] 
  
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
-- 
gord   
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for API docs

2016-08-16 Thread Anne Gentle
On Tue, Aug 16, 2016 at 1:05 AM, Shuu Mutou  wrote:

> Hi Anne,
>
> AFAIK, the API WG adopted Swagger (OpenAPI) as the common tool for API docs.
> Anne, has that adoption changed? If not, do we also need to maintain
> many RST files?
>

It does say either/or in the API WG guideline:
http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html


>
> IMO, so that the reference and the source code don't conflict,
> they should be as near each other as possible, as follows. This decreases
> the maintenance cost of documents and increases document reliability. So I
> believe our approach is more ideal.
>
>
This isn't about a contest between projects for the best approach. This is
about serving end-users the best information we can.


> The Best: the references generated from source code.
>

I don't want to argue, but anything generated from the source code suffers
if the source code changes in a way that reviewers don't catch; with a
backwards-incompatible change you can break your contract.


> Better: the references written in docstring.
>
> We know some projects abandoned this approach, and they now use RST +
> YAML.
> But we hope to decrease the maintenance cost of the documents, so we should
> not create so many RST files, I think.
>
>
I think you'll see the evolution of our discussions over the years has
brought us to this point in time. Yes, there are trade-offs in these
decisions.


> I'm working on auto-generating a Swagger spec from the Magnum source code
> using pecan-swagger [1], and on improving pecan-swagger with Michael McCune
> [2].
> These will generate almost all of the Magnum API spec automatically in
> OpenAPI format.
> This approach can also be adopted by other APIs that use pecan and WSME.
> Please check this also.
>

I ask you to consider the experience of someone consuming the API documents
OpenStack provides. There are 26 REST API services under an OpenStack
umbrella. Twelve of them will be included in an unified side-bar navigation
on developer.openstack.org due to using Sphinx tooling provided as a common
web experience. Six of them don't have redirects yet from the "old way" to
do API reference docs. Seven of them are not collected under a single
landing page or common sidebar or navigation. Three of them have no API
docs published to a website.

I'm reporting what I'm seeing from a broader viewpoint than a single
project. I don't have a solution other than RST/YAML for common navigation,
and I'm asking you to provide ideas for that integration point.

My vision is that even if you choose to publish with OpenAPI, you would
find a way to make this web experience better. We can do better than this
scattered approach. I'm asking you to find a way to unify and consider the
web experience of a consumer of OpenStack services. Can you generate HTML
that can plug into the openstackdocstheme we are providing as a common tool?
Thanks,
Anne


>
> [1] https://review.openstack.org/#/c/303235/
> [2] https://github.com/elmiko/pecan-swagger/
>
> Regards,
> Shu
>
>
> > -Original Message-
> > From: Anne Gentle [mailto:annegen...@justwriteclick.com]
> > Sent: Monday, August 15, 2016 11:00 AM
> > To: Hongbin Lu 
> > Cc: openstack-dev@lists.openstack.org; enstack.org
> > 
> > Subject: Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common
> > tooling for API docs
> >
> > Hi Hongbin,
> >
> > Thanks for asking. I'd like for teams to look for ways to innovate and
> > integrate with the navigation as a good entry point for OpenAPI to become
> > a standard for OpenStack to use. That said, we have to move forward and
> > make progress.
> >
> > Is Lars or anyone on the Magnum team interested in the web development
> work
> > to integrate with the sidebar? See the work at
> > https://review.openstack.org/#/c/329508 and my comments on
> > https://review.openstack.org/#/c/351800/ saying that I would like teams
> > to integrate first to provide the best web experience for people
> consuming
> > the docs.
> >
> > Anne
> >
> > On Fri, Aug 12, 2016 at 4:43 PM, Hongbin Lu  >  > wrote:
> >
> >
> >   Hi team,
> >
> >
> >
> >   As mentioned in the email below, Magnum is not using the common
> > tooling
> > for generating API docs, so we are excluded from the common navigation of
> > OpenStack API. I think we need to prioritize the work to fix it. BTW, I
> > notice there is a WIP patch [1] for generating API docs by using Swagger.
> > However, I am not sure if Swagger belongs to “common tooling” (docs team,
> > please confirm).
> >
> >
> >
> >   [1] https://review.openstack.org/#/c/317368/
> > 
> >
> >
> >
> >   Best regards,
> >
> >   Hongbin
> >
> >
> >
> >   From: Anne Gentle [mailto:annegen...@justwriteclick.com
> >  ]
> >   Sent: August-10-16 3:50 PM
> >   To: 

Re: [openstack-dev] How do I get devstack with nova-network now?

2016-08-16 Thread Matt Riedemann

On 8/16/2016 9:52 PM, Matt Riedemann wrote:

My nova-net local.conf isn't working anymore apparently, neutron is
still getting installed and run rather than nova-network even though I
have this in my local.conf:

stack@novanet:~$ cat devstack/local.conf  | grep enable_service
enable_service tempest
enable_service n-net
#enable_service q-svc
#enable_service q-agt
#enable_service q-dhcp
#enable_service q-l3
#enable_service q-meta
#enable_service q-lbaas
#enable_service q-lbaasv2

This guide tells me about the default networking now:

http://docs.openstack.org/developer/devstack/networking.html

But doesn't tell me how to get nova-network running instead.

It's also nearly my bedtime and I'm being lazy, so figured I'd post this
if for nothing else a heads up that people's local configs for
nova-network might no longer work with neutron being the default.



Looks like I just have to be explicit about disabling the neutron 
services and enabling the n-net service:


enable_service n-net
disable_service q-svc
disable_service q-agt
disable_service q-dhcp
disable_service q-l3
disable_service q-meta
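For anyone hitting the same thing, here is a minimal nova-network local.conf sketch combining the services above. The passwords are placeholders I added for completeness, not from this thread:

```
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Explicitly switch from the neutron default back to nova-network
enable_service n-net
disable_service q-svc
disable_service q-agt
disable_service q-dhcp
disable_service q-l3
disable_service q-meta
```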

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How do I get devstack with nova-network now?

2016-08-16 Thread Matt Riedemann
My nova-net local.conf isn't working anymore apparently, neutron is 
still getting installed and run rather than nova-network even though I 
have this in my local.conf:


stack@novanet:~$ cat devstack/local.conf  | grep enable_service
enable_service tempest
enable_service n-net
#enable_service q-svc
#enable_service q-agt
#enable_service q-dhcp
#enable_service q-l3
#enable_service q-meta
#enable_service q-lbaas
#enable_service q-lbaasv2

This guide tells me about the default networking now:

http://docs.openstack.org/developer/devstack/networking.html

But doesn't tell me how to get nova-network running instead.

It's also nearly my bedtime and I'm being lazy, so figured I'd post this 
if for nothing else a heads up that people's local configs for 
nova-network might no longer work with neutron being the default.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [javascript] library api reference publishing

2016-08-16 Thread Yujun Zhang
As an action from the javascript meeting, below is the summary of how to
integrate jsdoc [1] generated api reference with sphinx document.

Feel free to comment and discuss in today's meeting.


   - findings
      - openstack api ref is dedicated to describing the RESTful interface:
        http://developer.openstack.org/api-ref.html
      - no example of a library api reference found
   - proposal
      - host the api reference on an independent site and add an external
        link on the openstack document site
      - api reference: http://js-generator-openstack.openzero.net/
      - infra document: http://docs.openstack.org/infra/
   - TODO
      - hosting of the api reference (static HTML pages)
         - github pages?
      - publish the api reference with CI
[1] http://usejsdoc.org/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] launchpad bugs

2016-08-16 Thread Emilien Macchi
Hi team,

This e-mail is addressed to TripleO developers interested by helping
in bug triage.
If you already subscribed to TripleO bugs notifications, you can skip
and go at the second part of the e-mail.
If not, please:
1) Go on https://bugs.launchpad.net/tripleo/+subscriptions
2) Click on "Add a subscription"
3) Bug mail recipient: yourself / Receive mail for bugs affecting
tripleo that are added or closed
4) Create a mail filter if you like your emails classified.
That way, you'll get an email for every new bug and their updates.


Last but not least, please keep your assigned bugs up-to-date and
close them with "Fix released" when automation didn't do it for you.
Reminder: auto-triage is our model; we trust you to assign the bug
the right priority and milestone.
If in any doubt, please ask on #tripleo.
Note: I spent time today triaging and updating a good number
of bugs. Let me know if I did something wrong with your bugs.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-16 Thread Emilien Macchi
On Tue, Aug 16, 2016 at 4:49 PM, James Slagle  wrote:
> On Mon, Aug 15, 2016 at 4:54 AM, Dmitry Tantsur  wrote:
>> Hi everyone, happy Monday :)
>>
>> I'd like to start the discussion about CI-testing the optional composable
>> services in the CI (I'm primarily interested in Ironic, but I know there are
>> a lot more).
>>
>> Currently every time we change something in an optional service, we have to
>> create a DO-NOT-MERGE patch making the service in question not optional.
>> This approach has several problems:
>>
>> 1. It's not usually done for global refactorings.
>>
>> 2. The current CI does not have any specific tests to check that the
>> services in question actually work at all (e.g. in my experience the CI was
>> green even though nova-compute could not reach ironic).
>>
>> 3. If something breaks, it's hard to track the problem down to a specific
>> patch, as there is no history of gate runs.
>>
>> 4. It does not test the environment files we provide for enabling the
>> service.
>>
>> So, are there any plans to start covering optional services? Maybe at least
>> a non-HA job with all environment files included? It would be cool to also
>> somehow provide additional checks though. Or, in case of ironic, to disable
>> the regular nova compute, so that the ping test runs on an ironic instance.
>
> There are no immediate plans. Although I think the CI testing matrix
> is always open for discussion.
>
> I'm a little skeptical we will be able to deploy all services within
> the job timeout. And if we are, such a job seems better suited as a
> periodic job than in the check queue.
>
> The reason being is that there are already many different services
> that can break TripleO, and I'd rather focus on improving the testing
> of the actual deployment framework itself, instead of testing the
> "whole world" on every patch. We only have so much capacity. For
> example, I'd rather see us testing updates or upgrades on each patch
> instead of all the services.
>
> That being said, if you wanted to add a job that covered Ironic, I'd
> at least start with adding a job in the experimental queue. If it
> proves to be stable, we can always consider moving it to the check
> queue.
>

TripleO CI is having the same problem as Puppet CI had some time ago,
when we wanted to test more and more.

We solved this problem with scenarios.
Everything is documented here:
https://github.com/openstack/puppet-openstack-integration#description

We're testing all scenarios for all commits in
puppet-openstack-integration (equivalent to tripleo-ci) but only some
scenarios for each Puppet module.
Ex: OpenStack Sahara is deployed in scenario003, so a patch in
puppet-sahara will only run scenario003 job. We do that in Zuul
layout.

Because it would be complicated to do the same with TripleO Heat
Templates, we could run the same kind of job in periodic pipeline.
Though for Puppet jobs, we could run the tripleo scenarios in the
check/gate, as we do with the current multinode job.

So here's a summary of my proposal (and I volunteer to actually work
on it, like I did for Puppet CI):
* Call current jobs "compute-kit" which are current nonha/ha (ovb and
multinode) jobs deploying the basic services (undercloud +
Nova/Neutron/Glance/Keystone/Heat/Cinder/Swift and common services,
apache, mysql, etc).
* Create TripleO envs deploying different scenarios (we could start by
scenario001 deploying "compute-kit" + ceph (rbd backend for Nova /
Glance / Cinder). It's an example, feel free to propose something
else. The envs would live in tripleo-ci repo among others ci envs.
* Switch puppet-ceph zuul layout to stop running ovb ha job and run
the scenario001 job in check/gate (with the experimental pipeline
transition, as usual).
* Run scenario001 job in check/gate of tripleo-ci among other jobs.
* Run scenario001 job in periodic pipeline for puppet-tripleo and
tripleo-heat-templates.

Any feedback is welcome, but I think this would be a good start of
scaling our CI jobs coverage.
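As an illustration of the Zuul layout part of this proposal, a hypothetical (zuul v2-style) fragment might look like the following. The job name is made up for the example; the real scenario job names would be defined in tripleo-ci:

```yaml
# Hypothetical layout.yaml fragment: puppet-ceph only runs the scenario
# that actually deploys ceph, instead of the full ovb-ha job.
projects:
  - name: openstack/puppet-ceph
    check:
      - gate-tripleo-ci-centos-7-scenario001-multinode
    gate:
      - gate-tripleo-ci-centos-7-scenario001-multinode
```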
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #90

2016-08-16 Thread Iury Gregory
We did the meeting, you can read the notes in

http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-08-16-15.00.log.html

Thanks!

2016-08-15 21:55 GMT-03:00 Iury Gregory <iurygreg...@gmail.com>:

> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4
>
> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160816
>
>
> Feel free to add topics, and any outstanding bug and patch.
>
> See you tomorrow!
> Thanks,
>
>
> --
> ~
>
>
> *Att[]'sIury Gregory Melo Ferreira **Master student in Computer Science
> at UFCG*
> *E-mail:  iurygreg...@gmail.com <iurygreg...@gmail.com>*
> ~
>



-- 

~


*Att[]'sIury Gregory Melo Ferreira **Master student in Computer Science at
UFCG*
*E-mail:  iurygreg...@gmail.com <iurygreg...@gmail.com>*
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-16 Thread Liping Mao (limao)
Hi Ihar,

Thanks for your comments.
Agreed, kuryr needs to read the MTU from the neutron network.

I submit a bug to track this:
https://bugs.launchpad.net/kuryr-libnetwork/+bug/1613528

And patch sets are here:
https://review.openstack.org/#/c/355712/
https://review.openstack.org/#/c/355714/


Generally, when kuryr creates the docker interface, it calls the
neutron network API to get the MTU, then sets it on the interface.
Thanks.


Regards,
Liping Mao

On 16/8/17, 8:20 AM, "Ihar Hrachyshka"  wrote:

>Liping Mao (limao)  wrote:
>
>> Hi Kuryr team,
>>
>> When the network in neutron using overlay for vm,
>> it will use dhcp option to control the VM interface MTU,
>> but for docker, the ip address does not get from dhcp.
>> So it will not set up proper MTU in container.
>>
>> Two work-around in my mind now:
>> 1. Set the default MTU in docker to 1450 or less.
>> 2. Manually configure MTU after container start up.
>>
>> But both of these are not good, the idea way in my mind
>> is when libnetwork Call remote driver create network,
>> kuryr create neutron network, then return Proper MTU to libnetwork,
>> docker use this MTU for this network. But docker remote driver
>> does not support this.
>>
>> Or maybe let user config MTU in remote driver,
>> a little similar with overlay driver:
>> https://github.com/docker/libnetwork/pull/1349
>
>Please don’t allow configuring the MTU in any specific way; just make sure
>that the MTU value on neutron network is applied on the container side.
>Also, enforcing it to 1450 is not going to work for lots and lots of
>cases.  
>Enforcing it to anything won’t work, because MTUs depend on network type.
>
>>
>> But now, seems like remote driver will not do similar things.
>>
>> Any idea to solve this problem? Thanks.
>>
>>
>> Regards,
>> Liping Mao
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] reference a type of alarm in template

2016-08-16 Thread Yujun Zhang
Hi, Liat,

Thanks for the clarification. But my question still remains.

Under this definition (type <=> datasource), we can only refer to **one**
alarm from each datasource in the template. Is that a limitation?

How should I refer two different aodh alarms in the template?
--
Yujun
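P.S. For illustration, a hypothetical template fragment distinguishing two aodh alarms by the `name` field might look like the sketch below. The alarm names are the made-up examples from my earlier question, not real aodh alarm names:

```yaml
definitions:
  entities:
    - entity:
        category: ALARM
        type: aodh
        name: volume.corrupt
        template_id: corrupt_alarm
    - entity:
        category: ALARM
        type: aodh
        name: volume.deleted
        template_id: deleted_alarm
```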

On Tue, Aug 16, 2016 at 10:24 PM Har-Tal, Liat (Nokia - IL) <
liat.har-...@nokia.com> wrote:

> Hi Yujun,
>
>
>
> The template example you are looking at is invalid.
>
> I added a new valid example (see the following link:
> https://review.openstack.org/#/c/355840/)
>
>
>
> As Elisha wrote, the ‘type’ field means the alarm type itself or in simple
> words where it was generated (Vitrage/Nagios/Zabbix/AODH)
>
> The ‘name’ field is not mandatory and it describes the actual problem
> which the alarm was raised about.
>
>
>
> In the example you can see two alarm types:
>
>
>
> 1.   Zabbix alarm - No use of “name” field:
>
>  category: ALARM
>
>  type: zabbix
>
>  rawtext: Processor load is too high on {HOST.NAME}
>
>  template_id: zabbix_alarm
>
> 2.   Vitrage alarm
>
> category: ALARM
>
> type: vitrage
>
> name: CPU performance degradation
>
> template_id: instance_alarm
>
>
>
> One more point, in order to define an entity in the template, the only
> mandatory fields are:
>
> · template_id
>
> · category
>
> All the other fields are optional; they let you define the entity more
> accurately.
>
> Each alarm data source has its own set of fields you can use – we will add
> documentation for them in the future.
>
>
>
> Best regards,
>
> Liat Har-Tal
>
>
>
>
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Tuesday, August 16, 2016 5:18 AM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [vitrage] reference a type of alarm in
> template
>
>
>
> Hi, Elisha
>
>
>
> There is no `name` in the template [1], and type is not one of 'nagios',
> 'aodh' and 'vitrage' in the examples [2].
>
>
>
> - entity:
>
> category: ALARM
>
> type: Free disk space is less than 20% on volume /
>
> template_id: host_alarm
>
>
>
> [1]
> https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
>
> [2]
> https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst#examples
>
>
>
>
>
> On Tue, Aug 16, 2016 at 2:21 AM Rosensweig, Elisha (Nokia - IL) <
> elisha.rosensw...@nokia.com> wrote:
>
> Hi,
>
>
>
> The "type" means where it was generated - aodh, vitrage, nagios...
>
>
>
> I think you are looking for"name", a field that describes the actual
> problem. We should add that to our documentation to clarify.
>
>
>
> Sent from Nine 
> --
>
> *From:* Yujun Zhang 
> *Sent:* Aug 15, 2016 16:10
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [vitrage] reference a type of alarm in template
>
>
>
> I have a question on how to reference a type of alarm in template so that
> we can build scenarios.
>
>
>
> In the template sample [1], an alarm entity has three keys: `category`,
> `type` and `template_id`. It seems `type` is the only information to
> distinguish different alarms. However, when an alarm is raised by aodh, it
> seems all alarms are assigned entity type `aodh` [2], so are they shown in
> dashboard.
>
>
>
> Suppose we have two different types of alarms from `aodh`, e.g.
> `volume.corrupt` and `volume.deleted`. How should I reference them
> separately in a template?
>
>
>
> [image: Screen Shot 2016-08-15 at 8.44.56 PM.png]
>
> [1]
> https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
>
> [2]
> https://github.com/openstack/vitrage/blob/master/vitrage/datasources/aodh/transformer.py#L75
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Running Tempest tests for a customized cloud

2016-08-16 Thread Elancheran Subramanian
Hello Punal,
We do support both V2 and V3; that's just an example I stated, BTW. We do have
our own integration tests which pretty much cover all our integration
points with openstack. But we would like to leverage tempest while doing
our upstream merges of openstack components in CI.

I believe the tests support an include list; how can I exclude tests? Any
pointer would be a great help.

Thanks,
Cheran
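P.S. One possible workflow is a blacklist file of test-name regexes. This is a sketch under assumptions: the exact flag names vary across tempest/os-testr versions, so verify against `tempest run --help` or `ostestr --help` in your environment, and the test path pattern below is illustrative.

```shell
# Build a skip list of tests our deployment cannot pass
# (domain listing is disabled in our cloud).
cat > skip-list.txt <<'EOF'
tempest\.api\.identity\.v3.*test_domains
EOF

# Then, on a node with a deployed cloud and a configured tempest
# (not runnable here), something like:
#   tempest run --blacklist-file ./skip-list.txt
echo "wrote skip list with $(wc -l < skip-list.txt) pattern(s)"
```

The advantage over patching the tests is that the skip list lives outside the tempest tree, so upstream merges stay clean.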

From: punal patel >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, August 16, 2016 at 4:16 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [QA] Running Tempest tests for a customized cloud

Hi Cheran,

Best practice depends on your test plan and what coverage you need. Does your 
test plan cover the V3 Identity tests? If it does and they are failing, you 
should modify them to work in your environment. The fast way to move forward is 
to exclude those tests.

-Punal

On Tue, Aug 16, 2016 at 3:40 PM, Elancheran Subramanian 
> wrote:
Hello There,
I’m currently playing with using Tempest as the integration tests for our 
internal and external clouds, and I am facing some issues with APIs that are 
not supported in our cloud. For example, listing domains isn’t supported for 
any user, so the V3 Identity tests are failing. I would like to know the best 
practice here: fix those tests and apply those fixes as a patch? Or just 
exclude those tests?

Would be great if anyone could share their experience on this.

Thanks & Regards,
Cheran

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-16 Thread huangdenghui
Hi Carl
Thanks for the reply. I was wondering how this works: the fg device has one 
private IP address, and the interface on the upstream router has two IP 
addresses, one private and one public. Can the fg device do proxy ARP in this 
situation?







在 2016-08-03 06:38:52,"Carl Baldwin"  写道:





On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui  wrote:

Hi John and Brian,
   Thanks for your information. If we get patch [1] and patch [2] merged, then 
fg can allocate a private IP address. After that, we need to consider the 
floating IP dataplane. In the current DVR implementation, fg is used for 
reachability testing for floating IPs. Now, with the subnet types BP, fg has a 
different subnet than the floating IP address. From the fg subnet gateway's 
point of view, reaching a floating IP needs a route entry: the destination is 
the floating IP address and the fg IP address is the nexthop. This route entry 
needs to be populated when the floating IP is created and deleted when the 
floating IP is disassociated. Any comments?



The fg device will still do proxy arp for the floating ip to other devices on 
the external network. This will be part of our testing. The upstream router 
should still have an on-link route on the network to the floating ip subnet. 
IOW, you shouldn't replace the floating ip subnet with the private fg subnet on 
the upstream router. You should add the new subnet to the already existing ones 
and the router should have an additional IP address on the new subnet to be 
used as the gateway address for north-bound traffic.


Carl__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-16 Thread Ihar Hrachyshka

Liping Mao (limao)  wrote:


Hi Kuryr team,

When a neutron network uses an overlay for VMs,
neutron uses a DHCP option to control the VM interface MTU;
but for Docker, the IP address is not obtained via DHCP,
so the proper MTU will not be set up in the container.

Two work-around in my mind now:
1. Set the default MTU in docker to 1450 or less.
2. Manually configure MTU after container start up.

But neither of these is good. The ideal way, in my mind,
is: when libnetwork calls the remote driver to create a network,
kuryr creates the neutron network and then returns the proper MTU to
libnetwork, and Docker uses this MTU for the network. But the Docker
remote driver does not support this.

Or maybe let user config MTU in remote driver,
a little similar with overlay driver:
https://github.com/docker/libnetwork/pull/1349


Please don’t allow the MTU to be configured in any specific way; just make  
sure that the MTU value of the neutron network is applied on the container  
side. Also, forcing it to 1450 is not going to work for lots and lots of  
cases. Forcing it to any fixed value won’t work, because the MTU depends on  
the network type.
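A toy calculation of why any fixed value is wrong (the overhead figures below are the commonly cited ones for an IPv4 underlay; they are assumptions for illustration, not values read from neutron):

```python
# Encapsulation overhead in bytes per network type, assuming an IPv4
# underlay. These are the usual textbook figures, not neutron config.
OVERHEAD = {
    'flat': 0,
    'vlan': 0,
    'gre': 42,    # outer Ethernet 14 + IPv4 20 + GRE with key 8
    'vxlan': 50,  # outer Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8
}

def container_mtu(underlay_mtu, network_type):
    """MTU the container interface should get for a given network type."""
    return underlay_mtu - OVERHEAD[network_type]
```

`container_mtu(1500, 'vxlan')` gives the familiar 1450, but a 9000-byte jumbo underlay gives 8950, and a flat network needs no reduction at all — which is why propagating the neutron network's own MTU value beats any constant.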




But now, seems like remote driver will not do similar things.

Any idea to solve this problem? Thanks.


Regards,
Liping Mao


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-16 Thread Matthew Thode
On 08/16/2016 05:38 PM, Brant Knudson wrote:
> Is bindep.txt meant to be used by anything other than OpenStack CI? (As
> in, are packagers going to rely on it?)
> 
> In keystone's bindep.txt, we have packages listed like:
> 
> |libldap2-dev [platform:dpkg] |
> 
> 
> -> Which is only needed if you install with keystone[ldap] (see
> keystone's setup.cfg[1]).
> 
> |
> |
> 
> |
> 
> |libsqlite3-dev [platform:dpkg] |
> 
> 
> -> Which is only needed for unit tests.
> 
> .. and maybe others that aren't needed in all deployments.
> 
> So there's a use case for a) integrating with extras, and b) a
> "test-bindep.txt".
> 
> Maybe this is supported already or is known work to do, or maybe
> somebody's looking for something to work on.
> 
> [1] http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg#n28
> |
> 
> -- 
> - Brant

I look at it (as a packager), though it is somewhat dubious whether
what's there is actually needed.  Having a test-bindep.txt like we have
test-requirements.txt would be helpful.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Running Tempest tests for a customized cloud

2016-08-16 Thread punal patel
Hi Cheran,

Best practice depends on your test plan and what coverage you need. Does your
test plan cover the V3 Identity tests? If it does and they are failing, you
should modify them to work in your environment. The fast way to move forward is
to exclude those tests.
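For the "fix the tests" route, the pattern Tempest itself uses is to gate tests on feature flags from tempest.conf rather than deleting them, so a single config change excludes them per-cloud. A minimal sketch with the stdlib unittest decorator (Tempest actually uses testtools and flag groups such as `[identity-feature-enabled]`; the `domain_listing` flag name here is made up):

```python
import unittest

# Stand-in for tempest.config.CONF; real Tempest code reads flags such as
# CONF.identity_feature_enabled.* from tempest.conf. "domain_listing" is a
# hypothetical flag for clouds where listing domains is not allowed.
class _FeatureFlags(object):
    domain_listing = False

CONF = _FeatureFlags()

class DomainsTest(unittest.TestCase):
    @unittest.skipUnless(CONF.domain_listing,
                         'domain listing is not supported in this cloud')
    def test_list_domains(self):
        self.fail('would call the v3 identity domains API here')

def run_suite():
    # Returns (skipped, failed, errored) counts for the test case above.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(DomainsTest)
    result = unittest.TestResult()
    suite.run(result)
    return len(result.skipped), len(result.failures), len(result.errors)
```

With the flag off, the test is reported as skipped instead of failed, which keeps the rest of the identity coverage intact.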

-Punal

On Tue, Aug 16, 2016 at 3:40 PM, Elancheran Subramanian <
esubraman...@godaddy.com> wrote:

> Hello There,
> I’m currently playing with using Tempest as our integration tests for our
> internal and external clouds, facing some issues with api which are not
> supported in our cloud. For ex, listing domains isn’t supported for any
> user, due to this V3 Identity tests are failing. So I would like to know
> what’s the best practice? Like fix those tests, and apply those fix as
> patch? Or just exclude those tests?
>
> Would be great if anyone could share their experience on this.
>
> Thanks & Regards,
> Cheran
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] infra_driver and mgmt_driver

2016-08-16 Thread Sridhar Ramaswamy
Hi Yong Sheng,

Sorry, missed this email from earlier.

Yes, absolutely. We should deprecate both of them in Newton and remove
it from the API in Ocata.

thanks,
Sridhar

On Tue, Aug 9, 2016 at 8:39 PM, gong_ys2004  wrote:
> Hi Sridhar,
>
> You said in https://review.openstack.org/#/c/255146/ we should not expose
> the infra_driver and mgmt_driver to user,
> but they have been already exposed by API at
> https://github.com/openstack/tacker/blob/master/tacker/extensions/vnfm.py#L205,
>
> so what do you think?
> Do we need remove the infra_driver and mgmt_driver from API side?
>
> I think we can remove these two drivers since all of them are indicated by
> VIM.
>
> Regards,
> yong sheng gong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Running Tempest tests for a customized cloud

2016-08-16 Thread Elancheran Subramanian
Hello There,
I’m currently playing with using Tempest as the integration tests for our 
internal and external clouds, and I am facing some issues with APIs that are 
not supported in our cloud. For example, listing domains isn’t supported for 
any user, so the V3 Identity tests are failing. I would like to know the best 
practice here: fix those tests and apply those fixes as a patch? Or just 
exclude those tests?

Would be great if anyone could share their experience on this.

Thanks & Regards,
Cheran
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-16 Thread Brant Knudson
On Fri, Aug 12, 2016 at 2:53 PM, Jeremy Stanley  wrote:

> On 2016-08-12 21:20:34 +0200 (+0200), Julien Danjou wrote:
> [...]
> > If bindep.txt is present, are the "standard" packages still installed?
> > If yes, this is going to be more challenging to get bindep.txt right, as
> > a missing entry will go unnoticed.
>
> As Andreas mentioned, we have a fallback list[*] which gets
> installed in most (non-devstack) jobs when you don't have a
> bindep.txt or other-requirements.txt in your repo. That said, the
> addition/modification/removal of that file is accounted for in jobs
> that test a change doing that, so you can see whether it will work
> on our infrastructure simply by proposing the change to your project
> and seeing if any of your jobs fail due to missing packages.
>
> [*] http://git.openstack.org/cgit/openstack-infra/project-
> config/tree/jenkins/data/bindep-fallback.txt >
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Is bindep.txt meant to be used by anything other than OpenStack CI? (As in,
are packagers going to rely on it?)

In keystone's bindep.txt, we have packages listed like:

libldap2-dev [platform:dpkg]


-> Which is only needed if you install with keystone[ldap] (see
keystone's setup.cfg[1]).


libsqlite3-dev [platform:dpkg]


-> Which is only needed for unit tests.

.. and maybe others that aren't needed in all deployments.

So there's a use case for a) integrating with extras, and b) a
"test-bindep.txt".

Maybe this is supported already or is known work to do, or maybe somebody's
looking for something to work on.

[1] http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg#n28
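For what it's worth, bindep has a notion of per-entry "profiles" that covers at least part of this: an entry can be tagged with a profile name, and requesting that profile on the bindep command line selects it. A sketch of what keystone's entries could look like — the profile names here are illustrative, so check your bindep version's documentation for the exact invocation:

```
# Installed on all dpkg platforms, any profile:
libssl-dev [platform:dpkg]
# Only when the "ldap" profile is requested (keystone[ldap] extra):
libldap2-dev [platform:dpkg ldap]
# Only when the "test" profile is requested (unit tests):
libsqlite3-dev [platform:dpkg test]
```

That still doesn't integrate with setuptools extras automatically, so point (a) above stands.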

-- 
- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] Any guidelines for naming heat resource type names?

2016-08-16 Thread Rob Cresswell
This sounds like a bug on the Horizon side. There is/was a patch regarding a 
similar issue with LBaaS v2 resources too. It's likely just an incorrect 
assumption in the logic processing these names.

Rob

On 16 Aug 2016 11:03 p.m., "Zane Bitter" 
> wrote:
On 16/08/16 17:43, Praveen Yalagandula wrote:
> Hi all,
> We have developed some heat resources for our custom API server. We
> followed the instructions in the development guide
> at http://docs.openstack.org/developer/heat/pluginguide.html and got
> everything working. However, the Horizon "Resource Types" panel is
> returning a 500 error with "TemplateSyntaxError: u'resource'" message.
>
> Upon further debugging, we found that the Horizon is expecting all Heat
> resource type names to be of the form
> namespace::category::resourcetype. However, we didn't see this
> requirement in the heat development documents. Some of our resource
> types have just two words (e.g., "Avi::Pool"). Heat itself didn't care
> about these names at all.

Given that Heat has a REST API specifically for validating templates,
it's surprising that Horizon would implement its own, apparently
incorrect, validation.

> Question: Is there a general consensus to enforce
> the namespace::category::resourcetype format for type names?

No. We don't really care what you call them (although using OS:: or
AWS:: as a prefix for your own custom types would be very unwise). In
fact, Heat allows you to specify a URL of a template file as a resource
type, and it sounds like that might run afoul of the same restriction.

> If so, can we please update the heat plugin guide to reflect this?
> If not, we can file it as a bug in Horizon.

+1 for bug in Horizon.

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] Any guidelines for naming heat resource type names?

2016-08-16 Thread Zane Bitter

On 16/08/16 17:43, Praveen Yalagandula wrote:

Hi all,
We have developed some heat resources for our custom API server. We
followed the instructions in the development guide
at http://docs.openstack.org/developer/heat/pluginguide.html and got
everything working. However, the Horizon "Resource Types" panel is
returning a 500 error with "TemplateSyntaxError: u'resource'" message.

Upon further debugging, we found that the Horizon is expecting all Heat
resource type names to be of the form
namespace::category::resourcetype. However, we didn't see this
requirement in the heat development documents. Some of our resource
types have just two words (e.g., "Avi::Pool"). Heat itself didn't care
about these names at all.


Given that Heat has a REST API specifically for validating templates, 
it's surprising that Horizon would implement its own, apparently 
incorrect, validation.



Question: Is there a general consensus to enforce
the namespace::category::resourcetype format for type names?


No. We don't really care what you call them (although using OS:: or 
AWS:: as a prefix for your own custom types would be very unwise). In 
fact, Heat allows you to specify a URL of a template file as a resource 
type, and it sounds like that might run afoul of the same restriction.



If so, can we please update the heat plugin guide to reflect this?
If not, we can file it as a bug in Horizon.


+1 for bug in Horizon.

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Proposing vikasc for kuryr-libnetwork core

2016-08-16 Thread Antoni Segura Puimedon
Hi Kuryrs,

I would like to propose Vikas Choudhary for the core team for the
kuryr-libnetwork subproject. Vikas has kept submitting patches and reviews
at a very good rhythm in the past cycle and I believe he will help a lot to
move kuryr forward.

I would also like to propose him for the core team for the kuryr-kubernetes
subproject since he has experience in the day to day work with kubernetes
and can help with the review and refactoring of the prototype upstreaming.

Regards,

Antoni Segura Puimedon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][kolla] keystone v3 support status and plans?

2016-08-16 Thread Antoni Segura Puimedon
On Tue, Aug 16, 2016 at 2:56 PM, Steven Dake (stdake) 
wrote:

> Hey kuryrians,
>
> Kolla has a kuryr review in our queue and it's looking really solid from
> Hui Kang.  The last key problem blocking the merge (and support of Kuryr in
> Kolla) is that Kolla only supports keystone v3 (and later when that comes
> from upstream).  As a result we are unable to merge kuryr because we can't
> validate it.  The work on the kolla side is about 98% done (need a few
> keystone v3 config options).  Wondering if keystone v3 will magically land
> in this cycle?
>

We are now trying to make the first release of kuryr-lib (openstack/kuryr)
and of kuryr-libnetwork (openstack/kuryr-libnetwork). Part of the release
is the keystone v3 support. It should be merged by next week.

Thanks a lot for reaching out!


> Its not all that challenging, but I realize the kuryr team may have other
> things that are higher priority on their plates.
>
> FWIW lack of keystone v3 support will be an adoption barrier for kuryr
> beyond kolla as well.
>
> Comments welcome.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-16 Thread Antoni Segura Puimedon
On Tue, Aug 16, 2016 at 6:29 AM, Liping Mao (limao)  wrote:

> Hi Kuryr team,
>
> I just notice, this can be fixed in kuryr bind code.
> I submit a bug to track this:
> https://bugs.launchpad.net/kuryr-libnetwork/+bug/1613528
>
> And patch sets are here:
> https://review.openstack.org/#/c/355712/
> https://review.openstack.org/#/c/355714/


Thanks a lot Liping Mao! That's a nice way to solve it.


>
>
> Thanks.
>
> Regards,
> Liping Mao
>
> On 16/8/15 下午11:20, "Liping Mao (limao)"  wrote:
>
> >Hi Kuryr team,
> >
> >I open an issue in docker-libnetwork:
> >https://github.com/docker/libnetwork/issues/1390
> >
> >Appreciate for any idea or comments. Thanks.
> >
> >Regards,
> >Liping Mao
> >
> >
> >On 16/8/12 下午4:08, "Liping Mao (limao)"  wrote:
> >
> >>Hi Kuryr team,
> >>
> >>When the network in neutron using overlay for vm,
> >>it will use dhcp option to control the VM interface MTU,
> >>but for docker, the ip address does not get from dhcp.
> >>So it will not set up proper MTU in container.
> >>
> >>Two work-around in my mind now:
> >>1. Set the default MTU in docker to 1450 or less.
> >>2. Manually configure MTU after container start up.
> >>
> >>But both of these are not good, the idea way in my mind
> >>is when libnetwork Call remote driver create network,
> >>kuryr create neutron network, then return Proper MTU to libnetwork,
> >>docker use this MTU for this network. But docker remote driver
> >>does not support this.
> >>
> >>Or maybe let user config MTU in remote driver,
> >>a little similar with overlay driver:
> >>https://github.com/docker/libnetwork/pull/1349
> >>
> >>But now, seems like remote driver will not do similar things.
> >>
> >>Any idea to solve this problem? Thanks.
> >>
> >>
> >>Regards,
> >>Liping Mao
> >>
> >>
> >>__
> ___
> >>_
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe:
> >>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >___
> ___
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][Horizon] Any guidelines for naming heat resource type names?

2016-08-16 Thread Praveen Yalagandula
Hi all,
We have developed some heat resources for our custom API server. We
followed the instructions in the development guide at
http://docs.openstack.org/developer/heat/pluginguide.html and got
everything working. However, the Horizon "Resource Types" panel is
returning a 500 error with "TemplateSyntaxError: u'resource'" message.

Upon further debugging, we found that the Horizon is expecting all Heat
resource type names to be of the form
namespace::category::resourcetype. However, we didn't see this
requirement in the heat development documents. Some of our resource types
have just two words (e.g., "Avi::Pool"). Heat itself didn't care about
these names at all.
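For reference, the tolerant parsing Horizon could do is simply to split on `::` and accept however many segments exist — a sketch of the idea, not Horizon's actual code:

```python
def split_resource_type(name):
    """Split a Heat resource type name into its '::' segments.

    Handles 'OS::Nova::Server' (three parts), 'Avi::Pool' (two parts),
    and a template URL used directly as a type (one part, no '::').
    """
    if '://' in name:  # template-URL resource types have no namespace
        return [name]
    return name.split('::')
```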

Question: Is there a general consensus to enforce the
namespace::category::resourcetype format for type names?
If so, can we please update the heat plugin guide to reflect this?
If not, we can file it as a bug in Horizon.

Thanks,
Praveen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] fluentd client composable service (request for review)

2016-08-16 Thread Lars Kellogg-Stedman
Howdy,

I'm working on a composable service to install the fluentd client
across the overcloud (and provide appropriate configuration files to
pull in relevant openstack logfiles).

There are two* reviews pending right now:

- in tripleo-heat-templates: https://review.openstack.org/#/c/353506
- in puppet-tripleo: https://review.openstack.org/#/c/353507

I am looking for someone on the tripleo team to take a quick look at how
this is laid out and give a thumbs-up or thumbs-down on the current
design.

Thanks,

* There is also a corresponding spec which should be posted soon.

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-16 Thread James Slagle
On Mon, Aug 15, 2016 at 4:54 AM, Dmitry Tantsur  wrote:
> Hi everyone, happy Monday :)
>
> I'd like to start the discussion about CI-testing the optional composable
> services in the CI (I'm primarily interested in Ironic, but I know there are
> a lot more).
>
> Currently every time we change something in an optional service, we have to
> create a DO-NOT-MERGE patch making the service in question not optional.
> This approach has several problems:
>
> 1. It's not usually done for global refactorings.
>
> 2. The current CI does not have any specific tests to check that the
> services in question actually works at all (e.g. in my experience the CI was
> green even though nova-compute could not reach ironic).
>
> 3. If something breaks, it's hard to track the problem down to a specific
> patch, as there is no history of gate runs.
>
> 4. It does not test the environment files we provide for enabling the
> service.
>
> So, are there any plans to start covering optional services? Maybe at least
> a non-HA job with all environment files included? It would be cool to also
> somehow provide additional checks though. Or, in case of ironic, to disable
> the regular nova compute, so that the ping test runs on an ironic instance.

There are no immediate plans. Although I think the CI testing matrix
is always open for discussion.

I'm a little skeptical we will be able to deploy all services within
the job timeout. And if we are, such a job seems better suited as a
periodic job than in the check queue.

The reason being is that there are already many different services
that can break TripleO, and I'd rather focus on improving the testing
of the actual deployment framework itself, instead of testing the
"whole world" on every patch. We only have so much capacity. For
example, I'd rather see us testing updates or upgrades on each patch
instead of all the services.

That being said, if you wanted to add a job that covered Ironic, I'd
at least start with adding a job in the experimental queue. If it
proves to be stable, we can always consider moving it to the check
queue.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Separating guides from murano developers documentation

2016-08-16 Thread Serg Melikyan
Hi folks,

at this moment all murano documentation (including admin & user
guides) is published in one place [0], but other projects keep only
developer documentation there and publish their guides separately. I
propose to follow the same model for the murano documentation.
What do you think?

References:
[0] docs.openstack.org/developer/murano/

-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] swiftclient reports that header value 120 must be of type str or bytes, not int; the detailed info is like the following

2016-08-16 Thread Tim Burke
This is the result of a fairly recent change in requests [1], released 8 Aug. 
While the enforcement of header-value types is fairly recent, they've 
documented since 2.0.1 that headers must be byte- or unicode-strings [2], and 
more recently clarified it in their quick start guide [3]. It looks like this 
has been addressed in smaug already [4], but I'd expect that you could 
downgrade requests to 2.10.0 while waiting for a new release of smaug.

[1] https://github.com/kennethreitz/requests/pull/3366
[2] 
https://github.com/kennethreitz/requests/blob/v2.0.1/docs/api.rst#behavioral-changes
[3] http://docs.python-requests.org/en/latest/user/quickstart/#custom-headers
[4] https://review.openstack.org/#/c/355427/

Tim
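Until a fixed smaug release is out, the caller-side workaround is simply to stringify header values before they reach swiftclient, since recent requests releases reject non-str/bytes values (such as the integer 120 above) instead of coercing them. A minimal sketch — the header name below is illustrative, not taken from the smaug code:

```python
def sanitize_headers(headers):
    """Return a copy of headers with every non-str/bytes value coerced to str."""
    return {key: value if isinstance(value, (str, bytes)) else str(value)
            for key, value in headers.items()}
```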

> On Aug 11, 2016, at 1:17 AM, zhangshuai <446077...@qq.com> wrote:
> 
> 
> 2016-08-11 08:01:25.371
>  DEBUG keystoneclient.auth.identity.v2 [-] 
> Making authentication request to 
> http://127.0.0.1:5000/v2.0/tokens from (pid=30110) get_auth_ref 
> /usr/local/lib/python2.7/dist-packages/keystoneclient/auth/identity/v2.py:87
> 
> 2016-08-11 08:01:28.786
>  DEBUG swiftclient [-] REQ: curl -i 
> http://127.0.0.1:8080/v1/AUTH_819fc87a6b1648f1b8dff8a0d09a9c62/smaug -X PUT 
> -H "Content-Length: 0" -H "X-Auth-Token: c11af1086dd14279..." 
> from (pid=30110) http_log 
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:164
> 
> 2016-08-11 08:01:28.786
>  DEBUG swiftclient [-] RESP STATUS: 
> 201 Created from (pid=30110) http_log 
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:165
> 
> 2016-08-11 08:01:28.787
>  DEBUG swiftclient [-] RESP HEADERS: 
> {u'Date': u'Thu, 11 Aug 2016 08:01:28 GMT', u'Content-Length': u'0', 
> u'Content-Type': u'text/html; charset=UTF-8', u'X-Trans-Id': 
> u'tx931f23ce6e584cbe9894e-0057ac30d8'} from (pid=30110) http_log 
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:166
> 
> 2016-08-11 08:01:28.803
>  DEBUG swiftclient [-] REQ: curl -i 
> http://127.0.0.1:8080/v1/AUTH_819fc87a6b1648f1b8dff8a0d09a9c62/leases -X PUT 
> -H "Content-Length: 0" -H "X-Auth-Token: c11af1086dd14279..." 
> from (pid=30110) http_log 
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:164
> 
> 2016-08-11 08:01:28.803
>  DEBUG swiftclient [-] RESP STATUS: 
> 201 Created from (pid=30110) http_log 
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:165
> 
> 2016-08-11 08:01:28.804
>  DEBUG swiftclient [-] RESP HEADERS: 
> {u'Date': u'Thu, 11 Aug 2016 08:01:28 GMT', u'Content-Length': u'0', 
> u'Content-Type': u'text/html; charset=UTF-8', u'X-Trans-Id': 
> u'tx8069168a9d5641728cab8-0057ac30d8'} from (pid=30110) http_log 
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:166
> 
> 2016-08-11 08:01:42.819 ERROR swiftclient [-] Header value
> 120 must be of type str or bytes, not int
>
> 2016-08-11 08:01:42.819 TRACE swiftclient Traceback 
> (most recent call last):
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient   File 
> "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 1565, in 
> _retry
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient 
> service_token=self.service_token, **kwargs)
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient   File 
> "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 1268, in 
> put_object
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient 
> conn.request('PUT', path, contents, headers)
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient   File 
> "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 401, in 
> request
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient 
> files=files, **self.requests_args)
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient   File 
> "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 384, in 
> _request
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient return 
> self.request_session.request(*arg, **kwarg)
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient   File 
> "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 457, in 
> request
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient prep = 
> self.prepare_request(req)
> 
> 2016-08-11 08:01:42.819 TRACE swiftclient   File 
> "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 390, in 
> prepare_request
> 
> 2016-08-11 08:01:42.819 

[openstack-dev] [Security] No IRC meeting this week

2016-08-16 Thread Rob C
All,

No IRC meeting this week as we're conducting the mid-cycle in Austin
Weds->Friday.

However, we'll be doing hangouts for those who can't make it onsite and
will be monitoring IRC so just ping us on there if you want to contribute.

Cheers
-Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] State of upgrade CLI commands

2016-08-16 Thread Brad P. Crochet
Hello TripleO-ians,

I've started to look again at the introduced, but unused/undocumented
upgrade commands. It seems to me that given the current state of the
upgrade process (at least from Liberty -> Mitaka), these commands make
a lot less sense.

I see one of two directions to take on this. Of course I would love to
hear other options.

1) Revert these commands immediately, and forget they ever existed.
They don't exactly work, and as I said, were never officially
documented, so I don't think a revert is out of the question.

or

2) Do a major overhaul, and rethink the interface entirely. For
instance, the L->M upgrade introduced a couple of new steps (the AODH
migration and the Keystone migration). These would have either had to
have completely new commands added, or have some type of override to
the existing upgrade command to handle them.

Personally, I would go for step 1. The 'overcloud deploy' command can
accomplish all of the upgrade steps that involve Heat. In order for
the new upgrade commands to work properly, there's a lot that needs to
be refactored out of the deploy command itself so that it can be
shared with deploy and upgrade, like passing of passwords and the
like. I just don't see a need for discrete commands when we have an
existing command that will do it for us. And with the addition of an
answer file, it makes it even easier.

Thoughts?

-- 
Brad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-16 Thread Mohan Kumar
Hi Alioune,

 Please find me in IRC * #openstack-neutron   *name:* mohankumar*

Yes, the error looks confusing, but the logs at "2016-08-16 17:32:35.255"
show the networking_sfc.services.sfc.drivers.ovs.driver function being
called with *'ingress': None*, hence create port_chain failed in the sfc
driver manager. Please check the q-svc log in neutron-log.zip.

The port_chain db record was created, then the sfc driver failed, so the
code attempts to remove the db entry, but it can't due to key constraints.
https://github.com/openstack/networking-sfc/blob/33d8014c1ef2d7a83578145f44bc41b1453cb257/networking_sfc/services/sfc/plugin.py#L41-L57
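The create-then-rollback pattern in plugin.py described here can be sketched with a small standalone example (simplified stand-ins, not the actual networking-sfc classes): the driver rejects the chain after the DB record exists, and the cleanup delete then fails on a constraint, so both errors surface.

```python
# Simplified sketch of the failure mode: create DB record, driver fails,
# rollback delete also fails, producing the two errors seen in the log.
class SfcDriverError(Exception):
    pass

class FakeDB:
    def __init__(self):
        self.port_chains = {}
        self.referencing_rows = set()

    def create_port_chain(self, pc_id):
        self.port_chains[pc_id] = {"id": pc_id}
        # Simulate a dependent row that blocks deletion (key constraint).
        self.referencing_rows.add(pc_id)

    def delete_port_chain(self, pc_id):
        if pc_id in self.referencing_rows:
            raise SfcDriverError("delete_port_chain failed.")
        del self.port_chains[pc_id]

class FakeDriverManager:
    def create_port_chain(self, pc_id):
        # The driver rejects the chain, e.g. because 'ingress' is None.
        raise SfcDriverError("create_port_chain failed.")

def create_port_chain(db, drivers, pc_id):
    errors = []
    db.create_port_chain(pc_id)           # DB record is created first
    try:
        drivers.create_port_chain(pc_id)  # driver call fails
    except SfcDriverError as exc:
        errors.append(str(exc))
        try:
            db.delete_port_chain(pc_id)   # rollback attempt also fails
        except SfcDriverError as exc2:
            errors.append(str(exc2))
    return errors

errors = create_port_chain(FakeDB(), FakeDriverManager(), "PC1")
print(errors)  # both failures, matching the two tracebacks in the q-svc log
```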

So you are getting both errors (from the q-svc log):

[01;31m2016-08-16 17:32:35.654 TRACE networking_sfc.services.sfc.plugin
[01;35m [00mSfcDriverError: create_port_chain failed.

01;31m2016-08-16 17:32:35.976 TRACE networking_sfc.services.sfc.plugin
[01;35m [00mSfcDriverError: delete_port_chain failed.

We haven't seen such an issue in master or the stable/liberty branch; please
let me know your sfc code base, or whether you made any local code changes.

Thanks.,
Mohankumar.N


On Tue, Aug 16, 2016 at 9:15 PM, Alioune  wrote:

> Hi Mohan,
>
> You can find the neutron logs on the attached.
> I used the sripts below for the lab.
> Please could you give the networking-sfc channel and your username?
> Regards,
>
> cat create_sfc_ports.sh
> #!/bin/bash
>
> neutron port-create --name p1 net1
> neutron port-create --name p2 net1
> neutron port-create --name p3 net1
> neutron port-create --name p4 net1
>
> #In part 4: I've added 4 instances in this step. 2 SFs , source and dst
> nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --nic
> port-id=$(neutron port-list | grep -w p1 | awk '{print $2}') --nic
> port-id=$(neutron port-list | grep -w p2 | awk '{print $2}') vmvx1
> nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --nic
> port-id=$(neutron port-list | grep -w p3 | awk '{print $2}') --nic
> port-id=$(neutron port-list | grep -w p4 | awk '{print $2}') vmvx2
> # the source vm has 55.55.55.8 and the dst 55.55.55.7
> cat create_flow_classifier.sh
>
>  neutron flow-classifier-create \
>  --ethertype IPv4 \
>  --source-ip-prefix 55.55.55.8/32 \
>  --logical-source-port 1b2ec7a7-b6ae-48db-bc5c-76970f0da4fd \
>  --destination-ip-prefix 55.55.55.7/32 \
>  --protocol icmp FC1
>
> cat create_port_pair.sh
>
> #!/usr/bin/env bash
> neutron port-pair-create --ingress=p1 --egress=p2 PP1
>
> neutron port-pair-create --ingress=p3 --egress=p4 PP2
>
> cat create_port_group.sh
> #!/usr/bin/env bash
>
> neutron port-pair-group-create --port-pair PP1 PG1
> neutron port-pair-group-create --port-pair PP2 PG2
>
> cat create_port_chain.sh
> #!/bin/bash
>
> neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
> --flow-classifier FC1 PC1
>
>
>
>
>
>
> On 16 August 2016 at 16:06, Mohan Kumar  wrote:
>
>> Hi  Alioune,
>>
>>  Could you share the neutron log as well? Also let us know your sfc code
>> base. If possible, shall we have a quick chat on this in the neutron IRC channel?
>>
>> Thanks.,
>> Mohankumar.N
>>
>> On Mon, Aug 15, 2016 at 5:09 PM, Alioune  wrote:
>>
>>> Hi all,
>>> I'm trying to launch Openstack SFC as explained in[1] by creating 2 SFs,
>>> 1 Web Server (DST) and the DHCP namespace as the SRC.
>>> I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
>>> the neutron L2-agent runs correctly.
>>> I followed the process by creating classifier, port pairs and port_group
>>> but I got a wrong message "delete_port_chain failed." when creating
>>> port_chain [2]
>>> I tried to create the neutron ports with and without the option
>>> "--no-security-groups" then tcpdpump on SFs tap interfaces but the ICMP
>>> packets don't go through the SFs.
>>>
>>> Can anyone advice to fix? that ?
>>> What's your channel on IRC ?
>>>
>>> Regards,
>>>
>>>
>>> [1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
>>> [2]
>>> vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
>>> delete_port_chain failed.
>>> vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
>>> #!/bin/bash
>>>
>>> neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
>>> --flow-classifier FC1 PC1
>>>
>>> [3] Output OVS Flows
>>>
>>> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
>>> OFPST_FLOW reply (OF1.3) (xid=0x2):
>>>  cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146,
>>> n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
>>>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0,
>>> n_bytes=0, priority=0 actions=drop
>>>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5,
>>> n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
>>> actions=resubmit(,20)
>>>  cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141,
>>> n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
>>> 

Re: [openstack-dev] [kolla][osic] OSIC cluster status

2016-08-16 Thread Paul Bourke

Ok so I got up to speed -

We were having trouble getting instances to get an IP on the VLAN 
provider network via DHCP. DHCP responses were being sent by the neutron 
dhcp agent but seemed to be getting dropped at br-ex.


Turned out the kernel running in the ISO we are using is not new enough 
and the buggy NIC driver (i40e) was present. After upgrading the kernel, 
instances are receiving IPs and are ssh-able. (NOTE: the driver issue 
was highlighted in the OSIC docs; we got sidetracked on it due to other 
issues encountered during bare metal provisioning. Hopefully most of 
this is detailed in the review doc; if not, we need to get it up to 
date.)


Next steps: continue with the rally scenarios. There is a tmux session 
running with a work in progress script to run each scenario we're 
interested in and generate reports. This will allow us to easily 
generate consistent sets of data after each deploy scenario.


-Paul

On 16/08/16 12:57, Paul Bourke wrote:

Kollagues,

Can we get an update from the APAC / US reps?

The last I heard we were having trouble getting guests spawned on an
external network so they could be accessible by Rally. There have been
various people looking at this the past few days, but overall it's unclear
where we are, and time is short. If someone could summarise (looking at
Jeffrey / inc0 ;)) what's been happening, that will allow us to make a
decision on how to progress.

Cheers,
-Paul

On 05/08/16 17:48, Paul Bourke wrote:

Hi Kolla,

Thought it would be helpful to send a status mail once we hit checkpoints
in the osic cluster work, so people can keep up to speed without having
to trawl IRC.

Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic

Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now
complete. The 131 bare metal nodes have been provisioned with Ubuntu
14.04, networking is configured, and all Kolla prechecks are passing.

The default set of images (--profile default) have been built and pushed
to a registry running on the deployment node, the build taking a very
speedy 5m37.040s.

Cheers,
-Paul

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-16 Thread Clint Byrum
Excerpts from Sean Dague's message of 2016-08-16 10:35:11 -0400:
> On 08/16/2016 05:36 AM, Thierry Carrez wrote:
> > John Dickinson wrote:
>  Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
> > [...]
> > The proposed plan has a lot of good in it, and I'm really happy to see 
> > the TC
> > working to bring common goals and vision to the entirety of the 
> > OpenStack
> > community. Drop the "project teams are expected to prioritize these 
> > goals above
> > all other work", and my concerns evaporate. I'd be happy to agree to 
> > that proposal.
> 
>  Saying that the community has goals but that no one is expected to
>  act to bring them about would be a meaningless waste of time and
>  energy.
> >>>
> >>> I think we can find wording that doesn't use the word "priority" (which
> >>> is, I think, what John objects to the most) while still conveying that
> >>> project teams are expected to act to bring them about (once they said
> >>> they agreed with the goal).
> >>>
> >>> How about "project teams are expected to do everything they can to
> >>> complete those goals within the boundaries of the target development
> >>> cycle" ? Would that sound better ?
> >>
> >> Any chance you'd go for something like "project teams are expected to
> >> make progress on these goals and report that progress to the TC every
> >> cycle"?
> > 
> > The issue with this variant is that it removes the direct link between
> > the goal and the development cycle. One of the goals of these goals
> > (arh) is that we should be able to collectively complete them in a given
> > timeframe, so that there is focus at the same time and we have a good
> > story to show at the end. Those goals are smallish development cycle
> > goals. They are specifically chosen to be complete-able within a cycle
> > and with a clear definition of "done". It's what differentiates them
> > from more traditional cross-project specs or strategic initiatives which
> > can be more long-term (and on which "reporting progress to the TC every
> > cycle" could be an option).
> 
> So, I think that's ok. But it's going to cause a least common
> denominator of doable. For instance, python 3.5 is probably not doable
> in Nova in a cycle. And the biggest issue is really not python 3.5 per
> se, but our backlog of mox based unit tests (over a thousand), which
> we've experienced are unreliable in odd ways on python3. They also tend
> to be the oldest unit tests (we stopped letting people add new ones 2
> years ago), in areas of the code that have a lower rate of change, and
> folks are less familiar with (like the xenserver driver which is used by
> very few folks).
> 
> So, those are getting tackled, but there is a lot there, and it will
> take a while. (Note: this is one of the reasons I suggested the next
> step with python3 be full stack testing, because I think we could
> actually get Nova working there well in advance of the unit tests
> ported, for the above issue. That however requires someone to take on
> the work for full stack python3 setup and maintenance.)
> 

Yeah, I don't think we should time-box all goals to one release cycle.
Community goals should be real things that the community needs.

But we can still set a time frame for a goal, like "O+1", and even try
to set objectives that are single-release cycle doable. Like for Ocata
we can say "All dependencies are python3.5 compatible and 80% of tests
pass on python3.5". Then "Integrated gate passes using python3.5 in O+1".
Then at the end of each release cycle, we can look at the objectives
completed, and consider whether or not the goal is reached, or what we
can do to make sure it is.

> Maybe this process also can expose "we're going to need help to get
> there" for some of these goals.

Just like with the architecture working group I've been proposing,
I think we need to rally resources around supporting these objectives,
otherwise the TC will just sow frustration.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] RPC call not appearing to retry

2016-08-16 Thread Mehdi Abaakouk

Hi,

Le 2016-08-15 04:50, Eric K a écrit :

Hi all, I'm running into an issue with an oslo-messaging RPC call not
appearing to retry. If I do oslo_messaging.RPCClient(transport, target,
timeout=5, retry=10).call(self.context, method, **kwargs) using a topic
with no listeners, I consistently get the MessagingTimeout exception in
5 seconds, with no apparent retry attempt. Any tips on whether this is a
user error or a bug or a feature? Thanks so much!


About retry, from 
http://docs.openstack.org/developer/oslo.messaging/rpcclient.html:


"By default, cast() and call() will block until the message is 
successfully sent. However, the retry parameter can be used to have 
message sending fail with a MessageDeliveryFailure after the given 
number of retries."


It looks like it retries in case of MessageDeliveryFailure not 
MessagingTimeout.
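To make the distinction concrete, here is a toy model (not oslo.messaging itself, and the helper names are invented for illustration): `retry` governs re-sending a message that failed to be *delivered*, while waiting for a reply is bounded only by `timeout`, so a MessagingTimeout is never retried.

```python
# Toy illustration of retry vs timeout semantics in an RPC call.
class MessageDeliveryFailure(Exception):
    pass

class MessagingTimeout(Exception):
    pass

def send_with_retry(send, retry):
    """Re-send on delivery failure, up to `retry` extra attempts."""
    attempts = 0
    while True:
        attempts += 1
        try:
            send()
            return attempts
        except MessageDeliveryFailure:
            if attempts > retry:
                raise

def call(send, wait_for_reply, retry):
    send_with_retry(send, retry)   # delivery failures are retried here...
    return wait_for_reply()        # ...but a reply timeout is final

def flaky_sender(fail_times):
    """A sender that fails delivery the first `fail_times` times."""
    state = {"n": 0}
    def send():
        if state["n"] < fail_times:
            state["n"] += 1
            raise MessageDeliveryFailure()
    return send

def no_listener_reply():
    # A topic with no listeners never replies within the timeout.
    raise MessagingTimeout("no listeners on the topic")

# Three transient delivery failures are absorbed by retry=10:
assert send_with_retry(flaky_sender(3), retry=10) == 4

# But a reply timeout is raised once, regardless of retry:
try:
    call(flaky_sender(0), no_listener_reply, retry=10)
except MessagingTimeout as exc:
    print("MessagingTimeout raised immediately:", exc)
```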


Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Puppet] Puppet OpenStack Application Module

2016-08-16 Thread Emilien Macchi
During our weekly meeting [1] we discussed $topic.

* Deploying a PE stack in our CI is not an option, we rely only on FOSS.

* Some FOSS deployers doing orchestration (eg mcollective-choria, etc)
do exist but it doesn't seem there is an official consensus about
which one Puppetlabs does support.

* Unless something changed recently, the actual Orchestrator is not
FOSS so we would need to pick a FOSS deployer (see previous
statement).
I find it weird that open-source communities (Puppet, OpenStack, etc)
would invest effort in replacing something that Puppetlabs itself
didn't open.
In other words, why would we develop a new module that helps
deploy OpenStack in multinode with Puppet App Orchestration without
actual help from Puppetlabs?

Unless using PE (not FOSS), there is no official way to make Puppet
multinode orchestration at this time. Before driving some work in our
community, I would like to know from Puppetlabs what is the plan here.

Thanks,


[1] 
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-08-16-15.00.html

On Tue, Aug 16, 2016 at 9:39 AM, Emilien Macchi  wrote:
> On Mon, Aug 15, 2016 at 12:13 PM, Andrew Woodward  wrote:
>> I'd like to propose the creation of a puppet module to make use of Puppet
>> Application orchestrator. This would consist of a Puppet-4 compatible module
>> that would define applications that would wrap the existing modules.
>>
>>
>> This will allow for the establishment of a shared module that is capable of
>> expressing OpenStack applications using the new language schematics in
>> Puppet 4 [1] for multi-node application orchestration.
>>
>>
>> I'd expect that initial testing env would consist of deploying a PE stack,
>> and using docker containers as node primitives. This is necessary until a
>> FOSS deployer component like [2] becomes stable, at which point we can
>> switch to it and use the FOSS PM as well. Once the env is up, I plan to wrap
>> p-o-i profiles to deploy the cloud and launch tempest for functional
>> testing.
>>
>>
>> [1] https://docs.puppet.com/pe/latest/app_orchestration_workflow.html
>>
>> [2] https://github.com/ripienaar/mcollective-choria
>
> It reminds me of the stackforge/puppet-openstack [1] repository that we
> had in the past and that we deprecated and removed.
> It was a failure because it was not maintained and not really used by
> our community, because it was too opinionated.
>
> Before creating such a new repository, we might want to see some PoC
> somewhere, some piece of code, so we can evaluate if we would actually
> need that thing.
> Have you already written some code around that? Can you show it?
> I just don't want to see (again) an empty repo where nobody is pushing
> code on it.
>
> [1] https://github.com/stackforge/puppet-openstack
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [watcher] Nominate Prashanth Hari as core for watcher-specs

2016-08-16 Thread Joe Cropper
+1

> On Aug 16, 2016, at 11:59 AM, Jean-Émile DARTOIS 
>  wrote:
> 
> +1
> 
> Jean-Emile
> DARTOIS
> 
> {P} Software Engineer
> Cloud Computing
> 
> {T} +33 (0) 2 56 35 8260
> {W} www.b-com.com
> 
> 
> De : Antoine Cabot 
> Envoyé : mardi 16 août 2016 16:57
> À : OpenStack Development Mailing List (not for usage questions)
> Objet : [openstack-dev] [watcher] Nominate Prashanth Hari as core for   
> watcher-specs
> 
> Hi Watcher team,
> 
> I'd like to nominate Prashanth Hari as core contributor for watcher-specs.
> As a potential end-user for Watcher, Prashanth gave us a lot of good
> feedback since the Mitaka release.
> Please vote for his candidacy.
> 
> Antoine
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Status of the validations and priority patches

2016-08-16 Thread Tomas Sedovic

Hey folks,

Martin André and I have been working on adding validations into the 
TripleO workflow. And since the time for landing things in Newton is 
getting closer, here are a few patches I'd like to call your attention to.


This patchset adds the Mistral workflows the gui and cli need for 
running validations:


https://review.openstack.org/#/c/313632/

This patch adds a couple of libraries for reading undercloud.conf 
and hiera data, which a few other validations need:


https://review.openstack.org/#/c/329385


A lot of our other patches depend on these. Other than that, anything in 
tripleo-validations is fair game (most patches are individual 
validations):


https://review.openstack.org/#/q/project:openstack/tripleo-validations+status:open


Cheers,
Tomas Sedovic

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Secure Setup & HSM-plugin

2016-08-16 Thread Douglas Mendizábal
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi Manuel,

Historically all of our contributors working with the PKCS#11 crypto
plugin have been using Safenet HSMs, which is why the Safenet
mechanism is the default.

The "CKR_MECHANISM_PARAM_INVALID" error appears to be a bug.  You may
file a bug report in our Launchpad page. [1]

The "CryptoPluginNotFound" may mean that the PKCS#11 plugin is failing
to instantiate.  I think it may mean that we don't currently support
those algorithms.  I would have to do some debugging to get to the
root of this issue, but it sounds like we may need some more work to
support the different algorithms you're trying to use.

In theory, the PKCS#11 plugin should be able to work with any PKCS#11
device.  I suspect that fixing the "CKR_MECHANISM_PARAM_INVALID" bug
may be all we need to get the Utimaco HSM working.

You're more than welcome to contribute patches to get your HSM
working.  You can take a look at our development guide if contributing
is something you're interested in. [2]

Thanks,
- - Doug

[1] https://bugs.launchpad.net/barbican
[2] http://docs.openstack.org/developer/barbican/#barbican-for-developers
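For reference, the store_crypto + p11_crypto setup walked through in the quoted message below boils down to a barbican.conf fragment along these lines (section and option names follow the sample config linked there; the library path, login, and label values are illustrative examples, not defaults):

```ini
[secretstore]
enabled_secretstore_plugins = store_crypto

[crypto]
enabled_crypto_plugins = p11_crypto

[p11_crypto_plugin]
# PKCS#11 library and credentials for the HSM (illustrative values)
library_path = /usr/lib/libCryptoki2_64.so
login = CHANGEME-partition-password
# Labels of the MKEK/HMAC keys generated with the admin commands
mkek_label = an_mkek
mkek_length = 32
hmac_label = my_hmac_label
```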

On 8/16/16 8:22 AM, Praktikant HSM wrote:
> Hi Douglas,
> 
> Thank you very much for your response.
> 
> When testing the usage of the generated MKEK, we ran into some
> problems. For clarification: we are testing the PKCS#11-based
> crypto plugin with a Utimaco HSM. The error messages are from
> Barbican's and Utimaco's log files.
> 
> When storing a secret, we get the following error:
> CKM_MECHANISM_INVALID (Mechanism 0x811c is invalid). Since this
> is a vendor specific AES-GCM mechanism by SafeNet[1], it is not
> supported by our HSMs.
> 
> In the p11_crypto.py file, the default algorithm is set to
> "VENDOR_SAFENET_CKM_AES_GCM"[2]. Thus, we specified "CKM_AES_GCM"
> in the barbican.conf file in the [p11_crypto_plugin] section to be
> used instead of the default mechanism. However, this gave us a
> "CKR_MECHANISM_PARAM_INVALID" error: mechanism length invalid
> (expected 40, provided 48).
> 
> Additionally, when trying other AES modes, e.g. CBC, there is an
> CryptoPluginNotFound error.
> 
> Is there currently a workaround which would allow us to use a
> Utimaco HSM? Also, are there any plans to natively support HSMs
> from other vendors in the near future?
> 
> Again, thank you for your support.
> 
> Best regards,
> 
> Manuel Roth
> 
> [1]:
> https://github.com/openstack/barbican/blob/306b2ac592c059c59be42c0420a08af0a9e34f6e/barbican/plugin/crypto/pkcs11.py#L131
>
> [2]:
> https://github.com/openstack/barbican/blob/c2a7f426455232ed04d2ccef6b35c87a2a223977/barbican/plugin/crypto/p11_crypto.py#L63
>
>  --- System Engineering HSM
> 
> Utimaco IS GmbH Germanusstr. 4 52080 Aachen Germany
> 
> 
> 
> -Original Message- From: Douglas Mendizábal
> [mailto:douglas.mendiza...@rackspace.com] Sent: Freitag, 12. August
> 2016 18:24 To: OpenStack Development Mailing List (not for usage
> questions)  Cc: Ariano-Tim Donda
> ; Jiannis Papadakis
>  Subject: Re: [openstack-dev]
> Barbican: Secure Setup & HSM-plugin
> 
> Hi Manuel,
> 
> I'm happy to hear about your interest in Barbican.  I assume your
> HSM has a PKCS#11 interface since the admin commands to generate
> the MKEK and HMAC keys worked for you.
> 
> The labels for the generated keys should be specified in the config
> file for the API process. [1]  The API process uses the MKEK and
> HMAC keys to encrypt and sign the secrets (keys) that are stored in
> Barbican by clients.
> 
> The PKCS#11 plugin was designed to use the SQL Database to store
> client keys (secrets) in the SQL database, so your API process must
> be configured to use "store_crypto" as the
> enabled_secretstore_plugins [2] in addition to specifing
> "p11_crypto" as the enabled_crypto_plguins [3].
> 
> When configured this way, Barbican uses the HSM to encrypt the
> client data (keys/secrets) before storing it in the DB.
> 
> The API itself does not currently support using keys stored by
> clients to do server-side encryption, but it's a feature that has
> been discussed in previous summits with some interest.  We've also
> had some discussions with the Designate team to add server-side
> signing that they could use to implement DNSSEC, but we don't yet
> have a blueprint for it.
> 
> Let me know if you have any more questions.
> 
> - Douglas Mendizábal
> 
> [1]
> http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n278
> [2]
> http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n255
> [3]
> http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n260
> 
> 
> On 8/12/16 7:51 AM, Praktikant HSM wrote:
>> Hi all,
> 
>> As a member of Utimaco's pre-sales team I am currently testing
>> an integration of Barbican 

Re: [openstack-dev] [openstack-ansible] git repo in infra_repo container not working

2016-08-16 Thread Jesse Pretorius
Hi Linhao Lu,

I expect that you might be hitting the fact that fastcgi is no longer 
available. Instead of using http the git protocol is now being used. See 
https://github.com/openstack/openstack-ansible-repo_server/commit/c87a8c1d4ca008e03e8dcef4160503b622d1fc1f
 for details.

Operating from the master branch can be challenging as it’s a moving target. 
When doing so I would recommend that you always ensure that you update the 
roles too by executing ‘./script/bootstrap-ansible.sh’ from the integrated repo 
root. This ensures that all the roles are updated along with the playbooks.

The transport_url issues have been resolved for a few weeks already, so I would 
recommend just checking out the current head of master.

HTH,

Jesse

From: "Lu, Lianhao" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, August 11, 2016 at 3:51 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [openstack-ansible] git repo in infra_repo container 
not working

Hi guys,

I’m encountering new problems with the OSA SHA master 
eb3aec7827e78d81469ff4489c28963ee602117c (I use this version because the 
previous master version cf79d4f6 has a problem with the nova transport_url 
settings which blocks nova-api from being launched). The problem is that 
the git repo in the infra_repo containers is not working, which blocks me 
from going on.

TASK [os_nova : Get package from git] **
FAILED - RETRYING: TASK: os_nova : Get package from git (4 retries left).
… …
FAILED - RETRYING: TASK: os_nova : Get package from git (1 retries left).
fatal: [infra2_nova_console_container-2e227e79]: FAILED! => {"changed": false,
"cmd": "/usr/bin/git ls-remote http://172.29.236.15:8181/openstackgit/spice-html5
-h refs/heads/54cc41299bea8cd681ed0262735e0fd821cd774a", "failed": true,
"msg": "fatal: repository 'http://172.29.236.15:8181/openstackgit/spice-html5/'
not found", "rc": 128, "stderr": "fatal: repository
'http://172.29.236.15:8181/openstackgit/spice-html5/' not found\n",
"stdout": "", "stdout_lines": []}

cmd: /usr/bin/git ls-remote http://172.29.236.15:8181/openstackgit/spice-html5 
-h refs/heads/54cc41299bea8cd681ed0262735e0fd821cd774a

msg: fatal: repository 'http://172.29.236.15:8181/openstackgit/spice-html5/' 
not found

stderr: fatal: repository 'http://172.29.236.15:8181/openstackgit/spice-html5/' 
not found


The 172.29.236.15 is my LB ip(which use haproxy). I then “curl -L 
http://172.29.236.15:8181/openstackgit/spice-html5/” and it worked ok.

Then I sshed into one of the infra-repo containers behind the LB, and found 
the following:

-  “curl http://127.0.0.1:8181/openstackgit/spice-html5/” works; it 
displays the file content under the directory 
/var/www/repo/openstackgit/spice-html5/.

-  “git ls-remote /var/www/repo/openstackgit/spice-html5/” works.

-  “git ls-remote http://127.0.0.1:8181/openstackgit/spice-html5/” 
doesn’t.  It says “fatal: repository 
'http://127.0.0.1:8181/openstackgit/spice-html5/' not found”

Seems the git repo over http is not working. I checked other repos under 
/var/www/repo/openstackgit/, e.g. ceilometer, neutron, nova, they all have the 
same problems.

Any suggestions?

-Lianhao Lu


Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [watcher] Nominate Prashanth Hari as core for watcher-specs

2016-08-16 Thread Jean-Émile DARTOIS
+1

Jean-Emile
DARTOIS

{P} Software Engineer
Cloud Computing

{T} +33 (0) 2 56 35 8260
{W} www.b-com.com


De : Antoine Cabot 
Envoyé : mardi 16 août 2016 16:57
À : OpenStack Development Mailing List (not for usage questions)
Objet : [openstack-dev] [watcher] Nominate Prashanth Hari as core for   
watcher-specs

Hi Watcher team,

I'd like to nominate Prashanth Hari as core contributor for watcher-specs.
As a potential end-user for Watcher, Prashanth gave us a lot of good
feedback since the Mitaka release.
Please vote for his candidacy.

Antoine

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-08-16 Thread Ben Swartzlander

On 08/16/2016 08:42 AM, Ramana Raja wrote:

On Thursday, June 30, 2016 6:07 PM, Alexey Ovchinnikov 
 wrote:


Hello everyone,

Here I will briefly summarize an export update problem one will encounter
when using nfs-ganesha.

While working on a driver that relies on nfs-ganesha I have discovered that
it
is apparently impossible to provide interruption-free export updates. As of
version
2.3 which I am working with it is possible to add an export or to remove an
export without restarting the daemon, but it is not possible to modify an
existing
export. So in other words if you create an export you should define all
clients
before you actually export and use it, otherwise it will be impossible to
change
rules on the fly. One can come up with at least two ways to work around
this issue: either by removing, updating and re-adding an export, or by
creating multiple
exports (one per client) for an exported resource. Both ways have associated
problems: the first one interrupts clients already working with an export,
which might be a big problem if a client is doing heavy I/O, the second one
creates multiple exports associated with a single resource, which can easily
lead
to confusion. The second approach is used in current manila's ganesha
helper[1].
This issue seems to be raised now and then with nfs-ganesha team, most
recently in
[2], but apparently it will not be addressed in the nearest future.


Frank Filz has added support to Ganesha (upstream "next" branch) to
allow one to dynamically update exports via D-Bus. Available since,
https://github.com/nfs-ganesha/nfs-ganesha/commits/2f47e8a761f3700


This is awesome news! Unfortunately there's no time to update the 
container driver to use this mechanism before Newton FF, but we can 
provide feedback and plan this enhancement for Ocata.


-Ben



It'd be nice if we can test this feature and provide feedback.
Also, ML [2] was updated with more implementation details.

Thanks,
Ramana



With kind regards,
Alexey.

[1]:
https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
[2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839



[openstack-dev] [watcher] Nominate Prashanth Hari as core for watcher-specs

2016-08-16 Thread Antoine Cabot
Hi Watcher team,

I'd like to nominate Prashanth Hari as core contributor for watcher-specs.
As a potential end-user for Watcher, Prashanth gave us a lot of good
feedback since the Mitaka release.
Please vote for his candidacy.

Antoine



Re: [openstack-dev] [TripleO] tripleo-common bugs, bug tracking and launchpad tags

2016-08-16 Thread Emilien Macchi
On Tue, Aug 16, 2016 at 10:40 AM, Julie Pichon  wrote:
> On 19 July 2016 at 16:20, Steven Hardy  wrote:
>> On Mon, Jul 18, 2016 at 12:28:10PM +0100, Julie Pichon wrote:
>>> Hi,
>>>
>>> On Friday Dougal mentioned on IRC that he hadn't realised there was a
>>> separate project for tripleo-common bugs on Launchpad [1] and that he'd
>>> been using the TripleO main tracker [2] instead.
>>>
>>> Since the TripleO tracker is also used for client bugs (as far as I can
>>> tell?), and there doesn't seem to be a huge amount of tripleo-common
>>> bugs perhaps it would make sense to also track those in the main
>>> tracker? If there is a previous conversation or document about bug
>>> triaging beyond [3] I apologise for missing it (and would love a
>>> URL!). At the moment it's a bit confusing.
>>
>> Thanks for raising this, yes there is a bit of a proliferation of LP
>> projects, but FWIW the only one I'm using to track coordinated milestone
>> releases for Newton is this one:
>>
>> https://launchpad.net/tripleo/
>>
>>> If we do encourage using the same bug tracker for multiple components,
>>> I think it would be useful to curate a list of official tags [4]. The
>>> main advantage of doing that is that the tags will auto-complete so
>>> it'd be easier to keep them consistent (and thus actually useful).
>>
>> +1 I'm fine with adding tags, but I would prefer that we stopped adding
>> more LP projects unless the associated repos aren't planned to be part of
>> the coordinated release (e.g I don't have to track them ;)
>>
>>> Personally, I wanted to look through open bugs against
>>> python-tripleoclient but people use different ways of marking them at
>>> the moment - e.g. [tripleoclient] or [python-tripleoclient] or
>>> tripleoclient (or nothing?) in the bug name. I tried my luck at adding
>>> a 'tripleoclient' tag [5] to the obvious ones as an example. Maybe
>>> something shorter like 'cli', 'common' would make more sense. If there
>>> are other tags that come back regularly it'd probably be helpful to
>>> list them explicitly as well.
>>
>> Sure, well I know that many python-*clients do have separate LP projects,
>> but in the case of TripleO our client is quite highly coupled to the the
>> other TripleO pieces, in particular tripleo-common.  So my vote is to
>> create some tags in the main tripleo project and use that to filter bugs as
>> needed.
>>
>> There are two projects we might consider removing, tripleo-common, which
>> looks pretty much unused and tripleo-validations which was recently added
>> by the sub-team working on validations.
>>
>> If folks find either useful then they can stay, but it's going to be easier
>> to get a clear view on when to cut a release if we track everything
>> considered part of the tripleo deliverable in one place IMHO.
>
> Following up on this and related conversations (e.g. on today's TripleO
> meeting), the tripleo-ui team would like to migrate to the main TripleO
> tracker as well. It totally makes sense to me, seeing as the UI is just
> as dependent on the other TripleO projects. It's a new language but we
> already have multiple code bases so what's one more :-) That way the UI
> can be more integrated with TripleO during the cycle and related issues
> and features will show up during the weekly meeting.
>
> I think there's some Launchpad magic we can use to migrate the bugs,
> but I'm not sure if it's possible to move the blueprints themselves. To
> avoid distractions when we're so close to Feature Freeze, in my opinion
> it might be better to migrate the blueprints after Newton-3 anyway.
>
> If there's no objections, I'll add the 'ui' tag to the bug tagging
> policy at [1]. We can start filing new bugs into the TripleO
> tracker [2], and blueprint authors can move their outstanding blueprints
> when they have a chance.

I think you can go ahead and update the spec. People will vote in
Gerrit, but I'm pretty confident it's fine for everyone.

Thanks!

> Thanks,
>
> Julie
>
> [1] https://review.openstack.org/#/c/352852/
> [2] https://bugs.launchpad.net/tripleo
>
>> Thanks,
>>
>> Steve
>>
>>>
>>> Julie
>>>
>>> [1] https://bugs.launchpad.net/tripleo-common
>>> [2] https://bugs.launchpad.net/tripleo
>>> [3] https://wiki.openstack.org/wiki/TripleO#Bug_Triage
>>> [4] https://wiki.openstack.org/wiki/Bug_Tags
>>> [5] https://bugs.launchpad.net/tripleo?field.tag=tripleoclient
>



-- 
Emilien Macchi


[openstack-dev] [vitrage] add the resource to the alarm notification

2016-08-16 Thread Malin, Eylon (Nokia - IL)
Hi,

I'm writing an alarm notifier that needs to get the resource along with the
alarm from the Vitrage graph.
So I've added blueprint :

https://blueprints.launchpad.net/vitrage/+spec/add-resource-to-alarm-notification

I've also written the code that implements it, and I'm ready to commit it.

BR

Eylon Malin


Re: [openstack-dev] [TripleO] tripleo-common bugs, bug tracking and launchpad tags

2016-08-16 Thread Julie Pichon
On 19 July 2016 at 16:20, Steven Hardy  wrote:
> On Mon, Jul 18, 2016 at 12:28:10PM +0100, Julie Pichon wrote:
>> Hi,
>>
>> On Friday Dougal mentioned on IRC that he hadn't realised there was a
>> separate project for tripleo-common bugs on Launchpad [1] and that he'd
>> been using the TripleO main tracker [2] instead.
>>
>> Since the TripleO tracker is also used for client bugs (as far as I can
>> tell?), and there doesn't seem to be a huge amount of tripleo-common
>> bugs perhaps it would make sense to also track those in the main
>> tracker? If there is a previous conversation or document about bug
>> triaging beyond [3] I apologise for missing it (and would love a
>> URL!). At the moment it's a bit confusing.
>
> Thanks for raising this, yes there is a bit of a proliferation of LP
> projects, but FWIW the only one I'm using to track coordinated milestone
> releases for Newton is this one:
>
> https://launchpad.net/tripleo/
>
>> If we do encourage using the same bug tracker for multiple components,
>> I think it would be useful to curate a list of official tags [4]. The
>> main advantage of doing that is that the tags will auto-complete so
>> it'd be easier to keep them consistent (and thus actually useful).
>
> +1 I'm fine with adding tags, but I would prefer that we stopped adding
> more LP projects unless the associated repos aren't planned to be part of
> the coordinated release (e.g I don't have to track them ;)
>
>> Personally, I wanted to look through open bugs against
>> python-tripleoclient but people use different ways of marking them at
>> the moment - e.g. [tripleoclient] or [python-tripleoclient] or
>> tripleoclient (or nothing?) in the bug name. I tried my luck at adding
>> a 'tripleoclient' tag [5] to the obvious ones as an example. Maybe
>> something shorter like 'cli', 'common' would make more sense. If there
>> are other tags that come back regularly it'd probably be helpful to
>> list them explicitly as well.
>
> Sure, well I know that many python-*clients do have separate LP projects,
> but in the case of TripleO our client is quite highly coupled to the the
> other TripleO pieces, in particular tripleo-common.  So my vote is to
> create some tags in the main tripleo project and use that to filter bugs as
> needed.
>
> There are two projects we might consider removing, tripleo-common, which
> looks pretty much unused and tripleo-validations which was recently added
> by the sub-team working on validations.
>
> If folks find either useful then they can stay, but it's going to be easier
> to get a clear view on when to cut a release if we track everything
> considered part of the tripleo deliverable in one place IMHO.

Following up on this and related conversations (e.g. on today's TripleO
meeting), the tripleo-ui team would like to migrate to the main TripleO
tracker as well. It totally makes sense to me, seeing as the UI is just
as dependent on the other TripleO projects. It's a new language but we
already have multiple code bases so what's one more :-) That way the UI
can be more integrated with TripleO during the cycle and related issues
and features will show up during the weekly meeting.

I think there's some Launchpad magic we can use to migrate the bugs,
but I'm not sure if it's possible to move the blueprints themselves. To
avoid distractions when we're so close to Feature Freeze, in my opinion
it might be better to migrate the blueprints after Newton-3 anyway.

If there's no objections, I'll add the 'ui' tag to the bug tagging
policy at [1]. We can start filing new bugs into the TripleO
tracker [2], and blueprint authors can move their outstanding blueprints
when they have a chance.

Thanks,

Julie

[1] https://review.openstack.org/#/c/352852/
[2] https://bugs.launchpad.net/tripleo

> Thanks,
>
> Steve
>
>>
>> Julie
>>
>> [1] https://bugs.launchpad.net/tripleo-common
>> [2] https://bugs.launchpad.net/tripleo
>> [3] https://wiki.openstack.org/wiki/TripleO#Bug_Triage
>> [4] https://wiki.openstack.org/wiki/Bug_Tags
>> [5] https://bugs.launchpad.net/tripleo?field.tag=tripleoclient



[openstack-dev] [tripleo] weekly meeting Aug 16

2016-08-16 Thread Emilien Macchi
Hi,

TripleO team did weekly meeting and you can read notes here:
http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-08-16-14.00.html

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-16 Thread Sean Dague
On 08/16/2016 05:36 AM, Thierry Carrez wrote:
> John Dickinson wrote:
 Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
> [...]
> The proposed plan has a lot of good in it, and I'm really happy to see 
> the TC
> working to bring common goals and vision to the entirety of the OpenStack
> community. Drop the "project teams are expected to prioritize these goals 
> above
> all other work", and my concerns evaporate. I'd be happy to agree to that 
> proposal.

 Saying that the community has goals but that no one is expected to
 act to bring them about would be a meaningless waste of time and
 energy.
>>>
>>> I think we can find wording that doesn't use the word "priority" (which
>>> is, I think, what John objects to the most) while still conveying that
>>> project teams are expected to act to bring them about (once they said
>>> they agreed with the goal).
>>>
>>> How about "project teams are expected to do everything they can to
>>> complete those goals within the boundaries of the target development
>>> cycle" ? Would that sound better ?
>>
>> Any chance you'd go for something like "project teams are expected to
>> make progress on these goals and report that progress to the TC every
>> cycle"?
> 
> The issue with this variant is that it removes the direct link between
> the goal and the development cycle. One of the goals of these goals
> (arh) is that we should be able to collectively complete them in a given
> timeframe, so that there is focus at the same time and we have a good
> story to show at the end. Those goals are smallish development cycle
> goals. They are specifically chosen to be complete-able within a cycle
> and with a clear definition of "done". It's what differentiates them
> from more traditional cross-project specs or strategic initiatives which
> can be more long-term (and on which "reporting progress to the TC every
> cycle" could be an option).

So, I think that's ok. But it's going to cause a least common
denominator of doable. For instance, python 3.5 is probably not doable
in Nova in a cycle. And the biggest issue is really not python 3.5 per
se, but our backlog of mox-based unit tests (over a thousand), which
we've experienced are unreliable in odd ways on python3. They also tend
to be the oldest unit tests (we stopped letting people add new ones 2
years ago), in areas of the code that have a lower rate of change, and
folks are less familiar with (like the xenserver driver which is used by
very few folks).

So, those are getting tackled, but there is a lot there, and it will
take a while. (Note: this is one of the reasons I suggested the next
step with python3 be full stack testing, because I think we could
actually get Nova working there well in advance of the unit tests
ported, for the above issue. That however requires someone to take on
the work for full stack python3 setup and maintenance.)
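For context, the bulk of that porting backlog is mechanical conversions of roughly this shape (all names here are illustrative, not actual Nova code): the mox pattern sketched in the comment is what behaves unreliably on python3, and the `unittest.mock` version below it is the target style.

```python
from unittest import mock

# Old mox-style stub (sketch of the pattern being removed):
#   self.mox.StubOutWithMock(utils, 'execute')
#   utils.execute('qemu-img', 'info', path).AndReturn(output)
#   self.mox.ReplayAll()
#   ... exercise the code under test ...
#   self.mox.VerifyAll()

def image_info(execute, path):
    # Trivial code under test: shells out via an injected executor.
    return execute('qemu-img', 'info', path)

# mock.Mock replaces the Replay/Verify dance with plain assertions:
executor = mock.Mock(return_value='file format: qcow2')
assert image_info(executor, '/tmp/disk') == 'file format: qcow2'
executor.assert_called_once_with('qemu-img', 'info', '/tmp/disk')
```

The conversion itself is simple; the volume (a thousand-plus tests) is what makes it a multi-cycle effort.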

Maybe this process also can expose "we're going to need help to get
there" for some of these goals.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [nova] The great deproxification of novaclient

2016-08-16 Thread Matt Riedemann
The 2.36 microversion in nova's REST API makes proxy APIs for images, 
baremetal, volumes and network resources return a 404.


http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/deprecate-api-proxies.html

We realized after the fact that to support 2.36 in novaclient we 
actually have to make novaclient stop using these proxy APIs too, like 
for image and network lookup by name.


So we have two patches up for image and network lookup that go directly 
to glance and neutron, respectively:


image: https://review.openstack.org/#/c/354349/

network: https://review.openstack.org/#/c/354981/

These rely on the service catalog to have sane values, i.e. an 'image' 
entry for glance and a 'network' entry for neutron. It's also 
assuming/relying on the v2 API for glance, since glance v1 should be 
deprecated in Ocata.
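The client-side name resolution this implies (what the nova proxy used to do server-side) is essentially an exact-match filter with ambiguity handling; a minimal sketch, independent of any particular client library:

```python
class NoUniqueMatch(Exception):
    """A name matched zero or more than one resource."""

def find_resource_id(resources, name):
    # resources: e.g. the parsed result of GET /v2/images?name=<name>
    # against glance -- a list of dicts with 'id' and 'name' keys.
    matches = [r for r in resources if r.get('name') == name]
    if len(matches) != 1:
        raise NoUniqueMatch('%d resources named %r' % (len(matches), name))
    return matches[0]['id']

images = [{'id': 'a1b2', 'name': 'cirros'}, {'id': 'c3d4', 'name': 'xenial'}]
print(find_resource_id(images, 'cirros'))  # -> a1b2
```

The interesting part is the failure mode: with the proxy gone, the client itself has to decide what zero or duplicate name matches mean.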


This email is both a heads up and to see if these cause issues for anyone.

Note that these patches don't yet support the 2.36 microversion, that's 
happening later here:


https://review.openstack.org/#/c/347514/

That change is going to try to soften the blow by falling back to 2.35 
when using network CLIs, but the python API code with 2.36 will not 
handle the transition for you: the proxy APIs will fail with a 404 at 
2.36 (but that microversion is opt-in).


Please speak up soon if this hits you in any way because we plan to have 
these changes merged this week and do a release probably next week.


--

Thanks,

Matt Riedemann




[openstack-dev] [Murano] Developer's productivity with MuranoPL

2016-08-16 Thread Alexander Tivelkov
Hi folks,

Developer productivity and ease of use are critical for the language's
adoption, and I believe we have issues with both in MuranoPL. Things are
slowly getting better (thanks to the folks who are building the MuranoPL
syntax checker [1], which will be a very useful tool), but that's not
enough: we need more tools to simplify developers' lives.

Let me provide a quick example: what does it take to run a "Hello World" in
MuranoPL?
To create such an introductory MuranoPL program the developer has to:
1) Install Murano with the rest of openstack (i.e. spin up devstack with
Murano and Glare plugins)
2) Write the actual HelloWorld "program" (mind the absence of a simple
"print" function and the not-so-obvious need to extend the
io.murano.Application class for the example to be runnable as Murano's
entry point)
3) Write the UI.yaml file to provide the input object graph (even if there
are no user-supplied parameters - we still have to have this file)
4) Write a manifest.yaml
5) zip the program, the ui and the manifest into a package
6) Upload the package to Glare
7) Create an environment in Murano dashboard
8) Add the demo app to the environment
9) Deploy the environment
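For illustration, the artifacts from steps 2 and 4 end up looking roughly like the following. The namespace, class name and reporter call are sketches of the usual conventions, not tested code - in particular the reporter invocation is approximate:

```yaml
# hello.yaml -- the MuranoPL class from step 2 (illustrative sketch)
Namespaces:
  =: io.murano.apps.demo
  std: io.murano
Name: HelloWorld
Extends: std:Application
Methods:
  deploy:
    Body:
      # No plain print(); output goes through the environment reporter
      - $._environment.reporter.report($this, 'Hello, World')
---
# manifest.yaml -- step 4 (illustrative sketch)
Format: 1.0
Type: Application
FullName: io.murano.apps.demo.HelloWorld
Name: HelloWorld
Classes:
  io.murano.apps.demo.HelloWorld: hello.yaml
```

All of this boilerplate just to get two lines of actual logic is exactly the friction being described.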

I hope I didn't miss any step. Even excluding the time to spin up devstack
(and the high probability that a newcomer will not do that properly on the
first attempt) this is going to take at least 30 minutes. Even when the
environment is set up, the whole "make code changes - recreate the package
- reupload the package - recreate the environment - redeploy" cycle is
cumbersome.

What we need is a simple and easy way to run MuranoPL code without the
need to set up all the server-side components, generate an object model and
interact with all the production-grade machinery Murano has under the hood.
To do something like:

   $ echo "- print('Hello, World')" > ./hello.yaml
   $ murano-pl ./hello.yaml

Ideally there should be an interactive REPL-like shell, with smart
indentation and code completion similar to Python's (we have one for
yaql, and we should have one for MuranoPL).

That is not a very hard thing to do, and it will simplify the developers'
onboarding dramatically.

So, I propose to start with a simple thing: to separate the MuranoPL
interpreter (mostly the code located in the murano.dsl package, plus some
other stuff) from the rest of Murano. Put it into a standalone repository,
so it may be packaged and distributed independently. Developers will just
need to 'pip install murano-pl' to have the local interpreter without all
the dependencies on the Murano API, engine, database etc. This new package
may include the murano-pl test-runner (this tool is currently part of the
main murano repo and is hard to use since it requires a valid config file,
which is not a proper option for a CLI tool). Then we may include other
developer-side tools, such as a murano-pl debugger (when we finally have
one: this is a separate topic).
Finally, we will need to remove the core library (murano.engine.system)
package from the main murano repo and also make it a standalone
repo/package with its own lifecycle (it would be very helpful if we could
release/update the core library more frequently). So the main murano repo
(and the corresponding package) will contain only the server-side murano
components: the REST API, the engine, the DB API, models and migrations etc.

When this is done we may begin adding other developer-productivity tools:
starting with a debugger and then moving on to various kinds of IDE
integrations.

Thoughts, opinions?


[1]
https://github.com/openstack/murano-specs/blob/master/specs/newton/approved/validation-tool.rst
-- 
Regards,
Alexander Tivelkov


[openstack-dev] Neutron Bug Deputy report

2016-08-16 Thread Nate Johnston
Neutrinos,

Since the Neutron mid-cycle has pre-empted the usual team meeting, I am
issuing my report as bug deputy for the week of August 8-16 in email
form.  There were five critical bugs filed last week, two of which are
still outstanding.  They are:

1. https://bugs.launchpad.net/neutron/+bug/1611400
  - Subject: "pep8 job failing with F821 in test_l3"
  - Assigned: Ihar Hrachyshka
  - Status: Fix Released

2. https://bugs.launchpad.net/neutron/+bug/1611627
  - Subject: "revision plugin throwing objectdeletederror"
  - Assigned: Kevin Benton
  - Status: Fix Released

3. https://bugs.launchpad.net/neutron/+bug/1592546
  - Subject: "OVSLibTestCase.test_db_find_column_type_list is not isolated"
  - Assigned: Russell Boden
  - Status: Fix Released
  - Note: This was escalated from "High" to "Critical" on 8/10 by Henry Gessau

4. https://bugs.launchpad.net/neutron/+bug/1612192
  - Subject: "L3 DVR: Unable to complete operation on subnet"
  - Assigned: Unassigned (reported by John Schwarz)
  - Status: Confirmed

5. https://bugs.launchpad.net/neutron/+bug/1612804
  - Subject: "test_shelve_instance fails with sshtimeout"
  - Assigned: Unassigned (reported by Armando Migliaccio)
  - Status: Confirmed

Armando has graciously volunteered to fill in as Bug Deputy for next
week.  Thanks, Armando!

--N.



Re: [openstack-dev] [vitrage] scenario evaluator not enabled by default

2016-08-16 Thread Yujun Zhang
Found it at
https://github.com/openstack/vitrage/blob/master/vitrage/entity_graph/consistency/consistency_enforcer.py#L58

Thanks, Nofar.

On Tue, Aug 16, 2016 at 3:09 PM Schnider, Nofar (EXT - IL) <
nofar.schnider@nokia.com> wrote:

> Hi Yujun,
>
> I think you will find what you are looking for in
> “consistency_enforcer.py”.
>
> Hope it helps.
>
>
>
>
>
> *Nofar Schnider*
>
> Senior Software Engineer
>
> Applications & Analytics, Nokia
>
> 16 Atir Yeda
>
> Kfar Saba, Israel 44643
>
> M: +972 52 4326023
>
> F: +972 9 793 3036
>
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Monday, August 15, 2016 8:49 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>; Rosensweig, Elisha (Nokia - IL) <
> elisha.rosensw...@nokia.com>
>
>
> *Subject:* Re: [openstack-dev] [vitrage] scenario evaluator not enabled
> by default
>
>
>
> Thanks for the expanation, Elisha. I understand the design now.
>
>
>
> But I could not find the statement which enables the evaluator after
> initial phase.
>
>
>
> Could you help to point it out?
>
> --
>
> Yujun
>
>
>
> On Thu, Aug 11, 2016 at 11:42 PM Rosensweig, Elisha (Nokia - IL) <
> elisha.rosensw...@nokia.com> wrote:
>
> This is on purpose.
>
> When Vitrage is started, it first runs a "consistency" round where it gets
> all the resources from its datasources and inserts them into the entity
> graph. Once this initial phase is over, the evaluator is run over all the
> entity graph to check for meaningful patterns based on it's templates.
>
> The reason for this process is to avoid too much churn during the initial
> phase when Vitrage comes up. With so many changes done to the entity graph,
> it's best to wait for the initial collection phase to finish and then to do
> the analysis.
>
> Elisha
>
> > From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> > Sent: Thursday, August 11, 2016 5:49 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [vitrage] scenario evaluator not enabled by
> default
> >
> > Sorry for having put a url from a forked repo. It should be
> > https://github.com/openstack/vitrage/commit/bdba10cb71b2fa3744e4178494fa860303ae0bbe#diff-6f1a277a2f6e9a567b38d646f19728bcL36
>
> > But the content is the same
> > --
> > Yujun
>
> > On Thu, Aug 11, 2016 at 10:43 PM Yujun Zhang 
> wrote:
> > It seems the scenario evaluator is not enabled when vitrage is started
> in devstack installer.
> >
> > I dug a bit into the history; it seems the default value for the evaluator
> > was changed from True to False in an earlier commit [1].
> >
> > Is this breaking the startup of the evaluator, or have I missed some steps
> > to enable it explicitly?
> >
> > - [1]
> https://github.com/openzero-zte/vitrage/commit/bdba10cb71b2fa3744e4178494fa860303ae0bbe#diff-
> 6f1a277a2f6e9a567b38d646f19728bcL36
>
> > --
> > Yujun


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-16 Thread Mohan Kumar
Hi  Alioune,

 Could you share the neutron log as well, and let us know which SFC code
base you are using? If possible, shall we have a quick chat about this in the
neutron IRC channel?
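For reference, the creation order the wiki [1] walks through looks like this with the networking-sfc CLI (all resource names below are illustrative, and the classifier arguments are just an example):

```
# Each SF VM is plugged in via an ingress/egress Neutron port pair
neutron port-pair-create --ingress sf1-in --egress sf1-out PP1
neutron port-pair-create --ingress sf2-in --egress sf2-out PP2

# Pairs are grouped (a group can load-balance across several pairs)
neutron port-pair-group-create --port-pair PP1 PG1
neutron port-pair-group-create --port-pair PP2 PG2

# The classifier selects which traffic enters the chain
neutron flow-classifier-create --protocol icmp \
  --source-ip-prefix 10.0.0.0/24 FC1

# Finally the chain ties the groups and the classifier together
neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2 \
  --flow-classifier FC1 PC1
```

If the last step fails, the ports, pairs, groups and classifier it references are the first things worth double-checking.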

Thanks.,
Mohankumar.N

On Mon, Aug 15, 2016 at 5:09 PM, Alioune  wrote:

> Hi all,
> I'm trying to launch Openstack SFC as explained in[1] by creating 2 SFs, 1
> Web Server (DST) and the DHCP namespace as the SRC.
> I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
> the neutron L2-agent runs correctly.
> I followed the process by creating classifier, port pairs and port_group
> but I got a wrong message "delete_port_chain failed." when creating
> port_chain [2]
> I tried to create the neutron ports with and without the option
> "--no-security-groups", then tcpdump on the SFs' tap interfaces, but the ICMP
> packets don't go through the SFs.
>
> Can anyone advise how to fix that?
> What's your channel on IRC ?
>
> Regards,
>
>
> [1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
> [2]
> vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
> delete_port_chain failed.
> vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
> #!/bin/bash
>
> neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
> --flow-classifier FC1 PC1
>
> [3] Output OVS Flows
>
> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
> OFPST_FLOW reply (OF1.3) (xid=0x2):
>  cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146,
> n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5,
> n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,20)
>  cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141,
> n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,22)
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=3, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=4, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=8617.106s, table=4, n_packets=0,
> n_bytes=0, priority=1,tun_id=0x40e actions=push_vlan:0x8100,set_
> field:4097->vlan_vid,resubmit(,10)
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=6, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=10, n_packets=0,
> n_bytes=0, priority=1 actions=learn(table=20,hard_
> timeout=300,priority=1,cookie=0xbc2e9105125301dc,NXM_OF_
> VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0-
> >NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],
> output:NXM_OF_IN_PORT[]),output:1
>  cookie=0xbc2e9105125301dc, duration=9615.378s, table=20, n_packets=5,
> n_bytes=490, priority=0 actions=resubmit(,22)
>  cookie=0xbc2e9105125301dc, duration=9615.342s, table=22, n_packets=146,
> n_bytes=11534, priority=0 actions=drop
> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-int -O OpenFlow13
> OFPST_FLOW reply (OF1.3) (xid=0x2):
>  cookie=0xbc2e9105125301dc, duration=6712.090s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=7,icmp_type=136 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6709.623s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=8,icmp_type=136 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6555.755s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=10,icmp_type=136
> actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6559.596s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=9,icmp_type=136 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6461.028s, table=0, n_packets=0,
> n_bytes=0, priority=10,icmp6,in_port=11,icmp_type=136
> actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6712.071s, table=0, n_packets=13,
> n_bytes=546, priority=10,arp,in_port=7 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6709.602s, table=0, n_packets=0,
> n_bytes=0, priority=10,arp,in_port=8 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6555.727s, table=0, n_packets=0,
> n_bytes=0, priority=10,arp,in_port=10 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6559.574s, table=0, n_packets=12,
> n_bytes=504, priority=10,arp,in_port=9 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=6461.005s, table=0, n_packets=15,
> n_bytes=630, priority=10,arp,in_port=11 actions=resubmit(,24)
>  cookie=0xbc2e9105125301dc, duration=9620.388s, table=0, n_packets=514,
> n_bytes=49656, priority=0 actions=NORMAL
>  cookie=0xbc2e9105125301dc, duration=9619.277s, table=0, n_packets=0,
> n_bytes=0, priority=20,mpls actions=resubmit(,10)
>  cookie=0xbc2e9105125301dc, duration=6712.111s, table=0, n_packets=25,
> n_bytes=2674, priority=9,in_port=7 actions=resubmit(,25)
>  

[openstack-dev] [Heat] Current status of observe reality feature implementation (Convergence Phase 2)

2016-08-16 Thread Peter Razumovsky
Hi all!

I'd like to provide the current status of the observe-reality implementation.
I remember that the community wished to land as many observe-reality patches
as possible before the N release.

Next resource plugins have observe reality feature patches on review and
should be reviewed *as soon as possible*:

*Nova resources:*
Nova::Server - https://review.openstack.org/#/c/244066/

*Cinder resources:*
Cinder::VolumeType - https://review.openstack.org/#/c/250451/
Cinder::Volume - https://review.openstack.org/261445
Cinder::EncryptedVolumeType - https://review.openstack.org/276730

*Aodh resources:*
Aodh::GnocchiAggregationByResourcesAlarm -
https://review.openstack.org/#/c/314540/
Aodh::GnocchiAggregationByMetricsAlarm - https://review.openstack.org/314517
Aodh::GnocchiResourcesAlarm - https://review.openstack.org/#/c/314488/
Aodh::CombinationAlarm - https://review.openstack.org/#/c/313513/
Aodh::Alarm - https://review.openstack.org/#/c/313499/

The following resource plugins have observe reality feature patches on review and
should be reviewed *when it would be convenient*:

*Neutron resources:*
Neutron::Net - https://review.openstack.org/255287
Neutron::Subnet - https://review.openstack.org/255753
Neutron::Router - https://review.openstack.org/255776
Neutron::FloatingIP - https://review.openstack.org/256264
Neutron::Port - https://review.openstack.org/259074
vpnservice.py resources - https://review.openstack.org/266910
firewall.py resources - https://review.openstack.org/271992
Neutron::ProviderNet - https://review.openstack.org/273055

*Sahara resources:*
sahara/templates resources - https://review.openstack.org/274073
Sahara::ImageRegistry - https://review.openstack.org/274648
Sahara::DataSource - https://review.openstack.org/274654
Sahara::JobBinary - https://review.openstack.org/274658

*Manila resources:*
Manila::SecurityService - https://review.openstack.org/275344
Manila::ShareType - https://review.openstack.org/275363
Manila::ShareNetwork - https://review.openstack.org/275363
Manila::Share - https://review.openstack.org/276151

The following resource plugins will be available after testing and rebasing:

*Keystone resources: *

https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/get-reality-for-resources+message:%22Keystone::%22

---
Best regards,
Peter Razumovsky
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-16 Thread Giulio Fidente

On 08/15/2016 10:54 AM, Dmitry Tantsur wrote:

Hi everyone, happy Monday :)

I'd like to start the discussion about CI-testing the optional
composable services in the CI (I'm primarily interested in Ironic, but I
know there are a lot more).


thanks for bringing this up; with "pluggability" comes responsibility, it 
seems


there is also a conflicting (yet valid) interest in keeping the number 
of services deployed in the overcloud to a minimum to avoid even longer 
CI run times



So, are there any plans to start covering optional services? Maybe at
least a non-HA job with all environment files included? It would be cool
to also somehow provide additional checks though. Or, in case of ironic,
to disable the regular nova compute, so that the ping test runs on an
ironic instance.


it isn't really a case of HA vs non-HA: with the newer HA architecture 
we're only managing via pcmk those openstack services which need to be 
managed that way (including recent additions like manila-share or 
cinder-backup), and these should be tested in the HA scenario, which IMHO 
at this point could become a default


it looks to me that a scenario in the experimental queue deploying a 
"full" overcloud could work?


there is a similar requirement for testing 'competing' services, like 
swift and ceph/rgw which we're about to merge ... but it applies to other 
things too, like the neutron plugins

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-16 Thread Scott Moser
On Mon, 8 Aug 2016, Jeremy Stanley wrote:

> On 2016-08-08 11:20:54 +0200 (+0200), Miguel Angel Ajo Pelayo wrote:
> > The problem with the other projects image builds is that they are
> > based for bigger systems, while cirros is an embedded-device-like
> > image which boots in a couple of seconds.
>
> Yep, smaller is certainly better when it comes to trying to run Nova
> in a virtual machine.
>
> > Couldn't we contribute to cirros to have such module load by default [1]?
>
> It's worth chatting with them, for sure, and see what they say about
> it.
>

Kevin opened a bug at
 https://bugs.launchpad.net/cirros/+bug/1605832
I responded with one option that we could pursue to avoid others having
to build their own images.
The suggestion was to provide a command line utility inside cirros that
would download additional kernel modules.  Then, user-data could be fed to
instruct it to do so.  That would allow you to keep using unmodified
cirros kernels and initramfs.

Another option would be for cirros to provide other ways to patch itself
to your needs.  For example, the injected files support in nova could be
used to insert the necessary modules.  The difficulty there is that then
you're dependent upon knowing what version of the kernel is inside the
image.

Another option is for cirros to simply include those modules by default.
That comes at the cost of 100k or so.  It's not a huge thing, but it's
definitely an increase.

We can work something out.

Scott

> > Or may be it's time for Openstack to build their own "cirros-like"
> > image with all the capabilities we may be missing for general tempest
> > testing? (ipv6, vlan, etc..? )
>
> I haven't personally tested the CirrOS build instructions, but have
> a feeling writing a diskimage-builder element wrapper for that
> wouldn't be particularly challenging.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] reference a type of alarm in template

2016-08-16 Thread Har-Tal, Liat (Nokia - IL)
Hi Yujun,

The template example you are looking at is invalid.
I added a new valid example (see the following link: 
https://review.openstack.org/#/c/355840/)

As Elisha wrote, the ‘type’ field means the alarm type itself or in simple 
words where it was generated (Vitrage/Nagios/Zabbix/AODH)
The ‘name’ field is not mandatory; it describes the actual problem that the 
alarm was raised about.

In the example you can see two alarm types:


1.   Zabbix alarm - No use of “name” field:
 category: ALARM
 type: zabbix
 rawtext: Processor load is too high on {HOST.NAME}
 template_id: zabbix_alarm

2.   Vitrage alarm
category: ALARM
type: vitrage
name: CPU performance degradation
template_id: instance_alarm

One more point, in order to define an entity in the template, the only 
mandatory fields are:

· template_id

· category
All the other fields are optional; they are designed to let you define the 
entity more accurately.
Each alarm data source has its own set of fields you can use – we will add 
documentation for them in the future.

Best regards,
Liat Har-Tal



From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Tuesday, August 16, 2016 5:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] reference a type of alarm in template

Hi, Elisha

There is no `name` in the template [1], and type is not one of 'nagios', 'aodh' 
and 'vitrage' in the examples [2].

- entity:
    category: ALARM
    type: Free disk space is less than 20% on volume /
    template_id: host_alarm

[1] 
https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
[2] 
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst#examples


On Tue, Aug 16, 2016 at 2:21 AM Rosensweig, Elisha (Nokia - IL) 
> wrote:
Hi,

The "type" means where it was generated - aodh, vitrage, nagios...

I think you are looking for "name", a field that describes the actual problem. 
We should add that to our documentation to clarify.

Sent from Nine

From: Yujun Zhang >
Sent: Aug 15, 2016 16:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [vitrage] reference a type of alarm in template

I have a question on how to reference a type of alarm in template so that we 
can build scenarios.

In the template sample [1], an alarm entity has three keys: `category`, `type` 
and `template_id`. It seems `type` is the only information to distinguish 
different alarms. However, when an alarm is raised by aodh, it seems all alarms 
are assigned the entity type `aodh` [2], and that is how they are shown in the dashboard.

Suppose we have two different types of alarms from `aodh`, e.g. 
`volume.corrupt` and `volume.deleted`. How should I reference them separately 
in a template?

[Screen Shot 2016-08-15 at 8.44.56 PM.png]

[1] 
https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
[2] 
https://github.com/openstack/vitrage/blob/master/vitrage/datasources/aodh/transformer.py#L75

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Puppet] Puppet OpenStack Application Module

2016-08-16 Thread Emilien Macchi
On Mon, Aug 15, 2016 at 12:13 PM, Andrew Woodward  wrote:
> I'd like to propose the creation of a puppet module to make use of Puppet
> Application orchestrator. This would consist of a Puppet-4 compatible module
> that would define applications that would wrap the existing modules.
>
>
> This will allow for the establishment of a shared module that is capable of
> expressing OpenStack applications using the new language schematics in
> Puppet 4 [1] for multi-node application orchestration.
>
>
> I'd expect that initial testing env would consist of deploying a PE stack,
> and using docker containers as node primitives. This is necessary until a
> FOSS deployer component like [2] becomes stable, at which point we can
> switch to it and use the FOSS PM as well. Once the env is up, I plan to wrap
> p-o-i profiles to deploy the cloud and launch tempest for functional
> testing.
>
>
> [1] https://docs.puppet.com/pe/latest/app_orchestration_workflow.html
>
> [2] https://github.com/ripienaar/mcollective-choria

It reminds me of the stackforge/puppet-openstack [1] repository that we had
in the past and that we deprecated and removed.
It was a failure because it was not maintained and not really used by
our community, being too opinionated.

Before creating such a new repository, we might want to see some PoC
somewhere, some piece of code, so we can evaluate if we would actually
need that thing.
Have you already written some code around that? Can you show it?
I just don't want to see (again) an empty repo where nobody is pushing
code to it.

[1] https://github.com/stackforge/puppet-openstack
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] TripleO-Quickstart Driven Doc Generation Updates

2016-08-16 Thread Harry Rybacki
Greetings All,

Detailed post TripleO-Quickstart build doc generation is coming along
smoothly. After much discussion between myself, adarazs, weshay, and
larsks -- I believe I have found a solution that impacts
TripleO-Quickstart and external roles minimally. Combining work from
trown[1] and an old awk script from way-back-when, all we need is to:

  1. Ensure as much work is done within templated bash scripts as
possible (this was/is already a goal of oooq)
  2. Track which templates are used for build paths e.g. for a
"minimal" deployment the following shell scripts are created and
called:
  - undercloud-install.sh
  - undercloud-post-install.sh
  - overcloud-deploy.sh
  - overcloud-deploy-post.sh
  - overcloud-validate.sh
  3. Update existing/audit new templates to follow some style
guidelines[3] (vanilla example as well as an actual script output
linked in this paste).

After a build has finished, either by way of CI or locally, the
tripleo-collect-logs[4] role can be used to collect logs from the
undercloud, convert the shell scripts into rST files, use Sphinx[5] to
build the documentation, and (optionally) publish the results to an
artifacts server.
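
As a rough illustration of the shell-to-rST conversion step (the '## '
doc-comment convention below is an assumption for this sketch, not
necessarily what the awk script expects):

```python
# Sketch: turn a templated bash script into rST.  Lines starting with
# "## " become prose paragraphs; everything else is indented into a
# literal code block introduced by "::".
def shell_to_rst(script_text):
    out, in_code = [], False
    for line in script_text.splitlines():
        if line.startswith('## '):
            if in_code:
                # close the current literal block before prose resumes
                in_code = False
                out.append('')
            out.append(line[3:])
        elif line.strip():
            if not in_code:
                # open a literal block for the shell commands
                out.extend(['', '::', ''])
                in_code = True
            out.append('    ' + line)
    return '\n'.join(out)
```

A real implementation would also need to handle templated variables and
skip shebang/boilerplate lines, but the shape of the transformation is the
same.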

There is a PoC job[6] up that is successfully building and publishing
docs created to combine and monitor work done in relevant
tripleo-quickstart[7] and ansible-role-tripleo-collect-logs[8]
patches. To get a better idea of what the end product from Sphinx
looks like, please look here[9]. Note that the job is using a playbook
which is calling external roles for most of the deployment (which
doesn't contain all the possible templated bash script conversions).
For a more complete version, download the output from a local run I
completed yesterday[10] -- just untar it and open the index.html file
located at its root level.

Up to this point, my goal has been to get as much coverage of what our
CI does in a minimal job[10] building documentation. With this shaping
up quickly, I would like to open discussion around what I believe will
be the most complicated part of expanding documentation coverage -
work being done via ansible instead of within templated bash scripts.
For example, the undercloud role in TripleO-Quickstart adjusts an
Ironic config and restarts the service[11] outside of the templated
bash scripts. I can see a few solutions:

  1. Convert work done in ansible to templated bash scripts
  - Requires a fair amount of initial leg work
  - Sections of the workflow very well /should/ be ansible for
various reasons
  2. Write static documentation for things not templated and insert it
into the correct section of docs
  - Propensity to very easily become out-of-date
  3. Write templated bash that mimics what ansible is doing and let
the automation consume these unused 'scripts'
  - Same issue as option 2 -- both allow for easy divergence
between what is done in automation and what we are presenting folks
with inside of the docs

Would folks chime in on the above? Is there a simple solution that I overlooked?

[1] - https://review.gerrithub.io/#/c/261218/
[2] - 
https://github.com/openstack/tripleo-incubator/blob/master/scripts/extract-docs.awk
[3] - https://paste.fedoraproject.org/408991/
[4] - https://github.com/redhat-openstack/ansible-role-tripleo-collect-logs
[5] - http://www.sphinx-doc.org/en/stable/
[6] - 
https://ci.centos.org/job/poc-hrybacki-tripleo-quickstart-roles-gate-mitaka-doc-generator/full-minimal/
[7] - https://review.openstack.org/#/c/347592/
[8] - https://review.gerrithub.io/#/c/286349/
[9] - 
https://ci.centos.org/artifacts/rdo/jenkins-poc-hrybacki-tripleo-quickstart-roles-gate-mitaka-doc-generator-44/docs/build/
[10] - 
https://drive.google.com/a/redhat.com/file/d/0B9Alqh5AS1vpS1djWm9zYmZhNHM/view?usp=sharing
[11] - 
https://ci.centos.org/view/rdo/view/tripleo-gate/job/tripleo-quickstart-gate-mitaka-delorean-
[12] - 
https://github.com/openstack/tripleo-quickstart/blob/master/roles/tripleo/undercloud/tasks/main.yml#L9-L21


/R

Harry Rybacki

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vitrage] host evacuate

2016-08-16 Thread Matt Riedemann

On 8/16/2016 12:39 AM, Afek, Ifat (Nokia - IL) wrote:






On 15/08/2016, 19:29, "Matt Riedemann"  wrote:


On 8/15/2016 4:24 AM, Afek, Ifat (Nokia - IL) wrote:

Hi,

In Vitrage project[1], I would like to add an option to call nova host-evacuate 
for a failed host. I noticed that there is an api for ‘nova evacuate’, and a 
cli (but no api) for ‘nova host-evacuate’. Is there a way that I can call 'nova 
host-evacuate’ from Vitrage code? Did you consider adding host-evacuate to nova 
api? Obviously it would improve the performance for Vitrage use case.


Some more details: The Vitrage project is used for analysing the root cause of 
OpenStack alarms, and deducing their existence before they are directly 
observed. It receives information from various data sources (Nova, Neutron, 
Zabbix, etc.) performs configurable actions and notifies other projects like 
Nova.

For example, in case zabbix detect a host NIC failure, Vitrage calls nova 
force-down api to indicate that the host is unavailable. We would like to add 
an option to evacuate the failed host.

Thanks,

Ifat.

[1] https://wiki.openstack.org/wiki/Vitrage




This blog post will probably be helpful:

http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/

The 'nova host-evacuate' CLI is a convenience CLI, it's not for a
specific API, it just takes a host, finds the instances running on that
host and evacuates each of them. You could do something similar in vitrage.
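
A minimal sketch of that loop, assuming a python-novaclient-style client
object (the exact evacuate() signature is an assumption to verify against
your novaclient version):

```python
# Sketch of what the "nova host-evacuate" convenience CLI does, so the
# same thing can be driven from vitrage code.
def evacuate_host(nova, host, on_shared_storage=False):
    """Evacuate every instance running on `host`, one API call each."""
    # find the instances running on the failed host
    servers = nova.servers.list(
        search_opts={'host': host, 'all_tenants': 1})
    evacuated = []
    for server in servers:
        # equivalent of `nova evacuate <server>` per instance
        nova.servers.evacuate(server, on_shared_storage=on_shared_storage)
        evacuated.append(server.id)
    return evacuated
```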

--

Thanks,

Matt Riedemann




Thanks Matt,
I already read this blog :-)
I asked about moving the cli code to the api, because performance-wise it would 
help us.

Thanks,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's not going to happen; it's orchestration, and Nova is trying to 
avoid adding more of that to the REST API.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Secure Setup & HSM-plugin

2016-08-16 Thread Praktikant HSM
Hi Douglas,

Thank you very much for your response.

When testing the usage of the generated MKEK, we ran into some problems. For 
clarification: we are testing the PKCS#11-based crypto plugin with a Utimaco 
HSM. The error messages are from Barbican's and Utimaco's log files.

When storing a secret, we get the following error: CKM_MECHANISM_INVALID 
(Mechanism 0x811c is invalid). Since this is a vendor specific AES-GCM 
mechanism by SafeNet[1], it is not supported by our HSMs.

In the p11_crypto.py file, the default algorithm is set to 
"VENDOR_SAFENET_CKM_AES_GCM"[2]. Thus, we specified "CKM_AES_GCM" in the 
barbican.conf file in the [p11_crypto_plugin] section to be used instead of the 
default mechanism. However, this gave us a "CKR_MECHANISM_PARAM_INVALID" error: 
mechanism length invalid (expected 40, provided 48).

Additionally, when trying other AES modes, e.g. CBC, there is a 
CryptoPluginNotFound error.

Is there currently a workaround which would allow us to use a Utimaco HSM? 
Also, are there any plans to natively support HSMs from other vendors in the 
near future?

Again, thank you for your support.

Best regards,

Manuel Roth

[1]: 
https://github.com/openstack/barbican/blob/306b2ac592c059c59be42c0420a08af0a9e34f6e/barbican/plugin/crypto/pkcs11.py#L131

[2]: 
https://github.com/openstack/barbican/blob/c2a7f426455232ed04d2ccef6b35c87a2a223977/barbican/plugin/crypto/p11_crypto.py#L63

---
System Engineering HSM

Utimaco IS GmbH
Germanusstr. 4
52080 Aachen
Germany



-Original Message-
From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com]
Sent: Freitag, 12. August 2016 18:24
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Ariano-Tim Donda ; Jiannis Papadakis 

Subject: Re: [openstack-dev] Barbican: Secure Setup & HSM-plugin


Hi Manuel,

I'm happy to hear about your interest in Barbican.  I assume your HSM has a 
PKCS#11 interface since the admin commands to generate the MKEK and HMAC keys 
worked for you.

The labels for the generated keys should be specified in the config file for 
the API process. [1]  The API process uses the MKEK and HMAC keys to encrypt 
and sign the secrets (keys) that are stored in Barbican by clients.

The PKCS#11 plugin was designed to store client keys (secrets) in the SQL 
database, so your API process must be configured to use 
"store_crypto" as the enabled_secretstore_plugins [2] in addition to specifying 
"p11_crypto" as the enabled_crypto_plugins [3].

When configured this way, Barbican uses the HSM to encrypt the client data 
(keys/secrets) before storing it in the DB.

The API itself does not currently support using keys stored by clients to do 
server-side encryption, but it's a feature that has been discussed in previous 
summits with some interest.  We've also had some discussions with the Designate 
team to add server-side signing that they could use to implement DNSSEC, but we 
don't yet have a blueprint for it.

Let me know if you have any more questions.

- - Douglas Mendizábal

[1]
http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n278
[2]
http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n255
[3]
http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n260


On 8/12/16 7:51 AM, Praktikant HSM wrote:
> Hi all,
>
> As a member of Utimaco's pre-sales team I am currently testing an
> integration of Barbican with one of our HSMs.
>
>
>
> We were able to generate MKEKs and HMAC keys on the HSM with the
> 'pkcs11-key-generation' as well as 'barbican-manage hsm' commands.
> However, it is not fully clear to us how to use these keys to encrypt
> or sign data.
>
>
>
> Additionally, we would appreciate further information concerning the
> secure setup of Barbican with an HSM-plugin.
>
>
>
> Thank you in advance for your support.
>
>
>
> Best regards,
>
>
>
>
>
> Manuel Roth
>
>
>
> ---
>
> System Engineering HSM
>
>
>
> Utimaco IS GmbH
>
> Germanusstr. 4
>
> 52080 Aachen
>
> Germany
>
>
>
> www.utimaco.com 
>
>
> --
- --
>
>  Utimaco IS GmbH Germanusstr. 4, D.52080 Aachen, Germany, Tel:
> +49-241-1696-0, www.utimaco.com Seat: Aachen – Registergericht
> Aachen HRB 18922 VAT ID No.: DE 815 496 496 Managementboard: Malte
> Pollmann (Chairman) CEO, Dr. Frank J. Nellissen CFO
>
> This communication is confidential. We only send and receive email on
> the basis of the terms set out at
> https://www.utimaco.com/en/e-mail-disclaimer/

Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-16 Thread Mooney, Sean K


> -Original Message-
> From: Assaf Muller [mailto:as...@redhat.com]
> Sent: Monday, August 15, 2016 2:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: Mooney, Sean K 
> Subject: Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack
> security group driver with ovs-dpdk
> 
> + Jakub.
> 
> On Wed, Aug 10, 2016 at 9:54 AM,
>  wrote:
> > Hi,
> >> [Mooney, Sean K]
> >> In ovs 2.5 only linux kernel conntrack was supported assuming you
> had
> >> a 4.x kernel that supported it. that means that the feature was not
> >> available on bsd,windows or with dpdk.
> > Yup, I also thought about something like that.
> > I think I was at-least-slightly misguided by
> > http://docs.openstack.org/draft/networking-guide/adv-config-
> ovsfwdrive
> > r.html
> > and there is currently a statement
> > "The native OVS firewall implementation requires kernel and user
> space support for conntrack, thus requiring minimum versions of the
> Linux kernel and Open vSwitch. All cases require Open vSwitch version
> 2.5 or newer."
> 
> I agree, that statement is misleading.
[Mooney, Sean K] the 2.6 branch now exists so it is probably ok to refer to
2.6 now. https://github.com/openvswitch/ovs/commits/branch-2.6
The release should be made ~ September 15th
https://github.com/openvswitch/ovs/blob/797dad21566fecc60de3ce6f93c81ad55a61fe86/Documentation/release-process.md#release-scheduling
which will be before the next OpenStack release.
If you would like, I can update the networking guide to reflect the change in OVS.

> 
> >
> > Do you agree that this is something to change? I think it is not OK
> to state OVS 2.6 without that being released, but in case I am not
> confusing then:
> > -OVS firewall driver with OVS that uses kernel datapath requires OVS
> > 2.5 and Linux kernel 4.3 -OVS firewall driver with OVS that uses
> > userspace datapath with DPDK (aka ovs-dpdk  aka DPDK vhost-user aka
> netdev datapath) doesn't have a Linux kernel prerequisite That is
> documented in table in " ### Q: Are all features available with all
> datapaths?":
> > http://openvswitch.org/support/dist-docs/FAQ.md.txt
> > where currently 'Connection tracking' row says 'NO' for 'Userspace' -
> > but that's exactly what has been merged recently /to become feature
> of
> > OVS 2.6
> >
> > Also when it comes to performance I came across
> > http://openvswitch.org/pipermail/dev/2016-June/071982.html, but I
> would guess that devil could be the exact flows/ct actions that will be
> present in real-life scenario.
> >
> >
> > BR,
> > Konstantin
> >
> >
> >> -Original Message-
> >> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> >> Sent: Tuesday, August 09, 2016 2:29 PM
> >> To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC
> >> ; openstack-
> >> d...@lists.openstack.org
> >> Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk]
> conntrack
> >> security group driver with ovs-dpdk
> >>
> >>
> >> > -Original Message-
> >> > From: kostiantyn.volenbovs...@swisscom.com
> >> > [mailto:kostiantyn.volenbovs...@swisscom.com]
> >> > Sent: Tuesday, August 9, 2016 12:58 PM
> >> > To: openstack-dev@lists.openstack.org; Mooney, Sean K
> >> > 
> >> > Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk]
> >> > conntrack security group driver with ovs-dpdk
> >> >
> >> > Hi,
> >> > (sorry for using incorrect threading)
> >> >
> >> > > > About 2 weeks ago I did some light testing with the conntrack
> >> > > > security group driver and the newly
> >> > > >
> >> > > > Merged upserspace conntrack support in ovs.
> >> > > >
> >> > By 'recently' - whether you mean patch v4
> >> > http://openvswitch.org/pipermail/dev/2016-June/072700.html
> >> > or you used OVS 2.5 itself (which I think includes v2 of the same
> >> > patch series)?
> >> [Mooney, Sean K] I used http://openvswitch.org/pipermail/dev/2016-
> >> June/072700.html or specifically i used the following commit
> >>
> https://github.com/openvswitch/ovs/commit/0c87efe4b5017de4c5ae99e7b9c
> >> 3
> >> 6e8a6e846669
> >> which is just after userspace conntrack was merged,
> >> >
> >> > So in general - I am a bit confused about conntrack support in
> OVS.
> >> >
> >> > OVS 2.5 release notes
> >> > http://openvswitch.org/pipermail/announce/2016-
> >> > February/81.html state:
> >> > "This release includes the highly anticipated support for
> >> > connection tracking in the Linux kernel.  This feature makes it
> >> > possible to implement stateful firewalls and will be the basis for
> >> > future stateful features such as NAT and load-balancing.  Work is
> >> > underway to bring connection tracking to the userspace datapath
> >> > (used by DPDK) and the port to Hyper-V."  - in the way that 'work
> >> > is underway' (=work is
> >> > ongoing) means that a time of OVS 2.5 release the feature was not
> >> > 

Re: [openstack-dev] [Ceilometer] Newbie question regarding Ceilometer notification plugin

2016-08-16 Thread gordon chung


On 15/08/16 02:23 PM, Wanjing Xu (waxu) wrote:

We are trying to develop a program which can draw the VM topology.  For
this we need to listen for VM creation/deletion events and act upon them.  I
would think Ceilometer is the right place for this.  So I downloaded and
read Ceilometer a little bit.  If somebody can give me a pointer to where
I can start to hook my program into the notification mechanism of
Ceilometer, that would be great.

Thanks and any suggestions are welcome!


you can take a look at Events as well: 
http://docs.openstack.org/developer/ceilometer/events.html

there is a translation file[1] and pipeline[2] related to it

[1] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/event_definitions.yaml
[2] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/event_pipeline.yaml
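
for the VM create/delete use case, the relevant entries in 
event_definitions.yaml look roughly like this (the trait names here are 
illustrative; check the shipped file for the full set):

```yaml
# sketch of an event definition mapping notification payload fields to traits
- event_type:
    - compute.instance.create.end
    - compute.instance.delete.end
  traits:
    instance_id:
      fields: payload.instance_id
    host:
      fields: payload.host
    state:
      fields: payload.state
```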


--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with Ceilometer Kafka Publisher.

2016-08-16 Thread gordon chung
what does your pipeline.yaml look like? maybe paste it to paste.openstack.org. 
i imagine it's correct if your udp publishing works as expected.

On 16/08/16 04:04 AM, Raghunath D wrote:
Hi Simon,

I have two openstack setup's one with kilo and other with mitaka.

Please find details of kafka version's below:
Kilo kafka client library Version:1.3.1
Mitaka kafka client library Version:1.3.1
Kafka server version on both kilo and mitaka: kafka_2.11-0.9.0.0.tgz

One observation: on the Kilo OpenStack setup I didn't see the duplicate 
message issue, while on the Mitaka setup I am experiencing it.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-
From: Simon Pasquier [mailto:spasqu...@mirantis.com]
Sent: Thursday, August 11, 2016 2:34 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Ceilometer]:Duplicate messages with Ceilometer 
Kafka Publisher.

Hi
Which version of Kafka do you use?
BR
Simon

On Thu, Aug 11, 2016 at 10:13 AM, Raghunath D 
> wrote:
Hi ,

  We are injecting events into our custom plugin in ceilometer.
  The ceilometer pipeline.yaml is configured to publish these events over 
kafka and udp, and we consume these samples using kafka and udp clients.

  KAFKA publisher:
  ---
  When the events are sent continuously, we can see duplicate msgs received 
in the kafka client.
  From the log it seems the ceilometer kafka publisher failed to send msgs, 
but these msgs are still received by the kafka server. So when kafka resends 
these failed msgs, we can see duplicate msgs received in the kafka client.
  Please find the attached log for reference.
  Is this a known issue?
  Is there any workaround for this issue?

  UDP publisher:
No duplicate msg's issue seen here and it is working as expected.



With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty.IT Services
   Business Solutions
   Consulting


=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of ISO

2016-08-16 Thread Jay Pipes

On 08/16/2016 04:58 AM, Vladimir Kozhukalov wrote:

Dear colleagues,

We finally have working custom deployment job that deploys Fuel admin
node using online RPM repositories (not ISO) on vanilla Centos 7.0.


Bravo! :)


Currently all Fuel system and deployment tests use ISO and we are
planning to re-implement all these jobs (including BVT, SWARM, and Fuel
CI jobs) to exclude ISO from the pipeline. That will allow us to get rid
of ISO as our deliverable and instead rely totally on package
repositories. Linux distributions like Ubuntu, Debian, RHEL, etc. are
already delivered via ISO/qcow2/etc. images and we'd better stop
reinventing a wheel and support our own ISO build code. That will allow
us to make Fuel admin node deployment more flexible.

I will inform you about our next steps here in this thread.


Thanks, Vova, this is an excellent step forward for ease-of-use with Fuel.

Nice work,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-08-16 Thread Ramana Raja
On Thursday, June 30, 2016 6:07 PM, Alexey Ovchinnikov 
 wrote:
> 
> Hello everyone,
> 
> here I will briefly summarize an export update problem one will encounter
> when using nfs-ganesha.
> 
> While working on a driver that relies on nfs-ganesha I have discovered that
> it
> is apparently impossible to provide interruption-free export updates. As of
> version
> 2.3 which I am working with it is possible to add an export or to remove an
> export without restarting the daemon, but it is not possible to modify an
> existing
> export. So in other words if you create an export you should define all
> clients
> before you actually export and use it, otherwise it will be impossible to
> change
> rules on the fly. One can come up with at least two ways to work around
> this issue: either by removing, updating and re-adding an export, or by
> creating multiple
> exports (one per client) for an exported resource. Both ways have associated
> problems: the first one interrupts clients already working with an export,
> which might be a big problem if a client is doing heavy I/O, the second one
> creates multiple exports associated with a single resource, which can easily
> lead
> to confusion. The second approach is used in current manila's ganesha
> helper[1].
> This issue seems to be raised now and then with nfs-ganesha team, most
> recently in
> [2], but apparently it will not be addressed in the nearest future.

Frank Filz has added support to Ganesha (upstream "next" branch) to
allow one to dynamically update exports via D-Bus. Available since,
https://github.com/nfs-ganesha/nfs-ganesha/commits/2f47e8a761f3700

It'd be nice if we could test this feature and provide feedback.
Also, ML [2] was updated with more implementation details.

Thanks,
Ramana

> 
> With kind regards,
> Alexey.
> 
> [1]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
> [2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr][kolla] keystone v3 support status and plans?

2016-08-16 Thread Steven Dake (stdake)
Hey kuryrians,

Kolla has a kuryr review in our queue and it's looking really solid, thanks to Hui 
Kang.  The last key problem blocking the merge (and support of Kuryr in Kolla) 
is that Kolla only supports keystone v3 (and later, when that comes from 
upstream).  As a result we are unable to merge kuryr because we can't validate 
it.  The work on the kolla side is about 98% done (we need a few keystone v3 
config options).  Wondering if keystone v3 support will magically land in this 
cycle?  It's not all that challenging, but I realize the kuryr team may have other 
things that are higher priority on their plates.

FWIW lack of keystone v3 support will be an adoption barrier for kuryr beyond 
kolla as well.
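For anyone picking this up: the keystone v3 work is largely a matter of moving to
keystoneauth-style options. A sketch of what the auth section would look like
(the section name, endpoint, and credentials below are placeholders, not actual
kuryr config; the v3-specific additions over v2 are the two *_domain_name options):

```ini
# Illustrative keystoneauth v3 password section; section name and values
# are placeholders for this example.
[neutron]
auth_url = http://controller:35357/v3
auth_type = password
username = kuryr
password = secret
project_name = service
# New in v3: domains scope both users and projects.
project_domain_name = Default
user_domain_name = Default
```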

Comments welcome.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-16 Thread Chris Dent

On Tue, 16 Aug 2016, Alex Xu wrote:


There are two solutions in my mind
1. There are no differing dependencies for traits; that is easy. The
validation API in the placement engine API is enough.
2. There are differing dependencies for traits. We need an API for compute nodes
to report those dependencies. This isn't easy; we need to model the dependencies in
the placement engine. This sounds like a way Chris will hate :)


If you're talking about me here, "hate" is probably a bit too strong
of a term. Maybe "get sad"?

In any case my dislike is not with the idea of putting the capabilities
behind an API that can be accessed by a multitude of clients. That
seems like a good idea. It removes some dependencies and allows some
dynamism in the collection and management of traits. That is important
to allow.

What I dislike is the idea of including it in the placement service.
We should have a standalone traits service (for which the placement
service is one of the multitude of clients). I know people get
stressed about having yet another service running but if we continue
with the standard service evolution trend the placement service is
going to end up being yet another monolith in the OpenStack stack.
It should do one thing well and compose with other services to
satisfy use cases.

We should have a traits service. It should be very simple. It should
be amenable to simple caching for performance boosting and should be
easy to scale horizontally. If people want it to use the same
database as the placement engine, that's fine but it should be
its own process and ideally its own code repo.

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] live migration meeting today

2016-08-16 Thread Murray, Paul (HP Cloud)
See agenda and add if you have something to bring up: 
https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Paul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] mcollective configuration on slaves

2016-08-16 Thread Vladimir Kozhukalov
Currently we use /etc/nailgun-agent/nodiscover file to prevent early run of
nailgun-agent (and thus its conflict with cloud-init). We put this file
when we build an image [0] and then we remove this file at the late stage
of system start process when cloud-init is done [1]. Anyway this does not
look good and we'd better replace cloud-init with something else like
puppet and run this configuration during provisioning even before booting
into target operating system. Besides, running Puppet during provisioning
to do initial configuration will allow us to configure other services/files
in a consistent way. So, I prefer option #3. Let's write a spec for this.

[0]
https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L961
[1]
https://github.com/openstack/fuel-agent/blob/master/cloud-init-templates/boothook_fuel_9.0_ubuntu.jinja2#L91



Vladimir Kozhukalov

On Fri, Aug 12, 2016 at 8:29 AM, Georgy Kibardin 
wrote:

> Guys,
>
> Currently, at the time of the first boot Mcollective on slaves is
> configured by Cloud-init (host, port etc.) and by Nailgun agent when node
> identity is written. This happens in any order and has already caused several
> problems, which were fixed in the scope of https://bugs.launchpad.net/
> fuel/+bug/1455489. The last fix relies on the hostname to figure out that
> provisioning is over and to prevent Nailgun agent from messing with
> Mcollective. This solution is quite hacky and, I think, we need to come up
> with a better fix. So, the options are:
>
> 1. Do nothing
>   + No effort, no regression
>   - Existing solution is fragile and not simple enough to be sure that
> there are obviously no more bugs.
>   - Cloud-init does things on early boot stages which is a bit hard to
> debug.
>
> 2. "Manual" Mcollective configuration, triggered by Nailgun agent.
>   + Quite simple
>   - Not "cross-distro"
>   - Not consistent with the remaining configuration tasks (rewriting them
> worsens the problem above)
>
> 3. Use Puppet to configure Mcollective. (copy necessary input data into
> the chroot env and run puppet inside it)
>   + Using puppet is consistent with the way Fuel configures things
>   - It is not consistent with remaining configuration tasks, which is done
> via Cloud-init and moving them to Puppet takes some time.
>
> 4. Some other tool?
>
> New options, cons and pros are very welcome!
>
> Best,
> Georgy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI][Delorean]

2016-08-16 Thread Sagi Shnaidman
Hi, all

we have the current-tripleo repo [1] pointing to an old repository [2] which
contains a broken cinder [3] with the volume types bug [4]. That breaks all our CI
jobs, which cannot create the pingtest stack, because current-tripleo is the
main repo that is used in the jobs [5].

Could we please move the link to a newer repo that contains the cinder fix?

Thanks


[1]
http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/
[2]
http://trunk.rdoproject.org/centos7/c6/bd/c6bd3cb95b9819c03345f50bf2812227e81314ab_4e6dfa3c
[3] openstack-cinder-9.0.0-0.20160810043123.b53621a.el7.centos.noarch
[4] https://bugs.launchpad.net/cinder/+bug/1610073
[5]
https://github.com/openstack-infra/tripleo-ci/blob/master/scripts/tripleo.sh#L100

-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-16 Thread Thierry Carrez
John Dickinson wrote:
>>> Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
 [...]
 The proposed plan has a lot of good in it, and I'm really happy to see the 
 TC
 working to bring common goals and vision to the entirety of the OpenStack
 community. Drop the "project teams are expected to prioritize these goals 
 above
 all other work", and my concerns evaporate. I'd be happy to agree to that 
 proposal.
>>>
>>> Saying that the community has goals but that no one is expected to
>>> act to bring them about would be a meaningless waste of time and
>>> energy.
>>
>> I think we can find wording that doesn't use the word "priority" (which
>> is, I think, what John objects to the most) while still conveying that
>> project teams are expected to act to bring them about (once they said
>> they agreed with the goal).
>>
>> How about "project teams are expected to do everything they can to
>> complete those goals within the boundaries of the target development
>> cycle" ? Would that sound better ?
> 
> Any chance you'd go for something like "project teams are expected to
> make progress on these goals and report that progress to the TC every
> cycle"?

The issue with this variant is that it removes the direct link between
the goal and the development cycle. One of the goals of these goals
(arh) is that we should be able to collectively complete them in a given
timeframe, so that there is focus at the same time and we have a good
story to show at the end. Those goals are smallish development cycle
goals. They are specifically chosen to be complete-able within a cycle
and with a clear definition of "done". It's what differentiates them
from more traditional cross-project specs or strategic initiatives which
can be more long-term (and on which "reporting progress to the TC every
cycle" could be an option).

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][osic] OSIC cluster status

2016-08-16 Thread Paul Bourke

Kollagues,

Can we get an update from the APAC / US reps?

The last I heard we were having trouble getting guests spawned on an 
external network so they could be accessible by Rally. There have been 
various people looking at this the past few days, but overall it is unclear 
where we are, and time is short. If someone could summarise (looking at 
Jeffrey / inc0 ;)) what's been happening, that would allow us to make a 
decision on how to progress.


Cheers,
-Paul

On 05/08/16 17:48, Paul Bourke wrote:

Hi Kolla,

Thought it would be helpful to send a status mail once we hit checkpoints
in the osic cluster work, so people can keep up to speed without having
to trawl IRC.

Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic

Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now
complete. The 131 bare metal nodes have been provisioned with Ubuntu
14.04, networking is configured, and all Kolla prechecks are passing.

The default set of images (--profile default) have been built and pushed
to a registry running on the deployment node, the build taking a very
speedy 5m37.040s.

Cheers,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Getting rid of ISO

2016-08-16 Thread Vladimir Kozhukalov
Dear colleagues,

We finally have working custom deployment job that deploys Fuel admin node
using online RPM repositories (not ISO) on vanilla Centos 7.0.

Currently all Fuel system and deployment tests use ISO and we are planning
to re-implement all these jobs (including BVT, SWARM, and Fuel CI jobs) to
exclude ISO from the pipeline. That will allow us to get rid of ISO as our
deliverable and instead rely totally on package repositories. Linux
distributions like Ubuntu, Debian, RHEL, etc. are already delivered via
ISO/qcow2/etc. images, and we'd better stop reinventing the wheel and supporting
our own ISO build code. That will allow us to make Fuel admin node
deployment more flexible.

I will inform you about our next steps here in this thread.

Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Newbie question regarding Ceilometer notification plugin

2016-08-16 Thread Julien Danjou
On Mon, Aug 15 2016, Wanjing Xu (waxu) wrote:

> We are trying to develop a program which can draw the VMs topology.  For
> this we need to listen VM creation/deletion event and act upon it.  I
> would think Ceilometer is the right place for this.  So I downloaded and
> read Ceilometer a little bit.  If somebody can give me a pointer to where
> I can start to hook my program into the notification mechanism of
> Ceilometer, that would be great.

What you may want is Aodh and its event alarm feature:

  http://docs.openstack.org/developer/aodh/event-alarm.html
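To illustrate the pattern: once instance lifecycle events reach your program
(e.g. via an Aodh event-alarm action or a notification listener), keeping the
topology current is just a matter of reacting to the event types. The sketch
below uses hand-written payloads; real notifications carry many more fields,
and only the event-type strings are the standard Nova ones.

```python
# Minimal sketch of maintaining a VM topology map from lifecycle events.
# Payloads are illustrative; real notifications have many more fields.
topology = {}

def handle_event(event_type, payload):
    """Update the topology map from a single instance lifecycle event."""
    if event_type == "compute.instance.create.end":
        topology[payload["instance_id"]] = payload["host"]
    elif event_type == "compute.instance.delete.end":
        topology.pop(payload["instance_id"], None)

handle_event("compute.instance.create.end",
             {"instance_id": "vm-1", "host": "compute-1"})
assert topology == {"vm-1": "compute-1"}
handle_event("compute.instance.delete.end", {"instance_id": "vm-1"})
assert topology == {}
```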

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to add ironic db version file. now install don't copying db version file. how to do?

2016-08-16 Thread Lucas Alvares Gomes
Hi Paul,

I want to add a db version file. Can you tell me how to do it? Thank you very
> much!
>

To create a migration script you have to do:

 $ cd ironic/ironic/db/sqlalchemy
 $ alembic revision -m "create foo table"

We maintain a FAQ as part of the Ironic documentation that answers this and
other similar questions. Please take a look:
http://docs.openstack.org/developer/ironic/dev/faq.html

Cheers,
Lucas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-16 Thread Sylvain Bauza



Le 15/08/2016 22:59, Andrew Laski a écrit :

On Mon, Aug 15, 2016, at 10:33 AM, Jay Pipes wrote:

On 08/15/2016 09:27 AM, Andrew Laski wrote:

Currently in Nova we're discussing adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hosts
expose "capabilities" for use by the scheduler. As much fun as it would
be to have the same term mean two very different things in Nova, to
retain some semblance of sanity let's rename one or both of these
concepts.

An API "capability" is going to be an action, or URL, that a user is
allowed to use. So "boot an instance" or "resize this instance" are
capabilities from the API point of view. Whether or not a user has this
capability will be determined by looking at policy rules in place and
the capabilities of the host the instance is on. For instance an
upcoming volume multiattach feature may or may not be allowed for an
instance depending on host support and the version of nova-compute code
running on that host.

A host "capability" is a description of the hardware or software on the
host that determines whether or not that host can fulfill the needs of
an instance looking for a home. So SSD or x86 could be host
capabilities.
https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
has a list of some examples.

Some possible replacement terms that have been thrown out in discussions
are features, policies(already used), grants, faculties. But none of
those seemed to clearly fit one concept or the other, except policies.

Any thoughts on this hard problem?

I know, naming is damn hard, right? :)

After some thought, I think I've changed my mind on referring to the
adjectives as "capabilities" and actually think that the term
"capabilities" is better left for the policy-like things.

My vote is the following:

GET /capabilities <-- returns a set of *actions* or *abilities* that the
user is capable of performing

GET /traits <-- returns a set of *adjectives* or *attributes* that may
describe a provider of some resource
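A toy sketch of the split: "capabilities" answer "may this user perform this
action?", while "traits" answer "does this provider have these attributes?".
The helper names and capability strings below are invented for the example;
the trait names follow the style of the os-capabilities constants linked above.

```python
# Illustrative only: the two concepts are checked in very different ways.
def allowed(user_capabilities, action):
    """Policy-style check against the user's capabilities (verbs)."""
    return action in user_capabilities

def satisfies(provider_traits, required_traits):
    """Scheduler-style check against a provider's traits (adjectives)."""
    return set(required_traits) <= set(provider_traits)

caps = {"boot-instance", "resize-instance"}
traits = {"HW_CPU_X86_AVX2", "STORAGE_DISK_SSD"}

assert allowed(caps, "resize-instance")
assert not allowed(caps, "live-migrate")
assert satisfies(traits, {"STORAGE_DISK_SSD"})
assert not satisfies(traits, {"STORAGE_DISK_SSD", "STORAGE_DISK_HDD"})
```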

Traits sounds good to me.


Yeah, it wouldn't be dire, trait.


I can rename os-capabilities to os-traits, which would make Sean Mooney
happy I think and also clear up the terminology mismatch.

Thoughts?
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with Ceilometer Kafka Publisher.

2016-08-16 Thread Raghunath D
 Hi Simon,

I have two openstack setup's one with kilo and other with mitaka.

Please find details of kafka version's below:
Kilo kafka client library Version:1.3.1
Mitaka kafka client library Version:1.3.1
Kafka server version on both kilo and mitaka: kafka_2.11-0.9.0.0.tgz

One observation: on the Kilo OpenStack setup I didn't see the duplicate 
message issue, while on the Mitaka
setup I am experiencing it.

With Best Regards
 Raghunath Dudyala
 Tata Consultancy Services Limited
 Mailto: raghunat...@tcs.com
 Website: http://www.tcs.com
 
 
 

-
From: Simon Pasquier [mailto:spasqu...@mirantis.com] 
 Sent: Thursday, August 11, 2016 2:34 AM
 To: OpenStack Development Mailing List (not for usage questions) 

 Subject: Re: [openstack-dev] [Ceilometer]:Duplicate messages with Ceilometer 
Kafka Publisher.
  
 
 
 
 
 Hi
  Which version of Kafka do you use?
  BR
  Simon
  
  
 
 On Thu, Aug 11, 2016 at 10:13 AM, Raghunath D  wrote:
  Hi , 
 
   We are injecting events into our custom plugin in Ceilometer. 
   The Ceilometer pipeline.yaml is configured to publish these events over 
Kafka and UDP, and we consume these samples using Kafka and UDP clients. 
 
   KAFKA publisher: 
   --- 
   When the events are sent continuously, we can see duplicate messages 
received in the Kafka client. 
   From the log it seems the Ceilometer Kafka publisher failed to send 
messages, but these messages 
   are still received by the Kafka server. So when Kafka resends these failed 
messages, we can see duplicates 
   received in the Kafka client. 
   Please find the attached log for reference. 
   Is this a known issue? 
   Is there any workaround for this issue? 
 
   UDP publisher: 
     No duplicate message issue is seen here; it is working as expected. 
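Producer retries in Kafka give at-least-once delivery, so duplicates after a
failed/retried publish are expected behaviour rather than a bug. Until the
publisher is made idempotent, one workaround is consumer-side deduplication
keyed on the sample's message_id. A minimal sketch (illustrative code, not
the actual Ceilometer or kafka-python client code):

```python
import json

def deduplicate(messages, seen=None):
    """Yield each JSON-encoded sample once, keyed by its message_id."""
    if seen is None:
        seen = set()
    for raw in messages:
        sample = json.loads(raw)
        msg_id = sample.get("message_id")
        if msg_id in seen:
            continue  # drop a copy delivered by a producer retry
        seen.add(msg_id)
        yield sample

# Two copies of the same sample, as a retried publish would produce.
batch = [
    json.dumps({"message_id": "a1", "counter_name": "cpu_util"}),
    json.dumps({"message_id": "a1", "counter_name": "cpu_util"}),
    json.dumps({"message_id": "b2", "counter_name": "disk.read.bytes"}),
]
unique = list(deduplicate(batch))
assert [s["message_id"] for s in unique] == ["a1", "b2"]
```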
 
         
 
 With Best Regards
 Raghunath Dudyala
 Tata Consultancy Services Limited
 Mailto: raghunat...@tcs.com
 Website: http://www.tcs.com
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
    
 

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-16 Thread Alioune
Hi all,
You can see below the debug logs from the creation of the service chain.
Regards,

vagrant@ubuntu:~/openstack_sfc$ neutron --debug  port-chain-create
--port-pair-group PG1 --port-pair-group PG2 --flow-classifier FC15 PC1
DEBUG: keystoneclient.session REQ: curl -g -i -X GET
http://192.168.56.15:35357/v2.0 -H "Accept: application/json" -H
"User-Agent: python-keystoneclient"
DEBUG: keystoneclient.session RESP: [200] Content-Length: 340 Vary:
X-Auth-Token Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu)
Connection: Keep-Alive Date: Mon, 15 Aug 2016 14:36:00 GMT Content-Type:
application/json x-openstack-request-id:
req-5efa5391-4c7e-4b5e-81ec-ba2b304bc423
RESP BODY: {"version": {"status": "stable", "updated":
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json",
"type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0",
"links": [{"href": "http://192.168.56.15:35357/v2.0/;, "rel": "self"},
{"href": "http://docs.openstack.org/;, "type": "text/html", "rel":
"describedby"}]}}

DEBUG: stevedore.extension found extension EntryPoint.parse('table =
cliff.formatters.table:TableFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('json =
cliff.formatters.json_format:JSONFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('shell =
cliff.formatters.shell:ShellFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('value =
cliff.formatters.value:ValueFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('yaml =
cliff.formatters.yaml_format:YAMLFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('yaml =
clifftablib.formatters:YamlFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('json =
clifftablib.formatters:JsonFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('html =
clifftablib.formatters:HtmlFormatter')
DEBUG: networking_sfc.cli.port_chain.PortChainCreate
get_data(Namespace(chain_parameters=None, columns=[], description=None,
flow_classifiers=[u'FC15'], formatter='table', max_width=0, name=u'PC1',
noindent=False, port_pair_groups=[u'PG1', u'PG2'], prefix='',
request_format='json', tenant_id=None, variables=[]))
DEBUG: keystoneclient.auth.identity.v2 Making authentication request to
http://192.168.56.15:35357/v2.0/tokens
DEBUG: stevedore.extension found extension EntryPoint.parse('port_pair =
networking_sfc.cli.port_pair')
DEBUG: stevedore.extension found extension
EntryPoint.parse('port_pair_group = networking_sfc.cli.port_pair_group')
DEBUG: stevedore.extension found extension
EntryPoint.parse('flow_classifier = networking_sfc.cli.flow_classifier')
DEBUG: stevedore.extension found extension EntryPoint.parse('port_chain =
networking_sfc.cli.port_chain')
DEBUG: keystoneclient.session REQ: curl -g -i -X GET
http://192.168.56.15:9696/v2.0/sfc/port_pair_groups.json?fields=id=PG1
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: {SHA1}b64a4351e9ee74a640d7070a3a16db1961215260"
DEBUG: keystoneclient.session RESP: [200] Date: Mon, 15 Aug 2016 14:36:01
GMT Connection: keep-alive Content-Type: application/json; charset=UTF-8
Content-Length: 70 X-Openstack-Request-Id:
req-63790c14-8c02-4b0d-b44d-185cdbdfbf10
RESP BODY: {"port_pair_groups": [{"id":
"128aee07-96b7-45cd-9090-699a60e57bf4"}]}

DEBUG: keystoneclient.session REQ: curl -g -i -X GET
http://192.168.56.15:9696/v2.0/sfc/port_pair_groups.json?fields=id=PG2
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: {SHA1}b64a4351e9ee74a640d7070a3a16db1961215260"
DEBUG: keystoneclient.session RESP: [200] Date: Mon, 15 Aug 2016 14:36:01
GMT Connection: keep-alive Content-Type: application/json; charset=UTF-8
Content-Length: 70 X-Openstack-Request-Id:
req-186c74b5-c878-4bad-8f51-1a8601612a5b
RESP BODY: {"port_pair_groups": [{"id":
"d4f47717-2dd0-4c2f-84a2-469802c0c922"}]}

DEBUG: keystoneclient.session REQ: curl -g -i -X GET
http://192.168.56.15:9696/v2.0/sfc/flow_classifiers.json?fields=id=FC15
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: {SHA1}b64a4351e9ee74a640d7070a3a16db1961215260"
DEBUG: keystoneclient.session RESP: [200] Date: Mon, 15 Aug 2016 14:36:01
GMT Connection: keep-alive Content-Type: application/json; charset=UTF-8
Content-Length: 70 X-Openstack-Request-Id:
req-9cac1849-d9c3-48b3-b20f-cd11948a549c
RESP BODY: {"flow_classifiers": [{"id":
"020a9ceb-3451-47af-ba09-650202009217"}]}

DEBUG: keystoneclient.session REQ: curl -g -i -X POST
http://192.168.56.15:9696/v2.0/sfc/port_chains.json -H "User-Agent:
python-neutronclient" -H "Content-Type: application/json" -H "Accept:
application/json" -H "X-Auth-Token:
{SHA1}b64a4351e9ee74a640d7070a3a16db1961215260" -d '{"port_chain":
{"flow_classifiers": ["020a9ceb-3451-47af-ba09-650202009217"], "name":
"PC1", "port_pair_groups": ["128aee07-96b7-45cd-9090-699a60e57bf4",
"d4f47717-2dd0-4c2f-84a2-469802c0c922"]}}'
DEBUG: keystoneclient.session RESP: 

[openstack-dev] [vitrage][vitrage-dashboard] missing icon/font in alarm list under material theme

2016-08-16 Thread Yujun Zhang
The alarm list is not displaying some icons/fonts correctly under the Material
theme. See the attached screenshots.

Could anybody have a look?

Tracked in https://bugs.launchpad.net/vitrage-dashboard/+bug/1613574

Material theme screenshots (attached): Screen Shot 2016-08-16 at 3.14.26 PM.png,
Screen Shot 2016-08-16 at 3.14.34 PM.png

Default theme screenshots (attached): Screen Shot 2016-08-16 at 3.13.58 PM.png,
Screen Shot 2016-08-16 at 3.14.13 PM.png
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage weekly IRC meeting is SKIPPED

2016-08-16 Thread Afek, Ifat (Nokia - IL)
Hi,

Vitrage IRC meeting tomorrow (Aug 17) will be SKIPPED, as many Vitrage 
contributors are away.

Thanks,
Ifat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] scenario evaluator not enabled by default

2016-08-16 Thread Schnider, Nofar (EXT - IL)
Hi Yujun,
I think you will find what you are looking for in “consistency_enforcer.py”.
Hope it helps.


Nofar Schnider
Senior Software Engineer
Applications & Analytics, Nokia
16 Atir Yeda
Kfar Saba, Israel 44643
M: +972 52 4326023
F: +972 9 793 3036

From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Monday, August 15, 2016 8:49 AM
To: OpenStack Development Mailing List (not for usage questions) 
; Rosensweig, Elisha (Nokia - IL) 

Subject: Re: [openstack-dev] [vitrage] scenario evaluator not enabled by default

Thanks for the expanation, Elisha. I understand the design now.

But I could not find the statement that enables the evaluator after the initial 
phase.

Could you help to point it out?
--
Yujun

On Thu, Aug 11, 2016 at 11:42 PM Rosensweig, Elisha (Nokia - IL) 
> wrote:
This is on purpose.

When Vitrage is started, it first runs a "consistency" round where it gets all 
the resources from its datasources and inserts them into the entity graph. Once 
this initial phase is over, the evaluator is run over the entire entity graph to 
check for meaningful patterns based on its templates.

The reason for this process is to avoid too much churn during the initial phase 
when Vitrage comes up. With so many changes done to the entity graph, it's best 
to wait for the initial collection phase to finish and then to do the analysis.
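
The two-phase startup described above can be sketched roughly as follows. This
is an illustrative sketch only; the class and method names are assumptions, not
Vitrage's actual code:

```python
# Sketch of the two-phase startup: load the entity graph with the evaluator
# disabled, then enable it and analyze the full graph once.
# Names are illustrative, not Vitrage's actual classes.

class Evaluator:
    def __init__(self):
        self.enabled = False
        self.processed = []

    def process(self, entity):
        # Template matching only happens once the evaluator is enabled.
        if self.enabled:
            self.processed.append(entity)


class EntityGraphService:
    def __init__(self, evaluator):
        self.evaluator = evaluator
        self.graph = []

    def start(self, initial_entities):
        # Phase 1: insert all datasource entities with the evaluator
        # disabled, so there is no churn on every single insert.
        for entity in initial_entities:
            self.graph.append(entity)
        # Phase 2: enable the evaluator and run it over the whole graph.
        self.evaluator.enabled = True
        for entity in self.graph:
            self.evaluator.process(entity)


evaluator = Evaluator()
service = EntityGraphService(evaluator)
service.start(["host-1", "instance-1"])
```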

Elisha

> From: Yujun Zhang 
> [mailto:zhangyujun+...@gmail.com]
> Sent: Thursday, August 11, 2016 5:49 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [vitrage] scenario evaluator not enabled by 
> default
>
> Sorry for having put a URL from a forked repo. It should be
> https://github.com/openstack/vitrage/commit/bdba10cb71b2fa3744e4178494fa860303ae0bbe#diff-6f1a277a2f6e9a567b38d646f19728bcL36

> But the content is the same
> --
> Yujun

> On Thu, Aug 11, 2016 at 10:43 PM Yujun Zhang 
> > wrote:
> It seems the scenario evaluator is not enabled when vitrage is started in 
> devstack installer.
>
> I dug a bit into the history; it seems the default value for the evaluator was 
> changed from True to False in an earlier commit [1].
>
> Is it breaking the starting of evaluator or I have missed some steps to 
> enable it explictily?
>
> - [1] 
> https://github.com/openzero-zte/vitrage/commit/bdba10cb71b2fa3744e4178494fa860303ae0bbe#diff-6f1a277a2f6e9a567b38d646f19728bcL36

> --
> Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for API docs

2016-08-16 Thread Shuu Mutou
Hi Anne,

AFAIK, the API WG adopted Swagger (OpenAPI) as the common tool for API docs.
Anne, has that decision changed? If not, do we also need to maintain so many 
RST files?

IMO, to keep the reference and the source code from conflicting, they should be 
as close to each other as possible, as follows. This decreases the maintenance 
cost of the documents and increases their reliability, so I believe our 
approach is closer to the ideal.

Best: references generated from the source code.
Better: references written in docstrings.

We know some projects abandoned this approach and now use RST + YAML, but we 
want to decrease the maintenance cost of the documents, so I think we should 
not create so many RST files.


I'm working on auto-generating the Swagger spec from the Magnum source code 
using pecan-swagger [1], and on improving pecan-swagger together with Michael 
McCune [2]. This will generate almost all of the Magnum API spec automatically 
in OpenAPI format. The same approach can be adopted by other APIs that use 
pecan and WSME. Please take a look at these as well.

[1] https://review.openstack.org/#/c/303235/
[2] https://github.com/elmiko/pecan-swagger/
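
As an aside, the generation idea can be illustrated with a tiny sketch. To be
clear, the function below is NOT pecan-swagger's real API; it is only a minimal
demonstration of deriving an OpenAPI 2.0 document from route metadata instead
of hand-maintaining it, and the Magnum-like routes are hypothetical:

```python
# Illustrative only: derive a minimal OpenAPI 2.0 (Swagger) document from
# route metadata, rather than maintaining the reference by hand.

def build_openapi_spec(title, version, routes):
    """Build a minimal OpenAPI 2.0 dict from (method, path, summary) tuples."""
    paths = {}
    for method, path, summary in routes:
        # Group operations under their path, keyed by lowercase HTTP method.
        paths.setdefault(path, {})[method.lower()] = {
            "summary": summary,
            "responses": {"200": {"description": "OK"}},
        }
    return {
        "swagger": "2.0",
        "info": {"title": title, "version": version},
        "paths": paths,
    }


# Hypothetical Magnum-like routes, for illustration only.
routes = [
    ("GET", "/v1/bays", "List bays"),
    ("POST", "/v1/bays", "Create a bay"),
]
spec = build_openapi_spec("Magnum API", "1.0", routes)
```

In a real setup this metadata would come from the pecan routing tree and WSME
signatures rather than a hand-written list, which is exactly what keeps the
reference from drifting away from the code.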

Regards,
Shu


> -Original Message-
> From: Anne Gentle [mailto:annegen...@justwriteclick.com]
> Sent: Monday, August 15, 2016 11:00 AM
> To: Hongbin Lu 
> Cc: openstack-dev@lists.openstack.org; enstack.org
> 
> Subject: Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common
> tooling for API docs
> 
> Hi Hongbin,
> 
> Thanks for asking. I'd like for teams to look for ways to innovate and
> integrate with the navigation as a good entry point for OpenAPI to become
> a standard for OpenStack to use. That said, we have to move forward and
> make progress.
> 
> Is Lars or anyone on the Magnum team interested in the web development work
> to integrate with the sidebar? See the work at
> https://review.openstack.org/#/c/329508 and my comments on
> https://review.openstack.org/#/c/351800/ saying that I would like teams
> to integrate first to provide the best web experience for people consuming
> the docs.
> 
> Anne
> 
> On Fri, Aug 12, 2016 at 4:43 PM, Hongbin Lu   > wrote:
> 
> 
>   Hi team,
> 
> 
> 
>   As mentioned in the email below, Magnum are not using common tooling
> for generating API docs, so we are excluded from the common navigation of
> OpenStack API. I think we need to prioritize the work to fix it. BTW, I
> notice there is a WIP patch [1] for generating API docs by using Swagger.
> However, I am not sure if Swagger belongs to “common tooling” (docs team,
> please confirm).
> 
> 
> 
>   [1] https://review.openstack.org/#/c/317368/
> 
> 
> 
> 
>   Best regards,
> 
>   Hongbin
> 
> 
> 
>   From: Anne Gentle [mailto:annegen...@justwriteclick.com
>  ]
>   Sent: August-10-16 3:50 PM
>   To: OpenStack Development Mailing List;
> openstack-d...@lists.openstack.org
> 
>   Subject: [openstack-dev] [api] [doc] API status report
> 
> 
> 
>   Hi all,
> 
>   I wanted to report on status and answer any questions you all have
> about the API reference and guide publishing process.
> 
> 
> 
>   The expectation is that we provide all OpenStack API information
> on developer.openstack.org. In order to
> meet that goal, it's simplest for now to have all projects use the
> RST+YAML+openstackdocstheme+os-api-ref extension tooling so that users
> see available OpenStack APIs in a sidebar navigation drop-down list.
> 
> 
> 
>   --Migration--
> 
>   The current status for migration is that all WADL content is
> migrated except for trove. There is a patch in progress and I'm in contact
> with the team to assist in any way.
> https://review.openstack.org/#/c/316381/
> 
> 
> 
> 
>   --Theme, extension, release requirements--
> 
>   The current status for the theme, navigation, and Sphinx extension
> tooling is contained in the latest post from Graham proposing a solution
> for the release number switchover and offers to help teams as needed:
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html
> I hope to meet the requirements deadline to get those changes landed.
> Requirements freeze is Aug 29.
> 
> 
> 
>   --Project coverage--
> 
>   The current status for project coverage is that these projects are
> now using the RST+YAML in-tree workflow and tools and publishing to
> http://developer.openstack.org/api-ref/
>   so they will be
> included in the upcoming API navigation sidebar intended to span all
> OpenStack APIs:
> 
> 
> 
>   designate 

Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-16 Thread Jim Meyer

> On Aug 15, 2016, at 10:49 AM, Doug Hellmann  wrote:
> 
> Excerpts from Jim Meyer's message of 2016-08-15 09:37:36 -0700:
>> A fast reply where others will expand further (I hope):
>> 
>>> On Aug 15, 2016, at 9:01 AM, Doug Hellmann  wrote:
>>> 
 My vote is the following:
 
 GET /capabilities <-- returns a set of *actions* or *abilities* that the 
 user is capable of performing
>>> 
>>> Does this relate in any way to how DefCore already uses “capabilities”?
>> 
>> Only a bit, and not in a way I’d be deeply concerned about.
>> 
>> The Interoperability Working Group (née DefCore) points to a specific 
>> Tempest test which asserts that a service has a “capability” and determines 
>> if that capability is “core,” thus required to be provided in this way in 
>> order to claim that the underlying cloud is an OpenStack cloud.
> 
> Do you think that the meanings of "capability of a cloud" and
> "capability of a user of a cloud" are far enough apart to avoid
> confusion?

The two contexts are distinct enough and with little enough overlap that I 
believe the set of folks who might ever discuss both at the same time is less 
than twenty.

Unless Jay takes a sudden interest in DefCore. Then it might *equal* twenty. ;D

--j


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev