Re: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-26 Thread duon...@vn.fujitsu.com
+1

nice work

> -----Original Message-----
> From: Michał Jastrzębski [mailto:inc...@gmail.com]
> Sent: Wednesday, June 14, 2017 10:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for
> core
> 
> Hello,
> 
> With great pleasure I'm kicking off another core vote for the kolla-ansible
> and kolla teams :) This one is about spsurya. Voting will be open for 2 weeks
> (till 28th Jun).
> 
> Consider this mail my +1 vote, you know the drill:)
> 
> Regards,
> Michal
> 
> 

-
duonghq
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-26 Thread Melvin Hillsman
On Wed, Jun 21, 2017 at 11:55 AM, Matt Riedemann wrote:

> On 6/21/2017 11:17 AM, Shamail Tahir wrote:
>
>>
>>
>> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez wrote:
>>
>> Shamail Tahir wrote:
>> > In the past, governance has helped (on the UC WG side) to reduce
>> > overlaps/duplication in WGs chartered for similar objectives. I would
>> > like to understand how we will handle this (if at all) with the new
>> > SIG proposal?
>>
>> I tend to think that any overlap/duplication would get solved naturally,
>> without having to force everyone through an application process that may
>> discourage natural emergence of such groups. I feel like an application
>> process would be premature optimization. We can always encourage groups
>> to merge (or clean them up) after the fact. How many overlapping or
>> duplicative groups did you end up having?
>>
>>
>> Fair point, it wasn't many. The reason I recalled this effort was because
>> we had to go through the exercise after the fact, which made the volume of
>> WGs to review much larger than if we had asked about each group's purpose
>> when it was created. As long as we check back periodically and don't let
>> the validation/cleanup work pile up, this is probably a non-issue.
>>
>>
>> > Also, do we have to replace WGs as a concept or could SIGs
>> > augment them? One suggestion I have would be to keep projects on the TC
>> > side and WGs on the UC side and then allow for spin-up/spin-down of SIGs
>> > as needed for accomplishing specific goals/tasks (picture of a diagram
>> > I created at the Forum[1]).
>>
>> I feel like most groups should be inclusive of all community, so I'd
>> rather see the SIGs being the default, and ops-specific or dev-specific
>> groups the exception. To come back to my Public Cloud WG example, you
>> need to have devs and ops in the same group in the first place before
>> you would spin up an "address scalability" SIG. Why not just have a
>> Public Cloud SIG in the first place?
>>
>>
>> +1. I originally interpreted this as each use case becoming its own SIG,
>> versus the SIG being able to be segment-oriented (in which multiple
>> use cases could be pursued).
>>
>>
>>  > [...]
>> > Finally, how will this change impact the ATC/AUC status of the SIG
>> > members for voting rights in the TC/UC elections?
>>
>> There are various options. Currently you give UC WG leads the AUC
>> status. We could give any SIG lead both statuses, or only give the AUC
>> status to a subset of SIGs that the UC deems appropriate. It's really an
>> implementation detail imho. (Also, I would expect any SIG lead to already
>> be both AUC and ATC somehow anyway, so that may be a non-issue.)
>>
>>
>> We can discuss this later because it really is an implementation detail.
>> Thanks for the answers.
>>
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>>
>>
>> --
>> Thanks,
>> Shamail Tahir
>> t: @ShamailXD
>> tz: Eastern Time
>>
>>
>>
> I think a key point you're going to want to convey and repeat ad nauseam
> with this SIG idea is that each SIG is focused on a specific use case and
> they can be spun up and spun down. Assuming that's what you want them to be.
>
> One problem I've seen with the various work groups is they overlap in a
> lot of ways but are probably driven as silos. For example, how many
> different work groups are there that care about scaling? So rather than
> have 5 work groups that all overlap on some level for a specific issue,
> create a SIG for that specific issue so the people involved can work on
> defining the specific problem and work to come up with a solution that can
> then be implemented by the upstream development teams, either within a
> single project or across projects depending on the issue. And once the
> specific issue is resolved, close down the SIG.


> Examples here would be things that fall under proposed community wide
> goals for a release, like running API services under wsgi, py3 support,
> moving policy rules into code, hierarchical quotas, RBAC "admin 

Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-26 Thread Édouard Thuleau
Hi Armando,

I opened a launchpad bug [1]. I'll try to propose a patch on one of the
service plugins to enable a pluggable backend driver (rough sketch below).
I'll also look at how we can add tests to check that service plugins work
with a dummy core plugin not based on the Neutron DB model.

[1] https://bugs.launchpad.net/neutron/+bug/1700651
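
As a rough sketch of the direction I have in mind (class and service names
are hypothetical, this is not the actual patch): a service plugin could load
its backend through Neutron's existing service driver mechanism, keeping a
DB-based driver as the default so current deployments are unaffected:

from neutron.services import service_base


class TagPluginBase(service_base.ServicePluginBase):
    """A service plugin that delegates to a pluggable backend driver."""

    supported_extension_aliases = ['tag']

    def __init__(self):
        super(TagPluginBase, self).__init__()
        # load_drivers() reads the [service_providers] configuration and
        # returns the available drivers plus the default provider; a
        # DB-backed driver would be registered as that default.
        drivers, default_provider = service_base.load_drivers('TAG', self)
        self.driver = drivers[default_provider]

    def get_plugin_type(self):
        return 'TAG'

    def get_plugin_description(self):
        return 'Tag service plugin with a pluggable backend'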

Édouard.

On Thu, Jun 22, 2017 at 11:40 PM, Armando M.  wrote:
>
>
> On 22 June 2017 at 17:24, Édouard Thuleau  wrote:
>>
>> Hi Armando,
>>
>> I have not opened any bug report. But if a core plugin implements only
>> the NeutronPluginBaseV2 interface [1] and not the NeutronDbPluginV2
>> interface [2], most of the service plugins in that list will be
>> initialized without any errors (only the timestamp plugin fails to
>> initialize because it tries to do DB stuff in its constructor [3]).
>> And all API extensions of those service plugins are listed as supported,
>> but none of them works. Resources are not extended (tag, revision,
>> auto-allocate), or some API extensions return 404
>> (network-ip-availability or flavors).
>>
>> What I proposed is to improve all the service plugins in that list to
>> support pluggable backend drivers (thanks to the Neutron service driver
>> mechanism [4]) and to use by default a driver based on the Neutron DB
>> (as it is implemented today). That would permit a core plugin which does
>> not implement the Neutron DB model to provide its own driver. But until
>> all service plugins are fixed, I proposed a workaround to disable them.
>
>
> I would recommend against the workaround of disabling them because of the
> stated rationale.
>
> Can you open a bug report, potentially when you're ready to file a fix (or
> enable someone else to take ownership of the fix)? This way we can have a
> more effective conversation either on the bug report or code review.
>
> Thanks,
> Armando
>
>>
>>
>> [1]
>> https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L30
>> [2]
>> https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L124
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/services/timestamp/timestamp_plugin.py#L32
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
>>
>> Édouard.
>>
>> On Thu, Jun 22, 2017 at 12:29 AM, Armando M.  wrote:
>> >
>> >
>> > On 21 June 2017 at 17:40, Édouard Thuleau 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> @Chaoyi,
>> >> I don't want to change the core plugin interface. But I'm not sure we
>> >> are talking about the same interface. I had a very quick look into the
>> >> tricircle code and I think it uses the NeutronDbPluginV2 interface [1],
>> >> which implements the Neutron DB model. Our Contrail Neutron plugin
>> >> implements the NeutronPluginBaseV2 interface [2]. Anyway,
>> >> NeutronDbPluginV2 inherits from NeutronPluginBaseV2 [3].
>> >> Thanks for the pointer to the stadium paragraph.
>> >
>> >
>> > Is there any bug report that captures the actual error you're facing?
>> > Out of
>> > the list of plugins that have been added to that list over time, most
>> > work
>> > just exercising the core plugin API, and we can look into the ones that
>> > don't to figure out whether we overlooked some design abstractions
>> > during
>> > code review.
>> >
>> >>
>> >>
>> >> @Kevin,
>> >> Service plugins loaded by default are defined in a constant list [4],
>> >> and I don't see how I can prevent a default service plugin from being
>> >> loaded [5].
>> >>
>> >> [1]
>> >>
>> >> https://github.com/openstack/tricircle/blob/master/tricircle/network/central_plugin.py#L128
>> >> [2]
>> >>
>> >> https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py#L113
>> >> [3]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L125
>> >> [4]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> >> [5]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/manager.py#L190
>> >>
>> >> Édouard.
>> >>
>> >> On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton 
>> >> wrote:
>> >> > Why not just delete the service plugins you don't support from the
>> >> > default
>> >> > plugins dict?
>> >> >
>> >> > On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau
>> >> > 
>> >> > wrote:
>> >> >>
>> >> >> Ok, we would like to help on that. How can we start?
>> >> >>
>> >> >> I think the issue I raised in this thread must be the first point to
>> >> >> address, and my second proposition seems to be the correct one. What
>> >> >> do you think?
>> >> >> But it will need some time, and I'm not sure we'll be able to fix all
>> >> >> service plugins loaded by default before the next Pike release.
>> >> >>
>> >> >> I'd like to propose a workaround until all default service 

Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-26 Thread Michael Johnson
Hello Yipei,

 

You are on the right track to debug this.

When you are logged into the amphora, please check the following logs to see 
what the amphora-agent error is:

 

/var/log/amphora-agent.log

And

/var/log/syslog

 

One of those two logs will have the error information.
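
If the logs don't make it obvious, you can also hit the agent API directly
from the controller to reproduce the failing call. A minimal sketch, assuming
the v0.5 agent API and devstack-style certificate paths (both of those are
assumptions -- adjust to your deployment):

import requests

AMP_IP = '192.168.0.57'  # hypothetical lb-mgmt-net address of your amphora

resp = requests.get(
    'https://{0}:9443/0.5/info'.format(AMP_IP),
    # The agent requires the controller's client certificate.
    cert='/etc/octavia/certs/client.cert-and-key.pem',  # assumed path
    verify=False,  # or point this at the server CA bundle
)
print(resp.status_code, resp.text)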

 

Michael

 

 

From: Yipei Niu [mailto:newy...@gmail.com] 
Sent: Sunday, June 25, 2017 8:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [octavia] fail to plug vip to amphora

 

Hi, all,

 

I am trying to create a load balancer in octavia. The amphora can be booted 
successfully and can be reached via ICMP. However, octavia fails to plug the 
VIP into the amphora through the amphora client API, which returns a 500 
status code, causing the errors below.

 

   |__Flow 
'octavia-create-loadbalancer-flow': InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
Traceback (most recent call last):

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py",
 line 53, in _execute_task

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
result = task.execute(**arguments)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 240, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 219, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphora, loadbalancer, amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
137, in post_vip_plug

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
net_info)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
378, in plug_vip

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
return exc.check_exception(r)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py", 
line 32, in check_exception

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
raise responses[status_code]()

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker

 

To fix the problem, I logged in to the amphora and found that there is an HTTP 
server process listening on port 9443, so I think the amphora API service is 
active. But I do not know how to further investigate what error happens inside 
the amphora API service and how to solve it. I look forward to your valuable 
comments.

 

Best regards,

Yipei 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] documentation migration and consolidation

2017-06-26 Thread Steve Martinelli
Great to see this consolidation happen!

On Mon, Jun 26, 2017 at 4:09 PM, Lance Bragstad  wrote:

> Hey all,
>
> We recently merged the openstack-manuals admin-guide into keystone [0]
> and there is a lot of duplication between the admin-guide and keystone's
> "internal" operator-guide [1]. I've started proposing small patches to
> consolidate the documentation from the operator-guide to the official
> admin-guide. In case you're interested in helping out, please use the
> remove-duplicate-docs branch [2]. The admin-guide is really well written
> and it would be great to get some reviews from members of the docs team
> if possible to help us maintain the style and consistency of the
> admin-guide.
>
> Ping me if you have any questions. Thanks!
>
>
> [0] https://review.openstack.org/#/c/469515/
> [1] https://docs.openstack.org/developer/keystone/configuration.html
> [2]
> https://review.openstack.org/#/q/status:open+project:
> openstack/keystone+branch:master+topic:remove-duplicate-docs
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] documentation migration and consolidation

2017-06-26 Thread Lance Bragstad
Hey all,

We recently merged the openstack-manuals admin-guide into keystone [0]
and there is a lot of duplication between the admin-guide and keystone's
"internal" operator-guide [1]. I've started proposing small patches to
consolidate the documentation from the operator-guide to the official
admin-guide. In case you're interested in helping out, please use the
remove-duplicate-docs branch [2]. The admin-guide is really well written
and it would be great to get some reviews from members of the docs team
if possible to help us maintain the style and consistency of the
admin-guide.

Ping me if you have any questions. Thanks!


[0] https://review.openstack.org/#/c/469515/
[1] https://docs.openstack.org/developer/keystone/configuration.html
[2]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:remove-duplicate-docs




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] New Meeting Time

2017-06-26 Thread MONTEIRO, FELIPE C
Hi all,

Murano has a new meeting time, which has been rescheduled to 13:00 UTC, as per 
[0], beginning tomorrow. The meetings will continue every Tuesday as usual.

[0] https://review.openstack.org/#/c/468182/

Felipe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-06-26 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Fixing the way we access Glance:
1.1. quick(ish) fix to enable using service catalog: 
https://review.openstack.org/476498
2. Booting from volume:
2.1. Skipping deployment logic: https://review.openstack.org/#/c/454243/
2.2. CRUD notifications: https://review.openstack.org/#/c/463930/
3. Rolling upgrades:
3.1.  'Add new dbsync command with first online data migration': 
https://review.openstack.org/#/c/408556/
4. Nova patch for VIF attach/detach: https://review.openstack.org/#/c/419975/
5. Driver composition reform
5.1. Classic driver deprecation spec: 
https://review.openstack.org/#/c/464046/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 19 Jun 2017 and 26 Jun 2017)
- Ironic: 253 bugs (+4) + 255 wishlist items (+4). 24 new (-1), 207 in progress 
(+7), 2 critical (+2), 31 high (+1) and 31 incomplete
- Inspector: 12 bugs (-1) + 30 wishlist items. 1 new, 13 in progress (-1), 0 
critical, 3 high and 3 incomplete
- Nova bugs with Ironic tag: 13. 3 new (+2), 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- hshiina is looking into Nova-side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/413324 - iPXE template - Has 
review feedback - Hopefully updated revision later today. Pushing revision now 
<-- Has 3x+2 and a +A - Blocked by gate
https://review.openstack.org/#/c/454243/ - Skip deployment if BFV - 
Has 1x+2 - Has a -1 that needs to be addressed
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change  - Needs reviews
https://review.openstack.org/#/c/463930/ - CRUD notification 
updates for volume objects. - Has 1x +2
https://review.openstack.org/#/c/463908/ - Enable cinder storage 
interface for generic hardware - Has failing tests at present.
https://review.openstack.org/#/c/463972/ - Add storage_interface to 
notifications - 2x+2, 3x+1
https://review.openstack.org/#/c/466333/ - Devstack changes for Boot 
from Volume
Additional patches exist for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- next patch ready for reviews: 'Add new dbsync command with first online 
data migration': https://review.openstack.org/#/c/408556/
- to address restarting services after unpinning, spec ready for reviews: 
'SIGHUP restarts services with updated configs': 
https://review.openstack.org/474309
- during testing last week, discovered some minor issues and new 
port.physical_network field not supported in rolling upgrades; rloo to push up 
patch in next 1-2 days to address that
- Testing work: done as per spec, but rloo wants to ask vasyl whether we 
can improve. grenade test will do upgrade so we have old API sending requests 
to old and/or new conductor, but rloo doesn't think there is anything to 
control -which- conductor handles the request, so what if old conductor handles 
all the requests?

Reference architecture 

Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-26 Thread Doug Hellmann
Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
> Hi everyone,
> 
> Doug and I have written up a spec following on from the conversation [0] that 
> we had regarding the documentation publishing future.
> 
> Please take the time out of your day to review the spec as this affects 
> *everyone*.
> 
> See: https://review.openstack.org/#/c/472275/
> 
> I will be on PTO from the 9th – 19th of June. If you have any pressing concerns, 
> please email me and I will get back to you as soon as I can, or, email Doug 
> Hellmann and hopefully he will be able to assist you.
> 
> Thanks,
> 
> Alex
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html

Someone left a few questions about the directions on the tracking
etherpad [1]. I tried to answer them in place, but next time *please*
post to the mailing list. I have no idea how long those questions were
there before I noticed them, and given how little time we have to do the
work I want to make sure everyone understands as soon as possible.

Doug

[1] https://etherpad.openstack.org/p/doc-migration-tracking

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for Networking-SFC

2017-06-26 Thread gordon chung


On 25/06/17 08:10 AM, rajeev.satyanaray...@wipro.com wrote:
> I am interested to know if there are any meters available for monitoring
> SFC through ceilometer, like the number of flows associated with an SFC or
> packets in/out for an SFC, etc.?
>
> If they are available, please let me know how to configure and use them.
> If not, are there any plans to support them in coming releases?
>

i'm unaware of a meter in ceilometer that does this but i'd welcome it 
if contributed. please don't hesitate to reach out if you have questions 
on adding new meters to ceilometer.

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-26 Thread Mikhail Fedosin
On Jun 26, 2017 7:14 PM, "Jay Pipes"  wrote:

On 06/26/2017 11:32 AM, Mikhail Fedosin wrote:


>
> On Jun 26, 2017 5:54 PM, "Jay Pipes" wrote:
>
> On 06/26/2017 10:35 AM, Mikhail Fedosin wrote:
>
> * Storage of secrets - a new artifact type in Glare, which
> will store private information (keys, passwords, etc.) in an
> encrypted form (like in Barbican).
>
>
> Does the above mean you are implementing a shared secret storage
> solution, or that you are going to use an existing solution like
> Barbican that does that?
>
> Secrets is a plugin for Glare that we developed for the Nokia CloudBand
> platform, and they just decided to open-source it. It doesn't use Barbican;
> technically it is an oslo.versionedobjects class.
>

Sorry to hear that you opted not to use Barbican.

I think it's only because Keycloak integration is required by Nokia's
system and Barbican doesn't support it.


But, I'm confused what oslo.versionedobjects has to do with secrets
storage. Could you explain?

Oslo.versionedobjects just defines the structure of an artifact type. But we
also implemented two new field types for oslo_vo - Blob and Folder - which
can be used similarly to Integer or String.

When a user writes data to a Blob field, it is automatically decoded and
uploaded to a cloud store by the glance_store library. And vice versa: when a
user reads data from a Blob field, it is downloaded from the store and
decoded.

So, consider Glare as a synergy of glance_store and oslo.versionedobjects
with a RESTful API on top.
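
To illustrate the pattern (hypothetical names, plain oslo.versionedobjects
only -- the Blob and Folder field types live in Glare itself, so a StringField
stands in for Blob here):

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class SecretArtifact(base.VersionedObject):
    """Toy artifact type: a named secret with an opaque payload."""

    VERSION = '1.0'

    fields = {
        'name': fields.StringField(),
        # In Glare this would be a Blob field whose reads/writes are routed
        # through glance_store, as described above.
        'payload': fields.StringField(),
    }


secret = SecretArtifact(name='db-password', payload='s3cr3t')
print(secret.name)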



Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-26 Thread Chris Friesen

On 06/25/2017 02:09 AM, Sahid Orentino Ferdjaoui wrote:

On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:

On 06/23/2017 09:35 AM, Henning Schild wrote:

Am Fri, 23 Jun 2017 11:11:10 +0200
schrieb Sahid Orentino Ferdjaoui :



In Linux RT context, and as you mentioned, the non-RT vCPU can acquire
some guest kernel lock, then be pre-empted by the emulator thread while
holding this lock. This situation blocks the RT vCPUs from doing their
work. So that is why we have implemented [2]. For DPDK I don't think
we have such problems because it's running in userland.

So for the DPDK context I think we could have a mask like we have for RT
and basically consider vCPU0 as handling best-effort work (emulator
threads, SSH...). I think that's the current pattern used by DPDK users.


DPDK is just a library and one can imagine an application that has
cross-core communication/synchronisation needs where the emulator
slowing down vcpu0 will also slow down vcpu1. Your DPDK application would
have to know which of its cores did not get a full pCPU.

I am not sure what the DPDK example is doing in this discussion; would
that not just be cpu_policy=dedicated? I guess the normal behaviour of
dedicated is that emulators and io happily share pCPUs with vCPUs, and
you are looking for a way to restrict emulators/io to a subset of pCPUs
because you can live with some of them being not 100%.


Yes.  A typical DPDK-using VM might look something like this:

vCPU0: non-realtime, housekeeping and I/O, handles all virtual interrupts
and "normal" linux stuff, emulator runs on same pCPU
vCPU1: realtime, runs in tight loop in userspace processing packets
vCPU2: realtime, runs in tight loop in userspace processing packets
vCPU3: realtime, runs in tight loop in userspace processing packets

In this context, vCPUs 1-3 don't really ever enter the kernel, and we've
offloaded as much kernel work as possible from them onto vCPU0.  This works
pretty well with the current system.


For RT we have to isolate the emulator threads to an additional pCPU
per guests or as your are suggesting to a set of pCPUs for all the
guests running.

I think we should introduce a new option:

- hw:cpu_emulator_threads_mask=^1

If on 'nova.conf' - that mask will be applied to the set of all host
CPUs (vcpu_pin_set) to basically pack the emulator threads of all VMs
running here (useful for RT context).


That would allow modelling exactly what we need.
In nova.conf we are talking absolute known values; no need for a mask,
and a set is much easier to read. Also, using the same name does not
sound like a good idea.
And the name vcpu_pin_set clearly suggests what kind of load runs here;
if using a mask it should be called pin_set.


I agree with Henning.

In nova.conf we should just use a set, something like
"rt_emulator_vcpu_pin_set" which would be used for running the emulator/io
threads of *only* realtime instances.


I don't agree with you: we have a set of pCPUs and we want to
subtract some of them for the emulator threads. We need a mask. The
only set we need is the one selecting which pCPUs Nova can use
(vcpu_pin_set).
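
For reference, the mask/set spec syntax under discussion (the "0-7,^1" format
that vcpu_pin_set already accepts) expands roughly like this toy parser (not
nova's actual implementation):

def parse_cpu_spec(spec):
    include, exclude = set(), set()
    for part in spec.split(','):
        # A leading '^' marks a CPU (or range) to exclude from the set.
        target = exclude if part.startswith('^') else include
        part = part.lstrip('^')
        if '-' in part:
            lo, hi = map(int, part.split('-'))
            target.update(range(lo, hi + 1))
        else:
            target.add(int(part))
    return include - exclude

print(sorted(parse_cpu_spec('0-7,^1')))  # [0, 2, 3, 4, 5, 6, 7]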


We may also want to have "rt_emulator_overcommit_ratio" to control how many
threads/instances we allow per pCPU.


I'm not really sure I understand this point. If it is to indicate
that for an isolated pCPU we want X guest emulator threads, the same
behavior is achieved by the mask. A host for realtime is dedicated to
realtime, with no overcommitment, and the operators know the number of host
CPUs, so they can easily deduce a ratio and thus the corresponding mask.


Suppose I have a host with 64 CPUs.  I reserve three for host overhead and 
networking, leaving 61 for instances.  If I have instances with one non-RT vCPU 
and one RT vCPU then I can run 30 instances.  If instead my instances have one 
non-RT and 5 RT vCPUs then I can run 12 instances.  If I put all of my emulator 
threads on the same pCPU, it might make a difference whether I put 30 sets of 
emulator threads or 12 sets.


The proposed "rt_emulator_overcommit_ratio" would simply say "nova is allowed to 
run X instances worth of emulator threads on each pCPU in 
"rt_emulator_vcpu_pin_set".  If we've hit that threshold, then no more RT 
instances are allowed to schedule on this compute node (but non-RT instances 
would still be allowed).


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for Networking-SFC

2017-06-26 Thread Cathy Zhang
Hi Rajeev,

There is no meter yet. I think it is a good idea to add it.
Would you like to share your thought in more detail?

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, June 26, 2017 2:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for 
Networking-SFC

Hi Rajeev, there are no meters as far as I know and I'm not aware of any plans 
at the moment.
What else do you have in mind in terms of monitoring?

Best regards,
Igor.

From: rajeev.satyanaray...@wipro.com 
[mailto:rajeev.satyanaray...@wipro.com]
Sent: Sunday, June 25, 2017 1:10 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for 
Networking-SFC


Hi All,



I am interested to know if there are any meters available for monitoring SFC 
through ceilometer, like the number of flows associated with an SFC or packets 
in/out for an SFC, etc.?

If they are available, please let me know how to configure and use them. If 
not, are there any plans to support them in coming releases?



Thanking you,



Regards,

Rajeev.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-26 Thread Clark Boylan
On Mon, Jun 26, 2017, at 10:31 AM, Boris Pavlovic wrote:
> Mike,
> 
> I was recently helping an intern join the OpenStack community and make
> some contributions.
>
> And I found that the current workflow is extremely complex, and I think
> not all people who want to contribute can get through it.
> 
> Current workflow is:
> - Go to Gerrit sign-in
> - Find how to contribute to Gerrit (fail with this because no ssh key)
> - Find in Gerrit where to upload the ssh key (because no agreement)
> - Find in Gerrit where to accept the License agreement (fail because your
> agreement is invalid and contact info should be provided in Gerrit)
> - "Server can't accept contact information" (is what you see in gerrit)
> - Go to OpenStack.org sign-in (to fix the problem with Gerrit)
> - Update contact information
> - When you try to contribute your first commit (if you already created
> it, you won't be able to until you do git commit --amend, so git review
> will add the change-id)

Git review should automatically do this last step for you if a change id
is missing.

> 
> Overall it would take 1-2 days for people not familiar with OpenStack.
> 
> 
> What about if one makes a "Sign-Up" page:
>
> 1) Few steps: provide username, contact info, agreement, SSH key (and it
> will do all the work for you to set up Gerrit, OpenStack, ...)
> 2) After one finishes the form, one gets instructions for one's OS on how
> to set up and properly run git review
> 3) Maybe a few tutorials (how to find some bug, how to test it, and where
> the docs, devstack, ... are)
>
> That would simplify the onboarding process...

I think that Jeremy (fungi) has work in progress to tie electoral rolls
to foundation membership via an external lookup api that was recently
added to the foundation membership site. This means that we shouldn't
need to check that gerrit account info matches foundation account info
at CLA signing time anymore (at least this is my understanding, Jeremy
can correct me if I am wrong).

If this is the case it should make account setup much much simpler. You
just add an ssh key and sign the cla without worrying about account
details lining up.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-26 Thread Amy Marrich
Mike,

We've gone through some of this with the Git and Gerrit Lunch and Learns
and the Upstream Institute, but those are both live teaching type
situations. I'd be more than happy to help incorporate some of the stuff we
already have into an on-line effort for those who can't attend in person so
that we can help get them the same experience if not close to it.

Amy (aka spotz)

On Fri, Jun 23, 2017 at 3:17 PM, Mike Perez  wrote:

> Hello all,
>
> Every month we have people asking on IRC or the dev mailing list having
> interest in working on OpenStack, and sometimes they're given different
> answers from people, or worse, no answer at all.
>
> Suggestion: lets work our efforts together to create some common
> documentation so that all teams in OpenStack can benefit.
>
> First it’s important to note that we’re not just talking about code
> projects here. OpenStack contributions come in many forms such as running
> meet ups, identifying use cases (product working group), documentation,
> testing, etc. We want to make sure those potential contributors feel
> welcomed too!
>
> What is common documentation? Things like setting up Git, the many
> accounts you need to setup to contribute (gerrit, launchpad, OpenStack
> foundation account). Not all teams will use some common documentation, but
> the point is one or more projects will use them. Having the common
> documentation worked on by various projects will better help prevent
> duplicated efforts, inconsistent documentation, and hopefully just more
> accurate information.
>
> A team might use special tools to do their work. These can also be
> integrated in this idea as well.
>
> Once we have common documentation we can have something like:
> 1. Choose your own adventure: I want to contribute by code
> 2. What service type are you interested in? (Database, Block storage,
> compute)
> 3. Here’s step-by-step common documentation to setting up Git, IRC,
> Mailing Lists, Accounts, etc.
> 4. A service type project might choose to also include additional
> documentation in that flow for special tools, etc.
>
> Important things to note in this flow:
> * How do you want to contribute?
> * Here are **clear** names that identify the team. Not code names like
> Cloud Kitty, Cinder, etc.
> * The documentation should really aim to not be daunting:
> * Someone should be able to glance at it and feel like they can finish
> things in five minutes. Not be yet another tab left in their browser that
> they’ll eventually forget about
> * No wall of text!
> * Use screen shots
> * Avoid covering every issue you could hit along the way.
>
> ## Examples of More Simple Documentation
> I worked on some documentation for the Upstream University preparation
> that has received excellent feedback meet close to these suggestions:
> * IRC [1]
> * Git [2]
> * Account Setup [3]
>
> ## 500 Feet Birds Eye view
> There will be a Contributor landing page on the openstack.org website.
> Existing contributors will find reference links to quickly jump to things.
> New contributors will find a banner at the top of the page to direct them
> to the choose your own adventure to contributing to OpenStack, with ordered
> documentation flow that reuses existing documentation when necessary.
> Picture also a progress bar somewhere to show how close you are to being
> ready to contribute to whatever team. Of course there are a lot of other
> fancy things we can come up with, but I think getting something up as an
> initial pass would be better than what we have today.
>
> Here's an example of what the sections/chapters could look like:
>
> - Code
> * Volumes (Cinder)
>  * IRC
>  * Git
>  * Account Setup
>  * Generating Configs
> * Compute (Nova)
>  * IRC
>  * Git
>  * Account Setup
> * Something about hypervisors (matrix?)
> -  Use Cases
> * Products (Product working group)
> * IRC
> * Git
> * Use Case format
>
> There are some rough mock up ideas [4]. Probably Sphinx will be fine for
> this. Potentially we could use this content for conference lunch and
> learns, upstream university, and the on-boarding events at the Forum. What
> do you all think?
>
> [1] - http://docs.openstack.org/upstream-training/irc.html
> [2] - http://docs.openstack.org/upstream-training/git.html
> [3] - http://docs.openstack.org/upstream-training/accounts.html
> [4] - https://www.dropbox.com/s/o46xh1cp0sv0045/OpenStack%
> 20contributor%20portal.pdf?dl=0
>
> —
>
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack 

Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-26 Thread Boris Pavlovic
Mike,

I was recently helping an intern join the OpenStack community and make some
contributions.

And I found that the current workflow is extremely complex, and I think not
all people who want to contribute can get through it.

Current workflow is:
- Go to Gerrit sign-in
- Find how to contribute to Gerrit (fail with this because no ssh key)
- Find in Gerrit where to upload the ssh key (because no agreement)
- Find in Gerrit where to accept the License agreement (fail because your
agreement is invalid and contact info should be provided in Gerrit)
- "Server can't accept contact information" (is what you see in gerrit)
- Go to OpenStack.org sign-in (to fix the problem with Gerrit)
- Update contact information
- When you try to contribute your first commit (if you already created it,
you won't be able to until you do git commit --amend, so git review will add
the change-id)

Overall it would take 1-2 days for people not familiar with OpenStack.


What about if one makes a "Sign-Up" page:

1) Few steps: provide username, contact info, agreement, SSH key (and it
will do all the work for you to set up Gerrit, OpenStack, ...)
2) After one finishes the form, one gets instructions for one's OS on how to
set up and properly run git review
3) Maybe a few tutorials (how to find some bug, how to test it, and where
the docs, devstack, ... are)

That would simplify the onboarding process...

Best regards,
Boris Pavlovic

On Mon, Jun 26, 2017 at 2:45 AM, Alexandra Settle 
wrote:

> I think this is a good idea :) thanks Mike. We get a lot of people coming
> to the docs channel or ML asking for help/where to start, and sometimes it's
> difficult to point them in the right direction.
>
>
>
> Just from experience working with contributor documentation, I’d avoid all
> screen shots if you can – updating them whenever the process changes
> (surprisingly often) is a lot of unnecessary technical debt.
>
>
>
> The docs team put a significant amount of effort in a few releases back
> writing a pretty comprehensive Contributor Guide. For the purposes you
> describe below, I imagine a lot of the content here could be adapted. The
> process of setting up for code and docs is exactly the same:
> http://docs.openstack.org/contributor-guide/index.html
>
>
>
> I also wonder if we could include a ‘what is openstack’ 101 for new
> contributors. I find that there is a **lot** of material out there, but
> it is often very hard to explain to people what each project does, how they
> all interact, why we install from different sources, why we have
> official and unofficial projects, etc. It doesn’t have to be seriously
> in-depth, but an overview that points people who are interested in the
> right directions. Often this will help people decide on what project they’d
> like to undertake.
>
>
>
> Cheers,
>
>
>
> Alex
>
>
>
> *From: *Mike Perez 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Friday, June 23, 2017 at 9:17 PM
> *To: *OpenStack Development Mailing List  openstack.org>
> *Cc: *Wes Wilson , "ild...@openstack.org" <
> ild...@openstack.org>, "knel...@openstack.org" 
> *Subject: *[openstack-dev] [docs][all][ptl] Contributor Portal and Better
> New Contributor On-boarding
>
>
>
> Hello all,
>
>
>
> Every month we have people asking on IRC or the dev mailing list having
> interest in working on OpenStack, and sometimes they're given different
> answers from people, or worse, no answer at all.
>
>
>
> Suggestion: lets work our efforts together to create some common
> documentation so that all teams in OpenStack can benefit.
>
>
>
> First it’s important to note that we’re not just talking about code
> projects here. OpenStack contributions come in many forms such as running
> meet ups, identifying use cases (product working group), documentation,
> testing, etc. We want to make sure those potential contributors feel
> welcomed too!
>
>
>
> What is common documentation? Things like setting up Git, the many
> accounts you need to setup to contribute (gerrit, launchpad, OpenStack
> foundation account). Not all teams will use some common documentation, but
> the point is one or more projects will use them. Having the common
> documentation worked on by various projects will better help prevent
> duplicated efforts, inconsistent documentation, and hopefully just more
> accurate information.
>
>
>
> A team might use special tools to do their work. These can also be
> integrated in this idea as well.
>
>
>
> Once we have common documentation we can have something like:
>
> 1. Choose your own adventure: I want to contribute by code
>
> 2. What service type are you interested in? (Database, Block storage,
> compute)
>
> 3. Here’s step-by-step common documentation to setting up Git, IRC,
> Mailing Lists, Accounts, etc.
>
> 4. A service type project might choose to also include additional
> documentation in that flow 

Re: [openstack-dev] [keystone] New Office Hours Proposal

2017-06-26 Thread Lance Bragstad
According to the poll results, office hours will be moved to Tuesday
19:00 - 22:00 UTC. We'll officially start tomorrow after the keystone
meeting.

Thanks for putting together and advertising the poll, Harry!

On 06/20/2017 02:30 PM, Harry Rybacki wrote:
> Greetings All,
>
> We would like to foster a more interactive community within Keystone
> focused on fixing bugs on a regular basis! On a regular datetime (to
> be voted upon) we will have "office hours"[1] where Keystone cores
> will be available specifically to advise, help and review your efforts
> in squashing bugs. We want to aggressively attack our growing list of
> bugs and make sure Keystone is as responsive as possible to fixing
> them. The best way to do this is get people working on them and have
> the resources to get the fixes reviewed and merged.
>
> Please take a few moments to fill out our Doodle poll[2] to select the
> time block(s) that work best for you. We will tally the results and
> announce the official Keystone Office hours on Friday, 23-June-2017,
> by 2100 (UTC).
>
> [1] - https://etherpad.openstack.org/p/keystone-office-hours
> [2] - https://beta.doodle.com/poll/epvs95npfvrd3h5e
>
>
> /R
>
> Harry Rybacki
> Software Engineer, Red Hat
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] EXT: [octavia] scheduling webex to discuss flavor spec (https://review.openstack.org/#/c/392485/)

2017-06-26 Thread Samuel Bercovici
Carlos,

We are in Israel and would like the meeting to happen at a time more favorable 
to us, for example 9:00 AM CDT.

-Sam.


From: Carlos Puga [mailto:carlos.p...@walmart.com]
Sent: Monday, June 26, 2017 7:04 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] EXT: [octavia] scheduling webex to discuss flavor spec 
(https://review.openstack.org/#/c/392485/)

Octavia Team,

As per our last octavia meeting, I'm scheduling a webex so that we may talk 
through the team's preference on the design of the flavor spec. I'd like to 
propose we meet on Thursday 3pm CDT. If this day/time doesn't work for most, 
please let me know and I can change it to best accommodate as many as possible.

Thank you,
Carlos Puga



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-26 Thread Jay Pipes

On 06/26/2017 11:32 AM, Mikhail Fedosin wrote:



> On Jun 26, 2017 5:54 PM, "Jay Pipes" wrote:


On 06/26/2017 10:35 AM, Mikhail Fedosin wrote:

* Storage of secrets - a new artifact type in Glare, which
will store private information (keys, passwords, etc.) in an
encrypted form (like in Barbican).


> Does the above mean you are implementing a shared secret storage
> solution or that you are going to use an existing solution like
> Barbican that does that?

Secrets is a plugin for Glare that we developed for the Nokia CloudBand
platform, and they just decided to open-source it. It doesn't use Barbican;
technically it is an oslo.versionedobjects class.


Sorry to hear that you opted not to use Barbican.

But, I'm confused what oslo.versionedobjects has to do with secrets 
storage. Could you explain?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] EXT: [octavia] scheduling webex to discuss flavor spec (https://review.openstack.org/#/c/392485/)

2017-06-26 Thread Carlos Puga
Octavia Team,

As per our last octavia meeting, I'm scheduling a webex so that we may talk 
through the team's preference on the design of the flavor spec. I'd like to 
propose we meet on Thursday 3pm CDT. If this day/time doesn't work for most, 
please let me know and I can change it to best accommodate as many as possible.

Thank you,
Carlos Puga



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug triage experimentation

2017-06-26 Thread Markus Zoeller
On 26.06.2017 10:49, Sylvain Bauza wrote:
> 
> 
> Le 23/06/2017 18:52, Sean Dague a écrit :
>> The Nova bug backlog is just over 800 open bugs, which while
>> historically not terrible, remains too large to be collectively usable
>> to figure out where things stand. We've had a few recent issues where we
>> just happened to discover upgrade bugs filed 4 months ago that needed
>> fixes and backports.
>>
>> Historically we've tried to just solve the bug backlog with volunteers.
>> We've had many a brave person dive into here, and burn out after 4 - 6
>> months. And we're currently without a bug lead. Having done a big giant
>> purge in the past
>> (http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html)
>> I know how daunting this all can be.
>>
>> I don't think that people can currently solve the bug triage problem at
>> the current workload that it creates. We've got to reduce the smart
>> human part of that workload.
>>
> 
> Thanks for sharing ideas, Sean.
> 
>> But, I think that we can also learn some lessons from what active github
>> projects do.
>>
>> #1 Bot away bad states
>>
>> There are known bad states of bugs - In Progress with no open patch,
>> Assigned but not In Progress. We can just bot these away with scripts.
>> Even better would be to react immediately on bugs like those, that helps
>> to train folks how to use our workflow. I've got some starter scripts
>> for this up at - https://github.com/sdague/nova-bug-tools
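
As a toy illustration of such a script (assuming launchpadlib and anonymous
access to the public Launchpad API; this is not taken from nova-bug-tools
itself):

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('bug-state-check', 'production')
nova = lp.projects['nova']

# "In Progress" with no assignee is one of the known bad states mentioned
# above; a bot could comment on and/or reset such bugs automatically.
for task in nova.searchTasks(status='In Progress'):
    if task.assignee is None:
        print('bad state: LP#{0} {1}'.format(task.bug.id, task.bug.title))
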
>>
> 
> Sometimes, and I had no idea why, I noticed the Gerrit hook not working
> (i.e. not amending the Launchpad bug with the Gerrit URL), so some of the
> bugs I was looking at were actively being worked on (I had the same
> experience myself, although my commit message was correctly marked AFAIR).
> 
> Either way, what you propose sounds reasonable to me. If you care about
> fixing a bug by putting yourself owner of that bug, that also means you
> engage yourself on a resolution sooner than later (even if I do fail
> applying that to myself...).
> 
>> #2 Use tag based workflow
>>
>> One lesson from github projects is that the github tracker has no workflow.
>> Issues are openned or closed. Workflow has to be invented by every team
>> based on a set of tags. Sometimes that's annoying, but often times it's
>> super handy, because it allows the tracker to change workflows and not
>> try to change the meaning of things like "Confirmed vs. Triaged" in your
>> mind.
>>
>> We can probably tag for information we know we need at lot easier. I'm
>> considering something like
>>
>> * needs.system-version
>> * needs.openstack-version
>> * needs.logs
>> * needs.subteam-feedback
>> * has.system-version
>> * has.openstack-version
>> * has.reproduce
>>
>> Some of these a bot can process the text on and tell if that info was
>> provided, and comment how to provide the updated info. Some of this
>> would be human, but with official tags, it would probably help.
>>
> 
> The tags you propose seem to me related to an "Incomplete" vs.
> "Confirmed" state of the bug.
> 
> If I'm not able to triage the bug because I'm missing information like
> the release version or more logs, I put the bug as Incomplete.
> I could add those tags, but I don't see where a programmatic approach
> could help us.
> 
> If I understand correctly, you're rather trying to identify more what's
> missing in the bug report to provide a clear path of resolution, so we
> could mark the bug as Triaged, right? If so, I'd not propose those tags
> for the reason I just said, but rather other tags like (disclaimer, I
> suck at naming things):
> 
>  - rootcause.found
>  - needs.rootcause.analysis
>  - is.regression
>  - reproduced.locally
> 
> 
>> #3 machine assisted functional tagging
>>
>> I'm playing around with some things that might be useful in mapping new
>> bugs into existing functional buckets like: libvirt, volumes, etc. We'll
>> see how useful it ends up being.
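
As a toy sketch of what such machine-assisted functional tagging could look
like (the keyword buckets are purely illustrative; real tooling would be
smarter):

BUCKETS = [
    ('libvirt', ('libvirt', 'qemu', 'kvm')),
    ('volumes', ('cinder', 'volume', 'attach')),
    ('vmware', ('vmware', 'vcenter')),
]


def guess_tags(text):
    # Map a bug title/description to zero or more functional buckets.
    text = text.lower()
    return [tag for tag, words in BUCKETS
            if any(word in text for word in words)]


print(guess_tags('Instance fails to boot from volume under libvirt'))
# ['libvirt', 'volumes']
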
>>
> 
> Log parsing could certainly help. If someone is able to provide a clear
> stacktrace of some root exception, we can get the impacted functional
> bucket for free in 80% of cases.
> 
> I'm not a fan of identifying a domain by text recognition (just because
> someone mentions libvirt doesn't mean it's a libvirt bug), which is why
> I'd rather see some log analysis like I mentioned.
> 
> 
>> #4 reporting on smaller slices
>>
>> Build some tooling to report on the status and change over time of bugs
>> under various tags. This will help visualize how we are doing
>> (hopefully) and where the biggest piles of issues are.
>>
>> The intent is the normal unit of interaction would be one of these
>> smaller piles. Be they the 76 libvirt bugs, 61 volumes bugs, or 36
>> vmware bugs. It would also highlight the rates of change in these piles,
>> and what's getting attention and what is not.
>>
> 
> I do wonder if Markus already wrote such reporting tools. AFAIR, he had
> a couple of very interesting reportings (and he also worked hard on 

Re: [openstack-dev] [nova] PCI handling (including SR-IOV), Traits, Resource Providers and Placement API - how to proceed?

2017-06-26 Thread Eric Fried
Hi Maciej, thanks for bringing this up.

On 06/26/2017 04:59 AM, Maciej Kucia wrote:
> Hi,
> 
> I have recently spent some time digging into Nova's PCI device handling code.
> I would like to propose some improvements:
> https://review.openstack.org/#/c/474218/ (Extended PCI alias)
> https://review.openstack.org/#/q/status:open+project:openstack/nova+topic:PCI 
> 
> but
> 
> There is an ongoing work on Resource Providers, Traits and Placement:
> https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html
> https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html
> https://github.com/openstack/os-traits
> https://docs.openstack.org/developer/nova/placement.html
> 
> I am willing to contribute some work to the PCI handling in Queens. 
> Given the scope of the changes, a new spec will be needed.
> 
> The current PCI code has some issues that would be nice to fix. Most
> notably:
>  - Broken single responsibility principle 
>A lot of classes are doing more than the name would suggest
>  - Files and classes naming is not consistent
>  - Mixed SR-IOV and PCI code
>  - PCI Pools provide no real performance advantage and add unnecessary
> complexity

I would like to add for consideration the issue that the current
whitelist/allocation model doesn't work at all for hypervisors like
HyperV and PowerVM that don't directly own/access the devices as Linux
/dev files; and (at least for PowerVM) where VFs can be created on the
fly.  I'm hoping the placement and resource provider work will result in
a world where a compute node can define different kinds of PCI devices
as resource classes against which resources with specific traits can be
claimed.  And hopefully the whitelist goes away (or I can "opt out" of
it) in the process.
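
To sketch the hoped-for model (purely illustrative -- this corner of the
placement API was still being built out at the time, so the endpoint, token
and microversion below are assumptions, not a final design):

  import requests

  PLACEMENT = 'http://controller/placement'  # hypothetical endpoint
  HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',  # hypothetical token
             'OpenStack-API-Version': 'placement 1.7'}

  # A custom resource class for a kind of PCI device the compute node owns...
  requests.put(PLACEMENT + '/resource_classes/CUSTOM_PCI_VF', headers=HEADERS)

  # ...and a custom trait that a request could later be matched against.
  requests.put(PLACEMENT + '/traits/CUSTOM_SRIOV_CAPABLE', headers=HEADERS)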

> 
> My questions:
>  - I understand that Nova will remain handling low-level operations
> between OpenStack and hypervisor driver.
>Is this correct?
>  - Will the `placement service` take the responsibility of managing PCI
> devices?
>  - Shall the SR-IOV handling be done by Nova or `placement service` (in
> such case Nova would manage SR-IOV as a regular PCI)?
>  - Where to store PCI configuration?
>For example currently nova.conf PCI Whitelist is responsible for some
> SR-IOV configuration.
>Shall it be stored somewhere alongside `SR-IOV` resource provider?
> 
> Thanks,
> Maciej
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-26 Thread Mikhail Fedosin
On Jun 26, 2017 5:54 PM, "Jay Pipes"  wrote:

On 06/26/2017 10:35 AM, Mikhail Fedosin wrote:

>* Storage of secrets - a new artifact type in Glare, which will store
> private information (keys, passwords, etc.) in an encrypted form (like in
> Barbican).
>

Does the above mean you are implementing a shared-secret storage solution or
that you are going to use an existing solution like Barbican that does that?

Secrets is a plugin for Glare we developed for the Nokia CloudBand platform,
and they just decided to open-source it. It doesn't use Barbican;
technically it is an oslo.versionedobjects class.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-26 Thread Lance Bragstad


On 06/26/2017 08:58 AM, Chris Dent wrote:
> On Mon, 26 Jun 2017, Flavio Percoco wrote:
>
>> So, should we let teams host IRC meetings in their own channels?
>
> Yes.
+1
>
>> Thoughts?
>
> I think the silo-ing concern is, at least recently, not relevant on
> two fronts: IRC was never a good fix for that and silos gonna be
> silos.
>
> There are so many meetings and so many projects that there already are
> silos, and by encouraging people to use the mailing lists more we are
> more effectively enabling diverse access than IRC ever could,
> especially if the IRC-based solution is the impossible "always be on
> IRC, always use a bouncer, always read all the backlogs, always read
> all the meeting logs".
>
> The effective way for a team not to be a silo is for it to be
> better about publishing accessible summaries of itself (as in: make
> more email) and participating in cross project related reviews. If
> it doesn't do that, that's the team's loss.
>
> Synchronous communication is fine for small groups of speakers but
> that's pretty much where it ends.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-26 Thread Eric Harney
On 06/19/2017 09:22 AM, Matt Riedemann wrote:
> On 6/16/2017 8:58 AM, Eric Harney wrote:
>> I'm not convinced yet that this failure is purely Ceph-specific, at a
>> quick look.
>>
>> I think what happens here is, unshelve performs an asynchronous delete
>> of a glance image, and returns as successful before the delete has
>> necessarily completed.  The check in tempest then sees that the image
>> still exists, and fails -- but this isn't valid, because the unshelve
>> API doesn't guarantee that this image is no longer there at the time it
>> returns.  This would fail on any image delete that isn't instantaneous.
>>
>> Is there a guarantee anywhere that the unshelve API behaves how this
>> tempest test expects it to?
> 
> There are no guarantees, no. The unshelve API reference is here [1]. The
> asynchronous postconditions section just says:
> 
> "After you successfully shelve a server, its status changes to ACTIVE.
> The server appears on the compute node.
> 
> The shelved image is deleted from the list of images returned by an API
> call."
> 
> It doesn't say the image is deleted immediately, or that it waits for
> the image to be gone before changing the instance status to ACTIVE.
> 
> I see there is also a typo in there, that should say after you
> successfully *unshelve* a server.
> 
> From an API user point of view, this is all asynchronous because it's an
> RPC cast from the nova-api service to the nova-conductor and finally
> nova-compute service when unshelving the instance.
> 
> So I think the test is making some wrong assumptions on how fast the
> image is going to be deleted when the instance is active.
> 
> As Ken'ichi pointed out in the Tempest change, Glance returns a 204 when
> deleting an image in the v2 API [2]. If the image delete is asynchronous
> then that should probably be a 202.
> 
> Either way the Tempest test should probably be in a wait loop for the
> image to be gone if it's really going to assert this.
> 

Thanks for confirming this.

What do we need to do to get this fixed in Tempest?  Nobody from Tempest
Core has responded to the revert patch [3] since this explanation was
posted.

IMO we should revert this for now and someone can implement a fixed
version if this test is needed.

[3] https://review.openstack.org/#/c/471352/
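
If the test does come back, the assertion could sit behind a waiter along
these lines -- a rough sketch built on tempest's call_until_true helper,
with arbitrary timeout values:

  from tempest.lib.common.utils import test_utils
  from tempest.lib import exceptions as lib_exc

  def wait_for_image_deleted(images_client, image_id, timeout=60, interval=2):
      # Poll until Glance reports the image gone, instead of asserting
      # immediately after the unshelve call returns.
      def _is_gone():
          try:
              images_client.show_image(image_id)
              return False
          except lib_exc.NotFound:
              return True

      if not test_utils.call_until_true(_is_gone, timeout, interval):
          raise lib_exc.TimeoutException(
              'image %s still present after %ss' % (image_id, timeout))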

> [1]
> https://developer.openstack.org/api-ref/compute/?expanded=unshelve-restore-shelved-server-unshelve-action-detail#unshelve-restore-shelved-server-unshelve-action
> 
> [2]
> https://developer.openstack.org/api-ref/image/v2/index.html?expanded=delete-an-image-detail#delete-an-image
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-26 Thread Jay Pipes

On 06/26/2017 10:35 AM, Mikhail Fedosin wrote:
   * Storage of secrets - a new artifact type in Glare, which will store 
private information (keys, passwords, etc.) in an encrypted form (like 
in Barbican).


Does the above mean you are implementing a shared-secret storage solution
or that you are going to use an existing solution like Barbican that 
does that?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-26 Thread Mikhail Fedosin
Hello! It's me again. I hasten to inform you about the latest news in the Glare
project!

To begin with, I want to say that:

First, we created the stable branch (stable/ocata), which is already used
in production. This is undoubtedly a joyful event and the result of long
months of work!
Secondly, we are adding integration with Mistral:
https://review.openstack.org/#/c/473898/
Third, we moved on to the active implementation of new features. I
promised them a long time ago, but we decided to devote the last few
months to stabilizing the project and making it production-ready. The
next big release is scheduled for late August, and there we
will add:
  * ACLs, a.k.a. sharing of artifacts, where tenants can share their artifacts
with each other.
  * Dynamic quotas, where the operator can choose how much data of each
type a particular tenant can upload (for instance, Anna can upload 1 TB of
images and 100 MB of heat templates; Betty can upload 500 GB of images and
50 MB of heat templates, and so on)
  * Asynchronous data processing, which can be used for background
conversion and validation of large amounts of data on the server side.
  * Storage of secrets - a new artifact type in Glare, which will store
private information (keys, passwords, etc.) in an encrypted form (like in
Barbican).

Now I want to discuss a few questions with OpenStack community and get some
opinions.

1. Generally speaking, I want to make the development of Glare more open
and create a community around the project. Today Glare is developed by two
full-time engineers from Nokia, plus me. But as you can see we have a
large list of tasks, and we will gladly accept more people. Perhaps
someone from the Glance, Nova or Cinder projects will want to participate
in the development.

2. We would like to become an official OpenStack project, and in general we
follow all the necessary rules and recommendations, from weekly IRC
meetings and our own channel to the Apache license and Keystone support.
For this reason, I want to file an application and hear objections and
recommendations on this matter.

3. Finally, I want to discuss the future of the project and its role in
OpenStack. As has been noted many times, the project is capable of much
more, and we only need to find the right application for it. I believe that
this issue will be conceptually discussed in Denver, but nevertheless we
must prepare for it right now.

Thanks in advance for your suggestions!

Best,
Mike
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-26 Thread Chris Dent

On Mon, 26 Jun 2017, Flavio Percoco wrote:


So, should we let teams host IRC meetings in their own channels?


Yes.


Thoughts?


I think the silo-ing concern is, at least recently, not relevant on
two fronts: IRC was never a good fix for that and silos gonna be
silos.

There are so many meetings and so many projects that there already are
silos, and by encouraging people to use the mailing lists more we are
more effectively enabling diverse access than IRC ever could,
especially if the IRC-based solution is the impossible "always be on
IRC, always use a bouncer, always read all the backlogs, always read
all the meeting logs".

The effective way for a team not to be a silo is for it to be
better about publishing accessible summaries of itself (as in: make
more email) and participating in cross project related reviews. If
it doesn't do that, that's the team's loss.

Synchronous communication is fine for small groups of speakers but
that's pretty much where it ends.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-26 Thread Flavio Percoco

On 15/06/17 13:06 -0400, Emilien Macchi wrote:

I missed [tripleo] tag.

On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi  wrote:

If you haven't followed the "Configuration management with etcd /
confd" thread [1], Doug found out that using confd to generate
configuration files wouldn't work for the Cinder case where we don't
know in advance of the deployment what settings to tell confd to look
at.
We are still looking for a generic way to generate *.conf files for
OpenStack, that would be usable by Deployment tools and operators.
Right now, Doug and I are investigating some tooling that would be
useful to achieve this goal.

Doug has prototyped an Ansible role that would generate configuration
files by consumming 2 things:

* Configuration schema, generated by Ben's work with Machine Readable
Sample Config.
  $ oslo-config-generator --namespace cinder --format yaml > cinder-schema.yaml

It also needs: https://review.openstack.org/#/c/474306/ to generate
some extra data not included in the original version.

* Parameters values provided in config_data directly in the playbook:
   config_data:
 DEFAULT:
   transport_url: rabbit://user:password@hostname
   verbose: true

There are 2 options disabled by default but which would be useful for
production environments:
* Set to true to always show all configuration values: config_show_defaults
* Set to true to show the help text: config_show_help: true

The Ansible module is available on github:
https://github.com/dhellmann/oslo-config-ansible

To try this out, just run:
  $ ansible-playbook ./playbook.yml

You can quickly see the output of cinder.conf:
https://clbin.com/HmS58


What are the next steps:

* Getting feedback from Deployment Tools and operators on the concept
of this module.
  Maybe this module could replace what is done by Kolla with
merge_configs and OpenStack Ansible with config_template.
* On the TripleO side, we would like to see if this module could
replace the Puppet OpenStack modules that are now mostly used for
generating configuration files for containers.
  A transition path would be having Heat to generate Ansible vars
files and give it to this module. We could integrate the playbook into
a new task in the composable services, something like
  "os_gen_config_tasks", a bit like we already have for upgrade tasks,
also driven by Ansible.
* Another similar option to what Doug did is to write a standalone
tool that would generate configuration, and for Ansible users we would
write a new module to use this tool.
  Example:
  Step 1. oslo-config-generator --namespace cinder --format yaml >
cinder-schema.yaml (note this tool already exists)
  Step 2. Create config_data.yaml in a specific format with
parameters values for what we want to configure (note this format
doesn't exist yet but look at what Doug did in the role, we could use
the same kind of schema).
  Step 3. oslo-gen-config -i config_data.yaml -s schema.yaml >
cinder.conf (note this tool doesn't exist yet)

  For Ansible users, we would write an Ansible module that would
take in entry 2 files: the schema and the data. The module would just
run the tool provided by oslo.config.
  Example:
  - name: Generate cinder.conf
oslo-gen-config: schema=cinder-schema.yaml
   data=config_data.yaml
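
To make step 3 above a bit more concrete, the standalone tool could start
out as small as the following sketch (hypothetical -- no oslo-gen-config
exists yet; it only renders the data file as INI, while a fuller version
would use the schema for defaults and help text):

  import argparse
  import yaml

  def main():
      parser = argparse.ArgumentParser(prog='oslo-gen-config')
      parser.add_argument('-i', dest='data', required=True)
      parser.add_argument('-s', dest='schema', required=True)
      args = parser.parse_args()

      with open(args.schema) as f:
          schema = yaml.safe_load(f)  # unused in this sketch; would supply
                                      # defaults and help text
      with open(args.data) as f:
          data = yaml.safe_load(f)

      for section in sorted(data):
          print('[%s]' % section)
          for name, value in sorted(data[section].items()):
              print('%s = %s' % (name, value))
          print()

  if __name__ == '__main__':
      main()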



I finally caught up with this thread and got the time to get back to y'all.
Sorry.

I like the roles version more because it's flexible and easier to distribute. We
can upload it to galaxy, package it, etc. Distributing ansible modules is a bit
painful right now and you end up adding them as roles in the playbook for the
modules to be loaded.

I'm about to work on a prototype and I'll use option #1 and perhaps we can
discuss further the option #2.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-26 Thread Thierry Carrez
Flavio Percoco wrote:
> [...]
> Not being able to easily ping someone during a meeting is kind of a bummer but
> I'd argue that assuming someone is in the meeting channel and available at all
> times is a mistake to begin with.

I think people can be pinged by PM or on #openstack-dev, it's just a
habit to pick up. It's just that there are cases where people passively
mention you without going as far as a formal ping -- I usually go back
later to that person to answer the issue they informally raised. We'll
lose that, but it's minor enough.

> There will be conflicts on meeting times. There will be slots that will
> be used by several teams as these slots are convenient for cross-timezone
> interaction. We can check this and highlight the various conflicts but I'd 
> argue we
> shouldn't. We already have some overlaps in the current structure.

Yes, we could give an indication of how busy a given slot is when people
book it. I think the problem solves itself when the meeting participants
are asked to select a time slot -- if there are too many conflicts,
people will naturally choose a less busy slot.

> The social drawbacks related to this change can be overcome by interacting 
> more
> on the mailing list. Ideally, this change should help raise awareness
> about the distributed nature of our community, encourage folks to do more 
> office
> hours, fewer meetings and, more importantly, to encourage folks to favor the
> mailing list over IRC conversations for *some* discussions.

My main gripe is that it reinforces silos, so this change hurts
inter-project work more than it helps it. But at the same time, nobody
was actually following every meeting anyway, so the damage is very limited.

> So, should we let teams host IRC meetings in their own channels?
> Thoughts?

I think it would smooth the transition to office-hour style
coordination, which is a good step for more inclusion. I objected to the
idea in the past (due to the social damage) but at this points the
benefits probably outweigh the drawbacks.

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] Nominating Yujun Zhang to Vitrage core

2017-06-26 Thread Yujun Zhang (ZTE)
Thanks all.

It is my pleasure to work with you in Vitrage project :-)

--
Yujun

On Mon, Jun 26, 2017 at 4:34 PM Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

> Hi,
>
>
>
> I added Yujun Zhang to the vitrage-core team. Yujun, Welcome ☺
>
>
>
> Best Regards,
>
> Ifat.
>
>
>
> From: דני אופק
> Date: Monday, 26 June 2017 at 11:00
>
>
>
> +1
>
>
>
> On Mon, Jun 26, 2017 at 7:45 AM, Weyl, Alexey (Nokia - IL/Kfar Sava) <
> alexey.w...@nokia.com> wrote:
>
> +1
>
>
>
> > -Original Message-
> > From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
> > Sent: Sunday, June 25, 2017 3:18 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> >
> > Hi,
> >
> > I’d like to nominate Yujun Zhang to the Vitrage core team.
> >
> > In the last year Yujun has made a significant contribution in
> Vitrage[1], both
> > by adding new features and by reviewing other people’s work. He has an
> > extensive knowledge of the Vitrage architecture and code, and I believe
> he
> > would make a great addition to our team.
> >
> > Best Regards,
> > Ifat.
> >
> > [1]
> > https://review.openstack.org/#/q/owner:zhang.yujunz%2540zte.com.cn+pr
> > ojects:openstack/vitrage+is:merged
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug triage experimentation

2017-06-26 Thread Sean Dague
On 06/26/2017 04:49 AM, Sylvain Bauza wrote:
> 
> 
> Le 23/06/2017 18:52, Sean Dague a écrit :
>> The Nova bug backlog is just over 800 open bugs, which while
>> historically not terrible, remains too large to be collectively usable
>> to figure out where things stand. We've had a few recent issues where we
>> just happened to discover upgrade bugs filed 4 months ago that needed
>> fixes and backports.
>>
>> Historically we've tried to just solve the bug backlog with volunteers.
>> We've had many a brave person dive into here, and burn out after 4 - 6
>> months. And we're currently without a bug lead. Having done a big giant
>> purge in the past
>> (http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html)
>> I know how daunting this all can be.
>>
>> I don't think that people can currently solve the bug triage problem at
>> the current workload that it creates. We've got to reduce the smart
>> human part of that workload.
>>
> 
> Thanks for sharing ideas, Sean.
> 
>> But, I think that we can also learn some lessons from what active github
>> projects do.
>>
>> #1 Bot away bad states
>>
>> There are known bad states of bugs - In Progress with no open patch,
>> Assigned but not In Progress. We can just bot these away with scripts.
>> Even better would be to react immediately on bugs like those, that helps
>> to train folks how to use our workflow. I've got some starter scripts
>> for this up at - https://github.com/sdague/nova-bug-tools
>>
> 
> Sometimes (I had no idea why) I noticed the Gerrit hook not working
> (i.e. not amending the Launchpad bug with the Gerrit URL), so some of the
> bugs I was looking at were actively being worked on (and I had the same
> experience myself, although my commit msg was correctly formatted AFAIR).
> 
> Either way, what you propose sounds reasonable to me. If you care about
> fixing a bug and make yourself the owner of that bug, that also means you
> commit to a resolution sooner rather than later (even if I do fail to
> apply that to myself...).
> 
>> #2 Use tag based workflow
>>
>> One lesson from github projects is that the github tracker has no workflow.
>> Issues are opened or closed. Workflow has to be invented by every team
>> based on a set of tags. Sometimes that's annoying, but often times it's
>> super handy, because it allows the tracker to change workflows and not
>> try to change the meaning of things like "Confirmed vs. Triaged" in your
>> mind.
>>
>> We can probably tag for information we know we need a lot more easily. I'm
>> considering something like
>>
>> * needs.system-version
>> * needs.openstack-version
>> * needs.logs
>> * needs.subteam-feedback
>> * has.system-version
>> * has.openstack-version
>> * has.reproduce
>>
>> Some of these a bot can process the text on and tell if that info was
>> provided, and comment how to provide the updated info. Some of this
>> would be human, but with official tags, it would probably help.
>>
> 
> The tags you propose seem to me related to an "Incomplete" vs.
> "Confirmed" state of the bug.
> 
> If I'm not able to triage the bug because I'm missing information like
> the release version or more logs, I put the bug as Incomplete.
> I could add those tags, but I don't see where a programmatic approach
> could help us.

We always want that information, and the odds of us getting it from a
user decline over time. When we end up looking at bugs that are a year
old, it becomes a big guessing game about their relevance.

The theory here is that tags like that would be applied by a bot
immediately after the bug is filed. Catching the owner within 5 minutes
of their bug filing with a response which is "we need the following"
means we should get a pretty decent attach rate on that information. And
then you don't have to spend 10 minutes of real human time realizing
that you really need that before moving forward.
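
For illustration, such a bot could be a short launchpadlib script along
these lines (a sketch only, not the actual nova-bug-tools code; the
needs.* tag names are the ones proposed above):

  from launchpadlib.launchpad import Launchpad

  NEEDS = ['needs.system-version', 'needs.openstack-version', 'needs.logs']

  lp = Launchpad.login_with('nova-bug-bot', 'production')
  nova = lp.projects['nova']

  for task in nova.searchTasks(status=['New']):
      bug = task.bug
      missing = [tag for tag in NEEDS if tag not in bug.tags]
      if not missing:
          continue
      # tags is a plain list; it has to be reassigned to be saved
      bug.tags = bug.tags + missing
      bug.newMessage(content='Automated triage: to move this bug forward, '
                             'please provide: %s' % ', '.join(missing))
      bug.lp_save()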

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 26

2017-06-26 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work
for week 26.

Bugs

[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned 
server notifications don't include updated_at
Takashi proposed the fix https://review.openstack.org/#/c/475276/ that 
only needs a second +2.


[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications
use nova-api as binary name instead of nova-osapi_compute
Agreed not to change the binary name in the notifications. Instead we
make an enum for that name to show that the name is intentional.
Patch has been proposed: https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault 
notification is never emitted
It seems that the legacy api.fault notification is never emitted from
nova. More details and the question about the way forward are in a
separate ML thread 
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html


[Undecided] https://bugs.launchpad.net/nova/+bug/1698779 aggregate 
related notification samples are missing from the notification dev-ref
It is a doc generation bug. A fix and an improvement to doc generation have
been proposed: https://review.openstack.org/#/c/475349/


[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications
are emitted per-cell instead of globally
Vitrage tempest test was broken due to missing notifications from 
nova-compute caused by the cells devstack change 
https://review.openstack.org/#/c/436094/. Revert is on the way 
https://review.openstack.org/#/c/477436/. The final solution is to 
configure a separate and non cell local transport_url for 
notifications. This is already possible with current oslo.messaging 
https://docs.openstack.org/developer/oslo.messaging/opts.html#oslo_messaging_notifications.transport_url
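
For reference, the wiring on the producer side would look roughly like this
(a minimal sketch, assuming a nova.conf that points
[oslo_messaging_notifications]/transport_url at the global, non-cell MQ):

  from oslo_config import cfg
  import oslo_messaging

  conf = cfg.CONF
  conf(['--config-file', 'nova.conf'])

  # get_notification_transport() honours
  # [oslo_messaging_notifications]/transport_url when it is set, falling
  # back to [DEFAULT]/transport_url -- which is what lets notifications
  # bypass the per-cell message queue.
  transport = oslo_messaging.get_notification_transport(conf)
  notifier = oslo_messaging.Notifier(transport, driver='messagingv2',
                                     publisher_id='nova-compute.host1')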



Versioned notification transformation
-
Patches needs only a second +2:
* https://review.openstack.org/#/c/402124/ Transform
instance.live_migration_rollback notification

Patches that looks good from the subteam perspective:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0


Searchlight integration
---
bp additional-notification-fields-for-searchlight
~
https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open
The next two patches in the series need just a second +2. The last
patch needs a rebase due to a conflict.



Small improvements
~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting

* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from the nova tree. We added a minimal hand-rolled JSON ref
implementation to the notification sample tests, as the existing Python
JSON ref implementations are not well maintained.
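
For the curious, such a hand-rolled resolver can fit in a dozen lines;
something along these lines (a sketch, not the actual patch):

  import json
  import os

  def resolve_refs(node, base_dir):
      # Replace {"$ref": "common_payload.json#"} nodes with the content of
      # the referenced sample file, recursively.
      if isinstance(node, dict):
          if '$ref' in node:
              path = node['$ref'].split('#')[0]
              with open(os.path.join(base_dir, path)) as f:
                  return resolve_refs(json.load(f), base_dir)
          return {k: resolve_refs(v, base_dir) for k, v in node.items()}
      if isinstance(node, list):
          return [resolve_refs(v, base_dir) for v in node]
      return node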


Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 27th of June.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170627T17

Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] PCI handling (including SR-IOV), Traits, Resource Providers and Placement API - how to proceed?

2017-06-26 Thread Maciej Kucia
Hi,

I have recently spent some time digging into Nova's PCI device handling code.
I would like to propose some improvements:
https://review.openstack.org/#/c/474218/ (Extended PCI alias)
https://review.openstack.org/#/q/status:open+project:openstack/nova+topic:PCI


but

There is an ongoing work on Resource Providers, Traits and Placement:
https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html
https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html
https://github.com/openstack/os-traits
https://docs.openstack.org/developer/nova/placement.html

I am willing to contribute some work to the PCI handling in Queens.
Given the scope of changes a new spec will be needed.

The current PCI code has some issues that would be nice to fix. Most
notably:
 - Broken single responsibility principle
   A lot of classes are doing more than the name would suggest
 - Files and classes naming is not consistent
 - Mixed SR-IOV and PCI code
 - PCI Pools provide no real performance advantage and add unnecessary
complexity

My questions:
 - I understand that Nova will remain handling low-level operations between
OpenStack and hypervisor driver.
   Is this correct?
 - Will the `placement service` take the responsibility of managing PCI
devices?
 - Shall the SR-IOV handling be done by Nova or `placement service` (in
such case Nova would manage SR-IOV as a regular PCI)?
 - Where to store PCI configuration?
   For example currently nova.conf PCI Whitelist is responsible for some
SR-IOV configuration.
   Shall it be stored somewhere alongside `SR-IOV` resource provider?

Thanks,
Maciej
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-26 Thread Alexandra Settle
I think this is a good idea :) thanks Mike. We get a lot of people coming to
the docs channel or ML asking for help or where to start, and sometimes it's
difficult to point them in the right direction.

Just from experience working with contributor documentation, I’d avoid all
screenshots if you can – updating them whenever the process changes
(surprisingly often) is a lot of unnecessary technical debt.

The docs team put in a significant amount of effort a few releases back writing
a pretty comprehensive Contributor Guide. For the purposes you describe below, 
I imagine a lot of the content here could be adapted. The process of setting up 
for code and docs is exactly the same: 
http://docs.openstack.org/contributor-guide/index.html

I also wonder if we could include a ‘what is openstack’ 101 for new 
contributors. I find that there is a *lot* of material out there, but it is 
often very hard to explain to people what each project does, how they all 
interact, why we install from different sources, why do we have official and 
unofficial projects etc. It doesn’t have to be seriously in-depth, but an 
overview that points people who are interested in the right directions. Often 
this will help people decide on what project they’d like to undertake.

Cheers,

Alex

From: Mike Perez 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, June 23, 2017 at 9:17 PM
To: OpenStack Development Mailing List 
Cc: Wes Wilson , "ild...@openstack.org" 
, "knel...@openstack.org" 
Subject: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New 
Contributor On-boarding

Hello all,

Every month we have people asking on IRC or the dev mailing list with an
interest in working on OpenStack, and sometimes they're given different answers
from people, or worse, no answer at all.

Suggestion: let's combine our efforts to create some common documentation
so that all teams in OpenStack can benefit.

First it’s important to note that we’re not just talking about code projects 
here. OpenStack contributions come in many forms such as running meetups,
identifying use cases (product working group), documentation, testing, etc. We 
want to make sure those potential contributors feel welcomed too!

What is common documentation? Things like setting up Git, and the many accounts
you need to set up to contribute (gerrit, launchpad, OpenStack foundation
account). Not all teams will use all of the common documentation, but the point
is that one or more projects will. Having the common documentation worked on by
various projects should help prevent duplicated effort and inconsistent
documentation, and hopefully yield more accurate information.

A team might use special tools to do their work. These can also be integrated 
in this idea as well.

Once we have common documentation we can have something like:
1. Choose your own adventure: I want to contribute by code
2. What service type are you interested in? (Database, Block storage,
Compute)
3. Here’s step-by-step common documentation to setting up Git, IRC, Mailing 
Lists, Accounts, etc.
4. A service type project might choose to also include additional 
documentation in that flow for special tools, etc.

Important things to note in this flow:
* How do you want to contribute?
* Here are **clear** names that identify the team. Not code names like 
Cloud Kitty, Cinder, etc.
* The documentation should really aim to not be daunting:
* Someone should be able to glance at it and feel like they can finish 
things in five minutes. Not be yet another tab left in their browser that 
they’ll eventually forget about.
* No wall of text!
* Use screenshots
* Avoid covering every issue you could hit along the way.

## Examples of More Simple Documentation
I worked on some documentation for the Upstream University preparation that has 
received excellent feedback meet close to these suggestions:
* IRC [1]
* Git [2]
* Account Setup [3]

## 500 Feet Birds Eye view
There will be a Contributor landing page on the 
openstack.org website. Existing contributors will find 
reference links to quickly jump to things. New contributors will find a banner 
at the top of the page to direct them to the choose your own adventure to 
contributing to OpenStack, with ordered documentation flow that reuses existing 
documentation when necessary. Picture also a progress bar somewhere to show how 
close you are to being ready to contribute to whatever team. Of course there 
are a lot of other fancy things we can come up with, but I think getting 
something up as an initial pass would be better than what we have today.

Here's an example of what the sections/chapters could look like:

- Code
* Volumes (Cinder)
 * IRC
 * 

Re: [openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for Networking-SFC

2017-06-26 Thread Duarte Cardoso, Igor
Hi Rajeev, there are no meters as far as I know and I'm not aware of any plans 
at the moment.
What else do you have in mind in terms of monitoring?

Best regards,
Igor.

From: rajeev.satyanaray...@wipro.com [mailto:rajeev.satyanaray...@wipro.com]
Sent: Sunday, June 25, 2017 1:10 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ceilometer][networking-sfc] Meters/Statistics for 
Networking-SFC


Hi All,



I am interested to know if there are any meters available for monitoring SFC
through ceilometer, like the number of flows associated with an SFC or packets
in/out for an SFC, etc.?

If they are available, please let me know how to configure and use them. If
not, are there any plans to support them in coming releases?



Thanking you,



Regards,

Rajeev.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Need volunteer(s) to help migrate project docs

2017-06-26 Thread sfinucan
On Sat, 2017-06-24 at 14:22 +0800, Zhenyu Zheng wrote:
> On Sat, Jun 24, 2017 at 12:23 AM, Matt Riedemann  wrote:
> > The spec [1] with the plan to migrate project-specific docs from
> > docs.openstack.org to each project has merged.
> > 
> > There are a number of steps outlined in there which need people from the
> > project teams, e.g. nova, to do for their project. Some of it we're already
> > doing, like building a config reference, API reference, using the
> > openstackdocstheme, etc. But there are other things like moving the install
> > guide for compute into the nova repo.
> > 
> > Is anyone interested in owning this work? There are enough tasks that it
> > could probably be a couple of people coordinating. It also needs to be done
> > by the end of the Pike release, so time is a factor.
> > 
> > [1] https://review.openstack.org/#/c/472275/
> 
> I will help

As will I. I've already started restructuring the existing docs (and gave
feedback to the spec based on that experience) and will start working on moving
stuff in once done with that. We can probably break this up between us, Kevin.

Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug triage experimentation

2017-06-26 Thread Sylvain Bauza


Le 23/06/2017 18:52, Sean Dague a écrit :
> The Nova bug backlog is just over 800 open bugs, which while
> historically not terrible, remains too large to be collectively usable
> to figure out where things stand. We've had a few recent issues where we
> just happened to discover upgrade bugs filed 4 months ago that needed
> fixes and backports.
> 
> Historically we've tried to just solve the bug backlog with volunteers.
> We've had many a brave person dive into here, and burn out after 4 - 6
> months. And we're currently without a bug lead. Having done a big giant
> purge in the past
> (http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html)
> I know how daunting this all can be.
> 
> I don't think that people can currently solve the bug triage problem at
> the current workload that it creates. We've got to reduce the smart
> human part of that workload.
> 

Thanks for sharing ideas, Sean.

> But, I think that we can also learn some lessons from what active github
> projects do.
> 
> #1 Bot away bad states
> 
> There are known bad states of bugs - In Progress with no open patch,
> Assigned but not In Progress. We can just bot these away with scripts.
> Even better would be to react immediately on bugs like those, that helps
> to train folks how to use our workflow. I've got some starter scripts
> for this up at - https://github.com/sdague/nova-bug-tools
> 

Sometimes (I had no idea why) I noticed the Gerrit hook not working
(i.e. not amending the Launchpad bug with the Gerrit URL), so some of the
bugs I was looking at were actively being worked on (and I had the same
experience myself, although my commit msg was correctly formatted AFAIR).

Either way, what you propose sounds reasonable to me. If you care about
fixing a bug and make yourself the owner of that bug, that also means you
commit to a resolution sooner rather than later (even if I do fail to
apply that to myself...).

> #2 Use tag based workflow
> 
> One lesson from github projects is that the github tracker has no workflow.
> Issues are opened or closed. Workflow has to be invented by every team
> based on a set of tags. Sometimes that's annoying, but often times it's
> super handy, because it allows the tracker to change workflows and not
> try to change the meaning of things like "Confirmed vs. Triaged" in your
> mind.
> 
> We can probably tag for information we know we need a lot more easily. I'm
> considering something like
> 
> * needs.system-version
> * needs.openstack-version
> * needs.logs
> * needs.subteam-feedback
> * has.system-version
> * has.openstack-version
> * has.reproduce
> 
> Some of these a bot can process the text on and tell if that info was
> provided, and comment how to provide the updated info. Some of this
> would be human, but with official tags, it would probably help.
> 

The tags you propose seem to me related to an "Incomplete" vs.
"Confirmed" state of the bug.

If I'm not able to triage the bug because I'm missing information like
the release version or more logs, I put the bug as Incomplete.
I could add those tags, but I don't see where a programmatic approach
could help us.

If I understand correctly, you're rather trying to identify what's
missing in the bug report to provide a clear path of resolution, so we
could mark the bug as Triaged, right? If so, I'd not propose those tags
for the reason I just said, but rather other tags like (disclaimer, I
suck at naming things):

 - rootcause.found
 - needs.rootcause.analysis
 - is.regression
 - reproduced.locally


> #3 machine assisted functional tagging
> 
> I'm playing around with some things that might be useful in mapping new
> bugs into existing functional buckets like: libvirt, volumes, etc. We'll
> see how useful it ends up being.
> 

Log parsing could certainly help. If someone is able to provide a clear
stacktrace of the root exception, we can get the impacted
functional bucket for free in 80% of cases.

I'm not a fan of identifying a domain by text recognition (someone
mentioning libvirt doesn't make it a libvirt bug), so
that's why I'd rather see the kind of log analysis I mentioned.


> #4 reporting on smaller slices
> 
> Build some tooling to report on the status and change over time of bugs
> under various tags. This will help visualize how we are doing
> (hopefully) and where the biggest piles of issues are.
> 
> The intent is the normal unit of interaction would be one of these
> smaller piles. Be they the 76 libvirt bugs, 61 volumes bugs, or 36
> vmware bugs. It would also highlight the rates of change in these piles,
> and what's getting attention and what is not.
> 

I do wonder if Markus already wrote such reporting tools. AFAIR, he had
a couple of very interesting reports (and he also worked hard on the
bugs taxonomy), so we could potentially leverage those.

-Sylvain

> 
> This is going to be kind of an ongoing experiment, but as we currently
> have no one spearheading bug triage, it seemed 

[openstack-dev] [tc][all] Move away from meeting channels

2017-06-26 Thread Flavio Percoco

Hey Y'all,

Not so long ago there was a discussion about how we manage our meeting channels
and whether there's need for more or fewer of them[0]. Good points were made in
that thread in favor of keeping the existing model but some things have changed,
hence this new thread.

More teams - including the Technical Committee[1] - have started to adopt office
hours as a way to provide support and have synchronous discussion. Some of these
teams have also discontinued their IRC meetings or moved to an ad-hoc meetings
model.

As these changes start popping up in the community, we need to have a good way
to track the office hours for each team and allow for teams to meet at the time
they prefer. Before we go deep into the discussion again, I'd like to summarize
what has been discussed in the past (thanks ttx for the summary):

The main objections to just letting people meet anywhere are:
- how do we ensure the channel is logged/accessible
- we won't catch random mentions of our name as easily anymore
- might create a pile-up of meetings at peak times rather than force them
  to spread around
- increases silo effect

Main benefits being:
- No more scheduling nightmare
- More flexibility in listing things in the calendar


Some of the problems above can be solved programmatically - cross-check on
eavesdrop to make sure logging is enabled, for example. The problems that I'm
more worried about are the social ones, because they'll require a change in the
way we interact with each other.

Not being able to easily ping someone during a meeting is kind of a bummer but
I'd argue that assuming someone is in the meeting channel and available at all
times is a mistake to begin with.

There will be conflicts on meeting times. There will be slots that will be used
by several teams as these slots are convenient for cross-timezone interaction.
We can check this and highlight the various conflicts but I'd argue we
shouldn't. We already have some overlaps in the current structure.

The social drawbacks related to this change can be overcome by interacting more
on the mailing list. Ideally, this change should help raise awareness about the
distributed nature of our community, encourage folks to do more office hours,
fewer meetings and, more importantly, to encourage folks to favor the mailing
list over IRC conversations for *some* discussions.

So, should we let teams host IRC meetings in their own channels? Thoughts?

Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108360.html
[1] https://governance.openstack.org/tc/#office-hours


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Suspected SPAM - Re: [vitrage] Nominating Yujun Zhang to Vitrage core

2017-06-26 Thread Weyl, Alexey (Nokia - IL/Kfar Sava)
Congrats ☺

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Monday, June 26, 2017 11:31 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Suspected SPAM - Re: [openstack-dev] [vitrage] Nominating Yujun Zhang 
to Vitrage core

Hi,

I added Yujun Zhang to the vitrage-core team. Yujun, Welcome ☺

Best Regards,
Ifat.

From: דני אופק
Date: Monday, 26 June 2017 at 11:00

+1

On Mon, Jun 26, 2017 at 7:45 AM, Weyl, Alexey (Nokia - IL/Kfar Sava) 
> wrote:
+1

> -Original Message-
> From: Afek, Ifat (Nokia - IL/Kfar Sava) 
> [mailto:ifat.a...@nokia.com]
> Sent: Sunday, June 25, 2017 3:18 PM
> To: OpenStack Development Mailing List (not for usage questions)
> >
>
> Hi,
>
> I’d like to nominate Yujun Zhang to the Vitrage core team.
>
> In the last year Yujun has made a significant contribution in Vitrage[1], both
> by adding new features and by reviewing other people’s work. He has an
> extensive knowledge of the Vitrage architecture and code, and I believe he
> would make a great addition to our team.
>
> Best Regards,
> Ifat.
>
> [1]
> https://review.openstack.org/#/q/owner:zhang.yujunz%2540zte.com.cn+pr
> ojects:openstack/vitrage+is:merged
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Vitrage] New service in Vitrage

2017-06-26 Thread Weyl, Alexey (Nokia - IL/Kfar Sava)
Hi,

We have added a new service to Vitrage, called vitrage-collector.

Vitrage-collector is part of our HA (High Availability) solution.

At the moment, until we finish writing the history support, after restarting
vitrage-graph you have to restart vitrage-collector as well.

BR,
Alexey :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] Nominating Yujun Zhang to Vitrage core

2017-06-26 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

I added Yujun Zhang to the vitrage-core team. Yujun, Welcome ☺

Best Regards,
Ifat.

From: דני אופק 
Date: Monday, 26 June 2017 at 11:00

+1

On Mon, Jun 26, 2017 at 7:45 AM, Weyl, Alexey (Nokia - IL/Kfar Sava) 
> wrote:
+1

> -Original Message-
> From: Afek, Ifat (Nokia - IL/Kfar Sava) 
> [mailto:ifat.a...@nokia.com]
> Sent: Sunday, June 25, 2017 3:18 PM
> To: OpenStack Development Mailing List (not for usage questions)
> >
>
> Hi,
>
> I’d like to nominate Yujun Zhang to the Vitrage core team.
>
> In the last year Yujun has made a significant contribution in Vitrage[1], both
> by adding new features and by reviewing other people’s work. He has an
> extensive knowledge of the Vitrage architecture and code, and I believe he
> would make a great addition to our team.
>
> Best Regards,
> Ifat.
>
> [1]
> https://review.openstack.org/#/q/owner:zhang.yujunz%2540zte.com.cn+pr
> ojects:openstack/vitrage+is:merged
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-26 Thread Henning Schild
On Sun, 25 Jun 2017 10:09:10 +0200,
Sahid Orentino Ferdjaoui wrote:

> On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:
> > On 06/23/2017 09:35 AM, Henning Schild wrote:  
> > > On Fri, 23 Jun 2017 11:11:10 +0200,
> > > Sahid Orentino Ferdjaoui wrote:
> >   
> > > > In Linux RT context, and as you mentioned, the non-RT vCPU can
> > > > acquire some guest kernel lock, then be pre-empted by emulator
> > > > thread while holding this lock. This situation blocks RT vCPUs
> > > > from doing its work. So that is why we have implemented [2].
> > > > For DPDK I don't think we have such problems because it's
> > > > running in userland.
> > > > 
> > > > So for DPDK context I think we could have a mask like we have
> > > > for RT and basically considering vCPU0 to handle best effort
> > > > works (emulator threads, SSH...). I think it's the current
> > > > pattern used by DPDK users.  
> > > 
> > > DPDK is just a library and one can imagine an application that has
> > > cross-core communication/synchronisation needs where the emulator
> > > slowing down vcpu0 will also slow down vcpu1. Your DPDK application
> > > would have to know which of its cores did not get a full pcpu.
> > > 
> > > I am not sure what the DPDK-example is doing in this discussion,
> > > would that not just be cpu_policy=dedicated? I guess normal
> > > behaviour of dedicated is that emulators and io happily share
> > > pCPUs with vCPUs and you are looking for a way to restrict
> > > emulators/io to a subset of pCPUs because you can live with some
> > > of them being not 100%.
> > 
> > Yes.  A typical DPDK-using VM might look something like this:
> > 
> > vCPU0: non-realtime, housekeeping and I/O, handles all virtual
> > interrupts and "normal" linux stuff, emulator runs on same pCPU
> > vCPU1: realtime, runs in tight loop in userspace processing packets
> > vCPU2: realtime, runs in tight loop in userspace processing packets
> > vCPU3: realtime, runs in tight loop in userspace processing packets
> > 
> > In this context, vCPUs 1-3 don't really ever enter the kernel, and
> > we've offloaded as much kernel work as possible from them onto
> > vCPU0.  This works pretty well with the current system.
> >   
> > > > For RT we have to isolate the emulator threads to an additional
> > > > pCPU per guests or as your are suggesting to a set of pCPUs for
> > > > all the guests running.
> > > > 
> > > > I think we should introduce a new option:
> > > > 
> > > >- hw:cpu_emulator_threads_mask=^1
> > > > 
> > > > If on 'nova.conf' - that mask will be applied to the set of all
> > > > host CPUs (vcpu_pin_set) to basically pack the emulator threads
> > > > of all VMs running here (useful for RT context).  
> > > 
> > > That would allow modelling exactly what we need.
> > > In nova.conf we are talking absolute known values, no need for a
> > > mask and a set is much easier to read. Also using the same name
> > > does not sound like a good idea.
> > > And the name vcpu_pin_set clearly suggest what kind of load runs
> > > here, if using a mask it should be called pin_set.  
> > 
> > I agree with Henning.
> > 
> > In nova.conf we should just use a set, something like
> > "rt_emulator_vcpu_pin_set" which would be used for running the
> > emulator/io threads of *only* realtime instances.  
> 
> I don't agree with you: we have a set of pCPUs and we want to
> subtract some of them for the emulator threads. We need a mask. The
> only set we need is the one selecting which pCPUs Nova can use
> (vcpu_pin_set).

At that point it does not really matter whether it is a set or a mask.
They can both express the same thing, though a set is easier to
read/configure. With the same argument you could say that vcpu_pin_set
should be a mask over the host's pCPUs.

As I said before: vcpu_pin_set should be renamed because all sorts of
threads are put there (pcpu_pin_set?). But that would be a bigger change
and should be discussed as a separate issue.
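
To illustrate the equivalence (a toy sketch, not Nova's actual option
parsing code):

  def parse_set(spec):
      # "0-3,8" -> frozenset({0, 1, 2, 3, 8})
      cpus = set()
      for part in spec.split(','):
          if '-' in part:
              lo, hi = part.split('-')
              cpus.update(range(int(lo), int(hi) + 1))
          else:
              cpus.add(int(part))
      return frozenset(cpus)

  def apply_mask(pin_set, mask):
      # "^1,^2" excludes pCPUs 1 and 2 from the pin set
      excluded = {int(tok.lstrip('^')) for tok in mask.split(',')}
      return pin_set - excluded

  vcpu_pin_set = parse_set('0-7')
  # Two ways to say "emulator threads may run anywhere but pCPU 1":
  assert apply_mask(vcpu_pin_set, '^1') == parse_set('0,2-7')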

So far we talked about a compute-node for realtime only doing realtime.
In that case vcpu_pin_set + emulator_io_mask would work. If you want to
run regular VMs on the same host, you can run a second nova, like we do.

We could also use vcpu_pin_set + rt_vcpu_pin_set(/mask). I think that
would allow modelling all cases in just one nova. Having all in one
nova, you could potentially repurpose rt cpus to best-effort and back.
Some day in the future ...

> > We may also want to have "rt_emulator_overcommit_ratio" to control
> > how many threads/instances we allow per pCPU.  
> 
> I'm not really sure I understand this point. If it is to indicate
> that for an isolated pCPU we want X guest emulator threads, the same
> behavior is achieved by the mask. A host for realtime is dedicated to
> realtime with no overcommitment, and the operators know the number of
> host CPUs, so they can easily deduce a ratio and the corresponding mask.

Agreed.

> > > > If on flavor extra-specs It will be applied to 

Re: [openstack-dev] Suspected SPAM - [vitrage] Nominating Yujun Zhang to Vitrage core

2017-06-26 Thread דני אופק
+1

On Mon, Jun 26, 2017 at 7:45 AM, Weyl, Alexey (Nokia - IL/Kfar Sava) <
alexey.w...@nokia.com> wrote:

> +1
>
> > -Original Message-
> > From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
> > Sent: Sunday, June 25, 2017 3:18 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Suspected SPAM - [openstack-dev] [vitrage] Nominating Yujun
> > Zhang to Vitrage core
> >
> > Hi,
> >
> > I’d like to nominate Yujun Zhang to the Vitrage core team.
> >
> > In the last year Yujun has made a significant contribution in
> Vitrage[1], both
> > by adding new features and by reviewing other people’s work. He has an
> > extensive knowledge of the Vitrage architecture and code, and I believe
> he
> > would make a great addition to our team.
> >
> > Best Regards,
> > Ifat.
> >
> > [1]
> > https://review.openstack.org/#/q/owner:zhang.yujunz%2540zte.com.cn+pr
> > ojects:openstack/vitrage+is:merged
> >
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev