Re: [openstack-dev] OpenStack Bug Smash for Queens

2017-10-22 Thread ChangBo Guo
Thanks, Fred, for raising this.

BTW, we have a session about the Bug Smash event at the Sydney Summit [1];
please join us if you're interested.


[1]
https://www.openstack.org/summit/sydney-2017/summit-schedule/events/19746/what-china-developers-brought-to-community-after-6-openstack-bug-smash-events

2017-10-21 15:40 GMT+08:00 Fred Li :

> Hi all OpenStackers,
>
> Since April 2015, there have been 6 OpenStack Bug Smash events in
> China. The OpenStack Bug Smash has become a routine and exciting event
> that OpenStackers look forward to. In the past 5 bug smashes, over 300
> top developers from many companies (Huawei, Intel, CMCC, IBM, Mirantis,
> Awcloud, 99cloud, UnitedStack, EasyStack, LeTV, ZTE, etc.) contributed
> fixes for 600+ bugs to the community. This achievement clearly
> demonstrates the technical strength of Chinese engineers, and shows the
> commitment of Huawei, Intel and the other participating companies to
> OpenStack stability for enterprise customers.
>
> Now, the 7th OpenStack Bug Smash is coming. Intel, Huawei, Fiberhome
> and CESI will host this event. Intel, Huawei and Fiberhome are all
> OpenStack Foundation members [1].
> Come join other OpenStackers to make Queens a grand success!
> Come learn tips and gain a better understanding by working closely
> with other talented developers.
> Each focused project will have a core reviewer on standby to review
> patches and vote on merges.
>
> Queens Bug Smash objective: smash the critical and high-priority bugs
> in key projects. Focused projects: Nova, Neutron, Cinder, Keystone,
> Manila, Heat, Telemetry, Karbor, Tricircle, and the ones you want to add.
>
> To all the project team leaders: you can discuss with your team
> members in the project meeting and mark the bugs you expect OpenStack
> Bug Smash Queens to fix. If you can arrange core reviewers to take
> care of the patches during that week, that will be even more efficient.
>
> Date: from Nov 22 to Nov 24, which is 2 weeks prior to the Queens-2
> milestone [2].
>
> Location: Meeting Room 1, 5th floor, Wuhan International Maker Center,
> Fiberhome Innovation Valley, 88 Youkeyuan Road, Hongshan District,
> Wuhan, Hubei, China [3]
>
>
> Please start to register in [4].
> And start to list the bugs to be fixed in [5].
>
> [1] https://www.openstack.org/foundation/companies/
> [2] https://releases.openstack.org/queens/schedule.html
> [3] https://www.google.co.jp/maps/place/88+You+Ke+Yuan+Lu,+GuangGu+ShangQuan,+Hongshan+Qu,+Wuhan+Shi,+Hubei+Sheng,+China,+430073/@30.5107646,114.392814,17z/data=!3m1!4b1!4m5!3m4!1s0x342ea4ce1537ed17:0x52ff45d6b5dba38c!8m2!3d30.51076!4d114.395008?hl=en
> [4] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Queens-Wuhan
> [5] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Queens-Wuhan-Bugs-List
>
>
> Regards
> Fred Li (李永乐)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-22 Thread ChangBo Guo
Congratulations to the new TC members!

2017-10-21 7:59 GMT+08:00 Kendall Nelson :

> Hello Everyone :)
>
> Please join me in congratulating the 6 newly elected members of the
> Technical Committee (TC)!
>
> Colleen Murphy (cmurphy)
> Doug Hellmann (dhellmann)
> Emilien Macchi (emilienm)
> Jeremy Stanley (fungi)
> Julia Kreger (TheJulia)
> Paul Belanger (pabelanger)
>
> Full results: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ce86063991ef8aae
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates; having a good group of candidates
> helps engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote. We need to
> ensure your voice is heard.
>
> Thank you for another great round.
>
> -Kendall Nelson (diablo_rojo)
>
> [1] https://review.openstack.org/#/c/513881/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Distutils][pbr][devstack][qa] Announcement: Pip 10 is coming, and will move all internal APIs

2017-10-22 Thread Ian Wienand

On 10/22/2017 12:18 AM, Jeremy Stanley wrote:

Right, on Debian/Ubuntu it's not too terrible (cloud-init's
dependencies are usually the biggest issue there and we manage to
avoid them by building our own images with no cloud-init), but on
Red Hat derivatives there are a lot of deep operating system
internals built on top of packaged Python libraries which simply
can't be uninstalled cleanly nor safely.


Also note that even when a package *can* be uninstalled, we have often
had problems with the package coming back and overwriting the
pip-installed version, which leads to very obscure problems.  For this
reason, in various bits of devstack/devstack-gate/dib's pip install
code, we often install packages and then pin them, so that pip can
overwrite them without the package manager later reinstalling over the top.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Distutils][pbr] Announcement: Pip 10 is coming, and will move all internal APIs

2017-10-22 Thread Ian Wienand

On 10/21/2017 07:14 AM, Clark Boylan wrote:

The current issue this change is facing can be seen at
http://logs.openstack.org/25/513825/4/check/legacy-tempest-dsvm-py35/c31deb2/logs/devstacklog.txt.gz#_2017-10-20_20_07_54_838.
The tl;dr is that for distutils-installed packages (basically all the
distro-installed python packages) pip refuses to uninstall them in order
to perform upgrades, because it can't reliably determine where all the
files are. I think this is new pip 10 behavior.

In the general case I think this means we can not rely on global pip
installs anymore. This may be a good thing to bring up with upstream
PyPA as I expect it will break a lot of people in a lot of places (it
will break infra for example too).


Déjà vu!  pip 8 tried this and quickly reverted.  I wrote a long email
with all the details, but then figured that wasn't going to help much,
so I translated it into [1].

-i

[1] https://github.com/pypa/pip/issues/4805
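As an aside, one quick way to list the distutils-style installs that pip 10 refuses to uninstall (those with neither a wheel RECORD nor pip's installed-files.txt) is a short script along these lines. This is a sketch, not devstack code, and it assumes the stdlib importlib.metadata from Python 3.8+:

```python
# Sketch: list distutils-style installs, i.e. distributions that have
# neither a wheel RECORD nor pip's installed-files.txt. pip 10 refuses
# to uninstall these because it can't enumerate their files.
from importlib import metadata

def distutils_installs():
    names = []
    for dist in metadata.distributions():
        has_record = dist.read_text('RECORD') is not None
        has_pip_list = dist.read_text('installed-files.txt') is not None
        if not (has_record or has_pip_list):
            names.append(dist.metadata['Name'] or '<unknown>')
    return sorted(names)

if __name__ == '__main__':
    print('\n'.join(distutils_installs()))
```

On a typical virtualenv this prints nothing, since everything there was installed by pip; on a distro image it surfaces the packages Jeremy describes.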

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Proposing changes to 17UTC team meeting

2017-10-22 Thread Ghanshyam Mann
One typo correction: all the current meetings and office hours are on
Thursday (not Tuesday).

Week1:
- *Thu* 9:00 UTC Office hours in #openstack-qa
- *Thu* 17:00 UTC Meeting in #openstack-meeting
Week2:
- *Thu* 8:00 UTC Meeting in #openstack-meeting

-gmann


On Sat, Oct 21, 2017 at 5:43 AM, Andrea Frittoli
 wrote:
> Dear all,
>
> The current schedule for QA meetings and office hours is as follows
> (alternating weeks):
>
> Week1:
> - Tue 9:00 UTC Office hours in #openstack-qa
> - Tue 17:00 UTC Meeting in #openstack-meeting
> Week2:
> - Tue 8:00 UTC Meeting in #openstack-meeting
>
> Since the 17:00 UTC meeting has rather low (but not zero) attendance, I
> would propose to drop that meeting in favour of a second office hours
> slot, so that we have one office hours slot every week and the meeting
> every second week:
>
> Week1:
> - Tue 9:00 UTC Office hours in #openstack-qa
> Week2:
> - Tue 8:00 UTC Meeting in #openstack-meeting
> - [TBD] Office hours in #openstack-qa
>
> Proposals for the office hours schedule are in the doodle [0].
>
> Let me know your thoughts on this proposal and please vote for the
> time slot in the doodle.
>
> Thank you!
>
> Andrea Frittoli (andreaf)
>
> [0] https://doodle.com/poll/kf6b8847wa2s5mxv
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gertty dashboards

2017-10-22 Thread Jeremy Freudberg
I know nothing about Gertty, but it looks like you have two typos:

- Code-Review-0 should be Code-Review=0
- openstack/shade should be openstack-infra/shade


That being said, there could still be other Gertty-specific things to
discuss here. Hope you get the answers you need!

On Sun, Oct 22, 2017 at 5:56 AM, Sławek Kapłoński 
wrote:

> Hello,
>
> Recently I started using Gertty and I think it's a good tool.
> I have one problem which I don't know how to solve. On the web-based
> gerrit page (review.openstack.org) I have defined a page with my own query:
>
> "(NOT owner:self) status:open label:Code-Review-0,self label:Workflow=0
> (project:openstack/neutron OR project:openstack/neutron-lib OR
> project:openstack/shade) branch:master"
>
> And it gives me a list of many patches (more than 100, I think). The
> problem is that when I configured my own dashboard with exactly the same
> query in the gertty.yml file, only 22 patches were displayed.
> I suppose that gertty only looks at patches which are already in its
> local database. Is that true? And is there any way to change it?
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-22 Thread Clint Byrum
Excerpts from Jeremy Stanley's message of 2017-10-21 13:37:01 +:
> On 2017-10-20 22:50:53 + (+), Fox, Kevin M wrote:
> [...]
> > Ideally, there should be an OpenStack overarching architecture
> > team of some sort to handle this kind of thing I think.
> 
> There was one for a while, but it dissolved due to lack of community
> participation. If you'd like to help reboot it, Clint B. can
> probably provide you with background on the previous attempt.
> 

I'd be in support of reviving the Architecture Working Group (SIG?).

Would need to see more people commit to it though. It mostly felt like
a place for Thierry and me to write down our ideas, and a title to put
on a room at the PTG so we could have cross-project discussions about
our ideas.

That said, there is a cross-project process that works pretty well when
one project needs to ask for changes from other projects:

https://docs.openstack.org/project-team-guide/cross-project.html

I believe the Keystone team followed this process despite some fumbles
early in the v3 story.

> > Without such an entity though, I think the TC is probably
> > currently the best place to discuss it though?
> 
> Contrary to the impression some people seem to have, the TC is not
> primarily composed of cloud architects; it's an elected body of
> community leaders who seek informed input from people like you. I've
> personally found no fault in the process and timeline the Keystone
> team followed in this situation but I'm also not the primary
> audience for their software, so it's good to hear from those who are
> about ways to improve similar cases in the future. However, I also
> understand that no matter how widely and carefully changes are
> communicated, there's only so much anyone can do to avoid surprising
> the subset of users who simply don't pay attention.

Right, the TC is more or less a legislative body. They can set policy
but they don't actually make sure the vision is implemented directly.

I made an argument that there's a need for an executive branch to get
hard things done here:

http://fewbar.com/2017/02/open-source-governance-needs-presidents/

Without some kind of immediate executive that sits above project levels,
we'll always be designing by committee and find our silos getting deeper.

All of that said, I believe the Keystone team did a great job of getting
something hard done. As Morgan states, it was a 100% necessary evolution
and required delicate orchestration. Well done Keystone team!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovn] Error while running unit tests(master branch)

2017-10-22 Thread pranab boruah
Hi,

I just cloned the networking-ovn master branch and ran the unit tests.
I got the following ERROR:

# tox
..
{9}networking_ovn.tests.unit.ovsdb.test_ovsdb_monitor.TestOvnConnection.test_connection_sb_start
[0.015743s] ... ok
Mechanism driver 'ovn' failed in create_port_precommit
Traceback (most recent call last):
  File "networking-ovn-orig/networking-ovn/.tox/py27/src/neutron/neutron/plugins/ml2/managers.py", line 428, in _call_on_drivers
    getattr(driver.obj, method_name)(context)
  File "networking_ovn/ml2/mech_driver.py", line 325, in create_port_precommit
    utils.validate_and_get_data_from_binding_profile(port)
  File "networking_ovn/common/utils.py", line 162, in validate_and_get_data_from_binding_profile
    raise n_exc.InvalidInput(error_message=msg)
InvalidInput: Invalid input for operation: Invalid binding:profile.
vtep-logical-switch 1234 value invalid type.
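For context, the InvalidInput above comes from binding:profile validation along these lines. This is an illustrative sketch with simplified names, not the actual networking-ovn source: a vtep-logical-switch value is expected to be a string, so the integer 1234 is rejected.

```python
# Illustrative sketch of the binding:profile validation that raises the
# InvalidInput seen above; class and function names are simplified and
# do not match the real networking-ovn code.
class InvalidInput(Exception):
    def __init__(self, error_message):
        super().__init__('Invalid input for operation: ' + error_message)

def validate_binding_profile(profile):
    vtep_ls = profile.get('vtep-logical-switch')
    if vtep_ls is not None and not isinstance(vtep_ls, str):
        # Mirrors the message in the traceback: the value must be a string.
        raise InvalidInput(
            'Invalid binding:profile. vtep-logical-switch '
            '%s value invalid type.' % vtep_ls)
    return profile
```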

Is the test suite broken?

Any ideas on how to fix it?

Thanks,
Pranab

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] ironic and traits

2017-10-22 Thread Jay Pipes

Sorry for the delay; I took a week off before starting a new job. Comments inline.

On 10/16/2017 12:24 PM, Dmitry Tantsur wrote:

Hi all,

I promised John to dump my thoughts on traits to the ML, so here we go :)

I see two roles of traits (or kinds of traits) for bare metal:
1. traits that say what the node can do already (e.g. "the node is
doing UEFI boot")
2. traits that say what the node can be *configured* to do (e.g. "the node can
boot in UEFI mode")


There's only one role for traits. #2 above. #1 is state information. 
Traits are not for state information. Traits are only for communicating 
capabilities of a resource provider (baremetal node).


For example, let's say we add the following to the os-traits library [1]

* STORAGE_RAID_0
* STORAGE_RAID_1
* STORAGE_RAID_5
* STORAGE_RAID_6
* STORAGE_RAID_10

The Ironic administrator would add all RAID-related traits to the 
baremetal nodes that had the *capability* of supporting that particular 
RAID setup [2].


When provisioned, the baremetal node would either have RAID configured 
at a certain level or not configured at all.


A very important note: the Placement API and Nova scheduler (or future 
Ironic scheduler) don't care about this. At all. I know it sounds like 
I'm being callous, but I'm not. Placement and scheduling don't care 
about the state of things. They only care about the capabilities of 
target destinations. That's it.



This seems confusing, but it's actually very useful. Say, I have a flavor that
requests UEFI boot via a trait. It will match both the nodes that are already in
UEFI mode, as well as nodes that can be put in UEFI mode.


No :) It will only match nodes that have the UEFI capability. The set of 
providers that have the ability to be booted via UEFI is *always* a 
superset of the set of providers that *have been booted via UEFI*. 
Placement and scheduling decisions only care about that superset -- the 
providers with a particular capability.
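Conceptually, that matching is just a subset test over capability traits. A toy sketch (not actual Placement code; provider names are made up):

```python
# Toy sketch of capability-based matching: a provider qualifies when the
# required traits are a subset of the traits it advertises, regardless of
# its current state. Provider names here are made up.
def matching_providers(providers, required):
    required = set(required)
    return sorted(name for name, traits in providers.items()
                  if required <= set(traits))

providers = {
    'node1': {'STORAGE_RAID_5', 'STORAGE_RAID_1'},  # capable of RAID 5
    'node2': {'STORAGE_RAID_1'},
}

# matching_providers(providers, ['STORAGE_RAID_5']) -> ['node1'],
# whether or not node1 currently has RAID 5 configured.
```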



This idea goes further with deploy templates (new concept we've been thinking
about). A flavor can request something like CUSTOM_RAID_5, and it will match the
nodes that already have RAID 5, or, more interestingly, the nodes on which we
can build RAID 5 before deployment. The UEFI example above can be treated in a
similar way.

This ends up with two sources of knowledge about traits in ironic:
1. Operators setting something they know about hardware ("this node is in UEFI
mode"),
2. Ironic drivers reporting something they
   2.1. know about hardware ("this node is in UEFI mode" - again)
   2.2. can do about hardware ("I can put this node in UEFI mode")


You're correct that both pieces of information are important. However, 
only the "can do about hardware" part is relevant to Placement and Nova.



For case #1 we are planning on a new CRUD API to set/unset traits for a node.


I would *strongly* advise against this. Traits are not for state 
information.


Instead, consider having a DB (or JSON) schema that lists state 
information in fields that are explicitly for that state information.


For example, a schema that looks like this:

{
  "boot": {
    "mode": <bios|uefi>,
    "params": <dict>
  },
  "disk": {
    "raid": {
      "level": <0|1|5|6|10|...>,
      "controller": <string>,
      "driver": <string>,
      "params": <dict>
    }, ...
  },
  "network": {
    ...
  }
}

etc, etc.

Don't use trait strings to represent state information.
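In code form, the separation might look like this (purely illustrative field names, not Ironic's actual schema): traits carry capabilities and are all Placement ever sees, while state lives in its own structure.

```python
# Purely illustrative: capabilities live in traits, current state lives in
# its own schema. Field names are made up, not Ironic's actual data model.
node = {
    'traits': {'STORAGE_RAID_5', 'STORAGE_RAID_1'},    # what the node CAN do
    'properties': {'disk': {'raid': {'level': 5}}},    # what it currently IS
}

def traits_for_placement(node):
    # Only capabilities are reported to Placement; state never becomes a trait.
    return sorted(node['traits'])
```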

Best,
-jay


Case #2 is more interesting. We have two options, I think:

a) Operators still set traits on nodes, and drivers simply validate them. E.g.
an operator sets CUSTOM_RAID_5, and the node's RAID interface checks whether it
is possible to do. The downside is obvious: with a lot of deploy templates
available, it can be a lot of manual work.

b) Drivers report the traits, and they get somehow added to the traits provided
by an operator. Technically, there are sub-cases again:
   b.1) The new traits API returns a union of operator-provided and
driver-provided traits
   b.2) The new traits API returns only operator-provided traits; 
driver-provided
traits are returned e.g. via a new field (node.driver_traits). Then nova will
have to merge the lists itself.

My personal favorite is the last option: I'd like a clear distinction between
different "sources" of traits, but I'd also like to reduce manual work for
operators.
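As a sketch, option b.2 amounts to the consumer (e.g. nova) computing the union itself. Field names here are hypothetical, not an actual Ironic API:

```python
# Sketch of option b.2: operator-provided and driver-provided traits are
# kept in separate fields on the node, and the consumer merges them.
# Both field names are hypothetical.
def effective_traits(node):
    return set(node.get('traits', ())) | set(node.get('driver_traits', ()))

node = {
    'traits': {'CUSTOM_PHYSNET_PUBLIC'},   # operator-provided
    'driver_traits': {'CUSTOM_RAID_5'},    # driver-reported
}
```

Keeping the fields separate preserves the distinction between trait sources while sparing operators the manual work.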

A valid counter-argument is: what if an operator wants to override a
driver-provided trait? E.g. a node can do RAID 5, but I don't want this
particular node to do it for any reason. I'm not sure if it's a valid case, and
what to do about it.

Let me know what you think.

Dmitry


[1] http://git.openstack.org/cgit/openstack/os-traits/tree/
[2] Based on how many attached disks the node has, the presence and 
abilities of a hardware RAID controller, etc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova] Interesting bug when unshelving an instance in an AZ and the AZ is gone

2017-10-22 Thread Jay Pipes

On 10/16/2017 11:22 AM, Matt Riedemann wrote:

This is interesting from the user point of view:

https://bugs.launchpad.net/nova/+bug/1723880

- The user creates an instance in a non-default AZ.
- They shelve offload the instance.
- The admin deletes the AZ that the instance was using, for whatever 
reason.
- The user unshelves the instance which goes back through scheduling and 
fails with NoValidHost because the AZ on the original request spec no 
longer exists.


Now the question is what, if anything, do we do about this bug? Some notes:

1. How reasonable is it for a user to expect in a stable production 
environment that AZs are going to be deleted from under them? We 
actually have a spec related to this but with AZ renames:


https://review.openstack.org/#/c/446446/


I don't think it's reasonable for a user to expect an AZ suddenly gets 
*deleted* from under them, no.


That said, I think it's reasonable for operators to want to *rename* an 
AZ. And because AZs in Nova aren't really *things* [1], attempting to 
change the name of an AZ involves a bunch of nasty DB updates (including 
shadow tables). [2]


2. Should we null out the instance.availability_zone when it's shelved 
offloaded like we do for the instance.host and instance.node attributes? 
Similarly, we would not take into account the 
RequestSpec.availability_zone when scheduling during unshelve. I tend to 
prefer this option because once you unshelve offload an instance, it's 
no longer associated with a host and therefore no longer associated with 
an AZ. However, is it reasonable to assume that the user doesn't care 
that the instance, once unshelved, is no longer in the originally 
requested AZ? Probably not a safe assumption.


Yeah, I don't think this is appropriate.

3. When a user unshelves, they can't propose a new AZ (and I don't think 
we want to add that capability to the unshelve API). So if the original 
AZ is gone, should we automatically remove the 
RequestSpec.availability_zone when scheduling? I tend to not like this 
as it's very implicit and the user could see the AZ on their instance 
change before and after unshelve and be confused.


I don't think this is something we should add to the public API (for 
reasons Matt stated in a followup email to Dean). Instead, I think the 
"rename AZ" functionality should do the needful DB-related tasks to 
change the instance.availability_zone for shelved instances to the new 
AZ name...


4. We could simply do nothing about this specific bug and assert the 
behavior is correct. The user requested an instance in a specific AZ, 
shelved that instance and when they wanted to unshelve it, it's no 
longer available so it fails. The user would have to delete the instance 
and create a new instance from the shelve snapshot image in a new AZ. If 
we implemented Sylvain's spec in #1 above, maybe we don't have this 
problem going forward since you couldn't remove/delete an AZ when there 
are even shelved offloaded instances still tied to it.


I think it's reasonable to prevent deletion of an AZ (whatever that 
actually means... see [1]) when the AZ "has instances in it" (whatever 
that means... see [1])


Best,
-jay


Other options?



[1] AZs in Nova are just metadata key/values on aggregates and string 
values in the instance.availability_zone DB table field that have no FK 
relationship to said metadata key/values


[2] Note that, as I've said before, the entire concept of an 
availability zone in Nova/Cinder/Neutron is completely fictional and 
improperly pretending to be an AWS EC2 availability zone. AZs in Nova 
pretend to be failure domains. They are not anything of the sort.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gertty dashboards

2017-10-22 Thread Sławek Kapłoński
Hello,

Recently I started using Gertty and I think it's a good tool.
I have one problem which I don't know how to solve. On the web-based gerrit page 
(review.openstack.org) I have defined a page with my own query:

"(NOT owner:self) status:open label:Code-Review-0,self label:Workflow=0 
(project:openstack/neutron OR project:openstack/neutron-lib OR 
project:openstack/shade) branch:master"

And it gives me a list of many patches (more than 100, I think). The problem is 
that when I configured my own dashboard with exactly the same query in the 
gertty.yml file, only 22 patches were displayed.
I suppose that gertty only looks at patches which are already in its local 
database. Is that true? And is there any way to change it?

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev