Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Emilien Macchi
On Jul 19, 2017 1:58 PM, "Jeremy Stanley"  wrote:

On 2017-07-19 13:42:43 -0700 (-0700), Emilien Macchi wrote:
[...]
> We also want to remove #openstack-puppet. Not sure who created it
> but it causes confusion.

I don't see any evidence we ever logged it anyway:

http://eavesdrop.openstack.org/irclogs/

So... er... done!

> The real one is #puppet-openstack.

Yeah, and that one's reasonably active and being logged, so you/they
should be all set already?


Indeed, thanks for confirming!

--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [watcher] Stepping down as Watcher spec core

2017-07-19 Thread Emilien Macchi
On Jul 19, 2017 2:36 PM, "Antoine Cabot"  wrote:

Hey guys,

It's been a long time since the last summit and our last discussions!
I hope Watcher is going well and that you are getting more traction
every day in the OpenStack community!

As you may guess, my last 2 months have been very busy with my
relocation to Vancouver with my family. After 8 weeks of active job
search in the cloud industry here in Vancouver, I've got a Senior
Product Manager position at Parsable, a start-up leading the Industry
4.0 revolution. I will continue to deal with very large customers but
in different industries (Oil & Gas, Manufacturing...) to build the
best possible product, leveraging cloud and mobile technologies.

It was a great pleasure to lead the Watcher initiative from its
infancy to the OpenStack Big Tent and be able to work with all of you.
I hope to be part of another open source community in the near future
but now, due to my new responsibilities, I need to step down as a core
contributor to Watcher specs. Feel free to reach out to me if I
still hold restricted rights on Launchpad or anywhere else.

I hope to see you all in Vancouver next year for the summit and be
part of the traditional Watcher dinner (I will try to find the best
place for you guys).


Thanks for your leadership in the project!

Also please ping me when you come to the Island; Victoria is also sweet to
visit ;-)

Cheers,

Antoine

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] office hours report 2017-7-18

2017-07-19 Thread Lance Bragstad
Hi all,

This is a day late, but here is the summary of what we worked on during
office hours yesterday. The full log can be found below [0].

Bug #1689888 in OpenStack Identity (keystone): "/v3/users is
unproportionally slow"
https://bugs.launchpad.net/keystone/+bug/1689888
participants: lbragstad
Verified and triaged

Bug #1703245 in OpenStack Identity (keystone): "Assignment API doesn't
test GET for member urls"
https://bugs.launchpad.net/keystone/+bug/1703245
participants: lbragstad
Triaged

Bug #1704205 in OpenStack Identity (keystone): "GET
/v3/role_assignments?effective_names API fails with unexpected
500 error"
https://bugs.launchpad.net/keystone/+bug/1704205
participants: knikolla, lbragstad, edmondsw, prashkre
Discussed possible solutions, documented workarounds, triaged, and set
target milestone

Bug #1687401 in OpenStack Identity (keystone): "Keystone 403 Forbidden"
https://bugs.launchpad.net/keystone/+bug/1687401
participants: lbragstad
Marked as Incomplete until we have more information/details to recreate

Bug #1687888 in OpenStack Identity (keystone): "creating a federation
protocol returns Bad Request instead of Conflict"
https://bugs.launchpad.net/keystone/+bug/1687888
participants: lbragstad
Marked as Invalid based on the inability to recreate

Bug #1694589 in OpenStack Identity (keystone): "Federation protocol
creation gives error"
https://bugs.launchpad.net/keystone/+bug/1694589
participants: lbragstad
Marked as Invalid based on the inability to recreate

Bug #1697634 in OpenStack Identity (keystone): "AH01630: client denied
by server configuration"
https://bugs.launchpad.net/keystone/+bug/1697634
participants: lbragstad
Marked as Invalid based on configuration

Bug #1702230 in OpenStack Identity (keystone): "fernet token fails with
keystone HA"
https://bugs.launchpad.net/keystone/+bug/1702230
participants: lbragstad
Marked as Invalid based on configuration


[0]
http://eavesdrop.openstack.org/meetings/keystone_office_hours/2017/keystone_office_hours.2017-07-18-19.00.log.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] office hours report 2017-7-7

2017-07-19 Thread Lance Bragstad
I was able to automate some of this report. I figured a follow-up
containing data about what was worked on would be nice.
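
(For the curious, a rough sketch of how a report like this can be pulled
together with launchpadlib follows. It is illustrative only - not
necessarily the script used here - and the consumer name and time window
are made up.)

    from datetime import datetime, timedelta
    from launchpadlib.launchpad import Launchpad

    # Anonymous login is enough to read public bug data.
    lp = Launchpad.login_anonymously('office-hours-report', 'production')
    keystone = lp.projects['keystone']
    # Bugs touched in roughly the last day (window is illustrative).
    since = datetime.utcnow() - timedelta(days=1)
    for task in keystone.searchTasks(modified_since=since):
        bug = task.bug
        print('Bug #%d in keystone: "%s"' % (bug.id, bug.title))
        print('  %s' % bug.web_link)
        print('  status: %s' % task.status)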


Bug #1703467 in OpenStack Identity (keystone): "assert_admin is checking
default policy rule not admin_required"
https://bugs.launchpad.net/keystone/+bug/1703467
participants: lbragstad, edmondsw
Triaged, tagged, set target milestone, and worked on patch

Bug #1696264 in OpenStack Identity (keystone): "Create OpenStack client
environment scripts in Installation Guide INCOMPLETE - doesn't state
path for file"
https://bugs.launchpad.net/keystone/+bug/1696264
participants: lbragstad, wingwj
Triaged and set target milestone

Bug #1703666 in OpenStack Identity (keystone): "Templated catalog does
not handle multi-regions properly"
https://bugs.launchpad.net/keystone/+bug/1703666
participants: lbragstad, eandersson
Triaged, set target milestone, discussed alternatives, and worked on a patch

Bug #1133435 in OpenStack Identity (keystone): "policy should return a
400 if a required field is missing"
https://bugs.launchpad.net/keystone/+bug/1133435
participants: lbragstad, edmondsw
Set status, discussed, and proposed a possible solution based on the work
with policy in code

Bug #1689468 in OpenStack Identity (keystone): "odd keystone behavior
when X-Auth-Token ends with carriage return"
https://bugs.launchpad.net/keystone/+bug/1689468
participants: gagehugo, kaerie
Reproposed patch in review

Bug #1703369 in OpenStack Identity (keystone): "get_identity_providers
policy should be singular"
https://bugs.launchpad.net/keystone/+bug/1703369
participants: lbragstad, edmondsw
Set priority, target to series, set target milestone, proposed and
reviewed patch, discussed backport procedure

Bug #1703438 in keystoneauth: "Discover.version_data: Empty max_version
results in max_microversion=None even if version is specified"
https://bugs.launchpad.net/keystoneauth/+bug/1703438
participants: efried, mordred, morgan
Merged fix

Bug #1703447 in keystoneauth: "URL caching in
EndpointData._run_discovery is busted"
https://bugs.launchpad.net/keystoneauth/+bug/1703447
participants: efried, morgan
Merged fix

Bug #1689468 in keystonemiddleware: "odd keystone behavior when
X-Auth-Token ends with carriage return"
https://bugs.launchpad.net/keystonemiddleware/+bug/1689468
participants: gagehugo, kaerie
Reproposed patch in review


For what it's worth, I also apparently thought office hours occurred on
the 7th when it was actually on the 11th.



On 07/11/2017 08:35 PM, Lance Bragstad wrote:
>
> Hey all,
>
> This is a summary of what was worked on today during office hours.
> Full logs of the meeting can be found below:
>
> http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-07-11-19.00.log.html
>
> *The future of the templated catalog backend*
>
> Some issues were uncovered, or just resurfaced, with the templated
> catalog backend. The net of the discussion boiled down to - do we fix
> it or remove it? The answer actually ended up being both. It was
> determined that instead of trying to maintain and fix the existing
> templated backend, we should deprecate it for removal [0]. Since it
> does provide some value, it was suggested that we can start
> implementing a new backend based on YAML to fill the purpose instead.
> The advantage here is that the approach is directed towards a specific
> format (YAML). This should hopefully make things easier for both
> developers and users.
>
> [0] https://review.openstack.org/#/c/482714/
>
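(To make the YAML idea concrete: a backend could load a file along the
lines below. This is purely hypothetical - no format has been specified
yet - and every key shown here is invented.)

    import yaml  # PyYAML

    CATALOG_YAML = """
    RegionOne:
      identity:
        public: https://keystone.example.com/v3
      compute:
        public: https://nova.example.com/v2.1
    """

    # A backend would parse the file once and answer catalog
    # lookups from the resulting structure.
    catalog = yaml.safe_load(CATALOG_YAML)
    print(catalog['RegionOne']['compute']['public'])
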
> *Policy fixes*
>
> All the policy-in-code work has exposed several issues with policy
> defaults in keystone. We spent time as a group going through several
> of the bugs [0] [1] [2] [3], the corresponding fixes, and impact. One
> of which will be backported specifically for the importance of
> communicating a release note to stable users [0].
>
> [0] https://bugs.launchpad.net/keystone/+bug/1703369
> [1] https://bugs.launchpad.net/keystone/+bug/1703392
> [2] https://bugs.launchpad.net/keystone/+bug/1703467
> [3] https://bugs.launchpad.net/keystone/+bug/1133435
>
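(For readers unfamiliar with policy-in-code: defaults move out of
policy.json and into Python, roughly as in the sketch below. The rule
name and check string are illustrative, not keystone's actual defaults.)

    from oslo_policy import policy

    rules = [
        policy.RuleDefault(
            name='identity:get_identity_provider',
            check_str='rule:admin_required'),
    ]

    def list_rules():
        # Registered through an oslo.policy entry point so operators
        # only need to override the defaults they want to change.
        return rules
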
> *Additional bugs worked*
>
> Transient bug with security compliance or PCI-DSS:
> https://bugs.launchpad.net/keystone/+bug/1702211
> Request header issues: https://bugs.launchpad.net/keystone/+bug/1689468
>
>
> I hope to find ways to automate most of what is communicated in this
> summary. Until then I'm happy to hear feedback if you find the report
> lacking in a specific area.
>
>
> Thanks,
>
> Lance
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-19 Thread Monty Taylor

On 07/19/2017 12:11 AM, Zane Bitter wrote:

On 17/07/17 23:12, Lance Bragstad wrote:

Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application 
credentials dependent on several cycles of policy work. Right?


My thought here was that if this were the case (i.e. persistent 
credentials are OK provided the user can lock down the privileges) then 
you could make a case that the current spec is on the right track. For 
now we implement the application credentials as non-persistent, people 
who know about it use it at their own risk, and for people who don't 
there's no exposure. Later on we add the authorisation stuff and relax 
the non-persistence requirement.


On further reflection, I'm not convinced by this - if we care about 
protecting people who don't intentionally use/know about the feature 
now, then we should probably still care once the tools are in place for 
the people who are using it intentionally to lock it down tightly.


So I'm increasingly convinced that we need to do one of two things. Either:

* Agree with Colleen (elsewhere in the thread) that persistent 
application credentials are still better than the status quo and 
reinstate the project-scoped lifecycle in accordance with the original 
intent of the spec; or


* Agree that the concerns raised by Morgan & Adam will always apply, and 
look for a solution that gives us automatic key rotation - which might 
be quite different in shape (I can elaborate if necessary).


(That said, I chatted about this briefly with Monty yesterday and he 
said that his recollection was that there is a long-term solution that 
will keep everyone happy. He'll try to remember what it is once he's 
finished on the version discovery stuff he's currently working on.)


Part of this comes down to the fact that there are actually multiple 
scenarios, and persistent credentials only apply to a scenario that 
typically requires a human with elevated privileges.


SO - I think we can get a long way forward by divvying up some 
responsibilities clearly.


What I mean is:

* The simple consumer case ("typical public cloud") is User-per-Project 
with User lifecycle tied to Project lifecycle. In this case the idea of 
a 'persistent' credential is meaningless, because there is no 'other' 
User with access to the Project. If the User in this scenario creates a 
Credential it doesn't actually matter what the Credential lifecycle is, 
because closing the Account is ultimately about disabling or deleting 
access to the Project in question. We can and should help the folks who 
are running clouds in this model with $something (we need to talk 
details) so that if they are running in this model they don't 
accidentally or by default leave a door open when they think they've 
disabled someone's User as part of shutting off their Account. But in 
this scenario OpenStack adding project-persistent credentials is not a 
big deal - it doesn't provide value. (Meanwhile, a User in that scenario, 
who typically does not have the Role to create a new User, being able to 
manage Application Credentials is a HUGE win.)


* The other scenario is where there is more than one Human who has a 
User that has been granted Roles on a Project. This is the one where 
project-lifecycle credentials are meaningful and valuable, but it's also 
one that involves some Human with elevated admin-style privileges having 
been involved at some point, because that is required to assign Users 
Roles in the Project in the first place.


I believe if we divide application credentials into two kinds:

1) Application Credentials with lifecycle tied to User
2) Application Credentials with lifecycle tied to Project

Then I think it's ok for the ability to do (2) to require a specific 
Role in policy. If we do that, then whatever Human it is that is mapping 
multiple Users into a single Project can decide whether any of those 
Users should be granted the ability to make Project-lifecycle 
Application Credentials. Such a Human is already a Human who has a User 
with elevated permissions, as you have to be to assign Roles to Users in 
Projects.
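
To make the two kinds concrete: under the spec currently under review,
creating a kind-(1) Application Credential might look roughly like the
sketch below. The endpoint, payload and all values are assumptions based
on the draft and may well change.

    import requests

    KEYSTONE = 'https://keystone.example.com/v3'  # hypothetical cloud
    TOKEN = 'gAAAA...'   # a normal (non-admin) user token
    USER_ID = 'abc123'   # hypothetical user id

    body = {'application_credential': {
        'name': 'team-ci-agent',
        'description': 'used by automation, not a human',
    }}
    # Kind (1): lifecycle tied to the User who created it. Kind (2)
    # would additionally need the specific Role described above before
    # the credential could outlive that User.
    resp = requests.post(
        '%s/users/%s/application_credentials' % (KEYSTONE, USER_ID),
        headers={'X-Auth-Token': TOKEN}, json=body)
    print(resp.status_code, resp.json())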


In any case, as I mentioned in the other mail, I think there are a bunch 
of details here that are going to require us being in the room - and 
everyone realizing that everyone's use cases and everyone's concerns are 
important. If we dig in, I'm sure we can come out on the other side with 
happiness and joy.




I'm trying to avoid taking a side here because everyone is right. 

++

Currently anybody who wants to do anything remotely 'cloudy' (i.e. have 
the application talk to OpenStack APIs) has to either share their 
personal password with the application (and by extension their whole


Or - create an account in the Team's name and by storing the password 
for that account realizing that everyone on the team has access to the 
password so 

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-19 Thread Monty Taylor

On 07/19/2017 12:18 AM, Zane Bitter wrote:

On 18/07/17 10:55, Lance Bragstad wrote:


Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application 
credentials dependent on several cycles of policy work. Right?


I think having the ability to communicate deprecations through 
oslo.policy would help here. We could use it to move towards better 
default roles, which requires being able to set minimum privileges.


Using the current workflow requires operators to define the minimum 
privileges for whatever is using the application credential, and work 
that into their policy. Is that the intended workflow that we want to 
put on the users and operators of application credentials?


The plan is to add an authorisation mechanism that is user-controlled 
and independent of the (operator-controlled) policy. The beginnings of 
this were included in earlier drafts of the spec, but were removed in 
patch set 19 in favour of leaving them for a future spec:


https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst 


Yes - that's right - and I expect to start work on that again as soon as 
this next keystoneauth release with version discovery is out the door.


It turns out there are different POVs on this topic, and it's VERY 
important to be clear which one we're talking about at any given point 
in time. A bunch of the confusion just in getting as far as we've gotten 
so far came from folks saying words like "policy" or "trusts" or "ACLs" 
or "RBAC" - but not clarifying which group of cloud users they were 
discussing and from what context.


The problem that Zane and I are discussing and advocating for is for 
UNPRIVILEGED users who neither deploy nor operate the cloud but who 
use the cloud to run applications.


Unfortunately, neither the current policy system nor trusts are useful 
in any way, shape, or form for those humans. Policy and trusts are tools 
for cloud operators to take a certain set of actions.


Similarly, the concern from the folks who are not in favor of 
project-lifecycled application credentials is the one that Zane outlined 
- that there will be $someone with access to those credentials after a 
User change event, and thus $security will be compromised.


There is a balance that can and must be found. The use case Zane and I 
are talking about is ESSENTIAL, and literally every single human who is 
actually using OpenStack to run applications needs it. They needed it 
last year, in fact, and they are doing things like writing ssh-agent-like 
daemons in which they can store their corporate LDAP credentials so that 
their automation will work, because we're not giving them a workable 
option.


That said, the concern about not letting a thing out the door that is 
insecure by design, like PHP4's globally scoped URL variables, is also 
super important.


So we need to find a design that meets both goals.

I have thoughts on the topic, but have been holding off until 
version-discovery is out the door. My hunch is that, like application 
credentials, we're not going to make significant headway without getting 
humans in the room - because the topic is WAY too fraught with peril.


I propose we set aside time at the PTG to dig in to this. Between Zane 
and me and the Keystone core team, I have confidence we can find a way out.


Monty

PS. It will not help to solve limited-scope before we solve this. 
Limited scope is an end-user opt-in action and having it does not remove 
the concerns that have been expressed.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]HP proliant create logical drive not according to the target_raid_config

2017-07-19 Thread 王俊
Ruby and Wan-yen,
 Thanks for your reply. I have already sent the steps and logs to the ilo 
driver group.

Confidential: This message is intended solely for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or disseminate it in any way, and notify the sender of the misdelivery. Thank you!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-19 Thread Matt Riedemann

On 7/19/2017 6:16 AM, Sean Dague wrote:

I was just starting to look through some logs to see if I could line up
request ids (part of the global request id efforts), when I realized that in
the move to uwsgi by default, we've entirely lost the INFO wsgi
request logs. :(

Instead of the old format (which was coming out of oslo.service) we get
the following -
http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-n-api.txt.gz#_Jul_19_03_44_58_233532


That definitely takes us a step backwards in understanding the world, as
we lose our request id on entry that was extremely useful to match up
everything. We hit a similar issue with placement, and added custom
paste middleware for that. Maybe we need to consider a similar thing
here, that would only emit if running under uwsgi/apache?

Thoughts?

-Sean
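
For illustration, middleware in that spirit can be quite small. The
sketch below is illustrative only - the class name and the request-id
environ key are assumptions, not nova's or placement's actual code.

    import logging
    import time

    LOG = logging.getLogger(__name__)

    class RequestLogMiddleware(object):
        """Log one INFO line per request, including a request id."""

        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            start = time.time()
            status = ['-']

            def _start_response(code, headers, exc_info=None):
                status[0] = code
                return start_response(code, headers, exc_info)

            response = self.application(environ, _start_response)
            LOG.info('"%s %s" status: %s time: %.6f request_id: %s',
                     environ.get('REQUEST_METHOD'),
                     environ.get('PATH_INFO'),
                     status[0], time.time() - start,
                     environ.get('HTTP_X_OPENSTACK_REQUEST_ID', '-'))
            return response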



I'm noticing some other weirdness here:

http://logs.openstack.org/65/483565/4/check/gate-tempest-dsvm-py35-ubuntu-xenial/9921636/logs/screen-n-sch.txt.gz#_Jul_19_20_17_18_801773

The first part of the log message got cut off:

Jul 19 20:17:18.801773 ubuntu-xenial-infracloud-vanilla-9950433 
nova-scheduler[22773]: 
-01dc-4de3-9da7-8eb3de9e305e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active'), 
'a4eba582-075a-4200-ae6f-9fc7797c95dd':


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How should we go about removing legacy VIF types in Queens?

2017-07-19 Thread Kevin Benton
Yeah, if one clearly belongs to a single vendor, moving is definitely the
way to go.

OVS itself is a good example of one that is used by lots of drivers. Since
it's in os-vif, maybe we should do the same for any others without a clear
association (e.g. vif_type='tap' is about as vendor-agnostic as you can
get).
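
For anyone who hasn't looked at what a conversion involves: an os-vif
plugin is roughly the skeleton below. This is a sketch - check os-vif
itself for the exact base-class contracts - and 'tap' here is just the
hypothetical vendor-agnostic example from above.

    from os_vif import objects
    from os_vif import plugin

    class TapPlugin(plugin.PluginBase):
        """Hypothetical plugin for vif_type='tap'."""

        def describe(self):
            return objects.host_info.HostPluginInfo(
                plugin_name='tap',
                vif_info=[objects.host_info.HostVIFInfo(
                    vif_object_name=objects.vif.VIFGeneric.__name__,
                    min_version='1.0', max_version='1.0')])

        def plug(self, vif, instance_info):
            pass  # create and configure the tap device here

        def unplug(self, vif, instance_info):
            pass  # tear the device back down here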

On Wed, Jul 19, 2017 at 3:31 AM, Stephen Finucane 
wrote:

> On Thu, 2017-07-13 at 07:54 -0600, Kevin Benton wrote:
> > On Thu, Jul 13, 2017 at 7:26 AM, Stephen Finucane 
> wrote:
> >
> > > os-vif has been integrated into nova since the newton cycle. With the
> > > integration of os-vif, the expectation is that all the old, non-os-vif
> > > plugging/unplugging code found in [1] will be replaced by code that
> > > harnesses
> > > os-vif plugins [2]. This has happened for a few of the VIF types, and
> newer
> > > VIFs are being added in this manner [3]. However, there are quite a few
> > > VIFs
> > > that are still using the legacy path, and I think it's about time we
> > > started
> > > moving things forward. Doing so allows us to continue to progress on
> > > passing
> > > os-vif objects from neutron and remove the large swathes of legacy code
> > > still
> > > found in nova.
> > >
> > > I've opened a bug against networking-bigswitch [4] for one of these VIF
> > > types,
> > > IVS, and I'm thinking I'll do the same for a lot of the other VIF types
> > > where I
> > > can find definite vendors. Is there anything else we can do though? At
> some
> > > point we're going to have to just start deleting code and I'd like to
> avoid
> > > leaving operators in the lurch.
> >
> > Some of the stuff like '802.1qbh' isn't particularly vendor specific so
> I'm
> > not sure who will host it and a repo just for that seems like a bit much.
> > Should we just bite the bullet and convert them in the nova tree or put
> them
> > in os-vif?
>
> That VIF type actually seems to be a Cisco-only option [1][2], but I get
> what you're saying. I think we can definitely move some of them, though
> (IVS, for a start). Perhaps moving the ones that *do* have clear owners to
> their respective packages is the way to go?
>
> Stephen
>
> [1] http://codesearch.openstack.org/?q=802.1qbh&i=nope&files=&repos=
> [2] https://git.openstack.org/cgit/openstack/networking-cisco/tree/networking_cisco/plugins/ml2/drivers/cisco/ucsm/constants.py
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-19 Thread Matt Riedemann

On 7/19/2017 6:16 AM, Sean Dague wrote:

We hit a similar issue with placement, and added custom
paste middleware for that. Maybe we need to consider a similar thing
here, that would only emit if running under uwsgi/apache?


For example, this:

http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-placement-api.txt.gz#_Jul_19_03_41_21_429324

If it's not optional for placement, why would we make it optional for 
the compute API? Would turning it on always make it log the request IDs 
twice or something?


Is this a problem for glance/cinder/neutron/keystone and whoever else is 
logging request IDs in the API?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][python3][congress] locally successful devstack setup fails in check-job

2017-07-19 Thread Eric K


On 7/19/17, 1:11 PM, "Clark Boylan"  wrote:

>On Tue, Jul 18, 2017, at 12:47 PM, Eric K wrote:
>> Hi all, looking for some hints/tips. Thanks so much in advance.
>> 
>> My local python3 devstack setup [2] succeeds, but in check-job a
>> similarly
>> configured devstack setup [1] fails for not installing congress client.
>> 
>> ./stack.sh:1439:check_libs_from_git
>> /opt/stack/new/devstack/inc/python:401:die
>> [ERROR] /opt/stack/new/devstack/inc/python:401 The following
>> LIBS_FROM_GIT
>> were not installed correct: python-congressclient
>> 
>> 
>> It seems that the devstack setup in check-job never attempted to install
>> congress client. Comparing the log [4] in my local run to the log in
>> check-job [3], all these steps in my local log are absent from the
>> check-job log:
>> ++/opt/stack/congress/devstack/settings:source:9
>> CONGRESSCLIENT_DIR=/opt/stack/python-congressclient
>> 
>> ++/opt/stack/congress/devstack/settings:source:52
>> 
>> CONGRESSCLIENT_REPO=git://git.openstack.org/openstack/python-congressclient.git
>> 
>> Cloning into '/opt/stack/python-congressclient'…
>
>You won't see this logged by devstack because devstack-gate does all of
>the git repo setup beforehand to ensure that the correct git refs are
>checked out.
>
>> 
>> Check python version for : /opt/stack/python-congressclient
>> Automatically using 3.5 version to install
>> /opt/stack/python-congressclient based on classifiers
>> 
>> 
>> Installing collected packages: python-congressclient
>>   Running setup.py develop for python-congressclient
>> Successfully installed python-congressclient
>> 
>> 
>> [1] Check-job config:
>> 
>> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L65
>> 
>> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L111
>> 
>> [2] Local devstack local.conf:
>> https://pastebin.com/qzuYTyAE
>> 
>> [3] Check-job devstack log:
>> 
>> http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/devstacklog.txt.gz
>> 
>> [4] Local devstack log:
>> https://ufile.io/c9jhm
>
>My best guess of what is happening here is that python-congressclient is
>being installed to python2 from source so then when devstack checks if
>python-congressclient is installed properly against python3 it fails.
>You'll want to make sure that whatever is installing
>python-congressclient is doing so against the appropriate python.

Thanks a lot Clark!

Now pursuing the guess that install was done in wrong python version.

I was actually looking at the wrong log. Here is the correct one.
http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz

In this log, I see it successfully installing congress client here:
| Installing collected packages: python-congressclient

| Running setup.py develop for python-congressclient

| Successfully installed python-congressclient

| + ./stack.sh:main:941 : use_library_from_git python-openstackclient

| + inc/python:use_library_from_git:378 : local enabled=1
| + inc/python:use_library_from_git:379 : [[ python-congressclient =
\A\L\L ]]
| + inc/python:use_library_from_git:379 : [[ ,python-congressclient, =~
,python-openstackclient, ]]
| + inc/python:use_library_from_git:380 : return 1

(http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz#_2017-07-19_06_26_31_546)

From then on there is nothing noteworthy re: congress client until it says
the client is not installed correctly:
| + inc/python:check_libs_from_git:395 : lib_installed_from_git
python-congressclient

...
| + inc/python:check_libs_from_git:401 : die 401 'The following
LIBS_FROM_GIT were not installed correct: python-congressclient'

(http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz#_2017-07-19_06_36_41_201)

Is there a way to tell from these logs whether the install is being done
in python2 or python3? From this line in the log it seems to be doing the
right thing:
| Automatically using 3.5 version to install
/opt/stack/new/python-congressclient based on classifiers

(http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz#_2017-07-19_06_26_24_886)

Thanks again!



>
>Clark
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [infra][python3][congress] locally successful devstack setup fails in check-job

2017-07-19 Thread Eric K
Thanks a lot Jeremy. I'm now running the reproduce.sh to see what happens.

> [3] Check-job devstack log:
> 
>http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-my
>sql-ubuntu-xenial-nv/7ae2814/logs/devstacklog.txt.gz

And oops, I linked to an older run of the check-job devstack log. Here's
a more appropriate version:
http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz

In this log, I see it successfully installing congress client here:
| Installing collected packages: python-congressclient

|   Running setup.py develop for python-congressclient

| Successfully installed python-congressclient

| + ./stack.sh:main:941  :   use_library_from_git
python-openstackclient

| + inc/python:use_library_from_git:378  :   local enabled=1
| + inc/python:use_library_from_git:379  :   [[ python-congressclient
= \A\L\L ]]
| + inc/python:use_library_from_git:379  :   [[
,python-congressclient, =~ ,python-openstackclient, ]]
| + inc/python:use_library_from_git:380  :   return 1

(http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz#_2017-07-19_06_26_31_546)

From then on there is nothing noteworthy re: congress client until it says
the client is not installed correctly:
| + inc/python:check_libs_from_git:395   :   lib_installed_from_git
python-congressclient

...
| + inc/python:check_libs_from_git:401   :   die 401 'The following
LIBS_FROM_GIT were not installed correct:  python-congressclient'

(http://logs.openstack.org/53/485053/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7f07b73/logs/devstacklog.txt.gz#_2017-07-19_06_36_41_201)

Clark suggested that perhaps the install for python-congressclient was
going to python2 instead of python3. So I'm investigating that.

Thanks again!

On 7/19/17, 11:19 AM, "Jeremy Stanley"  wrote:

>On 2017-07-18 12:47:07 -0700 (-0700), Eric K wrote:
>> Hi all, looking for some hints/tips. Thanks so much in advance.
>> 
>> My local python3 devstack setup [2] succeeds, but in check-job a
>>similarly
>> configured devstack setup [1] fails for not installing congress client.
>> 
>> ./stack.sh:1439:check_libs_from_git
>> /opt/stack/new/devstack/inc/python:401:die
>> [ERROR] /opt/stack/new/devstack/inc/python:401 The following
>>LIBS_FROM_GIT
>> were not installed correct: python-congressclient
>> 
>> 
>> It seems that the devstack setup in check-job never attempted to install
>> congress client. Comparing the log [4] in my local run to the log in
>> check-job [3], all these steps in my local log are absent from the
>> check-job log:
>> ++/opt/stack/congress/devstack/settings:source:9
>> CONGRESSCLIENT_DIR=/opt/stack/python-congressclient
>> 
>> ++/opt/stack/congress/devstack/settings:source:52
>> 
>> CONGRESSCLIENT_REPO=git://git.openstack.org/openstack/python-congressclient.git
>> 
>> Cloning into '/opt/stack/python-congressclient'
>> 
>> Check python version for : /opt/stack/python-congressclient
>> Automatically using 3.5 version to install
>> /opt/stack/python-congressclient based on classifiers
>> 
>> 
>> Installing collected packages: python-congressclient
>>   Running setup.py develop for python-congressclient
>> Successfully installed python-congressclient
>> 
>> 
>> [1] Check-job config:
>> 
>> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L65
>> 
>> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L111
>> 
>> [2] Local devstack local.conf:
>> https://pastebin.com/qzuYTyAE
>> 
>> [3] Check-job devstack log:
>> 
>> http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/devstacklog.txt.gz
>> 
>> [4] Local devstack log:
>> https://ufile.io/c9jhm
>
>Did you attempt comparison to the local.conf the job used?
>
>http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/local.conf.txt.gz
>
>Also, if you haven't seen it, every devstack-gate run includes a
>convenience script you should be able to use to reproduce locally
>with the same settings:
>
>http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/reproduce.sh
>
>Further, https://review.openstack.org/484158 seems to have changed
>the behavior of the job since the log you posted from the 14th. Is
>the result still the same in this case?
>-- 
>Jeremy Stanley
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not 

[openstack-dev] [watcher] Stepping down as Watcher spec core

2017-07-19 Thread Antoine Cabot
Hey guys,

It's been a long time since the last summit and our last discussions!
I hope Watcher is going well and that you are getting more traction
every day in the OpenStack community!

As you may guess, my last 2 months have been very busy with my
relocation to Vancouver with my family. After 8 weeks of active job
search in the cloud industry here in Vancouver, I've got a Senior
Product Manager position at Parsable, a start-up leading the Industry
4.0 revolution. I will continue to deal with very large customers but
in different industries (Oil & Gas, Manufacturing...) to build the
best possible product, leveraging cloud and mobile technologies.

It was a great pleasure to lead the Watcher initiative from its
infancy to the OpenStack Big Tent and be able to work with all of you.
I hope to be part of another open source community in the near future
but now, due to my new responsibilities, I need to step down as a core
contributor to Watcher specs. Feel free to reach out to me if I
still hold restricted rights on Launchpad or anywhere else.

I hope to see you all in Vancouver next year for the summit and be
part of the traditional Watcher dinner (I will try to find the best
place for you guys).

Cheers,

Antoine

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [watcher] Nominate Yumeng Bao to the core team

2017-07-19 Thread Antoine Cabot
+1

On Wed, Jul 12, 2017 at 7:31 AM, Susanne Balle  wrote:
> +1
>
> On Fri, Jun 30, 2017 at 4:31 AM, Hidekazu Nakamura
>  wrote:
>>
>> +1
>>
>> > -Original Message-
>> > From: Чадин Александр (Alexander Chadin)
>> > [mailto:a.cha...@servionica.ru]
>> > Sent: Tuesday, June 27, 2017 10:44 PM
>> > To: OpenStack Development Mailing List
>> > 
>> > Subject: [openstack-dev] [watcher] Nominate Yumeng Bao to the core team
>> >
>> > Hi watcher folks,
>> >
>> > I’d like to nominate Yumeng Bao to the core team. She has made a lot of
>> > contributions including specifications,
>> > features and bug fixes. Yumeng has attended PTG and Summit with her
>> > presentation related to the Watcher.
>> > Yumeng is active on IRC channels and takes part in weekly meetings as
>> > well.
>> >
>> > Please, vote with +1/-1.
>> >
>> > Best Regards,
>> > _
>> > Alexander Chadin
>> > OpenStack Developer
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]HP proliant create logical drive not according to the target_raid_config

2017-07-19 Thread Wan-yen Hsu
Hi Wangjun,

   Can you provide more info?

What driver did you use?
What's your target_raid_config?
What do the logical drives look like after the configuration?
Any log, server and storage hardware and firmware version info you can
share?

You are welcome to contact ilo_driv...@groups.ext.hpe.com  for
assistance.
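
For comparison, a typical target_raid_config is a JSON document along
these lines (values are illustrative; see the ironic RAID documentation
for the full schema):

    target_raid_config = {
        "logical_disks": [
            {
                "size_gb": 100,
                "raid_level": "1",      # mirrored root volume
                "is_root_volume": True,
            },
            {
                "size_gb": "MAX",       # use the remaining space
                "raid_level": "5",
            },
        ]
    }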

Thanks!

Regards,
wanyen





On Wed, Jul 19, 2017 at 4:55 AM, 王俊  wrote:

> Hi,
>
>  May I ask a question about RAID? I set target_raid_config before I
> set the node state to 'provide', but when I make the node available, the
> server's logical drives do not match my configuration. I don't know why. Who
> can give some help?
>
> Confidential: This message is intended solely for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or disseminate it in any way, and notify the sender of the misdelivery. Thank you!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Jeremy Stanley
On 2017-07-19 13:42:43 -0700 (-0700), Emilien Macchi wrote:
[...]
> We also want to remove #openstack-puppet. Not sure who created it
> but it causes confusion.

I don't see any evidence we ever logged it anyway:

http://eavesdrop.openstack.org/irclogs/

So... er... done!

> The real one is #puppet-openstack.

Yeah, and that one's reasonably active and being logged, so you/they
should be all set already?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Emilien Macchi
On Wed, Jul 19, 2017 at 12:24 PM, Jeremy Stanley  wrote:
> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
>
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
>
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb
> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr

We also want to remove #openstack-puppet.
Not sure who created it but it causes confusion.

The real one is #puppet-openstack.

Thanks!

> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
>
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Amy Marrich
Yeah, I'd say stop logging it, and we'll request being added back in should
we get going again.

Amy(spotz)

On Wed, Jul 19, 2017 at 3:22 PM, Jeremy Stanley  wrote:

> On 2017-07-19 15:15:51 -0500 (-0500), Amy Marrich wrote:
> > Sorry to see the Diversity channel go as we really need to get the
> > Working Group back up and running but it never was really high
> > volume.
> [...]
>
> If you anticipate activity picking up soon in that channel we can
> certainly keep logging it for now. Alternatively, even if we stop
> logging it today, you can always submit a change to add logging back
> later if people decide to start using that channel for discussions
> again.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Jeremy Stanley
On 2017-07-19 15:15:51 -0500 (-0500), Amy Marrich wrote:
> Sorry to see the Diversity channel go as we really need to get the
> Working Group back up and running but it never was really high
> volume.
[...]

If you anticipate activity picking up soon in that channel we can
certainly keep logging it for now. Alternatively, even if we stop
logging it today, you can always submit a change to add logging back
later if people decide to start using that channel for discussions
again.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Amy Marrich
Sorry to see the Diversity channel go, as we really need to get the Working
Group back up and running, but it never was really high volume.

Amy(spotz)

On Wed, Jul 19, 2017 at 2:24 PM, Jeremy Stanley  wrote:

> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
>
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
>
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb
> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr
>
> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
>
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-07-19 19:24:00 +:
> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
> 
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
> 
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb
> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr
> 
> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
> 
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.

All of those look like good candidates for cleanup. We could easily move
#openstack-python3 to #openstack-dev.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][python3][congress] locally successful devstack setup fails in check-job

2017-07-19 Thread Clark Boylan
On Tue, Jul 18, 2017, at 12:47 PM, Eric K wrote:
> Hi all, looking for some hints/tips. Thanks so much in advance.
> 
> My local python3 devstack setup [2] succeeds, but in check-job a
> similarly
> configured devstack setup [1] fails for not installing congress client.
> 
> ./stack.sh:1439:check_libs_from_git
> /opt/stack/new/devstack/inc/python:401:die
> [ERROR] /opt/stack/new/devstack/inc/python:401 The following
> LIBS_FROM_GIT
> were not installed correct: python-congressclient
> 
> 
> It seems that the devstack setup in check-job never attempted to install
> congress client. Comparing the log [4] in my local run to the log in
> check-job [3], all these steps in my local log are absent from the
> check-job log:
> ++/opt/stack/congress/devstack/settings:source:9
> CONGRESSCLIENT_DIR=/opt/stack/python-congressclient
> 
> ++/opt/stack/congress/devstack/settings:source:52
> CONGRESSCLIENT_REPO=git://git.openstack.org/openstack/python-congressclient.git
> 
> Cloning into '/opt/stack/python-congressclient'…

You won't see this logged by devstack because devstack-gate does all of
the git repo setup beforehand to ensure that the correct git refs are
checked out.

> 
> Check python version for : /opt/stack/python-congressclient
> Automatically using 3.5 version to install
> /opt/stack/python-congressclient based on classifiers
> 
> 
> Installing collected packages: python-congressclient
>   Running setup.py develop for python-congressclient
> Successfully installed python-congressclient
> 
> 
> [1] Check-job config:
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L65
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L111
> 
> [2] Local devstack local.conf:
> https://pastebin.com/qzuYTyAE   
> 
> [3] Check-job devstack log:
> http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/devstacklog.txt.gz
> 
> [4] Local devstack log:
> https://ufile.io/c9jhm

My best guess of what is happening here is that python-congressclient is
being installed to python2 from source so then when devstack checks if
python-congressclient is installed properly against python3 it fails.
You'll want to make sure that whatever is installing
python-congressclient is doing so against the appropriate python.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-19 Thread Jeremy Stanley
For those who are unaware, Freenode doesn't allow any one user to
/join more than 120 channels concurrently. This has become a
challenge for some of the community's IRC bots in the past year,
most recently the "openstack" meetbot (which not only handles
meetings but also takes care of channel logging to
eavesdrop.openstack.org and does the nifty bug number resolution
some people seem to like).

I have run some rudimentary analysis and come up with the following
list of channels which have had fewer than 10 lines said by anyone
besides a bot over the past three months:

#craton
#openstack-api
#openstack-app-catalog
#openstack-bareon
#openstack-cloudpulse
#openstack-community
#openstack-cue
#openstack-diversity
#openstack-gluon
#openstack-gslb
#openstack-ko
#openstack-kubernetes
#openstack-networking-cisco
#openstack-neutron-release
#openstack-opw
#openstack-pkg
#openstack-product
#openstack-python3
#openstack-quota
#openstack-rating
#openstack-solar
#openstack-swauth
#openstack-ux
#openstack-vmware-nsx
#openstack-zephyr

I have a feeling many of these are either no longer needed, or what
little and infrequent conversation they get used for could just as
easily happen in a general channel like #openstack-dev or #openstack
or maybe in the more active channel of their parent team for some
subteams. Who would miss these if we ceased logging/using them? Does
anyone want to help by asking around to people who might not see
this thread, maybe by popping into those channels and seeing if any
of the sleeping denizens awaken and say they still want to keep it
around?
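
(For the curious, analysis of this sort can be approximated with a short
script over the eavesdrop channel logs. The sketch below is illustrative:
the bot nick list, log path and line format are assumptions.)

    import os
    import re

    BOTS = {'openstack', 'openstackgerrit', 'openstackstatus'}
    # eavesdrop log lines look like: 2017-07-19T19:24:00 <nick> text
    LINE = re.compile(r'^\S+\s+<([^>]+)>')

    def human_lines(logdir):
        count = 0
        for name in sorted(os.listdir(logdir)):
            if not name.endswith('.log'):
                continue
            with open(os.path.join(logdir, name)) as log:
                for line in log:
                    match = LINE.match(line)
                    if match and match.group(1) not in BOTS:
                        count += 1
        return count

    print(human_lines('/srv/irclogs/openstack-diversity'))  # path is made up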

Ultimately we should improve our meetbot deployment to support
sharding channels across multiple bots, but that will take some time
to implement and needs volunteers willing to work on it. In the
meantime we're running with the meetbot present in 120 channels and
have at least one new channel that desires logging and can't get it
until we whittle that number down.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Feature proposal freeze exception request for change

2017-07-19 Thread Ben Swartzlander

On 07/18/2017 04:33 PM, Ravi, Goutham wrote:

Hello Manila reviewers,

It has been a few days past the feature proposal freeze, but I would 
like to request an extension for an enhancement to the NetApp driver in 
Manila. [1] implements a low-impact blueprint [2] that was approved for 
the Pike release. The code change is contained within the driver and 
would be a worthwhile addition to users of this driver in Manila/Pike.


I have no problem with this particular feature, but given that it missed 
the deadline, I would recommend prioritizing review on it lower than 
other changes which did meet the deadline.


I'll put an agenda item on the weekly meeting tomorrow to formally 
consider this FFE.


-Ben


[1] https://review.openstack.org/#/c/484933/

[2] https://blueprints.launchpad.net/openstack/?searchtext=netapp-cdot-qos

Thanks,

Goutham



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [Manila] Stable-maint team members

2017-07-19 Thread Ben Swartzlander

Welcome to the following new members of the manila-stable-maint team:

Goutham Pacha Ravi
Rodrigo Barbieri
Thomas Bechtold
Tom Barron
Valeriy Ponomaryov
Xing Yang

All of you are of course familiar with the stable-maint guidelines, and 
have a good history of enforcing the rules. Please continue to exercise 
restraint when approving backports, and perhaps go over the checklist 
one more time before pressing the +2 button :-)


Thanks for keeping stable branches stable!
-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][l2-gateway] Added Ricardo Noriega to the core team

2017-07-19 Thread Emilien Macchi
On Wed, Jul 19, 2017 at 9:15 AM, Alex Schultz  wrote:

>
>
> On Wed, Jul 19, 2017 at 9:58 AM, Ricardo Noriega De Soto <
> rnori...@redhat.com> wrote:
>
>> Thanks Gary for the opportunity! We'll keep fighting! :-)
>>
>>
> Congrats. Your efforts in the puppet openstack repos to also get this
> properly supported and tested have also been very appreciated.
>

ditto in tripleo :-)


> Thanks,
> -Alex
>
>
>> On Wed, Jul 19, 2017 at 8:52 AM, Gary Kotton  wrote:
>>
>>> Hi,
>>>
>>> Over the last few months Ricardo Noriega has been making many
>>> contributions to the project and has actually helped get it to the stage
>>> where it’s a lot healthier than before ☺. I am adding him to the core
>>> team.
>>>
>>> Congratulations!
>>>
>>> A luta continua
>>>
>>> Gary
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Ricardo Noriega
>>
>> Senior Software Engineer - NFV Partner Engineer | Office of Technology
>>  | Red Hat
>> irc: rnoriega @freenode
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][python3][congress] locally successful devstack setup fails in check-job

2017-07-19 Thread Jeremy Stanley
On 2017-07-18 12:47:07 -0700 (-0700), Eric K wrote:
> Hi all, looking for some hints/tips. Thanks so much in advance.
> 
> My local python3 devstack setup [2] succeeds, but in the check job a similarly
> configured devstack setup [1] fails because the congress client is not installed.
> 
> ./stack.sh:1439:check_libs_from_git
> /opt/stack/new/devstack/inc/python:401:die
> [ERROR] /opt/stack/new/devstack/inc/python:401 The following LIBS_FROM_GIT
> were not installed correct: python-congressclient
> 
> 
> It seems that the devstack setup in the check job never attempted to install
> the congress client. Comparing the log [4] from my local run to the log from
> the check job [3], all of these steps in my local log are absent from the
> check-job log:
> ++/opt/stack/congress/devstack/settings:source:9
> CONGRESSCLIENT_DIR=/opt/stack/python-congressclient
> 
> ++/opt/stack/congress/devstack/settings:source:52
> CONGRESSCLIENT_REPO=git://git.openstack.org/openstack/python-congressclient.git
> 
> Cloning into '/opt/stack/python-congressclient'
> 
> Check python version for : /opt/stack/python-congressclient
> Automatically using 3.5 version to install
> /opt/stack/python-congressclient based on classifiers
> 
> 
> Installing collected packages: python-congressclient
>   Running setup.py develop for python-congressclient
> Successfully installed python-congressclient
> 
> 
> [1] Check-job config:
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L65
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L111
> 
> [2] Local devstack local.conf:
> https://pastebin.com/qzuYTyAE
> 
> [3] Check-job devstack log:
> http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/devstacklog.txt.gz
> 
> [4] Local devstack log:
> https://ufile.io/c9jhm

Did you attempt comparison to the local.conf the job used?

http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/local.conf.txt.gz

Also, if you haven't seen it, every devstack-gate run includes a
convenience script you should be able to use to reproduce locally
with the same settings:

http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/reproduce.sh

Further, https://review.openstack.org/484158 seems to have changed
the behavior of the job since the log you posted from the 14th. Is
the result still the same in this case?
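
For anyone else lining this up, the relevant knobs in local.conf are
the plugin enablement and LIBS_FROM_GIT (a minimal sketch; adjust the
python3 and branch settings to your environment):

  [[local|localrc]]
  USE_PYTHON3=True
  enable_plugin congress git://git.openstack.org/openstack/congress
  LIBS_FROM_GIT=python-congressclient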
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-19 Thread Ben Nemec



On 07/18/2017 08:18 AM, Numan Siddique wrote:



On Thu, Jul 13, 2017 at 3:02 PM, Saravanan KR wrote:


On Tue, Jul 11, 2017 at 11:40 PM, Ben Nemec wrote:
>
>
> On 07/11/2017 10:17 AM, Numan Siddique wrote:
>>
>> Hello Tripleo team,
>>
>> I have a few questions regarding migration from neutron ML2OVS to OVN. Below
>> are some of the requirements:
>>
>>   - We want to migrate an existing deployment from Neutron's default ML2OVS
>> to OVN
>>   - We are targeting this for the TripleO Queens release.
>>   - The plan is to first upgrade the tripleo deployment from Pike to
>> Queens with no changes to neutron, i.e. with neutron ML2OVS. Once the
>> upgrade is done, we want to migrate to OVN.
>>   - The migration process will stop all the neutron agents, configure
>> neutron server to load the OVN mechanism driver and start OVN services
>> (with no or very limited datapath downtime).
>>   - The migration would be handled by an ansible script. We have a PoC
>> ansible script which can be found here [1]
>>
>> And the questions are
>> -  (A broad question) - What is the right way to migrate and switch the
>> neutron plugin? Can the stack upgrade handle the migration as well?
This is going to be a broader problem, as it is also required to migrate
ML2OVS to ODL for NFV deployments, on pretty much the same timeline.
If I understand correctly, this migration involves stopping the ML2OVS
services (like neutron-ovs-agent) and starting the corresponding new
ML2 services (OVN or ODL), along with a few parameter additions and removals.

>> - The migration procedure should be part of tripleo? Or can it be a
>> standalone ansible script? (I presume it should be the former.)
Each service has upgrade steps which can be associated via ansible
steps. But this is not a service upgrade: it disables an existing
service and enables a new one. So I think it would need an explicit
disabled service [1] to stop the existing service, and then enable
the new service.

>> - If it should be part of tripleo, then what would be the command to do
>> it? An update stack command with appropriate environment files for OVN?
>> - In case the migration can be done as a standalone script, how to handle
>> later updates/upgrades, since tripleo wouldn't be aware of the migration?
>
I would also discourage doing it standalone.

Another area which needs to be looked at is whether it should be
associated with the containers upgrade. Maybe OVN and ODL can be
migrated as containers only, instead of baremetal by default (just a
thought; it could have implications to be worked out/discussed).

Regards,
Saravanan KR

[1]

https://github.com/openstack/tripleo-heat-templates/tree/master/puppet/services/disabled



 >
 > This last point seems like the crux of the discussion here. Sure, you
 > can do all kinds of things to your cloud using standalone bits, but if
 > any of them affect things tripleo manages (which this would) then
 > you're going to break on the next stack update.
 >
 > If there are things about the migration that a stack-update can't
 > handle, then the migration process would need to be twofold: 1) Run
 > the standalone bits to do the migration 2) Update the tripleo
 > configuration to match the migrated config so stack-updates work.
 >
 > This is obviously a complex and error-prone process, so I'd strongly
 > encourage doing it in a tripleo-native fashion instead if at all
 > possible.
 >



Thanks Ben and Saravanan for your comments.

I did some testing. I first deployed an overcloud with the command [1] 
and then I ran the command [2] which enables the OVN services. After the 
completion of [2], all the neutron agents were stopped and all the OVN 
services were up.


The question is: is this the right way to disable some services and
enable others? Or is "openstack overcloud update stack" the right command?


Re-running the deploy command as you did is the right way to change 
configuration.  The update stack command is just for updating packages.





[1] - openstack overcloud deploy \
 --templates /usr/share/openstack-tripleo-heat-templates \
 --libvirt-type qemu --control-flavor oooq_control --compute-flavor 
oooq_compute --ceph-storage-flavor oooq_ceph --block-storage-flavor 
oooq_blockstorage --swift-storage-flavor oooq_objectstorage --timeout 90 
-e /home/stack/cloud-names.yaml-e 
/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml 
-e 
/usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml 
-e 

Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-19 Thread Logan V.
+1 Awesome work on the suse support, but also on improving the nuts
and bolts of our tests and everything else during that process.

On Tue, Jul 18, 2017 at 7:57 AM, Major Hayden  wrote:
>
> On 07/18/2017 04:23 AM, Andy McCrae wrote:
>> Following on from last week's meeting I'd like to propose Markos (hwoarang) 
>> for OSA core.
>>
>> Markos has done a lot of good reviews and commits over an extended period of 
>> time, and has shown interest in the project as a whole. (Not to mention the 
>> addition of SUSE support)
>>
>> We already have quite a few +1's from the meeting itself, but opening up to 
>> everybody who wasn't available at the meeting!
>
> +1 here!  Anyone that offers to help with the ansible-hardening role is solid 
> in my book. ;)
>
> Markos has been doing great work and he's automated quite a few things that 
> we used to push around manually. SUSE support has been building out *really* 
> quickly, too.
>
> --
> Major Hayden
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Logging in containerized services

2017-07-19 Thread Fox, Kevin M
FYI, in kolla-kubernetes I've been playing with fluent-bit as a log shipper. 
It works very similarly to fluentd but is much lighter weight. I used this: 
https://github.com/kubernetes/charts/tree/master/stable/fluent-bit

I fought with getting log rolling working properly with log files and it's 
kind of a pain. There are a lot of things that can go wrong.

I ended up getting the following to work pretty well:
1. configure docker to roll its own log files based on size.
2. switch containers to use stderr/stdout instead of log files.
3. use fluent-bit to follow docker logs, add k8s pod info and ship to a 
central server.
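
For step 1, the knob lives in docker's daemon.json (a minimal sketch
using the stock json-file driver; the size and count values are just
examples to tune):

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "50m",
      "max-file": "5"
    }
  }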

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Wednesday, July 19, 2017 1:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Logging in containerized services

On 18.07.2017 21:27, Lars Kellogg-Stedman wrote:
> Our current model for logging in a containerized deployment has pretty
> much everything logging to files in a directory that has been
> bind-mounted from the host.  This has some advantages: primarily, it
> makes it easy for an operator on the local system to find logs,
> particularly if they have had some previous exposure to
> non-containerized deployments.
>
> There is strong demand for a centralized logging solution.  We've got
> one potential solution right now in the form of the fluentd service
> introduced in Newton, but this requires explicit registration of log
> files for every service.  I don't think it's an ideal solution, and I
> would like to explore some alternatives.
>
> Logging via syslog
> ==
>
> For the purposes of the following, I'm going to assume that we're
> deploying on an EL-variant (RHEL/CentOS/etc), which means (a) journald
> owns /dev/log and (b) we're running rsyslog on the host and using the
> omjournal plugin to read messages from journald.
>
> If we bind mount /dev/log into containers and configure openstack
> services to log via syslog rather than via files, we get the following
> for free:
>
> - We get message-based rather than line-based logging.  This means that
> multiline tracebacks are handled correctly.
>
> - A single point of collection for logs.  If your host has been
> configured to ship logs to a centralized collector, logs from all of
> your services will be sent there without any additional configuration.
>
> - We get per-service message rate limiting from journald.
>
> - Log messages are annotated by journald with a variety of useful
> metadata, including the container id and a high resolution timestamp.
>
> - We can configure the syslog service on the host to continue to write
> files into legacy locations, so an operator looking to run grep against
> local log files will still have that ability.
>
> - Rsyslog itself can send structured messages directly to an Elastic
> instance, which means that in many deployments we would not require
> fluentd and its dependencies.
>
> - This plays well in environments where some services are running in
> containers and others are running on the host, because everything simply
> logs to /dev/log.

Plus it solves log rotation (which still has to be addressed [0] for
Pike, though) out of the box.
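
For the record, on the service side the switch is mostly an oslo.log
setting (a minimal sketch, assuming /dev/log is bind-mounted into the
container as described above; the facility value is just an example):

  [DEFAULT]
  use_syslog = True
  syslog_log_facility = LOG_LOCAL0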

>
> Logging via stdin/stdout
> ==
>
> A common pattern in the container world is to log everything to
> stdout/stderr.  This has some of the advantages of the above:
>
> - We can configure the container orchestration service to send logs to
> the journal or to another collector.
>
> - We get a different set of annotations on log messages.
>
> - This solution may play better with frameworks like Kubernetes that
> tend to isolate containers from the host a little more than using Docker
> or similar tools straight out of the box.
>
> But there are some disadvantages:
>
> - Some services only know how to log via syslog (e.g., swift and haproxy)
>
> - We're back to line-based vs. message-based logging.
>
> - It ends up being more difficult to expose logs at legacy locations.
>
> - The container orchestration layer may not implement the same message
> rate limiting we get with fluentd.
>
> Based on the above, I would like to suggest exploring a syslog-based
> logging model moving forward. What do people think about this idea? I've
> started putting together a spec
> at https://review.openstack.org/#/c/484922/ and I would welcome your input.

My vote goes to this option, but TBD for Queens. It won't make it for
Pike, as it looks too late for drastic changes of this scale, like
switching all OpenStack services to syslog, deploying additional
required components, and so on.

[0] https://bugs.launchpad.net/tripleo/+bug/1700912
[1] https://review.openstack.org/#/c/462900/

>
> Cheers,
>
> --
> Lars Kellogg-Stedman
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [neutron][l2-gateway] Added Ricardo Noriega to the core team

2017-07-19 Thread Alex Schultz
On Wed, Jul 19, 2017 at 9:58 AM, Ricardo Noriega De Soto <
rnori...@redhat.com> wrote:

> Thanks Gary for the opportunity! We'll keep fighting! :-)
>
>
Congrats. Your efforts in the puppet openstack repos to also get this
properly supported and tested have also been very appreciated.

Thanks,
-Alex


> On Wed, Jul 19, 2017 at 8:52 AM, Gary Kotton  wrote:
>
>> Hi,
>>
>> Over the last few months Ricardo Noriega has been making many
>> contributions to the project and has actually helped get it to the stage
>> where it’s a lot healthier than before ☺. I am adding him to the core
>> team.
>>
>> Congratulations!
>>
>> A luta continua
>>
>> Gary
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
> Red Hat
> irc: rnoriega @freenode
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-19 Thread Derek Higgins
On 17 July 2017 at 15:56, Derek Higgins  wrote:
> On 17 July 2017 at 15:37, Emilien Macchi  wrote:
>> On Thu, Jul 13, 2017 at 6:01 AM, Emilien Macchi  wrote:
>>> On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins  wrote:
 On 12 July 2017 at 22:33, Emilien Macchi  wrote:
> On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  
> wrote:
> [...]
>> Derek, it seems like you want to deploy Ironic on scenario006
>> (https://review.openstack.org/#/c/474802). I was wondering how it
>> would work with multinode jobs.
>
> Derek, I also would like to point out that
> https://review.openstack.org/#/c/474802 is missing the environment
> file for non-containerized deployments & and also the pingtest file.
> Just for the record, if we can have it before the job moves in gate.

 I knew I had left out the ping test file, this is the next step but I
 can create a noop one for now if you'd like?
>>>
>>> Please create a basic pingtest with common things we have in other 
>>> scenarios.
>>>
 Is the non-containerized deployments a requirement?
>>>
>>> Until we stop supporting non-containerized deployments, I would say yes.
>>>
>
> Thanks,
> --
> Emilien Macchi
>>>
>>> So if you create a libvirt domain, would it be possible to do it on
>>> scenario004 for example and keep coverage for other services that are
>>> already on scenario004? It would avoid to consume a scenario just for
>>> Ironic. If not possible, then talk with Flavio and one of you will
>>> have to prepare scenario007 or 0008, depending where Numans is in his
>>> progress to have OVN coverage as well.
>>
>> I haven't seen much resolution / answers about it. We still have the
>> conflict right now and open questions.
>>
>> Derek, Flavio - let's solve this one this week if we can.
> Yes, I'll be looking into using scenario004 this week. I was traveling
> last week so wasn't looking at it.

I'm not sure if this is what you had intended, but I believe to do
this (i.e. test the nova ironic driver) we'll
need to swap out the nova libvirt driver for the ironic one. I think
this is ok as the libvirt driver has coverage
in other scenarios.

Because there are no virtual BMCs set up yet on the controller I also
have to remove the instance creation,
but if this merges I'll work on adding those next. So I'm thinking
something like this:
https://review.openstack.org/#/c/485261/
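
For reference, the driver swap itself boils down to one nova.conf
setting on the compute host (a minimal sketch; the [ironic] section
auth options also need to point at the ironic API):

  [DEFAULT]
  compute_driver = ironic.IronicDriver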

>
>>
>> Thanks,
>> --
>> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][l2-gateway] Added Ricardo Noriega to the core team

2017-07-19 Thread Ricardo Noriega De Soto
Thanks Gary for the opportunity! We'll keep fighting! :-)

On Wed, Jul 19, 2017 at 8:52 AM, Gary Kotton  wrote:

> Hi,
>
> Over the last few months Ricardo Noriega has been making many
> contributions to the project and has actually helped get it to the stage
> where it’s a lot healthier than before ☺. I am adding him to the core
> team.
>
> Congratulations!
>
> A luta continua
>
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications

2017-07-19 Thread Balazs Gibizer



On Wed, Jul 19, 2017 at 5:38 PM, McLellan, Steven wrote:

Thanks Balazs for noticing and replying to my message!

The Status field is quite important to us since it's the indicator of 
VM state that Horizon displays most prominently and the most simple 
description of whether a VM is currently usable or not without having 
to parse the various _state fields. If we can't get this change added 
in Pike I'll probably implement a simplified version of the mapping 
in [2], but it would be really good to get it into the notifications 
in Pike if possible. I understand though that this late in the cycle 
it may not be possible.


I can create a patch to add the status to the instance notifications, 
but I don't know if the nova cores will accept it this late in Pike.

@Cores: Do you?

Cheers,
gibi




Thanks,

Steve

On 7/19/17, 10:27 AM, "Balazs Gibizer"  
wrote:


Hi,

Steve asked the following question on IRC [1]

< sjmc7> hi gibi. sorry, meant to bring this up in the notifications
meeting but i had to step away for a bit. we were having a discussion
last week about the field that the API returns as 'status' - do the
notifications have an equivalent?

I will try to answer it here so others can chime in.

Internally in nova an instance has vm_state, task_state and
power_state. On the REST API the instance has status which is
calculated from vm_state and task_state. See the code doing the
conversion here [2]. The instance notifications contain both the
vm_state, task_state and power_state of the instance but do not contain
the calculated status value [3]. The instance.update notification has
extra state fields to signal possible state transitions [4].

Technically we can add the calculated status field to the notifications
but it is not there at the moment. So if searchlight needs that info
right now then it needs to be calculated on searchlight side based on
the vm_state and the task_state from the notification.

Adding this field can be a continuation of the bp
additional-notification-fields-for-searchlight [5] in Queens.


Cheers,
gibi

[1]

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-07-18.log.html#t2017-07-18T17:39:36

[2]

https://github.com/openstack/nova/blob/a4a9733f4a9ead01356f0f76c1bb1f04f905fa4e/nova/api/openstack/common.py#L113

[3]

https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L52-L54

[4]

https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L352

[5]

https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][api] need new reviewers for the api-site and developer.openstack.org

2017-07-19 Thread Doug Hellmann
Excerpts from Alexandra Settle's message of 2017-07-19 15:42:42 +:
> Hi everyone,
> 
> As you all know, we have been changing the way the documentation team 
> operates. Chief among those
> changes are reducing the workload on the shrinking team. The migration to 
> move installation, admin, and
> configuration documentation to project-team repositories is the first phase 
> of that transition. Another component on
> the documentation team's list of responsibilities is developer.openstack.org, 
> the site for consumers of
> OpenStack services to find resources to help them build their applications. 
> Finding a new team of people
> to manage that site is the next phase of shrinking the documentation team's 
> duties.
> 
> We are setting up a new review team [0] in gerrit for the openstack/api-site 
> repository and removing
> the api-site and the faafo (First App Application for OpenStack) repositories 
> from the set listed as owned by the docs
> team in governance [1].  This opens up an opportunity for a new SIG or WG 
> that is able to tackle the requirements
> of the api-site repo.
> 
> Any concerns or questions, please do not hesitate to reach out.
> 
> Cheers,
> 
> Alex
> 
> [0] https://review.openstack.org/#/c/485179/
> [1] https://review.openstack.org/#/c/485249/

Thanks, Alex. This seems like an excellent opportunity improve the
focus of the docs team and build a new dedicated team to take on
developer.openstack.org.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [doc][api] need new reviewers for the api-site and developer.openstack.org

2017-07-19 Thread Alexandra Settle
Hi everyone,

As you all know, we have been changing the way the documentation team operates. 
Chief among those
changes are reducing the workload on the shrinking team. The migration to move 
installation, admin, and
configuration documentation to project-team repositories is the first phase of 
that transition. Another component on
the documentation team's list of responsibilities is developer.openstack.org, 
the site for consumers of
OpenStack services to find resources to help them build their applications. 
Finding a new team of people
to manage that site is the next phase of shrinking the documentation team's 
duties.

We are setting up a new review team [0] in gerrit for the openstack/api-site 
repository and removing
the api-site and the faafo (First App Application for OpenStack) repositories 
from the set listed as owned by the docs
team in governance [1].  This opens up an opportunity for a new SIG or WG that 
is able to tackle the requirements
of the api-site repo.

Any concerns or questions, please do not hesitate to reach out.

Cheers,

Alex

[0] https://review.openstack.org/#/c/485179/
[1] https://review.openstack.org/#/c/485249/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications

2017-07-19 Thread McLellan, Steven
Thanks Balazs for noticing and replying to my message!

The Status field is quite important to us since it's the indicator of VM state 
that Horizon displays most prominently, and the simplest description of 
whether a VM is currently usable or not without having to parse the various 
_state fields. If we can't get this change added in Pike I'll probably 
implement a simplified version of the mapping in [2], but it would be really 
good to get it into the notifications in Pike if possible. I understand though 
that this late in the cycle it may not be possible.

Thanks,

Steve

On 7/19/17, 10:27 AM, "Balazs Gibizer"  wrote:

Hi,

Steve asked the following question on IRC [1]

< sjmc7> hi gibi. sorry, meant to bring this up in the notifications 
meeting but i had to step away for a bit. we were having a discussion 
last week about the field that the API returns as 'status' - do the 
notifications have an equivalent?

I will try to answer it here so others can chime in.

Internally in nova an instance has vm_state, task_state and 
power_state. On the REST API the instance has status which is 
calculated from vm_state and task_state. See the code doing the 
conversion here [2]. The instance notifications contain both the 
vm_state, task_state and power_state of the instance but do not contain 
the calculated status value [3]. The instance.update notification has 
extra state fields to signal possible state transitions [4].

Technically we can add the calculated status field to the notifications 
but it is not there at the moment. So if searchlight needs that info 
right now then it needs to be calculated on searchlight side based on 
the vm_state and the task_state from the notification.

Adding this field can be a continuation of the bp 
additional-notification-fields-for-searchlight [5] in Queens.


Cheers,
gibi

[1] 

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-07-18.log.html#t2017-07-18T17:39:36
[2] 

https://github.com/openstack/nova/blob/a4a9733f4a9ead01356f0f76c1bb1f04f905fa4e/nova/api/openstack/common.py#L113
[3] 

https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L52-L54
[4] 

https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L352
[5] 

https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications

2017-07-19 Thread Balazs Gibizer

Hi,

Steve asked the following question on IRC [1]

< sjmc7> hi gibi. sorry, meant to bring this up in the notifications 
meeting but i had to step away for a bit. we were having a discussion 
last week about the field that the API returns as 'status' - do the 
notifications have an equivalent?


I will try to answer it here so others can chime in.

Internally in nova an instance has vm_state, task_state and 
power_state. On the REST API the instance has status which is 
calculated from vm_state and task_state. See the code doing the 
conversion here [2]. The instance notifications contain both the 
vm_state, task_state and power_state of the instance but do not contain 
the calculated status value [3]. The instance.update notification has 
extra state fields to signal possible state transitions [4].


Technically we can add the calculated status field to the notifications 
but it is not there at the moment. So if searchlight needs that info 
right now then it needs to be calculated on searchlight side based on 
the vm_state and the task_state from the notification.
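
To illustrate, here is a standalone, deliberately simplified sketch of
such a calculation (the authoritative and much larger table lives in
nova/api/openstack/common.py, see [2]):

  # Map vm_state to the user-facing status; a task_state in flight
  # can override it, e.g. an active instance that is rebooting is
  # shown as REBOOT.
  VM_STATE_TO_STATUS = {
      'active': 'ACTIVE',
      'building': 'BUILD',
      'stopped': 'SHUTOFF',
      'suspended': 'SUSPENDED',
      'paused': 'PAUSED',
      'error': 'ERROR',
      'deleted': 'DELETED',
  }

  TASK_STATE_TO_STATUS = {
      'rebooting': 'REBOOT',
      'rebuilding': 'REBUILD',
      'migrating': 'MIGRATING',
  }

  def status_from_notification(vm_state, task_state):
      if task_state in TASK_STATE_TO_STATUS:
          return TASK_STATE_TO_STATUS[task_state]
      return VM_STATE_TO_STATUS.get(vm_state, 'UNKNOWN')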


Adding this field can be a continuation of the bp 
additional-notification-fields-for-searchlight [5] in Queens.



Cheers,
gibi

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-07-18.log.html#t2017-07-18T17:39:36
[2] 
https://github.com/openstack/nova/blob/a4a9733f4a9ead01356f0f76c1bb1f04f905fa4e/nova/api/openstack/common.py#L113
[3] 
https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L52-L54
[4] 
https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L352
[5] 
https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]HP proliant create logical drive not according to the target_raid_config

2017-07-19 Thread Loo, Ruby
Hi, I suggest either providing more information so someone may be able to help 
you here, or go onto irc, #openstack-ironic, and ask for help there.

--ruby

From: 王俊 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, July 19, 2017 at 7:55 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [ironic]HP proliant create logical drive not according 
to the target_raid_config

Hi,
 May I ask a question about RAID? I set target_raid_config before I set 
the node state to 'provide', but when the node becomes available, the 
server's logical drives do not match my configuration. I don't know why. 
Can anyone help?

Confidentiality: This message is intended solely for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or disseminate it in any way, and notify the sender of the misdelivery. Thank you!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Queens PTG

2017-07-19 Thread Giulio Fidente
On 07/07/2017 07:38 PM, Giulio Fidente wrote:
> On 07/04/2017 08:00 PM, Emilien Macchi wrote:
>> On Wed, Jun 28, 2017 at 9:37 PM, Giulio Fidente  wrote:
>>> On 06/28/2017 04:35 PM, Emilien Macchi wrote:
 Hey folks,

 Let's start to prepare the next PTG in Denver.

 Here's the schedule draft:
 https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true=gmail
 We'll have a room Wednesday, Thursday and Friday. We'll probably
 finish by end of Friday morning.

 Etherpad for the PTG:
 https://etherpad.openstack.org/p/tripleo-ptg-queens
>>>
>>> thanks Emilien!
>>>
>>> I've added a session into the agenda about the integration of
>>> ceph-ansible, as this brought in generic functionality in Heat and
>>> TripleO which allows services to describe workflows to be executed
>>> during the overcloud deployment steps
>>>
>>> I think it'd be nice to review together what the submissions tracked by
>>> the two blueprints actually do [1] [2] and how!
>>
>> Really cool. Indeed, a session for this topic would be awesome. Please
>> make sure we prepare the agenda (maybe prepare a TL;DR on the ML to
>> summarize what has been done and what remains).
>> So we prepare the session correctly and can directly start with
>> pro-active discussions.
> Thanks! I've just added a link to the session etherpad:
> 
> https://etherpad.openstack.org/p/tripleo-ptg-queens-heat-mistral-ceph-ansible
> 
> I think it'll be interesting to discuss it at the PTG because it
> provides for a pretty generic mechanism to:
> 
> 1) run unmodified playbooks, per service
> 2) pass native params to the playbooks without wrappers
> 3) build the full inventory to keep the decision on where to run the
> tasks in the playbook
> 4) keep in heat the mechanism to emit per-role settings and also
> orchestrate the deployment steps
> 
> But some of it can probably be generalized further and extended or
> changed for a potential even more tight integration with ansible (thanks
> Steven and James for the feedback)... bringing us slowly to the other
> topic which James added to the agenda!

I thought it might be useful to put up a short walkthrough of the
more interesting changes we implemented to get this functionality, so
I wrote a blog post [1]

I would encourage everyone interested in mistral>ansible and/or Ceph
to take a look at the submissions referenced by the blog post ... and
to help review the latest ones still under review

Thanks for the interest in the topic!

1.
http://giuliofidente.com/2017/07/understanding-ceph-ansible-in-tripleo.html

-- 
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Cyborg Weekly Meeting 2017.07.19

2017-07-19 Thread Zhipeng Huang
Kind reminder for the team meeting, happening NOW :)

Join us on #openstack-cyborg if you are bored of regular stuff :P

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] [horizon-plugin] [requirements] Moving to Django 1.11

2017-07-19 Thread Rob Cresswell
Hi everyone,

Django 1.11 support has landed for Django OpenStack Auth [1], which was 
released shortly after [2]. Horizon's Django 1.11 support is just merging [3], 
at which point we will raise the global requirement [4] and make the Horizon / 
DOA tests voting [5].

At that point, we should be able to merge the requirements patches and tag 
Django OpenStack Auth's final release for Pike. The only other blocking issue 
that I'm aware of is getting django-babel released (external to openstack) with 
Django 1.11 support; there is a GitHub issue open [6].

Overall, we should (fingers crossed) be able to land this effort on time for 
non-client lib freeze and requirements freeze. Big thanks to everyone that 
contributed towards this.

Cheers,
Rob

1. https://review.openstack.org/#/c/484722/
2. https://review.openstack.org/#/c/484914/
3. https://review.openstack.org/#/c/484277/
4. https://review.openstack.org/#/c/485221/
5. https://review.openstack.org/#/c/485220/
6. https://github.com/python-babel/django-babel/issues/38
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-19 Thread Chris Dent

On Wed, 19 Jul 2017, Balazs Gibizer wrote:

I added more info to the bug report and the review as it seems the test is 
fluctuating.


(Reflecting some conversation gibi and I have had in IRC)

I've made a gabbi-based replication of the desired functionality. It
also flaps, with a >50% failure rate:
https://review.openstack.org/#/c/485209/

Sorry, I copy-pasted the wrong link; the correct link is 
https://bugs.launchpad.net/nova/+bug/1705231


This has been updated (by gibi) to show that the generated SQL is
different between the failure and success cases.


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Feature proposal freeze exception request for change

2017-07-19 Thread Tom Barron


On 07/18/2017 04:33 PM, Ravi, Goutham wrote:
> Hello Manila reviewers,
> 
>  
> 
> It has been a few days past the feature proposal freeze, but I would
> like to request an extension for an enhancement to the NetApp driver in
> Manila. [1] implements a low-impact blueprint [2] that was approved for
> the Pike release. The code change is contained within the driver and
> would be a worthwhile addition to users of this driver in Manila/Pike.
> 
>  
> 
> [1] https://review.openstack.org/#/c/484933/
> 
> [2] https://blueprints.launchpad.net/openstack/?searchtext=netapp-cdot-qos
> 
>  
> 
> Thanks,
> 
> Goutham
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

+1

As mentioned, the BP was approved already.  The review is up and looks
sound, with effects limited to the NetApp backend.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Heads-up: classic drivers future (please read if you maintain a driver)

2017-07-19 Thread Dmitry Tantsur

Hi all!

With the driver composition implemented in Ocata and polished in Pike, we would 
like to eventually get rid of the old-style drivers. I believe the new hardware 
types are much easier to understand, create and use.


We have landed a spec laying down the deprecation plan [1]. In essence, we don't 
do anything in Pike yet. Then in Queens we will require all drivers that are 
going to be supported to have a hardware type counterpart. We will deprecate the 
classic driver loading mechanism in that release. Finally, in Rocky we will 
remove the ability to load classic drivers, as well as all classic drivers we 
have in tree.


This may have an effect on vendor drivers, as well as on 3rdparty CI. We tried to 
minimize the latter by NOT requiring double CI coverage at any point in time. 
Please read the spec [1] and let us know your questions and concerns.


Operators currently using classic drivers (which, I guess, is the majority) 
should keep an eye on the new drivers appearing in the release notes, and plan 
an eventual migration to them. We will provide a detailed upgrade guide soon.


Thanks!

[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/classic-drivers-future.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Soft feature freeze proposal

2017-07-19 Thread Dmitry Tantsur

Hi team!

As discussed on the IRC meeting, I would like to propose a soft feature freeze 
for ironic, starting with Aug 1st. I hope this proposal will help us better 
concentrate on the priorities and finally be able to finish most of the things 
we've committed to.


Between this day and the final release, we will only merge features and 
enhancements that are either:


1. Team priorities, listed on 
http://specs.openstack.org/openstack/ironic-specs/priorities/pike-priorities.html 
and https://etherpad.openstack.org/p/IronicWhiteBoard. These will be *reviewed* 
during the coming IRC meetings to prune out things that are clearly not making it.


2. Or relatively small vendor features. They have to:
 2.1. be mostly contained in the driver code (no new API or driver interfaces),
 2.2. have an RFE and/or spec approved by the freeze date,
 2.3. be fully submitted for review by the freeze date.

Of course, any bug fixes, CI improvements and documentation patches can merge at 
any point; they are not affected by the freeze.


Please let me know what you think. If no objections are raised, we can 
formally accept the proposal at one of the coming meetings.


Dmitry.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-19 Thread Balazs Gibizer



On Wed, Jul 19, 2017 at 1:13 PM, Chris Dent  
wrote:

On Wed, 19 Jul 2017, Balazs Gibizer wrote:

We are trying to get some help from the related functional test [5] but
honestly we still need some time to digest those LOCs. So any direct
help is appreciated.


I managed to create a functional test case that reproduces the above 
problem https://review.openstack.org/#/c/485088/


Excellent, thank you. I was planning to look into repeating this
today, will first look at this test and see what I can see. Your
experimentation is exactly the sort of stuff we need right now, so
thank you very much.


I added more info to the bug report and the review as it seems the test 
is fluctuating.






BTW, should I open a bug for it?


I also filed a bug so that we can track this work 
https://bugs.launchpad.net/nova/+bug/1705071


I guess Jay and Matt have already fixed a part of this, but not the
whole thing.


Sorry, I copy-pasted the wrong link; the correct link is 
https://bugs.launchpad.net/nova/+bug/1705231


Cheers,
gibi




--
Chris Dent  ┬──┬◡ノ(° -°ノ)   
https://anticdent.org/

freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][rdo] How does RDO package (puppet module) gets installed on Tripleo overcloud nodes ?

2017-07-19 Thread Emilien Macchi
On Wed, Jul 19, 2017 at 3:11 AM, Dnyaneshwar Pawar wrote:
> Hi tripleo/rdo experts,
>
>
>
> puppet-veritas_hyperscale is added to RDO trunk via [1].
>
> Also patched tripleo-puppet-elements to include puppet-veirtas_hyperscale
> [2].
>
>
>
> When I create overcloud using quickstart.sh + latest RDO trunk,
> puppet-veritas_hyperscale modules do not get installed on any of overcloud
> node, neither on undercloud.
>
> From #rdo (amoralej, apevec) and #tripleo (shardy) I got to know that [1]
> and [2] alone may not be sufficient to get these modules installed under
> “/etc/puppet/modules” on overcloud nodes.
>
> This needs confirmation from Emilien/Alex.

Like I said on IRC, you'll need to patch
https://github.com/rdo-packages/puppet-tripleo-distgit to add the
veritas module in dependency, so the module will be deployed when
installing TripleO.
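
Something like the following hunk in the .spec file there should do it
(a sketch -- the exact package name has to match what the RDO packaging
in [1] produces):

  Requires: puppet-veritas_hyperscale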

>
> If the above theory is correct, I may need to submit one more patch. I am
> not clear which repo I need to submit the patch against. Do we have any
> example that I can refer to?
>
>
>
> [1] https://review.rdoproject.org/r/#/q/topic:add-puppet-veritas_hyperscale
>
> [2] https://review.openstack.org/#/c/481085/
>
>
>
>
>
> Thanks,
>
> Dnyaneshwar



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-19 Thread Dan Prince
I wanted to give a quick heads up on some breaking changes that started
landing last week with regards to how container images are specified
with Heat parameters in TripleO. There are a few patches associated
with converting over to the new changes but the primary patches are
listed below here [1] and here [2].

Here are a few examples where I'm using a local (insecure) docker
registry on 172.19.0.2.

The old parameters were:

  
  DockerNamespaceIsRegistry: true
  DockerNamespace: 172.19.0.2:8787/tripleoupstream
  DockerKeystoneImage: centos-binary-keystone:latest
  ...

The new parameters simplify things quite a bit so that each
Docker*Image parameter contains the *entire* URL required to pull the
docker image. It ends up looking something like this:

  ...
  DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
  DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-keystone:latest
  ...

The benefit of the new format is that it makes it possible to pull
images from multiple registries without first staging them to a local
docker registry. Also, we've removed the 'tripleoupstream' default
container names and now require them to be specified. Removing the
default should make it much more explicit that the end user has
specified container image names correctly and doesn't use
'tripleoupstream' by accident because one of the container image
parameters didn't get specified. Finally, the simplification of the
DockerInsecureRegistryAddress parameter into a single setting makes
things more clear to the end user as well.

A new python-tripleoclient command makes it possible to generate a
custom heat environment with defaults for your environment and
registry. For the examples above I can run 'overcloud container image
prepare' to generate a custom heat environment like this:

openstack overcloud container image prepare \
  --namespace=172.19.0.2:8787/tripleoupstream \
  --env-file=$HOME/containers.yaml
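
The generated environment then carries one fully qualified URL per
image; a sketch of the shape (actual parameter names and entries depend
on which services are enabled):

  parameter_defaults:
    DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
    DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-keystone:latest
    # ... one Docker*Image entry per containerized service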

We chose not to implement backwards compatibility with the old image
formats, as almost all of the Heat parameters here are net new in Pike
and as such have not been released yet. The changes here should
make it much easier to manage containers and work with other community
docker registries like RDO, etc.

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/comm
it/?id=e76d84f784d27a7a2d9e5f3a8b019f8254cb4d6c
[2] https://review.openstack.org/#/c/479398/17

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 29

2017-07-19 Thread Chris Dent


(Blog version at https://anticdent.org/tc-report-29.html )

This TC Report is a bit late. Yesterday I was attacked by an oyster.

This week had no meeting, so what follows is a summary of various
other TC related (sometimes only vaguely related) activity.

# Vision

The [TC
Vision](https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html)
has been merged, presented in a way that makes sure that it's easy and
desirable to create a new vision at a later date to respond to
changing circumstances. There were concerns during the review process
that the document as is does not take into account recent changes in
the corporate and contributor community surrounding OpenStack. The
consensus conclusion, however, was that the goals stated in the vision
remain relevant and productive work has already begun.

# Hosted Projects

The conversation about [hosted
projects](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119205.html)
continues, mostly in regard to that question with great stamina: Is
OpenStack Infrastructure as a Service or something more encompassing
of all of cloud? In either case what does it take for something to be
a complete IAAS or what is "cloud"? There was a useful posting from
Zane pointing out that the [varied
assumptions](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119736.html)
people bring to the discussion are a) varied, b) assumptions.

It feels likely that these discussions will become more fraught during
times of pressure but have no easy answers. As long as the discussions
don't devolve into name calling, I think each repeat round is useful
as it brings new insights to the old hands and keeps the new hands
informed of stuff that matters. Curtailing the discussion simply
because we have been over it before is disrespectful to the people who
continue to care and to the people for whom it is new.

I still think we haven't fully expressed the answers to the questions
about the value and cost that any project being officially in
OpenStack has for that project or for OpenStack. I'm not asserting
anything about the values or the costs; knowing the answers is simply
necessary to have a valid conversation.

# Glare

The conversation about [Glare becoming
official](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119442.html)
continued, but more slowly than before. The plan at this stage is to
discuss the issues in person at the PTG where the Glare project will
have some space. [ttx made a brief
summary](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119818.html);
there's no objection to Glare becoming official unless there is some
reason to believe it will result in issues for Glance (which is by no
means pre-determined).

# SIGs

The new openstack-sigs mailing list was opened with a deliberately
provocative thread on [How SIG Work Gets
Done](http://lists.openstack.org/pipermail/openstack-sigs/2017-July/03.html).
This resulted in comments on how OpenStack work gets done, how open
source work gets done, and even whether open source behaviors fully apply
in the OpenStack context.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic]HP proliant create logical drive not according to the target_raid_config

2017-07-19 Thread 王俊
Hi,
 May I ask a question about RAID? I set target_raid_config before I set 
the node state to 'provide', but when the node becomes available, the 
server's logical drives do not match my configuration. I don't know why. 
Can anyone help?
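
For reference, this is roughly what I did (the node UUID, file name and
disk sizes below are made-up placeholders):

  $ cat raid.json
  {
    "logical_disks": [
      {"size_gb": 100, "raid_level": "1", "is_root_volume": true}
    ]
  }
  $ openstack baremetal node set <node-uuid> --target-raid-config raid.json
  $ openstack baremetal node provide <node-uuid>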

Confidentiality: This message is intended solely for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or disseminate it in any way, and notify the sender of the misdelivery. Thank you!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-19 Thread Sean Dague
I was just starting to look through some logs to see if I could line up
request ids (part of the global request id efforts), when I realized that in
the move to uwsgi by default, we've entirely lost the INFO wsgi
request logs. :(

Instead of the old format (which was coming out of oslo.service) we get
the following -
http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-n-api.txt.gz#_Jul_19_03_44_58_233532


That definitely takes us a step backwards in understanding the world, as
we lose our request id on entry, which was extremely useful for matching
everything up. We hit a similar issue with placement, and added custom
paste middleware for that. Maybe we need to consider a similar thing
here, that would only emit if running under uwsgi/apache?
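
Something like this rough, paste-style sketch could work (not the actual
placement middleware; the class name and the request-id lookup key here are
illustrative only):

    import logging

    LOG = logging.getLogger(__name__)


    class RequestLogMiddleware(object):
        """Emit one INFO access line per request (simplified sketch)."""

        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            status = {}

            def _start_response(code, headers, exc_info=None):
                # remember the status so we can log it after the app runs
                status['code'] = code
                return start_response(code, headers, exc_info)

            result = self.application(environ, _start_response)
            LOG.info('"%s %s" status: %s request-id: %s',
                     environ.get('REQUEST_METHOD'),
                     environ.get('PATH_INFO'),
                     status.get('code'),
                     # assumes the request-id middleware put the inbound
                     # header into the WSGI environ under this key
                     environ.get('HTTP_X_OPENSTACK_REQUEST_ID', '-'))
            return result

Wiring it in only for the uwsgi/apache case could then just be a matter of how
the paste pipeline for the wsgi entry point is composed.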

Thoughts?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-19 Thread Chris Dent

On Wed, 19 Jul 2017, Balazs Gibizer wrote:


We are trying to get some help from the related functional test [5] but
honestly we still need some time to digest those LOCs. So any direct
help is appreciated.


I managed to create a functional test case that reproduces the above problem 
https://review.openstack.org/#/c/485088/


Excellent, thank you. I was planning to look into repeating this
today, will first look at this test and see what I can see. Your
experimentation is exactly the sort of stuff we need right now, so
thank you very much.


BTW, should I open a bug for it?


I also filed a bug so that we can track this work 
https://bugs.launchpad.net/nova/+bug/1705071


I guess Jay and Matt have already fixed a part of this, but not the
whole thing.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-07-19 Thread Tobias Rydberg
Hi everyone, 

Don't forget today's meeting of the PublicCloudWorkingGroup:
1400 UTC in IRC channel #openstack-meeting-3

Etherpad: https://etherpad.openstack.org/p/publiccloud-wg 

Regards,
Tobias
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How should we go about removing legacy VIF types in Queens?

2017-07-19 Thread Stephen Finucane
On Thu, 2017-07-13 at 07:54 -0600, Kevin Benton wrote:
> On Thu, Jul 13, 2017 at 7:26 AM, Stephen Finucane wrote:
> 
> > os-vif has been integrated into nova since the newton cycle. With the
> > integration of os-vif, the expectation is that all the old, non-os-vif
> > plugging/unplugging code found in [1] will be replaced by code that
> > harnesses
> > os-vif plugins [2]. This has happened for a few of the VIF types, and newer
> > VIFs are being added in this manner [3]. However, there are quite a few
> > VIFs
> > that are still using the legacy path, and I think it's about time we
> > started
> > moving things forward. Doing so allows us to continue to progress on
> > passing
> > os-vif objects from neutron and remove the large swathes of legacy code
> > still
> > found in nova.
> > 
> > I've opened a bug against networking-bigswitch [4] for one of these VIF
> > types,
> > IVS, and I'm thinking I'll do the same for a lot of the other VIF types
> > where I
> > can find definite vendors. Is there anything else we can do though? At some
> > point we're going to have to just start deleting code and I'd like to avoid
> > leaving operators in the lurch.
>
> Some of the stuff like '802.1qbh' isn't particularly vendor specific so I'm
> not sure who will host it and a repo just for that seems like a bit much.
> Should we just bite the bullet and convert them in the nova tree or put them
> in os-vif?

That VIF type actually seems to be a CISCO-only option [1][2] but I get what
you're saying. I think we can definitely move some of them, though (IVS, for a
start). Perhaps moving the ones that *do* have clear owners to their respective
packages is the way to go?
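
For anyone picking one of these up, a conversion is mostly a matter of writing
a small os-vif plugin and registering it as an 'os_vif' entry point. A skeletal
sketch (the plugin/class names are invented for illustration; double-check the
PluginBase method signatures against os-vif itself):

    from os_vif import objects
    from os_vif import plugin


    class IVSPlugin(plugin.PluginBase):
        """Sketch of an os-vif plugin that would handle IVS ports."""

        def describe(self):
            # advertise which VIF object(s) this plugin can handle
            return objects.host_info.HostPluginInfo(
                plugin_name='ivs',
                vif_info=[
                    objects.host_info.HostVIFInfo(
                        vif_object_name=objects.vif.VIFGeneric.__name__,
                        min_version='1.0',
                        max_version='1.0'),
                ])

        def plug(self, vif, instance_info):
            # create the device and attach it to the switch here
            pass

        def unplug(self, vif, instance_info):
            # detach and delete the device here
            pass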

Stephen

[1] http://codesearch.openstack.org/?q=802.1qbh=nope==
[2] https://git.openstack.org/cgit/openstack/networking-cisco/tree/networking_cisco/plugins/ml2/drivers/cisco/ucsm/constants.py

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-vif] 1.6.1 release for pike.

2017-07-19 Thread Mooney, Sean K
You are right, but adding [os-vif] lands it in my os-vif folder, so
I guess [openstack-dev][os-vif][nova][neutron] 1.6.1 release for pike
would have made it work for everyone :)

> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: Tuesday, July 18, 2017 10:35 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [os-vif] 1.6.1 release for pike.
> 
> On 7/18/2017 12:07 PM, Mooney, Sean K wrote:
> > Resending with correct subject line
> 
> The real correct subject line tag would be [nova] or [nova][neutron].
> :P
> 
> --
> 
> Thanks,
> 
> Matt
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-19 Thread Balazs Gibizer
On Tue, Jul 18, 2017 at 2:39 PM, Balazs Gibizer wrote:



On Mon, Jul 17, 2017 at 6:40 PM, Jay Pipes  wrote:
> On 07/17/2017 11:31 AM, Balazs Gibizer wrote:
> > On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent wrote:
> >> On Thu, 13 Jul 2017, Balazs Gibizer wrote:
> >>
> >>> /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1"
> >>> but placement returns an empty response. Then nova scheduler falls
> >>> back to legacy behavior [4] and places the instance without
> >>> considering the custom resource request.
> >>
> >> As far as I can tell at least one missing piece of the puzzle here
> >> is that your MAGIC provider does not have the
> >> 'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
> >> and MAGIC to be in the same aggregate, the MAGIC needs to announce
> >> that its inventory is for sharing. The comments here have a bit more
> >> on that:
> >>
> >> https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678
> >
> > Thanks a lot for the detailed answer. Yes, this was the missing piece.
> > However I had to add that trait both to the MAGIC provider and to my
> > compute provider to make it work. Is it intentional that the compute
> > also has to have that trait?
>
> No. The compute node doesn't need that trait. It only needs to be
> associated to an aggregate that is associated to the provider that is
> marked with the MISC_SHARES_VIA_AGGREGATE trait.
>
> In other words, you need to do this:
>
> 1) Create the provider record for the thing that is going to share the
> CUSTOM_MAGIC resources
>
> 2) Create an inventory record on that provider
>
> 3) Set the MISC_SHARES_VIA_AGGREGATE trait on that provider
>
> 4) Create an aggregate
>
> 5) Associate both the above provider and the compute node provider with
> the aggregate
>
> That's it. The compute node provider will now have access to the
> CUSTOM_MAGIC resources that the other provider has in inventory.
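
For concreteness, those five steps map onto the placement REST API roughly as
in the sketch below. It is only a sketch: the token is elided, the uuids and
totals are made up, and the request bodies / microversion are from memory, so
verify them against the placement api-ref.

    import uuid

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # made-up endpoint
    HEADERS = {
        'X-Auth-Token': '<admin token>',
        # assumption: any microversion >= 1.6 (traits support) should do
        'OpenStack-API-Version': 'placement 1.7',
    }

    magic_rp = str(uuid.uuid4())
    agg = str(uuid.uuid4())

    # 1) the provider that will share CUSTOM_MAGIC (the CUSTOM_MAGIC
    #    resource class itself is assumed to already exist)
    requests.post(PLACEMENT + '/resource_providers', headers=HEADERS,
                  json={'name': 'magic-share-provider', 'uuid': magic_rp})

    # 2) give it an inventory of the custom resource
    requests.put(PLACEMENT + '/resource_providers/%s/inventories' % magic_rp,
                 headers=HEADERS,
                 json={'resource_provider_generation': 0,
                       'inventories': {'CUSTOM_MAGIC': {'total': 512}}})

    # 3) mark the provider as sharing its inventory via aggregate
    requests.put(PLACEMENT + '/resource_providers/%s/traits' % magic_rp,
                 headers=HEADERS,
                 json={'resource_provider_generation': 1,
                       'traits': ['MISC_SHARES_VIA_AGGREGATE']})

    # 4) + 5) associate it and the compute node provider with one aggregate
    for rp in (magic_rp, '<compute node rp uuid>'):
        requests.put(PLACEMENT + '/resource_providers/%s/aggregates' % rp,
                     headers=HEADERS,
                     json=[agg])  # pre-1.19 body; newer microversions use a dict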

Something doesn't add up. We tried exactly your order of actions (see
the script [1]) but placement returns an empty result (see the logs of
the script [2], of the scheduler [3], and of placement [4]). However, as
soon as we add the MISC_SHARES_VIA_AGGREGATE trait to the compute
provider as well, placement-api returns allocation candidates as
expected.

We are trying to get some help from the related functional test [5] but
honestly we still need some time to digest those LOCs. So any direct
help is appreciated.


I managed to create a functional test case that reproduces the above 
problem https://review.openstack.org/#/c/485088/





BTW, should I open a bug for it?


I also filed a bug so that we can track this work 
https://bugs.launchpad.net/nova/+bug/1705071


Cheers,
gibi





As a related question: I looked at the claim in the scheduler patch
https://review.openstack.org/#/c/483566 and I am wondering whether that patch
wants to claim not just the resources a compute provider provides but
also custom resources like MAGIC at [6]. In the meantime I will go and
test that patch to see what it actually does with some MAGIC. :)

Thanks for the help!
Cheers,
gibi

[1] http://paste.openstack.org/show/615707/
[2] http://paste.openstack.org/show/615708/
[3] http://paste.openstack.org/show/615709/
[4] http://paste.openstack.org/show/615710/
[5]
https://github.com/openstack/nova/blob/0e6cac5fde830f1de0ebdd4eebc130de1eb0198d/nova/tests/functional/db/test_resource_provider.py#L1969
[6]
https://review.openstack.org/#/c/483566/3/nova/scheduler/filter_scheduler.py@167
>
>
> Magic. :)
>
> Best,
> -jay
>
> > I updated my script with the trait. [3]
> >
> >>
> >> It's quite likely this is not well documented yet as this style of
> >> declaring that something is shared was a later development. The
> >> initial code that added the support for GET /resource_providers
> >> was around, it was later reused for GET /allocation_candidates:
> >>
> >> https://review.openstack.org/#/c/460798/
> >
> > What would be a good place to document this? I think I can help with
> > enhancing the documentation from this perspective.
> >
> > Thanks again.
> > Cheers,
> > gibi
> >
> >>
> >> --
> >> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> >> freenode: cdent tw: @anticdent
> >
> > [3] http://paste.openstack.org/show/615629/
> >
> >
> >
> >
> 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> 

[openstack-dev] [tripleo][rdo] How does RDO package (puppet module) gets installed on Tripleo overcloud nodes ?

2017-07-19 Thread Dnyaneshwar Pawar
Hi tripleo/rdo experts,

puppet-veritas_hyperscale is added to RDO trunk via [1].
I also patched tripleo-puppet-elements to include puppet-veritas_hyperscale [2].

When I create an overcloud using quickstart.sh + latest RDO trunk, the
puppet-veritas_hyperscale modules do not get installed on any of the overcloud
nodes, nor on the undercloud.
From #rdo (amoralej, apvec) and #tripleo (shardy) I got to know that [1] and
[2] alone may not be sufficient to get these modules installed under
“/etc/puppet/modules” on the overcloud nodes.
This needs confirmation from Emilien/Alex.
If the above theory is correct, I may need to submit one more patch. I am not
clear against which repo I need to submit the patch. Do we have any example that
I can refer to?

[1] https://review.rdoproject.org/r/#/q/topic:add-puppet-veritas_hyperscale
[2] https://review.openstack.org/#/c/481085/


Thanks,
Dnyaneshwar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-vif 1.6.1 release for pike.

2017-07-19 Thread Jan Gutter
On Tue, Jul 18, 2017 at 5:54 PM, Mooney, Sean K  wrote:
> Should have:
>
> Improve OVS Representor Lookup  https://review.openstack.org/#/c/484051/

I've split out the review into two portions, here is the second one:

Improve OVS Representor VF Lookup  https://review.openstack.org/#/c/485125/

-- 
Jan Gutter
Embedded Networking Software Engineer

Netronome | First Floor Suite 1, Block A, Southdowns Ridge Office Park,
Cnr Nellmapius and John Vorster St, Irene, Pretoria, 0157
Phone: +27 (12) 665-4427 | Skype: jangutter |  www.netronome.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-19 Thread Flavio Percoco

On 17/07/17 14:05 +0200, Flavio Percoco wrote:

Thanks for all the feedback so far. This is one of the things I appreciate the
most about this community: open conversations, honest feedback and the will to
collaborate.

I'm top-posting to announce that we'll have a joint meeting with the Kolla team
on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not for
me) but I do want to have a live discussion with the rest of the Kolla team.

Some questions about the meeting:

* How much time can we allocate?
* Can we prepare an agenda rather than just discussing "TripleO is thinking of
using Ansible and not kolla-kubernetes"? (I'm happy to come up with such an
agenda)

One last point. I'm not interested in conversations around competition,
re-invention, etc. I think I speak for the entire TripleO team when I say that
this is not about "winning" in this space but rather seeing how/if we can
collaborate and how/if it makes sense to keep exploring the path described in
the email below.


Hey y'all,

Sorry for not having sent this earlier but, Life Happened (TM).

In preparation for the meeting today, I took the time to collect some thoughts
on the topic so that we can, hopefully, have a more focused and constructive
conversation.

Please, find my thoughts on this etherpad and feel free to comment on it. I've
disabled color so please, tag your comments with your nickname.

https://etherpad.openstack.org/p/tripleo-ptg-queens-kubernetes

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-07-19 Thread Flavio Percoco

Hey folks,

Based on the outcome of this thread, I've submitted this resolution to allow
teams to host meetings outside meeting channels. Please, comment and review :)

https://review.openstack.org/#/c/485117/

Flavio

On 26/06/17 10:37 +0200, Flavio Percoco wrote:

Hey Y'all,

Not so long ago there was a discussion about how we manage our meeting channels
and whether there's a need for more or fewer of them [0]. Good points were made in
that thread in favor of keeping the existing model, but some things have changed,
hence this new thread.

More teams - including the Technical Committee[1] - have started to adopt office
hours as a way to provide support and have synchronous discussion. Some of these
teams have also discontinued their IRC meetings or moved to an ad-hoc meetings
model.

As these changes start popping up in the community, we need to have a good way
to track the office hours for each team and allow for teams to meet at the time
they prefer. Before we go deep into the discussion again, I'd like to summarize
what has been discussed in the past (thanks ttx for the summary):

The main objections to just letting people meet anywhere are:
- how do we ensure the channel is logged/accessible
- we won't catch random mentions of our name as easily anymore
- might create a pile-up of meetings at peak times rather than force them to
  spread around
- increases silo effect

Main benefits being:
- No more scheduling nightmare
- More flexibility in listing things in the calendar


Some of the problems above can be solved programmatically - cross-check on
eavesdrop to make sure logging is enabled, for example. The problems that I'm
more worried about are the social ones, because they'll require a change in the
way we interact among us.

Not being able to easily ping someone during a meeting is kind of a bummer but
I'd argue that assuming someone is in the meeting channel and available at all
times is a mistake to begin with.

There will be conflicts on meeting times. There will be slots that will be used
by several teams as these slots are convenient for cross-timezone interaction.
We can check this and highlight the various conflicts but I'd argue we
shouldn't. We already have some overlaps in the current structure.

The social drawbacks related to this change can be overcome by interacting more
on the mailing list. Ideally, this change should help raise awareness about the
distributed nature of our community, encourage folks to do more office hours,
fewer meetings and, more importantly, encourage folks to favor the mailing
list over IRC conversations for *some* discussions.

So, should we let teams host IRC meetings in their own channels? Thoughts?

Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108360.html
[1] https://governance.openstack.org/tc/#office-hours


--
@flaper87
Flavio Percoco




--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Logging in containerized services

2017-07-19 Thread Mark Goddard
Hi,

Kolla-ansible went through this process a few years ago, and ended up with
a solution involving heka pulling logs from files in a shared docker volume
(kolla_logs). Heka was recently switched for fluentd due to the
disappearance of upstream support. I suspect kolla-kubernetes has been
through a similar process recently.

I'd recommend soliciting input from the kolla team on the factors that led
to the current design, and whether with hindsight anything would be done
differently.

Cheers,
Mark

On 19 July 2017 at 09:26, Bogdan Dobrelya  wrote:

> On 18.07.2017 21:27, Lars Kellogg-Stedman wrote:
> > Our current model for logging in a containerized deployment has pretty
> > much everything logging to files in a directory that has been
> > bind-mounted from the host.  This has some advantages: primarily, it
> > makes it easy for an operator on the local system to find logs,
> > particularly if they have had some previous exposure to
> > non-containerized deployments.
> >
> > There is strong demand for a centralized logging solution.  We've got
> > one potential solution right now in the form of the fluentd service
> > introduced in Newton, but this requires explicit registration of log
> > files for every service.  I don't think it's an ideal solution, and I
> > would like to explore some alternatives.
> >
> > Logging via syslog
> > ==
> >
> > For the purposes of the following, I'm going to assume that we're
> > deploying on an EL-variant (RHEL/CentOS/etc), which means (a) journald
> > owns /dev/log and (b) we're running rsyslog on the host and using the
> > omjournal plugin to read messages from journald.
> >
> > If we bind mount /dev/log into containers and configure openstack
> > services to log via syslog rather than via files, we get the following
> > for free:
> >
> > - We get message-based rather than line-based logging.  This means that
> > multiline tracebacks are handled correctly.
> >
> > - A single point of collection for logs.  If your host has been
> > configured to ship logs to a centralized collector, logs from all of
> > your services will be sent there without any additional configuration.
> >
> > - We get per-service message rate limiting from journald.
> >
> > - Log messages are annotated by journald with a variety of useful
> > metadata, including the container id and a high resolution timestamp.
> >
> > - We can configure the syslog service on the host to continue to write
> > files into legacy locations, so an operator looking to run grep against
> > local log files will still have that ability.
> >
> > - Rsyslog itself can send structured messages directly to an Elastic
> > instance, which means that in a many deployments we would not require
> > fluentd and its dependencies.
> >
> > - This plays well in environments where some services are running in
> > containers and others are running on the host, because everything simply
> > logs to /dev/log.
>
> Plus it solves the log rotation (which still has to be addressed [0] for Pike
> though), out of the box.
>
> >
> > Logging via stdin/stdout
> > ==
> >
> > A common pattern in the container world is to log everything to
> > stdout/stderr.  This has some of the advantages of the above:
> >
> > - We can configure the container orchestration service to send logs to
> > the journal or to another collector.
> >
> > - We get a different set of annotations on log messages.
> >
> > - This solution may play better with frameworks like Kubernetes that
> > tend to isolate containers from the host a little more than using Docker
> > or similar tools straight out of the box.
> >
> > But there are some disadvantages:
> >
> > - Some services only know how to log via syslog (e.g., swift and haproxy)
> >
> > - We're back to line-based vs. message-based logging.
> >
> > - It ends up being more difficult to expose logs at legacy locations.
> >
> > - The container orchestration layer may not implement the same message
> > rate limiting we get with fluentd.
> >
> > Based on the above, I would like to suggest exploring a syslog-based
> > logging model moving forward. What do people think about this idea? I've
> > started putting together a spec
> > at https://review.openstack.org/#/c/484922/ and I would welcome your
> input.
>
> My vote goes for this option, but TBD for Queens. It won't make it into
> Pike, as it looks too late for such an amount of drastic change, like
> switching all OpenStack services to syslog, deploying additional
> required components, and so on.
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1700912
> [1] https://review.openstack.org/#/c/462900/
>
> >
> > Cheers,
> >
> > --
> > Lars Kellogg-Stedman
> >
> >
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > 

Re: [openstack-dev] [tripleo] Logging in containerized services

2017-07-19 Thread Bogdan Dobrelya
On 18.07.2017 21:27, Lars Kellogg-Stedman wrote:
> Our current model for logging in a containerized deployment has pretty
> much everything logging to files in a directory that has been
> bind-mounted from the host.  This has some advantages: primarily, it
> makes it easy for an operator on the local system to find logs,
> particularly if they have had some previous exposure to
> non-containerized deployments.
> 
> There is strong demand for a centralized logging solution.  We've got
> one potential solution right now in the form of the fluentd service
> introduced in Newton, but this requires explicit registration of log
> files for every service.  I don't think it's an ideal solution, and I
> would like to explore some alternatives.
> 
> Logging via syslog
> ==
> 
> For the purposes of the following, I'm going to assume that we're
> deploying on an EL-variant (RHEL/CentOS/etc), which means (a) journald
> owns /dev/log and (b) we're running rsyslog on the host and using the
> omjournal plugin to read messages from journald.
> 
> If we bind mount /dev/log into containers and configure openstack
> services to log via syslog rather than via files, we get the following
> for free:
> 
> - We get message-based rather than line-based logging.  This means that
> multiline tracebacks are handled correctly.
> 
> - A single point of collection for logs.  If your host has been
> configured to ship logs to a centralized collector, logs from all of
> your services will be sent there without any additional configuration.
> 
> - We get per-service message rate limiting from journald.
> 
> - Log messages are annotated by journald with a variety of useful
> metadata, including the container id and a high resolution timestamp.
> 
> - We can configure the syslog service on the host to continue to write
> files into legacy locations, so an operator looking to run grep against
> local log files will still have that ability.
> 
> - Rsyslog itself can send structured messages directly to an Elastic
> instance, which means that in a many deployments we would not require
> fluentd and its dependencies.
> 
> - This plays well in environments where some services are running in
> containers and others are running on the host, because everything simply
> logs to /dev/log.

Plus it solves the log rotation (which still has to be addressed [0] for Pike
though), out of the box.
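
To illustrate how little the services would need for that model: with /dev/log
bind-mounted in, message-based logging is plain stdlib syslog handling, as in
the minimal sketch below (the facility and tag are arbitrary choices; for
OpenStack services the same is normally reached via oslo.log's use_syslog /
syslog_log_facility options):

    import logging
    import logging.handlers

    # send records to the host's /dev/log, bind-mounted into the container
    handler = logging.handlers.SysLogHandler(
        address='/dev/log',
        facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
    handler.setFormatter(
        logging.Formatter('nova-api[%(process)d]: %(message)s'))

    LOG = logging.getLogger('demo')
    LOG.addHandler(handler)
    LOG.setLevel(logging.INFO)

    try:
        raise RuntimeError('boom')
    except RuntimeError:
        # the formatted record, traceback included, is handed to syslog
        # as one message rather than line by line
        LOG.exception('something failed')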

> 
> Logging via stdin/stdout
> ==
> 
> A common pattern in the container world is to log everything to
> stdout/stderr.  This has some of the advantages of the above:
> 
> - We can configure the container orchestration service to send logs to
> the journal or to another collector.
> 
> - We get a different set of annotations on log messages.
> 
> - This solution may play better with frameworks like Kubernetes that
> tend to isolate containers from the host a little more than using Docker
> or similar tools straight out of the box.
> 
> But there are some disadvantages:
> 
> - Some services only know how to log via syslog (e.g., swift and haproxy)
> 
> - We're back to line-based vs. message-based logging.
> 
> - It ends up being more difficult to expose logs at legacy locations.
> 
> - The container orchestration layer may not implement the same message
> rate limiting we get with fluentd.
> 
> Based on the above, I would like to suggest exploring a syslog-based
> logging model moving forward. What do people think about this idea? I've
> started putting together a spec
> at https://review.openstack.org/#/c/484922/ and I would welcome your input.

My vote goes for this option, but TBD for Queens. It won't make it into
Pike, as it looks too late for such an amount of drastic change, like
switching all OpenStack services to syslog, deploying additional
required components, and so on.

[0] https://bugs.launchpad.net/tripleo/+bug/1700912
[1] https://review.openstack.org/#/c/462900/

> 
> Cheers,
> 
> -- 
> Lars Kellogg-Stedman
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] weekly meetings on #tripleo

2017-07-19 Thread mathieu bultel
On 07/18/2017 11:22 PM, Emilien Macchi wrote:
> On Mon, Jul 17, 2017 at 12:10 PM, Emilien Macchi  wrote:
>> Since we have mixed feelings but generally agree that we should give
>> it a try, let's give it a try and see how it goes, at least one time,
>> tomorrow.
> So we tried and did the meeting on #tripleo.
> I noticed more people participated and were present at the meeting, and
> I didn't notice any interruptions.
>
> Please bring any feedback (positive / negative) so we decide if
> whether or not we continue that way.
+1, in particular for those, like me, who are bad with calendar / time
schedule things :)
>> On Mon, Jul 10, 2017 at 10:01 AM, Michele Baldessari  
>> wrote:
>>> On Mon, Jul 10, 2017 at 11:36:03AM -0230, Brent Eagles wrote:
 +1 for giving it a try.
>>> Agreed.
>>>
 On Wed, Jul 5, 2017 at 2:26 PM, Emilien Macchi  wrote:

> After reading http://lists.openstack.org/pipermail/openstack-dev/2017-
> June/118899.html
> - we might want to collect TripleO's community feedback on doing
> weekly meetings on #tripleo instead of #openstack-meeting-alt.
>
> I see some direct benefits:
> - if you come up late in meetings, you could easily read backlog in
> #tripleo
> - newcomers not aware about the meeting channel wouldn't have to search
> for it
> - meeting would maybe get more activity and we would expose the
> information more broadly
>
> Any feedback on this proposal is welcome before we make any change (or
> not).
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> --
>>> Michele Baldessari
>>> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>> Emilien Macchi
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][l2-gateway] Added Ricardo Noriega to the core team

2017-07-19 Thread Gary Kotton
Hi,
Over the last few months Ricardo Noriega has been making many contributions to 
the project and has actually helped get it to the stage where it’s a lot 
healthier than before ☺. I am adding him to the core team.
Congratulations!
A luta continua (the struggle continues)
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - making external subnets visible by default

2017-07-19 Thread Kevin Benton
Hi all,

In the process of making it possible to make external subnets visible via a
policy.json entry [1], I ran into a limitation of our DB pagination in
conjunction with the policy engine.

The queries to the DB do not take into account policy.json entries so users
may get fewer subnets than requested after the policy engine has filtered
them out. This is causing the CI to fail on that patch since the default is
to not allow users to see external subnets.

Before I go through a bunch of work to adjust the tests to handle this
case, I was wondering what people thought of just making external subnets
visible by default in our default policy.json at the same time.
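
To be clear about the scale of the change, it would amount to a one-line tweak
to the get_subnet entry in the default policy.json, something along the lines
of (the 'external' rule name here is illustrative, not necessarily what the
patch uses):

    "get_subnet": "rule:admin_or_owner or rule:shared or rule:external"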

Does anyone have any objections to making external subnets visible by
default?

1. https://review.openstack.org/#/c/476094/

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev