[openstack-dev] [Heat] Template Testing environment

2017-05-31 Thread Lance Haig

Hi All,

I asked on IRC for some guidance on how I would be able to test 
templates for my changes to the heat-templates repo.


Is there a specific localrc file configuration that I could use for 
Devstack so that I can get all the services I need to be able to run 
tests against my template changes?
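For what it's worth, a minimal local.conf that pulls Heat into a Devstack deployment looks something like the sketch below. The plugin URL and variable set should be double-checked against the current devstack and heat docs, since these change over time:

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Heat is not in the default service set; enable it via its devstack plugin
enable_plugin heat https://git.openstack.org/openstack/heat
```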


As I am going to be making many changes to the templates, I would like to 
test them before I submit reviews.


In addition, I want to learn more about how the current tests are run 
with the new "headless" mode.


Is someone able to assist me?

Thanks

Lance


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler]

2017-05-31 Thread Narendra Pal Singh
Hello,

I am looking for some suggestions. Let's say I have multiple compute nodes:
Pool-A has 5 nodes and Pool-B has 4 nodes, categorized based on some
property.
Now when there is a request for a new instance, I always want the instance
to be placed on a compute node in Pool-A and never in Pool-B.
What would be the best approach to address this situation?
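One common approach is host aggregates combined with the AggregateInstanceExtraSpecsFilter: tag each pool's hosts with a property and give the flavors that must land in Pool-A a matching extra spec. A hedged CLI sketch follows; the aggregate names, host names, flavor name, and the `pool` property are invented for illustration, and the filter has to be enabled in nova.conf (e.g. in the scheduler filter list) for the extra spec to be enforced:

```shell
# Group the compute nodes into aggregates carrying a distinguishing property
openstack aggregate create --property pool=a pool-a
openstack aggregate create --property pool=b pool-b
openstack aggregate add host pool-a compute1   # repeat for the 5 Pool-A nodes
openstack aggregate add host pool-b compute6   # repeat for the 4 Pool-B nodes

# Flavors that must land in Pool-A carry a matching extra spec; the
# AggregateInstanceExtraSpecsFilter then rejects hosts outside pool-a
openstack flavor set m1.poola --property aggregate_instance_extra_specs:pool=a
```

An alternative, if users should choose explicitly, is exposing each pool as its own availability zone on the aggregate.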

-- 
Best Regards,
Narendra Pal Singh


Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-31 Thread Lee Yarwood
On 31-05-17 20:06:01, Farr, Kaitlin M. wrote:
>> IMHO for now we are better off storing a secret passphrase in Barbican
>> for use with these encrypted volumes, would there be any objections to
>> this? Are there actual plans to use a symmetric key stored in Barbican
>> to directly encrypt and decrypt volumes?
> 
> It sounds like you're thinking that using a key manager object with the
> type "passphrase" is closer to how the encryptors are using the bytes
> than using the "symmetric key" type, but if you switch over to using
> passphrases, where are you going to generate the random bytes?  Would
> you prefer the user to input their own passphrase?  The benefit of
> continuing to use symmetric keys as "passphrases" is that the key
> manager can randomly generate the bytes.  Key generation is a standard
> feature of key managers, but password generation is not.

Thanks for responding Kaitlin, I'd be happy to have the key manager
generate a random passphrase of a given length as defined by the volume
encryption type. I don't think we would want any user input here as
ultimately the encryption is transparent to them.
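For illustration, "have the key manager generate a random passphrase of a given length" comes down to drawing from the OS CSPRNG. The sketch below is illustrative only, not Barbican or Castellan code, and the length and alphabet are arbitrary choices:

```python
import secrets
import string

def generate_passphrase(length=32):
    # Draw each character from the OS CSPRNG, as a key manager would when
    # generating key material; no user input is involved, so the result
    # stays transparent to the user just like the volume encryption itself.
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_passphrase())  # prints `length` random alphanumeric characters
```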

> On a side note, I thought the latest QEMU encryption feature was supposed to
> have support for passing in key material directly to the encryptors?  Perhaps
> this is not true and I am misremembering.

That isn't the case, with the native LUKS support in QEMU we can now
skip the use of the front-end encryptors entirely. We simply provide the
passphrase via a libvirt secret associated with the volume that is then
passed to QEMU in a secure fashion [1] to unlock the LUKS volume. 

[1] 
https://www.berrange.com/posts/2016/04/01/improving-qemu-security-part-3-securely-passing-in-credentials/

-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76



Re: [Openstack-operators] [cinder] Thoughts on cinder readiness

2017-05-31 Thread Joshua Harlow

Erik McCormick wrote:

I've been running Ceph-backed Cinder since, I think, Icehouse. It's
really more of a function of your backend or the hypervisor than Cinder
itself. That being said, it's been probably my smallest OpenStack pain 
point over the years.

I can't imagine what sort of concurrency issues you'd run into short of
a large public cloud given that it really doesn't do much once
provisioning a volume is complete. Maybe if you've got people taking a
ton of snapshots? What sort of specific issues are you concerned about?



Mainly the ones that spawned articles/specs like:

https://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/

https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cinder-volume-active-active-support.html

And a few more like those. I'm especially not going to be a big fan of 
having to (as a person, myself or others on the godaddy team) go in and 
muck with volumes in stuck states and so on (similar issues occur in 
nova, which just drain the blood out of the humans that have to go fix them).



-Erik

On May 31, 2017 8:30 PM, "Mike Lowe" > wrote:

We have run ceph backed cinder from Liberty through Newton. With the
exception of a libvirt 2.x bug that should now be fixed, cinder
really hasn't caused us any problems.

Sent from my iPad

 > On May 31, 2017, at 6:12 PM, Joshua Harlow > wrote:
 >
 > Hi folks,
 >
 > So I was having some back and forth internally about whether cinder
is ready for use and wanted to get other operators' thoughts on how
their cinder experiences have been going, any trials and tribulations.
 >
 > For context, we are running on liberty (yes I know, working on
getting that to newer versions) and folks in godaddy are starting to
use more and more cinder (backed by ceph) and that got me thinking
about asking the question from operators (and devs) on what kind of
readiness 'rating' (or whatever you would want to call it) would
people give cinder in liberty.
 >
 > Some things that I was thinking about were concurrency rates,
because I know that's been a common issue that the cinder developers
have been working through (using tooz, and various other lock
mechanisms and such).
 >
 > Have other cinder operators seen concurrent operations (or
conflicting operations or ...) work better in newer releases (is
there any metric/s anyone has gathered about how things have gotten
worse/better under scale for cinder in various releases? particularly
with regard to using ceph).
 >
 > Thoughts?
 >
 > It'd be interesting to capture (not just for my own usage), I
think, because such info helps the overall user and operator and dev
community (and yes I would expect various etherpads to have parts of
this information, but it'd be nice to have like a single place where
other operators can specify how ready they believe a project is for
a given release and for a given configuration; and ideally provide
details/comments as to why they believe this).
 >
 > -Josh
 >
 >
 >
 >


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators






Re: [openstack-dev] [mogan] Architecture diagrams

2017-05-31 Thread Zhenguo Niu
On Thu, Jun 1, 2017 at 4:06 AM, Joshua Harlow  wrote:

> Hi mogan folks,
>
> I was doing some source code examination of mogan and it piqued my
> interest in how it all is connected together. In part I see there is a
> state machine, some taskflow usage, some wsgi usage that looks like parts
> of it are inspired(?) by various other projects.
>
> That got me wondering if there are any decent diagrams or documents that
> explain how it all connects together and I thought I might as well ask and
> see if there are any (50/50 chances? ha).
>

Hi Josh, you can find some diagrams/documents on our wiki [1]. Sorry for
the lack of docs; we will enrich them soon.


>
> I am especially interested in the state machine, taskflow and such (no
> tooz seems to be there) and how they are used (if they are, or are going to
> be used); I guess in part because I know the most about those
> libraries/components :)
>
>
In fact, we use the same state machine library as ironic to help control
the baremetal server state changes. And we introduced a linear taskflow for
create_server to reliably ensure that the workflow is executed in a manner
that can survive process failure by reverting. It would be great if we could
get help/suggestions from a taskflow expert :)
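The revert behaviour described above can be illustrated with a toy sketch of the linear-flow pattern. This is not Mogan's actual create_server code and does not use the taskflow library itself; the task names are invented, and the point is only the semantics: tasks run in order, and on failure every started task is reverted in reverse order:

```python
class AllocateNode:
    """Toy task: reserve a baremetal node for the server."""
    def execute(self, ctx):
        ctx['log'].append('allocate')
    def revert(self, ctx):
        ctx['log'].append('unallocate')

class DeployImage:
    """Toy task that fails mid-flow, forcing a revert."""
    def execute(self, ctx):
        ctx['log'].append('deploy')
        raise RuntimeError('image deploy failed')
    def revert(self, ctx):
        ctx['log'].append('undeploy')

def run_linear(tasks, ctx):
    """Run tasks in order; on failure, revert every started task in reverse."""
    started = []
    try:
        for t in tasks:
            started.append(t)   # a failing task is also reverted
            t.execute(ctx)
    except Exception:
        for t in reversed(started):
            t.revert(ctx)
        raise
    return ctx

ctx = {'log': []}
try:
    run_linear([AllocateNode(), DeployImage()], ctx)
except RuntimeError:
    pass
print(ctx['log'])  # ['allocate', 'deploy', 'undeploy', 'unallocate']
```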


> -Josh
>
>


[1] https://wiki.openstack.org/wiki/Mogan

-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [panko] dropping hbase driver support

2017-05-31 Thread Hanxi Liu
+1

Hanxi Liu
IRC(lhx_)

On Sat, May 27, 2017 at 1:17 AM, gordon chung  wrote:

> hi,
>
> as all of you know, we moved all storage out of ceilometer so it now
> handles only data generation and normalisation. there seems to be very
> little contribution to panko, which handles metadata indexing and event
> storage, so given how little it's being adopted and how few resources
> are being put on supporting it, i'd like to propose dropping hbase
> support as a first step in making the project more manageable for
> whatever resource chooses to support it.
>
> why hbase as initial candidate to prune?
> - it has no testing in gate
> - it never had testing in gate
> - we didn't receive a single reply in user survey saying hbase was used
> - all the devs who originally worked on driver don't work on openstack
> anymore.
> - i'd be surprised if it actually worked
>
> i just realised it's a long weekend in some places so i'll let this
> linger for a bit.
>
> cheers,
> --
> gord
>


Re: [Openstack-operators] [User-committee] Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-05-31 Thread Yih Leong, Sun.
I recall that the UC is inviting WG chairs to join the UC IRC meeting to
share WG updates on high-level status/activities. Is this something
similar?
Should the WG chairs attend the UC meeting instead of setting up another
separate meeting?


On Wednesday, May 31, 2017, MCCABE, JAMEY A  wrote:

> Working group (WG) chairs or delegates, please enter your name (and WG
> name) and what times you could meet at this poll:
> https://beta.doodle.com/poll/6k36zgre9ttciwqz#table
>
>
>
> As background and to share progress:
>
>- We started and generally confirmed the desire to have a regular
>cross WG status meeting at the Boston Summit.
>- Specifically the groups interested in Telco NFV and Fog Edge agreed
>to collaborate more often and in a more organized fashion.
>- In e-mails and then in today’s Operators Telco/NFV we finalized a
>proposal to have all WGs meet for high level status monthly and to bring
>the collaboration back to our individual WG sessions.
>- the User Committee sessions are appropriate for the Monthly WG
>Status meeting
>- more detailed coordination across Telco/NFV and Fog Edge groups
>should take place in the Operators Telco NFV WG meetings which already
>occur every 2 weeks.
>- we need participation of each WG Chair (or a delegate)
>- we welcome and request the OPNFV and Linux Foundation and other WGs
>to join us in the cross WG status meetings
>
>
>
> The Doodle was set up to gain concurrence on a time of week at which we
> could schedule the meeting; it is not intended to be for a specific week.
>
>
>
> Jamey McCabe – AT&T Integrated Cloud – jm6819 – mobile if needed:
> 847-496-1176
>
>
>
>
>


Re: [openstack-dev] [User-committee] Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-05-31 Thread Yih Leong, Sun.
I recall that the UC is inviting WG chairs to join the UC IRC meeting to
share WG updates on high-level status/activities. Is this something
similar?
Should the WG chairs attend the UC meeting instead of setting up another
separate meeting?


On Wednesday, May 31, 2017, MCCABE, JAMEY A  wrote:

> Working group (WG) chairs or delegates, please enter your name (and WG
> name) and what times you could meet at this poll:
> https://beta.doodle.com/poll/6k36zgre9ttciwqz#table
>
>
>
> As background and to share progress:
>
>- We started and generally confirmed the desire to have a regular
>cross WG status meeting at the Boston Summit.
>- Specifically the groups interested in Telco NFV and Fog Edge agreed
>to collaborate more often and in a more organized fashion.
>- In e-mails and then in today’s Operators Telco/NFV we finalized a
>proposal to have all WGs meet for high level status monthly and to bring
>the collaboration back to our individual WG sessions.
>- the User Committee sessions are appropriate for the Monthly WG
>Status meeting
>- more detailed coordination across Telco/NFV and Fog Edge groups
>should take place in the Operators Telco NFV WG meetings which already
>occur every 2 weeks.
>- we need participation of each WG Chair (or a delegate)
>- we welcome and request the OPNFV and Linux Foundation and other WGs
>to join us in the cross WG status meetings
>
>
>
> The Doodle was set up to gain concurrence on a time of week at which we
> could schedule the meeting; it is not intended to be for a specific week.
>
>
>
> Jamey McCabe – AT&T Integrated Cloud – jm6819 – mobile if needed:
> 847-496-1176
>
>
>
>
>


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Ghanshyam Mann
On Thu, Jun 1, 2017 at 9:46 AM, Matthew Treinish  wrote:
> On Wed, May 31, 2017 at 04:24:14PM +, Jeremy Stanley wrote:
>> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
>> [...]
>> > Trademark programs are trademark programs - we should have a unified
>> > process for all of them. Let's not make the same mistakes again by
>> > creating classes of projects / programs. I do not want this to be
>> > a distinction as we move forward.
>>
>> This I agree with. However I'll be surprised if a majority of the QA
>> team disagree on this point (logistic concerns with how to curate
>> this over time I can understand, but that just means they need to
>> interest some people in working on a manageable solution).
>
> +1 I don't think anyone disagrees with this. There is a logistical concern
> with the way the new proposed programs are going to be introduced. Quite
> frankly it's too varied and broad and I don't think we'll have enough people
> working on this space to help maintain it in the same manner.
>
> It's the same reason we worked on the plugin decomposition in the first place.
> You can easily look at the numbers of tests to see this:
>
> https://raw.githubusercontent.com/mtreinish/qa-in-the-open/lca2017/tests_per_proj.png
>
> Which shows things before the plugin decomposition (and before the big
> tent). Just because we said we'd support all the incubated and integrated
> projects in tempest didn't mean people were contributing and/or the tests
> were well maintained.
>
> But, as I said elsewhere in this thread this is a bit too early to have the
> conversation because the new interop programs don't actually exist yet.

Yes, there is no question about the goal of having a unified process for all.
As Jeremy and Matthew mentioned, the key issue here is manageability.

We know contributors in QA are shrinking cycle by cycle. I might be
overthinking it, but I thought about the QA team's situation when we have
around 30-40 trademark projects and all tests in the Tempest repo.
Personally I am ok to have tests in the Tempest repo or a dedicated
interop plugin repo which can be controlled by QA at some level, but we
need dedicated participation from interop + project liaisons (I am not
sure that worked well in the past, but with TC help it might work :)).

I can recall that the QA team has many patches on the plugin side to
improve or fix them, but many of them get no active reviews or much
attention from the project teams. I am afraid of the same situation for
trademark projects.

Maybe a broad direction on the trademark program and its scope can help
us imagine the quantity of programs and tests which the QA team would
need to maintain.

-gmann

>
> -Matt Treinish
>
>



Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-31 Thread Mike Bayer



On 05/30/2017 09:06 PM, Jay Pipes wrote:

On 05/30/2017 05:07 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2017-05-30 14:52:01 -0400:

Sorry for the delay in getting back on this... comments inline.

On 05/18/2017 06:13 PM, Adrian Turjak wrote:

Hello fellow OpenStackers,

For the last while I've been looking at options for multi-region
multi-master Keystone, as well as multi-master for other services I've
been developing and one thing that always came up was there aren't many
truly good options for a true multi-master backend.


Not sure whether you've looked into Galera? We had a geo-distributed
12-site Galera cluster servicing our Keystone assignment/identity
information WAN-replicated. Worked a charm for us at AT Much easier
to administer than master-slave replication topologies and the
performance (yes, even over WAN links) of the ws-rep replication was
excellent. And yes, I'm aware Galera doesn't have complete snapshot
isolation support, but for Keystone's workloads (heavy, heavy read, very
little write) it is indeed ideal.



This has not been my experience.

We had a 3 site, 9 node global cluster and it was _extremely_ sensitive
to latency. We'd lose even read ability whenever we had a latency storm
due to quorum problems.

Our sites were London, Dallas, and Sydney, so it was pretty common for
there to be latency between any of them.

I lost track of it after some reorgs, but I believe the solution was
to just have a single site 3-node galera for writes, and then use async
replication for reads. We even helped land patches in Keystone to allow
split read/write host configuration.
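The split read/write setup described above maps to oslo.db's `connection`/`slave_connection` pair. A hedged sketch, with hypothetical hostnames (the writer being the small synchronous cluster and the reader an asynchronously replicated follower):

```ini
[database]
# Writes go to the single-site synchronous (e.g. 3-node Galera) cluster
connection = mysql+pymysql://keystone:secret@galera-writer.example.com/keystone
# Reads may be served from an asynchronously replicated follower
slave_connection = mysql+pymysql://keystone:secret@async-reader.example.com/keystone
```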


Interesting, thanks for the info. Can I ask, were you using the Galera 
cluster for read-heavy data like Keystone identity/assignment storage? 
Or did you have write-heavy data mixed in (like Keystone's old UUID 
token storage...)?


I'd also throw in that there are lots of versions of Galera with different 
bugfixes / improvements as we go along, not to mention configuration 
settings. If Jay observes it working great on a distributed cluster 
and Clint observes it working terribly, it could be that these were not 
the same Galera versions being used.






It should be noted that CockroachDB's documentation specifically calls 
out that it is extremely sensitive to latency due to the way it measures 
clock skew... so it might not be suitable for WAN-separated clusters?


Best,
-jay





Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Matthew Treinish
On Wed, May 31, 2017 at 04:24:14PM +, Jeremy Stanley wrote:
> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
> [...]
> > Trademark programs are trademark programs - we should have a unified
> > process for all of them. Let's not make the same mistakes again by
> > creating classes of projects / programs. I do not want this to be
> > a distinction as we move forward.
> 
> This I agree with. However I'll be surprised if a majority of the QA
> team disagree on this point (logistic concerns with how to curate
> this over time I can understand, but that just means they need to
> interest some people in working on a manageable solution).

+1 I don't think anyone disagrees with this. There is a logistical concern
with the way the new proposed programs are going to be introduced. Quite
frankly it's too varied and broad and I don't think we'll have enough people
working on this space to help maintain it in the same manner.

It's the same reason we worked on the plugin decomposition in the first place.
You can easily look at the numbers of tests to see this:

https://raw.githubusercontent.com/mtreinish/qa-in-the-open/lca2017/tests_per_proj.png

Which shows things before the plugin decomposition (and before the big tent). 
Just because we said we'd support all the incubated and integrated projects in 
tempest didn't mean people were contributing and/or the tests were well 
maintained.

But, as I said elsewhere in this thread this is a bit too early to have the
conversation because the new interop programs don't actually exist yet.

-Matt Treinish




Re: [Openstack-operators] [cinder] Thoughts on cinder readiness

2017-05-31 Thread Mike Lowe
We have run ceph backed cinder from Liberty through Newton. With the exception 
of a libvirt 2.x bug that should now be fixed, cinder really hasn't caused us 
any problems.

Sent from my iPad

> On May 31, 2017, at 6:12 PM, Joshua Harlow  wrote:
> 
> Hi folks,
> 
> So I was having some back and forth internally about whether cinder is ready 
> for use and wanted to get other operators' thoughts on how their cinder 
> experiences have been going, any trials and tribulations.
> 
> For context, we are running on liberty (yes I know, working on getting that 
> to newer versions) and folks in godaddy are starting to use more and more 
> cinder (backed by ceph) and that got me thinking about asking the question 
> from operators (and devs) on what kind of readiness 'rating' (or whatever you 
> would want to call it) would people give cinder in liberty.
> 
> Some things that I was thinking about were concurrency rates, because I know 
> that's been a common issue that the cinder developers have been working 
> through (using tooz, and various other lock mechanisms and such).
> 
> Have other cinder operators seen concurrent operations (or conflicting 
> operations or ...) work better in newer releases (is there any metric/s 
> anyone has gathered about how things have gotten worse/better under scale for 
> cinder in various releases? particularly with regard to using ceph).
> 
> Thoughts?
> 
> It'd be interesting to capture (not just for my own usage), I think, because 
> such info helps the overall user and operator and dev community (and yes I 
> would expect various etherpads to have parts of this information, but it'd be 
> nice to have like a single place where other operators can specify how ready 
> they believe a project is for a given release and for a given configuration; 
> and ideally provide details/comments as to why they believe this).
> 
> -Josh
> 
> 
> 
> 




Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests (was: more tempest plugins)

2017-05-31 Thread Matthew Treinish
On Wed, May 31, 2017 at 03:45:52PM +, Jeremy Stanley wrote:
> On 2017-05-31 15:22:59 + (+), Jeremy Stanley wrote:
> > On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > it's news to me that they're considering reversing course. If the
> > > QA team isn't going to continue, we'll need to figure out what
> > > that means and potentially find another group to do it.
> > 
> > I wasn't there for the discussion, but it sounds likely to be a
> > mischaracterization. I'm going to assume it's not true (or much more
> > nuanced) at least until someone responds on behalf of the QA team.
> > This particular subthread is only going to go further into the weeds
> > until it is grounded in some authoritative details.
> 
> Apologies for replying to myself, but per discussion[*] with Chris
> in #openstack-dev I'm adjusting the subject header to make it more
> clear which particular line of speculation I consider weedy.
> 
> Also in that brief discussion, Graham made it slightly clearer that
> he was talking about pushback on the tempest repo getting tests for
> new trademark programs beyond "OpenStack Powered Platform,"
> "OpenStack Powered Compute" and "OpenStack Powered Object Storage."

TBH, it's a bit premature to have the discussion. These additional programs do
not exist yet, and there is a governance road block around this. Right now the
set of projects that can be used by defcore/the interop WG is limited to the 
projects in:

https://governance.openstack.org/tc/reference/tags/tc_approved-release.html

We had a forum session on it (I can't find the etherpad for the session) which
was pretty speculative because it was about planning the new programs. Part of
that discussion was around the feasibility of using tests in plugins and whether
that would be desirable. Personally, I was in favor of doing that for some of
the proposed programs because of the way they were organized it was a good fit.
This is because the proposed new programs were extra additions on top of the
base existing interop program. But it was hardly a definitive discussion.

We will have to have discussions about how we're going to actually implement
the additional programs when we start to create them, but that's not happening
yet.

-Matt Treinish




Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Matthew Treinish
On Wed, May 31, 2017 at 03:22:59PM +, Jeremy Stanley wrote:
> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > it's news to me that they're considering reversing course. If the
> > QA team isn't going to continue, we'll need to figure out what
> > that means and potentially find another group to do it.
> 
> I wasn't there for the discussion, but it sounds likely to be a
> mischaracterization. 
> I'm going to assume it's not true (or much more
> nuanced) at least until someone responds on behalf of the QA team.
> This particular subthread is only going to go further into the weeds
> until it is grounded in some authoritative details.

+1

I'm very confused by this whole thread, TBH. Was there a defcore test which was
blocked from tempest? Quite frankly, the amount of contribution to tempest
specifically for defcore tests is very minimal (at most 1 or 2 patches per
cycle). It seems like this whole concern is based on a misunderstanding
somewhere and is just going off in a weird direction.

-Matt Treinish





Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-31 Thread Michael Johnson
Hi Alex,

 

As you know I am a strong proponent of moving the docs into the project team 
repositories [1].

 

Personally I am in favor of pulling the Band-Aid off and doing option 1.  I 
think centralizing the documentation under one tree and consolidating the build 
into one job has benefits.  I can’t speak to the complexities of the 
documentation template(s?) and the sphinx configuration issues that might arise 
from this plan, but as a PTL/developer/doc writer I like the concept.  I 
fully understand this means work for us to move our API-REF, etc., but I think 
it is worth it.

 

As a secondary vote I am also ok with option 2.  I just think we might as well 
do a full consolidation.

 

I am not a fan of requiring project teams to setup separate repos for the docs, 
there is value to having them in tree for me.  So, I would vote against 3.

 

Michael

 

[1] https://review.openstack.org/#/c/439122/

 

From: Alexandra Settle [mailto:a.set...@outlook.com] 
Sent: Monday, May 22, 2017 2:39 AM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: 'openstack-d...@lists.openstack.org' 
Subject: [openstack-dev] [doc][ptls][all] Documentation publishing future

 

Hi everyone,

 

The documentation team is rapidly losing key contributors and core reviewers. 
We are not alone; this is happening across the board. It is making things 
harder, but not impossible.

Since our inception in 2010, we’ve been climbing higher and higher trying to 
achieve the best documentation we could, and uphold our high standards. This is 
something to be incredibly proud of.

 

However, we now need to take a step back and realise that the amount of work we 
are attempting to maintain is now out of reach for the team size that we have. 
At the moment we have 13 cores, of whom none are full time contributors or 
reviewers. This includes myself.

 

Until this point, the documentation team has owned several manuals that include 
content related to multiple projects, including an installation guide, admin 
guide, configuration guide, networking guide, and security guide. Because the 
team no longer has the resources to own that content, we want to invert the 
relationship between the doc team and project teams, so that we become liaisons 
to help with maintenance instead of asking for project teams to provide 
liaisons to help with content. As a part of that change, we plan to move the 
existing content out of the central manuals repository, into repositories owned 
by the appropriate project teams. Project teams will then own the content and 
the documentation team will assist by managing the build tools, helping with 
writing guidelines and style, but not writing the bulk of the text.

 

We currently have the infrastructure set up to empower project teams to manage 
their own documentation in their own tree, and many do. As part of this change, 
the rest of the existing content from the install guide and admin guide will 
also move into project-owned repositories. We have a few options for how to 
implement the move, and that's where we need feedback now.

 

1. We could combine all of the documentation builds, so that each project has a 
single doc/source directory that includes developer, contributor, and user 
documentation. This option would reduce the number of build jobs we have to 
run, and cut down on the number of separate sphinx configurations in each 
repository. It would completely change the way we publish the results, though, 
and we would need to set up redirects from all of the existing locations to the 
new locations and move all of the existing documentation under the new 
structure.

 

2. We could retain the existing trees for developer and API docs, and add a new 
one for "user" documentation. The installation guide, configuration guide, and 
admin guide would move here for all projects. Neutron's user documentation 
would include the current networking guide as well. This option would add 1 new 
build to each repository, but would allow us to easily roll out the change with 
less disruption in the way the site is organized and published, so there would 
be less work in the short term.

 

3. We could do option 2, but use a separate repository for the new 
user-oriented documentation. This would allow project teams to delegate 
management of the documentation to a separate review project-sub-team, but 
would complicate the process of landing code and documentation updates together 
so that the docs are always up to date.

 

Personally, I think options 2 and 3 are more realistic for now. It does mean 
that an extra build would have to be maintained, but it retains that key 
differentiator between user and developer documentation and involves 
fewer changes to existing published content and build jobs. I definitely think 
option 1 is feasible, and would be happy to make it work if the community 
prefers it.

[Openstack-operators] [nova] Cinder cross_az_attach=False changes/fixes

2017-05-31 Thread Matt Riedemann

This is a request for any operators out there that configure nova to set:

[cinder]
cross_az_attach=False

To check out these two bug fixes:

1. https://review.openstack.org/#/c/366724/

This is a case where nova is creating the volume during boot from volume 
and providing an AZ to cinder during the volume create request. Today we 
just pass the instance.availability_zone which is None if the instance 
was created without an AZ set. It's unclear to me if that causes the 
volume creation to fail (someone in IRC was showing the volume going 
into ERROR state while Nova was waiting for it to be available), but I 
think it will cause the later attach to fail here [1] because the 
instance AZ (defaults to None) and volume AZ (defaults to nova) may not 
match. I'm still looking for more details on the actual failure in that 
one though.


The proposed fix in this case is to pass the AZ associated with any host 
aggregate that the instance is in.


2. https://review.openstack.org/#/c/469675/

This is similar, but rather than checking the AZ when we're on the 
compute and the instance has a host, we're in the API and doing a boot 
from volume where an existing volume is provided during server create. 
By default, the volume's AZ is going to be 'nova'. The code doing the 
check here is getting the AZ for the instance, and since the instance 
isn't on a host yet, it's not in any aggregate, so the only AZ we can 
get is from the server create request itself. If an AZ isn't provided 
during the server create request, then we're comparing 
instance.availability_zone (None) to volume['availability_zone'] 
("nova") and that results in a 400.


My proposed fix is, in the case of BFV checks from the API, to default 
the AZ if one wasn't requested when comparing against the volume. By 
default this is going to compare "nova" for nova and "nova" for cinder, 
since CONF.default_availability_zone is "nova" by default in both projects.
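For illustration, the check being fixed can be sketched roughly like this (a hypothetical simplification, not the actual nova code; names like `az_check_passes` and `DEFAULT_AZ` are made up):

```python
# Rough illustration of the BFV availability-zone check described above:
# when no AZ was requested for the server, fall back to the configured
# default before comparing with the volume's AZ.

DEFAULT_AZ = "nova"  # CONF.default_availability_zone defaults to "nova"

def az_check_passes(instance_az, volume_az, cross_az_attach=False):
    """Return True if the attach should be allowed."""
    if cross_az_attach:
        return True  # attaching across AZs is allowed, nothing to compare
    # The proposed fix: default the instance AZ instead of comparing None
    instance_az = instance_az or DEFAULT_AZ
    return instance_az == volume_az

# Without the fix, instance_az would stay None and this case would 400:
print(az_check_passes(None, "nova"))   # True
print(az_check_passes("az1", "nova"))  # False, rejected with a 400
```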


--

I'm requesting help from any operators that are setting 
cross_az_attach=False because I have to imagine your users have run into 
this and you're patching around it somehow, so I'd like input on how you 
or your users are dealing with this.


I'm also trying to recreate these in upstream CI [2] which I was already 
able to do with the 2nd bug.


Having said all of this, I really hate cross_az_attach as it's 
config-driven API behavior which is not interoperable across clouds. 
Long-term I'd really love to deprecate this option but we need a 
replacement first, and I'm hoping placement with compute/volume resource 
providers in a shared aggregate can maybe make that happen.


[1] 
https://github.com/openstack/nova/blob/f278784ccb06e16ee12a42a585c5615abe65edfe/nova/virt/block_device.py#L368

[2] https://review.openstack.org/#/c/467674/

--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-05-31 Thread Bhatia, Manjeet S


> -Original Message-
> From: Akihiro Motoki [mailto:amot...@gmail.com]
> Sent: Wednesday, May 31, 2017 6:13 AM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split
> out from horizon
> 
> Hi all,
> 
> As discussed last month [1], we agree that each neutron-related dashboard has
> its own repository.
> I would like to move this forward on FWaaS and VPNaaS as the horizon team
> plans to split them out as horizon plugins.
> 
> A couple of questions hit me.
> 
> (1) launchpad project
> Do we create a new launchpad project for each dashboard?
> At the moment, FWaaS and VPNaaS use 'neutron' for their bug tracking for
> historical reasons, which sometimes causes confusion. There are two choices:
> one is to accept dashboard bugs in the 'neutron' launchpad, and the other is
> to have a separate launchpad project.

+1 for separate launchpad projects; we can have one for each dashboard, and
each can cover all the issues related to it.

> My vote is to create a separate launchpad project.
> It allows users to search and file bugs easily.
> 
> (2) repository name
> 
> Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
> names for you?
> Most horizon related projects use -dashboard or -ui as their repo
> names.
> I personally prefer -dashboard as it is consistent with the OpenStack
> dashboard (the official name of horizon). On the other hand, I know some folks
> prefer -ui as the name is shorter.
> Any preference?

Both look good, but -ui is shorter, so +1 for -ui.

> (3) governance
> neutron-fwaas project is under the neutron project.
> Does it sound okay to have neutron-fwaas-dashboard under the neutron
> project?
> This is what the neutron team does for neutron-lbaas-dashboard before and
> this model is adopted in most horizon plugins (like trove, sahara or others).

IMO, we can have it under the neutron project for now, but for simplicity and
ease of maintenance I'd suggest branching it out from neutron.

> (4) initial core team
> 
> My thought is to have neutron-fwaas/vpnaas-core and horizon-core as the
> initial core team.
> The release team and the stable team follow what we have for neutron-
> fwaas/vpnaas projects.
> Sounds reasonable?
> 
> 
> Finally, I have already prepared the split-out versions of the FWaaS and VPNaaS
> dashboards in my personal github repos.
> Once we agree in the questions above, I will create the repositories under
> git.openstack.org.
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-
> April/thread.html#115200
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] deploy software on Openstack controller on the Overcloud

2017-05-31 Thread Emilien Macchi
On Wed, May 31, 2017 at 6:29 PM, Dnyaneshwar Pawar
 wrote:
> Hi Alex,
>
> Currently we have puppet modules[0] to configure our software which has
> components on Openstack Controller, Cinder node and Nova node.
> As per document[1] we successfully tried out role specific configuration[2].
>
> So, does it mean that if we have an overcloud image with our packages
> inbuilt and we call our configuration scripts using role specific
> configuration, we may not need puppet modules[0] ? Is it acceptable
> deployment method?

So running a binary from Puppet to do configuration management is
not something we recommend.
Puppet is good at managing configuration files and services, for
example. In your module, you just manage a file and execute it. The
problem with that workflow is that we have no idea what happens in the backend.
We also have no way to make the Puppet run idempotent, which is an
important aspect of TripleO.

Please tell us what the binary does, and maybe we can convert the
tasks into Puppet resources that could be managed by your module. Also
group the resources by class (service), so we can plug them into the
composable services in TripleO.

Thanks,

> [0] https://github.com/abhishek-kane/puppet-veritas-hyperscale
> [1]
> https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html
> [2] http://paste.openstack.org/show/66/
>
> Thanks,
> Dnyaneshwar
>
> On 5/30/17, 6:52 PM, "Alex Schultz"  wrote:
>
> On Mon, May 29, 2017 at 5:05 AM, Dnyaneshwar Pawar
>  wrote:
>
> Hi,
>
> I am tying to deploy a software on openstack controller on the overcloud.
> One way to do this is by modifying ‘overcloud image’ so that all packages of
> our software are added to image and then run overcloud deploy.
> Other way is to write heat template and puppet module which will deploy the
> required packages.
>
> Question: Which of above two approaches is better?
>
> Note: Configuration part of the software will be done via separate heat
> template and puppet module.
>
>
> Usually you do both.  Depending on how the end user is expected to
> deploy, if they are using the TripleoPackages service[0] in their
> role, the puppet installation of the package won't actually work (we
> override the package provider to noop) so it needs to be in the
> images.  That being said, usually there is also a bit of puppet that
> needs to be written to configure the end service and as a best
> practice (and for development purposes), it's a good idea to also
> capture the package in the manifest.
>
> Thanks,
> -Alex
>
> [0]
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml
>
>
> Thanks and Regards,
> Dnyaneshwar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [cinder] Thoughts on cinder readiness

2017-05-31 Thread Joshua Harlow

Hi folks,

So I was having some back and forth internally about whether cinder is ready 
for usage, and wanted to get other operators' thoughts on how their cinder 
experiences have been going, any trials and tribulations.


For context, we are running on liberty (yes I know, working on getting 
that to newer versions) and folks in GoDaddy are starting to use more 
and more cinder (backed by ceph). That got me thinking about asking 
operators (and devs) what kind of readiness 'rating' (or whatever you 
would want to call it) people would give cinder in liberty.


Some things I was thinking about were concurrency rates, because I 
know that's been a common issue that the cinder developers have been 
working through (using tooz, and various other lock mechanisms and such).


Have other cinder operators seen concurrent operations (or conflicting 
operations, or ...) work better in newer releases? Are there any metrics 
anyone has gathered about how things have gotten worse/better under 
scale for cinder in various releases, particularly with regard to using ceph?


Thoughts?

It'd be interesting to capture this (not just for my own usage), I think, 
because such info helps the overall user, operator, and dev community 
(and yes, I would expect various etherpads to have parts of this 
information, but it'd be nice to have a single place where other 
operators can specify how ready they believe a project is for a given 
release and a given configuration, and ideally provide 
details/comments as to why they believe this).


-Josh




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [congress] policy monitoring panel design

2017-05-31 Thread Eric K
Here's a quick & rough mock-up I put up based on the discussions at Atlanta
PTG.

https://wireframepro.mockflow.com/view/congress-policy-monitor

Anyone can view and comment. To make changes, please sign-up and email me
to get access.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-31 Thread Lance Bragstad
I took a stab at working through the API a bit more and I've captured that
information in the spec [0]. A rendered version is available, too [1].

[0] https://review.openstack.org/#/c/464763/
[1]
http://docs-draft.openstack.org/63/464763/12/check/gate-keystone-specs-docs-ubuntu-xenial/1dbeb65//doc/build/html/specs/keystone/ongoing/global-roles.html

On Wed, May 31, 2017 at 9:10 AM, Lance Bragstad  wrote:

>
>
> On Fri, May 26, 2017 at 10:21 AM, Sean Dague  wrote:
>
>> On 05/26/2017 10:44 AM, Lance Bragstad wrote:
>> 
>> > Interesting - I guess the way I was thinking about it was on a per-token
>> > basis, since today you can't have a single token represent multiple
>> > scopes. Would it be unreasonable to have oslo.context build this
>> > information based on multiple tokens from the same user, or is that a
>> > bad idea?
>>
>> No service consumer is interacting with Tokens. That's all been
>> abstracted away. What the code in the consumers is interested in is
>> the context representation.
>>
>> Which is good, because then the important parts are figuring out the
>> right context interface to consume. And the right Keystone front end to
>> be explicit about what was intended by the operator "make jane an admin
>> on compute in region 1".
>>
>> And the middle can be whatever works best on the Keystone side. As long
>> as the details of that aren't leaked out, it can also be refactored in
>> the future by having keystonemiddleware+oslo.context translate to the
>> known interface.
>>
>
> Ok - I think that makes sense. So if I copy/paste your example from
> earlier and modify it a bit ( s/is_admin/global/)::
>
> {
>"user": "me!",
>"global": True,
>"roles": ["admin", "auditor"],
>
> }
>
> Or
>
> {
>"user": "me!",
>"global": True,
>"roles": ["reader"],
>
> }
>
> That might be one way we can represent global roles through 
> oslo.context/keystonemiddleware.
> The library would be on the hook for maintaining the mapping of token scope
> to context scope, which makes sense:
>
> if token['is_global']:
> context.global_scope = True  # note: 'global' itself is a Python keyword
> elif token['domain_scoped']:
> # domain scoping?
> else:
> # handle project scoping
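For illustration, a runnable (and entirely hypothetical) version of that token-to-context mapping might look like the following; the token layout and the context attributes are stand-ins, not the real oslo.context or keystonemiddleware API. Since `global` is a Python keyword, the attribute is named `global_scope` here.

```python
# Hypothetical sketch only: neither the token dict layout nor the
# context attributes below are the real oslo.context API.

class RequestContext:
    def __init__(self, user, roles):
        self.user = user
        self.roles = roles
        self.global_scope = False  # 'global' is a keyword, so use another name
        self.domain_id = None
        self.project_id = None

def context_from_token(token):
    ctx = RequestContext(token["user"], token.get("roles", []))
    if token.get("is_global"):
        ctx.global_scope = True
    elif token.get("domain_id"):
        ctx.domain_id = token["domain_id"]        # domain scoping
    else:
        ctx.project_id = token.get("project_id")  # project scoping
    return ctx

ctx = context_from_token({"user": "me!", "roles": ["admin"], "is_global": True})
print(ctx.global_scope, ctx.roles)  # True ['admin']
```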
>
> I need to go dig into oslo.context a bit more to get familiar with how
> this works on the project level. Because if I understand correctly,
> oslo.context currently doesn't relay global scope and that will be a
> required thing to get done before this work is useful, regardless of going
> with option #1, #2, and especially #3.
>
>
>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-05-31 Thread Amrith Kumar
I agree, this would be a good thing to do and something which will definitely 
improve the overall ease of upgrades. We already have two Queens goals though; 
do we want to add a third?

-amrith

P.S. I'd happily volunteer to do this, with a real end-user benefit, over the 
current tempest shuffle goal. My 2c worth.

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: Mike [mailto:thin...@gmail.com]
> Sent: Wednesday, May 31, 2017 4:39 PM
> To: OpenStack Development Mailing List (not for usage questions)  d...@lists.openstack.org>
> Cc: Sean Dague 
> Subject: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off
> Paste
> 
> Hello everyone,
> 
> As part of our community wide goals process [1], we will discuss the
> potential goals that came out of the forum session in Boston [2].
> These discussions will aid the TC in making a final decision of what goals
> the community will work towards in the Queens release.
> 
> For this thread we will be discussing migrating off paste. This was suggested
> by Sean Dague. I’m not sure if he’s leading this effort, but here’s an excerpt
> from him to get us started:
> 
> A migration path off of paste would be a huge win. Paste deploy is
> unmaintained (as noted in the etherpad) and being in etc means it's another
> piece of gratuitous state that makes upgrading harder than it really should
> be. This is one of those that is going to require someone to commit to
> working out that migration path up front. But it would be a pretty good chunk
> of debt and upgrade ease.
> 
> 
> [1] - https://governance.openstack.org/tc/goals/index.html
> [2] - https://etherpad.openstack.org/p/BOS-forum-Queens-Goals
> 
> —
> Mike Perez
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptl][all] Potential Queens Goal: Continuing Python 3.5+ Support

2017-05-31 Thread Mike
Hello everyone,

For this thread we will be discussing continuing Python 3.5+ support.
Emilien, who has been helping coordinate our efforts here during
Pike, can probably add more, but glancing at our goals document
[1] it looks like a lot of projects' statuses are still unanswered;
mostly, though, we have python 3.5 unit test voting jobs done thanks to this
effort! I have no idea how to use the graphite dashboard, but here’s a
graph [2] showing success vs failure for python-35 jobs across all
projects.

Glancing at that, I think it’s safe to say we can start discussions on
moving forward with having our functional tests support python 3.5.
Some projects are already ahead on this. Let the discussions begin so
we can aid the TC in deciding our community-wide goals
for Queens [3].


[1] - https://governance.openstack.org/tc/goals/pike/python35.html
[2] - 
http://graphite.openstack.org/render/?width=1273=554&_salt=1496261911.56=00%3A00_20170401=23%3A59_20170531=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.SUCCESS)=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.FAILURE)
[3] - https://governance.openstack.org/tc/goals/index.html

—
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-05-31 Thread Mike
Hello everyone,

As part of our community wide goals process [1], we will discuss the
potential goals that came out of the forum session in Boston [2].
These discussions will aid the TC in making a final decision of what
goals the community will work towards in the Queens release.

For this thread we will be discussing migrating off paste. This was
suggested by Sean Dague. I’m not sure if he’s leading this effort, but
here’s an excerpt from him to get us started:

A migration path off of paste would be a huge win. Paste deploy is
unmaintained (as noted in the etherpad) and being in etc means it's
another piece of gratuitous state that makes upgrading harder than it
really should be. This is one of those that is going to require
someone to commit to working out that migration path up front. But it
would be a pretty good chunk of debt and upgrade ease.
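To make the goal concrete, here is a hedged sketch of what "migrating off paste" can mean in practice: composing the WSGI pipeline directly in Python instead of in an api-paste.ini file. All names below are invented for illustration.

```python
# Before (conceptually), paste.deploy assembled the pipeline from a
# state file in /etc, e.g.:
#   app = paste.deploy.loadapp('config:/etc/foo/api-paste.ini', name='main')
# After, the pipeline is plain Python: explicit, versioned with the
# code, and with nothing left in /etc to migrate on upgrade.

def api_app(environ, start_response):
    """Toy WSGI application standing in for a real API endpoint."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

class AuthMiddleware:
    """Stand-in for e.g. an auth filter from the old paste pipeline."""
    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        # A real middleware would validate a token here.
        return self.application(environ, start_response)

def load_app():
    # The whole former [pipeline:main] section, now one line of code.
    return AuthMiddleware(api_app)

application = load_app()
```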


[1] - https://governance.openstack.org/tc/goals/index.html
[2] - https://etherpad.openstack.org/p/BOS-forum-Queens-Goals

—
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Amrith Kumar
I've been following this discussion from a safe distance on the mailing
list, and I just caught up on the IRC conversation as well.

It was my understanding that if a project had tests that were going to be
run for the purposes of determination whether something met the standards of
"OpenStack Powered Databases" (TBD), then those tests would reside in the
tempest repository. Any other tempest tests that the project would have for
any purpose (or purposes) should, I understand, be moved into an independent
repository per the Queens community goal[1].

I'm particularly interested in having this matter decided in an email thread
such as this one because of the comment in IRC about a verbal discussion at
the forum not being considered to be a 'binding decision' [2]. It had been
my understanding that the forum and the PTG were places where 'decisions'
were made, but if that is not the case, I'm hoping that one will be
documented as part of this thread and formalized (maybe) with a resolution
by the TC or the Interop/DefCore body?

-amrith


[1]
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-31.log.html#t2017-05-31T15:39:33

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Wednesday, May 31, 2017 12:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
 d...@lists.openstack.org>
> Subject: Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark
tests
> 
> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
> [...]
> > Trademark programs are trademark programs - we should have a unified
> > process for all of them. Let's not make the same mistakes again by
> > creating classes of projects / programs. I do not want this to be a
> > distinction as we move forward.
> 
> This I agree with. However I'll be surprised if a majority of the QA team
> disagree on this point (logistic concerns with how to curate this over
time I
> can understand, but that just means they need to interest some people in
> working on a manageable solution).
> --
> Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-31 Thread Farr, Kaitlin M.
Lee, a few thoughts on your previous email.  Many of the details I think you
already know, but I'm clarifying for posterity's sake:

> However the only supported disk encryption formats on the front-end at
> present are plain (dm-crypt) and LUKS, neither of which use the supplied
> key to directly encrypt or decrypt data. Plain derives a fixed length
> master key from the provided key / passphrase and LUKS uses PBKDF2 to
> derive a key from the key / passphrase that unlocks a separate master
> key.

This is true.  When we retrieve the key from Barbican, we don't use the key 
bytes
themselves to encrypt the volume. We format the key as a string and use it as a
passphrase.  You can see this for both the LUKS encryptor [2] and the cryptsetup
encryptor [3].  We pass in a "--key-file=-" parameter (which indicates the 
"keyfile"
should be read from stdin) and then pass in the formatted key.  But according to
the documentation, "keyfile" is a misnomer.  I think it would be clearer if 
dm-crypt
renamed it to something like "passwordfile" because it's still used by dm-crypt 
and
luks as a passphrase [4].
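As a hedged sketch of that point (illustrative names only, not the actual os-brick code): the symmetric key's raw bytes are encoded into a printable string, and that string is what cryptsetup/LUKS end up treating as the passphrase.

```python
import binascii
import os

def key_to_passphrase(key_bytes):
    # Hex-encode the raw key material; the resulting string is what gets
    # written to cryptsetup's stdin via --key-file=- and is therefore
    # used as a passphrase, not as the master key itself.
    return binascii.hexlify(key_bytes).decode("utf-8")

key = os.urandom(32)  # stand-in for a 256-bit key fetched from Barbican
passphrase = key_to_passphrase(key)
print(len(passphrase))  # 64 hex characters
```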

> I also can't find any evidence of these keys being used directly on the
> backend for any direct encryption of volumes within c-vol. Happy to be
> corrected here if there are out-of-tree drivers etc that do this.

There are two options for control_location for the volume encryption type:
'front-end' (nova) and 'back-end' (cinder) [5].  'front-end' is the default, 
and I
know where the code logic is that sets up the encryptors for the front-end,
now in os-brick [1].  But I cannot find any logic that handles the case for 
'back-end'.
I would think the 'back-end' logic would be found in Cinder, but I do not see 
it.
I am under the impression that it was just a placeholder for future 
functionality.

> IMHO for now we are better off storing a secret passphrase in Barbican
> for use with these encrypted volumes, would there be any objections to
> this? Are there actual plans to use a symmetric key stored in Barbican
> to directly encrypt and decrypt volumes?

It sounds like you're thinking that using a key manager object with the type
"passphrase" is closer to how the encryptors are using the bytes than using the
"symmetric key" type, but if you switch over to using passphrases,
where are you going to generate the random bytes?  Would you prefer the
user to input their own passphrase?  The benefit of continuing to use symmetric
keys as "passphrases" is that the key manager can randomly generate the bytes.
Key generation is a standard feature of key managers, but password generation
is not.

On a side note, I thought the latest QEMU encryption feature was supposed to
have support for passing in key material directly to the encryptors?  Perhaps
this is not true and I am misremembering.

Hopefully that helps,

Kaitlin

1. 
https://github.com/openstack/os-brick/blob/6cf9b1cd689f70a2c50c0fa83a9a9f7c502712a1/os_brick/encryptors/__init__.py#L62
2. 
https://github.com/openstack/os-brick/blob/6cf9b1cd689f70a2c50c0fa83a9a9f7c502712a1/os_brick/encryptors/luks.py#L63-L87
3. 
https://github.com/openstack/os-brick/blob/6cf9b1cd689f70a2c50c0fa83a9a9f7c502712a1/os_brick/encryptors/cryptsetup.py#L104-L129
4. https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption#Keyfiles
5. https://docs.openstack.org/admin-guide/dashboard-manage-volumes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mogan] Architecture diagrams

2017-05-31 Thread Joshua Harlow

Hi mogan folks,

I was doing some source code examination of mogan and it piqued my 
interest in how it all is connected together. In part I see there is a 
state machine, some taskflow usage, and some wsgi usage that look like 
they are inspired(?) by various other projects.


That got me wondering if there are any decent diagrams or documents that 
explain how it all connects together, and I thought I might as well ask 
and see if there are any (50/50 chances? ha).


I am especially interested in the state machine, taskflow and such (no 
tooz seems to be there) and how they are used (if they are, or are going 
to be used); I guess in part because I know the most about those 
libraries/components :)
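For anyone else digging in, the general pattern such services use for their state machines can be illustrated in plain Python (this is a generic sketch, not mogan's actual code): valid (state, event) transitions are declared up front and anything else is rejected.

```python
# Generic declared-transitions state machine, for illustration only.
TRANSITIONS = {
    ("building", "done"): "active",
    ("building", "error"): "failed",
    ("active", "delete"): "deleting",
    ("deleting", "done"): "deleted",
}

class Machine:
    def __init__(self, start="building"):
        self.state = start

    def process_event(self, event):
        try:
            self.state = TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(
                "event %r is not valid in state %r" % (event, self.state))
        return self.state

m = Machine()
print(m.process_event("done"))    # active
print(m.process_event("delete"))  # deleting
```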


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] problem with nova placement after update of cloud from Mitaka to Ocata

2017-05-31 Thread Jay Pipes

On 05/31/2017 05:52 AM, federica fanzago wrote:

Hello operators,
we have a problem with placement after updating our cloud from 
Mitaka to the Ocata release.


We started from a Mitaka cloud and followed these steps: updated 
the cloud controller from Mitaka to Newton, ran the dbsync, then updated from 
Newton to Ocata, adding the nova_cell0 db at this step, and ran the 
dbsync again. Then we updated the computes directly from Mitaka to Ocata.


With the update to Ocata we have added the placement section in 
nova.conf, configured the related endpoint and installed the package 
openstack-nova-placement-api (placement wasn't enabled in newton)


Verifying the operation, the command nova-status upgrade check fails 
with the error


Error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386, in check
    result = func(self)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201, in _check_placement
    versions = self._placement_get("/")
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189, in _placement_get
    return client.get(path, endpoint_filter=ks_filter).json()
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 758, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
    return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 655, in request
    raise exceptions.from_response(resp, method, url)
ServiceUnavailable: Service Unavailable (HTTP 503)

Do you have suggestions about how to debug the problem?


Did you ensure that you created a service entry, endpoint entry, and 
service user for Placement in Keystone?


See here:

https://ask.openstack.org/en/question/102256/how-to-configure-placement-service-for-compute-node-on-ocata/

Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Nova] [Scheduler]

2017-05-31 Thread Jay Pipes

On 05/31/2017 09:49 AM, Narendra Pal Singh wrote:

Hello,

Let's say I have multiple compute nodes: Pool-A has 5 nodes and Pool-B 
has 4 nodes, categorized based on some property.
Now there is a request for a new instance, and I always want this instance to be 
placed on a compute node in Pool-A.

What would be best approach to address this situation?


FYI, this question is best asked on openstack@ ML. openstack-dev@ is for 
development questions.


You can use the AggregateInstanceExtraSpecsFilter and aggregate metadata 
to accomplish your needs.


Read more about that here:

https://docs.hpcloud.com/hos-3.x/helion/operations/compute/creating_aggregates.html
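As a concrete sketch of Jay's suggestion: the aggregate name, metadata key "pool", and flavor name below are invented for illustration, and the approach assumes AggregateInstanceExtraSpecsFilter is enabled in the scheduler's filter list.

```shell
# Create an aggregate for Pool-A and tag it with a custom property.
nova aggregate-create pool-a
nova aggregate-set-metadata pool-a pool=a

# Add the five Pool-A compute nodes (repeat for each host).
nova aggregate-add-host pool-a compute1

# Tie a flavor to that aggregate metadata; instances booted with this
# flavor will only be scheduled onto hosts in aggregates where pool=a.
nova flavor-key m1.pool-a set aggregate_instance_extra_specs:pool=a
```

Requests that should land in Pool-B would use a second aggregate and a second flavor keyed to pool=b.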

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Some error occurred when I build the image.

2017-05-31 Thread Clark Boylan
On Mon, May 22, 2017, at 06:38 PM, itri_neut...@itri.org.tw wrote:
> Hi Paul,
> 
> Thank you for your prompt reply.
> According to your suggestion, I have imported the
> openstack-infra/project-example and cloned the latest nodepool elements
> and script today.
> (follow the instruction of
> https://docs.openstack.org/infra/openstackci/third_party_ci.html#start-nodepool)
> 
> However, the log (/var/log/nodepool/image/image.log) shown the same error
> messages
> 
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: File
> "/usr/local/bin/element-info", line 11, in 
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: sys.exit(main())
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: File
> "/usr/local/lib/python2.7/dist-packages/diskimage_builder/element_dependencies.py",
> line 337, in main
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: elements =
> _get_elements(args.elements)
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: File
> "/usr/local/lib/python2.7/dist-packages/diskimage_builder/element_dependencies.py",
> line 248, in _get_elements
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: return
> _expand_element_dependencies(elements, all_elements)
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: File
> "/usr/local/lib/python2.7/dist-packages/diskimage_builder/element_dependencies.py",
> line 148, in _expand_element_dependencies
> 2017-05-23 09:14:15,565 INFO nodepool.image.build.dpc: raise
> MissingElementException("Element '%s' not found" % element)
> 2017-05-23 09:14:15,566 INFO nodepool.image.build.dpc:
> diskimage_builder.element_dependencies.MissingElementException: Element
> 'puppet' not found
> 2017-05-23 09:14:15,569 INFO nodepool.image.build.dpc:
> 
> 
> and failed to build the image.
> What should I do?

You will need to remove the puppet element from the elements list under
the diskimages configuration in /etc/nodepool/nodepool.yaml. Updating
the repos got you the latest elements but may not have updated your site
local nodepool.yaml as I expect that to be local config (it has info in
it like your clouds and what images to use, etc which is not common
config).
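To make Clark's suggestion concrete, the edit is in the diskimages section of the site-local config. The image name and remaining elements below are examples only; keep whatever your nodepool.yaml already lists and delete just the entry that no longer exists.

```yaml
# /etc/nodepool/nodepool.yaml (sketch)
diskimages:
  - name: ubuntu-xenial
    elements:
      - ubuntu-minimal
      - vm
      - simple-init
      - nodepool-base
      # 'puppet' removed here: the element was dropped from the
      # updated project-config/dib repos, which is what triggers
      # MissingElementException: Element 'puppet' not found
```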

Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Nodepool drivers

2017-05-31 Thread Clark Boylan
On Sun, May 28, 2017, at 07:39 PM, Tristan Cacqueray wrote:
> Hi,
> 
> With the nodepool-drivers[0] spec approved, I started to hack a quick
> implementation[1]. Well I am not very familiar with the
> nodepool/zookeeper
> architecture, thus this implementation may very well be missing important
> bits... The primary goal is to be able to run ZuulV3 with static nodes,
> comments and feedbacks are most welcome.
> 
> Moreover, assuming this isn't too off-track, I'd like to propose an
> OpenContainer and a libvirt driver to diversify Test environment.
> 
> Thanks in advance,
> -Tristan
> 
> [0]:
> http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-drivers.html
> [1]: https://review.openstack.org/#/q/topic:nodepool-drivers

I've briefly looked through the stack and left some comments on changes.
Would help if the nodepool-drivers topic is on all the changes (I didn't
update the topics as I wasn't sure if this was intentional or not).

Overall this looks good. There are a few things that come up though. I
think we need to clearly define what the handlers and providers do so
that it is clear in the implementation. Right now it appears that they
share a lot of responsibility and you end up with different drivers
splitting those two classes differently.

We also need to clearly communicate the behavior of different drivers
when it comes to running multiple launchers. Do we expect that each
launcher is a standalone spof and manages resources completely
independent of the other launchers or do we want to allow for
coordination between launchers via zookeeper so that we have redundancy.
I don't think we need to solve redundancy right away, we just need to be
clear to users (and devs modifying drivers) what the expectation is for
each driver.

Hope this helps,
Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul V3: Behavior of change related requirements on push like events

2017-05-31 Thread Clark Boylan


On Tue, May 30, 2017, at 01:45 PM, James E. Blair wrote:
> Jeremy Stanley  writes:
> 
> > On 2017-05-30 12:53:15 -0700 (-0700), Jesse Keating wrote:
[...]
> >> Personally, my opinions are that to avoid confusion, change type
> >> requirements should always fail on push type events. This means
> >> open, current-patchset, approvals, reviews, labels, and maybe
> >> status requirements would all fail to match a pipeline for a push
> >> type event. It's the least ambiguous, and promotes the practice of
> >> creating a separate pipeline for push like events from change like
> >> events. I welcome other opinions!
> >
> > This seems like a reasonable conclusion to me.
> 
> Agreed -- we haven't run into this condition because our pipelines are
> naturally segregated into change or ref related workflows.  I think
> that's probably going to be the case for most folks, so codifying this
> seems reasonable.  However, I could simply be failing to imagine a
> pipeline that works with both.

+1. I want to say that Gozer ran into some weird behavior at one point
where they were trying to enforce change details on a ref-updated
pipeline and that caused problems. The biggest issue was lack of clear
logging of the issue. We should try to clearly present this mismatch of
info to the user through the logs at the very least to avoid that.

Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] [gnocchi] regional incoming storage targets

2017-05-31 Thread gordon chung
here's a scenario: i'd like aggregates stored centrally like gnocchi does 
currently with the ceph/swift/s3 drivers, but i want to collect data from 
many different regions spanning the globe. they can all hit the same 
incoming storage but:
- that will be a hell of a lot of load
- a single incoming storage locality might not be optimal for all regions, 
making writes take longer than needed for what is effectively a 'cache' 
storage
- sending an HTTP POST with a JSON payload probably uses more bandwidth 
than the binary serialised format gnocchi uses internally.

i'm thinking it'd be good to support the ability to have each region store 
data 'locally' to minimise latency and then have regional metricd agents 
aggregate into a central target. this is technically possible right now 
by just declaring regional (write-only?) APIs with same storage target 
and indexer targets but a different incoming target per region. the 
problem i think is how to handle coordination_url. it cannot be the same 
coordination_url since that would cause sack locks to overlap. if 
they're different, then i think there's an issue with having a 
centralised API (in addition to regional APIs). specifically, the 
centralised API cannot 'refresh'.
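A per-region split of that kind might look like the gnocchi.conf sketch below. This is a hypothetical illustration: the option names are believed to match gnocchi 4.x ([incoming] with a redis driver alongside shared [storage]/[indexer] targets), but the URLs are placeholders and the whole layout should be verified against the release you run.

```ini
# Shared across all regions: one indexer, one aggregate store.
[indexer]
url = postgresql://gnocchi:secret@central-db/gnocchi

[storage]
driver = ceph
ceph_pool = gnocchi
ceph_username = gnocchi

# Region-local: each region points its API/metricd at its own
# incoming target to keep measure writes low-latency.
[incoming]
driver = redis
redis_url = redis://redis.region-1.local:6379
```

The open question from the message above still applies: the coordination_url handling per region determines whether sack locks overlap or a central API can refresh.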

i'm not entirely sure this is an issue, just thought i'd raise it to 
discuss.

regardless, thoughts on maybe writing up deployment strategies like 
this? or i could make everyone who reads this erase their minds and use 
it for 'consulting' fees :P

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-31 Thread Jeremy Stanley
On 2017-05-19 09:22:07 -0400 (-0400), Sean Dague wrote:
[...]
> the project,

I hosted the onboarding session for the Infrastructure team. For
various logistical reasons discussed on the planning thread before
the PTG, it was a shared session with many other "horizontal" teams
(QA, Requirements, Stable, Release). We carved the 90-minute block
up into individual subsessions for each team, though due to
scheduling conflicts I was only able to attend the second half
(Release and Infra). Attendance was also difficult to gauge; we had
several other regulars from the Infra team present in the audience,
people associated with other teams with which we shared the room,
and an assortment of new faces but hard to tell which session(s)
they were mainly there to see.

> what you did in the room,

I prepared a quick (5-10 minute) "help wanted" intro slide deck to
set the stage, then transitioned to a less formal mix of Q&A and
open discussion of some of the exciting things we're working on
currently. I felt like we didn't really get as many solid questions
as I was hoping, but the back-and-forth with other team members in
the room about our priority efforts was definitely a good way to
fill in the gaps between.

> what you think worked,

The format wasn't bad. Given the constraints we were under for this,
sharing seems to have worked out pretty well for us and possibly
seeded the audience with people who were interested in what those
other teams had to say and stuck around to see me ramble.

> what you would have done differently
[...]

The goal I had was to drum up some additional solid contributors to
our team, though the upshot (not necessarily negative, just not what
I expected) was that we seemed to get more interest from "adjacent
technologies" representatives interested in what we were doing and
how to replicate it in their ecosystems. If that ends up being a
significant portion of the audience going forward, it's possible we
could make some adjustments to our approach in an attempt to entice
them to collaborate further on co-development of our tools and
processes.

Also, due to the usual no-schedule-is-perfect conundrums, I ended up
running this in the final minutes before the happy hour social (or
maybe even overlapping the start of it) so that _may_ have sucked up
some of our potential audience before I got to the room. Not sure
what to really do to fix that, more just an observation I suppose.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Jay S Bryant



On 5/31/2017 12:02 PM, Jiri Suchomel wrote:

V Wed, 31 May 2017 11:34:20 -0400
Eric Harney  napsáno:


On 05/25/2017 05:51 AM, Jiri Suchomel wrote:

Hi,
it seems to me that the way of adding extra NFS options to the
cinder backend is somewhat confusing.

...

This has gotten a bit more confusing than is necessary in Cinder due
to how the configuration for the NFS and related drivers has been
tweaked over time.

The method of putting a list of shares in the nfs_shares_config file
is effectively deprecated, but still works for now.
...

Thanks for answer!
I should definitely try with those specific host/path/options.
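For readers following along, the non-deprecated single-share configuration Eric refers to uses the nas_* options in the backend section of cinder.conf. The backend name, host, export path, and mount options below are placeholders.

```ini
[nfs-backend]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs-backend
# Replaces the nfs_shares_config file: one explicit host/path/options set.
nas_host = 192.168.0.10
nas_share_path = /exports/cinder
nas_mount_options = vers=4,minorversion=1
```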

However, it seems that the (supposedly) deprecated way does not
really work how it should, so I filed a bug report:

https://bugs.launchpad.net/cinder/+bug/1694758

Jiri


Jiri,

Thanks. It does seem, at a minimum, that the documentation needs to be 
updated and that there may be some other bugs here. We will work this 
through the bug.


Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] ERROR gear.Server: Exception in poll loop

2017-05-31 Thread Clark Boylan
On Wed, May 31, 2017, at 07:14 AM, Wang Shilong wrote:
> Ping.
> 
> Anyone has some good ideas that i could debug further?
> 
> Thanks,
> Shilong
> 
> 
> ___
> From: Wang Shilong
> Sent: Wednesday, May 24, 2017 19:37
> To: openstack-infra ‎[openstack-infra@lists.openstack.org]‎
> Cc: fu...@yuggoth.org
> Subject: ERROR gear.Server: Exception in poll loop
> 
> Hi Guys,
> 
>   I hit an error in my CI setup: somehow Gear could not trigger
> Jenkins jobs, although Zuul could watch Gerrit events fine.
> 
> And I tried this 'root@r52:~/gearman-plugin-client# python gear_client.py
> -s localhost --function=build:noop-check-communication'  it could work.
> 
> But i hit gear errors
> $ cat /var/log/zuul/gearman-server.log
> 2017-05-24 20:22:13,043 ERROR gear.Server: Exception in poll loop:
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line
>   2903, in _doPollLoop
> self.log.exception("Exception in poll loop:")
>   File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line
>   2916, in _pollLoop
> for fd, event in ret:
> IOError: [Errno 4] Interrupted system call
> 2017-05-24 20:22:14,081 ERROR gear.Server: Exception in poll loop:
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line
>   2903, in _doPollLoop
> self.log.exception("Exception in poll loop:")
>   File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line
>   2916, in _pollLoop
> for fd, event in ret:
> IOError: [Errno 4] Interrupted system call
> 
> Zuul logs like following:
> 2017-05-24 11:42:41,857 ERROR gear.Client.unknown: Exception in poll
> loop:
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line
>   830, in _doPollLoop
> try:
>   File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line
>   856, in _pollLoop
> len(self.active_connections))
> error: (4, 'Interrupted system call')
> 2017-05-24 11:45:24,565 INFO zuul.Server: Starting scheduler
> 2017-05-24 11:45:53,960 INFO zuul.Scheduler: Registering foreign project:
> openstack/diskimage-builder
> 2017-05-24 11:45:54,417 INFO zuul.Scheduler: Registering foreign project:
> openstack/nova
> 2017-05-24 11:46:03,840 INFO zuul.Scheduler: Registering foreign project:
> openstack/openstack-helm
> 2017-05-24 11:46:10,344 INFO zuul.Scheduler: Registering foreign project:
> openstack/daisycloud-core
> 2017-05-24 11:46:45,066 INFO zuul.Scheduler: Registering foreign project:
> openstack/neutron-lib
> 2017-05-24 11:46:57,904 WARNING zuul.GerritEventConnector: Received
> unrecognized event type 'ref-replicated' from Gerrit.   
> Can not get account information.
> 2017-05-24 11:46:58,045 WARNING zuul.GerritEventConnector: Received
> unrecognized event type 'ref-replicated' from Gerrit.   
> Can not get account information.
> 2017-05-24 11:46:58,280 WARNING zuul.GerritEventConnector: Received
> unrecognized event type 'ref-replicated' from Gerrit.   
> Can not get account information.
> 2017-05-24 11:46:58,422 WARNING zuul.GerritEventConnector: Received
> unrecognized event type 'ref-replicated' from Gerrit.   
> Can not get account information.
> 2017-05-24 11:46:58,423 WARNING zuul.GerritEventConnector: Received
> unrecognized event type 'ref-replicated' from Gerrit.   
> Can not get account information.
> 
> 
> Any help is much appreciated!
> 
> Thanks,
> Shilong

The interrupted system calls are "normal": they happen when a signal is
sent to the application while a syscall is pending, and the syscall gets
interrupted. The gear code should handle this (and a quick review of said
code shows that the poll loop is wrapped in an exception handler that
should do this properly).

What I would check is that your Jenkins server has the gearman plugin
installed, that it can talk to the gear server running under zuul, and
that you have Jenkins slaves with labels that match up to jobs so that
they will get registered in Gearman.

To do this check your global jenkins config for the gearman plugin
settings, there should be a button to test the connection. Then make
sure the labels on slaves in jenkins match the labels on your jobs.
Finally you can run `telnet $zuul_server 4730` and enter 'status' to get
the job registrations. The fields there are job name, queued jobs,
running jobs, workers able to run the job.
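The status check Clark describes can be scripted against gear's admin interface (port 4730 is the default; the server hostname is a placeholder, and netcat flags vary by implementation):

```shell
# Ask the gear server for its job registrations. Each output line is
# tab-separated: job name, queued jobs, running jobs, registered workers.
printf 'status\n' | nc -q 1 "$ZUUL_SERVER" 4730
```

A job listed with zero registered workers means the Jenkins gearman plugin never registered a matching slave label for it.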

If all that looks good then we'll need to dig deeper but that should
give you somewhere to start.

Hope this helps,
Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Wang, Peter Xihong
Thanks John for the intro.

I believe cryptography is supported by PyPy today. I just did a "pip install 
cryptography" using an older version of PyPy, pypy2-v5.6.0, with no errors.
This package has been downloaded more than 13 million times according to the PyPy 
package tracking site: http://packages.pypy.org/##cryptography.
I'd be interested in knowing the exact errors where it's failing, and would be glad to help out.
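Since cryptography's stated floor is PyPy 5.3+ (see the later messages in this thread), a quick interpreter check before suspecting the library might look like this; the version numbers are from the thread, not guarantees:

```shell
# Confirm which PyPy the job actually runs (Xenial ships 5.1.2, which
# is below cryptography's stated 5.3+ floor), then try the import.
pypy --version
pypy -m pip install cryptography
pypy -c 'import cryptography; print(cryptography.__version__)'
```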

As John said, we've observed a 2x boost in throughput and a 78% reduction in 
latency at the proxy node in Swift lab setups.
We've also seen performance gains in Cinder, Keystone, Nova, Glance, and Neutron, 
with the most significant gain (22x) in oslo.i18n, in our lab setup running 
benchmarks such as Rally.

Switching from CPython to PyPy for OpenStack is not hard. However, I have found it 
really challenging to identify real-world (customer) OpenStack examples where 
performance is limited mainly by Python code or the interpreter.

Thanks,

Peter


-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Wednesday, May 31, 2017 9:22 AM
To: OpenStack Development Mailing List 
Cc: Wang, Peter Xihong 
Subject: Re: [openstack-dev] [requirements] Do we care about pypy for clients 
(broken by cryptography)



On 31 May 2017, at 5:34, Monty Taylor wrote:

> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
>> On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
>>> We had a discussion a few months back around what to do for 
>>> cryptography since pycrypto is basically dead [1]. After some 
>>> discussion, at least on the Cinder project, we decided the best way 
>>> forward was to use the cryptography package instead, and work has 
>>> been done to completely remove pycrypto usage.
>>>
>>> It all seemed like a good plan at the time.
>>>
>>> I now notice that for the python-cinderclient jobs, there is a pypy 
>>> job
>>> (non-voting!) that is failing because the cryptography package is 
>>> not supported with pypy.
>>>
>>> So this leaves us with two options I guess. Change the cryto library 
>>> again, or drop support for pypy.
>>>
>>> I am not aware of anyone using pypy, and there are other valid 
>>> working alternatives. I would much rather just drop support for it 
>>> than redo our crypto functions again.
>>>
>>> Thoughts? I'm sure the Grand Champion of the Clients (Monty) 
>>> probably has some input?
>
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new push 
> and be successful at this point seems low at best.
>
> I'd argue that pypy is already not supported, so dropping the non-voting job 
> doesn't seem like losing very much to me. Reworking cryptography libs again, 
> otoh, seems like a lot of work.
>
> Monty

On the other hand, I've been working with Intel on getting PyPy support in 
Swift (it works, just need to reenable that gate job), and I know they were 
working on support in other projects. Summary is that for Swift, we got around 
2x improvement (lower latency + higher throughput) by simply replacing CPython 
with PyPy. I believe similar gains in other projects were expected.

I've cc'd Peter from Intel who's been working on PyPy quite a bit. I know he'll 
be very interested in this discussion and will have valuable input.

--john




>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
> Just a couple things (I don't think it changes the decision made).
> 
> Cryptography does at least claim to support PyPy (see
> https://pypi.python.org/pypi/cryptography/ trove identifiers), so
> possibly a bug on their end that should get filed?
> 
> Also, our user clients should probably run under as many interpreters as
> possible to make life easier for end users; however, they currently
> depend on oslo and if oslo doesn't support pypy then likely not
> reasonable to do in user clients.
> 
> Clark
> 

This looks like it may be due to the version of pypy packaged for Xenial.

The pypi info for cryptography does state PyPy 5.3+ is supported. The latest
package I see for 16.04 is pypy/xenial-updates 5.1.2+dfsg-1~16.04.

Is there a way we can get a newer version loaded on our images? I would be
fine testing against it, though the lack of oslo support does give a high
risk that something else may break once we get past the cryptography issue.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Strange: lost physical connectivity to compute hosts when using native (ryu) openflow interface

2017-05-31 Thread Kevin Benton
No prob. Thanks for replying.

On May 31, 2017 10:11 AM, "Gustavo Randich" 
wrote:

> Hi Kevin, I confirm that applying the patch the problem is fixed.
>
> Sorry for the inconvenience.
>
>
> On Tue, May 30, 2017 at 9:36 PM, Kevin Benton  wrote:
>
>> Do you have that patch already in your environment? If not, can you
>> confirm it fixes the issue?
>>
>> On Tue, May 30, 2017 at 9:49 AM, Gustavo Randich <
>> gustavo.rand...@gmail.com> wrote:
>>
>>> While dumping OVS flows as you suggested, we finally found the cause of
>>> the problem: our br-ex OVS bridge lacked the secure fail mode configuration.
>>>
>>> Maybe the issue is related to this:
>>> https://bugs.launchpad.net/neutron/+bug/1607787
>>>
>>> Thank you
>>>
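For anyone hitting the same symptom: the secure fail mode fix described above can be applied and verified as follows (bridge name taken from this thread; check your own bridge names).

```shell
# In "secure" fail mode OVS keeps the controller-installed flows when
# the OpenFlow controller (the native/ryu agent interface) is
# unreachable, instead of falling back to standalone L2 learning.
ovs-vsctl set-fail-mode br-ex secure
ovs-vsctl get-fail-mode br-ex   # expect: secure
```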
>>>
>>> On Fri, May 26, 2017 at 6:03 AM, Kevin Benton  wrote:
>>>
 Sorry about the long delay.

 Can you dump the OVS flows before and after the outage? This will let
 us know if the flows Neutron setup are getting wiped out.

 On Tue, May 2, 2017 at 12:26 PM, Gustavo Randich <
 gustavo.rand...@gmail.com> wrote:

> Hi Kevin, here is some information about this issue:
>
> - if the network outage lasts less than ~1 minute, then connectivity
> to host and instances is automatically restored without problem
>
> - otherwise:
>
> - upon outage, "ovs-vsctl show" reports "is_connected: true" in all
> bridges (br-ex / br-int / br-tun)
>
> - after about ~1 minute, "ovs-vsctl show" ceases to show
> "is_connected: true" on every bridge
>
> - upon restoring physical interface (fix outage)
>
> - "ovs-vsctl show" now reports "is_connected: true" in all
> bridges (br-ex / br-int / br-tun)
>
>- access to host and VMs is NOT restored, although some pings
> are sporadically answered by host (~1 out of 20)
>
>
> - to restore connectivity, we:
>
>
>   - execute "ifdown br-ex; ifup br-ex" -> access to host is
> restored, but not to VMs
>
>
>   - restart neutron-openvswitch-agent -> access to VMs is restored
>
> Thank you!
>
>
>
>
> On Fri, Apr 28, 2017 at 5:07 PM, Kevin Benton 
> wrote:
>
>> With the network down, does ovs-vsctl show that it is connected to
>> the controller?
>>
>> On Fri, Apr 28, 2017 at 2:21 PM, Gustavo Randich <
>> gustavo.rand...@gmail.com> wrote:
>>
>>> Exactly, we access via a tagged interface, which is part of br-ex
>>>
>>> # ip a show vlan171
>>> 16: vlan171:  mtu 9000 qdisc
>>> noqueue state UNKNOWN group default qlen 1
>>> link/ether 8e:14:8d:c1:1a:5f brd ff:ff:ff:ff:ff:ff
>>> inet 10.171.1.240/20 brd 10.171.15.255 scope global vlan171
>>>valid_lft forever preferred_lft forever
>>> inet6 fe80::8c14:8dff:fec1:1a5f/64 scope link
>>>valid_lft forever preferred_lft forever
>>>
>>> # ovs-vsctl show
>>> ...
>>> Bridge br-ex
>>> Controller "tcp:127.0.0.1:6633"
>>> is_connected: true
>>> Port "vlan171"
>>> tag: 171
>>> Interface "vlan171"
>>> type: internal
>>> ...
>>>
>>>
>>> On Fri, Apr 28, 2017 at 3:03 PM, Kevin Benton 
>>> wrote:
>>>
 Ok, that's likely not the issue then. I assume the way you access
 each host is via an IP assigned to an OVS bridge or an interface that
 somehow depends on OVS?

 On Apr 28, 2017 12:04, "Gustavo Randich" 
 wrote:

> Hi Kevin, we are using the default listen address of loopback
> interface:
>
> # grep -r of_listen_address /etc/neutron
> /etc/neutron/plugins/ml2/openvswitch_agent.ini:#of_listen_address
> = 127.0.0.1
>
>
> tcp/127.0.0.1:6640 -> ovsdb-server
> /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info
> --remote=punix:/var/run/openvswitch/db.sock
> --private-key=db:Open_vSwitch,SSL,private_key
> --certificate=db:Open_vSwitch,SSL,certificate
> --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir
> --log-file=/var/log/openvswitch/ovsdb-server.log
> --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor
>
> Thanks
>
>
>
>
> On Fri, Apr 28, 2017 at 5:00 AM, Kevin Benton 
> wrote:
>
>> Are you using an of_listen_address value of an interface being
>> brought down?
>>
>> On Apr 25, 2017 17:34, "Gustavo Randich" <
>> gustavo.rand...@gmail.com> wrote:
>>
>>> (using Mitaka / Ubuntu 16 / Neutron DVR / OVS / 

[Openstack-operators] Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-05-31 Thread MCCABE, JAMEY A
Working group (WG) chairs or delegates, please enter your name (and WG name) 
and what times you could meet at this poll: 
https://beta.doodle.com/poll/6k36zgre9ttciwqz#table

As back ground and to share progress:

  *   We started and generally confirmed the desire to have a regular cross WG 
status meeting at the Boston Summit.
  *   Specifically the groups interested in Telco NFV and Fog Edge agreed to 
collaborate more often and in a more organized fashion.
  *   In e-mails and then in today’s Operators Telco/NFV meeting we finalized a 
proposal to have all WGs meet for a high-level status update monthly and to bring 
the collaboration back to our individual WG sessions.
  *   the User Committee sessions are appropriate for the Monthly WG Status 
meeting
  *   more detailed coordination across Telco/NFV and Fog Edge groups should 
take place in the Operators Telco NFV WG meetings which already occur every 2 
weeks.
  *   we need participation of each WG Chair (or a delegate)
  *   we welcome and request the OPNFV and Linux Foundation and other WGs to 
join us in the cross WG status meetings

The Doodle was setup to gain concurrence for a time of week in which we could 
schedule and is not intended to be for a specific week.

Jamey McCabe – AT&T Integrated Cloud - jm6819 - mobile if needed 
847-496-1176


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] FW: Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-05-31 Thread MCCABE, JAMEY A
Working group (WG) chairs or delegates, please enter your name (and WG name) 
and what times you could meet at this poll: 
https://beta.doodle.com/poll/6k36zgre9ttciwqz#table

As background and to share progress:

  *   We started and generally confirmed the desire to have a regular cross WG 
status meeting at the Boston Summit.
  *   Specifically the groups interested in Telco NFV and Fog Edge agreed to 
collaborate more often and in a more organized fashion.
  *   In e-mails and then in today’s Operators Telco/NFV we finalized a 
proposal to have all WGs meet for high level status monthly and to bring the 
collaboration back to our individual WG sessions.
  *   the User Committee sessions are appropriate for the Monthly WG Status 
meeting
  *   more detailed coordination across Telco/NFV and Fog Edge groups should 
take place in the Operators Telco NFV WG meetings which already occur every 2 
weeks.
  *   we need participation of each WG Chair (or a delegate)
  *   we welcome and request the OPNFV and Linux Foundation and other WGs to 
join us in the cross WG status meetings

The Doodle was setup to gain concurrence for a time of week in which we could 
schedule and is not intended to be for a specific week.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Blueprint process question

2017-05-31 Thread Waines, Greg
Hey Rob,

Just thought I’d check in on whether Horizon team has had a chance to review 
the following blueprint:
https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar

The blueprint in Vitrage which the above Horizon blueprint depends on has been 
approved by Vitrage team.
i.e.   https://blueprints.launchpad.net/vitrage/+spec/alarm-counts-api

let me know if you’d like to setup a meeting to discuss,
Greg.

From: Rob Cresswell 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, May 18, 2017 at 11:40 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [horizon] Blueprint process question

There isn't a specific time for blueprint review at the moment. It's usually 
whenever I get time, or someone asks via email or IRC. During the weekly 
meetings we always have time for open discussion of bugs/blueprints/patches etc.

Rob

On 18 May 2017 at 16:31, Waines, Greg 
> wrote:
A blueprint question for horizon team.

I registered a new blueprint the other day.
https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar

Do I need to do anything else to get this reviewed?  I don’t think so, but 
wanted to double check.
How frequently do horizon blueprints get reviewed?  once a week?

Greg.


p.s. ... the above blueprint does depend on a Vitrage blueprint which I do have 
in review.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Jiri Suchomel
On Wed, 31 May 2017 11:34:20 -0400,
Eric Harney  wrote:

> On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> > Hi,
> > it seems to me that the way of adding extra NFS options to the
> > cinder backend is somewhat confusing.
> > 
> > ...

> This has gotten a bit more confusing than is necessary in Cinder due
> to how the configuration for the NFS and related drivers has been
> tweaked over time.
> 
> The method of putting a list of shares in the nfs_shares_config file
> is effectively deprecated, but still works for now.
> ...

Thanks for the answer! 
I should definitely try with those specific host/path/options.

However, it seems that the (supposedly) deprecated way does not
really work how it should, so I filed a bug report: 

https://bugs.launchpad.net/cinder/+bug/1694758

Jiri

-- 
Jiri Suchomel

SUSE LINUX, s.r.o.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano][barbican] Encrypting sensitive properties

2017-05-31 Thread Kirill Zaitsev
As long as this integration is optional (i.e. no barbican, no encryption) it 
feels ok to me. We have a very similar integration with congress, yet you can 
deploy murano with or without it.

As for the way to convey this, I believe metadata attributes were designed to 
answer use-cases like this one. see 
https://docs.openstack.org/developer/murano/appdev-guide/murano_pl/metadata.html
 for more info.

Regards, Kirill
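
Whichever backend is chosen (Barbican via Castellan, or a Glance-style local
scheme), the shape of the change is the same: when a property is flagged
`encrypt: true`, the object model stores an opaque secret reference instead of
the plaintext. A toy sketch with an in-memory stand-in for the key manager;
all names here are hypothetical, not Castellan's real API:

```python
import uuid


class InMemoryKeyManager(object):
    """Toy stand-in for a Castellan-style key manager (illustrative only)."""

    def __init__(self):
        self._secrets = {}

    def store(self, value):
        # Return an opaque reference; only the reference leaves this object.
        ref = str(uuid.uuid4())
        self._secrets[ref] = value
        return ref

    def get(self, ref):
        return self._secrets[ref]


def persist_property(km, name, value, encrypt=False):
    """Build the object-model entry for one MuranoPL property.

    With encrypt=True, only the secret reference is written to the
    murano database; the plaintext lives in the key manager.
    """
    if encrypt:
        return {name: {"secret_ref": km.store(value)}}
    return {name: value}
```

The point of the sketch is that the database row never contains the plaintext,
so a DB dump no longer leaks passwords.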

> On 25 May 2017 at 18:49, Paul Bourke  wrote:
> 
> Hi all,
> 
> I've been looking at a blueprint[0] logged for Murano which involves 
> encrypting parts of the object model stored in the database that may contain 
> passwords or sensitive information.
> 
> I wanted to see if people had any thoughts or preferences on how this should 
> be done. On the face of it, it seems Barbican is a good choice for solving 
> this, and have read a lengthy discussion around this on the mailing list from 
> earlier this year[1]. Overall the benefits of Barbican seem to be that we can 
> handle the encryption and management of secrets in a common and standard way, 
> and avoid having to implement and maintain this ourselves. The main drawback 
> for Barbican seems to be that we impose another service dependency on the 
> operator, though this complaint seems to be in some way appeased by 
> Castellan, which offers alternative backends to just Barbican (though unsure 
> right now what those are?). The alternative to integrating Barbican/Castellan 
> is to use a more lightweight "roll your own" encryption such as what Glance 
> is using[2].
> 
> After we decide on how we want to implement the encryption there is also the 
> question of how best to expose this feature to users. My current thought is 
> that we can use Murano attributes, so application authors can do something 
> like this:
> 
> - name: appPassword
>  type: password
>  encrypt: true
> 
> This would of course be transparent to the end user of the application. Any 
> thoughts on both issues are very welcome, I hope to have a prototype in the 
> next few days which may help solidify this also.
> 
> Regards,
> -Paul.
> 
> [0] 
> https://blueprints.launchpad.net/murano/+spec/allow-encrypting-of-muranopl-properties
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-January/110192.html
> [2] 
> https://github.com/openstack/glance/blob/48ee8ef4793ed40397613193f09872f474c11abe/glance/common/crypt.py
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] deploy software on Openstack controller on the Overcloud

2017-05-31 Thread Dnyaneshwar Pawar
Hi Alex,

Currently we have puppet modules[0] to configure our software which has 
components on Openstack Controller, Cinder node and Nova node.
As per document[1] we successfully tried out role specific configuration[2].

So, does it mean that if we have an overcloud image with our packages built in 
and we call our configuration scripts using role-specific configuration, we may 
not need the puppet modules[0]? Is that an acceptable deployment method?

[0] https://github.com/abhishek-kane/puppet-veritas-hyperscale
[1] 
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html
[2] http://paste.openstack.org/show/66/

Thanks,
Dnyaneshwar

On 5/30/17, 6:52 PM, "Alex Schultz" 
> wrote:

On Mon, May 29, 2017 at 5:05 AM, Dnyaneshwar Pawar
> wrote:
Hi,

I am trying to deploy software on the OpenStack controller in the overcloud.
One way to do this is by modifying ‘overcloud image’ so that all packages of
our software are added to image and then run overcloud deploy.
Other way is to write heat template and puppet module which will deploy the
required packages.

Question: Which of above two approaches is better?

Note: Configuration part of the software will be done via separate heat
template and puppet module.


Usually you do both.  Depending on how the end user is expected to
deploy, if they are using the TripleoPackages service[0] in their
role, the puppet installation of the package won't actually work (we
override the package provider to noop) so it needs to be in the
images.  That being said, usually there is also a bit of puppet that
needs to be written to configure the end service and as a best
practice (and for development purposes), it's a good idea to also
capture the package in the manifest.

Thanks,
-Alex

[0] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml


Thanks and Regards,
Dnyaneshwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Jeremy Stanley
On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
[...]
> Trademark programs are trademark programs - we should have a unified
> process for all of them. Let's not make the same mistakes again by
> creating classes of projects / programs. I do not want this to be
> a distinction as we move forward.

This I agree with. However I'll be surprised if a majority of the QA
team disagree on this point (logistic concerns with how to curate
this over time I can understand, but that just means they need to
interest some people in working on a manageable solution).
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread John Dickinson


On 31 May 2017, at 5:34, Monty Taylor wrote:

> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
>> On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
>>> We had a discussion a few months back around what to do for cryptography
>>> since pycrypto is basically dead [1]. After some discussion, at least on
>>> the Cinder project, we decided the best way forward was to use the
>>> cryptography package instead, and work has been done to completely remove
>>> pycrypto usage.
>>>
>>> It all seemed like a good plan at the time.
>>>
>>> I now notice that for the python-cinderclient jobs, there is a pypy job
>>> (non-voting!) that is failing because the cryptography package is not
>>> supported with pypy.
>>>
>>> So this leaves us with two options I guess. Change the cryto library again,
>>> or drop support for pypy.
>>>
>>> I am not aware of anyone using pypy, and there are other valid working
>>> alternatives. I would much rather just drop support for it than redo our
>>> crypto functions again.
>>>
>>> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
>>> some input?
>
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new push 
> and be successful at this point seems low at best.
>
> I'd argue that pypy is already not supported, so dropping the non-voting job 
> doesn't seem like losing very much to me. Reworking cryptography libs again, 
> otoh, seems like a lot of work.
>
> Monty

On the other hand, I've been working with Intel on getting PyPy support in 
Swift (it works, just need to reenable that gate job), and I know they were 
working on support in other projects. Summary is that for Swift, we got around 
2x improvement (lower latency + higher throughput) by simply replacing CPython 
with PyPy. I believe similar gains in other projects were expected.

I've cc'd Peter from Intel who's been working on PyPy quite a bit. I know he'll 
be very interested in this discussion and will have valuable input.

--john
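
For projects that want to keep a best-effort PyPy job alive while depending on
a CPython-oriented package, one option short of reworking the crypto code again
is to probe at runtime and fall back or skip. A sketch with a hypothetical
helper name, not an existing client API:

```python
import platform


def cryptography_usable():
    """Return True if the `cryptography` package can be imported here.

    As reported in this thread, `cryptography` did not work under the
    PyPy jobs, so a client could skip those code paths (or select an
    alternative backend) instead of failing outright at import time.
    """
    if platform.python_implementation() == "PyPy":
        # Assumed unusable on PyPy per the gate failures described above.
        return False
    try:
        import cryptography  # noqa: F401
    except ImportError:
        return False
    return True
```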




>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] ERROR gear.Server: Exception in poll loop

2017-05-31 Thread Wang Shilong
Ping.

Does anyone have good ideas on how I could debug this further?

Thanks,
Shilong


___
From: Wang Shilong
Sent: Wednesday, May 24, 2017 19:37
To: openstack-infra [openstack-infra@lists.openstack.org]
Cc: fu...@yuggoth.org
Subject: ERROR gear.Server: Exception in poll loop

Hi Guys,

  I hit an error in my CI setup; somehow Gear could not call
Jenkins jobs. Zuul could watch Gerrit events fine.

And I tried 'root@r52:~/gearman-plugin-client# python gear_client.py -s 
localhost --function=build:noop-check-communication', and it worked.

But I hit gear errors:
$ cat /var/log/zuul/gearman-server.log
2017-05-24 20:22:13,043 ERROR gear.Server: Exception in poll loop:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line 2903, in 
_doPollLoop
self.log.exception("Exception in poll loop:")
  File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line 2916, in 
_pollLoop
for fd, event in ret:
IOError: [Errno 4] Interrupted system call
2017-05-24 20:22:14,081 ERROR gear.Server: Exception in poll loop:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line 2903, in 
_doPollLoop
self.log.exception("Exception in poll loop:")
  File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line 2916, in 
_pollLoop
for fd, event in ret:
IOError: [Errno 4] Interrupted system call

Zuul logs like following:
2017-05-24 11:42:41,857 ERROR gear.Client.unknown: Exception in poll loop:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line 830, in 
_doPollLoop
try:
  File "/usr/local/lib/python2.7/dist-packages/gear/__init__.py", line 856, in 
_pollLoop
len(self.active_connections))
error: (4, 'Interrupted system call')
2017-05-24 11:45:24,565 INFO zuul.Server: Starting scheduler
2017-05-24 11:45:53,960 INFO zuul.Scheduler: Registering foreign project: 
openstack/diskimage-builder
2017-05-24 11:45:54,417 INFO zuul.Scheduler: Registering foreign project: 
openstack/nova
2017-05-24 11:46:03,840 INFO zuul.Scheduler: Registering foreign project: 
openstack/openstack-helm
2017-05-24 11:46:10,344 INFO zuul.Scheduler: Registering foreign project: 
openstack/daisycloud-core
2017-05-24 11:46:45,066 INFO zuul.Scheduler: Registering foreign project: 
openstack/neutron-lib
2017-05-24 11:46:57,904 WARNING zuul.GerritEventConnector: Received 
unrecognized event type 'ref-replicated' from Gerrit. Can not get account 
information.
2017-05-24 11:46:58,045 WARNING zuul.GerritEventConnector: Received 
unrecognized event type 'ref-replicated' from Gerrit. Can not get account 
information.
2017-05-24 11:46:58,280 WARNING zuul.GerritEventConnector: Received 
unrecognized event type 'ref-replicated' from Gerrit. Can not get account 
information.
2017-05-24 11:46:58,422 WARNING zuul.GerritEventConnector: Received 
unrecognized event type 'ref-replicated' from Gerrit. Can not get account 
information.
2017-05-24 11:46:58,423 WARNING zuul.GerritEventConnector: Received 
unrecognized event type 'ref-replicated' from Gerrit. Can not get account 
information.


Any help is much appreciated!

Thanks,
Shilong

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Graham Hayes
On 31/05/17 16:45, Jeremy Stanley wrote:
> On 2017-05-31 15:22:59 +0000 (+0000), Jeremy Stanley wrote:
>> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
>> [...]
>>> it's news to me that they're considering reversing course. If the
>>> QA team isn't going to continue, we'll need to figure out what
>>> that means and potentially find another group to do it.
>>
>> I wasn't there for the discussion, but it sounds likely to be a
>> mischaracterization. I'm going to assume it's not true (or much more
>> nuanced) at least until someone responds on behalf of the QA team.
>> This particular subthread is only going to go further into the weeds
>> until it is grounded in some authoritative details.
> 
> Apologies for replying to myself, but per discussion[*] with Chris
> in #openstack-dev I'm adjusting the subject header to make it more
> clear which particular line of speculation I consider weedy.
> 
> Also in that brief discussion, Graham made it slightly clearer that
> he was talking about pushback on the tempest repo getting tests for
> new trademark programs beyond "OpenStack Powered Platform,"
> "OpenStack Powered Compute" and "OpenStack Powered Object Storage."

Trademark programs are trademark programs - we should have a unified
process for all of them. Let's not make the same mistakes again by
creating classes of projects / programs. I do not want this to be
a distinction as we move forward.

> [*]  http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-31.log.html#t2017-05-31T15:28:07
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-31 Thread Amrith Kumar

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, May 31, 2017 12:00 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master
> 
> On 05/31/2017 02:14 AM, Clint Byrum wrote:
> > Either way, it should be much simpler to manage slave lag than to deal
> > with a Galera cluster that won't accept any writes at all because it
> > can't get quorum.
> 
> Would CockroachDB be any better at achieving quorum?
> 
> Genuinely curious. :)

[Amrith Kumar] Last I read about this was in [1] and it sounded like it would 
be no better or worse.

[1] https://www.cockroachlabs.com/blog/consensus-made-thrive/
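
For context on the quorum question: Galera, and each Raft range in CockroachDB,
both need a strict majority of members to accept writes, so the arithmetic is
the same in either system; what differs is how they behave while quorum is
lost. A quick sketch of the majority math:

```python
def quorum(n):
    """Smallest strict majority of an n-member group."""
    return n // 2 + 1


def tolerated_failures(n):
    """How many members can fail while writes still proceed."""
    return n - quorum(n)


# A 3-node cluster needs 2 voters and survives 1 failure;
# a 5-node cluster needs 3 voters and survives 2 failures.
# Note an even-sized cluster tolerates no more failures than
# the next smaller odd size, which is why odd sizes are preferred.
```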

> 
> Best,
> -jay
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi][collectd-ceilometer-plugin][ceilometer][rally] Gates with gnocchi+devstack will break

2017-05-31 Thread Julien Danjou
Hi,

If you're consuming Gnocchi via its devstack plugin in the gate, you'll need to
change that soon. As the repository has been moved to GitHub and the
infra team does not want to depend on (external) GitHub repositories,
you'll need to set up Gnocchi via pip.

I've started doing the work for Ceilometer here:

  https://review.openstack.org/#/c/468844/
  https://review.openstack.org/#/c/468876/

If it can inspire you!

As soon as https://review.openstack.org/#/c/466317/ is merged, your jobs
will break.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] Monasca at PTG in Denver

2017-05-31 Thread witold.be...@est.fujitsu.com
Hi,

I wanted to ask which of you would be interested in attending dedicated Monasca 
sessions during the next Project Teams Gathering in Denver, September 11-15, 
2017 [1]. The team room would probably be booked for one or two days. 
Alternatively, we could organize a remote mid-cycle meeting, as we did 
previously.

Please fill the form as soon as possible:
https://goo.gl/forms/7xpgZFrcUCWhqnEg2

[1] https://www.openstack.org/ptg/



Cheers
Witek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-31 Thread Jay Pipes

On 05/31/2017 02:14 AM, Clint Byrum wrote:

Either way, it should be much simpler to manage slave lag than to deal
with a Galera cluster that won't accept any writes at all because it
can't get quorum.


Would CockroachDB be any better at achieving quorum?

Genuinely curious. :)

Best,
-jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-31 Thread Hongbin Lu
Please find my replies inline.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang



On 30 May 2017 at 15:26, Hongbin Lu 
> wrote:
Please consider leveraging Fuxi instead.

Is there a missing functionality from rexray?

[Hongbin Lu] From my understanding, Rexray targets the overcloud use cases 
and assumes that containers are running on top of Nova instances. You mentioned 
Magnum is leveraging Rexray for Cinder integration. Actually, I am the core 
reviewer who reviewed and approved those Rexray patches. From what I observed, 
the functionality provided by Rexray is minimal. What it does is simply call 
the Cinder API to search for an existing volume, attach the volume to the Nova 
instance, and let Docker bind-mount the volume into the container. At the time 
I was testing it, it seemed to have some mystery bugs that prevented me from 
getting the cluster to work. It was packaged as a large container image, which 
might take more than 5 minutes to pull down. With that said, Rexray might be a 
choice for someone who is looking for a cross-cloud-provider solution. Fuxi 
will focus on OpenStack and targets both overcloud and undercloud use cases. 
That means Fuxi can work with Nova+Cinder or a standalone Cinder. As John 
pointed out in another reply, another benefit of Fuxi is that it resolves the 
fragmentation problem of existing solutions. Those are the differentiators of 
Fuxi.

The Kuryr/Fuxi team is working very hard to deliver the docker network/storage 
plugins. I hope you will work with us to get them integrated with 
Magnum-provisioned clusters.

Patches are welcome to support fuxi as an *option* instead of rexray, so users 
can choose.

Currently, COE clusters provisioned by Magnum are far from enterprise-ready. I 
think the Magnum project will be better off if it can adopt Kuryr/Fuxi, which 
will give you a better OpenStack integration.

Best regards,
Hongbin

fuxi feature request: Add authentication using a trustee and a trustID.

[Hongbin Lu] I believe this is already supported.

Cheers,
Spyros


From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [3]; it could support manila too. 
Rexray also supports the popular cloud providers.
also supports the popular cloud providers.

Magnum's docker swarm cluster driver, already leverages rexray for cinder 
integration. [2]

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata
[3] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0

On 27 May 2017 at 12:15, zengchen 
> wrote:
Hi John & Ben:
 I have committed a patch[1] to add a new repository to Openstack. Please take 
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen



On 2017-05-26 21:30:48, "John Griffith" 
> wrote:


On Thu, May 25, 2017 at 10:01 PM, zengchen 
> wrote:

Hi John:
    I have seen your updates on the bp. I agree with your plan for how to 
develop the code.
    However, there is one issue I have to remind you of: at present, Fuxi can 
convert not only Cinder volumes for Docker, but also Manila files. So, do you 
plan to include the Manila part of the code in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander is interested; we can check with him and make sure, but I 
certainly hope that Manila would be interested.
Besides, IMO, it is better to create a repository for Fuxi-golang, because 
Fuxi is an OpenStack project.
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charging ahead on new 
repos etc.  Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen


At 2017-05-25 22:47:29, "John Griffith" 
> wrote:


On Thu, May 25, 2017 at 5:50 AM, zengchen 
> wrote:
Very sorry to foget attaching the link for bp of rewriting Fuxi with go 
language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
> wrote:
Hi guys:
hongbin had committed a bp 

[openstack-dev] [qa][tc][all] Tempest to reject trademark tests (was: more tempest plugins)

2017-05-31 Thread Jeremy Stanley
On 2017-05-31 15:22:59 +0000 (+0000), Jeremy Stanley wrote:
> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > it's news to me that they're considering reversing course. If the
> > QA team isn't going to continue, we'll need to figure out what
> > that means and potentially find another group to do it.
> 
> I wasn't there for the discussion, but it sounds likely to be a
> mischaracterization. I'm going to assume it's not true (or much more
> nuanced) at least until someone responds on behalf of the QA team.
> This particular subthread is only going to go further into the weeds
> until it is grounded in some authoritative details.

Apologies for replying to myself, but per discussion[*] with Chris
in #openstack-dev I'm adjusting the subject header to make it more
clear which particular line of speculation I consider weedy.

Also in that brief discussion, Graham made it slightly clearer that
he was talking about pushback on the tempest repo getting tests for
new trademark programs beyond "OpenStack Powered Platform,"
"OpenStack Powered Compute" and "OpenStack Powered Object Storage."

[*] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-31.log.html#t2017-05-31T15:28:07
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [EXTERNAL] Re: [TripleO] custom configuration to overcloud fails second time

2017-05-31 Thread Dnyaneshwar Pawar
Hi Ben,

On 5/31/17, 8:06 PM, "Ben Nemec" 
> wrote:

I think we would need to see what your custom config templates look like
as well.

Custom config templates: http://paste.openstack.org/show/64/


Also note that it's generally not recommended to drop environment files
from your deploy command unless you explicitly want to stop applying
them.  So if you applied myconfig_1.yaml and then later want to apply
myconfig_2.yaml your deploy command should look like: openstack
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml

Yes, I agree. But in my case, even when I dropped myconfig_1.yaml while 
applying myconfig_2.yaml, the configuration applied in step 1 remained 
unchanged.

On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:
Hi TripleO Experts,
I performed following steps -

  1. openstack overcloud deploy --templates -e myconfig_1.yaml
  2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1  Successfully applied custom configuration to the overcloud.
Step 2 completed successfully but custom configuration is not applied to
the overcloud. And configuration applied by step 1 remains unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Eric Harney
On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> Hi,
> it seems to me that the way of adding extra NFS options to the cinder
> backend is somewhat confusing.
> 
> 1. There is  nfs_mount_options in cinder config file [1]
> 
> 2. Then I can put my options in the nfs_shares_config file - that
> it could contain additional options is mentioned in [2] and in the
> commit message that adds the feature [3]
> 
> Now, when I put my options to both of these places, cinder-volume
> actually uses them twice and executes the command like this
> 
> mount -t nfs -o nfsvers=3 -o nfsvers=3
> 192.168.241.10:/srv/nfs/vi7/cinder 
> /var/lib/cinder/mnt/f5689da9ea41a66eff2ce0ef89b37bce
> 
> BTW, the options coming from nfs_shares_config are called 'flags' by
> cinder/volume/drivers/nfs ([4]).
> 
> Now, to make it more fun, when I actually want to attach a volume to a
> running instance, nova uses a different way of determining which NFS
> options to use:
> 
> - It reads them from _nova_ config option of libvirt.nfs_mount_options
> [5]
> - or it uses those it gets from cinder when creating the cinder
> connection [6]. But these are only the options defined in the
> nfs_shares_config file, NOT the nfs_mount_options specified in the
> cinder config file.
> 
> 
> So. If I put my options to both places, nfs_shares_config file and
> nfs_mount_options, it actually works how I want it to work, as
> current mount does not complain that the option was provided twice. 
> 
> But it looks ugly. And I'm wondering - am I doing it wrong, or
> is there a problem with either cinder or nova (or both)?
> 

This has gotten a bit more confusing than is necessary in Cinder due to
how the configuration for the NFS and related drivers has been tweaked
over time.

The method of putting a list of shares in the nfs_shares_config file is
effectively deprecated, but still works for now.

The preferred method now is to set the following options:
   nas_host:  server address
   nas_share_path:  export path
   nas_mount_options:  options for mounting the export

So whereas before the nfs_shares_config file would have:
   127.0.0.1:/srv/nfs1 -o nfsvers=3

This would now translate to:
   nas_host=127.0.0.1
   nas_share_path=/srv/nfs1
   nas_mount_options = -o nfsvers=3

I believe if you try configuring the driver this way, you will get the
desired result.
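For reference, a complete backend stanza in cinder.conf using this style might
look like the following. This is a hypothetical sketch: the backend name, host,
and export path are examples, not values from the thread; check your Cinder
release's driver documentation before copying.

```ini
# Illustrative cinder.conf fragment for the nas_* config style.
[DEFAULT]
enabled_backends = nfs-1

[nfs-1]
volume_backend_name = nfs-1
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 127.0.0.1
nas_share_path = /srv/nfs1
nas_mount_options = -o nfsvers=3
```

With this, no nfs_shares_config file is needed, and the mount options are
specified exactly once.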

The goal was to remove the nfs_shares_config config method, but this
hasn't happened yet -- I/we need to revisit this area and see about
doing this.

Eric

> 
> Jiri
> 
> 
> [1] https://docs.openstack.org/admin-guide/blockstorage-nfs-backend.html
> [2]
> https://docs.openstack.org/newton/config-reference/block-storage/drivers/nfs-volume-driver.html
> [3]
> https://github.com/openstack/cinder/commit/553e0d92c40c73aa1680743c4287f31770131c97
> [4]
> https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163
> [5]
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L87
> [6] 
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L89
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Newton openstack designate installation

2017-05-31 Thread Graham Hayes
On 30/05/17 11:56, Michel Labarre wrote:
> Hi
> I installed designate on Newton openstack in Centos 7.3.
> After complete designate setup as indicated in
> https://docs.openstack.org/project-install-guide/dns/ocata/install-rdo.html
> (I have not found specific doc for newton)
>  - All processes are running (central, api, mdns, worker, producer, sink)
>  - I created zone with my domain.
>  - I updated my network to indicates domain name ( update with --dns-domain)
>  - I updated my server.conf on the 3 HA neutron servers (add
> `[designate]` group directives, add final dot to my `[default]
> dns_domain` attribute and add in `[default]` the driver :
> `external_dns_driver = designate`
>  - I updated my ml2 plugin to add `dns` in `extension_drivers` attribute.
>  - I restarted neutron servers.
> 
> All commands such as 'openstack dns service list', 'openstack zone list'
> and 'openstack recordset list xxx' are ok.
> Now, when i create a VM from dashboard, the VM is created without
> problem (as without designate) but no recordset is created. I don't see
> any call to port 9001 in api log. It seems that designate plugin is not
> called...
> Any idea? Thank you very much

can you try to create a port on the network and supply the
"--dns_name " parameter? Just to see if the issue is in nova
or neutron.
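A concrete form of that check might be the following. Exact flag spelling
differs between the old neutron client and python-openstackclient releases, so
treat these commands as illustrative:

```
# Newton-era neutron client:
neutron port-create --dns-name myhost <network>

# or, with a recent python-openstackclient:
openstack port create --network <network> --dns-name myhost test-port

# then see whether Designate received a recordset:
openstack recordset list <zone>
```

If the recordset appears for the port but not for a booted server, the problem
is likely on the nova side rather than neutron.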

Are there any logs in neutron that show a failed connection to Designate?

Thanks,

Graham

> 
>  
> 
> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-31 Thread Emilien Macchi
On Tue, May 30, 2017 at 11:11 PM, Matthew Thode
 wrote:
> On 05/30/2017 04:08 PM, Emilien Macchi wrote:
>> On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
>>  wrote:
>>> We have a problem in requirements that projects that don't have the
>>> cycle-with-intermediary release model (most of the cycle-with-milestones
>>> model) don't get integrated with requirements until the cycle is fully
>>> done.  This causes a few problems.
>>>
>>> * These projects don't produce a consumable release for requirements
>>> until end of cycle (which does not accept beta releases).
>>>
>>> * The former causes old requirements to be kept in place, meaning caps,
>>> exclusions, etc. are being kept, which can cause conflicts.
>>>
>>> * Keeping the old version in requirements means that cross dependencies
>>> are not tested with updated versions.
>>>
>>> This has hit us with the mistral and tripleo projects particularly
>>> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
>>> mistral sqlalchemy updates.
>>>
>>> [mistral]
>>> mistral - blocking sqlalchemy - milestones
>>>
>>> [tripleo]
>>> os-refresh-config - blocking pbr - milestones
>>> os-apply-config - blocking pbr - milestones
>>> os-collect-config - blocking pbr - milestones
>>
>> These are cycle-with-milestones., like os-net-config for example,
>> which wasn't mentioned in this email. It has the same releases as
>> os-net-config also, so I'm confused why these 3 cause issue, I
>> probably missed something.
>>
>> Anyway, I'm happy to change os-*-config (from TripleO) to be
>> cycle-with-intermediary. Quick question though, which tag would you
>> like to see, regarding what we already did for pike-1?
>>
>> Thanks,
>>
>
> Pike is fine as it's just master that has this issue.  The problem is
> that the latest release blocks the pbr from upper-constraints from being
> coinstallable.

Done, please review: https://review.openstack.org/#/c/469530/

Thanks,

>>> [nova]
>>> os-vif - blocking pbr - intermediary
>>>
>>> [horizon]
>>> django-openstack-auth - blocking django - intermediary
>>>
>>>
>>> So, here's what needs doing.
>>>
>>> Those projects that are already using the cycle-with-intermediary model
>>> should just do a release.
>>>
>>> For those that are using cycle-with-milestones, you will need to change
>>> to the cycle-with-intermediary model, and do a full release, both can be
>>> done at the same time.
>>>
>>> If anyone has any questions or wants clarifications this thread is good,
>>> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>>>
>>> --
>>> Matthew Thode (prometheanfire)
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>
>
> --
> Matthew Thode (prometheanfire)
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Jeremy Stanley
On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
[...]
> it's news to me that they're considering reversing course. If the
> QA team isn't going to continue, we'll need to figure out what
> that means and potentially find another group to do it.

I wasn't there for the discussion, but it sounds likely to be a
mischaracterization. I'm going to assume it's not true (or much more
nuanced) at least until someone responds on behalf of the QA team.
This particular subthread is only going to go further into the weeds
until it is grounded in some authoritative details.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Clark Boylan
On Wed, May 31, 2017, at 07:39 AM, Sean McGinnis wrote:
> On Wed, May 31, 2017 at 09:47:37AM -0400, Doug Hellmann wrote:
> > Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> > > On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > > > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> > > >>
> > > >> I am not aware of anyone using pypy, and there are other valid working
> > > >> alternatives. I would much rather just drop support for it than redo 
> > > >> our
> > > >> crypto functions again.
> > > >>
> > > 
> > 
> > This question came up recently for the Oslo libraries, and I think we
> > also agreed that pypy support was not being actively maintained.
> > 
> > Doug
> > 
> 
> Thanks Doug. If oslo does not support pypy, then I think that makes the
> decision for me. I will put up a patch to get rid of that job and stop
> wasting infra resources on it.

Just a couple things (I don't think it changes the decision made).

Cryptography does at least claim to support PyPy (see
https://pypi.python.org/pypi/cryptography/ trove identifiers), so
possibly a bug on their end that should get filed?

Also, our user clients should probably run under as many interpreters as
possible to make life easier for end users; however, they currently
depend on oslo and if oslo doesn't support pypy then likely not
reasonable to do in user clients.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
On Wed, May 31, 2017 at 09:47:37AM -0400, Doug Hellmann wrote:
> Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> > On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> > >>
> > >> I am not aware of anyone using pypy, and there are other valid working
> > >> alternatives. I would much rather just drop support for it than redo our
> > >> crypto functions again.
> > >>
> > 
> 
> This question came up recently for the Oslo libraries, and I think we
> also agreed that pypy support was not being actively maintained.
> 
> Doug
> 

Thanks Doug. If oslo does not support pypy, then I think that makes the
decision for me. I will put up a patch to get rid of that job and stop
wasting infra resources on it.

I was hoping this would be the answer. ;)

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla][stable][security][infra][all] guidelines for managing releases of binary artifacts

2017-05-31 Thread Steven Dake (stdake)
Doug,

Thanks for the resolution.  It is well written and sets appropriate guidelines. 
 I was expecting something terrible – I guess I shouldn’t have expectations 
ahead of a resolution.  Apologies for being a jerk on the ml.

Nice work.

I left some commentary in the review and left a vote of -1 (just a few things 
need tidying, not a -1 of the concept in general).

Regards
-steve


-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 30, 2017 at 2:54 PM
To: openstack-dev 
Subject: [openstack-dev] [tc][kolla][stable][security][infra][all]  
guidelines for managing releases of binary artifacts

Based on two other recent threads [1][2] and some discussions on
IRC, I have written up some guidelines [3] that try to address the
concerns I have with us publishing binary artifacts while still
allowing the kolla team and others to move ahead with the work they
are trying to do.

I would appreciate feedback about whether these would complicate
builds or make them impossible, as well as whether folks think they
go far enough to mitigate the risks described in those email threads.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116677.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117282.html
[3] https://review.openstack.org/#/c/469265/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [TripleO] custom configuration to overcloud fails second time

2017-05-31 Thread Ben Nemec
I think we would need to see what your custom config templates look like 
as well.


Also note that it's generally not recommended to drop environment files 
from your deploy command unless you explicitly want to stop applying 
them.  So if you applied myconfig_1.yaml and then later want to apply 
myconfig_2.yaml your deploy command should look like: openstack 
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml


On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:

Hi TripleO Experts,
I performed following steps -

 1. openstack overcloud deploy --templates -e myconfig_1.yaml
 2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1  Successfully applied custom configuration to the overcloud.
Step 2 completed successfully but custom configuration is not applied to
the overcloud. And configuration applied by step 1 remains unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] Changing project schemas in patches; upgrade implications

2017-05-31 Thread Amrith Kumar
Thx Monty, jroll, smcginnis, zzzeek_ ...


-amrith

--
Amrith Kumar
Phone: +1-978-563-9590


On Wed, May 31, 2017 at 10:17 AM, Monty Taylor  wrote:

> On 05/31/2017 08:51 AM, Amrith Kumar wrote:
>
>> This email thread relates to[1], a change that aims to improve cross-SQL
>> support in project schemas.
>>
>> I want to explicitly exclude the notion of getting rid of support for
>> PostgreSQL in the underlying project schemas, a topic that was discussed at
>> the summit[2].
>>
>> In this change, the author (Thomas Bechtold, copied on this thread) makes
>> the comment that the change "is not changing the schema. It just avoids
>> implicit type conversion".
>>
>> It has long been my understanding that changes like this are not upgrade
>> friendly as it could lead to two installations both with, say version 37 or
>> 38 of the schema, but different table structures. In effect, this change
>> breaks upgradability of systems.
>>
>> i.e. a deployment which had a schema from the install of Ocata would have
>> a v38 table modules table with a default of 0 and one installed with Pike
>> (should this change be accepted) would have a modules table with a default
>> of False.
>>
>
> I agree that if that was the case this would be bad. But I don't think
> it's the case here.
>
> The datatype in the model is already Boolean. So I believe that means this
> will be a tinyint in MySQL and likely a boolean in PG (I'm guessing); the
> only change here is to the SQLA layer in what is being used in code - and
> being more explicit seems good.
>
> So I think this is a win.
>
> I'm raising this issue on the ML because the author also claims (albeit
>> not verified by me) that other projects have accepted changes like this.
>>
>
> Thanks! I think this is an area we need to be careful in - and extra
> eyeballs are a good thing.
>
> I submit to you that the upgrade friendly way of making this change would
>> be to propose a new version of the schema which alters all of these tables
>> and includes the correct default value. On a fresh install, with no data,
>> the upgrade step with this new schema version would bring the table to the
>> right default value and any system with that version of the schema would
>> have an identical set of defaults. Similarly any system with v37 or 38 of
>> the schema would have identical defaults.
>>
>
> Yes - I agree - that would definitely be the right way to do this if there
> was a model change.
>
> What's the advice of the community on this change; I've explicitly added
>> stable-maint-core as reviewers on this change as it does have stable branch
>> upgrade implications.
>>
>> -amrith
>>
>> [1] https://review.openstack.org/#/c/467080/
>> ​[2]https://etherpad.openstack.org/p/BOS-postgresql
>> ​
>> ​​
>>
>> --
>> Amrith Kumar
>> Phone: +1-978-563-9590
>>
>>
>>
>> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] List user:project associations.

2017-05-31 Thread Ken D'Ambrosio
Hi!  I'm looking for a way to see which users are associated with which 
projects.  The dashboard does it pretty nicely, but I'd prefer from the 
CLI.  Unfortunately, while "openstack role assignment list" seems to be 
what I'd want, it requires *both* a project and a user, which means that 
in order to map everything, I'd have to iterate through every project 
for every user -- about as inefficient a way as I can imagine.  Surely 
there's a better way?
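For what it's worth, one call per project (rather than per user x project)
plus a little client-side grouping gets the full picture. The fetch itself is
not shown here - however you obtain the flat assignment rows (CLI with `-f
json`, or the Keystone API), collapsing them into a per-user view is simple.
The field names below are illustrative stand-ins, not guaranteed to match any
particular client version:

```python
from collections import defaultdict

def group_by_user(assignments):
    """Collapse flat (user, project) assignment rows into user -> projects."""
    mapping = defaultdict(set)
    for row in assignments:
        mapping[row["user"]].add(row["project"])
    return dict(mapping)

# Stand-in rows in roughly the shape JSON output takes; field names are
# hypothetical examples, not taken from any real deployment.
rows = [
    {"user": "alice", "project": "demo"},
    {"user": "alice", "project": "admin"},
    {"user": "bob", "project": "demo"},
]
print(group_by_user(rows))
```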


Thanks,

-Ken

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Strange: lost physical connectivity to compute hosts when using native (ryu) openflow interface

2017-05-31 Thread Gustavo Randich
Hi Kevin, I confirm that applying the patch fixes the problem.

Sorry for the inconvenience.
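For readers who land on this thread with the same symptom: the missing setting
identified below in the quoted discussion (br-ex lacking the secure fail mode)
can be applied and checked with the standard ovs-vsctl commands, for example:

```
ovs-vsctl set-fail-mode br-ex secure
ovs-vsctl get-fail-mode br-ex    # should report: secure
```

With fail_mode=secure the bridge stops falling back to standalone L2 learning
when the OpenFlow controller connection is lost, which is what made the flows
disappear here.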


On Tue, May 30, 2017 at 9:36 PM, Kevin Benton  wrote:

> Do you have that patch already in your environment? If not, can you
> confirm it fixes the issue?
>
> On Tue, May 30, 2017 at 9:49 AM, Gustavo Randich <
> gustavo.rand...@gmail.com> wrote:
>
>> While dumping OVS flows as you suggested, we finally found the cause of
>> the problem: our br-ex OVS bridge lacked the secure fail mode configuration.
>>
>> May be the issue is related to this: https://bugs.launchpad.net/neu
>> tron/+bug/1607787
>>
>> Thank you
>>
>>
>> On Fri, May 26, 2017 at 6:03 AM, Kevin Benton  wrote:
>>
>>> Sorry about the long delay.
>>>
>>> Can you dump the OVS flows before and after the outage? This will let us
>>> know if the flows Neutron setup are getting wiped out.
>>>
>>> On Tue, May 2, 2017 at 12:26 PM, Gustavo Randich <
>>> gustavo.rand...@gmail.com> wrote:
>>>
 Hi Kevin, here is some information about this issue:

 - if the network outage lasts less than ~1 minute, then connectivity to
 host and instances is automatically restored without problem

 - otherwise:

 - upon outage, "ovs-vsctl show" reports "is_connected: true" in all
 bridges (br-ex / br-int / br-tun)

 - after about ~1 minute, "ovs-vsctl show" ceases to show "is_connected:
 true" on every bridge

 - upon restoring physical interface (fix outage)

 - "ovs-vsctl show" now reports "is_connected: true" in all
 bridges (br-ex / br-int / br-tun)

- access to host and VMs is NOT restored, although some pings
 are sporadically answered by host (~1 out of 20)


 - to restore connectivity, we:


   - execute "ifdown br-ex; ifup br-ex" -> access to host is
 restored, but not to VMs


   - restart neutron-openvswitch-agent -> access to VMs is restored

 Thank you!




 On Fri, Apr 28, 2017 at 5:07 PM, Kevin Benton  wrote:

> With the network down, does ovs-vsctl show that it is connected to the
> controller?
>
> On Fri, Apr 28, 2017 at 2:21 PM, Gustavo Randich <
> gustavo.rand...@gmail.com> wrote:
>
>> Exactly, we access via a tagged interface, which is part of br-ex
>>
>> # ip a show vlan171
>> 16: vlan171:  mtu 9000 qdisc
>> noqueue state UNKNOWN group default qlen 1
>> link/ether 8e:14:8d:c1:1a:5f brd ff:ff:ff:ff:ff:ff
>> inet 10.171.1.240/20 brd 10.171.15.255 scope global vlan171
>>valid_lft forever preferred_lft forever
>> inet6 fe80::8c14:8dff:fec1:1a5f/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> # ovs-vsctl show
>> ...
>> Bridge br-ex
>> Controller "tcp:127.0.0.1:6633"
>> is_connected: true
>> Port "vlan171"
>> tag: 171
>> Interface "vlan171"
>> type: internal
>> ...
>>
>>
>> On Fri, Apr 28, 2017 at 3:03 PM, Kevin Benton 
>> wrote:
>>
>>> Ok, that's likely not the issue then. I assume the way you access
>>> each host is via an IP assigned to an OVS bridge or an interface that
>>> somehow depends on OVS?
>>>
>>> On Apr 28, 2017 12:04, "Gustavo Randich" 
>>> wrote:
>>>
 Hi Kevin, we are using the default listen address of loopback
 interface:

 # grep -r of_listen_address /etc/neutron
 /etc/neutron/plugins/ml2/openvswitch_agent.ini:#of_listen_address
 = 127.0.0.1


 tcp/127.0.0.1:6640 -> ovsdb-server
 /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info
 --remote=punix:/var/run/openvswitch/db.sock
 --private-key=db:Open_vSwitch,SSL,private_key
 --certificate=db:Open_vSwitch,SSL,certificate
 --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir
 --log-file=/var/log/openvswitch/ovsdb-server.log
 --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor

 Thanks




 On Fri, Apr 28, 2017 at 5:00 AM, Kevin Benton 
 wrote:

> Are you using an of_listen_address value of an interface being
> brought down?
>
> On Apr 25, 2017 17:34, "Gustavo Randich" <
> gustavo.rand...@gmail.com> wrote:
>
>> (using Mitaka / Ubuntu 16 / Neutron DVR / OVS / VXLAN /
>> l2_population)
>>
>> This sounds very strange (to me): recently, after a switch
>> outage, we lost connectivity to all our Mitaka hosts. We had to 
>> enter via
>> iLO host by host and restart networking 

[openstack-dev] [qa] No QA meeting tomorrow 01/06/2017 9.00UTC

2017-05-31 Thread Ghanshyam Mann
Hi All,

As most of the QA members are at the Open Source Summit in Tokyo, I propose
cancelling tomorrow's QA meeting (01/06/2017 9.00 UTC).

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Changing project schemas in patches; upgrade implications

2017-05-31 Thread Monty Taylor

On 05/31/2017 08:51 AM, Amrith Kumar wrote:
This email thread relates to[1], a change that aims to improve cross-SQL 
support in project schemas.


I want to explicitly exclude the notion of getting rid of support for 
PostgreSQL in the underlying project schemas, a topic that was discussed 
at the summit[2].


In this change, the author (Thomas Bechtold, copied on this thread) 
makes the comment that the change "is not changing the schema. It just 
avoids implicit type conversion".


It has long been my understanding that changes like this are not upgrade 
friendly as it could lead to two installations both with, say version 37 
or 38 of the schema, but different table structures. In effect, this 
change breaks upgradability of systems.


i.e. a deployment which had a schema from the install of Ocata would 
have a v38 table modules table with a default of 0 and one installed 
with Pike (should this change be accepted) would have a modules table 
with a default of False.


I agree that if that was the case this would be bad. But I don't think 
it's the case here.


The datatype in the model is already Boolean. So I believe that means 
this will be a tinyint in MySQL and likely a boolean in PG (I'm 
guessing); the only change here is to the SQLA layer in what is being 
used in code - and being more explicit seems good.


So I think this is a win.
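To illustrate the "implicit type conversion" point (this sketch is mine, not
from the patch, and uses stdlib sqlite3 rather than Trove's actual schema): a
BOOLEAN column whose default is declared as the integer 0, and a row inserted
with Python False, end up as the same stored value - the driver silently
adapts False to 0. Being explicit in the SQLA layer avoids relying on that:

```python
import sqlite3

# Hypothetical table, loosely modeled on the thread's "modules" example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE modules (id INTEGER, auto_apply BOOLEAN DEFAULT 0)")
conn.execute("INSERT INTO modules (id) VALUES (1)")           # takes DEFAULT 0
conn.execute("INSERT INTO modules (id, auto_apply) VALUES (2, ?)", (False,))
stored = [r[0] for r in conn.execute(
    "SELECT auto_apply FROM modules ORDER BY id")]
print(stored)  # -> [0, 0]: both rows hold the same on-disk value
```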

I'm raising this issue on the ML because the author also claims (albeit 
not verified by me) that other projects have accepted changes like this.


Thanks! I think this is an area we need to be careful in - and extra 
eyeballs are a good thing.


I submit to you that the upgrade friendly way of making this change 
would be to propose a new version of the schema which alters all of 
these tables and includes the correct default value. On a fresh install, 
with no data, the upgrade step with this new schema version would bring 
the table to the right default value and any system with that version of 
the schema would have an identical set of defaults. Similarly any system 
with v37 or 38 of the schema would have identical defaults.


Yes - I agree - that would definitely be the right way to do this if 
there was a model change.


What's the advice of the community on this change; I've explicitly added 
stable-maint-core as reviewers on this change as it does have stable 
branch upgrade implications.


-amrith

[1] https://review.openstack.org/#/c/467080/
​[2]https://etherpad.openstack.org/p/BOS-postgresql
​
​​

--
Amrith Kumar
Phone: +1-978-563-9590



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [Openstack-operators] [Openstack] Strange: lost physical connectivity to compute hosts when using native (ryu) openflow interface

2017-05-31 Thread Gustavo Randich
Hi Kevin, I confirm that applying the patch fixes the problem.

Sorry for the inconvenience.


On Tue, May 30, 2017 at 9:36 PM, Kevin Benton  wrote:

> Do you have that patch already in your environment? If not, can you
> confirm it fixes the issue?
>
> On Tue, May 30, 2017 at 9:49 AM, Gustavo Randich <
> gustavo.rand...@gmail.com> wrote:
>
>> While dumping OVS flows as you suggested, we finally found the cause of
>> the problem: our br-ex OVS bridge lacked the secure fail mode configuration.
>>
>> May be the issue is related to this: https://bugs.launchpad.net/neu
>> tron/+bug/1607787
>>
>> Thank you
>>
>>
>> On Fri, May 26, 2017 at 6:03 AM, Kevin Benton  wrote:
>>
>>> Sorry about the long delay.
>>>
>>> Can you dump the OVS flows before and after the outage? This will let us
>>> know if the flows Neutron setup are getting wiped out.
>>>
>>> On Tue, May 2, 2017 at 12:26 PM, Gustavo Randich <
>>> gustavo.rand...@gmail.com> wrote:
>>>
 Hi Kevin, here is some information about this issue:

 - if the network outage lasts less than ~1 minute, then connectivity to
 host and instances is automatically restored without problem

 - otherwise:

 - upon outage, "ovs-vsctl show" reports "is_connected: true" in all
 bridges (br-ex / br-int / br-tun)

 - after about ~1 minute, "ovs-vsctl show" ceases to show "is_connected:
 true" on every bridge

 - upon restoring physical interface (fix outage)

 - "ovs-vsctl show" now reports "is_connected: true" in all
 bridges (br-ex / br-int / br-tun)

- access to host and VMs is NOT restored, although some pings
 are sporadically answered by host (~1 out of 20)


 - to restore connectivity, we:


   - execute "ifdown br-ex; ifup br-ex" -> access to host is
 restored, but not to VMs


   - restart neutron-openvswitch-agent -> access to VMs is restored

 Thank you!




 On Fri, Apr 28, 2017 at 5:07 PM, Kevin Benton  wrote:

> With the network down, does ovs-vsctl show that it is connected to the
> controller?
>
> On Fri, Apr 28, 2017 at 2:21 PM, Gustavo Randich <
> gustavo.rand...@gmail.com> wrote:
>
>> Exactly, we access via a tagged interface, which is part of br-ex
>>
>> # ip a show vlan171
>> 16: vlan171:  mtu 9000 qdisc
>> noqueue state UNKNOWN group default qlen 1
>> link/ether 8e:14:8d:c1:1a:5f brd ff:ff:ff:ff:ff:ff
>> inet 10.171.1.240/20 brd 10.171.15.255 scope global vlan171
>>valid_lft forever preferred_lft forever
>> inet6 fe80::8c14:8dff:fec1:1a5f/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> # ovs-vsctl show
>> ...
>> Bridge br-ex
>> Controller "tcp:127.0.0.1:6633"
>> is_connected: true
>> Port "vlan171"
>> tag: 171
>> Interface "vlan171"
>> type: internal
>> ...
>>
>>
>> On Fri, Apr 28, 2017 at 3:03 PM, Kevin Benton 
>> wrote:
>>
>>> Ok, that's likely not the issue then. I assume the way you access
>>> each host is via an IP assigned to an OVS bridge or an interface that
>>> somehow depends on OVS?
>>>
>>> On Apr 28, 2017 12:04, "Gustavo Randich" 
>>> wrote:
>>>
 Hi Kevin, we are using the default listen address of loopback
 interface:

 # grep -r of_listen_address /etc/neutron
 /etc/neutron/plugins/ml2/openvswitch_agent.ini:#of_listen_address
 = 127.0.0.1


 tcp/127.0.0.1:6640 -> ovsdb-server
 /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info
 --remote=punix:/var/run/openvswitch/db.sock
 --private-key=db:Open_vSwitch,SSL,private_key
 --certificate=db:Open_vSwitch,SSL,certificate
 --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir
 --log-file=/var/log/openvswitch/ovsdb-server.log
 --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor

 Thanks




 On Fri, Apr 28, 2017 at 5:00 AM, Kevin Benton 
 wrote:

> Are you using an of_listen_address value of an interface being
> brought down?
>
> On Apr 25, 2017 17:34, "Gustavo Randich" <
> gustavo.rand...@gmail.com> wrote:
>
>> (using Mitaka / Ubuntu 16 / Neutron DVR / OVS / VXLAN /
>> l2_population)
>>
>> This sounds very strange (to me): recently, after a switch
>> outage, we lost connectivity to all our Mitaka hosts. We had to 
>> enter via
>> iLO host by host and restart networking 

Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-31 Thread zengchen
Hi, Spyros:
Rexray can now supply volumes for Docker by integrating with Cinder, which is 
great! However, compared to Fuxi, Rexray is a little heavier, because it 
must depend on Libstorage to communicate with Cinder. Fuxi-golang is just a new 
project that re-implements Fuxi in Go; from Fuxi's standpoint, 
Fuxi-golang is just her 'sister'.


By the way, you said you have been integrating Manila into Libstorage, but I 
don't see the relevant material at the link [1]. Is the link wrong, or am I 
missing something? Could you give more details about your work on integrating 
Manila?  Thanks very much!


Best Wishes!
zengchen
[1] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata





At 2017-05-30 19:47:26, "Spyros Trigazis"  wrote:

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support manila too. 
Rexray also supports the popular cloud providers.

Magnum's docker swarm cluster driver already leverages rexray for cinder 
integration [3].

Cheers,
Spyros 


[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0 
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata


On 27 May 2017 at 12:15, zengchen  wrote:

Hi John & Ben:
 I have submitted a patch [1] to add a new repository to OpenStack. Please take 
a look at it. Thanks very much!


 [1]: https://review.openstack.org/#/c/468635


Best Wishes!
zengchen






在 2017-05-26 21:30:48,"John Griffith"  写道:





On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:



Hi John:
I have seen your updates on the bp. I agree with your plan on how to 
develop the code.
However, there is one issue I have to remind you of: at present, Fuxi can 
convert not only Cinder volumes but also Manila shares for Docker. So, do 
you plan to include the Manila part in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander is interested; we can check with him and make sure, but I 
certainly hope that Manila would be interested.
Besides, IMO, it is better to create a repository for Fuxi-golang, because 
Fuxi is a project of OpenStack.
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charging ahead on new repos 
etc.  Doesn't matter much to me though.
 


   Thanks very much!


Best Wishes!
zengchen





At 2017-05-25 22:47:29, "John Griffith"  wrote:





On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:

Very sorry, I forgot to attach the link for the bp of rewriting Fuxi in Go:
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang




At 2017-05-25 19:46:54, "zengchen"  wrote:

Hi guys:
hongbin has submitted a bp of rewriting Fuxi in Go [1]. My 
question is where to commit the code for it.
We have two choices: 1. create a new repository, 2. create a new branch.  IMO, 
the first one is much better, because 
there are many differences at the infrastructure layer, such as CI.  What's 
your opinion? Thanks very much.
 
Best Wishes
zengchen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Zengchen,


For now I was thinking we just use GitHub and PRs outside of the OpenStack 
projects to bootstrap things and see how far we can get.  I'll update the BP 
this morning with what I believe to be the key tasks to work through.


Thanks,
John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-31 Thread Lance Bragstad
On Fri, May 26, 2017 at 10:21 AM, Sean Dague  wrote:

> On 05/26/2017 10:44 AM, Lance Bragstad wrote:
> 
> > Interesting - I guess the way I was thinking about it was on a per-token
> > basis, since today you can't have a single token represent multiple
> > scopes. Would it be unreasonable to have oslo.context build this
> > information based on multiple tokens from the same user, or is that a
> > bad idea?
>
> No service consumer is interacting with Tokens. That's all been
> abstracted away. The code in the consumers is interested in the
> context representation.
>
> Which is good, because then the important parts are figuring out the
> right context interface to consume. And the right Keystone front end to
> be explicit about what was intended by the operator "make jane an admin
> on compute in region 1".
>
> And the middle can be whatever works best on the Keystone side. As long
> as the details of that aren't leaked out, it can also be refactored in
> the future by having keystonemiddleware+oslo.context translate to the
> known interface.
>

Ok - I think that makes sense. So if I copy/paste your example from earlier
and modify it a bit ( s/is_admin/global/)::

{
   "user": "me!",
   "global": True,
   "roles": ["admin", "auditor"],
   
}

Or

{
   "user": "me!",
   "global": True,
   "roles": ["reader"],
   
}

That might be one way we can represent global roles through
oslo.context/keystonemiddleware. The library would be on the hook for
maintaining the mapping of token scope to context scope, which makes sense:

if token['is_global']:
    context.global = True
elif token['domain_scoped']:
    # domain scoping?
else:
    # handle project scoping
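
Fleshing that pseudocode out as a self-contained sketch (the token keys
`is_global`, `domain_id`, and `project_id` are assumptions about the payload,
not keystone's actual schema):

```python
def build_context(token):
    """Translate token scope into a context mapping (illustrative only)."""
    context = {
        "user": token.get("user"),
        "roles": token.get("roles", []),
        "global": False,
    }
    if token.get("is_global"):
        context["global"] = True
    elif token.get("domain_id"):
        # domain-scoped token: record the domain as the scope
        context["scope"] = ("domain", token["domain_id"])
    else:
        # fall back to project scoping
        context["scope"] = ("project", token.get("project_id"))
    return context

print(build_context({"user": "me!", "is_global": True,
                     "roles": ["admin", "auditor"]}))
```

The point is only that the token-to-context mapping is the single place that
has to know about global scope; consumers would keep reading the context.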

I need to go dig into oslo.context a bit more to get familiar with how this
works at the project level, because if I understand correctly, oslo.context
currently doesn't relay global scope, and that will need to be done before
this work is useful, regardless of whether we go with option #1, #2, or
especially #3.



> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Matt Riedemann

On 5/31/2017 6:58 AM, Gorka Eguileor wrote:

Hi,

As some of you may know I've been working on improving iSCSI connections
on OpenStack to make them more robust and prevent them from leaving
leftovers on attach/detach operations.

There are a couple of posts [1][2] going in more detail, but a good
summary would be that to fix this issue we require a considerable rework
in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.

Relevant changes for those projects are:

- Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
   case, so a new feature was added to disable automatic scans that added
   unintended devices to the systems.  Done and merged [3][4], it will be
   available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7

- OS-Brick: rework iSCSI to make it robust on unreliable networks, to
   add a `force` detach option that prioritizes leaving a clean system
   over possible data loss, and to support the new Open iSCSI feature.
   Done and pending review [5][6][7]

- Cinder: Handle some attach/detach errors a little better and add
   support to the force detach option for some operations where data loss
   on error is acceptable, ie: create volume from image, restore backup,
   etc. Done and pending review [8][9]

- Nova: I haven't looked into the code here, but I'm sure there will be
   cases where using the force detach operation will be useful.

- Tests: While we do have tempest tests that verify that attach/detach
   operations work both in Nova and in cinder volume creation operations,
   they are not meant to test the robustness of the system, so new tests
   will be required to validate the code.  Done [10]

Proposed tests are simplified versions of the ones I used to validate
the code; but hey, at least these are somewhat readable ;-)
Unfortunately they are not in line with the tempest mission since they
are not meant to be run in a production environment due to their
disruptive nature while injecting errors.  They need to be run
sequentially and without any other operations running on the deployment.
They also run sudo commands via local bash or SSH for the verification
and error generation bits.

We are testing create volume from image and attaching a volume to an
instance under the following networking error scenarios:

  - No errors
  - All paths have 10% incoming packets dropped
  - All paths have 20% incoming packets dropped
  - All paths have 100% incoming packets dropped
  - Half the paths have 20% incoming packets dropped
  - The other half of the paths have 20% incoming packets dropped
  - Half the paths have 100% incoming packets dropped
  - The other half of the paths have 100% incoming packets dropped

There are single execution versions as well as 10 consecutive operations
variants.

Since these are big changes I'm sure we would all feel a lot more
confident to merge them if storage vendors would run the new tests to
confirm that there are no issues with their backends.

Unfortunately to fully test the solution you may need to build the
latest Open-iSCSI package and install it in the system, then you can
just use an all-in-one DevStack with a couple of changes in the local.conf:

enable_service tempest

CINDER_REPO=https://review.openstack.org/p/openstack/cinder
CINDER_BRANCH=refs/changes/45/469445/1

LIBS_FROM_GIT=os-brick

OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
OS_BRICK_BRANCH=refs/changes/94/455394/11

[[post-config|$CINDER_CONF]]
[multipath-backend]
use_multipath_for_image_xfer=true

[[post-config|$NOVA_CONF]]
[libvirt]
volume_use_multipath = True

[[post-config|$KEYSTONE_CONF]]
[token]
expiration = 14400

[[test-config|$TEMPEST_CONFIG]]
[volume-feature-enabled]
multipath = True
[volume]
build_interval = 10
multipath_type = $MULTIPATH_VOLUME_TYPE
backend_protocol_tcp_port = 3260
multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2

Multinode configurations are also supported, using SSH with user/password or a
private key to introduce the errors or check that the systems didn't leave any
leftovers; the tests can also run a cleanup command between tests, etc., but
that's beyond the scope of this email.

Then you can run them all from /opt/stack/tempest with:

  $ cd /opt/stack/tempest
  $ OS_TEST_TIMEOUT=7200 ostestr -r cinder.tests.tempest.scenario.test_multipath.*

But I would recommend first running the simplest one, without errors, and
manually checking that the multipath is being created.

  $ ostestr -n cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_1

Then do the same with one that injects errors, and verify the presence of the
filters in iptables and that the packet drop count for those filters is
non-zero:

  $ ostestr -n cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_2
  $ sudo iptables -nvL INPUT

Then doing the same with a 

Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-31 Thread Lance Haig

Hi,


On 24.05.17 18:43, Zane Bitter wrote:

On 19/05/17 11:00, Lance Haig wrote:

Hi,

As we know the heat-templates repository has become out of date in some
respects and also has been difficult to be maintained from a community
perspective.

For me the repository is quite confusing, with different styles used to 
show certain aspects and other styles for older template examples.


This, I think, leads to confusion, and perhaps to many people giving up on
heat as a resource, as things are not that clear.

From discussions in other threads and on the IRC channel I have seen
that there is a need to change things a bit.


This is why I would like to start the discussion that we rethink the
template example repository.

I would like to open the discussion with my suggestions.

  * We need to differentiate templates that work on earlier versions of
heat from those for the currently supported versions.


I typically use the heat_template_version for this. Technically this 
is entirely independent of what resource types are available in Heat. 
Nevertheless, if I submit e.g. a template that uses new resources only 
available in Ocata, I'll set 'heat_template_version: ocata' even if 
the template doesn't contain any Ocata-only intrinsic functions. We 
could make that a convention.
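
For instance, a template that only uses long-stable resources could still pin
the Ocata HOT version to signal the minimum release it targets (a sketch of the
convention; the resource choice below is arbitrary):

```yaml
# Pins the HOT version to the Ocata release (2017-02-24) as a signal of
# the intended minimum release, even though nothing below strictly
# requires Ocata-only intrinsic functions.
heat_template_version: 2017-02-24

resources:
  example_net:
    type: OS::Neutron::Net
```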

That is one way to achieve this.



  o I have suggested that we create directories that relate to
different versions, so that you can create a stable set of
examples for each heat version; they should remain stable
for that version, and once it goes out of support they can
remain there.


I'm reluctant to move existing things around unless its absolutely 
necessary, because there are a lot of links out in the wild to 
templates that will break. And they're links directly to the Git repo, 
it's not like we publish them somewhere and could add redirects.


Although that gives me an idea: what if we published them somewhere? 
We could make templates actually discoverable by publishing a list of 
descriptions (instead of just the names like you get through browsing 
the Git repo). And we could even add some metadata to indicate what 
versions of Heat they run on.


It would be better to do something like this. One of the biggest 
learning curves that our users have had is understanding what is 
available in what version of heat and then finding examples of templates 
that match their version.
I wanted to create the heat-lib library so that people could easily find 
working examples for their version of heat, and also use the library 
intact as-is so that they can get up to speed really quickly.

This has enabled people to become productive much faster with heat.


  o This would mean people can find their version of heat and know
these templates all work on their version


This would mean keeping multiple copies of each template and 
maintaining them all. I don't think this is the right way to do this - 
to maintain old stuff what you need is a stable branch. That's also 
how you're going to be able to test against old versions of OpenStack 
in the gate.
Well, I am not sure that this would be needed, unless there are many 
backports of new resources to older versions of the templates. 
E.g., would the project backport the Newton conditionals to the Liberty 
version of heat? I am assuming not.


That means that once a new version of heat is released, the template set 
becomes locked; you create a copy with the new template version, run 
regression tests, and once that is complete you start adding the changes 
that are specific to the new version of heat.


I know that initially it would be quite a bit of work to set up and to 
test the versions, but once they are locked then you don't touch them again.


As I suggested in the other thread, I'd be OK with moving deprecated 
stuff to a 'deprecated' directory and then eventually deleting it. 
Stable branches would then correctly reflect the status of those 
templates at each previous release.
That makes sense. I would like to clarify the above discussion first 
before we look at how to deprecate unsupported versions. I say that as 
many of our customers are still running Liberty :-)





  * We should consider adding a docs section that includes training
for new users.
  o I know that there are documents hosted in the developer area and
these could be utilized but I would think having a documentation
section in the repository would be a good way to keep the
examples and the documents in the same place.
  o This docs directory could also host some training for new users
and old ones on new features etc.. In a similar line to what is
here in this repo https://github.com/heat-extras/heat-tutorial
  * We should include examples from the default hooks, e.g. ansible, salt,
etc., with SoftwareDeployments.
  o We found this quiet helpful for new users to understand what is
 

[openstack-dev] [neutron] neutron-lib impact: use plugin common constants from neutron-lib

2017-05-31 Thread Boden Russell
If your project uses the constants from neutron.plugins.common.constants
please read on.

Many of the common plugin constants from neutron are now in neutron-lib
[1], and we're ready to consume them in neutron [2].

Suggested actions:
- If your project uses any rehomed constants [1], please update your
imports to use them from neutron-lib. Best I can tell no other stadium
projects are using these constants today, but a number of non-stadium
projects are [3].
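
A minimal sketch of the import update (the module path and alias shown are
assumptions for illustration; check the rehomed constants in [1] for the
authoritative location). The import is guarded so the snippet is harmless in an
environment without neutron-lib:

```python
# Old, deprecated location, kept only as a comment:
#   from neutron.plugins.common import constants as p_const

# New home in neutron-lib (path assumed; see [1] for the exact set of
# constants that moved):
try:
    from neutron_lib.plugins import constants as p_const
except ImportError:
    p_const = None  # neutron-lib not installed in this environment

print("neutron-lib constants available:", p_const is not None)
```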


We can discuss when to land [2] during our weekly neutron meeting.

Thanks


[1] https://review.openstack.org/#/c/429036/
[2] https://review.openstack.org/#/c/469495/
[3]
http://codesearch.openstack.org/?q=from%20neutron%5C.plugins%5C.common%20import%20constants

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why don't we unbind ports or terminate volume connections on shelve offload?

2017-05-31 Thread Matt Riedemann

On 4/13/2017 11:45 AM, Matt Riedemann wrote:
This came up in the nova/cinder meeting today, but I can't for the life 
of me think of why we don't unbind ports or terminate volume connections 
when we shelve offload an instance from a compute host.


When you unshelve, if the instance was shelved offloaded, the conductor 
asks the scheduler for a new set of hosts to build the instance on 
(unshelve it). That could be a totally different host.


So am I just missing something super obvious? Or is this the most latent 
bug ever?




Looks like this is a known bug:

https://bugs.launchpad.net/nova/+bug/1547142

The fix on the nova side apparently depends on some changes on the 
cinder side. The new v3.27 APIs in cinder might help with all of this, 
but it doesn't fix old attachments.


By the way, search for shelve + volume in nova bugs and you're rewarded 
with a treasure trove of bugs:


https://bugs.launchpad.net/nova/?field.searchtext=shelved+volume=Search%3Alist=NEW%3Alist=INCOMPLETE_WITH_RESPONSE%3Alist=INCOMPLETE_WITHOUT_RESPONSE%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS%3Alist=FIXCOMMITTED=_reporter=_dupes=on_patch=_no_package=

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-31 Thread Jay Pipes

On 05/31/2017 01:31 AM, Zhenguo Niu wrote:
On Wed, May 31, 2017 at 12:20 PM, Ed Leafe wrote:


> On May 30, 2017, at 9:36 PM, Zhenguo Niu wrote:

> as placement is not splitted out from nova now, and there would be users 
who only want a baremetal cloud, so we don't add resources to placement yet, but 
it's easy for us to turn to placement to match the node type with mogan flavors.

Placement is a separate service, independent of Nova. It tracks
Ironic nodes as individual resources, not as a "pretend" VM. The
Nova integration for selecting an Ironic node as a resource is still
being developed, as we need to update our view of the mess that is
"flavors", but the goal is to have a single flavor for each Ironic
machine type, rather than the current state of flavors pretending
that an Ironic node is a VM with certain RAM/CPU/disk quantities.


Yes, I understand the current efforts to improve baremetal node 
scheduling. It doesn't conflict with mogan's goal, and when it is done, we 
can share the same scheduling strategy with placement :)


Mogan is a service for a specific group of users who really want a 
baremetal resource instead of a generic compute resource; on the API side, 
we can expose RAID, advanced partitions, NIC bonding, firmware 
management, and other baremetal-specific capabilities to users. And 
unlike nova's host-based availability zones, host aggregates, and server 
groups (ironic nodes share the same host), mogan can make it possible to 
divide baremetal nodes into such groups and make scheduling rack-aware 
for affinity and anti-affinity.
Zhenguo Niu brings up a very good point here. Currently, all Ironic 
nodes are associated with a single host aggregate in Nova, because of 
the vestigial notion that a compute *service* (ala the nova-compute 
worker) was equal to the compute *node*.


In the placement API, of course, there's no such coupling. A placement 
aggregate != a Nova host aggregate.


So, technically Ironic (or Mogan) can call the placement service to 
create aggregates that match *its* definition of what an aggregate is 
(rack, row, cage, zone, DC, whatever). Furthermore, Ironic (or Mogan) 
can associate Ironic baremetal nodes to one or more of those placement 
aggregates to get around Nova host aggregate to compute service coupling.


That said, there's still lots of missing pieces before placement gets 
affinity/anti-affinity support...
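
As a toy model (this is not the placement API; all names and data below are
invented), rack-aware anti-affinity over such aggregates could look like:

```python
# Baremetal nodes grouped into "rack" aggregates, and a picker that
# prefers a free node in a rack the server group does not occupy yet.
AGGREGATES = {
    "rack1": {"node-a", "node-b"},
    "rack2": {"node-c", "node-d"},
}

def pick_anti_affine(used_nodes):
    """Return a free node from a rack not used by the group, or None."""
    used_racks = {rack for rack, nodes in AGGREGATES.items()
                  if nodes & used_nodes}
    for rack in sorted(AGGREGATES):
        if rack in used_racks:
            continue
        free = sorted(AGGREGATES[rack] - used_nodes)
        if free:
            return free[0]
    return None  # no rack satisfies anti-affinity

print(pick_anti_affine({"node-a"}))  # node-c
```

The real work, of course, is getting the scheduler to consult such groupings;
this only illustrates why aggregate membership is the natural primitive for it.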


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-31 Thread Lance Haig

Hi Zane,


On 24.05.17 18:14, Zane Bitter wrote:

On 22/05/17 12:49, Lance Haig wrote:

I also asked the other day if there is a list of heat version matched to
Openstack version and I was told that there is not.


You mean like 
https://docs.openstack.org/developer/heat/template_guide/hot_spec.html#ocata 
?

Yup, something like this.

Lance

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Changing project schemas in patches; upgrade implications

2017-05-31 Thread Amrith Kumar
This email thread relates to [1], a change that aims to improve cross-SQL
support in project schemas.

I want to explicitly exclude the notion of getting rid of support for
PostgreSQL in the underlying project schemas, a topic that was discussed at
the summit[2].

In this change, the author (Thomas Bechtold, copied on this thread) makes
the comment that the change "is not changing the schema. It just avoids
implicit type conversion".

It has long been my understanding that changes like this are not upgrade
friendly, as they could lead to two installations, both with, say, version 37
or 38 of the schema, but with different table structures. In effect, this
change breaks upgradability of systems.

I.e., a deployment whose schema came from an Ocata install would have a
v38 modules table with a default of 0, and one installed with Pike
(should this change be accepted) would have a modules table with a default
of False.

I'm raising this issue on the ML because the author also claims (albeit not
verified by me) that other projects have accepted changes like this.

I submit to you that the upgrade friendly way of making this change would
be to propose a new version of the schema which alters all of these tables
and includes the correct default value. On a fresh install, with no data,
the upgrade step with this new schema version would bring the table to the
right default value and any system with that version of the schema would
have an identical set of defaults. Similarly any system with v37 or 38 of
the schema would have identical defaults.

What's the advice of the community on this change; I've explicitly added
stable-maint-core as reviewers on this change as it does have stable branch
upgrade implications.

-amrith

[1] https://review.openstack.org/#/c/467080/
[2] https://etherpad.openstack.org/p/BOS-postgresql

--
Amrith Kumar
Phone: +1-978-563-9590
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Scheduler]

2017-05-31 Thread Narendra Pal Singh
Hello,

Let's say I have multiple compute nodes: Pool-A has 5 nodes and Pool-B has 4
nodes, categorized based on some property.
Now, when there is a request for a new instance, I always want it to be
placed on a compute node in Pool-A.
What would be the best approach to address this situation?
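
One common approach (an assumption about your setup, not the only option) is
to put each pool's hosts in a Nova host aggregate with a distinguishing
property, give the flavor a matching extra spec, and enable the
AggregateInstanceExtraSpecsFilter so the scheduler only passes hosts from the
matching pool. The heart of that matching can be sketched as:

```python
# Simplified sketch of AggregateInstanceExtraSpecsFilter-style matching;
# the real filter also supports operators such as '<in>' and unscoped keys.
PREFIX = "aggregate_instance_extra_specs:"

def host_passes(flavor_extra_specs, aggregate_metadata):
    """True if the host's aggregate metadata satisfies the flavor."""
    for key, wanted in flavor_extra_specs.items():
        if not key.startswith(PREFIX):
            continue  # ignore keys not scoped to this filter in this sketch
        if aggregate_metadata.get(key[len(PREFIX):]) != wanted:
            return False
    return True

flavor = {"aggregate_instance_extra_specs:pool": "A"}
print(host_passes(flavor, {"pool": "A"}))  # True  -> host in Pool-A passes
print(host_passes(flavor, {"pool": "B"}))  # False -> host in Pool-B filtered
```

In a real deployment the pools would be host aggregates (created and tagged
with a property such as pool=A via the CLI, adjusted to your environment), and
the filter must be enabled in the scheduler's filter list.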

-- 
Regards,
NPS
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Zane Bitter

On 31/05/17 09:43, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-31 11:22:50 +0100:

On Wed, 31 May 2017, Graham Hayes wrote:

On 30/05/17 19:09, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:

Note that this goal only applies to tempest _plugins_. Projects
which have their tests in the core of tempest have nothing to do. I
wonder if it wouldn't be more fair for all projects to use plugins
for their tempest tests?


All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.


Thanks for the clarification, Doug. I don't think it changes the
main thrust of what I was trying to say (more below).


[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html


In the InterOp discussions in Boston, it was indicated that some people
on the QA team were not comfortable with "non core" project (even in
the InterOp program) having tests in core tempest.

I do think that may be a bigger discussion though.


I'm not suggesting we change everything (because that would take a
lot of time and energy we probably don't have), but I had some
thoughts in reaction to this and sharing is caring:

The way in which the tempest _repo_ is a combination of smoke,
integration, validation and trademark enforcement testing is very
confusing to me. If we then lay on top of that the concept of "core"
and "not core" with regard to who is supposed to put their tests in
a plugin and who isn't (except when it is trademark related!) it all
gets quite bewildering.

The resolution above says: "the OpenStack community will benefit
from having the interoperability tests used by DefCore in a central
location". Findability is a good goal so this is a reasonable
assertion, but then the directive to lump those tests in with a
bunch of other stuff seems off if the goal is to make it "easier to read and
understand a set of tests".

If, instead, Tempest is a framework and all tests are in plugins
that each have their own repo then it is much easier to look for a
repo (if there is a common pattern) and know "these are the interop
tests for openstack" and "these are the integration tests for nova"
and even "these are the integration tests for the thing we are
currently describing as 'core'[1]".

An area where this probably falls down is with validation. How do
you know which plugins to assemble in order to validate this cloud
you've just built? Except that we already have this problem now that
we are requiring most projects to manage their tempest tests as
plugins. Does it become worse by everything being a plugin?

[1] We really need a better name for this.


Yeah, it sounds like the current organization of the repo is not
ideal in terms of equal playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.


+1

- ZB


The point of centralizing review of that specific set of tests was
to make it easier for interop folks to help ensure the tests continue
to follow the additionally stringent review criteria that comes
with being used as part of the trademark program. The QA team agreed
to do that, so it's news to me that they're considering reversing
course.  If the QA team isn't going to continue, we'll need to
figure out what that means and potentially find another group to
do it.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> >> We had a discussion a few months back around what to do for cryptography
> >> since pycrypto is basically dead [1]. After some discussion, at least on
> >> the Cinder project, we decided the best way forward was to use the
> >> cryptography package instead, and work has been done to completely remove
> >> pycrypto usage.
> >>
> >> It all seemed like a good plan at the time.
> >>
> >> I now notice that for the python-cinderclient jobs, there is a pypy job
> >> (non-voting!) that is failing because the cryptography package is not
> >> supported with pypy.
> >>
> >> So this leaves us with two options I guess. Change the crypto library again,
> >> or drop support for pypy.
> >>
> >> I am not aware of anyone using pypy, and there are other valid working
> >> alternatives. I would much rather just drop support for it than redo our
> >> crypto functions again.
> >>
> >> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
> >> some input?
> 
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new 
> push and be successful at this point seems low at best.
> 
> I'd argue that pypy is already not supported, so dropping the non-voting 
> job doesn't seem like losing very much to me. Reworking cryptography 
> libs again, otoh, seems like a lot of work.
> 
> Monty
> 

This question came up recently for the Oslo libraries, and I think we
also agreed that pypy support was not being actively maintained.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-31 11:22:50 +0100:
> On Wed, 31 May 2017, Graham Hayes wrote:
> > On 30/05/17 19:09, Doug Hellmann wrote:
> >> Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
> >>> Note that this goal only applies to tempest _plugins_. Projects
> >>> which have their tests in the core of tempest have nothing to do. I
> >>> wonder if it wouldn't be more fair for all projects to use plugins
> >>> for their tempest tests?
> >>
> >> All projects may have plugins, but all projects with tests used by
> >> the Interop WG (formerly DefCore) for trademark certification must
> >> place at least those tests in the tempest repo, to be managed by
> >> the QA team [1]. As new projects are added to those trademark
> >> programs, the tests are supposed to move to the central repo to
> >> ensure the additional review criteria are applied properly.
> 
> Thanks for the clarification, Doug. I don't think it changes the
> main thrust of what I was trying to say (more below).
> 
> >> [1] 
> >> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
> >
> > In the InterOp discussions in Boston, it was indicated that some people
> > on the QA team were not comfortable with "non core" project (even in
> > the InterOp program) having tests in core tempest.
> >
> > I do think that may be a bigger discussion though.
> 
> I'm not suggesting we change everything (because that would take a
> lot of time and energy we probably don't have), but I had some
> thoughts in reaction to this and sharing is caring:
> 
> The way in which the tempest _repo_ is a combination of smoke,
> integration, validation and trademark enforcement testing is very
> confusing to me. If we then lay on top of that the concept of "core"
> and "not core" with regard to who is supposed to put their tests in
> a plugin and who isn't (except when it is trademark related!) it all
> gets quite bewildering.
> 
> The resolution above says: "the OpenStack community will benefit
> from having the interoperability tests used by DefCore in a central
> location". Findability is a good goal so this a reasonable
> assertion, but then the directive to lump those tests in with a
> bunch of other stuff seems off if the goal is to "easier to read and
> understand a set of tests".
> 
> If, instead, Tempest is a framework and all tests are in plugins
> that each have their own repo then it is much easier to look for a
> repo (if there is a common pattern) and know "these are the interop
> tests for openstack" and "these are the integration tests for nova"
> and even "these are the integration tests for the thing we are
> currently describing as 'core'[1]".
> 
> An area where this probably falls down is with validation. How do
> you know which plugins to assemble in order to validate this cloud
> you've just built? Except that we already have this problem now that
> we are requiring most projects to manage their tempest tests as
> plugins. Does it become worse by everything being a plugin?
> 
> [1] We really need a better name for this.

Yeah, it sounds like the current organization of the repo is not
ideal in terms of a level playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.

The point of centralizing review of that specific set of tests was
to make it easier for interop folks to help ensure the tests continue
to follow the additionally stringent review criteria that comes
with being used as part of the trademark program. The QA team agreed
to do that, so it's news to me that they're considering reversing
course.  If the QA team isn't going to continue, we'll need to
figure out what that means and potentially find another group to
do it.

Doug



[openstack-dev] [tripleo] custom configuration to overcloud fails second time

2017-05-31 Thread Dnyaneshwar Pawar
Hi TripleO Experts,

I performed following steps -

  1.  openstack overcloud deploy --templates -e myconfig_1.yaml
  2.  openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1 successfully applied the custom configuration to the overcloud.
Step 2 completed successfully, but its custom configuration is not applied to the
overcloud, and the configuration applied by step 1 remains unchanged.

Do I need to do anything before performing step 2?
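My guess is that the environment files passed with -e are cumulative per
deploy command rather than remembered between runs, so step 2 should
perhaps have passed both files, i.e.:

  3.  openstack overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml

Is that the expected usage?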


Thanks and Regards,
Dnyaneshwar


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Graham Hayes
On 31/05/17 11:22, Chris Dent wrote:
> On Wed, 31 May 2017, Graham Hayes wrote:
>> On 30/05/17 19:09, Doug Hellmann wrote:
>>> Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
 Note that this goal only applies to tempest _plugins_. Projects
 which have their tests in the core of tempest have nothing to do. I
 wonder if it wouldn't be more fair for all projects to use plugins
 for their tempest tests?
>>>
>>> All projects may have plugins, but all projects with tests used by
>>> the Interop WG (formerly DefCore) for trademark certification must
>>> place at least those tests in the tempest repo, to be managed by
>>> the QA team [1]. As new projects are added to those trademark
>>> programs, the tests are supposed to move to the central repo to
>>> ensure the additional review criteria are applied properly.
> 
> Thanks for the clarification, Doug. I don't think it changes the
> main thrust of what I was trying to say (more below).
> 
>>> [1]
>>> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
>>>
>>
>> In the InterOp discussions in Boston, it was indicated that some people
>> on the QA team were not comfortable with "non core" project (even in
>> the InterOp program) having tests in core tempest.
>>
>> I do think that may be a bigger discussion though.
> 
> I'm not suggesting we change everything (because that would take a
> lot of time and energy we probably don't have), but I had some
> thoughts in reaction to this and sharing is caring:
> 
> The way in which the tempest _repo_ is a combination of smoke,
> integration, validation and trademark enforcement testing is very
> confusing to me. If we then lay on top of that the concept of "core"
> and "not core" with regard to who is supposed to put their tests in
> a plugin and who isn't (except when it is trademark related!) it all
> gets quite bewildering.
> 
> The resolution above says: "the OpenStack community will benefit
> from having the interoperability tests used by DefCore in a central
> location". Findability is a good goal so this a reasonable
> assertion, but then the directive to lump those tests in with a
> bunch of other stuff seems off if the goal is to "easier to read and
> understand a set of tests".
> 
> If, instead, Tempest is a framework and all tests are in plugins
> that each have their own repo then it is much easier to look for a
> repo (if there is a common pattern) and know "these are the interop
> tests for openstack" and "these are the integration tests for nova"
> and even "these are the integration tests for the thing we are
> currently describing as 'core'[1]".
> 
> An area where this probably falls down is with validation. How do
> you know which plugins to assemble in order to validate this cloud
> you've just built? Except that we already have this problem now that
> we are requiring most projects to manage their tempest tests as
> plugins. Does it become worse by everything being a plugin?

No - this was the gist of my point last year when I proposed the
plugins first (or plugins for all as I called it at the time).

It actually gets better - for a few reasons.

1. We can have a interop-tempest-plugin (under QA control) where all
   interop tests can go, and be tagged for each standard.
   This is good for many reasons, mainly that the tests are curated,
   and projects cannot change tests without someone from QA-core team
   checking it to make sure it does not break backwards compatibility.

2. With some interop tests being out of the tempest tree, there is
   a very real possibility of a change in tempest blocking someone
   getting certification - tempest does not gate against plugins
   and has broken interfaces in the past.

3. With the new interop programs, the old definition of "core" is more
   murky - and it is looking more and more like the criteria for
   inclusion in openstack/tempest is "be there from the beginning".

> [1] We really need a better name for this.
100% - but I have tried and failed to find a better descriptor that
everyone understands.

- Graham
> 
> 




[openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-05-31 Thread Akihiro Motoki
Hi all,

As discussed last month [1], we agree that each neutron-related
dashboard has its own repository.
I would like to move this forward on FWaaS and VPNaaS
as the horizon team plans to split them out as horizon plugins.

A couple of questions hit me.

(1) launchpad project
Do we create a new launchpad project for each dashboard?
At the moment, FWaaS and VPNaaS use the 'neutron' launchpad project for
their bug tracking for historical reasons, which is sometimes confusing.
There are two choices: the one is to accept dashboard bugs in the
'neutron' launchpad, and the other is to have a separate launchpad project.

My vote is to create a separate launchpad project.
It allows users to search and file bugs easily.

(2) repository name

Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
names for you?
Most horizon related projects use -dashboard or -ui as their repo names.
I personally prefer -dashboard as it is consistent with the
OpenStack dashboard (the official name of horizon). On the other hand,
I know some folks prefer -ui as the name is shorter.
Any preference?

(3) governance
neutron-fwaas project is under the neutron project.
Does it sound okay to have neutron-fwaas-dashboard under the neutron project?
This is what the neutron team did for neutron-lbaas-dashboard before,
and this model is adopted by most horizon plugins (like trove, sahara
or others).

(4) initial core team

My thought is to have neutron-fwaas/vpnaas-core and horizon-core as
the initial core team.
The release team and the stable team follow what we have for
neutron-fwaas/vpnaas projects.
Sounds reasonable?


Finally, I have already prepared the split-out versions of the FWaaS and
VPNaaS dashboards in my personal github repos.
Once we agree on the questions above, I will create the repositories
under git.openstack.org.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-April/thread.html#115200



Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-31 Thread Major Hayden
On 05/23/2017 12:23 PM, Major Hayden wrote:
> I'll see if we can move forward with 'ansible-hardening' and keep everyone 
> updated! :)

The repo is up and ready to go:

  https://github.com/openstack/ansible-hardening

There are some patches proposed to get the 'openstack-ansible-security' 
references changed to 'ansible-hardening'.

--
Major Hayden





Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Monty Taylor

On 05/31/2017 06:39 AM, Sean McGinnis wrote:

On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:

We had a discussion a few months back around what to do for cryptography
since pycrypto is basically dead [1]. After some discussion, at least on
the Cinder project, we decided the best way forward was to use the
cryptography package instead, and work has been done to completely remove
pycrypto usage.

It all seemed like a good plan at the time.

I now notice that for the python-cinderclient jobs, there is a pypy job
(non-voting!) that is failing because the cryptography package is not
supported with pypy.

So this leaves us with two options I guess. Change the crypto library again,
or drop support for pypy.

I am not aware of anyone using pypy, and there are other valid working
alternatives. I would much rather just drop support for it than redo our
crypto functions again.

Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
some input?


There was work a few years ago to get pypy support going - but it never 
really seemed to catch on. The chance that we're going to start a new 
push and be successful at this point seems low at best.


I'd argue that pypy is already not supported, so dropping the non-voting 
job doesn't seem like losing very much to me. Reworking cryptography 
libs again, otoh, seems like a lot of work.


Monty



Re: [Openstack-operators] Mitaka and gnocchi

2017-05-31 Thread mate200
Thanks Gordon! I've found what I need!

Regarding the output problem, for now I execute
gnocchi-api -p 8041 &>> /var/log/gnocchi-uwsgi.log, which solves the
problem. I will try to run the API part with uwsgi options.
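
For anyone landing here from search, the gnocchi CLI equivalent of the
old ceilometer sample-list seems to be something like the following
(the instance id below is a placeholder):

  gnocchi resource show <instance-uuid>
  gnocchi measures show memory.usage --resource-id <instance-uuid>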
-- 
Best regards,
Mate200

On Tue, 2017-05-30 at 20:11 +, gordon chung wrote:
> On 30/05/17 10:42 AM, mate...@mailbox.org wrote:
> > Second thing, that I don't understand is how to get access to stored
> > data. For instance, with ceilometer I can execute /ceilometer
> > sample-list -m memory.usage -q resource_id= /
> > and receive memory usage for some instance. Now, If i execute previous
> > command I get '('Connection aborted.', BadStatusLine("''",))', so if I
> > understand right I should use gnocchiclient.
> > I've been playing with it for a few hours already, but still no luck.
> > Could you point me in right direction ?
> > 
> 
> gnocchi is a completely different api from ceilometer. see: 
> http://gnocchi.xyz/gnocchiclient/shell.html for cli usage; 
> http://gnocchi.xyz/rest.html for REST and 
> https://www.slideshare.net/GordonChung/ceilometer-to-gnocchi if you're 
> not sure what difference between ceilometer and gnocchi is (made that 
> over a year ago, so it doesn't cover everything in gnocchi)
> 
> cheers,
> -- 
> gord



[openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Gorka Eguileor
Hi,

As some of you may know I've been working on improving iSCSI connections
on OpenStack to make them more robust and prevent them from leaving
leftovers on attach/detach operations.

There are a couple of posts [1][2] going in more detail, but a good
summary would be that to fix this issue we require a considerable rework
in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.

Relevant changes for those projects are:

- Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
  case, so a new feature was added to disable automatic scans that added
  unintended devices to the systems.  Done and merged [3][4], it will be
  available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7
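
  As a sketch of what this looks like on the initiator side — the exact
  option name here is from memory, so treat it as an assumption and
  check the merged change:

      # /etc/iscsi/iscsid.conf
      node.session.scan = manual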

- OS-Brick: rework iSCSI to make it robust on unreliable networks, to
  add a `force` detach option that prioritizes leaving a clean system
  over possible data loss, and to support the new Open iSCSI feature.
  Done and pending review [5][6][7]

- Cinder: Handle some attach/detach errors a little better and add
  support to the force detach option for some operations where data loss
  on error is acceptable, ie: create volume from image, restore backup,
  etc. Done and pending review [8][9]

- Nova: I haven't looked into the code here, but I'm sure there will be
  cases where using the force detach operation will be useful.

- Tests: While we do have tempest tests that verify that attach/detach
  operations work both in Nova and in cinder volume creation operations,
  they are not meant to test the robustness of the system, so new tests
  will be required to validate the code.  Done [10]

Proposed tests are simplified versions of the ones I used to validate
the code; but hey, at least these are somewhat readable ;-)
Unfortunately they are not in line with the tempest mission since they
are not meant to be run in a production environment due to their
disruptive nature while injecting errors.  They need to be run
sequentially and without any other operations running on the deployment.
They also run sudo commands via local bash or SSH for the verification
and error generation bits.

We are testing create volume from image and attaching a volume to an
instance under the following networking error scenarios:

 - No errors
 - All paths have 10% incoming packets dropped
 - All paths have 20% incoming packets dropped
 - All paths have 100% incoming packets dropped
 - Half the paths have 20% incoming packets dropped
 - The other half of the paths have 20% incoming packets dropped
 - Half the paths have 100% incoming packets dropped
 - The other half of the paths have 100% incoming packets dropped
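
For reference, the error generation boils down to iptables rules using
the statistic match; a rule along these lines (a sketch, not the exact
command the tests run) achieves the 20% drop case, with $PORTAL_IP
standing in for one of the backend portal addresses:

 $ sudo iptables -A INPUT -s $PORTAL_IP -p tcp --sport 3260 \
       -m statistic --mode random --probability 0.20 -j DROP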

There are single execution versions as well as 10 consecutive operations
variants.

Since these are big changes I'm sure we would all feel a lot more
confident to merge them if storage vendors would run the new tests to
confirm that there are no issues with their backends.

Unfortunately to fully test the solution you may need to build the
latest Open-iSCSI package and install it in the system, then you can
just use an all-in-one DevStack with a couple of changes in the local.conf:

   enable_service tempest

   CINDER_REPO=https://review.openstack.org/p/openstack/cinder
   CINDER_BRANCH=refs/changes/45/469445/1

   LIBS_FROM_GIT=os-brick

   OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
   OS_BRICK_BRANCH=refs/changes/94/455394/11

   [[post-config|$CINDER_CONF]]
   [multipath-backend]
   use_multipath_for_image_xfer=true

   [[post-config|$NOVA_CONF]]
   [libvirt]
   volume_use_multipath = True

   [[post-config|$KEYSTONE_CONF]]
   [token]
   expiration = 14400

   [[test-config|$TEMPEST_CONFIG]]
   [volume-feature-enabled]
   multipath = True
   [volume]
   build_interval = 10
   multipath_type = $MULTIPATH_VOLUME_TYPE
   backend_protocol_tcp_port = 3260
   multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2

Multinode configurations are also supported using SSH with user/password or
private key to introduce the errors or check that the systems didn't leave any
leftovers, the tests can also run a cleanup command between tests, etc., but
that's beyond the scope of this email.

Then you can run them all from /opt/stack/tempest with:

 $ cd /opt/stack/tempest
 $ OS_TEST_TIMEOUT=7200 ostestr -r cinder.tests.tempest.scenario.test_multipath.*

But I would recommend first running the simplest one without errors and
manually checking that the multipath is being created.

 $ ostestr -n cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_1

Then doing the same with one with errors and verify the presence of the
filters in iptables and that the packet drop for those filters is non zero:

 $ ostestr -n cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_2
 $ sudo iptables -nvL INPUT

Then doing the same with a Nova test just to verify that it is correctly
configured to use multipathing:

 $ ostestr -n 

Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> We had a discussion a few months back around what to do for cryptography
> since pycrypto is basically dead [1]. After some discussion, at least on
> the Cinder project, we decided the best way forward was to use the
> cryptography package instead, and work has been done to completely remove
> pycrypto usage.
> 
> It all seemed like a good plan at the time.
> 
> I now notice that for the python-cinderclient jobs, there is a pypy job
> (non-voting!) that is failing because the cryptography package is not
> supported with pypy.
> 
> So this leaves us with two options I guess. Change the crypto library again,
> or drop support for pypy.
> 
> I am not aware of anyone using pypy, and there are other valid working
> alternatives. I would much rather just drop support for it than redo our
> crypto functions again.
> 
> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
> some input?
> 
> Sean
> 

That missing reference:

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113568.html
> 



[openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
We had a discussion a few months back around what to do for cryptography
since pycrypto is basically dead [1]. After some discussion, at least on
the Cinder project, we decided the best way forward was to use the
cryptography package instead, and work has been done to completely remove
pycrypto usage.

It all seemed like a good plan at the time.

I now notice that for the python-cinderclient jobs, there is a pypy job
(non-voting!) that is failing because the cryptography package is not
supported with pypy.

So this leaves us with two options I guess. Change the crypto library again,
or drop support for pypy.

I am not aware of anyone using pypy, and there are other valid working
alternatives. I would much rather just drop support for it than redo our
crypto functions again.

Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
some input?

Sean




Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-31 Thread Rob Cresswell (rcresswe)

[horizon]
django-openstack-auth - blocking django - intermediary

https://review.openstack.org/#/c/469420 is up to release django_openstack_auth. 
Sorry for the delays.

Rob


[openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Chris Dent

On Wed, 31 May 2017, Graham Hayes wrote:

On 30/05/17 19:09, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:

Note that this goal only applies to tempest _plugins_. Projects
which have their tests in the core of tempest have nothing to do. I
wonder if it wouldn't be more fair for all projects to use plugins
for their tempest tests?


All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.


Thanks for the clarification, Doug. I don't think it changes the
main thrust of what I was trying to say (more below).


[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html


In the InterOp discussions in Boston, it was indicated that some people
on the QA team were not comfortable with "non core" project (even in
the InterOp program) having tests in core tempest.

I do think that may be a bigger discussion though.


I'm not suggesting we change everything (because that would take a
lot of time and energy we probably don't have), but I had some
thoughts in reaction to this and sharing is caring:

The way in which the tempest _repo_ is a combination of smoke,
integration, validation and trademark enforcement testing is very
confusing to me. If we then lay on top of that the concept of "core"
and "not core" with regard to who is supposed to put their tests in
a plugin and who isn't (except when it is trademark related!) it all
gets quite bewildering.

The resolution above says: "the OpenStack community will benefit
from having the interoperability tests used by DefCore in a central
location". Findability is a good goal so this a reasonable
assertion, but then the directive to lump those tests in with a
bunch of other stuff seems off if the goal is to "easier to read and
understand a set of tests".

If, instead, Tempest is a framework and all tests are in plugins
that each have their own repo then it is much easier to look for a
repo (if there is a common pattern) and know "these are the interop
tests for openstack" and "these are the integration tests for nova"
and even "these are the integration tests for the thing we are
currently describing as 'core'[1]".

An area where this probably falls down is with validation. How do
you know which plugins to assemble in order to validate this cloud
you've just built? Except that we already have this problem now that
we are requiring most projects to manage their tempest tests as
plugins. Does it become worse by everything being a plugin?
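
(For context on discovery: tempest finds plugins through setuptools
entry points in the tempest.test_plugins namespace, so "which plugins
to assemble" today really means "which plugin packages are installed".
A typical registration stanza, with hypothetical names, looks like:

   [entry_points]
   tempest.test_plugins =
       my_service_tests = my_service_tempest_plugin.plugin:MyTempestPlugin
)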

[1] We really need a better name for this.
--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [openstack-ansible] mount ceph block from an instance

2017-05-31 Thread Jean-Philippe Evrard
Hello, 

That was indeed my suggestion.
The alternative would be to make sure your ceph can be routed through your 
public network. But it’s not my infrastructure, I don’t know what you store as 
data, etc…

In either case, you're making it possible for your tenants to access a part of 
your infra (the ceph cluster that's used for openstack too), so you should 
think about the implications twice (bad neighbours, security intrusions…).
 
Best regards,
JP


On 29/05/2017, 10:27, "fabrice grelaud"  wrote:

Thanks for the answer.

My use case is for a file-hosting software system like « Seafile »  which 
can use a ceph backend (swift too but we don’t deploy swift on our infra).

The network configuration of our infra is identical to your OSA 
documentation. So, on our compute nodes we have two bonding interfaces (bond0 and 
bond1).
The ceph vlan is currently propagated on bond0 (where br-storage is attached) 
to provide the ceph backend for our openstack.
And on bond1, among others, we have br-vlan for our provider vlans.

If I understood correctly, the solution is to also propagate the ceph 
vlan on bond1 on our switch, and to create the provider network in neutron 
so it is reachable in the tenant by our file-hosting software.

Regarding security, would using the neutron RBAC tool to share this 
provider network only with the tenant in question be sufficient?
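
Something like the following is what I have in mind (placeholder ids):

  openstack network rbac create --target-project <tenant-project-id> \
      --action access_as_shared --type network <ceph-provider-network>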

I’m all ears ;-) if you have another alternative.

Regards,
Fabrice


> On 25 May 2017 at 14:01, Jean-Philippe Evrard 
 wrote:
> 
> I doubt many people have tried this, because 1) cinder/nova/glance 
probably do the job well in a multi-tenant fashion 2) you’re poking holes into 
your ceph cluster security.
> 
> Anyway, if you still want it, you would need (I guess) have to create a 
provider network that will be allowed to access your ceph network.
> 
> You can either route it from your current public network, or create 
another network. It’s 100% up to you, and not osa specific.
> 
> Best regards,
> JP
> 
> On 24/05/2017, 15:02, "fabrice grelaud"  
wrote:
> 
>Hi osa team,
> 
>i have a multinode openstack-ansible deployed, ocata 15.1.3, with ceph 
as backend for cinder (with our own ceph infra).
> 
>After creating an instance with a root volume, I would like to mount a 
ceph block device or cephfs directly in the vm (not a cinder volume). So I want to 
attach a new interface to the vm that is in the ceph vlan.
>How can I do that?
> 
>We have our ceph vlan propagated on bond0 interface (bond0.xxx and 
br-storage configured as documented) for openstack infrastructure.
> 
>Should I propagate this vlan on the bond1 interface where my 
br-vlan is attached?
>Should I use the existing br-storage where the ceph vlan is 
already propagated (bond0.xxx)? And how do I create the ceph vlan network in 
neutron (directly via neutron, or via horizon)?
> 
>Has anyone ever experienced this?
> 
>
__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 




[Openstack-operators] problem with nova placement after update of cloud from Mitaka to Ocata

2017-05-31 Thread federica fanzago

Hello operators,
we have a problem with the placement service after updating our cloud from 
Mitaka to the Ocata release.


We started from a Mitaka cloud and followed these steps: we updated 
the cloud controller from Mitaka to Newton and ran the db sync, then updated 
from Newton to Ocata, adding the nova_cell0 database at this step, and ran the 
db sync again. Then we updated the compute nodes directly from Mitaka to Ocata.


With the update to Ocata we added the placement section to 
nova.conf, configured the related endpoint and installed the package 
openstack-nova-placement-api (placement wasn't enabled in Newton).


When verifying the upgrade, the command "nova-status upgrade check" fails 
with the following error:


Error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386, in check
    result = func(self)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201, in _check_placement
    versions = self._placement_get("/")
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189, in _placement_get
    return client.get(path, endpoint_filter=ks_filter).json()
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 758, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
    return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 655, in request
    raise exceptions.from_response(resp, method, url)
ServiceUnavailable: Service Unavailable (HTTP 503)

Do you have suggestions about how to debug the problem?
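One way to narrow this down: when the endpoint is healthy, a GET on the placement root returns a small JSON version document instead of a 503. A sketch of a sanity check along those lines (the helper and the sample document are illustrative, not nova's actual code, and the exact fields may differ between releases):

```python
# Illustrative sketch: when placement is healthy, "GET /" returns a small
# JSON version document rather than a 503.  The helper and sample below are
# examples only (not nova's code); exact fields vary between releases.

def looks_like_placement(doc):
    """Return True if `doc` resembles a placement version document."""
    versions = doc.get("versions")
    if not isinstance(versions, list) or not versions:
        return False
    # Every entry should at least carry an API version id.
    return all("id" in v for v in versions)

# Roughly what an Ocata placement endpoint answers on "GET /".
sample = {"versions": [{"id": "v1.0", "min_version": "1.0", "max_version": "1.4"}]}
print(looks_like_placement(sample))   # -> True
print(looks_like_placement({}))       # -> False
```

If fetching the placement endpoint directly (with a valid token) does not return a document of this shape, the 503 usually points at the Apache/mod_wsgi side of openstack-nova-placement-api rather than at nova itself; the placement API's own error log is the place to look next.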

Thanks a lot,
cheers,
   Federica


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-05-31 Thread Emilien Macchi
Hey folks,

I've been playing with deploying Neutron in WSGI with Apache, and
Tempest tests fail when spawning a Nova server at the point where Neutron
ports are created:
http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/console.html#_2017-05-30_13_09_22_715400

I haven't found anything useful in neutron-server logs:
http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz

Before I file a bug in neutron, can anyone look at the logs with me
and see if I missed something in the config:
http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz
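For comparison while reviewing, a neutron-server vhost under mod_wsgi usually has roughly this shape (a generic sketch; the port, paths, user and process counts here are assumptions, not the values from the failing config):

```apache
Listen 9696
<VirtualHost *:9696>
    WSGIDaemonProcess neutron-server processes=2 threads=1 user=neutron group=neutron
    WSGIProcessGroup neutron-server
    WSGIScriptAlias / /var/www/cgi-bin/neutron/app
    WSGIApplicationGroup %{GLOBAL}
    # Without this, keystonemiddleware never receives the auth token header.
    WSGIPassAuthorization On
    ErrorLog /var/log/httpd/neutron_wsgi_error.log
    CustomLog /var/log/httpd/neutron_wsgi_access.log combined
</VirtualHost>
```

If anything, WSGIPassAuthorization and the WSGIScriptAlias target are the two lines worth double-checking first.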

Thanks for the help,
-- 
Emilien Macchi



Re: [openstack-dev] [devstack] systemd + ENABLED_SERVICES + user_init_file

2017-05-31 Thread Markus Zoeller
On 11.05.2017 15:56, Markus Zoeller wrote:
> I'm working on a nova live-migration hook which configures and starts
> the nova-serialproxy service, runs a subset of tempest tests, and tears
> down the previously started service.
> 
>https://review.openstack.org/#/c/347471/47
> 
> After the change to "systemd", I thought all I had to do was start
> the service with:
> 
>systemctl enable devstack@n-sproxy
>systemctl restart devstack@n-sproxy
> 
> But this results in the error "Failed to execute operation: No such file
> or directory". The reason is that there is no systemd "user unit file".
> This file gets written in Devstack at:
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529
> 
> For that to happen, a service must be in the list "ENABLED_SERVICES":
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1572-L1574
> 
> Which is *not* the case for the "n-sproxy" service:
> 
> https://github.com/openstack-dev/devstack/blob/8b8441f3becbae2e704932569bff384dcc5c6713/stackrc#L55-L56
> 
> I'm not sure how to approach this problem. I could:
> 1) add "n-sproxy" to the default ENABLED_SERVICES list for Nova in
>Devstack
> 2) always write the systemd user unit file in Devstack
>(regardless of whether the service is enabled)
> 3) Write the "user unit file" on the fly in the hook (in Nova).
> 4) ?
> 
> Right now I lean towards option 2, as I think it brings more flexibility (for
> other services too) with less change in the set of default enabled
> services in the gate.
> 
> Is this the right direction? Any other thoughts?
> 
> 

FWIW, here's my attempt to implement 2):
https://review.openstack.org/#/c/469390/
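For readers who hit the same error: "Failed to execute operation: No such file or directory" simply means systemd has no unit file for devstack@n-sproxy. The file Devstack would have generated is an ordinary service unit, roughly like this sketch (the field values, ExecStart path, user and workdir are illustrative assumptions, not Devstack's exact output):

```python
# Simplified sketch of the per-service "user unit file" Devstack writes.
# The exact fields Devstack emits may differ; ExecStart, User and
# WorkingDirectory values below are examples, not Devstack's real ones.

def render_user_unit(name, command, user, workdir):
    """Render the text of an /etc/systemd/system/<name> unit file."""
    return "\n".join([
        "[Unit]",
        f"Description = Devstack {name}",
        "",
        "[Service]",
        f"ExecStart = {command}",
        f"User = {user}",
        f"WorkingDirectory = {workdir}",
        "KillMode = process",
        "",
    ])

unit = render_user_unit(
    "devstack@n-sproxy.service",
    "/usr/local/bin/nova-serialproxy --config-file /etc/nova/nova.conf",
    "stack",
    "/opt/stack/nova",
)
print(unit)
```

Once a file with this content exists under /etc/systemd/system/, "systemctl enable devstack@n-sproxy" stops failing with "No such file or directory", which is what options 1) to 3) above achieve in different ways.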

-- 
Regards, Markus Zoeller (markus_z)




  1   2   >