Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-31 Thread Zhenguo Niu
On Wed, May 31, 2017 at 10:01 PM, Jay Pipes  wrote:

> On 05/31/2017 01:31 AM, Zhenguo Niu wrote:
>
>> On Wed, May 31, 2017 at 12:20 PM, Ed Leafe <e...@leafe.com> wrote:
>>
>> On May 30, 2017, at 9:36 PM, Zhenguo Niu wrote:
>>
>> > as placement is not split out from nova now, and there would be
>> users who only want a baremetal cloud, so we don't add resources to
>> placement yet, but it's easy for us to turn to placement to match the node
>> type with mogan flavors.
>>
>> Placement is a separate service, independent of Nova. It tracks
>> Ironic nodes as individual resources, not as a "pretend" VM. The
>> Nova integration for selecting an Ironic node as a resource is still
>> being developed, as we need to update our view of the mess that is
>> "flavors", but the goal is to have a single flavor for each Ironic
>> machine type, rather than the current state of flavors pretending
>> that an Ironic node is a VM with certain RAM/CPU/disk quantities.
>>
>>
>> Yes, I understand the current efforts to improve baremetal node
>> scheduling. It doesn't conflict with mogan's goal, and when it is done, we
>> can share the same scheduling strategy with placement :)
>>
>> Mogan is a service for a specific group of users who really want a
>> baremetal resource instead of a generic compute resource. On the API side,
>> we can expose RAID, advanced partitions, NIC bonding, firmware management,
>> and other baremetal-specific capabilities to users. And unlike nova's
>> host-based availability zones, host aggregates, and server groups (ironic
>> nodes share the same host), mogan can make it possible to divide baremetal
>> nodes into such groups, and to make scheduling rack-aware for affinity and
>> anti-affinity.
>>
> Zhenguo Niu brings up a very good point here. Currently, all Ironic nodes
> are associated with a single host aggregate in Nova, because of the
> vestigial notion that a compute *service* (a la the nova-compute worker) was
> equal to the compute *node*.
>
> In the placement API, of course, there's no such coupling. A placement
> aggregate != a Nova host aggregate.
>
> So, technically Ironic (or Mogan) can call the placement service to create
> aggregates that match *its* definition of what an aggregate is (rack, row,
> cage, zone, DC, whatever). Furthermore, Ironic (or Mogan) can associate
> Ironic baremetal nodes to one or more of those placement aggregates to get
> around Nova host aggregate to compute service coupling.
>
> That said, there's still lots of missing pieces before placement gets
> affinity/anti-affinity support...
>

Thanks Jay, we are also considering how to leverage the placement
aggregates, and if possible, we would like to contribute in this area to
make placement work well for mogan :)
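To make the rack-aware idea concrete, here is a toy sketch in plain Python (hypothetical data, not the placement API) of anti-affinity over rack-shaped aggregates: each server in an anti-affine group is drawn from a distinct rack.

```python
# Toy illustration of rack-aware anti-affinity over placement-style
# aggregates: each aggregate groups baremetal nodes by rack, and an
# anti-affinity request for N servers picks nodes from N distinct racks.
# The data and helper are invented for illustration.

rack_aggregates = {
    "rack-1": ["node-a", "node-b"],
    "rack-2": ["node-c"],
    "rack-3": ["node-d", "node-e"],
}

def pick_anti_affine(rack_aggregates, count):
    """Choose one free node from each of `count` different racks."""
    chosen = []
    for rack, nodes in sorted(rack_aggregates.items()):
        if len(chosen) >= count:
            break
        if nodes:
            chosen.append((rack, nodes[0]))
    if len(chosen) < count:
        raise ValueError("not enough racks for anti-affinity")
    return chosen

print(pick_anti_affine(rack_aggregates, 2))
# → [('rack-1', 'node-a'), ('rack-2', 'node-c')]
```

An affinity policy would be the mirror image: restrict all picks to the node list of a single aggregate.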


>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-31 Thread Shinobu Kinjo
Hi Carlos,

Since I thought that this was a kind of documentation bug, I filed a
bug on this as a doc bug.
Now I see that you have been assigned to it...

Regards,
Shinobu Kinjo


On Tue, May 30, 2017 at 5:48 PM, Carlos Camacho Gonzalez
 wrote:
> Hi Shinobu,
>
> It's really helpful to get feedback from customers. Please, can you give me
> details about the failures you are having? If so, sending me some logs
> directly would be great.
>
> Thanks,
> Carlos.
>
>
> On Mon, May 29, 2017 at 9:07 AM, Shinobu Kinjo  wrote:
>>
>> Here is feedback from the customer.
>>
>> Following the guide [1], the undercloud restoration did not succeed.
>>
>> Swift objects could not be downloaded after restoration, even
>> though they followed all the procedures for backing up / restoring
>> their system described in [1].
>>
>> Given that, I'm not 100% sure whether `tar -czf` is good enough to take a
>> backup of the system or not.
>>
>> It would be a great help to be able to do a dry run against the backed-up
>> data so that we can make sure it is completely fine.
>>
>> [1]
>> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/back_up_and_restore_the_undercloud
>>
>>
>> On Wed, May 24, 2017 at 4:26 PM, Carlos Camacho Gonzalez
>>  wrote:
>> > Hey folks,
>> >
>> > Based on what we discussed yesterday in the TripleO weekly team meeting,
>> > I'd like to propose a blueprint to create two features, basically to back
>> > up and restore the Undercloud.
>> >
>> > In the first iteration I'd like to follow the available docs for this
>> > purpose [1][2].
>> >
>> > With the addition of backing up the config files under /etc/, specifically
>> > to be able to recover from a failed Undercloud upgrade, i.e. to recover
>> > the repos info removed in [3].
>> >
>> > I'd like to target this for P, as I think I have enough time for
>> > coding/testing these features.
>> >
>> > I already have created a blueprint to track this effort
>> > https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
>> >
>> > What do you think about it?
>> >
>> > Thanks,
>> > Carlos.
>> >
>> > [1]:
>> >
>> > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/restore
>> >
>> > [2]:
>> >
>> > https://docs.openstack.org/developer/tripleo-docs/post_deployment/backup_restore_undercloud.html
>> >
>> > [3]:
>> >
>> > https://docs.openstack.org/developer/tripleo-docs/installation/updating.html
>> >
>> >
>> >


Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-31 Thread Lance Haig

Hi,

One question I have not asked on this thread is: what would you like to see
changed within the repository, and do you have a suggestion on how to fix it?



Regards

Lance


On 31.05.17 16:03, Lance Haig wrote:

Hi,


On 24.05.17 18:43, Zane Bitter wrote:

On 19/05/17 11:00, Lance Haig wrote:

Hi,

As we know, the heat-templates repository has become out of date in some
respects and has also been difficult to maintain from a community
perspective.

For me the repository is quite confusing, with different styles used to show
certain aspects and other styles for older template
examples.


This, I think, leads to confusion, and perhaps many people give up on
heat as a resource because things are not that clear.

From discussions in other threads and on the IRC channel I have seen
that there is a need to change things a bit.


This is why I would like to start the discussion that we rethink the
template example repository.

I would like to open the discussion with my suggestions.

  * We need to differentiate templates that work on earlier versions of
heat from those that work on the currently supported versions.


I typically use the heat_template_version for this. Technically this 
is entirely independent of what resource types are available in Heat. 
Nevertheless, if I submit e.g. a template that uses new resources 
only available in Ocata, I'll set 'heat_template_version: ocata' even 
if the template doesn't contain any Ocata-only intrinsic functions. 
We could make that a convention.
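A minimal sketch of what that convention would look like in practice (the resource type here is chosen arbitrarily for illustration):

```yaml
# Sketch of the proposed convention: pin the template version to the
# release whose resources the template needs, even when no
# version-specific intrinsic functions are used.
heat_template_version: ocata

description: Example pinned to the Ocata feature set.

resources:
  example_network:
    type: OS::Neutron::Net
```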

That is one way to achieve this.



  o I have suggested that we create directories that relate to
different versions, so that you can create a stable set of
examples for each heat version; these should always remain
stable for that version, and once it goes out of support they
can remain there.


I'm reluctant to move existing things around unless its absolutely 
necessary, because there are a lot of links out in the wild to 
templates that will break. And they're links directly to the Git 
repo, it's not like we publish them somewhere and could add redirects.


Although that gives me an idea: what if we published them somewhere? 
We could make templates actually discoverable by publishing a list of 
descriptions (instead of just the names like you get through browsing 
the Git repo). And we could even add some metadata to indicate what 
versions of Heat they run on.


It would be better to do something like this. One of the biggest 
learning curves that our users have had is understanding what is 
available in what version of heat and then finding examples of 
templates that match their version.
I wanted to create the heat-lib library so that people could easily 
find working examples for their version of heat and also use the 
library intact, as is, so that they can get up to speed really quickly.

This has enabled people to become productive much faster with heat.


  o This would mean people can find their version of heat and know
these templates all work on their version


This would mean keeping multiple copies of each template and 
maintaining them all. I don't think this is the right way to do this 
- to maintain old stuff what you need is a stable branch. That's also 
how you're going to be able to test against old versions of OpenStack 
in the gate.
Well, I am not sure that this would be needed, unless there are many 
backports of new resources to older versions of the templates.
e.g. would the project backport the Newton conditionals to the Liberty 
version of heat? I am assuming not.


That means that once a new version of heat is released, the template set 
becomes locked; you just create a copy with the new template 
version, test for regressions, and once that is complete you start 
adding the changes that are specific to the new version of heat.


I know that initially it would be quite a bit of work to set up and to 
test the versions, but once they are locked you don't touch them 
again.


As I suggested in the other thread, I'd be OK with moving deprecated 
stuff to a 'deprecated' directory and then eventually deleting it. 
Stable branches would then correctly reflect the status of those 
templates at each previous release.
That makes sense. I would like to clarify the above discussion first 
before we look at how to deprecate unsupported versions. I say that as 
many of our customers are still running Liberty :-)




  * We should consider adding a docs section that includes training
for new users.
  o I know that there are documents hosted in the developer area and
these could be utilized, but I would think having a documentation
section in the repository would be a good way to keep the
examples and the documents in the same place.
  o This docs directory could also host some training for new users
and old ones on new features etc., in a similar line to what is
here in this repo https://g

Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Matthew Treinish
On Thu, Jun 01, 2017 at 12:32:03PM +0900, Ghanshyam Mann wrote:
> On Thu, Jun 1, 2017 at 9:46 AM, Matthew Treinish  wrote:
> > On Wed, May 31, 2017 at 04:24:14PM +, Jeremy Stanley wrote:
> >> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
> >> [...]
> >> > Trademark programs are trademark programs - we should have a unified
> >> > process for all of them. Let's not make the same mistakes again by
> >> > creating classes of projects / programs. I do not want this to be
> >> > a distinction as we move forward.
> >>
> >> This I agree with. However I'll be surprised if a majority of the QA
> >> team disagree on this point (logistic concerns with how to curate
> >> this over time I can understand, but that just means they need to
> >> interest some people in working on a manageable solution).
> >
> > +1 I don't think anyone disagrees with this. There is a logistical concern
> > with the way the new proposed programs are going to be introduced. Quite
> > frankly it's too varied and broad and I don't think we'll have enough people
> > working on this space to help maintain it in the same manner.
> >
> > It's the same reason we worked on the plugin decomposition in the first 
> > place.
> > You can easily look at the numbers of tests to see this:
> >
> > https://raw.githubusercontent.com/mtreinish/qa-in-the-open/lca2017/tests_per_proj.png
> >
> > Which shows things before the plugin decomposition (and before the big
> > tent). Just because we said we'd support all the incubated and integrated
> > projects in tempest didn't mean people were contributing and/or the tests
> > were well maintained.
> >
> > But, as I said elsewhere in this thread this is a bit too early to have the
> > conversation because the new interop programs don't actually exist yet.
> 
> Yes, there is no question on goal to have a unified process for all.
> As Jeremy, Matthew mentioned, key things here is manageability issues.
> 
> We know the number of contributors in QA is shrinking cycle by cycle. I
> might be overthinking this, but I thought about the QA team's situation
> when we have around 30-40 trademark projects and all tests in the Tempest
> repo. Personally I am OK to have tests in the Tempest repo or a dedicated
> interop plugin repo which can be controlled by QA at some level, but we

I actually don't think a dedicated interop plugin is a good idea. It doesn't
actually solve anything, because the tests are going to be the same and the
same people are going to be maintaining them. All you did was move it into a
different repo, which solves none of the problems. What I was referring to was
exploring a more distributed approach to handling the tests (like what we did
for plugin decomposition for higher level services). That is the only way I see
us addressing the work overload problem. But, as I said before this is still
too early to talk about because there aren't defined new programs yet, just
the idea for them and a rough plan. We're still talking very much in the
abstract about everything...

-Matt Treinish

> need dedicated participation from interop + a projects liaison (I am not
> sure that worked well in the past, but with TC help it might work :)).
> 
> I can recall that the QA team has many patches on the plugin side to
> improve or fix them, but many of them get no active reviews or much
> attention from the project teams. I am afraid the same will happen for
> trademark projects too.
> 
> Maybe a broad direction on the trademark program and its scope can help
> us estimate the quantity of programs and tests which the QA team needs
> to maintain.





[openstack-dev] [Heat] Template Testing environment

2017-05-31 Thread Lance Haig

Hi All,

I asked on IRC for some guidance on what and how I would be able to test 
templates for the changes to the heat templates repo.


Is there a specific localrc file configuration that I could use for 
Devstack so that I can get all the services I need to be able to run 
tests against my template changes?


As I am going to be making many changes to the templates I would like to 
test these before I submit reviews.


In addition, I want to learn more about how the current tests are run 
with the new "headless" mode.


Is someone able to assist me?

Thanks

Lance




[openstack-dev] [Nova][Scheduler]

2017-05-31 Thread Narendra Pal Singh
Hello,

I am looking for some suggestions. Let's say I have multiple compute nodes:
Pool-A has 5 nodes and Pool-B has 4 nodes, categorized based on some
property.
Now, when there is a request for a new instance, I always want the instance
to be placed on a compute node in Pool-A and never in Pool-B.
What would be the best approach to address this situation?
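For context, the usual Nova answer to this is host aggregates with metadata, combined with flavor extra specs and the AggregateInstanceExtraSpecsFilter enabled in the scheduler. Below is a toy sketch of how that filter's matching logic works; the data structures and helper are illustrative, not Nova's internals.

```python
# Toy illustration of AggregateInstanceExtraSpecsFilter-style matching:
# a host passes only if every aggregate-scoped extra spec on the flavor
# is satisfied by the metadata of an aggregate the host belongs to.

aggregates = {
    "pool-a": {"hosts": {"node1", "node2", "node3", "node4", "node5"},
               "metadata": {"pool": "A"}},
    "pool-b": {"hosts": {"node6", "node7", "node8", "node9"},
               "metadata": {"pool": "B"}},
}

def hosts_for_flavor(extra_specs, aggregates):
    # Keep only aggregate-scoped specs, e.g.
    # "aggregate_instance_extra_specs:pool".
    prefix = "aggregate_instance_extra_specs:"
    wanted = {k[len(prefix):]: v for k, v in extra_specs.items()
              if k.startswith(prefix)}
    matched = set()
    for agg in aggregates.values():
        if all(agg["metadata"].get(k) == v for k, v in wanted.items()):
            matched |= agg["hosts"]
    return matched

print(sorted(hosts_for_flavor(
    {"aggregate_instance_extra_specs:pool": "A"}, aggregates)))
# → ['node1', 'node2', 'node3', 'node4', 'node5']
```

In a real deployment the aggregates and metadata would be created with the Nova API/CLI and the extra spec set on the flavor used for these instances.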

-- 
Best Regards,
Narendra Pal Singh


Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-31 Thread Lee Yarwood
On 31-05-17 20:06:01, Farr, Kaitlin M. wrote:
>> IMHO for now we are better off storing a secret passphrase in Barbican
>> for use with these encrypted volumes, would there be any objections to
>> this? Are there actual plans to use a symmetric key stored in Barbican
>> to directly encrypt and decrypt volumes?
> 
> It sounds like you're thinking that using a key manager object with the 
> type
> "passphrase" is closer to how the encryptors are using the bytes than using 
> the
> "symmetric key" type, but if you switch over to using passphrases,
> where are you going to generate the random bytes?  Would you prefer the
> user to input their own passphrase?  The benefit of continuing to use 
> symmetric
> keys as "passphrases" is that the key manager can randomly generate the bytes.
> Key generation is a standard feature of key managers, but password generation
> is not.

Thanks for responding Kaitlin, I'd be happy to have the key manager
generate a random passphrase of a given length as defined by the volume
encryption type. I don't think we would want any user input here as
ultimately the encryption is transparent to them.

> On a side note, I thought the latest QEMU encryption feature was supposed to
> have support for passing in key material directly to the encryptors?  Perhaps
> this is not true and I am misremembering.

That isn't the case; with the native LUKS support in QEMU we can now
skip the use of the front-end encryptors entirely. We simply provide the
passphrase via a libvirt secret associated with the volume that is then
passed to QEMU in a secure fashion [1] to unlock the LUKS volume. 

[1] 
https://www.berrange.com/posts/2016/04/01/improving-qemu-security-part-3-securely-passing-in-credentials/
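For the curious, the shape of the libvirt XML involved looks roughly like the following; the UUID and volume path are invented for illustration, and the passphrase value itself is set out-of-band (e.g. with `virsh secret-set-value`), never embedded in the XML:

```xml
<!-- The secret object associated with the volume (illustrative): -->
<secret ephemeral='no' private='yes'>
  <uuid>11111111-2222-3333-4444-555555555555</uuid>
  <usage type='volume'>
    <volume>/dev/disk/by-path/example-luks-volume</volume>
  </usage>
</secret>

<!-- Referenced from the encryption element of the domain's disk: -->
<encryption format='luks'>
  <secret type='passphrase' uuid='11111111-2222-3333-4444-555555555555'/>
</encryption>
```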

-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76



Re: [openstack-dev] [mogan] Architecture diagrams

2017-05-31 Thread Zhenguo Niu
On Thu, Jun 1, 2017 at 4:06 AM, Joshua Harlow  wrote:

> Hi mogan folks,
>
> I was doing some source code examination of mogan and it piqued my
> interest in how it is all connected together. In part I see there is a
> state machine, some taskflow usage, some wsgi usage that looks like parts
> of it are inspired(?) by various other projects.
>
> That got me wondering whether there are any decent diagrams or documents that
> explain how it all connects together, and I thought I might as well ask and
> see if there are any (50/50 chances? ha).
>

Hi Josh, you can find some diagrams/documents on our wiki [1]; sorry for
the lack of docs, we will enrich them soon.


>
> I am especially interested in the state machine, taskflow and such (no
> tooz seems to be there) and how they are used (if they are, or are going to
> be used); I guess in part because I know the most about those
> libraries/components :)
>
>
In fact, we use the same state machine library as ironic to help control
the baremetal server state changes. And we introduced a linear taskflow for
create_server to reliably ensure that the workflow is executed in a manner
that can survive process failure by reverting. It would be great if we could
get help/suggestions from a taskflow expert :)
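For readers unfamiliar with the pattern, here is a plain-Python sketch of what a linear flow with revert gives you; this illustrates the idea only, and is neither taskflow's actual API nor Mogan's code (the task names are made up).

```python
# Plain-Python sketch of the "linear flow with revert" pattern: tasks
# execute in order, and if one fails, the tasks that already completed
# are reverted in reverse order.

class Task:
    def __init__(self, name, execute, revert=lambda: None):
        self.name = name
        self.execute = execute
        self.revert = revert

def run_linear_flow(tasks):
    done = []
    try:
        for task in tasks:
            task.execute()
            done.append(task)
        return "SUCCESS"
    except Exception:
        # Roll back completed work in reverse order.
        for task in reversed(done):
            task.revert()
        return "REVERTED"

log = []

def deploy_image():
    raise RuntimeError("deploy failed")

flow = [
    Task("claim_node", lambda: log.append("claim"),
         lambda: log.append("unclaim")),
    Task("build_network", lambda: log.append("net"),
         lambda: log.append("teardown_net")),
    Task("deploy_image", deploy_image),
]

print(run_linear_flow(flow), log)
# → REVERTED ['claim', 'net', 'teardown_net', 'unclaim']
```

taskflow adds persistence, engines, and typed task inputs/outputs on top of this core idea, which is what makes the flow able to survive process death and resume or revert.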


> -Josh
>


[1] https://wiki.openstack.org/wiki/Mogan

-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [panko] dropping hbase driver support

2017-05-31 Thread Hanxi Liu
+1

Hanxi Liu
IRC(lhx_)

On Sat, May 27, 2017 at 1:17 AM, gordon chung  wrote:

> hi,
>
> as all of you know, we moved all storage out of ceilometer so it now
> handles only data generation and normalisation. there seems to be very
> little contribution to panko, which handles metadata indexing and event
> storage, so given how little it's being adopted and how few resources
> are being put into supporting it, i'd like to propose dropping hbase
> support as a first step in making the project more manageable for
> whatever resource chooses to support it.
>
> why hbase as initial candidate to prune?
> - it has no testing in gate
> - it never had testing in gate
> - we didn't receive a single reply in user survey saying hbase was used
> - all the devs who originally worked on driver don't work on openstack
> anymore.
> - i'd be surprised if it actually worked
>
> i just realised it's a long weekend in some places so i'll let this
> linger for a bit.
>
> cheers,
> --
> gord


Re: [openstack-dev] [User-committee] Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-05-31 Thread Yih Leong, Sun.
I recall that the UC invited WG chairs to join the UC IRC meeting to
share/provide WG updates on high-level status/activities. Is this something
similar?
Should the WG chairs attend the UC meeting instead of setting up another
separate meeting?


On Wednesday, May 31, 2017, MCCABE, JAMEY A  wrote:

> Working group (WG) chairs or delegates, please enter your name (and WG
> name) and what times you could meet at this poll:
> https://beta.doodle.com/poll/6k36zgre9ttciwqz#table
>
>
>
> As back ground and to share progress:
>
>- We started and generally confirmed the desire to have a regular
>cross WG status meeting at the Boston Summit.
>- Specifically the groups interested in Telco NFV and Fog Edge agreed
>to collaborate more often and in a more organized fashion.
>- In e-mails and then in today’s Operators Telco/NFV we finalized a
>proposal to have all WGs meet for high level status monthly and to bring
>the collaboration back to our individual WG sessions.
>- the User Committee sessions are appropriate for the Monthly WG
>Status meeting
>- more detailed coordination across Telco/NFV and Fog Edge groups
>should take place in the Operators Telco NFV WG meetings which already
>occur every 2 weeks.
>- we need participation of each WG Chair (or a delegate)
>- we welcome and request the OPNFV and Linux Foundation and other WGs
>to join us in the cross WG status meetings
>
>
>
> The Doodle was set up to gain concurrence on a time of week at which we
> could meet; it is not intended to be for a specific week.
>
>
>
> ​Jamey McCabe – AT&T Integrated Cloud -jm6819 - mobile if needed
> 847-496-1176
>
>
>
>
>


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Ghanshyam Mann
On Thu, Jun 1, 2017 at 9:46 AM, Matthew Treinish  wrote:
> On Wed, May 31, 2017 at 04:24:14PM +, Jeremy Stanley wrote:
>> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
>> [...]
>> > Trademark programs are trademark programs - we should have a unified
>> > process for all of them. Let's not make the same mistakes again by
>> > creating classes of projects / programs. I do not want this to be
>> > a distinction as we move forward.
>>
>> This I agree with. However I'll be surprised if a majority of the QA
>> team disagree on this point (logistic concerns with how to curate
>> this over time I can understand, but that just means they need to
>> interest some people in working on a manageable solution).
>
> +1 I don't think anyone disagrees with this. There is a logistical concern
> with the way the new proposed programs are going to be introduced. Quite
> frankly it's too varied and broad and I don't think we'll have enough people
> working on this space to help maintain it in the same manner.
>
> It's the same reason we worked on the plugin decomposition in the first place.
> You can easily look at the numbers of tests to see this:
>
> https://raw.githubusercontent.com/mtreinish/qa-in-the-open/lca2017/tests_per_proj.png
>
> Which shows things before the plugin decomposition (and before the big
> tent). Just because we said we'd support all the incubated and integrated
> projects in tempest didn't mean people were contributing and/or the tests
> were well maintained.
>
> But, as I said elsewhere in this thread this is a bit too early to have the
> conversation because the new interop programs don't actually exist yet.

Yes, there is no question on goal to have a unified process for all.
As Jeremy, Matthew mentioned, key things here is manageability issues.

We know the number of contributors in QA is shrinking cycle by cycle. I
might be overthinking this, but I thought about the QA team's situation
when we have around 30-40 trademark projects and all tests in the Tempest
repo. Personally I am OK to have tests in the Tempest repo or a dedicated
interop plugin repo which can be controlled by QA at some level, but we
need dedicated participation from interop + a projects liaison (I am not
sure that worked well in the past, but with TC help it might work :)).

I can recall that the QA team has many patches on the plugin side to
improve or fix them, but many of them get no active reviews or much
attention from the project teams. I am afraid the same will happen for
trademark projects too.

Maybe a broad direction on the trademark program and its scope can help
us estimate the quantity of programs and tests which the QA team needs
to maintain.

-gmann

>
> -Matt Treinish
>


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-31 Thread Mike Bayer



On 05/30/2017 09:06 PM, Jay Pipes wrote:

On 05/30/2017 05:07 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2017-05-30 14:52:01 -0400:

Sorry for the delay in getting back on this... comments inline.

On 05/18/2017 06:13 PM, Adrian Turjak wrote:

Hello fellow OpenStackers,

For the last while I've been looking at options for multi-region
multi-master Keystone, as well as multi-master for other services I've
been developing and one thing that always came up was there aren't many
truly good options for a true multi-master backend.


Not sure whether you've looked into Galera? We had a geo-distributed
12-site Galera cluster servicing our Keystone assignment/identity
information WAN-replicated. Worked a charm for us at AT&T. Much easier
to administer than master-slave replication topologies and the
performance (yes, even over WAN links) of the ws-rep replication was
excellent. And yes, I'm aware Galera doesn't have complete snapshot
isolation support, but for Keystone's workloads (heavy, heavy read, very
little write) it is indeed ideal.



This has not been my experience.

We had a 3 site, 9 node global cluster and it was _extremely_ sensitive
to latency. We'd lose even read ability whenever we had a latency storm
due to quorum problems.

Our sites were London, Dallas, and Sydney, so it was pretty common for
there to be latency between any of them.

I lost track of it after some reorgs, but I believe the solution was
to just have a single site 3-node galera for writes, and then use async
replication for reads. We even helped land patches in Keystone to allow
split read/write host configuration.


Interesting, thanks for the info. Can I ask, were you using the Galera 
cluster for read-heavy data like Keystone identity/assignment storage? 
Or did you have write-heavy data mixed in (like Keystone's old UUID 
token storage)?
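For reference, the split read/write host configuration mentioned above is exposed through oslo.db's `slave_connection` option; a sketch of what it could look like in keystone.conf (hostnames and credentials are examples only):

```ini
# Sketch of oslo.db-style split read/write database configuration.
# Writes go to `connection`; reads may be served from `slave_connection`.
[database]
connection = mysql+pymysql://keystone:secret@galera-writer.example.net/keystone
slave_connection = mysql+pymysql://keystone:secret@replica.example.net/keystone
```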


I'd also throw in that there are lots of versions of Galera, with different 
bugfixes / improvements as we go along, not to mention configuration 
settings. If Jay observes it working great on a distributed cluster 
and Clint observes it working terribly, it could be that these were not 
the same Galera versions being used.


It should be noted that CockroachDB's documentation specifically calls 
out that it is extremely sensitive to latency due to the way it measures 
clock skew, so it might not be suitable for WAN-separated clusters?


Best,
-jay



Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Matthew Treinish
On Wed, May 31, 2017 at 04:24:14PM +, Jeremy Stanley wrote:
> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
> [...]
> > Trademark programs are trademark programs - we should have a unified
> > process for all of them. Let's not make the same mistakes again by
> > creating classes of projects / programs. I do not want this to be
> > a distinction as we move forward.
> 
> This I agree with. However I'll be surprised if a majority of the QA
> team disagree on this point (logistic concerns with how to curate
> this over time I can understand, but that just means they need to
> interest some people in working on a manageable solution).

+1 I don't think anyone disagrees with this. There is a logistical concern
with the way the new proposed programs are going to be introduced. Quite
frankly it's too varied and broad and I don't think we'll have enough people
working on this space to help maintain it in the same manner.

It's the same reason we worked on the plugin decomposition in the first place.
You can easily look at the numbers of tests to see this:

https://raw.githubusercontent.com/mtreinish/qa-in-the-open/lca2017/tests_per_proj.png

Which shows things before the plugin decomposition (and before the big tent).
Just because we said we'd support all the incubated and integrated projects in
tempest didn't mean people were contributing and/or the tests were well
maintained.

But, as I said elsewhere in this thread this is a bit too early to have the
conversation because the new interop programs don't actually exist yet.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests (was: more tempest plugins)

2017-05-31 Thread Matthew Treinish
On Wed, May 31, 2017 at 03:45:52PM +, Jeremy Stanley wrote:
> On 2017-05-31 15:22:59 + (+), Jeremy Stanley wrote:
> > On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > it's news to me that they're considering reversing course. If the
> > > QA team isn't going to continue, we'll need to figure out what
> > > that means and potentially find another group to do it.
> > 
> > I wasn't there for the discussion, but it sounds likely to be a
> > mischaracterization. I'm going to assume it's not true (or much more
> > nuanced) at least until someone responds on behalf of the QA team.
> > This particular subthread is only going to go further into the weeds
> > until it is grounded in some authoritative details.
> 
> Apologies for replying to myself, but per discussion[*] with Chris
> in #openstack-dev I'm adjusting the subject header to make it more
> clear which particular line of speculation I consider weedy.
> 
> Also in that brief discussion, Graham made it slightly clearer that
> he was talking about pushback on the tempest repo getting tests for
> new trademark programs beyond "OpenStack Powered Platform,"
> "OpenStack Powered Compute" and "OpenStack Powered Object Storage."

TBH, it's a bit premature to have the discussion. These additional programs do
not exist yet, and there is a governance road block around this. Right now the
set of projects that can be used by defcore/interopWG is limited to the set of
projects in:

https://governance.openstack.org/tc/reference/tags/tc_approved-release.html

We had a forum session on it (I can't find the etherpad for the session) which
was pretty speculative because it was about planning the new programs. Part of
that discussion was around the feasibility of using tests in plugins and whether
that would be desirable. Personally, I was in favor of doing that for some of
the proposed programs, because the way they were organized made it a good fit:
the proposed new programs were extra additions on top of the base existing
interop program. But it was hardly a definitive discussion.

We will have to have discussions about how we're going to actually implement
the additional programs when we start to create them, but that's not happening
yet.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Matthew Treinish
On Wed, May 31, 2017 at 03:22:59PM +, Jeremy Stanley wrote:
> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > it's news to me that they're considering reversing course. If the
> > QA team isn't going to continue, we'll need to figure out what
> > that means and potentially find another group to do it.
> 
> I wasn't there for the discussion, but it sounds likely to be a
> mischaracterization. 
> I'm going to assume it's not true (or much more
> nuanced) at least until someone responds on behalf of the QA team.
> This particular subthread is only going to go further into the weeds
> until it is grounded in some authoritative details.

+1

I'm very confused by this whole thread TBH. Was there a defcore test which was
blocked from tempest? Quite frankly, the amount of contribution to tempest
specifically for defcore tests is very minimal (at most 1 or 2 patches per
cycle). It seems like this whole concern is based on a misunderstanding
somewhere and is just going off in a weird direction.

-Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-31 Thread Michael Johnson
Hi Alex,

 

As you know I am a strong proponent of moving the docs into the project team 
repositories [1].

 

Personally I am in favor of pulling the Band-Aids off and doing option 1.  I 
think centralizing the documentation under one tree and consolidating the build 
into one job has benefits.  I can’t speak to the complexities of the 
documentation template(s?) and the sphinx configuration issues that might arise 
from this plan, but from a PTL/developer/doc-writer perspective I like the 
concept.  I fully understand this means work for us to move our API-REF, etc., 
but I think it is worth it.

 

As a secondary vote I am also ok with option 2.  I just think we might as well 
do a full consolidation.

 

I am not a fan of requiring project teams to set up separate repos for the docs; 
there is value to having them in tree for me.  So, I would vote against 3.

 

Michael

 

[1] https://review.openstack.org/#/c/439122/

 

From: Alexandra Settle [mailto:a.set...@outlook.com] 
Sent: Monday, May 22, 2017 2:39 AM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: 'openstack-d...@lists.openstack.org' 
Subject: [openstack-dev] [doc][ptls][all] Documentation publishing future

 

Hi everyone,

 

The documentation team are rapidly losing key contributors and core reviewers. 
We are not alone; this is happening across the board. It is making things 
harder, but not impossible.

Since our inception in 2010, we’ve been climbing higher and higher trying to 
achieve the best documentation we could, and uphold our high standards. This is 
something to be incredibly proud of.

 

However, we now need to take a step back and realise that the amount of work we 
are attempting to maintain is now out of reach for the team size that we have. 
At the moment we have 13 cores, none of whom are full-time contributors or 
reviewers. This includes myself.

 

Until this point, the documentation team has owned several manuals that include 
content related to multiple projects, including an installation guide, admin 
guide, configuration guide, networking guide, and security guide. Because the 
team no longer has the resources to own that content, we want to invert the 
relationship between the doc team and project teams, so that we become liaisons 
to help with maintenance instead of asking for project teams to provide 
liaisons to help with content. As a part of that change, we plan to move the 
existing content out of the central manuals repository, into repositories owned 
by the appropriate project teams. Project teams will then own the content and 
the documentation team will assist by managing the build tools, helping with 
writing guidelines and style, but not writing the bulk of the text.

 

We currently have the infrastructure set up to empower project teams to manage 
their own documentation in their own tree, and many do. As part of this change, 
the rest of the existing content from the install guide and admin guide will 
also move into project-owned repositories. We have a few options for how to 
implement the move, and that's where we need feedback now.

 

1. We could combine all of the documentation builds, so that each project has a 
single doc/source directory that includes developer, contributor, and user 
documentation. This option would reduce the number of build jobs we have to 
run, and cut down on the number of separate sphinx configurations in each 
repository. It would completely change the way we publish the results, though, 
and we would need to set up redirects from all of the existing locations to the 
new locations and move all of the existing documentation under the new 
structure.

 

2. We could retain the existing trees for developer and API docs, and add a new 
one for "user" documentation. The installation guide, configuration guide, and 
admin guide would move here for all projects. Neutron's user documentation 
would include the current networking guide as well. This option would add 1 new 
build to each repository, but would allow us to easily roll out the change with 
less disruption in the way the site is organized and published, so there would 
be less work in the short term.

 

3. We could do option 2, but use a separate repository for the new 
user-oriented documentation. This would allow project teams to delegate 
management of the documentation to a separate review project-sub-team, but 
would complicate the process of landing code and documentation updates together 
so that the docs are always up to date.

 

Personally, I think options 2 or 3 are more realistic, for now. It does mean 
that an extra build would have to be maintained, but it retains that key 
differentiator between what is user and what is developer documentation, and 
involves fewer changes to existing published content and build jobs. I 
definitely think option 1 is feasible, and would be happy to make it work if 
the community prefers this. We could also view option 1 as the longer-term 
goal, and option 2 as a

Re: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-05-31 Thread Bhatia, Manjeet S


> -Original Message-
> From: Akihiro Motoki [mailto:amot...@gmail.com]
> Sent: Wednesday, May 31, 2017 6:13 AM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split
> out from horizon
> 
> Hi all,
> 
> As discussed last month [1], we agree that each neutron-related dashboard has
> its own repository.
> I would like to move this forward on FWaaS and VPNaaS as the horizon team
> plans to split them out as horizon plugins.
> 
> A couple of questions hit me.
> 
> (1) launchpad project
> Do we create a new launchpad project for each dashboard?
> At the moment, the FWaaS and VPNaaS projects use 'neutron' for their bug
> tracking for historical reasons. There are two choices: one is to accept
> dashboard bugs in the 'neutron' launchpad, and the other is to have a separate
> launchpad project.

+1 for separate Launchpad projects; we can have one for each, and each can 
cover all the issues related to it.

> My vote is to create a separate launchpad project.
> It allows users to search and file bugs easily.
> 
> (2) repository name
> 
> Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
> names for you?
> Most horizon related projects use -dashboard or -ui as their repo
> names.
> I personally prefer -dashboard as it is consistent with the OpenStack
> dashboard (the official name of horizon). On the other hand, I know some folks
> prefer -ui as the name is shorter.
> Any preference?

Both look good, but using ui makes it shorter, so +1 for ui.

> (3) governance
> neutron-fwaas project is under the neutron project.
> Does it sound okay to have neutron-fwaas-dashboard under the neutron
> project?
> This is what the neutron team does for neutron-lbaas-dashboard before and
> this model is adopted in most horizon plugins (like trove, sahara or others).

IMO, we can have it under the neutron project for now, but for simplicity and
maintenance I'd suggest branching it out from neutron.

> (4) initial core team
> 
> My thought is to have neutron-fwaas/vpnaas-core and horizon-core as the
> initial core team.
> The release team and the stable team follow what we have for neutron-
> fwaas/vpnaas projects.
> Sounds reasonable?
> 
> 
> Finally, I already prepare the split out version of FWaaS and VPNaaS
> dashboards in my personal github repos.
> Once we agree in the questions above, I will create the repositories under
> git.openstack.org.
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-April/thread.html#115200
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] deploy software on Openstack controller on the Overcloud

2017-05-31 Thread Emilien Macchi
On Wed, May 31, 2017 at 6:29 PM, Dnyaneshwar Pawar
 wrote:
> Hi Alex,
>
> Currently we have puppet modules[0] to configure our software which has
> components on Openstack Controller, Cinder node and Nova node.
> As per document[1] we successfully tried out role specific configuration[2].
>
> So, does it mean that if we have an overcloud image with our packages
> inbuilt and we call our configuration scripts using role specific
> configuration, we may not need puppet modules[0]? Is it an acceptable
> deployment method?

Running a binary from Puppet to do configuration management is
not something we recommend.
Puppet is good at managing configuration files and services, for
example. In your module, you just manage a file and execute it. The
problem with that workflow is that we have no idea what happens in the
backend. Also, we have no way to make the Puppet run idempotent, which is one
important aspect of TripleO.

Please tell us what the binary does, and maybe we can convert the
tasks into Puppet resources that could be managed by your module. Also
organize the resources by class (service), so we can plug them into the
composable services in TripleO.

Thanks,

> [0] https://github.com/abhishek-kane/puppet-veritas-hyperscale
> [1]
> https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html
> [2] http://paste.openstack.org/show/66/
>
> Thanks,
> Dnyaneshwar
>
> On 5/30/17, 6:52 PM, "Alex Schultz"  wrote:
>
> On Mon, May 29, 2017 at 5:05 AM, Dnyaneshwar Pawar
>  wrote:
>
> Hi,
>
> I am tying to deploy a software on openstack controller on the overcloud.
> One way to do this is by modifying ‘overcloud image’ so that all packages of
> our software are added to image and then run overcloud deploy.
> Other way is to write heat template and puppet module which will deploy the
> required packages.
>
> Question: Which of above two approaches is better?
>
> Note: Configuration part of the software will be done via separate heat
> template and puppet module.
>
>
> Usually you do both.  Depending on how the end user is expected to
> deploy, if they are using the TripleoPackages service[0] in their
> role, the puppet installation of the package won't actually work (we
> override the package provider to noop) so it needs to be in the
> images.  That being said, usually there is also a bit of puppet that
> needs to be written to configure the end service and as a best
> practice (and for development purposes), it's a good idea to also
> capture the package in the manifest.
>
> Thanks,
> -Alex
>
> [0]
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml
>
>
> Thanks and Regards,
> Dnyaneshwar
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] policy monitoring panel design

2017-05-31 Thread Eric K
Here's a quick & rough mock-up I put up based on the discussions at the Atlanta
PTG.

https://wireframepro.mockflow.com/view/congress-policy-monitor

Anyone can view and comment. To make changes, please sign-up and email me
to get access.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-31 Thread Lance Bragstad
I took a stab at working through the API a bit more and I've captured that
information in the spec [0]. A rendered version is available, too [1].

[0] https://review.openstack.org/#/c/464763/
[1]
http://docs-draft.openstack.org/63/464763/12/check/gate-keystone-specs-docs-ubuntu-xenial/1dbeb65//doc/build/html/specs/keystone/ongoing/global-roles.html
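
As a rough sketch of the token-scope-to-context mapping discussed in the
thread below (illustrative only; the token fields and context attribute
names here are assumptions, not the actual keystone or oslo.context API):

```python
class RequestContext(object):
    """Minimal stand-in for an oslo.context-style context object."""

    def __init__(self):
        self.is_global = False
        self.domain_id = None
        self.project_id = None


def context_from_token(token):
    """Map token scope onto context scope.

    The field names (``is_global``, ``domain``, ``project``) are
    hypothetical; the real translation would live in
    keystonemiddleware + oslo.context so consumers never see tokens.
    """
    ctx = RequestContext()
    if token.get('is_global'):
        ctx.is_global = True
    elif 'domain' in token:
        ctx.domain_id = token['domain']
    else:
        ctx.project_id = token.get('project')
    return ctx


ctx = context_from_token({'is_global': True, 'roles': ['admin', 'auditor']})
assert ctx.is_global

ctx = context_from_token({'project': 'demo', 'roles': ['reader']})
assert ctx.project_id == 'demo'
```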

On Wed, May 31, 2017 at 9:10 AM, Lance Bragstad  wrote:

>
>
> On Fri, May 26, 2017 at 10:21 AM, Sean Dague  wrote:
>
>> On 05/26/2017 10:44 AM, Lance Bragstad wrote:
>> 
>> > Interesting - I guess the way I was thinking about it was on a per-token
>> > basis, since today you can't have a single token represent multiple
>> > scopes. Would it be unreasonable to have oslo.context build this
>> > information based on multiple tokens from the same user, or is that a
>> > bad idea?
>>
>> No service consumer is interacting with Tokens. That's all been
>> abstracted away. The code consumers are interested in is
>> the context representation.
>>
>> Which is good, because then the important parts are figuring out the
>> right context interface to consume. And the right Keystone front end to
>> be explicit about what was intended by the operator "make jane an admin
>> on compute in region 1".
>>
>> And the middle can be whatever works best on the Keystone side. As long
>> as the details of that aren't leaked out, it can also be refactored in
>> the future by having keystonemiddleware+oslo.context translate to the
>> known interface.
>>
>
> Ok - I think that makes sense. So if I copy/paste your example from
> earlier and modify it a bit ( s/is_admin/global/)::
>
> {
>"user": "me!",
>"global": True,
>"roles": ["admin", "auditor"],
>
> }
>
> Or
>
> {
>"user": "me!",
>"global": True,
>"roles": ["reader"],
>
> }
>
> That might be one way we can represent global roles through 
> oslo.context/keystonemiddleware.
> The library would be on the hook for maintaining the mapping of token scope
> to context scope, which makes sense:
>
> if token['is_global'] == True:
> context.global = True
> elif token['domain_scoped']:
> # domain scoping?
> else:
> # handle project scoping
>
> I need to go dig into oslo.context a bit more to get familiar with how
> this works on the project level. Because if I understand correctly,
> oslo.context currently doesn't relay global scope and that will be a
> required thing to get done before this work is useful, regardless of going
> with option #1, #2, and especially #3.
>
>
>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-05-31 Thread Amrith Kumar
I agree, this would be a good thing to do and something which will definitely 
improve the overall ease of upgrades. We already have two Queens goals though; 
do we want to add a third?

-amrith

P.S. I'd happily volunteer to do this, with a real end-user benefit, over the 
current tempest shuffle goal. My 2c worth.

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: Mike [mailto:thin...@gmail.com]
> Sent: Wednesday, May 31, 2017 4:39 PM
> To: OpenStack Development Mailing List (not for usage questions)  d...@lists.openstack.org>
> Cc: Sean Dague 
> Subject: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off
> Paste
> 
> Hello everyone,
> 
> As part of our community wide goals process [1], we will discuss the
> potential goals that came out of the forum session in Boston [2].
> These discussions will aid the TC in making a final decision of what goals
> the community will work towards in the Queens release.
> 
> For this thread we will be discussing migrating off paste. This was suggested
> by Sean Dague. I’m not sure if he’s leading this effort, but here’s an excerpt
> from him to get us started:
> 
> A migration path off of paste would be a huge win. Paste deploy is
> unmaintained (as noted in the etherpad) and being in etc means it's another
> piece of gratuitous state that makes upgrading harder than it really should
> be. This is one of those that is going to require someone to commit to
> working out that migration path up front. But it would be a pretty good chunk
> of debt and upgrade ease.
> 
> 
> [1] - https://governance.openstack.org/tc/goals/index.html
> [2] - https://etherpad.openstack.org/p/BOS-forum-Queens-Goals
> 
> —
> Mike Perez
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptl][all] Potential Queens Goal: Continuing Python 3.5+ Support

2017-05-31 Thread Mike
Hello everyone,

For this thread we will be discussing continuing Python 3.5+ support.
Emilien, who has been helping with coordinating our efforts here with
Pike, can probably add more, but glancing at our goals document
[1] it looks like a lot of projects’ statuses are still unanswered, but
mostly we have python 3.5 unit test voting jobs done thanks to this
effort! I have no idea how to use the graphite dashboard, but here’s a
graph [2] showing success vs failure for python-35 jobs across all
projects.

Glancing at that I think it’s safe to say we can start discussions on
moving forward with having our functional tests support python 3.5.
Some projects are already ahead in this. Let the discussions begin so
we can aid the TC in deciding our community-wide goals
for Queens [3].


[1] - https://governance.openstack.org/tc/goals/pike/python35.html
[2] - 
http://graphite.openstack.org/render/?width=1273&height=554&_salt=1496261911.56&from=00%3A00_20170401&until=23%3A59_20170531&target=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.SUCCESS)&target=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.FAILURE)
[3] - https://governance.openstack.org/tc/goals/index.html

—
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-05-31 Thread Mike
Hello everyone,

As part of our community wide goals process [1], we will discuss the
potential goals that came out of the forum session in Boston [2].
These discussions will aid the TC in making a final decision of what
goals the community will work towards in the Queens release.

For this thread we will be discussing migrating off paste. This was
suggested by Sean Dague. I’m not sure if he’s leading this effort, but
here’s an excerpt from him to get us started:

A migration path off of paste would be a huge win. Paste deploy is
unmaintained (as noted in the etherpad) and being in etc means it's
another piece of gratuitous state that makes upgrading harder than it
really should be. This is one of those that is going to require
someone to commit to working out that migration path up front. But it
would be a pretty good chunk of debt and upgrade ease.


[1] - https://governance.openstack.org/tc/goals/index.html
[2] - https://etherpad.openstack.org/p/BOS-forum-Queens-Goals

—
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Amrith Kumar
I've been following this discussion from a safe distance on the mailing
list, and I just caught up on the IRC conversation as well.

It was my understanding that if a project had tests that were going to be
run for the purpose of determining whether something met the standards of
"OpenStack Powered Databases" (TBD), then those tests would reside in the
tempest repository. Any other tempest tests that the project would have for
any purpose (or purposes) should, I understand, be moved into an independent
repository per the Queens community goal[1].

I'm particularly interested in having this matter decided in an email thread
such as this one because of the comment in IRC about a verbal discussion at
the forum not being considered to be a 'binding decision' [2]. It had been
my understanding that the forum and the PTG were places where 'decisions'
were made, but if that is not the case, I'm hoping that one will be
documented as part of this thread and formalized (maybe) with a resolution
by the TC or the Interop/DefCore body?

-amrith


[1]
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-31.log.html#t2017-05-31T15:39:33

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Wednesday, May 31, 2017 12:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
 d...@lists.openstack.org>
> Subject: Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark
tests
> 
> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
> [...]
> > Trademark programs are trademark programs - we should have a unified
> > process for all of them. Let's not make the same mistakes again by
> > creating classes of projects / programs. I do not want this to be a
> > distinction as we move forward.
> 
> This I agree with. However I'll be surprised if a majority of the QA team
> disagree on this point (logistic concerns with how to curate this over
time I
> can understand, but that just means they need to interest some people in
> working on a manageable solution).
> --
> Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-31 Thread Farr, Kaitlin M.
Lee, a few thoughts on your previous email.  Many of the details I think you
already know, but I'm clarifying for posterity's sake:

> However the only supported disk encryption formats on the front-end at
> present are plain (dm-crypt) and LUKS, neither of which use the supplied
> key to directly encrypt or decrypt data. Plain derives a fixed length
> master key from the provided key / passphrase and LUKS uses PBKDF2 to
> derive a key from the key / passphrase that unlocks a separate master
> key.

This is true.  When we retrieve the key from Barbican, we don't use the key
bytes themselves to encrypt the volume. We format the key as a string and use
it as a passphrase.  You can see this for both the LUKS encryptor [2] and the
cryptsetup encryptor [3].  We pass in a "--key-file=-" parameter (which
indicates the "keyfile" should be read from stdin) and then pass in the
formatted key.  But according to the documentation, "keyfile" is a misnomer.
I think it would be clearer if dm-crypt renamed it to something like
"passwordfile" because it's still used by dm-crypt and LUKS as a
passphrase [4].
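
To make that distinction concrete, here is a rough sketch of the flow
described above (illustrative only, not the actual os-brick code; the hex
formatting mirrors the idea that the key bytes are turned into a string
before being written to cryptsetup's stdin via --key-file=-):

```python
import binascii
import os


def key_to_passphrase(key_bytes):
    """Format raw symmetric key bytes as a hex string.

    This string, not the raw bytes, is what gets read from stdin via
    ``--key-file=-``, so dm-crypt/LUKS treat it as a passphrase and
    derive their own master key from it.
    """
    return binascii.hexlify(key_bytes).decode('utf-8')


# A key manager (e.g. Barbican) would normally generate these bytes;
# os.urandom stands in for that here.
key = os.urandom(32)  # 256-bit symmetric key
passphrase = key_to_passphrase(key)

assert len(passphrase) == 64  # 32 bytes -> 64 hex characters
assert all(c in '0123456789abcdef' for c in passphrase)
```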

> I also can't find any evidence of these keys being used directly on the
> backend for any direct encryption of volumes within c-vol. Happy to be
> corrected here if there are out-of-tree drivers etc that do this.

There are two options for control_location for the volume encryption type:
'front-end' (nova) and 'back-end' (cinder) [5].  'front-end' is the default,
and I know where the code logic is that sets up the encryptors for the
front-end, now in os-brick [1].  But I cannot find any logic that handles the
case for 'back-end'.  I would think the 'back-end' logic would be found in
Cinder, but I do not see it.  I am under the impression that it was just a
placeholder for future functionality.

> IMHO for now we are better off storing a secret passphrase in Barbican
> for use with these encrypted volumes, would there be any objections to
> this? Are there actual plans to use a symmetric key stored in Barbican
> to directly encrypt and decrypt volumes?

It sounds like you're thinking that using a key manager object with the type
"passphrase" is closer to how the encryptors are using the bytes than using the
"symmetric key" type, but if you switch over to using passphrases,
where are you going to generate the random bytes?  Would you prefer the
user to input their own passphrase?  The benefit of continuing to use symmetric
keys as "passphrases" is that the key manager can randomly generate the bytes.
Key generation is a standard feature of key managers, but password generation
is not.

On a side note, I thought the latest QEMU encryption feature was supposed to
have support for passing in key material directly to the encryptors?  Perhaps
this is not true and I am misremembering.

Hopefully that helps,

Kaitlin

1. 
https://github.com/openstack/os-brick/blob/6cf9b1cd689f70a2c50c0fa83a9a9f7c502712a1/os_brick/encryptors/__init__.py#L62
2. 
https://github.com/openstack/os-brick/blob/6cf9b1cd689f70a2c50c0fa83a9a9f7c502712a1/os_brick/encryptors/luks.py#L63-L87
3. 
https://github.com/openstack/os-brick/blob/6cf9b1cd689f70a2c50c0fa83a9a9f7c502712a1/os_brick/encryptors/cryptsetup.py#L104-L129
4. https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption#Keyfiles
5. https://docs.openstack.org/admin-guide/dashboard-manage-volumes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mogan] Architecture diagrams

2017-05-31 Thread Joshua Harlow

Hi mogan folks,

I was doing some source code examination of mogan and it piqued my 
interest in how it all is connected together. In part I see there is a 
state machine, some taskflow usage, and some wsgi usage that looks like 
parts of it are inspired(?) by various other projects.


That got me wondering whether there are any decent diagrams or documents that 
explain how it all connects together, and I thought I might as well ask 
and see if there are any (50/50 chances? ha).


I am especially interested in the state machine, taskflow and such (no 
tooz seems to be there) and how they are used (if they are, or are going 
to be used); I guess in part because I know the most about those 
libraries/components :)


-Josh



Re: [openstack-dev] [Nova] [Scheduler]

2017-05-31 Thread Jay Pipes

On 05/31/2017 09:49 AM, Narendra Pal Singh wrote:

Hello,

Let's say I have multiple compute nodes: Pool-A has 5 nodes and Pool-B 
has 4 nodes, categorized based on some property.
Now, when there is a request for a new instance, I always want this 
instance to be placed on a compute node in Pool-A.

What would be the best approach to address this situation?


FYI, this question is best asked on openstack@ ML. openstack-dev@ is for 
development questions.


You can use the AggregateInstanceExtraSpecsFilter and aggregate metadata 
to accomplish your needs.


Read more about that here:

https://docs.hpcloud.com/hos-3.x/helion/operations/compute/creating_aggregates.html
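The core idea of AggregateInstanceExtraSpecsFilter can be sketched as follows.
This is a simplification for illustration, not nova's actual code: the real
filter also handles unscoped keys and operators such as `<in>` and `<or>`, and
a host may belong to multiple aggregates.

```python
def host_passes(aggregate_metadata: dict, flavor_extra_specs: dict) -> bool:
    """Simplified sketch of the aggregate extra-specs matching idea:
    a host passes if every scoped extra spec on the flavor matches the
    metadata of the host's aggregate."""
    scope = "aggregate_instance_extra_specs:"
    for key, value in flavor_extra_specs.items():
        if key.startswith(scope):
            key = key[len(scope):]
        if aggregate_metadata.get(key) != value:
            return False
    return True


# Hosts in Pool-A carry pool=A metadata via their aggregate; a flavor
# with a matching extra spec only lands on those hosts.
flavor_specs = {"aggregate_instance_extra_specs:pool": "A"}
print(host_passes({"pool": "A"}, flavor_specs))      # True
print(host_passes({"pool": "B"}, flavor_specs))      # False
```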

Best,
-jay



[openstack-dev] [gnocchi] regional incoming storage targets

2017-05-31 Thread gordon chung
here's a scenario: i'd like aggregates stored centrally as they are 
currently with the ceph/swift/s3 drivers, but i want to collect data from 
many different regions spanning the globe. they can all hit the same 
incoming storage but:
- that will be a hell of a lot of load
- a single incoming storage locality might not be optimal for all regions, 
causing write performance to take longer than needed for a 'cache' 
storage
- sending an HTTP POST with a JSON payload probably uses more bandwidth 
than the binary serialised format gnocchi uses internally.

i'm thinking it'd be good to support the ability to have each region store 
data 'locally' to minimise latency and then have regional metricd agents 
aggregate into a central target. this is technically possible right now 
by just declaring regional (write-only?) APIs with the same storage and 
indexer targets but a different incoming target per region. the 
problem, i think, is how to handle coordination_url. it cannot be the same 
coordination_url, since that would cause sack locks to overlap. if 
they're different, then i think there's an issue with having a 
centralised API (in addition to regional APIs). specifically, the 
centralised API cannot 'refresh'.
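as a rough illustration of the per-region split described above, each region's
gnocchi.conf might look something like this (section and option names are from
memory and would need checking against the gnocchi docs; all hostnames are
placeholders):

```ini
# gnocchi.conf for the API/metricd in one region -- sketch, not verified
[incoming]
# region-local incoming storage to keep write latency low
driver = redis
redis_url = redis://redis.region-1.example.com:6379

[storage]
# shared central aggregate storage
driver = ceph
ceph_pool = gnocchi

[indexer]
# shared central indexer
url = postgresql://gnocchi:secret@db.central.example.com/gnocchi
```

the open question in the text remains: each region would still need its own
coordination_url so sack locks don't overlap, which is what breaks 'refresh'
on a centralised API.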

i'm not entirely sure this is an issue, just thought i'd raise it to 
discuss.

regardless, thoughts on maybe writing up deployment strategies like 
this? or should i make everyone who reads this erase their minds so i 
can use this for 'consulting' fees :P

cheers,
-- 
gord


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-31 Thread Jeremy Stanley
On 2017-05-19 09:22:07 -0400 (-0400), Sean Dague wrote:
[...]
> the project,

I hosted the onboarding session for the Infrastructure team. For
various logistical reasons discussed on the planning thread before
the PTG, it was a shared session with many other "horizontal" teams
(QA, Requirements, Stable, Release). We carved the 90-minute block
up into individual subsessions for each team, though due to
scheduling conflicts I was only able to attend the second half
(Release and Infra). Attendance was also difficult to gauge; we had
several other regulars from the Infra team present in the audience,
people associated with other teams with which we shared the room,
and an assortment of new faces, but it was hard to tell which session(s)
they were mainly there to see.

> what you did in the room,

I prepared a quick (5-10 minute) "help wanted" intro slide deck to
set the stage, then transitioned to a less formal mix of Q&A and
open discussion of some of the exciting things we're working on
currently. I felt like we didn't really get as many solid questions
as I was hoping, but the back-and-forth with other team members in
the room about our priority efforts was definitely a good way to
fill in the gaps between.

> what you think worked,

The format wasn't bad. Given the constraints we were under for this,
sharing seems to have worked out pretty well for us and possibly
seeded the audience with people who were interested in what those
other teams had to say and stuck around to see me ramble.

> what you would have done differently
[...]

The goal I had was to drum up some additional solid contributors to
our team, though the upshot (not necessarily negative, just not what
I expected) was that we seemed to get more interest from "adjacent
technologies" representatives interested in what we were doing and
how to replicate it in their ecosystems. If that ends up being a
significant portion of the audience going forward, it's possible we
could make some adjustments to our approach in an attempt to entice
them to collaborate further on co-development of our tools and
processes.

Also, due to the usual no-schedule-is-perfect conundrums, I ended up
running this in the final minutes before the happy hour social (or
maybe even overlapping the start of it) so that _may_ have sucked up
some of our potential audience before I got to the room. Not sure
what to really do to fix that, more just an observation I suppose.
-- 
Jeremy Stanley




Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Jay S Bryant



On 5/31/2017 12:02 PM, Jiri Suchomel wrote:

V Wed, 31 May 2017 11:34:20 -0400
Eric Harney  napsáno:


On 05/25/2017 05:51 AM, Jiri Suchomel wrote:

Hi,
it seems to me that the way of adding extra NFS options to the
cinder backend is somewhat confusing.

...

This has gotten a bit more confusing than is necessary in Cinder due
to how the configuration for the NFS and related drivers has been
tweaked over time.

The method of putting a list of shares in the nfs_shares_config file
is effectively deprecated, but still works for now.
...

Thanks for the answer!
I should definitely try those specific host/path/options.

However, it seems that the (supposedly) deprecated way does not
really work how it should, so I filed a bug report:

https://bugs.launchpad.net/cinder/+bug/1694758

Jiri


Jiri,

Thanks.  Does seem, at a minimum, that the documentation needs to be 
updated and that there may be some other bugs here.  We will work this 
through the bug.


Jay




Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Wang, Peter Xihong
Thanks John for the intro.

I believe cryptography is supported by pypy today.  I just did a "pip install 
cryptography" using an older version of pypy, pypy2-v5.6.0, with no errors.  
This package has been downloaded more than 13 million times based on the pypy 
package tracking site: http://packages.pypy.org/##cryptography.  
I'd be interested in knowing the exact errors where it's failing, and would be 
happy to help out.

As John said, we've observed 2x boost in throughput, and 78% reduction in 
latency from the proxy node in Swift lab setups.
We've also seen perf gain from Cinder, Keystone, Nova, Glance, Neutron, with 
most significant gains seen (22x) from Oslo.i18n from our lab setup running 
benchmarks such as Rally.

Switching from CPython to PyPy for OpenStack is not hard.  However, I found it 
really challenging to identify real-world (customer) OpenStack examples where 
performance is hindered mainly by the Python code or interpreter.

Thanks,

Peter


-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Wednesday, May 31, 2017 9:22 AM
To: OpenStack Development Mailing List 
Cc: Wang, Peter Xihong 
Subject: Re: [openstack-dev] [requirements] Do we care about pypy for clients 
(broken by cryptography)



On 31 May 2017, at 5:34, Monty Taylor wrote:

> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
>> On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
>>> We had a discussion a few months back around what to do for 
>>> cryptography since pycrypto is basically dead [1]. After some 
>>> discussion, at least on the Cinder project, we decided the best way 
>>> forward was to use the cryptography package instead, and work has 
>>> been done to completely remove pycrypto usage.
>>>
>>> It all seemed like a good plan at the time.
>>>
>>> I now notice that for the python-cinderclient jobs, there is a pypy 
>>> job
>>> (non-voting!) that is failing because the cryptography package is 
>>> not supported with pypy.
>>>
>>> So this leaves us with two options I guess. Change the cryto library 
>>> again, or drop support for pypy.
>>>
>>> I am not aware of anyone using pypy, and there are other valid 
>>> working alternatives. I would much rather just drop support for it 
>>> than redo our crypto functions again.
>>>
>>> Thoughts? I'm sure the Grand Champion of the Clients (Monty) 
>>> probably has some input?
>
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new push 
> and be successful at this point seems low at best.
>
> I'd argue that pypy is already not supported, so dropping the non-voting job 
> doesn't seem like losing very much to me. Reworking cryptography libs again, 
> otoh, seems like a lot of work.
>
> Monty

On the other hand, I've been working with Intel on getting PyPy support in 
Swift (it works, just need to reenable that gate job), and I know they were 
working on support in other projects. Summary is that for Swift, we got around 
2x improvement (lower latency + higher throughput) by simply replacing CPython 
with PyPy. I believe similar gains in other projects were expected.

I've cc'd Peter from Intel who's been working on PyPy quite a bit. I know he'll 
be very interested in this discussion and will have valuable input.

--john




>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
> Just a couple things (I don't think it changes the decision made).
> 
> Cryptography does at least claim to support PyPy (see
> https://pypi.python.org/pypi/cryptography/ trove identifiers), so
> possibly a bug on their end that should get filed?
> 
> Also, our user clients should probably run under as many interpreters as
> possible to make life easier for end users; however, they currently
> depend on oslo and if oslo doesn't support pypy then likely not
> reasonable to do in user clients.
> 
> Clark
> 

This looks like it may be due to the version of pypy packaged for Xenial.

The pypi info for cryptography does state PyPy 5.3+ is supported. The latest
package I see for 16.04 is pypy/xenial-updates 5.1.2+dfsg-1~16.04.

Is there a way we can get a newer version loaded on our images? I would be
fine testing against it, though the lack of oslo support does give a high
risk that something else may break once we get past the cryptography issue.

Sean



[openstack-dev] FW: Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-05-31 Thread MCCABE, JAMEY A
Working group (WG) chairs or delegates, please enter your name (and WG name) 
and what times you could meet at this poll: 
https://beta.doodle.com/poll/6k36zgre9ttciwqz#table

As background and to share progress:

  *   We started and generally confirmed the desire to have a regular cross WG 
status meeting at the Boston Summit.
  *   Specifically the groups interested in Telco NFV and Fog Edge agreed to 
collaborate more often and in a more organized fashion.
  *   In e-mails, and then in today’s Operators Telco/NFV meeting, we finalized a 
proposal to have all WGs meet monthly for a high-level status and to bring the 
collaboration back to our individual WG sessions.
  *   the User Committee sessions are appropriate for the Monthly WG Status 
meeting
  *   more detailed coordination across Telco/NFV and Fog Edge groups should 
take place in the Operators Telco NFV WG meetings which already occur every 2 
weeks.
  *   we need participation of each WG Chair (or a delegate)
  *   we welcome and request the OPNFV and Linux Foundation and other WGs to 
join us in the cross WG status meetings

The Doodle was setup to gain concurrence for a time of week in which we could 
schedule and is not intended to be for a specific week.



Re: [openstack-dev] [horizon] Blueprint process question

2017-05-31 Thread Waines, Greg
Hey Rob,

Just thought I’d check in on whether Horizon team has had a chance to review 
the following blueprint:
https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar

The blueprint in Vitrage which the above Horizon blueprint depends on has been 
approved by Vitrage team.
i.e.   https://blueprints.launchpad.net/vitrage/+spec/alarm-counts-api

let me know if you’d like to setup a meeting to discuss,
Greg.

From: Rob Cresswell 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, May 18, 2017 at 11:40 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [horizon] Blueprint process question

There isn't a specific time for blueprint review at the moment. It's usually 
whenever I get time, or someone asks via email or IRC. During the weekly 
meetings we always have time for open discussion of bugs/blueprints/patches etc.

Rob

On 18 May 2017 at 16:31, Waines, Greg 
mailto:greg.wai...@windriver.com>> wrote:
A blueprint question for horizon team.

I registered a new blueprint the other day.
https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar

Do I need to do anything else to get this reviewed?  I don’t think so, but 
wanted to double check.
How frequently do horizon blueprints get reviewed?  once a week?

Greg.


p.s. ... the above blueprint does depend on a Vitrage blueprint which I do have 
in review.



Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Jiri Suchomel
V Wed, 31 May 2017 11:34:20 -0400
Eric Harney  napsáno:

> On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> > Hi,
> > it seems to me that the way of adding extra NFS options to the
> > cinder backend is somewhat confusing.
> > 
> > ...

> This has gotten a bit more confusing than is necessary in Cinder due
> to how the configuration for the NFS and related drivers has been
> tweaked over time.
> 
> The method of putting a list of shares in the nfs_shares_config file
> is effectively deprecated, but still works for now.
> ...

Thanks for the answer!
I should definitely try those specific host/path/options.
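For reference, the newer per-backend style Eric refers to looks roughly like
this (option names as I recall the cinder NAS driver options; the addresses
and paths are placeholders, so verify against the current cinder docs):

```ini
# cinder.conf backend section -- sketch, not verified
[nfs-1]
volume_backend_name = nfs-1
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# one share described directly, instead of a line in nfs_shares_config
nas_host = 192.168.1.10
nas_share_path = /export/cinder
nas_mount_options = vers=4,lookupcache=pos
```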

However, it seems that the (supposedly) deprecated way does not
really work how it should, so I filed a bug report: 

https://bugs.launchpad.net/cinder/+bug/1694758

Jiri

-- 
Jiri Suchomel

SUSE LINUX, s.r.o.



Re: [openstack-dev] [murano][barbican] Encrypting sensitive properties

2017-05-31 Thread Kirill Zaitsev
As long as this integration is optional (i.e. no barbican, no encryption) it 
feels ok to me. We have a very similar integration with congress, yet you can 
deploy murano with or without it.

As for the way to convey this, I believe metadata attributes were designed to 
answer use-cases like this one. see 
https://docs.openstack.org/developer/murano/appdev-guide/murano_pl/metadata.html
 for more info.
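As one concrete illustration of the optional-encryption idea in the quoted
proposal below, here is a sketch using the cryptography library's Fernet
recipe. This is not murano or Barbican code; with Barbican/Castellan the key
would come from the key manager rather than being generated locally, and the
property value would live in the object model rather than a local variable.

```python
from cryptography.fernet import Fernet

# In a real deployment the key would be fetched from Barbican via
# Castellan; generating it locally just keeps the sketch runnable.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a sensitive object-model property before storing it, and
# decrypt it when the engine needs the clear value again.
token = f.encrypt(b"appPassword-secret-value")
plaintext = f.decrypt(token)
print(plaintext == b"appPassword-secret-value")  # round-trip succeeds
```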

Regards, Kirill

> Le 25 мая 2017 г. à 18:49, Paul Bourke  a écrit :
> 
> Hi all,
> 
> I've been looking at a blueprint[0] logged for Murano which involves 
> encrypting parts of the object model stored in the database that may contain 
> passwords or sensitive information.
> 
> I wanted to see if people had any thoughts or preferences on how this should 
> be done. On the face of it, it seems Barbican is a good choice for solving 
> this, and have read a lengthy discussion around this on the mailing list from 
> earlier this year[1]. Overall the benefits of Barbican seem to be that we can 
> handle the encryption and management of secrets in a common and standard way, 
> and avoid having to implement and maintain this ourselves. The main drawback 
> for Barbican seems to be that we impose another service dependency on the 
> operator, though this complaint seems to be in some way appeased by 
> Castellan, which offers alternative backends to just Barbican (though unsure 
> right now what those are?). The alternative to integrating Barbican/Castellan 
> is to use a more lightweight "roll your own" encryption such as what Glance 
> is using[2].
> 
> After we decide on how we want to implement the encryption there is also the 
> question of how best to expose this feature to users. My current thought is 
> that we can use Murano attributes, so application authors can do something 
> like this:
> 
> - name: appPassword
>  type: password
>  encrypt: true
> 
> This would of course be transparent to the end user of the application. Any 
> thoughts on both issues are very welcome, I hope to have a prototype in the 
> next few days which may help solidify this also.
> 
> Regards,
> -Paul.
> 
> [0] 
> https://blueprints.launchpad.net/murano/+spec/allow-encrypting-of-muranopl-properties
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-January/110192.html
> [2] 
> https://github.com/openstack/glance/blob/48ee8ef4793ed40397613193f09872f474c11abe/glance/common/crypt.py
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] deploy software on Openstack controller on the Overcloud

2017-05-31 Thread Dnyaneshwar Pawar
Hi Alex,

Currently we have puppet modules[0] to configure our software, which has 
components on the Openstack Controller, Cinder node and Nova node.
As per the document[1], we successfully tried out role-specific configuration[2].

So, does it mean that if we have an overcloud image with our packages built in 
and we call our configuration scripts using role-specific configuration, we may 
not need the puppet modules[0]? Is that an acceptable deployment method?

[0] https://github.com/abhishek-kane/puppet-veritas-hyperscale
[1] 
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html
[2] http://paste.openstack.org/show/66/

Thanks,
Dnyaneshwar

On 5/30/17, 6:52 PM, "Alex Schultz" 
mailto:aschu...@redhat.com>> wrote:

On Mon, May 29, 2017 at 5:05 AM, Dnyaneshwar Pawar
mailto:dnyaneshwar.pa...@veritas.com>> wrote:
Hi,

I am trying to deploy software on the openstack controller on the overcloud.
One way to do this is by modifying the ‘overcloud image’ so that all packages 
of our software are added to the image, and then running overcloud deploy.
The other way is to write a heat template and puppet module which will deploy 
the required packages.

Question: Which of above two approaches is better?

Note: Configuration part of the software will be done via separate heat
template and puppet module.


Usually you do both.  Depending on how the end user is expected to
deploy, if they are using the TripleoPackages service[0] in their
role, the puppet installation of the package won't actually work (we
override the package provider to noop) so it needs to be in the
images.  That being said, usually there is also a bit of puppet that
needs to be written to configure the end service and as a best
practice (and for development purposes), it's a good idea to also
capture the package in the manifest.

Thanks,
-Alex

[0] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml


Thanks and Regards,
Dnyaneshwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Jeremy Stanley
On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
[...]
> Trademark programs are trademark programs - we should have a unified
> process for all of them. Let's not make the same mistakes again by
> creating classes of projects / programs. I do not want this to be
> a distinction as we move forward.

This I agree with. However I'll be surprised if a majority of the QA
team disagrees on this point (logistical concerns with how to curate
this over time I can understand, but that just means they need to
interest some people in working on a manageable solution).
-- 
Jeremy Stanley




Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread John Dickinson


On 31 May 2017, at 5:34, Monty Taylor wrote:

> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
>> On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
>>> We had a discussion a few months back around what to do for cryptography
>>> since pycrypto is basically dead [1]. After some discussion, at least on
>>> the Cinder project, we decided the best way forward was to use the
>>> cryptography package instead, and work has been done to completely remove
>>> pycrypto usage.
>>>
>>> It all seemed like a good plan at the time.
>>>
>>> I now notice that for the python-cinderclient jobs, there is a pypy job
>>> (non-voting!) that is failing because the cryptography package is not
>>> supported with pypy.
>>>
>>> So this leaves us with two options I guess. Change the cryto library again,
>>> or drop support for pypy.
>>>
>>> I am not aware of anyone using pypy, and there are other valid working
>>> alternatives. I would much rather just drop support for it than redo our
>>> crypto functions again.
>>>
>>> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
>>> some input?
>
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new push 
> and be successful at this point seems low at best.
>
> I'd argue that pypy is already not supported, so dropping the non-voting job 
> doesn't seem like losing very much to me. Reworking cryptography libs again, 
> otoh, seems like a lot of work.
>
> Monty

On the other hand, I've been working with Intel on getting PyPy support in 
Swift (it works, just need to reenable that gate job), and I know they were 
working on support in other projects. Summary is that for Swift, we got around 
2x improvement (lower latency + higher throughput) by simply replacing CPython 
with PyPy. I believe similar gains in other projects were expected.

I've cc'd Peter from Intel who's been working on PyPy quite a bit. I know he'll 
be very interested in this discussion and will have valuable input.

--john




>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-05-31 Thread Graham Hayes
On 31/05/17 16:45, Jeremy Stanley wrote:
> On 2017-05-31 15:22:59 + (+), Jeremy Stanley wrote:
>> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
>> [...]
>>> it's news to me that they're considering reversing course. If the
>>> QA team isn't going to continue, we'll need to figure out what
>>> that means and potentially find another group to do it.
>>
>> I wasn't there for the discussion, but it sounds likely to be a
>> mischaracterization. I'm going to assume it's not true (or much more
>> nuanced) at least until someone responds on behalf of the QA team.
>> This particular subthread is only going to go further into the weeds
>> until it is grounded in some authoritative details.
> 
> Apologies for replying to myself, but per discussion[*] with Chris
> in #openstack-dev I'm adjusting the subject header to make it more
> clear which particular line of speculation I consider weedy.
> 
> Also in that brief discussion, Graham made it slightly clearer that
> he was talking about pushback on the tempest repo getting tests for
> new trademark programs beyond "OpenStack Powered Platform,"
> "OpenStack Powered Compute" and "OpenStack Powered Object Storage."

Trademark programs are trademark programs - we should have a unified
process for all of them. Let's not make the same mistakes again by
creating classes of projects / programs. I do not want this to be
a distinction as we move forward.

> [*] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-31.log.html#t2017-05-31T15:28:07
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 






Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-31 Thread Amrith Kumar

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, May 31, 2017 12:00 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master
> 
> On 05/31/2017 02:14 AM, Clint Byrum wrote:
> > Either way, it should be much simpler to manage slave lag than to deal
> > with a Galera cluster that won't accept any writes at all because it
> > can't get quorum.
> 
> Would CockroachDB be any better at achieving quorum?
> 
> Genuinely curious. :)

[Amrith Kumar] The last I read about this was in [1], and it sounded like it 
would be no better or worse.

[1] https://www.cockroachlabs.com/blog/consensus-made-thrive/

> 
> Best,
> -jay
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [gnocchi][collectd-ceilometer-plugin][ceilometer][rally] Gates with gnocchi+devstack will break

2017-05-31 Thread Julien Danjou
Hi,

If you're consuming Gnocchi via its devstack in the gate, you'll need to
change that soon. As the repository has been moved to GitHub and the
infra team not wanting to depend on GitHub (external) repositories,
you'll need to set up Gnocchi via pip.

I've started doing the work for Ceilometer here:

  https://review.openstack.org/#/c/468844/
  https://review.openstack.org/#/c/468876/

If it can inspire you!

As soon as https://review.openstack.org/#/c/466317/ is merged, your jobs
will break.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




[openstack-dev] [monasca] Monasca at PTG in Denver

2017-05-31 Thread witold.be...@est.fujitsu.com
Hi,

I wanted to ask which of you would be interested in attending dedicated Monasca 
sessions during the next Project Teams Gathering in Denver, September 11-15, 
2017 [1]. The team room would probably be booked for one or two days. 
Alternatively, we could organize a remote mid-cycle meeting, as we did 
previously.

Please fill the form as soon as possible:
https://goo.gl/forms/7xpgZFrcUCWhqnEg2

[1] https://www.openstack.org/ptg/



Cheers
Witek



Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-31 Thread Jay Pipes

On 05/31/2017 02:14 AM, Clint Byrum wrote:

Either way, it should be much simpler to manage slave lag than to deal
with a Galera cluster that won't accept any writes at all because it
can't get quorum.


Would CockroachDB be any better at achieving quorum?

Genuinely curious. :)

Best,
-jay





Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-31 Thread Hongbin Lu
Please find my replies inline.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang



On 30 May 2017 at 15:26, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Please consider leveraging Fuxi instead.

Is there any functionality missing from rexray?

[Hongbin Lu] From my understanding, Rexray targets the overcloud use cases 
and assumes that containers are running on top of Nova instances. You mentioned 
Magnum is leveraging Rexray for Cinder integration. Actually, I am the core 
reviewer who reviewed and approved those Rexray patches. From what I observed, 
the functionality provided by Rexray is minimal. What it does is simply 
call the Cinder API to find an existing volume, attach the volume to the Nova 
instance, and let Docker bind-mount the volume into the container. At the time 
I was testing it, it seemed to have some mysterious bugs that prevented me from 
getting the cluster to work. It was also packaged as a large container image, 
which could take more than 5 minutes to pull down. With that said, Rexray might 
be a choice for someone who is looking for a cross-cloud-provider solution. 
Fuxi will focus on OpenStack and target both overcloud and undercloud use 
cases. That means Fuxi can work with Nova+Cinder or a standalone Cinder. As 
John pointed out in another reply, another benefit of Fuxi is resolving the 
fragmentation problem of existing solutions. Those are the differentiators of 
Fuxi.

The Kuryr/Fuxi team is working very hard to deliver the Docker network/storage 
plugins. I hope you will work with us to get them integrated with 
Magnum-provisioned clusters.

Patches are welcome to support fuxi as an *option* instead of rexray, so users 
can choose.

Currently, COE clusters provisioned by Magnum are far from enterprise-ready. I 
think the Magnum project will be better off if it adopts Kuryr/Fuxi, which will 
give you better OpenStack integration.

Best regards,
Hongbin

fuxi feature request: Add authentication using a trustee and a trustID.

[Hongbin Lu] I believe this is already supported.

Cheers,
Spyros


From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support Manila too. 
Rexray
also supports the popular cloud providers.

Magnum's Docker Swarm cluster driver already leverages rexray for cinder 
integration [3].

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen 
mailto:chenzeng...@163.com>> wrote:
Hi John & Ben:
 I have submitted a patch [1] to add a new repository to OpenStack. Please take 
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen



On 2017-05-26 21:30:48, "John Griffith" 
mailto:john.griffi...@gmail.com>> wrote:


On Thu, May 25, 2017 at 10:01 PM, zengchen 
mailto:chenzeng...@163.com>> wrote:

Hi John:
I have seen your updates on the bp. I agree with your plan on how to 
develop the code.
However, there is one issue I have to remind you of: at present, Fuxi can 
expose not only Cinder volumes to Docker, but also Manila shares. So, do you 
plan to include the Manila part of the code in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander

is interested, we can check with him and make sure but I certainly hope that 
Manila would be interested.
Besides, IMO, it is better to create a new repository for Fuxi-golang, because
 Fuxi is an OpenStack project.
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charging ahead on new repos 
etc.  Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen


At 2017-05-25 22:47:29, "John Griffith" 
mailto:john.griffi...@gmail.com>> wrote:


On Thu, May 25, 2017 at 5:50 AM, zengchen 
mailto:chenzeng...@163.com>> wrote:
Very sorry, I forgot to attach the link for the bp of rewriting Fuxi in the Go 
language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
mailto:chenzeng...@163.com>> wrote:
Hi guys:
hongbin has committed a bp of rewriting Fuxi in the Go language [1]. My 
question is where to commit the code for it.
We have two choices: 1. create a new repository, 2. create a new branch.  IM

[openstack-dev] [qa][tc][all] Tempest to reject trademark tests (was: more tempest plugins)

2017-05-31 Thread Jeremy Stanley
On 2017-05-31 15:22:59 + (+), Jeremy Stanley wrote:
> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > it's news to me that they're considering reversing course. If the
> > QA team isn't going to continue, we'll need to figure out what
> > that means and potentially find another group to do it.
> 
> I wasn't there for the discussion, but it sounds likely to be a
> mischaracterization. I'm going to assume it's not true (or much more
> nuanced) at least until someone responds on behalf of the QA team.
> This particular subthread is only going to go further into the weeds
> until it is grounded in some authoritative details.

Apologies for replying to myself, but per discussion[*] with Chris
in #openstack-dev I'm adjusting the subject header to make it more
clear which particular line of speculation I consider weedy.

Also in that brief discussion, Graham made it slightly clearer that
he was talking about pushback on the tempest repo getting tests for
new trademark programs beyond "OpenStack Powered Platform,"
"OpenStack Powered Compute" and "OpenStack Powered Object Storage."

[*] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-31.log.html#t2017-05-31T15:28:07
-- 
Jeremy Stanley




Re: [openstack-dev] [EXTERNAL] Re: [TripleO] custom configuration to overcloud fails second time

2017-05-31 Thread Dnyaneshwar Pawar
Hi Ben,

On 5/31/17, 8:06 PM, "Ben Nemec" 
mailto:openst...@nemebean.com>> wrote:

I think we would need to see what your custom config templates look like
as well.

Custom config templates: http://paste.openstack.org/show/64/


Also note that it's generally not recommended to drop environment files
from your deploy command unless you explicitly want to stop applying
them.  So if you applied myconfig_1.yaml and then later want to apply
myconfig_2.yaml your deploy command should look like: openstack
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml

Yes, I agree. But in my case, even though I dropped myconfig_1.yaml while 
applying myconfig_2.yaml, the config from step 1 remained unchanged.

On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:
Hi TripleO Experts,
I performed following steps -

  1. openstack overcloud deploy --templates -e myconfig_1.yaml
  2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1 successfully applied the custom configuration to the overcloud.
Step 2 completed successfully, but the custom configuration was not applied to
the overcloud, and the configuration applied by step 1 remained unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar




Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Eric Harney
On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> Hi,
> it seems to me that the way of adding extra NFS options to the cinder
> backend is somewhat confusing.
> 
> 1. There is  nfs_mount_options in cinder config file [1]
> 
> 2. Then I can put my options in the nfs_shares_config file - the docs
> mention that it can contain additional options [2], as does the
> commit message that added the feature [3]
> 
> Now, when I put my options to both of these places, cinder-volume
> actually uses them twice and executes the command like this
> 
> mount -t nfs -o nfsvers=3 -o nfsvers=3
> 192.168.241.10:/srv/nfs/vi7/cinder 
> /var/lib/cinder/mnt/f5689da9ea41a66eff2ce0ef89b37bce
> 
> BTW, the options coming from nfs_shares_config are called 'flags' by
> cinder/volume/drivers/nfs ([4]).
> 
> Now, to make it more fun, when I actually want to attach a volume to a
> running instance, nova uses a different way of determining which NFS
> options to use:
> 
> - It reads them from _nova_ config option of libvirt.nfs_mount_options
> [5]
> - or it uses those it gets from cinder when creating the cinder
> connection [6]. But these are only the options defined in the
> nfs_shares_config file, NOT the nfs_mount_options specified in the cinder
> config file.
> 
> 
> So. If I put my options in both places, the nfs_shares_config file and
> nfs_mount_options, it actually works how I want it to work, as
> the current mount does not complain that the option was provided twice. 
> 
> But it looks ugly. And I'm wondering - am I doing it wrong, or
> is there a problem with either cinder or nova (or both)?
> 

This has gotten a bit more confusing than is necessary in Cinder due to
how the configuration for the NFS and related drivers has been tweaked
over time.

The method of putting a list of shares in the nfs_shares_config file is
effectively deprecated, but still works for now.

The preferred method now is to set the following options:
   nas_host:  server address
   nas_share_path:  export path
   nas_mount_options:  options for mounting the export

So whereas before the nfs_shares_config file would have:
   127.0.0.1:/srv/nfs1 -o nfsvers=3

This would now translate to:
   nas_host=127.0.0.1
   nas_share_path=/srv/nfs1
   nas_mount_options = -o nfsvers=3

I believe if you try configuring the driver this way, you will get the
desired result.
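To make that concrete, a minimal cinder.conf backend section using the nas_* 
options might look like the sketch below (the backend name and NFS server 
details are illustrative assumptions, not a recommendation):

```ini
[nfs-1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs-1
nas_host = 192.168.241.10
nas_share_path = /srv/nfs/vi7/cinder
nas_mount_options = -o nfsvers=3
```

With the options expressed once, in one place, the driver should not end up 
passing the same flag to mount twice.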

The goal was to remove the nfs_shares_config config method, but this
hasn't happened yet -- I/we need to revisit this area and see about
doing this.

Eric

> 
> Jiri
> 
> 
> [1] https://docs.openstack.org/admin-guide/blockstorage-nfs-backend.html
> [2]
> https://docs.openstack.org/newton/config-reference/block-storage/drivers/nfs-volume-driver.html
> [3]
> https://github.com/openstack/cinder/commit/553e0d92c40c73aa1680743c4287f31770131c97
> [4]
> https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163
> [5]
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L87
> [6] 
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L89
> 




Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-31 Thread Emilien Macchi
On Tue, May 30, 2017 at 11:11 PM, Matthew Thode
 wrote:
> On 05/30/2017 04:08 PM, Emilien Macchi wrote:
>> On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
>>  wrote:
>>> We have a problem in requirements that projects that don't have the
>>> cycle-with-intermediary release model (most of the cycle-with-milestones
>>> model) don't get integrated with requirements until the cycle is fully
>>> done.  This causes a few problems.
>>>
>>> * These projects don't produce a consumable release for requirements
>>> until end of cycle (which does not accept beta releases).
>>>
>>> * The former causes old requirements to be kept in place, meaning caps,
>>> exclusions, etc. are being kept, which can cause conflicts.
>>>
>>> * Keeping the old version in requirements means that cross dependencies
>>> are not tested with updated versions.
>>>
>>> This has hit us with the mistral and tripleo projects particularly
>>> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
>>> mistral sqlalchemy updates.
>>>
>>> [mistral]
>>> mistral - blocking sqlalchemy - milestones
>>>
>>> [tripleo]
>>> os-refresh-config - blocking pbr - milestones
>>> os-apply-config - blocking pbr - milestones
>>> os-collect-config - blocking pbr - milestones
>>
>> These are cycle-with-milestones, like os-net-config for example,
>> which wasn't mentioned in this email. They have the same releases as
>> os-net-config, so I'm confused about why these 3 cause an issue; I
>> probably missed something.
>>
>> Anyway, I'm happy to change os-*-config (from TripleO) to be
>> cycle-with-intermediary. Quick question though, which tag would you
>> like to see, regarding what we already did for pike-1?
>>
>> Thanks,
>>
>
> Pike is fine, as it's just master that has this issue.  The problem is
> that the latest release blocks the pbr version from upper-constraints,
> making it not co-installable.

Done, please review: https://review.openstack.org/#/c/469530/

Thanks,

>>> [nova]
>>> os-vif - blocking pbr - intermediary
>>>
>>> [horizon]
>>> django-openstack-auth - blocking django - intermediary
>>>
>>>
>>> So, here's what needs doing.
>>>
>>> Those projects that are already using the cycle-with-intermediary model
>>> should just do a release.
>>>
>>> For those that are using cycle-with-milestones, you will need to change
>>> to the cycle-with-intermediary model, and do a full release, both can be
>>> done at the same time.
>>>
>>> If anyone has any questions or wants clarifications this thread is good,
>>> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>>>
>>> --
>>> Matthew Thode (prometheanfire)
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Matthew Thode (prometheanfire)
>



-- 
Emilien Macchi



Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Jeremy Stanley
On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
[...]
> it's news to me that they're considering reversing course. If the
> QA team isn't going to continue, we'll need to figure out what
> that means and potentially find another group to do it.

I wasn't there for the discussion, but it sounds likely to be a
mischaracterization. I'm going to assume it's not true (or much more
nuanced) at least until someone responds on behalf of the QA team.
This particular subthread is only going to go further into the weeds
until it is grounded in some authoritative details.
-- 
Jeremy Stanley




Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Clark Boylan
On Wed, May 31, 2017, at 07:39 AM, Sean McGinnis wrote:
> On Wed, May 31, 2017 at 09:47:37AM -0400, Doug Hellmann wrote:
> > Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> > > On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > > > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> > > >>
> > > >> I am not aware of anyone using pypy, and there are other valid working
> > > >> alternatives. I would much rather just drop support for it than redo 
> > > >> our
> > > >> crypto functions again.
> > > >>
> > > 
> > 
> > This question came up recently for the Oslo libraries, and I think we
> > also agreed that pypy support was not being actively maintained.
> > 
> > Doug
> > 
> 
> Thanks Doug. If oslo does not support pypy, then I think that makes the
> decision for me. I will put up a patch to get rid of that job and stop
> wasting infra resources on it.

Just a couple things (I don't think it changes the decision made).

Cryptography does at least claim to support PyPy (see
https://pypi.python.org/pypi/cryptography/ trove identifiers), so
possibly a bug on their end that should get filed?

Also, our user clients should probably run under as many interpreters as
possible to make life easier for end users; however, they currently
depend on oslo, and if oslo doesn't support pypy then it's likely not
reasonable to do in the user clients.

Clark



Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
On Wed, May 31, 2017 at 09:47:37AM -0400, Doug Hellmann wrote:
> Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> > On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> > >>
> > >> I am not aware of anyone using pypy, and there are other valid working
> > >> alternatives. I would much rather just drop support for it than redo our
> > >> crypto functions again.
> > >>
> > 
> 
> This question came up recently for the Oslo libraries, and I think we
> also agreed that pypy support was not being actively maintained.
> 
> Doug
> 

Thanks Doug. If oslo does not support pypy, then I think that makes the
decision for me. I will put up a patch to get rid of that job and stop
wasting infra resources on it.

I was hoping this would be the answer. ;)

Sean



Re: [openstack-dev] [tc][kolla][stable][security][infra][all] guidelines for managing releases of binary artifacts

2017-05-31 Thread Steven Dake (stdake)
Doug,

Thanks for the resolution.  It is well written and sets appropriate guidelines. 
 I was expecting something terrible – I guess I shouldn’t have expectations 
ahead of a resolution.  Apologies for being a jerk on the ml.

Nice work.

I left some commentary in the review and left a vote of -1 (just a few things 
need tidying, not a -1 of the concept in general).

Regards
-steve


-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 30, 2017 at 2:54 PM
To: openstack-dev 
Subject: [openstack-dev] [tc][kolla][stable][security][infra][all]  
guidelines for managing releases of binary artifacts

Based on two other recent threads [1][2] and some discussions on
IRC, I have written up some guidelines [3] that try to address the
concerns I have with us publishing binary artifacts while still
allowing the kolla team and others to move ahead with the work they
are trying to do.

I would appreciate feedback about whether these would complicate
builds or make them impossible, as well as whether folks think they
go far enough to mitigate the risks described in those email threads.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116677.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117282.html
[3] https://review.openstack.org/#/c/469265/



Re: [openstack-dev] [TripleO] custom configuration to overcloud fails second time

2017-05-31 Thread Ben Nemec
I think we would need to see what your custom config templates look like 
as well.


Also note that it's generally not recommended to drop environment files 
from your deploy command unless you explicitly want to stop applying 
them.  So if you applied myconfig_1.yaml and then later want to apply 
myconfig_2.yaml your deploy command should look like: openstack 
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml
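For anyone following along, a custom-config environment file of the kind being 
discussed typically hooks an extra-config template in via the 
resource_registry. A minimal sketch (the file and template names here are 
assumptions for illustration, not the actual files in question):

```yaml
# myconfig_1.yaml -- hypothetical minimal custom-config environment file
resource_registry:
  # Run an extra post-deployment configuration template on the nodes
  OS::TripleO::NodeExtraConfigPost: my_post_config_1.yaml
```

Since -e files are merged in order, a later file overrides an earlier 
registration of the same resource, which is another reason both files should 
normally stay on the deploy command line.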


On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:

Hi TripleO Experts,
I performed following steps -

 1. openstack overcloud deploy --templates -e myconfig_1.yaml
 2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1 successfully applied the custom configuration to the overcloud.
Step 2 completed successfully, but the custom configuration was not applied to
the overcloud, and the configuration applied by step 1 remained unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar




Re: [openstack-dev] Changing project schemas in patches; upgrade implications

2017-05-31 Thread Amrith Kumar
Thx Monty, jroll, smcginnis, zzzeek_ ...


-amrith

--
Amrith Kumar
Phone: +1-978-563-9590


On Wed, May 31, 2017 at 10:17 AM, Monty Taylor  wrote:

> On 05/31/2017 08:51 AM, Amrith Kumar wrote:
>
>> This email thread relates to[1], a change that aims to improve cross-SQL
>> support in project schemas.
>>
>> I want to explicitly exclude the notion of getting rid of support for
>> PostgreSQL in the underlying project schemas, a topic that was discussed at
>> the summit[2].
>>
>> In this change, the author (Thomas Bechtold, copied on this thread) makes
>> the comment that the change "is not changing the schema. It just avoids
>> implicit type conversion".
>>
>> It has long been my understanding that changes like this are not upgrade
>> friendly as it could lead to two installations both with, say version 37 or
>> 38 of the schema, but different table structures. In effect, this change
>> breaks upgradability of systems.
>>
>> i.e. a deployment which had a schema from the install of Ocata would have
>> a v38 table modules table with a default of 0 and one installed with Pike
>> (should this change be accepted) would have a modules table with a default
>> of False.
>>
>
> I agree that if that was the case this would be bad. But I don't think
> it's the case here.
>
> The datatype in the model is already Boolean. So I believe that means this
> will be a tinyint in MySQL and likely a boolean in PG (I'm guessing). The
> only change here is to the SQLA layer and what is being used in code - and
> being more explicit seems good.
>
> So I think this is a win.
>
> I'm raising this issue on the ML because the author also claims (albeit
>> not verified by me) that other projects have accepted changes like this.
>>
>
> Thanks! I think this is an area we need to be careful in - and extra
> eyeballs are a good thing.
>
> I submit to you that the upgrade friendly way of making this change would
>> be to propose a new version of the schema which alters all of these tables
>> and includes the correct default value. On a fresh install, with no data,
>> the upgrade step with this new schema version would bring the table to the
>> right default value and any system with that version of the schema would
>> have an identical set of defaults. Similarly any system with v37 or 38 of
>> the schema would have identical defaults.
>>
>
> Yes - I agree - that would definitely be the right way to do this if there
> was a model change.
>
> What's the advice of the community on this change; I've explicitly added
>> stable-maint-core as reviewers on this change as it does have stable branch
>> upgrade implications.
>>
>> -amrith
>>
>> [1] https://review.openstack.org/#/c/467080/
>> [2] https://etherpad.openstack.org/p/BOS-postgresql
>>
>> --
>> Amrith Kumar
>> Phone: +1-978-563-9590
>>
>>
>>
>> 


[openstack-dev] [qa] No QA meeting tomorrow 01/06/2017 9.00UTC

2017-05-31 Thread Ghanshyam Mann
Hi All,

As most of the QA members are at the Open Source Summit in Tokyo, I propose
to cancel tomorrow's (01/06/2017 9.00 UTC) QA meeting.
-gmann



Re: [openstack-dev] Changing project schemas in patches; upgrade implications

2017-05-31 Thread Monty Taylor

On 05/31/2017 08:51 AM, Amrith Kumar wrote:
This email thread relates to[1], a change that aims to improve cross-SQL 
support in project schemas.


I want to explicitly exclude the notion of getting rid of support for 
PostgreSQL in the underlying project schemas, a topic that was discussed 
at the summit[2].


In this change, the author (Thomas Bechtold, copied on this thread) 
makes the comment that the change "is not changing the schema. It just 
avoids implicit type conversion".


It has long been my understanding that changes like this are not upgrade 
friendly as it could lead to two installations both with, say version 37 
or 38 of the schema, but different table structures. In effect, this 
change breaks upgradability of systems.


i.e. a deployment which had a schema from the install of Ocata would 
have a v38 table modules table with a default of 0 and one installed 
with Pike (should this change be accepted) would have a modules table 
with a default of False.


I agree that if that was the case this would be bad. But I don't think 
it's the case here.


The datatype in the model is already Boolean. So I believe that means 
this will be a tinyint in MySQL and likely a boolean in PG (I'm 
guessing). The only change here is to the SQLA layer and what is being 
used in code - and being more explicit seems good.


So I think this is a win.
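As a quick illustration of why an integer-vs-boolean default is not an on-disk 
schema change, here is a plain-SQLite sketch (the table and column names are 
hypothetical, not the project's actual schema): both spellings of the default 
store the same value.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Two hypothetical "modules"-style tables: one declaring the old implicit
# integer default, one declaring an explicit boolean default.
con.execute(
    "CREATE TABLE modules_old (id INTEGER PRIMARY KEY, auto_apply BOOLEAN DEFAULT 0)")
con.execute(
    "CREATE TABLE modules_new (id INTEGER PRIMARY KEY, auto_apply BOOLEAN DEFAULT FALSE)")
con.execute("INSERT INTO modules_old (id) VALUES (1)")
con.execute("INSERT INTO modules_new (id) VALUES (1)")
old = con.execute("SELECT auto_apply FROM modules_old").fetchone()[0]
new = con.execute("SELECT auto_apply FROM modules_new").fetchone()[0]
print(old, new)  # both defaults are stored as 0
```

(SQLite only recognizes the bare FALSE literal in recent versions; MySQL and 
PostgreSQL behave analogously with tinyint and boolean respectively.)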

I'm raising this issue on the ML because the author also claims (albeit 
not verified by me) that other projects have accepted changes like this.


Thanks! I think this is an area we need to be careful in - and extra 
eyeballs are a good thing.


I submit to you that the upgrade friendly way of making this change 
would be to propose a new version of the schema which alters all of 
these tables and includes the correct default value. On a fresh install, 
with no data, the upgrade step with this new schema version would bring 
the table to the right default value and any system with that version of 
the schema would have an identical set of defaults. Similarly any system 
with v37 or 38 of the schema would have identical defaults.


Yes - I agree - that would definitely be the right way to do this if 
there was a model change.


What's the advice of the community on this change; I've explicitly added 
stable-maint-core as reviewers on this change as it does have stable 
branch upgrade implications.


-amrith

[1] https://review.openstack.org/#/c/467080/
[2] https://etherpad.openstack.org/p/BOS-postgresql

--
Amrith Kumar
Phone: +1-978-563-9590





Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-31 Thread zengchen
Hi, Spyros:
Recently, Rexray has become able to supply volumes for Docker by integrating 
with Cinder. That is great!  However, compared to Fuxi, Rexray is a little 
heavier, because it must depend on Libstorage to communicate with Cinder. 
Fuxi-golang is just a new project which re-implements Fuxi in the Go language; 
from the standpoint of Fuxi, Fuxi-golang is just its 'sister'.


By the way, you said you have been integrating Manila into Libstorage, but I 
don't see the relevant materials at the link [1]. Is the link wrong, or am I 
missing something? Could you give more details about your work on integrating 
Manila?  Thanks very much!


Best Wishes!
zengchen
[1] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata





At 2017-05-30 19:47:26, "Spyros Trigazis"  wrote:

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support Manila too. 
Rexray
also supports the popular cloud providers.

Magnum's Docker Swarm cluster driver already leverages rexray for cinder 
integration [3].

Cheers,
Spyros 


[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0 
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata


On 27 May 2017 at 12:15, zengchen  wrote:

Hi John & Ben:
 I have committed a patch[1] to add a new repository to Openstack. Please take 
a look at it. Thanks very much!


 [1]: https://review.openstack.org/#/c/468635


Best Wishes!
zengchen






On 2017-05-26 21:30:48, "John Griffith"  wrote:





On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:



Hi John:
I have seen your updates on the bp. I agree with your plan on how to 
develop the code.
However, there is one issue I have to remind you of: at present, Fuxi can 
expose not only Cinder volumes to Docker, but also Manila shares. So, do you 
plan to include the Manila part of the code in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben
Swartzlander is interested; we can check with him to make sure, but I
certainly hope that Manila would be interested.
Besides, IMO, it is better to create a repository for Fuxi-golang, because
Fuxi is an OpenStack project.
Yeah, that seems fine; I just didn't know if there needed to be any more
conversation with other folks on any of this before charging ahead on new
repos etc.  Doesn't matter much to me though.
 


   Thanks very much!


Best Wishes!
zengchen





At 2017-05-25 22:47:29, "John Griffith"  wrote:





On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:

Very sorry, I forgot to attach the link for the bp of rewriting Fuxi in the
Go language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang




At 2017-05-25 19:46:54, "zengchen"  wrote:

Hi guys:
hongbin has committed a bp of rewriting Fuxi in the Go language [1]. My
question is where the code for it should live.
We have two choices: 1. create a new repository, or 2. create a new branch.
IMO, the first one is much better, because there are many differences at the
infrastructure layer, such as CI.  What's your opinion? Thanks very much
 
Best Wishes
zengchen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Zengchen,


For now I was thinking we just use GitHub and PRs outside of the OpenStack
projects to bootstrap things and see how far we can get.  I'll update the BP
this morning with what I believe to be the key tasks to work through.


Thanks,
John













Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-31 Thread Lance Bragstad
On Fri, May 26, 2017 at 10:21 AM, Sean Dague  wrote:

> On 05/26/2017 10:44 AM, Lance Bragstad wrote:
> 
> > Interesting - I guess the way I was thinking about it was on a per-token
> > basis, since today you can't have a single token represent multiple
> > scopes. Would it be unreasonable to have oslo.context build this
> > information based on multiple tokens from the same user, or is that a
> > bad idea?
>
> No service consumer is interacting with Tokens. That's all been
> abstracted away. All the code in the consumers is interested in is
> the context representation.
>
> Which is good, because then the important parts are figuring out the
> right context interface to consume. And the right Keystone front end to
> be explicit about what was intended by the operator "make jane an admin
> on compute in region 1".
>
> And the middle can be whatever works best on the Keystone side. As long
> as the details of that aren't leaked out, it can also be refactored in
> the future by having keystonemiddleware+oslo.context translate to the
> known interface.
>

Ok - I think that makes sense. So if I copy/paste your example from earlier
and modify it a bit ( s/is_admin/global/)::

{
   "user": "me!",
   "global": True,
   "roles": ["admin", "auditor"],
   
}

Or

{
   "user": "me!",
   "global": True,
   "roles": ["reader"],
   
}

That might be one way we can represent global roles through
oslo.context/keystonemiddleware. The library would be on the hook for
maintaining the mapping of token scope to context scope, which makes sense:

if token['is_global'] == True:
    context.global = True
elif token['domain_scoped']:
    # domain scoping?
else:
    # handle project scoping

I need to go dig into oslo.context a bit more to get familiar with how this
works at the project level, because if I understand correctly, oslo.context
currently doesn't relay global scope, and that will need to be done before
this work is useful, regardless of whether we go with option #1, #2, or
especially #3.
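For illustration, here is a minimal sketch of the token-to-context translation
being discussed; all field names (`is_global`, `domain_id`, etc.) are
assumptions for this sketch, not the actual keystonemiddleware/oslo.context
interface:

```python
# Hypothetical sketch of mapping token scope onto a context structure.
# Field names are illustrative assumptions, not the real oslo.context API.

def build_context(token):
    """Translate a (simplified) token payload into a context dict."""
    context = {
        "user": token["user"],
        "roles": token.get("roles", []),
        "global": False,
        "domain_id": None,
        "project_id": None,
    }
    if token.get("is_global"):
        context["global"] = True
    elif token.get("domain_id"):
        context["domain_id"] = token["domain_id"]        # domain-scoped token
    else:
        context["project_id"] = token.get("project_id")  # project-scoped token
    return context


print(build_context({"user": "me!", "is_global": True,
                     "roles": ["admin", "auditor"]}))
```

The point being that consumers only ever see the resulting context, while the
token-to-context mapping stays inside the library.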



> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Matt Riedemann

On 5/31/2017 6:58 AM, Gorka Eguileor wrote:

Hi,

As some of you may know, I've been working on improving iSCSI connections
in OpenStack to make them more robust and prevent them from leaving
leftovers on attach/detach operations.

There are a couple of posts [1][2] going in more detail, but a good
summary would be that to fix this issue we require a considerable rework
in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.

Relevant changes for those projects are:

- Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
   case, so a new feature was added to disable automatic scans that added
   unintended devices to the systems.  Done and merged [3][4], it will be
   available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7

- OS-Brick: rework iSCSI to make it robust on unreliable networks, to
   add a `force` detach option that prioritizes leaving a clean system
   over possible data loss, and to support the new Open iSCSI feature.
   Done and pending review [5][6][7]

- Cinder: Handle some attach/detach errors a little better and add
   support to the force detach option for some operations where data loss
   on error is acceptable, ie: create volume from image, restore backup,
   etc. Done and pending review [8][9]

- Nova: I haven't looked into the code here, but I'm sure there will be
   cases where using the force detach operation will be useful.

- Tests: While we do have tempest tests that verify that attach/detach
   operations work both in Nova and in cinder volume creation operations,
   they are not meant to test the robustness of the system, so new tests
   will be required to validate the code.  Done [10]

Proposed tests are simplified versions of the ones I used to validate
the code; but hey, at least these are somewhat readable ;-)
Unfortunately they are not in line with the tempest mission since they
are not meant to be run in a production environment due to their
disruptive nature while injecting errors.  They need to be run
sequentially and without any other operations running on the deployment.
They also run sudo commands via local bash or SSH for the verification
and error generation bits.

We are testing create volume from image and attaching a volume to an
instance under the following networking error scenarios:

  - No errors
  - All paths have 10% incoming packets dropped
  - All paths have 20% incoming packets dropped
  - All paths have 100% incoming packets dropped
  - Half the paths have 20% incoming packets dropped
  - The other half of the paths have 20% incoming packets dropped
  - Half the paths have 100% incoming packets dropped
  - The other half of the paths have 100% incoming packets dropped

There are single execution versions as well as 10 consecutive operations
variants.
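
For reference, packet-drop scenarios like those above can be injected with
the iptables `statistic` match; the sketch below only builds the command
lines (the portal address and iSCSI port are illustrative), since the real
tests execute them via sudo or SSH:

```python
# Hedged sketch: build an iptables rule that drops ~drop_pct% of incoming
# packets from one iSCSI portal. 100% drops everything unconditionally.

def drop_rule(portal_ip, drop_pct, port=3260):
    """Return the iptables command (as an argv list) for one drop scenario."""
    if drop_pct >= 100:
        match = []  # drop every packet, no probabilistic match needed
    else:
        match = ["-m", "statistic", "--mode", "random",
                 "--probability", str(drop_pct / 100.0)]
    return (["iptables", "-A", "INPUT", "-s", portal_ip,
             "-p", "tcp", "--dport", str(port)] + match + ["-j", "DROP"])


print(" ".join(drop_rule("192.0.2.10", 20)))
```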

Since these are big changes I'm sure we would all feel a lot more
confident to merge them if storage vendors would run the new tests to
confirm that there are no issues with their backends.

Unfortunately to fully test the solution you may need to build the
latest Open-iSCSI package and install it in the system, then you can
just use an all-in-one DevStack with a couple of changes in the local.conf:

enable_service tempest

CINDER_REPO=https://review.openstack.org/p/openstack/cinder
CINDER_BRANCH=refs/changes/45/469445/1

LIBS_FROM_GIT=os-brick

OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
OS_BRICK_BRANCH=refs/changes/94/455394/11

[[post-config|$CINDER_CONF]]
[multipath-backend]
use_multipath_for_image_xfer=true

[[post-config|$NOVA_CONF]]
[libvirt]
volume_use_multipath = True

[[post-config|$KEYSTONE_CONF]]
[token]
expiration = 14400

[[test-config|$TEMPEST_CONFIG]]
[volume-feature-enabled]
multipath = True
[volume]
build_interval = 10
multipath_type = $MULTIPATH_VOLUME_TYPE
backend_protocol_tcp_port = 3260
multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2

Multinode configurations are also supported, using SSH with user/password or
a private key to introduce the errors or to check that the systems didn't
leave any leftovers; the tests can also run a cleanup command between tests,
etc., but that's beyond the scope of this email.

Then you can run them all from /opt/stack/tempest with:

  $ cd /opt/stack/tempest
  $ OS_TEST_TIMEOUT=7200 ostestr -r 
cinder.tests.tempest.scenario.test_multipath.*

But I would recommend first running the simplest one without errors and
manually checking that the multipath is being created.

  $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_1

Then doing the same with one with errors and verify the presence of the
filters in iptables and that the packet drop for those filters is non zero:

  $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_2
  $ sudo iptables -nvL INPUT

Then doing the same with a Nova

Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-31 Thread Lance Haig

Hi,


On 24.05.17 18:43, Zane Bitter wrote:

On 19/05/17 11:00, Lance Haig wrote:

Hi,

As we know, the heat-templates repository has become out of date in some
respects and has also been difficult to maintain from a community
perspective.

For me the repository is quite confusing, with different styles used to
show certain aspects and other styles for older template examples.


This, I think, leads to confusion, and perhaps many people give up on
heat as a resource because things are not that clear.

From discussions in other threads and on the IRC channel I have seen
that there is a need to change things a bit.


This is why I would like to start the discussion that we rethink the
template example repository.

I would like to open the discussion with my suggestions.

  * We need to differentiate templates that work on earlier versions of
heat from those for the currently supported versions.


I typically use the heat_template_version for this. Technically this 
is entirely independent of what resource types are available in Heat. 
Nevertheless, if I submit e.g. a template that uses new resources only 
available in Ocata, I'll set 'heat_template_version: ocata' even if 
the template doesn't contain any Ocata-only intrinsic functions. We 
could make that a convention.

That is one way to achieve this.



  o I have suggested that we create directories that relate to
different versions, so that you can create a stable set of
examples for each heat version; they should always remain
stable for that version, and once it goes out of support they can
remain there.


I'm reluctant to move existing things around unless its absolutely 
necessary, because there are a lot of links out in the wild to 
templates that will break. And they're links directly to the Git repo, 
it's not like we publish them somewhere and could add redirects.


Although that gives me an idea: what if we published them somewhere? 
We could make templates actually discoverable by publishing a list of 
descriptions (instead of just the names like you get through browsing 
the Git repo). And we could even add some metadata to indicate what 
versions of Heat they run on.


It would be better to do something like this. One of the biggest 
learning curves that our users have had is understanding what is 
available in what version of heat and then finding examples of templates 
that match their version.
I wanted to create the heat-lib library so that people could easily find 
working examples for their version of heat and also use the library 
intact as-is, so that they can get up to speed really quickly.

This has enabled people to become productive much faster with heat.


  o This would mean people can find their version of heat and know
these templates all work on their version


This would mean keeping multiple copies of each template and 
maintaining them all. I don't think this is the right way to do this - 
to maintain old stuff what you need is a stable branch. That's also 
how you're going to be able to test against old versions of OpenStack 
in the gate.
Well, I am not sure that this would be needed, unless there are many 
backports of new resources to older versions of the templates.
e.g. Would the project backport the Newton conditionals to the Liberty 
version of heat? I am assuming not.


That means that once a new version of heat is decided, the template set 
becomes locked; you just create a copy with the new template version, test 
for regressions, and once that is complete you start adding the changes 
that are specific to the new version of heat.


I know that initially it would be quite a bit of work to set up and to 
test the versions, but once they are locked you don't touch them again.


As I suggested in the other thread, I'd be OK with moving deprecated 
stuff to a 'deprecated' directory and then eventually deleting it. 
Stable branches would then correctly reflect the status of those 
templates at each previous release.
That makes sense. I would like to clarify the above discussion first 
before we look at how to deprecate unsupported versions. I say that as 
many of our customers are still running Liberty :-)





  * We should consider adding a docs section that that includes training
for new users.
  o I know that there are documents hosted in the developer area and
these could be utilized but I would think having a documentation
section in the repository would be a good way to keep the
examples and the documents in the same place.
  o This docs directory could also host some training for new users
and old ones on new features etc.. In a similar line to what is
here in this repo https://github.com/heat-extras/heat-tutorial
  * We should include examples for the default hooks, e.g. ansible, salt,
etc., with SoftwareDeployments.
  o We found this quite helpful for new users to understand what is
 

[openstack-dev] [neutron] neutron-lib impact: use plugin common constants from neutron-lib

2017-05-31 Thread Boden Russell
If your project uses the constants from neutron.plugins.common.constants
please read on.

Many of the common plugin constants from neutron are now in neutron-lib
[1] and we're ready to consume them in neutron [2].

Suggested actions:
- If your project uses any rehomed constants [1], please update your
imports to use them from neutron-lib. Best I can tell no other stadium
projects are using these constants today, but a number of non-stadium
projects are [3].
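
If you need to support both old and new neutron releases during the
transition, a common pattern is a guarded import; the neutron-lib module
path below is an assumption for illustration, so check [1] for the real home
of each rehomed constant in your neutron-lib release:

```python
# Hedged sketch: prefer the rehomed constants from neutron-lib, falling back
# to the legacy neutron location. The module paths are assumptions, not a
# statement of where every constant actually lives.
try:
    from neutron_lib import constants as plugin_const  # new home (assumed)
except ImportError:
    try:
        from neutron.plugins.common import constants as plugin_const  # legacy
    except ImportError:
        plugin_const = None  # neither package installed in this environment

print(plugin_const)
```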


We can discuss when to land [2] during our weekly neutron meeting.

Thanks


[1] https://review.openstack.org/#/c/429036/
[2] https://review.openstack.org/#/c/469495/
[3]
http://codesearch.openstack.org/?q=from%20neutron%5C.plugins%5C.common%20import%20constants



Re: [openstack-dev] [nova] Why don't we unbind ports or terminate volume connections on shelve offload?

2017-05-31 Thread Matt Riedemann

On 4/13/2017 11:45 AM, Matt Riedemann wrote:
This came up in the nova/cinder meeting today, but I can't for the life 
of me think of why we don't unbind ports or terminate the volume 
connections when we shelve offload an instance from a compute host.


When you unshelve, if the instance was shelved offloaded, the conductor 
asks the scheduler for a new set of hosts to build the instance on 
(unshelve it). That could be a totally different host.


So am I just missing something super obvious? Or is this the most latent 
bug ever?




Looks like this is a known bug:

https://bugs.launchpad.net/nova/+bug/1547142

The fix on the nova side apparently depends on some changes on the 
cinder side. The new v3.27 APIs in cinder might help with all of this, 
but it doesn't fix old attachments.


By the way, search for shelve + volume in nova bugs and you're rewarded 
with a treasure trove of bugs:


https://bugs.launchpad.net/nova/?field.searchtext=shelved+volume&search=Search&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.assignee=&field.bug_reporter=&field.omit_dupes=on&field.has_patch=&field.has_no_package=

--

Thanks,

Matt



Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-31 Thread Jay Pipes

On 05/31/2017 01:31 AM, Zhenguo Niu wrote:
On Wed, May 31, 2017 at 12:20 PM, Ed Leafe > wrote:


On May 30, 2017, at 9:36 PM, Zhenguo Niu mailto:niu.zgli...@gmail.com>> wrote:

> as placement is not split out from nova now, and there would be users 
who only want a baremetal cloud, so we don't add resources to placement yet, but 
it's easy for us to turn to placement to match the node type with mogan flavors.

Placement is a separate service, independent of Nova. It tracks
Ironic nodes as individual resources, not as a "pretend" VM. The
Nova integration for selecting an Ironic node as a resource is still
being developed, as we need to update our view of the mess that is
"flavors", but the goal is to have a single flavor for each Ironic
machine type, rather than the current state of flavors pretending
that an Ironic node is a VM with certain RAM/CPU/disk quantities.


Yes, I understand the current efforts to improve baremetal node 
scheduling. It doesn't conflict with mogan's goal, and when it is done, we 
can share the same scheduling strategy with placement :)


Mogan is a service for a specific group of users who really want a 
baremetal resource instead of a generic compute resource, on API side, 
we can expose RAID, advanced partitions, nics bonding, firmware 
management, and other baremetal specific capabilities to users. And 
unlike nova's host based availability zone, host aggregates, server 
groups (ironic nodes share the same host), mogan can make it possible to 
divide baremetal nodes into such groups, and be rack-aware for 
affinity and anti-affinity when scheduling.
Zhenguo Niu brings up a very good point here. Currently, all Ironic 
nodes are associated with a single host aggregate in Nova, because of 
the vestigial notion that a compute *service* (ala the nova-compute 
worker) was equal to the compute *node*.


In the placement API, of course, there's no such coupling. A placement 
aggregate != a Nova host aggregate.


So, technically Ironic (or Mogan) can call the placement service to 
create aggregates that match *its* definition of what an aggregate is 
(rack, row, cage, zone, DC, whatever). Furthermore, Ironic (or Mogan) 
can associate Ironic baremetal nodes to one or more of those placement 
aggregates to get around Nova host aggregate to compute service coupling.
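
As a rough illustration of the rack-aware grouping being described, here is
a sketch with plain Python sets standing in for placement aggregates; the
rack and node names are made up, and real code would drive the placement API
instead:

```python
# Hedged sketch: baremetal nodes grouped into rack-level aggregates, with a
# naive anti-affinity pick that takes at most one node per rack.

aggregates = {
    "rack-1": {"node-a", "node-b"},
    "rack-2": {"node-c", "node-d"},
}

def pick_anti_affine(aggregates, count):
    """Pick up to `count` nodes, taking at most one node per aggregate."""
    picked = []
    for rack in sorted(aggregates):
        if len(picked) >= count:
            break
        # deterministic choice for the sketch; a scheduler would weigh nodes
        picked.append(sorted(aggregates[rack])[0])
    return picked


print(pick_anti_affine(aggregates, 2))
```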


That said, there's still lots of missing pieces before placement gets 
affinity/anti-affinity support...


Best,
-jay



Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-31 Thread Lance Haig

Hi Zane,


On 24.05.17 18:14, Zane Bitter wrote:

On 22/05/17 12:49, Lance Haig wrote:

I also asked the other day if there is a list of heat version matched to
Openstack version and I was told that there is not.


You mean like 
https://docs.openstack.org/developer/heat/template_guide/hot_spec.html#ocata 
?

Yup Something like this

Lance



[openstack-dev] Changing project schemas in patches; upgrade implications

2017-05-31 Thread Amrith Kumar
This email thread relates to [1], a change that aims to improve cross-SQL
support in project schemas.

I want to explicitly exclude the notion of getting rid of support for
PostgreSQL in the underlying project schemas, a topic that was discussed at
the summit[2].

In this change, the author (Thomas Bechtold, copied on this thread) makes
the comment that the change "is not changing the schema. It just avoids
implicit type conversion".

It has long been my understanding that changes like this are not upgrade
friendly, as they could lead to two installations, both with, say, version 37
or 38 of the schema, but with different table structures. In effect, this
change breaks the upgradability of systems.

i.e. a deployment whose schema came from an Ocata install would have a v38
modules table with a default of 0, while one installed with Pike (should this
change be accepted) would have a modules table with a default of False.

I'm raising this issue on the ML because the author also claims (albeit not
verified by me) that other projects have accepted changes like this.

I submit to you that the upgrade friendly way of making this change would
be to propose a new version of the schema which alters all of these tables
and includes the correct default value. On a fresh install, with no data,
the upgrade step with this new schema version would bring the table to the
right default value and any system with that version of the schema would
have an identical set of defaults. Similarly any system with v37 or 38 of
the schema would have identical defaults.
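
To illustrate the difference, here is a hedged sketch using SQLite with
made-up table and column names (not Trove's real schema): the default changes
via a new migration step rather than by editing v38 in place, so every
deployment that reaches the new version ends up with identical defaults,
whichever release first created the table. SQLite cannot alter a column
default in place, so the sketch rebuilds the table; a real migration would
use the database's ALTER TABLE support.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def migrate_v38(c):
    # Original migration: integer column with a server default of 0.
    c.execute("CREATE TABLE modules (id INTEGER PRIMARY KEY,"
              " auto_apply INTEGER NOT NULL DEFAULT 0)")

def migrate_v39(c):
    # New migration step: normalize the default to boolean FALSE instead of
    # silently editing v38. (FALSE keyword needs SQLite >= 3.23.)
    c.execute("ALTER TABLE modules RENAME TO modules_old")
    c.execute("CREATE TABLE modules (id INTEGER PRIMARY KEY,"
              " auto_apply BOOLEAN NOT NULL DEFAULT FALSE)")
    c.execute("INSERT INTO modules SELECT * FROM modules_old")
    c.execute("DROP TABLE modules_old")

migrate_v38(conn)
migrate_v39(conn)
conn.execute("INSERT INTO modules (id) VALUES (1)")
row = conn.execute("SELECT auto_apply FROM modules WHERE id = 1").fetchone()
print(row[0])  # SQLite stores FALSE as 0
```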

What's the advice of the community on this change; I've explicitly added
stable-maint-core as reviewers on this change as it does have stable branch
upgrade implications.

-amrith

[1] https://review.openstack.org/#/c/467080/
[2] https://etherpad.openstack.org/p/BOS-postgresql

--
Amrith Kumar
Phone: +1-978-563-9590


[openstack-dev] [Nova] [Scheduler]

2017-05-31 Thread Narendra Pal Singh
Hello,

Let's say I have multiple compute nodes: Pool-A has 5 nodes and Pool-B has 4
nodes, categorized based on some property.
Now there is a request for a new instance, and I always want this instance to
be placed on a compute node in Pool-A.
What would be the best approach to address this situation?

-- 
Regards,
NPS


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Zane Bitter

On 31/05/17 09:43, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-31 11:22:50 +0100:

On Wed, 31 May 2017, Graham Hayes wrote:

On 30/05/17 19:09, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:

Note that this goal only applies to tempest _plugins_. Projects
which have their tests in the core of tempest have nothing to do. I
wonder if it wouldn't be more fair for all projects to use plugins
for their tempest tests?


All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.


Thanks for the clarification, Doug. I don't think it changes the
main thrust of what I was trying to say (more below).


[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html


In the InterOp discussions in Boston, it was indicated that some people
on the QA team were not comfortable with "non core" project (even in
the InterOp program) having tests in core tempest.

I do think that may be a bigger discussion though.


I'm not suggesting we change everything (because that would take a
lot of time and energy we probably don't have), but I had some
thoughts in reaction to this and sharing is caring:

The way in which the tempest _repo_ is a combination of smoke,
integration, validation and trademark enforcement testing is very
confusing to me. If we then lay on top of that the concept of "core"
and "not core" with regard to who is supposed to put their tests in
a plugin and who isn't (except when it is trademark related!) it all
gets quite bewildering.

The resolution above says: "the OpenStack community will benefit
from having the interoperability tests used by DefCore in a central
location". Findability is a good goal so this a reasonable
assertion, but then the directive to lump those tests in with a
bunch of other stuff seems off if the goal is to "easier to read and
understand a set of tests".

If, instead, Tempest is a framework and all tests are in plugins
that each have their own repo then it is much easier to look for a
repo (if there is a common pattern) and know "these are the interop
tests for openstack" and "these are the integration tests for nova"
and even "these are the integration tests for the thing we are
currently describing as 'core'[1]".

An area where this probably falls down is with validation. How do
you know which plugins to assemble in order to validate this cloud
you've just built? Except that we already have this problem now that
we are requiring most projects to manage their tempest tests as
plugins. Does it become worse by everything being a plugin?

[1] We really need a better name for this.


Yeah, it sounds like the current organization of the repo is not
ideal in terms of equal playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.


+1

- ZB


The point of centralizing review of that specific set of tests was
to make it easier for interop folks to help ensure the tests continue
to follow the additionally stringent review criteria that comes
with being used as part of the trademark program. The QA team agreed
to do that, so it's news to me that they're considering reversing
course.  If the QA team isn't going to continue, we'll need to
figure out what that means and potentially find another group to
do it.

Doug







Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2017-05-31 07:34:03 -0500:
> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
> > On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> >> We had a discussion a few months back around what to do for cryptography
> >> since pycrypto is basically dead [1]. After some discussion, at least on
> >> the Cinder project, we decided the best way forward was to use the
> >> cryptography package instead, and work has been done to completely remove
> >> pycrypto usage.
> >>
> >> It all seemed like a good plan at the time.
> >>
> >> I now notice that for the python-cinderclient jobs, there is a pypy job
> >> (non-voting!) that is failing because the cryptography package is not
> >> supported with pypy.
> >>
> >> So this leaves us with two options I guess. Change the crypto library again,
> >> or drop support for pypy.
> >>
> >> I am not aware of anyone using pypy, and there are other valid working
> >> alternatives. I would much rather just drop support for it than redo our
> >> crypto functions again.
> >>
> >> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
> >> some input?
> 
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new 
> push and be successful at this point seems low at best.
> 
> I'd argue that pypy is already not supported, so dropping the non-voting 
> job doesn't seem like losing very much to me. Reworking cryptography 
> libs again, otoh, seems like a lot of work.
> 
> Monty
> 

This question came up recently for the Oslo libraries, and I think we
also agreed that pypy support was not being actively maintained.

Doug



Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-31 11:22:50 +0100:
> On Wed, 31 May 2017, Graham Hayes wrote:
> > On 30/05/17 19:09, Doug Hellmann wrote:
> >> Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
> >>> Note that this goal only applies to tempest _plugins_. Projects
> >>> which have their tests in the core of tempest have nothing to do. I
> >>> wonder if it wouldn't be more fair for all projects to use plugins
> >>> for their tempest tests?
> >>
> >> All projects may have plugins, but all projects with tests used by
> >> the Interop WG (formerly DefCore) for trademark certification must
> >> place at least those tests in the tempest repo, to be managed by
> >> the QA team [1]. As new projects are added to those trademark
> >> programs, the tests are supposed to move to the central repo to
> >> ensure the additional review criteria are applied properly.
> 
> Thanks for the clarification, Doug. I don't think it changes the
> main thrust of what I was trying to say (more below).
> 
> >> [1] 
> >> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
> >
> > In the InterOp discussions in Boston, it was indicated that some people
> > on the QA team were not comfortable with "non core" project (even in
> > the InterOp program) having tests in core tempest.
> >
> > I do think that may be a bigger discussion though.
> 
> I'm not suggesting we change everything (because that would take a
> lot of time and energy we probably don't have), but I had some
> thoughts in reaction to this and sharing is caring:
> 
> The way in which the tempest _repo_ is a combination of smoke,
> integration, validation and trademark enforcement testing is very
> confusing to me. If we then lay on top of that the concept of "core"
> and "not core" with regard to who is supposed to put their tests in
> a plugin and who isn't (except when it is trademark related!) it all
> gets quite bewildering.
> 
> The resolution above says: "the OpenStack community will benefit
> from having the interoperability tests used by DefCore in a central
> location". Findability is a good goal, so this is a reasonable
> assertion, but then the directive to lump those tests in with a
> bunch of other stuff seems off if the goal is to make it "easier to
> read and understand a set of tests".
> 
> If, instead, Tempest is a framework and all tests are in plugins
> that each have their own repo then it is much easier to look for a
> repo (if there is a common pattern) and know "these are the interop
> tests for openstack" and "these are the integration tests for nova"
> and even "these are the integration tests for the thing we are
> currently describing as 'core'[1]".
> 
> An area where this probably falls down is with validation. How do
> you know which plugins to assemble in order to validate this cloud
> you've just built? Except that we already have this problem now that
> we are requiring most projects to manage their tempest tests as
> plugins. Does it become worse by everything being a plugin?
> 
> [1] We really need a better name for this.

Yeah, it sounds like the current organization of the repo is not
ideal in terms of equal playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.

The point of centralizing review of that specific set of tests was
to make it easier for interop folks to help ensure the tests continue
to follow the additionally stringent review criteria that comes
with being used as part of the trademark program. The QA team agreed
to do that, so it's news to me that they're considering reversing
course.  If the QA team isn't going to continue, we'll need to
figure out what that means and potentially find another group to
do it.

Doug



[openstack-dev] [tripleo] custom configuration to overcloud fails second time

2017-05-31 Thread Dnyaneshwar Pawar
Hi TripleO Experts,

I performed following steps -

  1.  openstack overcloud deploy --templates -e myconfig_1.yaml
  2.  openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1 successfully applied the custom configuration to the overcloud.
Step 2 completed successfully, but its custom configuration was not applied to
the overcloud, and the configuration applied by step 1 remained unchanged.

Do I need to do anything before performing step 2?
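[Editor's note: assuming the usual TripleO semantics, each `openstack overcloud deploy` run applies only the environment files passed on that command line; files from earlier runs are not remembered. Under that assumption, the update in step 2 would need to repeat every -e file:]

```shell
# Environment files do not accumulate across deploy runs, so repeat
# every -e file on each update; later files win on conflicting keys.
openstack overcloud deploy --templates \
    -e myconfig_1.yaml \
    -e myconfig_2.yaml
```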


Thanks and Regards,
Dnyaneshwar


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Graham Hayes
On 31/05/17 11:22, Chris Dent wrote:
> On Wed, 31 May 2017, Graham Hayes wrote:
>> On 30/05/17 19:09, Doug Hellmann wrote:
>>> Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
>>>> Note that this goal only applies to tempest _plugins_. Projects
>>>> which have their tests in the core of tempest have nothing to do. I
>>>> wonder if it wouldn't be more fair for all projects to use plugins
>>>> for their tempest tests?
>>>
>>> All projects may have plugins, but all projects with tests used by
>>> the Interop WG (formerly DefCore) for trademark certification must
>>> place at least those tests in the tempest repo, to be managed by
>>> the QA team [1]. As new projects are added to those trademark
>>> programs, the tests are supposed to move to the central repo to
>>> ensure the additional review criteria are applied properly.
> 
> Thanks for the clarification, Doug. I don't think it changes the
> main thrust of what I was trying to say (more below).
> 
>>> [1]
>>> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
>>>
>>
>> In the InterOp discussions in Boston, it was indicated that some people
> >> on the QA team were not comfortable with "non core" projects (even in
>> the InterOp program) having tests in core tempest.
>>
>> I do think that may be a bigger discussion though.
> 
> I'm not suggesting we change everything (because that would take a
> lot of time and energy we probably don't have), but I had some
> thoughts in reaction to this and sharing is caring:
> 
> The way in which the tempest _repo_ is a combination of smoke,
> integration, validation and trademark enforcement testing is very
> confusing to me. If we then lay on top of that the concept of "core"
> and "not core" with regard to who is supposed to put their tests in
> a plugin and who isn't (except when it is trademark related!) it all
> gets quite bewildering.
> 
> The resolution above says: "the OpenStack community will benefit
> from having the interoperability tests used by DefCore in a central
> location". Findability is a good goal, so this is a reasonable
> assertion, but then the directive to lump those tests in with a
> bunch of other stuff seems off if the goal is to make it "easier to
> read and understand a set of tests".
> 
> If, instead, Tempest is a framework and all tests are in plugins
> that each have their own repo then it is much easier to look for a
> repo (if there is a common pattern) and know "these are the interop
> tests for openstack" and "these are the integration tests for nova"
> and even "these are the integration tests for the thing we are
> currently describing as 'core'[1]".
> 
> An area where this probably falls down is with validation. How do
> you know which plugins to assemble in order to validate this cloud
> you've just built? Except that we already have this problem now that
> we are requiring most projects to manage their tempest tests as
> plugins. Does it become worse by everything being a plugin?

No - this was the gist of my point last year when I proposed the
plugins first (or plugins for all as I called it at the time).

It actually gets better - for a few reasons.

1. We can have an interop-tempest-plugin (under QA control) where all
   interop tests can go, and be tagged for each standard.
   This is good for many reasons, mainly that the tests are curated,
   and projects cannot change tests without someone from QA-core team
   checking it to make sure it does not break backwards compatibility.

2. With some interop tests being out of the tempest tree, there is
   a very real possibility of a change in tempest blocking someone
   getting certification - tempest does not gate against plugins
   and has broken interfaces in the past.

3. With the new interop programs, the old definition of "core" is more
   murky - and it is looking more and more like the criteria for
   inclusion in openstack/tempest is "be there from the beginning".
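[Editor's note: for context on what moving tests "into a plugin" involves, mechanically it is mostly a packaging exercise: tempest discovers plugins through a setuptools entry point and drives them via a small plugin class. A minimal sketch, with repo and module names that are purely illustrative:]

```ini
# setup.cfg of a hypothetical my-service-tempest-plugin repository
[entry_points]
tempest.test_plugins =
    my_service = my_service_tempest_plugin.plugin:MyServiceTempestPlugin
```

The referenced class subclasses tempest's plugin base and implements its test-discovery and option-registration hooks; once the package is installed, tempest picks the tests up automatically.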

> [1] We really need a better name for this.
100% - but I have tried and failed to find a better descriptor that
everyone understands.

- Graham


[openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-05-31 Thread Akihiro Motoki
Hi all,

As discussed last month [1], we agree that each neutron-related
dashboard has its own repository.
I would like to move this forward on FWaaS and VPNaaS
as the horizon team plans to split them out as horizon plugins.

A couple of questions hit me.

(1) launchpad project
Do we create a new launchpad project for each dashboard?
Currently, the FWaaS and VPNaaS projects use 'neutron' for their bug tracking
for historical reasons, which sometimes causes confusion. There are two
choices: one is to accept dashboard bugs in the 'neutron' launchpad,
and the other is to have a separate launchpad project.

My vote is to create a separate launchpad project.
It allows users to search and file bugs easily.

(2) repository name

Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
names for you?
Most horizon-related projects use -dashboard or -ui as their repo names.
I personally prefer -dashboard as it is consistent with the OpenStack
Dashboard (the official name of horizon). On the other hand, I know some
folks prefer -ui as the name is shorter.
Any preference?

(3) governance
neutron-fwaas project is under the neutron project.
Does it sound okay to have neutron-fwaas-dashboard under the neutron project?
This is what the neutron team did for neutron-lbaas-dashboard before,
and this model is adopted by most horizon plugins (like trove, sahara
and others).

(4) initial core team

My thought is to have neutron-fwaas/vpnaas-core and horizon-core as
the initial core team.
The release team and the stable team follow what we have for
neutron-fwaas/vpnaas projects.
Sounds reasonable?


Finally, I have already prepared the split-out versions of the FWaaS and
VPNaaS dashboards in my personal github repos.
Once we agree on the questions above, I will create the repositories
under git.openstack.org.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-April/thread.html#115200



Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-31 Thread Major Hayden

On 05/23/2017 12:23 PM, Major Hayden wrote:
> I'll see if we can move forward with 'ansible-hardening' and keep everyone 
> updated! :)

The repo is up and ready to go:

  https://github.com/openstack/ansible-hardening

There are some patches proposed to get the 'openstack-ansible-security' 
references changed to 'ansible-hardening'.

--
Major Hayden





Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Monty Taylor

On 05/31/2017 06:39 AM, Sean McGinnis wrote:

On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:

We had a discussion a few months back around what to do for cryptography
since pycrypto is basically dead [1]. After some discussion, at least on
the Cinder project, we decided the best way forward was to use the
cryptography package instead, and work has been done to completely remove
pycrypto usage.

It all seemed like a good plan at the time.

I now notice that for the python-cinderclient jobs, there is a pypy job
(non-voting!) that is failing because the cryptography package is not
supported with pypy.

So this leaves us with two options I guess. Change the cryto library again,
or drop support for pypy.

I am not aware of anyone using pypy, and there are other valid working
alternatives. I would much rather just drop support for it than redo our
crypto functions again.

Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
some input?


There was work a few years ago to get pypy support going - but it never 
really seemed to catch on. The chance that we're going to start a new 
push and be successful at this point seems low at best.


I'd argue that pypy is already not supported, so dropping the non-voting 
job doesn't seem like losing very much to me. Reworking cryptography 
libs again, otoh, seems like a lot of work.


Monty



[openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Gorka Eguileor
Hi,

As some of you may know I've been working on improving iSCSI connections
on OpenStack to make them more robust and prevent them from leaving
leftovers on attach/detach operations.

There are a couple of posts [1][2] going in more detail, but a good
summary would be that to fix this issue we require a considerable rework
in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.

Relevant changes for those projects are:

- Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
  case, so a new feature was added to disable automatic scans that added
  unintended devices to the systems.  Done and merged [3][4], it will be
  available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7

- OS-Brick: rework iSCSI to make it robust on unreliable networks, to
  add a `force` detach option that prioritizes leaving a clean system
  over possible data loss, and to support the new Open iSCSI feature.
  Done and pending review [5][6][7]

- Cinder: Handle some attach/detach errors a little better and add
  support to the force detach option for some operations where data loss
  on error is acceptable, ie: create volume from image, restore backup,
  etc. Done and pending review [8][9]

- Nova: I haven't looked into the code here, but I'm sure there will be
  cases where using the force detach operation will be useful.

- Tests: While we do have tempest tests that verify that attach/detach
  operations work both in Nova and in cinder volume creation operations,
  they are not meant to test the robustness of the system, so new tests
  will be required to validate the code.  Done [10]

Proposed tests are simplified versions of the ones I used to validate
the code; but hey, at least these are somewhat readable ;-)
Unfortunately they are not in line with the tempest mission since they
are not meant to be run in a production environment due to their
disruptive nature while injecting errors.  They need to be run
sequentially and without any other operations running on the deployment.
They also run sudo commands via local bash or SSH for the verification
and error generation bits.

We are testing create volume from image and attaching a volume to an
instance under the following networking error scenarios:

 - No errors
 - All paths have 10% incoming packets dropped
 - All paths have 20% incoming packets dropped
 - All paths have 100% incoming packets dropped
 - Half the paths have 20% incoming packets dropped
 - The other half of the paths have 20% incoming packets dropped
 - Half the paths have 100% incoming packets dropped
 - The other half of the paths have 100% incoming packets dropped

There are single execution versions as well as 10 consecutive operations
variants.

Since these are big changes I'm sure we would all feel a lot more
confident to merge them if storage vendors would run the new tests to
confirm that there are no issues with their backends.

Unfortunately to fully test the solution you may need to build the
latest Open-iSCSI package and install it in the system, then you can
just use an all-in-one DevStack with a couple of changes in the local.conf:

   enable_service tempest

   CINDER_REPO=https://review.openstack.org/p/openstack/cinder
   CINDER_BRANCH=refs/changes/45/469445/1

   LIBS_FROM_GIT=os-brick

   OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
   OS_BRICK_BRANCH=refs/changes/94/455394/11

   [[post-config|$CINDER_CONF]]
   [multipath-backend]
   use_multipath_for_image_xfer=true

   [[post-config|$NOVA_CONF]]
   [libvirt]
   volume_use_multipath = True

   [[post-config|$KEYSTONE_CONF]]
   [token]
   expiration = 14400

   [[test-config|$TEMPEST_CONFIG]]
   [volume-feature-enabled]
   multipath = True
   [volume]
   build_interval = 10
   multipath_type = $MULTIPATH_VOLUME_TYPE
   backend_protocol_tcp_port = 3260
   multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2

Multinode configurations are also supported using SSH with user/password or
private key to introduce the errors or check that the systems didn't leave any
leftovers; the tests can also run a cleanup command between tests, etc., but
that's beyond the scope of this email.

Then you can run them all from /opt/stack/tempest with:

 $ cd /opt/stack/tempest
 $ OS_TEST_TIMEOUT=7200 ostestr -r cinder.tests.tempest.scenario.test_multipath.*

But I would recommend first running the simplest one without errors and
manually checking that the multipath is being created.

 $ ostestr -n cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_1

Then doing the same with one with errors and verify the presence of the
filters in iptables and that the packet drop for those filters is non zero:

 $ ostestr -n cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_2
 $ sudo iptables -nvL INPUT

Then doing the same with a Nova test just to verify that it is correctly
configured to use multipathing:

 $ ostestr -n 
cinde

Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
> We had a discussion a few months back around what to do for cryptography
> since pycrypto is basically dead [1]. After some discussion, at least on
> the Cinder project, we decided the best way forward was to use the
> cryptography package instead, and work has been done to completely remove
> pycrypto usage.
> 
> It all seemed like a good plan at the time.
> 
> I now notice that for the python-cinderclient jobs, there is a pypy job
> (non-voting!) that is failing because the cryptography package is not
> supported with pypy.
> 
> So this leaves us with two options I guess. Change the cryto library again,
> or drop support for pypy.
> 
> I am not aware of anyone using pypy, and there are other valid working
> alternatives. I would much rather just drop support for it than redo our
> crypto functions again.
> 
> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
> some input?
> 
> Sean
> 

That missing reference:

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113568.html


[openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread Sean McGinnis
We had a discussion a few months back around what to do for cryptography
since pycrypto is basically dead [1]. After some discussion, at least on
the Cinder project, we decided the best way forward was to use the
cryptography package instead, and work has been done to completely remove
pycrypto usage.

It all seemed like a good plan at the time.

I now notice that for the python-cinderclient jobs, there is a pypy job
(non-voting!) that is failing because the cryptography package is not
supported with pypy.

So this leaves us with two options I guess. Change the cryto library again,
or drop support for pypy.

I am not aware of anyone using pypy, and there are other valid working
alternatives. I would much rather just drop support for it than redo our
crypto functions again.

Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
some input?

Sean




Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-31 Thread Rob Cresswell (rcresswe)

[horizon]
django-openstack-auth - blocking django - intermediary

https://review.openstack.org/#/c/469420 is up to release django_openstack_auth. 
Sorry for the delays.

Rob


[openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Chris Dent

On Wed, 31 May 2017, Graham Hayes wrote:

On 30/05/17 19:09, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:

Note that this goal only applies to tempest _plugins_. Projects
which have their tests in the core of tempest have nothing to do. I
wonder if it wouldn't be more fair for all projects to use plugins
for their tempest tests?


All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.


Thanks for the clarification, Doug. I don't think it changes the
main thrust of what I was trying to say (more below).


[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html


In the InterOp discussions in Boston, it was indicated that some people
on the QA team were not comfortable with "non core" projects (even in
the InterOp program) having tests in core tempest.

I do think that may be a bigger discussion though.


I'm not suggesting we change everything (because that would take a
lot of time and energy we probably don't have), but I had some
thoughts in reaction to this and sharing is caring:

The way in which the tempest _repo_ is a combination of smoke,
integration, validation and trademark enforcement testing is very
confusing to me. If we then lay on top of that the concept of "core"
and "not core" with regard to who is supposed to put their tests in
a plugin and who isn't (except when it is trademark related!) it all
gets quite bewildering.

The resolution above says: "the OpenStack community will benefit
from having the interoperability tests used by DefCore in a central
location". Findability is a good goal, so this is a reasonable
assertion, but then the directive to lump those tests in with a
bunch of other stuff seems off if the goal is to make it "easier to
read and understand a set of tests".

If, instead, Tempest is a framework and all tests are in plugins
that each have their own repo then it is much easier to look for a
repo (if there is a common pattern) and know "these are the interop
tests for openstack" and "these are the integration tests for nova"
and even "these are the integration tests for the thing we are
currently describing as 'core'[1]".

An area where this probably falls down is with validation. How do
you know which plugins to assemble in order to validate this cloud
you've just built? Except that we already have this problem now that
we are requiring most projects to manage their tempest tests as
plugins. Does it become worse by everything being a plugin?

[1] We really need a better name for this.
--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [openstack-ansible] mount ceph block from an instance

2017-05-31 Thread Jean-Philippe Evrard
Hello, 

That was indeed my suggestion.
The alternative would be to make sure your ceph can be routed through your 
public network. But it’s not my infrastructure, I don’t know what you store as 
data, etc…

In either case, you’re making it possible for your tenants to access a part of 
your infra (the ceph cluster that’s used for openstack too), so you should 
think twice about the implications (bad neighbours, security intrusions…).
 
Best regards,
JP


On 29/05/2017, 10:27, "fabrice grelaud"  wrote:

Thanks for the answer.

My use case is for a file-hosting software system like « Seafile »  which 
can use a ceph backend (swift too but we don’t deploy swift on our infra).

The network configuration of our infra is identical to the OSA 
documentation, so on our compute nodes we have two bonding interfaces (bond0 and 
bond1).
The ceph vlan is currently propagated on bond0 (where br-storage is attached) 
to provide the ceph backend for our openstack.
And on bond1, among others, we have br-vlan for our provider vlans.

If I understood correctly, the solution is to also propagate the ceph vlan 
on bond1 on our switch, and to create the provider network in neutron so it 
is reachable from the tenant by our file-hosting software.

Regarding security, would using the neutron RBAC tool to share this provider 
network only with the tenant in question be sufficient?
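[Editor's note: for reference, the RBAC sharing suggested above can be expressed with the Ocata-era neutron CLI roughly like this; the tenant and network IDs are placeholders:]

```shell
# Share the ceph provider network with a single tenant only, instead
# of marking it shared globally:
neutron rbac-create --target-tenant <tenant-uuid> \
    --action access_as_shared --type network <provider-network-uuid>
```

This limits which tenants can see and attach to the network, but it does not by itself protect the ceph cluster from the workloads of a tenant that is attached.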

I’m all ears ;-) if you have another alternative.

Regards,
Fabrice


> Le 25 mai 2017 à 14:01, Jean-Philippe Evrard 
 a écrit :
> 
> I doubt many people have tried this, because 1) cinder/nova/glance 
probably do the job well in a multi-tenant fashion 2) you’re poking holes into 
your ceph cluster security.
> 
> Anyway, if you still want it, you would (I guess) have to create a 
provider network that will be allowed to access your ceph network.
> 
> You can either route it from your current public network, or create 
another network. It’s 100% up to you, and not osa specific.
> 
> Best regards,
> JP
> 
> On 24/05/2017, 15:02, "fabrice grelaud"  
wrote:
> 
>Hi osa team,
> 
>I have a multinode openstack-ansible deployment, ocata 15.1.3, with ceph 
as the backend for cinder (with our own ceph infra).
> 
>After creating an instance with a root volume, I would like to mount a 
ceph block or cephfs directly on the vm (not a cinder volume). So I want to 
attach a new interface to the vm that is in the ceph vlan.
>How can I do that?
> 
>We have our ceph vlan propagated on bond0 interface (bond0.xxx and 
br-storage configured as documented) for openstack infrastructure.
> 
>Should I propagate this vlan on the bond1 interface where my 
br-vlan is attached?
>Or should I use the existing br-storage where the ceph vlan is 
already propagated (bond0.xxx)? And how do I create the ceph vlan network in 
neutron (via neutron directly, or via horizon)?
> 
>Has anyone ever experienced this ?
> 




[openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-05-31 Thread Emilien Macchi
Hey folks,

I've been playing with deploying Neutron in WSGI with Apache and
Tempest tests fail on spawning Nova server when creating Neutron
ports:
http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/console.html#_2017-05-30_13_09_22_715400

I haven't found anything useful in neutron-server logs:
http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz

Before I file a bug in neutron, can anyone look at the logs with me
and see if I missed something in the config:
http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz

Thanks for the help,
-- 
Emilien Macchi



Re: [openstack-dev] [devstack] systemd + ENABLED_SERVICES + user_init_file

2017-05-31 Thread Markus Zoeller
On 11.05.2017 15:56, Markus Zoeller wrote:
> I'm working on a nova live-migration hook which configures and starts
> the nova-serialproxy service, runs a subset of tempest tests, and tears
> down the previously started service.
> 
>https://review.openstack.org/#/c/347471/47
> 
> After the change to "systemd", I thought all I have to do was to start
> the service with:
> 
>systemctl enable devstack@n-sproxy
>systemctl restart devstack@n-sproxy
> 
> But this results in the error "Failed to execute operation: No such file
> or directory". The reason is that there is no systemd "user unit file".
> This file gets written in Devstack at:
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529
> 
> For that to happen, a service must be in the list "ENABLED_SERVICES":
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1572-L1574
> 
> Which is *not* the case for the "n-sproxy" service:
> 
> https://github.com/openstack-dev/devstack/blob/8b8441f3becbae2e704932569bff384dcc5c6713/stackrc#L55-L56
> 
> I'm not sure how to approach this problem. I could:
> 1) add "n-sproxy" to the default ENABLED_SERVICES list for Nova in
>Devstack
> 2) always write the systemd user unit file in Devstack
>(regardless of whether the service is enabled)
> 3) Write the "user unit file" on the fly in the hook (in Nova).
> 4) ?
> 
> Right now I tend toward option 2, as I think it brings more flexibility (for
> other services too) with less change to the set of default enabled
> services in the gate.
> 
> Is this the right direction? Any other thoughts?
> 
> 

FWIW, here's my attempt to implement 2):
https://review.openstack.org/#/c/469390/
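For illustration, option 3 ("write the user unit file on the fly in the hook") could look roughly like the sketch below. It mimics what Devstack's write_user_unit_file does, but the ExecStart path and the User are assumptions for a typical devstack host, not values taken from the review:

```shell
# Sketch of writing the devstack@n-sproxy user unit file by hand (option 3).
# ExecStart path and User are assumptions; adjust to your environment.
UNIT_DIR="${UNIT_DIR:-$(mktemp -d)}"   # real target would be /etc/systemd/system
SERVICE="n-sproxy"

cat > "$UNIT_DIR/devstack@$SERVICE.service" <<EOF
[Unit]
Description=Devstack $SERVICE

[Service]
User=stack
ExecStart=/usr/local/bin/nova-serialproxy --config-file /etc/nova/nova.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

echo "wrote $UNIT_DIR/devstack@$SERVICE.service"
# then: systemctl daemon-reload && systemctl enable --now devstack@n-sproxy
# (left commented, since it needs root and a real devstack host)
```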

-- 
Regards, Markus Zoeller (markus_z)




Re: [openstack-dev] [tc] [all] TC Report 22

2017-05-31 Thread Graham Hayes
On 30/05/17 19:09, Doug Hellmann wrote:
> Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
>>
>> There's no TC meeting this week. Thierry did a second weekly status
>> report[^1]. There will be a TC meeting next week (Tuesday, 6th June
>> at 20:00 UTC) with the intention of discussing the proposals about
>> postgreSQL (of which more below). Here are my comments on pending TC
>> activity that either seems relevant or needs additional input.
>>
>> [^1]: 
>> 
>>
>> # Pending Stuff
>>
>> ## Queens Community Goals
>>
>> Proposals for community-wide goals[^2] for the Queens cycle have started
>> coming in. These are changes which, if approved, all projects are
>> expected to satisfy. In Pike the goals are:
>>
>> * [all control plane APIs deployable as WSGI 
>> apps](https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
>> * [supporting Python 
>> 3.5](https://governance.openstack.org/tc/goals/pike/python35.html)
>>
>> The full suite of goals for Queens has not yet been decided.
>> Identifying goals is a community-wide process. Your ideas are
>> wanted.
>>
>> ### Split Tempest Plugins into Separate Repos
>>
>> This goal for Queens is already approved. Any project which manages
>> its tempest tests as a plugin should move those tests into a
>> separate repo. The goal is at[^3]. The review for it[^4] has further
>> discussion on why it is a good idea.
>>
>> The original goal did not provide instructions on how to do it.
>> There is a proposal in progress[^5] to add a link to an etherpad[^6]
>> with instructions.
>>
>> Note that this goal only applies to tempest _plugins_. Projects
>> which have their tests in the core of tempest have nothing to do. I
>> wonder if it wouldn't be more fair for all projects to use plugins
>> for their tempest tests?
> 
> All projects may have plugins, but all projects with tests used by
> the Interop WG (formerly DefCore) for trademark certification must
> place at least those tests in the tempest repo, to be managed by
> the QA team [1]. As new projects are added to those trademark
> programs, the tests are supposed to move to the central repo to
> ensure the additional review criteria are applied properly.
> 
> [1] 
> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html

In the InterOp discussions in Boston, it was indicated that some people
on the QA team were not comfortable with "non core" projects (even those in
the InterOp program) having tests in core tempest.

I do think that may be a bigger discussion though.

> 
>>
>> ### Two Proposals on Improving Version Discovery
>>
>> Monty has been writing API-WG guidelines about how to properly use
>> the service catalog and do version discovery[^7]. Building from that
>> he's proposed two new goals:
>>
>> * [Add Queens goal to add collection 
>> links](https://review.openstack.org/#/c/468436/)
>> * [Add Queens goal for full discovery 
>> alignment](https://review.openstack.org/#/c/468437/)
>>
>> The first is a small step in the direction of improving version
>> discovery, the second is all the steps to getting all projects
>> supporting proper version discovery, in case we are feeling extra
>> capable.
>>
>> Both of these need review from project contributors, first to see if there
>> is agreement on the strategies, second to see if they are
>> achievable.
>>
>> [^2]: 
>> [^3]: 
>> 
>> [^4]: 
>> [^5]: 
>> [^6]: 
>> [^7]: 
>>
>> ## etcd as a base service
>>
>> etcd has been proposed as a base service[^8]. A "base" service is
>> one that can be expected to be present in any OpenStack
>> deployment. The hope is that by declaring this we can finally
>> bootstrap the distributed locking, group membership and service
>> liveness functionality that we've been talking about for years. If
>> you want this please say so on the review. You want this.
>>
>> If for some reason you _don't_ want this, then you'll want to
>> register your reasons as soon as possible. The review will merge
>> soon.
>>
>> [^8]: 
>>
>> ## openstack-tc IRC channel
>>
>> With the decrease in the number of TC meetings on IRC there's a plan
>> to have [office hours](https://review.openstack.org/#/c/467256/)
>> where some significant chunk of the TC will be available. Initially
>> this was going to be in the `#openstack-dev` channel but in the
>> hopes of making the logs readable after the fact, a [new channel is
>> proposed](https://review.openstack.org/#/c/467386/).
>>
>> This is likely to pass soon, unless objections are raised. If you
>> have some, please raise them on the review.
>>
>>

Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-31 Thread Renat Akhmerov

On 31 May 2017, 15:08 +0700, Thierry Carrez , wrote:
>
> > This has hit us with the mistral and tripleo projects particularly
> > (tagged in the title). They disallow pbr-3.0.0 and in the case of
> > mistral sqlalchemy updates.
> >
> > [mistral]
> > mistral - blocking sqlalchemy - milestones
>
> I wonder why mistral is in requirements. Looks like tripleo-common is
> depending on it ? Could someone shine some light on this ? It might just
> mean mistral-lib is missing a few functions, and switching the release
> model of mistral itself might be overkill ?

This dependency is currently needed to create custom Mistral actions. It was 
never the best architecture, and one of the reasons for creating 
'mistral-lib' was to get rid of the dependency on 'mistral' by moving everything 
needed for creating actions into a library (among other things). The thing 
is that the transition is not over yet, and the APIs we put into 'mistral-lib' 
are still experimental. The plan is to complete this initiative, including docs 
and the needed refactoring, by the end of Pike.

What possible negative consequences might we have if we switch the release 
model to "cycle-with-intermediary"? Practically all our releases, even those 
made after milestones, are considered stable, and I don't see issues with 
producing full releases every time. Btw, how does stable branch maintenance 
work in this case? I guess it should be the same, one stable branch per cycle. 
I'd appreciate it if you could clarify this.

Renat Akhmerov
@Nokia


Re: [openstack-dev] [tc] [all] TC Report 22

2017-05-31 Thread Graham Hayes
On 30/05/17 18:16, Chris Dent wrote:
> 
> There's no TC meeting this week. Thierry did a second weekly status
> report[^1]. There will be a TC meeting next week (Tuesday, 6th June
> at 20:00 UTC) with the intention of discussing the proposals about
> postgreSQL (of which more below). Here are my comments on pending TC
> activity that either seems relevant or needs additional input.
> 
> [^1]:
> 
> 
> # Pending Stuff
> 
> ## Queens Community Goals
> 
> Proposals for community-wide goals[^2] for the Queens cycle have started
> coming in. These are changes which, if approved, all projects are
> expected to satisfy. In Pike the goals are:
> 
> * [all control plane APIs deployable as WSGI
> apps](https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
> 
> * [supporting Python
> 3.5](https://governance.openstack.org/tc/goals/pike/python35.html)
> 
> The full suite of goals for Queens has not yet been decided.
> Identifying goals is a community-wide process. Your ideas are
> wanted.
> 
> ### Split Tempest Plugins into Separate Repos
> 
> This goal for Queens is already approved. Any project which manages
> its tempest tests as a plugin should move those tests into a
> separate repo. The goal is at[^3]. The review for it[^4] has further
> discussion on why it is a good idea.
> 
> The original goal did not provide instructions on how to do it.
> There is a proposal in progress[^5] to add a link to an etherpad[^6]
> with instructions.
> 
> Note that this goal only applies to tempest _plugins_. Projects
> which have their tests in the core of tempest have nothing to do. I
> wonder if it wouldn't be more fair for all projects to use plugins
> for their tempest tests?

+ 1000. But apparently I am wrong on this.

> 
> ### Two Proposals on Improving Version Discovery
> 
> Monty has been writing API-WG guidelines about how to properly use
> the service catalog and do version discovery[^7]. Building from that
> he's proposed two new goals:
> 
> * [Add Queens goal to add collection
> links](https://review.openstack.org/#/c/468436/)
> * [Add Queens goal for full discovery
> alignment](https://review.openstack.org/#/c/468437/)
> 
> The first is a small step in the direction of improving version
> discovery, the second is all the steps to getting all projects
> supporting proper version discovery, in case we are feeling extra
> capable.
> 
> Both of these need review from project contributors, first to see if there
> is agreement on the strategies, second to see if they are
> achievable.
> 
> [^2]: 
> [^3]:
> 
> 
> [^4]: 
> [^5]: 
> [^6]: 
> [^7]: 
> 
> ## etcd as a base service
> 
> etcd has been proposed as a base service[^8]. A "base" service is
> one that can be expected to be present in any OpenStack
> deployment. The hope is that by declaring this we can finally
> bootstrap the distributed locking, group membership and service
> liveness functionality that we've been talking about for years. If
> you want this please say so on the review. You want this.
> 
> If for some reason you _don't_ want this, then you'll want to
> register your reasons as soon as possible. The review will merge
> soon.
> 
> [^8]: 
> 
> ## openstack-tc IRC channel
> 
> With the decrease in the number of TC meetings on IRC there's a plan
> to have [office hours](https://review.openstack.org/#/c/467256/)
> where some significant chunk of the TC will be available. Initially
> this was going to be in the `#openstack-dev` channel but in the
> hopes of making the logs readable after the fact, a [new channel is
> proposed](https://review.openstack.org/#/c/467386/).
> 
> This is likely to pass soon, unless objections are raised. If you
> have some, please raise them on the review.
> 
> ## postgreSQL
> 
> The discussions around postgreSQL have yet to resolve. See [last week's
> report](https://anticdent.org/tc-report-21.html) for additional
> information. Because things are blocked and there have been some
> expressions of review fatigue there will be, as mentioned above, a
> TC meeting next week on 6th June, 20:00 UTC. Show up if you have an
> opinion if or how postgreSQL should or should not have a continuing
> presence in OpenStack. Some links:
> 
> * [original proposal documenting the lack of community attention to
>   postgreSQL](https://review.openstack.org/#/c/427880/)
> * [a shorter, less MySQL-oriented
> version](https://review.openstack.org/#/c/465589/)
> * [related email
>  
> thread](http://lists.openstack.org/pipermail/openstack-dev/2017-May/116642.html)
> 
> * [active vs external approaches to RDBMS

Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-31 Thread Thierry Carrez
Matthew Thode wrote:
> We have a problem in requirements: projects that don't have the
> cycle-with-intermediary release model (most have the cycle-with-milestones
> model) don't get integrated with requirements until the cycle is fully
> done.  This causes a few problems.
> [...]

Makes sense. Rules that apply for libraries should apply to other strong
dependencies.

> This has hit us with the mistral and tripleo projects particularly
> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
> mistral sqlalchemy updates.
> 
> [mistral]
> mistral - blocking sqlalchemy - milestones

I wonder why mistral is in requirements. Looks like tripleo-common is
depending on it ? Could someone shine some light on this ? It might just
mean mistral-lib is missing a few functions, and switching the release
model of mistral itself might be overkill ?

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 21) - Devmode OVB, RDO Cloud and config management

2017-05-31 Thread Jiří Stránský

On 30.5.2017 23:03, Emilien Macchi wrote:

On Fri, May 26, 2017 at 4:58 PM, Attila Darazs  wrote:

If the topics below interest you and you want to contribute to the
discussion, feel free to join the next meeting:

Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

= Periodic & Promotion OVB jobs Quickstart transition =

We had lively technical discussions this week. Gabriele's work on
transitioning the periodic & promotion jobs is nearly complete and only needs
reviews at this point. We won't set a transition date for these, as it doesn't
really impact folks long term if these jobs fail for a few days. We'll
transition when everything is ready.

= RDO Cloud & Devmode OVB =

We continued planning the introduction of RDO Cloud for the upstream OVB
jobs. We're still at the point of account setup.

The new OVB based devmode seems to be working fine. If you have access to
RDO Cloud, and haven't tried it already, give it a go. It can set up a full
master branch based deployment within 2 hours, including any pending changes
baked into the under & overcloud.

When you have your account info sourced, all it takes is

$ ./devmode.sh --ovb

from your tripleo-quickstart repo! See here[1] for more info.

= Container jobs on nodepool multinode =

Gabriele is stuck with these new Quickstart jobs. We would need a deep dive
into debugging and using the container based TripleO deployments. Let us
know if you can do one!


I've pinged some folks around, let's see if someone volunteers to make it.


I can lead the deep dive as I've been involved in the implementation of all 
the multinode jobs (deployment, upgrade, scenario deployment + upgrade).


Currently only the upgrade job is merged and working. Deployment job is 
ready to merge (imho :) ), let's get it in place before we do the deep 
dive, so that we talk about things that are already working as intended. 
However, i think we don't have to block on getting the scenario jobs all 
going and polished, those are not all green yet (2/4 deployment jobs 
green, 1/4 upgrade jobs green).


I hope that sounds like a sensible plan. Let me know in case you have 
any feedback.


Thanks

Jirka




= How to handle Quickstart configuration =

This is a never-ending topic, on which we managed to spend a good chunk of time
this week as well. Where should we put various configs? Should we duplicate
a bunch of variables or cut them into small files?

For now it seems we can agree on 3 levels of configuration:

* nodes config (i.e. how many nodes we want for the deployment)
* environment + provisioner settings (i.e. you want to run on rdocloud with
ovb, or on a local machine with libvirt)
* featureset (a certain set of features enabled/disabled for the jobs, like
pacemaker and ssl)

This seems rather straightforward until we encounter exceptions. We're going
to figure out the edge cases and rework the current configs to stick to the
rules.


That's it for this week. Thank you for reading the summary.

Best regards,
Attila

[1] http://docs.openstack.org/developer/tripleo-quickstart/devmode-ovb.html










Re: [openstack-dev] [L2-Gateway] Query on redundant configuration of OpenStack's L2 gateway

2017-05-31 Thread Ran Xiao
Hi Ricardo,

Thanks very much for your help.
I've checked the file you shared and will contact them.

BR,
Ran Xiao

-Original Message-
From: Ricardo Noriega De Soto [mailto:rnori...@redhat.com] 
Sent: Tuesday, May 30, 2017 6:50 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [L2-Gateway] Query on redundant configuration of 
OpenStack's L2 gateway

Hi Ran Xiao,

Please, take a look at this doc I found online:

https://docs.google.com/document/d/1U7M78uNhBp8eZIu0YH04u6Vzj6t2NpvDY4peX8svsXY/edit#heading=h.xochfa5fqf06

You might want to contact those folks!

Cheers

On Tue, May 23, 2017 at 1:21 PM, Ran Xiao mailto:ran-x...@vf.jp.nec.com> > wrote:


Hi All,

  I have a query on usage of L2GW NB API.
  I have to integrate L2GW with ODL.
  And there are two L2GW nodes named l2gw1 and l2gw2.
  OVS HW VTEP Emulator is running on each node.
  Does the following command work for configuring these two nodes as an
L2GW HA cluster?

  neutron l2-gateway-create gw_name --device name=l2gw1,interface_names=eth2 \
  --device name=l2gw2,interface_names=eth2

  Version : stable/ocata

  Thanks in advance.

BR,
Ran Xiao









-- 

Ricardo Noriega


Senior Software Engineer - NFV Partner Engineer | Office of Technology  | Red 
Hat

irc: rnoriega @freenode



Re: [openstack-dev] [ovs-discuss] OpenStack and OVN integration is failing on multi-node physical machines.(probably a bug)

2017-05-31 Thread Numan Siddique
Hi Pranab,

Request to not drop the mailing list

Please see below for comments.



On Sat, May 27, 2017 at 11:30 AM, pranab boruah  wrote:

> Thanks Numan for the reply. I modified the system service script of the
> neutron server and made sure that it starts only after the ovn-northd service
> is up and running.
>
> Able to launch VMs now.
> But the VMs don't get any DHCP IP. Are there any logs relevant to the OVN
> native DHCP server that I can look for?
>

Since you are using Newton, you probably need to set ovn_native_dhcp=True
in /etc/neutron/plugin.ini or /etc/neutron/plugins/ml2/ml2_conf.ini.
Otherwise you are expected to use the DHCP agent.



>
> I have another question:
> There are two ways to include the OVN-specific configuration. One way is
> to add a new [ovn] section in the /etc/neutron/plugin.ini file. The second
> way is to modify the /etc/neutron/plugins/networking-ovn/networking-ovn.ini
> file. Which is the right file to modify, and if I included the OVN
> configuration in both files, which one takes precedence?
>

Better to use /etc/neutron/plugin.ini or
/etc/neutron/plugins/ml2/ml2_conf.ini. I think the last included config
file overrides the values.
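A minimal example of the first approach (an [ovn] section in the ML2 config) might look like the sketch below. Option names follow Newton-era networking-ovn and the IPs are placeholders, so treat it as an assumption to adapt rather than a verified config:

```ini
# Illustrative [ovn] section for /etc/neutron/plugins/ml2/ml2_conf.ini
# (Newton-era networking-ovn option names; IPs are placeholders).
[ml2]
mechanism_drivers = ovn

[ovn]
ovn_nb_connection = tcp:192.168.10.10:6641
ovn_sb_connection = tcp:192.168.10.10:6642
ovn_native_dhcp = True
```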


> After all these issues with the setup, we are planning to build a TripleO
> setup. I remember Russell Bryant mentioning that there is a Heat template
> for OVN. We are planning to use that. Any caveats/guides you would like to
> recommend for TripleO OVN integration? It would be really useful.
>
>
You need to include
/usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-ovn.yaml
[1] in the environment templates when calling openstack overcloud deploy,
and it should work.
I would suggest using Ocata or master and OVS 2.7 in order to have a
successful deployment.
You can virt-customize your overcloud image and update ovs if you like.
There is a small script here [2] which does that. You can have a look into
it if you want.


Thanks
Numan



[1] -
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-ml2-ovn.yaml

[2] -
https://github.com/numansiddique/overcloud_image_for_ovn/blob/master/build_ovn_oc_image.sh



> Thanks
> Pranab
>
>
> On May 24, 2017 23:38, "Numan Siddique"  wrote:
>
>
>
> On Tue, May 23, 2017 at 6:48 PM, pranab boruah <
> pranabjyotibor...@gmail.com> wrote:
>
>> Hi,
>> We are building a multi-node physical set-up of OpenStack Newton. The
>> goal is to finally integrate the set-up with OVN.
>> Lab details:
>> 1 Controller, 2 computes
>>
>> CentOS-7.3, OpenStack Newton, separate network for mgmt and tunnel
>> OVS version: 2.6.1
>>
>> I followed the following guide to deploy OpenStack Newton using the
>> PackStack utility:
>>
>> http://networkop.co.uk/blog/2016/11/27/ovn-part1/
>>
>> Before I started integrating with OVN, I made sure that the set-up(ML2
>> and OVS) was working by launching VMs. VMs on cross compute node were
>> able to ping each other.
>>
>> Now, I followed the official guide for OVN integration:
>>
>> http://docs.openstack.org/developer/networking-ovn/install.html
>>
>> Error details :
>> Neutron Server log shows :
>>
>>  ERROR networking_ovn.ovsdb.impl_idl_ovn [-] OVS database connection
>> to OVN_Northbound failed with error: '{u'error': u'unknown database',
>> u'details': u'get_schema request specifies unknown database
>> OVN_Northbound', u'syntax': u'["OVN_Northbound"]'}'. Verify that the
>> OVS and OVN services are available and that the 'ovn_nb_connection'
>> and 'ovn_sb_connection' configuration options are correct.
>>
>> The issue is ovsdb-server on the controller binds with the port
>> 6641.instead of 6640.
>>
>>
>
> Hi Pranab,
> Normally I have seen this happening when neutron-server (i.e. the
> networking-ovn ML2 driver) tries to connect to the OVN northbound
> ovsdb-server (on port 6641) and fails (mainly because the OVN NB db
> ovsdb-server is not running). In such a case the code here [1] runs
> "ovs-vsctl add-connection ptcp:6641:..", which causes the main ovsdb-server
> (for conf.db) to listen on port 6641.
>
> Can you make sure that the ovsdb-servers for OVN are running before
> neutron-server is started?
>
> Maybe to see if it works you can run "ovs-vsctl del-manager" and then run
> netstat -putna | grep 6641 and verify that the OVN NB db ovsdb-server listens
> on 6641.
>
> [1] - https://github.com/openstack/neutron/blob/stable/newton/
> neutron/agent/ovsdb/native/connection.py#L82
>   https://github.com/openstack/neutron/blob/stable/newton/
> neutron/agent/ovsdb/native/helpers.py#L41
>
> Thanks
> Numan
>
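The ordering dependency Numan describes can be checked with a small helper that waits for a TCP port before starting neutron-server. This is a hedged sketch in generic bash (using the /dev/tcp pseudo-device); the host/port values are the ones from this thread, and the deployment-specific commands are left commented out:

```shell
# Wait until host:port accepts TCP connections, retrying once per second.
# Returns 0 if the port opened within $tries attempts, 1 otherwise.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-30}
    local i
    for ((i = 0; i < tries; i++)); do
        # bash opens a TCP connection via the /dev/tcp pseudo-device
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Example gating (deployment-specific, so left commented out):
# systemctl start ovn-northd
# wait_for_port 192.168.10.10 6641 && systemctl start neutron-server
```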
> #  netstat -putna | grep 6641
>>
>> tcp0  0 192.168.10.10:6641  0.0.0.0:*
>> LISTEN  809/ovsdb-server
>>
>> # netstat -putna | grep 6640 (shows no output)
>>
>> Now, OVN NB DB tries to listen on port 6641, but since it is used by
>> the ovsdb-server, it's unable to. PID of ovsdb-server is 809, while
>> the pid of OVN NB DB is 4217.
>>
>> OVN NB DB logs shows this:
>>
>> 2017-05-23T12:58:09.444Z|01421|ovsdb_jsonrpc_server|ERR|ptc