[openstack-dev] Ocata - Ubuntu 16.04 - OVN does not work with DPDK

2017-04-07 Thread Martinx - ジェームズ
Guys,

 I managed to deploy Ocata on Ubuntu 16.04 with OVN for the first time ever,
today!

 It looks very, very good... The OVN L3 router is working, OVN DHCP is
working, bridge mappings to "br-ex" on each compute node... All good!

 Then I said: time for DPDK!

 I have managed to use OVS with DPDK easily on top of Ubuntu (plus the Ocata
Cloud Archive) with plain KVM, no OpenStack, so I have experience with
setting up DPDK, OVS+DPDK, libvirt vhostuser, KVM, etc.

 After configuring DPDK on a compute node, I tried the following
instructions:

 https://docs.openstack.org/developer/networking-ovn/dpdk.html

 It looks quite simple!
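 For context, the datapath change described there boils down to a handful of
ovs-vsctl commands; this is just a sketch of what I ran (the bridge names,
socket memory, and PCI address are examples from my setup, and the exact DPDK
port syntax varies by OVS version):

```shell
# Initialize DPDK support in ovs-vswitchd (database-configured DPDK, OVS 2.7+)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"

# Switch the integration and external bridges to the userspace datapath
ovs-vsctl set Bridge br-int datapath_type=netdev
ovs-vsctl set Bridge br-ex datapath_type=netdev

# Attach the physical NIC to br-ex as a DPDK port
# (on OVS 2.7+ the device is selected with options:dpdk-devargs)
ovs-vsctl add-port br-ex dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:03:00.0
```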

 To make things even simpler, I have just 1 controller and 1 compute node to
begin with. Before enabling DPDK on the compute node and changing the
"br-int" datapath, I deleted all OVN routers and all Neutron networks and
subnets that were previously working with regular OVS (no DPDK).

 Then, after enabling DPDK and updating the "br-int" and "br-ex" interfaces,
right after connecting the OVN L3 router to the "ext-net / br-ex" network,
the following errors appeared in the Open vSwitch logs of the related compute
node (OpenFlow errors):


 * After connecting OVN L3 Router against the "ext-net / br-ex" Flat / VLAN
Network:

 ovs-vswitchd.log:

 http://paste.openstack.org/show/605800/

 ovn-controller.log:

 http://paste.openstack.org/show/605801/


 Also, after connecting the OVN L3 router to the local (GENEVE) network, very
similar error messages appeared in the Open vSwitch logs...


 * After connecting OVN L3 Router on a "local" GENEVE Network:

 ovs-vswitchd.log:

 http://paste.openstack.org/show/605804/

 ovn-controller.log:

 http://paste.openstack.org/show/605805/


 * Output of "ovs-vsctl show" at the single compute node, after plugging
the OVN L3 Router against the two networks (external / GENEVE):

 http://paste.openstack.org/show/605806/


 Then I tried to launch an instance anyway and, to my surprise, the instance
was created, using a vhostuser OVS+DPDK socket!

 Also, the instance got its IP, which is great!

 However, the instance can not ping its OVN L3 router (its default gateway),
with or without any kind of security groups applied; no luck... :-(

 BTW, the instance did not receive the ARP response from the OVN L3 router; I
mean, from the instance, the gateway IP in "arp -an" shows "".


 * The ovs-vswitchd.log after launching an Instance on top of OVN/OVS+DPDK:

 http://paste.openstack.org/show/605807/

 * The output of "ovs-vsctl show" after launching the above instance:

 http://paste.openstack.org/show/605809/ - Line 33 is the dpdkvhostuser


 Just to give it another try, I started a second instance to see if the
instances could ping each other... That also did not work; the instances can
not ping each other.


 So, from what I'm seeing, OVN on top of DPDK does not work.

 Any tips?


 NOTE:

 I tried to enable "hugepages" in my OpenStack flavor, just in case... Then I
found another bug; the instance doesn't even boot:

 https://bugs.launchpad.net/cloud-archive/+bug/1680956


 For now, I'll deploy Ocata with regular OVN, no DPDK, but my goal for this
cloud is high-performance networking, so I need DPDK, and I also need GENEVE
and provider networks, everything on top of DPDK.

---
 After researching more about high-performance networking, I found this:

 * DPDK-like performance in Linux kernel with XDP !

 http://openvswitch.org/support/ovscon2016/7/0930-pettit.pdf

 https://www.iovisor.org/technology/xdp
 https://www.iovisor.org/technology/ebpf

 https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/

 But I have no idea how to use Open vSwitch with this; however, if I can
achieve DPDK-like performance without DPDK, using just plain Linux, I'm 100%
sure that I'll prefer it!

 I'm okay with giving Open vSwitch + DPDK another try, even knowing that it
[OVS] STILL has serious problems (
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1577088)...
---

 OpenStack on Ubuntu rocks!   :-D

Thanks!
Thiago
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Tags for volumes

2017-04-07 Thread Matt Riedemann

On 3/27/2017 9:59 AM, Duncan Thomas wrote:

On 27 March 2017 at 14:20, 王玺源 wrote:

I think the reason is quite simple:
1. Some users don't want to use key/value pairs to tag volumes. They
just need some simple strings.


...and some do. We can hide this in the client and just save tags under
a metadata item called 'tags', with no API changes needed on the Cinder
side and backward compatibility in the client.


2. Metadata values must be shorter than 255 characters. If users don't
need keys, using tags here can save some space.


How many tags, and how long, are you considering supporting?


3. Easy for quick searching or filtering. Users don't need to know
which key is related to the value.


The client can hide all this, so it is not really a justification.


4. For a web app, it should be a basic function [1]


Web standards are not really standards. You can find a million things
that apps 'should' do. They're usually contradictory.




[1]https://en.m.wikipedia.org/wiki/Tag_(metadata)
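Duncan's suggestion above — emulating tags on top of the existing metadata
API — could be sketched client-side roughly like this (a hypothetical helper,
not part of python-cinderclient; note how the 255-character metadata value
limit from point 2 surfaces here):

```python
def tags_to_metadata(tags, max_len=255):
    """Pack a list of tag strings into a single metadata value.

    Tags are comma-separated; commas are disallowed in individual tags
    so the packing is reversible.
    """
    for tag in tags:
        if "," in tag:
            raise ValueError("tags may not contain commas: %r" % tag)
    value = ",".join(sorted(set(tags)))
    if len(value) > max_len:
        raise ValueError("packed tags exceed %d characters" % max_len)
    return {"tags": value}


def metadata_to_tags(metadata):
    """Recover the tag list from a volume's metadata dict."""
    value = metadata.get("tags", "")
    return [t for t in value.split(",") if t]
```

The downside, raised later in this thread, is that every client library in
the ecosystem would need to duplicate this logic.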



2017-03-27 19:49 GMT+08:00 Sean McGinnis :

On Mon, Mar 27, 2017 at 03:13:59PM +0800, 王玺源 wrote:
> Hi cinder team:
>
> I want to know what's your thought about adding tags for volumes.
>
> Now Many resources, like Nova instances, Glance images, Neutron
> networks and so on, all support tagging. And some of our cloud 
customers
> want this feature in Cinder as well. It's useful for auditing, 
billing for
> could admin, it can let admin and users filter resources by tag, it 
can let
> users categorize resources for different usage or just remarks 
something.
>
> Actually there is a related spec in Cinder 2 years ago, but
> unfortunately it was not accepted and was abandoned :
> https://review.openstack.org/#/c/99305/

>
> Can we bring it up and revisit it a second time now? What's cinder
> team's idea?  Can you give me some advice that if we can do it or not?

Can you give any reason why the existing metadata mechanism does not or
will not work for them? There was some discussion in that spec explaining
why it was rejected at the time. I don't think anything has changed since
then that would change what was said there.

>
>
> Thanks!
>
> Wangxiyuan

>

__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
--
Duncan Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Duncan, which client are you referring to? python-cinderclient? Or are 
you suggesting duplicating that client-side logic in every client 
library available in the ecosystem?


I brought up the same questions about using metadata when we added 
server tags support to nova, and it's just too heavyweight in this case 
when all you want is a dumb, simple little tag.


The nova spec discusses metadata as an alternative if you're interested:

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/tag-instances.html

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for 

Re: [openstack-dev] [qa][cinder] critical fix for ceph job

2017-04-07 Thread Matt Riedemann

On 4/7/2017 9:43 AM, Jordan Pittier wrote:



On Fri, Apr 7, 2017 at 4:15 PM, Ghanshyam Mann wrote:

Thanks. I am not sure about this kind of driver-specific behavior on the
API side. This brings up the question of whether the Cinder APIs should
be consistent from a usage point of view. In this case [1], should the
create backup API accept the 'container' param and create (or not
create) the pool depending on the configured driver? Then we should have
better documentation about which drivers honor it and which do not.

Any suggestions?

[1] https://review.openstack.org/#/c/454321/3



Yeah, I've left a comment on that review. And another comment on
https://review.openstack.org/#/c/454722 :

"I'd rather we revert the change completely than see this merged.

If the Ceph backup driver doesn't support the container argument, it
should either grow support for it, or ignore that argument, or we change
Cinder's API completely so that the container argument is not part of
the public API anymore.

Do we expect each and every user to know what each and every driver
supports? I don't think so, so Tempest shouldn't either."
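Of the options listed, "ignore that argument" is the least invasive; a rough
sketch of that choice (hypothetical code, not the actual Ceph backup driver):

```python
import logging

LOG = logging.getLogger(__name__)


class NoContainerBackupDriver(object):
    """Hypothetical backup driver that tolerates an unsupported
    'container' argument instead of erroring out."""

    SUPPORTS_CONTAINER = False

    def backup(self, backup, volume_file, container=None):
        if container is not None and not self.SUPPORTS_CONTAINER:
            # Keep the public API uniform across drivers: warn and
            # proceed rather than fail the whole backup request.
            LOG.warning("Driver ignores unsupported 'container' "
                        "argument: %r", container)
            container = None
        return self._do_backup(backup, volume_file)

    def _do_backup(self, backup, volume_file):
        # Placeholder for the driver-specific backup logic.
        return {"backup_id": backup["id"], "container": None}
```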



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I left a comment in there too. This wasn't the right way to get around 
this. I've tried the same thing before, back when the encrypted volume 
tests were failing for ceph because it simply wasn't supported in nova.


Jon, as we discussed at the PTG, you need to get a whitelist or 
blacklist file into nova like we have for the cells v1 job, and we can 
use that in the ceph job config to control the tests that are run, so we 
don't need to make changes like this in Tempest. Let's work on that, and 
then we can revert this workaround in Tempest.
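For reference, such a blacklist file is just a newline-separated list of
test-name regexes handed to the test runner; the entries below are purely
illustrative, not the real list:

```
# Tests to skip on the ceph job (illustrative regexes only)
^tempest\.scenario\.test_encrypted_cinder_volumes
^tempest\.api\.volume\.admin\.test_volumes_backup
```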


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC Candidacy

2017-04-07 Thread Adrian Otto
I am announcing my candidacy [1] for a seat on the OpenStack Technical 
Committee. I've had the honor of working on and with OpenStack since it was 
conceived. Since our first Design Summit on July 13-16 2010, I have been 
thrilled to be a part of our movement together. We have come a long way since 
then, from a hopeful open source project with our friends at Rackspace and NASA 
to a thriving global community that has set the definitive standard for cloud 
software.

Over the past seven years, I have viewed OpenStack as my primary pursuit. I 
love our community, and the way we embrace the “four opens”. Along this 
journey, I have done my very best to push the limits of our innovative spirit, 
and pursue new and exciting ways of using software defined systems to make 
possible what we could have only imagined when we started. I have served our 
community as an innovator and as a PTL for the better part of the past 5 years. 
I served as the founding PTL of OpenStack Solum, and pivoted to become the 
founding PTL of OpenStack Magnum, a role I still hold. Each of these project 
pursuits was aimed at making OpenStack technology more easily automated and 
more efficient, and at combining it with cutting-edge new technologies.

I am now ready to embark on a wider mission. I’m prepared to transition 
leadership of Magnum to my esteemed team members, and pursue a role with the 
OpenStack TC to have an even more profound impact on our future success. I have 
a unique perspective on OpenStack governance, gained by repeatedly using our 
various processes and applying our rules, guidelines, and values as they have 
evolved. I deeply respect the OpenStack community, our TC, and their respective 
memberships. I look forward to serving in an expanded role, and to helping make 
OpenStack even better than it is today.

I will support efforts to:

1) Make OpenStack a leading platform for running the next generation of cloud 
native applications. This means making sensible and secure ways of allowing our 
data plane and control plane systems to integrate. For example, OpenStack 
should be just as good at running container workloads as it is for running bare 
metal and virtualized ones. Our applications should be able to self heal, 
scale, and dynamically interact with our clouds in a way that’s
safe and effective.

2) Expand our support for languages beyond Python. Over the past year, our TC 
has taken productive steps in this direction, and I would like to further 
advance this work so that we can introduce software written in other languages, 
such as Golang, in a way that’s supportable and appropriate for our community’s 
growing needs.

3) Advocate for inclusivity and diversity, not only for software languages but 
for contributors from all corners of the Earth. I feel it’s important to 
consider perspectives from various geographies, cultures, and of course from 
each gender. I want to maintain a welcoming destination where both novice and 
veteran contributors will thrive.

4) Continue our current work on our “One Platform” pursuit, and help to refine 
which of our teams should remain in OpenStack, and which should not. I will 
also work to contribute to documenting our culture and systems and clearly 
defining “how we work”. For an example of this, see how we recently did this 
within the Magnum team [2]. We can borrow from these ideas and re-use the ones 
that are generally useful. This reference should give you a sense of what we 
can accomplish together.

I respectfully ask for your vote and support to pursue this next ambition, and 
I look forward to the honor of serving you well.

Thanks,

Adrian Otto

[1] https://review.openstack.org/454908
[2] https://docs.openstack.org/developer/magnum/policies.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-04-07 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-04-07 11:16:34 -0400:

> I've resurrected an old spec: https://review.openstack.org/#/c/243114/
> - addressed some comments and added TODOs that Doug and I will work
> on together.
> The target is set to Queens but we might provide some proof-of-concept
> during Pike to make progress.
> TripleO project is very interested by this feature and I'm pretty sure
> other deployment tools might be too. Any feedback on the work here is
> more than welcome!

After discussing this with Emilien for a while this afternoon, I've
completely rewritten the spec. I abandoned the old one in favor of
https://review.openstack.org/#/c/454897/, but include a reference
to the old one for attribution and archaeologists.

One of the hardest aspects to address was the "mutable" option
feature we recently added. The current implementation of that feature
includes a lot of protective logic to try to avoid letting an
application reload a config setting with a different value unless
the application has explicitly said it knows how to handle cases
where the value changes (think about connection strings for long-lived
database connections).

Although I was a proponent of having the reload feature address
that issue, I'm not sure the complexity of the current implementation
is something we want to hang on to. In the new spec, I propose an
alternative treatment, which is to not cache mutable values in the
first place so the service never needs to receive a signal to reload.
The reload API is retained; it simply discards *everything* and
starts the configuration object over as though it were being freshly
created. This will be a big change, but the feature is new, and I think
the proposed behavior better fits the spirit of the need anyway. Please
provide feedback if you think otherwise.
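As a toy illustration of that proposal (this is not oslo.config code — just
the idea of never caching mutable options, with reload() discarding
everything):

```python
class Config(object):
    """Toy config object: immutable options are cached on first read;
    mutable options are re-read from the backing store every time."""

    def __init__(self, store, mutable_keys=()):
        self._store = store          # stands in for files/etcd
        self._mutable = set(mutable_keys)
        self._cache = {}

    def get(self, key):
        if key in self._mutable:
            # Never cached, so the service never needs a reload signal
            # for these options.
            return self._store[key]
        if key not in self._cache:
            self._cache[key] = self._store[key]
        return self._cache[key]

    def reload(self):
        # Proposed semantics: discard everything and start fresh.
        self._cache.clear()
```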

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 18

2017-04-07 Thread Matt Riedemann

On 4/7/2017 8:35 AM, Chris Dent wrote:


There was a nova-specs sprint this week, so a lot of eyes were on
specs but there continues to be regular progress on resource
providers, the placement API and related work in the scheduler and
resource tracker. If you're doing work that I haven't noticed and
reported in here that you think should be, please follow up with
some links.

# What Matters Most

The addition of traits to the placement API is very close, one patch
remains. Linked below. That means that the top of the priority stack
is the spec for claims via the scheduler. Also linked below.

# What's Changed

There's a new spec for including user and project information in
allocations. This is a start towards allowing placement info to be
used for the counting required for quotas. There's a spec and a
followup review to fix some issues with it:


http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-project-user.html

https://review.openstack.org/#/c/454352/

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
section below).

* Helping to create and evaluate functional tests of the resource
tracker and the ways in which it and nova-scheduler use the
reporting client. For some info see
https://etherpad.openstack.org/p/nova-placement-functional
and talk to edleafe.

* Performance testing. If you have access to some nodes, some basic
benchmarking and profiling would be very useful. See the performance
section below. Is there room on OSIC for this kind of thing?

# Main Themes

## Traits

The work to implement the traits API in placement is happening at


https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits


There's one patch left to get the API in place and a patch for a new
command to sync the os-traits library into the database:

https://review.openstack.org/#/c/450125/

There is a stack of changes to the os-traits library to add more traits
and also automate creating symbols associated with the trait
strings:

https://review.openstack.org/#/c/448282/4

## Ironic/Custom Resource Classes

There's a blueprint for "custom resource classes in flavors" that
describes the stuff that will actually make use of custom resource
classes:


https://blueprints.launchpad.net/nova/+spec/custom-resource-classes-in-flavors


The spec has merged, but the implementation has not yet started.

Over in Ironic some functional and integration tests have started:

https://review.openstack.org/#/c/443628/

## Claims in the Scheduler

Progress has been made on the spec for claims in the scheduler:

https://review.openstack.org/#/c/437424/

Some differences of opinion on what's possible now and what the API
should expose have been resolved, but now we need to resolve some
questions on how (or even if) to most effectively deal with
reconciling allocations that used to happen in the resource tracker
and will now happen in the scheduler.

Eyes and brains required.

Thinking about this stuff has also revealed some places where it's
possible for allocations to become wrong or orphaned:

https://bugs.launchpad.net/nova/+bug/1679750
https://bugs.launchpad.net/nova/+bug/1661312

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Progress on this will continue once traits and claims have moved forward.

## Nested Resource Providers

The spec for this has been updated with what was learned at the PTG
and moved to pike and merged:


http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/nested-resource-providers.html


## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

Several reviews are in progress for documenting the placement API.
This is likely going to take quite a few iterations as we work out
the patterns and tooling. But it's great to see the progress and
when looking at the draft rendered docs it makes placement feel like
a real thing™.

Find me (cdent) or Andrey (avolkov) if you want to help out or have
other questions.

## Performance

We're aware that there are some redundancies in the resource tracker
that we'd like to clean up


http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself.

We ought to do some testing to make sure there aren't unexpected
performance drains. Is this something where we could get time on the
OSIC hardware?

# Other Code/Specs

* https://review.openstack.org/#/c/418393/
 A spec for improving the level of detail and structure in placement
 error responses so that it is easier to distinguish between
 different types of, for example, 409 responses.

 This hasn't seen any attention since March 17, and as a result
 didn't 

Re: [openstack-dev] [tripleo] container jobs are unstable

2017-04-07 Thread Dan Prince
On Thu, 2017-04-06 at 15:32 -0400, Paul Belanger wrote:
> On Thu, Mar 30, 2017 at 11:01:08AM -0400, Paul Belanger wrote:
> > On Thu, Mar 30, 2017 at 03:08:57PM +0100, Steven Hardy wrote:
> > > To be fair, we discussed this on IRC yesterday, everyone agreed
> > > infra
> > > supported docker cache/registry was a great idea, but you said
> > > there was no
> > > known timeline for it actually getting done.
> > > 
> > > So while we all want to see that happen, and potentially help out
> > > with the
> > > effort, we're also trying to mitigate the fact that work isn't
> > > done by
> > > working around it in our OVB environment.
> > > 
> > > FWIW I think we absolutely need multinode container jobs, e.g
> > > using infra
> > > resources, as that has worked out great for our puppet based CI,
> > > but we
> > > really need to work out how to optimize the container download
> > > speed in
> > > that environment before that will work well AFAIK.
> > > 
> > > You referenced https://review.openstack.org/#/c/447524/ in your
> > > other
> > > reply, which AFAICS is a spec about publishing to dockerhub,
> > > which sounds
> > > great, but we have the opposite problem, we need to consume those
> > > published
> > > images during our CI runs, and currently downloading images takes
> > > too long.
> > > So we ideally need some sort of local registry/pull-through-cache 
> > > that
> > > speeds up that process.
> > > 
> > > How can we move forward here, is there anyone on the infra side
> > > we can work
> > > with to discuss further?
> > > 
> > 
> > Yes, I am currently working with clarkb to address some of these
> > concerns. Today we are looking at setting up our cloud mirrors to
> > cache [1] specific URLs; for example, we are testing out
> > http://trunk.rdoproject.org. This is not a long-term solution for
> > projects, but a short-term one. It will be opt-in for now, rather
> > than us setting it up for all jobs. Long term, we move
> > rdoproject.org into AFS.
> > 
> > I have been trying to see if we can do the same for docker hub, and
> > continue to run it. The main issue, at least for me, is we don't
> > want to depend on docker tooling for this. I'd rather not install
> > docker into our control plane at this point in time.
> > 
> > So, all of that to say, it will take some time. I understand it is
> > a high priority, but let's solve the current mirroring issues with
> > tripleo first (RDO, gems, github), and let's see if the apache
> > cache proxy will work for hub.docker.com too.
> > 
> > [1] https://review.openstack.org/451554
> 
> Wanted to follow up on this thread: we managed to get a reverse proxy
> cache [2] for https://registry-1.docker.io working. So far, I've just
> tested ubuntu, fedora, and centos images, but the caching works. Once
> we land this, any jobs using docker can take advantage of the mirror.
> 
> [2] https://review.openstack.org/#/c/453811


Thanks for your help in this Paul.

A reverse proxy cache wasn't exactly what I was expecting, so it took a
few more patches to get all of this initially wired into the TripleO OVB
jobs (6 patches so far). Once we have this, we can duplicate a similar
setup for the multinode patches as well.

I created a quick etherpad below [1] to track the status of these
patches. I think they mostly need to land in the order they are listed
in the etherpad...

[1] https://etherpad.openstack.org/p/tripleo-docker-registry-mirror
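For anyone curious, an Apache-based reverse proxy cache for the registry is
conceptually along these lines (a hypothetical fragment; the real
configuration lives in the reviews linked above):

```apache
# Cache Docker registry blobs/manifests on local disk (requires
# mod_cache, mod_cache_disk, mod_proxy, mod_proxy_http, and mod_ssl).
CacheRoot /var/cache/apache2/proxy
CacheEnable disk /registry-v2

SSLProxyEngine On
ProxyPass        /registry-v2 https://registry-1.docker.io ttl=120 keepalive=On
ProxyPassReverse /registry-v2 https://registry-1.docker.io
```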

> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] priorities for the coming week (04/07-04/13)

2017-04-07 Thread Brian Rosmaita
First, thanks to everyone who coded and reviewed the items identified as
Pike milestone 1 priorities.  They were merged yesterday and Hemanth,
our release czar, proposed a patch to enable the release:
https://review.openstack.org/#/c/454760/

Next, don't forget that the spec freeze is the end of the day UTC on
Friday 14 April.

Finally, in addition to keeping an eye on revised spec patches that need
reviews, the priority for the week is (of course) image import:
- https://review.openstack.org/#/c/391442/
- https://review.openstack.org/#/c/391441/
- https://review.openstack.org/#/c/443636/
- https://review.openstack.org/#/c/443632/
- https://review.openstack.org/#/c/443633/

Have a productive week!
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] call for comments on next operator survey draft

2017-04-07 Thread Brian Rosmaita
Hello Glancers,

The topic of the April operators' survey is Glance cache usage.  Any
developers who are interested in the Glance cache should take a look and
let me know if there are any questions that should be added/revised:

https://goo.gl/forms/LvAq0anMiHRh7Hb32

Please respond before the next Glance meeting (14:00 UTC 13 April 2017).
 I'd like to launch the survey on Monday, 17 April.

(Any operators who see this, feel free to join in the commentary, but
please don't answer for real, as I'll purge all responses before the
survey launches.)

cheers,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [architecture] Arch-WG, we hardly knew ye..

2017-04-07 Thread Clint Byrum
Excerpts from lebre.adrien's message of 2017-04-07 15:50:05 +0200:
> Dear Clint, Dear all, 
> 
> It is indeed unfortunate that the WG ends.
> 
> From our side (Inria folks from the Discovery initiative [1]), we had planned 
> to join the effort after the Boston Summit as we are currently addressing two 
> points that are/were closely related to the Arch WG (at least from our 
> understanding): 
> 
> * We have been working during this last cycle on the consolidation of the 
> EnOS solution [2] (in particular with advice from the performance team). 
> EnOS aims to perform OpenStack Performance analyses/profiling.  It is built 
> on top of Kolla and integrates Rally and Shaker and more recently  OSProfiler.
> We are currently conducting preliminary experiments to draw, in an automatic 
> manner, sequence diagrams such as 
> https://docs.openstack.org/ops-guide/_images/provision-an-instance.png
> We hope it will help our community to understand better the OpenStack 
> architecture as well as the interactions between the different services, in 
> particular it would help us to follow changes between cycles. 
> 
> * We started a discussion regarding the oslo.messaging driver [3]. 
> Our goal is, first,  to make a qualitative analysis of AMQP messages (i.e. we 
> would like to understand the different AMQP exchanges better) and try to 
> identify possible improvements. 
> Second, we will do performance analysis of the rabbitMQ under different 
> scenarios and, according to gathered results, we will conduct additional 
> experiments with alternatives solutions (ZMQ, Qpid, ...)
> Please note that this is a preliminary discussion, so we just exchanged 
> between a few folks from the Massively Distributed WG. We proposed a session 
> to the forum [4] with the goal of opening the discussion more generally to 
> the community. 
> 
> We are convinced of the relevance of a WG such as the Architecture one, but
> as you probably already know, it is difficult for most of us to join
> different meetings several times per week, and thus rather difficult to
> gather efforts that are done in other WGs. I don't know whether and how we
> can improve this situation, but it would make sense to try.
> 

Hi! I highly recommend working with the performance team for both of
those efforts:

https://wiki.openstack.org/wiki/Meetings/Performance

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-04-07 Thread Emilien Macchi
On Fri, Apr 7, 2017 at 2:27 PM, Andreas Jaeger  wrote:
> On 2017-04-07 18:55, Emilien Macchi wrote:
>> os-cloud-config has been retired in Infra and in the repo.
>> RDO packaging has also been updated.
>> Now waiting for final approval in Governance:
>> https://review.openstack.org/#/c/451096/
>>
>> If bug fixes have to happen, please submit them to the stable branches
>> directly and let us know on #rdo; we'll update the pin.
>
> Emilien, note that the repo is completely read only - on all branches.
> You cannot submit anything anymore,

OK, thanks for the reminder, and sorry for the confusion I caused (TIL).
I don't think it's a problem for us.

Thanks!

> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-04-07 Thread Andreas Jaeger
On 2017-04-07 18:55, Emilien Macchi wrote:
> os-cloud-config has been retired in Infra and in the repo.
> RDO packaging has also been updated.
> Now waiting for final approval in Governance:
> https://review.openstack.org/#/c/451096/
> 
> If bug fixes have to happen, please submit them to the stable branches
> directly and let us know on #rdo; we'll update the pin.

Emilien, note that the repo is completely read-only - on all branches.
You cannot submit anything anymore.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tosca-parser][heat-translator] Meeting Time change

2017-04-07 Thread HADDLETON, Robert W (Bob)


The combined tosca-parser/heat-translator weekly IRC meeting has been held on 
Thursdays at 1600 UTC in #openstack-heat-translator.


Due to scheduling conflicts I need to change the meeting time, and I'd 
like to propose Thursdays at 1400 UTC.


Please reply with any questions/concerns/counter-proposals.

Thanks

Bob Haddleton



[openstack-dev] [election][tc] TC candidacy

2017-04-07 Thread Ed Leafe
Hello! I am once again announcing my candidacy for a position on the OpenStack 
Technical Committee.

For those who do not know me, I'm easy to find. My name is Ed Leafe; I'm 
'edleafe' on IRC, and @edleafe on Twitter. I have been involved with OpenStack 
since the very beginning, when I was working for Rackspace as a core member of 
the Nova team. An internal job change took me away from active development 
after Essex, but since being hired by IBM, I've been back in the OpenStack 
universe since Kilo. As a result of this long involvement, I have always had a 
strong interest in helping to shape the direction of OpenStack, and if there is 
one thing people will agree on about me, it's that I'm never shy about voicing my 
opinion, whether the majority agrees with me or not (they usually don't!). I now 
spend most of my time working on Nova and the new Placement service. I also 
spend a good deal of time arguing over obscure HTTP issues with the API Working 
Group, and sometimes blog about them [0].

You'd think that with this long involvement, I'd be happy to see OpenStack 
continue on the course it's been on, and for the most part, you'd be right. 
What we've gotten right is the way we work together, focusing on community over 
corporate interests - that is essential for any project like OpenStack. What we 
really could improve, though, is how we focus our efforts, and how we set 
ourselves up for the future.

The Big Tent change was important for making this feel like an inclusive 
community, and for allowing for some competition among differing approaches. 
Where I think it's been problematic is that while the notion that "we're all 
OpenStack" is wonderful, this egalitarian approach has made it somewhat 
confusing, not only to the outside markets, but to the way we govern ourselves. 
I think that it's important to recognize that OpenStack can be divided into two 
parts: Infrastructure as a Service (IaaS), and everything else. Monty Taylor 
first outlined this split back in 2014 [1], and while there is still some room 
to debate which projects fall into which group, I think it's a more important 
distinction than ever. The "Layer 1" projects have a strong dependency on each 
other, and need to have a common way of doing things. But for all the other 
projects that build on top of this core, that sort of conformity is not 
critical. In fact, it can be a hindrance. So I believe that different technical 
rules should apply to these two groups.

The recent discussions on the approval of Golang and other languages into the 
OpenStack ecosystem [2] highlighted the need for this division. For the core 
IaaS projects, there should be a very, very high bar for using !Python. But for 
the others, I'd prefer to let them make their own choices. If they choose a 
language that is difficult to deploy and maintain, or that doesn't create logs 
like the rest of OpenStack, it's going to wither and die unless the benefits it 
brings are great enough to justify that increased burden.

To my mind, this is the only way to make OpenStack better: focus on making the 
IaaS core as rock-solid and dependable as possible, but then open things up for 
experimentation everywhere else. As long as a project follows the Four Opens 
[3], let them make the decisions on the trade-offs. As an API wonk, that's the 
benefit consistent APIs offer: the ability of any app to interact, not 
just those written in the same language.

This ties in with my other main concern: the narrowing-but-still-wide 
separation between OpenStack developers and operators. We've made a lot of 
progress over the last few cycles, but we still need to get a lot better. In my 
former life in the construction industry, there were always architects who 
designed very interesting things, but which were a complete pain to build. 
Inevitably this was the result of the architect having little practical 
experience in the field getting their hands dirty building things. Many of the 
comments I've heard from OpenStack operators have a similar aspect to them. I 
know that I have never run a large cloud, so when an operator tells me about an 
issue, I listen. I'd like to see the TC continue to encourage more 
opportunities for OpenStack developers to be able to listen and work with 
OpenStack operators.

So if you're still reading up to this point, perhaps you might want to consider 
voting for me for the TC. But either way, please ask questions of the 
candidates. That's the only way to know that the people you choose share your 
concerns, and that will help to ensure that the TC represents your interests.

[0] https://blog.leafe.com
[1] 
http://superuser.openstack.org/articles/openstack-as-layers-but-also-a-big-tent-but-also-a-bunch-of-cats/
[2] 
https://github.com/openstack/governance/blob/master/reference/new-language-requirements.rst
[3] https://governance.openstack.org/tc/reference/opens.html

-- Ed Leafe








Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-04-07 Thread Emilien Macchi
os-cloud-config has been retired in Infra and in the repo.
RDO packaging has also been updated.
Now waiting for final approval in Governance:
https://review.openstack.org/#/c/451096/

If bug fixes have to happen, please submit them to stable branches directly
and let us know on #rdo, we'll update the pin.
Thanks,

On Mon, Apr 3, 2017 at 11:56 AM, Emilien Macchi  wrote:
> The retirement patch is ready whenever you are:
> https://review.openstack.org/#/c/450946/
>
> Please review.
>
> Thanks,
>
> On Mon, Apr 3, 2017 at 3:53 AM, Bogdan Dobrelya  wrote:
>> On 31.03.2017 13:58, Jiří Stránský wrote:
>>> On 30.3.2017 17:39, Juan Antonio Osorio wrote:
 Why not drive the post-config with something like shade over ansible?
 Similar to what the kolla-ansible community is doing.
>>>
>>> We could use those perhaps, if they bring enough benefit to add them to
>>> the container image(s) (i think we'd still want to drive it via a
>>> container rather than fully externally). It's quite tempting to just
>>
>> I hope we still want to decouple configuration from distribution. Wrt
>> versioning issue, custom entry points seem bound to versions of the heat
>> templates and data living there anyway. So it sounds reasonable to me to
>> ship and version entrypoints among templates as a single bundle and
>> please please please keep those out of images.
>>
>>> load a yaml file with the endpoint definitions and just iterate over
>>> them and let Ansible handle the actual API calls...
>>>
>>> However, currently i can't see endpoint management in the cloud modules
>>> docs [1], just service management. Looks like there's still a feature
>>> gap at the moment.
>>>
>>> Jirka
>>
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][docker] registry-1.docker.io to reverse proxy cache

2017-04-07 Thread Paul Belanger
Greetings,

In an effort to help projects depending on docker infrastructure, we've set up a
reverse proxy cache for http://registry-1.docker.io. Please see the
instructions in 453811[1] on how to configure dockerd to use it.  While
testing, I did run into an issue with docker 17.04, so I suggest you use a lower
version for now.

Please reply to the thread when you have patches proposed to use the proxy
cache. We are working to update our configure_mirror.sh[2] so jobs can get the URL
more easily; we hope to land this today. We'd like to monitor the proxy cache for a few
jobs first, before opening the flood gates.

If you have questions, feel free to drop by #openstack-infra or reply.

[1] https://review.openstack.org/#/c/453811/
[2] https://review.openstack.org/#/c/454334/
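For anyone wanting a concrete picture, here is a rough sketch of pointing dockerd at a mirror via its `registry-mirrors` setting in daemon.json. The mirror URL below is a made-up placeholder, not the real infra mirror; the authoritative values are in the review linked above. The sketch writes the file to the current directory so it can be inspected without root:

```shell
# Sketch only: the mirror URL is a placeholder, not the real infra mirror.
MIRROR_URL="http://mirror.regionone.example.openstack.org:8081/registry-1.docker"

# dockerd reads this from /etc/docker/daemon.json; we write ./daemon.json
# here so the sketch runs without root.
printf '{\n  "registry-mirrors": ["%s"]\n}\n' "$MIRROR_URL" > daemon.json
cat daemon.json

# After copying the file into place, restart dockerd to pick it up, e.g.:
#   sudo systemctl restart docker
```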

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] potential virtual meetup (like previous virtual midcycle, but not in the middle of the cycle)

2017-04-07 Thread Dmitry Tantsur

Hi all!

Our virtual midcycle last cycle was apparently a success, and I've heard certain 
folks suggesting having more such virtual meetups. I wonder if it's time to have 
one relatively soon. Depending on how many things we want to discuss, we could 
take only 1 or 2 days (instead of 3 for the midcycle).


I've started collecting potential topics and other ideas in 
https://etherpad.openstack.org/p/ironic-virtual-meetup.


For those who don't know: the last midcycle was via phone (SIP). We had 3 days 
with 2 time slots of ~ 4 hours each day to cover different timezones.


Please let me know what you think. Would such an event be useful for you?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-docs] [OpenStack-I18n] [dev] What's up, doc?

2017-04-07 Thread Alexandra Settle
Team,

This week I have still been working on drafting a governance tag for our guides 
called "docs:follows-policy". I have been working with Doug Hellmann 
(dhellmann) over the last week to change the draft dramatically, so it would be 
good for doc people and doc liaisons to review. We are trying to make this a 
broader tag, so it can be applied to other guides too. To review: 
https://review.openstack.org/#/c/445536/ I am also in the process of 
documenting guidelines in our Contribution Guide - which would also benefit 
from doc reviews:  https://review.openstack.org/#/c/453642/

Would like to call out and thank John Davidge for his awesome work with the 
Networking Guide and neutron-related patches. He's been providing valuable 
guidance, and reviews, and it has been greatly appreciated by myself and the 
team.

Lana Brindley has done an awesome job for the last two weeks in keeping our bug 
list under control. We are down to an amazing 102 bugs in queue, and 82 bugs 
closed this cycle already!
Next week, I will be looking after the bug triage liaison role!
If you're sitting there thinking "bugs are for me, I really love triaging 
bugs!" well, you're in luck! We have one spot open for the rest of the cycle 
(14 Aug - 28 Aug): 
https://wiki.openstack.org/wiki/Documentation/SpecialityTeams#Bug_Triage_Team

== The Road to the Summit in Boston ==

Keep an eye out: the docs and I18n teams have a project onboarding room at the 
summit. Melvin Hillsman (mrhillsman) of the User Committee submitted a forum 
topic for the Ops Guide to get operator feedback.
David Flanders from the Foundation has also proposed a forum topic for 
developer.openstack.org (which currently houses our API, SDK, and other dev 
stuff). We'll be discussing major changes to that and would like to see some 
feedback from people here. Any questions on that, shoot them my way. My main 
objective for this forum topic is to reduce our current technical debt that 
lives on this site.
For more information on forum topics: https://wiki.openstack.org/wiki/Forum
Schedule has been released: 
https://www.openstack.org/summit/boston-2017/summit-schedul

== Specialty Team Reports ==

* API - Anne Gentle: API versioning in relation to release versioning is 
currently manually compiled for the 40-ish API services, so ideas on how to 
automate and surface that info welcomed. More info: 
http://lists.openstack.org/pipermail/openstack-dev/2017-March/114690.html
* Configuration Reference and CLI Reference - Tomoyuki Kato: N/A
* High Availability Guide - Ianeta Hutchinson: We are continuing to collaborate 
with OS DevOps team. See the tag ha-guide-draft for bugs opened to fill content 
for the new guide.
* Hypervisor Tuning Guide - Blair Bethwaite: N/A
* Installation guides - Lana Brindley: We have now branched, so please remember 
to backport if you have edits to the Ocata guide now. Big thanks to all the 
testers who have been working hard over the past month or two (that Nova cells 
bug was *tough*!), and to Brian and Mariia for doing the heavy lifting. We noticed 
a bunch of links to draft versions of the guide in Newton/Ocata branches; 
backports for those have been merged, and the Contributor Guide updated so we 
don't miss it in the future (https://docs.openstack.org/contributor 
guide/release/taskdetail.html#update-links-in-all-books).
* Networking Guide - John Davidge: More patches landed in the last couple of 
weeks dealing with the move to OSC, and more are still in flight. Progress also 
continues on RFC 5737 compliance. Thanks to all contributors for their work.
* Operations and Architecture Design guides - Darren Chan: Arch Guide: edited 
architecture considerations content and cleaned up the index page structure 
which was applied across OS manuals. Some ops-related content was moved to the 
Ops Guide. Our current focus is improving the storage design content and 
networking design content.
* Training Guides - Matjaz Pancur: N/A
* Training labs - Roger Luethi: We released the Ocata version of training-labs 
this week.
* User guides - Joseph Robinson: For the user guides - the spec on migrating 
the Admin Guide content this week moved closer to merging. I started preparing 
the work items for action on the User Guides tasks wiki page.
* Theme and Tools - Brian Moss: Anne has a spec up for theme consolidation, 
please check it out: https://review.openstack.org/#/c/454346/, Brian has fixed 
up the sitemap tool tests, reviews welcome: 
https://review.openstack.org/#/c/453976/. 41 open bugs, 13 closed in Pike

== Doc team meeting ==
Our next meeting will be on Thursday, 20 April at 2100 UTC in 
#openstack-meeting-alt.
For more meeting details, including minutes and the agenda: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
The meeting chair will be me!

Big thanks to Joseph for stepping up in the last 2 meetings and hosting in my 
absence! Really appreciated it :)

--
Have a great weekend :)
Alex

IRC: asettle
Twitter: dewsday


Re: [openstack-dev] [architecture] Arch-WG, we hardly knew ye..

2017-04-07 Thread Joshua Harlow

Sad to see it go, but I understand the reasoning about why.

Back to the coal mines :-P

-Josh

Clint Byrum wrote:

I'm going to be blunt. I'm folding the Architecture Working Group
immediately following our meeting today at 2000 UTC. We'll be using the
time to discuss continuity of the base-services proposal, and any other
draw-down necessary. After that our meetings will cease.

I had high hopes for the arch-wg, with so many joining us to discuss
things in Atlanta. But ultimately, we remain a very small group with
very limited resources, and so I don't think it's the best use of our
time to continue chasing the arch-wg.

Thanks everyone for your contributions. See you in the trenches.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-04-07 Thread Emilien Macchi
On Thu, Apr 6, 2017 at 7:08 PM, Doug Hellmann  wrote:
> Excerpts from Emilien Macchi's message of 2017-04-06 18:17:59 -0400:
>> On Wed, Mar 22, 2017 at 11:23 AM, Flavio Percoco  wrote:
>> > On 15/03/17 15:40 -0400, Doug Hellmann wrote:
>> >>
>> >> Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
>> >>>
>> >>> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
>> >>> > Team,
>> >>> >
>> >>> > So one more thing popped up again on IRC:
>> >>> > https://etherpad.openstack.org/p/oslo.config_etcd_backend
>> >>> >
>> >>> > What do you think? interested in this work?
>> >>> >
>> >>> > Thanks,
>> >>> > Dims
>> >>> >
>> >>> > PS: Between this thread and the other one about Tooz/DLM and
>> >>> > os-lively, we can probably make a good case to add etcd as a base
>> >>> > always-on service.
>> >>>
>> >>> As I mentioned in the other thread, there was specific and strong
>> >>> anti-etcd sentiment in Tokyo which is why we decided to use an
>> >>> abstraction. I continue to be in favor of us having one known service in
>> >>> this space, but I do think that it's important to revisit that decision
>> >>> fully and in context of the concerns that were raised when we tried to
>> >>> pick one last time.
>> >>>
>> >>> It's worth noting that there is nothing particularly etcd-ish about
>> >>> storing config that couldn't also be done with zk and thus just be an
>> >>> additional api call or two added to Tooz with etcd and zk drivers for it.
>> >>>
>> >>
>> >> The fun* thing about working with these libraries is managing the
>> >> interdependencies. If we're going to have an abstraction library that
>> >> provides configuration options for seeing the backend, like we do in
>> >> oslo.db and olso.messaging, then the configuration library can't use it
>> >> or we have a circular dependency.
>> >>
>> >> Luckily, tooz does not currently use oslo.config. So, oslo.config could
>> >> use tooz and we could create an oslo.dlm library with a shallow
>> >> interface mapping config options to tooz calls to open connections or
>> >> whatever we need from tooz in an application. Then apps could use
>> >> oslo.dlm instead of calling into tooz directly and the configuration of
>> >> the backend would be hidden from the application developer.
>> >
>> >
>> > Replying here because I like the proposal, I like what Monty said and I 
>> > also
>> > like what Doug said. Most of the issues and concerns have been covered in
>> > this
>> > thread and I don't have much else to add other than +1.
>>
>> The one-million-dollar question now is: what are the next steps?
>> It sounds like an oslo spec would be nice to summarize the ideas here
>> and talk about design.
>>
>> I volunteer to help but I would need someone more familiar than I am with 
>> Oslo.
>> Please let me know if you're interested to work on it with me
>> otherwise I'll chase some of you :-)
>
> I can help from the Oslo side.

I've resurrected an old spec: https://review.openstack.org/#/c/243114/
- addressed some comments and added TODOs that Doug and I will work
on together.
The target is set to Queens but we might provide some proof-of-concept
during Pike to make progress.
The TripleO project is very interested in this feature, and I'm pretty sure
other deployment tools are too. Any feedback on the work here is
more than welcome!
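To make the idea concrete, here is a toy sketch of what resolving config options from a key/value store such as etcd could look like. Everything here is illustrative: `EtcdLikeStore` is a stand-in dict, not a real etcd client, and the class names and key layout do not reflect any actual oslo.config driver API.

```python
# Toy sketch of config options resolved from a key/value store.
# EtcdLikeStore is a stand-in for an etcd client; nothing here is
# real oslo.config or etcd API.

class EtcdLikeStore:
    """Minimal stand-in for an etcd client: lookup by full key."""
    def __init__(self, data):
        self._data = data

    def get(self, key):
        return self._data.get(key)


class KVConfigSource:
    """Resolve '<prefix>/<group>/<option>' keys, with a fallback default."""
    def __init__(self, client, prefix):
        self.client = client
        self.prefix = prefix

    def get_option(self, group, name, default=None):
        value = self.client.get(f"{self.prefix}/{group}/{name}")
        return value if value is not None else default


store = EtcdLikeStore({"/config/nova/DEFAULT/debug": "true"})
source = KVConfigSource(store, "/config/nova")
print(source.get_option("DEFAULT", "debug"))         # -> true
print(source.get_option("DEFAULT", "workers", "4"))  # -> 4 (fallback)
```

The point of the sketch is the shape of the problem: the lookup layer needs its own bootstrap configuration (where is etcd?), which is exactly the circular-dependency concern Doug raised earlier in the thread.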

Thanks,

> Doug
>
>>
>> Thanks for the nice discussions here, I think we've made good progress.
>>
>> >> Doug
>> >>
>> >> * your definition of "fun" may be different than mine
>> >
>> >
>> > Which is probably different than mine :)
>> >
>> > --
>> > @flaper87
>> > Flavio Percoco
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC candidacy (dims)

2017-04-07 Thread Davanum Srinivas
Team,

Please consider my candidacy for another term on the OpenStack Technical
Committee. In my previous candidacy statement [1], I mentioned the following
challenges: "balance stability/innovation, fostering new talent, target our
limited resources to make OpenStack even better". This is as true as it was
a year ago, given the challenges we face today as a foundation. As a group, the
TC has made good progress in helping with those challenges. Some
highlights include:

* The recent Board/TC workshop, where we identified 4 critical areas and set up
work groups to come up with actionable items that we can do in the
short term [2].
* The Technical Committee Vision for 2019 [3]

My previous goals from [1] were:
- Cross Project issues, increasing collaboration between projects
- Backwards compatibility issues and testing
- Evangelizing great ideas from different projects

In addition to spending time on the Release and Requirements teams, here are
some ways I helped over the last year:
* Helped set up Python 3 support in various projects and infrastructure, and
  got several projects to set up CI jobs with functional tests
* Spent time in the Kubernetes community to figure out how the two communities
  can work better together, the areas of overlap, etc.
* Started the go-and-containers initiative [4] to help advance Go as a choice
  for projects and increase areas of collaboration with Docker and Kubernetes
* Actively or passively nudged several projects, including Zun, OpenStack-Helm,
  and Loci. Please look them up and participate!

In addition to the challenges and goals from before, there are some new ones I
would like to work on:
- We need to draw boundaries around what we do well so we can talk better to
  others in the larger open-source ecosystem (adjacent communities) about how
  end users can compose and customize what they need for their environments.
- Some projects need help since cores have moved on to work on other things, or
  their employers have moved away from supporting those projects. We need to
  figure out how to help, not just from a maintenance point of view, but also
  how to convert folks who are part-timers or operators into active
  contributors who keep things going.
- Folks who are just getting started in OpenStack do not know how to advance in
  the community (contributor->core, core for multiple projects, cross-project
  collaboration, etc.), so we should set up programs and processes (the Ladder
  program in the vision statement) to help with this situation.

I love being part of the OpenStack community, and learning from all the great,
talented people is what keeps me going. Thanks for the opportunity to serve
on the TC last year; I would love to do so for another term.

[1] 
https://git.openstack.org/cgit/openstack/election/tree/candidates/newton/TC/Davanum_Srinivas.txt
[2] 
https://crustyblaa.com/march-8-2017-openstack-foundation-strategic-planning-workshop.html
[3] 
http://docs-draft.openstack.org/62/453262/3/check/gate-governance-docs-ubuntu-xenial/e13854f//doc/build/html/reference/technical-committee-vision-2019.html
[4] https://etherpad.openstack.org/p/go-and-containers


-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][cinder] critical fix for ceph job

2017-04-07 Thread Ghanshyam Mann
Thanks. I am not sure about this kind of driver-specific behavior on the API
side. This brings up the question of whether Cinder APIs should be consistent
from a usage point of view. In this case [1], the create backup API can accept
a 'container' param and may or may not create a pool depending on the
configured driver. Should we then have better documentation of which drivers
honor that and which do not?

Any suggestions?

[1] https://review.openstack.org/#/c/454321/3

-gmann


On Fri, Apr 7, 2017 at 9:44 PM, Jon Bernard  wrote:
> I just posted a critical fix for the ceph job [1].  I'm anxious to get this
> landed so I can start working on the other intermittent failures in the
> gate.  Any feedback is much appreciated.
>
> [1] https://review.openstack.org/#/c/454321/
>
> --
> Jon
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Risk prediction model for OpenStack

2017-04-07 Thread Rossella Sblendido
Hi Zoey Lin,

thanks a lot for sharing your analysis. I agree with you that diversity and
turnover of developers could be good predictors of risk. However,
there's an important point that should be considered: who's reviewing
the code? A developer that has experience in a specific module might
stop contributing to it directly but might still review the code. This
helps newcomers learn faster and reduces the risk of introducing
bugs.

cheers,

Rossella

On 04/06/2017 05:22 AM, 林泽燕 wrote:
> Hi Kevin,
> 
> I believe that code ownership can reflect module size in some
> way. When code ownership is small, it means many commits were made by a
> group of contributors, and thus the module is likely quite large.
> And I think the number of bugs might be used to identify risky files
> for developers and managers, and could act as an indicator of the workload
> needed to improve the quality of the file; thus our development team can
> estimate the workload on each file and adjust work priorities.
> I hope I have made this clear. Thank you for your attention.
> 
> Zoey Lin
> 
> 
> -Original Message-
> *From:* "Kevin Benton" 
> *Sent:* 2017-04-05 22:17:08 (Wednesday)
> *To:* "OpenStack Development Mailing List (not for usage
> questions)" 
> *Cc:*
> *Subject:* Re: [openstack-dev] [neutron] Risk prediction model for
> OpenStack
> 
> Thanks for this analysis. So one thing that jumps out at me right
> away is the correlation of this with the module size.
> ovs_neutron_agent.py is one of the biggest modules (if not the
> biggest non-test module) in Neutron, so if you don't control for
> line count in the analysis, this one would come out on top even if
> it had the same code quality (bugs per line) as other modules. How
> do you deal with module size?
> 
> Cheers,
> Kevin Benton
> 
> On Tue, Apr 4, 2017 at 11:01 PM, 林泽燕  > wrote:
> 
> Dear everyone, 
> 
> My name is Zoey Lin; I majored in Computer Science at Peking
> University, China, and am a Master's degree candidate. Recently
> I have been doing research on OpenStack about the contribution
> composition of a code file, to predict the potential number of
> defects that the file will have in the later development stage
> of a release.
> 
> I wonder if I could show you my study, including some metrics
> for the prediction model and a visualization tool. I would
> appreciate it if you could share your opinions or give some
> advice, which would really, really help me a lot. Thank you so
> much for your kindness. :)
> 
>  
> 
> First of all, I will give a brief introduction to my study. I
> analyzed and designed some metrics to describe the contribution
> composition of a code file, and then used these metrics to train
> a model to predict the number of bugs that a file would have, as a
> risk value, in the later development stage of a release, using
> the historical commit log data of the earlier development stage.
> The model showed good performance. I also developed a tool to
> visualize the metrics and the potential risk value, which we
> believe could help developers and project managers be aware
> of the current situation and risk of the code file. We expect
> that project managers could estimate the workload, adjust the
> personnel assignment, and locate development problems based
> on the information from the tool, and finally reduce the risk
> and improve the quality of the project efficiently.
> 
>  
> 
> Then, I would introduce two main metrics of my model.
> 
> 1. code ownership of files and developers: 
> 
> Code ownership shows the number of engineers contributing to a
> source code artifact and the relative proportion of their
> contributions. The code ownership of a file refers to the
> proportion of ownership for the contributor with the highest
> proportion of ownership, which could indicate whether there
> is one developer who "owns" the file and has a high level of
> expertise, who can act as a single point of contact for others
> who need to use the component, need changes to it, or just have
> questions about it. A minor developer is a developer whose
> code ownership is lower than 5%. Previous literature has
> shown that code ownership and the number of minor developers
> strongly correlate with code defects.
> 
> 2. contribution diversity of the file:
> 
> We measured the uncertainty in a code file's contributions (or
> the diversity of sources of contributions) in a given month
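The ownership metric described in point 1 above can be sketched in a few lines. The commit data here is invented for illustration; only the definitions (ownership = highest share of commits to a file, minor developer = share below 5%) come from the message.

```python
# Sketch of the "code ownership" metric from the message above.
# Invented commit data; real input would come from the commit log.
from collections import Counter

def ownership_stats(commit_authors, minor_threshold=0.05):
    """Return (ownership, minor_developers) for one file's commit authors."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    shares = {author: n / total for author, n in counts.items()}
    # Ownership of the file: the highest individual share of commits.
    ownership = max(shares.values())
    # Minor developers: contributors below the 5% threshold.
    minors = [a for a, s in shares.items() if s < minor_threshold]
    return ownership, minors

authors = ["alice"] * 60 + ["bob"] * 38 + ["carol"] * 2  # 100 commits
own, minors = ownership_stats(authors)
print(own)     # 0.6 -> alice "owns" the file
print(minors)  # ['carol'] -> contributors under the 5% threshold
```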
> 

Re: [openstack-dev] [architecture] Arch-WG, we hardly knew ye..

2017-04-07 Thread lebre . adrien
Dear Clint, Dear all, 

It is indeed unfortunate that the WG is ending.

From our side (Inria folks from the Discovery initiative [1]), we had planned 
to join the effort after the Boston Summit as we are currently addressing two 
points that are/were closely related to the Arch WG (at least from our 
understanding): 

* We have been working during this last cycle on the consolidation of the EnOS 
solution [2] (in particular with advice from the performance team). 
EnOS aims to perform OpenStack performance analyses/profiling. It is built on 
top of Kolla and integrates Rally, Shaker, and more recently OSProfiler.
We are currently conducting preliminary experiments to draw, in an automatic 
manner, sequence diagrams such as 
https://docs.openstack.org/ops-guide/_images/provision-an-instance.png
We hope it will help our community better understand the OpenStack 
architecture as well as the interactions between the different services; in 
particular, it would help us follow changes between cycles. 

* We started a discussion regarding the oslo.messaging driver [3]. 
Our goal is, first, to make a qualitative analysis of AMQP messages (i.e. we 
would like to understand the different AMQP exchanges better) and try to 
identify possible improvements. 
Second, we will do a performance analysis of RabbitMQ under different 
scenarios and, according to the gathered results, conduct additional 
experiments with alternative solutions (ZMQ, Qpid, ...).
Please note that this is a preliminary discussion, so we just exchanged between 
a few folks from the Massively Distributed WG. We proposed a session to the 
forum [4] with the goal of opening the discussion more generally to the 
community. 

We are convinced of the relevance of a WG such as the Architecture one, but as 
you probably already know, it is difficult for most of us to join different 
meetings several times per week, and thus rather difficult to keep track of 
efforts under way in other WGs. I don't know whether or how we can improve 
this situation, but it would make sense to try. 

In any case,  thanks for giving a first try. 
Ad_rien_ 

[1]: http://beyondtheclouds.github.io
[2]: https://enos.readthedocs.io/en/latest/
[3]: https://etherpad.openstack.org/p/oslo_messaging_discussion 
[4]: http://forumtopics.openstack.org/cfp/details/62

- Original Message -
> From: "Clint Byrum" 
> To: "openstack-dev" 
> Sent: Thursday, April 6, 2017 19:53:17
> Subject: [openstack-dev] [architecture] Arch-WG, we hardly knew ye..
> 
> I'm going to be blunt. I'm folding the Architecture Working Group
> immediately following our meeting today at 2000 UTC. We'll be using
> the
> time to discuss continuity of the base-services proposal, and any
> other
> draw-down necessary. After that our meetings will cease.
> 
> I had high hopes for the arch-wg, with so many joining us to discuss
> things in Atlanta. But ultimately, we remain a very small group with
> very limited resources, and so I don't think it's the best use of our
> time to continue chasing the arch-wg.
> 
> Thanks everyone for your contributions. See you in the trenches.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] amrith candidacy for OpenStack Technical Committee

2017-04-07 Thread Amrith Kumar
My name is Amrith Kumar; I've been around OpenStack since a little
before the Icehouse release (about 3 years ago). During this time,
I've worked mostly on Trove, and a little bit on other projects but
mostly in service of Trove. I've also served as a member of the
Stewardship Working Group, and have participated in other activities
that seek to foster and encourage participation in OpenStack
(mentorship, for example).

I submit my candidacy for election to the TC at what I believe we will
look back upon as a point of inflection in the OpenStack trajectory.

Trove, the project that I've been most closely associated with, and
some other projects that are not part of the 'core OpenStack' have
faced a decline in participation. To be clear, I don't intend to place
blame on the 'big tent' proposal; I think the approach proposed in the
big tent was rational and needed at the time. The old way of doing
things was unsustainable and the new model is much more
scalable. However, it is an observable fact that the non-core projects
have seen a decline in participation as more corporate focus is placed
on the few core projects.

At this point of inflection, I think the TC should focus more of its
attention on three specific things that I promise to champion if
elected to the TC. I describe each in turn below.

I would like to make it easier for newcomers to OpenStack to
participate in the project, and I will make this a priority on the
Technical Committee. I have over the past few years done several
things in this area including talks at universities, presentations at
meetups, blogging and other things that aim to simplify this and make
the 'newcomer experience' a lot better. This is something I promise to
continue if elected to the TC.

A significant part of my motivation for running for election for the
TC is based on what was discussed at a leadership training for TC
members that I attended in Ann Arbor last year.

I'm happy to see that the TC has taken a more assertive position on
defining a vision, something which was discussed at length, and I
believe long overdue. I have long advocated that the TC should take a
more assertive and prescriptive approach towards the technical
decisions that it makes, and this is something I promise to bring to
the TC if elected.

I'm a strong proponent of allowing projects to make the right
technical decisions for their own areas, but still see the value in a
central governance structure. A part of the evolution of OpenStack
that I welcome is the recent move to take positive action on the
Golang issue, and to get work going to allow projects to choose a
language other than Python.

Finally, I believe that the TC is in need of some 'new voices' and I
promise to bring a new and different perspective to the deliberations
of the TC.

Thanks,

-amrith

--
Amrith Kumar
amrith.ku...@gmail.com




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 18

2017-04-07 Thread Chris Dent


There was a nova-specs sprint this week, so a lot of eyes were on
specs but there continues to be regular progress on resource
providers, the placement API and related work in the scheduler and
resource tracker. If you're doing work that I haven't noticed and
reported in here that you think should be, please follow up with
some links.

# What Matters Most

The addition of traits to the placement API is very close, one patch
remains. Linked below. That means that the top of the priority stack
is the spec for claims via the scheduler. Also linked below.

# What's Changed

There's a new spec for including user and project information in
allocations. This is a start towards allowing placement info to be
used for the counting required for quotas. There's also a followup
review to fix some issues with it:


http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/placement-project-user.html
https://review.openstack.org/#/c/454352/

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
section below).

* Helping to create and evaluate functional tests of the resource
tracker and the ways in which it and nova-scheduler use the
reporting client. For some info see
https://etherpad.openstack.org/p/nova-placement-functional
and talk to edleafe.

* Performance testing. If you have access to some nodes, some basic
   benchmarking and profiling would be very useful. See the
   performance section below. Is there room on OSIC for this kind of
   thing?

# Main Themes

## Traits

The work to implement the traits API in placement is happening at


https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits

There's one patch left to get the API in place and a patch for a new
command to sync the os-traits library into the database:

https://review.openstack.org/#/c/450125/

There is a stack of changes to the os-traits library to add more traits
and also automate creating symbols associated with the trait
strings:

https://review.openstack.org/#/c/448282/4

## Ironic/Custom Resource Classes

There's a blueprint for "custom resource classes in flavors" that
describes the stuff that will actually make use of custom resource
classes:


https://blueprints.launchpad.net/nova/+spec/custom-resource-classes-in-flavors

The spec has merged, but the implementation has not yet started.

Over in Ironic some functional and integration tests have started:

https://review.openstack.org/#/c/443628/

## Claims in the Scheduler

Progress has been made on the spec for claims in the scheduler:

https://review.openstack.org/#/c/437424/

Some differences of opinion on what's possible now and what the API
should expose have been resolved, but now we need to resolve some
questions on how (or even if) to most effectively deal with
reconciling allocations that used to happen in the resource tracker
and will now happen in the scheduler.

Eyes and brains required.

Thinking about this stuff has also revealed some places where it's
possible for allocations to become wrong or orphaned:

https://bugs.launchpad.net/nova/+bug/1679750
https://bugs.launchpad.net/nova/+bug/1661312

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Progress on this will continue once traits and claims have moved forward.

## Nested Resource Providers

The spec for this has been updated with what was learned at the PTG
and moved to pike and merged:


http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/nested-resource-providers.html

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

Several reviews are in progress for documenting the placement API.
This is likely going to take quite a few iterations as we work out
the patterns and tooling. But it's great to see the progress and
when looking at the draft rendered docs it makes placement feel like
a real thing™.

Find me (cdent) or Andrey (avolkov) if you want to help out or have
other questions.

## Performance

We're aware that there are some redundancies in the resource tracker
that we'd like to clean up

  
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself.

We ought to do some testing to make sure there aren't unexpected
performance drains. Is this something where we could get time on the
OSIC hardware?

# Other Code/Specs

* https://review.openstack.org/#/c/418393/
 A spec for improving the level of detail and structure in placement
 error responses so that it is easier to distinguish between
 different types of, for example, 409 responses.

 This hasn't seen any attention since March 17, and as a result
 didn't get any attention 

[openstack-dev] [election][tc] Nominating for Technical Committee

2017-04-07 Thread Joshua Hesketh
Howdy,

My name is Joshua Hesketh and I would like to self-nominate for the
Technical Committee.

I work for Rackspace Australia and have been involved in the OpenStack
community since the Havana release circa 2013. I have been primarily
working on the Infrastructure (infra) project, where I am both a core
contributor and root.

Outside of OpenStack, I have experience leading various communities
within the Oceania region. For six years I served the open source
community as President and Treasurer of Linux Australia. During this
time I helped to grow and scale the organisation from overseeing a
single conference each year to over eight. Additionally, I've directly
helped organise and run multiple Linux Australia Conferences
(linux.conf.au) and PyCon Australia Conferences, as well as OpenStack
and cloud/infrastructure miniconfs.

I am a strong proponent and advocate for free and open source software.
I am also a strong believer in the Four Opens that make up OpenStack,
and believe we lead the way for managing open clouds. I want to do all
that I can to help further this mission.

The Technical Committee has been doing a great job leading the
community and I don't have a particular platform for radical change.
Instead, I would like to help continue to strengthen and work on the
committee with a focus on the following:

 - Enabling the community. The TC is here to serve the community and
   facilitate collaboration.
 - Protecting our values, in particular the Four Opens.
 - Continuing to investigate and evaluate alternative technologies and
   how we work with them. For example, evaluating and introducing new
   languages.
 - Promoting collaboration with the wider open source community. For
   example, participating with communities such as Kubernetes.
 - Easing pain for operators, deployers and users by helping to improve
   consistency across projects through OpenStack-wide goals.

I would be glad to answer any questions during the candidacy period (and any
time for that matter) and if elected I look forward to working in this new
role.

Thank you for your consideration,
Joshua Hesketh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][cinder] critical fix for ceph job

2017-04-07 Thread Jon Bernard
I just posted a critical fix for the ceph job [1].  I'm anxious to get this
landed so I can start working on the other intermittent failures in the
gate.  Any feedback is much appreciated.

[1] https://review.openstack.org/#/c/454321/

-- 
Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Last days for TC candidate announcements

2017-04-07 Thread Davanum Srinivas
Folks,

There are just 5 candidates so far, so please consider throwing your
hat in the ring.
https://governance.openstack.org/election/#pike-tc-candidates

Thanks,
Dims

On Fri, Apr 7, 2017 at 5:03 AM, Tristan Cacqueray  wrote:
> A quick reminder that we are in the last days for TC candidate
> announcements.
>
> If you want to stand for the TC, don't delay, follow the instructions at
> [1] to make sure the community knows your intentions.
>
> Make sure your candidacy has been submitted to the openstack/election
> repository and approved by the election officials.
>
> Thank you,
> -Tristan
>
> [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Manila]share or volume's size unit

2017-04-07 Thread Duncan Thomas
Cinder will store the volume as 1G in the database (and quota) even if
the volume is only 500M. It will stay as 500M when it is attached
though. It's a side effect of importing volumes, but that's usually a
pretty uncommon thing to do, so shouldn't affect many people or cause
a huge amount of trouble.

There are also backends that allocate in units greater than 1G, and so
sometimes give you slightly bigger volumes than you asked for. Cinder
doesn't go out of its way to support this; again, the database and
quota will reflect what you asked for, while the attached volume will
be a slightly different size.

In your case, extend might be one way to solve the problem, if your
backend supports it. I'm not certain what will happen if you ask
cinder to extend to 1G a volume it already thinks is 1G... if it
doesn't work, please file a bug.
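
The rounding behaviour described above can be sketched as follows. This is a hypothetical helper illustrating the rule (round the managed size up to a whole gigabyte, minimum 1), not Cinder's actual code:

```python
import math

MIB_PER_GIB = 1024  # Cinder records sizes in whole gigabytes

def managed_size_gb(actual_size_mb):
    """Size recorded in the database (and counted against quota) when
    importing an existing volume: rounded up to the next whole GiB,
    never below 1. The attached volume keeps its real size."""
    return max(1, math.ceil(actual_size_mb / MIB_PER_GIB))

print(managed_size_gb(500))   # 1 -> a 500 MiB volume is booked as 1 GiB
print(managed_size_gb(1024))  # 1
print(managed_size_gb(1500))  # 2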

On 7 April 2017 at 09:01, jun zhong  wrote:
> Hi guys,
>
> We know the share's size unit is gigabytes in manila, and the volume's
> size unit is also gigabytes in cinder. But the size is not exact after
> we migrate a traditional environment to OpenStack.
> For example:
> 1. There is an original volume (vol_1) with 500MB size in the
> traditional environment.
> 2. We want to use openstack to manage this volume (vol_1).
> 3. We can only use a 1GB volume to manage the original volume (vol_1),
> because the cinder volume size cannot be 500MB.
> How do we deal with this? Could we set the volume or share's size unit
> to float or something else? Or add a new unit, MB? Or just extend the
> original volume size?
>
>
> Thanks
> jun
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] version document for project navigator

2017-04-07 Thread Monty Taylor

On 04/06/2017 03:51 PM, Jimmy McArthur wrote:

Cool. Thanks Monty!


Monty Taylor 
April 6, 2017 at 3:21 PM
On 04/06/2017 11:58 AM, Jimmy McArthur wrote:

Assuming this format is accepted, do you all have any sense of when this
data will be complete for all projects?


Hopefully "soon" :)

Honestly, it's not terribly difficult data to produce, so once we're
happy with it and where it goes, crowdsourcing filling it all in
should go quickly.


Jimmy McArthur 
April 5, 2017 at 8:59 AM
FWIW, from my perspective on the Project Navigator side, this format
works great. We can actually derive the age of the project from this
information as well by identifying the first release that has API data
for a particular project. I'm indifferent about where it lives, so I'd
defer to you all to determine the best spot.

I really appreciate you all putting this together!

Jimmy


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thierry Carrez 
April 5, 2017 at 5:28 AM

Somehow missed this thread, so will repost here comments I made
elsewhere:

This looks good, but I would rather not overload the releases
repository. My personal preference (which was also expressed by
Doug in the TC meeting) would be to set this information up in a
"project-navigator" git repo that we would reuse for any information we
need to collect from projects for accurate display on the project
navigator. If the data is not maintained anywhere else (or easily
derivable from existing data), we would use that repository to collect
it from projects.

That way there is a clear place to go to propose fixes to the project
navigator data. Not knowing how to fix that data is a common complaint,
so if we can point people to a git repo (and redirect people from there
to the places where other bits of information happen to live) that
would be great.

Monty Taylor 
April 4, 2017 at 5:47 PM
Hey all,

As per our discussion in today's TC meeting, I have made a document
format for reporting versions to the project navigator. I stuck it in
the releases repo:

  https://review.openstack.org/453361

Because there was already per-release information there, and the
governance repo did not have that structure.

I've included pseudo-code and a human explanation of how to get from a
service's version discovery document to the data in this document, but
also how it can be maintained- which is likely to be easier by hand
than by automation - but who knows, maybe we decide we want to make a
devstack job for each service that runs on tag events that submits a
patch to the releases repo. That sounds like WAY more work than once a
cycle someone adding a few lines of json to a repo - but *shrug*.

Basing it on the version discovery docs show a few things:

* "As a user, I want to consume an OpenStack Service's Discovery
Document" is a thing people might want to do and want to do
consistently across services.

* We're not that far off from being able to do that today.

* Still, like we are in many places, we're randomly different in a few
minor ways that do not actually matter but make life harder for our
users.

Thoughts and feedback more than welcome!
Monty

__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur 
April 6, 2017 at 11:58 AM
Assuming this format is accepted, do you all have any sense of when
this data will be complete for all projects?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [nova][api] quota-class-show not sync to quota-show

2017-04-07 Thread Chen CH Ji

Version 2.35 removed most deprecated output, like floating IPs etc., so we
no longer have the following in the quota-show output:
| floating_ips| 10|
| fixed_ips   | -1|
| security_groups | 10|
| security_group_rules| 20|

However, quota-class-show still has this output. Should we use 2.35 to
fix this bug, add a new microversion, or, because os-quota-class-sets is
about to be deprecated, just let it be? Thanks

DEBUG (session:347) REQ: curl -g -i -X GET
http://192.168.123.10:8774/v2.1/os-quota-class-sets/1 -H
"OpenStack-API-Version: compute 2.41" -H "User-Agent: python-novaclient" -H
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.41" -H
"X-Auth-Token: {SHA1}5008bb2787a9548d65b063f4db2525b4e3bf7163"

RESP BODY: {"quota_class_set": {"injected_file_content_bytes": 10240,
"metadata_items": 128, "ram": 51200, "floating_ips": 10, "key_pairs": 100,
"id": "1", "instances": 10, "security_group_rules": 20, "injected_files":
5, "cores": 20, "fixed_ips": -1, "injected_file_path_bytes": 255,
"security_groups": 10}}
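
To make the mismatch concrete, the response body pasted above can be parsed to list exactly which network-related quota fields quota-class-show still reports. This is only an illustrative check of the pasted data, not Nova code:

```python
import json

# Response body from the trace above, verbatim.
resp_body = """{"quota_class_set": {"injected_file_content_bytes": 10240,
"metadata_items": 128, "ram": 51200, "floating_ips": 10, "key_pairs": 100,
"id": "1", "instances": 10, "security_group_rules": 20, "injected_files": 5,
"cores": 20, "fixed_ips": -1, "injected_file_path_bytes": 255,
"security_groups": 10}}"""

# Network-related quota fields that quota-show no longer returns.
LEGACY_NETWORK_QUOTAS = {"floating_ips", "fixed_ips",
                         "security_groups", "security_group_rules"}

quotas = json.loads(resp_body)["quota_class_set"]
stale = sorted(LEGACY_NETWORK_QUOTAS & quotas.keys())
print(stale)
# ['fixed_ips', 'floating_ips', 'security_group_rules', 'security_groups']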

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Last days for TC candidate announcements

2017-04-07 Thread Tristan Cacqueray

A quick reminder that we are in the last days for TC candidate
announcements.

If you want to stand for the TC, don't delay, follow the instructions at
[1] to make sure the community knows your intentions.

Make sure your candidacy has been submitted to the openstack/election
repository and approved by the election officials.

Thank you,
-Tristan

[1] http://governance.openstack.org/election/#how-to-submit-your-candidacy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-20, April 10-14

2017-04-07 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-

Teams should be wrapping up Pike-1 work.

Actions
---

Next week is the Pike-1 deadline for cycle-with-milestones projects.
That means that before EOD on Thursday all milestone-driven projects
should propose a SHA for Pike-1 as a change to the openstack/releases
repository. As a reminder, Pike-1 versions should look like "P.0.0.0b1"
where P = Ocata version + 1. So if your Ocata release was "5.0.0",
Pike-1 should be "6.0.0.0b1".
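
The versioning rule above can be expressed mechanically. This is just a sketch of the convention, not the actual release tooling:

```python
def pike_milestone_version(ocata_version, milestone=1):
    """First Pike milestone tag for a cycle-with-milestones project:
    bump the major version and append .0b<milestone>."""
    major = int(ocata_version.split(".")[0])
    return f"{major + 1}.0.0.0b{milestone}"

print(pike_milestone_version("5.0.0"))   # 6.0.0.0b1 (the example above)
print(pike_milestone_version("15.0.0"))  # 16.0.0.0b1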

Pike-1 also marks the deadline for switching release models, as well as
submitting your team responses to the Pike release goals[1]. Responses
to those goals are needed even if the work is already done. At this hour
we seem to still be missing some (or all) answers from cinder,
cloudkitty, congress, designate, docs, dragonflow, ec2api, freezer,
fuel, heat, horizon, infra, karbor, kolla, kuryr, magnum, manila,
mistral, monasca, murano, neutron, nova, octavia, charms,
OpenStackAnsible, OpenStackClient, oslo, packaging-deb, packaging-rpm,
rally, refstack, requirements, sahara, searchlight, security, senlin,
solum, swift, tacker, telemetry, tricircle, trove, vitrage, watcher,
winstackers, and zaqar.

[1] https://governance.openstack.org/tc/goals/pike/index.html

Upcoming Deadlines & Dates
--

TC election self-nomination: April 9
Pike-1 milestone: April 13
Forum at OpenStack Summit in Boston: May 8-11

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-07 Thread Ravi Sekhar Reddy Konda
Hi Diarmuid,

In our setup too, Sravanthi deleted all the ports by mistake, and
we are trying to bring them up again. If that is done today I will ping
you and schedule again.

Otherwise I will schedule for Monday.

Thanks,
Ravi
- Original Message -
From: jtale...@redhat.com
To: openstack-dev@lists.openstack.org, jkilp...@redhat.com
Sent: Thursday, April 6, 2017 4:15:58 PM GMT +05:30 Chennai, Kolkata, Mumbai, 
New Delhi
Subject: Re: [openstack-dev] [tripleo] pingtest vs tempest

On Thu, Apr 6, 2017 at 5:32 AM, Sagi Shnaidman  wrote:
> HI,
>
> I think Rally or Browbeat and other performance-oriented solutions won't
> serve our needs, because we run TripleO CI in a virtualized environment with
> very limited resources. Actually we are pretty close to fully utilizing these
> resources when deploying openstack, so very little is available for tests.
> It's not a problem to run tempest API tests because they are cheap - they
> take little time and few resources, but they also give little coverage.
> Scenario tests are more interesting and give us more coverage, but they also
> take a lot of resources (which we don't have sometimes).

Sagi,
In my original message I mentioned a "targeted" test; I should have
explained that more. We could configure the specific scenario so that
the load on the virt overcloud would be minimal. Justin Kilpatrick
already has Browbeat integrated with TripleO Quickstart [1], so there
shouldn't be much work to try this proposed solution.

>
> It may be useful to run a "limited edition" of API tests that maximizes
> coverage and avoids duplication, for example just checking that each service
> is basically working, without covering all of its functionality. It would
> take very little time (e.g. 5 tests per service) and give a general picture
> of deployment success. It would also cover areas that are not covered by the
> pingtest.
>
> I think it could be an option to develop special scenario tempest tests for
> TripleO which would fit our needs.

I haven't looked at Tempest in a long time, so maybe its functionality
has improved. I just saw the opportunity to integrate Browbeat/Rally
into CI to test the functionality of OpenStack services, while also
capturing performance metrics.

Joe

[1] https://github.com/openstack/browbeat/tree/master/ci-scripts

>
> Thanks
>
>
> On Wed, Apr 5, 2017 at 11:49 PM, Emilien Macchi  wrote:
>>
>> Greetings dear owls,
>>
>> I would like to bring back an old topic: running tempest in the gate.
>>
>> == Context
>>
>> Right now, TripleO gate is running something called pingtest to
>> validate that the OpenStack cloud is working. It's an Heat stack, that
>> deploys a Nova server, some volumes, a glance image, a neutron network
>> and sometimes a little bit more.
>> To deploy the pingtest, you obviously need Heat deployed in your
>> overcloud.
>>
>> == Problems:
>>
>> Although pingtest has been very helpful over the last years:
>> - easy to understand, it's an Heat template, like an OpenStack user
>> would do to deploy their apps.
>> - fast: the stack takes a few minutes to be created and validated
>>
>> It has some limitations:
>> - Limitation to what Heat resources support (example: some OpenStack
>> resources can't be managed from Heat)
>> - Impossible to run a dynamic workflow (test a live migration for example)
>>
>> == Solutions
>>
>> 1) Switch pingtest to Tempest run on some specific tests, with feature
>> parity of what we had with pingtest.
>> For example, we could imagine to run the scenarios that deploys VM and
>> boot from volume. It would test the same thing as pingtest (details
>> can be discussed here).
>> Each scenario would run more tests depending on the service that they
>> run (scenario001 is telemetry, so it would run some tempest tests for
>> Ceilometer, Aodh, Gnocchi, etc).
>> We should work at making the tempest run as short as possible, and the
>> close as possible from what we have with a pingtest.
>>
>> 2) Run custom scripts in TripleO CI tooling, called from the pingtest
>> (heat template), that would run some validations commands (API calls,
>> etc).
>> It has been investigated in the past but never implemented AFAIK.
>>
>> 3) ?
>>
>> I tried to make this text short and go straight to the point, please
>> bring feedback now. I hope we can make progress on $topic during Pike,
>> so we can increase our testing coverage and detect deployment issues
>> sooner.
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

[openstack-dev] [Cinder][Manila]share or volume's size unit

2017-04-07 Thread jun zhong
Hi guys,

We know the share's size unit is gigabytes in manila, and the volume's
size unit is also gigabytes in cinder. But the size is not exact after
we migrate a traditional environment to OpenStack.
For example:
1. There is an original volume (vol_1) with 500MB size in the
traditional environment.
2. We want to use openstack to manage this volume (vol_1).
3. We can only use a 1GB volume to manage the original volume (vol_1),
because the cinder volume size cannot be 500MB.
How do we deal with this? Could we set the volume or share's size unit
to float or something else? Or add a new unit, MB? Or just extend the
original volume size?


Thanks
jun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [architecture] Arch-WG, we hardly knew ye..

2017-04-07 Thread Thierry Carrez
Clint Byrum wrote:
> I'm going to be blunt. I'm folding the Architecture Working Group
> immediately following our meeting today at 2000 UTC. We'll be using the
> time to discuss continuity of the base-services proposal, and any other
> draw-down necessary. After that our meetings will cease.
> 
> I had high hopes for the arch-wg, with so many joining us to discuss
> things in Atlanta. But ultimately, we remain a very small group with
> very limited resources, and so I don't think it's the best use of our
> time to continue chasing the arch-wg.

While I'm sad to see this group disappear, it ended up being just a few
folks with limited time on their hands, and the output of the group was
limited. With such activity levels it is clearly not worth the overhead
of maintaining regular meetings, tracking, processes and structure
around our work.

I expect most of the initiatives to continue to be pushed through
existing venues like the Technical Committee or the promising new
inter-project workgroups. I for one intend to continue to push to define
base services in OpenStack and introduce etcd to that mix.

Thanks Clint for encouraging us to take a step back and spend more time
thinking about that sort of thing !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev