Re: [openstack-dev] [Openstack-operators] [nova] Next minimum libvirt version

2017-02-10 Thread gustavo panizzo
On Fri, Feb 10, 2017 at 05:42:26PM +, Daniel P. Berrange wrote:
> On Thu, Feb 09, 2017 at 05:29:22PM -0600, Matt Riedemann wrote:
> > Since danpb hasn't been around I've sort of forgotten about this, but we
> > should talk about bumping the minimum required libvirt version in nova.
> > 
> > Currently it's 1.2.1 and the next was set to 1.2.9.
> > 
> > On master we're gating on ubuntu 16.04 which has libvirt 1.3.1 (14.04 had
> > 1.2.2).
> > 
> > If we move to require 1.2.9 that effectively kills 14.04 support for
> > devstack + libvirt on master, which is probably OK.
> > 
> > There is also the distro support wiki [1] which hasn't been updated in
> > awhile.
> > 
> > I'm wondering if 1.2.9 is a safe move for the next required minimum version
> > and if so, does anyone have ideas on the next required version after that?
> 
> I think libvirt 1.2.9 is absolutely fine as a next version. It is still
> ancient history comparatively speaking.
> 
> The more difficult question is what happens after that. To go further than
> that effectively requires dropping Debian as a supportable platform since
> AFAIK, they never rebase libvirt & next Debian major release is still
> unannounced.  So the question is whether "stock" Debian is something the
> project cares about targetting or will the answer be that Debian users
> are required to pull in newer libvirt from elsewhere.
Debian 9.0 has been frozen; it will soon be released with libvirt 3.0.0.
The previous release has 1.2.9.

https://packages.debian.org/search?keywords=libvirt0&searchon=names&suite=all&section=all

> Also, it is just as important to consider minimum QEMU versions at the
> same time, though it could just be set to the lowest common denominator
> across distros that remain, after choosing the libvirt version.

The QEMU version for the next release is 2.8; the previous release has 2.1.

https://packages.debian.org/search?keywords=qemu&searchon=names&suite=all&section=all
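As context for the version-bump discussion, the kind of minimum-version gate nova performs at driver startup can be sketched as follows (the constant and function names here are illustrative, not nova's actual code):

```python
# Illustrative sketch of a minimum-version check like the one nova's
# libvirt driver performs at startup (names are not nova's actual code).
NEXT_MIN_LIBVIRT_VERSION = (1, 2, 9)


def version_to_tuple(version_str):
    """Turn a dotted version like '1.3.1' into a comparable tuple."""
    return tuple(int(part) for part in version_str.split('.'))


def check_min_version(libvirt_version, minimum=NEXT_MIN_LIBVIRT_VERSION):
    """Raise if the connected libvirt is older than the required minimum."""
    if version_to_tuple(libvirt_version) < minimum:
        raise RuntimeError(
            'libvirt %s is too old: %s or newer is required'
            % (libvirt_version, '.'.join(map(str, minimum))))
```

Under this scheme, 16.04's 1.3.1 and Debian 9's 3.0.0 pass a 1.2.9 minimum, while 14.04's 1.2.2 fails it.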

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|
> 
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

keybase: https://keybase.io/gfa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Clark Boylan
On Fri, Feb 10, 2017, at 10:54 AM, Ihar Hrachyshka wrote:
> Oh nice, I haven't seen that. It does give (virtualized) CPU model
> types. I don't see a clear correlation between models and
> failures/test times though. We of course miss some more details, like
> flags being emulated, but I doubt it will give us a clue.

Yes, this will still be the virtualized CPU. Also the lack of cpu flag
info is a regression compared to the old method of collecting this data.
If we think that info could be useful somehow we should find a way to
add it back in. (Maybe just add back the cat /proc/cpuinfo step in
devstack-gate).
 
> It would be interesting to know the overcommit/system load for each
> hypervisor affected. But I assume we don't have access to that info,
> right?

Correct, with the exception of infracloud and OSIC (if we ask nicely) I
don't expect it will be very easy to get this sort of information from
our clouds.

For infracloud a random sample of a hypervisor shows that it has 24 real
cores. In the vanilla region we are limited to 126 VM instances with
8 vcpus each. We have ~41 hypervisors, which is just over 3 VM instances
per hypervisor. 24 real cpus / 8 vcpus = 3 VM instances without
oversubscribing. So we are just barely oversubscribing, if at all.
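Clark's back-of-the-envelope numbers (taken from the message above) can be checked directly:

```python
# Checking the oversubscription arithmetic from the message above.
real_cores = 24          # physical cores per sampled hypervisor
vcpus_per_instance = 8   # vCPUs per VM instance
max_instances = 126      # instance cap in the vanilla region
hypervisors = 41         # approximate hypervisor count

avg_instances = max_instances / hypervisors              # just over 3
fit_without_oversub = real_cores // vcpus_per_instance   # exactly 3

print(round(avg_instances, 2), fit_without_oversub)  # prints: 3.07 3
```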

Clark



Re: [openstack-dev] [containers][magnum] Make certs insecure in magnum drivers

2017-02-10 Thread Adrian Otto
I have opened the following bug ticket for this issue:

https://bugs.launchpad.net/magnum/+bug/1663757

Regards,

Adrian

On Feb 10, 2017, at 1:46 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:

What I’d like to see in this case is to use secure connections by default, and 
to make workarounds for self-signed certificates, or other optional workarounds, 
available for those who need them. I would have voted against patch set 383493. It’s also 
not linked to a bug ticket, which we normally require prior to merge. I’ll see 
if I can track down the author to see about fixing this properly, or if there 
is a volunteer to do this better, I’m open to that too.

Adrian

On Feb 10, 2017, at 2:05 AM, Kevin Lefevre <lefevre.ke...@gmail.com> wrote:

Hi,

This change (https://review.openstack.org/#/c/383493/) makes certificate 
requests to magnum_api insecure, since that is a common use case.

In swarm drivers, the make-cert.py script is in python whereas in K8s for 
CoreOS and Atomic, it is a shell script.

I wanted to make the change (https://review.openstack.org/#/c/430755/) but it 
gets flagged by bandit because of the Python requests package's insecure TLS.

I know that we should support custom CAs in the future, but if right now (and 
according to the previously merged change) insecure requests are the default, 
what should we do?

Do we disable bandit for the swarm drivers? Or do we use the same scripts 
(and keep it as simple as possible) for all the drivers, possibly without 
Python, as it is not included in CoreOS.
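For context on what bandit objects to: its B501 check flags `requests` calls with certificate verification disabled. A sketch of the trade-off under discussion, with hypothetical helper names (this is not magnum's actual code), is to make `verify` configurable instead of hard-coding `False`:

```python
def resolve_verify(ca_bundle=None, insecure=False):
    """Compute the `verify` argument for requests.

    True  -> verify against the default CA bundle
    path  -> verify against a custom CA (the "custom CA" case)
    False -> no verification (what bandit's B501 check flags)
    """
    if insecure:
        return False
    return ca_bundle if ca_bundle else True


def fetch_cert(url, ca_bundle=None, insecure=False):
    """Hypothetical helper: request a cert from the magnum API with
    verification kept configurable rather than hard-coded off."""
    import requests  # imported lazily; the helper above is testable without it
    return requests.get(url, verify=resolve_verify(ca_bundle, insecure))
```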




Re: [openstack-dev] [Designate] In what sense is it multi-tenant?

2017-02-10 Thread Fox, Kevin M
You can give multiple tenants each their own subdomains, and no tenant can 
write to another tenant's domains.

In addition, if memory serves, each tenant could have private domain servers 
too, which only they could access, but which are manageable through the same API.
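A toy model (illustrative only, not Designate's implementation) of the write isolation described above, where each zone is owned by one project and writes from any other project are refused:

```python
class ZoneStore:
    """Minimal sketch of per-tenant zone ownership (illustrative only)."""

    def __init__(self):
        self._owners = {}  # zone name -> owning project id

    def create_zone(self, project_id, name):
        if name in self._owners:
            raise PermissionError('zone already exists')
        self._owners[name] = project_id

    def add_record(self, project_id, zone, record):
        # Writes are only allowed by the owning project.
        if self._owners.get(zone) != project_id:
            raise PermissionError('%s does not own %s' % (project_id, zone))
        return (zone, record)
```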

Thanks,
Kevin

From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Friday, February 10, 2017 1:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Designate] In what sense is it multi-tenant?

In what sense is Designate multi-tenant?  Can it be programmed to give 
different views to different DNS clients?  (If so, how?)

Thanks,
Mike


Re: [openstack-dev] [magnum][kuryr] python-k8sclient vs client-python (was Fwd: client-python Beta Release)

2017-02-10 Thread Davanum Srinivas
Dear Magnum Team,

Please see review:
https://review.openstack.org/#/c/432421/

It depends on the requirements review:
https://review.openstack.org/#/c/432409/

Thanks,
Dims

On Mon, Jan 30, 2017 at 11:54 AM, Antoni Segura Puimedon  wrote:

>
>
> On Thu, Jan 26, 2017 at 12:41 PM, Davanum Srinivas 
> wrote:
>
>> Team,
>>
>> A bit of history: we had a client generated from a swagger definition for a
>> while in Magnum; we plucked it out into python-k8sclient, which then got
>> used by fuel-ccp, kuryr, etc. Recently the Kubernetes team started an effort
>> called client-python. Please see the 1.0.0b1 announcement.
>>
>> * It's on pypi[1] and readthedocs[2]
>> * i've ported the e2e tests in python-k8sclient that run against an
>> actual k8s setup and got that working
>> * i've looked at various tests in kuryr, fuel-ccp, magnum etc to see what
>> could be ported as well. most of it is merged already. i have a couple of
>> things in progress
>>
>> So, when client-python hits 1.0.0, can we please mothball our
>> python-k8sclient and switch over to the k8s community supported option?
>> Can you please evaluate what's missing so we can make sure those things
>> get into 1.0.0 final?
>>
>
> I am all for this. Thanks for the good work Davanum! I think this is a
> perfect case where the OpenStack Community can give back to other upstream
> communities and we should improve client-python where we need.
>
>
>>
>> Thanks,
>> Dims
>>
>> [1] https://pypi.python.org/pypi/kubernetes
>> [2] http://kubernetes.readthedocs.io/en/latest/kubernetes.html
>>
>> -- Forwarded message --
>> From: 'Mehdy Bohlool' via Kubernetes developer/contributor discussion <
>> kubernetes-...@googlegroups.com>
>> Date: Wed, Jan 25, 2017 at 8:34 PM
>> Subject: client-python Beta Release
>> To: Kubernetes developer/contributor discussion <
>> kubernetes-...@googlegroups.com>, kubernetes-us...@googlegroups.com
>>
>>
>> Python client is now in beta. Please find more information here:
>> https://github.com/kubernetes-incubator/client-python/releases/tag/v1.0.0b1
>>
>> You can reach the maintainers of this project at SIG API Machinery.
>> If you have any problem with the client or any suggestions, please file an
>> issue.
>>
>>
>> Mehdy Bohlool |  Software Engineer |  me...@google.com |  mbohlool@github
>> 
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes developer/contributor discussion" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kubernetes-dev+unsubscr...@googlegroups.com.
>> To post to this group, send email to kubernetes-...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/kubernetes-dev/CACd0WeG3O1t%3DXt7AGykyK7CcLmVYyJAB918c%2BXvteqVrW3nb7A%40mail.gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>>
>>
>
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims


[openstack-dev] [Designate] In what sense is it multi-tenant?

2017-02-10 Thread Mike Spreitzer
In what sense is Designate multi-tenant?  Can it be programmed to give 
different views to different DNS clients?  (If so, how?)

Thanks,
Mike



Re: [openstack-dev] [containers][magnum] Make certs insecure in magnum drivers

2017-02-10 Thread Adrian Otto
What I’d like to see in this case is to use secure connections by default, and 
to make workarounds for self-signed certificates, or other optional workarounds, 
available for those who need them. I would have voted against patch set 383493. It’s also 
not linked to a bug ticket, which we normally require prior to merge. I’ll see 
if I can track down the author to see about fixing this properly, or if there 
is a volunteer to do this better, I’m open to that too.

Adrian

> On Feb 10, 2017, at 2:05 AM, Kevin Lefevre  wrote:
> 
> Hi,
> 
> This change (https://review.openstack.org/#/c/383493/) makes certificate 
> requests to magnum_api insecure, since that is a common use case.
> 
> In swarm drivers, the make-cert.py script is in python whereas in K8s for 
> CoreOS and Atomic, it is a shell script.
> 
> I wanted to make the change (https://review.openstack.org/#/c/430755/) but it 
> gets flagged by bandit because of the Python requests package's insecure TLS.
> 
> I know that we should support custom CAs in the future, but if right now (and 
> according to the previously merged change) insecure requests are the default, 
> what should we do?
> 
> Do we disable bandit for the swarm drivers? Or do we use the same 
> scripts (and keep it as simple as possible) for all the drivers, possibly 
> without Python, as it is not included in CoreOS.



Re: [openstack-dev] [api] API WG PTG planning

2017-02-10 Thread Ed Leafe
On Feb 10, 2017, at 2:48 PM, Matt Riedemann  wrote:

> I assumed we'd take the opportunity to talk about capabilities [1] at the PTG 
> but couldn't find any etherpad for the API WG on the wiki [2].
> 
> Is the API WG getting together on Monday or Tuesday?
> 
> [1] https://review.openstack.org/#/c/386555/
> [2] https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

We weren’t listed on the etherpad listing, so we didn’t know if we could take a 
slot. So we asked the Architecture WG if we could share space with them. The 
capabilities discussion is one of the ones we are planning on:

https://etherpad.openstack.org/p/ptg-architecture-workgroup


-- Ed Leafe








[openstack-dev] Hierarchical quotas at the PTG?

2017-02-10 Thread Matt Riedemann
Operators want hierarchical quotas [1]. Nova doesn't have them yet and 
we've been hesitant to invest scarce developer resources in them since 
we've heard that the implementation for hierarchical quotas in Cinder 
has some issues. But it's unclear to some (at least me) what those 
issues are.


Has anyone already planned on talking about hierarchical quotas at the 
PTG, like the architecture work group?


I know there was a bunch of razzle dazzle before the Austin summit about 
quotas, but I have no idea what any of that led to. Is there still a 
group working on that and can provide some guidance here?


[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html
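For readers unfamiliar with the idea, hierarchical quotas mean a child project's usage also counts against its parent's limit. A minimal sketch (illustrative only, not Cinder's or Nova's implementation):

```python
class Project:
    """Toy hierarchical quota node: usage rolls up to every ancestor."""

    def __init__(self, limit, parent=None):
        self.limit = limit
        self.parent = parent
        self.usage = 0

    def can_allocate(self, amount):
        # The allocation must fit at every level up the hierarchy.
        node = self
        while node is not None:
            if node.usage + amount > node.limit:
                return False
            node = node.parent
        return True

    def allocate(self, amount):
        if not self.can_allocate(amount):
            raise ValueError('quota exceeded')
        node = self
        while node is not None:
            node.usage += amount
            node = node.parent
```

Part of what makes real implementations hard is exactly this roll-up: a child within its own limit can still be refused because a sibling has consumed the parent's headroom.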


--

Thanks,

Matt Riedemann



[openstack-dev] [glance] priorities for the coming week (02/10-02/16)

2017-02-10 Thread Brian Rosmaita
Hello Glancers,

Here are the weekly priorities:

1.  RC-1 Testing
It's looking like there won't be an RC-2, so no need to wait, do some
testing on RC-1 now.  If you do find an issue, please create a bug, tag
it as 'ocata-rc-potential' and give a shout in #openstack-glance for
rosmaita or sigmavirus to assess rc-potential.

2. Specs for Pike
Now's the time to start turning your ideas into cold, hard RST.
If they could use some discussion, add something to the PTG etherpad:
https://etherpad.openstack.org/p/glance-pike-ptg-planning

3. Community Goals for Pike
Take some time to look at the Pike community goals and start thinking
about whether you'd like to get in on the action:
https://governance.openstack.org/tc/goals/pike/index.html


cheers,
brian



Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Jeremy Stanley
On 2017-02-10 17:47:50 + (+), Hayes, Graham wrote:
[...]
> I am struggling to think of even one) multi tenant DNS management
> APIs.
[...]

I ran http://www.nictool.com/ at a service provider years ago,
selected specifically because it's a multi-tenant (or at least could
be made reasonably so via RBAC) authoritative DNS manglement
frontend with a usable API. The AGPL license on it would probably
make it unsuitable for a lot of our downstream ecosystem however.

If anything, I see that (and the struggle I went through back then
to find anything remotely close to fitting that use case) as an
explanation for why Designate exists. There just isn't a lot out
there focused on this problem space except what random service
providers have written in-house, and the vast majority of them never
see the light of (free software) day.
-- 
Jeremy Stanley



[openstack-dev] [api] API WG PTG planning

2017-02-10 Thread Matt Riedemann
I assumed we'd take the opportunity to talk about capabilities [1] at 
the PTG but couldn't find any etherpad for the API WG on the wiki [2].


Is the API WG getting together on Monday or Tuesday?

[1] https://review.openstack.org/#/c/386555/
[2] https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

--

Thanks,

Matt Riedemann



[openstack-dev] [nova][cinder] Should nova just default to use cinder v3 in Pike?

2017-02-10 Thread Matt Riedemann
While talking about [1] yesterday and trying to figure out how to 
configure nova to use cinder v3 in the CI jobs in Pike, things got a bit 
messy from the CI job configuration perspective.


My initial plan was to make the nova-next (formerly "placement" job [2]) 
use cinder v3 but that breaks down a bit when that job runs on 
stable/newton where nova doesn't support cinder v3.


So when the cat woke me up at 3am I couldn't stop thinking that we 
should just default "[cinder]/catalog_info" in nova.conf to cinderv3 in 
Pike. Then CI on master will be running nova + cinder v3 (which should 
be backward compatible with cinder v2). That way I don't have to mess 
with marking a single CI job in master as using cinder v3 when by 
default they all will.


We'll still want some nova + cinder v2 coverage and I initially thought 
grenade would provide that, but I don't think it will since we don't 
explicitly write out the 'catalog_info' value in nova.conf during a 
devstack run, but we could do that in stable/ocata devstack and then it 
would persist through an upgrade from Ocata to Pike. There are other 
ways to get that coverage too, that's just my first thought.
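For reference, the option being discussed is `[cinder]/catalog_info` in nova.conf; switching its default would amount to something like the following (the v3 service type/name shown is a typical deployment value, not necessarily the exact default chosen):

```ini
[cinder]
# Format is <service_type>:<service_name>:<endpoint_type>.
# Today nova defaults to the v2 endpoint; the proposal is to default
# to the v3 one, e.g.:
catalog_info = volumev3:cinderv3:publicURL
```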


Anyway, I just remembered this and it was middle-of-the-night thinking, 
so I'm looking to see if this makes sense or what is wrong with it.


[1] https://review.openstack.org/#/c/420201/
[2] https://review.openstack.org/#/c/431704/

--

Thanks,

Matt Riedemann



Re: [openstack-dev] [refstack] Getting on the Pike PTG Agenda?

2017-02-10 Thread Catherine Cuong Diep



Hi Aimee,

Thanks for raising awareness of the absent etherpad link.  The RefStack
etherpad link has been added to
https://wiki.openstack.org/wiki/PTG/Pike/Etherpads#Monday_-.3E_Tuesday  ...
We look forward to discussion regarding OPNFV at PTG.

Catherine Diep
- Forwarded by Catherine Cuong Diep/San Jose/IBM on 02/10/2017 11:34 AM
-

From:   Aimee Ukasick 
To: openstack-dev@lists.openstack.org
Date:   02/10/2017 07:06 AM
Subject:[openstack-dev] [refstack] Getting on the Pike PTG Agenda?



Hi Refstack team - a team from OPNFV will be at the Pike PTG, and we
would like to meet with the RefStack team to discuss building a direct
link to RefStack and other upstream verification projects. We would like
to present the OPNFV Dovetail project and our goals for leveraging
upstream test frameworks, as well as supplementing them with
OPNFV-specific tests (some of which will work their way upstream over
time).

I don't see a Pike PTG etherpad for RefStack on the PTG/Pike/Etherpads
page, so how do I book time on the RefStack agenda?

Thanks in advance!
--

Aimee Ukasick
AT&T Open Source






Re: [openstack-dev] [magnum] devstack/heat problem with master_wait_condition

2017-02-10 Thread Syed Armani
Hello Stanisław,

Were you able to solve this issue?

Cheers,
Syed

On Wed, Aug 26, 2015 at 2:14 PM, Sergey Kraynev 
wrote:

> Hi Stanislaw,
>
> Your host with Fedora should have a special config file, which will send a
> signal to the WaitCondition.
> For a good example please take a look at this template:
> https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml
>
> Also the best place for such questions I suppose will be
> https://ask.openstack.org/en/questions/
>
> Regards,
> Sergey.
>
> On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak <
> stanislaw.pitu...@hp.com> wrote:
>
>> Hi all,
>>
>> I’m trying to stand up magnum according to the quickstart instructions
>> with devstack.
>>
>> There’s one resource which times out and fails: master_wait_condition.
>> The kube master (fedora) host seems to be created, I can login to it via
>> ssh, other resources are created successfully.
>>
>>
>>
>> What can I do from here? How do I debug this? I tried to look for the
>> wc_notify itself to try manually, but I can’t even find that script.
>>
>>
>>
>> Best Regards,
>>
>> Stanisław Pitucha
>>
>>
>>
>>
>>
>
>
>
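For anyone hitting the same timeout: the pattern Sergey's linked template demonstrates boils down to roughly this HOT fragment (simplified, with illustrative image/flavor values). The `wc_notify` the original poster was looking for is not a script on disk; it is substituted at stack-create time with the handle's `curl_cli` signalling command:

```yaml
heat_template_version: 2014-10-16

resources:
  master_wait_handle:
    type: OS::Heat::WaitConditionHandle

  master_wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: {get_resource: master_wait_handle}
      # The resource fails with a timeout, as seen above, if no signal arrives.
      timeout: 600

  master:
    type: OS::Nova::Server
    properties:
      image: fedora-k8s        # illustrative
      flavor: m1.small         # illustrative
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            # Runs on the guest at boot; signals success back to Heat.
            wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            wc_notify: {get_attr: [master_wait_handle, curl_cli]}
```

So a wait-condition timeout usually means the boot-time script on the guest never ran to completion (cloud-init logs on the master are the place to look).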


Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Brandon B. Jozsa
I’m just catching up with this thread, but I absolutely agree with Jay on 
documentation and just general messaging. This is not just a Designate issue at 
all though, this is an issue that many projects have. Well drafted specs, 
mission, scope, and general messaging are sometimes the bane of great ideas or 
projects. The more solid the documentation is, and to that point installation 
guides, the better people will understand the value of the project. Sure DNS 
isn’t the sexiest thing out there, but if you make it more secure and more 
stable as a service element…I think we’d all agree, that provides very high 
value.

If there’s a way that our teams can help, let us know (or reach out to me 
directly).

Brandon



On February 9, 2017 at 9:35:55 PM, Jay Pipes 
(jaypi...@gmail.com) wrote:

On 02/09/2017 02:19 PM, Hayes, Graham wrote:


> Where too now then?
> ===
>
> Well, this is where I call out to people who actually use the project -
> don't
> jump ship and use something else because of the picture I have painted.
> We are
> a dedicated team who cares about the project. We just need some help.
>
> I know there are large telcos who use Designate. I am sure there is tooling,
> or docs, built up in these companies that could be very useful to the
> project.
>
> Nearly every commercial OpenStack distro has Designate. Some have had it
> since
> the beginning. Again, developers, docs, tooling, testers, anything and
> everything is welcome. We don't need a massive amount of resources - we
> are a
> small ish, stable, project.
>
> We need developers with upstream time allocated, and the budget to go to
> events
> like the PTG - for cross project work, and internal designate road map,
> these
> events form the core of how we work.
>
> We also need help from cross project teams - the work done by them is
> brilliant
> but it can be hard for smaller projects to consume. We have had a lot of
> progress since the `Leveller Playing Field`_ debate, but a lot of work is
> still optimised for the larger teams who get direct support, or well
> resourced
> teams who can dedicate people to the implementation of plugins / code.
>
> As someone I was talking to recently said - AWS is not winning public cloud
> because of commodity compute (that does help - a lot), but because of the
> added services that make using the cloud, well, cloud like. OpenStack
> needs to
> decide that either it is just compute, or if it wants the eco-system. [5]_
> Designate is far from alone in this.



Graham, thank you for the heartfelt post. I may not agree with all your
points, but I know you're coming from the right place and truly want to
see Designate (and OpenStack in general) succeed.

Your point about smaller projects finding it more difficult to "consume"
help from cross-project teams is an interesting one. When the big tent
was being discussed, I remember the TC specifically discussing a change
for cross-project team focus: moving from a "we do this work for you"
role to a "we help you do this work for yourself" role. You're correct
that the increase in OpenStack projects meant that the cross-project
teams simply would not be able to continue to be a service to other
teams. This was definitely predicted during the big tent discussions.

If I had one piece of advice to give Designate, it would be to
prioritize getting documentation (both installation as well as dev-ref
and operational docs) in good shape. I know writing docs sucks, but docs
are a springboard for users and contributors alike and can have a
multiplying effect that's difficult to overstate. Getting those install
and developer docs started would enable the cross-project docs team to
guide Designate contributors in enhancing and cleaning up the docs and
putting some polish on 'em. Your idea above that maybe some users
already wrote some docs is a good one. Maybe reach out personally to
those telcos and see if they can dig something up that can be the basis
for upstream docs.

Best,
-jay





Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Hayes, Graham
On 10/02/17 19:11, Joshua Harlow wrote:
> Fox, Kevin M wrote:
>> I'd say kube-dns and designate are very different technologies.
>>
>> kube-dns is a service discovery mechanism for kubernetes intended to provide 
>> internal k8s resolution. The fact that it uses dns to implement service discovery 
>> is kind of an implementation detail, not its primary purpose. There's no 
>> need for private dns management, scaling past the size of the k8s cluster 
>> itself, etc. A much easier problem to solve at the moment.
>>
>> Designate really is a multitenant dns as a service implementation. While it 
>> can be used for service discovery, that's not its primary purpose.
>>
>> I see no reason they couldn't share some common pieces, but care should be 
>> given not to just say, lets throw out one for the other, as they really are 
>> different animals.
>>
> 
> Arg, the idea wasn't meant to be that (abandon one for the other), but 
> just to investigate the larger world and maybe we have to adapt our 
> model of `multitenant dns as a service implementation` to be slightly 
> different; so what..., if it means we get to keep contributors and grow 
> a larger community (and partner with others and learn new things and 
> adopt new strategies/designs and push the limits of tech and ...) by 
> doing so then that's IMHO good.
> 

Sure - we are always open to changing our outlook.

There are however huge differences in the problem set between running
authoritative DNS, and service discovery DNS.

In service discovery, you want instant and consistent updates of
records, and it is a single-user environment - only one user will ever
query those DNS servers. As a result, you are not as resource
constrained when writing the DNS server, and can use slower data
storage systems (like etcd).

Authoritative DNS is accessed by multiple users, and resources per
request really do matter (this is part of the reason we do not have
a user facing DNS server as part of Designate).

The vast majority (all?) of the new DNS projects (especially in the
CNCF) are focused on Service Discovery. It is usually assumed the IaaS
underneath (AWS, Azure etc) have an auth DNS service available to use
(much like VMs are).

Because of this, I do not see a huge amount we can leverage from others,
or a huge amount we can offer others.

>> Thanks,
>> Kevin
>> 
>> From: Jay Pipes [jaypi...@gmail.com]
>> Sent: Friday, February 10, 2017 9:50 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [designate] Status of the project
>>
>> On 02/10/2017 12:21 PM, Joshua Harlow wrote:
>>> Hayes, Graham wrote:
 The HTML version of this is here:
 http://graham.hayes.ie/posts/openstack-designate-where-we-are/

 I have been asked a few times recently "What is the state of the
 Designate
 project?", "How is Designate getting on?", and by people who know what is
 happening "What are you going to do about Designate?".

 Needless to say, all of this is depressing to me, and the people that
 I have
 worked with for the last number of years to make Designate a truly
 useful,
 feature rich project.

 *TL;DR;* for this - Designate is not in a sustainable place.

 To start out - Designate has always been a small project. DNS does not
 have
 massive *cool* appeal - its not shiny, pretty, or something you see on
 the
 front page of HackerNews (unless it breaks - then oh boy do people
 become DNS
 experts).

>>> Thanks for posting this, I know it was not easy to write...
>>>
>>> Knowing where this is at and the issues. It makes me wonder if it is
>>> worthwhile to start thinking about how we can start to look at 'outside
>>> the openstack' projects for DNS. I believe there are a few that are
>>> similar enough to designate (though I don't know them well enough), for
>>> example things like SkyDNS (or others; I believe there are a few).
>>>
>>> Perhaps we need to start thinking outside the openstack 'box' in regards
>>> to NIH syndrome and accept the fact that we as a community may not be
>>> able to recreate the world successfully in all cases (the same could be
>>> said about things like k8s and others).
>>>
>>> If we got out of the mindset of openstack as a thing must have tightly
>>> integrated components (over all else) and started thinking about how we
>>> can be much more loosely coupled (and even say integrating non-python,
>>> non-openstack projects) would that be beneficial (I think it would)?
>>
>> This is already basically what Designate *is today*.
>>
>> http://docs.openstack.org/developer/designate/support-matrix.html
>>
>> Just because something is written in Golang and uses etcd for storage
>> doesn't make it "better" or not NIH.
>>
>> For the record, the equivalent to Designate in k8s land is Kube2Sky, the
>> real difference being that Designate has a whole lot more options when
>> it comes to the DNS drivers and Designate integrates with OpenStack
>> services like Keystone.

Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Hayes, Graham
On 10/02/17 18:10, Joshua Harlow wrote:
> Jay Pipes wrote:
>> On 02/10/2017 12:21 PM, Joshua Harlow wrote:
>>> Hayes, Graham wrote:
 The HTML version of this is here:
 http://graham.hayes.ie/posts/openstack-designate-where-we-are/

 I have been asked a few times recently "What is the state of the
 Designate
 project?", "How is Designate getting on?", and by people who know
 what is
 happening "What are you going to do about Designate?".

 Needless to say, all of this is depressing to me, and the people that
 I have
 worked with for the last number of years to make Designate a truly
 useful,
 feature rich project.

 *TL;DR;* for this - Designate is not in a sustainable place.

 To start out - Designate has always been a small project. DNS does not
 have
 massive *cool* appeal - its not shiny, pretty, or something you see on
 the
 front page of HackerNews (unless it breaks - then oh boy do people
 become DNS
 experts).

>>>
>>> Thanks for posting this, I know it was not easy to write...
>>>
>>> Knowing where this is at and the issues. It makes me wonder if it is
>>> worthwhile to start thinking about how we can start to look at 'outside
>>> the openstack' projects for DNS. I believe there is a few that are
>>> similar enough to designate (though I don't know well enough) for
>>> example things like SkyDNS (or others which I believe there are a few).
>>>
>>> Perhaps we need to start thinking outside the openstack 'box' in regards
>>> to NIH syndrome and accept the fact that we as a community may not be
>>> able to recreate the world successfully in all cases (the same could be
>>> said about things like k8s and others).
>>>
>>> If we got out of the mindset of openstack as a thing must have tightly
>>> integrated components (over all else) and started thinking about how we
>>> can be much more loosely coupled (and even say integrating non-python,
>>> non-openstack projects) would that be beneficial (I think it would)?
>>
>> This is already basically what Designate *is today*.
>>
>> http://docs.openstack.org/developer/designate/support-matrix.html
>>
>> Just because something is written in Golang and uses etcd for storage
>> doesn't make it "better" or not NIH.
> 
> Agreed, do those other projects (written in golang, or etcd or other) 
> have communities that are growing; can we ensure better success (and 
> health of our own community) by partnering with them? That was the main 
> point (I don't really care what language they are written in or what 
> storage backend they use).
> 
>>
>> For the record, the equivalent to Designate in k8s land is Kube2Sky, the
>> real difference being that Designate has a whole lot more options when
>> it comes to the DNS drivers and Designate integrates with OpenStack
>> services like Keystone.
>>
> 
> That's cool, thanks; TIL.
> 
>> Also, there's more to cloud DNS services than service discovery, which
>> is what SkyDNS was written for.
> 
> Sure, it was just an example.
> 
> The point was along the lines of if a project in our community is 
> struggling and there is a similar project outside of openstack (that is 
> trying to do similar things) is not struggling; perhaps it's better to 
> partner with that other project and enhance that other project (and then 
> recommend said project as the next-generation of ${whatever_project} was 
> struggling here).

As I said in my reply - there *is* no other project.

> Said evaluation is something that we would likely have to do over time 
> as well (because as from this example, designate was a larger group once, 
> it is now smaller).
> 
>>
>> best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 




Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Joshua Harlow

Fox, Kevin M wrote:

I'd say kube-dns and designate are very different technologies.

kube-dns is a service discovery mechanism for Kubernetes intended to provide 
internal k8s name resolution. The fact that it uses DNS to implement service 
discovery is kind of an implementation detail, not its primary purpose. There's 
no need for private DNS management, scaling past the size of the k8s cluster 
itself, etc. It is a much easier problem to solve at the moment.

Designate really is a multitenant DNS-as-a-service implementation. While it can 
be used for service discovery, that is not its primary purpose.

I see no reason they couldn't share some common pieces, but care should be 
given not to just say, let's throw out one for the other, as they really are 
different animals.



Arg, the idea wasn't meant to be that (abandoning one for the other), but 
just to investigate the larger world. Maybe we have to adapt our 
model of `multitenant dns as a service implementation` to be slightly 
different; so what? If it means we get to keep contributors and grow 
a larger community (and partner with others and learn new things and 
adopt new strategies/designs and push the limits of tech and ...) by 
doing so, then that's IMHO good.



Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, February 10, 2017 9:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [designate] Status of the project

On 02/10/2017 12:21 PM, Joshua Harlow wrote:

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the
Designate
project?", "How is Designate getting on?", and by people who know what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me, and the people that
I have
worked with for the last number of years to make Designate a truly
useful,
feature rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not
have
massive *cool* appeal - its not shiny, pretty, or something you see on
the
front page of HackerNews (unless it breaks - then oh boy do people
become DNS
experts).


Thanks for posting this, I know it was not easy to write...

Knowing where this is at and the issues. It makes me wonder if it is
worthwhile to start thinking about how we can start to look at 'outside
the openstack' projects for DNS. I believe there is a few that are
similar enough to designate (though I don't know well enough) for
example things like SkyDNS (or others which I believe there are a few).

Perhaps we need to start thinking outside the openstack 'box' in regards
to NIH syndrome and accept the fact that we as a community may not be
able to recreate the world successfully in all cases (the same could be
said about things like k8s and others).

If we got out of the mindset of openstack as a thing must have tightly
integrated components (over all else) and started thinking about how we
can be much more loosely coupled (and even say integrating non-python,
non-openstack projects) would that be beneficial (I think it would)?


This is already basically what Designate *is today*.

http://docs.openstack.org/developer/designate/support-matrix.html

Just because something is written in Golang and uses etcd for storage
doesn't make it "better" or not NIH.

For the record, the equivalent to Designate in k8s land is Kube2Sky, the
real difference being that Designate has a whole lot more options when
it comes to the DNS drivers and Designate integrates with OpenStack
services like Keystone.

Also, there's more to cloud DNS services than service discovery, which
is what SkyDNS was written for.

best,
-jay






Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Ihar Hrachyshka
Oh nice, I haven't seen that. It does give (virtualized) CPU model
types. I don't see a clear correlation between models and
failures/test times though. We of course miss some more details, like
flags being emulated, but I doubt it will give us a clue.

It would be interesting to know the overcommit/system load for each
hypervisor affected. But I assume we don't have access to that info,
right?

Ihar

On Fri, Feb 10, 2017 at 8:39 AM, Clark Boylan  wrote:
> On Fri, Feb 10, 2017, at 08:21 AM, Morales, Victor wrote:
>>
>> On 2/9/17, 10:59 PM, "Ihar Hrachyshka"  wrote:
>>
>> >Hi all,
>> >
>> >I noticed lately a number of job failures in neutron gate that all
>> >result in job timeouts. I describe
>> >gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
>> >timeouts happening in other jobs too.
>> >
>> >The failure mode is all operations, ./stack.sh and each tempest test
>> >take significantly more time (like 50% to 150% more, which results in
>> >job timeout triggered). An example of what I mean can be found in [1].
>> >
>> >A good run usually takes ~20 minutes to stack up devstack; then ~40
>> >minutes to pass full suite; a bad run usually takes ~30 minutes for
>> >./stack.sh; and then 1:20h+ until it is killed due to timeout.
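(As a quick sanity check on the "50% to 150% more" figure above, the rough timings quoted in this paragraph can be turned into percentages; the minute values below are the approximate numbers from the report, not measured data.)

```python
def pct_increase(good_minutes: float, bad_minutes: float) -> float:
    """Percent increase of a bad-run duration over a good-run duration."""
    return (bad_minutes - good_minutes) / good_minutes * 100.0

# Approximate figures from the report: ./stack.sh goes 20 -> 30 min,
# the tempest suite goes 40 -> 80+ min (the job is killed at the timeout).
print(pct_increase(20, 30))  # 50.0  -- low end of the quoted slowdown
print(pct_increase(40, 80))  # 100.0 -- tempest phase, before the kill
```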
>> >
>> >It affects different clouds (we see rax, internap, infracloud-vanilla,
>> >ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
>> >pypi or apt mirrors because then we would see slowdown in ./stack.sh
>> >phase only.
>> >
>> >We can't be sure that CPUs are the same, and devstack does not seem to
>> >dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure
>>
>> I don’t think that logging this information could be useful mainly
>> because this depends on enabling *host-passthrough*[3] in nova-compute
>> configuration of Public cloud providers
>
> While this is true we do log it anyways (was useful for sorting out live
> migration cpu flag inconsistencies). For example:
> http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/devstack-gate-setup-host.txt.gz
> and grep for 'cpu'.
>
> Note that we used to grab proper /proc/cpuinfo contents but now its just
> whatever ansible is reporting back in its fact list there.
>
> Clark
>



Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Fox, Kevin M
I'd say kube-dns and designate are very different technologies.

kube-dns is a service discovery mechanism for Kubernetes intended to provide 
internal k8s name resolution. The fact that it uses DNS to implement service 
discovery is kind of an implementation detail, not its primary purpose. There's 
no need for private DNS management, scaling past the size of the k8s cluster 
itself, etc. It is a much easier problem to solve at the moment.

Designate really is a multitenant DNS-as-a-service implementation. While it can 
be used for service discovery, that is not its primary purpose.

I see no reason they couldn't share some common pieces, but care should be 
given not to just say, let's throw out one for the other, as they really are 
different animals.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, February 10, 2017 9:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [designate] Status of the project

On 02/10/2017 12:21 PM, Joshua Harlow wrote:
> Hayes, Graham wrote:
>> The HTML version of this is here:
>> http://graham.hayes.ie/posts/openstack-designate-where-we-are/
>>
>> I have been asked a few times recently "What is the state of the
>> Designate
>> project?", "How is Designate getting on?", and by people who know what is
>> happening "What are you going to do about Designate?".
>>
>> Needless to say, all of this is depressing to me, and the people that
>> I have
>> worked with for the last number of years to make Designate a truly
>> useful,
>> feature rich project.
>>
>> *TL;DR;* for this - Designate is not in a sustainable place.
>>
>> To start out - Designate has always been a small project. DNS does not
>> have
>> massive *cool* appeal - its not shiny, pretty, or something you see on
>> the
>> front page of HackerNews (unless it breaks - then oh boy do people
>> become DNS
>> experts).
>>
>
> Thanks for posting this, I know it was not easy to write...
>
> Knowing where this is at and the issues. It makes me wonder if it is
> worthwhile to start thinking about how we can start to look at 'outside
> the openstack' projects for DNS. I believe there is a few that are
> similar enough to designate (though I don't know well enough) for
> example things like SkyDNS (or others which I believe there are a few).
>
> Perhaps we need to start thinking outside the openstack 'box' in regards
> to NIH syndrome and accept the fact that we as a community may not be
> able to recreate the world successfully in all cases (the same could be
> said about things like k8s and others).
>
> If we got out of the mindset of openstack as a thing must have tightly
> integrated components (over all else) and started thinking about how we
> can be much more loosely coupled (and even say integrating non-python,
> non-openstack projects) would that be beneficial (I think it would)?

This is already basically what Designate *is today*.

http://docs.openstack.org/developer/designate/support-matrix.html

Just because something is written in Golang and uses etcd for storage
doesn't make it "better" or not NIH.

For the record, the equivalent to Designate in k8s land is Kube2Sky, the
real difference being that Designate has a whole lot more options when
it comes to the DNS drivers and Designate integrates with OpenStack
services like Keystone.

Also, there's more to cloud DNS services than service discovery, which
is what SkyDNS was written for.

best,
-jay




Re: [openstack-dev] [nova] Next minimum libvirt version

2017-02-10 Thread Matt Riedemann

On 2/10/2017 11:18 AM, Thomas Bechtold wrote:


For SUSE the wiki is updated and 1.2.9 should be fine.


Cheers,

Tom



Thanks Tom.

Would 1.3.1 as the next minimum in Queens be acceptable for SUSE?

--

Thanks,

Matt Riedemann



Re: [openstack-dev] [All] IRC Mishaps

2017-02-10 Thread Jonathan Proulx

Well, the worst thing I've done is type and send my password... that was
on an internal work channel, not an OpenStack one, but I think that
only made it more embarrassing!

-Jon

On Wed, Feb 08, 2017 at 08:36:16PM +, Kendall Nelson wrote:
:Hello All!
:
:So I am sure we've all seen it: people writing terminal commands into our
:project channels, misusing '/' commands, etc. But have any of you actually
:done it?
:
:If any of you cores, ptls or other upstanding members of our wonderful
:community have had one of these embarrassing experiences please reply! I am
:writing an article for the SuperUser trying to make us all seem a little
:more human to people new to the community and new to using IRC. It can be
:scary asking questions to such a large group of smart people and its even
:more off putting when we make mistakes in front of them.
:
:So please share your stories!
:
:-Kendall Nelson (diablo_rojo)



-- 



[openstack-dev] [kolla] Support for non-x86_64 architectures

2017-02-10 Thread Marcin Juszkiewicz
Hello

At Linaro I work on running OpenStack on AArch64 (arm64, 64-bit arm,
ARMv8a) architecture. We built Cinder, Glance, Heat, Horizon, Keystone,
Neutron and Nova for our use and deployed it several times.

But for next release we decided to move to use containers for delivering
components. This got me working on Kolla to get it working on our machines.

The problem is that Kolla targets only the x86-64 architecture. I was not
surprised when I saw that and do not blame anyone for it. That's quite
common behaviour nowadays, when there is no Alpha nor Itanium on the market.

So I dug a bit and found a patch [1] which added ppc64le architecture
support. I fetched it, reviewed it, and decided that it could be used as a
base for my work.

1. https://review.openstack.org/#/c/423239/6

I cut all the stuff about repositories and other ppc64le/Ubuntu-specific
issues, then edited it to take care of aarch64 as well. Then I posted
it to Gerrit for review [2].

2. https://review.openstack.org/#/c/430940

Jenkins looks happy about it; I got some comments from a few developers
(both in review and on IRC) and handled them appropriately.

I tested the patch with "aarch64/ubuntu" and "aarch64/debian" images used as
a base. My targets are CentOS (waiting for an official image) and Debian.

Current state:

19:18 hrw@pinkiepie-centos:kolla$ docker images|grep kolla/ubuntu|wc -l
29
19:18 hrw@pinkiepie-centos:kolla$ docker images|grep kolla/debian|wc -l
124
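The tallies above just filter the `docker images` listing by repository prefix; the same count can be sketched in a few lines of Python, run here against canned sample output rather than a live Docker daemon (the image names below are illustrative, not real build results):

```python
def count_images(listing: str, prefix: str) -> int:
    """Count lines of `docker images` output whose REPOSITORY column
    starts with the given prefix (e.g. 'kolla/ubuntu')."""
    return sum(
        1
        for line in listing.strip().splitlines()
        if line.split() and line.split()[0].startswith(prefix)
    )

# Canned sample of `docker images` output: REPOSITORY TAG IMAGE-ID ...
sample = """\
kolla/ubuntu-source-base      4.0.0  aaa111  2 days ago  500MB
kolla/ubuntu-source-nova-api  4.0.0  bbb222  2 days ago  900MB
kolla/debian-source-base     4.0.0  ccc333  2 days ago  480MB
"""

print(count_images(sample, "kolla/ubuntu"))  # 2
print(count_images(sample, "kolla/debian"))  # 1
```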

During the weekend I will run more builds to check all possible images.

If someone has some spare time, I would love to see my patch
reviewed. There is one change affecting x86-64: Debian/Ubuntu
repositories are split into base + architecture-specific ones to allow
for per-architecture repository configuration.



Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Joshua Harlow

Jay Pipes wrote:

On 02/10/2017 12:21 PM, Joshua Harlow wrote:

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the
Designate
project?", "How is Designate getting on?", and by people who know
what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me, and the people that
I have
worked with for the last number of years to make Designate a truly
useful,
feature rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not
have
massive *cool* appeal - its not shiny, pretty, or something you see on
the
front page of HackerNews (unless it breaks - then oh boy do people
become DNS
experts).



Thanks for posting this, I know it was not easy to write...

Knowing where this is at and the issues. It makes me wonder if it is
worthwhile to start thinking about how we can start to look at 'outside
the openstack' projects for DNS. I believe there is a few that are
similar enough to designate (though I don't know well enough) for
example things like SkyDNS (or others which I believe there are a few).

Perhaps we need to start thinking outside the openstack 'box' in regards
to NIH syndrome and accept the fact that we as a community may not be
able to recreate the world successfully in all cases (the same could be
said about things like k8s and others).

If we got out of the mindset of openstack as a thing must have tightly
integrated components (over all else) and started thinking about how we
can be much more loosely coupled (and even say integrating non-python,
non-openstack projects) would that be beneficial (I think it would)?


This is already basically what Designate *is today*.

http://docs.openstack.org/developer/designate/support-matrix.html

Just because something is written in Golang and uses etcd for storage
doesn't make it "better" or not NIH.


Agreed. Do those other projects (written in Golang, using etcd, or otherwise) 
have communities that are growing? Can we ensure better success (and 
health of our own community) by partnering with them? That was the main 
point (I don't really care what language they are written in or what 
storage backend they use).




For the record, the equivalent to Designate in k8s land is Kube2Sky, the
real difference being that Designate has a whole lot more options when
it comes to the DNS drivers and Designate integrates with OpenStack
services like Keystone.



That's cool, thanks; TIL.


Also, there's more to cloud DNS services than service discovery, which
is what SkyDNS was written for.


Sure, it was just an example.

The point was along these lines: if a project in our community is 
struggling and a similar project outside of openstack (that is 
trying to do similar things) is not struggling, perhaps it's better to 
partner with that other project and enhance it (and then 
recommend said project as the next generation of the ${whatever_project} 
that was struggling here).


Said evaluation is something that we would likely have to do over time 
as well (because, as this example shows, designate was a larger group once; 
it is now smaller).




best,
-jay





Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Hayes, Graham
On 10/02/17 16:39, Alexandra Settle wrote:
> Sorry, I’m top posting to this reply because Outlook is a terrible inline 
> poster.
> 
> Hey Designaters! 
> 
> Have you tried pinging the docs team? (ie: me – Hello! I’m the docs PTL)

Hi - We have in the past, and did not get very far. I know things are
now changing, so expect me to ping you over the next few weeks :)

> Over the last few cycles our team has been able to step in and help with 
> formatting, writing, and organizing documentation. Helping small projects 
> (like OpenStack-Ansible) to fix several of the issues you noted below 
> (install, operations, dev docs). During the Newton cycle I was able to help 
> the OSA team organize their dev docs: 
> http://docs.openstack.org/developer/openstack-ansible/developer-docs/index.html,
>  and as a team we created the Deploy Guide 
> http://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/. And 
> for Ocata, we have been working heavily on their operations content: 
> http://docs.openstack.org/developer/openstack-ansible/draft-operations-guide/index.html
>  
> 
> Technical writers do not often have the expertise to provide SME knowledge 
> (that’s your job), but we are experts when it concerns ensuring documentation 
> is in the right place to help new users. And we are the guardians of the 
> Operations Guide.
> 
> I know your conversation is much larger than just fixing documentation, but 
> if we can help, please shout out :) 

Thanks for reaching out, I will definitely shout if I need it.

- Graham

> On 2/10/17, 3:53 PM, "Hayes, Graham"  wrote:
> 
> On 10/02/17 02:40, Jay Pipes wrote:
> > On 02/09/2017 02:19 PM, Hayes, Graham wrote:
> > 
> > 
> >> Where too now then?
> >> ===
> >>
> >> Well, this is where I call out to people who actually use the project -
> >> don't
> >> jump ship and use something else because of the picture I have painted.
> >> We are
>> a dedicated team, who cares about the project. We just need some help.
> >>
> >> I know there are large telcos who use Designate. I am sure there is 
> tooling,
> >> or docs build up in these companies that could be very useful to the
> >> project.
> >>
> >> Nearly every commercial OpenStack distro has Designate. Some have had 
> it
> >> since
> >> the beginning. Again, developers, docs, tooling, testers, anything and
> >> everything is welcome. We don't need a massive amount of resources - we
> >> are a
> >> small ish, stable, project.
> >>
> >> We need developers with upstream time allocated, and the budget to go 
> to
> >> events
> >> like the PTG - for cross project work, and internal designate road map,
> >> these
> >> events form the core of how we work.
> >>
> >> We also need help from cross project teams - the work done by them is
> >> brilliant
> >> but it can be hard for smaller projects to consume. We have had a lot 
> of
> >> progress since the `Leveller Playing Field`_ debate, but a lot of work 
> is
> >> still optimised for the larger teams who get direct support, or well
> >> resourced
> >> teams who can dedicate people to the implementation of plugins / code.
> >>
> >> As someone I was talking to recently said - AWS is not winning public 
> cloud
> >> because of commodity compute (that does help - a lot), but because of 
> the
> >> added services that make using the cloud, well, cloud like. OpenStack
> >> needs to
> >> decide that either it is just compute, or if it wants the eco-system. 
> [5]_
> >> Designate is far from alone in this.
> > 
> > 
> > 
> > Graham, thank you for the heartfelt post. I may not agree with all your 
> > points, but I know you're coming from the right place and truly want to 
> > see Designate (and OpenStack in general) succeed.
> 
> Thanks for reading - it ended up longer than expected.
> 
> > Your point about smaller projects finding it more difficult to 
> "consume" 
> > help from cross-project teams is an interesting one. When the big tent 
> > was being discussed, I remember the TC specifically discussing a change 
> > for cross-project team focus: moving from a "we do this work for you" 
> > role to a "we help you do this work for yourself" role. You're correct 
> > that the increase in OpenStack projects meant that the cross-project 
> > teams simply would not be able to continue to be a service to other 
> > teams. This was definitely predicted during the big tent discussions.
> 
> I remember the same things being discussed. However, that is not what
> happened, at least not immediately, and it can be very hard to
> motivate yourself to work on things when every time you ask for help
> you get nothing, other than a link to the docs page you have read
> a hundred times.
> 
> > If I had one piece of adv

Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Jay Pipes

On 02/10/2017 12:21 PM, Joshua Harlow wrote:

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the
Designate
project?", "How is Designate getting on?", and by people who know what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me, and the people that
I have
worked with for the last number of years to make Designate a truly
useful,
feature rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not
have
massive *cool* appeal - its not shiny, pretty, or something you see on
the
front page of HackerNews (unless it breaks - then oh boy do people
become DNS
experts).



Thanks for posting this, I know it was not easy to write...

Knowing where this is at and the issues. It makes me wonder if it is
worthwhile to start thinking about how we can start to look at 'outside
the openstack' projects for DNS. I believe there is a few that are
similar enough to designate (though I don't know well enough) for
example things like SkyDNS (or others which I believe there are a few).

Perhaps we need to start thinking outside the openstack 'box' in regards
to NIH syndrome and accept the fact that we as a community may not be
able to recreate the world successfully in all cases (the same could be
said about things like k8s and others).

If we got out of the mindset of openstack as a thing must have tightly
integrated components (over all else) and started thinking about how we
can be much more loosely coupled (and even say integrating non-python,
non-openstack projects) would that be beneficial (I think it would)?


This is already basically what Designate *is today*.

http://docs.openstack.org/developer/designate/support-matrix.html

Just because something is written in Golang and uses etcd for storage 
doesn't make it "better" or not NIH.


For the record, the equivalent to Designate in k8s land is Kube2Sky, the 
real difference being that Designate has a whole lot more options when 
it comes to the DNS drivers and Designate integrates with OpenStack 
services like Keystone.


Also, there's more to cloud DNS services than service discovery, which 
is what SkyDNS was written for.


best,
-jay



Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Mike Spreitzer
Joshua Harlow  wrote on 02/10/2017 12:21:08 PM:

> Knowing where this is at, and the issues, makes me wonder if it is 
> worthwhile to start thinking about how we can look at 'outside 
> the openstack' projects for DNS. I believe there are a few that are 
> similar enough to Designate (though I don't know them well enough) - for 
> example things like SkyDNS (and I believe there are a few others).
> 
> Perhaps we need to start thinking outside the OpenStack 'box' with regard 
> to NIH syndrome and accept that we as a community may not be 
> able to recreate the world successfully in all cases (the same could be 
> said about things like k8s and others).
> 
> If we got out of the mindset that OpenStack must have tightly 
> integrated components (over all else) and started thinking about how we 
> can be much more loosely coupled (even, say, integrating non-Python, 
> non-OpenStack projects), would that be beneficial? (I think it would.)

I think you might be on to something.  The Kubernetes community seems to 
be thinking about an external DNS service too.  I see 
https://github.com/kubernetes-incubator/external-dns was just created, but 
do not know anything more about it.

Regards,
Mike





Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Hayes, Graham
On 10/02/17 17:24, Joshua Harlow wrote:
> Hayes, Graham wrote:
>> The HTML version of this is here:
>> http://graham.hayes.ie/posts/openstack-designate-where-we-are/
>>
>> I have been asked a few times recently "What is the state of the Designate
>> project?", "How is Designate getting on?", and by people who know what is
>> happening "What are you going to do about Designate?".
>>
>> Needless to say, all of this is depressing to me and to the people I have
>> worked with for the last number of years to make Designate a truly useful,
>> feature-rich project.
>>
>> *TL;DR;* for this - Designate is not in a sustainable place.
>>
>> To start out - Designate has always been a small project. DNS does not have
>> massive *cool* appeal - it's not shiny, pretty, or something you see on the
>> front page of HackerNews (unless it breaks - then oh boy do people become
>> DNS experts).
>>
> 
> Thanks for posting this, I know it was not easy to write...
> 
> Knowing where this is at, and the issues, makes me wonder if it is 
> worthwhile to start thinking about how we can look at 'outside 
> the openstack' projects for DNS. I believe there are a few that are 
> similar enough to Designate (though I don't know them well enough) - for 
> example things like SkyDNS (and I believe there are a few others).

SkyDNS is a mechanism for service discovery, not a DNS API. In reality
there are very few (if any - I am struggling to think of even
one) multi-tenant DNS management APIs.

The use of DNS in clouds is about more than just having a CLI to call to
update DNS entries. Integrations between the cloud components are what
make it useful - Heat / CloudFormation / Terraform resources that can
read info from network ports, floating IPs, load balancers, etc. are
where the value comes in.

Combined with the integration in neutron that we (finally) merged
recently, I think we have a compelling case to keep Designate.

Ask most AWS users how much they use route53 - this is what we should
be aiming for with Designate.

> Perhaps we need to start thinking outside the OpenStack 'box' with regard 
> to NIH syndrome and accept that we as a community may not be 
> able to recreate the world successfully in all cases (the same could be 
> said about things like k8s and others).

Sure - this comes back to the "base set" of services that the Arch WG
was looking at. However, having coupled services can be useful, and
having a guaranteed API across clouds (one of our goals AFAIK) for
basic services (which I would count DNS as) is a big deal.

> If we got out of the mindset that OpenStack must have tightly 
> integrated components (over all else) and started thinking about how we 
> can be much more loosely coupled (even, say, integrating non-Python, 
> non-OpenStack projects), would that be beneficial? (I think it would.)
> 
> -Josh
> 
> 
> 




Re: [openstack-dev] [Openstack-operators] [nova] Next minimum libvirt version

2017-02-10 Thread Daniel P. Berrange
On Thu, Feb 09, 2017 at 05:29:22PM -0600, Matt Riedemann wrote:
> Since danpb hasn't been around I've sort of forgotten about this, but we
> should talk about bumping the minimum required libvirt version in nova.
> 
> Currently it's 1.2.1 and the next was set to 1.2.9.
> 
> On master we're gating on Ubuntu 16.04 which has libvirt 1.3.1 (14.04 had
> 1.2.2).
> 
> If we move to require 1.2.9 that effectively kills 14.04 support for
> devstack + libvirt on master, which is probably OK.
> 
> There is also the distro support wiki [1] which hasn't been updated in
> a while.
> 
> I'm wondering if 1.2.9 is a safe move for the next required minimum version
> and if so, does anyone have ideas on the next required version after that?

I think libvirt 1.2.9 is absolutely fine as a next version. It is still
ancient history comparatively speaking.

The more difficult question is what happens after that. To go further than
that effectively requires dropping Debian as a supportable platform since,
AFAIK, they never rebase libvirt and the next Debian major release is still
unannounced. So the question is whether "stock" Debian is something the
project cares about targeting, or whether the answer is that Debian users
are required to pull in newer libvirt from elsewhere.

Also, it is just as important to consider minimum QEMU versions at the
same time, though it could just be set to the lowest common denominator
across distros that remain, after choosing the libvirt version.
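As a rough illustration of the kind of version floor being discussed, here is a
hedged sketch of a minimum-version gate. This is not nova's actual code (nova's
real check lives in its libvirt driver and uses its own helpers); the function
names and the exception messages below are illustrative assumptions.

```python
# Sketch: enforce minimum libvirt/QEMU versions by comparing dotted
# version strings as integer tuples, which compare element-wise.
MIN_LIBVIRT_VERSION = (1, 2, 9)
MIN_QEMU_VERSION = (2, 1, 0)


def version_tuple(version: str) -> tuple:
    """Turn a string like '1.2.9' into the tuple (1, 2, 9)."""
    return tuple(int(part) for part in version.split("."))


def check_minimums(libvirt_version: str, qemu_version: str) -> None:
    """Raise if either hypervisor component is below the required floor."""
    if version_tuple(libvirt_version) < MIN_LIBVIRT_VERSION:
        raise RuntimeError(
            f"libvirt {libvirt_version} is older than the required "
            f"{'.'.join(map(str, MIN_LIBVIRT_VERSION))}")
    if version_tuple(qemu_version) < MIN_QEMU_VERSION:
        raise RuntimeError(
            f"QEMU {qemu_version} is older than the required "
            f"{'.'.join(map(str, MIN_QEMU_VERSION))}")


# The Xenial-era combination discussed in this thread passes the gate,
# while Trusty's libvirt 1.2.2 would not.
check_minimums("1.3.1", "2.8.0")
```

With a floor of (1, 2, 9), a platform shipping libvirt 1.2.2 (Trusty) fails the
check while 1.3.1 (Xenial) passes, which is exactly the trade-off being weighed
above.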

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|



[openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-10 Thread Brian Rosmaita
I want to give all interested parties a heads up that I have scheduled a
session in the Macon room from 9:30-10:30 a.m. on Thursday morning
(February 23).

Here's what we need to discuss.  This is from my perspective as Glance
PTL, so it's going to be Glance-centric.  This is a quick narrative
description; please go to the session etherpad [0] to turn this into a
specific set of discussion items.

Glance is the OpenStack image cataloging and delivery service.  A few
cycles ago (Juno?), someone noticed that maybe Glance could be
generalized so that instead of storing image metadata and image data,
Glance could store arbitrary digital "stuff" along with metadata
describing the "stuff".  Some people (like me) thought that this was an
obvious direction for Glance to take, but others (maybe wiser, cooler
heads) thought that Glance needed to focus on image cataloging and
delivery and make sure it did a good job at that.  Anyway, the Glance
mission statement was changed to include artifacts, but the Glance
community never embraced them 100%, and in Newton, Glare split off as
its own project (which made sense to me: there was too much lack of clarity in
Glance about how Glare fit in, we were holding back development, and
besides we needed to focus on images), and the Glance mission statement
was re-amended specifically to exclude artifacts and focus on images and
metadata definitions.

OK, so the current situation is:
- Glance "does" image cataloging and delivery and metadefs, and that's
all it does.
- Glare is an artifacts service (cataloging and delivery) that can also
handle images.

You can see that there's quite a bit of overlap.  I gave you the history
earlier because we did try to work as a single project, but it did not
work out.

So, now we are in 2017.  The OpenStack development situation has been
fragile since the second half of 2016, with several big OpenStack
sponsors pulling way back on the amount of development resources being
contributed to the community.  This has left Glare in the position where
it cannot qualify as a Big Tent project, even though there is interest
in artifacts.

Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
of the Glance project again.  I will be completely honest, I am inclined
to say "no".  I have enough problems just getting Glance stuff done (for
example, image import missed Ocata).  But in addition to doing what's
right for Glance, I want to do what's right for OpenStack.  And I look
at the overlap and think ...

Well, what I think is that I don't want to go through the Juno-Newton
cycles of argument again.  And we have to do what is right for our users.

The point of this session is to discuss:
- What does the Glance community see as the future of Glance?
- What does the wider OpenStack community (TC) see as the future of Glance?
- Maybe, more importantly, what does the wider community see as the
obligations of Glance?
- Does Glare fit into this vision?
- What kind of community support is there for Glare?

My reading of Glance history is that while some people were on board
with artifacts as the future of Glance, there was not a sufficient
critical mass of the Glance community that endorsed this direction and
that's why things unravelled in Newton.  I don't want to see that happen
again.  Further, I don't think the Glance community got the word out to
the broader OpenStack community about the artifacts project, and we got
a lot of pushback of the "WTF? Glance needs to do images" variety.
And probably rightly so -- Glance needs to do images.  My
point is that I don't want Glance to take Glare back unless it fits in
with what the community sees as the appropriate direction for Glance.
And I certainly don't want to take it back if the entire Glance
community is not on board.

Anyway, that's what we're going to discuss.  I've booked one of the
fishbowl rooms so we can get input from people beyond just the Glance
and Glare projects.

cheers,
brian

[0] https://etherpad.openstack.org/p/pike-glance-glare-discussion




[openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-10 Thread Anna Taraday
Hello everyone!

In Juno, Neutron implemented an L3 HA feature based on Keepalived
(VRRP). During the following cycles it was improved; we performed scale testing [1]
to find weak places and tried to fix them. The only alternative to L3 HA
with VRRP is router rescheduling performed by the Neutron server, but that is
significantly slower and depends on the control plane.

What issues have we experienced with L3 HA VRRP?

   1. Bugs in Keepalived (bad versions) [2]
   2. Split brain [3]
   3. Complex structure (HA networks, HA interfaces), which actually causes
   races that we were fixing during Liberty, Mitaka and Newton.

None of this is critical, but it is a bad experience, and not everyone is
ready (or wants) to use the Keepalived approach.

I think we can make things more flexible. For example, we can allow users to
use external services like etcd instead of Keepalived to synchronize the
current HA state across agents. I've done several experiments and got
failover times comparable to L3 HA with VRRP. Tooz [4] can be used to
abstract away the concrete backend; for example, it can allow us to use
ZooKeeper, Redis and other backends to store HA state.

What do I want to propose?

I want to bring up the idea that Neutron should have some general classes for
L3 HA which would allow using not only Keepalived but also other backends
for HA state. This would at least make it easier to try other
approaches and compare them with existing ones.

Does this sound reasonable?
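To make the proposal concrete, here is a minimal sketch of what such general,
backend-agnostic classes could look like. Everything below is hypothetical -
none of it is existing Neutron code. The in-memory backend only illustrates the
lease/failover contract that an etcd- or Tooz-backed implementation would
fulfil with a distributed store.

```python
import time
from abc import ABC, abstractmethod


class HAStateBackend(ABC):
    """Hypothetical interface: a backend (Keepalived, etcd via Tooz,
    ZooKeeper, Redis, ...) decides which agent is master for a router."""

    @abstractmethod
    def try_acquire_master(self, router_id: str, agent_id: str,
                           ttl: float) -> bool:
        """Return True if agent_id is (or becomes) master for router_id."""


class InMemoryBackend(HAStateBackend):
    """Toy backend: a lease with a TTL, renewed by the current master.
    A distributed store would replace this local dict."""

    def __init__(self):
        self._leases = {}  # router_id -> (agent_id, expires_at)

    def try_acquire_master(self, router_id, agent_id, ttl):
        now = time.monotonic()
        holder = self._leases.get(router_id)
        if holder is None or holder[1] <= now or holder[0] == agent_id:
            # Lease is free, expired, or already ours: take/renew it.
            self._leases[router_id] = (agent_id, now + ttl)
            return True
        return False


if __name__ == "__main__":
    backend = InMemoryBackend()
    assert backend.try_acquire_master("router-1", "agent-a", ttl=0.05)
    # A standby agent cannot take over while the master's lease is live...
    assert not backend.try_acquire_master("router-1", "agent-b", ttl=0.05)
    time.sleep(0.06)
    # ...but fails over once the master stops renewing its lease.
    assert backend.try_acquire_master("router-1", "agent-b", ttl=0.05)
```

The point of the abstraction is exactly what the proposal describes: the agent
code talks to `HAStateBackend`, and whether mastership comes from VRRP, etcd,
ZooKeeper or Redis becomes a deployment choice.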

[1] -
http://docs.openstack.org/developer/performance-docs/test_results/neutron_features/index.html
[2] - https://bugs.launchpad.net/neutron/+bug/1497272
https://bugs.launchpad.net/neutron/+bug/1433172
[3] - https://bugs.launchpad.net/neutron/+bug/1375625
[4] - http://docs.openstack.org/developer/tooz/




-- 
Regards,
Ann Taraday


Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Joshua Harlow

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the Designate
project?", "How is Designate getting on?", and by people who know what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me and to the people I have
worked with for the last number of years to make Designate a truly useful,
feature-rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not have
massive *cool* appeal - it's not shiny, pretty, or something you see on the
front page of HackerNews (unless it breaks - then oh boy do people become
DNS experts).



Thanks for posting this, I know it was not easy to write...

Knowing where this is at, and the issues, makes me wonder if it is 
worthwhile to start thinking about how we can look at 'outside 
the openstack' projects for DNS. I believe there are a few that are 
similar enough to Designate (though I don't know them well enough) - for 
example things like SkyDNS (and I believe there are a few others).


Perhaps we need to start thinking outside the OpenStack 'box' with regard 
to NIH syndrome and accept that we as a community may not be 
able to recreate the world successfully in all cases (the same could be 
said about things like k8s and others).


If we got out of the mindset that OpenStack must have tightly 
integrated components (over all else) and started thinking about how we 
can be much more loosely coupled (even, say, integrating non-Python, 
non-OpenStack projects), would that be beneficial? (I think it would.)


-Josh





Re: [openstack-dev] [nova] Next minimum libvirt version

2017-02-10 Thread Thomas Bechtold
Hi,

On Thu, 2017-02-09 at 17:29 -0600, Matt Riedemann wrote:
> Since danpb hasn't been around I've sort of forgotten about this, but
> we 
> should talk about bumping the minimum required libvirt version in
> nova.
> 
> Currently it's 1.2.1 and the next was set to 1.2.9.
> 
> On master we're gating on Ubuntu 16.04 which has libvirt 1.3.1 (14.04
> had 1.2.2).
> 
> If we move to require 1.2.9 that effectively kills 14.04 support for 
> devstack + libvirt on master, which is probably OK.
> 
> There is also the distro support wiki [1] which hasn't been updated in
> a while.

For SUSE the wiki is updated and 1.2.9 should be fine.


Cheers,

Tom



Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leaveraging kuryr

2017-02-10 Thread Flavio Percoco

On 10/02/17 15:24 +0100, Flavio Percoco wrote:

On 09/02/17 09:57 +0100, Flavio Percoco wrote:

Greetings,

I was talking with Tony and he mentioned that he's recording a new demo for
kuryr and, well, it'd be great to also use the containerized version of TripleO
for the demo.

His plan is to have this demo out by next week, and that may be too tight for the
containerized version of TripleO (it may not be - let's try). That said, I think
it's still a good opportunity for us to sit down at the PTG and play with this a
bit further.

So, before we set a date and time for this, I wanted to extend the invite to
other folks and see if there's some interest. It'd be great to also have folks
from Kolla and openstack-helm joining.

Looking forward to hearing ideas and hacking with y'all,
Flavio


So, given the interest and my hope to bring together as many folks from other
teams as possible, what about we just schedule this for Wednesday at 09:00 am?

I'm not sure what room we can crash yet but I'll figure it out soon and let
y'all know.

Any objections/observations?
Flavio


Ok, one more heads up here. We have a room!

I've put the projects names in one of the cross-teams collaboration rooms[0].

The room name is Macon and it's set up in fishbowl style. It can fit 50, which
seemed more than enough for this session.

Looking forward to seeing y'all there,
Flavio

[0] https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Clark Boylan
On Fri, Feb 10, 2017, at 08:21 AM, Morales, Victor wrote:
> 
> On 2/9/17, 10:59 PM, "Ihar Hrachyshka"  wrote:
> 
> >Hi all,
> >
> >I noticed lately a number of job failures in neutron gate that all
> >result in job timeouts. I describe
> >gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
> >timeouts happening in other jobs too.
> >
> >The failure mode is that all operations - ./stack.sh and each tempest test -
> >take significantly more time (like 50% to 150% more), which results in
> >the job timeout being triggered. An example of what I mean can be found in [1].
> >
> >A good run usually takes ~20 minutes to stack up devstack; then ~40
> >minutes to pass full suite; a bad run usually takes ~30 minutes for
> >./stack.sh; and then 1:20h+ until it is killed due to timeout.
> >
> >It affects different clouds (we see rax, internap, infracloud-vanilla,
> >ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
> >pypi or apt mirrors because then we would see slowdown in ./stack.sh
> >phase only.
> >
> >We can't be sure that CPUs are the same, and devstack does not seem to
> >dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure
> 
> I don’t think that logging this information would be useful, mainly
> because it depends on enabling *host-passthrough* [3] in the nova-compute
> configuration of public cloud providers

While this is true, we do log it anyway (it was useful for sorting out
live-migration CPU-flag inconsistencies). For example:
http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/devstack-gate-setup-host.txt.gz
and grep for 'cpu'.

Note that we used to grab the proper /proc/cpuinfo contents, but now it's just
whatever Ansible reports back in its fact list there.
Clark



Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Alexandra Settle
Sorry, I’m top posting to this reply because Outlook is a terrible inline 
poster.

Hey Designaters! 

Have you tried pinging the docs team? (ie: me – Hello! I’m the docs PTL)

Over the last few cycles our team has been able to step in and help with
formatting, writing, and organizing documentation, helping small projects (like
OpenStack-Ansible) fix several of the issues you noted below (install,
operations, dev docs). During the Newton cycle I was able to help the OSA team
organize their dev docs:
http://docs.openstack.org/developer/openstack-ansible/developer-docs/index.html,
and as a team we created the Deploy Guide:
http://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/. And
for Ocata, we have been working heavily on their operations content:
http://docs.openstack.org/developer/openstack-ansible/draft-operations-guide/index.html
 

Technical writers do not often have the expertise to provide SME knowledge
(that’s your job), but we are experts at making sure documentation is in the
right place to help new users. And we are the guardians of the Operations
Guide.

I know your conversation is much larger than just fixing documentation, but if 
we can help, please shout out :) 

On 2/10/17, 3:53 PM, "Hayes, Graham"  wrote:

On 10/02/17 02:40, Jay Pipes wrote:
> On 02/09/2017 02:19 PM, Hayes, Graham wrote:
> 
> 
>> Where to now then?
>> ===
>>
>> Well, this is where I call out to people who actually use the project -
>> don't
>> jump ship and use something else because of the picture I have painted.
>> We are
>> a dedicated team who cares about the project. We just need some help.
>>
>> I know there are large telcos who use Designate. I am sure there is
>> tooling, or docs built up in these companies that could be very useful
>> to the project.
>>
>> Nearly every commercial OpenStack distro has Designate. Some have had it
>> since
>> the beginning. Again, developers, docs, tooling, testers, anything and
>> everything is welcome. We don't need a massive amount of resources - we
>> are a small-ish, stable project.
>>
>> We need developers with upstream time allocated, and the budget to go to
>> events
>> like the PTG - for cross project work, and internal designate road map,
>> these
>> events form the core of how we work.
>>
>> We also need help from cross project teams - the work done by them is
>> brilliant
>> but it can be hard for smaller projects to consume. We have had a lot of
>> progress since the `Leveller Playing Field`_ debate, but a lot of work is
>> still optimised for the larger teams who get direct support, or well
>> resourced
>> teams who can dedicate people to the implementation of plugins / code.
>>
>> As someone I was talking to recently said - AWS is not winning public
>> cloud because of commodity compute (that does help - a lot), but because
>> of the added services that make using the cloud, well, cloud like.
>> OpenStack needs to decide that either it is just compute, or if it wants
>> the eco-system. [5]_
>> Designate is far from alone in this.
> 
> 
> 
> Graham, thank you for the heartfelt post. I may not agree with all your 
> points, but I know you're coming from the right place and truly want to 
> see Designate (and OpenStack in general) succeed.

Thanks for reading - it ended up longer than expected.

> Your point about smaller projects finding it more difficult to "consume" 
> help from cross-project teams is an interesting one. When the big tent 
> was being discussed, I remember the TC specifically discussing a change 
> for cross-project team focus: moving from a "we do this work for you" 
> role to a "we help you do this work for yourself" role. You're correct 
> that the increase in OpenStack projects meant that the cross-project 
> teams simply would not be able to continue to be a service to other 
> teams. This was definitely predicted during the big tent discussions.

I remember the same things being discussed. However, that is not what
happened, at least not immediately, and it can be very hard to motivate
yourself to work on things when every time you ask for help you get nothing
other than a link to the docs page you have read a hundred times.

> If I had one piece of advice to give Designate, it would be to 
> prioritize getting documentation (both installation as well as dev-ref 
> and operational docs) in good shape. I know writing docs sucks, but docs 
> are a springboard for users and contributors alike and can have a 
> multiplying effect that's difficult to overstate. Getting those install 
> and developer docs started would enable the cross-project docs team to 
> guide Designate contributors in enhancing and cleaning up the docs and
> putting some polish on 'em.

Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Morales, Victor

On 2/9/17, 10:59 PM, "Ihar Hrachyshka"  wrote:

>Hi all,
>
>I noticed lately a number of job failures in neutron gate that all
>result in job timeouts. I describe
>gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
>timeouts happening in other jobs too.
>
>The failure mode is that all operations - ./stack.sh and each tempest test -
>take significantly more time (like 50% to 150% more), which results in the
>job timeout being triggered. An example of what I mean can be found in [1].
>
>A good run usually takes ~20 minutes to stack up devstack; then ~40
>minutes to pass full suite; a bad run usually takes ~30 minutes for
>./stack.sh; and then 1:20h+ until it is killed due to timeout.
>
>It affects different clouds (we see rax, internap, infracloud-vanilla,
>ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
>pypi or apt mirrors because then we would see slowdown in ./stack.sh
>phase only.
>
>We can't be sure that CPUs are the same, and devstack does not seem to
>dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure

I don’t think that logging this information would be useful, mainly because it
depends on enabling *host-passthrough* [3] in the nova-compute configuration of
public cloud providers.

>if it would help anyway). Nor do we have a way to learn whether the
>slowness could be a result of adherence to RFC 1149. ;)
>
>We discussed the matter in neutron channel [2] though couldn't figure
>out the culprit, or where to go next. At this point we assume it's not
>neutron's fault, and we hope others (infra?) may have suggestions on
>where to look.
>
>[1] 
>http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/console.html#_2017-02-09_04_47_12_874550
>[2] 
>http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2017-02-10.log.html#t2017-02-10T04:06:01
[3] 
http://docs.openstack.org/newton/config-reference/compute/hypervisor-kvm.html 

>
>Thanks,
>Ihar
>


[openstack-dev] [nova] placement/resource providers update 11

2017-02-10 Thread Ed Leafe
Your regular reporter, Chris Dent, is on PTO today, so I'm filling in. I'll be 
brief.

After the flurry of activity to get as much as possible in before the Ocata RCs,
this past week was relatively calm. Work continued on the patch to have Ironic
resources tracked as, well, individual entities instead of pseudo-VMs; with a
little more clarity, it should be ready to merge soon.

https://review.openstack.org/#/c/404472/

The patch series to add the concept of nested resource providers is moving 
forward a bit more slowly. Nested RPs allow for modeling of complex resources, 
such as a compute node that contains PCI devices, each of which has multiple 
physical and virtual functions. The series starts here:

https://review.openstack.org/#/c/415920/

We largely ignored traits, which represent the qualitative part of a resource, 
and focused on the quantitative side during Ocata. With Pike development now 
open, we look to begin discussing and developing the traits work in more 
detail. The spec for traits is here:

https://review.openstack.org/#/c/345138/

…and the series of POC code starts with:

https://review.openstack.org/#/c/377381/9

We've also begun planning for the discussions at the PTG around what our goals 
for Pike will be. I'm sure that there will be a summary of those discussions in 
one of these emails after the PTG.


-- Ed Leafe









Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Hayes, Graham
On 10/02/17 15:48, Doug Hellmann wrote:
> Excerpts from Jay Pipes's message of 2017-02-09 21:33:03 -0500:
>> On 02/09/2017 02:19 PM, Hayes, Graham wrote:
>> 
>>
>>> Where to now then?
>>> ===
>>>
>>> Well, this is where I call out to people who actually use the project -
>>> don't
>>> jump ship and use something else because of the picture I have painted.
>>> We are
>>> a dedicated team who cares about the project. We just need some help.
>>>
>>> I know there are large telcos who use Designate. I am sure there is tooling,
>>> or docs built up in these companies that could be very useful to the
>>> project.
>>>
>>> Nearly every commercial OpenStack distro has Designate. Some have had it
>>> since
>>> the beginning. Again, developers, docs, tooling, testers, anything and
>>> everything is welcome. We don't need a massive amount of resources - we
>>> are a
>>> small-ish, stable project.
>>>
>>> We need developers with upstream time allocated, and the budget to go to
>>> events
>>> like the PTG - for cross project work, and internal designate road map,
>>> these
>>> events form the core of how we work.
>>>
>>> We also need help from cross project teams - the work done by them is
>>> brilliant
>>> but it can be hard for smaller projects to consume. We have had a lot of
>>> progress since the `Leveller Playing Field`_ debate, but a lot of work is
>>> still optimised for the larger teams who get direct support, or well
>>> resourced
>>> teams who can dedicate people to the implementation of plugins / code.
>>>
>>> As someone I was talking to recently said - AWS is not winning public cloud
>>> because of commodity compute (that does help - a lot), but because of the
>>> added services that make using the cloud, well, cloud like. OpenStack
>>> needs to
>>> decide that either it is just compute, or if it wants the eco-system. [5]_
>>> Designate is far from alone in this.
>>
>> 
>>
>> Graham, thank you for the heartfelt post. I may not agree with all your 
>> points, but I know you're coming from the right place and truly want to 
>> see Designate (and OpenStack in general) succeed.
>>
>> Your point about smaller projects finding it more difficult to "consume" 
>> help from cross-project teams is an interesting one. When the big tent 
>> was being discussed, I remember the TC specifically discussing a change 
>> for cross-project team focus: moving from a "we do this work for you" 
>> role to a "we help you do this work for yourself" role. You're correct 
>> that the increase in OpenStack projects meant that the cross-project 
>> teams simply would not be able to continue to be a service to other 
>> teams. This was definitely predicted during the big tent discussions.
>>
>> If I had one piece of advice to give Designate, it would be to 
>> prioritize getting documentation (both installation as well as dev-ref 
>> and operational docs) in good shape. I know writing docs sucks, but docs 
>> are a springboard for users and contributors alike and can have a 
>> multiplying effect that's difficult to overstate. Getting those install 
>> and developer docs started would enable the cross-project docs team to 
>> guide Designate contributors in enhancing and cleaning up the docs and 
>> putting some polish on 'em. Your idea above that maybe some users 
>> already wrote some docs is a good one. Maybe reach out personally to 
>> those telcos and see if they can dig something up that can be the basis 
>> for upstream docs.
>>
>> Best,
>> -jay
>>
> 
> Thank you for bringing this into the open, Graham.
> 
> I think we have several projects that would benefit by transitioning
> from relying solely on vendor contributions to building up the
> deployer/user contributor base. That's a relatively new approach
> for some parts of the OpenStack community, but it's common elsewhere
> in open source projects. The shift is likely to mean some changes
> in the way we organize ourselves, because it may not be reasonable
> to assume user-contributors have large amounts of time to focus on
> long review cycles, traveling to sprints, or the other intensive
> activities that are part of our current routine. (That's not to say
> the Designate team has introduced any of those issues, of course.
> We need to be thinking about removing obstacles for contributors
> across the entire community.)

Yes - definitely. We try to be good about review cycles (with the amount
we get, it is not that difficult for us to be good about bug triage, and
review triage), but I agree - how we work does make things difficult
for user contributors to become key contributors to a project.

Even for people who want to contribute as a hobby, the time and level
of funding required is quite high.

Thanks,

Graham

> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.ope

Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Hayes, Graham
On 10/02/17 02:40, Jay Pipes wrote:
> On 02/09/2017 02:19 PM, Hayes, Graham wrote:
> 
> 
>> Where to now then?
>> ==================
>>
>> Well, this is where I call out to people who actually use the project -
>> don't
>> jump ship and use something else because of the picture I have painted.
>> We are
>> a dedicated team who cares about the project. We just need some help.
>>
>> I know there are large telcos who use Designate. I am sure there is tooling,
>> or docs built up in these companies that could be very useful to the
>> project.
>>
>> Nearly every commercial OpenStack distro has Designate. Some have had it
>> since
>> the beginning. Again, developers, docs, tooling, testers, anything and
>> everything is welcome. We don't need a massive amount of resources - we
>> are a
>> smallish, stable project.
>>
>> We need developers with upstream time allocated, and the budget to go to
>> events
>> like the PTG - for cross project work, and internal designate road map,
>> these
>> events form the core of how we work.
>>
>> We also need help from cross project teams - the work done by them is
>> brilliant
>> but it can be hard for smaller projects to consume. We have had a lot of
>> progress since the `Leveller Playing Field`_ debate, but a lot of work is
>> still optimised for the larger teams who get direct support, or well
>> resourced
>> teams who can dedicate people to the implementation of plugins / code.
>>
>> As someone I was talking to recently said - AWS is not winning public cloud
>> because of commodity compute (that does help - a lot), but because of the
>> added services that make using the cloud, well, cloud like. OpenStack
>> needs to
>> decide whether it is just compute, or whether it wants the ecosystem. [5]_
>> Designate is far from alone in this.
> 
> 
> 
> Graham, thank you for the heartfelt post. I may not agree with all your 
> points, but I know you're coming from the right place and truly want to 
> see Designate (and OpenStack in general) succeed.

Thanks for reading - it ended up longer than expected.

> Your point about smaller projects finding it more difficult to "consume" 
> help from cross-project teams is an interesting one. When the big tent 
> was being discussed, I remember the TC specifically discussing a change 
> for cross-project team focus: moving from a "we do this work for you" 
> role to a "we help you do this work for yourself" role. You're correct 
> that the increase in OpenStack projects meant that the cross-project 
> teams simply would not be able to continue to be a service to other 
> teams. This was definitely predicted during the big tent discussions.

I remember the same things being discussed. However, that is not what
happened, at least not immediately, and it can be very hard to
motivate yourself to work on things when every time you ask for help
you get nothing other than a link to the docs page you have read
a hundred times.

> If I had one piece of advice to give Designate, it would be to 
> prioritize getting documentation (both installation as well as dev-ref 
> and operational docs) in good shape. I know writing docs sucks, but docs 
> are a springboard for users and contributors alike and can have a 
> multiplying effect that's difficult to overstate. Getting those install 
> and developer docs started would enable the cross-project docs team to 
> guide Designate contributors in enhancing and cleaning up the docs and 
> putting some polish on 'em. Your idea above that maybe some users 
> already wrote some docs is a good one. Maybe reach out personally to 
> those telcos and see if they can dig something up that can be the basis 
> for upstream docs.

Yeah, writing docs is hard to do right, and honestly, we have been
trying to just keep up with bugs recently.

The problem for us is, where do things like operational docs go?
There is an ops guide, but it is very hard to see where we could
put content. I am also a firm believer that
docs.openstack.org/developer/ is *not* the place for
end user / ops / etc. documentation - it's why our docs are so
messy right now.

I have pinged a few people for docs, and am waiting for responses.

> Best,
> -jay
> 
> 
> 
> 




Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2017-02-09 21:33:03 -0500:
> On 02/09/2017 02:19 PM, Hayes, Graham wrote:
> 
> 
> > Where to now then?
> > ==================
> >
> > Well, this is where I call out to people who actually use the project -
> > don't
> > jump ship and use something else because of the picture I have painted.
> > We are
> > a dedicated team who cares about the project. We just need some help.
> >
> > I know there are large telcos who use Designate. I am sure there is tooling,
> > or docs built up in these companies that could be very useful to the
> > project.
> >
> > Nearly every commercial OpenStack distro has Designate. Some have had it
> > since
> > the beginning. Again, developers, docs, tooling, testers, anything and
> > everything is welcome. We don't need a massive amount of resources - we
> > are a
> > smallish, stable project.
> >
> > We need developers with upstream time allocated, and the budget to go to
> > events
> > like the PTG - for cross project work, and internal designate road map,
> > these
> > events form the core of how we work.
> >
> > We also need help from cross project teams - the work done by them is
> > brilliant
> > but it can be hard for smaller projects to consume. We have had a lot of
> > progress since the `Leveller Playing Field`_ debate, but a lot of work is
> > still optimised for the larger teams who get direct support, or well
> > resourced
> > teams who can dedicate people to the implementation of plugins / code.
> >
> > As someone I was talking to recently said - AWS is not winning public cloud
> > because of commodity compute (that does help - a lot), but because of the
> > added services that make using the cloud, well, cloud like. OpenStack
> > needs to
> > decide whether it is just compute, or whether it wants the ecosystem. [5]_
> > Designate is far from alone in this.
> 
> 
> 
> Graham, thank you for the heartfelt post. I may not agree with all your 
> points, but I know you're coming from the right place and truly want to 
> see Designate (and OpenStack in general) succeed.
> 
> Your point about smaller projects finding it more difficult to "consume" 
> help from cross-project teams is an interesting one. When the big tent 
> was being discussed, I remember the TC specifically discussing a change 
> for cross-project team focus: moving from a "we do this work for you" 
> role to a "we help you do this work for yourself" role. You're correct 
> that the increase in OpenStack projects meant that the cross-project 
> teams simply would not be able to continue to be a service to other 
> teams. This was definitely predicted during the big tent discussions.
> 
> If I had one piece of advice to give Designate, it would be to 
> prioritize getting documentation (both installation as well as dev-ref 
> and operational docs) in good shape. I know writing docs sucks, but docs 
> are a springboard for users and contributors alike and can have a 
> multiplying effect that's difficult to overstate. Getting those install 
> and developer docs started would enable the cross-project docs team to 
> guide Designate contributors in enhancing and cleaning up the docs and 
> putting some polish on 'em. Your idea above that maybe some users 
> already wrote some docs is a good one. Maybe reach out personally to 
> those telcos and see if they can dig something up that can be the basis 
> for upstream docs.
> 
> Best,
> -jay
> 

Thank you for bringing this into the open, Graham.

I think we have several projects that would benefit by transitioning
from relying solely on vendor contributions to building up the
deployer/user contributor base. That's a relatively new approach
for some parts of the OpenStack community, but it's common elsewhere
in open source projects. The shift is likely to mean some changes
in the way we organize ourselves, because it may not be reasonable
to assume user-contributors have large amounts of time to focus on
long review cycles, traveling to sprints, or the other intensive
activities that are part of our current routine. (That's not to say
the Designate team has introduced any of those issues, of course.
We need to be thinking about removing obstacles for contributors
across the entire community.)

Doug



Re: [openstack-dev] [Swift][swiftclient][horizon] Improve UX by enabling HTTP headers configuration in UI and CLI

2017-02-10 Thread John Dickinson


On 10 Feb 2017, at 7:07, Denis Makogon wrote:

> Greetings.
>
> I've been developing Swift middleware that depends on specific HTTP headers
> and figured out that there's only one way to specify them on the client
> side - programmatically, by adding HTTP headers to each Swift HTTP API
> call (the CLI and dashboard don't support configuring HTTP headers, except
> for cases enabled by default, such as the "copy" middleware, because
> swiftclient defines it as a separate API method).
>
> My point here is that, as a developer, I don't have an OpenStack-aligned
> way to exercise HTTP-header-dependent middleware without hacking on both
> swiftclient and the dashboard, which makes me fall back to cURL and adds
> a lot of overhead when working with Swift.
>
> So, is there any interest in having such a thing in swiftclient and,
> subsequently, in the dashboard?
> If yes, let me know (it shouldn't be that complicated, because the
> swiftclient Python API is already capable of sending HTTP headers).

Good news! python-swiftclient already supports sending arbitrary headers via 
the CLI with the -H/--header option (in addition to the SDK, as you 
mentioned). IIRC this is *not* yet supported in the combined openstack 
client, but I think it would be a great addition.
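To make the call shape concrete: the SDK methods accept a `headers` dict that is sent along with the request. The sketch below uses a local stand-in for `Connection.put_object` so it runs without a Swift cluster; the header name is purely illustrative, not a real Swift header.

```python
# Stand-in for swiftclient's Connection.put_object, so this sketch runs
# without a live Swift cluster; the real SDK method takes the same
# `headers` keyword argument.
sent = {}

def put_object(container, obj, contents, headers=None):
    # Record what would be sent on the wire.
    sent.update({"container": container, "obj": obj,
                 "headers": dict(headers or {})})

# Arbitrary headers -- e.g. ones a custom middleware keys off -- ride along:
put_object("my-container", "my-object", b"payload",
           headers={"X-My-Middleware-Flag": "on"})

print(sent["headers"])  # {'X-My-Middleware-Flag': 'on'}
```

On the CLI the equivalent is the -H/--header option mentioned above, e.g. `swift post my-container -H "X-My-Middleware-Flag: on"`.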

--John


>
> Kind regards,
> Denis Makogon




Re: [openstack-dev] [oslo] pbr and warnerrors status

2017-02-10 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2017-02-10 13:57:03 +0100:
> On 02/08/2017 05:22 PM, Jay Faulkner  wrote:
>  > [...]
> > IMO, I’d suggest skipping this and just fixing the broken attribute. 
> > Projects that are impacted by the change simply need to merge a one-line 
> > change to set warnerrors=false. FWIW, you can actually run a local docs 
> > build, find and resolve warnings without the PBR change. It seems overkill 
> > to me to change the term since doing so will be super confusing to anyone 
> > who hasn’t read this mailing list thread.
> >
> > If there’s a significant enough concern, a change could be pushed to set 
> > warnerrors=false on the projects that are concerned before this release is 
> > made.
> 
> Yes, this should indeed be an easy change to fix it.
> 
> So, since requirements freeze is over, let me remove my WIP on
> https://review.openstack.org/430618 so that it can be released as 
> convenient for the release team,
> 
> Andreas

I think we're unlikely to release a new version of pbr until after the
ocata final releases are done. It's too pervasive and poses a lot of
risk if something is broken.
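For reference, the one-line change Jay describes is a setup.cfg edit; the section and option spelling below reflect how pbr consumed the flag at the time and should be double-checked against the pbr docs for the release in question:

```ini
# setup.cfg -- opt a project out of treating Sphinx warnings as errors
[pbr]
warnerrors = false
```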

Doug



[openstack-dev] [Swift][swiftclient][horizon] Improve UX by enabling HTTP headers configuration in UI and CLI

2017-02-10 Thread Denis Makogon
Greetings.

I've been developing Swift middleware that depends on specific HTTP headers
and figured out that there's only one way to specify them on the client
side - programmatically, by adding HTTP headers to each Swift HTTP API
call (the CLI and dashboard don't support configuring HTTP headers, except
for cases enabled by default, such as the "copy" middleware, because
swiftclient defines it as a separate API method).

My point here is that, as a developer, I don't have an OpenStack-aligned
way to exercise HTTP-header-dependent middleware without hacking on both
swiftclient and the dashboard, which makes me fall back to cURL and adds a
lot of overhead when working with Swift.

So, is there any interest in having such a thing in swiftclient and,
subsequently, in the dashboard?
If yes, let me know (it shouldn't be that complicated, because the
swiftclient Python API is already capable of sending HTTP headers).

Kind regards,
Denis Makogon


[openstack-dev] [refstack] Getting on the Pike PTG Agenda?

2017-02-10 Thread Aimee Ukasick
Hi Refstack team - a team from OPNFV will be at the Pike PTG, and we
would like to meet with the RefStack team to discuss building a direct
link to RefStack and other upstream verification projects. We would like
to present the OPNFV Dovetail project and our goals for leveraging
upstream test frameworks, as well as supplementing them with
OPNFV-specific tests (some of which will work their way upstream over time).

I don't see a Pike PTG etherpad for RefStack on the PTG/Pike/Etherpads
page, so how do I book time on the RefStack agenda?

Thanks in advance!
-- 

Aimee Ukasick
AT&T Open Source



Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leaveraging kuryr

2017-02-10 Thread Pete Birley
Flavio,

Sounds great to me, look forward to catching up and plotting some
collaborative hacking :)

Catch you all at the PTG!

Cheers

Pete

On Fri, Feb 10, 2017 at 2:24 PM, Flavio Percoco  wrote:

> On 09/02/17 09:57 +0100, Flavio Percoco wrote:
>
>> Greetings,
>>
>> I was talking with Tony and he mentioned that he's recording a new demo
>> for
>> kuryr and, well, it'd be great to also use the containerized version of
>> TripleO
>> for the demo.
>>
>> His plan is to have this demo out by next week and that may be too tight
>> for the
>> containerized version of TripleO (it may be not, let's try). That said, I
>> think
>> it's still a good opportunity for us to sit down at the PTG and play with
>> this a
>> bit further.
>>
>> So, before we set a date and time for this, I wanted to extend the invite
>> to
>> other folks and see if there's some interest. It'd be great to also have
>> folks
>> from Kolla and openstack-helm joining.
>>
>> Looking forward to hearing ideas and hacking with y'all,
>> Flavio
>>
>
> So, given the interest and my hope to group as many folks from other teams
> as possible, how about we schedule this for Wednesday at 09:00 am?
>
> I'm not sure what room we can crash yet but I'll figure it out soon and let
> y'all know.
>
> Any objections/observations?
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
>
>


-- 


Pete Birley / Director
pete@port.direct / +447446862551

*PORT.*DIRECT
United Kingdom
https://port.direct



Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leaveraging kuryr

2017-02-10 Thread Antoni Segura Puimedon
On Fri, Feb 10, 2017 at 3:24 PM, Flavio Percoco  wrote:

> On 09/02/17 09:57 +0100, Flavio Percoco wrote:
>
>> Greetings,
>>
>> I was talking with Tony and he mentioned that he's recording a new demo
>> for
>> kuryr and, well, it'd be great to also use the containerized version of
>> TripleO
>> for the demo.
>>
>> His plan is to have this demo out by next week and that may be too tight
>> for the
>> containerized version of TripleO (it may be not, let's try). That said, I
>> think
>> it's still a good opportunity for us to sit down at the PTG and play with
>> this a
>> bit further.
>>
>> So, before we set a date and time for this, I wanted to extend the invite
>> to
>> other folks and see if there's some interest. It'd be great to also have
>> folks
>> from Kolla and openstack-helm joining.
>>
>> Looking forward to hearing ideas and hacking with y'all,
>> Flavio
>>
>
> So, given the interest and my hope to group as many folks from other teams
> as possible, how about we schedule this for Wednesday at 09:00 am?
>
> I'm not sure what room we can crash yet but I'll figure it out soon and let
> y'all know.
>
> Any objections/observations?


Sounds good to me!


>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
>
>


Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leaveraging kuryr

2017-02-10 Thread Flavio Percoco

On 09/02/17 09:57 +0100, Flavio Percoco wrote:

> Greetings,
>
> I was talking with Tony and he mentioned that he's recording a new demo for
> kuryr and, well, it'd be great to also use the containerized version of TripleO
> for the demo.
>
> His plan is to have this demo out by next week and that may be too tight for the
> containerized version of TripleO (it may be not, let's try). That said, I think
> it's still a good opportunity for us to sit down at the PTG and play with this a
> bit further.
>
> So, before we set a date and time for this, I wanted to extend the invite to
> other folks and see if there's some interest. It'd be great to also have folks
> from Kolla and openstack-helm joining.
>
> Looking forward to hearing ideas and hacking with y'all,
> Flavio


So, given the interest and my hope to group as many folks from other teams as
possible, how about we schedule this for Wednesday at 09:00 am?

I'm not sure what room we can crash yet but I'll figure it out soon and let
y'all know.

Any objections/observations?

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leaveraging kuryr

2017-02-10 Thread Pete Birley
Dan,

There's no way I could have put that any better than Tony!
Though he's given me a bit too much credit, I actually just extended his
work to make use of LBaaS v2, and never got round to fully making use of
OVN's native load balancing.

Cheers

Pete

On Fri, Feb 10, 2017 at 12:34 AM, Antoni Segura Puimedon  wrote:

>
>
> On Thu, Feb 9, 2017 at 10:00 PM, Dan Sneddon  wrote:
>
>> Pete, thanks for mentioning network isolation and segmentation. That's
>> my area of interest, since I'm focused on underlay networking for
>> TripleO and bare-metal networking in Ironic.
>>
>> Network isolation is going to be important for several reasons:
>>
>> 1) Separation of control and data plane in deployments
>> 2) Tenant isolation in multi-tenant Ironic BMaaS
>> 3) Network Function Virtualization (NFV) use cases
>>
>> The intention of the isolated networking model for TripleO was to
>> separate control and data plane, as well as tenant from administrative
>> traffic. A secondary goal was to make this highly configurable and
>> customizable. This has been well received by many operators who have
>> rigid security isolation requirements (such as PCI-DSS for financial
>> transactions), or those who customize their underlay network to
>> integrate into an existing networking topology. I'm thinking about how
>> to do something similar in Kubernetes, perhaps with Kuryr.
>>
>> The Harbor project looks very interesting. Do you have any more
>> information about how Harbor uses Raven to achieve isolation? Also, are
>> you saying that Harbor uses an older (prototype) version of Raven, or
>> are you referring to Raven itself as a prototype?
>>
>
> I can answer to some of that :-)
>
> Raven was the Python 3 asyncio-based prototype my team built back
> when I was at Midokura for integrating Kubernetes and Neutron as
> something to then upstream to Kuryr with the help of the rest of the
> community (taking the lessons learned from the PoC and improving
> on it). So yes, Raven itself was a prototype (a quite functional one)
> and led to what we know today in Kuryr as the kuryr-kubernetes
> controller, which is now almost at the same level of features, missing
> just two patches for the service support.
>
> I have to note here, that Pete did some interesting modifications to
> Raven like OVN support addition and leveraging the watcher model
> to make, IIRC, the cluster services use the native OVN load balancer
> rather than neutron-lbaas.
>
> The Kuryr-kubernetes controller is built with pluggability in mind and it
> has a system of drivers (using stevedore) for acquiring resources.  This
> makes things like what Pete did easier to achieve with the new codebase
> and also pick yourself the level of isolation that you want. Let's say
> that you want
> to have the different OSt components pick different networks or even
> projects, you would just need to make a very small driver like [0] or [1]
> that could, for example, make an http request to some service that held
> a mapping, read some specific annotation, etc.
>
> In terms of isolation for deployments, we are starting discussion about
> leveraging the new CNI support for reporting multiple interfaces (still not
> implemented in k8s, but playing is fun) so that we can put the pods that
> need it both in the control and in the data plane, we'll probably need to
> tweak the interface of the drivers so that they can return an iterable.
>
>
> [0] https://github.com/openstack/kuryr-kubernetes/blob/master/
> kuryr_kubernetes/controller/drivers/default_project.py#L39
> [1] https://github.com/openstack/kuryr-kubernetes/
> blob/master/kuryr_kubernetes/controller/drivers/default_subnet.py#L56
>
>>
>> I'll be at the PTG Tuesday through Friday morning. I'm looking forward
>> to having some conversations about this topic.
>>
>> --
>> Dan Sneddon |  Senior Principal OpenStack Engineer
>> dsned...@redhat.com |  redhat.com/openstack
>> dsneddon:irc|  @dxs:twitter
>>
>> On 02/09/2017 09:56 AM, Pete Birley wrote:
>> > Hi Flavio,
>> >
>> > I've been doing some work on packaging Kuryr for use with K8s as an
>> > underlay for OpenStack on Kubernetes. When we met up in Brno the Harbor
>> > project I showed you used Tony's old Raven Prototype to provide the
>> > network isolation and segmentation in K8s. I've since begun to lay the
>> > groundwork for OpenStack-Helm to support similar modes of operation,
>> > allowing both service isolation and also combined networking between
>> > OpenStack and K8s, where pods and VMs can co-exist on the same Neutron
>> > Networks.
>> >
>> > I'm not sure I will have things fully functional within OpenStack-Helm
>> > by the PTG, but it would be great to sit down and work out how we can
>> > ensure that not only do we not end up replicating work needlessly, but
>> > also find further opportunities to collaborate. I'll be in Atlanta all
>> > week, though I think some of the OS-Helm and Kolla-K8s developers will
>> > be leaving on Wed, would a particular day/tim

Re: [openstack-dev] [oslo] pbr and warnerrors status

2017-02-10 Thread Andreas Jaeger

On 02/08/2017 05:22 PM, Jay Faulkner  wrote:
> [...]

> IMO, I’d suggest skipping this and just fixing the broken attribute. Projects 
> that are impacted by the change simply need to merge a one-line change to set 
> warnerrors=false. FWIW, you can actually run a local docs build, find and 
> resolve warnings without the PBR change. It seems overkill to me to change the 
> term since doing so will be super confusing to anyone who hasn’t read this 
> mailing list thread.
>
> If there’s a significant enough concern, a change could be pushed to set 
> warnerrors=false on the projects that are concerned before this release is made.


Yes, this should indeed be an easy change to fix it.

So, since requirements freeze is over, let me remove my WIP on
https://review.openstack.org/430618 so that it can be released as 
convenient for the release team,


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [nova] Next minimum libvirt version

2017-02-10 Thread Kashyap Chamarthy
On Thu, Feb 09, 2017 at 05:29:22PM -0600, Matt Riedemann wrote:
> Since danpb hasn't been around I've sort of forgotten about this, but we
> should talk about bumping the minimum required libvirt version in nova.
> 
> Currently it's 1.2.1 and the next was set to 1.2.9.
> 
> On master we're gating on ubuntu 16.04 which has libvirt 1.3.1 (14.04 had
> 1.2.2).
> 
> If we move to require 1.2.9 that effectively kills 14.04 support for
> devstack + libvirt on master, which is probably OK.
> 
> There is also the distro support wiki [1] which hasn't been updated in
> awhile.
> 
> I'm wondering if 1.2.9 is a safe move for the next required minimum version
> and if so, does anyone have ideas on the next required version after that?

1.2.9 was released in Oct 2014.

And there have been 26 releases from 1.2.9 until now:

1.2.{10,11,12,13,14,15,16,17,18,19,20,21}
1.3.{0,1,2,3,4,5}
2.0.0 [01 JUL 2016]
2.1.0 [02 AUG 2016]
2.2.0 [02 SEP 2016]
2.3.0 [04 OCT 2016]
2.4.0 [01 NOV 2016]
2.5.0 [04 DEC 2016]
3.0.0 [17 JAN 2017]
3.1.0 [Unreleased, as of 10-FEB-2017]

IIUC, going by how[X] Dan settled on 1.1.1 as the minimum version for
Newton, it seems we have to mine through the releases since then for a
version that will:

  - provide unconditional support for the specific features we need;
  - remove the need to maintain backward-compatibility code

[X] 
http://lists.openstack.org/pipermail/openstack-operators/2015-October/008400.html
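As a mechanical aside on what a minimum-version bump checks: libvirt reports its version as a single packed integer, which the driver compares against the packed minimum. A rough sketch under that assumption; the constant and helper names are illustrative, not nova's actual code:

```python
# Candidate minimum from the discussion above.
NEXT_MIN_LIBVIRT_VERSION = (1, 2, 9)

def version_to_int(version_tuple):
    """Pack (major, minor, micro) into libvirt's single-integer form."""
    major, minor, micro = version_tuple
    return major * 1_000_000 + minor * 1_000 + micro

def meets_minimum(reported, minimum=NEXT_MIN_LIBVIRT_VERSION):
    """True if the reported (packed) libvirt version is new enough."""
    return reported >= version_to_int(minimum)

# 1.2.2 (Trusty) packs to 1002002 and would be dropped; 1.3.1 passes.
print(meets_minimum(version_to_int((1, 2, 2))))  # False
print(meets_minimum(version_to_int((1, 3, 1))))  # True
```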

---

Good news for the future: upstream libvirt recently started an effort to
make release notes more consumable by writing more structured release
notes, categorizing the work into new features, improvements, and bug
fixes.

You can see the result by comparing this page:

http://libvirt.org/news.html

With the older releases (where it's mostly information from Git
commits):

http://libvirt.org/news-2016.html

[...]

-- 
/kashyap



[openstack-dev] [openstack-docs] What's up, Doc? 10 Feb 2017

2017-02-10 Thread Alexandra Settle
Team team team team team,

Welcome to my first edition of What's up, Doc?

Firstly, I just want to thank everyone who supported me when I announced I 
would like to run for manuals PTL. Although I ran the election uncontested 
(like the beginning of all good dictatorships), it was wonderful to receive 
messages of support from everyone. I’ve slowly been ramping up and getting used 
to the mass influx of emails. Turns out I’m really good at sending out emails 
with the incorrect time and/or date. Big thanks to Andreas for informing me the 
next 9th of January will be in 2020.

Without further ado, let's get down to business! We have a very short window 
now between Ocata and the start of Pike, and we still have quite a number of 
things left to achieve.
- If anyone has time to dedicate to Install Guide testing - please sign up 
(links below). Thanks to those who have already volunteered to help Lana out!
- As per my email yesterday, I'd love to see some people out there smashing 
bugs! (links also below, and more info in the meeting minutes from yesterday).

== Progress towards Ocata ==

* 12 days to go!
* Closed 256 bugs so far. We have 128 open bugs left: 
https://bugs.launchpad.net/openstack-manuals/+bugs
* Release tasks are being tracked here: 
https://wiki.openstack.org/wiki/Documentation/OcataDeliverables
* Install Guide testing is being tracked here: 
https://wiki.openstack.org/wiki/Documentation/OcataDocTesting

== The Road to PTG in Atlanta ==

Docs is a horizontal project, so our sessions will run across the Monday and 
Tuesday of the event. We will be combining the docs event with i18n, so 
translators and docs people will all be in the room together. Everyone welcome! 
Conversation topics for Docs and i18n here: 
https://etherpad.openstack.org/p/docs-i18n-ptg-pike

Event info is available here: http://www.openstack.org/ptg
Purchase tickets here: https://pikeptg.eventbrite.com/
Tickets for the Boston summit: https://www.openstack.org/summit/boston-2017/

== Specialty Team Reports ==

API - Anne Gentle:
The trove API docs are incomplete after migration, and a user reported the bug 
to the ML. Anne to log the missing clustering API info. Alex and Anne to meet 
with the app dev community manager at the Foundation to talk about goals for 
developer.openstack.org. The NFV Orchestration (tacker) team landed their API 
ref this week.

Configuration Reference and CLI Reference - Tomoyuki Kato:
CLI Reference: Updated some CLI references. Added aodhclient. Config Reference: 
Start working on Ocata updates.

High Availability Guide - Ianeta Hutchinson:
Sent a message to the ML looking for people interested in helping out on the HA 
guide for Pike. Planning for the PTG as I can’t attend.

Hypervisor Tuning Guide - Blair Bethwaite:
No report this week

Installation guides - Lana Brindley:
Landing page review: https://review.openstack.org/#/c/425821/12. Install guide 
testing is well underway.

Networking Guide - John Davidge:
Working on organising a Networking Guide working group with the neutron team at 
the PTG. Also been smashing bugs.

Operations and Architecture Design guides - Darren Chan:
Darren and Ben are working on an action plan for the Arch Guide to be worked on 
during Pike to get the current draft guide published.

Security Guide - Nathaniel Dillon:
Moved the sec-guide bugs to the sec team Launchpad. Sec and doc team to 
coordinate and come up with action plan for the future of the sec guide at the 
PTG.

Training Guides - Matjaz Pancur:
No report this week

Training labs - Pranav Salunke, Roger Luethi:
Training-labs has a rough, but working Ocata patch. Issues we found are noted 
on the Etherpad as you requested. We should be able to release within a week or 
two after Ocata is official.

User guides - Joseph Robinson:
The legacy command changes have gone through well this release, and the next 
step is to check with the nova, neutron, cinder, and glance teams on the 
status of some specific project commands.

== Doc team meeting ==

Our next meeting will be on Thursday 23 February at 2100 UTC in 
#openstack-meeting-alt. We will not be skipping the meeting in favor of the 
PTG, as the docs sessions are on the Monday and Tuesday and I would love the 
opportunity to immediately report back to people who cannot attend.

Meeting chair will be me (Alexandra Settle - asettle)! \o/

For more meeting details, including minutes and the agenda: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

--

Have a great weekend!

Alex
All round badass and supernaturally good potato peeler

IRC: asettle
Twitter: dewsday

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [containers][magnum] Make certs insecure in magnum drivers

2017-02-10 Thread Kevin Lefevre
Hi,

This change (https://review.openstack.org/#/c/383493/) makes certificate 
requests to magnum_api insecure by default, since that is a common use case.

In the swarm drivers, the make-cert.py script is written in Python, whereas 
for K8s on CoreOS and Atomic it is a shell script.

I wanted to make the same change (https://review.openstack.org/#/c/430755/), 
but it gets flagged by bandit because the Python requests package is used 
with TLS verification disabled.

I know that we should support custom CAs in the future, but if insecure 
requests are the default right now (and per the previously merged change), 
what should we do?

Do we disable bandit for the swarm drivers? Or do we use the same scripts 
(and keep them as simple as possible) for all the drivers, possibly without 
Python, as it is not included in CoreOS?


signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][qa][grenade] Release blocked on grenade job not testing from newton

2017-02-10 Thread Vasyl Saienko
The root cause of the ironic grenade job breakage is described in
https://bugs.launchpad.net/ironic/+bug/1663371
In short, during Ocata we removed the DEFAULT_IMAGE_NAME setting logic from
devstack [0]. As soon as stable/ocata was cut for devstack, the
DEFAULT_IMAGE_NAME variable was no longer visible in grenade, so the default
value in the nova resources.sh script was picked up [1].

It was fixed by [2], which sources the ironic vars (we now set
DEFAULT_IMAGE_NAME there) in grenade, making the variable available to all
grenade scripts.
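The shell mechanics behind this are the standard `${VAR:-default}` fallback;
a minimal sketch (image names are illustrative, not the actual values in
grenade's resources.sh):

```shell
# When devstack no longer exports DEFAULT_IMAGE_NAME, grenade's
# parameter-default expansion silently falls back to its own value:
unset DEFAULT_IMAGE_NAME
IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.4-x86_64-uec}
echo "fallback: $IMAGE_NAME"

# Sourcing a vars file that sets the variable first (as the fix does)
# makes the exported value win over the fallback:
DEFAULT_IMAGE_NAME=cirros-0.3.4-x86_64-disk
IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.4-x86_64-uec}
echo "sourced: $IMAGE_NAME"
```

That is why the breakage was silent: nothing errored, the job just booted
the wrong image.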

The problem described earlier (we are testing upgrades from master to
master) still exists. It affects not only ironic but all projects that do
not yet have the latest stable branch (i.e. stable/ocata now). As soon as
they cut it, everything goes back to normal, but there is a short period of
time when all such projects test upgrades from master to master.
Related bug [3].



[0]
https://github.com/openstack-dev/devstack/commit/d89b175321ac293454ad15caaee13c0ae46b0bd6
[1]
https://github.com/openstack-dev/grenade/blob/master/projects/60_nova/resources.sh#L31
[2] https://review.openstack.org/#/c/431369/
[3] https://bugs.launchpad.net/grenade/+bug/1663505

On Thu, Feb 9, 2017 at 4:02 PM, Jim Rollenhagen 
wrote:

> On Thu, Feb 9, 2017 at 7:00 AM, Jim Rollenhagen 
> wrote:
>
>> Hey folks,
>>
>> Ironic plans to release Ocata this week, once we have a couple small
>> patches
>> and a release note cleanup landed.
>>
>> However, our grenade job is now testing master->master, best I can tell.
>> This
>> is pretty clearly due to this d-s-g commit:
>> https://github.com/openstack-infra/devstack-gate/commit/9c75
>> 2b02fbd57c7021a7c9295bf4d68a0d1973a8
>>
>> Evidence:
>>
>> * it appears to be checking out a change on master into the old side:
>>   http://logs.openstack.org/44/354744/10/check/gate-grenade-ds
>> vm-ironic-ubuntu-xenial/4b395ff/logs/grenade.sh.txt.gz#_
>> 2017-02-09_07_15_32_979
>>
>> * and somewhat coincidentally, our grenade job seems to be broken when
>> master
>>   (ocata) is on the old side, because we now select instance images in our
>>   devstack plugin:
>>   http://logs.openstack.org/44/354744/10/check/gate-grenade-ds
>> vm-ironic-ubuntu-xenial/4b395ff/logs/grenade.sh.txt.gz#_
>> 2017-02-09_08_07_10_946
>>
>> So, we're currently blocking the ironic release on this, as obviously we
>> don't
>> want to release if we don't know upgrades work. As I see it, we have two
>> options:
>>
>> 1) Somehow fix devstack-gate and configure our jobs in project-config
>> such that
>> this job upgrades newton->master. I might need some help on navigating
>> this
>> one.
>>
>> 2) Make our grenade job non-voting for now, release 7.0.0 anyway, and
>> immediately make sure that the stable/ocata branch runs grenade as
>> expected and
>> passes. If it isn't passing, fix what we need to and cut 7.0.1 ASAP.
>>
>
> After talking to Doug and Sean on IRC, I think this is the best
> option. We don't necessarily need to make it non-voting if we
> can fix it quickly (Vasyl is working on this already).
>
> We still have a week to release from the Ocata branch if we need
> to get more things in. They'll just need to go through the backport
> process.
>
> // jim
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mitaka - Unable to attach volume to VM

2017-02-10 Thread Sam Huracan
Hi all,

I found out the reason:
I was missing the os_region_name option in the [cinder] section of the
compute node's nova.conf, so nova-api was contacting cinder in the wrong
region.
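For anyone hitting the same symptom, the fix amounts to a nova.conf entry
along these lines (the region name is an example value, not from Sam's
deployment):

```ini
# Compute node nova.conf (sketch). Without os_region_name, nova may pick
# a cinder endpoint from a different region out of the service catalog,
# which surfaces as VolumeNotFound during attach.
[cinder]
os_region_name = RegionOne
```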

Thanks everyone :)



2017-02-10 11:04 GMT+07:00 Sam Huracan :

> Hi Sean,
>
> I've checked 'openstack volume list'; the state of all volumes is
> 'available', and I can download an image to a volume.
> I also tried Ceph as another Cinder volume backend, and the issue is the
> same. Same log.
>
> Port 3260 has been opened in iptables.
>
> When I run nova --debug volume-attach, I see nova contact cinder for the
> volume, but the nova log still returns "VolumeNotFound", which I can't
> understand. http://paste.openstack.org/show/598332/
> http://paste.openstack.org/show/598332/
>
> cinder-scheduler.log and cinder-volume.log do not show any errors or any
> attach activity.
>
>
> 2017-02-10 10:16 GMT+07:00 Sean McGinnis :
>
>> On Fri, Feb 10, 2017 at 02:18:15AM +0700, Sam Huracan wrote:
>> > Hi guys,
>> >
>> > I meet this issue when deploying Mitaka.
>> > When I attach LVM volume to VM, it keeps state "Attaching". I am also
>> > unable to boot VM from volume.
>> >
>> > This is /var/log/nova/nova-compute.log in Compute node when I attach
>> volume.
>> > http://paste.openstack.org/show/598282/
>> >
>> > Mitaka version: http://prntscr.com/e6ns0u
>> >
>> > Can you help me solve this issue?
>> >
>> > Thanks a lot
>>
>>
>> Hi Sam,
>>
>> Any errors in the Cinder logs? Or just the ones from Nova not finding the
>> volume?
>>
>> Sean
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Attila Fazekas
I wonder, can we switch to CINDER_ISCSI_HELPER="lioadm"?
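For reference, the switch would be a one-line devstack setting; a sketch of
the local.conf change (assuming the standard devstack local.conf layout):

```ini
# devstack local.conf sketch: use LIO (targetcli/lioadm) as cinder's iSCSI
# target helper instead of tgtd, sidestepping tgtd's verbose logging.
[[local|localrc]]
CINDER_ISCSI_HELPER=lioadm
```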

On Fri, Feb 10, 2017 at 9:17 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

> I believe those are traces left by the reference implementation of cinder
> setting very high debug level on tgtd. I'm not sure if that's related or
> the culprit at all (probably the culprit is a mix of things).
>
> I wonder if we could disable such verbosity on tgtd, which certainly is
> going to slow down things.
>
> On Fri, Feb 10, 2017 at 9:07 AM, Antonio Ojea  wrote:
>
>> I guess it's an infra issue, specifically related to the storage, or the
>> network that provide the storage.
>>
>> If you look at the syslog file [1] , there are a lot of this entries:
>>
>> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(2024) no more data
>> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(1996) found a task 71 131072 0 0
>> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_data_rsp_build(1136) 131072 131072 0 26214471
>> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: __cmd_done(1281) (nil) 0x2563000 0 131072
>>
>> grep tgtd syslog.txt.gz| wc
>>   139602 1710808 15699432
>>
>> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-dsv
>> m-neutron-dvr-ubuntu-xenial/35aa22f/logs/syslog.txt.gz
>>
>>
>>
>> On Fri, Feb 10, 2017 at 5:59 AM, Ihar Hrachyshka 
>> wrote:
>>
>>> Hi all,
>>>
>>> I noticed lately a number of job failures in neutron gate that all
>>> result in job timeouts. I describe
>>> gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
>>> timeouts happening in other jobs too.
>>>
>>> The failure mode is all operations, ./stack.sh and each tempest test
>>> take significantly more time (like 50% to 150% more, which results in
>>> job timeout triggered). An example of what I mean can be found in [1].
>>>
>>> A good run usually takes ~20 minutes to stack up devstack; then ~40
>>> minutes to pass full suite; a bad run usually takes ~30 minutes for
>>> ./stack.sh; and then 1:20h+ until it is killed due to timeout.
>>>
>>> It affects different clouds (we see rax, internap, infracloud-vanilla,
>>> ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
>>> pypi or apt mirrors because then we would see slowdown in ./stack.sh
>>> phase only.
>>>
>>> We can't be sure that CPUs are the same, and devstack does not seem to
>>> dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure
>>> if it would help anyway). Nor do we have a way to tell whether the
>>> slowness could be a result of adherence to RFC1149. ;)
>>>
>>> We discussed the matter in neutron channel [2] though couldn't figure
>>> out the culprit, or where to go next. At this point we assume it's not
>>> neutron's fault, and we hope others (infra?) may have suggestions on
>>> where to look.
>>>
>>> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-dsv
>>> m-neutron-dvr-ubuntu-xenial/35aa22f/console.html#_2017-02-09
>>> _04_47_12_874550
>>> [2] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/
>>> %23openstack-neutron.2017-02-10.log.html#t2017-02-10T04:06:01
>>>
>>> Thanks,
>>> Ihar
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Miguel Angel Ajo Pelayo
I believe those are traces left by the reference implementation of cinder
setting very high debug level on tgtd. I'm not sure if that's related or
the culprit at all (probably the culprit is a mix of things).

I wonder if we could disable such verbosity on tgtd, which certainly is
going to slow down things.

On Fri, Feb 10, 2017 at 9:07 AM, Antonio Ojea  wrote:

> I guess it's an infra issue, specifically related to the storage, or the
> network that provide the storage.
>
> If you look at the syslog file [1] , there are a lot of this entries:
>
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(2024) no more data
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(1996) found a task 71 131072 0 0
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_data_rsp_build(1136) 131072 131072 0 26214471
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: __cmd_done(1281) (nil) 0x2563000 0 131072
>
> grep tgtd syslog.txt.gz| wc
>   139602 1710808 15699432
>
> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-
> dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/syslog.txt.gz
>
>
>
> On Fri, Feb 10, 2017 at 5:59 AM, Ihar Hrachyshka 
> wrote:
>
>> Hi all,
>>
>> I noticed lately a number of job failures in neutron gate that all
>> result in job timeouts. I describe
>> gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
>> timeouts happening in other jobs too.
>>
>> The failure mode is all operations, ./stack.sh and each tempest test
>> take significantly more time (like 50% to 150% more, which results in
>> job timeout triggered). An example of what I mean can be found in [1].
>>
>> A good run usually takes ~20 minutes to stack up devstack; then ~40
>> minutes to pass full suite; a bad run usually takes ~30 minutes for
>> ./stack.sh; and then 1:20h+ until it is killed due to timeout.
>>
>> It affects different clouds (we see rax, internap, infracloud-vanilla,
>> ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
>> pypi or apt mirrors because then we would see slowdown in ./stack.sh
>> phase only.
>>
>> We can't be sure that CPUs are the same, and devstack does not seem to
>> dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure
>> if it would help anyway). Nor do we have a way to tell whether the
>> slowness could be a result of adherence to RFC1149. ;)
>>
>> We discussed the matter in neutron channel [2] though couldn't figure
>> out the culprit, or where to go next. At this point we assume it's not
>> neutron's fault, and we hope others (infra?) may have suggestions on
>> where to look.
>>
>> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-dsv
>> m-neutron-dvr-ubuntu-xenial/35aa22f/console.html#_2017-02-
>> 09_04_47_12_874550
>> [2] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/
>> %23openstack-neutron.2017-02-10.log.html#t2017-02-10T04:06:01
>>
>> Thanks,
>> Ihar
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Antonio Ojea
I guess it's an infra issue, specifically related to the storage, or the
network that provide the storage.

If you look at the syslog file [1] , there are a lot of this entries:

Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(2024) no more data
Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(1996) found a task 71 131072 0 0
Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_data_rsp_build(1136) 131072 131072 0 26214471
Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: __cmd_done(1281) (nil) 0x2563000 0 131072

grep tgtd syslog.txt.gz| wc
  139602 1710808 15699432

[1]
http://logs.openstack.org/95/429095/2/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/syslog.txt.gz



On Fri, Feb 10, 2017 at 5:59 AM, Ihar Hrachyshka 
wrote:

> Hi all,
>
> I noticed lately a number of job failures in neutron gate that all
> result in job timeouts. I describe
> gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
> timeouts happening in other jobs too.
>
> The failure mode is all operations, ./stack.sh and each tempest test
> take significantly more time (like 50% to 150% more, which results in
> job timeout triggered). An example of what I mean can be found in [1].
>
> A good run usually takes ~20 minutes to stack up devstack; then ~40
> minutes to pass full suite; a bad run usually takes ~30 minutes for
> ./stack.sh; and then 1:20h+ until it is killed due to timeout.
>
> It affects different clouds (we see rax, internap, infracloud-vanilla,
> ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
> pypi or apt mirrors because then we would see slowdown in ./stack.sh
> phase only.
>
> We can't be sure that CPUs are the same, and devstack does not seem to
> dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure
> if it would help anyway). Nor do we have a way to tell whether the
> slowness could be a result of adherence to RFC1149. ;)
>
> We discussed the matter in neutron channel [2] though couldn't figure
> out the culprit, or where to go next. At this point we assume it's not
> neutron's fault, and we hope others (infra?) may have suggestions on
> where to look.
>
> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-
> dsvm-neutron-dvr-ubuntu-xenial/35aa22f/console.html#_
> 2017-02-09_04_47_12_874550
> [2] http://eavesdrop.openstack.org/irclogs/%23openstack-
> neutron/%23openstack-neutron.2017-02-10.log.html#t2017-02-10T04:06:01
>
> Thanks,
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev