[openstack-dev] [TripleO] PLUMgrid Neutron Integration for stable/mitaka

2016-03-30 Thread Qasim Sarfraz
Hi TripleO Folks,

As the Mitaka branch was cut a few days ago, I would like to request a
backport of the PLUMgrid Neutron integration patch [1]. The patch has been
in progress for a while and adds support for enabling the PLUMgrid Neutron
plugin [3] *optionally*. It has low impact and risk since it can only be
enabled using an env file.

If we could please vote on accepting this as an exception, that would be
great.

Also, if we agree to backport the stable/mitaka patch, I would also like to
request a backport of the stable/liberty patch [2].

[1] - https://review.openstack.org/#/c/299151/
[2] - https://review.openstack.org/#/c/299119/
[3] - https://wiki.openstack.org/wiki/PLUMgrid-Neutron

-- 
Regards,
Qasim Sarfraz
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-03-30 Thread Fox, Kevin M
The main issue is one of upgradability, not stability. We all know TripleO is 
stable. TripleO can't do upgrades today; we're looking for ways to get there. So 
an "upgrade" path to Ansible isn't strictly necessary, since folks deploying 
TripleO today must assume they can't upgrade anyway.

Honestly, I have doubts that any config management system, from Puppet to Heat 
software deployments, can be coerced into doing a cloud upgrade without downtime 
and without a huge number of workarounds. You really need either a 
workflow-oriented system with global knowledge, like Ansible, or a container 
orchestration system, like Kubernetes, to ensure you don't change too many 
things at once and break things. You need to be able to run some old things and 
some new, all at the same time, and in some cases different versions/configs of 
the same service on different machines.

Thoughts on how this may be made to work with puppet/heat?

Thanks,
Kevin


From: Dan Prince
Sent: Monday, March 28, 2016 12:07:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, 
containers, and the future of TripleO

On Wed, 2016-03-23 at 07:54 -0400, Ryan Hallisey wrote:
> *Snip*
>
> >
> > Indeed, this has literally none of the benefits of the ideal Heat
> > deployment enumerated above save one: it may be entirely the wrong
> > tool
> > in every way for the job it's being asked to do, but at least it
> > is
> > still well-integrated with the rest of the infrastructure.
> >
> > Now, at the Mitaka summit we discussed the idea of a 'split
> > stack',
> > where we have one stack for the infrastructure and a separate one
> > for
> > the software deployments, so that there is no longer any tight
> > integration between infrastructure and software. Although it makes
> > me a
> > bit sad in some ways, I can certainly appreciate the merits of the
> > idea
> > as well. However, from the argument above we can deduce that if
> > this is
> > the *only* thing we do then we will end up in the very worst of
> > all
> > possible worlds: the wrong tool for the job, poorly integrated.
> > Every
> > single advantage of using Heat to deploy software will have
> > evaporated,
> > leaving only disadvantages.
> I think Heat is a very powerful tool; having done the container
> integration into the tripleo-heat-templates, I can see its appeal.
> Something I learned from that integration was that Heat is not the best
> tool for container deployment, at least right now. We were able to
> leverage the work in Kolla, but what it came down to was that we're not
> using containers or Kolla to their full potential.
>
> I did an evaluation recently of TripleO and Kolla to see what we would
> gain if the two were to combine. Let's look at some items on TripleO's
> roadmap. Split stack, as mentioned above, would be gained if TripleO
> were to adopt Kolla. TripleO holds the undercloud and Ironic. Kolla
> separates config and deployment, thereby allowing each piece of the
> stack to be decoupled. Composable roles, the ability to land services
> onto separate hosts on demand: Kolla already does this [1]. Finally,
> container integration, which is just a given :).
>
> In the near term, if TripleO were to adopt Kolla as its overcloud, it
> would gain these features and retire Heat to setting up the baremetal
> nodes and providing their IPs to Ansible. This would be great for Kolla
> too, because it would gain baremetal provisioning.
>
> Ian Main and I are currently working on a POC for this as of last
> week [2].
> It's just a simple heat template :).
>
> I think further down the road we can evaluate using Kubernetes [3].
> For now though, kolla-ansible is rock solid and is worth using for the
> overcloud.

Yeah, well TripleO Heat overclouds are rock solid too. They just aren't
using containers everywhere yet. So let's fix that.

I'm not a fan of replacing the TripleO overcloud configuration with
Kolla. I don't think there is feature parity, the architectures are
different (HA, etc.) and I don't think you could easily pull off an
upgrade from one deployment to the other (going from TripleO Heat
template deployed overcloud to Kolla deployed overcloud).

>
> Thanks!
> -Ryan
>
> [1] - https://github.com/openstack/kolla/blob/master/ansible/inventory/multinode
> [2] - https://github.com/rthallisey/kolla-heat-templates
> [3] - https://review.openstack.org/#/c/255450/
>
>


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread Guz Egor
-1
Who is going to run/support this proxy? Also keep in mind that the Kubernetes 
Service NodePort functionality 
(http://kubernetes.io/docs/user-guide/services/#type-nodeport) is not going to 
work without a public IP, and that is a very handy feature.
--- Egor
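For context, the NodePort behaviour mentioned above exposes a service on a fixed port of every node's address; a minimal illustrative manifest (the names and port number are hypothetical, not from this thread):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # hypothetical service name
spec:
  type: NodePort         # expose the service on every node's IP
  selector:
    app: web
  ports:
  - port: 80             # cluster-internal port
    nodePort: 30080      # reachable as <node-ip>:30080
```

Without a floating IP on the node, `<node-ip>:30080` is not reachable from outside the cloud, which is the point being made here.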
  From: 王华 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Wednesday, March 30, 2016 8:41 PM
 Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?
   
Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we 
need to pull docker images from Docker Hub on the nodes, floating IPs are 
needed. To reduce the usage of floating IPs, we can use a proxy: only some 
nodes have floating IPs, and the other nodes can access Docker Hub through 
the proxy.

Best Regards,
Wanghua
On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao  wrote:

 Hi Yuanying,
 +1
 I think we can add an option for whether to use floating IP addresses, since
 IP addresses are a kind of resource that is not wise to waste.
 
 On March 31, 2016 at 10:40, 大塚元央 wrote:
  
 Hi team,

 Previously, we had a reason why all nodes should have floating IPs [1]. But
 now we have LoadBalancer features for masters [2] and minions [3], and
 minions do not necessarily need to have floating IPs [4]. I think it's time
 to remove floating IPs from all nodes.

 I know we are using floating IPs in the gate to get log files, so it's not a
 good idea to remove floating IPs entirely.

 I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay
 model.

 Thoughts?

 [1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
 [2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
 [3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
 [4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

 Thanks,
 -yuanying
  
 
 
 -- 
Best Regards, Eli Qiao (乔立勇)
Intel OTC China 








Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread Qiao, Liyong
Oh, that reminds me:
MesosMonitor needs to use the master node's floating IP address directly to 
get state information.

BR, Eli(Li Yong)Qiao

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: Thursday, March 31, 2016 11:41 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?
Importance: Low

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we 
need to pull docker images from Docker Hub on the nodes, floating IPs are 
needed. To reduce the usage of floating IPs, we can use a proxy: only some 
nodes have floating IPs, and the other nodes can access Docker Hub through 
the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao wrote:
Hi Yuanying,
+1
I think we can add an option for whether to use floating IP addresses, since 
IP addresses are a kind of resource that is not wise to waste.

On March 31, 2016 at 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating IPs [4].
I think it's time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it's not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying





--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China




Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread 王华
Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we
need to pull docker images from Docker Hub on the nodes, floating IPs are
needed. To reduce the usage of floating IPs, we can use a proxy: only some
nodes have floating IPs, and the other nodes can access Docker Hub through
the proxy.

Best Regards,
Wanghua
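A sketch of the proxy approach described above, as a systemd drop-in for the docker daemon on the nodes without floating IPs (the proxy host address and port are assumptions, not from this thread):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# Route image pulls through a proxy host that does have a floating IP.
[Service]
Environment="HTTP_PROXY=http://10.0.0.5:3128"
Environment="HTTPS_PROXY=http://10.0.0.5:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After writing the file, `systemctl daemon-reload && systemctl restart docker` applies it.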

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao  wrote:

> Hi Yuanying,
> +1
> I think we can add an option for whether to use floating IP addresses, since
> IP addresses are a kind of resource that is not wise to waste.
>
>
> On March 31, 2016 at 10:40, 大塚元央 wrote:
>
> Hi team,
>
> Previously, we had a reason why all nodes should have floating IPs [1].
> But now we have LoadBalancer features for masters [2] and minions [3],
> and minions do not necessarily need to have floating IPs [4].
> I think it's time to remove floating IPs from all nodes.
>
> I know we are using floating IPs in the gate to get log files,
> so it's not a good idea to remove floating IPs entirely.
>
> I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.
>
> Thoughts?
>
> [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
> [2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
> [3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
> [4]:
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html
>
> Thanks
> -yuanying
>
>
>
>
> --
> Best Regards, Eli Qiao (乔立勇)
> Intel OTC China
>
>
>
>


Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-03-30 Thread liusheng

Another personal suggestion:

Maybe we can have a routine weekly mail thread presenting the things that 
need to be discussed or announced. The mail would also list the topics posted 
on the meeting agenda and ask the Telemetry folks whether an online IRC 
meeting is necessary; if there are only a few topics, only low-priority 
topics, or topics that can suitably be discussed asynchronously, we can 
discuss them in the mail thread.

Any thoughts?

On 2016/3/30 19:45, Julien Danjou wrote:

Hi folks,

I've recently noticed that our way of collaborating is now mostly done
asynchronously through Gerrit and the mailing list. Which is good since
it's easy for everyone to participate. Some synchronous discussion
happens in #openstack-telemetry, which is also a good thing since it's a
handy medium for that.

On the other hand, I've noted that our IRC meetings are being less and
less useful these last weeks. Most of them only ran for a few minutes,
and were essentially gordc doing his PTL smalltalk alone.

Therefore I would suggest to schedule those meetings every 2 weeks
rather than every week as it is currently.

Thoughts?

Cheers,






Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread Eli Qiao

Hi Yuanying,
+1
I think we can add an option for whether to use floating IP addresses, since
IP addresses are a kind of resource that is not wise to waste.

On March 31, 2016 at 10:40, 大塚元央 wrote:

Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3].
And minions do not necessarily need to have floating IPs [4].
I think it's time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it's not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay
model.

Thoughts?

[1]: 
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html

[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: 
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html


Thanks
-yuanying




--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China



Re: [openstack-dev] [Senlin] Asking about launching an instance in Senlin cluster

2016-03-30 Thread Qiming Teng
Hi,

Please refer to the Senlin user documentation [1] to start your journey (aka
adventure). You can also stop by the #senlin channel on freenode IRC with
any questions or suggestions.

BTW, this mailing list is intended for developers, not for usage questions.
You really want to check whether someone on the IRC channel can help you
with these questions.


[1] http://docs.openstack.org/developer/senlin/user/index.html

Regards,
  Qiming

On Wed, Mar 30, 2016 at 02:39:56PM +0700, Nguyen Huy Cuong wrote:
> Dear OpenStack Supporter,
> 
> I am Cuong Nguyen, a Vietnamese IT engineer.
> 
> Currently, I am researching Senlin to apply it in my work.
> I am trying to launch a virtual machine on a Senlin cluster.
> Could you please advise me on how to perform this action?
> If I am missing something, please let me know.
> 
> Thanks and best regards,
> 
> Cuong Nguyen.





Re: [openstack-dev] [nova] [infra] The same SRIOV / NFV CI failures missed a regression, why?

2016-03-30 Thread Robert Collins
On 26 March 2016 at 09:08, Jeremy Stanley  wrote:
> On 2016-03-25 15:20:00 -0400 (-0400), Jay Pipes wrote:
> [...]
>> 3) The upstream Infrastructure team works with the hired system
>> administrators to create a single CI system that can spawn
>> functional test jobs on the lab hardware and report results back
>> to upstream Gerrit
> [...]
>
> This bit is something the TripleO team has struggled to accomplish
> over the past several years (running a custom OpenStack deployment
> tied directly into our CI), so at a minimum we'd want to know how
> the proposed implementation would succeed in ways that they've so
> far found a significant challenge even with a larger sysadmin team
> than you estimate being required.

I think what Jay is getting at is to have the *exact same approach* that the
third-party CIs for NFV and PCI have been using, i.e. whatever
$behind-the-abstraction setup they are using, but community accessible
and visible, unlike the current behind-corporate-firewall setups.

I'm not saying this is better or worse, but it is different from the
TripleO approach of providing a Nova API endpoint for Zuul.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [TC] TC Non-candidacy

2016-03-30 Thread Robert Collins
Hi everyone - I'm not throwing my hat in the ring this cycle - I
think it's important we both share the work and bring more folks up
into the position of having-been-on-the-TC. I promise to still hold
strong opinions weakly, and to discuss those in TC meetings :).

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Nova] How about use IP instead of hostname when live Migrate.

2016-03-30 Thread Zhenyu Zheng
Hi, Nova,

Currently, we use the destination host name as the target node URI by default
(if the newly added config option live_migration_inbound_addr has not been
set). This requires resolving the hostname to an IP to perform operations
such as copying disks, and it depends on DNS; for example, we have to add
our destination host to /etc/hosts on Ubuntu.

Actually, it is not hard for Nova to get the destination node's IP address,
so why don't we pass it in the migrate data and use it as the URI for disk
migrations instead of the hostname?

Any thoughts?
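For reference, the existing per-node workaround mentioned above is to set the option explicitly in each compute node's config, which avoids DNS entirely (the address below is illustrative):

```ini
# /etc/nova/nova.conf on each compute node
[libvirt]
# IP address this host advertises for incoming live migration traffic
live_migration_inbound_addr = 192.0.2.10
```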

I have submitted a bug report for this, including an error log; please see:
https://bugs.launchpad.net/nova/+bug/1564197

Thanks,

Kevin Zheng


Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-30 Thread Matt Riedemann



On 3/30/2016 5:55 PM, Armando M. wrote:



On 29 March 2016 at 18:55, Matt Riedemann wrote:



On 3/29/2016 4:44 PM, Armando M. wrote:



On 29 March 2016 at 08:08, Matt Riedemann wrote:

 Nova has had some long-standing bugs that Sahid is trying
to fix
 here [1].

 You can create a network in neutron with
 port_security_enabled=False. However, the bug is that since
Nova
 adds the 'default' security group to the request (if none are
 specified) when allocating networks, neutron raises an
error when
 you try to create a port on that network with a 'default'
security
 group.

 Sahid's patch simply checks if the network that we're going
to use
 has port_security_enabled and if it does not, no security
groups are
 applied when creating the port (regardless of what's
requested for
 security groups, which in nova is always at least 'default').
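The logic described for Sahid's patch can be sketched roughly as follows (the function and field names here are illustrative, not Nova's actual code):

```python
def security_groups_for_port(network, requested_groups):
    """Return the security groups to apply when creating a port.

    If the network has port security disabled, neutron would reject
    any security group (including 'default'), so apply none at all.
    """
    if not network.get("port_security_enabled", True):
        return []
    # Otherwise fall back to nova's implicit 'default' group.
    return requested_groups or ["default"]
```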

 There has been a similar attempt at fixing this [2]. That
change
 simply only added the 'default' security group when allocating
 networks with nova-network. It omitted the default security
group if
 using neutron since:

 a) If the network does not have port security enabled,
we'll blow up
 trying to add a port on it with the default security group.

 b) If the network does have port security enabled, neutron will
 automatically apply a 'default' security group to the port,
nova
 doesn't need to specify one.

 The problem both Feodor's and Sahid's patches ran into was
that the
 nova REST API adds a 'default' security group to the server
create
 response when using neutron if specific security groups
weren't on
 the server create request [3].

 This is clearly wrong in the case of
 network.port_security_enabled=False. When listing security
groups
 for an instance, they are correctly not listed, but the server
 create response is still wrong.

 So the question is, how to resolve this?  A few options
come to mind:

 a) Don't return any security groups in the server create
response
 when using neutron as the backend. Given by this point
we've cast
 off to the compute which actually does the work of network
 allocation, we can't call back into the network API to see what
 security groups are being used. Since we can't be sure, don't
 provide what could be false info.

 b) Add a new method to the network API which takes the
requested
 networks from the server create request and returns a best
guess if
 security groups are going to be applied or not. In the case of
 network.port_security_enabled=False, we know a security
group won't
 be applied so the method returns False. If there is
 port_security_enabled, we return whatever security group was
 requested (or 'default'). If there are multiple networks on the
 request, we return the security groups that will be applied
to any
 networks that have port security enabled.

 Option (b) is obviously more intensive and requires hitting the
 neutron API from nova API before we respond, which we'd like to
 avoid if possible. I'm also not sure what it means for the
 auto-allocated-topology (get-me-a-network) case. With a
standard
 devstack setup, a network created via the
auto-allocated-topology
 API has port_security_enabled=True, but I also have the 'Port
 Security' extension enabled and the default public external
network
 has port_security_enabled=True. What if either of those are
False
 (or the port security extension is disabled)? Does the
 auto-allocated network inherit port_security_enabled=False?
We could
 duplicate that logic in Nova, but it's more proxy work that
we would
 like to avoid.


Port security on the external network has no role in this
because this
is not the network you'd be creating ports on. Even if it had
port-security=False, an auto-allocated network will still be created
with port security enabled (i.e. =True).

A user can obviously 

Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-30 Thread Matt Riedemann



On 3/30/2016 5:50 PM, Armando M. wrote:



On 30 March 2016 at 13:40, Sean Dague wrote:

On 03/29/2016 09:55 PM, Matt Riedemann wrote:

>
> Yup, HenryG walked me through the cases on IRC today.
>
> The more I think about option (b) above, the less I like that idea given
> how much work goes into the allocate_for_instance code in nova where
> it's already building the list of possible networks that will be used
> for creating/updating ports, we'd essentially have to duplicate that
> logic in a separate method to get an idea of what security groups would
> be applied.
>
> I'd prefer to be lazy and go with option (a) and just say nova doesn't
> return security-groups in the REST API when creating a server and
> neutron is the network API. That would require a microversion probably,
> but it would still be easy to do. I'm not sure if that's the best user
> experience though.
>

Is there a sane resource on the neutron side we could link to? Today
security_groups are returned with a name from nova, which made sense
when it was an internal structure, but makes way less sense now.

"security_groups": [
{
 "href": "",
 }
]

Where the link is to a neutron resource (and we could do a local link
for the few nova net folks) might be more appropriate.


Not that I could think of, though the extra level of indirection to
solve this issue is kind of a neat idea.


 -Sean

--
Sean Dague
http://dague.net








Yeah, not really, see what we have to do to get the list of security 
groups for a given list of instances [1].


That builds a list of ports from the list of instances, then from the 
list of ports it builds a list of security groups mapped to each port, 
and then does some cleanup after that to make it look like nova-network 
security groups in the compute API response. (As a side note, this seems 
like an area where we could do some performance optimization by not 
pulling back all of the port / security group details, only the fields 
we need.)
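The port-to-security-group mapping described above amounts to something like the following (the data shapes are illustrative, not the actual neutron_driver code):

```python
def security_groups_by_instance(ports):
    """Group the security groups found on each port by the instance
    (device_id) that owns the port."""
    result = {}
    for port in ports:
        groups = result.setdefault(port["device_id"], set())
        groups.update(port.get("security_groups", []))
    return result
```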


Would we need to link to a neutron API? Could we just provide a link 
back to 'servers//os-security-groups'?


[1] 
https://github.com/openstack/nova/blob/f8a01ccdffc13403df77148867ef3821100b5edb/nova/network/security_group/neutron_driver.py#L373


--

Thanks,

Matt Riedemann




[openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread 大塚元央
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3].
And minions do not necessarily need to have floating IPs [4].
I think it's time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it's not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying


Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-30 Thread Matt Riedemann



On 3/30/2016 9:14 PM, GHANSHYAM MANN wrote:

On Thu, Mar 31, 2016 at 10:39 AM, Matt Riedemann
 wrote:



On 3/30/2016 8:20 PM, Matt Riedemann wrote:




On 3/30/2016 7:56 PM, Matt Riedemann wrote:




On 3/30/2016 7:38 PM, Matt Riedemann wrote:




On 2/25/2016 5:31 AM, Takashi Natsume wrote:


Hi Nova and Cinder developers.

As I reported in a bug report [1], nova swap volume
(updating an attached volume) fuction does not work
in the case of non admin users by default.
(Volumes are stuck.)

Before I was working for fixing another swap volume bug [2][3].
But Ryan fixed it on the Cinder side [4].
As a result, admin users can execute swap volume function,
but it was not fixed in the case of non admin users.
So I reported the bug report [1].

In the patch[5], I tried to change the default cinder's policy
to allow non admin users to execute migrate_volume_completion API.
But it was rejected by the cinder project ('-2' was voted).

In the patch[5], it was suggested to make the swap volume API admin
only
on the Nova side.
But IMO, the swap volume function should be allowed to non admin users
because attaching a volume and detaching a volume can be performed
by non admin users.



I agree with this. DuncanT said in IRC that he didn't think non-admin
users should be using the swap-volume API in nova because it can be
problematic, but I'm not sure why, is there more history or detail
there? I'd think it shouldn't be any worse than doing a detach/attach in
quick succession (like in a CI test for example).



If migrate_volume_completion is only allowed to admin users
by default on the Cinder side, attaching a new volume and
detaching an old volume should be performed on the Nova side
when swapping volumes.



My understanding of the problem is as follows:

1. Admin-initiated volume migration in Cinder calls off to Nova to
perform the swap-volume, and then Nova calls back to Cinder's
migrate_volume_completion API. This is fine since it's an admin that
initiated this series of operations on the Cinder side (that's by
default; however, this is broken if the policy file for Cinder is changed
to allow non-admins to migrate volumes).

2. A non-admin swap-volume API call in Nova fails because Nova blindly
makes the migrate_volume_completion call to Cinder which fails with a
403 because the Cinder API policy has that as an admin action by
default.

I don't know the history around when the swap-volume API was added to
Nova, was it specifically for this volume migration scenario in Cinder?
   Are there other use cases?  Knowing those would be good to determine
if Nova should change its default policy for swap-volume, although,
again, that's only a default and can be changed per deployment so we
probably shouldn't rely on it.

Ideally we would have implemented this like the nova/neutron server
events callback API in Nova during vif plugging (nova does the vif plug
on the host then waits for neutron to update its database for the port
status and sends an event (API call) to nova to continue booting the
server). That server events API in nova is admin-only by default and
neutron is configured with admin credentials for nova to use it.
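For reference, that callback is nova's os-server-external-events API. A minimal sketch of the event body neutron sends when a port goes active; the helper name and the concrete values are illustrative, only the field keys follow the API:

```python
import json

def vif_plugged_event(server_uuid, port_id):
    """Build the body neutron POSTs to nova's os-server-external-events
    endpoint once the port status goes ACTIVE (values are illustrative)."""
    return {
        "events": [{
            "name": "network-vif-plugged",  # the event nova is waiting on
            "server_uuid": server_uuid,     # the instance being booted
            "tag": port_id,                 # which port the event is about
            "status": "completed",
        }]
    }

print(json.dumps(vif_plugged_event("server-uuid", "port-uuid")))
```

Nova matches the event by server_uuid and tag and unblocks the waiting boot; since the API is admin-only by default, neutron is configured with admin credentials to call it, as described above.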

Another option would be for Nova to handle a 403 response when calling
Cinder's migrate_volume_completion API and ignore it if we don't have an
admin context. This is pretty hacky though. It assumes that it's a
non-admin user initiating the swap-volume operation. It wouldn't be a
problem for the volume migration operation initiated in Cinder since by
default that's admin-only, so nova shouldn't get a 403 when calling
migrate_volume_completion. The trap would be if the cinder policy for
volume migration was changed to allow non-admins, but if someone did
that, they should also change the policy for migrate_volume_completion
to allow non-admin too.
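A rough sketch of that hacky option, swallowing the 403 only for non-admin callers; every name here (CinderForbidden, Context, complete_swap) is a stand-in for illustration, not actual nova or cinder code:

```python
class CinderForbidden(Exception):
    """Stand-in for a 403 Forbidden returned by the Cinder API."""

class Context:
    """Minimal request-context stand-in."""
    def __init__(self, is_admin):
        self.is_admin = is_admin

def complete_swap(cinder_callback, context):
    """Run cinder's migrate_volume_completion callback, ignoring a 403
    only when the swap was initiated by a non-admin user."""
    try:
        cinder_callback()
        return "completed"
    except CinderForbidden:
        if context.is_admin:
            raise            # an admin should never hit the policy check
        return "skipped"     # non-admin swap: cinder policy denied the call

def denied():
    raise CinderForbidden()
```

As noted above, the weak point of this approach is the assumption that a 403 under a non-admin context is always safe to ignore, which breaks if the cinder policies are customized inconsistently.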



If you have a good idea, please let me know.

[1] Cinder volumes are stuck when non admin user executes nova swap
volume API
  https://bugs.launchpad.net/cinder/+bug/1522705

[2] Cinder volume stuck in swap_volume
  https://bugs.launchpad.net/nova/+bug/1471098

[3] Fix cinder volume stuck in swap_volume
  https://review.openstack.org/#/c/207385/

[4] Fix swap_volume for case without migration
  https://review.openstack.org/#/c/247767/

[5] Enable volume owners to execute migrate_volume_completion
  https://review.openstack.org/#/c/253363/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp






__



OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





I also just checked Tempest and apparently we have no coverage for the
swap-volume API in Nova, we should fix that as part of this.



I've done some more digging. The swap-volume 

Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-30 Thread GHANSHYAM MANN
On Thu, Mar 31, 2016 at 10:39 AM, Matt Riedemann
 wrote:
>
>
> On 3/30/2016 8:20 PM, Matt Riedemann wrote:
>>
>>
>>
>> On 3/30/2016 7:56 PM, Matt Riedemann wrote:
>>>
>>>
>>>
>>> On 3/30/2016 7:38 PM, Matt Riedemann wrote:



 On 2/25/2016 5:31 AM, Takashi Natsume wrote:
>
> Hi Nova and Cinder developers.
>
> As I reported in a bug report [1], nova swap volume
> (updating an attached volume) function does not work
> in the case of non admin users by default.
> (Volumes are stuck.)
>
> Previously, I was working on fixing another swap volume bug [2][3].
> But Ryan fixed it on the Cinder side [4].
> As a result, admin users can execute swap volume function,
> but it was not fixed in the case of non admin users.
> So I reported the bug report [1].
>
> In the patch[5], I tried to change the default cinder's policy
> to allow non admin users to execute migrate_volume_completion API.
> But it was rejected by the cinder project ('-2' was voted).
>
> In the patch[5], it was suggested to make the swap volume API admin
> only
> on the Nova side.
> But IMO, the swap volume function should be allowed to non admin users
> because attaching a volume and detaching a volume can be performed
> by non admin users.


 I agree with this. DuncanT said in IRC that he didn't think non-admin
 users should be using the swap-volume API in nova because it can be
 problematic, but I'm not sure why, is there more history or detail
 there? I'd think it shouldn't be any worse than doing a detach/attach in
 quick succession (like in a CI test for example).

>
> If migrate_volume_completion is only allowed to admin users
> by default on the Cinder side, attaching a new volume and
> detaching an old volume should be performed on the Nova side
> when swapping volumes.


 My understanding of the problem is as follows:

 1. Admin-initiated volume migration in Cinder calls off to Nova to
 perform the swap-volume, and then Nova calls back to Cinder's
 migrate_volume_completion API. This is fine since it's an admin that
 initiated this series of operations on the Cinder side (that's by
 default, however, this is broken if the policy file for Cinder is changed
 to allow non-admins to migrate volumes).

 2. A non-admin swap-volume API call in Nova fails because Nova blindly
 makes the migrate_volume_completion call to Cinder which fails with a
 403 because the Cinder API policy has that as an admin action by
 default.

 I don't know the history around when the swap-volume API was added to
 Nova, was it specifically for this volume migration scenario in Cinder?
   Are there other use cases?  Knowing those would be good to determine
 if Nova should change its default policy for swap-volume, although,
 again, that's only a default and can be changed per deployment so we
 probably shouldn't rely on it.

 Ideally we would have implemented this like the nova/neutron server
 events callback API in Nova during vif plugging (nova does the vif plug
 on the host then waits for neutron to update its database for the port
 status and sends an event (API call) to nova to continue booting the
 server). That server events API in nova is admin-only by default and
 neutron is configured with admin credentials for nova to use it.

 Another option would be for Nova to handle a 403 response when calling
 Cinder's migrate_volume_completion API and ignore it if we don't have an
 admin context. This is pretty hacky though. It assumes that it's a
 non-admin user initiating the swap-volume operation. It wouldn't be a
 problem for the volume migration operation initiated in Cinder since by
 default that's admin-only, so nova shouldn't get a 403 when calling
 migrate_volume_completion. The trap would be if the cinder policy for
 volume migration was changed to allow non-admins, but if someone did
 that, they should also change the policy for migrate_volume_completion
 to allow non-admin too.

>
> If you have a good idea, please let me know.
>
> [1] Cinder volumes are stuck when non admin user executes nova swap
> volume API
>  https://bugs.launchpad.net/cinder/+bug/1522705
>
> [2] Cinder volume stuck in swap_volume
>  https://bugs.launchpad.net/nova/+bug/1471098
>
> [3] Fix cinder volume stuck in swap_volume
>  https://review.openstack.org/#/c/207385/
>
> [4] Fix swap_volume for case without migration
>  https://review.openstack.org/#/c/247767/
>
> [5] Enable volume owners to execute migrate_volume_completion
>  https://review.openstack.org/#/c/253363/
>
> Regards,
> Takashi Natsume
> NTT Software Innovation Center
> 

Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-30 Thread yongli he

Hi, mriedem

Shaohe is on vacation. The Intel SRIOV CI comments on Neutron, running
the macvtap vnic SRIOV tests plus the required Neutron smoke tests.


[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI

Regards
Yongli He




On 2016-03-30 23:21, Matt Riedemann wrote:

Intel has a few third party CIs in the third party systems wiki [1].

I was talking with Moshe Levi today about expanding coverage for 
mellanox CI in nova, today they run an SRIOV CI for vnic type 
'direct'. I'd like them to also start running their 'macvtap' CI on 
the same nova changes (that job only runs in neutron today I think).


I'm trying to see what we have for coverage on these different NFV 
configurations, and because of limited resources to run NFV CI, don't 
want to duplicate work here.


So I'm wondering what the various Intel NFV CI jobs run, specifically 
the Intel Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].


From the wiki it looks like the Intel Networking CI tests ovs-dpdk but 
only for Neutron. Could that be expanded to also test on Nova changes 
that hit a sub-set of the nova tree?


I really don't know what the latter two jobs test as far as 
configuration is concerned, the descriptions in the wikis are pretty 
empty (please update those to be more specific).


Please also include in the wiki the recheck method for each CI so I 
don't have to dig through Gerrit comments to find one.


[1] https://wiki.openstack.org/wiki/ThirdPartySystems
[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-30 Thread Matt Riedemann



On 3/30/2016 8:20 PM, Matt Riedemann wrote:



On 3/30/2016 7:56 PM, Matt Riedemann wrote:



On 3/30/2016 7:38 PM, Matt Riedemann wrote:



On 2/25/2016 5:31 AM, Takashi Natsume wrote:

Hi Nova and Cinder developers.

As I reported in a bug report [1], nova swap volume
(updating an attached volume) function does not work
in the case of non admin users by default.
(Volumes are stuck.)

Previously, I was working on fixing another swap volume bug [2][3].
But Ryan fixed it on the Cinder side [4].
As a result, admin users can execute swap volume function,
but it was not fixed in the case of non admin users.
So I reported the bug report [1].

In the patch[5], I tried to change the default cinder's policy
to allow non admin users to execute migrate_volume_completion API.
But it was rejected by the cinder project ('-2' was voted).

In the patch[5], it was suggested to make the swap volume API admin
only
on the Nova side.
But IMO, the swap volume function should be allowed to non admin users
because attaching a volume and detaching a volume can be performed
by non admin users.


I agree with this. DuncanT said in IRC that he didn't think non-admin
users should be using the swap-volume API in nova because it can be
problematic, but I'm not sure why, is there more history or detail
there? I'd think it shouldn't be any worse than doing a detach/attach in
quick succession (like in a CI test for example).



If migrate_volume_completion is only allowed to admin users
by default on the Cinder side, attaching a new volume and
detaching an old volume should be performed on the Nova side
when swapping volumes.


My understanding of the problem is as follows:

1. Admin-initiated volume migration in Cinder calls off to Nova to
perform the swap-volume, and then Nova calls back to Cinder's
migrate_volume_completion API. This is fine since it's an admin that
initiated this series of operations on the Cinder side (that's by
default, however, this is broken if the policy file for Cinder is changed
to allow non-admins to migrate volumes).

2. A non-admin swap-volume API call in Nova fails because Nova blindly
makes the migrate_volume_completion call to Cinder which fails with a
403 because the Cinder API policy has that as an admin action by
default.

I don't know the history around when the swap-volume API was added to
Nova, was it specifically for this volume migration scenario in Cinder?
  Are there other use cases?  Knowing those would be good to determine
if Nova should change its default policy for swap-volume, although,
again, that's only a default and can be changed per deployment so we
probably shouldn't rely on it.

Ideally we would have implemented this like the nova/neutron server
events callback API in Nova during vif plugging (nova does the vif plug
on the host then waits for neutron to update its database for the port
status and sends an event (API call) to nova to continue booting the
server). That server events API in nova is admin-only by default and
neutron is configured with admin credentials for nova to use it.

Another option would be for Nova to handle a 403 response when calling
Cinder's migrate_volume_completion API and ignore it if we don't have an
admin context. This is pretty hacky though. It assumes that it's a
non-admin user initiating the swap-volume operation. It wouldn't be a
problem for the volume migration operation initiated in Cinder since by
default that's admin-only, so nova shouldn't get a 403 when calling
migrate_volume_completion. The trap would be if the cinder policy for
volume migration was changed to allow non-admins, but if someone did
that, they should also change the policy for migrate_volume_completion
to allow non-admin too.



If you have a good idea, please let me know.

[1] Cinder volumes are stuck when non admin user executes nova swap
volume API
 https://bugs.launchpad.net/cinder/+bug/1522705

[2] Cinder volume stuck in swap_volume
 https://bugs.launchpad.net/nova/+bug/1471098

[3] Fix cinder volume stuck in swap_volume
 https://review.openstack.org/#/c/207385/

[4] Fix swap_volume for case without migration
 https://review.openstack.org/#/c/247767/

[5] Enable volume owners to execute migrate_volume_completion
 https://review.openstack.org/#/c/253363/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp










I also just checked Tempest and apparently we have no coverage for the
swap-volume API in Nova, we should fix that as part of this.



I've done some more digging. The swap-volume functionality was added to
nova here [1].  The cinder use of it for volume migration was added here
[2].

Looking at the cinder volume API for 

Re: [openstack-dev] [Neutron] BGP support

2016-03-30 Thread Armando M.
On 30 March 2016 at 17:07, Abhishek Raut  wrote:

> I think what Gary is talking about is BGP and the Border Gateway API
> spec[1] in L2 GW repo.
> [1] https://review.openstack.org/#/c/270786/
>

Spec [1] has nothing to do with BGP (the routing protocol) last time I
checked (note to self: I should go and have another look). We should
probably consider clarifying the confusion that stems from the use of the word
'Border' in spec [1].

A.


>
>
—Abhishek Raut
>
> From: "Tidwell, Ryan" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, March 30, 2016 at 4:52 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] BGP support
>
> Gary,
>
>
>
> I’m not sure I understand the relationship you’re drawing between BGP and
> L2 GW, could you elaborate?  The BGP code that landed in Mitaka is mostly
> geared toward the use case where you want to directly route your tenant
> networks without any NAT (ie no floating IP’s, no SNAT).  Neutron peers
> with upstream routers and announces prefixes that tenants allocate
> dynamically.  We have talked about how we could build on what was merged in
> Mitaka to support L3 VPN in the future, but to my knowledge no concrete
> plan has emerged as of yet.
>
>
>
> -Ryan
>
>
>
> *From:* Gary Kotton [mailto:gkot...@vmware.com ]
> *Sent:* Sunday, March 27, 2016 11:36 PM
> *To:* OpenStack List
> *Subject:* [openstack-dev] [Neutron] BGP support
>
>
>
> Hi,
>
> In the M cycle BGP support was added in tree. I have seen specs in the L2
> GW project for this support too. Are we planning to consolidate the
> efforts? Will the BGP code be moved from the Neutron git to the L2-GW
> project? Will a new project be created?
>
> Sorry, a little in the dark here and it would be nice if someone could
> please provide some clarity here. It would be a pity that there were
> competing efforts and my take would be that the Neutron code would be the
> single source of truth (until we decide otherwise).
>
> I think that the L2-GW project would be a very good place for that service
> code to reside. It can also have MPLS etc. support. So it may be a natural
> fit.
>
> Thanks
>
> Gary
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-30 Thread Matt Riedemann



On 3/30/2016 7:56 PM, Matt Riedemann wrote:



On 3/30/2016 7:38 PM, Matt Riedemann wrote:



On 2/25/2016 5:31 AM, Takashi Natsume wrote:

Hi Nova and Cinder developers.

As I reported in a bug report [1], nova swap volume
(updating an attached volume) function does not work
in the case of non admin users by default.
(Volumes are stuck.)

Previously, I was working on fixing another swap volume bug [2][3].
But Ryan fixed it on the Cinder side [4].
As a result, admin users can execute swap volume function,
but it was not fixed in the case of non admin users.
So I reported the bug report [1].

In the patch[5], I tried to change the default cinder's policy
to allow non admin users to execute migrate_volume_completion API.
But it was rejected by the cinder project ('-2' was voted).

In the patch[5], it was suggested to make the swap volume API admin only
on the Nova side.
But IMO, the swap volume function should be allowed to non admin users
because attaching a volume and detaching a volume can be performed
by non admin users.


I agree with this. DuncanT said in IRC that he didn't think non-admin
users should be using the swap-volume API in nova because it can be
problematic, but I'm not sure why, is there more history or detail
there? I'd think it shouldn't be any worse than doing a detach/attach in
quick succession (like in a CI test for example).



If migrate_volume_completion is only allowed to admin users
by default on the Cinder side, attaching a new volume and
detaching an old volume should be performed on the Nova side
when swapping volumes.


My understanding of the problem is as follows:

1. Admin-initiated volume migration in Cinder calls off to Nova to
perform the swap-volume, and then Nova calls back to Cinder's
migrate_volume_completion API. This is fine since it's an admin that
initiated this series of operations on the Cinder side (that's by
default, however, this is broken if the policy file for Cinder is changed
to allow non-admins to migrate volumes).

2. A non-admin swap-volume API call in Nova fails because Nova blindly
makes the migrate_volume_completion call to Cinder which fails with a
403 because the Cinder API policy has that as an admin action by default.

I don't know the history around when the swap-volume API was added to
Nova, was it specifically for this volume migration scenario in Cinder?
  Are there other use cases?  Knowing those would be good to determine
if Nova should change its default policy for swap-volume, although,
again, that's only a default and can be changed per deployment so we
probably shouldn't rely on it.

Ideally we would have implemented this like the nova/neutron server
events callback API in Nova during vif plugging (nova does the vif plug
on the host then waits for neutron to update its database for the port
status and sends an event (API call) to nova to continue booting the
server). That server events API in nova is admin-only by default and
neutron is configured with admin credentials for nova to use it.

Another option would be for Nova to handle a 403 response when calling
Cinder's migrate_volume_completion API and ignore it if we don't have an
admin context. This is pretty hacky though. It assumes that it's a
non-admin user initiating the swap-volume operation. It wouldn't be a
problem for the volume migration operation initiated in Cinder since by
default that's admin-only, so nova shouldn't get a 403 when calling
migrate_volume_completion. The trap would be if the cinder policy for
volume migration was changed to allow non-admins, but if someone did
that, they should also change the policy for migrate_volume_completion
to allow non-admin too.



If you have a good idea, please let me know.

[1] Cinder volumes are stuck when non admin user executes nova swap
volume API
 https://bugs.launchpad.net/cinder/+bug/1522705

[2] Cinder volume stuck in swap_volume
 https://bugs.launchpad.net/nova/+bug/1471098

[3] Fix cinder volume stuck in swap_volume
 https://review.openstack.org/#/c/207385/

[4] Fix swap_volume for case without migration
 https://review.openstack.org/#/c/247767/

[5] Enable volume owners to execute migrate_volume_completion
 https://review.openstack.org/#/c/253363/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp










I also just checked Tempest and apparently we have no coverage for the
swap-volume API in Nova, we should fix that as part of this.



I've done some more digging. The swap-volume functionality was added to 
nova here [1].  The cinder use of it for volume migration was added here 
[2].


Looking at the cinder volume API for migrate_volume_completion, it 
expects the source 

Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-30 Thread Matt Riedemann



On 3/30/2016 7:38 PM, Matt Riedemann wrote:



On 2/25/2016 5:31 AM, Takashi Natsume wrote:

Hi Nova and Cinder developers.

As I reported in a bug report [1], nova swap volume
(updating an attached volume) function does not work
in the case of non admin users by default.
(Volumes are stuck.)

Previously, I was working on fixing another swap volume bug [2][3].
But Ryan fixed it on the Cinder side [4].
As a result, admin users can execute swap volume function,
but it was not fixed in the case of non admin users.
So I reported the bug report [1].

In the patch[5], I tried to change the default cinder's policy
to allow non admin users to execute migrate_volume_completion API.
But it was rejected by the cinder project ('-2' was voted).

In the patch[5], it was suggested to make the swap volume API admin only
on the Nova side.
But IMO, the swap volume function should be allowed to non admin users
because attaching a volume and detaching a volume can be performed
by non admin users.


I agree with this. DuncanT said in IRC that he didn't think non-admin
users should be using the swap-volume API in nova because it can be
problematic, but I'm not sure why, is there more history or detail
there? I'd think it shouldn't be any worse than doing a detach/attach in
quick succession (like in a CI test for example).



If migrate_volume_completion is only allowed to admin users
by default on the Cinder side, attaching a new volume and
detaching an old volume should be performed on the Nova side
when swapping volumes.


My understanding of the problem is as follows:

1. Admin-initiated volume migration in Cinder calls off to Nova to
perform the swap-volume, and then Nova calls back to Cinder's
migrate_volume_completion API. This is fine since it's an admin that
initiated this series of operations on the Cinder side (that's by
default, however, this is broken if the policy file for Cinder is changed
to allow non-admins to migrate volumes).

2. A non-admin swap-volume API call in Nova fails because Nova blindly
makes the migrate_volume_completion call to Cinder which fails with a
403 because the Cinder API policy has that as an admin action by default.

I don't know the history around when the swap-volume API was added to
Nova, was it specifically for this volume migration scenario in Cinder?
  Are there other use cases?  Knowing those would be good to determine
if Nova should change its default policy for swap-volume, although,
again, that's only a default and can be changed per deployment so we
probably shouldn't rely on it.

Ideally we would have implemented this like the nova/neutron server
events callback API in Nova during vif plugging (nova does the vif plug
on the host then waits for neutron to update its database for the port
status and sends an event (API call) to nova to continue booting the
server). That server events API in nova is admin-only by default and
neutron is configured with admin credentials for nova to use it.

Another option would be for Nova to handle a 403 response when calling
Cinder's migrate_volume_completion API and ignore it if we don't have an
admin context. This is pretty hacky though. It assumes that it's a
non-admin user initiating the swap-volume operation. It wouldn't be a
problem for the volume migration operation initiated in Cinder since by
default that's admin-only, so nova shouldn't get a 403 when calling
migrate_volume_completion. The trap would be if the cinder policy for
volume migration was changed to allow non-admins, but if someone did
that, they should also change the policy for migrate_volume_completion
to allow non-admin too.



If you have a good idea, please let me know.

[1] Cinder volumes are stuck when non admin user executes nova swap
volume API
 https://bugs.launchpad.net/cinder/+bug/1522705

[2] Cinder volume stuck in swap_volume
 https://bugs.launchpad.net/nova/+bug/1471098

[3] Fix cinder volume stuck in swap_volume
 https://review.openstack.org/#/c/207385/

[4] Fix swap_volume for case without migration
 https://review.openstack.org/#/c/247767/

[5] Enable volume owners to execute migrate_volume_completion
 https://review.openstack.org/#/c/253363/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp










I also just checked Tempest and apparently we have no coverage for the 
swap-volume API in Nova, we should fix that as part of this.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [magnum] Discuss the blueprint"support-private-registry"

2016-03-30 Thread Eli Qiao
Sounds good. If the allow-user-softwareconfig bp can support configuring
a CA and it can land, then I am going to drop this
support-private-registry bp (which is insecure).
But for now, I need to use the support-private-registry patches for
my local testing.


Looking forward to the patches for allow-user-softwareconfig.

BR, Eli
On 2016-03-30 22:20, Kai Qiang Wu wrote:


I agree that support-private-registry should be secure, as an insecure
registry does not seem very useful for production use.
I also understand the point that setting up the related CA can be more
difficult than plain HTTP, but we want to know whether

https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

could address the issue and make the templates clearer to understand. If a
related patch or spec is proposed, we are glad to review it and make it better.





Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!



From: Ricardo Rocha 
To: "OpenStack Development Mailing List (not for usage questions)" 


Date: 30/03/2016 09:09 pm
Subject: Re: [openstack-dev] [magnum] Discuss the blueprint 
"support-private-registry"






Hi.

On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao  wrote:
>
> Hi Hongbin
>
> Thanks for starting this thread,
>
>
>
> I initially proposed this bp because I am in China, which is behind the
> Great Firewall, and cannot access gcr.io directly. After checking our
> cloud-init scripts, I see that
>
> lots of code *hard codes* the use of gcr.io; I personally think this is
> not a good idea. We cannot force users/customers to have internet access in
> their environment.
>
> I proposed to use an insecure registry to give customers/users (Chinese
> or whoever doesn't have gcr.io access) a chance to switch to their own
> insecure registry to deploy a
> k8s/swarm bay.
>
> For your question:
>> Is the private registry secure or insecure? If secure, how do we handle
>> the authentication secrets? If insecure, is it OK to connect a secure bay to
>> an insecure registry?
> An insecure registry should still be a 'secure' one, since the customer
> needs to set it up and make sure it is a clean one; in this case, it
> could be a private cloud.
>
>>  Should we provide an instruction for users to pre-install the private
>> registry? If not, how to verify the correctness of this feature?
>
> The simplest way to pre-install a private registry is to use an
> insecure registry; docker.io has very simple steps to start one [1].
> Otherwise, docker registry v2 also supports a TLS-enabled mode, but this
> requires giving the docker client the key and crt files, which would make
> "support-private-registry" more complex.
>
> [1] https://docs.docker.com/registry/
> [2] https://docs.docker.com/registry/deploying/
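As a concrete illustration of the insecure-registry option quoted above: on the daemon side it boils down to whitelisting the registry host. A minimal sketch, assuming docker's daemon.json config format ("insecure-registries" is docker's key; the registry address is illustrative):

```python
import json

def insecure_registry_config(registry):
    """Build the /etc/docker/daemon.json fragment that whitelists a
    plain-HTTP (insecure) registry such as 'myregistry.local:5000'."""
    return json.dumps({"insecure-registries": [registry]}, indent=2)

print(insecure_registry_config("myregistry.local:5000"))
```

Older docker daemons take the equivalent --insecure-registry command-line flag instead; either way the bay nodes' templates would need to inject this per-registry setting, which is the part Eli's patches cover.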

'support-private-registry' and 'allow-insecure-registry' sound 
different to me.


We're using an internal docker registry at CERN (v2, TLS enabled), and
have the magnum nodes setup to use it.

We just install our CA certificates in the nodes (cp to
/etc/pki/ca-trust/source/anchors/, update-ca-trust) - had to change the
HEAT templates for that, and submitted a blueprint to be able to do
similar things in a cleaner way:
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

That's all that is needed, the images are then prefixed with the
registry dns location when referenced - example:
docker.cern.ch/my-fancy-image.
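A minimal sketch of that node-setup step; the anchors path is the RHEL/Fedora one mentioned above, while the function and its arguments are illustrative, not magnum or CERN code:

```python
import shutil
import subprocess

def trust_registry_ca(ca_file, anchors="/etc/pki/ca-trust/source/anchors/",
                      run=subprocess.check_call):
    """Install a site CA certificate and rebuild the system trust store,
    so docker on the node trusts the TLS-enabled private registry."""
    shutil.copy(ca_file, anchors)   # drop the CA into the trust anchors dir
    run(["update-ca-trust"])        # rebuild the ca-trust bundles
```

On a real node this runs as root; the run hook is only there so the step can be exercised without touching the system trust store.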

Things we found on the way:
- registry v2 doesn't seem to allow anonymous pulls (you can always
add an account with read-only access everywhere, but it means you need
to always authenticate at least with this account)
https://github.com/docker/docker/issues/17317
- swarm 1.1 and k8s 1.0 allow authentication to the registry from
the client (which was good news, and it works fine), handy if you want
to push/pull with authentication.

Cheers,
 Ricardo

>
>
>
> On 2016年03月30日 07:23, Hongbin Lu wrote:
>
> Hi team,
>
>
>
> This is the item we didn’t have time to discuss in our team meeting, 
so I

> started the discussion in here.
>
>
>
> Here is the blueprint:
> 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry . 
Per
> my understanding, the goal 

Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-30 Thread Matt Riedemann



On 2/25/2016 5:31 AM, Takashi Natsume wrote:

Hi Nova and Cinder developers.

As I reported in a bug report [1], nova swap volume
(updating an attached volume) function does not work
in the case of non admin users by default.
(Volumes are stuck.)

Previously, I was working on fixing another swap volume bug [2][3].
But Ryan fixed it on the Cinder side [4].
As a result, admin users can execute swap volume function,
but it was not fixed in the case of non admin users.
So I reported the bug report [1].

In the patch[5], I tried to change the default cinder's policy
to allow non admin users to execute migrate_volume_completion API.
But it was rejected by the cinder project ('-2' was voted).

In the patch[5], it was suggested to make the swap volume API admin only
on the Nova side.
But IMO, the swap volume function should be allowed to non admin users
because attaching a volume and detaching a volume can be performed
by non admin users.


I agree with this. DuncanT said in IRC that he didn't think non-admin 
users should be using the swap-volume API in nova because it can be 
problematic, but I'm not sure why, is there more history or detail 
there? I'd think it shouldn't be any worse than doing a detach/attach in 
quick succession (like in a CI test for example).




If migrate_volume_completion is only allowed to admin users
by default on the Cinder side, attaching a new volume and
detaching an old volume should be performed on the Nova side
when swapping volumes.


My understanding of the problem is as follows:

1. Admin-initiated volume migration in Cinder calls off to Nova to 
perform the swap-volume, and then Nova calls back to Cinder's 
migrate_volume_completion API. This is fine since it's an admin that 
initiated this series of operations on the Cinder side (that's by 
default; however, this is broken if the policy file for Cinder is changed 
to allow non-admins to migrate volumes).


2. A non-admin swap-volume API call in Nova fails because Nova blindly 
makes the migrate_volume_completion call to Cinder which fails with a 
403 because the Cinder API policy has that as an admin action by default.


I don't know the history around when the swap-volume API was added to 
Nova, was it specifically for this volume migration scenario in Cinder? 
 Are there other use cases?  Knowing those would be good to determine 
if Nova should change its default policy for swap-volume, although, 
again, that's only a default and can be changed per deployment so we 
probably shouldn't rely on it.


Ideally we would have implemented this like the nova/neutron server 
events callback API in Nova during vif plugging (nova does the vif plug 
on the host then waits for neutron to update its database for the port 
status and sends an event (API call) to nova to continue booting the 
server). That server events API in nova is admin-only by default and 
neutron is configured with admin credentials for nova to use it.


Another option would be for Nova to handle a 403 response when calling 
Cinder's migrate_volume_completion API and ignore it if we don't have an 
admin context. This is pretty hacky though. It assumes that it's a 
non-admin user initiating the swap-volume operation. It wouldn't be a 
problem for the volume migration operation initiated in Cinder since by 
default that's admin-only, so nova shouldn't get a 403 when calling 
migrate_volume_completion. The trap would be if the cinder policy for 
volume migration was changed to allow non-admins, but if someone did 
that, they should also change the policy for migrate_volume_completion 
to allow non-admin too.
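The "handle a 403 and ignore it" option above can be sketched roughly like this. This is not actual Nova code: the `Forbidden` exception class and the client interface are stand-ins for whatever the real Cinder client would raise and expose.

```python
# Rough sketch of the hacky option discussed above: after swapping volumes,
# tolerate Cinder rejecting migrate_volume_completion for non-admin callers.
# Forbidden and the client interface are stand-ins, not real cinderclient API.

class Forbidden(Exception):
    """Stand-in for the HTTP 403 error a Cinder client would raise."""

def complete_swap(cinder, context, old_volume_id, new_volume_id, error=False):
    try:
        cinder.migrate_volume_completion(
            context, old_volume_id, new_volume_id, error)
    except Forbidden:
        if context.is_admin:
            # An admin-initiated Cinder migration should not hit a 403 here
            # under default policy, so surface the failure.
            raise
        # Non-admin swap-volume: default Cinder policy blocks the completion
        # call, so skip it rather than leaving the volumes stuck.
```

As noted above, this bakes in the assumption that the deployment keeps the default admin/non-admin policy split, which is exactly the trap the thread describes.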




If you have a good idea, please let me know it.

[1] Cinder volumes are stuck when non admin user executes nova swap volume API
 https://bugs.launchpad.net/cinder/+bug/1522705

[2] Cinder volume stuck in swap_volume
 https://bugs.launchpad.net/nova/+bug/1471098

[3] Fix cinder volume stuck in swap_volume
 https://review.openstack.org/#/c/207385/

[4] Fix swap_volume for case without migration
 https://review.openstack.org/#/c/247767/

[5] Enable volume owners to execute migrate_volume_completion
 https://review.openstack.org/#/c/253363/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGP support

2016-03-30 Thread Abhishek Raut
I think what Gary is talking about is BGP and the Border Gateway API spec[1] in 
L2 GW repo.
[1] https://review.openstack.org/#/c/270786/

-Abhishek Raut

From: "Tidwell, Ryan"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, March 30, 2016 at 4:52 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Neutron] BGP support

Gary,

I'm not sure I understand the relationship you're drawing between BGP and L2 
GW, could you elaborate?  The BGP code that landed in Mitaka is mostly geared 
toward the use case where you want to directly route your tenant networks 
without any NAT (i.e. no floating IPs, no SNAT).  Neutron peers with upstream 
routers and announces prefixes that tenants allocate dynamically.  We have 
talked about how we could build on what was merged in Mitaka to support L3 VPN 
in the future, but to my knowledge no concrete plan has emerged as of yet.

-Ryan

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Sunday, March 27, 2016 11:36 PM
To: OpenStack List
Subject: [openstack-dev] [Neutron] BGP support

Hi,
In the M cycle BGP support was added in tree. I have seen specs in the L2 GW 
project for this support too. Are we planning to consolidate the efforts? Will 
the BGP code be moved from the Neutron git to the L2-GW project? Will a new 
project be created?
Sorry, a little in the dark here and it would be nice if someone could please 
provide some clarity here. It would be a pity that there were competing efforts 
and my take would be that the Neutron code would be the single source of truth 
(until we decide otherwise).
I think that the L2-GW project would be a very good place for that service code 
to reside. It can also have MPLS etc. support. So it may be a natural fit.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGP support

2016-03-30 Thread Tidwell, Ryan
Gary,

I’m not sure I understand the relationship you’re drawing between BGP and L2 
GW, could you elaborate?  The BGP code that landed in Mitaka is mostly geared 
toward the use case where you want to directly route your tenant networks 
without any NAT (i.e. no floating IPs, no SNAT).  Neutron peers with upstream 
routers and announces prefixes that tenants allocate dynamically.  We have 
talked about how we could build on what was merged in Mitaka to support L3 VPN 
in the future, but to my knowledge no concrete plan has emerged as of yet.

-Ryan

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Sunday, March 27, 2016 11:36 PM
To: OpenStack List
Subject: [openstack-dev] [Neutron] BGP support

Hi,
In the M cycle BGP support was added in tree. I have seen specs in the L2 GW 
project for this support too. Are we planning to consolidate the efforts? Will 
the BGP code be moved from the Neutron git to the L2-GW project? Will a new 
project be created?
Sorry, a little in the dark here and it would be nice if someone could please 
provide some clarity here. It would be a pity that there were competing efforts 
and my take would be that the Neutron code would be the single source of truth 
(until we decide otherwise).
I think that the L2-GW project would be a very good place for that service code 
to reside. It can also have MPLS etc. support. So it may be a natural fit.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] meeting topics for 3/31/2016 networking-sfc project IRC meeting

2016-03-30 Thread Cathy Zhang
Hi everyone,

First, thanks to Louis Fourie and Paul Carver for chairing the project IRC 
meetings while I was on a business trip.
Here are some topics I have in mind for tomorrow's meeting discussion. Feel 
free to add more.


1. ODL SFC Driver to networking-sfc

2. Networking-sfc driver to Tacker

3. "Symmetric" parameter in the chain_param and let the underlying
driver handle creating a reverse chain

4. "SFC path-ID" parameter in the chain_param for supporting the
wireless Gi-Lan scenario

5. Source port specification in the FC

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] TC candidacy

2016-03-30 Thread Anita Kuno
Please accept my candidacy for election to the technical committee.

Election season is both a time of intense activity as well as
(hopefully) a time to self-evaluate and commit or recommit to a personal
vision of OpenStack. OpenStack is exactly as we define it and I hope
many folks are taking a bit of time to consider their vision as well as
the meaning of one's personal work and efforts in that vision.

I'd like to share some of my vision as well as what I find meaningful in
my daily activities.

Vision: creating cloud software that our users can use
That's a paraphrase of the OpenStack mission statement (both in its
present form and in the form that is undergoing amendment currently).
The vision that gets me up everyday (or middle of the night if it is
Tuesday and I'm chairing a third party meeting) is that I'm engaged in
creating software that makes clouds and that our users are using. Now
many of my activities may seem far removed from the creation of software
parts some days but that is the motivation that drives my actions.

Some of my daily activities: answering questions in -infra, helping
folks debug logs, reviewing project-config patches and discussing design
of project-config related concerns, attending weekly meetings of other
teams, chairing two third party meetings per week

That might not seem like much, and I used to be able to do more things
that I could list, but some days it is all I can do to read backscroll
and keep up with the conversation in channel at the same time. Our
incredible growth has moved us from being in fire-fighting mode a few
times per release, to being in fire-fighting mode once a week, to being
in fire-fighting mode all the time. I'm concerned that our wonderful,
incredible growth as an
entity is causing some teams to not have the time they need to
communicate with each other.

I'll add in here a quote I came across the other day from Viktor Frankl,
Austrian psychiatrist:
"Freedom, however, is not the last word. Freedom is only part of the
story and half of the truth. Freedom is but the negative aspect of the
whole phenomenon whose positive aspect is responsibleness. In fact,
freedom is in danger of degenerating into mere arbitrariness unless it
is lived in terms of responsibleness." Now Viktor goes on to say he
thinks the United States should build another statue, I'm not going to
hold my breath on that but I do agree with Viktor in as much as the
portion I have quoted here.

I think we as a community need to start taking a look at our
responsibilities, to ourselves as people, to our co-workers and team
members and to others in the community as well. Businesses don't decide
how we treat each other, only we can decide that. We are putting an
awful lot of pressure on each other and we don't seem to be allowing for
a pause.

Folks who are able to be effective in these circumstances have all
undertaken personal choices about how much input they can address at any
given time. Yes we need new contributors and those willing to teach and
mentor them are appreciated for their actions, but we also need to start
valuing each other again on a daily basis, some days for just continuing
to show up and do our best.

Good jazz musicians know the basic patterns of jazz but the most
important quality they have is to listen. To play with the intent of
pausing and letting someone else take centrestage for 8 or 16 bars, then
taking a turn themselves. Jazz works because of the ability to pause and
listen. I'm concerned we as a group have lost our ability to pause, and
by extension to listen.

I don't have solutions, I do have some observations. I also am effective
at leading and moderating discussions. Any solution our group finds
needs to come from the group. It is hard to have a group discussion any
more, we have moved to talking points in order to get things done and
that is sad for me to witness, as I see listening as a group as a
strength. I also consider solutions coming from the group as a strength
as well.

One last point I will make is that operators are becoming more effective
at communicating their needs, to each other and also to developers. I
think there are some good structures in place to foster that
communication and I see that as a huge benefit to OpenStack.

Thanks for reading.

Please read all the candidate platforms and please vote.

Anita Kuno. (anteaya)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][tc] TC Candidacy for Carol Barrett

2016-03-30 Thread Barrett, Carol L
Hi - I am announcing my candidacy for the OpenStack Technical Committee for the 
upcoming term.



Currently, I am a Data Center Software Planner at Intel. I have been an active 
member in the community since 2013, working with Community teams to understand 
market needs and bring that information to the Community (in the form of specs, 
blueprints, prototypes and user stories) and developing materials to speed 
planning for, and adoption of OpenStack (multi-release roadmaps, Reference 
Architectures, Customer Testimonials, Evaluation and Deployment guides, etc).



As a TC member, I will work to support our success by utilizing my areas of 
expertise. Specifically, you can expect me to:

1) Collaborate to more tightly integrate requirements from our Markets into the 
Specification, Design and Development processes across projects.

2) Establish a mechanism for tracking the implementation of specifications and 
communicating progress and plans to our Markets and ecosystem.

3) Foster discussion on the future direction for our Community and our 
software. What's the vision of the Data Center of the Future? What is needed to 
realize that? How do we make sure that OpenStack is the preferred Cloud 
Platform in this future environment?

4) Work to ensure we have a thriving community that welcomes all comers and 
supports them to become contributing members and leaders.



We have a lot of opportunity for growth and success. I would like to join the 
TC to help our Community realize this potential.



Thank you for your consideration

Carol Barrett



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [elections][tc] TC Candidacy

2016-03-30 Thread Ildikó Váncsa
Hi All,

I would like to throw my hat in the ring to be part of the OpenStack Technical
Committee.

I'm working at Ericsson, where I'm coordinating the OpenStack related activities
across the company. I started to contribute to OpenStack more than two years ago
by adding rich query functionality to Ceilometer. Since then I got nominated as
a core reviewer in the project, where beyond other contributions, I became the
constant voice of usability aspects and backward compatibility. Among other
projects I'm contributing to Nova and OpenStack Manuals as well[1].

I would like to increase diversity in the TC group, but more importantly my
goal is to add fresh blood to the forum driving this community. In my
opinion it is very important to constantly revisit items from a different
perspective and let new voices articulate their views and ideas in order to see
the big picture. Long term this can ensure that OpenStack keeps its adaptability
to continue to be a smoothly functioning ecosystem just as being a stable and
successful software package.

In the course of my current role within Ericsson I encourage and follow the
contribution activities of my colleagues, by which I have a good oversight on
how OpenStack works as a community. Driven by my colleagues' and my own
experiences I see cross-project collaboration as an area, where notwithstanding
the already ongoing great work there is still room for further improvements. As
part of the TC I would like to participate in driving cross-project activities,
where I plan to focus on the on-boarding aspects. In my opinion it is very
important to spend time educating newcomers and helping to rapidly build up
competencies regarding processes likewise in the realm of technology and
coding abilities and guidelines. As part of this mission I would continue to
invest energy in bringing sessions on stage at the OpenStack Summits[2][3] to
share the lessons I learnt during the past two and a half years. It is key to
explain to people how the community really works beyond the well-documented
processes and tools that we are using every day. The devil is in the details, as
always.

I mentioned adaptability earlier, which I feel being a very important aspect as
it outlines the ability to constantly change when needed. In my view operating
OpenStack as Big Tent is a good approach, although I think it is important to
check the principles and review criteria to evaluate the new project
candidates to rationalize the current review process. I also got the impression
that the idea behind the tags is being devalued due to proliferation. I have
similar feelings regarding governance on project level. In my view it is very
important to give guidelines on how the teams can efficiently operate, but it is
time consuming and destroys the focus of the TC to spend much energy on giving
detailed descriptions and regulations on how these groups should work within
OpenStack. Among others I would like to revisit the aforementioned items to make
the responsibilities and activities of the TC more clear and make the group even
more efficient.

As an employee of a Telecom vendor I would like to bring in the NFV mindset to
OpenStack to be part of the daily discussions as opposed to being a mysterious
abbreviation that introduces competing priorities. During the past few years I
saw and experienced the difficulties and even pain of trying to find the common
language between the two groups and making the contributions happen. I think it
is very important to help the Telecom industry in the transformation to fit
into the cloud terminology and the open source era. By this process OpenStack
can leverage the advantages that this completely different set of priorities
and requirements could offer, such as increased stability and advanced
networking functionality.

I'm involved in OPNFV[4] and it is part of my mission to find the connection
points between the two communities[5] to build a large ecosystem which fits
OpenStack's current priorities[6] in the sense of actively supporting and being
the foundation for NFV. As a benefit for us we can use the test environment and
test frameworks to see how OpenStack operates in a fully integrated environment
on top of different hardware platforms. OPNFV brings integration and functional
testing to a different level which is an important feedback to our community and
a good checkpoint to our users. As more and more Telecom operators are looking
at OpenStack as a main building block of their systems, I would like to serve as
a representative in the technical driving force of OpenStack, who can operate as
a translator to build a common roadmap.

Thank you for reading through my thoughts. It is an honor to be part of
OpenStack, which is why I would like to take part in bringing this community
further to provide a package that can serve the whole industry and which is
backed up by a growing and smoothly functioning ecosystem.

Thank you for your consideration.

Best Regards,

Re: [openstack-dev] [lbaas] [octavia] Proposing Bharath Munirajulu as Octavia Core

2016-03-30 Thread Brandon Logan
+1

On Wed, 2016-03-30 at 13:56 -0700, Michael Johnson wrote:
> I would like to nominate Bharath Munirajulu (bharathm) as an OpenStack
> Octavia core reviewer.
> His contributions [1] are in line with other cores and he has been an
> active member of our community.  I have been impressed with the
> insight and quality of his reviews.
> 
> Current Octavia cores, please vote by replying to this e-mail.
> 
> Michael
> 
> 
> [1] http://stackalytics.com/report/contribution/octavia/90
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-30 Thread Armando M.
On 29 March 2016 at 18:55, Matt Riedemann 
wrote:

>
>
> On 3/29/2016 4:44 PM, Armando M. wrote:
>
>>
>>
>> On 29 March 2016 at 08:08, Matt Riedemann wrote:
>>
>> Nova has had some long-standing bugs that Sahid is trying to fix
>> here [1].
>>
>> You can create a network in neutron with
>> port_security_enabled=False. However, the bug is that since Nova
>> adds the 'default' security group to the request (if none are
>> specified) when allocating networks, neutron raises an error when
>> you try to create a port on that network with a 'default' security
>> group.
>>
>> Sahid's patch simply checks if the network that we're going to use
>> has port_security_enabled and if it does not, no security groups are
>> applied when creating the port (regardless of what's requested for
>> security groups, which in nova is always at least 'default').
>>
>> There has been a similar attempt at fixing this [2]. That change
>> simply only added the 'default' security group when allocating
>> networks with nova-network. It omitted the default security group if
>> using neutron since:
>>
>> a) If the network does not have port security enabled, we'll blow up
>> trying to add a port on it with the default security group.
>>
>> b) If the network does have port security enabled, neutron will
>> automatically apply a 'default' security group to the port, nova
>> doesn't need to specify one.
>>
>> The problem both Feodor's and Sahid's patches ran into was that the
>> nova REST API adds a 'default' security group to the server create
>> response when using neutron if specific security groups weren't on
>> the server create request [3].
>>
>> This is clearly wrong in the case of
>> network.port_security_enabled=False. When listing security groups
>> for an instance, they are correctly not listed, but the server
>> create response is still wrong.
>>
>> So the question is, how to resolve this?  A few options come to mind:
>>
>> a) Don't return any security groups in the server create response
>> when using neutron as the backend. Given by this point we've cast
>> off to the compute which actually does the work of network
>> allocation, we can't call back into the network API to see what
>> security groups are being used. Since we can't be sure, don't
>> provide what could be false info.
>>
>> b) Add a new method to the network API which takes the requested
>> networks from the server create request and returns a best guess if
>> security groups are going to be applied or not. In the case of
>> network.port_security_enabled=False, we know a security group won't
>> be applied so the method returns False. If there is
>> port_security_enabled, we return whatever security group was
>> requested (or 'default'). If there are multiple networks on the
>> request, we return the security groups that will be applied to any
>> networks that have port security enabled.
>>
>> Option (b) is obviously more intensive and requires hitting the
>> neutron API from nova API before we respond, which we'd like to
>> avoid if possible. I'm also not sure what it means for the
>> auto-allocated-topology (get-me-a-network) case. With a standard
>> devstack setup, a network created via the auto-allocated-topology
>> API has port_security_enabled=True, but I also have the 'Port
>> Security' extension enabled and the default public external network
>> has port_security_enabled=True. What if either of those are False
>> (or the port security extension is disabled)? Does the
>> auto-allocated network inherit port_security_enabled=False? We could
>> duplicate that logic in Nova, but it's more proxy work that we would
>> like to avoid.
>>
>>
>> Port security on the external network has no role in this because this
>> is not the network you'd be creating ports on. Even if it had
>> port-security=False, an auto-allocated network will still be created
>> with port security enabled (i.e. =True).
>>
>> A user can obviously change that later on.
>>
>>
>> [1] https://review.openstack.org/#/c/284095/
>> [2] https://review.openstack.org/#/c/173204/
>> [3]
>>
>> https://github.com/openstack/nova/blob/f8a01ccdffc13403df77148867ef3821100b5edb/nova/api/openstack/compute/security_groups.py#L472-L475
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>

Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-30 Thread Armando M.
On 30 March 2016 at 13:40, Sean Dague  wrote:

> On 03/29/2016 09:55 PM, Matt Riedemann wrote:
> 
> >
> > Yup, HenryG walked me through the cases on IRC today.
> >
> > The more I think about option (b) above, the less I like that idea given
> > how much work goes into the allocate_for_instance code in nova where
> > it's already building the list of possible networks that will be used
> > for creating/updating ports, we'd essentially have to duplicate that
> > logic in a separate method to get an idea of what security groups would
> > be applied.
> >
> > I'd prefer to be lazy and go with option (a) and just say nova doesn't
> > return security-groups in the REST API when creating a server and
> > neutron is the network API. That would require a microversion probably,
> > but it would still be easy to do. I'm not sure if that's the best user
> > experience though.
> >
>
> Is there a sane resource on the neutron side we could link to? Today
> security_groups are returned with a name from nova, which made sense
> when it was an internal structure, but makes way less sense now.
>
> "security_groups": [
>{
> "href": "",
> }
> ]
>
> Where the link is to a neutron resource (and we could do a local link
> for the few nova net folks) might be more appropriate.
>

Not that I could think of, though the extra level of indirection to solve
this issue is kind of a neat idea.


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting cancelled this week

2016-03-30 Thread Christopher Aedo
Due to a very light agenda (and everyone being pretty busy at the
moment) we're going to skip the meeting this week.

Our next meeting is scheduled for April 7th, the agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] Proposing Bharath Munirajulu as Octavia Core

2016-03-30 Thread Eichberger, German
+1

Great work Bharath!!



On 3/30/16, 1:56 PM, "Michael Johnson"  wrote:

>I would like to nominate Bharath Munirajulu (bharathm) as an OpenStack
>Octavia core reviewer.
>His contributions [1] are in line with other cores and he has been an
>active member of our community.  I have been impressed with the
>insight and quality of his reviews.
>
>Current Octavia cores, please vote by replying to this e-mail.
>
>Michael
>
>
>[1] http://stackalytics.com/report/contribution/octavia/90
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] Proposing Bharath Munirajulu as Octavia Core

2016-03-30 Thread Lingxian Kong
not a core reviewer, but wanna give my +1

On Thu, Mar 31, 2016 at 9:56 AM, Michael Johnson  wrote:
> I would like to nominate Bharath Munirajulu (bharathm) as an OpenStack
> Octavia core reviewer.
> His contributions [1] are in line with other cores and he has been an
> active member of our community.  I have been impressed with the
> insight and quality of his reviews.
>
> Current Octavia cores, please vote by replying to this e-mail.
>
> Michael
>
>
> [1] http://stackalytics.com/report/contribution/octavia/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Matt Riedemann



On 3/30/2016 2:26 PM, Sean Dague wrote:

During the Nova API meeting we had some conversations about priorities,
but this feels like the thing where a mailing list conversation is more
inclusive to get agreement on things. I think we need to remain focused
on what API related work will have the highest impact on our users.
(some brainstorming happened here -
https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
completely straw man proposal on priorities for the Newton cycle.

* Top Priority Items *

1. API Reference docs in RST which include microversions (drivers: me,
auggy, annegentle) -
https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
2. Discoverable Policy (drivers: laski, claudio) -
https://review.openstack.org/#/c/289405/
3. ?? (TBD)

I think realistically 3 priority items is about what we can sustain, and
I'd like to keep it there. Item #3 has a couple of options.

* Lower Priority Background Work *

- POC of Gabbi for additional API validation
- Microversion Testing in Tempest (underway)
- Some of the API WG recommendations

* Things we shouldn't do this cycle *

- Tasks API - not because it's not a good idea, but because I think
until we get ~3 core team members agreeing that it's their #1 item
for the cycle, it's just not going to get enough energy to go somewhere
useful. There are some other things on deck that we just need to clear
first.
- API wg changes for error codes - we should fix that eventually, but
that should come as a single microversion to minimize churn. That's
coordination we don't really have the bandwidth for this cycle.

* Things we need to decide this cycle *

- When are we deleting the legacy v2 code base in tree?


As discussed in IRC today, first steps (I think) are removing the 
deprecated 'osapi_v21.enabled' option in newton so v2.1 can't be disabled.


And we need to think about logging a warning if you're using v2.0.

That sets a timetable for removal of v2.0 in the O release at the earliest.

We also talked about fixing bugs in v2.0 today and I plan on putting up 
a patch to the nova devref policy section about bug fixes for v2.0. 
Basically latent bugs won't be fixed unless they are blocking some other 
effort (NovaObjectDictCompat removal comes to mind) or fix a security 
issue. And we shouldn't introduce new 500 errors in the v2.0 API.




* Final priority item *

For the #3 priority item one of the things that came up today was the
structured errors spec by the API working group. That would be really
nice... but in some ways really does need the entire new API reference
docs in place. And maybe is better in O.

One other issue that we've been blocking on for a while has been
Capabilities discovery. Some API proposed adds like live resize have
been conceptually blocked behind this one. Once upon a time there was a
theory that JSON Home was a thing, and would slice our bread and julienne
our fries, and solve all this. But it's a big thing to get right, and
JSON Home has an unclear future. And, we could serve our users pretty
well with a much simpler take on capabilities. For instance

  GET /servers/{id}/capabilities

{
  "capabilities": {
    "resize": true,
    "live-resize": true,
    "live-migrate": false,
    ...
  }
}

Effectively an actions map for servers. Lots of details would have to be
sorted out on this one, clearly needs a spec, however I think that this
would help unstick some other things people would like in Nova, without
making our interop story terrible. This would need a driver for this effort.
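To make the straw man concrete, here is a rough sketch of how such a capabilities map could be computed. Everything in it is hypothetical: the endpoint is only a proposal, the policy rule names are made up for illustration, and the enforcer is a stand-in for whatever the discoverable-policy work produces.

```python
# Illustrative only: rule names and the enforcer are hypothetical.
# The idea is to evaluate the policy checks that gate each server action
# and expose the results as a simple boolean map.

def build_capabilities(enforce, context, server):
    """Map each proposed capability to the outcome of its policy check."""
    rules = {
        "resize": "os_compute_api:servers:resize",
        "live-resize": "os_compute_api:servers:live_resize",
        "live-migrate": "os_compute_api:os-migrate-server:migrate_live",
    }
    return {"capabilities": {cap: bool(enforce(context, rule, server))
                             for cap, rule in rules.items()}}

# A toy "enforcer" that only allows admins to live-migrate:
def fake_enforce(context, rule, target):
    if rule == "os_compute_api:os-migrate-server:migrate_live":
        return context.get("is_admin", False)
    return True

caps = build_capabilities(fake_enforce, {"is_admin": False}, {"id": "abc"})
```

The point of the sketch is just that a capabilities response is a pure function of policy (and possibly driver support), which is why it chains back to the discoverable-policy work.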

Every thing here is up for discussion. This is a summary of some of what
was in the meeting, plus some of my own thoughts. Please chime in on any
of this. It would be good to be in general agreement pre-summit, so we
could focus conversation there more on the hows for getting things done.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Andrew Laski


On Wed, Mar 30, 2016, at 04:47 PM, Adam Young wrote:
> On 03/30/2016 04:16 PM, Andrew Laski wrote:
> >
> > On Wed, Mar 30, 2016, at 03:54 PM, Matt Riedemann wrote:
> >>
> >> On 3/30/2016 2:42 PM, Andrew Laski wrote:
> >>>
> >>> On Wed, Mar 30, 2016, at 03:26 PM, Sean Dague wrote:
>  During the Nova API meeting we had some conversations about priorities,
>  but this feels like the thing where a mailing list conversation is more
>  inclusive to get agreement on things. I think we need to remain focused
>  on what API related work will have the highest impact on our users.
>  (some brain storming was here -
>  https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
>  completely straw man proposal on priorities for the Newton cycle.
> 
>  * Top Priority Items *
> 
>  1. API Reference docs in RST which include microversions (drivers: me,
>  auggy, annegentle) -
>  https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
>  2. Discoverable Policy (drivers: laski, claudio) -
> >> Selfishly I'd like Laski to be as focused on cells v2 as possible, but
> >> he does have a spec up related to this.
> > At the midcycle I agreed to look into the oslo.policy work involved with
> > this because I have some experience there. I wasn't planning to be much
> > involved beyond that, and Claudiu has a spec up for the API side of it.
> > But in my mind there's a chain backwards from capabilities API to
> > discoverable policy and I want to move the oslo.policy work ahead
> > quickly if I can to unblock that.
> 
> There is a CLI that does something like what you want already:
> 
> https://review.openstack.org/#/c/170978/
> 
> You basically want a server based version of that that returns all the 
> "true" values.

Exactly.

The shortcoming of the CLI is that not all policies are guaranteed to be
defined in a policy.json file. It's entirely possible for there to be a
policy check with no definition anywhere which will just fall back to a
defined default rule. A big part of my policy
proposal(https://review.openstack.org/#/c/290155/) is to require
policies to be registered in code like configuration options are. This
allows for an exhaustive check against all used policies. And will allow
for policy file generation from services.
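As a loose illustration of that registration idea, here is a stdlib-only mock of the concept (this is not the oslo.policy API, just the shape of the proposal: every policy a service uses gets registered in code with a default, so an exhaustive list and a generated sample file are always available even when policy.json omits entries).

```python
# Mock of the "policies registered in code" proposal -- not oslo.policy.
import json

_registry = {}

def register_default(name, check_str, description=""):
    """Register a policy and its default check string, like a config opt."""
    _registry[name] = {"check": check_str, "description": description}

def generate_sample():
    """Emit a policy file covering every registered rule's default."""
    return json.dumps({n: r["check"] for n, r in sorted(_registry.items())},
                      indent=4)

register_default("os_compute_api:servers:create", "rule:admin_or_owner")
register_default("os_compute_api:servers:delete", "rule:admin_or_owner")
sample = generate_sample()
```

With something like this in place, a "show me all effective policies" query can walk the registry instead of hoping the operator's policy.json is complete.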


> 
> >
> >
>  https://review.openstack.org/#/c/289405/
>  3. ?? (TBD)
> 
>  I think realistically 3 priority items is about what we can sustain, and
>  I'd like to keep it there. Item #3 has a couple of options.
> >> Agree to keep the priority list as small as possible, because this is
> >> just a part of our overall backlog of priorities.
> >>
>  * Lower Priority Background Work *
> 
>  - POC of Gabbi for additional API validation
> >> I'm assuming cdent would be driving this, and he's also working on the
> >> resource providers stuff for the scheduler, but might be a decent side
> >> project for him to stay sane.
> >>
>  - Microversion Testing in Tempest (underway)
> >> How much coverage do we have today? This could be like novaclient where
> >> people just start hacking on adding tests for each microversion
> >> (assuming gmann would be working on this).
> >>
>  - Some of the API WG recommendations
> 
>  * Things we shouldn't do this cycle *
> 
>  - Tasks API - not because it's not a good idea, but because I think
>  until we get ~ 3 core team members agreed that it's their number #1 item
>  for the cycle, it's just not going to get enough energy to go somewhere
>  useful. There are some other things on deck that we just need to clear
>  first.
> >>> Agreed. I would love to drive this forward but there are just too many
> >>> other areas to focus on right now.
> >> +1
> >>
> >>>
>  - API wg changes for error codes - we should fix that eventually, but
>  that should come as a single microversion to minimize churn. That's
>  coordination we don't really have the bandwidth for this cycle.
> >> +1
> >>
>  * Things we need to decide this cycle *
> 
>  - When are we deleting the legacy v2 code base in tree?
> >> Do you have some high-level milestone thoughts here? I thought there was
> >> talk about not even thinking about this until Barcelona?
> >>
>  * Final priority item *
> 
>  For the #3 priority item one of the things that came up today was the
>  structured errors spec by the API working group. That would be really
>  nice... but in some ways really does need the entire new API reference
>  docs in place. And maybe is better in O.
> 
>  One other issue that we've been blocking on for a while has been
>  Capabilities discovery. Some API proposed adds like live resize have
>  been conceptually blocked behind this one. Once upon a time there was a
>  theory that JSON Home was a thing, and would slice our bread and julienne
>  our fries, and solve all this. But it's a big thing to get right, 

Re: [openstack-dev] [tripleo] Policy Management and distribution.

2016-03-30 Thread Steven Hardy
On Tue, Mar 29, 2016 at 07:37:02PM -0400, Emilien Macchi wrote:
> On Tue, Mar 29, 2016 at 6:18 PM, Adam Young  wrote:
> > Keystone has a policy API, but no one uses it.  It allows us to associate a
> > policy file with an endpoint. Upload a json blob, it gets a uuid.  Associate
> > the UUID with the endpoint.  It could also be associated with a service, and
> > then it is associated with all endpoint for that service unless explicitly
> > overwritten.
> >
> > Assuming all of the puppet modules for all of the services support managing
> > the policy files, how hard would it be to synchronize between the database
> > and what we distribute to the nodes?  Something along the lines of:  if I
> > put a file in this directory, I want puppet to use it the next time I do a
> > redeploy, and I also want it uploaded to Keystone and associate with the
> > endpoint?
> >
> > As a start, I want to be able to replace the baseline policy.json file with
> > the cloudsample.  We ship both.
> >
> >
> > We have policy.pp in Puppet Keystone for this use case.
> > In TripleO, we could create a parameter that users would use to
> > configure specific policies. It would be a hash and puppet will
> > manage the policies.  This would handle the Keystone case, but we need
> > to customize all of the policy files, for all of the services, for
> > example, to add the is_admin_project check.  I'd like to get this mechanism
> > in place before I start that work, so I can easily test changes.
> 
> ++
> 
> the keystone::policy (and generally neutron::policy, nova::policy,
> etc...) class is pretty robust:
> 
> * it creates new policies or updates existing ones.
> * it does not delete old policies that are already in the file.
> * it notifies the keystone service on every change.
> 
> Please use it and let us know if we need to change something in
> puppet-keystone, that would help you to deploy the use-case.

I tried driving the heat::policy class today via our existing extraconfig
parameter, it works like this:

parameter_defaults:
  controllerExtraConfig:
controller_classes:
  - heat::policy
heat::policy::policies:
  context_is_admin:
key: context_is_admin
value: foo:bar

Just include this in an environment file, and the policy.json for heat will
be updated by puppet.

The only issue I found was that the JSON formatting is lost when puppet
renders the file: it becomes a giant blob (newlines/formatting gone).
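Since the rendered file is still valid JSON, the formatting can be restored after the fact, e.g. with `python -m json.tool` or a couple of lines like the following (the sample rules are illustrative; re-serializing does not change the policy content):

```python
# Restore human-readable formatting to a collapsed policy.json blob.
import json

def reformat_policy(blob):
    """Re-indent a one-line policy.json blob without changing its content."""
    return json.dumps(json.loads(blob), indent=4, sort_keys=True) + "\n"

# Example blob like the one puppet writes out (rules illustrative):
blob = '{"context_is_admin": "foo:bar", "deny_stack_user": "not role:heat_stack_user"}'
pretty = reformat_policy(blob)
```

Cosmetic only, of course, but it makes diffing the deployed file against the shipped one much easier.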

Other than that it works well, so I assume the other interfaces will work
similarly for all other services.

The other approach I tried was using the new DeployArtifacts interface,
which isn't yet documented, but was proposed in this spec:

https://github.com/openstack/tripleo-specs/blob/master/specs/mitaka/puppet-modules-deployment-via-swift.rst

This was originally envisaged as a method to deploy updated puppet modules
(when not delivered via package updates), but it actually works great as a
general purpose deploy-files mechanism too:

Here's how it works:

1. Clone https://github.com/openstack/tripleo-common (not sure if the
script needed is packaged yet)

2. mkdir -p policyfiles/etc/heat/ && cd policyfiles

3. vim etc/heat/policy.json (copy the initial file from somewhere)

4. tar -cvzf policy123.tgz etc # Note the tarball is expanded from "/" so
ensure path prefixes are as required

5. ./tripleo-common/scripts/upload-swift-artifacts -f
policyfiles/policy123.tgz

This creates a special environment file here:

cat /home/stack/.tripleo/environments/deployment-artifacts.yaml

*NOTE* this will *always* be used now until you delete it, you don't need
to explicitly specify it via "-e"

I guess there are pros/cons to both methods, and neither uses keystone as
the policy store - I'm not sure how important that is vs having them stored
"somewhere", e.g it seems like they could just as well be stored in a git
repo or swift on the undercloud; there's no huge benefit to storing them
in the overcloud unless other services support automatically consuming
them?
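For what it's worth, steps 2-4 above can also be done programmatically; here is a sketch using the stdlib (paths are the illustrative ones from the steps, and remember the tarball is extracted from "/" on the nodes, so entries must carry the full etc/... prefix):

```python
# Build the gzipped policy artifact in memory (equivalent of steps 2-4).
import io
import json
import tarfile

def build_policy_artifact(policy, arcname="etc/heat/policy.json"):
    """Return a gzipped tarball (as bytes) containing one policy file."""
    payload = json.dumps(policy, indent=4).encode()
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        info = tarfile.TarInfo(name=arcname)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

data = build_policy_artifact({"context_is_admin": "foo:bar"})
# Write `data` to policy123.tgz, then hand it to upload-swift-artifacts
# as in step 5.
```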

Cheers,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Anita Kuno
On 03/30/2016 05:24 AM, Thierry Carrez wrote:
> See new version attached. This will be pushed to the official schedule
> tomorrow, so please reply here today if you see any major issue with it.
> 
> Changes:
> - swapped the release and stable slots to accommodate mriedem
> - moved Astara fishbowl to Thu morning to avoid conflict with Tacker
> - moved OpenStackClient, Stable and Release to a larger fishbowl room
> 
> Cheers,
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Thank you to you and the scheduling team for taking care of all of us.

Thanks Thierry,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [lbaas] [octavia] Proposing Bharath Munirajulu as Octavia Core

2016-03-30 Thread Michael Johnson
I would like to nominate Bharath Munirajulu (bharathm) as an OpenStack
Octavia core reviewer.
His contributions [1] are in line with other cores and he has been an
active member of our community.  I have been impressed with the
insight and quality of his reviews.

Current Octavia cores, please vote by replying to this e-mail.

Michael


[1] http://stackalytics.com/report/contribution/octavia/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

2016-03-30 Thread McLellan, Steven
Hi Hiroyuki,

It would be worth being certain about what access we have to keypairs before 
committing to a plugin; if we cannot retrieve the initial list or receive 
notifications on new keypairs, we likely can't index them at all. If we have 
partial access we may be able to make a decision on whether it will be good 
enough. Please feel free to get in touch in IRC (#openstack-searchlight) if 
that would be useful.
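One concrete thing worth checking first: Nova's os-keypairs API grew an admin-only user_id filter at microversion 2.10, which is exactly the kind of access an initial listing would need. A sketch of the request construction (endpoint and IDs are illustrative; no request is actually sent here):

```python
# Sketch of the os-keypairs listing request Searchlight would need.
# The user_id query parameter is only honoured for admins from Nova
# microversion 2.10 onward; older clouds return the caller's own keys.
from urllib.parse import urlencode

def keypair_list_request(endpoint, user_id=None, microversion="2.10"):
    """Build the URL and headers for listing keypairs, optionally per user."""
    url = endpoint.rstrip("/") + "/os-keypairs"
    if user_id:
        url += "?" + urlencode({"user_id": user_id})
    headers = {"X-OpenStack-Nova-API-Version": microversion}
    return url, headers

url, headers = keypair_list_request("http://nova.example/v2.1", "abc123")
```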

Steve

From: Hiroyuki Eguchi
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, March 29, 2016 at 7:13 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi Lakshmi,

Thank you for your advice.
I'm trying to index the public keys.
I'm gonna try to discuss in searchlight-specs before starting development.

Thanks
Hiroyuki.



From: Sampath, Lakshmi [lakshmi.samp...@hpe.com]
Sent: March 29, 2016 2:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi Hiroyuki,

For this plugin, what data are you indexing in Elasticsearch? I mean, what do you 
expect users to search on and retrieve? Are you trying to index the public keys?
Talking directly to the DB is not advisable, but before that we need to discuss 
what data is being indexed and the security implications of it (RBAC) for users 
who can/cannot access it.

I would suggest starting a spec in openstack/searchlight-specs under newton for 
reviewing/feedback.
https://github.com/openstack/searchlight-specs.git


Thanks
Lakshmi.

From: Hiroyuki Eguchi [mailto:h-egu...@az.jp.nec.com]
Sent: Sunday, March 27, 2016 10:26 PM
To: OpenStack Development Mailing List (not for usage questions) 
[openstack-dev@lists.openstack.org]
Subject: [openstack-dev] [searchlight] Add Nova Keypair Plugin

Hi.

I am developing this plugin.
https://blueprints.launchpad.net/searchlight/+spec/nova-keypair-plugin

However, I faced the problem that an admin user cannot retrieve keypair 
information created by another user.
So it is impossible to sync keypairs between the OpenStack DB and Elasticsearch 
unless we connect to the OpenStack DB directly.
Are there any suggestions to resolve this?

thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Hongbin Lu
Hi Thierry,

After discussing with the Kuryr PTL (Gal), we agreed to have a shared fishbowl 
session between Magnum and Kuryr. I would like to schedule it for Thursday 11:50 
- 12:30 for now (by using the original Magnum fishbowl slot). We might adjust 
the time later if needed. Thanks.

Best regards,
Hongbin

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: March-30-16 5:25 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Austin Design Summit track layout

See new version attached. This will be pushed to the official schedule 
tomorrow, so please reply here today if you see any major issue with it.

Changes:
- swapped the release and stable slots to accommodate mriedem
- moved Astara fishbowl to Thu morning to avoid conflict with Tacker
- moved OpenStackClient, Stable and Release to a larger fishbowl room

Cheers,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Adam Young

On 03/30/2016 04:16 PM, Andrew Laski wrote:


On Wed, Mar 30, 2016, at 03:54 PM, Matt Riedemann wrote:


On 3/30/2016 2:42 PM, Andrew Laski wrote:


On Wed, Mar 30, 2016, at 03:26 PM, Sean Dague wrote:

During the Nova API meeting we had some conversations about priorities,
but this feels like the thing where a mailing list conversation is more
inclusive to get agreement on things. I think we need to remain focused
on what API related work will have the highest impact on our users.
(some brain storming was here -
https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
completely straw man proposal on priorities for the Newton cycle.

* Top Priority Items *

1. API Reference docs in RST which include microversions (drivers: me,
auggy, annegentle) -
https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
2. Discoverable Policy (drivers: laski, claudio) -

Selfishly I'd like Laski to be as focused on cells v2 as possible, but
he does have a spec up related to this.

At the midcycle I agreed to look into the oslo.policy work involved with
this because I have some experience there. I wasn't planning to be much
involved beyond that, and Claudiu has a spec up for the API side of it.
But in my mind there's a chain backwards from capabilities API to
discoverable policy and I want to move the oslo.policy work ahead
quickly if I can to unblock that.


There is a CLI that does something like what you want already:

https://review.openstack.org/#/c/170978/

You basically want a server based version of that that returns all the 
"true" values.






https://review.openstack.org/#/c/289405/
3. ?? (TBD)

I think realistically 3 priority items is about what we can sustain, and
I'd like to keep it there. Item #3 has a couple of options.

Agree to keep the priority list as small as possible, because this is
just a part of our overall backlog of priorities.


* Lower Priority Background Work *

- POC of Gabbi for additional API validation

I'm assuming cdent would be driving this, and he's also working on the
resource providers stuff for the scheduler, but might be a decent side
project for him to stay sane.


- Microversion Testing in Tempest (underway)

How much coverage do we have today? This could be like novaclient where
people just start hacking on adding tests for each microversion
(assuming gmann would be working on this).


- Some of the API WG recommendations

* Things we shouldn't do this cycle *

- Tasks API - not because it's not a good idea, but because I think
until we get ~ 3 core team members agreed that it's their number #1 item
for the cycle, it's just not going to get enough energy to go somewhere
useful. There are some other things on deck that we just need to clear
first.

Agreed. I would love to drive this forward but there are just too many
other areas to focus on right now.

+1




- API wg changes for error codes - we should fix that eventually, but
that should come as a single microversion to minimize churn. That's
coordination we don't really have the bandwidth for this cycle.

+1


* Things we need to decide this cycle *

- When are we deleting the legacy v2 code base in tree?

Do you have some high-level milestone thoughts here? I thought there was
talk about not even thinking about this until Barcelona?


* Final priority item *

For the #3 priority item one of the things that came up today was the
structured errors spec by the API working group. That would be really
nice... but in some ways really does need the entire new API reference
docs in place. And maybe is better in O.

One other issue that we've been blocking on for a while has been
Capabilities discovery. Some API proposed adds like live resize have
been conceptually blocked behind this one. Once upon a time there was a
theory that JSON Home was a thing, and would slice our bread and julienne
our fries, and solve all this. But it's a big thing to get right, and
JSON Home has an unclear future. And, we could serve our users pretty
well with a much simpler take on capabilities. For instance

   GET /servers/{id}/capabilities

{
  "capabilities" : {
  "resize": True,
  "live-resize": True,
  "live-migrate": False
  ...
   }
}

Effectively an actions map for servers. Lots of details would have to be
sorted out on this one, clearly needs a spec, however I think that this
would help unstick some other things people would like in Nova, without
making our interop story terrible. This would need a driver for this
effort.

I think this ties directly into the discoverable policy item above. I
may be misunderstanding this proposal but I would expect that it has
some link with what a user is allowed to do. Without some improvements
to the policy handling within Nova this is not currently possible.

Agree with Laski here.




Every thing here is up for discussion. This is a summary of some of what
was in the meeting, plus some of my own thoughts. Please chime in on any
of this. It would be good 

Re: [openstack-dev] [nova][neutron] What to do about booting into port_security_enabled=False networks?

2016-03-30 Thread Sean Dague
On 03/29/2016 09:55 PM, Matt Riedemann wrote:

> 
> Yup, HenryG walked me through the cases on IRC today.
> 
> The more I think about option (b) above, the less I like that idea given
> how much work goes into the allocate_for_instance code in nova where
> it's already building the list of possible networks that will be used
> for creating/updating ports, we'd essentially have to duplicate that
> logic in a separate method to get an idea of what security groups would
> be applied.
> 
> I'd prefer to be lazy and go with option (a) and just say nova doesn't
> return security-groups in the REST API when creating a server and
> neutron is the network API. That would require a microversion probably,
> but it would still be easy to do. I'm not sure if that's the best user
> experience though.
> 

Is there a sane resource on the neutron side we could link to? Today
security_groups are returned with a name from nova, which made sense
when it was an internal structure, but makes way less sense now.

"security_groups": [
    {
        "href": ""
    }
]

Where the link is to a neutron resource (and we could do a local link
for the few nova net folks) might be more appropriate.
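A sketch of the link-based shape suggested here (the neutron URL layout is illustrative, not an agreed format, and the endpoint/IDs are made up):

```python
# Build the proposed href-based security_groups response shape, pointing
# each entry at the corresponding neutron resource.
def security_group_links(neutron_endpoint, sg_ids):
    """Return the server-create response fragment with neutron hrefs."""
    return {"security_groups": [
        {"href": "%s/v2.0/security-groups/%s"
                 % (neutron_endpoint.rstrip("/"), sg_id)}
        for sg_id in sg_ids
    ]}

fragment = security_group_links("http://neutron.example:9696", ["sg-1"])
```

The nice property is that nova no longer has to resolve names at all; the client follows the link if it wants details.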

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Service Catalog TNG work in Mitaka ... next steps

2016-03-30 Thread Jay Pipes

On 03/29/2016 06:49 PM, Matt Riedemann wrote:

On 3/29/2016 2:30 PM, Sean Dague wrote:

At the Mitaka Summit we had a double session on the Service Catalog,
where we stood, and where we could move forward. Even though the service
catalog isn't used nearly as much as we'd like, it's used in just enough
odd places that every change pulls on a few other threads that are
unexpected. So this is going to be a slow process going forward, but I
do have faith we'll get there.



Thanks for the write up.


Indeed, thanks very much, Sean, it's super-helpful to read these status 
summaries.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Sean Dague
On 03/30/2016 04:17 PM, Jay Pipes wrote:
> On 03/30/2016 12:54 PM, Matt Riedemann wrote:
 - POC of Gabbi for additional API validation
>>
>> I'm assuming cdent would be driving this, and he's also working on the
>> resource providers stuff for the scheduler, but might be a decent side
>> project for him to stay sane.
> 
> Actually, Sergey Nikitin can be responsible for driving this effort,
> with guidance from Chris.
> 
 - Microversion Testing in Tempest (underway)
>>
>> How much coverage do we have today? This could be like novaclient where
>> people just start hacking on adding tests for each microversion
>> (assuming gmann would be working on this).
> 
> I would prefer to see gabbits validating Nova's API back-microversions
> and instead focus on Tempest validating only the latest microversion API.
> 
> The validation of all the various microversion variations should IMHO be
> the purview of in-tree functional tests within Nova.

That's largely the way it is today, and how I see it going forward (in
tree testing for full coverage, a representative cross section in
Tempest). There are details on the way Tempest works against all
versions of OpenStack that mean that what is in Tempest is really a
baseline set of tests, plus some tests which test integration
functionality at particular microversions if they are available in the
target.
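For readers following the microversion-testing thread, the mechanics are small: a test pins the version it needs via a request header, and a harness skips it when the target cloud's maximum is lower. A stdlib sketch (the header name is the one Nova used for microversions at the time; the helper itself is illustrative):

```python
# How a test client pins a Nova microversion, and how a harness decides
# whether a pinned test can run against a given cloud.
def microversion_headers(version="2.25"):
    """Headers that request a specific Nova microversion."""
    return {
        "X-OpenStack-Nova-API-Version": version,
        "Accept": "application/json",
    }

def supports(requested, max_version):
    """True if a test pinned at `requested` can run against a cloud whose
    maximum microversion is `max_version` (e.g. skip 2.25 tests on 2.12).
    Note the numeric compare: "2.9" < "2.10" as versions, unlike strings."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(requested) <= to_tuple(max_version)
```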

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Are injected files in compute going to be deprecated?

2016-03-30 Thread Rob Crittenden

In nova/compute/manager.py I see:

def inject_file(self, context, path, file_contents, instance):
    """Write a file to the specified path in an instance on this host."""

    # NOTE(russellb) Remove this method, as well as the underlying virt
    # driver methods, when the compute rpc interface is bumped to 4.x
    # as it is no longer used.


The RPC API is at 4.5 as of Liberty. Does that mean this method is going 
away soon? I couldn't find any deprecation references other than this one.


rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Andrew Laski


On Wed, Mar 30, 2016, at 03:54 PM, Matt Riedemann wrote:
> 
> 
> On 3/30/2016 2:42 PM, Andrew Laski wrote:
> >
> >
> > On Wed, Mar 30, 2016, at 03:26 PM, Sean Dague wrote:
> >> During the Nova API meeting we had some conversations about priorities,
> >> but this feels like the thing where a mailing list conversation is more
> >> inclusive to get agreement on things. I think we need to remain focused
> >> on what API related work will have the highest impact on our users.
> >> (some brain storming was here -
> >> https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
> >> completely straw man proposal on priorities for the Newton cycle.
> >>
> >> * Top Priority Items *
> >>
> >> 1. API Reference docs in RST which include microversions (drivers: me,
> >> auggy, annegentle) -
> >> https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
> >> 2. Discoverable Policy (drivers: laski, claudio) -
> 
> Selfishly I'd like Laski to be as focused on cells v2 as possible, but 
> he does have a spec up related to this.

At the midcycle I agreed to look into the oslo.policy work involved with
this because I have some experience there. I wasn't planning to be much
involved beyond that, and Claudiu has a spec up for the API side of it.
But in my mind there's a chain backwards from capabilities API to
discoverable policy and I want to move the oslo.policy work ahead
quickly if I can to unblock that.


> 
> >> https://review.openstack.org/#/c/289405/
> >> 3. ?? (TBD)
> >>
> >> I think realistically 3 priority items is about what we can sustain, and
> >> I'd like to keep it there. Item #3 has a couple of options.
> 
> Agree to keep the priority list as small as possible, because this is 
> just a part of our overall backlog of priorities.
> 
> >>
> >> * Lower Priority Background Work *
> >>
> >> - POC of Gabbi for additional API validation
> 
> I'm assuming cdent would be driving this, and he's also working on the 
> resource providers stuff for the scheduler, but might be a decent side 
> project for him to stay sane.
> 
> >> - Microversion Testing in Tempest (underway)
> 
> How much coverage do we have today? This could be like novaclient where 
> people just start hacking on adding tests for each microversion 
> (assuming gmann would be working on this).
> 
> >> - Some of the API WG recommendations
> >>
> >> * Things we shouldn't do this cycle *
> >>
> >> - Tasks API - not because it's not a good idea, but because I think
> >> until we get ~ 3 core team members agreed that it's their number #1 item
> >> for the cycle, it's just not going to get enough energy to go somewhere
> >> useful. There are some other things on deck that we just need to clear
> >> first.
> >
> > Agreed. I would love to drive this forward but there are just too many
> > other areas to focus on right now.
> 
> +1
> 
> >
> >
> >> - API wg changes for error codes - we should fix that eventually, but
> >> that should come as a single microversion to minimize churn. That's
> >> coordination we don't really have the bandwidth for this cycle.
> 
> +1
> 
> >>
> >> * Things we need to decide this cycle *
> >>
> >> - When are we deleting the legacy v2 code base in tree?
> 
> Do you have some high-level milestone thoughts here? I thought there was 
> talk about not even thinking about this until Barcelona?
> 
> >>
> >> * Final priority item *
> >>
> >> For the #3 priority item one of the things that came up today was the
> >> structured errors spec by the API working group. That would be really
> >> nice... but in some ways really does need the entire new API reference
> >> docs in place. And maybe is better in O.
> >>
> >> One other issue that we've been blocking on for a while has been
> >> Capabilities discovery. Some API proposed adds like live resize have
> >> been conceptually blocked behind this one. Once upon a time there was a
> >> theory that JSON Home was a thing, and would slice our bread and julienne
> >> our fries, and solve all this. But it's a big thing to get right, and
> >> JSON Home has an unclear future. And, we could serve our users pretty
> >> well with a much simpler take on capabilities. For instance
> >>
> >>   GET /servers/{id}/capabilities
> >>
> >> {
> >>  "capabilities" : {
> >>  "resize": true,
> >>  "live-resize": true,
> >>  "live-migrate": false
> >>  ...
> >>   }
> >> }
> >>
> >> Effectively an actions map for servers. Lots of details would have to be
> >> sorted out on this one, clearly needs a spec, however I think that this
> >> would help unstick some other things people would like in Nova, without
> >> making our interop story terrible. This would need a driver for this
> >> effort.
> >
> > I think this ties directly into the discoverable policy item above. I
> > may be misunderstanding this proposal but I would expect that it has
> > some link with what a user is allowed to do. Without some improvements
> > to the policy handling within Nova this is not currently possible.

Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Jay Pipes

On 03/30/2016 12:54 PM, Matt Riedemann wrote:

- POC of Gabbi for additional API validation


I'm assuming cdent would be driving this, and he's also working on the
resource providers stuff for the scheduler, but might be a decent side
project for him to stay sane.


Actually, Sergey Nikitin can be responsible for driving this effort, 
with guidance from Chris.



- Microversion Testing in Tempest (underway)


How much coverage do we have today? This could be like novaclient where
people just start hacking on adding tests for each microversion
(assuming gmann would be working on this).


I would prefer to see gabbits validating Nova's API back-microversions 
and instead focus on Tempest validating only the latest microversion API.


The validation of all the various microversion variations should IMHO be 
the purview of in-tree functional tests within Nova.
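For readers unfamiliar with gabbi: it expresses HTTP API checks declaratively (in YAML) rather than as imperative test code. The shape of that idea can be sketched in Python — a hypothetical mini-runner with a stubbed request function, loosely modeled on gabbi's `response_json_paths` idea, and emphatically not gabbi itself or Nova's actual tests:

```python
# Hedged sketch of a gabbi-style declarative API check. Real gabbi tests
# live in YAML and run against a live server; the URLs, payloads, and
# runner here are illustrative stand-ins only.

def run_declarative_tests(tests, do_request):
    """Evaluate gabbi-style test dicts against a request callable."""
    failures = []
    for t in tests:
        status, body = do_request(t["method"], t["url"])
        if status != t["status"]:
            failures.append((t["name"], "status", status))
        # Compare selected response fields against declared expectations.
        for key, expected in t.get("response_json_paths", {}).items():
            if body.get(key) != expected:
                failures.append((t["name"], key, body.get(key)))
    return failures

# A fake server standing in for an HTTP API.
def fake_request(method, url):
    if method == "GET" and url == "/servers/demo":
        return 200, {"id": "demo", "status": "ACTIVE"}
    return 404, {}

tests = [
    {"name": "show server", "method": "GET", "url": "/servers/demo",
     "status": 200, "response_json_paths": {"status": "ACTIVE"}},
    {"name": "missing server 404s", "method": "GET", "url": "/servers/gone",
     "status": 404, "response_json_paths": {}},
]

failures = run_declarative_tests(tests, fake_request)
```

The appeal for API validation is that each test is pure data: adding a check means adding a dict (or YAML stanza), not writing new test logic.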


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Matt Riedemann



On 3/30/2016 2:42 PM, Andrew Laski wrote:



On Wed, Mar 30, 2016, at 03:26 PM, Sean Dague wrote:

During the Nova API meeting we had some conversations about priorities,
but this feels like the thing where a mailing list conversation is more
inclusive to get agreement on things. I think we need to remain focused
on what API related work will have the highest impact on our users.
(some brain storming was here -
https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
completely straw man proposal on priorities for the Newton cycle.

* Top Priority Items *

1. API Reference docs in RST which include microversions (drivers: me,
auggy, annegentle) -
https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
2. Discoverable Policy (drivers: laski, claudio) -


Selfishly I'd like Laski to be as focused on cells v2 as possible, but 
he does have a spec up related to this.



https://review.openstack.org/#/c/289405/
3. ?? (TBD)

I think realistically 3 priority items is about what we can sustain, and
I'd like to keep it there. Item #3 has a couple of options.


Agree to keep the priority list as small as possible, because this is 
just a part of our overall backlog of priorities.




* Lower Priority Background Work *

- POC of Gabbi for additional API validation


I'm assuming cdent would be driving this, and he's also working on the 
resource providers stuff for the scheduler, but might be a decent side 
project for him to stay sane.



- Microversion Testing in Tempest (underway)


How much coverage do we have today? This could be like novaclient where 
people just start hacking on adding tests for each microversion 
(assuming gmann would be working on this).



- Some of the API WG recommendations

* Things we shouldn't do this cycle *

- Tasks API - not because it's not a good idea, but because I think
until we get ~ 3 core team members agreed that it's their number #1 item
for the cycle, it's just not going to get enough energy to go somewhere
useful. There are some other things on deck that we just need to clear
first.


Agreed. I would love to drive this forward but there are just too many
other areas to focus on right now.


+1





- API wg changes for error codes - we should fix that eventually, but
that should come as a single microversion to minimize churn. That's
coordination we don't really have the bandwidth for this cycle.


+1



* Things we need to decide this cycle *

- When are we deleting the legacy v2 code base in tree?


Do you have some high-level milestone thoughts here? I thought there was 
talk about not even thinking about this until Barcelona?




* Final priority item *

For the #3 priority item one of the things that came up today was the
structured errors spec by the API working group. That would be really
nice... but in some ways really does need the entire new API reference
docs in place. And maybe is better in O.

One other issue that we've been blocking on for a while has been
Capabilities discovery. Some API proposed adds like live resize have
been conceptually blocked behind this one. Once upon a time there was a
theory that JSON Home was a thing, and would slice our bread and julienne
our fries, and solve all this. But it's a big thing to get right, and
JSON Home has an unclear future. And, we could serve our users pretty
well with a much simpler take on capabilities. For instance

  GET /servers/{id}/capabilities

{
 "capabilities" : {
 "resize": true,
 "live-resize": true,
 "live-migrate": false
 ...
  }
}

Effectively an actions map for servers. Lots of details would have to be
sorted out on this one, clearly needs a spec, however I think that this
would help unstick some other things people would like in Nova, without
making our interop story terrible. This would need a driver for this
effort.


I think this ties directly into the discoverable policy item above. I
may be misunderstanding this proposal but I would expect that it has
some link with what a user is allowed to do. Without some improvements
to the policy handling within Nova this is not currently possible.


Agree with Laski here.






Everything here is up for discussion. This is a summary of some of what
was in the meeting, plus some of my own thoughts. Please chime in on any
of this. It would be good to be in general agreement pre-summit, so we
could focus the conversation there more on the hows of getting things done.


Thanks for writing this up. I'm trying to get all of the nova subteam 
meetings on my calendar, but this one is hard for me to make on time 
given daycare duties each morning.




-Sean

--
Sean Dague
http://dague.net


Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Matthew Treinish
On Wed, Mar 30, 2016 at 03:26:13PM -0400, Sean Dague wrote:
> During the Nova API meeting we had some conversations about priorities,
> but this feels like the thing where a mailing list conversation is more
> inclusive to get agreement on things. I think we need to remain focused
> on what API related work will have the highest impact on our users.
> (some brain storming was here -
> https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
> completely straw man proposal on priorities for the Newton cycle.
> 
> * Top Priority Items *
> 
> 1. API Reference docs in RST which include microversions (drivers: me,
> auggy, annegentle) -
> https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
> 2. Discoverable Policy (drivers: laski, claudio) -
> https://review.openstack.org/#/c/289405/
> 3. ?? (TBD)
> 
> I think realistically 3 priority items is about what we can sustain, and
> I'd like to keep it there. Item #3 has a couple of options.
> 
> * Lower Priority Background Work *
> 
> - POC of Gabbi for additional API validation
> - Microversion Testing in Tempest (underway)

FWIW, the framework for using microversions in tempest is done (and is part of
tempest.lib too) and the BP for that has been marked as implemented:

http://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/api-microversions-testing-support.html

All that's needed now is to actually start to leverage it by adding tests with
microversions. IIRC there is only 1 right now, just a pro forma test for v2.2.
The docs for using it are here:

http://docs.openstack.org/developer/tempest/microversion_testing.html
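One detail any microversion test has to get right is that microversions order numerically per component, not lexically: "2.10" is later than "2.2". A minimal comparison helper — an illustration of the semantics, not tempest's actual implementation — could look like:

```python
# Hedged sketch: microversion strings compare numerically component-wise,
# so "2.10" is *newer* than "2.2" even though it sorts earlier as a string.
def parse_microversion(version):
    """Turn 'X.Y' into a comparable (major, minor) tuple of ints."""
    major, minor = version.split(".")
    return (int(major), int(minor))

def in_range(version, min_version, max_version):
    """Would a test declaring [min_version, max_version] run at `version`?"""
    return (parse_microversion(min_version)
            <= parse_microversion(version)
            <= parse_microversion(max_version))

# Plain string comparison gets the ordering wrong:
assert "2.10" < "2.2"
# Numeric comparison gets it right:
assert parse_microversion("2.10") > parse_microversion("2.2")
assert in_range("2.5", "2.2", "2.10")
assert not in_range("2.1", "2.2", "2.10")
```

In the real framework, tests declare a supported range and the runner skips them when the deployed API's version negotiation falls outside it; the numeric comparison above is the crux of that skip decision.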

> - Some of the API WG recommendations
> 
> * Things we shouldn't do this cycle *
> 
> - Tasks API - not because it's not a good idea, but because I think
> until we get ~ 3 core team members agreed that it's their number #1 item
> for the cycle, it's just not going to get enough energy to go somewhere
> useful. There are some other things on deck that we just need to clear
> first.
> - API wg changes for error codes - we should fix that eventually, but
> that should come as a single microversion to minimize churn. That's
> coordination we don't really have the bandwidth for this cycle.
> 
> * Things we need to decide this cycle *
> 
> - When are we deleting the legacy v2 code base in tree?

I can get behind doing this. I think we've been running the 2.1 base compat
as the default for long enough that there aren't gonna be any surprises if
we drop the old v2 code in Newton.

> 
> * Final priority item *
> 
> For the #3 priority item one of the things that came up today was the
> structured errors spec by the API working group. That would be really
> nice... but in some ways really does need the entire new API reference
> docs in place. And maybe is better in O.
> 
> One other issue that we've been blocking on for a while has been
> Capabilities discovery. Some API proposed adds like live resize have
> been conceptually blocked behind this one. Once upon a time there was a
> theory that JSON Home was a thing, and would slice our bread and julienne
> our fries, and solve all this. But it's a big thing to get right, and
> JSON Home has an unclear future. And, we could serve our users pretty
> well with a much simpler take on capabilities. For instance
> 
>  GET /servers/{id}/capabilities
> 
> {
> "capabilities" : {
> "resize": true,
> "live-resize": true,
> "live-migrate": false
> ...
>  }
> }
> 
> Effectively an actions map for servers. Lots of details would have to be
> sorted out on this one, clearly needs a spec, however I think that this
> would help unstick some other things people would like in Nova, without
> making our interop story terrible. This would need a driver for this effort.
> 
> Everything here is up for discussion. This is a summary of some of what
> was in the meeting, plus some of my own thoughts. Please chime in on any
> of this. It would be good to be in general agreement pre-summit, so we
> could focus the conversation there more on the hows of getting things done.
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 




Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Andrew Laski


On Wed, Mar 30, 2016, at 03:26 PM, Sean Dague wrote:
> During the Nova API meeting we had some conversations about priorities,
> but this feels like the thing where a mailing list conversation is more
> inclusive to get agreement on things. I think we need to remain focused
> on what API related work will have the highest impact on our users.
> (some brain storming was here -
> https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
> completely straw man proposal on priorities for the Newton cycle.
> 
> * Top Priority Items *
> 
> 1. API Reference docs in RST which include microversions (drivers: me,
> auggy, annegentle) -
> https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
> 2. Discoverable Policy (drivers: laski, claudio) -
> https://review.openstack.org/#/c/289405/
> 3. ?? (TBD)
> 
> I think realistically 3 priority items is about what we can sustain, and
> I'd like to keep it there. Item #3 has a couple of options.
> 
> * Lower Priority Background Work *
> 
> - POC of Gabbi for additional API validation
> - Microversion Testing in Tempest (underway)
> - Some of the API WG recommendations
> 
> * Things we shouldn't do this cycle *
> 
> - Tasks API - not because it's not a good idea, but because I think
> until we get ~ 3 core team members agreed that it's their number #1 item
> for the cycle, it's just not going to get enough energy to go somewhere
> useful. There are some other things on deck that we just need to clear
> first.

Agreed. I would love to drive this forward but there are just too many
other areas to focus on right now.


> - API wg changes for error codes - we should fix that eventually, but
> that should come as a single microversion to minimize churn. That's
> coordination we don't really have the bandwidth for this cycle.
> 
> * Things we need to decide this cycle *
> 
> - When are we deleting the legacy v2 code base in tree?
> 
> * Final priority item *
> 
> For the #3 priority item one of the things that came up today was the
> structured errors spec by the API working group. That would be really
> nice... but in some ways really does need the entire new API reference
> docs in place. And maybe is better in O.
> 
> One other issue that we've been blocking on for a while has been
> Capabilities discovery. Some API proposed adds like live resize have
> been conceptually blocked behind this one. Once upon a time there was a
> theory that JSON Home was a thing, and would slice our bread and julienne
> our fries, and solve all this. But it's a big thing to get right, and
> JSON Home has an unclear future. And, we could serve our users pretty
> well with a much simpler take on capabilities. For instance
> 
>  GET /servers/{id}/capabilities
> 
> {
> "capabilities" : {
> "resize": true,
> "live-resize": true,
> "live-migrate": false
> ...
>  }
> }
> 
> Effectively an actions map for servers. Lots of details would have to be
> sorted out on this one, clearly needs a spec, however I think that this
> would help unstick some other things people would like in Nova, without
> making our interop story terrible. This would need a driver for this
> effort.

I think this ties directly into the discoverable policy item above. I
may be misunderstanding this proposal but I would expect that it has
some link with what a user is allowed to do. Without some improvements
to the policy handling within Nova this is not currently possible.


> 
> Everything here is up for discussion. This is a summary of some of what
> was in the meeting, plus some of my own thoughts. Please chime in on any
> of this. It would be good to be in general agreement pre-summit, so we
> could focus the conversation there more on the hows of getting things done.
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 



[openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Sean Dague
During the Nova API meeting we had some conversations about priorities,
but this feels like the thing where a mailing list conversation is more
inclusive to get agreement on things. I think we need to remain focused
on what API related work will have the highest impact on our users.
(some brain storming was here -
https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
completely straw man proposal on priorities for the Newton cycle.

* Top Priority Items *

1. API Reference docs in RST which include microversions (drivers: me,
auggy, annegentle) -
https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
2. Discoverable Policy (drivers: laski, claudio) -
https://review.openstack.org/#/c/289405/
3. ?? (TBD)

I think realistically 3 priority items is about what we can sustain, and
I'd like to keep it there. Item #3 has a couple of options.

* Lower Priority Background Work *

- POC of Gabbi for additional API validation
- Microversion Testing in Tempest (underway)
- Some of the API WG recommendations

* Things we shouldn't do this cycle *

- Tasks API - not because it's not a good idea, but because I think
until we get ~ 3 core team members agreed that it's their number #1 item
for the cycle, it's just not going to get enough energy to go somewhere
useful. There are some other things on deck that we just need to clear
first.
- API wg changes for error codes - we should fix that eventually, but
that should come as a single microversion to minimize churn. That's
coordination we don't really have the bandwidth for this cycle.

* Things we need to decide this cycle *

- When are we deleting the legacy v2 code base in tree?

* Final priority item *

For the #3 priority item one of the things that came up today was the
structured errors spec by the API working group. That would be really
nice... but in some ways really does need the entire new API reference
docs in place. And maybe is better in O.

One other issue that we've been blocking on for a while has been
Capabilities discovery. Some API proposed adds like live resize have
been conceptually blocked behind this one. Once upon a time there was a
theory that JSON Home was a thing, and would slice our bread and julienne
our fries, and solve all this. But it's a big thing to get right, and
JSON Home has an unclear future. And, we could serve our users pretty
well with a much simpler take on capabilities. For instance

 GET /servers/{id}/capabilities

{
"capabilities" : {
"resize": true,
"live-resize": true,
"live-migrate": false
...
 }
}

Effectively an actions map for servers. Lots of details would have to be
sorted out on this one, clearly needs a spec, however I think that this
would help unstick some other things people would like in Nova, without
making our interop story terrible. This would need a driver for this effort.
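To make the straw man concrete, here is how a client might consume such a capabilities map before attempting an action. This is a hypothetical sketch only — neither the endpoint nor the payload exists in Nova today, so the response is hard-coded in place of an HTTP call:

```python
# Hedged sketch: gating client actions on a capabilities map shaped like
# the straw-man GET /servers/{id}/capabilities payload above. The endpoint
# is a proposal, not a shipped Nova API; this dict stands in for the
# response body a client would have fetched.
capabilities_response = {
    "capabilities": {
        "resize": True,
        "live-resize": True,
        "live-migrate": False,
    }
}

def can(response, action):
    """Return whether the server advertises support for an action."""
    # Treat unknown actions as unsupported rather than erroring out.
    return response.get("capabilities", {}).get(action, False)

# A client would fetch the map once, then branch before issuing requests,
# instead of issuing the request and handling a failure after the fact.
assert can(capabilities_response, "live-resize")
assert not can(capabilities_response, "live-migrate")
assert not can(capabilities_response, "evacuate")  # absent -> unsupported
```

Defaulting absent keys to "unsupported" is one plausible interop choice; the spec would have to pin down whether omission means "no" or "unknown".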

Everything here is up for discussion. This is a summary of some of what
was in the meeting, plus some of my own thoughts. Please chime in on any
of this. It would be good to be in general agreement pre-summit, so we
could focus the conversation there more on the hows of getting things done.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] minutes of the Trove meeting 2016-03-30

2016-03-30 Thread Amrith Kumar
Meeting minutes can be found at 
http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-03-30-18.01.txt

Thanks to all who attended.

-amrith


Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-30 Thread Michał Jastrzębski
So I made this:
https://review.openstack.org/#/c/299563/

I'm not super fond of reverting commits from the middle of a release,
because this will make a lot of mess. I'd rather re-implement the keystone
bootstrap logic and make it conditional, as it is not that complicated.

On 30 March 2016 at 12:37, Ryan Hallisey  wrote:
> Agreed this needs to happen +1,
>
> - Original Message -
> From: "Jeff Peeler" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, March 30, 2016 1:22:31 PM
> Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty 
> within the Liberty branch
>
> On Wed, Mar 30, 2016 at 3:52 AM, Steven Dake (stdake)  
> wrote:
>>
>>
>> From: Jeffrey Zhang 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Wednesday, March 30, 2016 at 12:29 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty
>> within the Liberty branch
>>
>> +1
>>
>> A lot of changes have been made in Mitaka. Backporting is difficult.
>>
>> But using Mitaka to deploy Liberty also requires *much work*. For example,
>> reverting config file changes that were deprecated in Mitaka, to restore
>> Liberty support.
>>
>> An important one is the `keystone-manage bootstrap` command to create the
>> keystone admin account. This was added recently and only exists in the Mitaka
>> branch. So when using this method, we should revert some commits and use
>> the old method.
>>
>>
>> Agreed.
>
> I'm sure there will be some checking and such once all the code has
> been shuffled around, but I think doing this work is better than
> abandoning a branch. So +1 to proposal.
>


Re: [openstack-dev] [TripleO] tripleo-quickstart import

2016-03-30 Thread Paul Belanger
On Tue, Mar 29, 2016 at 08:30:22PM -0400, John Trowbridge wrote:
> Hola,
> 
> With the approval of the tripleo-quickstart spec[1], it is time to
> actually start doing the work. The first work item is moving it to the
> openstack git. The spec talks about moving it as is, and this would
> still be fine.
> 
> However, there are roles in the tripleo-quickstart tree that are not
> directly related to the instack-virt-setup replacement aspect that is
> approved in the spec (image building, deployment). I think these should
> be split into their own ansible-role-* repos, so that they can be
> consumed using ansible-galaxy. It would actually even make sense to do
> that with the libvirt role responsible for setting up the virtual
> environment. The tripleo-quickstart would then be just an integration
> layer making consuming these roles for virtual deployments easy.
> 
> This way if someone wanted to make a different role for say OVB
> deployments, it would be easy to use the other roles on top of a
> differently provisioned undercloud.
> 
> Similarly, if we wanted to adopt ansible to drive tripleo-ci, it would
> be very easy to only consume the roles that make sense for the tripleo
> cloud.
> 
> So the first question is, should we split the roles out of
> tripleo-quickstart?
> 
> If so, should we do that before importing it to the openstack git?
> 
> Also, should the split out roles also be on the openstack git?
> 
So, we actually have a few ansible roles in OpenStack, mostly imported by
myself. The OpenStack ansible team has a few too.

I would propose keeping them included in your project for now, and maybe starting a
different discussion with all the ansible projects (kolla, ansible-openstack,
windmill, etc.) to see how best to move forward. I've discussed with the openstack
ansible team in the past about moving the roles I have uploaded into their team, and
I hope to bring it up again at Austin.

> Maybe this all deserves its own spec and we tackle it after completing
> all of the work for the first spec. I put this on the meeting agenda for
> today, but we didn't get to it.
> 
> - trown
> 


Re: [openstack-dev] [kloudbuster] authorization failed problem

2016-03-30 Thread Akshay Kumar Sanghai
Hi Yichen,
Thanks a lot. I will try v6 and reach out to you for further help.

Regards,
Akshay

On Wed, Mar 30, 2016 at 11:35 PM, Yichen Wang (yicwang) 
wrote:

> Hi, Akshay,
>
>
>
> From the log you attached, the good news is you got KloudBuster installed
> and running fine! The problem is the image you are using (v5) is outdated
> for the latest KloudBuster main code. ☺
>
>
>
> Normally, every version of KloudBuster needs a certain version of the
> image to support the full functionality. When a new feature is
> brought in, we tag the main code with a new version, and bump up the image
> version. For example, from v5 to v6 we added the capability to support storage
> testing on cinder volumes and ephemeral disks as well. We are in the middle
> of publishing the v6 image to the OpenStack App Catalog, which may
> take another day or two. This is why you are seeing the connection to the
> redis agent in KB-Proxy failing…
>
>
>
> In order to unblock you, here is the RC image of v6 we are using right
> now, replace it in your cloud and KloudBuster should be good to go:
>
> https://cisco.box.com/s/xelzx15swjra5qr0ieafyxnbyucnnsa0
>
>
>
> Now back to your question.
>
> -Does the server side mean the cloud generating the traffic, and the client
> side mean the cloud on which connections are established? Can you
> please elaborate on client, server and proxy?
>
> [Yichen] It is the other way around. The server is running nginx, and the
> client is running the traffic generator (wrk2), just the way we normally
> understand them. Since there might be lots of servers and clients in the same
> cloud, KB-Proxy is an additional VM that runs on the client side to
> orchestrate all client VMs to generate traffic, collect the results from
> each VM, and send them back to the main KloudBuster for processing.
> KB-Proxy is where the redis server sits, and it acts as the proxy
> node to connect all internal VMs to the external network. This is why a
> floating IP is needed for the proxy node.
>
>
>
> -While running kloudbuster, I saw "setting up redis connection". Can
> you please explain which connection is established and why? Is it
> KB_PROXY?
>
> [Yichen] As I explained above, KB-Proxy is the bridge between the internal VMs
> and the external world (like the host you are running KloudBuster from).
> “Setting up redis connection” means that KloudBuster is trying to connect to
> the redis server on the KB-Proxy node. You may see some retries because it
> does take some time for the VM to be up and running.
>
>
>
> Thanks very much!
>
>
>
> Regards,
>
> Yichen
>
>
>
> *From:* Akshay Kumar Sanghai [mailto:akshaykumarsang...@gmail.com]
> *Sent:* Wednesday, March 30, 2016 7:31 AM
> *To:* Alec Hothan (ahothan) 
> *Cc:* OpenStack List ; Yichen Wang
> (yicwang) 
>
> *Subject:* Re: [openstack-dev] [kloudbuster] authorization failed problem
>
>
>
> Hi Alec,
>
> Thanks for clarifying. I did not have the cinder service previously; it was
> not a complete setup. Now I have set up the cinder service.
>
> Output of keystone service list.
>
> [image: Inline image 1]
>
> I installed OpenStack using the installation guide for Ubuntu, and
> kloudbuster is a PyPI-based installation. So I am running
> kloudbuster using the CLI option.
>
> kloudbuster --tested-rc keystone-openrc.sh --tested-passwd * --config
> kb.cfg
>
>
>
> contents of kb.cfg:
>
> image_name: 'kloudbuster'
>
>
>
> I added the kloudbuster v5 version as glance image with name as
> kloudbuster.
>
>
>
> I don't understand some basic things. If you can help, then that would be
> great.
>
> -Does the server side mean the cloud generating the traffic, and the client
> side mean the cloud on which connections are established? Can you
> please elaborate on client, server and proxy?
>
> -While running kloudbuster, I saw "setting up redis connection". Can
> you please explain which connection is established and why? Is it
> KB_PROXY?
>
>
>
> Please find attached the kloudbuster run as a file. I have still not
> succeeded in running kloudbuster; there are some errors.
>
> I appreciate your help Alec.
>
>
>
> Thanks,
>
> Akshay
>
>
>
> On Mon, Mar 28, 2016 at 8:59 PM, Alec Hothan (ahothan) 
> wrote:
>
>
>
> Can you describe what you mean by "do not have a cinder service"?
>
> Can you provide the output of "keystone service-list"?
>
>
>
> We'd have to know a bit more about what you have been doing:
>
> how did you install your openstack, how did you install kloudbuster, which
> kloudbuster qcow2 image version did you use, how did you run kloudbuster
> (cli or REST or web UI), what config file have you been using, complete log
> of the run (including backtrace)...
>
>
>
> But the key is - you should really have a fully working openstack
> deployment before using kloudbuster. Nobody has ever tried so far to use
> kloudbuster without such 

Re: [openstack-dev] [i18n][horizon][sahara][trove][magnum][murano] dashboard plugin release schedule

2016-03-30 Thread Craig Vyvial
Just an update on this thread that the trove-dashboard RC2 was released
https://review.openstack.org/#/c/298365/

Thanks,
Craig Vyvial

On Wed, Mar 23, 2016 at 11:36 PM Craig Vyvial  wrote:

> The trove-dashboard has its own stable/mitaka branch [1] as well. We have
> an RC1 release already and we can make sure to land the translations and
> cut an RC2 early next week (March 28).
>
> Thanks,
> Craig Vyvial
>
> [1] https://github.com/openstack/trove-dashboard/tree/stable/mitaka
>
>
> On Wed, Mar 23, 2016 at 11:02 PM Akihiro Motoki  wrote:
>
>> Thank you all for your supports.
>> We can see the progress of translations at [0]
>>
>> Shu,
>> Magnum UI adopts the independent release model. Good to know you have
>> a stable/mitaka branch :)
>> Once the stable branch is cut, please let not only me but also the i18n team
>> know.
>> The openstack-i18n ML is the best place to do so.
>> Then the i18n team and the infra team will set up the required actions for
>> the Zanata sync.
>>
>> [0]
>> https://translate.openstack.org/version-group/view/mitaka-translation/projects
>>
>> 2016-03-24 12:33 GMT+09:00 Shuu Mutou :
>> > Hi Akihiro,
>> >
>> > Thank you for your announcement.
>> > We will create a stable/mitaka branch for Magnum-UI this week,
>> > and that will freeze strings.
>> >
>> > Thanks,
>> >
>> > Shu Muto
>> >
>> >
>> >


Re: [openstack-dev] [kloudbuster] authorization failed problem

2016-03-30 Thread Yichen Wang (yicwang)
Hi, Akshay,

From the log you attached, the good news is that you got KloudBuster installed 
and running fine! The problem is that the image you are using (v5) is outdated 
for the latest KloudBuster main code. ☺

Every version of KloudBuster needs a certain version of the image to support 
the full functionality. When a new feature is brought in, we tag the main code 
with a new version and bump up the image version. For example, from v5 to v6 
we added the capability to support storage testing on cinder volumes and 
ephemeral disks as well. We are in the middle of publishing the v6 image to 
the OpenStack App Catalog, which may take another day or two. This is why you 
are seeing the connection to the redis agent on the KB-Proxy fail…

In order to unblock you, here is the RC image of v6 we are using right now, 
replace it in your cloud and KloudBuster should be good to go:
https://cisco.box.com/s/xelzx15swjra5qr0ieafyxnbyucnnsa0

Now back to your question.
-Does the server side mean the cloud generating the traffic, and the client 
side the cloud on which connections are established? Can you please 
elaborate on client, server and proxy?
[Yichen] It is the other way around. The server is running nginx, and the 
client is running the traffic generator (wrk2), just as we would normally 
understand the terms. Since there might be lots of servers and clients in the 
same cloud, KB-Proxy is an additional VM that runs on the client side to 
orchestrate all client VMs to generate traffic, collect the results from each 
VM, and send them back to the main KloudBuster for processing. KB-Proxy is 
where the redis server sits, and it acts as the proxy node connecting all 
internal VMs to the external network. This is why a floating IP is needed for 
the proxy node.

-While running KloudBuster, I saw "setting up redis connection". Can you 
please explain which connection is established and why? Is it KB-Proxy?
[Yichen] As I explained above, KB-Proxy is the bridge between the internal VMs 
and the external world (like the host you are running KloudBuster from). 
“Setting up redis connection” means that KloudBuster is trying to connect to 
the redis server on the KB-Proxy node. You may see some retries because it 
does take some time for the VM to be up and running.
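The retry loop described here can be pictured with a small stand-alone sketch. This is illustrative only, not KloudBuster code: it probes the redis TCP port with the standard library rather than a real redis client, and the function name, retry count and delays are all made up.

```python
import socket
import time


def wait_for_redis(host, port=6379, retries=5, delay=0.5):
    """Retry a TCP connection to the redis port until it opens or we give up.

    Mirrors the "setting up redis connection" retries: the KB-Proxy VM may
    still be booting, so the first few attempts can fail harmlessly.
    """
    for attempt in range(1, retries + 1):
        try:
            sock = socket.create_connection((host, port), timeout=1.0)
            sock.close()
            return attempt  # connected on this attempt
        except OSError:
            print("setting up redis connection (attempt %d/%d)..."
                  % (attempt, retries))
            time.sleep(delay)
    raise RuntimeError("could not reach redis on %s:%d" % (host, port))
```

Once the connection succeeds, a real client would start pushing orchestration commands and pulling results through that redis server.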

Thanks very much!

Regards,
Yichen

From: Akshay Kumar Sanghai [mailto:akshaykumarsang...@gmail.com]
Sent: Wednesday, March 30, 2016 7:31 AM
To: Alec Hothan (ahothan) 
Cc: OpenStack List ; Yichen Wang (yicwang) 

Subject: Re: [openstack-dev] [kloudbuster] authorization failed problem

Hi Alec,
Thanks for clarifying. I did not have the cinder service previously; it was 
not a complete setup. Now I have set up the cinder service.
Output of keystone service-list:
[Inline image 1]
I installed the setup of openstack using the installation guide for ubuntu and 
for kloudbuster, its a pypi based installation. So, I am running kloudbuster 
using the CLI option.
kloudbuster --tested-rc keystone-openrc.sh --tested-passwd * --config kb.cfg

contents of kb.cfg:
image_name: 'kloudbuster'

I added the kloudbuster v5 version as glance image with name as kloudbuster.

I don't understand some basic things. If you can help, then that would be great.
-Does the server side mean the cloud generating the traffic, and the client 
side the cloud on which connections are established? Can you please 
elaborate on client, server and proxy?
-While running KloudBuster, I saw "setting up redis connection". Can you 
please explain which connection is established and why? Is it KB-Proxy?

Please find the run of KloudBuster attached as a file. I have still not 
succeeded in running KloudBuster; there are some errors.
I appreciate your help Alec.

Thanks,
Akshay

On Mon, Mar 28, 2016 at 8:59 PM, Alec Hothan (ahothan) wrote:

Can you describe what you mean by "do not have a cinder service"?
Can you provide the output of "keystone service-list"?

We'd have to know a bit more about what you have been doing:
how did you install your openstack, how did you install kloudbuster, which 
kloudbuster qcow2 image version did you use, who did you run kloudbuster (cli 
or REST or web UI), what config file have you been using, complete log of the 
run (including backtrace)...

But the key is: you should really have a fully working OpenStack deployment 
before using KloudBuster. Nobody has ever tried to use KloudBuster without a 
basic service such as cinder working.

Thanks

  Alec



From: Akshay Kumar Sanghai
Date: Monday, March 28, 2016 at 6:51 AM
To: OpenStack List, Alec Hothan
Cc: "Yichen Wang (yicwang)"
Subject: Re: 

Re: [openstack-dev] [Neutron]Relationship between physical networks and segment

2016-03-30 Thread Miguel Lavalle
Bob,

Thanks for your detailed response. In it you "strongly recommend that any
functionality trying to make decisions based on connectivity do so by
calling into the registered mechanism drivers, so they can decide whether
whatever they manage has connectivity". After reading this I went through
the mechanism driver API definition (currently at
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/driver_api.py#n549).
The only method in the API that seems useful for implementing your
recommendation is filter_hosts_with_segment_access (currently at line
914). Is this method the right way to go?
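As an aside, the bridge-mapping assumption in question can be sketched as a standalone toy. The class and method below only mirror, and do not reproduce, the real ML2 filter_hosts_with_segment_access signature; the mapping shape is invented for illustration.

```python
class FakeAgentMechDriver:
    """Toy stand-in for an agent-based ML2 mechanism driver.

    bridge_mappings: host -> set of physical networks the host's agent
    reports a bridge/interface mapping for (e.g. {"host-a": {"physnet1"}}).
    """

    def __init__(self, bridge_mappings):
        self.bridge_mappings = bridge_mappings

    def filter_hosts_with_segment_access(self, hosts, segment):
        physnet = segment.get("physical_network")
        if physnet is None:
            # Tunnelled segments (e.g. vxlan) are reachable from any host
            # this driver manages.
            return set(hosts)
        return {h for h in hosts
                if physnet in self.bridge_mappings.get(h, set())}


def hosts_with_access(drivers, hosts, segment):
    """A host has access if *any* registered driver says so."""
    result = set()
    for driver in drivers:
        result |= driver.filter_hosts_with_segment_access(hosts, segment)
    return result
```

This also reflects Bob's point: the decision is delegated to each driver, so drivers that do not use bridge mappings can answer however suits them.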

Thanks

Miguel

On Tue, Mar 29, 2016 at 4:47 PM, Robert Kukura 
wrote:

> My answers below are from the perspective of normal (non-routed) networks
> implemented in ML2. The support for routed networks should build on this
> without breaking it.
>
> On 3/29/16 3:38 PM, Miguel Lavalle wrote:
>
> Hi,
>
> I am writing a patchset to build a mapping between hosts and network
> segments. The goal of this mapping is to be able to say whether a host has
> access to a given network segment. I am building this mapping assuming that
> if a host A has a bridges mapping containing 'physnet 1' and a segment has
> 'physnet 1' in its 'physical_network' attribute, then the host has access
> to that segment.
>
> 1) Is this assumption correct? Looking at method check_segment_for_agent
> in
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_agent.py#n180
> seems to me to suggest that my assumption is correct?
>
> This is true for certain agent-based mechanism drivers, but cannot be
> assumed to be the case for all mechanism drivers (even all those that use
> agents). Any use of mapping info (i.e. from agents_db or elsewhere) is
> specific to an individual mechanism driver. I'd strongly recommend that any
> functionality trying to make decisions based on connectivity do so by
> calling into the registered mechanism drivers, so they can decide whether
> whatever they manage has connectivity.
>
> Also note that connectivity may involve hierarchical port binding, in
> which case you really need to try to bind a port to determine if you have
> connectivity. I'm not suggesting that there is a requirement to mix HPB and
> routed networks, but please try not to build assumptions into ML2 plugin
> code that don't work with HPB or that are only valid for a subset of
> mechanism drivers.
>
>
> 2) Furthermore, when a segment is mapped to a physical network, is there a
> one to one relationship between segments and physical nets?
>
> Certainly different virtual networks can map to different segments (i.e.
> VLANs) on the same physical network. It is even possible for the same
> virtual network to have multiple segments on the same physical network.
>
> -Bob
>
>
> Thanks
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [RDO] Volunteers needed

2016-03-30 Thread Aleksandra Fedorova
Hi, Vladimir,

this is a great feature, which can make Fuel truly universal. But I
think it is hard to fully commit to the whole thing at once,
especially for new contributors.

Let's start by splitting it into some observable chunks of work, in
the form of a wiki page, blueprint or spec. This way it will become
visible where to start and how much effort it would take.

Also, do you think it fits into Internship ideas [1] ?

[1] https://wiki.openstack.org/wiki/Internship_ideas


On Tue, Mar 29, 2016 at 3:48 PM, Vladimir Kozhukalov
 wrote:
> Dear all,
>
> Fuel currently supports deployment of OpenStack using DEB packages
> (particularly Ubuntu, and Debian in the near future). We also used to deploy
> OpenStack on CentOS, but at some point we switched our focus to Ubuntu. It
> is not so hard to implement deployment of RDO using Fuel. Volunteers are
> welcome. You can contact the Fuel team here in the [openstack-dev] mailing
> list or in the #fuel IRC channel. It would be nice to see more people from
> different backgrounds contributing to Fuel.
>
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Aleksandra Fedorova
CI Team Lead
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-30 Thread Ryan Hallisey
Agreed this needs to happen +1,

- Original Message -
From: "Jeff Peeler" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, March 30, 2016 1:22:31 PM
Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty 
within the Liberty branch

On Wed, Mar 30, 2016 at 3:52 AM, Steven Dake (stdake)  wrote:
>
>
> From: Jeffrey Zhang 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Wednesday, March 30, 2016 at 12:29 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty
> within the Liberty branch
>
> +1
>
> A lot of changes have been made in Mitaka. Backporting is difficult.
>
> But using Mitaka to deploy Liberty also requires *much work*. For example,
> reverting config file changes for options that were deprecated in Mitaka
> but are needed for Liberty support.
>
> An important one is the `keystone-manage bootstrap` command to create the
> keystone admin account. This was added recently and only exists in the
> Mitaka branch. So when using this method, we should revert some commits and
> use the old method.
>
>
> Agreed.

I'm sure there will be some checking and such once all the code has
been shuffled around, but I think doing this work is better than
abandoning a branch. So +1 to proposal.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fw:Extending API via Plugins

2016-03-30 Thread Serg Melikyan
Hi wangzhh,

I think this looks reasonable, but I would prefer to have a proper spec for
this feature. I would generally like to have an extensible API in Murano.
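The entry-point mechanism sketched in the proposal below can be pictured roughly like this. This is a self-contained toy: the namespace `murano.api.v1.extensions` and the `APIExtensionBase` name come from the proposal, but the class bodies and the dict standing in for the entry-point registry (which stevedore/pkg_resources would normally provide) are invented here.

```python
class APIExtensionBase(object):
    """Toy stand-in for the proposed murano extension base class."""

    name = None  # URL collection the extension adds, e.g. "test"

    def get_resources(self):
        """Return (route, controller) pairs to mount on the API router."""
        raise NotImplementedError


class TestAPI(APIExtensionBase):
    name = "test"

    def get_resources(self):
        return [("/test", "test-controller")]


# In the real proposal the entry points come from setup.cfg
# ("murano.api.v1.extensions = test = ...") and are discovered with
# stevedore; here a plain dict plays the role of that registry.
FAKE_ENTRY_POINTS = {"test": TestAPI}


def load_extensions(registry):
    """Instantiate every registered extension and collect its routes."""
    routes = []
    for name, ext_cls in sorted(registry.items()):
        ext = ext_cls()
        routes.extend(ext.get_resources())  # mount each extension's routes
    return routes
```

The router would then add each returned route, which is what makes the API extensible without touching the core `APIRouterV1`.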

On Thu, Mar 24, 2016 at 8:49 PM, 王正浩  wrote:

> I'm sorry that I forgot to tell you murano-paste.ini should be modified to
> ...
> [app:apiv1app]
> paste.app_factory = murano.api.v1.router:APIRouterV1.factory
> ...
> And the file [4] is murano/api/v1/extensions_base.py
>
> -- Original --
> *From: * "王正浩";
> *Date: * Fri, Mar 25, 2016 11:10 AM
> *To: * "List";
> *Cc: * "smelikyan";
> *Subject: * Extending API via Plugins
>
> Hi Serg Melikyan,
> I don't know much about the CF Broker API. I'm sorry that I
> have no real use case, but here is a test one which I plan to
> complete.
>
> I modified murano/common/wsgi.py[0], murano/api/v1/router.py[1],
> added
> ...
> murano.api.v1.extensions =
> test = murano.api.v1.extensions.test:testAPI
> ...
> to the setup.cfg [2], and implemented the class testAPI [3].
> The class testAPI inherits a base class APIExtensionBase [4].
> I'll show you my code.
> P.S. I copied it from nova, so there is some extra code; hope you don't
> mind.
> [0] http://paste.openstack.org/show/491841/
> [1] http://paste.openstack.org/show/491840/
> [2] http://paste.openstack.org/show/491842/
> [3] http://paste.openstack.org/show/491845/
> [4] http://paste.openstack.org/show/491843/
>



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-30 Thread Michal Rostecki

On 03/30/2016 09:51 AM, Steven Dake (stdake) wrote:

Michal,

We can't commit to that.  We discussed it at the midcycle and were unable to
commit to it then.  We are essentially abandoning the 1.0.0 release and
replacing it with 1.1.0 because of the documentation outlined here:

http://docs.openstack.org/developer/kolla/liberty-deployment-warning.html


If we could have made data containers work, this whole discussion would have
been moot in the first place, because that was its trigger.



OK, makes sense. Then I'm fully +1.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty within the Liberty branch

2016-03-30 Thread Jeff Peeler
On Wed, Mar 30, 2016 at 3:52 AM, Steven Dake (stdake)  wrote:
>
>
> From: Jeffrey Zhang 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Wednesday, March 30, 2016 at 12:29 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla][vote] Just make Mitaka deploy Liberty
> within the Liberty branch
>
> +1
>
> A lot of changes have been made in Mitaka. Backporting is difficult.
>
> But using Mitaka to deploy Liberty also requires *much work*. For example,
> reverting config file changes for options that were deprecated in Mitaka
> but are needed for Liberty support.
>
> An important one is the `keystone-manage bootstrap` command to create the
> keystone admin account. This was added recently and only exists in the
> Mitaka branch. So when using this method, we should revert some commits and
> use the old method.
>
>
> Agreed.

I'm sure there will be some checking and such once all the code has
been shuffled around, but I think doing this work is better than
abandoning a branch. So +1 to proposal.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-03-30 Thread Carl Baldwin
On Wed, Mar 30, 2016 at 9:03 AM, Pavel Bondar  wrote:
> Kevin Benton commented on review page for current migration to pluggable
> approach [1]:
>
> IMO this cannot be optional. It's going to be a nightmare to try to support
> two IPAM systems that people may have switched between at various points in
> time. I would much rather go all-in on the upgrade by making it automatic
> with alembic and removing the option to use the legacy IPAM code completely.
>
> I've already been bitten by testing the new IPAM code with the config option
> and switching back which resulted in undeletable subnets. Now we can always
> yell at anyone that changes the config option like I did, but it takes a lot
> of energy to yell at users and they don't care for it much. :)
>
> Even ignoring the support issue, consider schema changes. This migration
> script will have to be constantly updated to work with whatever the current
> state of the schema is on both sets of ipam tables. Without constant in-tree
> testing enforcing that, we are one schema change away from this script
> breaking.
>
> So let's bite the bullet and make this a normal contract migration. Either
> the new ipam system is stable enough for us to commit to supporting it and
> fix whatever bugs it may have, or we need to remove it from the tree.
> Supporting both systems is unsustainable.
>
> This sounds reasonable to me. It simplifies support and testing (testing both
> implementations in the gate with full coverage is not easy).
> From the user perspective there should be no visible changes between
> pluggable ipam and non-pluggable.
> And making the switch early in the release cycle gives us enough time to fix
> any bugs we find in the pluggable implementation.

This is what I want too but some people wanted to allow choice.

> Right now we have some open bugs for pluggable code [2], but they are still
> possible to fix.

Yes, we've got to fix this one but I think we have a way forward.  I'm
actually going to be working in IPAM for the next week or two on work
related to the thread I posted to yesterday [3].  Maybe I could help
out with this.  Could you get this migration lined up in review and
then we'll tackle the bugs as a joint effort?  Hopefully we can make
the switch before summit.
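The one-way data migration Kevin argues for amounts to copying the built-in allocation state into the driver-owned tables. Here is a toy version of that copy step, with made-up dict "schemas" standing in for the real neutron/ipam tables; the actual change would be an alembic contract migration, not Python like this.

```python
def migrate_to_pluggable(subnets, allocations):
    """Build pluggable-IPAM-style rows from built-in allocation state.

    subnets:      list of {"id": ..., "cidr": ...} rows (toy schema)
    allocations:  list of {"subnet_id": ..., "ip_address": ...} rows

    Returns (ipam_subnets, ipam_allocations) in the shape a reference
    driver might own, keyed back to the neutron subnet.
    """
    ipam_subnets = [
        {"id": "ipam-%s" % s["id"], "neutron_subnet_id": s["id"]}
        for s in subnets
    ]
    by_neutron_id = {r["neutron_subnet_id"]: r["id"] for r in ipam_subnets}
    ipam_allocations = [
        {"ipam_subnet_id": by_neutron_id[a["subnet_id"]],
         "ip_address": a["ip_address"],
         "status": "ALLOCATED"}
        for a in allocations
    ]
    return ipam_subnets, ipam_allocations
```

Because the copy is deterministic and total, running it once inside a contract migration removes the need to support both backends afterwards, which is the crux of the argument above.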

Carl

> Does it make sense to you?
>
> [1] https://review.openstack.org/#/c/277767/
> [2] https://bugs.launchpad.net/neutron/+bug/1543094

[3] http://lists.openstack.org/pipermail/openstack-dev/2016-March/090748.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Thierry Carrez

Matt Riedemann wrote:

1:30pm on Thursday (first slot after lunch is unconference for nova)
would work for me.


OK, I'll make that swap too (unless Josh hits me with a trout over the 
night)


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] single router per tenant in network context

2016-03-30 Thread Akshay Kumar Sanghai
Hi Aleksandr,
Thanks, Aleksandr. Following your references, I made the necessary changes
to the code for a single router, but am now facing problems in the resource
cleanup. While I correct the code, can you suggest how to generate traffic
between the VMs? Is there any tool that is generally used for traffic
generation?

Thanks,
Akshay

On Wed, Mar 16, 2016 at 1:41 PM, Aleksandr Maretskiy <
amarets...@mirantis.com> wrote:

> Hi,
>
> the network context creates a router for each network automatically, so you
> cannot reduce the number of routers with this context
>
> https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/network/networks.py#L79
>
> However you can create and use your own network context plugin, inherited from
> https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/network/networks.py#L31
> and override its setup() method - create a single router per tenant and then
> attach it to each created network, like here
> https://github.com/openstack/rally/blob/master/rally/plugins/openstack/wrappers/network.py#L342-L343
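The suggested override could look roughly like this. It is a standalone sketch with a stand-in client object; a real Rally context plugin has a different base class, credentials handling, and cleanup logic, and the names here are invented.

```python
class FakeNeutron:
    """Minimal stand-in for a neutron client, just for this sketch."""

    def __init__(self):
        self.routers = []
        self.interfaces = []  # (router, network) attachments

    def create_router(self, name):
        self.routers.append(name)
        return name

    def add_interface(self, router, network):
        self.interfaces.append((router, network))


def setup_single_router(client, tenants):
    """One router per tenant, attached to every network in that tenant.

    tenants: {tenant_id: [network, ...]} -- the networks the context has
    already created for each tenant.
    """
    for tenant_id, networks in sorted(tenants.items()):
        router = client.create_router("rally-router-%s" % tenant_id)
        for net in networks:
            client.add_interface(router, net)
```

The cleanup path then only has to detach each network interface and delete one router per tenant, which is where resource-cleanup problems typically show up.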
>
> Ask me if you need more help
>
>
> On Tue, Mar 15, 2016 at 7:58 PM, Akshay Kumar Sanghai <
> akshaykumarsang...@gmail.com> wrote:
>
>> Hi,
>> I have an OpenStack setup with 1 controller node, 1 network node and 2
>> compute nodes. I want to perform scale testing of the setup in the
>> following manner:
>>
>> - Create 10 tenants
>> - Create 1 router per tenant
>> - Create 100 neutron networks across 10 tenants attached to the router
>> - Create 500 VMs spread across 10 tenants attached to the networks
>>
>> I used the boot_server scenario and defined the number of networks and
>> tenants in the network and users context respectively. But I want only one
>> router to be created per tenant. In the current case, one router is created
>> per network.
>>
>> Do I have an option to accomplish this using the existing Rally code? Or
>> should I go ahead and make some changes to the network context for my use
>> case?
>>
>> Thanks,
>> Akshay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Thierry Carrez

John Dickinson wrote:

Yes, I checked on that. Based on where in the schedule that is, and based on 
the way we're planning on doing the working sessions, it will be just fine for 
me to not be in that particular session. It's much better for the community 
that we have the contiguous block than that I personally be present in that 
time slot.


OK, I'll make the swap happen then.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Matt Riedemann



On 3/30/2016 10:42 AM, Ihar Hrachyshka wrote:

Thierry Carrez  wrote:


Ihar Hrachyshka wrote:

I'd then have the same problem as Matt here as I'm the Release CPL
for Nova. That said, given I think I'm the only person probably having
the issue, I can say fair enough and try to clone myself before the
Summit :-)


Actually, now that I am a release CPL for Neutron, as well as stable
representative for the project, and both release and stable sessions are
overlapping with 2 out of 4 Neutron sessions on that day, plus I have a
talk to do that same day in the morning, I am concerned that I will need
to skip 3 of 4 design sessions for Neutron on Thursday. Which honestly
is *very* painful for me.

With that in mind, could we try to move at least some of those cross
sessions to e.g. 1:30 - 2:10 where we don’t have Neutron sessions at
all, neither any infra/qa slots [docs only]?


There is an Oslo slot we could maybe swap Stable with. Would that work ?


Depends on which slot we talk about. If it’s the morning one, that’s the
time I have to give the talk, so it would not work for me; if that’s the
one in the afternoon [at the time of a ‘docs’ session], that one would
work for me just fine.

Thanks in advance,

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



1:30pm on Thursday (first slot after lunch is unconference for nova) 
would work for me.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread John Dickinson
Yes, I checked on that. Based on where in the schedule that is, and based on 
the way we're planning on doing the working sessions, it will be just fine for 
me to not be in that particular session. It's much better for the community 
that we have the contiguous block than that I personally be present in that 
time slot.

--John




On 30 Mar 2016, at 1:01, Thierry Carrez wrote:

> John Dickinson wrote:
>> On Thursday, I'd like to propose swapping Swift's 1:30pm session with 
>> Fuel's 2:20pm session. This will give Swift a contiguous time block in the 
>> afternoon, and Fuel's session would line up right after their full morning 
>> (albeit after the lunch break).
>>
>> I have not had a chance to talk to anyone on the Fuel team about this.
>
> John,
>
> You are giving a conference talk at 2:20pm on Thursday, which is why I left 
> that as a hole in the Swift Design Summit schedule.
>
> Swapping is definitely possible, but I figured you would not intentionally 
> create a conflict for you here ?
>
> -- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-03-30 Thread Neil Jerram
On 30/03/16 16:08, Pavel Bondar wrote:

> We are now in early Newton, so it is good time to discuss plan for
> pluggable ipam for this release cycle.
>
> Kevin Benton commented on review page for current migration to pluggable
> approach [1]:
>>
>> IMO this cannot be optional. It's going to be a nightmare to try to
>> support two IPAM systems that people may have switched between at
>> various points in time. I would much rather go all-in on the upgrade
>> by making it automatic with alembic and removing the option to use the
>> legacy IPAM code completely.
>>
>> I've already been bitten by testing the new IPAM code with the config
>> option and switching back which resulted in undeletable subnets. Now
>> we can always yell at anyone that changes the config option like I
>> did, but it takes a lot of energy to yell at users and they don't care
>> for it much. :)
>>
>> Even ignoring the support issue, consider schema changes. This
>> migration script will have to be constantly updated to work with
>> whatever the current state of the schema is on both sets of ipam
>> tables. Without constant in-tree testing enforcing that, we are one
>> schema change away from this script breaking.
>>
>> So let's bite the bullet and make this a normal contract migration.
>> Either the new ipam system is stable enough for us to commit to
>> supporting it and fix whatever bugs it may have, or we need to remove
>> it from the tree. Supporting both systems is unsustainable.
>>
> This sounds reasonable to me. It simplifies support and testing (testing
> both implementations in the gate with full coverage is not easy).
> From the user perspective there should be no visible changes between
> pluggable ipam and non-pluggable.
> And making the switch early in the release cycle gives us enough time to fix
> any bugs we find in the pluggable implementation.
>
> Right now we have some open bugs for pluggable code [2], but they are
> still possible to fix.
>
> Does it make sense to you?

Yes!  Kill the non-pluggable code already.  Neutron desperately needs to 
have less and simpler code in any area where it can possibly get it.

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-30 Thread Neil Jerram
On 29/03/16 21:55, Carl Baldwin wrote:
>
> I thought of another type of grouping which could benefit pluggable
> IPAM today.  It occurred to me as I was refreshing my memory on how
> pluggable IPAM works when there are multiple subnets on a network.
> Currently, Neutron's backend pulls the subnets and then tries to ask
> the IPAM driver for an IP on each one in turn [1].  This is
> inefficient and I think it is a natural opportunity to evolve the IPAM
> interface to allow this to be handled within the driver itself.  The
> driver could optimize it to avoid repeated round-trips to an external
> server.

Yes, that sounds sensible.  It would be nice to continue supporting the 
current pattern too though.

> Anyway, it occurred to me that this is just like segment aware IPAM
> except that the network is the group instead of the segment.  The IPAM
> driver could consider it another orthogonal grouping of subnets (even
> though it isn't really orthogonal to Neutron's point of view).  I
> could provide an implementation that would provide a shim for existing
> IPAM drivers to work without modification.  In fact, I could do that
> for all the types of grouping I've mentioned.

Yes - I think this is the same as I've been saying in previous replies, 
i.e. that the Neutron core can filter the subnets before it offers them 
to the driver.  Is that what you meant too?

 > Drivers could choose to
> sub-class the behavior to optimize it if they have the capability.
>
> Carl
>
> [1] 
> https://github.com/openstack/neutron/blob/4a6d05e410/neutron/db/ipam_pluggable_backend.py#L88

While we're here...

 def _ipam_try_allocate_ip(self, context, ipam_driver, port, ip_dict):
 factory = ipam_driver.get_address_request_factory()
 ip_request = factory.get_request(context, port, ip_dict)
 ipam_subnet = ipam_driver.get_subnet(ip_dict['subnet_id'])
 return ipam_subnet.allocate(ip_request)

What is the benefit of the separate "ipam_subnet = " line?  Why not just:

 def _ipam_try_allocate_ip(self, context, ipam_driver, port, ip_dict):
 factory = ipam_driver.get_address_request_factory()
 ip_request = factory.get_request(context, port, ip_dict)
 return ipam_driver.allocate(ip_request)

Similarly, what is the benefit of calling the driver twice to convert 
from (available info) to (request object) and then from (request object) 
to (IP allocation)?  Why not go directly from (available info) to (IP 
allocation)?

Finally, while I'm asking IPAM interface questions, what are subnet 
requests for?

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][magnum-ui] Have magnum jobs respect upper-constraints.txt

2016-03-30 Thread Amrith Kumar
Thanks Hongbin, I've updated the bug summary to be more generic.

-amrith

> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, March 30, 2016 11:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [magnum][magnum-ui] Have magnum jobs respect
> upper-constraints.txt
> 
> Hi team,
> 
> After a quick check, it seems python-magnumclient and magnum-ui don't use
> upper constraints. Magnum (the main repo) uses upper constraints in
> integration tests (gate-functional-*), but doesn't use them in others (e.g.
> py27, py34, pep8, docs, coverage). The absence of upper constraints could
> be problematic. Tickets were created to fix that:
> https://bugs.launchpad.net/trove/+bug/1563038
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: March-30-16 8:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [release][all] What is upper-constraints.txt?
> 
> Folks,
> 
> Quick primer/refresh because of some gate/CI issues we saw last few days
> with Routes===2.3
> 
> upper-constraints.txt is the current set of all the global libraries that
> should be used by all the CI jobs.
> 
> This file is in the openstack/requirements repo:
> http://git.openstack.org/cgit/openstack/requirements/tree/upper-
> constraints.txt
> http://git.openstack.org/cgit/openstack/requirements/tree/upper-
> constraints.txt?h=stable/mitaka
> 
> Anyone working on a project, please ensure that all CI jobs respect
> constraints, example from trove below. If jobs don't respect constraints
> then they are more likely to break:
> https://review.openstack.org/#/c/298850/
> 
> Anyone deploying openstack, please consult this file as it's the one
> *sane* set of libraries that we test with.
> 
> Yes, global-requirements.txt has the ranges that end up in project
> requirements files. However, upper-constraints.txt is what we test for
> sure in OpenStack CI.
> 
> Thanks,
> Dims
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-30 Thread Neil Jerram
On 29/03/16 19:16, Carl Baldwin wrote:
> I've been playing with this a bit on this patch set [1].  I haven't
> gotten very far yet but it has me thinking.
>
> Calico has a similar use case in mind as I do.  Essentially, we both
> want to group subnets to allow for aggregation of routes.  (a) In
> routed networks, we want to group them by segment and the boundaries
> are hard meaning that if an IP is requested from a particular segment,
> IPAM should fail if it can't allocate it from the same.
>
> (b) For Calico, I believe that the goal is to group by host.  Their
> boundaries are soft, meaning that it is okay to allocate any IP on the
> network but one that allows maximum route aggregation is strongly
> preferred.

Correct, and thanks for thinking about this case.

> (c) Brian Haley will soon post a spec to address the need to group
> subnets by service type.  This is another sort of grouping but is
> orthogonal to the need to group for routing purposes.  Here, we're
> trying to group like ports together so that we can use different types
> of addresses.  This kind of grouping could coexist with route grouping
> since they are orthogonal.
>
> Given all this grouping, it seems like it might make sense to add some
> sort of grouping feature to IPAM.  Here's how I'm thinking it will
> work.
>
> 1.  When a subnet is allocated, a group id(s) can be passed with the
> request.  IPAM will remember the group id with the subnet.
> 2.  When an IP address is needed, a group id(s) can be passed with the
> request.  IPAM will try to allocate from a subnet with a matching
> group id(s).

When you say "a group id(s) can be passed", are you thinking of the API 
into Neutron (e.g. from Nova), or of the API between the Neutron core 
and a pluggable IPAM driver?  (Or some other API?)

My guess is you mean the API between Neutron core and a pluggable IPAM 
driver.  For that case, I think your suggestion would make sense if 
information about all available subnets was passed upfront to the driver 
- i.e. whenever the network/subnet data model changes - but I am not 
sure if that is what happens.  Rather, I think it might be the case that 
the available subnets/CIDRs are passed to the driver for each IP 
allocation that is wanted.

If that last sentence is correct, then the Neutron core could do the 
filtering-by-group-id itself, and simply pass a filtered set of 
subnets/CIDRs to the driver, and I think that that would be simpler.
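(A sketch of what that core-side filtering could look like — a hypothetical helper, not existing Neutron code:)

```python
def filter_subnets_for_group(subnets, group_id):
    """Hand the IPAM driver only the subnets whose group matches, so the
    driver itself needs no group awareness; fall back to ungrouped
    ("legacy") subnets when nothing matches."""
    matching = [s for s in subnets if s.get('group_id') == group_id]
    return matching or [s for s in subnets if s.get('group_id') is None]

subnets = [
    {'cidr': '10.0.1.0/24', 'group_id': 'segment-a'},
    {'cidr': '10.0.2.0/24', 'group_id': 'segment-b'},
    {'cidr': '10.0.3.0/24', 'group_id': None},
]
```

The driver then sees an ordinary list of candidate subnets and existing drivers keep working unchanged.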

> 3.  If no IP address is available that exactly matches the group id(s)
> then IPAM may fall back to another subnet.  This behavior needs to be
> different for the various use cases mentioned which is where it gets
> kind of complicated.
>(a) No fallback is allowed.  The IP allocation should fail.
>(b) We can fall back to any other subnet.  There might be some
> reasons to prefer some over others but this could get complicated
> fast.
>(c) We can fall back to any subnet with None as its group (legacy
> subnets) but not to other groups (e.g. if I'm trying to allocate a
> floating IP address, I don't want to fall back to a subnet meant for
> DVR gateways because those aren't public IP addresses).
>
> I put (s) after group id in many cases above because it appears that
> we can have more than one kind of orthogonal grouping to consider at
> the same time.
>
> What do folks think?
>
> Am I trying to generalize too much and making it complicated?

FWIW, I don't think so.  But I'd like to be a lot surer about the shape 
of the existing pluggable IPAM interface.
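For concreteness, the three fallback behaviours (a)-(c) above could be expressed as a single policy function — names, policy strings, and dict shapes here are purely illustrative:

```python
def candidate_subnets(subnets, group_id, policy):
    """Order/limit candidate subnets per the three fallback behaviours."""
    exact = [s for s in subnets if s.get('group_id') == group_id]
    if policy == 'strict':        # (a) no fallback; empty list => allocation fails
        return exact
    if policy == 'soft':          # (b) prefer exact matches, allow any other subnet
        return exact + [s for s in subnets if s.get('group_id') != group_id]
    if policy == 'legacy_only':   # (c) fall back only to ungrouped subnets
        return exact or [s for s in subnets if s.get('group_id') is None]
    raise ValueError('unknown policy: %s' % policy)
```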

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][magnum-ui] Have magnum jobs respect upper-constraints.txt

2016-03-30 Thread Andreas Jaeger
On 03/30/2016 05:31 PM, Hongbin Lu wrote:
> Hi team,
> 
> After a quick check, it seems python-magnumclient and magnum-ui don't use 
> upper constraints. Magnum (the main repo) uses upper constraints in 
> integration tests (gate-functional-*), but doesn't use it in others (e.g. 
> py27, py34, pep8, docs, coverage). The absence of upper constraints could be 
> problematic. Tickets were created to fix that: 
> https://bugs.launchpad.net/trove/+bug/1563038 .

As mentioned in the other thread: do not run it in post jobs! Double-check
everything so you do not break the publishing of documents, release notes,
or tarballs.

Andreas

> Best regards,
> Hongbin
> 
> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com] 
> Sent: March-30-16 8:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [release][all] What is upper-constraints.txt?
> 
> Folks,
> 
> Quick primer/refresh because of some gate/CI issues we saw last few days with 
> Routes===2.3
> 
> upper-constraints.txt is the current set of all the global libraries that 
> should be used by all the CI jobs.
> 
> This file is in the openstack/requirements repo:
> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt
> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/mitaka
> 
> Anyone working on a project, please ensure that all CI jobs respect 
> constraints, example from trove below. If jobs don't respect constraints then 
> they are more likely to break:
> https://review.openstack.org/#/c/298850/
> 
> Anyone deploying openstack, please consult this file as it's the one
> *sane* set of libraries that we test with.
> 
> Yes, global-requirements.txt has the ranges that end up in project 
> requirements files. However, upper-constraints.txt is what we test for sure 
> in OpenStack CI.
> 
> Thanks,
> Dims
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Ihar Hrachyshka

Thierry Carrez  wrote:


Ihar Hrachyshka wrote:

I'd then have the same problem as Matt here as I'm the Release CPL
for Nova. That said, given I think I'm the only person probably having
the issue, I can say fair enough and try to clone myself before the
Summit :-)


Actually, now that I am a release CPL for Neutron, as well as stable
representative for the project, and both release and stable sessions are
overlapping with 2 out of 4 Neutron sessions on that day, plus I have a
talk to do that same day in the morning, I am concerned that I will need
to skip 3 of 4 design sessions for Neutron on Thursday. Which honestly
is *very* painful for me.

With that in mind, could we try to move at least some of those cross
sessions to e.g. 1:30 - 2:10 where we don’t have Neutron sessions at
all, neither any infra/qa slots [docs only]?


There is an Oslo slot we could maybe swap Stable with. Would that work ?


Depends on which slot we talk about. If it’s the morning one, that’s the  
time I have to give the talk, so it would not work for me; if that’s the  
one in the afternoon [at the time of a ‘docs’ session], that one would work  
for me just fine.


Thanks in advance,

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Segments, subnet types, and IPAM

2016-03-30 Thread Neil Jerram
On 11/03/16 23:20, Carl Baldwin wrote:
> Hi,

Hi Carl, and sorry for the lateness of this reply.

> I have started to get into coding [1] for the Neutron routed networks
> specification [2].
>
> This spec proposes a new association between network segments and
> subnets.  This affects how IPAM needs to work because until we know
> where the port is going to land, we cannot allocate an IP address for
> it.

Note: according to the text and discussion at 
https://review.openstack.org/#/c/263898/, Neutron actually _does_ 
already know where the port is going to land (i.e. the chosen host) at 
the time when it is first allocating an IP address.  At least in the 
most common case, which is when the user request is "launch instance(s) 
on a particular network".

> Also, IPAM will need to somehow be aware of segments.  We have
> proposed a host / segment mapping which could be transformed to a host
> / subnet mapping for IPAM purposes.
>
> I wanted to get the opinion of folks like Salvatore, John Belamaric,
> and you (if you interested) on this.  How will this affect the
> interface to pluggable IPAM and how can pluggable implementations can
> accommodate this change.

Well, first of all we have a problem that the pluggable IPAM interface 
is not documented as it is already.  So it is tricky to comment at all 
on how that interface might need to change. :-)

Petra Sargent, with a little of my help, has started documenting the 
interface at https://review.openstack.org/#/c/289460/, but I think there 
is still lots more to be said there - so I would encourage anyone with 
existing knowledge of the IPAM interface to go there and contribute 
useful chunks of extra explanatory text.

With that as a big caveat, here are my thoughts so far.  The pluggable 
IPAM interface has core Neutron code on one side, and pluggable IPAM 
drivers on the other.  As a general principle, it's better if we can 
keep complexity on the core side, and keep the IPAM drivers as simple as 
possible to write and maintain; because there is only one Neutron core, 
and there will in future be many IPAM drivers.

I believe it's already the case that the core tells the driver about the 
subnet(s) that the driver can allocate an IP from.  (Although I'm not 
sure exactly what form that takes, and also if the subnet(s) is/are 
specified on a per-instance basis or per-group of instances, or 
something else.)  Therefore, in the case where segments are being used, 
and subnets are defined with affinity to those segments, the core could 
handle that by reducing the set of subnets that it offers to the driver; 
and that would not require any change to existing IPAM drivers.

At least in the first implementation, I would _not_ pass any new 
segment-specific information over the pluggable IPAM interface (i.e. to 
the driver), because I don't see any reason for the driver to need this; 
I think it's better to contain the handling of that within the Neutron core.

I believe that the core does _not_ tell the driver about the chosen host 
for the port for which an IP allocation is wanted (i.e. 
port['binding:host_id']).  I would like this information to be passed to 
the driver, so as to enable cases where some kind of host-subnet 
affinity is desirable, but that affinity cannot be described by Neutron 
segment objects.  So in this case the driver should receive

- all of the subnet(s) that are defined for the relevant Network

- the chosen host

and it would use its own out-of-band knowledge to decide how to allocate 
an IP from some subrange of the available subnet(s), depending on the 
chosen host.
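A minimal sketch of such a driver-side decision, assuming the driver is handed the network's subnets plus the chosen host (the per-host preference table stands in for Calico-style out-of-band knowledge; all names are illustrative):

```python
import ipaddress

class HostAffinityAllocator(object):
    """Prefer a per-host CIDR when one is known, but fall back to any of
    the network's subnets -- i.e. "soft" boundaries, case (b)."""

    def __init__(self, preferred_cidr_by_host):
        self._preferred = preferred_cidr_by_host  # out-of-band knowledge
        self._used = set()

    def allocate(self, subnet_cidrs, host):
        preferred = self._preferred.get(host)
        # Try the host's preferred subnet first, then the rest in order.
        for cidr in sorted(subnet_cidrs, key=lambda c: c != preferred):
            for ip in ipaddress.ip_network(cidr).hosts():
                if ip not in self._used:
                    self._used.add(ip)
                    return str(ip)
        raise RuntimeError('no free addresses on this network')
```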

Finally, I believe that the current pluggable IPAM interface technically 
already allows the last paragraph to be achieved - but that it is pretty 
hard and complex to do that, as it requires subclassing many classes. 
Assuming I'm right about that, I don't think that such a simple 
interface enhancement should require so much work, and hence I'd prefer 
binding:host_id to be added to the core interface.

>  Obviously, we wouldn't require
> implementations to support it

"implementations" = existing IPAM drivers, here?  I think what we need 
can be done in a way such that there is no "it" for them to support.

> but routed networks wouldn't be very
> useful without it.

Here, I assume that "it" means "allocating IPs in a host- or 
segment-dependent way", and I agree with you that this is a practical 
requirement for large scale routed network usage.

>  So, those implementations would not be compatible
> when routed networks are deployed.

(Again, I think we shouldn't need to say this.)

> Another related topic was brought up in the recent Neutron mid-cycle.
> We talked about adding a service type attribute to to subnets.  The
> reason for this change is to allow operators to create special subnets
> on a network to be used only by certain kinds of ports.  For example,
> DVR fip namespace gateway ports burn a public IP for no good reason.
> This new feature would allow 

Re: [openstack-dev] [Fuel] Extra red tape for filing bugs

2016-03-30 Thread Roman Prykhodchenko
We also often use the bug tracker as a TODO tracker, and this template does not
work for TODOs at all. I understand that it’s not technically mandatory to follow
it, but if that Fuel Bug Checker is going to spam us about every single TODO, our
inboxes will overflow.

> 30 бер. 2016 р. о 17:37 Roman Prykhodchenko  написав(ла):
> 
> Guys,
> 
> I’m not trying to be a foreteller, but with a bug template this huge and 
> complicated, people will either not follow it or track bugs somewhere else. 
> Perhaps we should make it simpler?
> 
> Detailed bug description:
> 
> Steps to reproduce:
> 
> Expected results:
> 
> Actual result:
> 
> Reproducibility:
> 
> Workaround:
> 
> Impact:
> 
> Description of the environment:
> Operation system: 
> Versions of components: 
> Reference architecture: 
> Network model: 
> Related projects installed: 
> Additional information:
> 
> 
> 
> - romcheg



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Extra red tape for filing bugs

2016-03-30 Thread Roman Prykhodchenko
Guys,

I’m not trying to be a foreteller, but with a bug template this huge and 
complicated, people will either not follow it or track bugs somewhere else. 
Perhaps we should make it simpler?

Detailed bug description:
 
Steps to reproduce:
 
Expected results:
 
Actual result:
 
Reproducibility:
 
Workaround:
 
Impact:
 
Description of the environment:
 Operation system: 
 Versions of components: 
 Reference architecture: 
 Network model: 
 Related projects installed: 
Additional information:
 


- romcheg


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][magnum-ui] Have magnum jobs respect upper-constraints.txt

2016-03-30 Thread Hongbin Lu
Hi team,

After a quick check, it seems python-magnumclient and magnum-ui don't use upper 
constraints. Magnum (the main repo) uses upper constraints in integration tests 
(gate-functional-*), but doesn't use it in others (e.g. py27, py34, pep8, docs, 
coverage). The absence of upper constraints could be problematic. Tickets were 
created to fix that: https://bugs.launchpad.net/trove/+bug/1563038 .

Best regards,
Hongbin

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: March-30-16 8:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [release][all] What is upper-constraints.txt?

Folks,

Quick primer/refresh because of some gate/CI issues we saw last few days with 
Routes===2.3

upper-constraints.txt is the current set of all the global libraries that 
should be used by all the CI jobs.

This file is in the openstack/requirements repo:
http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt
http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/mitaka

Anyone working on a project, please ensure that all CI jobs respect 
constraints, example from trove below. If jobs don't respect constraints then 
they are more likely to break:
https://review.openstack.org/#/c/298850/

Anyone deploying openstack, please consult this file as it's the one
*sane* set of libraries that we test with.

Yes, global-requirements.txt has the ranges that end up in project requirements 
files. However, upper-constraints.txt is what we test for sure in OpenStack CI.
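Conceptually the file is just a flat map of exact pins, one `name===version` per line, which jobs typically feed to pip via its `-c`/`--constraint` option so that the pinned versions win without becoming hard requirements. (A toy parser to illustrate the format — not project code:)

```python
def parse_constraints(text):
    """Return {package: pinned_version} from upper-constraints.txt-style
    content: one 'name===version' pin per line, blanks/comments ignored."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        name, sep, version = line.partition('===')
        if sep:
            pins[name] = version
    return pins

sample = "# pinned by OpenStack CI\nRoutes===2.2\noslo.config===3.9.0\n"
```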

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Thierry Carrez

Ihar Hrachyshka wrote:

I'd then have the same problem as Matt here as I'm the Release CPL
for Nova. That said, given I think I'm the only person probably having
the issue, I can say fair enough and try to clone myself before the
Summit :-)


Actually, now that I am a release CPL for Neutron, as well as stable
representative for the project, and both release and stable sessions are
overlapping with 2 out of 4 Neutron sessions on that day, plus I have a
talk to do that same day in the morning, I am concerned that I will need
to skip 3 of 4 design sessions for Neutron on Thursday. Which honestly
is *very* painful for me.

With that in mind, could we try to move at least some of those cross
sessions to e.g. 1:30 - 2:10 where we don’t have Neutron sessions at
all, neither any infra/qa slots [docs only]?


There is an Oslo slot we could maybe swap Stable with. Would that work ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-30 Thread Matt Riedemann

Intel has a few third party CIs in the third party systems wiki [1].

I was talking with Moshe Levi today about expanding coverage for 
mellanox CI in nova, today they run an SRIOV CI for vnic type 'direct'. 
I'd like them to also start running their 'macvtap' CI on the same nova 
changes (that job only runs in neutron today I think).


I'm trying to see what we have for coverage on these different NFV 
configurations, and because of limited resources to run NFV CI, don't 
want to duplicate work here.


So I'm wondering what the various Intel NFV CI jobs run, specifically 
the Intel Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].


From the wiki it looks like the Intel Networking CI tests ovs-dpdk but 
only for Neutron. Could that be expanded to also test on Nova changes 
that hit a subset of the nova tree?


I really don't know what the latter two jobs test as far as 
configuration is concerned, the descriptions in the wikis are pretty 
empty (please update those to be more specific).


Please also include in the wiki the recheck method for each CI so I 
don't have to dig through Gerrit comments to find one.


[1] https://wiki.openstack.org/wiki/ThirdPartySystems
[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-03-30 Thread Pavel Bondar
On 12.02.2016 15:01, Ihar Hrachyshka wrote:
> Salvatore Orlando  wrote:
>
>> On 11 February 2016 at 20:17, John Belamaric
>>  wrote:
>>
>>> On Feb 11, 2016, at 12:04 PM, Armando M.  wrote:
>>>
>>>
>>>
>>> On 11 February 2016 at 07:01, John Belamaric
>>>  wrote:
>>>
>>>
>>>
>>> It is only internal implementation changes.
>>>
>>> That's not entirely true, is it? There are config variables to
>>> change and it opens up the possibility of a scenario that the
>>> operator may not care about.
>>>
>>
>> If we were to remove the non-pluggable version altogether, then the
>> default for ipam_driver would switch from None to internal.
>> Therefore, there would be no config file changes needed.
>>
>> I think this is correct.
>> Assuming the migration path to Neutron will include the data
>> transformation from built-in to pluggable IPAM, do we just remove the
>> old code and models?
>> On the other hand do you think it might make sense to give operators
>> a chance to rollback - perhaps just in case some nasty bug pops up?
>
> They can always revert to a previous release. And if we enable the new
> implementation start of Newton, we’ll have enough time to fix bugs
> that will pop up in gate.
>
We are now in early Newton, so it is a good time to discuss the plan for
pluggable IPAM for this release cycle.

Kevin Benton commented on review page for current migration to pluggable
approach [1]:
>
> IMO this cannot be optional. It's going to be a nightmare to try to
> support two IPAM systems that people may have switched between at
> various points in time. I would much rather go all-in on the upgrade
> by making it automatic with alembic and removing the option to use the
> legacy IPAM code completely.
>
> I've already been bitten by testing the new IPAM code with the config
> option and switching back which resulted in undeletable subnets. Now
> we can always yell at anyone that changes the config option like I
> did, but it takes a lot of energy to yell at users and they don't care
> for it much. :)
>
> Even ignoring the support issue, consider schema changes. This
> migration script will have to be constantly updated to work with
> whatever the current state of the schema is on both sets of ipam
> tables. Without constant in-tree testing enforcing that, we are one
> schema change away from this script breaking.
>
> So let's bite the bullet and make this a normal contract migration.
> Either the new ipam system is stable enough for us to commit to
> supporting it and fix whatever bugs it may have, or we need to remove
> it from the tree. Supporting both systems is unsustainable.
>
This sounds reasonable to me. It simplifies support and testing (testing
both implementations in the gate with full coverage is not easy).
From a user perspective there should be no visible changes between
pluggable IPAM and the non-pluggable implementation.
And making the switch early in the release cycle gives us enough time to fix
any bugs we find in the pluggable implementation.

Right now we have some open bugs for pluggable code [2], but they are
still possible to fix.

Does it make sense to you?

[1] https://review.openstack.org/#/c/277767/
[2] https://bugs.launchpad.net/neutron/+bug/1543094
>> What's the team level of confidence in the robustness of the
>> reference IPAM driver?
>>
>> Salvatore
>>
>>
>>
>>
>> John
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Relationship between physical networks and segment

2016-03-30 Thread Neil Jerram
On 29/03/16 20:42, Miguel Lavalle wrote:
> Hi,

Hi Miguel,

> I am writing a patchset to build a mapping between hosts and network
> segments. The goal of this mapping is to be able to say whether a host
> has access to a given network segment. I am building this mapping
> assuming that if a host A has a bridge mapping containing 'physnet 1'
> and a segment has 'physnet 1' in its 'physical_network' attribute, then
> the host has access to that segment.
>
> 1) Is this assumption correct? Looking at method check_segment_for_agent
> in
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_agent.py#n180
> seems to me to suggest that my assumption is correct?

Yes, I would say so.  In other words: if a host can access a particular 
physical network, it can access all segments that use that physical network.

>
> 2) Furthermore, when a segment is mapped to a physical network, is there
> a one to one relationship between segments and physical nets?

No; I would say that segments are N:1 with physical networks, with VLANs 
being the most obvious example.
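A simplified sketch of that access check, modelled loosely on check_segment_for_agent (the dict shapes are illustrative, and the real method also distinguishes flat/VLAN/tunnel network types):

```python
def host_has_access(segment, bridge_mappings):
    """A host reaches a segment if the segment's physical_network is one
    the host's agent has a bridge mapping for (flat/VLAN case)."""
    return segment.get('physical_network') in bridge_mappings

# Many segments can share one physical network (N:1), e.g. two VLANs:
segments = [
    {'physical_network': 'physnet1', 'segmentation_id': 100},
    {'physical_network': 'physnet1', 'segmentation_id': 200},
]
mappings = {'physnet1': 'br-eth1'}
```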

Neil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-30 Thread Hongbin Lu
Another use case I can think of is to cache the required docker images in the 
Glance image.

This is an important use case because we have containerized most of the COE 
components (e.g. kube-scheduler, swarm-manager, etc.). As a result, each bay 
needs to pull docker images over the Internet on provisioning or scaling stage. 
If a large number of bays pull docker images at the same time, it will generate 
a lot of traffic. Therefore, it is desirable to have all the required docker 
images pre-downloaded into the Glance image. I expect we can leverage 
diskimage-builder to achieve the goal.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: March-29-16 4:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Generate atomic images using 
diskimage-builder


In multiple occasions in the past, we have had to use version of some software 
that's not available yet
in the upstream image for bug fixes or new features (Kubernetes, Docker, 
Flannel,...). Eventually the upstream
image would catch up, but having the tool to customize let us push forward with 
development, and gate tests
if it makes sense.

Ton Ngo,



From: Yolanda Robla Mota 
>
To: 
>
Date: 03/29/2016 01:35 PM
Subject: Re: [openstack-dev] [magnum] Generate atomic images using 
diskimage-builder





So the advantages I can see with diskimage-builder are:
- we reuse the same tooling that is present in other openstack projects
to generate images, rather than relying on an external image
- it improves the control we have on the contents of the image, instead
of seeing that as a black box. At the moment we can rely on the default
tree for fedora 23, but this can be updated per magnum needs
- reusability: we have atomic 23 now, but why not create magnum images
with dib, for ubuntu, or any other distros ? Relying on
diskimage-builder makes it easy and flexible, because it's a matter of
adding the right elements.

Best
Yolanda

El 29/03/16 a las 21:54, Steven Dake (stdake) escribió:
> Adrian,
>
> Makes sense.  Do the images have to be built to be mirrored though?  Can't
> they just be put on the mirror sites from upstream?
>
> Thanks
> -steve
>
> On 3/29/16, 11:02 AM, "Adrian Otto" 
> > wrote:
>
>> Steve,
>>
>> I¹m very interested in having an image locally cached in glance in each
>> of the clouds used by OpenStack infra. The local caching of the glance
>> images will produce much faster gate testing times. I don¹t care about
>> how the images are built, but we really do care about the performance
>> outcome.
>>
>> Adrian
>>
>>> On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
>>> >
>>> wrote:
>>>
>>> Yolanda,
>>>
>>> That is a fantastic objective.  Matthieu asked why build our own images
>>> if
>>> the upstream images work and need no further customization?
>>>
>>> Regards
>>> -steve
>>>
>>> On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
>>> >
>>> wrote:
>>>
 Hi
 The idea is to build our own images using diskimage-builder, rather than
 downloading the image from external sources. That way, the image can
 live in our mirrors, and is built using the same pattern as other
 images
 used in OpenStack.
 It also opens the door to customize the images, using custom trees, if
 there is a need for it. Actually we rely on official tree for Fedora 23
 Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
 default.

 Best,
 Yolanda

On 29/03/16 at 10:17, Mathieu Velten wrote:
> Hi,
>
> We are using the official Fedora Atomic 23 images here (on Mitaka M1
> however) and it seems to work fine with at least Kubernetes and Docker
> Swarm.
> Any reason to continue building a specific Magnum image?
>
> Regards,
>
> Mathieu
>
> On Wednesday 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
>> Hi
>> I wanted to start a discussion on how Fedora Atomic images are being
>> built. Currently the process for generating the atomic images used
>> on
>> Magnum is described here:
>> http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html
>> The image needs to be built manually, uploaded to fedorapeople, and
>> then
>> consumed from there in the magnum tests.
>> I have been working on a feature to allow 

Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

2016-03-30 Thread Gal Sagie
All these slots are fine with me; I added the Kuryr team on CC to make sure
most of them can attend any of these times.



On Wed, Mar 30, 2016 at 5:12 PM, Hongbin Lu  wrote:

> Gal,
>
>
>
> Thursday 4:10 – 4:50 conflicts with a Magnum workroom session, but we can
> choose from:
>
> · 11:00 – 11:40
>
> · 11:50 – 12:30
>
> · 3:10 – 3:50
>
>
>
> Please let us know if some of the slots don’t work well with your schedule.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Gal Sagie [mailto:gal.sa...@gmail.com]
> *Sent:* March-30-16 2:00 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum][kuryr] Shared session in design
> summit
>
>
>
> Anything you pick is fine with me, the Kuryr fishbowl session is on
> Thursday 4:10 - 4:50, and I personally
>
> think the Magnum integration is important enough and I don't mind using
> this time for the session as well.
>
>
>
> Either way i am also ok with the 11-11:40 and the 11:50-12:30 sessions or
> the 3:10-3:50
>
>
>
> On Tue, Mar 29, 2016 at 11:32 PM, Hongbin Lu 
> wrote:
>
> Hi all,
>
>
>
> As discussed before, our team members want to establish a shared session
> between Magnum and Kuryr. We expected a lot of attendees in the session so
> we need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl
> session, and they possibly need it for other purposes. A solution is to
> promote one of the Magnum fishbowl sessions to be the shared session, or
> leverage one of the free fishbowl slots. The schedule is as below.
>
>
>
> Please vote your favorite time slot:
> http://doodle.com/poll/zuwercgnw2uecs5y .
>
>
>
> Magnum fishbowl session:
>
> · 11:00 - 11:40 (Thursday)
>
> · 11:50 - 12:30
>
> · 1:30 - 2:10
>
> · 2:20 - 3:00
>
> · 3:10 - 3:50
>
>
>
> Free fishbowl slots:
>
> · 9:00 – 9:40 (Thursday)
>
> · 9:50 – 10:30
>
> · 3:10 – 3:50 (conflict with Magnum session)
>
> · 4:10 – 4:50 (conflict with Magnum session)
>
> · 5:00 – 5:40 (conflict with Magnum session)
>
>
>
> Best regards,
>
> Hongbin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Ihar Hrachyshka

Sylvain Bauza  wrote:




On 29/03/2016 09:48, Thierry Carrez wrote:

Matt Riedemann wrote:

I see one problem. The single stable team fishbowl session is scheduled
for the last slot on Thursday (5pm). The conflict I have with that is
the nova team does it's priority/scheduling talk in the last session on
the last day for design summit fishbowl sessions (non-meetup style).

I'm wondering if we could move the stable session to 4:10 on Thursday. I
don't see any infra/QA sessions happening at that time, which is the
cross-project people we need for the stable session.


There is the release management fishbowl session at 4:10pm on Thursday  
that would conflict (a lot of people are involved in both). Maybe we  
could swap those two (put stable at 4:10pm and relmgt at 5:00pm). It  
looks like that would work?


I'd then have the same problem as Matt here as I'm the Release CPL for
Nova. That said, given I think I'm the only person probably having the  
issue, I can say fair enough and try to clone myself before the Summit :-)


Actually, now that I am a release CPL for Neutron, as well as stable  
representative for the project, and both release and stable sessions are  
overlapping with 2 out of 4 Neutron sessions on that day, plus I have a  
talk to do that same day in the morning, I am concerned that I will need to  
skip 3 of 4 design sessions for Neutron on Thursday. Which honestly is  
*very* painful for me.


With that in mind, could we try to move at least some of those cross  
sessions to e.g. 1:30 - 2:10 where we don’t have Neutron sessions at all,  
nor any infra/qa slots [docs only]?


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kloudbuster] authorization failed problem

2016-03-30 Thread Akshay Kumar Sanghai
Hi Alec,
Thanks for clarifying. I did not have the cinder service previously; it was
not a complete setup. Now I have set up the cinder service.
Output of keystone service-list:
[image: Inline image 1]
I installed OpenStack using the installation guide for Ubuntu, and
kloudbuster is a PyPI-based installation. So I am running
kloudbuster using the CLI option:
kloudbuster --tested-rc keystone-openrc.sh --tested-passwd * --config
kb.cfg

contents of kb.cfg:
image_name: 'kloudbuster'

I added the kloudbuster v5 version as glance image with name as
kloudbuster.

I don't understand some basic things; if you can help, that would be
great.
- Does the server side mean the cloud generating the traffic, and the client
side mean the cloud on which connections are established? Can you
please elaborate on client, server and proxy?
- While running kloudbuster, I saw "setting up redis connection". Can
you please explain which connection is established and why? Is it
KB_PROXY?

Please find the kloudbuster run attached as a file. I have still not
succeeded in running kloudbuster; there are some errors.
I appreciate your help Alec.

Thanks,
Akshay

On Mon, Mar 28, 2016 at 8:59 PM, Alec Hothan (ahothan) 
wrote:

>
> Can you describe what you mean by "do not have a cinder service"?
> Can you provide the output of "keystone service-list"?
>
> We'd have to know a bit more about what you have been doing:
> how did you install your openstack, how did you install kloudbuster, which
> kloudbuster qcow2 image version did you use, how did you run kloudbuster
> (cli or REST or web UI), what config file have you been using, complete log
> of the run (including backtrace)...
>
> But the key is - you should really have a fully working openstack
> deployment before using kloudbuster. Nobody has ever tried so far to use
> kloudbuster without such a basic service as cinder working.
>
> Thanks
>
>   Alec
>
>
>
> From: Akshay Kumar Sanghai 
> Date: Monday, March 28, 2016 at 6:51 AM
> To: OpenStack List , Alec Hothan <
> ahot...@cisco.com>
> Cc: "Yichen Wang (yicwang)" 
> Subject: Re: [openstack-dev] [kloudbuster] authorization failed problem
>
> Hi Alec,
> Thanks for the help. I ran into another problem. At present I do not have
> a cinder service. So, when I am trying to run kloudbuster, I am getting
> this error:
> "EndpointNotFound: publicURL endpoint for volumev2 service not found"
> Is it possible to run the scale test (creation of VMs, router, network)
> without having a cinder service? Any option that can be used so that
> kloudbuster can run without cinder.
>
> Thanks,
> Akshay
>
> On Wed, Mar 23, 2016 at 9:05 PM, Alec Hothan (ahothan) 
> wrote:
>
>> Hi Akshay
>>
>> The URL you are using is a private address (
>> http://192.168.138.51:5000/v2.0) and is likely the reason it does not
>> work.
>> If you run the kloudbuster App in the cloud, this app needs to have
>> access to the cloud under test.
>> So even if you can access 192.168.138.51 from your local browser (which
>> runs on your workstation or laptop) it may not be accessible from a VM that
>> runs in your cloud.
>> For that to work you need to get an URL that is reachable from the VM.
>>
>> In some cases where the cloud under test is local, it is easier to just
>> run kloudbuster locally as well (from the same place where you can ping
>> 192.168.138.51).
>> You can either use a local VM to run the kloudbuster image (vagrant,
>> virtual box...) or just simpler, install kloudbuster locally using git
>> clone or pip install (see the installation instructions in the doc
>> http://kloudbuster.readthedocs.org/en/latest/).
>>
>> Regards,
>>
>>Alec
>>
>>
>>
(vkb) root@controller:~# kloudbuster --tested-rc keystone-openrc.sh 
--tested-passwd sanghai --config kb.cfg
2016-03-30 19:37:35 WARNING No public key is found or specified to instantiate 
VMs. You will not be able to access the VMs spawned by KloudBuster.
2016-03-30 19:37:36 INFO Creating kloud: KBs
2016-03-30 19:37:36 INFO Creating kloud: KBc
2016-03-30 19:37:36 INFO Creating tenant: KBs-T0
2016-03-30 19:37:36 INFO Creating user: KBs-T0-U
2016-03-30 19:37:36 INFO Creating routers and networks for tenant KBs-T0
2016-03-30 19:37:38 INFO Scheduled to create VMs for network KBs-T0-U-R0-N0...
2016-03-30 19:37:38 INFO Creating tenant: KBc-T0
2016-03-30 19:37:38 INFO Creating user: KBc-T0-U
2016-03-30 19:37:39 INFO Creating routers and networks for tenant KBc-T0
2016-03-30 19:37:40 INFO Scheduled to create VMs for network KBc-T0-U-R0-N0...
2016-03-30 19:37:41 INFO Creating Instance: KB-PROXY
2016-03-30 19:37:51 INFO Setting up the redis connections...
2016-03-30 19:39:59 INFO Preparing metadata for VMs... (Server)
2016-03-30 19:39:59 INFO Creating Instance: KBs-T0-U-R0-N0-I0
2016-03-30 19:40:08 INFO Preparing metadata for VMs... (Client)
2016-03-30 19:40:08 INFO Creating Instance: 

Re: [openstack-dev] [release][all] What is upper-constraints.txt?

2016-03-30 Thread Amrith Kumar
> -Original Message-
> From: Andreas Jaeger [mailto:a...@suse.com]
> Sent: Wednesday, March 30, 2016 9:03 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [release][all] What is upper-constraints.txt?
> 
> On 2016-03-30 14:33, Davanum Srinivas wrote:
> > Folks,
> >
> > Quick primer/refresh because of some gate/CI issues we saw last few
> > days with Routes===2.3
> >
> > upper-constraints.txt is the current set of all the global libraries
> > that should be used by all the CI jobs.
> 
> We're not ready yet for such a general recommendation, see below for some
> details.
> 
> > This file is in the openstack/requirements repo:
> > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constr
> > aints.txt
> > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constr
> > aints.txt?h=stable/mitaka
> >
> > Anyone working on a project, please ensure that all CI jobs respect
> > constraints, example from trove below. If jobs don't respect
> > constraints then they are more likely to break:
> > https://review.openstack.org/#/c/298850/
> >
> > Anyone deploying openstack, please consult this file as it's the one
> > *sane* set of libraries that we test with.
> >
> > Yes, global-requirements.txt has the ranges that end up in project
> > requirements files. However, upper-constraints.txt is what we test for
> > sure in OpenStack CI.
> 
> 
> Note that upper-constraints is not ready for full usage in the project.
> It only works in check and gate jobs but especially does not work in post
> jobs. If you implement it improperly, your jobs will fail - and post jobs
> will fail silently (see also [1] for an infra discussion).
> Before infra can support this everywhere, our tools need to be fixed to
> handle constraints in all queues.
> 
> Right now, I consider upper-constraints experimental since it does not
> work for all queues and is not fool-proof. So, if you go down this road,
> triple check that *all* your jobs do the right thing,

[amrith] I'm really thrilled that dims sent this to the ML, and that we got 
your review comments, Andreas. I'll try my best to fix them and ping you for a
re-review.

> 
> Andreas
> 
> [1]
> http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-15-
> 19.03.html
> 
> 
> 
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all] What is upper-constraints.txt?

2016-03-30 Thread Amrith Kumar
> -Original Message-
> From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Wednesday, March 30, 2016 9:01 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [release][all] What is upper-constraints.txt?
> 
> On Wed, Mar 30, 2016 at 08:33:06AM -0400, Davanum Srinivas wrote:
> > Folks,
> >
> > Quick primer/refresh because of some gate/CI issues we saw last few
> > days with Routes===2.3
> >
> > upper-constraints.txt is the current set of all the global libraries
> > that should be used by all the CI jobs.
> >
> > This file is in the openstack/requirements repo:
> > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constr
> > aints.txt
> > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constr
> > aints.txt?h=stable/mitaka
> >
> > Anyone working on a project, please ensure that all CI jobs respect
> > constraints, example from trove below. If jobs don't respect
> > constraints then they are more likely to break:
> > https://review.openstack.org/#/c/298850/
> 
> While I agree that projects should do this, do keep in mind that projects
> that do not run tests on openstack/requirements changes still have plenty
> of room for breaks when u-c changes are merged. :)
> 

[amrith] Speaking only from personal experience, I'd rather see gate/check jobs 
fail when u-c changes, rather than each time some python library in the wide 
world changes in an unexpected way. But, I suspect there are benefits in having 
canaries in the coal mine.

The more interesting part of this though is that we should be exhorting 
deployers to rely on u-c.txt or potentially something better like pip freeze. 
But I'm not a deployer and I don't know whether that would work for deployers 
at large.
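For what it's worth, consuming upper-constraints.txt from pip is straightforward; a minimal sketch (the package installed here is only an example):

```shell
# Fetch the constraint set that CI tested for a given branch, then install
# against it; -c pins versions without adding any new requirements.
curl -o upper-constraints.txt \
  "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka"
pip install -c upper-constraints.txt python-novaclient
```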

> // jim
> 
> >
> > Anyone deploying openstack, please consult this file as it's the one
> > *sane* set of libraries that we test with.
> >
> > Yes, global-requirements.txt has the ranges that end up in project
> > requirements files. However, upper-constraints.txt is what we test for
> > sure in OpenStack CI.
> >
> > Thanks,
> > Dims
> >
> > --
> > Davanum Srinivas :: https://twitter.com/dims
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the blueprint"support-private-registry"

2016-03-30 Thread Kai Qiang Wu
I agree that support-private-registry should be secure, as insecure
seems not very useful for production use.
I also understood the point that setting up the related CA could be more
difficult than plain HTTP, but we want to know if
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

could address the issue and make the templates clearer to understand. If a
related patch or spec is proposed, we are glad to review it and make it better.




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Ricardo Rocha 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   30/03/2016 09:09 pm
Subject:Re: [openstack-dev] [magnum] Discuss the
blueprint   "support-private-registry"



Hi.

On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao  wrote:
>
> Hi Hongbin
>
> Thanks for starting this thread,
>
>
>
> I initially proposed this bp because I am in China, which is behind the
> Great Firewall, and cannot access gcr.io directly. After checking our
> cloud-init script, I see that
>
> lots of code is *hard coded* to use gcr.io; I personally thought this is
> not a good idea. We cannot force the user/customer to have internet access
> in their environment.
>
> I proposed to use insecure-registry to give customers/users (Chinese, or
> whoever doesn't have gcr.io access) a chance to switch to using their own
> insecure registry to deploy
> a k8s/swarm bay.
>
> For your question:
>>  Is the private registry secure or insecure? If secure, how to handle
>> the authentication secrets. If insecure, is it OK to connect a secure
>> bay to an insecure registry?
> An insecure registry should be a 'secure' one, since the customer needs
> to set it up and make sure it's a clean one; in this case, it could be a
> private cloud.
>
>>  Should we provide instructions for users to pre-install the private
>> registry? If not, how do we verify the correctness of this feature?
>
> The simple way to pre-install a private registry is using an insecure
> registry, and docker.io has very simple steps to start it [1].
> Otherwise, docker registry v2 also supports a TLS-enabled mode [2], but
> this will require telling the docker client the key and crt files, which
> will make "support-private-registry" complex.
>
> [1] https://docs.docker.com/registry/
> [2] https://docs.docker.com/registry/deploying/

'support-private-registry' and 'allow-insecure-registry' sound different to
me.

We're using an internal docker registry at CERN (v2, TLS enabled), and
have the magnum nodes set up to use it.

We just install our CA certificates in the nodes (cp to
/etc/pki/ca-trust/source/anchors/, update-ca-trust) - we had to change the
HEAT templates for that, and submitted a blueprint to be able to do
similar things in a cleaner way:
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

That's all that is needed, the images are then prefixed with the
registry dns location when referenced - example:
docker.cern.ch/my-fancy-image.
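For illustration, the node-side steps amount to roughly the following (the CA file name is an assumption, and this is a manual sketch of what the template change automates):

```shell
# Make the node trust the internal registry's CA (Fedora/Atomic trust
# store paths), then pull images by the registry's DNS-prefixed name.
sudo cp internal-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
sudo systemctl restart docker    # pick up the updated trust store
docker pull docker.cern.ch/my-fancy-image
```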

Things we found on the way:
- registry v2 doesn't seem to allow anonymous pulls (you can always
add an account with read-only access everywhere, but it means you need
to always authenticate at least with this account)
https://github.com/docker/docker/issues/17317
- swarm 1.1 and k8s 1.0 allow authentication to the registry from
the client (which was good news, and it works fine), handy if you want
to push/pull with authentication.

Cheers,
  Ricardo

>
>
>
> On 2016年03月30日 07:23, Hongbin Lu wrote:
>
> Hi team,
>
>
>
> This is the item we didn’t have time to discuss in our team meeting, so I
> started the discussion in here.
>
>
>
> Here is the blueprint:
> https://blueprints.launchpad.net/magnum/+spec/support-private-registry .
> Per my understanding, the goal of the BP is to allow users to specify the
> url of their private docker registry where the bays pull the kube/swarm
> images (if they are not able to access docker hub or other public
> registry). An assumption is that users need to pre-install their own
> private registry and upload all the required images there. There are
> several potential issues with this proposal:
>
> · Is the private registry secure or insecure? If secure, how to
> handle the authentication secrets. If insecure, is it OK to connect a
> secure bay to an insecure registry?
>
> · Should we provide an instruction for users to pre-install the
> private registry? If not, how to verify the correctness of this feature?
>
>
>
> Thoughts?
>
>
>
> Best regards,
>
> 

Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-30 Thread Tomasz 'Zen' Napierala
Hi,

Do we have any requirements for the new tool? Do we know what we don't like
about the current implementation, what should be avoided, etc.? Before that
we can only speculate.
From my ops experience, shotgun-like tools will not work conveniently on
medium to big environments. Even on a medium env, the amount of logs is just
too huge to handle with such a simple tool. In such environments a better
pattern is to use a dedicated log collection / analysis tool, just like
StackLight.
On the other hand, I'm not sure if ansible is the right tool for that. It has
some features (like the 'fetch' module) but in general it's a configuration
management tool, and I'm not sure how it would act under such heavy load.
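For reference, the kind of ad-hoc invocations being discussed would look roughly like this (the inventory group name and paths are illustrative assumptions):

```shell
# Run a diagnostic command on every node, then pull a log file from each
# of them back to the control host; 'fuel_nodes' is a hypothetical group.
ansible fuel_nodes -m command -a "df -h"
ansible fuel_nodes -m fetch \
    -a "src=/var/log/messages dest=/tmp/diagnostic/"
```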

Regards,

> On 30 Mar 2016, at 15:20, Vladimir Kozhukalov  
> wrote:
> 
> ​Igor,
> 
> I can not agree more. Wherever possible we should
> use existent mature solutions. Ansible is really
> convenient and well known solution, let's try to 
> use it. 
> 
> Yet another thing should be taken into account.
> One of Shotgun's features is the diagnostic report
> that can then be attached to bugs to identify
> the contents of the env. This report could also be
> used to reproduce the env and then fight a bug.
> I'd like us to have this kind of report.
> Is it possible to implement such a feature
> using Ansible? If yes, then let's switch to Ansible
> as soon as possible.
> 
> ​
> 
> Vladimir Kozhukalov
> 
> On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky  
> wrote:
> Neil Jerram wrote:
> > But isn't Ansible also over-complicated for just running commands over SSH?
> 
> It may not be so "simple" to ignore that. Ansible has a lot of modules
> which might be very helpful. For instance, Shotgun makes a database
> dump and there're Ansible modules with the same functionality [1].
> 
> Don't think I'm advocating Ansible as a replacement. My point is, let's
> think about reusing ready solutions. :)
> 
> - igor
> 
> 
> [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
> 
> On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram  
> wrote:
> >
> > FWIW, as a naive bystander:
> >
> > On 30/03/16 11:06, Igor Kalnitsky wrote:
> >> Hey Fuelers,
> >>
> >> I know that you probably wouldn't like to hear that, but in my opinion
> >> Fuel has to stop using Shotgun. It's nothing more but a command runner
> >> over SSH. Besides, it has well known issues such as retrieving remote
> >> directories with broken symlinks inside.
> >
> > It makes sense to me that a command runner over SSH might not need to be
> > a whole Fuel-specific component.
> >
> >> So I propose to find a modern alternative and reuse it. If we stop
> >> supporting Shotgun, we can spend extra time to focus on more important
> >> things.
> >>
> >> As an example, we can consider to use Ansible. It should not be tricky
> >> to generate Ansible playbook instead of generating Shotgun one.
> >> Ansible is a well-known tool for devops and cloud operators, and
> >> we will only benefit if we provide the possibility to extend diagnostic
> >> recipes in the usual (for them) way. What do you think?
> >
> > But isn't Ansible also over-complicated for just running commands over SSH?
> >
> > Neil
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-30 Thread Adam Heczko
Osquery [1] could also be considered, as it provides a lot of useful
information in a convenient way.

[1] https://osquery.io/


On Wed, Mar 30, 2016 at 3:20 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> ​Igor,
>
> I can not agree more. Wherever possible we should
> use existent mature solutions. Ansible is really
> convenient and well known solution, let's try to
> use it.
>
> Yet another thing should be taken into account.
> One of Shotgun's features is the diagnostic report
> that can then be attached to bugs to identify
> the contents of the env. This report could also be
> used to reproduce the env and then fight a bug.
> I'd like us to have this kind of report.
> Is it possible to implement such a feature
> using Ansible? If yes, then let's switch to Ansible
> as soon as possible.
>
> ​
>
> Vladimir Kozhukalov
>
> On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky 
> wrote:
>
>> Neil Jerram wrote:
>> > But isn't Ansible also over-complicated for just running commands over
>> SSH?
>>
>> It may not be so "simple" to ignore that. Ansible has a lot of modules
>> which might be very helpful. For instance, Shotgun makes a database
>> dump and there're Ansible modules with the same functionality [1].
>>
>> Don't think I'm advocating Ansible as a replacement. My point is, let's
>> think about reusing ready solutions. :)
>>
>> - igor
>>
>>
>> [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
>>
>> On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram 
>> wrote:
>> >
>> > FWIW, as a naive bystander:
>> >
>> > On 30/03/16 11:06, Igor Kalnitsky wrote:
>> >> Hey Fuelers,
>> >>
>> >> I know that you probably wouldn't like to hear that, but in my opinion
>> >> Fuel has to stop using Shotgun. It's nothing more but a command runner
>> >> over SSH. Besides, it has well known issues such as retrieving remote
>> >> directories with broken symlinks inside.
>> >
>> > It makes sense to me that a command runner over SSH might not need to be
>> > a whole Fuel-specific component.
>> >
>> >> So I propose to find a modern alternative and reuse it. If we stop
>> >> supporting Shotgun, we can spend extra time to focus on more important
>> >> things.
>> >>
>> >> As an example, we can consider to use Ansible. It should not be tricky
>> >> to generate Ansible playbook instead of generating Shotgun one.
>> >> Ansible is a well-known tool for devops and cloud operators, and
>> >> we will only benefit if we provide the possibility to extend diagnostic
>> >> recipes in the usual (for them) way. What do you think?
>> >
>> > But isn't Ansible also over-complicated for just running commands over
>> SSH?
>> >
>> > Neil
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mitaka RC packages for openSUSE and SLES available

2016-03-30 Thread Thomas Bechtold
Hi,

In the last few weeks we've been working hard on stabilizing the Mitaka
packages, and the packages currently available in Cloud:OpenStack:Mitaka
[0] pass early testing.

Feel free to try them out by adding the repository:

http://download.opensuse.org/repositories/Cloud:/OpenStack:/Mitaka/$DISTRO/

to your repository list. We currently maintain + test the packages for
SLE 12SP1 and openSUSE Leap 42.1.
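For example, on openSUSE Leap 42.1 that amounts to something like the following (the package name is only an example - check the repository for the exact names):

```shell
# Add the Cloud:OpenStack:Mitaka repository and install a package from it.
zypper addrepo \
  http://download.opensuse.org/repositories/Cloud:/OpenStack:/Mitaka/openSUSE_Leap_42.1/ \
  Cloud-OpenStack-Mitaka
zypper refresh
zypper install openstack-nova
```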
We also started to automatically track the stable/mitaka branches for
the different services so the packages are automatically updated when CI
passes.

If you find issues, please do not hesitate to report them to
opensuse-cloud at opensuse.org or to https://bugzilla.opensuse.org/

Thanks !

Have a lot of fun,
Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

2016-03-30 Thread Hongbin Lu
Gal,

Thursday 4:10 – 4:50 conflicts with a Magnum workroom session, but we can 
choose from:

· 11:00 – 11:40

· 11:50 – 12:30

· 3:10 – 3:50

Please let us know if some of the slots don’t work well with your schedule.

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: March-30-16 2:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

Anything you pick is fine with me; the Kuryr fishbowl session is on Thursday
4:10 - 4:50, and I personally think the Magnum integration is important enough
that I don't mind using this time for the session as well.

Either way i am also ok with the 11-11:40 and the 11:50-12:30 sessions or the 
3:10-3:50

On Tue, Mar 29, 2016 at 11:32 PM, Hongbin Lu 
> wrote:
Hi all,

As discussed before, our team members want to establish a shared session 
between Magnum and Kuryr. We expected a lot of attendees in the session so we 
need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl session, and 
they possibly need it for other purposes. A solution is to promote one of the 
Magnum fishbowl sessions to be the shared session, or leverage one of the free
fishbowl slots. The schedule is as below.

Please vote your favorite time slot: http://doodle.com/poll/zuwercgnw2uecs5y .

Magnum fishbowl session:

• 11:00 - 11:40 (Thursday)

• 11:50 - 12:30

• 1:30 - 2:10

• 2:20 - 3:00

• 3:10 - 3:50

Free fishbowl slots:

• 9:00 – 9:40 (Thursday)

• 9:50 – 10:30

• 3:10 – 3:50 (conflict with Magnum session)

• 4:10 – 4:50 (conflict with Magnum session)

• 5:00 – 5:40 (conflict with Magnum session)

Best regards,
Hongbin




--
Best Regards ,

The G.


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-30 Thread Doug Hellmann


On Wed, Mar 30, 2016, at 03:37 AM, Thomas Goirand wrote:
> On 03/29/2016 08:33 PM, Doug Hellmann wrote:
> > If the core doc team isn't able to help you maintain it, maybe it's a
> > candidate for a separate guide, just like we're discussing for projects
> > that aren't part of the DefCore set included in the main guide.
> > 
> > Doug
> 
> This is exactly what I don't want. Only installing the packages
> themselves is different. Like for example, "apt-get install foo" and
> answering a few debconf prompts is often enough to get packages to work,
> without the need for manual setup of dbs, or rabbitMQ credentials. But
> that's maybe 20% of the install-guide, and the rest of it is left
> untouched, with no conditionals. For example the description of the
> services, testing them after install, etc. Having a separated guide
> would mean that someone would be left to write a full install-guide from
> scratch, alone. That isn't desirable.
> 
> It is also my hope that the packaging on upstream infra will get going.
> If it does, it will make more sense to get the Debian guide up to speed,
> and probably there will be more contributors.

Perhaps that common content should be a separate guide? I don't know the
best solution, but I don't think requiring any one team to keep up with
*everything* needed to install all projects on all platforms using all
available tools is the right approach. See Conway's Law.

Doug

> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 
> 



Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-03-30 Thread gordon chung


On 30/03/2016 8:06 AM, Julien Danjou wrote:
> On Wed, Mar 30 2016, Chris Dent wrote:
>
>> Another option on the meetings would be to do what the cross project
>> meetings do: Only have the meeting if there are agenda items.
>
> That's a good idea, I'd be totally cool with that too. We could send a
> mail indicating there would be meeting with a 1 week prior notice.
>

you don't like watching me talk to myself?

i believe this problem arises mainly at the beginning and end of cycle 
where we don't have any issues regarding blueprints as they were just 
discussed or won't be discussed. it also doesn't help that the meeting 
time is tied to North America when most of our devs are not here.

that said, i do prefer following the cross project model. when should we
set the cutoff for agenda items? we shouldn't make the cutoff so late that
it causes people to sit around waiting to see if there's a meeting or not.

cheers,

-- 
gord



Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-03-30 Thread Vladimir Kozhukalov
Igor,

I could not agree more. Wherever possible we should
use existing, mature solutions. Ansible is a really
convenient and well-known solution, so let's try to
use it.

One more thing should be taken into account.
One of Shotgun's features is the diagnostic report,
which can be attached to bugs to capture the state
of the environment. This report can also be used
to reproduce the environment and then fight a bug.
I'd like us to keep this kind of report.
Is it possible to implement such a feature
using Ansible? If yes, then let's switch to Ansible
as soon as possible.
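For what it's worth, rendering such a playbook programmatically looks feasible. Below is a minimal sketch (the function name and task layout are my own invention; `file`, `shell`, and `fetch` are real Ansible modules) of how a diagnostic-snapshot playbook could be generated in place of a Shotgun config:

```python
import json

def build_diagnostic_playbook(hosts, commands, report_dir="/tmp/diag"):
    """Render an Ansible playbook that runs diagnostic commands on the
    target nodes and fetches the output files back to the control node,
    roughly what a Shotgun snapshot does."""
    tasks = [{"name": "create report dir",
              "file": {"path": report_dir, "state": "directory"}}]
    for name, cmd in sorted(commands.items()):
        out_file = "{0}/{1}.txt".format(report_dir, name)
        tasks.append({"name": "collect {0}".format(name),
                      "shell": "{0} > {1}".format(cmd, out_file)})
        tasks.append({"name": "fetch {0}".format(name),
                      "fetch": {"src": out_file, "dest": "report/"}})
    play = [{"hosts": hosts,
             "gather_facts": True,  # facts double as an env description
             "tasks": tasks}]
    # JSON is a strict subset of YAML, so the dump is a valid playbook
    return json.dumps(play, indent=2)

print(build_diagnostic_playbook(
    "all", {"os_release": "cat /etc/os-release", "disk": "df -h"}))
```

The rendered string can be written straight to a .yml file and run with ansible-playbook, and the gathered facts would serve as the environment snapshot for a bug report.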


Vladimir Kozhukalov

On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky 
wrote:

> Neil Jerram wrote:
> > But isn't Ansible also over-complicated for just running commands over
> SSH?
>
> It may not be so "simple" that we can ignore it. Ansible has a lot of modules
> which might be very helpful. For instance, Shotgun makes a database
> dump, and there are Ansible modules with the same functionality [1].
>
> Don't think I advocate Ansible as a replacement. My point is, let's
> think about reusing ready solutions. :)
>
> - igor
>
>
> [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
>
> On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram 
> wrote:
> >
> > FWIW, as a naive bystander:
> >
> > On 30/03/16 11:06, Igor Kalnitsky wrote:
> >> Hey Fuelers,
> >>
> >> I know that you probably wouldn't like to hear this, but in my opinion
> >> Fuel has to stop using Shotgun. It's nothing more than a command runner
> >> over SSH. Besides, it has well-known issues such as retrieving remote
> >> directories with broken symlinks inside.
> >
> > It makes sense to me that a command runner over SSH might not need to be
> > a whole Fuel-specific component.
> >
> >> So I propose to find a modern alternative and reuse it. If we stop
> >> supporting Shotgun, we can spend extra time to focus on more important
> >> things.
> >>
> >> As an example, we can consider using Ansible. It should not be tricky
> >> to generate an Ansible playbook instead of generating a Shotgun one.
> >> Ansible is a well-known tool for devops and cloud operators, and we
> >> will only benefit if we give them the possibility to extend diagnostic
> >> recipes in the usual (for them) way. What do you think?
> >
> > But isn't Ansible also over-complicated for just running commands over
> SSH?
> >
> > Neil
> >
> >
> >
>
>


Re: [openstack-dev] [magnum] Discuss the blueprint "support-private-registry"

2016-03-30 Thread Ricardo Rocha
Hi.

On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao  wrote:
>
> Hi Hongbin
>
> Thanks for starting this thread,
>
>
>
> I initially proposed this bp because I am in China, behind the Great
> Firewall, and cannot access gcr.io directly. After checking our
> cloud-init scripts, I see that
>
> lots of code is *hard coded* to use gcr.io, which I personally think is
> not a good idea. We cannot force users/customers to have internet access
> in their environment.
>
> I proposed using insecure-registry to give customers/users (Chinese, or
> anyone else without gcr.io access) a chance to switch to their own
> insecure registry when deploying a k8s/swarm bay.
>
> For your question:
>>  Is the private registry secure or insecure? If secure, how to handle
>> the authentication secrets. If insecure, is it OK to connect a secure bay to
>> an insecure registry?
> An insecure registry should still be a 'secure' one in practice, since the
> customers set it up themselves and can make sure it is a clean one; in this
> case it would typically live inside their private cloud.
>
>>  Should we provide an instruction for users to pre-install the private
>> registry? If not, how to verify the correctness of this feature?
>
> The simplest way to pre-install a private registry is to run it as an
> insecure registry; docker.io documents very simple steps to start one [1].
> Alternatively, docker registry v2 also supports a TLS-enabled mode, but that
> requires giving the docker clients the key and crt files, which would make
> "support-private-registry" more complex.
>
> [1] https://docs.docker.com/registry/
> [2] https://docs.docker.com/registry/deploying/
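For reference, pointing the docker daemon at such a registry is a one-key change in /etc/docker/daemon.json: `insecure-registries` is the actual daemon option, while the helper below and the example hostname are purely illustrative.

```python
import json

def docker_daemon_config(registry, base=None):
    """Render /etc/docker/daemon.json content that marks a private
    registry as insecure, i.e. reachable over plain HTTP or with a
    self-signed certificate."""
    cfg = dict(base or {})
    # Append rather than overwrite, in case the base config already
    # lists other insecure registries.
    cfg["insecure-registries"] = (
        list(cfg.get("insecure-registries", [])) + [registry])
    return json.dumps(cfg, indent=2, sort_keys=True)

print(docker_daemon_config("registry.example.com:5000"))
```

After writing the rendered JSON to /etc/docker/daemon.json, the docker daemon has to be restarted for the setting to take effect.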

'support-private-registry' and 'allow-insecure-registry' sound different to me.

We're using an internal docker registry at CERN (v2, TLS enabled), and
have the magnum nodes setup to use it.

We just install our CA certificates on the nodes (cp to
/etc/pki/ca-trust/source/anchors/, then run update-ca-trust) - we had to
change the Heat templates for that, and submitted a blueprint to be able
to do similar things in a cleaner way:
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

That's all that is needed, the images are then prefixed with the
registry dns location when referenced - example:
docker.cern.ch/my-fancy-image.
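That prefixing step is trivial to express; a small illustrative helper (the names below are mine, not Magnum code) could look like:

```python
def with_registry(image, registry=None):
    """Qualify an image reference with a private registry hostname;
    without one, the reference falls through to the default hub."""
    if not registry:
        return image
    return "{0}/{1}".format(registry.rstrip("/"), image)

print(with_registry("my-fancy-image", "docker.cern.ch"))
# -> docker.cern.ch/my-fancy-image
```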

Things we found on the way:
- registry v2 doesn't seem to allow anonymous pulls (you can always
add an account with read-only access everywhere, but it means you need
to always authenticate at least with this account)
https://github.com/docker/docker/issues/17317
- swarm 1.1 and k8s 1.0 allow authentication to the registry from
the client (which was good news, and it works fine), handy if you want
to push/pull with authentication.
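To make the first point concrete: /v2/_catalog is the registry v2 endpoint for listing repositories, and since anonymous pulls are rejected, even the shared read-only account has to send Basic credentials. A small sketch (hostname and account are made up, and the snippet only builds the request rather than sending it):

```python
import base64

def catalog_request(registry, user, password):
    """Build the URL and Basic-auth header for listing repositories
    through the registry v2 API."""
    creds = "{0}:{1}".format(user, password).encode("utf-8")
    auth = "Basic " + base64.b64encode(creds).decode("ascii")
    return ("https://{0}/v2/_catalog".format(registry),
            {"Authorization": auth})

url, headers = catalog_request("registry.example.com", "reader", "s3cret")
print(url)  # -> https://registry.example.com/v2/_catalog
```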

Cheers,
  Ricardo

>
>
>
> On 2016-03-30 07:23, Hongbin Lu wrote:
>
> Hi team,
>
>
>
> This is the item we didn’t have time to discuss in our team meeting, so I
> started the discussion in here.
>
>
>
> Here is the blueprint:
> https://blueprints.launchpad.net/magnum/+spec/support-private-registry . Per
> my understanding, the goal of the BP is to allow users to specify the url of
> their private docker registry where the bays pull the kube/swarm images (if
> they are not able to access docker hub or other public registry). An
> assumption is that users need to pre-install their own private registry and
> upload all the required images there. There are several potential issues
> of this proposal:
>
> · Is the private registry secure or insecure? If secure, how to
> handle the authentication secrets. If insecure, is it OK to connect a secure
> bay to an insecure registry?
>
> · Should we provide an instruction for users to pre-install the
> private registry? If not, how to verify the correctness of this feature?
>
>
>
> Thoughts?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
> --
> Best Regards, Eli Qiao (乔立勇)
> Intel OTC China
>
>
>


